path: root/include
Age | Commit message | Author | Files | Lines
2011-12-14 | target: Remove extra se_device->execute_task_lock access in fast path | Nicholas Bellinger | 1 | -1/+0
This patch makes __transport_execute_tasks() perform the addition of tasks to dev->execute_task_list via __transport_add_tasks_from_cmd() while holding dev->execute_task_lock during normal I/O fast path submission. It effectively removes the unnecessary re-acquire of dev->execute_task_lock during transport_execute_tasks() -> transport_add_tasks_from_cmd() ahead of calling __transport_execute_tasks() to queue tasks for the passed *se_cmd descriptor. (v2: Re-add goto check_depth usage for multi-task submission for now..) Cc: Christoph Hellwig <hch@lst.de> Cc: Roland Dreier <roland@purestorage.com> Cc: Joern Engel <joern@logfs.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-12-14 | target: Drop se_device TCQ queue_depth usage from I/O path | Nicholas Bellinger | 1 | -1/+0
Historically, pSCSI devices have been the ones that required target-core to enforce a per se_device->depth_left. This patch changes target-core to no longer (by default) enforce a per se_device->depth_left or sleep in transport_tcq_window_closed() when we run out of queue slots for all backend export cases. Cc: Christoph Hellwig <hch@lst.de> Cc: Roland Dreier <roland@purestorage.com> Cc: Joern Engel <joern@logfs.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-12-14 | target: Remove TFO->check_release_cmd() fabric API caller | Nicholas Bellinger | 1 | -4/+0
Remove the now unused target_core_fabric_ops->check_release_cmd() as target_core handles this directly for se_cmd->cmd_kref objects now. Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-12-14 | target: Add target_submit_cmd() for process context fabric submission | Nicholas Bellinger | 2 | -1/+8
This patch adds a target_submit_cmd() caller that can be used by fabrics to submit an uninitialized se_cmd descriptor to a struct se_session + unpacked_lun from workqueue process context. This call will invoke the following steps:

- transport_init_se_cmd() to set up se_cmd specific pointers
- Obtain se_cmd->cmd_kref references with target_get_sess_cmd()
- Set se_cmd->t_tasks_bidi
- transport_lookup_cmd_lun() to set up struct se_cmd->se_lun from the passed unpacked_lun
- transport_generic_allocate_tasks() to set up the passed *cdb, and
- transport_handle_cdb_direct() to handle READ dispatch or the WRITE ready-to-transfer callback to the fabric

v2 changes from hch feedback:
*) Add target_sc_flags_table for target_submit_cmd flags
*) Rename bidi parameter to flags, add TARGET_SCF_BIDI_OP
*) Convert checks to BUG_ON
*) Add out_check_cond for transport_send_check_condition_and_sense usage

v3 changes:
*) Add TARGET_SCF_ACK_KREF for target_submit_cmd into target_get_sess_cmd to determine when the fabric caller is expecting a second kref_put() from fabric packet acknowledgement.

Cc: Christoph Hellwig <hch@lst.de> Cc: Roland Dreier <roland@purestorage.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
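To make that dispatch order concrete, here is a hedged sketch of the submission sequence, paraphrased from the steps listed above; the argument list, the transport_init_se_cmd() parameters and the error label are illustrative, not the verbatim mainline code:

  void target_submit_cmd(struct se_cmd *se_cmd, struct se_session *se_sess,
  		unsigned char *cdb, unsigned char *sense, u32 unpacked_lun,
  		u32 data_length, int task_attr, int data_dir, int flags)
  {
  	int rc;

  	/* 1) set up se_cmd specific pointers from the fabric session */
  	transport_init_se_cmd(se_cmd, se_sess->se_tpg->se_tpg_tfo, se_sess,
  			      data_length, data_dir, task_attr, sense);
  	/* 2) obtain se_cmd->cmd_kref; TARGET_SCF_ACK_KREF tells
  	 *    target_get_sess_cmd() a second kref_put() will come from
  	 *    fabric packet acknowledgement */
  	target_get_sess_cmd(se_sess, se_cmd, flags & TARGET_SCF_ACK_KREF);
  	/* 3) BIDI operation is signalled explicitly via flags */
  	if (flags & TARGET_SCF_BIDI_OP)
  		se_cmd->t_tasks_bidi = 1;
  	/* 4) resolve se_cmd->se_lun from the passed unpacked_lun */
  	if (transport_lookup_cmd_lun(se_cmd, unpacked_lun))
  		return;
  	/* 5) parse the passed *cdb and allocate tasks */
  	rc = transport_generic_allocate_tasks(se_cmd, cdb);
  	if (rc)
  		goto out_check_cond;
  	/* 6) dispatch READ, or ready-to-transfer callback for WRITE */
  	transport_handle_cdb_direct(se_cmd);
  	return;

  out_check_cond:
  	transport_send_check_condition_and_sense(se_cmd, rc, 0);
  }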
2011-12-14 | target: Make target_put_sess_cmd use target_release_cmd_kref | Nicholas Bellinger | 1 | -0/+1
This patch moves target_put_sess_cmd() to use a se_cmd->cmd_kref callback target_release_cmd_kref when performing driver release of fabric->se_cmd descriptor memory. It sets the default cmd_kref count value to '2' within target_get_sess_cmd() setup, and currently assumes TFO->check_stop_free() usage. It drops se_tfo->check_release_cmd() usage in the main transport_release_cmd codepath. Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-12-14 | target: remove overaggressive ____cacheline_aligned annotations | Christoph Hellwig | 1 | -31/+31
If we want dynamically allocated objects to be cacheline aligned we need to tell that to the slab allocator by using the proper flags and not by liberally sprinkling annotations onto all structures. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
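A minimal sketch of "the proper flags" for this, using a hypothetical object type; SLAB_HWCACHE_ALIGN is the standard slab flag that makes every object in the cache cacheline aligned:

  #include <linux/slab.h>

  struct example_obj {			/* hypothetical structure */
  	spinlock_t lock;
  	struct list_head list;
  };

  static struct kmem_cache *example_cache;

  static int __init example_init(void)
  {
  	/* alignment is requested once, at cache creation, instead of
  	 * sprinkling ____cacheline_aligned over individual fields */
  	example_cache = kmem_cache_create("example_obj",
  					  sizeof(struct example_obj), 0,
  					  SLAB_HWCACHE_ALIGN, NULL);
  	return example_cache ? 0 : -ENOMEM;
  }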
2011-12-14 | target: make the se_task task_state_active a normal bool | Christoph Hellwig | 1 | -1/+1
There is no need to make task_state_active an atomic_t given that it is always set under the execute_task_lock, so we can make it a simple bool. Also rename it to t_state_active to be closer to the list it guards, and make sure all checks before the list addition/removal actually happen under execute_task_lock. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-12-14 | target: remove the se_task task_error_status field | Christoph Hellwig | 1 | -1/+0
We only reach transport_complete_task once per task, so the test and set on task_error_status is never going to have an effect. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-12-14 | target: fold se_task.task_sense into task_flags | Christoph Hellwig | 1 | -2/+2
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-12-14 | target: header reshuffle, part2 | Christoph Hellwig | 8 | -348/+242
This reorganizes the headers under include/target into:
- target_core_base.h stays as-is with all target-wide data structures and defines
- target_core_backend.h contains the whole interface to I/O backends
- target_core_fabric.h contains the whole interface to fabric modules

Apart from those, only the various configfs macro headers stay around. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-12-14 | iommu/amd: Add invalid_ppr callback | Joerg Roedel | 1 | -3/+31
This callback can be used to change the PRI response code sent to a device when a PPR fault fails. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
2011-12-14 | target: reshuffle headers | Christoph Hellwig | 4 | -106/+4
Create a new header, drivers/target/target_core_internal.h, that is supposed to hold all target_core-internal prototypes. Move all non-exported includes from include/target to it, and merge the smaller prototype-only includes inside drivers/target into it as well. Mark functions that are not called outside their implementation file as static. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
2011-12-14 | Merge branch 'rcu/next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu | Ingo Molnar | 7 | -93/+272
2011-12-14 | ARM: Orion: Remove address map info from all platform data structures | Andrew Lunn | 1 | -3/+0
Signed-off-by: Andrew Lunn <andrew@lunn.ch> Tested-by: Michael Walle <michael@walle.cc> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
2011-12-14 | ARM: Orion: Get address map from plat-orion instead of via platform_data | Andrew Lunn | 1 | -1/+12
Use a getter function in plat-orion/addr-map.c to get the address map structure, rather than pass it to drivers in the platform_data structures. When the drivers are built for non-Orion platforms, a dummy function is provided instead which returns NULL. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Tested-by: Michael Walle <michael@walle.cc> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
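The shape of that getter pattern, sketched below; the function name is an illustrative assumption, not the exact plat-orion API:

  #ifdef CONFIG_PLAT_ORION
  const struct orion_addr_map_cfg *orion_get_addr_map(void); /* assumed name */
  #else
  static inline const struct orion_addr_map_cfg *orion_get_addr_map(void)
  {
  	/* dummy for non-Orion builds: drivers see NULL and skip
  	 * address-map programming */
  	return NULL;
  }
  #endif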
2011-12-14 | block, cfq: move icq creation and rq->elv.icq association to block core | Tejun Heo | 2 | -4/+63
Now the block layer knows everything necessary to create and associate icq's with requests. Move ioc_create_icq() to blk-ioc.c and update get_request() such that, if elevator_type->icq_size is set, requests are automatically associated with their matching icq's before elv_set_request(). The io_context reference is also managed by block core on request alloc/free.

* Only the ioprio/cgroup changed handling remains from cfq_get_cic(). Collapsed into cfq_set_request().

* This removes queue kicking on icq allocation failure (for now). As icq allocation failure is rare and the only effect queue kicking achieved was possibly accelerating queue processing, this change shouldn't be noticeable.

There is a larger underlying problem. Unlike request allocation, icq allocation is not guaranteed to succeed eventually after retries. The number of icq's is unbounded and thus mempool can't be the solution either. This effectively adds an allocation dependency on the memory free path and thus the possibility of deadlock. This usually wouldn't happen because icq allocation is not a hot path and, even when the condition triggers, it's highly unlikely that none of the writeback workers already has an icq. However, this is still possible, especially if the elevator is being switched under high memory pressure, so we better get it fixed. Probably the only solution is just bypassing the elevator and appending to the dispatch queue on any elevator allocation failure.

* Comment added to explain how icq's are managed and synchronized.

This completes the cleanup of the io_context interface. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block, cfq: restructure io_cq creation path for io_context interface cleanup | Tejun Heo | 1 | -0/+2
Add elevator_ops->elevator_init_icq_fn() and restructure cfq_create_cic() and rename it to ioc_create_icq(). The new function expects its caller to pass in io_context, uses elevator_type->icq_cache, handles generic init, calls the new elevator operation for elevator specific initialization, and returns pointer to created or looked up icq. This leaves cfq_icq_pool variable without any user. Removed. This prepares for io_context interface cleanup and doesn't introduce any functional difference. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block, cfq: move io_cq exit/release to blk-ioc.c | Tejun Heo | 2 | -6/+19
With kmem_cache managed by blk-ioc, io_cq exit/release can be moved to blk-ioc too. The odd ->io_cq->exit/release() callbacks are replaced with elevator_ops->elevator_exit_icq_fn(), with unlinking from both ioc and q, and freeing, automatically handled by blk-ioc. The elevator operation only needs to perform the exit work specific to the elevator - in cfq's case, exiting the cfqq's.

Also, clearing of io_cq's on q detach is moved to block core and automatically performed on elevator switch and q release.

Because the q an io_cq points to might be freed before the RCU callback for the io_cq runs, blk-ioc code should remember which cache the io_cq needs to be freed to when the io_cq is released. A new field, io_cq->__rcu_icq_cache, is added for this purpose. As both the new field and rcu_head are used only after the io_cq is released, and the q/ioc_node fields aren't, they are put into unions.

Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
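A sketch of the resulting layout; the field names follow the description above, and other io_cq members are omitted:

  struct io_cq {
  	struct request_queue	*q;
  	struct io_context	*ioc;

  	/* q_node/ioc_node are used while the icq is live;
  	 * __rcu_icq_cache and __rcu_head only after release, so each
  	 * pair can share storage in a union */
  	union {
  		struct list_head	q_node;
  		struct kmem_cache	*__rcu_icq_cache;
  	};
  	union {
  		struct hlist_node	ioc_node;
  		struct rcu_head		__rcu_head;
  	};
  };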
2011-12-14 | block, cfq: move icq cache management to block core | Tejun Heo | 1 | -1/+10
Let elevators set ->icq_size and ->icq_align in elevator_type, and have elv_register() and elv_unregister() respectively create and destroy the kmem_cache for icq.

* elv_register() now can return failure. All callers updated.
* icq caches are automatically named "ELVNAME_io_cq".
* cfq_slab_setup/kill() are collapsed into cfq_init/exit().
* While at it, minor indentation change for iosched_cfq.elevator_name for consistency.

This will help moving icq management to block core. This doesn't introduce any functional change. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
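A minimal sketch of what elv_register() now does for icq-using elevators. This assumes the name buffer lives in elevator_type (kmem_cache keeps the name pointer, so a stack buffer would dangle):

  int elv_register(struct elevator_type *e)
  {
  	if (e->icq_size) {
  		/* caches end up named "ELVNAME_io_cq" */
  		snprintf(e->icq_cache_name, sizeof(e->icq_cache_name),
  			 "%s_io_cq", e->elevator_name);
  		e->icq_cache = kmem_cache_create(e->icq_cache_name,
  						 e->icq_size, e->icq_align,
  						 0, NULL);
  		if (!e->icq_cache)
  			return -ENOMEM;	/* elv_register() can now fail */
  	}
  	/* ... add e to elv_list, etc. ... */
  	return 0;
  }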
2011-12-14 | block, cfq: move cfqd->icq_list to request_queue and add request->elv.icq | Tejun Heo | 1 | -2/+8
Most of icq management is about to be moved out of cfq into blk-ioc. This patch prepares for it.

* Move cfqd->icq_list to request_queue->icq_list
* Make request explicitly point to icq instead of through elevator private data. ->elevator_private[3] is replaced with sub struct elv which contains icq pointer and priv[2]. cfq is updated accordingly.
* Meaningless clearing of ->elevator_private[0] removed from elv_set_request(). At that point in code, the field was guaranteed to be %NULL anyway.

This patch doesn't introduce any functional change. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block, cfq: reorganize cfq_io_context into generic and cfq specific parts | Tejun Heo | 1 | -29/+14
Currently io_context and cfq logic are mixed without a clear boundary. Most of io_context is independent from cfq but the cfq_io_context handling logic is dispersed between generic ioc code and cfq.

cfq_io_context represents an association between an io_context and a request_queue, which is a concept useful outside of cfq, but it also contains fields which are useful only to cfq.

This patch takes out the generic part and puts it into io_cq (io context-queue) and the rest into cfq_io_cq (the cic moniker remains the same), which contains io_cq. The following changes are made together.

* cfq_ttime and cfq_io_cq now live in cfq-iosched.c.
* All related fields, functions and constants are renamed accordingly.
* ioc->ioc_data is now "struct io_cq *" instead of "void *" and renamed to icq_hint.

This prepares for io_context API cleanup. Documentation is currently sparse. It will be added later.

Changes in this patch are mechanical and don't cause functional change. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block: remove elevator_queue->ops | Tejun Heo | 1 | -3/+2
elevator_queue->ops points to the same ops struct that ->elevator_type.ops is pointing to. The only effect of caching it in elevator_queue is shorter notation - it doesn't save any indirect dereference. Relocate elevator_type->list, which is used only during module init/exit, to the end of the structure, rename elevator_queue->elevator_type to ->type, and replace elevator_queue->ops with elevator_queue->type.ops. This doesn't introduce any functional difference. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block, cfq: kill cic->key | Tejun Heo | 1 | -1/+0
Now that the lazy paths are removed, cfqd_dead_key() is meaningless and cic->q can be used wherever cic->key is used. Kill cic->key. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block, cfq: kill ioc_gone | Tejun Heo | 1 | -17/+0
Now that cic's are immediately unlinked under both locks, there's no need to count and drain cic's before module unload. RCU callback completion is waited for with rcu_barrier(). While at it, remove residual RCU operations on cic_list. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block, cfq: remove delayed unlink | Tejun Heo | 1 | -1/+0
Now that all cic's are immediately unlinked from both ioc and queue, lazy dropping from lookup path and trimming on elevator unregister are unnecessary. Kill them and remove now unused elevator_ops->trim(). This also leaves call_for_each_cic() without any user. Removed. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block, cfq: unlink cfq_io_context's immediately | Tejun Heo | 2 | -4/+11
cic is the association between io_context and request_queue. A cic is linked from both ioc and q and should be destroyed when either one goes away. As ioc and q both have their own locks, locking becomes a bit complex - both orders work for removal from one but not from the other.

Currently, cfq tries to circumvent this locking order issue with RCU. ioc->lock nests inside queue_lock but the radix tree and cic's are also protected by RCU, allowing either side to walk their lists without grabbing a lock. This rather unconventional use of RCU quickly devolves into extremely fragile convolution. e.g. the following is from cfqd going away too soon after ioc and q exits raced.

 general protection fault: 0000 [#1] PREEMPT SMP
 CPU 2
 Modules linked in:
 [   88.503444] Pid: 599, comm: hexdump Not tainted 3.1.0-rc10-work+ #158 Bochs Bochs
 RIP: 0010:[<ffffffff81397628>]  [<ffffffff81397628>] cfq_exit_single_io_context+0x58/0xf0
 ...
 Call Trace:
  [<ffffffff81395a4a>] call_for_each_cic+0x5a/0x90
  [<ffffffff81395ab5>] cfq_exit_io_context+0x15/0x20
  [<ffffffff81389130>] exit_io_context+0x100/0x140
  [<ffffffff81098a29>] do_exit+0x579/0x850
  [<ffffffff81098d5b>] do_group_exit+0x5b/0xd0
  [<ffffffff81098de7>] sys_exit_group+0x17/0x20
  [<ffffffff81b02f2b>] system_call_fastpath+0x16/0x1b

The only real hot path here is cic lookup during request initialization, and avoiding extra locking requires very confined use of RCU. This patch makes cic removal from both ioc and request_queue perform double-locking and unlink immediately.

* From the q side, the change is almost trivial as ioc->lock nests inside queue_lock. It just needs to grab each ioc->lock as it walks cic_list and unlink it.

* From the ioc side, it's a bit more difficult because of the inverted lock order. ioc needs its lock to walk its cic_list but can't grab the matching queue_lock, and so needs to perform unlock-relock dancing.

Unlinking is now wholly done from put_io_context() and the fast path is optimized by using the queue_lock the caller already holds, which is by far the most common case. If the ioc accessed multiple devices, it tries with trylock. In the unlikely case of fast path failure, it falls back to the full double-locking dance from a workqueue.

Double-locking isn't the prettiest thing in the world but it's *far* simpler and more understandable than the RCU trick without adding any meaningful overhead.

This still leaves a lot of now-unnecessary RCU logic. Future patches will trim it.

-v2: Vivek pointed out that cic->q was being dereferenced after cic->release() was called. Updated to use local variable @this_q instead.

Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
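A minimal sketch of the ioc-side unlock-relock dance described above, under the lock order queue_lock -> ioc->lock; the function and helper names are illustrative, not the verbatim block core code:

  static void ioc_release_sketch(struct io_context *ioc)
  {
  	unsigned long flags;

  	spin_lock_irqsave(&ioc->lock, flags);
  	while (!hlist_empty(&ioc->icq_list)) {
  		struct io_cq *icq = hlist_entry(ioc->icq_list.first,
  						struct io_cq, ioc_node);
  		struct request_queue *q = icq->q;

  		if (spin_trylock(q->queue_lock)) {
  			ioc_exit_icq(icq);	/* unlink from both sides */
  			spin_unlock(q->queue_lock);
  		} else {
  			/* inverted order: drop ioc->lock so queue_lock
  			 * can be taken first, then retry the walk */
  			spin_unlock_irqrestore(&ioc->lock, flags);
  			cpu_relax();
  			spin_lock_irqsave(&ioc->lock, flags);
  		}
  	}
  	spin_unlock_irqrestore(&ioc->lock, flags);
  }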
2011-12-14 | block, cfq: move ioc ioprio/cgroup changed handling to cic | Tejun Heo | 1 | -5/+9
ioprio/cgroup change was handled by marking the changed state in ioc and, on the following access to the ioc, performing RCU-protected iteration through all cic's, grabbing the matching queue_lock. This patch moves the changed state to each cic. When ioprio or cgroup changes, the respective bit is set on all cic's of the ioc and, when each of those cic's (not the ioc) is accessed, the change is applied for that specific ioc-queue pair.

This also fixes the following two race conditions between setting and clearing of changed states.

* A missing barrier between assign/load of ioprio and ioprio_changed allowed applying an old ioprio.
* Change requests could happen between application of change and clearing of changed variables.

Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block, cfq: misc updates to cfq_io_context | Tejun Heo | 1 | -0/+1
Make the following changes to prepare for ioc/cic management cleanup.

* Add cic->q so that ioc can determine the associated queue without querying cfq. This will eventually replace ->key.
* Factor out cfq_release_cic() from cic_free_func(). This function assumes that the caller handled locking.
* Rename __cfq_exit_single_io_context() to cfq_exit_cic() and make it take only @cic.
* Restructure cfq_cic_link() for future updates.

This patch doesn't introduce any functional changes. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block: misc updates to blk_get_queue() | Tejun Heo | 1 | -1/+1
* blk_get_queue() is peculiar in that it returns 0 on success and 1 on failure instead of 0 / -errno or boolean. Update it such that it returns %true on success and %false on failure.
* Make sure the caller checks the return value.
* Separate out __blk_get_queue(), which doesn't check whether @q is dead, and put it in blk.h. This will be used later.

This patch doesn't introduce any functional changes. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block: make ioc get/put interface more conventional and fix race on allocation | Tejun Heo | 1 | -2/+2
Ignoring copy_io() during fork, io_context can be allocated from two places - current_io_context() and set_task_ioprio(). The former is always called from the local task while the latter can be called from a different task. The synchronization between them is peculiar and dubious.

* current_io_context() doesn't grab task_lock() and assumes that if it saw %NULL ->io_context, it would stay that way until allocation and assignment is complete. It has smp_wmb() between alloc/init and assignment.

* set_task_ioprio() grabs task_lock() for assignment and does smp_read_barrier_depends() between "ioc = task->io_context" and "if (ioc)". Unfortunately, this doesn't achieve anything - the latter is not a dependent load of the former. ie, if ioc itself were being dereferenced "ioc->xxx", it would mean something (not sure what tho) but as the code currently stands, the dependent read barrier is noop.

As only one of the two test-assignment sequences is task_lock() protected, task_lock() can't do much about the race between the two. Nothing prevents current_io_context() and set_task_ioprio() allocating their own ioc for the same task and overwriting the other's. Also, set_task_ioprio() can race with an exiting task and create a new ioc after exit_io_context() is finished.

ioc get/put doesn't have any reason to be complex. The only hot path is accessing the existing ioc of %current, which is simple to achieve given that ->io_context is never destroyed as long as the task is alive. All other paths can happily go through task_lock() like all other task sub structures without impacting anything.

This patch updates ioc get/put so that it becomes more conventional.

* alloc_io_context() is replaced with get_task_io_context(). This is the only interface which can acquire access to the ioc of another task. On return, the caller has an explicit reference to the object which should be put using put_io_context() afterwards.

* The functionality of current_io_context() remains the same but, when creating a new ioc, it shares the code path with get_task_io_context() and always goes through task_lock().

* get_io_context() now means incrementing the ref on an ioc which the caller already has access to (be that an explicit refcnt or the implicit %current one).

* PF_EXITING inhibits creation of a new io_context and, once exit_io_context() is finished, it's guaranteed that both ioc acquisition functions return %NULL.

* All users are updated. Most are trivial but the smp_read_barrier_depends() removal from cfq_get_io_context() needs a bit of explanation. I suppose the original intention was to ensure ioc->ioprio is visible when set_task_ioprio() allocates a new io_context and installs it; however, this wouldn't have worked because set_task_ioprio() doesn't have a wmb between init and install. There are other problems with this which will be fixed in another patch.

* While at it, use NUMA_NO_NODE instead of -1 for wildcard node specification.

-v2: Vivek spotted contamination from a debug patch. Removed.

Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
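A hedged sketch of the lookup side of the conventional shape described above; the allocation branch is elided, and this is not the verbatim block core code:

  struct io_context *get_task_io_context(struct task_struct *task,
  					gfp_t gfp_flags, int node)
  {
  	struct io_context *ioc = NULL;

  	task_lock(task);
  	if (task->io_context) {
  		ioc = task->io_context;
  		atomic_long_inc(&ioc->refcount);	/* explicit reference */
  	} else if (!(task->flags & PF_EXITING)) {
  		/* allocation + install would go here, still under
  		 * task_lock(), shared with the %current path */
  	}
  	task_unlock(task);
  	return ioc;	/* caller pairs with put_io_context() */
  }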
2011-12-14 | block: misc ioc cleanups | Tejun Heo | 1 | -9/+3
* The int return from put_io_context() wasn't used by anybody. Make it return void like other put functions and docbook-fy the function comment.
* Reorder dummy declarations for the !CONFIG_BLOCK case a bit.
* Make alloc_io_context() use __GFP_ZERO allocation, take the init out of the if block and drop the 0'ing.
* Docbook-fy the current_io_context() comment.

This patch doesn't introduce any functional change. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block, cfq: move cfqd->cic_index to q->id | Tejun Heo | 1 | -0/+6
cfq allocates per-queue id using ida and uses it to index cic radix tree from io_context. Move it to q->id and allocate on queue init and free on queue release. This simplifies cfq a bit and will allow for further improvements of io context life-cycle management. This patch doesn't introduce any functional difference. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | block: add blk_queue_dead() | Tejun Heo | 1 | -0/+1
There are a number of QUEUE_FLAG_DEAD tests. Add blk_queue_dead() macro and use it. This patch doesn't introduce any functional difference. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
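The macro follows the existing blk_queue_*() test_bit pattern; a minimal sketch:

  #define blk_queue_dead(q)	test_bit(QUEUE_FLAG_DEAD, &(q)->queue_flags)

so open-coded tests like test_bit(QUEUE_FLAG_DEAD, &q->queue_flags) become blk_queue_dead(q).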
2011-12-14 | block, sx8: kill blk_insert_request() | Tejun Heo | 1 | -1/+0
The only user left for blk_insert_request() is sx8 and it can be trivially switched to use blk_execute_rq_nowait() - special requests aren't included in io stat and sx8 doesn't use block layer tagging. Switch sx8 and kill blk_insert_request(). This patch doesn't introduce any functional difference. Only compile tested. Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Jeff Garzik <jgarzik@pobox.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2011-12-14 | Merge branch 'pl022' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-stericsson into gpio/next | Grant Likely | 1 | -0/+4
2011-12-14 | net: Remove unused neighbour layer ops. | David S. Miller | 2 | -2/+0
It's simpler to just keep these things out until there is a real user of them, so we can see what the needs actually are, rather than keep these things around as useless overhead. Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-14 | bcma: use static keyword for inline function declaration in bcma.h | Arend van Spriel | 1 | -11/+12
Just scratching an itch here, but it makes more sense to use the static keyword if you think about how the compiler treats inline functions. Reviewed-by: Pieter-Paul Giesberts <pieterpg@broadcom.com> Reviewed-by: Alwin Beukers <alwin@broadcom.com> Signed-off-by: Arend van Spriel <arend@broadcom.com> Signed-off-by: Franky Lin <frankyl@broadcom.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
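The reasoning: in a header included by many translation units, a plain 'inline' definition has C99 external-linkage semantics and can produce duplicate or missing out-of-line symbols, while 'static inline' gives each unit its own internal-linkage copy that the compiler can inline or discard. A sketch of the change (the accessor body shown is an assumption about bcma.h's read path):

  /* before */
  inline u32 bcma_read32(struct bcma_device *core, u16 offset)
  {
  	return core->bus->ops->read32(core, offset);
  }

  /* after */
  static inline u32 bcma_read32(struct bcma_device *core, u16 offset)
  {
  	return core->bus->ops->read32(core, offset);
  }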
2011-12-14 | bcma: add set/mask macros for 16-bit register access | Arend van Spriel | 1 | -6/+26
The BCMA header only had definitions for 32-bit register access. Used those as a template for the 16-bit flavour. Also changed them to inline functions to be on the safe side: as the offset parameter is used twice in a macro, there would be a problem with a call like bcma_set32(core, offset++, val); Reviewed-by: Pieter-Paul Giesberts <pieterpg@broadcom.com> Reviewed-by: Alwin Beukers <alwin@broadcom.com> Signed-off-by: Arend van Spriel <arend@broadcom.com> Signed-off-by: Franky Lin <frankyl@broadcom.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
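To see why the inline-function form is safer, compare a naive macro version with the function version; a sketch, assuming the bcma read/write accessors:

  /* macro version: 'offset' is expanded twice, so
   * bcma_set32(core, offset++, val) would increment it twice */
  #define bcma_set32_macro(core, offset, set) \
  	bcma_write32(core, offset, bcma_read32(core, offset) | (set))

  /* inline function: each argument is evaluated exactly once */
  static inline void bcma_set32(struct bcma_device *core, u16 offset, u32 set)
  {
  	bcma_write32(core, offset, bcma_read32(core, offset) | set);
  }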
2011-12-14 | bcma: extract FEM info from SPROM | Rafał Miłecki | 1 | -0/+1
Signed-off-by: Rafał Miłecki <zajec5@gmail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
2011-12-14 | ssb: extract FEM info from SPROM | Rafał Miłecki | 2 | -0/+26
Signed-off-by: Rafał Miłecki <zajec5@gmail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
2011-12-14 | ieee80211: Introduce ieee80211_is_first_frag | Helmut Schaa | 1 | -0/+9
Signed-off-by: Helmut Schaa <helmut.schaa@googlemail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
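A plausible shape of the helper: a first fragment (or an unfragmented frame) carries fragment number 0 in the sequence-control field. Treat this as a sketch rather than the exact mainline body:

  static inline bool ieee80211_is_first_frag(__le16 seq_ctrl)
  {
  	/* IEEE80211_SCTL_FRAG masks the 4-bit fragment number */
  	return !(seq_ctrl & cpu_to_le16(IEEE80211_SCTL_FRAG));
  }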
2011-12-14 | cfg80211: Fix race in bss timeout | Vasanthakumar Thiagarajan | 1 | -0/+26
It is quite possible to run into a race in bss timeout where the driver sees the bss entry just before notifying cfg80211 of a roaming event, but it gets timed out by the time rdev->event_work is scheduled from cfg80211_wq. This would result in the following WARN_ON() along with the failure to notify user space of the roaming. The other situation, which happens with ath6kl, is when the driver reports a roam-to-the-same-AP event where the AP bss entry has already expired. To fix this, move cfg80211_get_bss() from __cfg80211_roamed() to cfg80211_roamed().

 [158645.538384] WARNING: at net/wireless/sme.c:586 __cfg80211_roamed+0xc2/0x1b1()
 [158645.538810] Call Trace:
 [158645.538838] [<c1033527>] warn_slowpath_common+0x65/0x7a
 [158645.538917] [<c14cfacf>] ? __cfg80211_roamed+0xc2/0x1b1
 [158645.538946] [<c103354b>] warn_slowpath_null+0xf/0x13
 [158645.539055] [<c14cfacf>] __cfg80211_roamed+0xc2/0x1b1
 [158645.539086] [<c14beb5b>] cfg80211_process_rdev_events+0x153/0x1cc
 [158645.539166] [<c14bd57b>] cfg80211_event_work+0x26/0x36
 [158645.539195] [<c10482ae>] process_one_work+0x219/0x38b
 [158645.539273] [<c14bd555>] ? wiphy_new+0x419/0x419
 [158645.539301] [<c10486cb>] worker_thread+0xf6/0x1bf
 [158645.539379] [<c10485d5>] ? rescuer_thread+0x1b5/0x1b5
 [158645.539407] [<c104b3e2>] kthread+0x62/0x67
 [158645.539484] [<c104b380>] ? __init_kthread_worker+0x42/0x42
 [158645.539514] [<c151309a>] kernel_thread_helper+0x6/0xd

Reported-by: Kalle Valo <kvalo@qca.qualcomm.com> Signed-off-by: Vasanthakumar Thiagarajan <vthiagar@qca.qualcomm.com> Signed-off-by: John W. Linville <linville@tuxdriver.com>
2011-12-13 | mlx4_core: Modify driver initialization flow to accommodate SRIOV for Ethernet | Jack Morgenstein | 2 | -0/+3
1. Added module parameters sr_iov and probe_vf for controlling enablement of SRIOV mode.
2. Increased default max num-qps, num-mpts and log_num_macs to accommodate SRIOV mode.
3. Added port_type_array as a module parameter to allow driver startup with ports configured as desired. In SRIOV mode, only ETH is supported, and this array is ignored; otherwise, for the case where the FW supports both port types (ETH and IB), the port_type_array parameter is used. By default, the port_type_array is set to configure both ports as IB.
4. When running in SRIOV mode, the master needs to initialize the ICM eq table to hold the eq's for itself and also for all the slaves.
5. mlx4_set_port_mask() is now invoked from mlx4_init_hca, instead of in mlx4_dev_cap.
6. Introduced sriov VF (slave) device startup/teardown logic (mainly procedures mlx4_init_slave, mlx4_slave_exit, mlx4_slave_cap, mlx4_slave_exit and flow modifications in __mlx4_init_one, mlx4_init_hca, and mlx4_setup_hca). VFs obtain their startup information from the PF (master) device via the comm channel.
7. In SRIOV mode (both PF and VF), MSI_X must be enabled, or the driver aborts loading the device.
8. Do not allow setting port type via sysfs when running in SRIOV mode.
9. mlx4_get_ownership: Currently, only one PF is supported by the driver. If the HCA is burned with FW which enables more than one PF, only one of the PFs is allowed to run. The first one up grabs a FW ownership semaphore -- all other PFs will find that semaphore taken, and the driver will not allow them to run.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il> Signed-off-by: Liran Liss <liranl@mellanox.co.il> Signed-off-by: Marcel Apfelbaum <marcela@mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-13 | mlx4_core: mtts resources units changed to offset | Marcel Apfelbaum | 1 | -3/+2
In the previous implementation mtts are managed by:
1. order - log(mtt segments), where an 'mtt segment' groups several mtts together.
2. first_seg - segment location relative to the mtt table.

In the current implementation:
1. order - log(mtts) rather than segments
2. offset - mtt index in the mtt table

Note: the actual mtt allocation is still made in segments, but this is transparent to callers.

Rationale: the mtt resource holders are not interested in how the allocation of mtts is done, but rather in how they will use it.

Signed-off-by: Marcel Apfelbaum <marcela@dev.mellanox.co.il> Reviewed-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net>
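The segment-to-mtt translation that callers no longer see; a sketch assuming a log_mtts_per_seg capability field:

  static inline u32 mlx4_mtt_offset(struct mlx4_dev *dev, u32 first_seg)
  {
  	/* mtt index = segment index * mtts-per-segment;
  	 * likewise, order = seg_order + log_mtts_per_seg */
  	return first_seg << dev->caps.log_mtts_per_seg;
  }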
2011-12-13 | mlx4: Ethernet port management modifications | Eugenia Emantayev | 1 | -3/+9
The physical port is now common to the PF and VFs. The port resources and configuration are managed by the PF; VFs can only influence the MTU of the port. It is set to the max among all functions, and each function allocates RX buffers of the required size to meet its MTU enforcement. Port management code was moved to mlx4_core, as the mlx4_en module is virtualization-unaware. Move qp handling functionality to mlx4_get_eth_qp/mlx4_put_eth_qp, including reserve/release range and add/release unicast steering. Let mlx4_register/unregister_mac deal only with MAC (un)registration. Signed-off-by: Eugenia Emantayev <eugenia@mellanox.co.il> Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-13 | mlx4_core: Reduce number of PD bits to 17 | Jack Morgenstein | 1 | -0/+1
When SRIOV is enabled on the chip (at FW burning time), the HCA uses only 17 bits for the PD. The remaining 7 high-order bits are ignored. Change the allocator to return only 17 bits for the PD. The MSB 7 bits will be used to encode the slave number for consistency checking later on in the resource tracker. Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net>
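A sketch of the resulting split; the macro names here are hypothetical, only the 17/7 bit split comes from the text above:

  #define MLX4_PD_BITS		17	/* bits the HCA honours under SRIOV */
  #define MLX4_PD_MASK		((1 << MLX4_PD_BITS) - 1)

  #define mlx4_pd_base(pdn)	((pdn) & MLX4_PD_MASK)
  #define mlx4_pd_slave(pdn)	((pdn) >> MLX4_PD_BITS)	/* 7-bit slave id */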
2011-12-13 | mlx4_core: Add "native" argument to mlx4_cmd and its callers (where needed) | Jack Morgenstein | 1 | -7/+13
For SRIOV, some Hypervisor commands can be executed directly (native = 1). Others should go through the command wrapper flow (for tracking resource usage, for example, or for changing some HCA configurations that slaves need to be notified of). This patch sets the groundwork for this capability -- adding the correct value of "native" in each case. Note that if SRIOV is not activated, this parameter has no effect. Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net>
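A sketch of the dispatch the "native" flag selects; mlx4_is_mfunc() is the multifunction test mentioned in the header-file patch below, while the two helpers are hypothetical stand-ins for the direct-FW and wrapper paths:

  static int mlx4_cmd_dispatch(struct mlx4_dev *dev, u16 op, int native)
  {
  	if (native || !mlx4_is_mfunc(dev)) {
  		/* no SRIOV, or a command that may bypass tracking:
  		 * issue it to FW directly */
  		return mlx4_cmd_fw(dev, op);		/* hypothetical */
  	}
  	/* otherwise go through the command wrapper flow so resource
  	 * usage can be tracked and slaves notified */
  	return mlx4_cmd_wrapped(dev, op);		/* hypothetical */
  }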
2011-12-13 | mlx4: Extending port_mask functionality | Jack Morgenstein | 1 | -7/+6
Port mask now has an additional state: a port can be set as "none", in which case neither the mlx4_en nor the mlx4_ib driver takes ownership of the port. In multifunction mode there is an option to set the vfs as single-ported devices (in single-function mode, both physical ports belong to the same function). Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-13 | mlx4_core: initial header-file changes for SRIOV support | Jack Morgenstein | 2 | -3/+68
These changes will not affect module operation as yet. They are only to get some structs and enums in place for use by subsequent patches (making those smaller). Added here:

* sriov state structs and inlines (mlx4_is_master/slave/mfunc)
* comm-channel and vhcr support structures
* enum values for new FW and comm-channel virtual commands (i.e., commands passed via the comm channel to the PF-driver)
* prototypes for many command wrapper functions (used by the PF context for processing FW commands passed to it by the VFs)
* struct mlx4_eqe is moved from eq.c to mlx4.h (it will be used by other mlx4_core source files)

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il> Signed-off-by: David S. Miller <davem@davemloft.net>
2011-12-13 | net: fix build error if CONFIG_CGROUPS=n | Eric Dumazet | 1 | -0/+2
Reported-by: Christoph Paasch <christoph.paasch@uclouvain.be> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>