path: root/include/linux
Age | Commit message | Author | Files | Lines

2011-07-21 | superblock: introduce per-sb cache shrinker infrastructure | Dave Chinner | 3 | -39/+50

With context-based shrinkers, we can implement a per-superblock shrinker that shrinks the caches attached to the superblock. We currently have global shrinkers for the inode and dentry caches that split up into per-superblock operations via a coarse proportioning method that does not batch very well. The global shrinkers also have a dependency - dentries pin inodes - so we have to be very careful about how we register the global shrinkers so that the implicit call order is always correct.

With a per-sb shrinker callout, we can encode this dependency directly into the per-sb shrinker, hence avoiding the need for strictly ordering shrinker registrations. We also have no need for any proportioning code, because the shrinker subsystem already provides this functionality across all shrinkers.

Allowing the shrinker to operate on a single superblock at a time means that we do fewer superblock list traversals, take less locking, and reclaim should batch more effectively. This should result in less CPU overhead for reclaim and potentially faster reclaim of items from each filesystem.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

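A minimal sketch of such a per-sb callout, assuming the prune_dcache_sb()/prune_icache_sb() helpers this series introduces (illustrative, not the exact patch):

    static int prune_super(struct shrinker *shrink, struct shrink_control *sc)
    {
            struct super_block *sb = container_of(shrink, struct super_block,
                                                  s_shrink);

            /* dentries pin inodes, so always prune the dcache first; the
             * ordering dependency lives here instead of in shrinker
             * registration order */
            prune_dcache_sb(sb, sc->nr_to_scan);
            prune_icache_sb(sb, sc->nr_to_scan);

            return 0;       /* the real code returns the remaining count */
    }
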
2011-07-21 | locks: rename lock-manager ops | J. Bruce Fields | 1 | -6/+6

Both the filesystem and the lock manager can associate operations with a lock. Confusingly, one of them (fl_release_private) actually has the same name in both operation structures. It would save some confusion to give the lock-manager ops different names.

Signed-off-by: J. Bruce Fields <bfields@redhat.com>

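Roughly, the rename gives the two op tables distinct prefixes (abridged sketch; field list illustrative, not the full header):

    struct file_lock_operations {          /* filesystem's ops */
            void (*fl_copy_lock)(struct file_lock *, struct file_lock *);
            void (*fl_release_private)(struct file_lock *);
    };

    struct lock_manager_operations {       /* lock manager's ops */
            int  (*lm_compare_owner)(struct file_lock *, struct file_lock *);
            void (*lm_notify)(struct file_lock *);          /* was fl_notify */
            void (*lm_break)(struct file_lock *);           /* was fl_break */
            void (*lm_release_private)(struct file_lock *); /* was fl_release_private */
    };
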
2011-07-21 | Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 1 | -0/+3

* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  signal: align __lock_task_sighand() irq disabling and RCU
  softirq,rcu: Inform RCU of irq_exit() activity
  sched: Add irq_{enter,exit}() to scheduler_ipi()
  rcu: protect __rcu_read_unlock() against scheduler-using irq handlers
  rcu: Streamline code produced by __rcu_read_unlock()
  rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special
  rcu: decrease rcu_report_exp_rnp coupling with scheduler

2011-07-21 | mmc: core: Set non-default Drive Strength via platform hook | Philip Rakity | 1 | -0/+1

Non-default Drive Strength cannot be set automatically. It is a function of the board design, and it can only be set if there is a specific platform handler; that handler needs to take the board design into account. Pass the necessary information to the platform code.

For example: the card and host controller may indicate that they support HIGH and LOW drive strength. There is no way to know which should be chosen without specific board knowledge. Setting HIGH may lead to reflections and setting LOW may not suffice. There is no mechanism (like ethernet duplex or speed pulses) to determine what should be done automatically.

If no platform handler is defined, the default value is used.

Signed-off-by: Philip Rakity <prakity@marvell.com>
Reviewed-by: Arindam Nath <arindam.nath@amd.com>
Signed-off-by: Chris Ball <cjb@laptop.org>

2011-07-21 | mmc: core: add non-blocking mmc request function | Per Forlin | 2 | -1/+26

Previously there has only been one function, mmc_wait_for_req(), to start and wait for a request. This patch adds:

* mmc_start_req() - starts a request without waiting. If there is an ongoing request, it waits for that request to complete, starts the new one, and returns; it does not wait for the new command to complete.

This patch also adds new function members in struct mmc_host_ops, only called from core.c:

* pre_req - asks the host driver to prepare for the next job
* post_req - asks the host driver to clean up after a completed job

The intention is to use pre_req() and post_req() to do cache maintenance while a request is active. pre_req() can be called while a request is active to minimize the latency to start the next job. post_req() can be used after the next job is started to clean up the request, minimizing the host driver's request-end latency; it is typically called before ending the block request and handing over the buffer to the block layer.

Also add a host-private member in mmc_data to be used by pre_req to mark the data. The host driver will then check this mark to see if the data is prepared or not.

Signed-off-by: Per Forlin <per.forlin@linaro.org>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Venkatraman S <svenkatr@ti.com>
Tested-by: Sourav Poddar <sourav.poddar@ti.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>

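The intended double-buffered issue loop looks roughly like this (a sketch; signatures abridged, and the next/prev request bookkeeping is assumed):

    /* prepare the next request (e.g. dma_map_sg/cache maintenance)
     * while the previous one is still in flight */
    host->ops->pre_req(host, next_mrq, true);

    /* waits for any ongoing request to complete, then starts next_mrq
     * and returns without waiting for it */
    mmc_start_req(host, next_mrq, &err);

    /* clean up the completed request after its successor has started */
    host->ops->post_req(host, prev_mrq, err);
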
2011-07-21 | mmc: dw_mmc: fix stop when fallen back to PIO | James Hogan | 1 | -0/+2

There are several situations when dw_mci_submit_data_dma() decides to fall back to PIO mode instead of using DMA, due to a short (to avoid overhead) or "complex" (e.g. with unaligned buffers) transaction, even though host->use_dma is set. However dw_mci_stop_dma() decides whether to stop DMA or set the EVENT_XFER_COMPLETE event based on host->use_dma. When falling back to PIO mode this results in data timeout errors getting missed and the driver locking up.

Therefore add host->using_dma to indicate whether the current transaction is using DMA or not, and adjust dw_mci_stop_dma() to use that instead.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Tested-by: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Chris Ball <cjb@laptop.org>

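The shape of the fix, sketched (the PIO-fallback predicate here is a hypothetical stand-in for the driver's real checks):

    static int dw_mci_submit_data_dma(struct dw_mci *host, struct mmc_data *data)
    {
            host->using_dma = 0;    /* assume PIO until DMA is chosen */

            if (!host->use_dma || transfer_too_short_or_unaligned(data))
                    return -EINVAL; /* caller falls back to PIO */

            host->using_dma = 1;    /* this transaction really uses DMA */
            /* ... program the DMA engine ... */
            return 0;
    }

    static void dw_mci_stop_dma(struct dw_mci *host)
    {
            if (host->using_dma) {  /* was: host->use_dma */
                    /* ... abort DMA ... */
            } else {
                    /* PIO path: flag transfer completion instead */
                    set_bit(EVENT_XFER_COMPLETE, &host->pending_events);
            }
    }
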
2011-07-21 | mmc: queue: let host controllers specify maximum discard timeout | Adrian Hunter | 2 | -0/+2

Some host controllers will not operate without a hardware timeout that is limited in value. However large discards require large timeouts, so there needs to be a way to specify the maximum discard size.

A host controller driver may now specify the maximum discard timeout possible so that max_discard_sectors can be calculated. However, for eMMC when the High Capacity Erase Group Size is not in use, the timeout calculation depends on the clock rate, which may change. For that case Preferred Erase Size is used instead.

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Chris Ball <cjb@laptop.org>

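Sketched interface (field and helper placement per the commit's description; details assumed):

    struct mmc_host {
            /* ... */
            unsigned int max_discard_to;    /* max. discard timeout in ms,
                                             * 0 = no limit */
    };

    /* core derives the largest discard that fits under the host's
     * timeout; the block queue uses it to set max_discard_sectors */
    unsigned int mmc_calc_max_discard(struct mmc_card *card);
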
2011-07-21 | mmc: dw_mmc: handle unaligned buffers and sizes | James Hogan | 1 | -0/+10

Update the functions for PIO pushing and pulling data to and from the FIFO so that they can handle unaligned output buffers and unaligned buffer lengths. This makes more of the tests in mmc_test pass.

Unaligned lengths in pulls are handled by reading the full FIFO item and storing the remaining bytes in a small internal buffer (part_buf). The next data pull will copy data out of this buffer first before accessing the FIFO again. Similarly, for pushes the final bytes that don't fill a FIFO item are stored in the part_buf (or sent anyway if it's the last transfer), and then the part_buf is included at the beginning of the next buffer pushed.

Unaligned buffers in pulls are handled specially if the architecture cannot do efficient unaligned accesses, by reading FIFO items into an aligned local buffer and memcpy'ing them into the output buffer, again storing any remaining bytes in the internal buffer. Similarly for pushes, the buffer is memcpy'd into an aligned local buffer, then written to the FIFO.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Signed-off-by: Chris Ball <cjb@laptop.org>

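A sketch of the partial-buffer bookkeeping on the pull side (push is symmetric; field names follow the commit text, logic abridged):

    /* drain bytes left over from the previous FIFO item first */
    static int dw_mci_pull_part_bytes(struct dw_mci *host, void *buf, int cnt)
    {
            cnt = min(cnt, (int)host->part_buf_count);
            if (cnt) {
                    memcpy(buf, (void *)&host->part_buf + host->part_buf_start,
                           cnt);
                    host->part_buf_count -= cnt;
                    host->part_buf_start += cnt;
            }
            return cnt;     /* caller reads whole FIFO items for the rest */
    }
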
2011-07-21 | mmc: dw_mmc: don't hard code fifo depth, fix usage | James Hogan | 1 | -0/+8

The FIFO_DEPTH hardware configuration parameter can be found from the power-on value of RX_WMark in the FIFOTH register. This is used to initialise the watermarks, but when calculating the number of free FIFO spaces a preprocessor definition is used which is hard coded to 32.

Fix reading the value out of FIFOTH (the default value in the RX_WMark field is FIFO_DEPTH-1, not FIFO_DEPTH). Allow the FIFO depth to be overridden by platform data (since a bootloader may have changed FIFOTH, making auto-detection unreliable), and store the fifo_depth for later use. Also fix the calculation of the number of free bytes in the FIFO to include the FIFO depth in the left shift by the data shift, since the FIFO depth is measured in FIFO items, not bytes.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Signed-off-by: Chris Ball <cjb@laptop.org>

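Sketch of the detection (register layout per the commit text; the field mask and pending_bytes are assumptions):

    /* power-on RX_WMark in FIFOTH is FIFO_DEPTH - 1 */
    fifo_depth = mci_readl(host, FIFOTH);
    fifo_depth = 1 + ((fifo_depth >> 16) & 0xfff);

    /* a bootloader may have rewritten FIFOTH, so platform data wins */
    host->fifo_depth = host->pdata->fifo_depth ?: fifo_depth;

    /* free space is measured in FIFO items, so scale by the data shift */
    free_bytes = (host->fifo_depth << host->data_shift) - pending_bytes;
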
2011-07-21 | mmc: dw_mmc: convert card tasklet to workqueue | James Hogan | 1 | -1/+1

Convert the card insert/remove tasklet to a workqueue, and call the setpower platform specific callback without the spinlock held. This means neither of the setpower or get_cd callbacks are called from atomic context, which allows them to sleep.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Will Newton <will.newton@imgtec.com>
Signed-off-by: Chris Ball <cjb@laptop.org>

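Illustrative shape of the conversion (workqueue and handler names are assumptions, not verified from the patch):

    /* work items run in process context, so setpower()/get_cd() may sleep */
    INIT_WORK(&host->card_work, dw_mci_work_routine_card);

    /* was: tasklet_schedule(&host->card_tasklet); */
    queue_work(dw_mci_card_workqueue, &host->card_work);
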
2011-07-21 | mmc: sdhi: Add write16_hook | Simon Horman | 2 | -0/+9

Some controllers require waiting for the bus to become idle before writing to some registers. I have implemented this by adding a hook to sd_ctrl_write16() and implementing a hook for SDHI which waits for the bus to become idle.

Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Magnus Damm <magnus.damm@gmail.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Chris Ball <cjb@laptop.org>

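Roughly what the hook looks like at the write site (a sketch; the exact signature and hook placement are assumptions — SDHI supplies a hook that polls for bus idle):

    static inline void sd_ctrl_write16(struct tmio_mmc_host *host, int addr,
                                       u16 val)
    {
            /* a non-zero return from the hook means skip the write */
            if (host->pdata->write16_hook &&
                host->pdata->write16_hook(host, addr))
                    return;
            writew(val, host->ctl + (addr << host->bus_shift));
    }
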
2011-07-21 | mmc: tmio: name 0xd8 as CTL_DMA_ENABLE | Simon Horman | 1 | -0/+1

This reflects at least the current usage of this register and I think it improves the readability of the code ever so slightly.

Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Cc: Magnus Damm <magnus.damm@gmail.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Chris Ball <cjb@laptop.org>

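The change itself amounts to a single definition (sketch of intent):

    #define CTL_DMA_ENABLE  0xd8    /* was referenced as a bare 0xd8 offset */
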
2011-07-21 | mmc: block: allow get_card_status() to return error status | Russell King - ARM Linux | 1 | -0/+10

If the MMC_SEND_STATUS command is not successful, we should not return a zero status word, but instead allow the caller to know positively that an error occurred. Convert the open-coded get_card_status() to use the helper function, and provide definitions for the card state field.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Chris Ball <cjb@laptop.org>

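A sketch of the reworked helper and a state-field accessor, close to the commit's description (details abridged):

    #define R1_CURRENT_STATE(x)     (((x) & 0x00001E00) >> 9)

    static int get_card_status(struct mmc_card *card, u32 *status, int retries)
    {
            struct mmc_command cmd = {0};
            int err;

            cmd.opcode = MMC_SEND_STATUS;
            if (!mmc_host_is_spi(card->host))
                    cmd.arg = card->rca << 16;
            cmd.flags = MMC_RSP_SPI_R2 | MMC_RSP_R1 | MMC_CMD_AC;

            err = mmc_wait_for_cmd(card->host, &cmd, retries);
            if (!err)
                    *status = cmd.resp[0];  /* only valid when err == 0 */
            return err;                     /* error is no longer hidden */
    }
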
2011-07-21 | mmc: sdhci-pxa: move platform data to include/linux/platform_data | Zhangfei Gao | 1 | -0/+60

As suggested by Arnd, move platform data to include/linux/platform_data in order to improve build coverage for the driver.

Signed-off-by: Zhangfei Gao <zhangfei.gao@marvell.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Chris Ball <cjb@laptop.org>

2011-07-21 | mmc: Standardize header file inclusion checks. | Robert P. J. Day | 17 | -46/+41

Standardize the checks for multiple MMC header file inclusion, including adding comments to terminating #endif's, and fixing one incorrect comment.

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Chris Ball <cjb@laptop.org>

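The standardized pattern, sketched (guard name illustrative):

    #ifndef LINUX_MMC_CARD_H
    #define LINUX_MMC_CARD_H

    /* ... header contents ... */

    #endif /* LINUX_MMC_CARD_H */
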
2011-07-21 | mmc: sdhci: merge two sdhci-pltfm.h into one | Shawn Guo | 1 | -29/+0

The structure sdhci_pltfm_data does not need to be in a public header like include/linux/mmc/sdhci-pltfm.h, so this patch moves it into drivers/mmc/host/sdhci-pltfm.h and eliminates the former.

Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Reviewed-by: Grant Likely <grant.likely@secretlab.ca>
Reviewed-by: Wolfram Sang <w.sang@pengutronix.de>
Signed-off-by: Chris Ball <cjb@laptop.org>

2011-07-21 | mmc: sdhci: make sdhci-pltfm device drivers self registered | Shawn Guo | 1 | -6/+0

This patch turns the common stuff in sdhci-pltfm.c into functions and gives the device drivers their own .probe and .remove, which in turn call into the common functions, so that the sdhci-pltfm device drivers register themselves and keep all device-specific things away from the common sdhci-pltfm file.

Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Reviewed-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Anton Vorontsov <cbouatmailru@gmail.com>
Signed-off-by: Chris Ball <cjb@laptop.org>

2011-07-21 | rcu: Fix wrong check in list_splice_init_rcu() | Jan H. Schönherr | 1 | -1/+1

If the list to be spliced is empty, then list_splice_init_rcu() has nothing to do. Unfortunately, list_splice_init_rcu() does not check the list to be spliced; it instead checks the list to be spliced into. This results in memory leaks given current usage. This commit therefore fixes the empty-list check.

Signed-off-by: Jan H. Schönherr <schnhrr@cs.tu-berlin.de>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

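The essence of the one-line fix, sketched (body abridged):

    static inline void list_splice_init_rcu(struct list_head *list,
                                            struct list_head *head,
                                            void (*sync)(void))
    {
            if (list_empty(list))   /* was: list_empty(head) */
                    return;

            /* ... detach list, sync(), and splice into head ... */
    }
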
2011-07-20 | Merge branch 'rcu/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-2.6-rcu into core/urgent | Ingo Molnar | 1 | -0/+3

2011-07-20 | slab: shrink sizeof(struct kmem_cache) | Eric Dumazet | 1 | -13/+13

Reduce high-order allocations for some setups. (With NR_CPUS=4096 we need 64KB per kmem_cache struct.) We now allocate exactly the needed size, using nr_cpu_ids and nr_node_ids.

This also makes the code a bit smaller on x86_64, since some field offsets drop below the 127-byte displacement limit:

Before patch:

  # size mm/slab.o
     text    data    bss     dec    hex filename
    22605  361665     32  384302  5dd2e mm/slab.o

After patch:

  # size mm/slab.o
     text    data    bss     dec    hex filename
    22349  353473   8224  384046  5dc2e mm/slab.o

CC: Andrew Morton <akpm@linux-foundation.org>
Reported-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>

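The structural idea, sketched (layout abridged; assumes the per-node array trails the struct so the allocation can be trimmed at runtime):

    struct kmem_cache {
            /* ... hot fields ... */

            /*
             * Must be last: the cache is allocated with only
             *   offsetof(struct kmem_cache, nodelists) +
             *   nr_node_ids * sizeof(struct kmem_list3 *)
             * bytes, not the full MAX_NUMNODES-sized footprint.
             */
            struct kmem_list3 *nodelists[MAX_NUMNODES];
    };
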
2011-07-20 | sched: Allow for overlapping sched_domain spans | Peter Zijlstra | 1 | -0/+2

Allow for sched_domain spans that overlap by giving such domains their own sched_group list instead of sharing the sched_groups amongst each other.

This is needed for machines with more than 16 nodes, because sched_domain_node_span() will generate a node mask from the 16 nearest nodes without regard to whether these masks have any overlap.

Currently sched_domains have a sched_group that maps to their child sched_domain span, and since there is no overlap we share the sched_group between the sched_domains of the various CPUs. If however there is overlap, we would need to link the sched_group list in different ways for each CPU, and hence sharing isn't possible.

In order to solve this, allocate private sched_groups for each CPU's sched_domain but have the sched_groups share a sched_group_power structure such that we can uniquely track the power.

Reported-and-tested-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-08bxqw9wis3qti9u5inifh3y@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>

2011-07-20 | sched: Break out cpu_power from the sched_group structure | Peter Zijlstra | 1 | -5/+9

In order to prepare for non-unique sched_groups per domain, we need to carry the cpu_power elsewhere, so put a level of indirection in.

Reported-and-tested-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/n/tip-qkho2byuhe4482fuknss40ad@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@elte.hu>

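Sketch of the indirection (abridged; only the fields named by these two commits):

    struct sched_group_power {
            atomic_t ref;           /* shared between overlapping groups */
            unsigned int power;     /* CPU power of this group */
    };

    struct sched_group {
            struct sched_group *next;       /* circular group list */
            struct sched_group_power *sgp;  /* was: unsigned int cpu_power */
            /* ... */
    };
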
2011-07-20 | quota: Remove unused declaration | Jan Kara | 1 | -8/+0

There is no point in declaring the quotactl() syscall prototype in a kernel header, and 'make headers_check' complains about it. So just remove those lines.

Signed-off-by: Jan Kara <jack@suse.cz>

2011-07-20 | inode: move to per-sb LRU locks | Dave Chinner | 1 | -1/+2

With the inode LRUs moving to per-sb structures, there is no longer a need for a global inode_lru_lock. The locking can be made more fine-grained by moving to a per-sb LRU lock, isolating the LRU operations of different filesystems completely from each other.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | inode: Make unused inode LRU per superblock | Dave Chinner | 1 | -0/+4

The inode unused list is currently a global LRU. This does not match the other global filesystem cache - the dentry cache - which uses per-superblock LRU lists. Hence we have related filesystem object types using different LRU reclamation schemes.

To enable a per-superblock filesystem cache shrinker, both of these caches need to have per-sb unused object LRU lists. Hence this patch converts the global inode LRU to per-sb LRUs. The patch only does rudimentary per-sb proportioning in the shrinker infrastructure, as this gets removed when the per-sb shrinker callouts are introduced later on.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | vmscan: add customisable shrinker batch size | Dave Chinner | 1 | -0/+1

For shrinkers that have their own cond_resched* calls, having shrink_slab break the work down into small batches is not particularly efficient. Add a custom batchsize field to the struct shrinker so that shrinkers can use a larger batch size if they desire. A value of zero (uninitialised) means "use the default", so behaviour is unchanged by this patch.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

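Sketch of the new knob in context (struct as of this era, abridged):

    struct shrinker {
            int (*shrink)(struct shrinker *, struct shrink_control *sc);
            int seeks;      /* seeks to recreate an obj */
            long batch;     /* new: reclaim batch size, 0 = default */

            /* These are for internal use */
            struct list_head list;
            long nr;        /* objs pending delete */
    };
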
2011-07-20 | btrfs: kill magical embedded struct superblock | Al Viro | 1 | -0/+2

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | switch vfs_path_lookup() to struct path | Al Viro | 1 | -1/+1

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | kill lookup_create() | Al Viro | 1 | -1/+0

Folded into the only caller (kern_path_create()).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | make sure that nsproxy_cache is initialized early enough | Al Viro | 1 | -0/+1

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | new helpers: kern_path_create/user_path_create | Al Viro | 1 | -0/+2

A combination of kern_path_parent() and lookup_create(). Does *not* expose struct nameidata to the caller. Syscalls converted to that...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | kill LOOKUP_CONTINUE | Al Viro | 1 | -1/+0

LOOKUP_PARENT is equivalent to it now.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | nfs_open_context doesn't need struct path either | Al Viro | 1 | -2/+2

Just the dentry, please...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | kill IPERM_FLAG_RCU | Al Viro | 1 | -2/+0

Not used anymore.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | ->permission() sanitizing: don't pass flags to exec_permission() | Al Viro | 1 | -7/+0

Pass the mask instead; kill security_inode_exec_permission(), since we can use security_inode_permission() instead.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | ->permission() sanitizing: don't pass flags to ->inode_permission() | Al Viro | 1 | -1/+1

Pass that via the mask instead.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | ->permission() sanitizing: don't pass flags to ->permission() | Al Viro | 3 | -3/+3

Not used by the instances anymore.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | ->permission() sanitizing: don't pass flags to generic_permission() | Al Viro | 1 | -1/+1

Redundant; all callers get it duplicated in mask & MAY_NOT_BLOCK, and none of them removes that bit.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | ->permission() sanitizing: don't pass flags to ->check_acl() | Al Viro | 3 | -3/+3

Not used in the instances anymore.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | ->permission() sanitizing: MAY_NOT_BLOCK | Al Viro | 1 | -0/+1

Duplicate the flags argument into the mask bitmap.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | kill check_acl callback of generic_permission() | Al Viro | 2 | -2/+3

Its value depends only on the inode and does not change; we might as well store it in ->i_op->check_acl and be done with it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | lockless get_write_access/deny_write_access | Al Viro | 2 | -3/+52

New helpers: atomic_inc_unless_negative()/atomic_dec_unless_positive().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

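The helpers are cmpxchg loops; a sketch close to this commit's version:

    static inline int atomic_inc_unless_negative(atomic_t *p)
    {
            int v, v1;

            for (v = 0; v >= 0; v = v1) {
                    v1 = atomic_cmpxchg(p, v, v + 1);
                    if (likely(v1 == v))
                            return 1;       /* incremented */
            }
            return 0;       /* value was negative; nothing done */
    }

With these, get_write_access() essentially becomes atomic_inc_unless_negative(&inode->i_writecount), and deny_write_access() its mirror image, so neither needs a lock.
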
2011-07-20 | kill file_permission() completely | Al Viro | 1 | -1/+0

Convert the last remaining caller to inode_permission().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | consolidate BINPRM_FLAGS_ENFORCE_NONDUMP handling | Al Viro | 1 | -0/+1

New helper: would_dump(bprm, file). Checks if we are allowed to read the file and, if we are not, sets BINPRM_FLAGS_ENFORCE_NONDUMP. Exported, used in places that previously open-coded the same logic.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

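Sketch of the helper as described:

    void would_dump(struct linux_binprm *bprm, struct file *file)
    {
            /* if the user can't read the file, its contents must not
             * leak into a core dump of the new image */
            if (inode_permission(file->f_path.dentry->d_inode, MAY_READ) < 0)
                    bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP;
    }
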
2011-07-20 | new helper: iterate_supers_type() | Al Viro | 1 | -0/+2

Calls the given function for all superblocks of the given type. The function gets a superblock (with s_umount locked shared) and a (void *) argument supplied by the caller of the iterator.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2011-07-20 | fs: add a DCACHE_NEED_LOOKUP flag for d_flags | Josef Bacik | 1 | -0/+7

Btrfs (and I'd venture most other fs's) stores its indexes in nice disk order for readdir, but unfortunately in the case of anything that stats the files in the order that readdir spits back (like, oh say, ls) that means we still have to do the normal lookup of the file, which means looking up our other index and then looking up the inode.

What I want is a way to create dummy dentries when we find them in readdir, so that when ls or anything else subsequently does a stat(), we already have the location information in the dentry and can go straight to the inode itself. The lookup code just assumes that if it finds a dentry it is done; it doesn't perform a lookup. So add a DCACHE_NEED_LOOKUP flag so that the lookup code knows it still needs to run i_op->lookup() on the parent to get the inode for the dentry.

I have tested this with btrfs and I went from something that looks like this

  http://people.redhat.com/jwhiter/ls-noreada.png

to this

  http://people.redhat.com/jwhiter/ls-good.png

That's a savings of 1300 seconds, or 22 minutes. That is a significant savings. Thanks,

Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

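Sketch of the flag and the check the lookup path grows (the flag's bit value here is illustrative):

    #define DCACHE_NEED_LOOKUP  0x0080  /* dentry lacks its inode yet */

    static inline int d_need_lookup(struct dentry *dentry)
    {
            /* a dcache hit is not enough: readdir created this dentry
             * without an inode, so i_op->lookup() must still run */
            return dentry->d_flags & DCACHE_NEED_LOOKUP;
    }
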
2011-07-20 | rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special | Paul E. McKenney | 1 | -0/+3

The RCU_BOOST commits for TREE_PREEMPT_RCU introduced an other-task write to a new RCU_READ_UNLOCK_BOOSTED bit in the task_struct structure's ->rcu_read_unlock_special field, but, as noted by Steven Rostedt, without correctly synchronizing all accesses to ->rcu_read_unlock_special. This could result in bits in ->rcu_read_unlock_special being spuriously set and cleared due to conflicting accesses, which in turn could result in deadlocks between the rcu_node structure's ->lock and the scheduler's rq and pi locks. These deadlocks would result from RCU incorrectly believing that the just-ended RCU read-side critical section had been preempted and/or boosted. If that RCU read-side critical section was executed with either rq or pi locks held, RCU's ensuing (incorrect) calls to the scheduler would cause the scheduler to attempt to once again acquire the rq and pi locks, resulting in deadlock. More complex deadlock cycles are also possible, involving multiple rq and pi locks as well as locks from multiple rcu_node structures.

This commit fixes synchronization by creating a ->rcu_boosted field in task_struct that is accessed and modified only when holding the ->lock in the rcu_node structure on which the task is queued (on that rcu_node structure's ->blkd_tasks list). This results in tasks accessing only their own current->rcu_read_unlock_special fields, making unsynchronized access once again legal, and keeping the rcu_read_unlock() fastpath free of atomic instructions and memory barriers.

The reason that the rcu_read_unlock() fastpath does not need to access the new current->rcu_boosted field is that this new field cannot be non-zero unless the RCU_READ_UNLOCK_BLOCKED bit is set in the current->rcu_read_unlock_special field. Therefore, rcu_read_unlock() need only test current->rcu_read_unlock_special: if that is zero, then current->rcu_boosted must also be zero.

This bug does not affect TINY_PREEMPT_RCU, because this implementation of RCU accesses current->rcu_read_unlock_special with irqs disabled, thus preventing races on the !SMP systems that TINY_PREEMPT_RCU runs on.

Maybe-reported-by: Dave Jones <davej@redhat.com>
Maybe-reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>

2011-07-20 | bcma: allow enabling PLL | Rafał Miłecki | 1 | -0/+2

Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>

2011-07-20 | bcma: allow setting FAST clockmode for a core | Rafał Miłecki | 1 | -1/+7

Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>

2011-07-20 | bcma: trivial: add helpers for masking/setting | Rafał Miłecki | 1 | -0/+8

Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>

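The helpers are read-modify-write one-liners; a sketch matching the subject (bcma_read32()/bcma_write32() are the existing accessors):

    static inline void bcma_mask32(struct bcma_device *cc, u16 offset, u32 mask)
    {
            bcma_write32(cc, offset, bcma_read32(cc, offset) & mask);
    }

    static inline void bcma_set32(struct bcma_device *cc, u16 offset, u32 set)
    {
            bcma_write32(cc, offset, bcma_read32(cc, offset) | set);
    }

    static inline void bcma_maskset32(struct bcma_device *cc, u16 offset,
                                      u32 mask, u32 set)
    {
            bcma_write32(cc, offset, (bcma_read32(cc, offset) & mask) | set);
    }
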