Age | Commit message | Author | Files | Lines |
|
Most of the kernel assumes that PFN0 is the start of the physical
memory (RAM). This assumption is not true on most ARM SoCs, so if
one tries to update the ARM port to follow it, we end up breaking
the dma bounce limit for a few block layer drivers. One such example:
unifying the meaning of max*_pfn on ARM, as the bootmem layer
expects, breaks the dma bounce limit of a few block layer drivers.
To fix this problem, we introduce a generic dma_max_pfn(dev) helper,
with the possibility of an override from architecture code. The
helper converts a device's DMA bitmask to a block PFN number. In all
the generic cases it is just "dev->dma_mask >> PAGE_SHIFT", so the
default behavior is maintained as is.
Subsequent patches will make use of the helper. No functional change.
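A minimal sketch of the generic fallback described above, assuming
the usual struct device mask layout (the in-tree helper may differ):

  #ifndef dma_max_pfn
  static inline unsigned long dma_max_pfn(struct device *dev)
  {
          /* highest PFN the device can address with its DMA mask */
          return *dev->dma_mask >> PAGE_SHIFT;
  }
  #endif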
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Many drivers contain code such as:

  dev->dma_mask = &dev->coherent_dma_mask;
  dev->coherent_dma_mask = MASK;

Let's move this pattern out of drivers and have the DMA API provide a
helper for it. This helper uses dma_set_mask_and_coherent() so that
platform issues can be properly dealt with via
dma_set_mask()/dma_supported().
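A sketch of what this helper could look like, assuming it merely
points the streaming mask at the coherent mask before delegating (the
helper name used here is an assumption; the text above does not name
it):

  static inline int dma_coerce_mask_and_coherent(struct device *dev,
                                                 u64 mask)
  {
          /* force the streaming DMA mask to alias the coherent one */
          dev->dma_mask = &dev->coherent_dma_mask;
          return dma_set_mask_and_coherent(dev, mask);
  }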
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
AMBA Primecell devices always treat streaming and coherent DMA exactly
the same, so there's no point in having the masks separated.
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
This patch adds support for hardware syncpoint bases. This creates
a simple mechanism to stall the command FIFO until an operation is
completed.
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Functions host1x_syncpt_request() and _host1x_syncpt_alloc() have
been taking a separate boolean flag ('client_managed') indicating
whether the syncpoint value should be tracked by the host1x driver.
This patch converts the field into a generic 'flags' field so that
we can easily add more information when requesting a syncpoint.
Clients are adapted to use the new interface accordingly.
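A hedged sketch of the interface change (the flag name mirrors the
boolean it replaces; the exact constant values are assumptions):

  /* before */
  struct host1x_syncpt *host1x_syncpt_request(struct device *dev,
                                              bool client_managed);

  /* after: a flags word leaves room for future bits */
  #define HOST1X_SYNCPT_CLIENT_MANAGED  (1 << 0)

  struct host1x_syncpt *host1x_syncpt_request(struct device *dev,
                                              unsigned long flags);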
Signed-off-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Initialize and power the 3D unit on Tegra20, Tegra30 and Tegra114 and
register a channel with the Tegra DRM driver so that the unit can be
used from userspace.
Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
The Tegra DRM driver currently uses some infrastructure to defer the DRM
core initialization until all required devices have registered. The same
infrastructure can potentially be used by any other driver that requires
more than a single sub-device of the host1x module.
Make the infrastructure more generic and keep only the DRM specific code
in the DRM part of the driver. Eventually this will make it easy to move
the DRM driver part back to the DRM subsystem.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Expose the buffer objects, syncpoint and channel functionality in the
public header so that drivers can use them.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
This structure derives from host1x_client. DRM-specific fields are moved
from host1x_client to this structure, so that host1x_client can remain
agnostic of DRM.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
In preparation to support host1x clients other than DRM, move this
header into a public location.
Signed-off-by: Thierry Reding <treding@nvidia.com>
|
|
Since the definition of of_find_next_cache_node() is architecture
independent, the existing definition in powerpc can be moved to
drivers/of/base.c.
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Rob Herring <rob.herring@calxeda.com>
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
In some environments it is preferable to postpone the resume of the
card device until runtime_resume is carried out, since this means a
significant decrease of the total system resume time.
The resume time decreases simply because the actual re-initialization
of the card, which typically takes hundreds of milliseconds, is
performed outside the resume sequence and thus won't affect it.
For a removable card, the detect work tries to re-detect the card to
make sure it is still present; as a part of that sequence the card
will also be runtime_resumed and thus also fully resumed.
For a non-removable card, typically an mmc blk request will trigger a
runtime_resume and thus fully resume the card. This also means the
first request will likely suffer from an initial latency, since the
re-initialization of the card needs to be performed.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
|
|
By adding a card state that records whether it is suspended or
resumed, we can accept asynchronous suspend/resume requests for the
mmc and sd bus_ops.
With MMC_CAP_AGGRESSIVE_PM, the runtime bus_ops callbacks execute a
suspend of the card upon request inactivity. In the state where this
has been done, we can receive a suspend request for the mmc bus,
which for sd and mmc forced the card to the active state through a
pm_runtime_get_sync. In other words, the card was resumed and then
immediately suspended again, completely unnecessarily.
Since the suspend/resume bus_ops callbacks for sd and mmc are now
capable of handling asynchronous requests, we no longer need to force
the card to the active state before executing suspend. This prevents
the above sequence for MMC_CAP_AGGRESSIVE_PM.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
|
|
There are no more users of the deprecated mmc_suspend|resume_host
API, so let's remove it.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
|
|
The negotiated ocr mask is directly related to the card. Once a card
gets removed, the mask shall be dropped. Moving the cached ocr mask
from the host struct to the card struct accomplishes this.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
|
|
Some switch operations, like poweroff notify, shall according to the
spec not be followed by any other new commands. For these cases, when
the host doesn't support MMC_CAP_WAIT_WHILE_BUSY, we must not send
status commands to poll for busy detection. Instead, wait out the
timeout stated in the EXT_CSD before completing the request.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Jaehoon Chung <jh80.chung@samsung.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
|
|
There are a few special cases, like exynos5440, which don't send a
POSTCHANGE notification from their ->target() routine but instead
defer that work to some kind of bottom half (work/tasklet/etc.), from
which they finally send the POSTCHANGE notification.
It's better if we distinguish them from other cpufreq drivers in some
way, so that the core can handle them specially. So this patch
introduces another flag, CPUFREQ_ASYNC_NOTIFICATION, which will be
set by such drivers.
This also changes exynos5440-cpufreq.c and powernow-k8 in order to
set this flag.
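A hedged sketch of how such a driver opts in (struct cpufreq_driver
does have a .flags field; the driver body and callback name here are
illustrative):

  static struct cpufreq_driver exynos_driver = {
          /* POSTCHANGE arrives later, from a bottom half */
          .flags  = CPUFREQ_ASYNC_NOTIFICATION,
          .target = exynos_target,
          /* ... */
  };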
Acked-by: Amit Daniel Kachhap <amit.daniel@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The server does allow NFS over v4.2, even if it doesn't add any new
operations yet.
I also switch to using constants to represent the last operation for
each minor version since this makes the code cleaner and easier to
understand at a quick glance.
Signed-off-by: Anna Schumaker <bjschuma@netapp.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
|
|
git://git.baserock.org/delta/linux into devel-stable
Conflicts:
arch/arm/kernel/head.S
This series has been well tested and it would be great to get this
merged now.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
this_cpu_sub() is implemented as negation and addition.
This patch casts the adjustment to the counter type before negation to
sign extend the adjustment. This helps in cases where the counter type
is wider than an unsigned adjustment. An alternative to this patch is
to declare such operations unsupported, but it seemed useful to avoid
surprises.
This patch specifically helps the following example:
  unsigned int delta = 1;
  preempt_disable();
  this_cpu_write(long_counter, 0);
  this_cpu_sub(long_counter, delta);
  preempt_enable();
Before this change, long_counter on a 64-bit machine ends up with the
value 0xffffffff rather than 0xffffffffffffffff. This is because
this_cpu_sub(pcp, delta) boils down to this_cpu_add(pcp, -delta),
which is basically:

  long_counter = 0 + 0xffffffff
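Sketched, the fix folds the cast into the sub wrappers (shown for one
of them; the others follow the same pattern):

  #define this_cpu_sub(pcp, val)  this_cpu_add((pcp), -(typeof(pcp))(val))

With the cast, the negation happens at the counter's width, so an
unsigned int adjustment sign-extends correctly into a 64-bit counter.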
Also apply the same cast to:
__this_cpu_sub()
__this_cpu_sub_return()
this_cpu_sub_return()
All of percpu_test.ko passes, especially the following cases which
previously failed:

  l -= ui_one;
  __this_cpu_sub(long_counter, ui_one);
  CHECK(l, long_counter, -1);

  l -= ui_one;
  this_cpu_sub(long_counter, ui_one);
  CHECK(l, long_counter, -1);
  CHECK(l, long_counter, 0xffffffffffffffff);

  ul -= ui_one;
  __this_cpu_sub(ulong_counter, ui_one);
  CHECK(ul, ulong_counter, -1);
  CHECK(ul, ulong_counter, 0xffffffffffffffff);

  ul = this_cpu_sub_return(ulong_counter, ui_one);
  CHECK(ul, ulong_counter, 2);

  ul = __this_cpu_sub_return(ulong_counter, ui_one);
  CHECK(ul, ulong_counter, 1);
Signed-off-by: Greg Thelen <gthelen@google.com>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
We currently use some ad-hoc arch variables tied to legacy KVM device
assignment to manage emulation of instructions that depend on whether
non-coherent DMA is present. Create an interface for this, adapting
legacy KVM device assignment and adding VFIO via the KVM-VFIO device.
For now we assume that non-coherent DMA is possible any time we have a
VFIO group. Eventually an interface can be developed as part of the
VFIO external user interface to query the coherency of a group.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Default to operating in coherent mode. This simplifies the logic when
we switch to a model of registering and unregistering noncoherent I/O
with KVM.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
So far we've succeeded at making KVM and VFIO mostly unaware of each
other, but areas are cropping up where a connection beyond eventfds
and irqfds needs to be made. This patch introduces a KVM-VFIO device
that is meant to be a gateway for such interaction. The user creates
the device and can add VFIO groups to it and remove them via file
descriptors. When a group is added, KVM verifies the group is valid
and gets a reference to it via the VFIO external user interface.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The edma header defines DMA_COMPLETE; this causes issues now that
commit adfedd9a32e4 moved DMA_SUCCESS to DMA_COMPLETE. edma should
properly namespace its defines; that needs a future fix.
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
Using a spinlock to atomically increase a counter sounds wrong -- we
have atomic_t for this!
Also move 'seq_nr' to a different cache line than 'lock' to reduce
cache line thrashing. This has the nice side effect of decreasing the
size of struct parallel_data from 192 to 128 bytes for an x86-64
build, i.e. occupying only two instead of three cache lines.
These changes result in a 5% performance increase on an IPsec test
run using pcrypt.
Btw, the seq_lock spinlock was never explicitly initialized -- one
more reason to get rid of it.
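A hedged sketch of both changes (field names follow the description
above; the actual padata layout may differ):

  struct parallel_data {
          /* ... */
          atomic_t        seq_nr;         /* was guarded by seq_lock */
          /* ... */
          spinlock_t      lock ____cacheline_aligned; /* away from seq_nr */
  };

  /* sequence number assignment becomes lock-free: */
  seq_nr = atomic_inc_return(&pd->seq_nr);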
Signed-off-by: Mathias Krause <mathias.krause@secunet.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Fixes this build error:
In file included from include/asm-generic/gpio.h:13:0,
from include/linux/gpio.h:51,
from include/linux/of_gpio.h:20,
from arch/powerpc/sysdev/ppc4xx_gpio.c:29:
include/linux/gpio/driver.h:85:14: error: 'struct seq_file' declared inside parameter list [-Werror]
include/linux/gpio/driver.h:85:14: error: its scope is only this definition or declaration, which is probably not what you want [-Werror]
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Fixes this build error on sparc:
In file included from drivers/spi/spi.c:33:0:
include/linux/of_gpio.h: In function 'of_get_named_gpio_flags':
include/linux/of_gpio.h:93:3: error: implicit declaration of function 'desc_to_gpio' [-Werror=implicit-function-declaration]
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
commit 6b3d8145dcfdbbb43f13544e16f44f4574f941dd
"gpiolib: make GPIO_DEVRES depend on GPIOLIB"
breaks builds when device drivers use devm_gpio* devres functions
without enabling GPIOLIB, relying on the devres code being compiled
anyway.
Provide stubs so that these functions are still available when the
devres functions are used without GPIOLIB.
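A sketch of what one such stub could look like (the exact return
value and signature are assumptions):

  #ifndef CONFIG_GPIOLIB
  static inline int devm_gpio_request(struct device *dev, unsigned gpio,
                                      const char *label)
  {
          WARN_ON(1);     /* devres gpio helper used without GPIOLIB */
          return -EINVAL;
  }
  #endif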
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Cc: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
cpuidle_unregister_governor() and cpuidle_replace_governor() aren't
used anymore and can be removed. They were used by cpuidle governors
earlier, but since those governors can't be compiled as modules any
more, these two functions aren't necessary.
Suggested-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Use tabs for cpumask indentation in struct cpuidle_driver.
[rjw: Changelog]
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
suppressed" messages
pr_debug_ratelimited should be coded similarly to dev_dbg_ratelimited
to reduce the "callbacks suppressed" messages.
Add #include <linux/dynamic_debug.h> to printk.h. Unfortunately, this
new #include must be after the prototype/declaration of function printk.
It may be better to split out these _ratelimited declarations into
a separate file one day.
Any use of these pr_<foo>_ratelimited functions must also have
another specific #include <linux/ratelimit.h>; most users get this
done indirectly via #include <linux/kernel.h>.
printk.h itself may not #include <linux/ratelimit.h>, as that causes
circular dependencies and compilation failures.
Signed-off-by: Joe Perches <joe@perches.com>
Tested-by: Krzysztof Mazur <krzysiek@podlesie.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
dev_WARN() and dev_WARN_ONCE() are annoying because (1) they include
only the driver name, not the device name, and (2) they print a spurious
newline in the middle. This results in messages like this that are less
useful than they should be:
[ 40.094995] Device pcieport
disabling already-disabled device
This patch makes them work more like dev_printk().
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
uprobe_copy_process() does nothing if the child shares ->mm with
the forking process, but there is a special case: CLONE_VFORK.
In this case it would be more correct to do dup_utask() but avoid
dup_xol(). This is not that important; the child should not unwind
its stack too much, as this can corrupt the parent's stack, but at
least we need this to allow ret-probing __vfork() itself.
Note: in theory it would be better to check task_pt_regs(p)->sp
instead of CLONE_VFORK; we need dup_utask() if and only if the child
can return from the function called by the parent. But this needs an
arch-dependent helper, and I think that nobody actually does
clone(same_stack, CLONE_VM).
Reported-by: Martin Cermak <mcermak@redhat.com>
Reported-by: David Smith <dsmith@redhat.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
|
|
linux/uprobes.h declares arch_uprobe_skip_sstep() as a weak function.
But as there is no generic definition, when trying to build uprobes
for an architecture that doesn't yet have an arch_uprobe_skip_sstep()
implementation, the vmlinux will try to call arch_uprobe_skip_sstep()
somewhere in Stupidhistan, leading to a system crash. We rather want
a proper link error, so remove arch_uprobe_skip_sstep().
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
|
|
Add fb_<level> convenience macros that emit a "fb%d: " prefix using
the struct fb_info->node value.
This neatens and shortens the code a bit.
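One such macro might look like this (a sketch; the actual set of
levels presumably mirrors the pr_<level> family):

  #define fb_err(fb_info, fmt, ...)                             \
          pr_err("fb%d: " fmt, (fb_info)->node, ##__VA_ARGS__)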
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
|
|
Conflicts:
tools/perf/builtin-record.c
tools/perf/builtin-top.c
tools/perf/util/hist.h
|
|
napi_disable uses an msleep() call to wait for outstanding napi work
to finish after setting the disable bit. It does not always sleep, in
case there was no outstanding work. This resulted in a rare bug in the
ixgbe_down operation, where a napi_disable call took place inside a
local_bh_disable()d context.
In order to make future 'sleep while atomic' BUGs easier to detect,
this patch adds a might_sleep() call, so that every use of
napi_disable in an atomic context will be visible.
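A sketch of the resulting helper (the wait loop reflects
napi_disable's long-standing shape; treat details as illustrative):

  static inline void napi_disable(struct napi_struct *n)
  {
          might_sleep();          /* trips on atomic-context callers */
          set_bit(NAPI_STATE_DISABLE, &n->state);
          while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
                  msleep(1);      /* wait out in-flight napi work */
          clear_bit(NAPI_STATE_DISABLE, &n->state);
  }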
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Cc: Alexander Duyck <alexander.duyck@intel.com>
Cc: Hyong-Youb Kim <hykim@myri.com>
Cc: Amir Vadai <amirv@mellanox.com>
Cc: Dmitry Kravkov <dmitry@broadcom.com>
Tested-by: Phil Schmitt <phillip.j.schmitt@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|
|
Add new function virtqueue_is_broken(). Callers of virtqueue_get_buf()
should check for a broken queue.
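A hedged usage sketch for such a caller (the error handling is
illustrative):

  void *buf;
  unsigned int len;

  buf = virtqueue_get_buf(vq, &len);
  if (!buf && virtqueue_is_broken(vq)) {
          /* transport failure or hot-unplug: stop using the queue */
          goto out;
  }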
Signed-off-by: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
virtqueue_{kick()/notify()} should exploit the new host notification
API. If the notify call returns a failure (e.g. a kick triggered
after a device was hot-unplugged), the virtqueue is set to 'broken'
and false is returned; otherwise true.
Signed-off-by: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Currently a host kick error is silently ignored and not reflected in
the virtqueue of a particular virtio device.
Changing the notify API for guest->host notification seems to be one
prerequisite for being able to handle such errors in the context
where the kick is triggered.
This patch changes the notify API: the notify function must now
return a bool, returning false if the host notification failed.
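Sketched as a callback signature change (the placement of the member
within the vring setup is an assumption):

  /* before: the kick outcome was discarded */
  void (*notify)(struct virtqueue *vq);

  /* after: false signals that the host notification failed */
  bool (*notify)(struct virtqueue *vq);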
Signed-off-by: Heinz Graalfs <graalfs@linux.vnet.ibm.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
The code for privacy extensions is very mature, and making it
configurable only gives marginal memory/code savings in exchange for
obfuscation and hard-to-read code via CPP ifdef'ery.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs into linux-next
Pull fs-cache fixes from David Howells:
Can you pull these commits to fix an issue with NFS whereby caching can be
enabled on a file that is open for writing by subsequently opening it for
reading. This can be made to crash by opening it for writing again if you're
quick enough.
The gist of the patchset is that the cookie should be acquired at inode
creation only and subsequently enabled and disabled as appropriate (which
dispenses with the backing objects when they're not needed).
The extra synchronisation that NFS does can then be dispensed with as it is
thenceforth managed by FS-Cache.
Could you send these on to Linus?
This will likely also need fixing in CIFS and 9P once the FS-Cache
changes are upstream. AFS and Ceph are probably safe.
* 'fscache' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs:
NFS: Use i_writecount to control whether to get an fscache cookie in nfs_open()
FS-Cache: Provide the ability to enable/disable cookies
FS-Cache: Add use/unuse/wake cookie wrappers
|
|
Make the ksymtab symbols for EXPORT_SYMBOL visible.
This prevents the LTO compiler from adding a .NUMBER prefix,
which avoids various problems in later export processing.
Cc: rusty@rustcorp.com.au
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/cleanup
From Paul Walmsley <paul@pwsan.com> via Tony Lindgren:
Move some of the OMAP2+ CM and System Control Module direct
register accesses into CM- and System Control
Module-specific "drivers" underneath arch/arm/mach-omap2/. This
is a prerequisite for moving this code out of arch/arm/mach-omap2/ into
drivers/.
Basic test logs are available here:
http://www.pwsan.com/omap/testlogs/cm_scm_cleanup_a_v3.13/20131019101809/
* tag 'omap-for-v3.13/cm-scm-cleanup-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
ARM: OMAP3: control: add API for setting IVA bootmode
ARM: OMAP3: CM/control: move CM scratchpad save to CM driver
ARM: OMAP3: McBSP: do not access CM register directly
ARM: OMAP3: clock: add API to enable/disable autoidle for a single clock
ARM: OMAP2: CM/PM: remove direct register accesses outside CM code
+ Linux 3.12-rc4
Signed-off-by: Olof Johansson <olof@lixom.net>
|