|
kernfs_addrm_cxt and the accompanying kernfs_addrm_start/finish() were
added because there were operations which should be performed outside
kernfs_mutex after adding and removing kernfs_nodes. The necessary
operations were recorded in kernfs_addrm_cxt and performed by
kernfs_addrm_finish(); however, after the recent changes which
relocated deactivation and unmapping so that they're performed
directly during removal, the only operation kernfs_addrm_finish()
performs is kernfs_put(), which can be moved inside the removal path
too.
This patch moves the kernfs_put() of the base ref to __kernfs_remove()
and removes kernfs_addrm_cxt and kernfs_addrm_start/finish().
* kernfs_add_one() is updated to grab and release the parent's active
ref and kernfs_mutex itself. kernfs_get/put_active() and
kernfs_addrm_start/finish() invocations around it are removed from
all users.
* __kernfs_remove() puts an unlinked node directly instead of chaining
it to kernfs_addrm_cxt. Its callers are updated to grab and release
kernfs_mutex instead of calling kernfs_addrm_start/finish() around
it.
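For illustration, a minimal sketch of the simplified call sites after this
change (not the exact kernel code):

	mutex_lock(&kernfs_mutex);
	__kernfs_remove(kn);	/* unlinks the node and puts the base ref itself */
	mutex_unlock(&kernfs_mutex);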
v2: Updated to fit the v2 restructuring of removal path.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The recursive nature of kernfs_remove() means that, even if
kernfs_remove() is not allowed to be called multiple times on the same
node, there may be race conditions between removal of parent and its
descendants. While we can claim that kernfs_remove() shouldn't be
called on one of the descendants while the removal of an ancestor is
in progress, such rule is unnecessarily restrictive and very difficult
to enforce. It's better to simply allow invoking kernfs_remove() as
the caller sees fit as long as the caller ensures that the node is
accessible.
The current behavior in such situations is broken. Whoever enters
the removal path first takes the node off the hierarchy and then
deactivates it. Following removers either return as soon as they notice
that they're not the first one, or can't even find the target node as it
has already been removed from the hierarchy. In both cases, the
following removers may finish prematurely while the nodes which should
be removed and drained are still being processed by the first one.
This patch restructures the removal path so that multiple removers, whether
through recursion or direct invocation, always follow the rules below.
* When there are multiple concurrent removers, only one puts the base
ref.
* Regardless of which one puts the base ref, all removers are blocked
until the target node is fully deactivated and removed.
To achieve the above, the removal path now first deactivates the subtree,
drains it and then unlinks nodes one-by-one. __kernfs_deactivate() is
called directly from __kernfs_remove() and drops and regrabs
kernfs_mutex for each descendant to drain active refs. As this means
that multiple removers can enter __kernfs_deactivate() for the same
node, the function is updated so that it can handle multiple
deactivators of the same node - only one actually deactivates but all
wait till drain completion.
The restructured removal path guarantees that a removed node gets
unlinked only after the node is deactivated and drained. Combined
with proper multiple deactivator handling, this guarantees that any
invocation of kernfs_remove() returns only after the node itself and
all its descendants are deactivated, drained and removed.
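For illustration, a conceptual sketch of the multiple-deactivator handling
described above (simplified; the mutex dropping around the drain is elided):

	/* called with kernfs_mutex held by every remover */
	static void __kernfs_deactivate(struct kernfs_node *kn)
	{
		/* only the first deactivator applies the bias ... */
		if (atomic_read(&kn->active) >= 0)
			atomic_add(KN_DEACTIVATED_BIAS, &kn->active);

		/* ... but all deactivators wait until the node is drained */
		wait_event(kernfs_root(kn)->deactivate_waitq,
			   atomic_read(&kn->active) == KN_DEACTIVATED_BIAS);
	}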
v2: Draining separated into a separate loop (used to be in the same
loop as unlink) and done from __kernfs_deactivate(). This is to
allow exposing deactivation as a separate interface later.
Root node removal was broken in v1 patch. Fixed.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
KERNFS_REMOVED is used to mark half-initialized and dying nodes so
that they don't show up in lookups and to deny adding new nodes under
them or renaming them; however, its role overlaps those of deactivation
and removal from the rbtree.
It's necessary to deny addition of new children while removal is in
progress; however, this role considerably intersects with deactivation
- KERNFS_REMOVED prevents new children while deactivation prevents new
file operations. There's no reason to keep them separate; doing so only
makes things more complex than necessary.
KERNFS_REMOVED is also used to decide whether a node is still visible
to vfs layer, which is rather redundant as equivalent determination
can be made by testing whether the node is on its parent's children
rbtree or not.
This patch removes KERNFS_REMOVED.
* Instead of KERNFS_REMOVED, each node now starts its life
deactivated. This means that we now use both atomic_add() and
atomic_sub() on KN_DEACTIVATED_BIAS, which is INT_MIN. The compiler
generates an overflow warning when negating INT_MIN as the negation
can't be represented as a positive number. Nothing is actually
broken, but let's bump the BIAS by one to avoid the warnings for archs
which negate the subtrahend (see the sketch after this list).
* KERNFS_REMOVED tests in add and rename paths are replaced with
kernfs_get/put_active() of the target nodes. Due to the way the add
path is structured now, active ref handling is done in the callers
of kernfs_add_one(). This will be consolidated up later.
* kernfs_remove_one() is updated to deactivate instead of setting
KERNFS_REMOVED. This removes deactivation from kernfs_deactivate(),
which is now renamed to kernfs_drain().
* kernfs_dop_revalidate() now tests RB_EMPTY_NODE(&kn->rb) instead of
KERNFS_REMOVED and KERNFS_REMOVED test in kernfs_dir_pos() is
dropped. A node which is removed from the children rbtree is not
included in the iteration in the first place. This means that a
node may be visible through vfs a bit longer - it's now also visible
after deactivation until the actual removal. This slightly enlarged
visibility window doesn't make any difference to userland.
* Sanity check on KERNFS_REMOVED in kernfs_put() is replaced with
checks on the active ref.
* Some comment style updates in the affected area.
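As a sketch of the biasing described in the first bullet (the exact value is
an implementation detail):

	#define KN_DEACTIVATED_BIAS	(INT_MIN + 1)

	atomic_set(&kn->active, KN_DEACTIVATED_BIAS);	/* node starts deactivated */
	atomic_sub(KN_DEACTIVATED_BIAS, &kn->active);	/* activation lifts the bias */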
v2: Reordered before removal path restructuring. kernfs_active()
dropped and kernfs_get/put_active() used instead. RB_EMPTY_NODE()
used in the lookup paths.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
There currently are two mechanisms gating active ref lockdep
annotations - KERNFS_LOCKDEP flag and KERNFS_ACTIVE_REF type mask.
The former disables lockdep annotations in kernfs_get/put_active()
while the latter disables all of kernfs_deactivate().
While KERNFS_ACTIVE_REF also behaves as an optimization to skip the
deactivation step for non-file nodes, the benefit is marginal and it
needlessly diverges code paths. Let's drop KERNFS_ACTIVE_REF and use
KERNFS_LOCKDEP in kernfs_deactivate() too.
While at it, add a test helper, kernfs_lockdep(), for the KERNFS_LOCKDEP
flag so that the tests are more convenient and the related code can be
compiled out when lockdep is not enabled.
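A sketch of such a helper, as implied by the description (compiled out when
lockdep is not enabled):

	static bool kernfs_lockdep(struct kernfs_node *kn)
	{
	#ifdef CONFIG_DEBUG_LOCK_ALLOC
		return kn->flags & KERNFS_LOCKDEP;
	#else
		return false;
	#endif
	}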
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
kernfs_node->u.completion is used to notify deactivation completion
from kernfs_put_active() to kernfs_deactivate(). We now allow
multiple racing removals of the same node and the current removal
scheme is no longer correct - kernfs_remove() invocation may return
before the node is properly deactivated if it races against another
removal. The removal path will be restructured to address the issue.
To help such restructure which requires supporting multiple waiters,
this patch replaces kernfs_node->u.completion with
kernfs_root->deactivate_waitq. This makes deactivation event
notifications share a per-root waitqueue_head; however, the wait path
is quite cold and this will also allow shaving one pointer off
kernfs_node.
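A rough sketch of the change (field placement and wake-up point per the
description above; the exact code may differ):

	struct kernfs_root {
		...
		wait_queue_head_t	deactivate_waitq;	/* shared by all nodes */
	};

	/* kernfs_put_active(): the final put wakes all waiting deactivators */
	if (atomic_dec_return(&kn->active) == KN_DEACTIVATED_BIAS)
		wake_up_all(&kernfs_root(kn)->deactivate_waitq);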
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
* pci/resource:
PCI: Allocate 64-bit BARs above 4G when possible
PCI: Enforce bus address limits in resource allocation
PCI: Split out bridge window override of minimum allocation address
agp/ati: Use PCI_COMMAND instead of hard-coded 4
agp/intel: Use CPU physical address, not bus address, for ioremap()
agp/intel: Use pci_bus_address() to get GTTADR bus address
agp/intel: Use pci_bus_address() to get MMADR bus address
agp/intel: Support 64-bit GMADR
agp/intel: Rename gtt_bus_addr to gtt_phys_addr
drm/i915: Rename gtt_bus_addr to gtt_phys_addr
agp: Use pci_resource_start() to get CPU physical address for BAR
agp: Support 64-bit APBASE
PCI: Add pci_bus_address() to get bus address of a BAR
PCI: Convert pcibios_resource_to_bus() to take a pci_bus, not a pci_dev
PCI: Change pci_bus_region addresses to dma_addr_t
|
|
My philosophy is unused code is dead code. And dead code is subject to bit
rot and is a likely source of bugs. Use it or lose it.
This reverts parts of c320b976d783 ("PCI: Add implementation for PRI
capability"), removing these interfaces:
pci_pri_enabled()
pci_pri_stopped()
pci_pri_status()
[bhelgaas: split to separate patch]
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Joerg Roedel <joro@8bytes.org>
|
|
Add SMS4 key length definition to ieee80211_key_len enum.
It's used by WAPI.
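A sketch of the addition (32 octets covers the 16-byte SMS4 cipher key plus
the 16-byte MIC key used by WAPI; name and value as assumed here):

	enum ieee80211_key_len {
		...
		WLAN_KEY_LEN_SMS4 = 32,
	};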
Signed-off-by: Avinash Patil <patila@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Currently, the tx queue is selected implicitly in ndo_dfwd_start_xmit(). This
causes several issues:
- NETIF_F_LLTX was removed for macvlan, so txq locking is done on the macvlan
device instead of the lower device, which misses the necessary txq
synchronization for the lower device such as txq stopping or freezing
required by the dev watchdog or control path.
- dev_hard_start_xmit() was called with NULL txq which bypasses the net device
watchdog.
- dev_hard_start_xmit() does not check txq everywhere, which will lead to a
crash when tso is disabled for the lower device.
Fix this by explicitly introducing a new param for .ndo_select_queue() for just
selecting queues in the case of l2 forwarding offload. netdev_pick_tx() was also
extended to accept this parameter and dev_queue_xmit_accel() was used to do l2
forwarding transmission.
With these fixes, NETIF_F_LLTX can be preserved for macvlan and there's no need
to check txq against NULL in dev_hard_start_xmit(). Also there's no need to keep
a dedicated ndo_dfwd_start_xmit() and we can just reuse the code of
dev_queue_xmit() to do the transmission.
In the future, this will also be required for macvtap l2 forwarding support
since it provides a necessary synchronization method.
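A sketch of the extended interfaces described above (exact signatures
assumed for illustration):

	u16 (*ndo_select_queue)(struct net_device *dev, struct sk_buff *skb,
				void *accel_priv);

	int dev_queue_xmit_accel(struct sk_buff *skb, void *accel_priv);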
Cc: John Fastabend <john.r.fastabend@intel.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: e1000-devel@lists.sourceforge.net
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next into for-davem
Conflicts:
net/ieee802154/6lowpan.c
|
|
Add the registers necessary to enable/disable the headphone short
circuit protection.
Signed-off-by: Charles Keepax <ckeepax@opensource.wolfsonmicro.com>
Acked-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
Linux 3.13-rc3
|
|
Seventh bit of 8th byte of extended capabilities specifies wide
bandwidth support for TDLS links. Add this definition to ieee80211.
Signed-off-by: Avinash Patil <patila@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The event trigger code that checks for callback triggers before and
after recording of an event has lots of flags checks. This code is
duplicated throughout the ftrace events, kprobes and system calls.
They all do the exact same checks against the event flags.
Added helper functions ftrace_trigger_soft_disabled(),
event_trigger_unlock_commit() and event_trigger_unlock_commit_regs()
that consolidate the code, and these are used instead.
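A sketch of how a callback would use the helpers (argument lists assumed for
illustration):

	if (ftrace_trigger_soft_disabled(ftrace_file))
		return;
	/* ... reserve the ring buffer event and fill in the entry ... */
	event_trigger_unlock_commit(ftrace_file, buffer, event, entry,
				    irq_flags, pc);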
Link: http://lkml.kernel.org/r/20140106222703.5e7dbba2@gandalf.local.home
Acked-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Tested-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Since a relocatable kernel can be loaded at any place, there is no
relation between the kernel start addr and memstart_addr. So we can't
calculate memstart_addr from the kernel start addr. We also can't
postpone the relocation until after we get the real memstart_addr from
the device tree because that would be too late. So introduce
a new function we can use to get the first memblock address and size
in a very early stage (before machine_init).
Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
|
|
The Nomadik I2C is now configured from the device tree on all platforms
using this controller. Delete the platform data header and move the
definitions into the driver so it is all contained in one single file.
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
|
|
The Nomadik I2C controller needs to have the slave set-up time
configured based on the clock used to drive the I2C bus block.
Currently this is done with static assignments assuming that the
block is clocked at 48MHz, which is pretty likely to be bug-prone.
Calculate the SLSU from the equation given in the datasheet
instead.
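A sketch of the intended calculation (variable names and the per-mode setup
time are assumptions, not the exact driver code):

	/* SLSU (in i2c_clk cycles) = ceil(Tsu(DAT) / i2c_clk period) + 1 */
	ns = DIV_ROUND_UP_ULL(1000000000ULL, clk_get_rate(dev->clk));
	slsu = DIV_ROUND_UP(setup_time_ns, ns) + 1;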
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
|
|
|
|
|
|
These sentences are not proper English due to the typos. I had initially caught
one of them while trying to understand the regmap feature, and then I just ran
the vim spell checker, and went through all the items that would need to be
fixed for this header file.
Signed-off-by: Laszlo Papp <lpapp@kde.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
'ppc/pamu', 'iommu/fixes' and 'arm/msm' into next
|
|
The data structure drhd->iommu is shared between the DMA remapping driver
and the interrupt remapping driver, so the DMA remapping driver shouldn't
release drhd->iommu when it fails to initialize IOMMU devices. Otherwise it
may cause invalid memory accesses in the interrupt remapping driver.
Sample stack dump:
[ 13.315090] BUG: unable to handle kernel paging request at ffffc9000605a088
[ 13.323221] IP: [<ffffffff81461bac>] qi_submit_sync+0x15c/0x400
[ 13.330107] PGD 82f81e067 PUD c2f81e067 PMD 82e846067 PTE 0
[ 13.336818] Oops: 0002 [#1] SMP
[ 13.340757] Modules linked in:
[ 13.344422] CPU: 0 PID: 4 Comm: kworker/0:0 Not tainted 3.13.0-rc1-gerry+ #7
[ 13.352474] Hardware name: Intel Corporation LH Pass ........../SVRBD-ROW_T, BIOS SE5C600.86B.99.99.x059.091020121352 09/10/2012
[ 13.365659] Workqueue: events work_for_cpu_fn
[ 13.370774] task: ffff88042ddf00d0 ti: ffff88042ddee000 task.ti: ffff88042ddee000
[ 13.379389] RIP: 0010:[<ffffffff81461bac>] [<ffffffff81461bac>] qi_submit_sync+0x15c/0x400
[ 13.389055] RSP: 0000:ffff88042ddef940 EFLAGS: 00010002
[ 13.395151] RAX: 00000000000005e0 RBX: 0000000000000082 RCX: 0000000200000025
[ 13.403308] RDX: ffffc9000605a000 RSI: 0000000000000010 RDI: ffff88042ddb8610
[ 13.411446] RBP: ffff88042ddef9a0 R08: 00000000000005d0 R09: 0000000000000001
[ 13.419599] R10: 0000000000000000 R11: 000000000000005d R12: 000000000000005c
[ 13.427742] R13: ffff88102d84d300 R14: 0000000000000174 R15: ffff88042ddb4800
[ 13.435877] FS: 0000000000000000(0000) GS:ffff88043de00000(0000) knlGS:0000000000000000
[ 13.445168] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 13.451749] CR2: ffffc9000605a088 CR3: 0000000001a0b000 CR4: 00000000000407f0
[ 13.459895] Stack:
[ 13.462297] ffff88042ddb85d0 000000000000005d ffff88042ddef9b0 00000000000005d0
[ 13.471147] 00000000000005c0 ffff88042ddb8000 000000000000005c 0000000000000015
[ 13.480001] ffff88042ddb4800 0000000000000282 ffff88042ddefa40 ffff88042ddefac0
[ 13.488855] Call Trace:
[ 13.491771] [<ffffffff8146848d>] modify_irte+0x9d/0xd0
[ 13.497778] [<ffffffff8146886d>] intel_setup_ioapic_entry+0x10d/0x290
[ 13.505250] [<ffffffff810a92a6>] ? trace_hardirqs_on_caller+0x16/0x1e0
[ 13.512824] [<ffffffff810346b0>] ? default_init_apic_ldr+0x60/0x60
[ 13.519998] [<ffffffff81468be0>] setup_ioapic_remapped_entry+0x20/0x30
[ 13.527566] [<ffffffff8103683a>] io_apic_setup_irq_pin+0x12a/0x2c0
[ 13.534742] [<ffffffff8136673b>] ? acpi_pci_irq_find_prt_entry+0x2b9/0x2d8
[ 13.544102] [<ffffffff81037fd5>] io_apic_setup_irq_pin_once+0x85/0xa0
[ 13.551568] [<ffffffff8103816f>] ? mp_find_ioapic_pin+0x8f/0xf0
[ 13.558434] [<ffffffff81038044>] io_apic_set_pci_routing+0x34/0x70
[ 13.565621] [<ffffffff8102f4cf>] mp_register_gsi+0xaf/0x1c0
[ 13.572111] [<ffffffff8102f5ee>] acpi_register_gsi_ioapic+0xe/0x10
[ 13.579286] [<ffffffff8102f33f>] acpi_register_gsi+0xf/0x20
[ 13.585779] [<ffffffff81366b86>] acpi_pci_irq_enable+0x171/0x1e3
[ 13.592764] [<ffffffff8146d771>] pcibios_enable_device+0x31/0x40
[ 13.599744] [<ffffffff81320e9b>] do_pci_enable_device+0x3b/0x60
[ 13.606633] [<ffffffff81322248>] pci_enable_device_flags+0xc8/0x120
[ 13.613887] [<ffffffff813222f3>] pci_enable_device+0x13/0x20
[ 13.620484] [<ffffffff8132fa7e>] pcie_port_device_register+0x1e/0x510
[ 13.627947] [<ffffffff810a92a6>] ? trace_hardirqs_on_caller+0x16/0x1e0
[ 13.635510] [<ffffffff810a947d>] ? trace_hardirqs_on+0xd/0x10
[ 13.642189] [<ffffffff813302b8>] pcie_portdrv_probe+0x58/0xc0
[ 13.648877] [<ffffffff81323ba5>] local_pci_probe+0x45/0xa0
[ 13.655266] [<ffffffff8106bc44>] work_for_cpu_fn+0x14/0x20
[ 13.661656] [<ffffffff8106fa79>] process_one_work+0x369/0x710
[ 13.668334] [<ffffffff8106fa02>] ? process_one_work+0x2f2/0x710
[ 13.675215] [<ffffffff81071d56>] ? worker_thread+0x46/0x690
[ 13.681714] [<ffffffff81072194>] worker_thread+0x484/0x690
[ 13.688109] [<ffffffff81071d10>] ? cancel_delayed_work_sync+0x20/0x20
[ 13.695576] [<ffffffff81079c60>] kthread+0xf0/0x110
[ 13.701300] [<ffffffff8108e7bf>] ? local_clock+0x3f/0x50
[ 13.707492] [<ffffffff81079b70>] ? kthread_create_on_node+0x250/0x250
[ 13.714959] [<ffffffff81574d2c>] ret_from_fork+0x7c/0xb0
[ 13.721152] [<ffffffff81079b70>] ? kthread_create_on_node+0x250/0x250
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
|
|
Simplify vt-d related code with existing macros and introduce a new
macro, for_each_active_drhd_unit(), to enumerate all active DRHD units.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
|
|
Functions alloc_iommu() and parse_ioapics_under_ir()
are only used internally, so mark them as static.
[Joerg: Made detect_intel_iommu() non-static again for IA64]
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
|
|
Function dmar_parse_dev_scope() should release the PCI device reference
count gained in function dmar_parse_one_dev_scope() on error recovery,
otherwise it will cause PCI device object leakage.
This patch also introduces dmar_free_dev_scope(), which will be used
to support DMAR device hotplug.
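A sketch of the new helper's intent (signature and internals assumed):

	void dmar_free_dev_scope(struct pci_dev ***devices, int *cnt)
	{
		while (*cnt > 0)
			pci_dev_put((*devices)[--(*cnt)]);
		kfree(*devices);
		*devices = NULL;
	}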
Reviewed-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
|
|
* qcom/drivers:
tty: serial: Limit msm_serial_hs driver to platforms that use it
mmc: msm_sdcc: Limit driver to platforms that use it
usb: phy: msm: Move mach dependent code to platform data
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
This patch fixes a compilation error when the driver is compiled
in multi-platform builds.
drivers/built-in.o: In function `msm_otg_link_clk_reset':
./drivers/usb/phy/phy-msm-usb.c:314: undefined reference to `clk_reset'
./drivers/usb/phy/phy-msm-usb.c:318: undefined reference to `clk_reset'
Use platform-data-supplied reset handlers and adjust the error
messages reported when the reset sequence fails.
This is an intermediate step before adding support for reset
framework and newer targets.
Signed-off-by: Ivan T. Ivanov <iivanov@mm-sol.com>
Acked-by: David Brown <davidb@codeaurora.org>
Cc: Daniel Walker <dwalker@fifo99.com>
Acked-by: Felipe Balbi <balbi@ti.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Olof Johansson <olof@lixom.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/chanwoo/extcon into char-misc-next
Chanwoo writes:
Update extcon for v3.14
This patchset adds the new extcon-max14577.c driver, which detects various
external connectors, and fixes minor issues in extcon provider drivers
(extcon-arizona/palmas/gpio.c). Also, update the documentation of the
previous 'switch' porting guide and the extcon git repository URL.
Detailed description for patchset:
- New driver of extcon-max14577.c
: Add the extcon-max14577.c driver to support the Maxim MUIC (Micro USB
Interface Controller), which detects USB/TA/JIG/AUDIO-DOCK and additional
accessories according to the resistance of the connected external connector.
- extcon-arizona.c driver
: Clean up code to use defined macros instead of hex values
: Fix minor issue to reset back to our starting state
: Fix race with microphone detection and removal
- extcon-palmas.c driver
: Fix minor issue and rename the devicetree compatible string
- extcon-gpio.c driver
: Fix ordering of gpio pin initialization in probe()
: Send uevent after wakeup from the suspend state because some SoCs
have no wakeup interrupt in the suspend state.
- Documentation (Documentation/extcon/porting-android-switch-class)
: Fix switch class porting guide
- Update extcon git repository url
|
|
When the system is in the suspend state, some SoCs can't get the gpio
interrupt. After the system resumes, we need to send an extcon uevent to
userspace.
Signed-off-by: Rongjun Ying <rongjun.ying@csr.com>
Reviewed-by: Barry Song <Baohua.Song@csr.com>
Acked-by: Myungjoo Ham <myungjoo.ham@samsung.com>
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
|
|
Now that we've got code for raid5/6 stripe awareness, bcache just needs
to know about the stripes and when writing partial stripes is expensive
- we probably don't want to enable this optimization for raid1 or 10,
even though they have stripes. So add a flag to queue_limits.
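A sketch of the new flag (field name assumed from this description):

	struct queue_limits {
		...
		unsigned char	raid_partial_stripes_expensive;
	};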
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
|
|
The function kvm_io_bus_read_cookie is defined but never used
in current in-tree code.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
Running 'make namespacecheck' found lots of functions that
should be declared static, since they are only used in one file.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
|
|
Use macro JUMP_LABEL_TRUE_BRANCH instead of hard-coding for better
readability.
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Jason Baron <jbaron@akamai.com>
Signed-off-by: Jiang Liu <liuj97@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Now that all users of acpi_gpio.h have been moved to use either the GPIO
descriptor interface or to the internal gpiolib.h we can get rid of
acpi_gpio.h entirely.
Once this is done the only interface to get GPIOs to drivers enumerated
from ACPI namespace is the descriptor based interface.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Instead of asking each driver to register to ACPI events we can just call
acpi_gpiochip_register_interrupts() for each chip that has an ACPI handle.
The function checks chip->to_irq and if it is set to NULL (a GPIO driver
that doesn't do interrupts) the function does nothing.
We also add a new header, drivers/gpio/gpiolib.h, that is used for
functions internal to gpiolib and add ACPI GPIO chip registering functions
to that header.
Once that is done we can remove the call to acpi_gpiochip_register_interrupts()
from its only user, pinctrl-baytrail.c.
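A sketch of the gating described above (using the name from this log; body
elided):

	void acpi_gpiochip_register_interrupts(struct gpio_chip *chip)
	{
		/* a GPIO driver that doesn't do interrupts: nothing to do */
		if (!chip->to_irq)
			return;

		/* ... walk the ACPI event resources and wire them up ... */
	}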
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
fiq.h contains only a function declaration and is not used by anyone
else. Move the declaration to the driver header file and remove the
unnecessary platform dependency from the driver.
Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
The register states are now tracked by the regmap implementation in the core,
which makes the reset-registers functionality redundant since we now know the
state of the registers at all times.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
No need to keep the check defaults functionality anymore.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
If the regcache is enabled on the regmap, module drivers might need to access
HW register(s) in cache bypass mode in certain cases.
An example of this is the audio block's ANAMICL register. In normal
operation the content can be cached, but during initialization one bit from
the register needs to be monitored. With twl_set_regcache_bypass() the
client driver can switch regcache bypass on and off when it is needed so
we can utilize the regcache for more registers.
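A sketch of the intended usage (module id and exact signature are
assumptions):

	twl_set_regcache_bypass(TWL4030_MODULE_AUDIO_VOICE, true);
	/* ... monitor the ANAMICL bit directly from the hardware ... */
	twl_set_regcache_bypass(TWL4030_MODULE_AUDIO_VOICE, false);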
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Acked-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
* pci/msi:
PCI/MSI: Add pci_enable_msi_range() and pci_enable_msix_range()
PCI/MSI: Add pci_msix_vec_count()
PCI/MSI: Remove pci_enable_msi_block_auto()
PCI/MSI: Add pci_msi_vec_count()
|
|
We should const-ify comparisons on skb_queue_* inline helper
functions as their parameters are const as well, so lets not
drop that.
Suggested-by: Brad Spengler <spender@grsecurity.net>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When allocating space for 32-bit BARs, we previously limited RESOURCE
addresses so they would fit in 32 bits. However, the BUS address need not
be the same as the resource address, and it's the bus address that must fit
in the 32-bit BAR.
This patch adds:
- pci_clip_resource_to_region(), which clips a resource so it contains
only the range that maps to the specified bus address region, e.g., to
clip a resource to 32-bit bus addresses, and
- pci_bus_alloc_from_region(), which allocates space for a resource from
the specified bus address region,
and changes pci_bus_alloc_resource() to allocate space for 64-bit BARs from
the entire bus address region, and space for 32-bit BARs from only the bus
address region below 4GB.
If we had this window:
pci_root HWP0002:0a: host bridge window [mem 0xf0180000000-0xf01fedfffff] (bus address [0x80000000-0xfedfffff])
we previously could not put a 32-bit BAR there, because the CPU addresses
don't fit in 32 bits. This patch fixes this, so we can use this space for
32-bit BARs.
It's also possible (though unlikely) to have resources with 32-bit CPU
addresses but bus addresses above 4GB. In this case the previous code
would allocate space that a 32-bit BAR could not map.
Remove PCIBIOS_MAX_MEM_32, which is no longer used.
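A rough sketch of the resulting allocation order (helper and region names as
described above; exact arguments assumed):

	if (res->flags & IORESOURCE_MEM_64) {
		rc = pci_bus_alloc_from_region(bus, res, size, align, min,
					       type_mask, alignf, alignf_data,
					       &pci_64_bit);
		if (rc == 0)
			return 0;
	}
	/* 32-bit BARs: allocate only from bus addresses below 4GB */
	return pci_bus_alloc_from_region(bus, res, size, align, min, type_mask,
					 alignf, alignf_data, &pci_32_bit);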
[bhelgaas: reworked starting from http://lkml.kernel.org/r/1386658484-15774-3-git-send-email-yinghai@kernel.org]
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
|
|
This patch builds on top of commit 299603e8370a93dd5d8e8d800f0dff1ce2c53d36
("net-gro: Prepare GRO stack for the upcoming tunneling support") to add
support for the standard GRE (RFC1701/RFC2784/RFC2890) to the GRO
stack. It also serves as an example for supporting other encapsulation
protocols in the GRO stack in the future.
The patch supports version 0 and all the flags (key, csum, seq#) but
will flush any pkt with the S (seq#) flag. This is because the S flag
is not supported by GSO, and a GRO pkt may end up in the forwarding path,
thus requiring GSO support to break it up correctly.
Currently the "packet_offload" structure only contains L3 (ETH_P_IP/
ETH_P_IPV6) GRO offload support so the encapped pkts are limited to
IP pkts (i.e., w/o L2 hdr). But support for other protocol types can
easily be added, as can support for GRE variations like NVGRE.
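A sketch of how the GRE hooks plug into the offload stack (callback names
assumed; only the registration shape is shown):

	static const struct net_offload gre_offload = {
		.callbacks = {
			.gso_segment	= gre_gso_segment,
			.gro_receive	= gre_gro_receive,
			.gro_complete	= gre_gro_complete,
		},
	};

	static int __init gre_offload_init(void)
	{
		return inet_add_offload(&gre_offload, IPPROTO_GRE);
	}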
The patch also supports csum offload. Specifically, if the csum flag is on
and the h/w is capable of checksumming the payload (CHECKSUM_COMPLETE),
the code will take advantage of the csum computed by the h/w when
validating the GRE csum.
Note that commit 60769a5dcd8755715c7143b4571d5c44f01796f1 "ipv4: gre:
add GRO capability" already introduces GRO capability to IPv4 GRE
tunnels, using the gro_cells infrastructure. But GRO is done after
GRE hdr has been removed (i.e., decapped). The following patch applies
GRO when pkts first come in (before hitting the GRE tunnel code). There
is some performance advantage for applying GRO as early as possible.
Also this approach is transparent to other subsystems like Open vSwitch
where GRE decap is handled outside of the IP stack hence making it
harder for the gro_cells stuff to apply. On the other hand, some NICs
are still not capable of hashing on the inner hdr of a GRE pkt (RSS).
In that case the GRO processing of pkts from the same remote host will
all happen on the same CPU and the performance may be suboptimal.
I'm including some rough preliminary performance numbers below. Note
that the performance will be highly dependent on traffic load and mix, as
usual. Moreover it also depends on NIC offload features, hence the
following is by no means a comprehensive study. Local testing and tuning
will be needed to decide the best setting.
All tests spawned 50 copies of netperf TCP_STREAM and ran for 30 secs.
(super_netperf 50 -H 192.168.1.18 -l 30)
An IP GRE tunnel with only the key flag on (e.g., ip tunnel add gre1
mode gre local 10.246.17.18 remote 10.246.17.17 ttl 255 key 123)
is configured.
The GRO support for pkts AFTER decap are controlled through the device
feature of the GRE device (e.g., ethtool -K gre1 gro on/off).
1.1 ethtool -K gre1 gro off; ethtool -K eth0 gro off
thruput: 9.16Gbps
CPU utilization: 19%
1.2 ethtool -K gre1 gro on; ethtool -K eth0 gro off
thruput: 5.9Gbps
CPU utilization: 15%
1.3 ethtool -K gre1 gro off; ethtool -K eth0 gro on
thruput: 9.26Gbps
CPU utilization: 12-13%
1.4 ethtool -K gre1 gro on; ethtool -K eth0 gro on
thruput: 9.26Gbps
CPU utilization: 10%
The following tests were performed on a different NIC that is capable of
csum offload. I.e., the h/w is capable of computing IP payload csum
(CHECKSUM_COMPLETE).
2.1 ethtool -K gre1 gro on (hence will use gro_cells)
2.1.1 ethtool -K eth0 gro off; csum offload disabled
thruput: 8.53Gbps
CPU utilization: 9%
2.1.2 ethtool -K eth0 gro off; csum offload enabled
thruput: 8.97Gbps
CPU utilization: 7-8%
2.1.3 ethtool -K eth0 gro on; csum offload disabled
thruput: 8.83Gbps
CPU utilization: 5-6%
2.1.4 ethtool -K eth0 gro on; csum offload enabled
thruput: 8.98Gbps
CPU utilization: 5%
2.2 ethtool -K gre1 gro off
2.2.1 ethtool -K eth0 gro off; csum offload disabled
thruput: 5.93Gbps
CPU utilization: 9%
2.2.2 ethtool -K eth0 gro off; csum offload enabled
thruput: 5.62Gbps
CPU utilization: 8%
2.2.3 ethtool -K eth0 gro on; csum offload disabled
thruput: 7.69Gbps
CPU utilization: 8%
2.2.4 ethtool -K eth0 gro on; csum offload enabled
thruput: 8.96Gbps
CPU utilization: 5-6%
Signed-off-by: H.K. Jerry Chu <hkchu@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Currently the Intel interrupt remapping driver uses the "present" flag bit
in remapping entry to track whether an entry is allocated or not.
It works as follows:
1) allocate a remapping entry and set its "present" flag bit to 1
2) compose other fields for the entry
3) update the remapping entry with the composed value
The remapping hardware may access the entry between step 1 and step 3,
which then observes an entry with the "present" flag set but random
values in all other fields.
This patch introduces a dedicated bitmap to track remapping entry
allocation status instead of sharing the "present" flag with hardware,
thus eliminating the race window. It also simplifies the implementation.
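Conceptually (names assumed), allocation now claims a slot in a driver-side
bitmap rather than publishing a half-built entry to hardware:

	index = bitmap_find_free_region(table->bitmap,
					INTR_REMAP_TABLE_ENTRIES, 0);
	/* the hardware-visible entry is written only once fully composed */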
Tested-and-reviewed-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
|
|
Conflicts:
drivers/dma/mmp_pdma.c
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
|
|
|
|
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
domain_has_cap is a misnomer because the function name should be
the same for CONFIG_IOMMU_API and !CONFIG_IOMMU_API.
Signed-off-by: Upinder Malhi <umalhi@cisco.com>
Signed-off-by: Joerg Roedel <joro@8bytes.org>
|
|
The ADC driver always programs all possible ADC values and discards
them except for the value IIO asked for. On the am335x-evm the driver
programs four values and it takes 500us to gather them. Reducing the number
of conversations down to the (required) one also reduces the busy loop down
to 125us.
This leads to another error, namely the FIFOCOUNT register is sometimes
(like one out of 10 attempts) not updated in time leading to EBUSY.
The next read has the FIFOCOUNT register updated.
Checking the ADCSTAT register for being idle isn't a good choice either.
The problem is that if the TSC is used at the same time, the HW completes the
conversion for the ADC *and*, before the driver notices it, the HW begins to
perform a TSC conversion, so the driver never sees the HW idle. The
next time we would have two values in the FIFO but since the driver reads
everything we always see the current one.
So instead of polling for the IDLE bit in ADCStatus register, we should
check the FIFOCOUNT register. It should be one instead of zero because we
request one value.
This change in turn leads to another error. Sometimes if TSC & ADC are
used together the TSC starts generating interrupts even if nobody
actually touched the touchscreen. The interrupts seem valid because TSC's
FIFO is filled with values for each channel of the TSC. This condition stops
after a few ADC reads but will occur again. Not good.
On top of this (even without the changes I just mentioned) there is an ADC
& TSC lockup condition which was reported to me by Jeff Lance including the
following test case:
A busy loop of "cat /sys/bus/iio/devices/iio\:device0/in_voltage4_raw"
and a mug on the touch screen. With this setup, the hardware will lock up
after somewhere between 20 minutes and a couple of hours.
During that lockup, the ADCSTAT register says 0x30 (or 0x70) which means
STEP_ID = IDLE and FSM_BUSY = yes. That means the hardware says that it is
idle and busy at the same time which is an invalid condition.
For all these reasons I decided to rework this TSC/ADC part and add a
handshake / synchronization here:
First the ADC signals that it needs the HW and writes a 0 mask into the
SE register. The HW (if active) will complete the current conversion
and become idle. The TSC driver will gather the values from the FIFO
(woken up by an interrupt) and won't "enable" another conversion.
Instead it will wake up the ADC driver which is already waiting. The ADC
driver will start "its" conversion and once it is done, it will
enable the TSC steps so the TSC will work again.
After this rework I haven't observed the lockup so far. Plus the busy
loop has been reduced from 500us to 125us.
The continuous-read mode remains unchanged.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
The purpose of reg_se_cache has been defeated. It should avoid the
read-back of the register to avoid the latency and the fact that the
bits are reset to 0 after the individual conversion took place.
The reason this has to work like this is that a read-back of
the register removes the bits of the ADC so they do not start another
conversion after the register is re-written from the TSC side for the
update.
To avoid the unneeded read-back I introduce a "set once" variant which
does not update the cache mask. After the conversion completes, the
bit is removed from the SE register anyway and we don't plan a new
conversion "any time soon". The current set function is renamed to
set_cache to distinguish the two operations.
This is a small preparation for a larger sync-rework.
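A sketch of the intended distinction (function names here are assumptions
for illustration, not the exact API):

	/* set_cache: remember the steps so they get re-armed after TSC updates */
	am335x_tsc_se_set_cache(tsadc, steps);

	/* set_once: arm the steps for a single conversion, cache untouched */
	am335x_tsc_se_set_once(tsadc, steps);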
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Acked-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|