Age | Commit message | Author | Files | Lines
2014-09-17 | Drivers: scsi: storvsc: Implement an eh_timed_out handler | K. Y. Srinivasan | 1 | -0/+12
commit 56b26e69c8283121febedd12b3cc193384af46b9 upstream. On Azure, we have seen instances of unbounded I/O latencies. To deal with this issue, implement a handler that can reset the timeout. Note that the host guarantees that it will respond to each command that has been issued. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: Hannes Reinecke <hare@suse.de> [hch: added a better comment explaining the issue] Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
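A minimal C sketch of the shape such a handler takes (assuming a kernel whose scsi_host_template exposes an eh_timed_out hook; the example_* names are illustrative, not the actual storvsc code):

    #include <linux/blkdev.h>
    #include <scsi/scsi_cmnd.h>
    #include <scsi/scsi_host.h>

    /* The host promises to eventually complete every command, so instead of
     * escalating to error recovery we simply keep extending the timer. */
    static enum blk_eh_timer_return example_eh_timed_out(struct scsi_cmnd *scmnd)
    {
            return BLK_EH_RESET_TIMER;
    }

    static struct scsi_host_template example_template = {
            .name         = "example",
            .eh_timed_out = example_eh_timed_out,
            /* ... remaining fields ... */
    };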
2014-09-17 | powerpc/thp: Use ACCESS_ONCE when loading pmdp | Aneesh Kumar K.V | 1 | -1/+3
commit 7e467245bf5226db34c4b12d3cbacfa2f7a15a8b upstream. We would get wrong results if the compiler recomputed old_pmd. Avoid that by using ACCESS_ONCE. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
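A minimal sketch of the pattern (reduced to the bare load; the surrounding page-table code is assumed):

    /* Snapshot the pmd exactly once; without ACCESS_ONCE the compiler is free
     * to re-read *pmdp later and act on a value the earlier checks never saw. */
    static int example_pmd_is_trans_huge(pmd_t *pmdp)
    {
            pmd_t old_pmd = ACCESS_ONCE(*pmdp);  /* read once, reuse below */

            return pmd_trans_huge(old_pmd);      /* never re-read *pmdp */
    }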
2014-09-17 | powerpc/thp: Invalidate with vpn in loop | Aneesh Kumar K.V | 1 | -16/+7
commit 969b7b208f7408712a3526856e4ae60ad13f6928 upstream. As per the ISA, for 4k base page size we compare bits 14..65 of the VA specified with the entry_VA in the tlb. That implies we need to make sure we do a tlbie with all the possible 4k VAs we used to access the 16MB hugepage. With 64k base page size we compare bits 14..57 of the VA. Hence we cannot ignore the lower 24 bits of the VA while doing the tlbie. We also cannot tlb invalidate a 16MB entry with just one tlbie instruction because we don't track which VA was used to instantiate the tlb entry. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | powerpc/thp: Handle combo pages in invalidate | Aneesh Kumar K.V | 3 | -5/+13
commit fc0479557572375100ef16c71170b29a98e0d69a upstream. If we changed base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash table entries. We do a lazy hash page table flush for all mapped pages in the demoted segment. This happens when we handle hash page fault for these pages. We use _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies that we could possibly have older 64K hash pte entries in the hash page table and we need to invalidate those entries. Use _PAGE_COMBO to determine the page size with which we should invalidate the hash table entries on unmap. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | powerpc/thp: Invalidate old 64K based hash page mapping before insert of 4k pte | Aneesh Kumar K.V | 1 | -9/+70
commit 629149fae478f0ac6bf705a535708b192e9c6b59 upstream. If we changed base page size of the segment, either via sub_page_protect or via remap_4k_pfn, we do a demote_segment which doesn't flush the hash table entries. We do a lazy hash page table flush for all mapped pages in the demoted segment. This happens when we handle hash page fault for these pages. We use _PAGE_COMBO bit along with _PAGE_HASHPTE to indicate whether a pte is backed by 4K hash pte. If we find _PAGE_COMBO not set on the pte, that implies that we could possibly have older 64K hash pte entries in the hash page table and we need to invalidate those entries. Handle this correctly for 16M pages Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | powerpc/thp: Don't recompute vsid and ssize in loop on invalidate | Aneesh Kumar K.V | 4 | -43/+26
commit fa1f8ae80f8bb996594167ff4750a0b0a5a5bb5d upstream. The segment identifier and segment size will remain the same in the loop, so we can compute them outside. We also change the hugepage_invalidate interface so that we can use it in a later patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | powerpc/thp: Add write barrier after updating the valid bit | Aneesh Kumar K.V | 1 | -1/+4
commit b0aa44a3dfae3d8f45bd1264349aa87f87b7774f upstream. With hugepages, we store the hpte valid information in the pte page whose address is stored in the second half of the PMD. Use a write barrier to make sure clearing pmd busy bit and updating hpte valid info are ordered properly. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
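A hedged sketch of the ordering requirement, using generic pointers instead of the real hugepage structures:

    #include <linux/kernel.h>

    /* Publish the hpte-valid information before the busy bit is cleared, so a
     * CPU that observes "not busy" is guaranteed to also observe the new info. */
    static void example_publish(unsigned long *valid_info, unsigned long value,
                                unsigned long *busy_flag)
    {
            *valid_info = value;
            smp_wmb();          /* order the info store before the flag store */
            *busy_flag = 0;
    }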
2014-09-17 | powerpc/pseries: Avoid deadlock on removing ddw | Gavin Shan | 1 | -6/+14
commit 5efbabe09d986f25c02d19954660238fcd7f008a upstream. Function remove_ddw() could be called in of_reconfig_notifier and we potentially remove the dynamic DMA window property, which invokes of_reconfig_notifier again. Eventually, it leads to the deadlock as following backtrace shows. The patch fixes the above issue by deferring releasing the dynamic DMA window property while releasing the device node. ============================================= [ INFO: possible recursive locking detected ] 3.16.0+ #428 Tainted: G W --------------------------------------------- drmgr/2273 is trying to acquire lock: ((of_reconfig_chain).rwsem){.+.+..}, at: [<c000000000091890>] \ .__blocking_notifier_call_chain+0x40/0x78 but task is already holding lock: ((of_reconfig_chain).rwsem){.+.+..}, at: [<c000000000091890>] \ .__blocking_notifier_call_chain+0x40/0x78 other info that might help us debug this: Possible unsafe locking scenario: CPU0 ---- lock((of_reconfig_chain).rwsem); lock((of_reconfig_chain).rwsem); *** DEADLOCK *** May be due to missing lock nesting notation 2 locks held by drmgr/2273: #0: (sb_writers#4){.+.+.+}, at: [<c0000000001cbe70>] \ .vfs_write+0xb0/0x1f8 #1: ((of_reconfig_chain).rwsem){.+.+..}, at: [<c000000000091890>] \ .__blocking_notifier_call_chain+0x40/0x78 stack backtrace: CPU: 17 PID: 2273 Comm: drmgr Tainted: G W 3.16.0+ #428 Call Trace: [c0000000137e7000] [c000000000013d9c] .show_stack+0x88/0x148 (unreliable) [c0000000137e70b0] [c00000000083cd34] .dump_stack+0x7c/0x9c [c0000000137e7130] [c0000000000b8afc] .__lock_acquire+0x128c/0x1c68 [c0000000137e7280] [c0000000000b9a4c] .lock_acquire+0xe8/0x104 [c0000000137e7350] [c00000000083588c] .down_read+0x4c/0x90 [c0000000137e73e0] [c000000000091890] .__blocking_notifier_call_chain+0x40/0x78 [c0000000137e7490] [c000000000091900] .blocking_notifier_call_chain+0x38/0x48 [c0000000137e7520] [c000000000682a28] .of_reconfig_notify+0x34/0x5c [c0000000137e75b0] [c000000000682a9c] .of_property_notify+0x4c/0x54 [c0000000137e7650] [c000000000682bf0] .of_remove_property+0x30/0xd4 [c0000000137e76f0] [c000000000052a44] .remove_ddw+0x144/0x168 [c0000000137e7790] [c000000000053204] .iommu_reconfig_notifier+0x30/0xe0 [c0000000137e7820] [c00000000009137c] .notifier_call_chain+0x6c/0xb4 [c0000000137e78c0] [c0000000000918ac] .__blocking_notifier_call_chain+0x5c/0x78 [c0000000137e7970] [c000000000091900] .blocking_notifier_call_chain+0x38/0x48 [c0000000137e7a00] [c000000000682a28] .of_reconfig_notify+0x34/0x5c [c0000000137e7a90] [c000000000682e14] .of_detach_node+0x44/0x1fc [c0000000137e7b40] [c0000000000518e4] .ofdt_write+0x3ac/0x688 [c0000000137e7c20] [c000000000238430] .proc_reg_write+0xb8/0xd4 [c0000000137e7cd0] [c0000000001cbeac] .vfs_write+0xec/0x1f8 [c0000000137e7d70] [c0000000001cc3b0] .SyS_write+0x58/0xa0 [c0000000137e7e30] [c00000000000a064] syscall_exit+0x0/0x98 Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | powerpc/pseries: Failure on removing device node | Gavin Shan | 1 | -1/+1
commit f1b3929c232784580e5d8ee324b6bc634e709575 upstream. While running command "drmgr -c phb -r -s 'PHB 528'", following backtrace jumped out because the target device node isn't marked with OF_DETACHED by of_detach_node(), which caused by error returned from memory hotplug related reconfig notifier when disabling CONFIG_MEMORY_HOTREMOVE. The patch fixes it. ERROR: Bad of_node_put() on /pci@800000020000210/ethernet@0 CPU: 14 PID: 2252 Comm: drmgr Tainted: G W 3.16.0+ #427 Call Trace: [c000000012a776a0] [c000000000013d9c] .show_stack+0x88/0x148 (unreliable) [c000000012a77750] [c00000000083cd34] .dump_stack+0x7c/0x9c [c000000012a777d0] [c0000000006807c4] .of_node_release+0x58/0xe0 [c000000012a77860] [c00000000038a7d0] .kobject_release+0x174/0x1b8 [c000000012a77900] [c00000000038a884] .kobject_put+0x70/0x78 [c000000012a77980] [c000000000681680] .of_node_put+0x28/0x34 [c000000012a77a00] [c000000000681ea8] .__of_get_next_child+0x64/0x70 [c000000012a77a90] [c000000000682138] .of_find_node_by_path+0x1b8/0x20c [c000000012a77b40] [c000000000051840] .ofdt_write+0x308/0x688 [c000000012a77c20] [c000000000238430] .proc_reg_write+0xb8/0xd4 [c000000012a77cd0] [c0000000001cbeac] .vfs_write+0xec/0x1f8 [c000000012a77d70] [c0000000001cc3b0] .SyS_write+0x58/0xa0 [c000000012a77e30] [c00000000000a064] syscall_exit+0x0/0x98 Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | powerpc/mm: Use read barrier when creating real_pte | Aneesh Kumar K.V | 1 | -5/+25
commit 85c1fafd7262e68ad821ee1808686b1392b1167d upstream. On ppc64 we support 4K hash pte with 64K page size. That requires us to track the hash pte slot information on a per-4k basis. We do that by storing the slot details in the second half of the pte page. The pte bit _PAGE_COMBO is used to indicate whether the second half needs to be looked at while building the real_pte. We need to use a read memory barrier while doing that so that the load of hidx is not reordered w.r.t. the _PAGE_COMBO check. On the store side we already do an lwsync in __hash_page_4K. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
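A sketch of the read side; load_hidx_from_second_half() is a hypothetical helper standing in for the real slot lookup:

    /* Pairs with the lwsync done on the store side in __hash_page_4K. */
    static unsigned long example_read_hidx(pte_t pte, pte_t *ptep)
    {
            unsigned long hidx = 0;

            if (pte_val(pte) & _PAGE_COMBO) {
                    smp_rmb();  /* don't hoist the hidx load above the flag check */
                    hidx = load_hidx_from_second_half(ptep);
            }
            return hidx;
    }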
2014-09-17 | powerpc/mm/numa: Fix break placement | Andrey Utkin | 1 | -1/+1
commit b00fc6ec1f24f9d7af9b8988b6a198186eb3408c upstream. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=81631 Reported-by: David Binderman <dcb314@hotmail.com> Signed-off-by: Andrey Utkin <andrey.krieger.utkin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | regulator: arizona-ldo1: remove bypass functionality | Nikesh Oswal | 1 | -2/+0
commit 5b919f3ebb533cbe400664837e24f66a0836b907 upstream. WM5110/8280 devices do not support bypass mode for LDO1 so remove the bypass callbacks registered with regulator core. Signed-off-by: Nikesh Oswal <nikesh@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | mfd: omap-usb-host: Fix improper mask use. | Michael Welling | 1 | -1/+1
commit 46de8ff8e80a6546aa3d2fdf58c6776666301a0c upstream. single-ulpi-bypass is a flag used for older OMAP3 silicon. The flag, when set, can trigger code that improperly uses the OMAP_UHH_HOSTCONFIG_UPLI_BYPASS define to clear the corresponding bit. Instead, it clears all of the other bits, disabling all of the ports in the process. Signed-off-by: Michael Welling <mwelling@emacinc.com> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | kernel/smp.c:on_each_cpu_cond(): fix warning in fallback path | Sasha Levin | 1 | -1/+1
commit 618fde872163e782183ce574c77f1123e2be8887 upstream. The rarely-executed memory-allocation-failed callback path generates a WARN_ON_ONCE() when smp_call_function_single() succeeds. Presumably it's supposed to warn on failures. Signed-off-by: Sasha Levin <sasha.levin@oracle.com> Cc: Christoph Lameter <cl@gentwo.org> Cc: Gilad Ben-Yossef <gilad@benyossef.com> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Tejun Heo <htejun@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
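Reconstructed from the description above (not a quote of the patch): smp_call_function_single() returns 0 on success, so the warning condition has to be on a non-zero return.

    /* before: fires on success, silent on failure */
    ret = smp_call_function_single(cpu, func, info, wait);
    WARN_ON_ONCE(!ret);

    /* after: warn only when the cross-call actually failed */
    ret = smp_call_function_single(cpu, func, info, wait);
    WARN_ON_ONCE(ret);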
2014-09-17 | CAPABILITIES: remove undefined caps from all processes | Eric Paris | 5 | -12/+13
commit 7d8b6c63751cfbbe5eef81a48c22978b3407a3ad upstream. This is effectively a revert of 7b9a7ec565505699f503b4fcf61500dceb36e744 plus fixing it a different way... We found, when trying to run an application from an application which had dropped privs that the kernel does security checks on undefined capability bits. This was ESPECIALLY difficult to debug as those undefined bits are hidden from /proc/$PID/status. Consider a root application which drops all capabilities from ALL 4 capability sets. We assume, since the application is going to set eff/perm/inh from an array that it will clear not only the defined caps less than CAP_LAST_CAP, but also the higher 28ish bits which are undefined future capabilities. The BSET gets cleared differently. Instead it is cleared one bit at a time. The problem here is that in security/commoncap.c::cap_task_prctl() we actually check the validity of a capability being read. So any task which attempts to 'read all things set in bset' followed by 'unset all things set in bset' will not even attempt to unset the undefined bits higher than CAP_LAST_CAP. So the 'parent' will look something like: CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: ffffffc000000000 All of this 'should' be fine. Given that these are undefined bits that aren't supposed to have anything to do with permissions. But they do... So lets now consider a task which cleared the eff/perm/inh completely and cleared all of the valid caps in the bset (but not the invalid caps it couldn't read out of the kernel). We know that this is exactly what the libcap-ng library does and what the go capabilities library does. They both leave you in that above situation if you try to clear all of you capapabilities from all 4 sets. If that root task calls execve() the child task will pick up all caps not blocked by the bset. The bset however does not block bits higher than CAP_LAST_CAP. So now the child task has bits in eff which are not in the parent. These are 'meaningless' undefined bits, but still bits which the parent doesn't have. The problem is now in cred_cap_issubset() (or any operation which does a subset test) as the child, while a subset for valid cap bits, is not a subset for invalid cap bits! So now we set durring commit creds that the child is not dumpable. Given it is 'more priv' than its parent. It also means the parent cannot ptrace the child and other stupidity. The solution here: 1) stop hiding capability bits in status This makes debugging easier! 2) stop giving any task undefined capability bits. it's simple, it you don't put those invalid bits in CAP_FULL_SET you won't get them in init and you won't get them in any other task either. This fixes the cap_issubset() tests and resulting fallout (which made the init task in a docker container untraceable among other things) 3) mask out undefined bits when sys_capset() is called as it might use ~0, ~0 to denote 'all capabilities' for backward/forward compatibility. This lets 'capsh --caps="all=eip" -- -c /bin/bash' run. 4) mask out undefined bit when we read a file capability off of disk as again likely all bits are set in the xattr for forward/backward compatibility. This lets 'setcap all+pe /bin/bash; /bin/bash' run Signed-off-by: Eric Paris <eparis@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Cc: Andrew Vagin <avagin@openvz.org> Cc: Andrew G. Morgan <morgan@kernel.org> Cc: Serge E. 
Hallyn <serge.hallyn@canonical.com> Cc: Kees Cook <keescook@chromium.org> Cc: Steve Grubb <sgrubb@redhat.com> Cc: Dan Walsh <dwalsh@redhat.com> Signed-off-by: James Morris <james.l.morris@oracle.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
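A hedged sketch of points (3) and (4), masking undefined bits out of an untrusted capability word (a flat 64-bit packing is assumed purely for illustration; the kernel's real representation is the two-word kernel_cap_t):

    #include <linux/capability.h>
    #include <linux/types.h>

    /* Anything above CAP_LAST_CAP is an undefined, future capability bit and
     * must never survive into a task's capability sets. */
    static inline u64 example_mask_undefined_caps(u64 caps)
    {
            u64 valid = (CAP_LAST_CAP >= 63) ? ~0ULL
                                             : ((1ULL << (CAP_LAST_CAP + 1)) - 1);
            return caps & valid;
    }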
2014-09-17 | tpm: Provide a generic means to override the chip returned timeouts | Jason Gunthorpe | 3 | -21/+75
commit 8e54caf407b98efa05409e1fee0e5381abd2b088 upstream. Some Atmel TPMs provide completely wrong timeouts from their TPM_CAP_PROP_TIS_TIMEOUT query. This patch detects that and returns new correct values via a DID/VID table in the TIS driver. Tested on ARM using an AT97SC3204T FW version 37.16 [PHuewe: without this fix these 'broken' Atmel TPMs won't function on older kernels] Signed-off-by: "Berg, Christopher" <Christopher.Berg@atmel.com> Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Peter Huewe <peterhuewe@gmx.de> [bwh: Backported to 3.10: - Adjust filename, context - s/chip->ops->/chip->vendor./] Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | tpm: missing tpm_chip_put in tpm_get_random() | Jarkko Sakkinen | 1 | -3/+4
commit 3e14d83ef94a5806a865b85b513b4e891923c19b upstream. Regression in 41ab999c. A call to tpm_chip_put is missing. This will cause the TPM device driver not to unload if tpm_get_random() is called. Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Peter Huewe <peterhuewe@gmx.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | firmware: Do not use WARN_ON(!spin_is_locked()) | Guenter Roeck | 1 | -4/+4
commit aee530cfecf4f3ec83b78406bac618cec35853f8 upstream. spin_is_locked() always returns false for uniprocessor configurations in several architectures, so do not use WARN_ON with it. Use lockdep_assert_held() instead to also reduce overhead in non-debug kernels. Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
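A small before/after sketch (my_lock is an illustrative spinlock, not a symbol from the patch):

    /* before: spin_is_locked() is always false on UP builds, so this WARN misfires */
    WARN_ON(!spin_is_locked(&my_lock));

    /* after: only checked when lockdep is enabled, no overhead otherwise */
    lockdep_assert_held(&my_lock);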
2014-09-17 | s390/locking: Reenable optimistic spinning | Christian Borntraeger | 1 | -0/+1
commit 36e7fdaa1a04fcf65b864232e1af56a51c7814d6 upstream. commit 4badad352a6bb202ec68afa7a574c0bb961e5ebc (locking/mutex: Disable optimistic spinning on some architectures) fenced spinning for architectures without proper cmpxchg. There is no need to disable mutex spinning on s390, though: The instructions CS, CSG and friends provide the proper guarantees. (We don't implement cmpxchg with locks). Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | spi: orion: fix incorrect handling of cell-index DT property | Thomas Petazzoni | 1 | -6/+4
commit e06871cd2c92e5c65d7ca1d32866b4ca5dd4ac30 upstream. In commit f814f9ac5a81 ("spi/orion: add device tree binding"), Device Tree support was added to the spi-orion driver. However, this commit reads the "cell-index" property, without taking into account the fact that DT properties are big-endian encoded. Since most of the platforms using spi-orion with DT have apparently not used anything but cell-index = <0>, the problem was not visible. But as soon as one starts using cell-index = <1>, the problem becomes clearly visible, as the master->bus_num gets a wrong value (actually it gets the value 0, which conflicts with the first bus that has cell-index = <0>). This commit fixes that by using of_property_read_u32() to read the property value, which does the appropriate endianness conversion when needed. Fixes: f814f9ac5a81 ("spi/orion: add device tree binding") Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
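A sketch of the fixed read (variable names are illustrative; of_property_read_u32() performs the be32-to-CPU conversion internally):

    u32 cell_index = 0;

    /* property may be absent on older device trees: fall back to bus 0 */
    if (of_property_read_u32(pdev->dev.of_node, "cell-index", &cell_index))
            cell_index = 0;
    master->bus_num = cell_index;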
2014-09-17 | iommu/amd: Fix cleanup_domain for mass device removal | Joerg Roedel | 1 | -4/+6
commit 9b29d3c6510407d91786c1cf9183ff4debb3473a upstream. When multiple devices are detached in __detach_device, they are also removed from the domains dev_list. This makes it unsafe to use list_for_each_entry_safe, as the next pointer might also not be in the list anymore after __detach_device returns. So just repeatedly remove the first element of the list until it is empty. Tested-by: Marti Raudsepp <marti@juffo.org> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
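A sketch of the safer loop shape described above (struct and member names follow the AMD IOMMU driver but are assumptions here):

    /* Always take the current first element; a cached "next" pointer from
     * list_for_each_entry_safe could be stale once __detach_device() runs. */
    while (!list_empty(&domain->dev_list)) {
            struct iommu_dev_data *entry =
                    list_first_entry(&domain->dev_list,
                                     struct iommu_dev_data, list);

            __detach_device(entry);
    }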
2014-09-17 | media: sms: Remove CONFIG_ prefix from Kconfig symbols | Paul Bolle | 1 | -2/+1
commit 3c4b422adb7694418848cefc2a4669d63192c649 upstream. Remove the CONFIG_ prefix from two Kconfig symbols in a dependency for SMS_SIANO_DEBUGFS. This prefix is invalid inside Kconfig files. Note that the current (common sense) dependency on SMS_USB_DRV and SMS_SDIO_DRV being equal ensures that SMS_SIANO_DEBUGFS will not violate its constraints. These constraints are that: - it should only be built if SMS_USB_DRV is set; - it can't be builtin if USB support is modular. So drop the dependency on SMS_USB_DRV, as it is unneeded. Fixes: 6c84b214284e ("[media] sms: fix randconfig building error") Reported-by: Martin Walch <walch.martin@web.de> Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | media: v4l: vsp1: Remove the unneeded vsp1_video_buffer video field | Laurent Pinchart | 2 | -3/+0
commit e51daefc228aa164adcc17fe8fce0f856ad0a1cc upstream. The field is assigned but never read, so remove it. This fixes a bug caused by the struct vb2_buffer field not being the very first field of the vsp1_video_buffer buffer structure as required by videobuf2. Reported-by: Takanari Hayama <taki@igel.co.jp> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | media: media-device: Remove duplicated memset() in media_enum_entities() | Salva Peiró | 1 | -2/+0
commit f8ca6ac00d2ba24c5557f08f81439cd3432f0802 upstream. After zeroing the whole struct media_entity_desc u_ent, it is no longer necessary to memset(0) its u_ent.name field. Signed-off-by: Salva Peiró <speiro@ai2.upv.es> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | media: au0828: Only run alt setting logic when needed | Mauro Carvalho Chehab | 1 | -17/+17
commit 64ea37bbd8a5815522706f0099ad3f11c7537e15 upstream. It seems that there's a bug in the au0828 hardware/firmware related to the alternate setting: when the device is already at alt 5, a further call causes the URBs to receive -ESHUTDOWN. I found two different incarnations of this issue: 1) at qv4l2, it fails the second time we try to open the video screen; 2) at xawtv, when an audio underrun occurs, which is very frequent, at least on my test machine. The fix is simple: just check if alt=5 before calling set_usb_interface(). Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
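A hedged sketch of the guard (dev->alt and dev->usbdev are illustrative fields, not necessarily the driver's own):

    /* Re-selecting alt 5 while already on alt 5 makes the URBs fail with
     * -ESHUTDOWN, so only issue the request when the setting changes. */
    if (dev->alt != 5) {
            ret = usb_set_interface(dev->usbdev, 0, 5);
            if (ret == 0)
                    dev->alt = 5;
    }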
2014-09-17 | media: xc4000: Fix get_frequency() | Mauro Carvalho Chehab | 1 | -9/+11
commit 4c07e32884ab69574cfd9eb4de3334233c938071 upstream. The programmed frequency on xc4000 is not the middle frequency, but the initial frequency of the bandwidth range. However, the DVB API works with the middle frequency. This works fine on set_frontend, as the device calculates the needed offset. However, at get_frequency(), the returned value is the initial frequency. That's generally not a big problem on most drivers; however, starting with changeset 6fe1099c7aec, the frequency drift is taken into account in the dib7000p driver. This broke support for the PCTV 340e, which uses the dib7000p demod and the xc4000 tuner. Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | media: xc5000: Fix get_frequency() | Mauro Carvalho Chehab | 1 | -10/+12
commit a3eec916cbc17dc1aaa3ddf120836cd5200eb4ef upstream. The programmed frequency on xc5000 is not the middle frequency, but the initial frequency on the bandwidth range. However, the DVB API works with the middle frequency. Signed-off-by: Mauro Carvalho Chehab <m.chehab@samsung.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
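The same offset applies to both tuners; a hypothetical helper makes the arithmetic explicit (for example, an 8 MHz channel programmed with a start of 474 MHz should be reported as 478 MHz):

    /* DVB API: centre of the channel; silicon: start of the bandwidth range */
    static u32 example_centre_frequency(u32 start_hz, u32 bandwidth_hz)
    {
            return start_hz + bandwidth_hz / 2;
    }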
2014-09-17 | USB: fix build error with CONFIG_PM_RUNTIME disabled | Greg Kroah-Hartman | 1 | -0/+2
commit a9ef803d740bfadf5e505fbc57efa57692e27025 upstream. commit bdd405d2a528 ("usb: hub: Prevent hub autosuspend if usbcore.autosuspend is -1") causes a build error if CONFIG_PM_RUNTIME is disabled. Fix that by doing a simple #ifdef guard around it. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Reported-by: kbuild test robot <fengguang.wu@intel.com> Cc: Roger Quadros <rogerq@ti.com> Cc: Michael Welling <mwelling@emacinc.com> Cc: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | vm_is_stack: use for_each_thread() rather than buggy while_each_thread() | Oleg Nesterov | 1 | -6/+3
commit 4449a51a7c281602d3a385044ab928322a122a02 upstream. Aleksei hit the soft lockup during reading /proc/PID/smaps. David investigated the problem and suggested the right fix. while_each_thread() is racy and should die, this patch updates vm_is_stack(). Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reported-by: Aleksei Besogonov <alex.besogonov@gmail.com> Tested-by: Aleksei Besogonov <alex.besogonov@gmail.com> Suggested-by: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
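A sketch of the bounded walk (task is assumed to point at a task in the group of interest; RCU protection as usual for task-list walks):

    struct task_struct *t;

    rcu_read_lock();
    for_each_thread(task, t) {
            /* ... inspect t's stack/mm; unlike while_each_thread(), the walk
             *     terminates even if the group leader exits while we iterate ... */
    }
    rcu_read_unlock();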
2014-09-17 | NFSv4: Fix problems with close in the presence of a delegation | Trond Myklebust | 1 | -5/+12
commit aee7af356e151494d5014f57b33460b162f181b5 upstream. In the presence of delegations, we can no longer assume that the state->n_rdwr, state->n_rdonly, state->n_wronly reflect the open stateid share mode, and so we need to calculate the initial value for calldata->arg.fmode using the state->flags. Reported-by: James Drews <drews@engr.wisc.edu> Fixes: 88069f77e1ac5 (NFSv41: Fix a potential state leakage when...) Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | svcrdma: Select NFSv4.1 backchannel transport based on forward channel | Chuck Lever | 5 | -2/+7
commit 3c45ddf823d679a820adddd53b52c6699c9a05ac upstream. The current code always selects XPRT_TRANSPORT_BC_TCP for the back channel, even when the forward channel was not TCP (eg, RDMA). When a 4.1 mount is attempted with RDMA, the server panics in the TCP BC code when trying to send CB_NULL. Instead, construct the transport protocol number from the forward channel transport or'd with XPRT_TRANSPORT_BC. Transports that do not support bi-directional RPC will not have registered a "BC" transport, causing create_backchannel_client() to fail immediately. Fixes: https://bugzilla.linux-nfs.org/show_bug.cgi?id=265 Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | NFSD: Decrease nfsd_users in nfsd_startup_generic fail | Kinglong Mee | 1 | -1/+4
commit d9499a95716db0d4bc9b67e88fd162133e7d6b08 upstream. A memory allocation failure could cause nfsd_startup_generic to fail, in which case nfsd_users would be incorrectly left elevated. After nfsd restarts, nfsd_startup_generic will then succeed without doing anything--the first consequence is likely nfs4_start_net finding a bad laundry_wq and crashing. Signed-off-by: Kinglong Mee <kinglongmee@gmail.com> Fixes: 4539f14981ce "nfsd: replace boolean nfsd_up flag by users counter" Signed-off-by: J. Bruce Fields <bfields@redhat.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
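A sketch of the counter discipline the fix restores (do_startup() is a hypothetical stand-in for the real one-time setup):

    static int nfsd_users;

    static int example_startup_generic(void)
    {
            int ret;

            if (nfsd_users++)
                    return 0;       /* already started */

            ret = do_startup();
            if (ret)
                    nfsd_users--;   /* roll back so a later retry really retries */
            return ret;
    }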
2014-09-17 | usb: hub: Prevent hub autosuspend if usbcore.autosuspend is -1 | Roger Quadros | 1 | -1/+5
commit bdd405d2a5287bdb9b04670ea255e1f122138e66 upstream. If the user specifies that USB autosuspend must be disabled by the module parameter "usbcore.autosuspend=-1", then we must prevent autosuspend of USB hub devices as well. Commit 596d789a211d, introduced in v3.8, changed the original behaviour and stopped respecting the usbcore.autosuspend parameter for hubs. Fixes: 596d789a211d "USB: set hub's default autosuspend delay as 0" Signed-off-by: Roger Quadros <rogerq@ti.com> Tested-by: Michael Welling <mwelling@emacinc.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | usb: ehci: using wIndex + 1 for hub port | Peter Chen | 1 | -1/+1
commit 5cbcc35e5bf0eae3c7494ce3efefffc9977827ae upstream. The roothub's port index per controller starts from 0, but the hub port index per hub starts from 1. This patch fixes the "can't find device at roothub" problem when connecting a test fixture at the roothub while doing the USB-IF Embedded Host High-Speed Electrical Test. This patch is for v3.12+. Signed-off-by: Peter Chen <peter.chen@freescale.com> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | USB: whiteheat: Added bounds checking for bulk command response | James Forshaw | 1 | -1/+6
commit 6817ae225cd650fb1c3295d769298c38b1eba818 upstream. This patch fixes a potential security issue in the whiteheat USB driver which might allow a local attacker to cause kernel memory corruption. This is due to an unchecked memcpy into a fixed size buffer (of 64 bytes). On EHCI and XHCI busses it's possible to craft responses greater than 64 bytes, leading to a buffer overflow. Signed-off-by: James Forshaw <forshaw@google.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | USB: ftdi_sio: Added PID for new ekey device | Jaša Bartelj | 2 | -0/+7
commit 646907f5bfb0782c731ae9ff6fb63471a3566132 upstream. Added support to the ftdi_sio driver for ekey Converter USB which uses an FT232BM chip. Signed-off-by: Jaša Bartelj <jasa.bartelj@gmail.com> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | USB: ftdi_sio: add Basic Micro ATOM Nano USB2Serial PID | Johan Hovold | 2 | -0/+3
commit 6552cc7f09261db2aeaae389aa2c05a74b3a93b4 upstream. Add device id for Basic Micro ATOM Nano USB2Serial adapters. Reported-by: Nicolas Alt <n.alt@mytum.de> Tested-by: Nicolas Alt <n.alt@mytum.de> Signed-off-by: Johan Hovold <johan@kernel.org> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | usb: xhci: amd chipset also needs short TX quirk | Huang Rui | 1 | -0/+4
commit 2597fe99bb0259387111d0431691f5daac84f5a5 upstream. AMD xHC also needs short tx quirk after tested on most of chipset generations. That's because there is the same incorrect behavior like Fresco Logic host. Please see below message with on USB webcam attached on xHC host: [ 139.262944] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? [ 139.266934] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? [ 139.270913] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? [ 139.274937] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? [ 139.278914] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? [ 139.282936] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? [ 139.286915] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? [ 139.290938] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? [ 139.294913] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? [ 139.298917] xhci_hcd 0000:00:10.0: WARN Successful completion on short TX: needs XHCI_TRUST_TX_LENGTH quirk? Reported-by: Arindam Nath <arindam.nath@amd.com> Tested-by: Shriraj-Rai P <shriraj-rai.p@amd.com> Signed-off-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | xhci: Treat not finding the event_seg on COMP_STOP the same as COMP_STOP_INVAL | Hans de Goede | 1 | -1/+2
commit 9a54886342e227433aebc9d374f8ae268a836475 upstream. When using a Renesas uPD720231 chipset usb-3 uas to sata bridge with a 120G Crucial M500 ssd, model string: Crucial_ CT120M500SSD1, together with a the integrated Intel xhci controller on a Haswell laptop: 00:14.0 USB controller [0c03]: Intel Corporation 8 Series USB xHCI HC [8086:9c31] (rev 04) The following error gets logged to dmesg: xhci error: Transfer event TRB DMA ptr not part of current TD Treating COMP_STOP the same as COMP_STOP_INVAL when no event_seg gets found fixes this. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | staging: r8188eu: Add new USB ID | Larry Finger | 1 | -0/+1
commit a2fa6721c7237b5a666f16f732628c0c09c0b954 upstream. The Elecom WDC-150SU2M uses this chip. Reported-by: Hiroki Kondo <kompiro@gmail.com> Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | staging/rtl8188eu: add 0df6:0076 Sitecom Europe B.V. | Holger Paradies | 1 | -0/+1
commit 8626d524ef08f10fccc0c41e5f75aef8235edf47 upstream. The stick is not recognized. This dongle uses r8188eu but usb-id is missing. 3.16.0 Signed-off-by: Holger Paradies <retabell@gmx.de> Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | jbd2: fix descriptor block size handling errors with journal_csum | Darrick J. Wong | 6 | -49/+95
commit db9ee220361de03ee86388f9ea5e529eaad5323c upstream. It turns out that there are some serious problems with the on-disk format of journal checksum v2. The foremost is that the function to calculate descriptor tag size returns sizes that are too big. This causes alignment issues on some architectures and is compounded by the fact that some parts of jbd2 use the structure size (incorrectly) to determine the presence of a 64bit journal instead of checking the feature flags. Therefore, introduce journal checksum v3, which enlarges the descriptor block tag format to allow for full 32-bit checksums of journal blocks, fix the journal tag function to return the correct sizes, and fix the jbd2 recovery code to use feature flags to determine 64bitness. Add a few function helpers so we don't have to open-code quite so many pieces. Switching to a 16-byte block size was found to increase journal size overhead by a maximum of 0.1%, to convert a 32-bit journal with no checksumming to a 32-bit journal with checksum v3 enabled. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Reported-by: TR Reardon <thomas_reardon@hotmail.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | jbd2: fix infinite loop when recovering corrupt journal blocks | Darrick J. Wong | 1 | -2/+5
commit 022eaa7517017efe4f6538750c2b59a804dc7df7 upstream. When recovering the journal, don't fall into an infinite loop if we encounter a corrupt journal block. Instead, just skip the block and return an error, which fails the mount and thus forces the user to run a full filesystem fsck. Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | ext4: update i_disksize coherently with block allocation on error path | Dmitry Monakhov | 1 | -2/+8
commit 6603120e96eae9a5d6228681ae55c7fdc998d1bb upstream. In the delalloc case, i_disksize may be less than i_size. So we have to update i_disksize each time we allocate and submit blocks beyond i_disksize. We weren't doing this on the error paths, so fix this. testcase: xfstest generic/019 Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | mei: nfc: fix memory leak in error path | Alexander Usyskin | 1 | -6/+5
commit 8e8248b1369c97c7bb6f8bcaee1f05deeabab8ef upstream. NFC will leak the buffer if send fails. Use a single exit point that does the freeing. Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com> Signed-off-by: Tomas Winkler <tomas.winkler@intel.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
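A sketch of the single-exit shape (do_send() is hypothetical; the point is that kfree() sits on the one path out):

    static int example_send(const void *data, size_t len)
    {
            u8 *buf;
            int ret;

            buf = kmemdup(data, len, GFP_KERNEL);
            if (!buf)
                    return -ENOMEM;

            ret = do_send(buf, len);    /* success or failure ... */

            kfree(buf);                 /* ... the buffer is freed either way */
            return ret;
    }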
2014-09-17 | Btrfs: fix crash on endio of reading corrupted block | Liu Bo | 1 | -0/+1
commit 38c1c2e44bacb37efd68b90b3f70386a8ee370ee upstream. The crash is ------------[ cut here ]------------ kernel BUG at fs/btrfs/extent_io.c:2124! [...] Workqueue: btrfs-endio normal_work_helper [btrfs] RIP: 0010:[<ffffffffa02d6055>] [<ffffffffa02d6055>] end_bio_extent_readpage+0xb45/0xcd0 [btrfs] This is in fact a regression. It is because we forgot to increase @offset properly when reading a corrupted block, so @offset remains unchanged, and this leads to checksum errors while reading the remaining blocks queued up in the same bio, and then ends up hitting the above BUG_ON. Reported-by: Chris Murphy <lists@colorremedies.com> Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | Btrfs: fix compressed write corruption on enospc | Liu Bo | 1 | -0/+12
commit ce62003f690dff38d3164a632ec69efa15c32cbf upstream. When failing to allocate space for the whole compressed extent, we'll fallback to uncompressed IO, but we've forgotten to redirty the pages which belong to this compressed extent, and these 'clean' pages will simply skip 'submit' part and go to endio directly, at last we got data corruption as we write nothing. Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Tested-By: Martin Steigerwald <martin@lichtvoll.de> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | Btrfs: read lock extent buffer while walking backrefs | Filipe Manana | 1 | -0/+3
commit 6f7ff6d7832c6be13e8c95598884dbc40ad69fb7 upstream. Before processing the extent buffer, acquire a read lock on it, so that we're safe against concurrent updates on the extent buffer. Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | Btrfs: fix csum tree corruption, duplicate and outdated checksums | Filipe Manana | 1 | -1/+1
commit 27b9a8122ff71a8cadfbffb9c4f0694300464f3b upstream. Under rare circumstances we can end up leaving 2 versions of a checksum for the same file extent range. The reason for this is that after calling btrfs_next_leaf we process slot 0 of the leaf it returns, instead of processing the slot set in path->slots[0]. Most of the time (by far) path->slots[0] is 0, but after btrfs_next_leaf() releases the path and before it searches for the next leaf, another task might cause a split of the next leaf, which migrates some of its keys to the leaf we were processing before calling btrfs_next_leaf(). In this case btrfs_next_leaf() returns again the same leaf but with path->slots[0] having a slot number corresponding to the first new key it got, that is, a slot number that didn't exist before calling btrfs_next_leaf(), as the leaf now has more keys than it had before. So we must really process the returned leaf starting at path->slots[0] always, as it isn't always 0, and the key at slot 0 can have an offset much lower than our search offset/bytenr. For example, consider the following scenario, where we have: sums->bytenr: 40157184, sums->len: 16384, sums end: 40173568 four 4kb file data blocks with offsets 40157184, 40161280, 40165376, 40169472 Leaf N: slot = 0 slot = btrfs_header_nritems() - 1 |-------------------------------------------------------------------| | [(CSUM CSUM 39239680), size 8] ... [(CSUM CSUM 40116224), size 4] | |-------------------------------------------------------------------| Leaf N + 1: slot = 0 slot = btrfs_header_nritems() - 1 |--------------------------------------------------------------------| | [(CSUM CSUM 40161280), size 32] ... [((CSUM CSUM 40615936), size 8 | |--------------------------------------------------------------------| Because we are at the last slot of leaf N, we call btrfs_next_leaf() to find the next highest key, which releases the current path and then searches for that next key. However after releasing the path and before finding that next key, the item at slot 0 of leaf N + 1 gets moved to leaf N, due to a call to ctree.c:push_leaf_left() (via ctree.c:split_leaf()), and therefore btrfs_next_leaf() will returns us a path again with leaf N but with the slot pointing to its new last key (CSUM CSUM 40161280). This new version of leaf N is then: slot = 0 slot = btrfs_header_nritems() - 2 slot = btrfs_header_nritems() - 1 |----------------------------------------------------------------------------------------------------| | [(CSUM CSUM 39239680), size 8] ... [(CSUM CSUM 40116224), size 4] [(CSUM CSUM 40161280), size 32] | |----------------------------------------------------------------------------------------------------| And incorrecly using slot 0, makes us set next_offset to 39239680 and we jump into the "insert:" label, which will set tmp to: tmp = min((sums->len - total_bytes) >> blocksize_bits, (next_offset - file_key.offset) >> blocksize_bits) = min((16384 - 0) >> 12, (39239680 - 40157184) >> 12) = min(4, (u64)-917504 = 18446744073708634112 >> 12) = 4 and ins_size = csum_size * tmp = 4 * 4 = 16 bytes. In other words, we insert a new csum item in the tree with key (CSUM_OBJECTID CSUM_KEY 40157184 = sums->bytenr) that contains the checksums for all the data (4 blocks of 4096 bytes each = sums->len). Which is wrong, because the item with key (CSUM CSUM 40161280) (the one that was moved from leaf N + 1 to the end of leaf N) contains the old checksums of the last 12288 bytes of our data and won't get those old checksums removed. 
So this leaves us 2 different checksums for 3 4kb blocks of data in the tree, and breaks the logical rule: Key_N+1.offset >= Key_N.offset + length_of_data_its_checksums_cover An obvious bad effect of this is that a subsequent csum tree lookup to get the checksum of any of the blocks with logical offset of 40161280, 40165376 or 40169472 (the last 3 4kb blocks of file data), will get the old checksums. Signed-off-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
2014-09-17 | Btrfs: Fix memory corruption by ulist_add_merge() on 32bit arch | Takashi Iwai | 2 | -6/+20
commit 4eb1f66dce6c4dc28dd90a7ffbe6b2b1cb08aa4e upstream. We've got bug reports that btrfs crashes when quota is enabled on 32bit kernel, typically with the Oops like below: BUG: unable to handle kernel NULL pointer dereference at 00000004 IP: [<f9234590>] find_parent_nodes+0x360/0x1380 [btrfs] *pde = 00000000 Oops: 0000 [#1] SMP CPU: 0 PID: 151 Comm: kworker/u8:2 Tainted: G S W 3.15.2-1.gd43d97e-default #1 Workqueue: btrfs-qgroup-rescan normal_work_helper [btrfs] task: f1478130 ti: f147c000 task.ti: f147c000 EIP: 0060:[<f9234590>] EFLAGS: 00010213 CPU: 0 EIP is at find_parent_nodes+0x360/0x1380 [btrfs] EAX: f147dda8 EBX: f147ddb0 ECX: 00000011 EDX: 00000000 ESI: 00000000 EDI: f147dda4 EBP: f147ddf8 ESP: f147dd38 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 CR0: 8005003b CR2: 00000004 CR3: 00bf3000 CR4: 00000690 Stack: 00000000 00000000 f147dda4 00000050 00000001 00000000 00000001 00000050 00000001 00000000 d3059000 00000001 00000022 000000a8 00000000 00000000 00000000 000000a1 00000000 00000000 00000001 00000000 00000000 11800000 Call Trace: [<f923564d>] __btrfs_find_all_roots+0x9d/0xf0 [btrfs] [<f9237bb1>] btrfs_qgroup_rescan_worker+0x401/0x760 [btrfs] [<f9206148>] normal_work_helper+0xc8/0x270 [btrfs] [<c025e38b>] process_one_work+0x11b/0x390 [<c025eea1>] worker_thread+0x101/0x340 [<c026432b>] kthread+0x9b/0xb0 [<c0712a71>] ret_from_kernel_thread+0x21/0x30 [<c0264290>] kthread_create_on_node+0x110/0x110 This indicates a NULL corruption in prefs_delayed list. The further investigation and bisection pointed that the call of ulist_add_merge() results in the corruption. ulist_add_merge() takes u64 as aux and writes a 64bit value into old_aux. The callers of this function in backref.c, however, pass a pointer of a pointer to old_aux. That is, the function overwrites 64bit value on 32bit pointer. This caused a NULL in the adjacent variable, in this case, prefs_delayed. Here is a quick attempt to band-aid over this: a new function, ulist_add_merge_ptr() is introduced to pass/store properly a pointer value instead of u64. There are still ugly void ** cast remaining in the callers because void ** cannot be taken implicitly. But, it's safer than explicit cast to u64, anyway. Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=887046 Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Chris Mason <clm@fb.com> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
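A sketch of the kind of pointer-safe wrapper the message describes (written here from the description, not copied from the fix):

    /* Keep pointer-sized aux values pointer-sized: on 32-bit, writing a u64
     * through what is really a void ** clobbers the 4 bytes that follow it. */
    static inline int example_add_merge_ptr(struct ulist *ulist, u64 val, void *aux,
                                            void **old_aux, gfp_t gfp_mask)
    {
    #if BITS_PER_LONG == 32
            u64 old64 = (uintptr_t)*old_aux;
            int ret = ulist_add_merge(ulist, val, (uintptr_t)aux, &old64, gfp_mask);

            *old_aux = (void *)(uintptr_t)old64;
            return ret;
    #else
            return ulist_add_merge(ulist, val, (u64)(unsigned long)aux,
                                   (u64 *)old_aux, gfp_mask);
    #endif
    }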