Age  Commit message  [Author, files changed, lines -removed/+added]
2017-11-30  ASoC: wm_adsp: Don't overrun firmware file buffer when reading region data  [Richard Fitzgerald, 1 file, -1/+24]
[ Upstream commit 1cab2a84f470e15ecc8e5143bfe9398c6e888032 ] Protect against corrupt firmware files by ensuring that the length we get for the data in a region actually lies within the available firmware file data buffer. Signed-off-by: Richard Fitzgerald <rf@opensource.wolfsonmicro.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
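The shape of such a check, as a minimal sketch rather than the actual wm_adsp code (fw_size, pos and region_len are illustrative names, not the driver's variables):

    #include <linux/errno.h>
    #include <linux/types.h>

    /* Reject a region whose declared length would run past the end of the
     * firmware file buffer; pos is the offset of the region's data. */
    static int region_fits(size_t fw_size, size_t pos, size_t region_len)
    {
            if (pos > fw_size || region_len > fw_size - pos)
                    return -EOVERFLOW;      /* corrupt or truncated firmware */
            return 0;
    }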
2017-11-30  btrfs: return the actual error value from btrfs_uuid_tree_iterate  [Pan Bian, 1 file, -3/+1]
[ Upstream commit 73ba39ab9307340dc98ec3622891314bbc09cc2e ] In function btrfs_uuid_tree_iterate(), errno is assigned to variable ret on errors. However, it directly returns 0. It may be better to return ret. This patch also removes the warning, because the caller already prints a warning. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=188731 Signed-off-by: Pan Bian <bianpan2016@163.com> Reviewed-by: Omar Sandoval <osandov@fb.com> [ edited subject ] Signed-off-by: David Sterba <dsterba@suse.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
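Several entries in this batch (this one, the staging iio/cdc and iio light fixes, and the typhoon ones further down) are the same one-line pattern: propagate the saved error instead of a literal 0. A generic sketch of the shape of the fix, not the actual btrfs code (do_step() is a stand-in for the call that can fail):

    #include <linux/errno.h>

    static int do_step(void)                /* stand-in for the failing call */
    {
            return -EIO;
    }

    static int iterate_tree(void)
    {
            int ret;

            ret = do_step();
            if (ret < 0)
                    return ret;             /* previously fell through to "return 0" */

            return 0;
    }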
2017-11-30  crypto: marvell - Copy IVDIG before launching partial DMA ahash requests  [Romain Perier, 3 files, -3/+43]
[ Upstream commit 8759fec4af222f338d08f8f1a7ad6a77ca6cb301 ] Currently, inner IV/DIGEST data are only copied once into the hash engines and not set explicitly before launching a request that is not a first frag. This is an issue especially when multiple ahash reqs are computed in parallel or chained with cipher request, as the state of the request being computed is not updated into the hash engine. It leads to non-deterministic corrupted digest results. Fixes: commit 2786cee8e50b ("crypto: marvell - Move SRAM I/O operations to step functions") Signed-off-by: Romain Perier <romain.perier@free-electrons.com> Acked-by: Boris Brezillon <boris.brezillon@free-electrons.com> Cc: <stable@vger.kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  ASoC: rsnd: don't double free kctrl  [Colin Ian King, 1 file, -3/+1]
[ Upstream commit 0ea617a298dcdc2251b4e10f83ac3f3e627b66e3 ] On an error, snd_ctl_add already frees kctrl, so calling snd_ctl_free_one to free it again leads to a double free error. Fix this by removing the extraneous snd_ctl_free_one call. Issue found using static analysis with CoverityScan, CID 1372908 Signed-off-by: Colin Ian King <colin.king@canonical.com> Acked-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  netfilter: nf_tables: fix oob access  [Florian Westphal, 1 file, -1/+1]
[ Upstream commit 3e38df136e453aa69eb4472108ebce2fb00b1ba6 ]

    BUG: KASAN: slab-out-of-bounds in nf_tables_rule_destroy+0xf1/0x130 at addr ffff88006a4c35c8
    Read of size 8 by task nft/1607

When we've destroyed the last valid expr, nft_expr_next() returns an invalid expr. We must not dereference it unless it passes the != nft_expr_last() check. Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  netfilter: nft_queue: use raw_smp_processor_id()  [Pablo Neira Ayuso, 1 file, -1/+1]
[ Upstream commit c2e756ff9e699865d294cdc112acfc36419cf5cc ] Using smp_processor_id() causes splats with PREEMPT_RCU:

    [19379.552780] BUG: using smp_processor_id() in preemptible [00000000] code: ping/32389
    [19379.552793] caller is debug_smp_processor_id+0x17/0x19
    [...]
    [19379.552823] Call Trace:
    [19379.552832] [<ffffffff81274e9e>] dump_stack+0x67/0x90
    [19379.552837] [<ffffffff8129a4d4>] check_preemption_disabled+0xe5/0xf5
    [19379.552842] [<ffffffff8129a4fb>] debug_smp_processor_id+0x17/0x19
    [19379.552849] [<ffffffffa07c42dd>] nft_queue_eval+0x35/0x20c [nft_queue]

No need to disable preemption since we only fetch the numeric value, so let's use raw_smp_processor_id() instead. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
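The distinction only matters in preemptible context: smp_processor_id() complains because the caller could migrate right after reading the value, while raw_smp_processor_id() just returns the number. A hedged illustration of the pattern, not the nft_queue code itself:

    #include <linux/smp.h>
    #include <linux/types.h>

    /* Use the CPU number purely as a spreading hint; it may be stale by the
     * time it is used, so there is no need to disable preemption. */
    static u32 pick_queue(u32 base, u32 nqueues)
    {
            if (nqueues <= 1)
                    return base;
            return base + (raw_smp_processor_id() % nqueues);
    }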
2017-11-30  spi: SPI_FSL_DSPI should depend on HAS_DMA  [Geert Uytterhoeven, 1 file, -0/+1]
[ Upstream commit dadab2d4e3cf708ceba22ecddd94aedfecb39199 ] If NO_DMA=y: ERROR: "bad_dma_ops" [drivers/spi/spi-fsl-dspi.ko] undefined! Add a dependency on HAS_DMA to fix this. Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  staging: iio: cdc: fix improper return value  [Pan Bian, 1 file, -1/+1]
[ Upstream commit 91ca1a8c584f55857b1f6ab20a1d3a1ce7a559bb ] At the end of function ad7150_write_event_config(), the function directly returns 0. As a result, the errors will be ignored by the callers. It may be better to return variable "ret". Signed-off-by: Pan Bian <bianpan2016@163.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  iio: light: fix improper return value  [Pan Bian, 1 file, -1/+1]
[ Upstream commit db4e5376d058af8924fafd0520a0942d92538d0e ] In function cm3232_reg_init(), it returns 0 even if the last call to i2c_smbus_write_byte_data() returns a negative value (indicates error). As a result, the return value may be inconsistent with the execution status, and the caller of cm3232_reg_init() will not be able to detect the error. This patch fixes the bug. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=188641 Signed-off-by: Pan Bian <bianpan2016@163.com> Signed-off-by: Jonathan Cameron <jic23@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  adm80211: add checks for dma mapping errors  [Alexey Khoroshilov, 1 file, -2/+22]
[ Upstream commit d15697de60db5570532fdedb8e13b2251d65b8e3 ] The driver does not check if mapping DMA memory succeeds. The patch adds the checks and failure handling. Found by Linux Driver Verification project (linuxtesting.org). Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
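The general form of such a fix: every address returned by dma_map_single() has to be checked with dma_mapping_error() before it is handed to the hardware, and the caller must be able to unwind. A minimal sketch with placeholder names, not the adm8211 code itself:

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>

    static int map_rx_buffer(struct device *dev, void *buf, size_t len,
                             dma_addr_t *mapping)
    {
            *mapping = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
            if (dma_mapping_error(dev, *mapping))
                    return -ENOMEM;     /* never hand the bogus address to the NIC */
            return 0;
    }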
2017-11-30  mac80211: Suppress NEW_PEER_CANDIDATE event if no room  [Masashi Honma, 1 file, -6/+8]
[ Upstream commit 11197d006bcfabf0173a7820a163fcaac420d10e ] Previously, the kernel sent the NEW_PEER_CANDIDATE event to userspace even if the found peer did not have any room to accept another peer. This caused continuous connection attempts. Signed-off-by: Masashi Honma <masashi.honma@gmail.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  mac80211: Remove invalid flag operations in mesh TSF synchronization  [Masashi Honma, 3 files, -15/+0]
[ Upstream commit 76f43b4c0a9337af22827d78de4f2b8fd5328489 ] mesh_sync_offset_adjust_tbtt() implements the Extensible synchronization framework ([1] 13.13.2 Extensible synchronization framework). It shall not operate on the flag "TBTT Adjusting subfield" ([1] 8.4.2.100.8 Mesh Capability), since that flag is used only for MBCA ([1] 13.13.4 Mesh beacon collision avoidance; see 13.13.4.4.3 TBTT scanning and adjustment procedures for details). So this patch removes the flag operations. [1] IEEE Std 802.11 2012 Signed-off-by: Masashi Honma <masashi.honma@gmail.com> [remove adjusting_tbtt entirely, since it's now unused] Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  drm/mediatek: don't use drm_put_dev  [Daniel Vetter, 1 file, -1/+2]
[ Upstream commit ae9d2daecf086958a41ad216152ec208d70ba325 ] fsl is already fully demidlayered in the probe function, but for convenience stuck with drm_put_dev. Call the unregister/unref parts separately, to make sure this driver works correctly. Cc: Philipp Zabel <p.zabel@pengutronix.de> Cc: CK Hu <ck.hu@mediatek.com> Reviewed-by: Lucas Stach <l.stach@pengutronix.de> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/20161208110739.24417-3-daniel.vetter@ffwll.ch Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  clk: qcom: ipq4019: Add all the frequencies for apss cpu  [Abhishek Sahu, 1 file, -2/+12]
[ Upstream commit 86c654d41a52e3d17e9bc2c2ba37f3c963e66a4a ] The APSS CPU clock does not contain all the frequencies in its frequency table so this patch adds the same. Signed-off-by: Abhishek Sahu <absahu@codeaurora.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  drm: Apply range restriction after color adjustment when allocation  [Chris Wilson, 1 file, -10/+6]
[ Upstream commit 3db93756b501e5f0a3951c79cfa9ed43c26d3455 ] mm->color_adjust() compares the hole with its neighbouring nodes. They only abut before we restrict the hole, so we have to apply color_adjust before we apply the range restriction. Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Link: http://patchwork.freedesktop.org/patch/msgid/20161222083641.2691-36-chris@chris-wilson.co.uk Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  gpio: mockup: dynamically allocate memory for chip name  [Bartosz Golaszewski, 1 file, -3/+7]
[ Upstream commit ad6d8004fa29a8958381b60215e32d1e903b0492 ] Currently the chip name buffer is allocated on the stack and the address of the buffer is passed to the gpio framework. It's invalid after probe() returns, so the sysfs label attribute displays garbage. Use devm_kasprintf() for each string instead. Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
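The rule being enforced: a string handed to the gpio framework must outlive probe(), so it cannot live in a stack buffer. A hedged sketch of the devm_kasprintf() approach (the function name and index scheme are illustrative, not the real gpio-mockup code):

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/gpio/driver.h>

    static int mockup_name_chip(struct device *dev, struct gpio_chip *chip, int idx)
    {
            /* devm_kasprintf() allocates storage tied to the device lifetime,
             * unlike a stack buffer that is gone once probe() returns. */
            chip->label = devm_kasprintf(dev, GFP_KERNEL, "gpio-mockup-%c", 'A' + idx);
            if (!chip->label)
                    return -ENOMEM;
            return 0;
    }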
2017-11-30  ALSA: hda - Apply ALC269_FIXUP_NO_SHUTUP on HDA_FIXUP_ACT_PROBE  [Gabriele Mazzotta, 1 file, -1/+1]
[ Upstream commit 972aa2c708703c21f14eb958b37e82aae2530e44 ] Setting shutup when the action is HDA_FIXUP_ACT_PRE_PROBE might not have the desired effect since it could be overridden by another more generic shutup function. Prevent this by setting the more specific shutup function on HDA_FIXUP_ACT_PROBE. Signed-off-by: Gabriele Mazzotta <gabriele.mzt@gmail.com> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  ath10k: set CTS protection VDEV param only if VDEV is up  [Bartosz Markowski, 1 file, -8/+43]
[ Upstream commit 7cfe0455ee1218add152e986b89b4bb8dbeafcdd ] The cts protection vdev parameter, in new QCA9377 TF2.0 firmware, requires a bss peer to be created for the STATION vdev type. The bss peer is allocated by the firmware after the vdev_start/_up commands. mac80211 may call the cts protection setup at any time, so we need to track the situation and defer the cts configuration to prevent firmware asserts, like below:

    [00]: 0x05020001 0x000015B3 0x0099ACE2 0x00955B31
    [04]: 0x0099ACE2 0x00060730 0x00000004 0x00000000
    [08]: 0x0044C754 0x00412C10 0x00000000 0x00409C54
    [12]: 0x00000009 0x00000000 0x00952F6C 0x00952F77
    [16]: 0x00952CC4 0x00910712 0x00000000 0x00000000
    [20]: 0x4099ACE2 0x0040E858 0x00421254 0x004127F4
    [24]: 0x8099B9B2 0x0040E8B8 0x00000000 0xC099ACE2
    [28]: 0x800B75CB 0x0040E8F8 0x00000007 0x00005008
    [32]: 0x809B048A 0x0040E958 0x00000010 0x00433B10
    [36]: 0x809AFBBC 0x0040E9A8 0x0042BB74 0x0042BBBC
    [40]: 0x8091D252 0x0040E9C8 0x0042BBBC 0x00000001
    [44]: 0x809FFA45 0x0040EA78 0x0043D3E4 0x0042C2C8
    [48]: 0x809FCEF4 0x0040EA98 0x0043D3E4 0x00000001
    [52]: 0x80911210 0x0040EAE8 0x00000010 0x004041D0
    [56]: 0x80911154 0x0040EB28 0x00400000 0x00000000

Signed-off-by: Bartosz Markowski <bartosz.markowski@tieto.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  bnxt_en: Set default completion ring for async events.  [Michael Chan, 1 file, -0/+30]
[ Upstream commit 486b5c22ea1d35e00e90dd79a32a9ee530b18915 ] With the added support for the bnxt_re RDMA driver, both drivers can be allocating completion rings in any order. The firmware does not know which completion ring should be receiving async events. Add an extra step to tell firmware the completion ring number for receiving async events after bnxt_en allocates the completion rings. Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  pinctrl: sirf: atlas7: Add missing 'of_node_put()'  [Christophe JAILLET, 1 file, -3/+4]
[ Upstream commit 7af355e6715b325d8af29822f4c3dbecd7eeebec ] Reference to 'sys2pci_np' should be dropped in all cases here, not only in error handling path. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
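The reference-counting rule behind this one (and the PM / OPP fix further down) is that a node obtained from an of_get_*/of_find_* helper must be released with of_node_put() on every exit path, success included. A hedged sketch of the pattern; the child node name is a placeholder, not the atlas7 code:

    #include <linux/errno.h>
    #include <linux/of.h>

    static int check_child(struct device_node *parent)
    {
            struct device_node *np;
            int ret = 0;

            np = of_get_child_by_name(parent, "sys2pci");   /* takes a reference */
            if (!np)
                    return -ENODEV;

            if (!of_device_is_available(np))
                    ret = -ENODEV;

            of_node_put(np);        /* dropped on success and on failure alike */
            return ret;
    }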
2017-11-30  ath10k: fix potential memory leak in ath10k_wmi_tlv_op_pull_fw_stats()  [Christian Lamparter, 1 file, -3/+9]
[ Upstream commit 097e46d2ae90265d1afe141ba6208ba598b79e01 ] ath10k_wmi_tlv_op_pull_fw_stats() calls tb = ath10k_wmi_tlv_parse_alloc(...), which allocates memory. If any of the three error paths is taken, this tb needs to be freed. Signed-off-by: Christian Lamparter <chunkeey@googlemail.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  ath10k: ignore configuring the incorrect board_id  [Ryan Hsu, 1 file, -1/+4]
[ Upstream commit d2e202c06ca42d353d95df12437740921a6d05b5 ] With the command to get board_id from otp, in the following case:

    boot get otp board id result 0x00000000 board_id 0 chip_id 0
    boot using board name 'bus=pci,bmi-chip-id=0,bmi-board-id=0"
    ...
    failed to fetch board data for bus=pci,bmi-chip-id=0,bmi-board-id=0 from ath10k/QCA6174/hw3.0/board-2.bin

the invalid board_id=0 will be used as an index to search in the board-2.bin. Ignore the case with board_id=0, as it means the otp is not carrying the board id information. Signed-off-by: Ryan Hsu <ryanhsu@qca.qualcomm.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  ath10k: fix incorrect txpower set by P2P_DEVICE interface  [Ryan Hsu, 1 file, -3/+4]
[ Upstream commit 88407beb1b1462f706a1950a355fd086e1c450b6 ] Ath10k reports the phy capability that supports the P2P_DEVICE interface. When we use a P2P-capable wpa_supplicant to start a connection, it'll create two interfaces: one is wlan0 (vdev_id=0) and one is the P2P_DEVICE p2p-dev-wlan0, which is for the p2p control channel (vdev_id=1).

    ath10k_pci mac vdev create 0 (add interface) type 2 subtype 0
    ath10k_add_interface: vdev_id: 0, txpower: 0, bss_power: 0
    ...
    ath10k_pci mac vdev create 1 (add interface) type 2 subtype 1
    ath10k_add_interface: vdev_id: 1, txpower: 0, bss_power: 0

The txpower in the per-vif bss_conf will only be set to a valid tx power when the interface is assigned a channel_ctx. But this P2P_DEVICE interface will never be used for any connection, so the uninitialized bss_conf.txpower=0 is assigned to arvif->txpower when the interface is created. Since the txpower configuration in firmware is per physical interface, the smallest txpower of all vifs limits the tx power of the physical device, causing the low txpower issue on other active interfaces.

    wlan0: Limiting TX power to 21 (24 - 3) dBm
    ath10k_pci mac vdev_id 0 txpower 21
    ath10k_mac_txpower_recalc: vdev_id: 1, txpower: 0
    ath10k_mac_txpower_recalc: vdev_id: 0, txpower: 21
    ath10k_pci mac txpower 0

This issue only happens when we use a wpa_supplicant that supports P2P or if we use the iw tool to create the control P2P_DEVICE interface. Signed-off-by: Ryan Hsu <ryanhsu@qca.qualcomm.com> Signed-off-by: Kalle Valo <kvalo@qca.qualcomm.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  mwifiex: sdio: fix use after free issue for save_adapter  [Amitkumar Karwar, 1 file, -0/+6]
[ Upstream commit 74c8719b8ee0922593a5cbec0bd6127d86d8a2f4 ] If we have sdio work requests received when sdio card reset is happening, we may end up accessing older save_adapter pointer later which is already freed during card reset. This patch solves the problem by cancelling those pending requests. Signed-off-by: Amitkumar Karwar <akarwar@marvell.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  adm80211: return an error if adm8211_alloc_rings() fails  [Dan Carpenter, 1 file, -1/+2]
[ Upstream commit c705a6b3aa7804d7bc6660183f51e510c61dc807 ] We accidentally return success when adm8211_alloc_rings() fails but we should preserve the error code. Fixes: cc0b88cf5ecf ("[PATCH] Add adm8211 802.11b wireless driver") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  rt2800: set minimum MPDU and PSDU lengths to sane values  [Stanislaw Gruszka, 1 file, -2/+2]
[ Upstream commit a51b89698ccc93c7e274eb71377fae49c4593ab2 ] Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  drm/armada: Fix compile fail  [Daniel Vetter, 1 file, -0/+2]
[ Upstream commit 7357f89954b6d005df6ab8929759e78d7d9a80f9 ] I reported the include issue for tracepoints a while ago, but nothing seems to have happened. Now it bit us, since the drm_mm_print conversion was broken for armada. Fix it, so I can re-enable armada in the drm-misc build configs. v2: Rebase just the compile fix on top of Chris' build fix. Cc: Russell King <rmk+kernel@armlinux.org.uk> Cc: Chris Wilson <chris@chris-wilson.co.uk> Acked: Chris Wilson <chris@chris-wilson.co.uk> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com> Link: http://patchwork.freedesktop.org/patch/msgid/1483115932-19584-1-git-send-email-daniel.vetter@ffwll.ch Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  net: 3com: typhoon: typhoon_init_one: fix incorrect return values  [Thomas Preisner, 1 file, -3/+6]
[ Upstream commit 107fded7bf616ad6f46823d98b8ed6405d7adf2d ] In a few cases the err-variable is not set to a negative error code if a function call in typhoon_init_one() fails and thus 0 is returned instead. It may be better to set err to the appropriate negative error code before returning. Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=188841 Reported-by: Pan Bian <bianpan2016@163.com> Signed-off-by: Thomas Preisner <thomas.preisner+linux@fau.de> Signed-off-by: Milan Stephan <milan.stephan+linux@fau.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  net: 3com: typhoon: typhoon_init_one: make return values more specific  [Thomas Preisner, 1 file, -8/+8]
[ Upstream commit 6b6bbb5922a4b1d4b58125a572da91010295fba3 ] In some cases the return value of a failing function is not being used and the function typhoon_init_one() returns another negative error code instead. Signed-off-by: Thomas Preisner <thomas.preisner+linux@fau.de> Signed-off-by: Milan Stephan <milan.stephan+linux@fau.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  net: Allow IP_MULTICAST_IF to set index to L3 slave  [David Ahern, 2 files, -5/+18]
[ Upstream commit 7bb387c5ab12aeac3d5eea28686489ff46b53ca9 ] IP_MULTICAST_IF fails if sk_bound_dev_if is already set and the new index does not match it. e.g., ntpd[15381]: setsockopt IP_MULTICAST_IF 192.168.1.23 fails: Invalid argument Relax the check in setsockopt to allow setting mc_index to an L3 slave if sk_bound_dev_if points to an L3 master. Make a similar change for IPv6. In this case change the device lookup to take the rcu_read_lock avoiding a refcnt. The rcu lock is also needed for the lookup of a potential L3 master device. This really only silences a setsockopt failure since uses of mc_index are secondary to sk_bound_dev_if if it is set. In both cases, if either index is an L3 slave or master, lookups are directed to the same FIB table so relaxing the check at setsockopt time causes no harm. Patch is based on a suggested change by Darwin for a problem noted in their code base. Suggested-by: Darwin Dingel <darwin.dingel@alliedtelesis.co.nz> Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  fscrypt: use ENOTDIR when setting encryption policy on nondirectory  [Eric Biggers, 1 file, -1/+1]
[ Upstream commit dffd0cfa06d4ed83bb3ae8eb067989ceec5d18e1 ] As part of an effort to clean up fscrypt-related error codes, make FS_IOC_SET_ENCRYPTION_POLICY fail with ENOTDIR when the file descriptor does not refer to a directory. This is more descriptive than EINVAL, which was ambiguous with some of the other error cases. I am not aware of any users who might be relying on the previous error code of EINVAL, which was never documented anywhere, and in some buggy kernels did not exist at all as the S_ISDIR() check was missing. This failure case will be exercised by an xfstest. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
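The check itself is a one-liner; the substance of the patch is picking the descriptive errno. A hedged sketch (the ioctl plumbing around it is omitted):

    #include <linux/errno.h>
    #include <linux/fs.h>

    static int check_policy_target(struct inode *inode)
    {
            if (!S_ISDIR(inode->i_mode))
                    return -ENOTDIR;        /* was -EINVAL, which was ambiguous */
            return 0;
    }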
2017-11-30  fscrypt: use ENOKEY when file cannot be created w/o key  [Eric Biggers, 5 files, -7/+12]
[ Upstream commit 54475f531bb8d7078f63c159e5e0615d486c498c ] As part of an effort to clean up fscrypt-related error codes, make attempting to create a file in an encrypted directory that hasn't been "unlocked" fail with ENOKEY. Previously, several error codes were used for this case, including ENOENT, EACCES, and EPERM, and they were not consistent between and within filesystems. ENOKEY is a better choice because it expresses that the failure is due to lacking the encryption key. It also matches the error code returned when trying to open an encrypted regular file without the key. I am not aware of any users who might be relying on the previous inconsistent error codes, which were never documented anywhere. This failure case will be exercised by an xfstest. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  dmaengine: zx: set DMA_CYCLIC cap_mask bit  [Shawn Guo, 1 file, -0/+1]
[ Upstream commit fc318d64f3d91e15babac00e08354b1beb650b57 ] The zx_dma driver supports cyclic transfer mode. Let's set DMA_CYCLIC cap_mask bit to make that clear, and avoid unnecessary failure when clients request channel via dma_request_chan_by_mask() with DMA_CYCLIC bit set in mask. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Reviewed-by: Jun Nie <jun.nie@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  clk: sunxi-ng: fix PLL_CPUX adjusting on A33  [Icenowy Zheng, 1 file, -0/+10]
[ Upstream commit 790d929b540661945d1c70652ffb602c5c06ad85 ] When adjusting PLL_CPUX on A33, the PLL is temporarily driven too high, and the system hangs. Add a notifier to avoid this situation by temporarily switching to a known stable 24 MHz oscillator. Signed-off-by: Icenowy Zheng <icenowy@aosc.xyz> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  clk: sunxi-ng: A31: Fix spdif clock register  [Marcus Cooper, 1 file, -2/+2]
[ Upstream commit 70421257c068b91476e70cade15fca68045d0693 ] As the SPDIF was rarely documented on the earlier Allwinner SoCs, it was assumed that it had a similar clock register to the one described in the H3 User Manual. However, this is not the case, and it looks to share the same setup as the I2S clock registers. Signed-off-by: Marcus Cooper <codekipper@gmail.com> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  drm/sun4i: Fix a return value in case of error  [Christophe JAILLET, 1 file, -1/+1]
[ Upstream commit 0f0861e31e3c59ca4bc1ec59d99260cfca79740e ] If 'sun4i_backend_drm_format_to_layer()' does not return 0, then 'val' is left unmodified. As it is not initialized either, the return value can be anything. It is likely that returning the error code was expected here. As the only caller of 'sun4i_backend_update_layer_formats()' does not check the return value, this fix is purely theorical. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  PCI: Apply _HPX settings only to relevant devices  [Bjorn Helgaas, 1 file, -2/+13]
[ Upstream commit 977509f7c5c6fb992ffcdf4291051af343b91645 ] Previously we didn't check the type of device before trying to apply Type 1 (PCI-X) or Type 2 (PCIe) Setting Records from _HPX. We don't support PCI-X Setting Records, so this was harmless, but the warning was useless. We do support PCIe Setting Records, and we didn't check whether a device was PCIe before applying settings. I don't think anything bad happened on non-PCIe devices because pcie_capability_clear_and_set_word(), pcie_cap_has_lnkctl(), etc., would fail before doing any harm. But it's ugly to depend on those internals. Check the device type before attempting to apply Type 1 and Type 2 Setting Records (Type 0 records are applicable to PCI, PCI-X, and PCIe devices). A side benefit is that this prevents useless "not supported" warnings when a BIOS supplies a Type 1 (PCI-X) Setting Record and we try to apply it to every single device: pci 0000:00:00.0: PCI-X settings not supported After this patch, we'll get the warning only when a BIOS supplies a Type 1 record and we have a PCI-X device to which it should be applied. Link: https://bugzilla.kernel.org/show_bug.cgi?id=187731 Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
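The guard amounts to matching the record type against the device type before applying it. A hedged sketch using the stock helpers; the _HPX record parsing itself is omitted and this is not the actual pci/probe.c code:

    #include <linux/pci.h>

    /* Apply a _HPX setting record only to the kind of device it targets. */
    static bool hpx_record_applies(struct pci_dev *dev, int hpx_type)
    {
            switch (hpx_type) {
            case 1:         /* Type 1 records are for PCI-X devices */
                    return pci_find_capability(dev, PCI_CAP_ID_PCIX) != 0;
            case 2:         /* Type 2 records are for PCIe devices */
                    return pci_is_pcie(dev);
            default:        /* Type 0 applies to PCI, PCI-X and PCIe alike */
                    return true;
            }
    }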
2017-11-30  RDS: RDMA: fix the ib_map_mr_sg_zbva() argument  [Santosh Shilimkar, 1 file, -2/+3]
[ Upstream commit 3e56c2f856d7aba6a03feea834d68f9c05f7d0b6 ] Fixes warning: Using plain integer as NULL pointer Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  RDS: RDMA: return appropriate error on rdma map failures  [Santosh Shilimkar, 1 file, -1/+10]
[ Upstream commit 584a8279a44a800dea5a5c1e9d53a002e03016b4 ] The first message to a remote node should prompt a new connection even if it is an RDMA operation. For an RDMA operation the MR mapping can fail because the connection is not yet up. Since connection establishment is asynchronous, we make sure a map failure caused by the unavailable connection reaches the user with an appropriate error code. Before returning to the user, let's trigger the connection so that it's ready for the next retry. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  RDS: make message size limit compliant with spec  [Avinash Repaka, 3 files, -1/+42]
[ Upstream commit f9fb69adb6c7acca60977a4db5a5f95b8e66c041 ] RDS supports a maximum message size of 1M, but the code doesn't check this in all cases. The patch fixes it for RDMA & non-RDMA and for the RDS MR size, and it is enforced irrespective of the underlying transport. Signed-off-by: Avinash Repaka <avinash.repaka@oracle.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  e1000e: Avoid receiver overrun interrupt bursts  [Benjamin Poirier, 2 files, -8/+26]
commit 4aea7a5c5e940c1723add439f4088844cd26196d upstream. When e1000e_poll() is not fast enough to keep up with incoming traffic, the adapter (when operating in msix mode) raises the Other interrupt to signal Receiver Overrun. This is a double problem because 1) at the moment e1000_msix_other() assumes that it is only called in case of Link Status Change and 2) if the condition persists, the interrupt is repeatedly raised again in quick succession. Ideally we would configure the Other interrupt to not be raised in case of receiver overrun but this doesn't seem possible on this adapter. Instead, we handle the first part of the problem by reverting to the practice of reading ICR in the other interrupt handler, like before commit 16ecba59bc33 ("e1000e: Do not read ICR in Other interrupt"). Thanks to commit 0a8047ac68e5 ("e1000e: Fix msi-x interrupt automask") which cleared IAME from CTRL_EXT, reading ICR doesn't interfere with RxQ0, TxQ0 interrupts anymore. We handle the second part of the problem by not re-enabling the Other interrupt right away when there is overrun. Instead, we wait until traffic subsides, napi polling mode is exited and interrupts are re-enabled. Reported-by: Lennart Sorensen <lsorense@csclub.uwaterloo.ca> Fixes: 16ecba59bc33 ("e1000e: Do not read ICR in Other interrupt") Signed-off-by: Benjamin Poirier <bpoirier@suse.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Amit Pundir <amit.pundir@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  e1000e: Separate signaling for link check/link up  [Benjamin Poirier, 2 files, -4/+9]
commit 19110cfbb34d4af0cdfe14cd243f3b09dc95b013 upstream. Lennart reported the following race condition:

    \ e1000_watchdog_task
        \ e1000e_has_link
            \ hw->mac.ops.check_for_link() === e1000e_check_for_copper_link
                /* link is up */
                mac->get_link_status = false;
                /* interrupt */
                \ e1000_msix_other
                    hw->mac.get_link_status = true;
            link_active = !hw->mac.get_link_status
            /* link_active is false, wrongly */

This problem arises because the single flag get_link_status is used to signal two different states: link status needs checking and link status is down. Avoid the problem by using the return value of .check_for_link to signal the link status to e1000e_has_link(). Reported-by: Lennart Sorensen <lsorense@csclub.uwaterloo.ca> Signed-off-by: Benjamin Poirier <bpoirier@suse.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Amit Pundir <amit.pundir@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  e1000e: Fix return value test  [Benjamin Poirier, 1 file, -1/+1]
commit d3509f8bc7b0560044c15f0e3ecfde1d9af757a6 upstream. All the helpers return -E1000_ERR_PHY. Signed-off-by: Benjamin Poirier <bpoirier@suse.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Amit Pundir <amit.pundir@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  e1000e: Fix error path in link detection  [Benjamin Poirier, 1 file, -3/+4]
commit c4c40e51f9c32c6dd8adf606624c930a1c4d9bbb upstream. In case of error from e1e_rphy(), the loop will exit early and "success" will be set to true erroneously. Signed-off-by: Benjamin Poirier <bpoirier@suse.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Amit Pundir <amit.pundir@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  Revert "drm/i915: Do not rely on wm preservation for ILK watermarks"  [Greg Kroah-Hartman, 2 files, -22/+31]
This reverts commit 7de694782cbe7840f2c0de6f1e70f41fc1b8b6e8 which is commit 8777b927b92cf5b6c29f9f9d3c737addea9ac8a7 upstream. It was reported to cause flickering and other regressions. Reported-by: Rainer Fiebig <jrf@mailbox.org> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Jani Nikula <jani.nikula@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> reverted:
2017-11-30  PM / OPP: Add missing of_node_put(np)  [Tobias Jordan, 1 file, -0/+1]
commit 7978db344719dab1e56d05e6fc04aaaddcde0a5e upstream. The for_each_available_child_of_node() loop in _of_add_opp_table_v2() doesn't drop the reference to "np" on errors. Fix that. Fixes: 274659029c9d (PM / OPP: Add support to parse "operating-points-v2" bindings) Signed-off-by: Tobias Jordan <Tobias.Jordan@elektrobit.com> [ VK: Improved commit log. ] Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-11-30  net/9p: Switch to wait_event_killable()  [Tuomas Tynkkynen, 2 files, -9/+7]
commit 9523feac272ccad2ad8186ba4fcc89103754de52 upstream. Because userspace gets Very Unhappy when calls like stat() and execve() return -EINTR on 9p filesystem mounts. For instance, when bash is looking in PATH for things to execute and some SIGCHLD interrupts stat(), bash can throw a spurious 'command not found' since it doesn't retry the stat(). In practice, hitting the problem is rare and needs a really slow/bogged down 9p server. Signed-off-by: Tuomas Tynkkynen <tuomas@tuxera.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
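The practical difference: wait_event_interruptible() aborts on any signal and lets -EINTR escape from syscalls like stat(), while wait_event_killable() only reacts to fatal signals. A hedged illustration of the call pattern; the wait queue and condition are placeholders, not the 9p structures:

    #include <linux/types.h>
    #include <linux/wait.h>

    static int wait_for_reply(wait_queue_head_t *wq, bool *done)
    {
            /* Only a fatal signal interrupts the wait, so a stray SIGCHLD no
             * longer surfaces as -EINTR from stat() or execve(). */
            return wait_event_killable(*wq, *done); /* 0 or -ERESTARTSYS */
    }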
2017-11-30  fscrypt: lock mutex before checking for bounce page pool  [Eric Biggers, 1 file, -3/+0]
commit a0b3bc855374c50b5ea85273553485af48caf2f7 upstream. fscrypt_initialize(), which allocates the global bounce page pool when an encrypted file is first accessed, uses "double-checked locking" to try to avoid locking fscrypt_init_mutex. However, it doesn't use any memory barriers, so it's theoretically possible for a thread to observe a bounce page pool which has not been fully initialized. This is a classic bug with "double-checked locking". While "only a theoretical issue" in the latest kernel, in pre-4.8 kernels the pointer that was checked was not even the last to be initialized, so it was easily possible for a crash (NULL pointer dereference) to happen. This was changed only incidentally by the large refactor to use fs/crypto/. Solve both problems in a trivial way that can easily be backported: just always take the mutex. It's theoretically less efficient, but it shouldn't be noticeable in practice as the mutex is only acquired very briefly once per encrypted file. Later I'd like to make this use a helper macro like DO_ONCE(). However, DO_ONCE() runs in atomic context, so we'd need to add a new macro that allows blocking. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
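The takeaway generalizes beyond fscrypt: a lock-free double-checked fast path needs careful barrier pairing, and when the critical section is short and rarely contended, simply taking the mutex every time is both correct and cheap. A hedged sketch of the "always lock" shape; the pool type and allocation are stand-ins, not the fs/crypto code:

    #include <linux/errno.h>
    #include <linux/gfp.h>
    #include <linux/mutex.h>
    #include <linux/slab.h>

    static DEFINE_MUTEX(pool_init_mutex);
    static void *bounce_pool;               /* stand-in for the real pool */

    static int ensure_pool(void)
    {
            int err = 0;

            /* No unlocked peek at bounce_pool: the mutex is taken once per
             * encrypted file, so the cost is negligible and the ordering is
             * trivially correct. */
            mutex_lock(&pool_init_mutex);
            if (!bounce_pool)
                    bounce_pool = kzalloc(PAGE_SIZE, GFP_KERNEL);   /* stand-in allocation */
            if (!bounce_pool)
                    err = -ENOMEM;
            mutex_unlock(&pool_init_mutex);
            return err;
    }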
2017-11-30  sched/rt: Simplify the IPI based RT balancing logic  [Steven Rostedt (Red Hat), 3 files, -127/+138]
commit 4bdced5c9a2922521e325896a7bbbf0132c94e56 upstream. When a CPU lowers its priority (schedules out a high priority task for a lower priority one), a check is made to see if any other CPU has overloaded RT tasks (more than one). It checks the rto_mask to determine this and if so it will request to pull one of those tasks to itself if the non running RT task is of higher priority than the new priority of the next task to run on the current CPU.

When we deal with large number of CPUs, the original pull logic suffered from large lock contention on a single CPU run queue, which caused a huge latency across all CPUs. This was caused by only having one CPU having overloaded RT tasks and a bunch of other CPUs lowering their priority. To solve this issue, commit: b6366f048e0c ("sched/rt: Use IPI to trigger RT task push migration instead of pulling") changed the way to request a pull. Instead of grabbing the lock of the overloaded CPU's runqueue, it simply sent an IPI to that CPU to do the work.

Although the IPI logic worked very well in removing the large latency build up, it still could suffer from a large number of IPIs being sent to a single CPU. On a 80 CPU box, I measured over 200us of processing IPIs. Worse yet, when I tested this on a 120 CPU box, with a stress test that had lots of RT tasks scheduling on all CPUs, it actually triggered the hard lockup detector! One CPU had so many IPIs sent to it, and due to the restart mechanism that is triggered when the source run queue has a priority status change, the CPU spent minutes! processing the IPIs.

Thinking about this further, I realized there's no reason for each run queue to send its own IPI. As all CPUs with overloaded tasks must be scanned regardless if there's one or many CPUs lowering their priority, because there's no current way to find the CPU with the highest priority task that can schedule to one of these CPUs, there really only needs to be one IPI being sent around at a time. This greatly simplifies the code! The new approach is to have each root domain have its own irq work, as the rto_mask is per root domain.

The root domain has the following fields attached to it:

  rto_push_work  - the irq work to process each CPU set in rto_mask
  rto_lock       - the lock to protect some of the other rto fields
  rto_loop_start - an atomic that keeps contention down on rto_lock; the first CPU scheduling in a lower priority task is the one to kick off the process.
  rto_loop_next  - an atomic that gets incremented for each CPU that schedules in a lower priority task.
  rto_loop       - a variable protected by rto_lock that is used to compare against rto_loop_next
  rto_cpu        - The cpu to send the next IPI to, also protected by the rto_lock.

When a CPU schedules in a lower priority task and wants to make sure overloaded CPUs know about it, it increments the rto_loop_next. Then it atomically sets rto_loop_start with a cmpxchg. If the old value is not "0", then it is done, as another CPU is kicking off the IPI loop. If the old value is "0", then it will take the rto_lock to synchronize with a possible IPI being sent around to the overloaded CPUs. If rto_cpu is greater than or equal to nr_cpu_ids, then there's either no IPI being sent around, or one is about to finish. Then rto_cpu is set to the first CPU in rto_mask and an IPI is sent to that CPU. If there's no CPUs set in rto_mask, then there's nothing to be done.
When the CPU receives the IPI, it will first try to push any RT tasks that is queued on the CPU but can't run because a higher priority RT task is currently running on that CPU. Then it takes the rto_lock and looks for the next CPU in the rto_mask. If it finds one, it simply sends an IPI to that CPU and the process continues. If there's no more CPUs in the rto_mask, then rto_loop is compared with rto_loop_next. If they match, everything is done and the process is over. If they do not match, then a CPU scheduled in a lower priority task as the IPI was being passed around, and the process needs to start again. The first CPU in rto_mask is sent the IPI. This change removes this duplication of work in the IPI logic, and greatly lowers the latency caused by the IPIs. This removed the lockup happening on the 120 CPU machine. It also simplifies the code tremendously. What else could anyone ask for? Thanks to Peter Zijlstra for simplifying the rto_loop_start atomic logic and supplying me with the rto_start_trylock() and rto_start_unlock() helper functions. Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Clark Williams <williams@redhat.com> Cc: Daniel Bristot de Oliveira <bristot@redhat.com> Cc: John Kacur <jkacur@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Scott Wood <swood@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20170424114732.1aac6dc4@gandalf.local.home Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
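The piece that keeps at most one IPI chain in flight per root domain is the rto_loop_start handshake described above. A hedged sketch of just that handshake, following the helper names the changelog mentions; the surrounding scheduler state and irq work are omitted:

    #include <linux/atomic.h>

    /* Only the CPU that flips rto_loop_start from 0 to 1 starts the IPI
     * chain; everyone else is already accounted for via rto_loop_next. */
    static bool rto_start_trylock(atomic_t *v)
    {
            return !atomic_cmpxchg_acquire(v, 0, 1);
    }

    static void rto_start_unlock(atomic_t *v)
    {
            atomic_set_release(v, 0);
    }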
2017-11-30  media: v4l2-ctrl: Fix flags field on Control events  [Ricardo Ribalda Delgado, 1 file, -4/+12]
commit 9cac9d2fb2fe0e0cadacdb94415b3fe49e3f724f upstream. VIDIOC_DQEVENT and VIDIOC_QUERY_EXT_CTRL should give the same output for the control flags field. This patch creates a new function user_flags(), that calculates the user exported flags value (which is different than the kernel internal flags structure). This function is then used by all the code that exports the internal flags to userspace. Reported-by: Dimitrios Katsaros <patcherwork@gmail.com> Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com> Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
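The essence of the fix is a single helper that computes the user-visible flags, used by both the query and the event paths so they cannot diverge. A hedged sketch of the idea; the exact set of derived flags may differ from the real v4l2-ctrls code:

    #include <media/v4l2-ctrls.h>

    /* One place to translate internal control state into the flags value
     * exposed to userspace by DQEVENT and QUERY_EXT_CTRL alike. */
    static u32 user_flags(const struct v4l2_ctrl *ctrl)
    {
            u32 flags = ctrl->flags;

            if (ctrl->is_ptr)
                    flags |= V4L2_CTRL_FLAG_HAS_PAYLOAD;

            return flags;
    }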