path: root/Documentation
Age | Commit message | Author | Files | Lines
2025-03-19 | riscv: hwprobe: export Zaamo and Zalrsc extensions | Clément Léger | 1 | -0/+8

Export the Zaamo and Zalrsc extensions to userspace using hwprobe.

Signed-off-by: Clément Léger <cleger@rivosinc.com>
Reviewed-by: Charlie Jenkins <charlie@rivosinc.com>
Link: https://lore.kernel.org/r/20240619153913.867263-4-cleger@rivosinc.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>

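A minimal userspace sketch of the query this enables. The RISCV_HWPROBE_EXT_ZAAMO
and RISCV_HWPROBE_EXT_ZALRSC bit names are assumptions based on this entry; check
<asm/hwprobe.h> on the target kernel before relying on them.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/hwprobe.h>

    int main(void)
    {
        /* Probe the base-ISA extension bitmask key. */
        struct riscv_hwprobe probe = { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 };

        /* One pair, all CPUs (empty cpuset), no flags. */
        if (syscall(__NR_riscv_hwprobe, &probe, 1, 0, NULL, 0) != 0) {
            perror("riscv_hwprobe");
            return 1;
        }
        printf("Zaamo:  %s\n", (probe.value & RISCV_HWPROBE_EXT_ZAAMO) ? "yes" : "no");
        printf("Zalrsc: %s\n", (probe.value & RISCV_HWPROBE_EXT_ZALRSC) ? "yes" : "no");
        return 0;
    }
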
2025-03-19 | dt-bindings: riscv: add Zaamo and Zalrsc ISA extension description | Clément Léger | 1 | -0/+19

Add a description for the Zaamo and Zalrsc ISA extensions [1].

Link: https://github.com/riscv/riscv-zaamo-zalrsc [1]
Signed-off-by: Clément Léger <cleger@rivosinc.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20240619153913.867263-2-cleger@rivosinc.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>

2025-03-19 | x86/rfds: Exclude P-only parts from the RFDS affected list | Pawan Gupta | 1 | -8/+0

The affected CPU table (cpu_vuln_blacklist) marks Alderlake and Raptorlake
P-only parts as affected by RFDS. This is not true, because only E-cores
are affected by RFDS. With the current family/model matching it is not
possible to differentiate the unaffected parts, as the affected and
unaffected hybrid variants have the same model number.

Add a cpu-type match as well for such parts, so that P-only parts are no
longer marked as affected.

Note that family/model and cpu-type enumeration could be inaccurate in
virtualized environments. In a guest, the affected status is decided by the
RFDS_NO and RFDS_CLEAR bits exposed by the VMM.

Signed-off-by: Pawan Gupta <pawan.kumar.gupta@linux.intel.com>
Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/r/20250311-add-cpu-type-v8-5-e8514dcaaff2@linux.intel.com

2025-03-19 | Merge drm/drm-next into drm-xe-next | Thomas Hellström | 32 | -98/+1184

Backmerging to bring in the xe shrinker from drm-next.

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

2025-03-19 | Merge tag 'v6.14-rc7' into x86/core, to pick up fixes | Ingo Molnar | 8 | -5/+21

Signed-off-by: Ingo Molnar <mingo@kernel.org>

2025-03-19 | Merge branch 'for-linus' into for-next | Takashi Iwai | 29 | -27/+214

Back-merge of the 6.14 devel branch for further development of the TAS
codecs.

Signed-off-by: Takashi Iwai <tiwai@suse.de>

2025-03-19 | docs/zh_CN: fix spelling mistake | Peng Jiang | 1 | -1/+1

The word "watermark" was misspelled as "watemark".

Signed-off-by: Peng Jiang <jiang.peng9@zte.com.cn>
Reviewed-by: Dongliang Mu <dzm91@hust.edu.cn>
Signed-off-by: Alex Shi <alexs@kernel.org>
Link: https://lore.kernel.org/r/20250317100811126QvOaWRPxSgm2ttU5faitl@zte.com.cn

2025-03-19 | docs/Chinese: change the disclaimer words | Alex Shi | 1 | -5/+3

The two-paragraph warning and note take up a bit more space than needed;
merge them together and point readers to the other maintainers and
reviewers.

Cc: Yanteng Si <si.yanteng@linux.dev>
Cc: Dongliang Mu <dzm91@hust.edu.cn>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: linux-doc@vger.kernel.org
Reviewed-by: Tang Yizhou <yizhou.tang@shopee.com>
Reviewed-by: Dongliang Mu <dzm91@hust.edu.cn>
Signed-off-by: Alex Shi <alexs@kernel.org>
Link: https://lore.kernel.org/r/20250305025101.27717-1-alexs@kernel.org

2025-03-19 | docs/zh_CN: Add snp-tdx-threat-model index Chinese translation | Yuxian Mao | 2 | -1/+210

Translate .../security/snp-tdx-threat-model.rst into Chinese. The
translation is updated through commit cdae7e8a69c3 ("docs/MAINTAINERS:
Update my email address"). A pdfdocs warning was fixed by Alex Shi.

Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Signed-off-by: Yuxian Mao <maoyuxian@cqsoftware.com.cn>
Signed-off-by: Alex Shi <alexs@kernel.org>
Link: https://lore.kernel.org/r/20250304071401.117780-1-maoyuxian@cqsoftware.com.cn

2025-03-18 | dt-bindings: i2c: i2c-rk3x: Add rk3562 support | Kever Yang | 1 | -0/+1

The rk3562 I2C controller is compatible with the existing rk3399 binding.

Signed-off-by: Kever Yang <kever.yang@rock-chips.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Acked-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Andi Shyti <andi.shyti@kernel.org>
Link: https://lore.kernel.org/r/20250227111913.2344207-5-kever.yang@rock-chips.com

2025-03-18 | dt-bindings: i2c: imx-lpi2c: add i.MX94 LPI2C | Frank Li | 1 | -0/+1

Add the compatible string "fsl,imx94-lpi2c" for the i.MX94 chip, which is
backward compatible with the i.MX7ULP, and make it fall back to
"fsl,imx7ulp-lpi2c".

Signed-off-by: Frank Li <Frank.Li@nxp.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: Andi Shyti <andi.shyti@kernel.org>
Link: https://lore.kernel.org/r/20250306155815.110514-1-Frank.Li@nxp.com

2025-03-18 | dt-bindings: i2c: qup: Document interconnects | Stephan Gerhold | 1 | -0/+5

When the I2C QUP controller is used together with a DMA engine it needs to
vote for the interconnect path to the DRAM. Otherwise it may be unable to
access the memory quickly enough.

Signed-off-by: Stephan Gerhold <stephan.gerhold@kernkonzept.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Andi Shyti <andi.shyti@kernel.org>
Link: https://lore.kernel.org/r/20231128-i2c-qup-dvfs-v1-2-59a0e3039111@kernkonzept.com

2025-03-18 | dt-bindings: i2c: qcom,i2c-qup: Document power-domains | Stephan Gerhold | 1 | -0/+9

Similar to qcom,geni-i2c, for i2c-qup we need to vote for performance
states on the VDDCX power domain to ensure that required clock rates can be
generated correctly. I2C is typically used with a fixed clock rate, so a
single required-opp is sufficient without a full OPP table (unlike spi-qup,
for example).

Signed-off-by: Stephan Gerhold <stephan.gerhold@kernkonzept.com>
Reviewed-by: Rob Herring <robh@kernel.org>
Signed-off-by: Andi Shyti <andi.shyti@kernel.org>
Link: https://lore.kernel.org/r/20231128-i2c-qup-dvfs-v1-1-59a0e3039111@kernkonzept.com

2025-03-18 | dt-bindings: i2c: exynos5: add exynos7870-hsi2c compatible | Kaustabh Chakraborty | 1 | -0/+1

Exynos7870's HS-I2C controllers are entirely compatible with
samsung,exynos7-hsi2c. Document Exynos7870's HS-I2C compatible string
appropriately.

Signed-off-by: Kaustabh Chakraborty <kauschluss@disroot.org>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Andi Shyti <andi.shyti@kernel.org>
Link: https://lore.kernel.org/r/20250204-exynos7870-i2c-v1-2-63d67871ab7e@disroot.org

2025-03-18 | dt-bindings: i2c: samsung,s3c2410: add exynos7870-i2c compatible | Kaustabh Chakraborty | 1 | -0/+1

Exynos7870's (non-HS) I2C controllers are entirely compatible with
samsung,s3c2440-i2c. Document Exynos7870's compatible string appropriately.

Signed-off-by: Kaustabh Chakraborty <kauschluss@disroot.org>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Andi Shyti <andi.shyti@kernel.org>
Link: https://lore.kernel.org/r/20250204-exynos7870-i2c-v1-1-63d67871ab7e@disroot.org

2025-03-18 | Documentation: userspace-api: iommufd: Update FAULT and VEVENTQ | Nicolin Chen | 1 | -0/+17

With the introduction of the new FAULT and VEVENTQ objects, update the doc
to reflect that.

Link: https://patch.msgid.link/r/09829fbc218872d242323d8834da4bec187ce6f4.1741719725.git.nicolinc@nvidia.com
Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Bagas Sanjaya <bagasdotme@gmail.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

2025-03-18 | dt-bindings: mtd: atmel,dataflash: convert txt to yaml | Nayab Sayed | 2 | -17/+55

Convert atmel-dataflash.txt into atmel,dataflash.yaml.

Signed-off-by: Nayab Sayed <nayabbasha.sayed@microchip.com>
Reviewed-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>

2025-03-18 | dt-bindings: mtd: gpmi-nand: Add compatible string for i.MX8 chips | Frank Li | 1 | -0/+7

Add the compatible strings "fsl,imx8mp-gpmi-nand" and
"fsl,imx8mq-gpmi-nand", which are backward compatible with the i.MX7D, and
make them fall back to "fsl,imx7d-gpmi-nand".

Add the compatible strings "fsl,imx8qm-gpmi-nand" and
"fsl,imx8dxl-gpmi-nand", which are backward compatible with the i.MX8QXP,
and make them fall back to "fsl,imx8qxp-gpmi-nand".

Signed-off-by: Frank Li <Frank.Li@nxp.com>
Reviewed-by: Han Xu <han.xu@nxp.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>

2025-03-18 | ASoC: codecs: Add aw88166 amplifier driver | Mark Brown | 1 | -0/+1

Merge series from wangweidong.a@awinic.com:

Add the awinic,aw88166 property to support the aw88166 chip. The driver is
for the aw88166 amplifier from Awinic Technology Corporation. The AW88166
is a high-efficiency digital Smart K audio amplifier.

2025-03-18 | add sof support on imx95 | Mark Brown | 3 | -0/+95

Merge series from Laurentiu Mihalcea <laurentiu.mihalcea@nxp.com>:

Add SOF support on imx95. This series also includes some changes to the
audio-graph-card2 binding required for the support.

2025-03-18 | hwmon: add driver for HTU31 | Andrei Lalaev | 2 | -0/+38

Add base support for the HTU31 temperature and humidity sensor. Besides
temperature and humidity values, the driver also exports a 24-bit heater
control to sysfs and the serial number to debugfs.

Signed-off-by: Andrei Lalaev <andrey.lalaev@gmail.com>
Link: https://lore.kernel.org/r/20250217051110.46827-2-andrey.lalaev@gmail.com
[groeck: Fixed continuation line alignment]
Signed-off-by: Guenter Roeck <linux@roeck-us.net>

2025-03-18 | dt-bindings: hwmon: Add description for sensor HTU31 | Andrei Lalaev | 1 | -0/+2

Add a trivial binding for the HTU31 temperature and humidity sensor.

Signed-off-by: Andrei Lalaev <andrey.lalaev@gmail.com>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/r/20250217051110.46827-3-andrey.lalaev@gmail.com
Signed-off-by: Guenter Roeck <linux@roeck-us.net>

2025-03-18 | hwmon: Add driver for TI INA233 Current and Power Monitor | Leo Yang | 2 | -0/+76

Add a driver for the Texas Instruments INA233 Current and Power Monitor
with I2C-, SMBus-, and PMBus-compatible interface.

Signed-off-by: Leo Yang <leo.yang.sy0@gmail.com>
Link: https://lore.kernel.org/r/20250116085939.1235598-3-leo.yang.sy0@gmail.com
Signed-off-by: Guenter Roeck <linux@roeck-us.net>

2025-03-18 | RISC-V: hwprobe: Expose Zicbom extension and its block size | Yunhui Cui | 1 | -0/+6

Expose Zicbom through hwprobe and also provide a key to extract its
respective block size.

[ alex: Fix merge conflicts and hwprobe numbering ]

Reviewed-by: Andrew Jones <ajones@ventanamicro.com>
Reviewed-by: Samuel Holland <samuel.holland@sifive.com>
Signed-off-by: Yunhui Cui <cuiyunhui@bytedance.com>
Link: https://lore.kernel.org/r/20250226063206.71216-3-cuiyunhui@bytedance.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>

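A hedged sketch of probing the extension bit and the block-size key in one
call; the RISCV_HWPROBE_EXT_ZICBOM and RISCV_HWPROBE_KEY_ZICBOM_BLOCK_SIZE
names are assumptions based on this entry:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/hwprobe.h>

    int main(void)
    {
        struct riscv_hwprobe pairs[] = {
            { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 },        /* extension bitmask */
            { .key = RISCV_HWPROBE_KEY_ZICBOM_BLOCK_SIZE } /* value-type key */
        };

        if (syscall(__NR_riscv_hwprobe, pairs, 2, 0, NULL, 0) != 0)
            return 1;
        if (pairs[0].value & RISCV_HWPROBE_EXT_ZICBOM)
            printf("Zicbom block size: %llu bytes\n",
                   (unsigned long long)pairs[1].value);
        return 0;
    }
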
2025-03-18 | spi: Merge up fixes | Mark Brown | 16 | -16/+52

They are a dependency for applying some changes to the MAINTAINERS file.

2025-03-18 | regulator: dt-bindings: rtq2208: Cleanup whitespace | ChiYuan Huang | 1 | -1/+1

Remove the whitespace that checkpatch flagged.

Signed-off-by: ChiYuan Huang <cy_huang@richtek.com>
Link: https://patch.msgid.link/9faa2e3aaa0fca0e66e478df4f59c6ec4bfc8def.1742295647.git.cy_huang@richtek.com
Signed-off-by: Mark Brown <broonie@kernel.org>

2025-03-18 | regulator: dt-bindings: rtq2208: Mark fixed LDO VOUT property as deprecated | ChiYuan Huang | 1 | -0/+1

Since a fixed LDO VOUT can be identified from the HW register, mark the
now-unnecessary property as deprecated.

Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: ChiYuan Huang <cy_huang@richtek.com>
Link: https://patch.msgid.link/f813c7b49c152193f24198c4baf2c3f04cb0a74d.1742295647.git.cy_huang@richtek.com
Signed-off-by: Mark Brown <broonie@kernel.org>

2025-03-18 | Merge patch series "riscv: Add bfloat16 instruction support" | Alexandre Ghiti | 2 | -0/+57

Inochi Amaoto <inochiama@gmail.com> says:

Add descriptions for the BFloat16 precision floating-point ISA extensions
(Zfbfmin, Zvfbfmin, Zvfbfwma), which were ratified in commit 4dc23d62
("Added Chapter title to BF16") of the riscv-isa-manual.

* patches from https://lore.kernel.org/r/20250213003849.147358-1-inochiama@gmail.com:
  riscv: hwprobe: export bfloat16 ISA extension
  riscv: add ISA extension parsing for bfloat16 ISA extension
  dt-bindings: riscv: add bfloat16 ISA extension description

Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>
Link: https://lore.kernel.org/r/20250213003849.147358-1-inochiama@gmail.com

2025-03-18 | riscv: hwprobe: export bfloat16 ISA extension | Inochi Amaoto | 1 | -0/+12

Export the Zfbfmin, Zvfbfmin and Zvfbfwma ISA extensions through hwprobe.

Signed-off-by: Inochi Amaoto <inochiama@gmail.com>
Reviewed-by: Clément Léger <cleger@rivosinc.com>
Link: https://lore.kernel.org/r/20250213003849.147358-4-inochiama@gmail.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>

2025-03-18 | dt-bindings: riscv: add bfloat16 ISA extension description | Inochi Amaoto | 1 | -0/+45

Add descriptions for the BFloat16 precision floating-point ISA extensions
(Zfbfmin, Zvfbfmin, Zvfbfwma), which were ratified in commit 4dc23d62
("Added Chapter title to BF16") of the riscv-isa-manual.

Signed-off-by: Inochi Amaoto <inochiama@gmail.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20250213003849.147358-2-inochiama@gmail.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>

2025-03-18 | Merge tag 'linux-can-next-for-6.15-20250314' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next | Paolo Abeni | 1 | -0/+13

Marc Kleine-Budde says:

====================
pull-request: can-next 2025-03-14

This is a pull request of 4 patches for net-next/main.

The first 2 patches, by Dimitri Fedrau, add CAN transceiver support to the
flexcan driver.

Frank Li's patch adds i.MX94 support to the flexcan device tree bindings.

The last patch is by Davide Caratti and adds a protocol counter for AF_CAN
sockets.

linux-can-next-for-6.15-20250314

* tag 'linux-can-next-for-6.15-20250314' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next:
  can: add protocol counter for AF_CAN sockets
  dt-bindings: can: fsl,flexcan: add i.MX94 support
  can: flexcan: add transceiver capabilities
  dt-bindings: can: fsl,flexcan: add transceiver capabilities
====================

Link: https://patch.msgid.link/20250314132327.2905693-1-mkl@pengutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-03-18 | Merge tag 'batadv-next-pullrequest-20250313' of git://git.open-mesh.org/linux-merge | Paolo Abeni | 1 | -1/+1

Simon Wunderlich says:

====================
This feature/cleanup patchset includes the following patches:

 - bump version strings, by Simon Wunderlich
 - drop batadv_priv_debug_log struct, by Sven Eckelmann
 - adopt netdev_hold() / netdev_put(), by Eric Dumazet
 - add support for jumbo frames, by Sven Eckelmann
 - use consistent name for mesh interface, by Sven Eckelmann
 - cleanup B.A.T.M.A.N. IV OGM aggregation handling, by Sven Eckelmann
   (4 patches)
 - add missing newlines for log macros, by Sven Eckelmann

* tag 'batadv-next-pullrequest-20250313' of git://git.open-mesh.org/linux-merge:
  batman-adv: add missing newlines for log macros
  batman-adv: Limit aggregation size to outgoing MTU
  batman-adv: Use actual packet count for aggregated packets
  batman-adv: Switch to bitmap helper for aggregation handling
  batman-adv: Limit number of aggregated packets directly
  batman-adv: Use consistent name for mesh interface
  batman-adv: Add support for jumbo frames
  batman-adv: adopt netdev_hold() / netdev_put()
  batman-adv: Drop batadv_priv_debug_log struct
  batman-adv: Start new development cycle
====================

Link: https://patch.msgid.link/20250313164519.72808-1-sw@simonwunderlich.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-03-18 | Merge tag 'samsung-pinctrl-6.15' of https://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/samsung into devel | Linus Walleij | 2 | -0/+5

Samsung pinctrl drivers changes for v6.15:

1. Add pin controller drivers for the newly upstreamed Samsung Exynos2200
   and Exynos7870.
2. Correct the filter configuration offset of some of the Google GS101 SoC
   pin banks, which is later supposed to be used during system
   suspend/resume.

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

2025-03-18 | bnxt_en: Add devlink support for ENABLE_ROCE nvm parameter | Pavan Chebbi | 1 | -0/+2

Add set/show support for the ENABLE_ROCE NVM parameter to enable/disable
RoCE for a PF.

Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Co-developed-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: Pavan Chebbi <pavan.chebbi@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://patch.msgid.link/20250310183129.3154117-4-michael.chan@broadcom.com
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-03-18 | riscv: hwprobe: export Zicntr and Zihpm extensions | Miquel Sabaté Solà | 1 | -0/+6

Export the Zicntr and Zihpm ISA extensions through the hwprobe syscall.

[ alex: Fix hwprobe numbering ]

Signed-off-by: Miquel Sabaté Solà <mikisabate@gmail.com>
Acked-by: Jesse Taube <jesse@rivosinc.com>
Reviewed-by: Conor Dooley <conor.dooley@microchip.com>
Link: https://lore.kernel.org/r/20240913051324.8176-1-mikisabate@gmail.com
Signed-off-by: Alexandre Ghiti <alexghiti@rivosinc.com>

2025-03-18 | Merge net-next/main to resolve conflicts | Johannes Berg | 21 | -100/+770

There are a few conflicts between the work that went into wireless and what
is here now; resolve them.

Signed-off-by: Johannes Berg <johannes.berg@intel.com>

2025-03-18 | mm: page_alloc: defrag_mode | Johannes Weiner | 1 | -0/+9

The page allocator groups requests by migratetype to stave off
fragmentation. However, in practice this is routinely defeated by the fact
that it gives up *before* invoking reclaim and compaction - which may well
produce suitable pages. As a result, fragmentation of physical memory is a
common ongoing process in many load scenarios.

Fragmentation deteriorates compaction's ability to produce huge pages.
Depending on the lifetime of the fragmenting allocations, those effects can
be long-lasting or even permanent, requiring drastic measures like forcible
idle states or even reboots as the only reliable ways to recover the
address space for THP production.

In a kernel build test with supplemental THP pressure, the THP allocation
rate steadily declines over 15 runs:

    thp_fault_alloc
    61988 56474 57258 50187 52388
    55409 52925 47648 43669 40621
    36077 41721 36685 34641 33215

This is a hurdle in adopting THP in any environment where hosts are shared
between multiple overlapping workloads (cloud environments), and rarely
experience true idle periods. To make THP a reliable and predictable
optimization, there needs to be a stronger guarantee to avoid such
fragmentation.

Introduce defrag_mode. When enabled, reclaim/compaction is invoked to its
full extent *before* falling back. Specifically, ALLOC_NOFRAGMENT is
enforced on the allocator fastpath and the reclaiming slowpath.

For now, fallbacks are permitted to avert OOMs. There is a plan to add
defrag_mode=2 to prefer OOMs over fragmentation, but this requires
additional prep work in compaction and the reserve management to make it
ready for all possible allocation contexts.

The following test results are from a kernel build with periodic bursts of
THP allocations, over 15 runs:

                                           vanilla  defrag_mode=1
    @claimer[unmovable]:                       189            103
    @claimer[movable]:                          92            103
    @claimer[reclaimable]:                     207             61
    @pollute[unmovable from movable]:           25              0
    @pollute[unmovable from reclaimable]:       28              0
    @pollute[movable from unmovable]:        38835              0
    @pollute[movable from reclaimable]:     147136              0
    @pollute[reclaimable from unmovable]:      178              0
    @pollute[reclaimable from movable]:         33              0
    @steal[unmovable from movable]:             11              0
    @steal[unmovable from reclaimable]:          5              0
    @steal[reclaimable from unmovable]:        107              0
    @steal[reclaimable from movable]:           90              0
    @steal[movable from reclaimable]:          354              0
    @steal[movable from unmovable]:            130              0

Both types of polluting fallbacks are eliminated in this workload.

Interestingly, whole block conversions are reduced as well. This is because
once a block is claimed for a type, its empty space remains available for
future allocations, instead of being padded with fallbacks; this allows the
native type to group up instead of spreading out to new blocks.

The assumption in the allocator has been that pollution from movable
allocations is less harmful than from other types, since they can be
reclaimed or migrated out should the space be needed. However, since
fallbacks occur *before* reclaim/compaction is invoked, movable pollution
will still cause non-movable allocations to spread out and claim more
blocks.

Without fragmentation, THP rates hold steady with defrag_mode=1:

    thp_fault_alloc
    32478 20725 45045 32130 14018
    21711 40791 29134 34458 45381
    28305 17265 22584 28454 30850

While the downward trend is eliminated, the keen reader will of course
notice that the baseline rate is much smaller than the vanilla kernel's to
begin with. This is due to deficiencies in how reclaim and compaction are
currently driven: ALLOC_NOFRAGMENT increases the extent to which smaller
allocations are competing with THPs for pageblocks, while making no effort
themselves to reclaim or compact beyond their own request size. This effect
already exists with the current usage of ALLOC_NOFRAGMENT, but is amplified
by defrag_mode insisting on whole block stealing much more strongly.

Subsequent patches will address defrag_mode reclaim strategy to raise the
THP success baseline above the vanilla kernel.

Link: https://lkml.kernel.org/r/20250313210647.1314586-4-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

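Runtime control is a single sysctl; a minimal sketch follows, assuming the
vm.defrag_mode path (/proc/sys/vm/defrag_mode) implied by this entry:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed sysctl file; 0 = vanilla fallback behavior,
         * 1 = exhaust reclaim/compaction before falling back. */
        FILE *f = fopen("/proc/sys/vm/defrag_mode", "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        fputs("1\n", f);
        fclose(f);
        return 0;
    }
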
2025-03-18 | xarray: add xas_try_split() to split a multi-index entry | Zi Yan | 1 | -1/+13

Patch series "Buddy allocator like (or non-uniform) folio split", v10.

This patchset adds a new buddy allocator like (or non-uniform) large folio
split from an order-n folio to order-m with m < n. It reduces:

1. the total number of after-split folios from 2^(n-m) to n-m+1;
2. the amount of memory needed for multi-index xarray split from
   2^(n/6-m/6) to n/6-m/6, assuming XA_CHUNK_SHIFT=6;
3. keeps more large folios after a split: the result ranges from
   order-(n-1) down to order-m instead of being all order-m folios.

For example, to split an order-9 folio to order-0, folio split generates 10
(or 11 for anonymous memory) folios instead of 512, allocates 1 xa_node
instead of 8, and leaves 1 order-8, 1 order-7, ..., 1 order-1 and 2 order-0
folios (or 4 order-0 for anonymous memory) instead of 512 order-0 folios.

Instead of duplicating existing split_huge_page*() code, __folio_split() is
introduced as the shared backend code for both
split_huge_page_to_list_to_order() and folio_split(). __folio_split() can
support both uniform split and buddy allocator like (or non-uniform) split.
All existing split_huge_page*() users can be gradually converted to use
folio_split() if possible. In this patchset, I converted
truncate_inode_partial_folio() to use folio_split().

xfstests quick group passed for both tmpfs and xfs. I also semi-replicated
Hugh's test[12] and ran it without any issue for almost 24 hours.

This patch (of 8):

A preparation patch for non-uniform folio split, which always splits a
folio into half iteratively, and minimal xarray entry split.

Currently, xas_split_alloc() and xas_split() always split all slots from a
multi-index entry. They cost the same number of xa_node as the to-be-split
slots. For example, to split an order-9 entry, which takes 2^(9-6)=8 slots,
assuming XA_CHUNK_SHIFT is 6 (!CONFIG_BASE_SMALL), 8 xa_node are needed.
Instead, xas_try_split() is intended to be used iteratively to split the
order-9 entry into 2 order-8 entries, then split one order-8 entry, based
on the given index, into 2 order-7 entries, ..., and split one order-1
entry into 2 order-0 entries. When splitting the order-6 entry and a new
xa_node is needed, xas_try_split() will try to allocate one if possible. As
a result, xas_try_split() would only need 1 xa_node instead of 8.

When a new xa_node is needed during the split, xas_try_split() can try to
allocate one but no more. -ENOMEM will be returned if a node cannot be
allocated. -EINVAL will be returned if a sibling node is split or a cascade
split happens, where two or more new nodes are needed; these are not
supported by xas_try_split().

xas_split_alloc() and xas_split() split an order-9 to order-0:

     ---------------------------------
     |   |   |   |   |   |   |   |   |
     | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
     |   |   |   |   |   |   |   |   |
     ---------------------------------
       |   |                   |   |
     -------   ---         ---   -------
     |         |     ...     |         |
     V         V             V         V
     ----------- -----------     ----------- -----------
     | xa_node | | xa_node | ... | xa_node | | xa_node |
     ----------- -----------     ----------- -----------

xas_try_split() splits an order-9 to order-0:

     ---------------------------------
     |   |   |   |   |   |   |   |   |
     | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
     |   |   |   |   |   |   |   |   |
     ---------------------------------
                     |
                     |
                     V
                -----------
                | xa_node |
                -----------

Link: https://lkml.kernel.org/r/20250307174001.242794-1-ziy@nvidia.com
Link: https://lkml.kernel.org/r/20250307174001.242794-2-ziy@nvidia.com
Signed-off-by: Zi Yan <ziy@nvidia.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Kirill A. Shuemov <kirill.shutemov@linux.intel.com>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Yang Shi <yang@os.amperecomputing.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Kairui Song <kasong@tencent.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

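A hedged kernel-side sketch of the iterative usage the text describes; the
exact signature and the xas_set_order() interplay are assumptions based on
this entry, not a verbatim copy of the API:

    /* Split the order-9 entry at @index down to order-0, one order at a
     * time, so at most one xa_node has to be allocated per step. */
    XA_STATE(xas, &mapping->i_pages, index);
    unsigned int order;

    xas_lock_irq(&xas);
    for (order = 9; order > 0; order--) {
        void *entry = xas_load(&xas);

        xas_set_order(&xas, index, order - 1);  /* target order */
        xas_try_split(&xas, entry, order);      /* order-n -> 2x order-(n-1) */
        if (xas_error(&xas))
            break;  /* -ENOMEM or -EINVAL as described above */
    }
    xas_unlock_irq(&xas);
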
2025-03-18 | Docs/admin-guide/mm/damon/usage: update for {core,ops}_filters directories | SeongJae Park | 1 | -9/+22

Document the {core,ops}_filters directories on the usage document.

Link: https://lkml.kernel.org/r/20250305222733.59089-9-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-03-18 | Docs/ABI/damon: document {core,ops}_filters directories | SeongJae Park | 1 | -0/+16

Document the new DAMOS filters sysfs directories on the ABI doc.

Link: https://lkml.kernel.org/r/20250305222733.59089-8-sj@kernel.org
Signed-off-by: SeongJae Park <sj@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-03-18 | mm: stop maintaining the per-page mapcount of large folios (CONFIG_NO_PAGE_MAPCOUNT) | David Hildenbrand | 4 | -12/+43

Everything is in place to stop using the per-page mapcounts in large
folios: the mapcount of tail pages will always be logically 0 (-1 value),
just like it currently is for hugetlb folios already, and the page mapcount
of the head page is either 0 (-1 value) or contains a page type (e.g.,
hugetlb).

Maintaining _nr_pages_mapped without per-page mapcounts is impossible, so
that one also has to go with CONFIG_NO_PAGE_MAPCOUNT.

There are two remaining implications:

(1) Per-node, per-cgroup and per-lruvec stats of "NR_ANON_MAPPED" ("mapped
    anonymous memory") and "NR_FILE_MAPPED" ("mapped file memory"):

    As soon as any page of the folio is mapped -- folio_mapped() -- we now
    account the complete folio as mapped. Once the last page is unmapped --
    !folio_mapped() -- we account the complete folio as unmapped.

    This implies that ...

    * "AnonPages" and "Mapped" in /proc/meminfo and
      /sys/devices/system/node/*/meminfo
    * cgroup v2: "anon" and "file_mapped" in "memory.stat" and
      "memory.numa_stat"
    * cgroup v1: "rss" and "mapped_file" in "memory.stat" and
      "memory.numa_stat"

    ... can now appear higher than before. But note that these folios do
    consume that memory, simply not all pages are actually currently
    mapped.

    It's worth noting that other accounting in the kernel (esp. cgroup
    charging on allocation) is not affected by this change. [why oh why is
    "anon" called "rss" in cgroup v1]

(2) Detecting partial mappings

    Detecting whether anon THPs are partially mapped gets a bit more
    unreliable. As long as a single MM maps such a large folio
    ("exclusively mapped"), we can reliably detect it. Especially before
    fork() / after a short-lived child process quit, we will detect partial
    mappings reliably, which is the common case.

    In essence, if the average per-page mapcount in an anon THP is < 1, we
    know for sure that we have a partial mapping.

    However, as soon as multiple MMs are involved, we might miss detecting
    partial mappings: this might be relevant with long-lived child
    processes. If we have a fully-mapped anon folio before fork(), once our
    child processes and our parent all unmap (zap/COW) the same pages (but
    not the complete folio), we might not detect the partial mapping.
    However, once the child processes quit we would detect the partial
    mapping.

    How relevant this case is in practice remains to be seen.
    Swapout/migration will likely mitigate this.

    In the future, RMAP walkers could check for that case (e.g., when
    collecting access bits during reclaim) and simply flag them for
    deferred-splitting.

Link: https://lkml.kernel.org/r/20250303163014.1128035-21-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Andy Lutomirks^H^Hski <luto@kernel.org>
Cc: Borislav Betkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcow (Oracle) <willy@infradead.org>
Cc: Michal Koutn <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: tejun heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zefan Li <lizefan.x@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-03-18 | fs/proc/task_mmu: remove per-page mapcount dependency for smaps/smaps_rollup (CONFIG_NO_PAGE_MAPCOUNT) | David Hildenbrand | 1 | -3/+19

Let's implement an alternative when per-page mapcounts in large folios are
no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT.

When computing the output for smaps / smaps_rollup, in particular when
calculating the USS (Unique Set Size) and the PSS (Proportional Set Size),
we still rely on per-page mapcounts.

To determine private vs. shared, we'll use folio_likely_mapped_shared(),
similar to how we handle PM_MMAP_EXCLUSIVE. Similarly, we might now
under-estimate the USS and count pages towards "shared" that are actually
"private" ("exclusively mapped").

When calculating the PSS, we'll now also use the average per-page mapcount
for large folios: this can result in both an over-estimation and an
under-estimation of the PSS. The difference is not expected to matter much
in practice, but we'll have to learn as we go.

We can now provide folio_precise_page_mapcount() only with
CONFIG_PAGE_MAPCOUNT, and remove one of the last users of per-page
mapcounts when CONFIG_NO_PAGE_MAPCOUNT is enabled.

Document the new behavior.

Link: https://lkml.kernel.org/r/20250303163014.1128035-20-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Andy Lutomirks^H^Hski <luto@kernel.org>
Cc: Borislav Betkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcow (Oracle) <willy@infradead.org>
Cc: Michal Koutn <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: tejun heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zefan Li <lizefan.x@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

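For reference, a small userspace sketch that reads the process-wide Pss
total this change affects; /proc/self/smaps_rollup is the standard
interface and reports values in kB:

    #include <stdio.h>
    #include <string.h>

    /* Returns the process-wide Pss in kB, or -1 on error. */
    long read_pss_kb(void)
    {
        char line[256];
        long pss = -1;
        FILE *f = fopen("/proc/self/smaps_rollup", "r");

        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "Pss:", 4) == 0) {
                sscanf(line + 4, "%ld", &pss);  /* value is in kB */
                break;
            }
        }
        fclose(f);
        return pss;
    }
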
2025-03-18 | fs/proc/task_mmu: remove per-page mapcount dependency for "mapmax" (CONFIG_NO_PAGE_MAPCOUNT) | David Hildenbrand | 1 | -0/+5

Let's implement an alternative when per-page mapcounts in large folios are
no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT.

For calculating "mapmax", we now use the average per-page mapcount in a
large folio instead of the per-page mapcount. For hugetlb folios and folios
that are not partially mapped into MMs, there is no change.

Likely, this change will not matter much in practice, and an alternative
might be to simply remove this stat with CONFIG_NO_PAGE_MAPCOUNT. However,
there might be value to it, so let's keep it like that and document the
behavior.

Link: https://lkml.kernel.org/r/20250303163014.1128035-19-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Andy Lutomirks^H^Hski <luto@kernel.org>
Cc: Borislav Betkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcow (Oracle) <willy@infradead.org>
Cc: Michal Koutn <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: tejun heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zefan Li <lizefan.x@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-03-18 | fs/proc/task_mmu: remove per-page mapcount dependency for PM_MMAP_EXCLUSIVE (CONFIG_NO_PAGE_MAPCOUNT) | David Hildenbrand | 1 | -0/+11

Let's implement an alternative when per-page mapcounts in large folios are
no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT.

PM_MMAP_EXCLUSIVE will now be cleared if folio_likely_mapped_shared() is
true -- when the folio is considered "mapped shared", including when it
once was "mapped shared" but no longer is, as documented.

This might result in an under-indication of "exclusively mapped", which is
considered better than over-indicating it: under-estimating the USS (Unique
Set Size) is better than over-estimating it.

As an alternative, we could simply remove that flag with
CONFIG_NO_PAGE_MAPCOUNT completely, but there might be value to it. So,
let's keep it like that and document the behavior.

Link: https://lkml.kernel.org/r/20250303163014.1128035-18-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Andy Lutomirks^H^Hski <luto@kernel.org>
Cc: Borislav Betkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcow (Oracle) <willy@infradead.org>
Cc: Michal Koutn <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: tejun heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zefan Li <lizefan.x@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

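A small userspace sketch of consuming the bit this entry changes; the
pagemap bit positions (63 = present, 56 = PM_MMAP_EXCLUSIVE) follow
Documentation/admin-guide/mm/pagemap.rst:

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Returns 1 if the page backing addr is mapped exclusively, 0 if not,
     * -1 on error or if the page is not present. */
    int page_mapped_exclusively(const void *addr)
    {
        uint64_t entry;
        long pagesize = sysconf(_SC_PAGESIZE);
        off_t offset = ((uintptr_t)addr / pagesize) * sizeof(entry);
        int fd = open("/proc/self/pagemap", O_RDONLY);
        int ok;

        if (fd < 0)
            return -1;
        ok = pread(fd, &entry, sizeof(entry), offset) == sizeof(entry);
        close(fd);
        if (!ok || !(entry & (1ULL << 63)))     /* bit 63: page present */
            return -1;
        return !!(entry & (1ULL << 56));        /* bit 56: PM_MMAP_EXCLUSIVE */
    }
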
2025-03-18 | fs/proc/page: remove per-page mapcount dependency for /proc/kpagecount (CONFIG_NO_PAGE_MAPCOUNT) | David Hildenbrand | 1 | -1/+6

Let's implement an alternative when per-page mapcounts in large folios are
no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT.

For large folios, we'll return the per-page average mapcount within the
folio, whereby we round to the closest integer when calculating the
average: however, we'll always return at least 1 if the folio is mapped.

So assuming a folio with 512 pages, the average would be:

* 0 if no pages are mapped
* 1 if there are 1 .. 767 per-page mappings
* 2 if there are 768 .. 1279 per-page mappings
...

For hugetlb folios and for large folios that are fully mapped into all
address spaces, there is no change.

We'll make use of this helper in other context next.

As an alternative, we could simply return 0 for non-hugetlb large folios,
or disable this legacy interface with CONFIG_NO_PAGE_MAPCOUNT. But the
information exposed by this interface can still be valuable, and frequently
we deal with fully-mapped large folios where the average corresponds to the
actual page mapcount. So we'll leave it like this for now and document the
new behavior.

Note: this interface is likely not very relevant for performance. If ever
required, we could try doing a rather expensive rmap walk to collect
precisely how often this folio page is mapped.

Link: https://lkml.kernel.org/r/20250303163014.1128035-17-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Andy Lutomirks^H^Hski <luto@kernel.org>
Cc: Borislav Betkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcow (Oracle) <willy@infradead.org>
Cc: Michal Koutn <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: tejun heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zefan Li <lizefan.x@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

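A minimal sketch of reading the value this entry redefines;
/proc/kpagecount holds one native-endian 64-bit count per PFN and generally
requires root:

    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Returns the mapcount reported for a PFN, or -1 on error. Under
     * CONFIG_NO_PAGE_MAPCOUNT this is the per-page average for large
     * folios, as described above. */
    int64_t kpagecount(uint64_t pfn)
    {
        int64_t count;
        int fd = open("/proc/kpagecount", O_RDONLY);
        int ok;

        if (fd < 0)
            return -1;
        ok = pread(fd, &count, sizeof(count), pfn * sizeof(count)) == sizeof(count);
        close(fd);
        return ok ? count : -1;
    }
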
2025-03-18 | mm/rmap: basic MM owner tracking for large folios (!hugetlb) | David Hildenbrand | 1 | -0/+8

For small folios, we traditionally use the mapcount to decide whether it
was "certainly mapped exclusively" by a single MM (mapcount == 1) or
whether it was "maybe mapped shared" by multiple MMs (mapcount > 1). For
PMD-sized folios that were PMD-mapped, we were able to use a similar
mechanism (single PMD mapping), but for PTE-mapped folios and in the future
folios that span multiple PMDs, this does not work.

So we need a different mechanism to handle large folios. Let's add a new
mechanism to detect whether a large folio is "certainly mapped exclusively"
or whether it is "maybe mapped shared".

We'll use this information next to optimize CoW reuse for PTE-mapped
anonymous THP, and to convert folio_likely_mapped_shared() to
folio_maybe_mapped_shared(), independent of per-page mapcounts.

For each large folio, we'll have two slots, whereby a slot stores:

(1) an MM id: a unique id assigned to each MM
(2) a per-MM mapcount

If a slot is unoccupied, it can be taken by the next MM that maps a folio
page.

In addition, we'll remember the current state -- "mapped exclusively" vs.
"maybe mapped shared" -- and use a bit spinlock to sync on updates and to
reduce the total number of atomic accesses on updates. In the future, it
might be possible to squeeze a proper spinlock into "struct folio". For
now, keep it simple, as we require the whole thing with THP only, which is
incompatible with RT.

As we have to squeeze this information into the "struct folio" of even
folios of order-1 (2 pages), and we generally want to reduce the required
metadata, we'll assign each MM a unique ID that can fit into an int. In
total, we can squeeze everything into 4x int (2x long) on 64bit.

32bit support is a bit challenging, because we only have 2x long == 2x int
in order-1 folios. But we can make it work for now, because we neither
expect many MMs nor very large folios on 32bit.

We will reliably detect folios as "mapped exclusively" vs. "mapped shared"
as long as only two MMs map pages of a folio at one point in time -- for
example with fork() and short-lived child processes, or with apps that hand
over state from one instance to another. As soon as three MMs are involved
at the same time, we might detect "maybe mapped shared" although the folio
is "mapped exclusively".

Example 1:

(1) App1 faults in a (shmem/file-backed) folio page -> Tracked as MM0
(2) App2 faults in a folio page -> Tracked as MM1
(3) App1 unmaps all folio pages

-> We will detect "mapped exclusively".

Example 2:

(1) App1 faults in a (shmem/file-backed) folio page -> Tracked as MM0
(2) App2 faults in a folio page -> Tracked as MM1
(3) App3 faults in a folio page -> No slot available, tracked as "unknown"
(4) App1 and App2 unmap all folio pages

-> We will detect "maybe mapped shared".

Make use of __always_inline to keep possible performance degradation when
(un)mapping large folios to a minimum.

Note: by squeezing the two flags into the "unsigned long" that stores the
MM ids, we can use non-atomic __bit_spin_unlock() and non-atomic
setting/clearing of the "maybe mapped shared" bit, effectively not adding
any new atomics on the hot path when updating the large mapcount + new
metadata, which further helps reduce the runtime overhead in
micro-benchmarks.

Link: https://lkml.kernel.org/r/20250303163014.1128035-13-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Andy Lutomirks^H^Hski <luto@kernel.org>
Cc: Borislav Betkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Lance Yang <ioworker0@gmail.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcow (Oracle) <willy@infradead.org>
Cc: Michal Koutn <mkoutny@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: tejun heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Zefan Li <lizefan.x@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

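To make the slot mechanics concrete, a toy userspace model of the two-slot
scheme described above; this is an illustration with invented names, not
the kernel data structure (which packs ids, counts and flags far more
tightly):

    #include <stdbool.h>

    #define MM_ID_NONE 0

    struct mm_track {
        int  mm_id[2];      /* unique MM ids, MM_ID_NONE if slot free */
        long mapcount[2];   /* per-MM mapcount for each tracked MM */
        bool unknown;       /* a third MM mapped a page: sticky "unknown" */
    };

    static void track_map(struct mm_track *t, int mm_id)
    {
        for (int i = 0; i < 2; i++) {
            if (t->mm_id[i] == mm_id || t->mm_id[i] == MM_ID_NONE) {
                t->mm_id[i] = mm_id;
                t->mapcount[i]++;
                return;
            }
        }
        t->unknown = true;  /* no slot available (Example 2, step 3) */
    }

    static void track_unmap(struct mm_track *t, int mm_id)
    {
        for (int i = 0; i < 2; i++) {
            if (t->mm_id[i] != mm_id)
                continue;
            if (--t->mapcount[i] == 0)
                t->mm_id[i] = MM_ID_NONE;  /* free the slot */
            return;
        }
    }

    static bool maybe_mapped_shared(const struct mm_track *t)
    {
        /* shared if an untracked MM was ever involved, or if two
         * tracked MMs currently map pages of the folio */
        return t->unknown || (t->mapcount[0] > 0 && t->mapcount[1] > 0);
    }
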
2025-03-18 | dcssblk: mark DAX broken, remove FS_DAX_LIMITED support | Dan Williams | 1 | -1/+0

The dcssblk driver has long needed special case support to enable limited
dax operation, so-called CONFIG_FS_DAX_LIMITED. This mode works around the
incomplete support for ZONE_DEVICE on s390 by forgoing the ability of
dax-mapped pages to support GUP.

Now, pending cleanups to fsdax that fix its reference counting [1] depend
on the ability of all dax drivers to supply ZONE_DEVICE pages.

To allow that work to move forward, dax support needs to be paused for
dcssblk until ZONE_DEVICE support arrives. That work has been known for a
few years [2], and the removal of "pte_devmap" requirements [3] makes the
conversion easier.

For now, place the support behind CONFIG_BROKEN, and remove PFN_SPECIAL
(dcssblk was the only user).

Link: http://lore.kernel.org/cover.9f0e45d52f5cff58807831b6b867084d0b14b61c.1725941415.git-series.apopple@nvidia.com [1]
Link: http://lore.kernel.org/20210820210318.187742e8@thinkpad/ [2]
Link: http://lore.kernel.org/4511465a4f8429f45e2ac70d2e65dc5e1df1eb47.1725941415.git-series.apopple@nvidia.com [3]
Link: https://lkml.kernel.org/r/33eef2379c0d240f40cc15453fad2df1a4ae34c8.1740713401.git-series.apopple@nvidia.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Tested-by: Alexander Gordeev <agordeev@linux.ibm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Tested-by: Alison Schofield <alison.schofield@intel.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
Cc: Sven Schnelle <svens@linux.ibm.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Asahi Lina <lina@asahilina.net>
Cc: Balbir Singh <balbirs@nvidia.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chunyan Zhang <zhang.lyra@gmail.com>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: linmiaohe <linmiaohe@huawei.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Michael "Camp Drill Sergeant" Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Ted Ts'o <tytso@mit.edu>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: WANG Xuerui <kernel@xen0n.name>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

2025-03-18 | docs: driver-api: firmware: clarify userspace requirements | Jacek Lawrynowicz | 1 | -0/+5

The guidelines mention that firmware updates can't break the kernel, but
they don't state directly that updates can't break userspace programs. Make
it explicit that firmware updates cannot break UAPI.

Signed-off-by: Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
Acked-by: Dave Airlie <airlied@redhat.com>
[jc: fixed "no trailing newline"]
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Link: https://lore.kernel.org/r/20250314100137.2972355-1-jacek.lawrynowicz@linux.intel.com

2025-03-18 | docs: clarify rules wrt tagging other people | Thorsten Leemhuis | 2 | -16/+36

Point out that explicit permission is usually needed to tag other people in
changes, but mention that implicit permission can be sufficient in certain
cases. This fixes slight inconsistencies between Reported-by: and
Suggested-by: and makes the usage more intuitive.

While at it, explicitly mention the dangers of our bugzilla instance, as it
makes it easy to forget that email addresses visible there are only shown
to logged-in users. The latter is not a theoretical issue, as one
maintainer mentioned that his employer received an EU GDPR (General Data
Protection Regulation) complaint after exposing an email address used in
bugzilla through a tag in a patch description.

Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Simona Vetter <simona.vetter@ffwll.ch>
Cc: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Reviewed-by: Shuah Khan <skhan@linuxfoundation.org>
Signed-off-by: Thorsten Leemhuis <linux@leemhuis.info>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>
Link: https://lore.kernel.org/r/588cf2763baa8fea1f4825f4eaa7023fe88bb6c1.1738852082.git.linux@leemhuis.info

2025-03-18 | docs: Remove outdated highuid.rst documentation | kth | 2 | -81/+0

The highuid.rst document describes a transition that is outdated and no
longer relevant. Additionally, it references filesystems (ncpfs and smbfs)
which have been removed or replaced.

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Kang Taeho <kangtaeho2456@gmail.com>
Link: https://lore.kernel.org/r/20250313145650.278346-1-kangtaeho2456@gmail.com
Signed-off-by: Jonathan Corbet <corbet@lwn.net>