path: root/mm/hmm.c
Age         Commit message  /  Author  /  Files  /  Lines (-/+)
2025-10-04  Merge tag 'dma-mapping-6.18-2025-09-30' of git://git.kernel.org/pub/scm/linux...  Linus Torvalds  1  -9/+10
2025-09-22  mm/hmm: populate PFNs from PMD swap entry  Francois Dugast  1  -5/+65
2025-09-12  mm/hmm: properly take MMIO path  Leon Romanovsky  1  -7/+8
2025-09-12  mm/hmm: migrate to physical address-based DMA mapping API  Leon Romanovsky  1  -4/+4
2025-07-20  mm/hmm: move pmd_to_hmm_pfn_flags() to the respective #ifdeffery  Andy Shevchenko  1  -1/+1
2025-07-10  mm: remove devmap related functions and page table bits  Alistair Popple  1  -2/+1
2025-07-10  mm: remove redundant pXd_devmap calls  Alistair Popple  1  -2/+2
2025-07-10  mm: convert vmf_insert_mixed() from using pte_devmap to pte_special  Alistair Popple  1  -3/+0
2025-07-10  mm: convert pXd_devmap checks to vma_is_dax  Alistair Popple  1  -1/+1
2025-05-25  RDMA/core: Avoid hmm_dma_map_alloc() for virtual DMA devices  Daisuke Matsuda  1  -2/+1
2025-05-12  mm/hmm: provide generic DMA managing logic  Leon Romanovsky  1  -1/+213
2025-05-12  mm/hmm: let users tag specific PFN with DMA mapped bit  Leon Romanovsky  1  -19/+32
2025-03-18  mm: allow compound zone device pages  Alistair Popple  1  -1/+1
2024-07-13  mm: provide mm_struct and address to huge_ptep_get()  Christophe Leroy  1  -1/+1
2024-04-26  mm/treewide: replace pXd_huge() with pXd_leaf()  Peter Xu  1  -1/+1
2024-04-26  mm/hmm: process pud swap entry without pud_huge()  Peter Xu  1  -6/+1
2023-08-21  mm: enable page walking API to lock vmas during the walk  Suren Baghdasaryan  1  -0/+1
2023-06-20  mm: ptep_get() conversion  Ryan Roberts  1  -1/+1
2023-06-20  mm/hmm: retry if pte_offset_map() fails  Hugh Dickins  1  -0/+2
2023-06-20  mm: use pmdp_get_lockless() without surplus barrier()  Hugh Dickins  1  -1/+1
2023-01-19  mm/hugetlb: make walk_hugetlb_range() safe to pmd unshare  Peter Xu  1  -1/+14
2022-12-15  mm: Remove pointless barrier() after pmdp_get_lockless()  Peter Zijlstra  1  -1/+0
2022-12-15  mm: Rename pmd_read_atomic()  Peter Zijlstra  1  -1/+1
2022-09-27  mm/swap: add swp_offset_pfn() to fetch PFN from swap entry  Peter Xu  1  -1/+1
2022-07-29  mm/hmm: fault non-owner device private entries  Ralph Campbell  1  -11/+8
2022-05-13  mm: teach core mm about pte markers  Peter Xu  1  -1/+1
2022-03-23  mm/hmm.c: remove unneeded local variable ret  Miaohe Lin  1  -2/+1
2022-01-15  mm/hmm.c: allow VM_MIXEDMAP to work with hmm_range_fault  Alistair Popple  1  -2/+3
2021-09-09  mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled  Li Zhijian  1  -1/+4
2021-07-01  mm: device exclusive memory access  Alistair Popple  1  -0/+5
2021-07-01  mm/swapops: rework swap entry manipulation code  Alistair Popple  1  -1/+1
2021-07-01  mm: remove special swap entry functions  Alistair Popple  1  -3/+2
2020-08-12  mm: do page fault accounting in handle_mm_fault  Peter Xu  1  -1/+2
2020-08-12  mm/hmm.c: delete duplicated word  Randy Dunlap  1  -1/+1
2020-07-10  mm/hmm: provide the page mapping order in hmm_range_fault()  Ralph Campbell  1  -3/+13
2020-06-09  mmap locking API: add mmap_assert_locked() and mmap_assert_write_locked()  Michel Lespinasse  1  -1/+1
2020-05-11  mm/hmm: remove the customizable pfn format from hmm_range_fault  Jason Gunthorpe  1  -84/+76
2020-05-11  mm/hmm: remove HMM_PFN_SPECIAL  Jason Gunthorpe  1  -1/+1
2020-05-11  mm/hmm: make hmm_range_fault return 0 or -1  Jason Gunthorpe  1  -16/+9
2020-03-30  mm/hmm: return error for non-vma snapshots  Jason Gunthorpe  1  -3/+5
2020-03-30  mm/hmm: do not set pfns when returning an error code  Jason Gunthorpe  1  -15/+3
2020-03-30  mm/hmm: do not unconditionally set pfns when returning EBUSY  Jason Gunthorpe  1  -3/+4
2020-03-30  mm/hmm: use device_private_entry_to_pfn()  Jason Gunthorpe  1  -1/+1
2020-03-28  mm/hmm: remove HMM_FAULT_SNAPSHOT  Jason Gunthorpe  1  -8/+9
2020-03-28  mm/hmm: remove unused code and tidy comments  Jason Gunthorpe  1  -7/+17
2020-03-28  mm/hmm: return the fault type from hmm_pte_need_fault()  Jason Gunthorpe  1  -102/+81
2020-03-28  mm/hmm: remove pgmap checking for devmap pages  Jason Gunthorpe  1  -48/+2
2020-03-26  mm/hmm: check the device private page owner in hmm_range_fault()  Christoph Hellwig  1  -1/+9
2020-03-26  mm: simplify device private page handling in hmm_range_fault  Christoph Hellwig  1  -20/+5
2020-03-26  mm: merge hmm_vma_do_fault into into hmm_vma_walk_hole_  Christoph Hellwig  1  -34/+16
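Several of the entries above change the hmm_range_fault() calling convention: the 2020-05-11 commits make it return 0 or a negative errno and fix the hmm_pfns output format, and later commits adjust when -EBUSY is returned. As a reminder of how a caller drives this interface, below is a minimal sketch of the usual snapshot loop following Documentation/mm/hmm.rst; example_snapshot() is a hypothetical helper written for illustration, not code from this file, and error handling beyond the -EBUSY retry is omitted.

/*
 * Minimal sketch (illustrative only): drive hmm_range_fault() under an
 * mmu_interval_notifier. The caller must supply a pfns array with one
 * entry per page in [start, end) and, before using the result, must
 * still validate the snapshot with mmu_interval_read_retry() under its
 * own lock against range.notifier_seq.
 */
#include <linux/hmm.h>
#include <linux/mmu_notifier.h>
#include <linux/mm.h>

static int example_snapshot(struct mmu_interval_notifier *notifier,
			    unsigned long start, unsigned long end,
			    unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier	= notifier,
		.start		= start,
		.end		= end,
		.hmm_pfns	= pfns,
		.default_flags	= HMM_PFN_REQ_FAULT,
	};
	int ret;

	do {
		/* Record the notifier sequence this snapshot is valid against. */
		range.notifier_seq = mmu_interval_read_begin(notifier);

		mmap_read_lock(notifier->mm);
		ret = hmm_range_fault(&range);
		mmap_read_unlock(notifier->mm);

		/* -EBUSY means an invalidation raced with us; try again. */
	} while (ret == -EBUSY);

	return ret;
}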