path: root/arch/arc/include
2022-03-24  Merge tag 'asm-generic-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic  (Linus Torvalds, 3 files, -53/+0)

Pull asm-generic updates from Arnd Bergmann:
"There are three sets of updates for 5.18 in the asm-generic tree:

 - The set_fs()/get_fs() infrastructure gets removed for good. This
   was already gone from all major architectures, but now we can
   finally remove it everywhere, which loses some particularly tricky
   and error-prone code. There is a small merge conflict against a
   parisc cleanup; the solution is to use their new version.

 - The nds32 architecture ends its tenure in the Linux kernel. The
   hardware is still used and the code is in reasonable shape, but
   the mainline port is not actively maintained any more, as all
   remaining users are thought to run vendor kernels that would never
   be updated to a future release.

 - A series from Masahiro Yamada cleans up some of the uapi header
   files to pass the compile-time checks"

* tag 'asm-generic-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic: (27 commits)
  nds32: Remove the architecture
  uaccess: remove CONFIG_SET_FS
  ia64: remove CONFIG_SET_FS support
  sh: remove CONFIG_SET_FS support
  sparc64: remove CONFIG_SET_FS support
  lib/test_lockup: fix kernel pointer check for separate address spaces
  uaccess: generalize access_ok()
  uaccess: fix type mismatch warnings from access_ok()
  arm64: simplify access_ok()
  m68k: fix access_ok for coldfire
  MIPS: use simpler access_ok()
  MIPS: Handle address errors for accesses above CPU max virtual user address
  uaccess: add generic __{get,put}_kernel_nofault
  nios2: drop access_ok() check from __put_user()
  x86: use more conventional access_ok() definition
  x86: remove __range_not_ok()
  sparc64: add __{get,put}_kernel_nofault()
  nds32: fix access_ok() checks in get/put_user
  uaccess: fix nios2 and microblaze get_user_8()
  sparc64: fix building assembly files
  ...
2022-03-21  arch: Add pmd_pfn() where it is missing  (Mike Rapoport, 2 files, -1/+1)

We need to use this function in common code, so define it for architectures and/or configurations that miss it. The result of pmd_pfn() will only be used if TRANSPARENT_HUGEPAGE is enabled, but a function or macro called pmd_pfn() must be defined, even on machines with two-level page tables.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
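For an architecture that lacks it, the definition typically just extracts the page frame number from the pmd value. A minimal sketch (illustrative, not the exact hunk applied here):

    /* Sketch: PFN of the page the pmd points at; the exact masking
     * depends on the architecture's pmd layout. */
    #define pmd_pfn(pmd)    (pmd_val(pmd) >> PAGE_SHIFT)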
2022-02-25  uaccess: remove CONFIG_SET_FS  (Arnd Bergmann, 3 files, -24/+0)

There are no remaining callers of set_fs(), so CONFIG_SET_FS can be removed globally, along with the thread_info field and any references to it. This turns access_ok() into a cheaper check against TASK_SIZE_MAX.

As CONFIG_SET_FS is now gone, drop all remaining references to set_fs()/get_fs(), mm_segment_t, user_addr_max() and uaccess_kernel().

Acked-by: Sam Ravnborg <sam@ravnborg.org> # for sparc32 changes
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Tested-by: Sergey Matyukevich <sergey.matyukevich@synopsys.com> # for arc changes
Acked-by: Stafford Horne <shorne@gmail.com> # [openrisc, asm-generic]
Acked-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2022-02-25  uaccess: generalize access_ok()  (Arnd Bergmann, 1 file, -29/+0)

There are many different ways that access_ok() is defined across architectures, but in the end, they all just compare against the user_addr_max() value or they accept anything.

Provide one definition that works for most architectures, checking against TASK_SIZE_MAX for user processes, or skipping the check inside of uaccess_kernel() sections.

For architectures without CONFIG_SET_FS, this should be the fastest check, as it comes down to a single comparison of a pointer against a compile-time constant, while the architecture-specific versions tend to do something more complex for historic reasons or get something wrong.

Type checking for __user annotations is handled inconsistently across architectures, but this is easily simplified as well by using an inline function that takes a 'const void __user *' argument. A handful of callers need an extra __user annotation for this.

Some architectures had a trick of using 33-bit or 65-bit arithmetic on the addresses to calculate the overflow; however, this simpler version uses fewer registers, which means it can produce better object code in the end despite needing a second (statically predicted) branch.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mark Rutland <mark.rutland@arm.com> [arm64, asm-generic]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Stafford Horne <shorne@gmail.com>
Acked-by: Dinh Nguyen <dinguyen@kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
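Stripped of the special cases, the generalized check described here reduces to the following shape (a simplified sketch of the asm-generic version):

    static inline int __access_ok(const void __user *ptr, unsigned long size)
    {
            unsigned long limit = TASK_SIZE_MAX;    /* compile-time constant */
            unsigned long addr = (unsigned long)ptr;

            /* size is folded into the bound so addr + size cannot wrap */
            return (size <= limit) && (addr <= (limit - size));
    }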
2022-01-23  Merge tag 'bitmap-5.17-rc1' of git://github.com/norov/linux  (Linus Torvalds, 1 file, -1/+0)

Pull bitmap updates from Yury Norov:

 - introduce for_each_set_bitrange()
 - use find_first_*_bit() instead of find_next_*_bit() where possible
 - unify for_each_bit() macros

* tag 'bitmap-5.17-rc1' of git://github.com/norov/linux:
  vsprintf: rework bitmap_list_string
  lib: bitmap: add performance test for bitmap_print_to_pagebuf
  bitmap: unify find_bit operations
  mm/percpu: micro-optimize pcpu_is_populated()
  Replace for_each_*_bit_from() with for_each_*_bit() where appropriate
  find: micro-optimize for_each_{set,clear}_bit()
  include/linux: move for_each_bit() macros from bitops.h to find.h
  cpumask: replace cpumask_next_* with cpumask_first_* where appropriate
  tools: sync tools/bitmap with mother linux
  all: replace find_next{,_zero}_bit with find_first{,_zero}_bit where appropriate
  cpumask: use find_first_and_bit()
  lib: add find_first_and_bit()
  arch: remove GENERIC_FIND_FIRST_BIT entirely
  include: move find.h from asm_generic to linux
  bitops: move find_bit_*_le functions from le.h to find.h
  bitops: protect find_first_{,zero}_bit properly
2022-01-15  include: move find.h from asm_generic to linux  (Yury Norov, 1 file, -1/+0)

The find_bit API and the bitmap API are closely related, but their inclusion paths differ: include/asm-generic and include/linux, respectively. In the past this caused a lot of trouble due to circular dependencies and/or undefined symbols. Fix this by moving find.h under include/linux.

Signed-off-by: Yury Norov <yury.norov@gmail.com>
Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
2021-12-29  arc: perf: Move static structs to where they're really used  (Alexey Brodkin, 1 file, -162/+0)

It is all well described by Stephen Rothwell, who initially spotted it:

----------------------------->8----------------------------
After merging the origin tree, today's linux-next build (arc haps_hs_smp_defconfig+kselftest) produced these warnings:

arch/arc/include/asm/perf_event.h:126:27: warning: 'arc_pmu_cache_map' defined but not used [-Wunused-const-variable=]
arch/arc/include/asm/perf_event.h:91:27: warning: 'arc_pmu_ev_hw_map' defined but not used [-Wunused-const-variable=]

Introduced by commit 0dd450fe13da ("ARC: Add perf support for ARC700 cores")

The 2 static arrays should be moved into arch/arc/kernel/perf_event.c (the only place that uses them). We get the warning because perf_event.h is also included by arch/arc/kernel/unaligned.c.
----------------------------->8----------------------------

This can easily be reproduced by running make with "W=1" on any up-to-date sources, which enables extra warnings (in particular "-Wunused-const-variable") that are otherwise disabled by default in the top-level Makefile because "These warnings generated too much noise in a regular build".

Cc: Mischa Jonker <mjonker@synopsys.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-12-29  arc: Replace lkml.org links with lore  (Kees Cook, 1 file, -2/+6)

As started by commit 05a5f51ca566 ("Documentation: Replace lkml.org links with lore"), replace lkml.org links with lore to better use a single source that's more likely to stay available long-term.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Vineet Gupta <vineetg@rivosinc.com>
2021-12-29  ARC: thread_info.h: correct two typos in a comment  (Randy Dunlap, 1 file, -2/+2)

Fix typos of "separately" and "remains".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Suggested-by: Matthew Wilcox <willy@infradead.org> # "remains"
Cc: Vineet Gupta <vgupta@kernel.org>
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Vineet Gupta <vineetg@rivosinc.com>
2021-11-17  Add linux/cacheflush.h  (Matthew Wilcox (Oracle), 1 file, -1/+0)

Many architectures do not include asm-generic/cacheflush.h, so turn the includes on their head and add linux/cacheflush.h which includes asm/cacheflush.h.

Move the flush_dcache_folio() declaration from asm-generic/cacheflush.h to linux/cacheflush.h and change linux/highmem.h to include linux/cacheflush.h instead of asm/cacheflush.h so that all necessary places will see flush_dcache_folio().

More functions should have their default implementations moved in the future, but those are for follow-on patches. This fixes csky, sparc and sparc64, which were missed in the commit which added flush_dcache_folio().

Fixes: 08b0b0059bf1 ("mm: Add flush_dcache_folio()")
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
2021-11-02  Merge tag 'trace-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace  (Linus Torvalds, 2 files, -1/+6)

Pull tracing updates from Steven Rostedt:

 - kprobes: Restructured stack unwinder to show properly on x86 when a
   stack dump happens from a kretprobe callback.

 - Fix to bootconfig parsing

 - Have tracefs allow owner and group permissions by default (only
   denying others). There's been pressure to allow non root to tracefs
   in a controlled fashion, and using groups is probably the safest.

 - Bootconfig memory management updates.

 - Bootconfig clean up to have the tools directory be less dependent
   on changes in the kernel tree.

 - Allow perf to be traced by function tracer.

 - Rewrite of function graph tracer to be a callback from the function
   tracer instead of having its own trampoline (this change will happen
   on an arch by arch basis, and currently only x86_64 implements it).

 - Allow multiple direct trampolines (bpf hooks to functions) to be
   batched together in one synchronization.

 - Allow histogram triggers to add variables that can perform
   calculations against the event's fields.

 - Use the linker to determine architecture callbacks from the ftrace
   trampoline to allow for proper parameter prototypes and prevent
   warnings from the compiler.

 - Extend histogram triggers to key off of variables.

 - Have trace recursion use bit magic to determine preempt context
   over if branches.

 - Have trace recursion disable preemption as all use cases do anyway.

 - Added testing for verification of tracing utilities.

 - Various small clean ups and fixes.

* tag 'trace-v5.16' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (101 commits)
  tracing/histogram: Fix semicolon.cocci warnings
  tracing/histogram: Fix documentation inline emphasis warning
  tracing: Increase PERF_MAX_TRACE_SIZE to handle Sentinel1 and docker together
  tracing: Show size of requested perf buffer
  bootconfig: Initialize ret in xbc_parse_tree()
  ftrace: do CPU checking after preemption disabled
  ftrace: disable preemption when recursion locked
  tracing/histogram: Document expression arithmetic and constants
  tracing/histogram: Optimize division by a power of 2
  tracing/histogram: Covert expr to const if both operands are constants
  tracing/histogram: Simplify handling of .sym-offset in expressions
  tracing: Fix operator precedence for hist triggers expression
  tracing: Add division and multiplication support for hist triggers
  tracing: Add support for creating hist trigger variables from literal
  selftests/ftrace: Stop tracing while reading the trace file by default
  MAINTAINERS: Update KPROBES and TRACING entries
  test_kprobes: Move it from kernel/ to lib/
  docs, kprobes: Remove invalid URL and add new reference
  samples/kretprobes: Fix return value if register_kretprobe() failed
  lib/bootconfig: Fix the xbc_get_info kerneldoc
  ...
2021-11-01  Merge tag 'sched-core-2021-11-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -1/+1)

Pull scheduler updates from Thomas Gleixner:

 - Revert the printk format based wchan() symbol resolution, as it can
   leak the raw value in case the symbol is not resolvable.

 - Make wchan() more robust and work with all kinds of unwinders by
   enforcing that the task stays blocked while unwinding is in progress.

 - Prevent sched_fork() from accessing an invalid sched_task_group

 - Improve asymmetric packing logic

 - Extend scheduler statistics to RT and DL scheduling classes and add
   statistics for bandwidth burst to the SCHED_FAIR class.

 - Properly account SCHED_IDLE entities

 - Prevent a potential deadlock when initial priority is assigned to a
   newly created kthread. A recent change to plug a race between cpuset
   and __sched_setscheduler() introduced a new lock dependency which is
   now triggered. Break the lock dependency chain by moving the priority
   assignment to the thread function.

 - Fix the idle time reporting in /proc/uptime for NOHZ enabled systems.

 - Improve idle balancing in general and especially for NOHZ enabled
   systems.

 - Provide proper interfaces for live patching so it does not have to
   fiddle with scheduler internals.

 - Add cluster aware scheduling support.

 - A small set of tweaks for RT (irqwork, wait_task_inactive(), various
   scheduler options and delaying mmdrop)

 - The usual small tweaks and improvements all over the place

* tag 'sched-core-2021-11-01' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (69 commits)
  sched/fair: Cleanup newidle_balance
  sched/fair: Remove sysctl_sched_migration_cost condition
  sched/fair: Wait before decaying max_newidle_lb_cost
  sched/fair: Skip update_blocked_averages if we are defering load balance
  sched/fair: Account update_blocked_averages in newidle_balance cost
  x86: Fix __get_wchan() for !STACKTRACE
  sched,x86: Fix L2 cache mask
  sched/core: Remove rq_relock()
  sched: Improve wake_up_all_idle_cpus() take #2
  irq_work: Also rcuwait for !IRQ_WORK_HARD_IRQ on PREEMPT_RT
  irq_work: Handle some irq_work in a per-CPU thread on PREEMPT_RT
  irq_work: Allow irq_work_sync() to sleep if irq_work() no IRQ support.
  sched/rt: Annotate the RT balancing logic irqwork as IRQ_WORK_HARD_IRQ
  sched: Add cluster scheduler level for x86
  sched: Add cluster scheduler level in core and related Kconfig for ARM64
  topology: Represent clusters of CPUs within a die
  sched: Disable -Wunused-but-set-variable
  sched: Add wrapper for get_wchan() to keep task blocked
  x86: Fix get_wchan() to support the ORC unwinder
  proc: Use task_is_running() for wchan in /proc/$pid/stat
  ...
2021-11-01  Merge tag 'folio-5.16' of git://git.infradead.org/users/willy/pagecache  (Linus Torvalds, 1 file, -0/+1)

Pull memory folios from Matthew Wilcox:
"Add memory folios, a new type to represent either order-0 pages or the head page of a compound page. This should be enough infrastructure to support filesystems converting from pages to folios.

The point of all this churn is to allow filesystems and the page cache to manage memory in larger chunks than PAGE_SIZE. The original plan was to use compound pages like THP does, but I ran into problems with some functions expecting only a head page while others expect the precise page containing a particular byte.

The folio type allows a function to declare that it's expecting only a head page. Almost incidentally, this allows us to remove various calls to VM_BUG_ON(PageTail(page)) and compound_head().

This converts just parts of the core MM and the page cache. For 5.17, we intend to convert various filesystems (XFS and AFS are ready; other filesystems may make it) and also convert more of the MM and page cache to folios. For 5.18, multi-page folios should be ready.

The multi-page folios offer some improvement to some workloads. The 80% win is real, but appears to be an artificial benchmark (postgres startup, which isn't a serious workload). Real workloads (eg building the kernel, running postgres in a steady state, etc) seem to benefit between 0-10%. I haven't heard of any performance losses as a result of this series.

Nobody has done any serious performance tuning; I imagine that tweaking the readahead algorithm could provide some more interesting wins. There are also other places where we could choose to create large folios and currently do not, such as writes that are larger than PAGE_SIZE.

I'd like to thank all my reviewers who've offered review/ack tags: Christoph Hellwig, David Howells, Jan Kara, Jeff Layton, Johannes Weiner, Kirill A. Shutemov, Michal Hocko, Mike Rapoport, Vlastimil Babka, William Kucharski, Yu Zhao and Zi Yan.

I'd also like to thank those who gave feedback I incorporated but haven't offered up review tags for this part of the series: Nick Piggin, Mel Gorman, Ming Lei, Darrick Wong, Ted Ts'o, John Hubbard, Hugh Dickins, and probably a few others who I forget"

* tag 'folio-5.16' of git://git.infradead.org/users/willy/pagecache: (90 commits)
  mm/writeback: Add folio_write_one
  mm/filemap: Add FGP_STABLE
  mm/filemap: Add filemap_get_folio
  mm/filemap: Convert mapping_get_entry to return a folio
  mm/filemap: Add filemap_add_folio()
  mm/filemap: Add filemap_alloc_folio
  mm/page_alloc: Add folio allocation functions
  mm/lru: Add folio_add_lru()
  mm/lru: Convert __pagevec_lru_add_fn to take a folio
  mm: Add folio_evictable()
  mm/workingset: Convert workingset_refault() to take a folio
  mm/filemap: Add readahead_folio()
  mm/filemap: Add folio_mkwrite_check_truncate()
  mm/filemap: Add i_blocks_per_folio()
  mm/writeback: Add folio_redirty_for_writepage()
  mm/writeback: Add folio_account_redirty()
  mm/writeback: Add folio_clear_dirty_for_io()
  mm/writeback: Add folio_cancel_dirty()
  mm/writeback: Add folio_account_cleaned()
  mm/writeback: Add filemap_dirty_folio()
  ...
2021-10-18  mm: Add flush_dcache_folio()  (Matthew Wilcox (Oracle), 1 file, -0/+1)

This is a default implementation which calls flush_dcache_page() on each page in the folio. If architectures can do better, they should implement their own version of it.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
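As described, the default just iterates over the folio's constituent pages. A sketch of that default (assuming the standard folio helpers; not necessarily the exact placement in the tree):

    void flush_dcache_folio(struct folio *folio)
    {
            long i, nr = folio_nr_pages(folio);

            /* fall back to the per-page flush for each page in the folio */
            for (i = 0; i < nr; i++)
                    flush_dcache_page(folio_page(folio, i));
    }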
2021-10-16  ARC: fix potential build snafu  (Vineet Gupta, 1 file, -5/+0)

In the big pgtable header split, I inadvertently introduced a couple of duplicate symbols.

Fixes: fe6cb7b043b69cd9 ("ARC: mm: disintegrate pgtable.h into levels and flags")
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-10-15  sched: Add wrapper for get_wchan() to keep task blocked  (Kees Cook, 1 file, -1/+1)

Having a stable wchan means the process must be blocked, and must stay that way while stack unwinding is performed.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> [arm]
Tested-by: Mark Rutland <mark.rutland@arm.com> [arm64]
Link: https://lkml.kernel.org/r/20211008111626.332092234@infradead.org
2021-10-01  ARC: Add instruction_pointer_set() API  (Masami Hiramatsu, 1 file, -0/+5)

Add instruction_pointer_set() API for arc.

Link: https://lkml.kernel.org/r/163163050148.489837.15187799269793560256.stgit@devnote2
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
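On ARC the saved program counter is reachable through the existing instruction_pointer() accessor, so the new helper is essentially its mirror. A sketch of its likely shape (treat the exact field mapping as an assumption):

    static inline void instruction_pointer_set(struct pt_regs *regs,
                                               unsigned long val)
    {
            instruction_pointer(regs) = val;    /* an lvalue macro on ARC */
    }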
2021-10-01  kprobes: treewide: Make it harder to refer kretprobe_trampoline directly  (Masami Hiramatsu, 1 file, -1/+1)

Now that there is kretprobe_trampoline_addr() for obtaining the address of the kretprobe trampoline code, we don't need to access kretprobe_trampoline directly. Make it harder to refer to by renaming it to __kretprobe_trampoline().

Link: https://lkml.kernel.org/r/163163045446.489837.14510577516938803097.stgit@devnote2
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2021-09-05  Merge tag 'arc-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc  (Linus Torvalds, 21 files, -1394/+1119)

Pull ARC updates from Vineet Gupta:
"Finally a big pile of changes for ARC (atomics/mm). These are from our internal arc64 tree, preparing mainline for eventual arc64 support. I'm spreading them out to avoid a tsunami of patches in one release.

 - MM rework:
    - Implement up to 4 paging levels
    - Enable STRICT_MM_TYPECHECK
    - switch pgtable_t back to 'struct page *'

 - Atomics rework / implement relaxed accessors

 - Retire legacy MMUv1,v2; ARC750 cores

 - A few other build errors, typos"

* tag 'arc-5.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc: (33 commits)
  ARC: mm: vmalloc sync from kernel to user table to update PMD ...
  ARC: mm: support 4 levels of page tables
  ARC: mm: support 3 levels of page tables
  ARC: mm: switch to asm-generic/pgalloc.h
  ARC: mm: switch pgtable_t back to struct page *
  ARC: mm: hack to allow 2 level build with 4 level code
  ARC: mm: disintegrate pgtable.h into levels and flags
  ARC: mm: disintegrate mmu.h (arcv2 bits out)
  ARC: mm: move MMU specific bits out of entry code ...
  ARC: mm: move MMU specific bits out of ASID allocator
  ARC: mm: non-functional code movement/cleanup
  ARC: mm: pmd_populate* to use the canonical set_pmd (and drop pmd_set)
  ARC: ioremap: use more commonly used PAGE_KERNEL based uncached flag
  ARC: mm: Enable STRICT_MM_TYPECHECKS
  ARC: mm: Fixes to allow STRICT_MM_TYPECHECKS
  ARC: mm: move mmu/cache externs out to setup.h
  ARC: mm: remove tlb paranoid code
  ARC: mm: use SCRATCH_DATA0 register for caching pgdir in ARCv2 only
  ARC: retire MMUv1 and MMUv2 support
  ARC: retire ARC750 support
  ...
2021-09-02  Merge tag 'asm-generic-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic  (Linus Torvalds, 1 file, -72/+0)

Pull asm-generic updates from Arnd Bergmann:
"The main content for 5.15 is a series that cleans up the handling of strncpy_from_user() and strnlen_user(), removing a lot of slightly incorrect versions of these in favor of the lib/strn*.c helpers that implement these correctly and more efficiently.

The only architectures that retain a private version now are mips, ia64, um and parisc. I had offered to convert those as well, but Thomas Bogendoerfer wanted to keep the mips version for the moment until he had a chance to do regression testing.

The branch also contains two patches for bitops and for ffs()"

* tag 'asm-generic-5.15' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
  bitops/non-atomic: make @nr unsigned to avoid any DIV
  asm-generic: ffs: Drop bogus reference to ffz location
  asm-generic: reverse GENERIC_{STRNCPY_FROM,STRNLEN}_USER symbols
  asm-generic: remove extra strn{cpy_from,len}_user declarations
  asm-generic: uaccess: remove inline strncpy_from_user/strnlen_user
  s390: use generic strncpy/strnlen from_user
  microblaze: use generic strncpy/strnlen from_user
  csky: use generic strncpy/strnlen from_user
  arc: use generic strncpy/strnlen from_user
  hexagon: use generic strncpy/strnlen from_user
  h8300: remove stale strncpy_from_user
  asm-generic/uaccess.h: remove __strncpy_from_user/__strnlen_user
2021-08-26  ARC: mm: support 4 levels of page tables  (Vineet Gupta, 3 files, -5/+62)

Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-26  ARC: mm: support 3 levels of page tables  (Vineet Gupta, 4 files, -6/+81)

The ARCv2 MMU is software walked and Linux implements 2 levels of paging: pgd/pte. Forthcoming hardware will have multiple levels, so this change preps the mm code for that. It is also useful to try multiple levels even on software-walked code, to ensure the generic mm code is robust enough to handle it.

overview
________

2 levels {pgd, pte}:      pmd is folded, but pmd_* macros are valid
                          and operate on pgd
3 levels {pgd, pmd, pte}: - pud is folded and pud_* macros point to pgd
                          - pmd_* macros operate on the actual pmd

code changes
____________

1.  #include <asm-generic/pgtable-nopud.h>
2.  Define CONFIG_PGTABLE_LEVELS 3
3a. Define PMD_SHIFT, PMD_SIZE, PMD_MASK, pmd_t
3b. Define pmd_val() which actually deals with pmd
    (pmd_offset(), pmd_index() are provided by generic code)
3c. pmd_alloc_one()/pmd_free() also provided by generic code
    (pmd_populate/pmd_free already exist)
4.  Define pud_none(), pud_bad() macros based on the generic pud_val(),
    which internally pertains to pgd now.
4b. Define pud_populate() to just set up the pgd.

(A condensed sketch of these steps follows below.)

Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
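A condensed sketch of steps 1-3 above (shift value illustrative, not the exact ARC layout):

    #include <asm-generic/pgtable-nopud.h>  /* step 1: fold pud into pgd */

    #define PMD_SHIFT       21              /* step 3a: illustrative value */
    #define PMD_SIZE        (1UL << PMD_SHIFT)
    #define PMD_MASK        (~(PMD_SIZE - 1))

    typedef struct { unsigned long pmd; } pmd_t;
    #define pmd_val(x)      ((x).pmd)       /* step 3b: operates on a real pmd */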
2021-08-26  ARC: mm: switch to asm-generic/pgalloc.h  (Vineet Gupta, 1 file, -41/+1)

With the previous patch the ARC pgalloc functions are the same as the generic ones, hence switch to those.

Suggested-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-26  ARC: mm: switch pgtable_t back to struct page *  (Vineet Gupta, 4 files, -50/+25)

So far the ARC pgtable_t has not been struct page based, to avoid the extra page_address() calls involved. However the differences are down to noise and get in the way of using generic code, hence this patch. This also allows us to reuse the generic THP deposit/withdraw code.

There's some additional consideration for PGDIR_SHIFT in the 4K page config. Now, due to page tables being only PAGE_SIZE deep, the address split can't be truly arbitrary.

Tested-by: kernel test robot <lkp@intel.com>
Suggested-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-26  ARC: mm: disintegrate pgtable.h into levels and flags  (Vineet Gupta, 3 files, -273/+248)

 - pgtable-bits-arcv2.h (MMU specific page table flags)
 - pgtable-levels.h (paging levels)

No functional changes, but this paves the way for easy addition of new MMU code with different bits and levels etc.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-26  ARC: mm: disintegrate mmu.h (arcv2 bits out)  (Vineet Gupta, 3 files, -84/+105)

Non-functional change.

Tested-by: kernel test robot <lkp@intel.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: mm: move MMU specific bits out of entry code ...  (Vineet Gupta, 1 file, -0/+8)

... to avoid polluting shared entry code (across three ISA variants) with ISA/MMU specific code.

Cc: Jose Abreu <joabreu@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: mm: move MMU specific bits out of ASID allocator  (Vineet Gupta, 2 files, -15/+26)

And while at it, rewrite the commentary on the ASID allocator.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: mm: non-functional code movement/cleanup  (Vineet Gupta, 1 file, -14/+16)

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: mm: pmd_populate* to use the canonical set_pmd (and drop pmd_set)  (Vineet Gupta, 2 files, -10/+10)

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: ioremap: use more commonly used PAGE_KERNEL based uncached flag  (Vineet Gupta, 1 file, -3/+0)

and remove the one-off uncached definition for ARC.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: mm: Enable STRICT_MM_TYPECHECKS  (Vineet Gupta, 1 file, -26/+0)

In the past I've refrained from doing this (at least twice) due to the slight code bloat from the ABI implications of pte_t etc. becoming structs.

Per the ARC ABI, functions return structs via memory and not through register r0, even if the struct would fit in register(s):

 - the caller allocates space on the stack and passes the address as
   the first arg (r0), shifting the rest of the args by one
 - the callee creates the return struct in memory (referenced via r0)

This time around the code actually shrunk slightly (due to subtle inlining heuristic effects), but it is still slightly inefficient due to return values passed through memory. That however seems like a small cost compared to the maintenance burden, given the impending new MMU support for page walk etc.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
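For reference, the STRICT_MM_TYPECHECKS idea (a sketch of the common kernel pattern, not the ARC-specific hunk) wraps page-table entries in one-member structs so that, for example, a pte_t can no longer be silently mixed up with a plain unsigned long:

    /* Typed page-table entries: assigning a pte_t to an unsigned long
     * (or vice versa) now fails to compile. */
    typedef struct { unsigned long pte; } pte_t;
    #define pte_val(x)      ((x).pte)
    #define __pte(x)        ((pte_t) { (x) })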
2021-08-25  ARC: mm: move mmu/cache externs out to setup.h  (Vineet Gupta, 3 files, -10/+10)

Don't pollute mmu.h and cache.h with ARC internal bootlog/setup related functions. Move them aside to setup.h.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: mm: remove tlb paranoid code  (Vineet Gupta, 1 file, -6/+0)

This was used back in the ARC700 days, when the ASID allocator was fragile. It hasn't been needed in the last 5 years.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: mm: use SCRATCH_DATA0 register for caching pgdir in ARCv2 only  (Vineet Gupta, 4 files, -36/+1)

The MMU SCRATCH_DATA0 register is intended to cache the task pgd. However in the ARC700 SMP port it has to be repurposed for re-entrant interrupt handling, while the UP port doesn't need to. We currently handle these use-cases with a fabricated #define, which has the usual issues of dependency nesting and obvious ugliness.

So clean this up: for ARC700, don't use the register to cache the pgd (even in UP), and do the opposite for ARCv2. And while here, switch to the canonical pgd_offset().

Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: retire MMUv1 and MMUv2 support  (Vineet Gupta, 3 files, -143/+6)

There's no known/active customer using them with latest kernels anyway. Removal helps clean up the code and removes the hack for the MMU_VER to MMU_V[3-4] conversion.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: atomic_cmpxchg/atomic_xchg: implement relaxed variants  (Vineet Gupta, 2 files, -23/+27)

And move them out of cmpxchg.h to the canonical atomic.h.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
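A sketch of what such relaxed wrappers typically look like (the names follow the kernel's arch_ convention; the exact ARC hunk is an assumption here):

    /* Relaxed variants defer all ordering to the generic atomic layer. */
    #define arch_atomic_cmpxchg_relaxed(v, o, n) \
            arch_cmpxchg_relaxed(&(v)->counter, (o), (n))
    #define arch_atomic_xchg_relaxed(v, n) \
            arch_xchg_relaxed(&(v)->counter, (n))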
2021-08-25  ARC: cmpxchg/xchg: implement relaxed variants (LLSC config only)  (Vineet Gupta, 1 file, -9/+2)

It only makes sense to do this for the LLSC config.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: cmpxchg/xchg: rewrite as macros to make type safe  (Vineet Gupta, 1 file, -96/+117)

Existing code forces/assumes args to be of type "long", which won't work in an LP64 regime, so prepare the code for that.

Interestingly this should be a non-functional change, but I do see some codegen changes:

| bloat-o-meter vmlinux-cmpxchg-A vmlinux-cmpxchg-B
| add/remove: 0/0 grow/shrink: 17/12 up/down: 218/-150 (68)
|
| Function                      old   new  delta
| rwsem_optimistic_spin         518   550   +32
| rwsem_down_write_slowpath    1244  1274   +30
| __do_sys_perf_event_open     2576  2600   +24
| down_read                     192   200    +8
| __down_read                   192   200    +8
...
| task_work_run                 168   148   -20
| dma_fence_chain_walk.part     760   736   -24
| __genradix_ptr_alloc          674   646   -28

Total: Before=6187409, After=6187477, chg +0.00%

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
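The usual way to make such an interface type safe is a macro that captures the pointee type with typeof() instead of flattening everything to long. A sketch of the pattern (assumed shape; __cmpxchg() here stands for a hypothetical size-dispatching helper, not the exact ARC code):

    /* The macro preserves the pointee's width and type end to end. */
    #define cmpxchg(ptr, old, new)                                  \
    ({                                                              \
            typeof(*(ptr)) _old = (old);                            \
            typeof(*(ptr)) _new = (new);                            \
            (typeof(*(ptr)))__cmpxchg((ptr), (unsigned long)_old,   \
                                      (unsigned long)_new,          \
                                      sizeof(*(ptr)));              \
    })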
2021-08-25  ARC: xchg: !LLSC: remove UP micro-optimization/hack  (Vineet Gupta, 1 file, -7/+1)

It gets in the way of cleaning things up and is a maintenance pain in the neck!

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: bitops: fls/ffs to take int (vs long) per asm-generic defines  (Vineet Gupta, 1 file, -2/+2)

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: switch to generic bitops  (Vineet Gupta, 2 files, -196/+2)

 - !LLSC now only needs a single spinlock for atomics and bitops

 - Some codegen changes (slight bloat) with generic bitops:

   1. Code increase due to the LD-check-atomic paradigm vs. an
      unconditional atomic (which dirties the cache line even if the
      bit is already set). So despite the increase, generic is the
      right thing to do.

   2. Code decrease (but use of costlier instructions such as DIV vs.
      shift-based math) due to signed arithmetic. This needs to be
      revisited separately:

      arc:
      static inline int test_bit(unsigned int nr, const volatile unsigned long *addr)
                                 ^^^^^^^^^^^^
      generic:
      static inline int test_bit(int nr, const volatile unsigned long *addr)
                                 ^^^

Link: https://lore.kernel.org/r/20180830135749.GA13005@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[vgupta: wrote patch based on Will's poc, analysed codegen diffs]
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: atomics: implement relaxed variants  (Vineet Gupta, 2 files, -30/+26)

The current ARC fetch/return atomics provide fully ordered semantics only, with 2 full barriers around the operation.

Instead, implement them as relaxed variants without any barriers and rely on generic code to generate the fully-ordered, acquire and release variants by adding the appropriate full barriers. This helps elide some extra barriers in case of acquire/release/relaxed calls.

bloat-o-meter for hsdk defconfig shows codegen improvements, although the numbers below are inflated due to unrelated inlining heuristic changes:

| bloat-o-meter vmlinux-643babe34fd7-non-relaxed vmlinux-45aa05cb44d7-relaxed
| add/remove: 2/5 grow/shrink: 42/1222 up/down: 4158/-14312 (-10154)
| Function                          old   new  delta
| ..
| sys_renameat                      462   476   +14
| ip_mc_inc_group                   424   436   +12
| do_read_cache_page               1882  1894   +12
| ..
| refcount_dec_and_mutex_lock       254   250    -4
| refcount_dec_and_lock_irqsave     258   254    -4
| refcount_dec_and_lock             254   250    -4
| ..
| tcp_v6_route_req                  246   238    -8
| tcp_v4_destroy_sock               286   278    -8
| tcp_twsk_unique                   352   344    -8

Link: https://lore.kernel.org/r/20180830144344.GW24142@hirez.programming.kicks-ass.net
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
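The generic layer builds the fully ordered form from the relaxed one roughly like this (a simplified sketch of the fallback pattern, using plain smp_mb() for the fences):

    /* Fully ordered variant synthesized from the relaxed primitive. */
    static inline int atomic_fetch_add(int i, atomic_t *v)
    {
            int ret;

            smp_mb();       /* full barrier before the op */
            ret = atomic_fetch_add_relaxed(i, v);
            smp_mb();       /* full barrier after the op */
            return ret;
    }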
2021-08-25  ARC: atomic64: LLSC: elide unused atomic_{and,or,xor,andnot}_return  (Vineet Gupta, 1 file, -0/+6)

This is a non-functional change since those wrappers are not used in kernel sources at all.

Link: http://lists.infradead.org/pipermail/linux-snps-arc/2018-August/004246.html
Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: atomic: !LLSC: use int data type consistently  (Vineet Gupta, 1 file, -2/+2)

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-25  ARC: atomic: !LLSC: remove hack in atomic_set() for UP  (Vineet Gupta, 1 file, -13/+4)

!LLSC atomics use a spinlock (SMP) or irq-disable (UP) to implement critical regions. UP atomic_set() however was "cheating" by not doing any of that, while still being functional. Remove this anomaly (primarily as cleanup for future code improvements), given that this config is not worth the hassle of special-case code.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
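For context, the !LLSC pattern described above looks roughly like this (a sketch; atomic_ops_lock()/atomic_ops_unlock() are taken here as the ARC helpers that map to spin_lock_irqsave() on SMP and local_irq_save() on UP, an assumption about the exact names):

    static inline void arch_atomic_set(atomic_t *v, int i)
    {
            unsigned long flags;

            /* the same lock/irq-off section every other !LLSC atomic uses */
            atomic_ops_lock(flags);
            WRITE_ONCE(v->counter, i);
            atomic_ops_unlock(flags);
    }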
2021-08-25  ARC: atomics: disintegrate header  (Vineet Gupta, 4 files, -424/+461)

Non-functional change, to ease future additions/removals.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-04  arc: Prefer unsigned int to bare use of unsigned  (Jinchao Wang, 2 files, -2/+2)

Fix checkpatch warnings:

WARNING: Prefer 'unsigned int' to bare use of 'unsigned'

Signed-off-by: Jinchao Wang <wjc@cdjrlc.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2021-07-28  asm-generic: remove extra strn{cpy_from,len}_user declarations  (Arnd Bergmann, 1 file, -5/+0)

As these are now in asm-generic, it's no longer necessary to declare them in the architecture.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2021-07-23  arc: use generic strncpy/strnlen from_user  (Arnd Bergmann, 1 file, -78/+5)

Remove the arc implementation of strncpy/strnlen and instead use the generic versions. The arc version is fairly slow because it always does byte accesses even for aligned data, and its checks for user_addr_max() differ from the generic code.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>