From ec6347bb43395cb92126788a1a5b25302543f815 Mon Sep 17 00:00:00 2001
From: Dan Williams
Date: Mon, 5 Oct 2020 20:40:16 -0700
Subject: x86, powerpc: Rename memcpy_mcsafe() to copy_mc_to_{user, kernel}()

In reaction to a proposal to introduce a memcpy_mcsafe_fast() implementation, Linus points out that memcpy_mcsafe() is poorly named relative to communicating the scope of the interface: specifically, what addresses are valid to pass as source and destination, and what faults / exceptions are handled.

Of particular concern is that even though x86 might be able to handle the semantics of copy_mc_to_user() with its common copy_user_generic() implementation, other archs likely need / want an explicit path for this case:

On Fri, May 1, 2020 at 11:28 AM Linus Torvalds wrote:
>
> On Thu, Apr 30, 2020 at 6:21 PM Dan Williams wrote:
> >
> > However now I see that copy_user_generic() works for the wrong reason.
> > It works because the exception on the source address due to poison
> > looks no different than a write fault on the user address to the
> > caller, it's still just a short copy. So it makes copy_to_user() work
> > for the wrong reason relative to the name.
>
> Right.
>
> And it won't work that way on other architectures. On x86, we have a
> generic function that can take faults on either side, and we use it
> for both cases (and for the "in_user" case too), but that's an
> artifact of the architecture oddity.
>
> In fact, it's probably wrong even on x86 - because it can hide bugs -
> but writing those things is painful enough that everybody prefers
> having just one function.

Replace the single top-level memcpy_mcsafe() with either copy_mc_to_user() or copy_mc_to_kernel().

Introduce an x86 copy_mc_fragile() name as the rename for the low-level x86 implementation formerly named memcpy_mcsafe(). It is used as the slow / careful backend that is supplanted by a fast copy_mc_generic() in a follow-on patch.

One side effect of this reorganization is that separating copy_mc_64.S into its own file means that perf no longer needs to track dependencies for its memcpy_64.S benchmarks.
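As a sketch of the resulting calling convention (an illustration under stated assumptions, not part of the patch; the pmem-style caller below is hypothetical), both helpers return the number of bytes left uncopied, so any non-zero remainder means poison or a fault truncated the copy:

    /*
     * Hypothetical caller of the renamed interface. copy_mc_to_kernel()
     * returns the number of bytes NOT copied; a non-zero remainder means
     * the source consumed poison (or faulted) partway through.
     */
    static int pmem_style_read(void *dst, const void *pmem_addr, size_t len)
    {
            unsigned long rem;

            rem = copy_mc_to_kernel(dst, pmem_addr, len);
            if (rem)
                    return -EIO; /* short copy: len - rem bytes arrived */
            return 0;
    }

copy_mc_to_user() follows the same return convention for a user-space destination, which is what lets an architecture wire it to a dedicated careful path rather than overloading copy_user_generic().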
[ bp: Massage a bit. ]

Signed-off-by: Dan Williams Signed-off-by: Borislav Petkov Reviewed-by: Tony Luck Acked-by: Michael Ellerman Cc: Link: http://lore.kernel.org/r/CAHk-=wjSqtXAqfUJxFtWNwmguFASTgB0dz1dT3V-78Quiezqbg@mail.gmail.com Link: https://lkml.kernel.org/r/160195561680.2163339.11574962055305783722.stgit@dwillia2-desk3.amr.corp.intel.com --- include/linux/string.h | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) (limited to 'include/linux/string.h') diff --git a/include/linux/string.h b/include/linux/string.h index 9b7a0632e87a..b1f3894a0a3e 100644 --- a/include/linux/string.h +++ b/include/linux/string.h @@ -161,20 +161,13 @@ extern int bcmp(const void *,const void *,__kernel_size_t); #ifndef __HAVE_ARCH_MEMCHR extern void * memchr(const void *,int,__kernel_size_t); #endif -#ifndef __HAVE_ARCH_MEMCPY_MCSAFE -static inline __must_check unsigned long memcpy_mcsafe(void *dst, - const void *src, size_t cnt) -{ - memcpy(dst, src, cnt); - return 0; -} -#endif #ifndef __HAVE_ARCH_MEMCPY_FLUSHCACHE static inline void memcpy_flushcache(void *dst, const void *src, size_t cnt) { memcpy(dst, src, cnt); } #endif + void *memchr_inv(const void *s, int c, size_t n); char *strreplace(char *s, char old, char new);
-- cgit v1.2.3

From 6a39e62abbafd1d58d1722f40c7d26ef379c6a2f Mon Sep 17 00:00:00 2001
From: Daniel Axtens
Date: Tue, 15 Dec 2020 20:43:44 -0800
Subject: lib: string.h: detect intra-object overflow in fortified string functions

Patch series "Fortify strscpy()", v7.

This patch implements a fortified version of strscpy(), enabled by setting CONFIG_FORTIFY_SOURCE=y. The new version ensures the following before calling vanilla strscpy():

1. There is no read overflow, because either size is smaller than the src length, or size is shrunk to the src length by calling fortified strnlen().
2. There is no write overflow, because the copy either fails at compile time or panics at runtime whenever size is larger than the dest size.

Note that if the src and dst sizes cannot be determined, the patch falls back to calling vanilla strscpy().

The patches add the following:

1. Implement the fortified version of strscpy().
2. Add a new LKDTM test to ensure the fortified version still returns the same value as the vanilla one while panicking when there is a write overflow.
3. Correct some typos in the LKDTM-related file.

I based my modifications on top of two patches from Daniel Axtens which modify calls to __builtin_object_size() in fortified string functions to ensure the true size of a char * is returned, not the size of the surrounding structure.

About performance, I measured the slowdown of fortified strscpy(), using the vanilla one as the baseline. The hardware I used is an Intel i3 2130 CPU clocked at 3.4 GHz. I ran "Linux 5.10.0-rc4+ SMP PREEMPT" inside qemu 3.10 with 4 CPU cores.
The following code, called through LKDTM, was used as a benchmark:

    #define TIMES 10000
    char *src;
    char dst[7];
    int i;
    ktime_t begin;

    src = kstrdup("foobar", GFP_KERNEL);
    if (src == NULL)
            return;

    begin = ktime_get();
    for (i = 0; i < TIMES; i++)
            strscpy(dst, src, strlen(src));
    pr_info("%d fortified strscpy() took %lld", TIMES, ktime_get() - begin);

    begin = ktime_get();
    for (i = 0; i < TIMES; i++)
            __real_strscpy(dst, src, strlen(src));
    pr_info("%d vanilla strscpy() took %lld", TIMES, ktime_get() - begin);

    kfree(src);

I called the above code 30 times to compute stats for each version (in ns, rounded to int):

    | version   | mean    | std    | median  | 95th    |
    | --------- | ------- | ------ | ------- | ------- |
    | fortified | 245_069 | 54_657 | 216_230 | 331_122 |
    | vanilla   | 172_501 | 70_281 | 143_539 | 219_553 |

On average, fortified strscpy() is approximately 1.42 times slower than vanilla strscpy(). At the 95th percentile, the fortified version is about 1.50 times slower. So the stats are clearly not in favor of fortified strscpy(). But the fortified version walks the string twice (once in strnlen() and again in vanilla strscpy()) while the vanilla one walks it only once, which explains why fortified strscpy() is slower.

This patch (of 5):

When the fortify feature was first introduced in commit 6974f0c4555e ("include/linux/string.h: add the option of fortified string.h functions"), Daniel Micay observed:

    * It should be possible to optionally use __builtin_object_size(x, 1)
      for some functions (C strings) to detect intra-object overflows (like
      glibc's _FORTIFY_SOURCE=2), but for now this takes the conservative
      approach to avoid likely compatibility issues.

This is a case that often cannot be caught by KASAN. Consider:

    struct foo {
            char a[10];
            char b[10];
    };

    void test() {
            char *msg;
            struct foo foo;

            msg = kmalloc(16, GFP_KERNEL);
            strcpy(msg, "Hello world!!");
            // this copy overwrites foo.b
            strcpy(foo.a, msg);
    }

The questionable copy overflows foo.a and writes to foo.b as well. It cannot be detected by KASAN: both arrays lie within the same object, so there is no redzone between them. Currently it is also not detected by fortify, because strcpy() considers __builtin_object_size(x, 0), the size of the surrounding object (here, struct foo). However, if we switch the string functions over to use __builtin_object_size(x, 1), the compiler will measure the size of the closest surrounding subobject (here, foo.a), rather than the size of the surrounding object as a whole. See https://gcc.gnu.org/onlinedocs/gcc/Object-Size-Checking.html for more info.

Only do this for string functions: we cannot use it on things like memcpy, memmove, memcmp and memchr_inv, due to code like the following, which purposefully operates on multiple structure members (arch/x86/kernel/traps.c):

    /*
     * regs->sp points to the failing IRET frame on the
     * ESPFIX64 stack. Copy it to the entry stack. This fills
     * in gpregs->ss through gpregs->ip.
     */
    memmove(&gpregs->ip, (void *)regs->sp, 5*8);

This change passes an allyesconfig on powerpc and x86, and an x86 kernel built with it survives running syz-stress from syzkaller, so it seems safe so far.
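To make the two __builtin_object_size() modes concrete, here is a small userspace sketch (an illustration added alongside the example above, assuming GCC and an optimized build; fortify itself requires optimization via its __OPTIMIZE__ guard):

    #include <stdio.h>

    struct foo {
            char a[10];
            char b[10];
    };

    int main(void)
    {
            struct foo f;

            /* Mode 0: bytes from f.a to the end of the whole object (f). */
            printf("%zu\n", __builtin_object_size(f.a, 0)); /* 20 */

            /* Mode 1: bytes to the end of the closest subobject (f.a). */
            printf("%zu\n", __builtin_object_size(f.a, 1)); /* 10 */

            return 0;
    }

A 14-byte strcpy() into f.a therefore satisfies the mode-0 check the fortified helpers used before this patch, but trips the mode-1 check they switch to.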
Link: https://lkml.kernel.org/r/20201122162451.27551-1-laniel_francis@privacyrequired.com Link: https://lkml.kernel.org/r/20201122162451.27551-2-laniel_francis@privacyrequired.com Signed-off-by: Daniel Axtens Signed-off-by: Francis Laniel Reviewed-by: Kees Cook Cc: Daniel Micay Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- include/linux/string.h | 27 ++++++++++++++++----------- 1 file changed, 16 insertions(+), 11 deletions(-) (limited to 'include/linux/string.h') diff --git a/include/linux/string.h b/include/linux/string.h index b1f3894a0a3e..46e91d684c47 100644 --- a/include/linux/string.h +++ b/include/linux/string.h @@ -292,7 +292,7 @@ extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size) __FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size) { - size_t p_size = __builtin_object_size(p, 0); + size_t p_size = __builtin_object_size(p, 1); if (__builtin_constant_p(size) && p_size < size) __write_overflow(); if (p_size < size) @@ -302,7 +302,7 @@ __FORTIFY_INLINE char *strncpy(char *p, const char *q, __kernel_size_t size) __FORTIFY_INLINE char *strcat(char *p, const char *q) { - size_t p_size = __builtin_object_size(p, 0); + size_t p_size = __builtin_object_size(p, 1); if (p_size == (size_t)-1) return __underlying_strcat(p, q); if (strlcat(p, q, p_size) >= p_size) @@ -313,7 +313,7 @@ __FORTIFY_INLINE char *strcat(char *p, const char *q) __FORTIFY_INLINE __kernel_size_t strlen(const char *p) { __kernel_size_t ret; - size_t p_size = __builtin_object_size(p, 0); + size_t p_size = __builtin_object_size(p, 1); /* Work around gcc excess stack consumption issue */ if (p_size == (size_t)-1 || @@ -328,7 +328,7 @@ __FORTIFY_INLINE __kernel_size_t strlen(const char *p) extern __kernel_size_t __real_strnlen(const char *, __kernel_size_t) __RENAME(strnlen); __FORTIFY_INLINE __kernel_size_t strnlen(const char *p, __kernel_size_t maxlen) { - size_t p_size = __builtin_object_size(p, 0); + size_t p_size = __builtin_object_size(p, 1); __kernel_size_t ret = __real_strnlen(p, maxlen < p_size ? 
maxlen : p_size); if (p_size <= ret && maxlen != ret) fortify_panic(__func__); @@ -340,8 +340,8 @@ extern size_t __real_strlcpy(char *, const char *, size_t) __RENAME(strlcpy); __FORTIFY_INLINE size_t strlcpy(char *p, const char *q, size_t size) { size_t ret; - size_t p_size = __builtin_object_size(p, 0); - size_t q_size = __builtin_object_size(q, 0); + size_t p_size = __builtin_object_size(p, 1); + size_t q_size = __builtin_object_size(q, 1); if (p_size == (size_t)-1 && q_size == (size_t)-1) return __real_strlcpy(p, q, size); ret = strlen(q); @@ -361,8 +361,8 @@ __FORTIFY_INLINE size_t strlcpy(char *p, const char *q, size_t size) __FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count) { size_t p_len, copy_len; - size_t p_size = __builtin_object_size(p, 0); - size_t q_size = __builtin_object_size(q, 0); + size_t p_size = __builtin_object_size(p, 1); + size_t q_size = __builtin_object_size(q, 1); if (p_size == (size_t)-1 && q_size == (size_t)-1) return __underlying_strncat(p, q, count); p_len = strlen(p); @@ -475,11 +475,16 @@ __FORTIFY_INLINE void *kmemdup(const void *p, size_t size, gfp_t gfp) /* defined after fortified strlen and memcpy to reuse them */ __FORTIFY_INLINE char *strcpy(char *p, const char *q) { - size_t p_size = __builtin_object_size(p, 0); - size_t q_size = __builtin_object_size(q, 0); + size_t p_size = __builtin_object_size(p, 1); + size_t q_size = __builtin_object_size(q, 1); + size_t size; if (p_size == (size_t)-1 && q_size == (size_t)-1) return __underlying_strcpy(p, q); - memcpy(p, q, strlen(q) + 1); + size = strlen(q) + 1; + /* test here to use the more stringent object size */ + if (p_size < size) + fortify_panic(__func__); + memcpy(p, q, size); return p; }
-- cgit v1.2.3

From 33e56a59e64dfb68778e5da0be13f0c47dc5d445 Mon Sep 17 00:00:00 2001
From: Francis Laniel
Date: Tue, 15 Dec 2020 20:43:50 -0800
Subject: string.h: add FORTIFY coverage for strscpy()

The fortified version of strscpy() ensures the following before vanilla strscpy() is called:

1. There is no read overflow, because either size is smaller than the src length, or size is shrunk to the src length by calling fortified strnlen().
2. There is no write overflow, because the copy either fails at compile time or panics at runtime whenever size is larger than the dest size.

Link: https://lkml.kernel.org/r/20201122162451.27551-4-laniel_francis@privacyrequired.com Signed-off-by: Francis Laniel Acked-by: Kees Cook Cc: Daniel Axtens Cc: Daniel Micay Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- include/linux/string.h | 48 ++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 48 insertions(+) (limited to 'include/linux/string.h') diff --git a/include/linux/string.h b/include/linux/string.h index 46e91d684c47..1cd63a8a23ab 100644 --- a/include/linux/string.h +++ b/include/linux/string.h @@ -6,6 +6,7 @@ #include <linux/compiler.h> /* for inline */ #include <linux/types.h> /* for size_t */ #include <linux/stddef.h> /* for NULL */ +#include <linux/errno.h> /* for E2BIG */ #include <stdarg.h> #include <uapi/linux/string.h> @@ -357,6 +358,53 @@ __FORTIFY_INLINE size_t strlcpy(char *p, const char *q, size_t size) return ret; } +/* defined after fortified strnlen to reuse it */ +extern ssize_t __real_strscpy(char *, const char *, size_t) __RENAME(strscpy); +__FORTIFY_INLINE ssize_t strscpy(char *p, const char *q, size_t size) +{ + size_t len; + /* Use string size rather than possible enclosing struct size. */ + size_t p_size = __builtin_object_size(p, 1); + size_t q_size = __builtin_object_size(q, 1); + + /* If we cannot get size of p and q default to call strscpy.
*/ + if (p_size == (size_t) -1 && q_size == (size_t) -1) + return __real_strscpy(p, q, size); + + /* + * If size can be known at compile time and is greater than + * p_size, generate a compile time write overflow error. + */ + if (__builtin_constant_p(size) && size > p_size) + __write_overflow(); + + /* + * This call protects from read overflow, because len will default to q + * length if it is smaller than size. + */ + len = strnlen(q, size); + /* + * If len equals size, we will copy only size bytes which leads to + * -E2BIG being returned. + * Otherwise we will copy len + 1 because of the final '\0'. + */ + len = len == size ? size : len + 1; + + /* + * Generate a runtime write overflow error if len is greater than + * p_size. + */ + if (len > p_size) + fortify_panic(__func__); + + /* + * We can now safely call vanilla strscpy because we are protected from: + * 1. Read overflow thanks to call to strnlen(). + * 2. Write overflow thanks to above ifs. + */ + return __real_strscpy(p, q, len); +} + /* defined after fortified strlen and strnlen to reuse them */ __FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count) {
-- cgit v1.2.3

From 0fea6e9af889f1a4e072f5de999e07fe6859fc88 Mon Sep 17 00:00:00 2001
From: Andrey Konovalov
Date: Tue, 22 Dec 2020 12:02:06 -0800
Subject: kasan, arm64: expand CONFIG_KASAN checks

Some #ifdef CONFIG_KASAN checks are only relevant for software KASAN modes (either related to shadow memory or compiler instrumentation). Expand those into CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS.

Link: https://lkml.kernel.org/r/e6971e432dbd72bb897ff14134ebb7e169bdcf0c.1606161801.git.andreyknvl@google.com Signed-off-by: Andrey Konovalov Signed-off-by: Vincenzo Frascino Reviewed-by: Catalin Marinas Reviewed-by: Alexander Potapenko Tested-by: Vincenzo Frascino Cc: Andrey Ryabinin Cc: Branislav Rankov Cc: Dmitry Vyukov Cc: Evgenii Stepanov Cc: Kevin Brodsky Cc: Marco Elver Cc: Vasily Gorbik Cc: Will Deacon Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- arch/arm64/Kconfig | 2 +- arch/arm64/Makefile | 2 +- arch/arm64/include/asm/assembler.h | 2 +- arch/arm64/include/asm/memory.h | 2 +- arch/arm64/include/asm/string.h | 5 +++-- arch/arm64/kernel/head.S | 2 +- arch/arm64/kernel/image-vars.h | 2 +- arch/arm64/kernel/kaslr.c | 3 ++- arch/arm64/kernel/module.c | 6 ++++-- arch/arm64/mm/ptdump.c | 6 +++--- include/linux/kasan-checks.h | 2 +- include/linux/kasan.h | 7 ++++--- include/linux/moduleloader.h | 3 ++- include/linux/string.h | 2 +- mm/ptdump.c | 13 ++++++++----- scripts/Makefile.lib | 2 ++ 16 files changed, 36 insertions(+), 25 deletions(-) (limited to 'include/linux/string.h') diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index a33718f8b62f..9386b108b132 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -334,7 +334,7 @@ config BROKEN_GAS_INST config KASAN_SHADOW_OFFSET hex - depends on KASAN + depends on KASAN_GENERIC || KASAN_SW_TAGS default 0xdfff800000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && !KASAN_SW_TAGS default 0xdfffc00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 05f46a60b245..6be9b3750250 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -137,7 +137,7 @@ head-y := arch/arm64/kernel/head.o ifeq ($(CONFIG_KASAN_SW_TAGS), y) KASAN_SHADOW_SCALE_SHIFT := 4 -else +else ifeq ($(CONFIG_KASAN_GENERIC), y) KASAN_SHADOW_SCALE_SHIFT := 3 endif diff --git
a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h index ddbe6bf00e33..bf125c591116 100644 --- a/arch/arm64/include/asm/assembler.h +++ b/arch/arm64/include/asm/assembler.h @@ -473,7 +473,7 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU #define NOKPROBE(x) #endif -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) #define EXPORT_SYMBOL_NOKASAN(name) #else #define EXPORT_SYMBOL_NOKASAN(name) EXPORT_SYMBOL(name) diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index 3bc08e6cf82e..cd671fb6707c 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -72,7 +72,7 @@ * address space for the shadow region respectively. They can bloat the stack * significantly, so double the (minimum) stack size when they are in use. */ -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL) #define KASAN_SHADOW_END ((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \ + KASAN_SHADOW_OFFSET) diff --git a/arch/arm64/include/asm/string.h b/arch/arm64/include/asm/string.h index b31e8e87a0db..3a3264ff47b9 100644 --- a/arch/arm64/include/asm/string.h +++ b/arch/arm64/include/asm/string.h @@ -5,7 +5,7 @@ #ifndef __ASM_STRING_H #define __ASM_STRING_H -#ifndef CONFIG_KASAN +#if !(defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) #define __HAVE_ARCH_STRRCHR extern char *strrchr(const char *, int c); @@ -48,7 +48,8 @@ extern void *__memset(void *, int, __kernel_size_t); void memcpy_flushcache(void *dst, const void *src, size_t cnt); #endif -#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) +#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ + !defined(__SANITIZE_ADDRESS__) /* * For files that are not instrumented (e.g. 
mm/slub.c) we diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 42b23ce679dc..a0dc987724ed 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -433,7 +433,7 @@ SYM_FUNC_START_LOCAL(__primary_switched) bl __pi_memset dsb ishst // Make zero page visible to PTW -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) bl kasan_early_init #endif #ifdef CONFIG_RANDOMIZE_BASE diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h index 39289d75118d..f676243abac6 100644 --- a/arch/arm64/kernel/image-vars.h +++ b/arch/arm64/kernel/image-vars.h @@ -37,7 +37,7 @@ __efistub_strncmp = __pi_strncmp; __efistub_strrchr = __pi_strrchr; __efistub___clean_dcache_area_poc = __pi___clean_dcache_area_poc; -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) __efistub___memcpy = __pi_memcpy; __efistub___memmove = __pi_memmove; __efistub___memset = __pi_memset; diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c index 0921aa1520b0..1c74c45b9494 100644 --- a/arch/arm64/kernel/kaslr.c +++ b/arch/arm64/kernel/kaslr.c @@ -161,7 +161,8 @@ u64 __init kaslr_early_init(u64 dt_phys) /* use the top 16 bits to randomize the linear region */ memstart_offset_seed = seed >> 48; - if (IS_ENABLED(CONFIG_KASAN)) + if (IS_ENABLED(CONFIG_KASAN_GENERIC) || + IS_ENABLED(CONFIG_KASAN_SW_TAGS)) /* * KASAN does not expect the module region to intersect the * vmalloc region, since shadow memory is allocated for each diff --git a/arch/arm64/kernel/module.c b/arch/arm64/kernel/module.c index 2a1ad95d9b2c..fe21e0f06492 100644 --- a/arch/arm64/kernel/module.c +++ b/arch/arm64/kernel/module.c @@ -30,7 +30,8 @@ void *module_alloc(unsigned long size) if (IS_ENABLED(CONFIG_ARM64_MODULE_PLTS)) gfp_mask |= __GFP_NOWARN; - if (IS_ENABLED(CONFIG_KASAN)) + if (IS_ENABLED(CONFIG_KASAN_GENERIC) || + IS_ENABLED(CONFIG_KASAN_SW_TAGS)) /* don't exceed the static module region - see below */ module_alloc_end = MODULES_END; @@ -39,7 +40,8 @@ void *module_alloc(unsigned long size) NUMA_NO_NODE, __builtin_return_address(0)); if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) && - !IS_ENABLED(CONFIG_KASAN)) + !IS_ENABLED(CONFIG_KASAN_GENERIC) && + !IS_ENABLED(CONFIG_KASAN_SW_TAGS)) /* * KASAN can only deal with module allocations being served * from the reserved module region, since the remainder of diff --git a/arch/arm64/mm/ptdump.c b/arch/arm64/mm/ptdump.c index 807dc634bbd2..04137a8f3d2d 100644 --- a/arch/arm64/mm/ptdump.c +++ b/arch/arm64/mm/ptdump.c @@ -29,7 +29,7 @@ enum address_markers_idx { PAGE_OFFSET_NR = 0, PAGE_END_NR, -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) KASAN_START_NR, #endif }; @@ -37,7 +37,7 @@ enum address_markers_idx { static struct addr_marker address_markers[] = { { PAGE_OFFSET, "Linear Mapping start" }, { 0 /* PAGE_END */, "Linear Mapping end" }, -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) { 0 /* KASAN_SHADOW_START */, "Kasan shadow start" }, { KASAN_SHADOW_END, "Kasan shadow end" }, #endif @@ -383,7 +383,7 @@ void ptdump_check_wx(void) static int ptdump_init(void) { address_markers[PAGE_END_NR].start_address = PAGE_END; -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START; #endif ptdump_initialize(); diff --git a/include/linux/kasan-checks.h b/include/linux/kasan-checks.h index 
ac6aba632f2d..ca5e89fb10d3 100644 --- a/include/linux/kasan-checks.h +++ b/include/linux/kasan-checks.h @@ -9,7 +9,7 @@ * even in compilation units that selectively disable KASAN, but must use KASAN * to validate access to an address. Never use these in header files! */ -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) bool __kasan_check_read(const volatile void *p, unsigned int size); bool __kasan_check_write(const volatile void *p, unsigned int size); #else diff --git a/include/linux/kasan.h b/include/linux/kasan.h index d7042d129dc1..b1381ee6922a 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -238,7 +238,8 @@ static inline void kasan_release_vmalloc(unsigned long start, #endif /* CONFIG_KASAN_VMALLOC */ -#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC) +#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ + !defined(CONFIG_KASAN_VMALLOC) /* * These functions provide a special case to support backing module @@ -248,12 +249,12 @@ static inline void kasan_release_vmalloc(unsigned long start, int kasan_module_alloc(void *addr, size_t size); void kasan_free_shadow(const struct vm_struct *vm); -#else /* CONFIG_KASAN && !CONFIG_KASAN_VMALLOC */ +#else /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */ static inline int kasan_module_alloc(void *addr, size_t size) { return 0; } static inline void kasan_free_shadow(const struct vm_struct *vm) {} -#endif /* CONFIG_KASAN && !CONFIG_KASAN_VMALLOC */ +#endif /* (CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS) && !CONFIG_KASAN_VMALLOC */ #ifdef CONFIG_KASAN_INLINE void kasan_non_canonical_hook(unsigned long addr); diff --git a/include/linux/moduleloader.h b/include/linux/moduleloader.h index 4fa67a8b2265..9e09d11ffe5b 100644 --- a/include/linux/moduleloader.h +++ b/include/linux/moduleloader.h @@ -96,7 +96,8 @@ void module_arch_cleanup(struct module *mod); /* Any cleanup before freeing mod->module_init */ void module_arch_freeing_init(struct module *mod); -#if defined(CONFIG_KASAN) && !defined(CONFIG_KASAN_VMALLOC) +#if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \ + !defined(CONFIG_KASAN_VMALLOC) #include <linux/kasan.h> #define MODULE_ALIGN (PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT) #else diff --git a/include/linux/string.h b/include/linux/string.h index 1cd63a8a23ab..4fcfb56abcf5 100644 --- a/include/linux/string.h +++ b/include/linux/string.h @@ -267,7 +267,7 @@ void __write_overflow(void) __compiletime_error("detected write beyond size of o #if !defined(__NO_FORTIFY) && defined(__OPTIMIZE__) && defined(CONFIG_FORTIFY_SOURCE) -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) extern void *__underlying_memchr(const void *p, int c, __kernel_size_t size) __RENAME(memchr); extern int __underlying_memcmp(const void *p, const void *q, __kernel_size_t size) __RENAME(memcmp); extern void *__underlying_memcpy(void *p, const void *q, __kernel_size_t size) __RENAME(memcpy); diff --git a/mm/ptdump.c b/mm/ptdump.c index ba88ec43ff21..4354c1422d57 100644 --- a/mm/ptdump.c +++ b/mm/ptdump.c @@ -4,7 +4,7 @@ #include <linux/ptdump.h> #include <linux/kasan.h> -#ifdef CONFIG_KASAN +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) /* * This is an optimization for KASAN=y case.
Since all kasan page tables * eventually point to the kasan_early_shadow_page we could call note_page() @@ -31,7 +31,8 @@ static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr, struct ptdump_state *st = walk->private; pgd_t val = READ_ONCE(*pgd); -#if CONFIG_PGTABLE_LEVELS > 4 && defined(CONFIG_KASAN) +#if CONFIG_PGTABLE_LEVELS > 4 && \ + (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) if (pgd_page(val) == virt_to_page(lm_alias(kasan_early_shadow_p4d))) return note_kasan_page_table(walk, addr); #endif @@ -51,7 +52,8 @@ static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr, struct ptdump_state *st = walk->private; p4d_t val = READ_ONCE(*p4d); -#if CONFIG_PGTABLE_LEVELS > 3 && defined(CONFIG_KASAN) +#if CONFIG_PGTABLE_LEVELS > 3 && \ + (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) if (p4d_page(val) == virt_to_page(lm_alias(kasan_early_shadow_pud))) return note_kasan_page_table(walk, addr); #endif @@ -71,7 +73,8 @@ static int ptdump_pud_entry(pud_t *pud, unsigned long addr, struct ptdump_state *st = walk->private; pud_t val = READ_ONCE(*pud); -#if CONFIG_PGTABLE_LEVELS > 2 && defined(CONFIG_KASAN) +#if CONFIG_PGTABLE_LEVELS > 2 && \ + (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) if (pud_page(val) == virt_to_page(lm_alias(kasan_early_shadow_pmd))) return note_kasan_page_table(walk, addr); #endif @@ -91,7 +94,7 @@ static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr, struct ptdump_state *st = walk->private; pmd_t val = READ_ONCE(*pmd); -#if defined(CONFIG_KASAN) +#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS) if (pmd_page(val) == virt_to_page(lm_alias(kasan_early_shadow_pte))) return note_kasan_page_table(walk, addr); #endif diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib index 94133708889d..213677a5ed33 100644 --- a/scripts/Makefile.lib +++ b/scripts/Makefile.lib @@ -148,10 +148,12 @@ endif # we don't want to check (depends on variables KASAN_SANITIZE_obj.o, KASAN_SANITIZE) # ifeq ($(CONFIG_KASAN),y) +ifneq ($(CONFIG_KASAN_HW_TAGS),y) _c_flags += $(if $(patsubst n%,, \ $(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \ $(CFLAGS_KASAN), $(CFLAGS_KASAN_NOSANITIZE)) endif +endif ifeq ($(CONFIG_UBSAN),y) _c_flags += $(if $(patsubst n%,, \ -- cgit v1.2.3
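The pattern in this last patch generalizes: CONFIG_KASAN_HW_TAGS (the MTE-based mode this series prepares for, visible in the scripts/Makefile.lib hunk) selects CONFIG_KASAN but uses neither shadow memory nor compiler instrumentation, so code that depends on either must name the two software modes explicitly. A minimal sketch of the guard, assuming a hypothetical EXAMPLE_REGION_ALIGN macro in the spirit of the MODULE_ALIGN hunk above:

    /*
     * Sketch only: EXAMPLE_REGION_ALIGN is hypothetical. Shadow memory
     * exists only under the software KASAN modes, so only they need the
     * KASAN_SHADOW_SCALE_SHIFT-scaled alignment; hardware tag-based
     * KASAN (and !KASAN) can use plain page-sized alignment.
     */
    #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
    #include <linux/kasan.h>
    #define EXAMPLE_REGION_ALIGN	(PAGE_SIZE << KASAN_SHADOW_SCALE_SHIFT)
    #else
    #define EXAMPLE_REGION_ALIGN	PAGE_SIZE
    #endif

Outside preprocessor context the same test reads IS_ENABLED(CONFIG_KASAN_GENERIC) || IS_ENABLED(CONFIG_KASAN_SW_TAGS), as in the kaslr.c and module.c hunks.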