author      Li RongQing <lirongqing@baidu.com>          2020-08-18 10:07:57 +0300
committer   Tony Nguyen <anthony.l.nguyen@intel.com>    2020-09-14 19:45:34 +0300
commit      1fa5cef283420b3dad93cd6ab04d7125bc1562de (patch)
tree        1c7286e4032a86334a4b14de7737c7c23bc08f13 /include
parent      f49be6dcd74bfb99b6f9deba4a976f74aad1e844 (diff)
download    linux-1fa5cef283420b3dad93cd6ab04d7125bc1562de.tar.xz
i40e: optimise prefetch page refcount
Originally the refcount of the rx_buffer page was incremented here, so prefetchw
was needed. Since commit 1793668c3b8c ("i40e/i40evf: Update code to better
handle incrementing page count") the refcount is no longer incremented every
time, so change prefetchw to prefetch.
The prefetch now mainly serves page_address(), which accesses struct page
only when WANT_PAGE_VIRTUAL or HASHED_PAGE_VIRTUAL is defined; otherwise it
computes the address from the page's offset, so prefetch conditionally.
Jakub suggested defining prefetch_page_address in a common header.
Reported-by: kernel test robot <lkp@intel.com>
Suggested-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Reviewed-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Diffstat (limited to 'include')
-rw-r--r--    include/linux/prefetch.h    8
1 file changed, 8 insertions, 0 deletions
```diff
diff --git a/include/linux/prefetch.h b/include/linux/prefetch.h
index 13eafebf3549..b83a3f944f28 100644
--- a/include/linux/prefetch.h
+++ b/include/linux/prefetch.h
@@ -15,6 +15,7 @@
 #include <asm/processor.h>
 #include <asm/cache.h>
 
+struct page;
 /*
 	prefetch(x) attempts to pre-emptively get the memory pointed to
 	by address "x" into the CPU L1 cache.
@@ -62,4 +63,11 @@ static inline void prefetch_range(void *addr, size_t len)
 #endif
 }
 
+static inline void prefetch_page_address(struct page *page)
+{
+#if defined(WANT_PAGE_VIRTUAL) || defined(HASHED_PAGE_VIRTUAL)
+	prefetch(page);
+#endif
+}
+
 #endif
```
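For context, here is a minimal sketch of how a driver rx path might switch from prefetchw() to the new prefetch_page_address() helper. The driver-side call site is not part of this diff (which is limited to 'include'), and the struct and function names below (example_rx_buffer, example_prefetch_rx_buffer) are hypothetical, loosely modelled on the i40e rx path rather than taken from it.

```c
/*
 * Illustrative sketch only: this commit's diffstat is limited to
 * 'include', so the real driver-side call site is not shown here.
 * The struct and function names below are assumptions modelled on
 * the i40e rx path, not the actual i40e code.
 */
#include <linux/mm.h>
#include <linux/prefetch.h>

struct example_rx_buffer {
	struct page *page;
	unsigned int page_offset;
};

static void example_prefetch_rx_buffer(struct example_rx_buffer *rx_buffer)
{
	/*
	 * Before commit 1793668c3b8c the page refcount was bumped on every
	 * use, so the struct page cacheline was fetched with write intent:
	 *
	 *	prefetchw(rx_buffer->page);
	 *
	 * Now struct page is only read, and page_address() dereferences it
	 * only when WANT_PAGE_VIRTUAL or HASHED_PAGE_VIRTUAL is defined, so
	 * the conditional, read-intent helper added by this patch is enough:
	 */
	prefetch_page_address(rx_buffer->page);

	/* The packet data itself is still prefetched via the mapped address. */
	prefetch(page_address(rx_buffer->page) + rx_buffer->page_offset);
}
```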