author | Jens Axboe <axboe@kernel.dk> | 2020-05-22 18:12:09 +0300
committer | Jens Axboe <axboe@kernel.dk> | 2020-06-22 05:44:25 +0300
commit | dd3e6d5039de1cbff4e20e2b34390ff44cdb182f (patch)
tree | 459a91bf1f1bf0ef2c2346a8f405499d1248d8cf /include/linux/pagemap.h
parent | c7510ab2cf5ccd997fe7f194edfe09cc511abf99 (diff)
download | linux-dd3e6d5039de1cbff4e20e2b34390ff44cdb182f.tar.xz
mm: add support for async page locking
Normally waiting for a page to become unlocked, or locking the page,
requires waiting for IO to complete. Add support for lock_page_async()
and wait_on_page_locked_async(), which are callback based instead. This
allows a caller to be notified when a page becomes unlocked, rather than
having to wait for it.
We add a new iocb field, ki_waitq, to pass in the data needed for this to
happen. It can share a union with ki_cookie, since that field is only used
for polled IO, and polled IO can never co-exist with async callbacks: polled
IO is, by definition, completed by polling rather than by notification.
struct wait_page_key is made public, and we define struct wait_page_async
as the interface between the caller and the core.
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
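For context, the ki_waitq/ki_cookie union described above is added to struct kiocb (include/linux/fs.h), which the pagemap.h-limited view below does not show. A minimal sketch of the shape the message describes (field placement and comments are illustrative, not the exact patch hunk):

```c
/*
 * Illustrative sketch only, not the exact include/linux/fs.h hunk: the new
 * ki_waitq pointer shares storage with ki_cookie, because polled IO (which
 * uses ki_cookie) and callback-based async locking (which uses ki_waitq)
 * can never be active on the same iocb.
 */
struct wait_page_queue;	/* made public in include/linux/pagemap.h by this series */

struct kiocb {
	/* ... existing fields: ki_filp, ki_pos, ki_complete, ki_flags, ... */
	union {
		unsigned int		ki_cookie;	/* for ->iopoll (polled IO) */
		struct wait_page_queue	*ki_waitq;	/* for async buffered IO */
	};
};
```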
Diffstat (limited to 'include/linux/pagemap.h')
-rw-r--r-- | include/linux/pagemap.h | 17
1 file changed, 17 insertions(+), 0 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2f18221bb5c8..e053e1d9a4d7 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -535,6 +535,7 @@ static inline int wake_page_match(struct wait_page_queue *wait_page,
 
 extern void __lock_page(struct page *page);
 extern int __lock_page_killable(struct page *page);
+extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
 extern void unlock_page(struct page *page);
@@ -572,6 +573,22 @@ static inline int lock_page_killable(struct page *page)
 }
 
 /*
+ * lock_page_async - Lock the page, unless this would block. If the page
+ * is already locked, then queue a callback when the page becomes unlocked.
+ * This callback can then retry the operation.
+ *
+ * Returns 0 if the page is locked successfully, or -EIOCBQUEUED if the page
+ * was already locked and the callback defined in 'wait' was queued.
+ */
+static inline int lock_page_async(struct page *page,
+				  struct wait_page_queue *wait)
+{
+	if (!trylock_page(page))
+		return __lock_page_async(page, wait);
+	return 0;
+}
+
+/*
  * lock_page_or_retry - Lock the page, unless this would block and the
  * caller indicated that it can handle a retry.
  *
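To show how the new helper is meant to be consumed, here is a hypothetical caller sketch. It is not code from this patch, and the names my_ctx, my_page_unlocked() and my_lock_page() are made up for illustration. The caller supplies a struct wait_page_queue whose wake function filters wakeups with wake_page_match(), and treats -EIOCBQUEUED from lock_page_async() as a signal that the callback has been queued and the operation should be retried from there:

```c
#include <linux/pagemap.h>
#include <linux/wait.h>
#include <linux/list.h>

/* Hypothetical per-operation context; a real caller would hang this off
 * its own state (e.g. via iocb->ki_waitq). */
struct my_ctx {
	struct wait_page_queue	wpq;
	/* ... whatever is needed to retry the original operation ... */
};

/* Wake callback, invoked when the page this entry waits on is unlocked. */
static int my_page_unlocked(struct wait_queue_entry *wait, unsigned mode,
			    int sync, void *arg)
{
	struct wait_page_key *key = arg;
	struct my_ctx *ctx = container_of(wait, struct my_ctx, wpq.wait);

	/* Ignore wakeups for other pages/bits hashed to the same waitqueue. */
	if (!wake_page_match(&ctx->wpq, key))
		return 0;

	/* One-shot callback: drop ourselves from the waitqueue. A real caller
	 * would arrange to retry the original operation here (e.g. punt it to
	 * a workqueue); note the page is only unlocked, not locked for us. */
	list_del_init(&wait->entry);
	return 1;
}

/* Try to lock @page without sleeping; queue the callback if it is busy. */
static int my_lock_page(struct my_ctx *ctx, struct page *page)
{
	/* Describe what we wait for and how we want to be called back. */
	ctx->wpq.page = page;
	ctx->wpq.bit_nr = PG_locked;
	init_waitqueue_func_entry(&ctx->wpq.wait, my_page_unlocked);

	/* 0: page locked now; -EIOCBQUEUED: callback queued, retry later. */
	return lock_page_async(page, &ctx->wpq);
}
```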