author	Longxuan Yu <ylong030@ucr.edu>	2026-04-12 11:38:20 +0300
committer	Jens Axboe <axboe@kernel.dk>	2026-04-15 23:07:47 +0300
commit	326941b22806cbf2df1fbfe902b7908b368cce42 (patch)
tree	c0c386c043bce92a17e0b5ac565bfc145fb11974
parent	5bdb4078e1efba9650c03753616866192d680718 (diff)
download	linux-326941b22806cbf2df1fbfe902b7908b368cce42.tar.xz
io_uring/poll: fix signed comparison in io_poll_get_ownership()
io_poll_get_ownership() uses a signed comparison to check whether poll_refs
has reached the threshold for the slowpath:

	if (unlikely(atomic_read(&req->poll_refs) >= IO_POLL_REF_BIAS))

atomic_read() returns int (signed). When IO_POLL_CANCEL_FLAG (BIT(31)) is
set in poll_refs, the value becomes negative in signed arithmetic, so the
>= 128 comparison always evaluates to false and the slowpath is never
taken.

Fix this by casting the atomic_read() result to unsigned int before the
comparison, so that the cancel flag is treated as a large positive value
and correctly triggers the slowpath.

Fixes: a26a35e9019f ("io_uring: make poll refs more robust")
Cc: stable@vger.kernel.org
Reported-by: Yifan Wu <yifanwucs@gmail.com>
Reported-by: Juefei Pu <tomapufckgml@gmail.com>
Co-developed-by: Yuan Tan <yuantan098@gmail.com>
Signed-off-by: Yuan Tan <yuantan098@gmail.com>
Suggested-by: Xin Liu <bird@lzu.edu.cn>
Tested-by: Zhengchuan Liang <zcliangcn@gmail.com>
Signed-off-by: Longxuan Yu <ylong030@ucr.edu>
Signed-off-by: Ren Wei <n05ec@lzu.edu.cn>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://patch.msgid.link/3a3508b08bcd7f1bc3beff848ae6e1d73d355043.1775965597.git.ylong030@ucr.edu
Signed-off-by: Jens Axboe <axboe@kernel.dk>
-rw-r--r--	io_uring/poll.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index 74eef7884159..6834e2db937e 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -93,7 +93,7 @@ static bool io_poll_get_ownership_slowpath(struct io_kiocb *req)
*/
static inline bool io_poll_get_ownership(struct io_kiocb *req)
{
- if (unlikely(atomic_read(&req->poll_refs) >= IO_POLL_REF_BIAS))
+ if (unlikely((unsigned int)atomic_read(&req->poll_refs) >= IO_POLL_REF_BIAS))
return io_poll_get_ownership_slowpath(req);
return !(atomic_fetch_inc(&req->poll_refs) & IO_POLL_REF_MASK);
}