| author | Takashi Iwai <tiwai@suse.de> | 2026-04-20 09:17:20 +0300 |
|---|---|---|
| committer | Takashi Iwai <tiwai@suse.de> | 2026-04-20 18:59:19 +0300 |
| commit | 8146cd333d235ed32d48bb803fdf743472d7c783 (patch) | |
| tree | 7cf7101fa9de08bd719345f62bad80bb7dcf5a8c | |
| parent | 93985110329d9a66101c3de37aa7232f8c0bc3c9 (diff) | |
| download | linux-8146cd333d235ed32d48bb803fdf743472d7c783.tar.xz | |
ALSA: core: Fix potential data race at fasync handling
In snd_fasync_work_fn(), the offload work that traverses and
processes the pending fasync list, kill_fasync() is called outside
snd_fasync_lock to avoid deadlocks. The problem is that the
references to fasync->on, fasync->signal and fasync->poll are also
made there outside the lock. Since these fields may be modified
concurrently by a snd_kill_fasync() call from another process,
inconsistent values might be passed to kill_fasync(). Although this
shouldn't result in a critical UAF, it's still better to address it.
This patch moves the evaluation of the kill_fasync() arguments inside
snd_fasync_lock to avoid the data races above. The check of the
fasync->on flag is moved into the loop so that disabled entries are
skipped directly.
Also, for more clarity, snd_fasync_free() now takes the lock and
unlinks the pending entry directly instead of merely clearing the
fasync->on flag.
Reported-by: Jake Lamberson <lamberson.jake@gmail.com>
Fixes: ef34a0ae7a26 ("ALSA: core: Add async signal helpers")
Cc: <stable@vger.kernel.org>
Link: https://patch.msgid.link/20260420061721.3253644-1-tiwai@suse.de
Signed-off-by: Takashi Iwai <tiwai@suse.de>
| -rw-r--r-- | sound/core/misc.c | 13 |
1 file changed, 10 insertions(+), 3 deletions(-)
```diff
diff --git a/sound/core/misc.c b/sound/core/misc.c
index 88d9e1f9a6e9..5aca09edf971 100644
--- a/sound/core/misc.c
+++ b/sound/core/misc.c
@@ -100,14 +100,18 @@ static LIST_HEAD(snd_fasync_list);
 static void snd_fasync_work_fn(struct work_struct *work)
 {
 	struct snd_fasync *fasync;
+	int signal, poll;
 
 	spin_lock_irq(&snd_fasync_lock);
 	while (!list_empty(&snd_fasync_list)) {
 		fasync = list_first_entry(&snd_fasync_list, struct snd_fasync, list);
 		list_del_init(&fasync->list);
+		if (!fasync->on)
+			continue;
+		signal = fasync->signal;
+		poll = fasync->poll;
 		spin_unlock_irq(&snd_fasync_lock);
-		if (fasync->on)
-			kill_fasync(&fasync->fasync, fasync->signal, fasync->poll);
+		kill_fasync(&fasync->fasync, signal, poll);
 		spin_lock_irq(&snd_fasync_lock);
 	}
 	spin_unlock_irq(&snd_fasync_lock);
@@ -158,7 +162,10 @@ void snd_fasync_free(struct snd_fasync *fasync)
 {
 	if (!fasync)
 		return;
-	fasync->on = 0;
+
+	scoped_guard(spinlock_irq, &snd_fasync_lock)
+		list_del_init(&fasync->list);
+
 	flush_work(&snd_fasync_work);
 	kfree(fasync);
 }
```
