author	Jason A. Donenfeld <Jason@zx2c4.com>	2022-05-03 15:14:32 +0300
committer	Jason A. Donenfeld <Jason@zx2c4.com>	2022-05-14 00:59:23 +0300
commit	cbe89e5a375a51bbb952929b93fa973416fea74e (patch)
tree	7e679dde0dc8117d6a56934b74a497ee35dcb528 /drivers/char
parent	b7b67d1391a831eb9de133e85a2230e2e81a2ce4 (diff)
download	linux-cbe89e5a375a51bbb952929b93fa973416fea74e.tar.xz
random: do not use batches when !crng_ready()
It's too hard to keep the batches synchronized, and pointless anyway,
since in !crng_ready(), we're updating the base_crng key really often,
where batching only hurts. So instead, if the crng isn't ready, just
call into get_random_bytes(). At this stage nothing is performance
critical anyhow.
Cc: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Diffstat (limited to 'drivers/char')
 drivers/char/random.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2d10942ba534..a9f887b92ba2 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -466,10 +466,8 @@ static void crng_pre_init_inject(const void *input, size_t len, bool account)
 	if (account) {
 		crng_init_cnt += min_t(size_t, len,
 				       CRNG_INIT_CNT_THRESH - crng_init_cnt);
-		if (crng_init_cnt >= CRNG_INIT_CNT_THRESH) {
-			++base_crng.generation;
+		if (crng_init_cnt >= CRNG_INIT_CNT_THRESH)
 			crng_init = 1;
-		}
 	}
 
 	spin_unlock_irqrestore(&base_crng.lock, flags);
@@ -625,6 +623,11 @@ u64 get_random_u64(void)
 
 	warn_unseeded_randomness(&previous);
 
+	if (!crng_ready()) {
+		_get_random_bytes(&ret, sizeof(ret));
+		return ret;
+	}
+
 	local_lock_irqsave(&batched_entropy_u64.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u64);
 
@@ -659,6 +662,11 @@ u32 get_random_u32(void)
 
 	warn_unseeded_randomness(&previous);
 
+	if (!crng_ready()) {
+		_get_random_bytes(&ret, sizeof(ret));
+		return ret;
+	}
+
 	local_lock_irqsave(&batched_entropy_u32.lock, flags);
 	batch = raw_cpu_ptr(&batched_entropy_u32);