| author | Peter Zijlstra <peterz@infradead.org> | 2026-02-11 19:07:58 +0300 |
|---|---|---|
| committer | Peter Zijlstra <peterz@infradead.org> | 2026-02-23 20:04:10 +0300 |
| commit | db4551e2ba346663b7b16f0b5d36d308b615c50e | |
| tree | 238f54f9a875db49abd19134c544ac678488d847 | |
| parent | 101f3498b4bdfef97152a444847948de1543f692 | |
| download | linux-db4551e2ba346663b7b16f0b5d36d308b615c50e.tar.xz | |
sched/fair: Use full weight to __calc_delta()
Since we now use the full weight for avg_vruntime(), also make
__calc_delta() use the full value.
Since weight is effectively NICE_0_LOAD, it takes 20 bits on 64-bit.
This leaves 44 bits for delta_exec, which is ~17.5k seconds of
nanoseconds, far longer than any single tick could ever be, so there
is no worry about overflow.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Tested-by: Shubhang Kaushik <shubhang@os.amperecomputing.com>
Link: https://patch.msgid.link/20260219080625.183283814%40infradead.org
| -rw-r--r-- | kernel/sched/fair.c | 7 |
1 file changed, 7 insertions(+), 0 deletions(-)
```diff
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2b98054cd754..23315c294da1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -225,6 +225,7 @@ void __init sched_init_granularity(void)
 	update_sysctl();
 }
 
+#ifndef CONFIG_64BIT
 #define WMULT_CONST	(~0U)
 #define WMULT_SHIFT	32
 
@@ -283,6 +284,12 @@ static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight
 
 	return mul_u64_u32_shr(delta_exec, fact, shift);
 }
+#else
+static u64 __calc_delta(u64 delta_exec, unsigned long weight, struct load_weight *lw)
+{
+	return (delta_exec * weight) / lw->weight;
+}
+#endif
 
 /*
  * delta /= w
```
