Diffstat (limited to 'Documentation/scheduler')
-rw-r--r--  Documentation/scheduler/sched-bwc.rst        |   2
-rw-r--r--  Documentation/scheduler/sched-deadline.rst   |  13
-rw-r--r--  Documentation/scheduler/sched-debug.rst      |   2
-rw-r--r--  Documentation/scheduler/sched-design-CFS.rst |   2
-rw-r--r--  Documentation/scheduler/sched-domains.rst    |   5
-rw-r--r--  Documentation/scheduler/sched-ext.rst        |  65
-rw-r--r--  Documentation/scheduler/sched-rt-group.rst   |  11
-rw-r--r--  Documentation/scheduler/sched-stats.rst      | 124
8 files changed, 144 insertions, 80 deletions
diff --git a/Documentation/scheduler/sched-bwc.rst b/Documentation/scheduler/sched-bwc.rst
index 41ed2ceafc92..e881a945c188 100644
--- a/Documentation/scheduler/sched-bwc.rst
+++ b/Documentation/scheduler/sched-bwc.rst
@@ -59,7 +59,7 @@ At the same time, we can say that the worst case deadline miss, will be
 \Sum e_i; that is, there is a bounded tardiness (under the assumption
 that x+e is indeed WCET).
 
-The interferenece when using burst is valued by the possibilities for
+The interference when using burst is valued by the possibilities for
 missing the deadline and the average WCET. Test results showed that when
 there many cgroups or CPU is under utilized, the interference is
 limited. More details are shown in:
diff --git a/Documentation/scheduler/sched-deadline.rst b/Documentation/scheduler/sched-deadline.rst
index 22838ed8e13a..a727827b8dd5 100644
--- a/Documentation/scheduler/sched-deadline.rst
+++ b/Documentation/scheduler/sched-deadline.rst
@@ -591,12 +591,13 @@ Deadline Task Scheduling
  The system wide settings are configured under the /proc virtual file
  system.
 
- For now the -rt knobs are used for -deadline admission control and the
- -deadline runtime is accounted against the -rt runtime. We realize that this
- isn't entirely desirable; however, it is better to have a small interface for
- now, and be able to change it easily later. The ideal situation (see 5.) is to
- run -rt tasks from a -deadline server; in which case the -rt bandwidth is a
- direct subset of dl_bw.
+ For now the -rt knobs are used for -deadline admission control and with
+ CONFIG_RT_GROUP_SCHED the -deadline runtime is accounted against the (root)
+ -rt runtime. With !CONFIG_RT_GROUP_SCHED the knob only serves for the -dl
+ admission control. We realize that this isn't entirely desirable; however, it
+ is better to have a small interface for now, and be able to change it easily
+ later. The ideal situation (see 5.) is to run -rt tasks from a -deadline
+ server; in which case the -rt bandwidth is a direct subset of dl_bw.
 
  This means that, for a root_domain comprising M CPUs, -deadline tasks
  can be created while the sum of their bandwidths stays below:
diff --git a/Documentation/scheduler/sched-debug.rst b/Documentation/scheduler/sched-debug.rst
index 4d3d24f2a439..b5a92a39eccd 100644
--- a/Documentation/scheduler/sched-debug.rst
+++ b/Documentation/scheduler/sched-debug.rst
@@ -2,7 +2,7 @@
 Scheduler debugfs
 =================
 
-Booting a kernel with CONFIG_SCHED_DEBUG=y will give access to
+Booting a kernel with debugfs enabled will give access to
 scheduler specific debug files under /sys/kernel/debug/sched. Some of
 those files are described below.
 
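Editor's note: the -deadline admission-control hunk above is easier to follow with a
concrete caller. The sketch below is illustrative only and not part of the patch: it
requests a SCHED_DEADLINE reservation with sched_setattr(), the interface documented
elsewhere in sched-deadline.rst, and the kernel refuses with EBUSY when the new
reservation would push the summed -deadline bandwidth over the limit derived from the
-rt knobs. The 10ms/30ms parameters, the header choices and the bare syscall() wrapper
are assumptions; older toolchains may need a hand-written struct sched_attr as shown
in that document.

.. code-block:: c

   #include <errno.h>
   #include <stdio.h>
   #include <string.h>
   #include <sys/syscall.h>
   #include <unistd.h>
   #include <linux/sched.h>        /* SCHED_DEADLINE */
   #include <linux/sched/types.h>  /* struct sched_attr (recent UAPI headers) */

   int main(void)
   {
       struct sched_attr attr = {
           .size           = sizeof(attr),
           .sched_policy   = SCHED_DEADLINE,
           /* 10ms of runtime every 30ms; deadline == period (all in ns) */
           .sched_runtime  = 10 * 1000 * 1000,
           .sched_deadline = 30 * 1000 * 1000,
           .sched_period   = 30 * 1000 * 1000,
       };

       /* glibc provides no wrapper, so invoke the syscall directly. */
       if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
           /* EBUSY here means the reservation failed admission control. */
           fprintf(stderr, "sched_setattr: %s\n", strerror(errno));
           return 1;
       }

       /* Real-time work would run here under the 10ms/30ms reservation. */
       return 0;
   }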
diff --git a/Documentation/scheduler/sched-design-CFS.rst b/Documentation/scheduler/sched-design-CFS.rst
index 8786f219fc73..b574a2644c77 100644
--- a/Documentation/scheduler/sched-design-CFS.rst
+++ b/Documentation/scheduler/sched-design-CFS.rst
@@ -96,7 +96,7 @@ picked and the current task is preempted.
 CFS uses nanosecond granularity accounting and does not rely on any jiffies
 or other HZ detail.  Thus the CFS scheduler has no notion of "timeslices" in
 the way the previous scheduler had, and has no heuristics whatsoever.  There is
-only one central tunable (you have to switch on CONFIG_SCHED_DEBUG):
+only one central tunable:
 
    /sys/kernel/debug/sched/base_slice_ns
 
diff --git a/Documentation/scheduler/sched-domains.rst b/Documentation/scheduler/sched-domains.rst
index 5e996fe973b1..15e3a4cb304a 100644
--- a/Documentation/scheduler/sched-domains.rst
+++ b/Documentation/scheduler/sched-domains.rst
@@ -73,9 +73,8 @@ Architectures may override the generic domain builder and the default SD flags
 for a given topology level by creating a sched_domain_topology_level array and
 calling set_sched_topology() with this array as the parameter.
 
-The sched-domains debugging infrastructure can be enabled by enabling
-CONFIG_SCHED_DEBUG and adding 'sched_verbose' to your cmdline. If you
-forgot to tweak your cmdline, you can also flip the
+The sched-domains debugging infrastructure can be enabled by adding 'sched_verbose'
+to your cmdline. If you forgot to tweak your cmdline, you can also flip the
 /sys/kernel/debug/sched/verbose knob. This enables an error checking parse of
 the sched domains which should catch most possible errors (described above). It
 also prints out the domain structure in a visual format.
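Editor's note: both hunks above drop the CONFIG_SCHED_DEBUG requirement in favour of
plain debugfs files. As a quick illustration, the sketch below reads the central
tunable and flips the sched-domains verbose knob at run time. The file paths come
from the hunks; the assumption that debugfs is mounted at /sys/kernel/debug and that
the program runs as root, and the C wrapper itself, are illustrative only.

.. code-block:: c

   #include <stdio.h>

   int main(void)
   {
       char buf[64];
       FILE *f;

       /* Read the single central tunable named in the CFS hunk. */
       f = fopen("/sys/kernel/debug/sched/base_slice_ns", "r");
       if (f) {
           if (fgets(buf, sizeof(buf), f))
               printf("base_slice_ns: %s", buf);
           fclose(f);
       }

       /* Flip the sched-domains debug knob without touching the cmdline. */
       f = fopen("/sys/kernel/debug/sched/verbose", "w");
       if (f) {
           fputs("1\n", f);
           fclose(f);
       }
       return 0;
   }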
diff --git a/Documentation/scheduler/sched-ext.rst b/Documentation/scheduler/sched-ext.rst
index 6cb8b676ce03..a1869c38046e 100644
--- a/Documentation/scheduler/sched-ext.rst
+++ b/Documentation/scheduler/sched-ext.rst
@@ -1,3 +1,5 @@
+.. _sched-ext:
+
 ==========================
 Extensible Scheduler Class
 ==========================
@@ -16,12 +18,12 @@ programs - the BPF scheduler.
 * The system integrity is maintained no matter what the BPF scheduler does.
   The default scheduling behavior is restored anytime an error is detected,
   a runnable task stalls, or on invoking the SysRq key sequence
-  :kbd:`SysRq-S`.
+  `SysRq-S`.
 
 * When the BPF scheduler triggers an error, debug information is dumped to
   aid debugging. The debug dump is passed to and printed out by the
   scheduler binary. The debug dump can also be accessed through the
-  `sched_ext_dump` tracepoint. The SysRq key sequence :kbd:`SysRq-D`
+  `sched_ext_dump` tracepoint. The SysRq key sequence `SysRq-D`
   triggers a debug dump. This doesn't terminate the BPF scheduler and can
   only be read through the tracepoint.
 
@@ -47,8 +49,8 @@ options should be enabled to use sched_ext:
 sched_ext is used only when the BPF scheduler is loaded and running.
 
 If a task explicitly sets its scheduling policy to ``SCHED_EXT``, it will be
-treated as ``SCHED_NORMAL`` and scheduled by CFS until the BPF scheduler is
-loaded.
+treated as ``SCHED_NORMAL`` and scheduled by the fair-class scheduler until the
+BPF scheduler is loaded.
 
 When the BPF scheduler is loaded and ``SCX_OPS_SWITCH_PARTIAL`` is not set
 in ``ops->flags``, all ``SCHED_NORMAL``, ``SCHED_BATCH``, ``SCHED_IDLE``, and
@@ -57,11 +59,11 @@ in ``ops->flags``, all ``SCHED_NORMAL``, ``SCHED_BATCH``, ``SCHED_IDLE``, and
 
 However, when the BPF scheduler is loaded and ``SCX_OPS_SWITCH_PARTIAL`` is
 set in ``ops->flags``, only tasks with the ``SCHED_EXT`` policy are scheduled
 by sched_ext, while tasks with ``SCHED_NORMAL``, ``SCHED_BATCH`` and
-``SCHED_IDLE`` policies are scheduled by CFS.
+``SCHED_IDLE`` policies are scheduled by the fair-class scheduler.
 
-Terminating the sched_ext scheduler program, triggering :kbd:`SysRq-S`, or
+Terminating the sched_ext scheduler program, triggering `SysRq-S`, or
 detection of any internal error including stalled runnable tasks aborts the
-BPF scheduler and reverts all tasks back to CFS.
+BPF scheduler and reverts all tasks back to the fair-class scheduler.
 
 .. code-block:: none
 
@@ -107,8 +109,7 @@ detailed information:
   nr_rejected                   : 0
   enable_seq                    : 1
 
-If ``CONFIG_SCHED_DEBUG`` is set, whether a given task is on sched_ext can
-be determined as follows:
+Whether a given task is on sched_ext can be determined as follows:
 
 .. code-block:: none
 
@@ -198,8 +199,8 @@ Dispatch Queues
 To match the impedance between the scheduler core and the BPF scheduler,
 sched_ext uses DSQs (dispatch queues) which can operate as both a FIFO and a
 priority queue. By default, there is one global FIFO (``SCX_DSQ_GLOBAL``),
-and one local dsq per CPU (``SCX_DSQ_LOCAL``). The BPF scheduler can manage
-an arbitrary number of dsq's using ``scx_bpf_create_dsq()`` and
+and one local DSQ per CPU (``SCX_DSQ_LOCAL``). The BPF scheduler can manage
+an arbitrary number of DSQs using ``scx_bpf_create_dsq()`` and
 ``scx_bpf_destroy_dsq()``.
 
 A CPU always executes a task from its local DSQ. A task is "inserted" into a
@@ -242,9 +243,9 @@ The following briefly shows how a waking task is scheduled and executed.
    task was inserted directly from ``ops.select_cpu()``). ``ops.enqueue()``
    can make one of the following decisions:
 
-   * Immediately insert the task into either the global or local DSQ by
-     calling ``scx_bpf_dsq_insert()`` with ``SCX_DSQ_GLOBAL`` or
-     ``SCX_DSQ_LOCAL``, respectively.
+   * Immediately insert the task into either the global or a local DSQ by
+     calling ``scx_bpf_dsq_insert()`` with one of the following options:
+     ``SCX_DSQ_GLOBAL``, ``SCX_DSQ_LOCAL``, or ``SCX_DSQ_LOCAL_ON | cpu``.
 
    * Immediately insert the task into a custom DSQ by calling
     ``scx_bpf_dsq_insert()`` with a DSQ ID which is smaller than 2^63.
 
@@ -294,6 +295,42 @@ dispatching, and must be dispatched to with ``scx_bpf_dsq_insert()``.
 See the function documentation and usage in ``tools/sched_ext/scx_simple.bpf.c``
 for more information.
 
+Task Lifecycle
+--------------
+
+The following pseudo-code summarizes the entire lifecycle of a task managed
+by a sched_ext scheduler:
+
+.. code-block:: c
+
+    ops.init_task();            /* A new task is created */
+    ops.enable();               /* Enable BPF scheduling for the task */
+
+    while (task in SCHED_EXT) {
+        if (task can migrate)
+            ops.select_cpu();   /* Called on wakeup (optimization) */
+
+        ops.runnable();         /* Task becomes ready to run */
+
+        while (task is runnable) {
+            if (task is not in a DSQ) {
+                ops.enqueue();  /* Task can be added to a DSQ */
+
+                /* A CPU becomes available */
+
+                ops.dispatch(); /* Task is moved to a local DSQ */
+            }
+            ops.running();      /* Task starts running on its assigned CPU */
+            ops.tick();         /* Called every 1/HZ seconds */
+            ops.stopping();     /* Task stops running (time slice expires or wait) */
+        }
+
+        ops.quiescent();        /* Task releases its assigned CPU (wait) */
+    }
+
+    ops.disable();              /* Disable BPF scheduling for the task */
+    ops.exit_task();            /* Task is destroyed */
+
 Where to Look
 =============
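Editor's note: to connect the DSQ description and the task-lifecycle pseudo-code above
to a real callback, here is a minimal sketch of an ``ops.enqueue()`` implementation
that sends every runnable task to the shared global FIFO, the first option listed in
the hunk above. It is modeled loosely on ``tools/sched_ext/scx_simple.bpf.c``; the
``BPF_STRUCT_OPS``/``SCX_OPS_DEFINE`` macros and ``SCX_SLICE_DFL`` are assumed to
match that tree, the ``minimal_*`` identifiers are invented for the example, and the
user-space loader and the ``ops.dispatch()`` path are omitted.

.. code-block:: c

   /* Minimal sketch only; see tools/sched_ext/scx_simple.bpf.c for a full,
    * supported example scheduler. */
   #include <scx/common.bpf.h>

   char _license[] SEC("license") = "GPL";

   /* ops.enqueue(): put every runnable task on the shared global FIFO. */
   void BPF_STRUCT_OPS(minimal_enqueue, struct task_struct *p, u64 enq_flags)
   {
       scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
   }

   SCX_OPS_DEFINE(minimal_ops,
                  .enqueue = (void *)minimal_enqueue,
                  .name    = "minimal");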
diff --git a/Documentation/scheduler/sched-rt-group.rst b/Documentation/scheduler/sched-rt-group.rst
index d685609ed3d7..ab464335d320 100644
--- a/Documentation/scheduler/sched-rt-group.rst
+++ b/Documentation/scheduler/sched-rt-group.rst
@@ -92,16 +92,19 @@ The system wide settings are configured under the /proc virtual file system:
 /proc/sys/kernel/sched_rt_runtime_us:
   A global limit on how much time real-time scheduling may use. This is always
   less or equal to the period_us, as it denotes the time allocated from the
-  period_us for the real-time tasks. Even without CONFIG_RT_GROUP_SCHED enabled,
-  this will limit time reserved to real-time processes. With
-  CONFIG_RT_GROUP_SCHED=y it signifies the total bandwidth available to all
-  real-time groups.
+  period_us for the real-time tasks. Without CONFIG_RT_GROUP_SCHED enabled,
+  this only serves for admission control of deadline tasks. With
+  CONFIG_RT_GROUP_SCHED=y it also signifies the total bandwidth available to
+  all real-time groups.
 
   * Time is specified in us because the interface is s32. This gives an
     operating range from 1us to about 35 minutes.
   * sched_rt_period_us takes values from 1 to INT_MAX.
   * sched_rt_runtime_us takes values from -1 to sched_rt_period_us.
   * A run time of -1 specifies runtime == period, ie. no limit.
+  * sched_rt_runtime_us/sched_rt_period_us > 0.05 in order to preserve
+    bandwidth for the fair dl_server. For an accurate value, check the average
+    of runtime/period in /sys/kernel/debug/sched/fair_server/cpuX/
 
 2.2 Default behaviour
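Editor's note: the new bullet above asks that sched_rt_runtime_us/sched_rt_period_us
stay above 0.05 so the fair dl_server keeps some bandwidth. Below is a hedged sketch
of that check: the sysctl paths and the meaning of -1 come from the document, while
the read_long() helper, the -2 error sentinel and the warning wording are illustrative.

.. code-block:: c

   #include <stdio.h>

   /* Helper for the example: read one integer sysctl value. */
   static long read_long(const char *path)
   {
       long val;
       FILE *f = fopen(path, "r");

       if (!f || fscanf(f, "%ld", &val) != 1)
           val = -2;   /* distinguish "could not read" from the -1 "no limit" */
       if (f)
           fclose(f);
       return val;
   }

   int main(void)
   {
       long runtime = read_long("/proc/sys/kernel/sched_rt_runtime_us");
       long period  = read_long("/proc/sys/kernel/sched_rt_period_us");

       if (runtime == -2 || period <= 0) {
           fprintf(stderr, "could not read the rt sysctls\n");
           return 1;
       }
       if (runtime == -1) {
           printf("rt runtime unlimited (runtime == period)\n");
           return 0;
       }

       double ratio = (double)runtime / (double)period;
       printf("rt bandwidth: %ld/%ld us (%.1f%% of each period)\n",
              runtime, period, 100.0 * ratio);

       /* Per the bullet above, keep the ratio above 0.05 for the fair dl_server. */
       if (ratio <= 0.05)
           printf("warning: ratio <= 0.05, fair dl_server bandwidth may be squeezed\n");
       return 0;
   }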
diff --git a/Documentation/scheduler/sched-stats.rst b/Documentation/scheduler/sched-stats.rst
index 7c2b16c4729d..d82e7d2b54f0 100644
--- a/Documentation/scheduler/sched-stats.rst
+++ b/Documentation/scheduler/sched-stats.rst
@@ -2,6 +2,12 @@
 Scheduler Statistics
 ====================
 
+Version 17 of schedstats removed the 'lb_imbalance' field as it has no
+significance anymore and instead added more relevant fields namely
+'lb_imbalance_load', 'lb_imbalance_util', 'lb_imbalance_task' and
+'lb_imbalance_misfit'. The domain field prints the name of the
+corresponding sched domain from this version onwards.
+
 Version 16 of schedstats changed the order of definitions within
 'enum cpu_idle_type', which changed the order of [CPU_MAX_IDLE_TYPES]
 columns in show_schedstat(). In particular the position of CPU_IDLE
@@ -9,7 +15,9 @@ and __CPU_NOT_IDLE changed places. The size of the array is unchanged.
 
 Version 15 of schedstats dropped counters for some sched_yield:
 yld_exp_empty, yld_act_empty and yld_both_empty. Otherwise, it is
-identical to version 14.
+identical to version 14. Details are available at
+
+    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/scheduler/sched-stats.txt?id=1e1dbb259c79b
 
 Version 14 of schedstats includes support for sched_domains, which hit the
 mainline kernel in 2.6.20 although it is identical to the stats from version
@@ -26,7 +34,14 @@ cpus on the machine, while domain0 is the most tightly focused domain,
 sometimes balancing only between pairs of cpus. At this time, there are
 no architectures which need more than three domain levels. The first field
 in the domain stats is a bit map indicating which cpus are affected
-by that domain.
+by that domain. Details are available at
+
+    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/sched-stats.txt?id=b762f3ffb797c
+
+The schedstat documentation is maintained from version 10 onwards and was not
+updated for versions 11 and 12. The details for version 10 are available at
+
+    https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/sched-stats.txt?id=1da177e4c3f4
 
 These fields are counters, and only increment. Programs which make use of
 these will need to start with a baseline observation and then calculate
@@ -71,88 +86,97 @@ Domain statistics
 -----------------
 One of these is produced per domain for each cpu described. (Note that
 if CONFIG_SMP is not defined, *no* domains are utilized and these lines
-will not appear in the output.)
+will not appear in the output.) <name> is an extension to the domain field
+that prints the name of the corresponding sched domain. It appears in
+schedstat version 17 and above.
 
-domain<N> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
+domain<N> <name> <cpumask> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45
 
 The first field is a bit mask indicating what cpus this domain operates over.
 
-The next 24 are a variety of sched_balance_rq() statistics in grouped into types
-of idleness (idle, busy, and newly idle):
+The next 33 are a variety of sched_balance_rq() statistics grouped into types
+of idleness (busy, idle and newly idle):
 
     1) # of times in this domain sched_balance_rq() was called when the
+       cpu was busy
+    2) # of times in this domain sched_balance_rq() checked but found the
+       load did not require balancing when busy
+    3) # of times in this domain sched_balance_rq() tried to move one or
+       more tasks and failed, when the cpu was busy
+    4) Total imbalance in load when the cpu was busy
+    5) Total imbalance in utilization when the cpu was busy
+    6) Total imbalance in number of tasks when the cpu was busy
+    7) Total imbalance due to misfit tasks when the cpu was busy
+    8) # of times in this domain pull_task() was called when busy
+    9) # of times in this domain pull_task() was called even though the
+       target task was cache-hot when busy
+   10) # of times in this domain sched_balance_rq() was called but did not
+       find a busier queue while the cpu was busy
+   11) # of times in this domain a busier queue was found while the cpu
+       was busy but no busier group was found
+
+   12) # of times in this domain sched_balance_rq() was called when the
        cpu was idle
-    2) # of times in this domain sched_balance_rq() checked but found
+   13) # of times in this domain sched_balance_rq() checked but found
       the load did not require balancing when the cpu was idle
-    3) # of times in this domain sched_balance_rq() tried to move one or
+   14) # of times in this domain sched_balance_rq() tried to move one or
       more tasks and failed, when the cpu was idle
-    4) sum of imbalances discovered (if any) with each call to
-       sched_balance_rq() in this domain when the cpu was idle
-    5) # of times in this domain pull_task() was called when the cpu
+   15) Total imbalance in load when the cpu was idle
+   16) Total imbalance in utilization when the cpu was idle
+   17) Total imbalance in number of tasks when the cpu was idle
+   18) Total imbalance due to misfit tasks when the cpu was idle
+   19) # of times in this domain pull_task() was called when the cpu
       was idle
-    6) # of times in this domain pull_task() was called even though
+   20) # of times in this domain pull_task() was called even though
       the target task was cache-hot when idle
-    7) # of times in this domain sched_balance_rq() was called but did
+   21) # of times in this domain sched_balance_rq() was called but did
       not find a busier queue while the cpu was idle
-    8) # of times in this domain a busier queue was found while the
+   22) # of times in this domain a busier queue was found while the
       cpu was idle but no busier group was found
 
-    9) # of times in this domain sched_balance_rq() was called when the
-       cpu was busy
-   10) # of times in this domain sched_balance_rq() checked but found the
-       load did not require balancing when busy
-   11) # of times in this domain sched_balance_rq() tried to move one or
-       more tasks and failed, when the cpu was busy
-   12) sum of imbalances discovered (if any) with each call to
-       sched_balance_rq() in this domain when the cpu was busy
-   13) # of times in this domain pull_task() was called when busy
-   14) # of times in this domain pull_task() was called even though the
-       target task was cache-hot when busy
-   15) # of times in this domain sched_balance_rq() was called but did not
-       find a busier queue while the cpu was busy
-   16) # of times in this domain a busier queue was found while the cpu
-       was busy but no busier group was found
-   17) # of times in this domain sched_balance_rq() was called when the
+   23) # of times in this domain sched_balance_rq() was called when the
       cpu was just becoming idle
-   18) # of times in this domain sched_balance_rq() checked but found the
+   24) # of times in this domain sched_balance_rq() checked but found the
       load did not require balancing when the cpu was just becoming idle
-   19) # of times in this domain sched_balance_rq() tried to move one or more
+   25) # of times in this domain sched_balance_rq() tried to move one or more
       tasks and failed, when the cpu was just becoming idle
-   20) sum of imbalances discovered (if any) with each call to
-       sched_balance_rq() in this domain when the cpu was just becoming idle
-   21) # of times in this domain pull_task() was called when newly idle
-   22) # of times in this domain pull_task() was called even though the
+   26) Total imbalance in load when the cpu was just becoming idle
+   27) Total imbalance in utilization when the cpu was just becoming idle
+   28) Total imbalance in number of tasks when the cpu was just becoming idle
+   29) Total imbalance due to misfit tasks when the cpu was just becoming idle
+   30) # of times in this domain pull_task() was called when newly idle
+   31) # of times in this domain pull_task() was called even though the
       target task was cache-hot when just becoming idle
-   23) # of times in this domain sched_balance_rq() was called but did not
+   32) # of times in this domain sched_balance_rq() was called but did not
      find a busier queue while the cpu was just becoming idle
-   24) # of times in this domain a busier queue was found while the cpu
+   33) # of times in this domain a busier queue was found while the cpu
      was just becoming idle but no busier group was found
 
 Next three are active_load_balance() statistics:
 
-   25) # of times active_load_balance() was called
-   26) # of times active_load_balance() tried to move a task and failed
-   27) # of times active_load_balance() successfully moved a task
+   34) # of times active_load_balance() was called
+   35) # of times active_load_balance() tried to move a task and failed
+   36) # of times active_load_balance() successfully moved a task
 
 Next three are sched_balance_exec() statistics:
 
-   28) sbe_cnt is not used
-   29) sbe_balanced is not used
-   30) sbe_pushed is not used
+   37) sbe_cnt is not used
+   38) sbe_balanced is not used
+   39) sbe_pushed is not used
 
 Next three are sched_balance_fork() statistics:
 
-   31) sbf_cnt is not used
-   32) sbf_balanced is not used
-   33) sbf_pushed is not used
+   40) sbf_cnt is not used
+   41) sbf_balanced is not used
+   42) sbf_pushed is not used
 
 Next three are try_to_wake_up() statistics:
 
-   34) # of times in this domain try_to_wake_up() awoke a task that
+   43) # of times in this domain try_to_wake_up() awoke a task that
      last ran on a different cpu in this domain
-   35) # of times in this domain try_to_wake_up() moved a task to the
+   44) # of times in this domain try_to_wake_up() moved a task to the
      waking cpu because it was cache-cold on its own cpu anyway
-   36) # of times in this domain try_to_wake_up() started passive balancing
+   45) # of times in this domain try_to_wake_up() started passive balancing
 
 /proc/<pid>/schedstat
 ---------------------
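Editor's note: with the renumbering above, a version-17 domain line carries a name, a
cpumask and 45 counters. The sketch below parses /proc/schedstat accordingly and
reports the busy-balance imbalance sums (fields 4-7 in the list). The field offsets
follow this document; the variable names, the choice of fields to print and the
minimal error handling are chosen for the example only.

.. code-block:: c

   #include <stdio.h>
   #include <string.h>

   int main(void)
   {
       char line[4096];
       FILE *f = fopen("/proc/schedstat", "r");

       if (!f) {
           perror("/proc/schedstat");
           return 1;
       }

       while (fgets(line, sizeof(line), f)) {
           char dom[32], name[64], mask[256];
           unsigned long long v[45];
           int n, off;

           /* Only the domain<N> lines carry the 45-field layout. */
           if (strncmp(line, "domain", 6))
               continue;

           if (sscanf(line, "%31s %63s %255s%n", dom, name, mask, &off) != 3)
               continue;

           for (n = 0; n < 45; n++) {
               int used;

               if (sscanf(line + off, "%llu%n", &v[n], &used) != 1)
                   break;
               off += used;
           }
           if (n < 45)
               continue;   /* older schedstat version, skip */

           /* Fields 4-7: busy-balance imbalance totals (see list above). */
           printf("%s %s: busy imbalance load=%llu util=%llu task=%llu misfit=%llu\n",
                  dom, name, v[3], v[4], v[5], v[6]);
       }
       fclose(f);
       return 0;
   }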