author | Vikas Shivappa <vikas.shivappa@linux.intel.com> | 2017-08-16 04:00:43 +0300 |
---|---|---|
committer | Thomas Gleixner <tglx@linutronix.de> | 2017-08-16 13:05:41 +0300 |
commit | 24247aeeabe99eab13b798ccccc2dec066dd6f07 (patch) | |
tree | 5ae1fddc1d0ae510c0060783092a6a618add6103 /arch/x86/kernel/cpu/intel_rdt.c | |
parent | bbc4615e0b7df5e21d0991adb4b2798508354924 (diff) | |
download | linux-24247aeeabe99eab13b798ccccc2dec066dd6f07.tar.xz | |
x86/intel_rdt/cqm: Improve limbo list processing
During a mkdir, the entire limbo list is synchronously checked on each
package for free RMIDs by sending IPIs. With a large number of RMIDs (SKL
has 192) this creates an intolerable amount of work in IPIs.
Replace the IPI-based checking of the limbo list with asynchronous worker
threads on each package which periodically scan the limbo list and move the
RMIDs that have:
llc_occupancy < threshold_occupancy
on all packages to the free list.
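
As a rough illustration of that per-package scan (not the code added by this patch: struct pkg_state, rmid_table, free_rmids, read_llc_occupancy() and the threshold below are simplified placeholders), a delayed work item could walk the RMIDs still marked busy on one package and release those that have drained everywhere:

```c
#include <linux/bitops.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/workqueue.h>

/* Simplified stand-ins for the real RDT monitoring state */
struct rmid_entry {
        struct list_head list;
        int busy;                       /* packages where this RMID is still dirty */
};

struct pkg_state {
        unsigned long *busy_rmids;      /* RMIDs in limbo on this package */
        struct delayed_work limbo_work;
        int work_cpu;                   /* CPU that runs the scan for this package */
};

extern struct rmid_entry rmid_table[];
extern struct list_head free_rmids;
extern struct mutex rmid_lock;
extern unsigned int num_rmids;
extern u64 occupancy_threshold;
extern u64 read_llc_occupancy(struct pkg_state *pkg, unsigned int rmid);

static void limbo_scan(struct work_struct *work)
{
        struct pkg_state *pkg = container_of(to_delayed_work(work),
                                             struct pkg_state, limbo_work);
        unsigned int rmid;
        bool busy = false;

        mutex_lock(&rmid_lock);
        for_each_set_bit(rmid, pkg->busy_rmids, num_rmids) {
                if (read_llc_occupancy(pkg, rmid) >= occupancy_threshold) {
                        busy = true;
                        continue;
                }
                /* Drained on this package: clear it here ... */
                clear_bit(rmid, pkg->busy_rmids);
                /* ... and free the RMID once every package has drained it */
                if (--rmid_table[rmid].busy == 0)
                        list_add_tail(&rmid_table[rmid].list, &free_rmids);
        }
        mutex_unlock(&rmid_lock);

        /* Re-arm the worker only while this package still has RMIDs in limbo */
        if (busy)
                schedule_delayed_work_on(pkg->work_cpu, &pkg->limbo_work,
                                         msecs_to_jiffies(1000));
}
```

Because LLC occupancy can only be read from a CPU on the package being checked, the work is re-armed on pkg->work_cpu rather than on an arbitrary CPU.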
mkdir now returns -ENOSPC if both the free list and the limbo list are empty,
or -EBUSY if there are RMIDs on the limbo list and the free list is empty.
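
Sketched against the same placeholder structures as above (again illustrative, not the patch's own allocation path; limbo_count stands in for a count of RMIDs still draining on some package), the allocation side of that policy might look like:

```c
/* Reuses the placeholder rmid_table, free_rmids and rmid_lock from the
 * previous sketch; limbo_count is an illustrative counter of RMIDs that
 * are still draining on at least one package. */
extern unsigned int limbo_count;

static int alloc_rmid(void)
{
        struct rmid_entry *entry;

        lockdep_assert_held(&rmid_lock);

        if (list_empty(&free_rmids)) {
                /*
                 * -EBUSY: nothing free right now, but the limbo workers may
                 * still return RMIDs; -ENOSPC: genuinely out of RMIDs.
                 */
                return limbo_count ? -EBUSY : -ENOSPC;
        }

        entry = list_first_entry(&free_rmids, struct rmid_entry, list);
        list_del(&entry->list);

        return entry - rmid_table;      /* the RMID is the table index here */
}
```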
Getting rid of the IPIs also simplifies the data structures and the
serialization required for handling the lists.
[ tglx: Rewrote changelog ... ]
Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: ravi.v.shankar@intel.com
Cc: tony.luck@intel.com
Cc: fenghua.yu@intel.com
Cc: peterz@infradead.org
Cc: eranian@google.com
Cc: vikas.shivappa@intel.com
Cc: ak@linux.intel.com
Cc: davidcc@google.com
Link: http://lkml.kernel.org/r/1502845243-20454-3-git-send-email-vikas.shivappa@linux.intel.com
Diffstat (limited to 'arch/x86/kernel/cpu/intel_rdt.c')
-rw-r--r-- | arch/x86/kernel/cpu/intel_rdt.c | 31 |
1 file changed, 27 insertions, 4 deletions
```diff
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index b8dc141896b6..6935c8ecad7f 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -426,6 +426,7 @@ static int domain_setup_mon_state(struct rdt_resource *r, struct rdt_domain *d)
                                    GFP_KERNEL);
                 if (!d->rmid_busy_llc)
                         return -ENOMEM;
+                INIT_DELAYED_WORK(&d->cqm_limbo, cqm_handle_limbo);
         }
         if (is_mbm_total_enabled()) {
                 tsize = sizeof(*d->mbm_total);
@@ -536,11 +537,33 @@ static void domain_remove_cpu(int cpu, struct rdt_resource *r)
                 list_del(&d->list);
                 if (is_mbm_enabled())
                         cancel_delayed_work(&d->mbm_over);
+                if (is_llc_occupancy_enabled() && has_busy_rmid(r, d)) {
+                        /*
+                         * When a package is going down, forcefully
+                         * decrement rmid->ebusy. There is no way to know
+                         * that the L3 was flushed and hence may lead to
+                         * incorrect counts in rare scenarios, but leaving
+                         * the RMID as busy creates RMID leaks if the
+                         * package never comes back.
+                         */
+                        __check_limbo(d, true);
+                        cancel_delayed_work(&d->cqm_limbo);
+                }
+
                 kfree(d);
-        } else if (r == &rdt_resources_all[RDT_RESOURCE_L3] &&
-                   cpu == d->mbm_work_cpu && is_mbm_enabled()) {
-                cancel_delayed_work(&d->mbm_over);
-                mbm_setup_overflow_handler(d, 0);
+                return;
+        }
+
+        if (r == &rdt_resources_all[RDT_RESOURCE_L3]) {
+                if (is_mbm_enabled() && cpu == d->mbm_work_cpu) {
+                        cancel_delayed_work(&d->mbm_over);
+                        mbm_setup_overflow_handler(d, 0);
+                }
+                if (is_llc_occupancy_enabled() && cpu == d->cqm_work_cpu &&
+                    has_busy_rmid(r, d)) {
+                        cancel_delayed_work(&d->cqm_limbo);
+                        cqm_setup_limbo_handler(d, 0);
+                }
         }
 }
```
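
The hunks above rely on the kernel's delayed-work API: INIT_DELAYED_WORK() attaches the handler when the monitoring state is set up, and cancel_delayed_work() stops it when a domain or its designated CPU goes away. A minimal lifecycle sketch with a placeholder struct mon_domain and handler (not the real struct rdt_domain code) looks like:

```c
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

/* Placeholder per-package state; the real code keeps this in struct rdt_domain */
struct mon_domain {
        struct delayed_work limbo_work;
        int work_cpu;           /* CPU on this package that runs the scan */
};

static void limbo_handler(struct work_struct *work)
{
        struct mon_domain *dom = container_of(to_delayed_work(work),
                                              struct mon_domain, limbo_work);

        /* ... scan this package's limbo RMIDs, as sketched earlier ... */

        /* Re-arm on the same CPU so the occupancy reads hit the right package
         * (the "still busy" check is omitted here for brevity) */
        schedule_delayed_work_on(dom->work_cpu, &dom->limbo_work,
                                 msecs_to_jiffies(1000));
}

static void mon_domain_setup(struct mon_domain *dom, int cpu)
{
        dom->work_cpu = cpu;
        INIT_DELAYED_WORK(&dom->limbo_work, limbo_handler);
}

static void mon_domain_teardown(struct mon_domain *dom)
{
        /* Mirror the hunk above: stop the worker before the domain is freed */
        cancel_delayed_work(&dom->limbo_work);
}
```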