|
* for-next/lpa2-prep:
arm64: mm: get rid of kimage_vaddr global variable
arm64: mm: Take potential load offset into account when KASLR is off
arm64: kernel: Disable latent_entropy GCC plugin in early C runtime
arm64: Add ARM64_HAS_LPA2 CPU capability
arm64/mm: Add FEAT_LPA2 specific ID_AA64MMFR0.TGRAN[2]
arm64/mm: Update tlb invalidation routines for FEAT_LPA2
arm64/mm: Add lpa2_is_enabled() kvm_lpa2_is_enabled() stubs
arm64/mm: Modify range-based tlbi to decrement scale
|
|
Currently the detection+enablement of boot cpucaps is separate from the
patching of boot cpucap alternatives, which means there's a period where
cpus_have_cap($CAP) and alternative_has_cap($CAP) may be mismatched.
It would be preferable to manage the boot cpucaps in the same way as the
system cpucaps, both for clarity and to minimize the risk of accidental
usage of code relying upon an alternative which has not yet been
patched.
This patch aligns the handling of boot cpucaps with the handling of
system cpucaps:
* The existing setup_boot_cpu_capabilities() function is moved to be
closer to the setup_system_capabilities() and setup_system_features()
functions so that they're more clearly related and more likely to be
updated together in future.
* The patching of boot cpucap alternatives is moved into
setup_boot_cpu_capabilities(), immediately after boot cpucaps are
detected and enabled.
* A new setup_boot_cpu_features() function is added to mirror
setup_system_features(); this handles initialization of cpucap data
structures and calls setup_boot_cpu_capabilities(). This makes
init_cpu_features() a closer mirror to update_cpu_features(), and
makes smp_prepare_boot_cpu() a closer mirror to smp_cpus_done().
Importantly, while these changes alter the structure of the code, they
retain the existing order of calls to:
init_cpu_features(); // prefix initializing feature regs
init_cpucap_indirect_list();
detect_system_supports_pseudo_nmi();
update_cpu_capabilities(SCOPE_BOOT_CPU | SCOPE_LOCAL_CPU);
enable_cpu_capabilities(SCOPE_BOOT_CPU);
apply_boot_alternatives();
... and hence there should be no functional change as a result of this
patch; this is purely a structural cleanup.
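As a rough sketch of the resulting structure (simplified and partly hypothetical; only the call ordering above is taken from this description, not the exact kernel source):
| static void __init setup_boot_cpu_capabilities(void)
| {
|         /* Detect and enable boot cpucaps, then immediately patch the
|          * corresponding boot alternatives. */
|         update_cpu_capabilities(SCOPE_BOOT_CPU | SCOPE_LOCAL_CPU);
|         enable_cpu_capabilities(SCOPE_BOOT_CPU);
|         apply_boot_alternatives();
| }
|
| static void __init setup_boot_cpu_features(void)
| {
|         /* Mirror setup_system_features(): initialize cpucap data
|          * structures, then detect/enable/patch the boot cpucaps. */
|         init_cpucap_indirect_list();
|         detect_system_supports_pseudo_nmi();
|         setup_boot_cpu_capabilities();
| }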
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20231212170910.3745497-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Expose FEAT_LPA2 as a capability so that we can take advantage of
alternatives patching in the hypervisor.
Although FEAT_LPA2 presence is advertised separately for stage1 and
stage2, the expectation is that in practice both stages will either
support or not support it. Therefore, we combine both into a single
capability, allowing us to simplify the implementation. KVM requires
support in both stages in order to use LPA2 since the same library is
used for hyp stage 1 and guest stage 2 pgtables.
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20231127111737.1897081-6-ryan.roberts@arm.com
|
|
* for-next/cpus_have_const_cap: (38 commits)
: cpus_have_const_cap() removal
arm64: Remove cpus_have_const_cap()
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_REPEAT_TLBI
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_NVIDIA_CARMEL_CNP
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_CAVIUM_23154
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_2645198
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1742098
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_1542419
arm64: Avoid cpus_have_const_cap() for ARM64_WORKAROUND_843419
arm64: Avoid cpus_have_const_cap() for ARM64_UNMAP_KERNEL_AT_EL0
arm64: Avoid cpus_have_const_cap() for ARM64_{SVE,SME,SME2,FA64}
arm64: Avoid cpus_have_const_cap() for ARM64_SPECTRE_V2
arm64: Avoid cpus_have_const_cap() for ARM64_SSBS
arm64: Avoid cpus_have_const_cap() for ARM64_MTE
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_TLB_RANGE
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_WFXT
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_RNG
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_EPAN
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_PAN
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_GIC_PRIO_MASKING
arm64: Avoid cpus_have_const_cap() for ARM64_HAS_DIT
...
|
|
The AMU feature can be enabled on a subset of the cores in a system.
Because of that, it prints a message for each core as it is detected.
This becomes tedious when there are hundreds of cores. Instead, for
CPU features which can be enabled on a subset of the present cores,
let's wait until update_cpu_capabilities() and print the subset of cores
the feature was enabled on.
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Ionela Voinescu <ionela.voinescu@arm.com>
Tested-by: Ionela Voinescu <ionela.voinescu@arm.com>
Reviewed-by: Punit Agrawal <punit.agrawal@bytedance.com>
Tested-by: Punit Agrawal <punit.agrawal@bytedance.com>
Link: https://lore.kernel.org/r/20231017052322.1211099-2-jeremy.linton@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
There are no longer any users of cpus_have_const_cap(), and therefore it
can be removed.
Remove cpus_have_const_cap(). At the same time, remove
__cpus_have_const_cap(), as this is a trivial wrapper of
alternative_has_cap_unlikely(), which can be used directly instead.
The comment for __system_matches_cap() is updated to no longer refer to
cpus_have_const_cap(). As we have a number of ways to check the cpucaps,
the specific suggestions are removed.
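Illustratively (a sketch of the relationship described above, not a verbatim quote of the kernel sources):
| /* Before: a trivial wrapper around the alternative-based check. */
| static __always_inline bool __cpus_have_const_cap(int num)
| {
|         return alternative_has_cap_unlikely(num);
| }
|
| /* After: callers use alternative_has_cap_unlikely(num) directly. */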
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In system_supports_{sve,sme,sme2,fa64}() we use cpus_have_const_cap() to
check for the relevant cpucaps, but this is only necessary so that
sve_setup() and sme_setup() can run prior to alternatives being patched,
and otherwise alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
All of system_supports_{sve,sme,sme2,fa64}() will return false prior to
system cpucaps being detected. In the window between system cpucaps being
detected and patching alternatives, we need system_supports_sve() and
system_supports_sme() to run to initialize SVE and SME properties, but
all other users of system_supports_{sve,sme,sme2,fa64}() don't depend on
the relevant cpucap becoming true until alternatives are patched:
* No KVM code runs until after alternatives are patched, and so this can
safely use cpus_have_final_cap() or alternative_has_cap_*().
* The cpuid_cpu_online() callback in arch/arm64/kernel/cpuinfo.c is
registered later from cpuinfo_regs_init() as a device_initcall, and so
this can safely use cpus_have_final_cap() or alternative_has_cap_*().
* The entry, signal, and ptrace code isn't reachable until userspace has
run, and so this can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* Currently perf_reg_validate() will un-reserve the PERF_REG_ARM64_VG
pseudo-register before alternatives are patched, and before
sve_setup() has run. If a sampling event is created early enough, this
would allow perf_ext_reg_value() to sample (the as-yet uninitialized)
thread_struct::vl[] prior to alternatives being patched.
It would be preferable to defer this until alternatives are patched,
and this can safely use alternative_has_cap_*().
* The context-switch code will run during this window as part of
stop_machine() used during alternatives_patch_all(), and potentially
for other work if other kernel threads are created early. No threads
require the use of SVE/SME/SME2/FA64 prior to alternatives being
patched, and it would be preferable for the related context-switch
logic to take effect after alternatives are patched so that this is
guaranteed to see a consistent system-wide state (e.g. anything
initialized by sve_setup() and sme_setup()).
This can safely use alternative_has_cap_*().
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. The sve_setup() and sme_setup() functions are modified to
use cpus_have_cap() directly so that they can observe the cpucaps being
set prior to alternatives being patched.
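As an illustrative sketch of the resulting pattern (simplified, using the SVE cpucap; not a verbatim quote of the kernel sources):
| static __always_inline bool system_supports_sve(void)
| {
|         /* Alternative branch; reads as not-present until alternatives are patched. */
|         return alternative_has_cap_unlikely(ARM64_SVE);
| }
|
| void __init sve_setup(void)
| {
|         /* Runs before alternatives are patched, so test the bitmap directly. */
|         if (!cpus_have_cap(ARM64_SVE))
|                 return;
|         /* ... vector-length probing as before ... */
| }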
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In system_supports_mte() we use cpus_have_const_cap() to check for
ARM64_MTE, but this is not necessary and cpus_have_final_boot_cap()
would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_MTE cpucap is a boot cpu feature which is detected and patched
early on the boot CPU under smp_prepare_boot_cpu(). In the window
between detecting the ARM64_MTE cpucap and patching alternatives,
nothing depends on the ARM64_MTE cpucap:
* The kasan_hw_tags_enabled() helper depends upon the kasan_flag_enabled
static key, which is initialized later in kasan_init_hw_tags() after
alternatives have been applied.
* No KVM code is called during this window, and KVM is not initialized
until after system cpucaps have been detected and patched. KVM code
can safely use cpus_have_final_cap() or alternative_has_cap_*().
* We don't context-switch prior to patching boot alternatives, and thus
mte_thread_switch() is not reachable during this window. Thus, we can
safely use cpus_have_final_boot_cap() or alternative_has_cap_*() in
the context-switch code.
* IRQ and FIQ are masked during this window, and we can only take SError
and Debug exceptions. SError exceptions are fatal at this point in
time, and we do not expect to take Debug exceptions, thus:
- It's fine to leave TCO set for exceptions taken during this window,
and mte_disable_tco_entry() doesn't need to do anything.
- We don't need to detect and report asynchronous tag check faults
during this window, and neither mte_check_tfsr_entry() nor
mte_check_tfsr_exit() need to do anything.
Since we want to report any SErrors taken during this window, these
cannot safely use cpus_have_final_boot_cap() or cpus_have_final_cap(),
but these can safely use alternative_has_cap_*().
* The __set_pte_at() function is not used during this window. It is
possible for this to be used on kernel mappings prior to boot cpucaps
being finalized, so this cannot safely use cpus_have_final_boot_cap()
or cpus_have_final_cap(), but this can safely use
alternative_has_cap_*().
* No userspace translation tables have been created yet, and swap has
not been initialized yet. Thus swapping is not possible and none of
the following are called:
- arch_thp_swp_supported()
- arch_prepare_to_swap()
- arch_swap_invalidate_page()
- arch_swap_invalidate_area()
- arch_swap_restore()
These can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* The elfcore functions are only reachable after userspace is brought
up, which happens after system cpucaps have been detected and patched.
Thus the elfcore code can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* Hibernation is only possible after userspace is brought up, which
happens after system cpucaps have been detected and patched. Thus the
hibernate code can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* The set_tagged_addr_ctrl() function is only reachable after userspace
is brought up, which happens after system cpucaps have been detected
and patched. Thus this can safely use cpus_have_final_cap() or
alternative_has_cap_*().
* The copy_user_highpage() and copy_highpage() functions are not used
during this window, and can safely use alternative_has_cap_*().
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which avoids generating code to test the
system_cpucaps bitmap and should be better for all subsequent calls at
runtime.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
We use cpus_have_const_cap() to check for ARM64_HAS_TLB_RANGE, but this
is not necessary and alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
In the window between detecting the ARM64_HAS_TLB_RANGE cpucap and
patching alternative branches, we do not perform any TLB invalidation,
and even if we were to perform TLB invalidation here it would not be
functionally necessary to optimize this by using range invalidation.
Hence there's no need to use cpus_have_const_cap(), and
alternative_has_cap_unlikely() is sufficient.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In system_uses_hw_pan() we use cpus_have_const_cap() to check for
ARM64_HAS_PAN, but this is only necessary so that the
system_uses_ttbr0_pan() check in setup_cpu_features() can run prior to
alternatives being patched, and otherwise this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_HAS_PAN cpucap is used by system_uses_hw_pan() and
system_uses_ttbr0_pan() depending on whether CONFIG_ARM64_SW_TTBR0_PAN
is selected, and:
* We only use system_uses_hw_pan() directly in __sdei_handler(), which
isn't reachable until after alternatives have been patched, and for
this it is safe to use alternative_has_cap_*().
* We use system_uses_ttbr0_pan() in a few places:
- In check_and_switch_context() and cpu_uninstall_idmap(), which will
defer installing a translation table into TTBR0 when the
ARM64_HAS_PAN cpucap is not detected.
Prior to patching alternatives, all CPUs will be using init_mm with
the reserved ttbr0 translation tables installed in TTBR0, so these can
safely use alternative_has_cap_*().
- In update_saved_ttbr0(), which will only save the active TTBR0 into
a per-thread variable when the ARM64_HAS_PAN cpucap is not detected.
Prior to patching alternatives, all CPUs will be using init_mm with
the reserved ttbr0 translation tables installed in TTBR0, so these can
safely use alternative_has_cap_*().
- In efi_set_pgd(), which will handle check_and_switch_context()
deferring the installation of TTBR0 when TTBR0 PAN is detected.
The EFI runtime services are not initialized until after
alternatives have been patched, and so this can safely use
alternative_has_cap_*() or cpus_have_final_cap().
- In uaccess_ttbr0_disable() and uaccess_ttbr0_enable(), where we'll
avoid installing/uninstalling a translation table in TTBR0 when
ARM64_HAS_PAN is detected.
Prior to patching alternatives we will not perform any uaccess and
will not call uaccess_ttbr0_disable() or uaccess_ttbr0_enable(), and
so these can safely use alternative_has_cap_*() or
cpus_have_final_cap().
- In is_el1_permission_fault() where we will consider a translation
fault on a TTBR0 address to be a permission fault when ARM64_HAS_PAN
is not detected *and* we have set the PAN bit in the SPSR (which
tells us that in the interrupted context, TTBR0 pointed at the
reserved zero ttbr).
In the window between detecting system cpucaps and patching
alternatives we should not perform any accesses to TTBR0 addresses,
and no userspace translation tables exist until after patching
alternatives. Thus it is safe for this to use alternative_has_cap*().
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime.
So that the check for TTBR0 PAN in setup_cpu_features() can run prior to
alternatives being patched, the call to system_uses_ttbr0_pan() is
replaced with an explicit check of the ARM64_HAS_PAN bit in the
system_cpucaps bitmap.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In system_uses_irq_prio_masking() we use cpus_have_const_cap() to check
for ARM64_HAS_GIC_PRIO_MASKING, but this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
When CONFIG_ARM64_PSEUDO_NMI=y the ARM64_HAS_GIC_PRIO_MASKING cpucap is
a strict boot cpu feature which is detected and patched early on the
boot cpu, which both happen in smp_prepare_boot_cpu(). In the window
between the ARM64_HAS_GIC_PRIO_MASKING cpucap being detected and
alternatives being patched, we don't run any code that depends upon the
ARM64_HAS_GIC_PRIO_MASKING cpucap:
* We leave DAIF.IF set until after boot alternatives are patched, and
interrupts are unmasked later in init_IRQ(), so we cannot reach
IRQ/FIQ entry code and will not use irqs_priority_unmasked().
* We don't call any code which uses arm_cpuidle_save_irq_context() and
arm_cpuidle_restore_irq_context() during this window.
* We don't call start_thread_common() during this window.
* The local_irq_*() code in <asm/irqflags.h> depends solely on an
alternative branch since commit:
a5f61cc636f48bdf ("arm64: irqflags: use alternative branches for pseudo-NMI logic")
... and hence will use the default (DAIF-only) masking behaviour until
alternatives are patched.
* Secondary CPUs are brought up later after alternatives are patched,
and alternatives are patched on the boot CPU immediately prior to
calling init_gic_priority_masking(), so we'll correctly initialize
interrupt masking regardless.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which avoids generating code to test the
system_cpucaps bitmap and should be better for all subsequent calls at
runtime. As this makes system_uses_irq_prio_masking() equivalent to
__irqflags_uses_pmr(), the latter is removed and replaced with the
former for consistency.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In system_supports_cnp() we use cpus_have_const_cap() to check for
ARM64_HAS_CNP, but this is only necessary so that the cpu_enable_cnp()
callback can run prior to alternatives being patched, and otherwise this
is not necessary and alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The cpu_enable_cnp() callback is run immediately after the ARM64_HAS_CNP
cpucap is detected system-wide under setup_system_capabilities(), prior
to alternatives being patched. During this window cpu_enable_cnp() uses
cpu_replace_ttbr1() to set the CNP bit for the swapper_pg_dir in TTBR1.
No other users of the ARM64_HAS_CNP cpucap need the up-to-date value
during this window:
* As KVM isn't initialized yet, kvm_get_vttbr() isn't reachable.
* As cpuidle isn't initialized yet, __cpu_suspend_exit() isn't
reachable.
* At this point all CPUs are using the swapper_pg_dir with a reserved
ASID in TTBR1, and the idmap_pg_dir in TTBR0, so neither
check_and_switch_context() nor cpu_do_switch_mm() need to do anything
special.
This patch replaces the use of cpus_have_const_cap() with
alternative_has_cap_unlikely(), which will avoid generating code to test
the system_cpucaps bitmap and should be better for all subsequent calls
at runtime. To allow cpu_enable_cnp() to function prior to alternatives
being patched, cpu_replace_ttbr1() is split into cpu_replace_ttbr1() and
cpu_enable_swapper_cnp(), with the former only used for early TTBR1
replacement, and the latter used by both cpu_enable_cnp() and
__cpu_suspend_exit().
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Vladimir Murzin <vladimir.murzin@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In system_supports_bti() we use cpus_have_const_cap() to check for
ARM64_HAS_BTI, but this is not necessary and alternative_has_cap_*() or
cpus_have_final_*cap() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
When CONFIG_ARM64_BTI_KERNEL=y, the ARM64_HAS_BTI cpucap is a strict
boot cpu feature which is detected and patched early on the boot cpu.
All uses guarded by CONFIG_ARM64_BTI_KERNEL happen after the boot CPU
has detected ARM64_HAS_BTI and patched boot alternatives, and hence can
safely use alternative_has_cap_*() or cpus_have_final_boot_cap().
Regardless of CONFIG_ARM64_BTI_KERNEL, all other uses of ARM64_HAS_BTI
happen after system capabilities have been finalized and alternatives
have been patched. Hence these can safely use alternative_has_cap_*() or
cpus_have_final_cap().
This patch splits system_supports_bti() into system_supports_bti() and
system_supports_bti_kernel(), with the former handling where the cpucap
affects userspace functionality, and the latter handling where the
cpucap affects kernel functionality. The use of cpus_have_const_cap() is
replaced by cpus_have_final_cap() in system_supports_bti(), and
cpus_have_final_boot_cap() in system_supports_bti_kernel(). This will
avoid generating code to test the system_cpucaps bitmap and should be
better for all subsequent calls at runtime. The use of
cpus_have_final_cap() and cpus_have_final_boot_cap() will make it easier
to spot if code is changed such that these run before the ARM64_HAS_BTI
cpucap is guaranteed to have been finalized.
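An illustrative sketch of the split (cpucap and config names follow the commit text above; not a verbatim quote of the kernel sources):
| static __always_inline bool system_supports_bti(void)
| {
|         /* Userspace-facing checks: system cpucaps are finalized by then. */
|         return cpus_have_final_cap(ARM64_HAS_BTI);
| }
|
| static __always_inline bool system_supports_bti_kernel(void)
| {
|         /* Kernel-facing checks: strict boot cpucap, finalized early. */
|         return IS_ENABLED(CONFIG_ARM64_BTI_KERNEL) &&
|                cpus_have_final_boot_cap(ARM64_HAS_BTI);
| }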
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In system_supports_address_auth() and system_supports_generic_auth() we
use cpus_have_const_cap() to check for ARM64_HAS_ADDRESS_AUTH and
ARM64_HAS_GENERIC_AUTH respectively, but this is not necessary and
alternative_has_cap_*() would be preferable.
For historical reasons, cpus_have_const_cap() is more complicated than
it needs to be. Before cpucaps are finalized, it will perform a bitmap
test of the system_cpucaps bitmap, and once cpucaps are finalized it
will use an alternative branch. This used to be necessary to handle some
race conditions in the window between cpucap detection and the
subsequent patching of alternatives and static branches, where different
branches could be out-of-sync with one another (or w.r.t. alternative
sequences). Now that we use alternative branches instead of static
branches, these are all patched atomically w.r.t. one another, and there
are only a handful of cases that need special care in the window between
cpucap detection and alternative patching.
Due to the above, it would be nice to remove cpus_have_const_cap(), and
migrate callers over to alternative_has_cap_*(), cpus_have_final_cap(),
or cpus_have_cap() depending on their requirements. This will
remove redundant instructions and improve code generation, and will make
it easier to determine how each callsite will behave before, during, and
after alternative patching.
The ARM64_HAS_ADDRESS_AUTH cpucap is a boot cpu feature which is
detected and patched early on the boot CPU before any pointer
authentication keys are enabled via their respective SCTLR_ELx.EN* bits.
Nothing which uses system_supports_address_auth() is called before the
boot alternatives are patched. Thus it is safe for
system_supports_address_auth() to use cpus_have_final_boot_cap() to
check for ARM64_HAS_ADDRESS_AUTH.
The ARM64_HAS_GENERIC_AUTH cpucap is a system feature which is detected
on all CPUs, then finalized and patched under
setup_system_capabilities(). We use system_supports_generic_auth() in a
few places:
* The pac_generic_keys_get() and pac_generic_keys_set() functions are
only reachable from system calls once userspace is up and running. As
cpucaps are finalized long before userspace runs, these can safely use
alternative_has_cap_*() or cpus_have_final_cap().
* The ptrauth_prctl_reset_keys() function is only reachable from system
calls once userspace is up and running. As cpucaps are finalized long
before userspace runs, this can safely use alternative_has_cap_*() or
cpus_have_final_cap().
* The ptrauth_keys_install_user() function is used during
context-switch. This is called prior to alternatives being applied,
and so cannot use cpus_have_final_cap(), but as this only needs to
switch the APGA key for userspace tasks, it's safe to use
alternative_has_cap_*().
* The ptrauth_keys_init_user() function is used to initialize userspace
keys, and is only reachable after system cpucaps have been finalized
and patched. Thus this can safely use alternative_has_cap_*() or
cpus_have_final_cap().
* The system_has_full_ptr_auth() helper function is only used by KVM
code, which is only reachable after system cpucaps have been finalized
and patched. Thus this can safely use alternative_has_cap_*() or
cpus_have_final_cap().
This patch modifies system_supports_address_auth() to use
cpus_have_final_boot_cap() to check ARM64_HAS_ADDRESS_AUTH, and modifies
system_supports_generic_auth() to use alternative_has_cap_unlikely() to
check ARM64_HAS_GENERIC_AUTH. In either case this will avoid generating
code to test the system_cpucaps bitmap and should be better for all
subsequent calls at runtime. The use of cpus_have_final_boot_cap() will
make it easier to spot if code is changed such that these run before
the relevant cpucap is guaranteed to have been finalized.
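Sketched out, the resulting checks look roughly like this (illustrative only):
| static __always_inline bool system_supports_address_auth(void)
| {
|         /* Strict boot cpucap, detected and patched early on the boot CPU. */
|         return cpus_have_final_boot_cap(ARM64_HAS_ADDRESS_AUTH);
| }
|
| static __always_inline bool system_supports_generic_auth(void)
| {
|         /* Reachable from context-switch before system alternatives are
|          * patched, so use an alternative branch rather than a final-cap check. */
|         return alternative_has_cap_unlikely(ARM64_HAS_GENERIC_AUTH);
| }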
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Currently we have a negative cpucap which describes the *absence* of
FP/SIMD rather than *presence* of FP/SIMD. This largely works, but is
somewhat awkward relative to other cpucaps that describe the presence of
a feature, and it would be nicer to have a cpucap which describes the
presence of FP/SIMD:
* This will allow the cpucap to be treated as a standard
ARM64_CPUCAP_SYSTEM_FEATURE, which can be detected with the standard
has_cpuid_feature() function and ARM64_CPUID_FIELDS() description.
* This ensures that the cpucap will only transition from not-present to
present, reducing the risk of unintentional and/or unsafe usage of
FP/SIMD before cpucaps are finalized.
* This will allow using arm64_cpu_capabilities::cpu_enable() to enable
the use of FP/SIMD later, with FP/SIMD being disabled at boot time
otherwise. This will ensure that any unintentional and/or unsafe usage
of FP/SIMD prior to this is trapped, and will ensure that FP/SIMD is
never unintentionally enabled for userspace in mismatched big.LITTLE
systems.
This patch replaces the negative ARM64_HAS_NO_FPSIMD cpucap with a
positive ARM64_HAS_FPSIMD cpucap, making changes as described above.
Note that as FP/SIMD will now be trapped when not supported system-wide,
do_fpsimd_acc() must handle these traps in the same way as for SVE and
SME. The commentary in fpsimd_restore_current_state() is updated to
describe the new scheme.
No users of system_supports_fpsimd() need to know that FP/SIMD is
available prior to alternatives being patched, so this is updated to
use alternative_has_cap_likely() to check for the ARM64_HAS_FPSIMD
cpucap, without generating code to test the system_cpucaps bitmap.
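Sketched out (illustrative), the check becomes a single alternative branch:
| static __always_inline bool system_supports_fpsimd(void)
| {
|         /* Reads as not-present until the ARM64_HAS_FPSIMD alternative is patched. */
|         return alternative_has_cap_likely(ARM64_HAS_FPSIMD);
| }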
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Currently setup_cpu_features() handles a mixture of one-time kernel
feature setup (e.g. cpucaps) and one-time user feature setup (e.g. ELF
hwcaps). Subsequent patches will rework other one-time setup and expand
the logic currently in setup_cpu_features(), and in preparation for this
it would be helpful to split the kernel and user setup into separate
functions.
This patch splits setup_user_features() out of setup_cpu_features(),
with a few additional cleanups of note:
* setup_cpu_features() is renamed to setup_system_features() to make it
clear that it handles system-wide feature setup rather than cpu-local
feature setup.
* setup_system_capabilities() is folded into setup_system_features().
* Presence of TTBR0 PAN is logged immediately after
update_cpu_capabilities(), so that this is guaranteed to appear
alongside all the other detected system cpucaps.
* The 'cwg' variable is removed as its value is only consumed once and
it's simpler to use cache_type_cwg() directly without assigning its
return value to a variable.
* The call to setup_user_features() is moved after alternatives are
patched, which will allow user feature setup code to depend on
alternative branches and allow for simplifications in subsequent
patches.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
The cpus_have_final_cap() function can be used to test a cpucap while
also verifying that we do not consume the cpucap until system
capabilities have been finalized. It would be helpful if we could do
likewise for boot cpucaps.
This patch adds a new cpus_have_final_boot_cap() helper which can be
used to test a cpucap while also verifying that boot capabilities have
been finalized. Users will be added in subsequent patches.
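A minimal sketch of the new helper, assuming a boot-caps-finalized predicate analogous to the existing system-wide one (the predicate's name here is an assumption, not a quote of the kernel source):
| static __always_inline bool cpus_have_final_boot_cap(int num)
| {
|         /* Complain loudly if consumed before boot cpucaps are finalized. */
|         if (boot_capabilities_finalized())
|                 return alternative_has_cap_unlikely(num);
|         else
|                 BUG();
| }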
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Many cpucaps can only be set when certain CONFIG_* options are selected,
and we need to check the CONFIG_* option before the cap in order to
avoid generating redundant code. Due to this, we have a growing number
of helpers in <asm/cpufeature.h> of the form:
| static __always_inline bool system_supports_foo(void)
| {
| return IS_ENABLED(CONFIG_ARM64_FOO) &&
| cpus_have_const_cap(ARM64_HAS_FOO);
| }
This is unfortunate as it forces us to use cpus_have_const_cap()
unnecessarily, resulting in redundant code being generated by the
compiler. In the vast majority of cases, we only require that feature
checks indicate the presence of a feature after cpucaps have been
finalized, and so it would be sufficient to use alternative_has_cap_*().
However some code needs to handle a feature before alternatives have
been patched, and must test the system_cpucaps bitmap via
cpus_have_const_cap(). In other cases we'd like to check for
unintentional usage of a cpucap before alternatives are patched, and so
it would be preferable to use cpus_have_final_cap().
Placing the IS_ENABLED() checks in each callsite is tedious and
error-prone, and the same applies for writing wrappers for each
combination of cpucap and alternative_has_cap_*() / cpus_have_cap() /
cpus_have_final_cap(). It would be nicer if we could centralize the
knowledge of which cpucaps are possible, and have
alternative_has_cap_*(), cpus_have_cap(), and cpus_have_final_cap()
handle this automatically.
This patch adds a new cpucap_is_possible() function which will be
responsible for checking the CONFIG_* option, and updates the low-level
cpucap checks to use this. The existing CONFIG_* checks in
<asm/cpufeature.h> are moved over to cpucap_is_possible(), but the (now
trivial) wrapper functions are retained for now.
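A sketch of the idea (abbreviated, reusing the FOO placeholder from the quoted example rather than a real cpucap):
| static __always_inline bool cpucap_is_possible(const unsigned int cap)
| {
|         compiletime_assert(__builtin_constant_p(cap),
|                            "cap must be a constant");
|
|         switch (cap) {
|         case ARM64_HAS_FOO:
|                 return IS_ENABLED(CONFIG_ARM64_FOO);
|         /* ... one case per CONFIG_*-gated cpucap ... */
|         }
|
|         return true;
| }
With the low-level cap checks consulting cpucap_is_possible() first, the per-feature wrappers no longer need their own IS_ENABLED() checks, and code for cpucaps that cannot be configured in can still be discarded at compile time.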
There should be no functional change as a result of this patch alone.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
ClearBHB support is indicated by the CLRBHB field in ID_AA64ISAR2_EL1.
Following some refactoring the kernel incorrectly checks the BC field
instead. Fix the detection to use the right field.
(Note: The original ClearBHB support had it as FTR_HIGHER_SAFE, but this
patch uses FTR_LOWER_SAFE, which seems more correct.)
Also fix the detection of BC (hinted conditional branches) to use
FTR_LOWER_SAFE, so that it is not reported on mismatched systems.
Fixes: 356137e68a9f ("arm64/sysreg: Make BHB clear feature defines match the architecture")
Fixes: 8fcc8285c0e3 ("arm64/sysreg: Convert ID_AA64ISAR2_EL1 to automatic generation")
Cc: stable@vger.kernel.org
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230912133429.2606875-1-kristina.martsenko@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Pull kvm updates from Paolo Bonzini:
"ARM64:
- Eager page splitting optimization for dirty logging, optionally
allowing for a VM to avoid the cost of hugepage splitting in the
stage-2 fault path.
- Arm FF-A proxy for pKVM, allowing a pKVM host to safely interact
with services that live in the Secure world. pKVM intervenes on
FF-A calls to guarantee the host doesn't misuse memory donated to
the hyp or a pKVM guest.
- Support for running the split hypervisor with VHE enabled, known as
'hVHE' mode. This is extremely useful for testing the split
hypervisor on VHE-only systems, and paves the way for new use cases
that depend on having two TTBRs available at EL2.
- Generalized framework for configurable ID registers from userspace.
KVM/arm64 currently prevents arbitrary CPU feature set
configuration from userspace, but the intent is to relax this
limitation and allow userspace to select a feature set consistent
with the CPU.
- Enable the use of Branch Target Identification (FEAT_BTI) in the
hypervisor.
- Use a separate set of pointer authentication keys for the
hypervisor when running in protected mode, as the host is untrusted
at runtime.
- Ensure timer IRQs are consistently released in the init failure
paths.
- Avoid trapping CTR_EL0 on systems with Enhanced Virtualization
Traps (FEAT_EVT), as it is a register commonly read from userspace.
- Erratum workaround for the upcoming AmpereOne part, which has
broken hardware A/D state management.
RISC-V:
- Redirect AMO load/store misaligned traps to KVM guest
- Trap-n-emulate AIA in-kernel irqchip for KVM guest
- Svnapot support for KVM Guest
s390:
- New uvdevice secret API
- CMM selftest and fixes
- fix racy access to target CPU for diag 9c
x86:
- Fix missing/incorrect #GP checks on ENCLS
- Use standard mmu_notifier hooks for handling APIC access page
- Drop now unnecessary TR/TSS load after VM-Exit on AMD
- Print more descriptive information about the status of SEV and
SEV-ES during module load
- Add a test for splitting and reconstituting hugepages during and
after dirty logging
- Add support for CPU pinning in demand paging test
- Add support for AMD PerfMonV2, with a variety of cleanups and minor
fixes included along the way
- Add a "nx_huge_pages=never" option to effectively avoid creating NX
hugepage recovery threads (because nx_huge_pages=off can be toggled
at runtime)
- Move handling of PAT out of MTRR code and dedup SVM+VMX code
- Fix output of PIC poll command emulation when there's an interrupt
- Add a maintainer's handbook to document KVM x86 processes,
preferred coding style, testing expectations, etc.
- Misc cleanups, fixes and comments
Generic:
- Miscellaneous bugfixes and cleanups
Selftests:
- Generate dependency files so that partial rebuilds work as
expected"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (153 commits)
Documentation/process: Add a maintainer handbook for KVM x86
Documentation/process: Add a label for the tip tree handbook's coding style
KVM: arm64: Fix misuse of KVM_ARM_VCPU_POWER_OFF bit index
RISC-V: KVM: Remove unneeded semicolon
RISC-V: KVM: Allow Svnapot extension for Guest/VM
riscv: kvm: define vcpu_sbi_ext_pmu in header
RISC-V: KVM: Expose IMSIC registers as attributes of AIA irqchip
RISC-V: KVM: Add in-kernel virtualization of AIA IMSIC
RISC-V: KVM: Expose APLIC registers as attributes of AIA irqchip
RISC-V: KVM: Add in-kernel emulation of AIA APLIC
RISC-V: KVM: Implement device interface for AIA irqchip
RISC-V: KVM: Skeletal in-kernel AIA irqchip support
RISC-V: KVM: Set kvm_riscv_aia_nr_hgei to zero
RISC-V: KVM: Add APLIC related defines
RISC-V: KVM: Add IMSIC related defines
RISC-V: KVM: Implement guest external interrupt line management
KVM: x86: Remove PRIx* definitions as they are solely for user space
s390/uv: Update query for secret-UVCs
s390/uv: replace scnprintf with sysfs_emit
s390/uvdevice: Add 'Lock Secret Store' UVC
...
|
|
* kvm-arm64/configurable-id-regs:
: Configurable ID register infrastructure, courtesy of Jing Zhang
:
: Create generalized infrastructure for allowing userspace to select the
: supported feature set for a VM, so long as the feature set is a subset
: of what hardware + KVM allows. This does not add any new features that
: are user-configurable, and instead focuses on the necessary refactoring
: to enable future work.
:
: As a consequence of the series, feature asymmetry is now deliberately
: disallowed for KVM. It is unlikely that VMMs ever configured VMs with
: asymmetry, nor does it align with the kernel's overall stance that
: features must be uniform across all cores in the system.
:
: Furthermore, KVM incorrectly advertised an IMP_DEF PMU to guests for
: some time. Migrations from affected kernels were supported by explicitly
: allowing such an ID register value from userspace, and forwarding that
: along to the guest. KVM now allows an IMP_DEF PMU version to be restored
: through the ID register interface, but reinterprets the user value as
: not implemented (0).
KVM: arm64: Rip out the vestiges of the 'old' ID register scheme
KVM: arm64: Handle ID register reads using the VM-wide values
KVM: arm64: Use generic sanitisation for ID_AA64PFR0_EL1
KVM: arm64: Use generic sanitisation for ID_(AA64)DFR0_EL1
KVM: arm64: Use arm64_ftr_bits to sanitise ID register writes
KVM: arm64: Save ID registers' sanitized value per guest
KVM: arm64: Reuse fields of sys_reg_desc for idreg
KVM: arm64: Rewrite IMPDEF PMU version as NI
KVM: arm64: Make vCPU feature flags consistent VM-wide
KVM: arm64: Relax invariance of KVM_ARM_VCPU_POWER_OFF
KVM: arm64: Separate out feature sanitisation and initialisation
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
|
|
Rather than reinventing the wheel in KVM to do ID register sanitisation
we can rely on the work already done in the core kernel. Implement a
generalized sanitisation of ID registers based on the combination of the
arm64_ftr_bits definitions from the core kernel and (optionally) a set
of KVM-specific overrides.
This all amounts to absolutely nothing for now, but will be used in
subsequent changes to realize user-configurable ID registers.
Signed-off-by: Jing Zhang <jingzhangos@google.com>
Link: https://lore.kernel.org/r/20230609190054.1542113-8-oliver.upton@linux.dev
[Oliver: split off from monster patch, rewrote commit description,
reworked RAZ handling, return EINVAL to userspace]
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
|
|
Expose a capability keying the hVHE feature as well as a new
predicate testing it. Nothing is so far using it, and nothing
is enabling it yet.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20230609162200.2024064-5-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
|
|
Disabling KASLR from the command line is implemented as a feature
override. Repaint it slightly so that it can further be used as
more generic infrastructure for SW override purposes.
Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20230609162200.2024064-4-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
|
|
We only use cpus_set_cap() in update_cpu_capabilities(), where we
open-code an analogous update to boot_cpucaps.
Due to the way the cpucap_ptrs[] array is initialized, we know that the
capability number cannot be greater than or equal to ARM64_NCAPS, so the
warning is superfluous.
Fold cpus_set_cap() into update_cpu_capabilities(), matching what we do
for the boot_cpucaps, and making the relationship between the two a bit
clearer.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230607164846.3967305-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
To more clearly align the various users of the cpucap enumeration, this patch
changes the alternative code to use the term `cpucap` in favour of `feature`.
The alternative_has_feature_{likely,unlikely}() functions are renamed to
alternative_has_cap_{likely,unlikely}() to more clearly align with the
cpus_have_{const_,}cap() helpers.
At the same time remove the stale comment referring to the "ARM64_CB
bit", which is evidently a typo for ARM64_CB_PATCH, which was removed in
commit:
4c0bd995d73ed889 ("arm64: alternatives: have callbacks take a cap")
There should be no functional change as a result of this patch; this is
purely a renaming exercise.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230607164846.3967305-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
The 'cpu_hwcaps' and 'boot_capabilities' bitmaps have the
same enumerated bits, but are named wildly differently for no good
reason. The terms 'hwcaps' and 'capabilities' have become ambiguous over
time (e.g. due to clashes with ELF hwcaps and the structures used to
manage feature detection), and it would be nicer to use 'cpucaps',
matching the <asm/cpucaps.h> header the enumerated bit indices are
defined in.
While this isn't a functional problem, it makes the code harder than
necessary to understand, and hard to extend with related functionality
(e.g. per-cpu cpucap bitmaps).
To that end, this patch renames `boot_capabilities` to `boot_cpucaps`
and `cpu_hwcaps` to `system_cpucaps`. This more clearly indicates the
relationship between the two and aligns with terminology used elsewhere
in our feature management code.
This change was scripted with:
| find . -type f -name '*.[chS]' -print0 | \
| xargs -0 sed -i 's/\<boot_capabilities\>/boot_cpucaps/'
| find . -type f -name '*.[chS]' -print0 | \
| xargs -0 sed -i 's/\<cpu_hwcaps\>/system_cpucaps/'
... and the instance of "cpu_hwcap" (without a trailing "s") in
<asm/mmu_context.h> corrected manually to "system_cpucaps".
Subsequent patches will adjust the naming of related functions to better
align with the `cpucap` naming.
There should be no functional change as a result of this patch; this is
purely a renaming exercise.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230607164846.3967305-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
'for-next/misc', 'for-next/sme2', 'for-next/tpidr2', 'for-next/scs', 'for-next/compat-hwcap', 'for-next/ftrace', 'for-next/efi-boot-mmu-on', 'for-next/ptrauth' and 'for-next/pseudo-nmi', remote-tracking branch 'arm64/for-next/perf' into for-next/core
* arm64/for-next/perf:
perf: arm_spe: Print the version of SPE detected
perf: arm_spe: Add support for SPEv1.2 inverted event filtering
perf: Add perf_event_attr::config3
drivers/perf: fsl_imx8_ddr_perf: Remove set-but-not-used variable
perf: arm_spe: Support new SPEv1.2/v8.7 'not taken' event
perf: arm_spe: Use new PMSIDR_EL1 register enums
perf: arm_spe: Drop BIT() and use FIELD_GET/PREP accessors
arm64/sysreg: Convert SPE registers to automatic generation
arm64: Drop SYS_ from SPE register defines
perf: arm_spe: Use feature numbering for PMSEVFR_EL1 defines
perf/marvell: Add ACPI support to TAD uncore driver
perf/marvell: Add ACPI support to DDR uncore driver
perf/arm-cmn: Reset DTM_PMU_CONFIG at probe
drivers/perf: hisi: Extract initialization of "cpa_pmu->pmu"
drivers/perf: hisi: Simplify the parameters of hisi_pmu_init()
drivers/perf: hisi: Advertise the PERF_PMU_CAP_NO_EXCLUDE capability
* for-next/sysreg:
: arm64 sysreg and cpufeature fixes/updates
KVM: arm64: Use symbolic definition for ISR_EL1.A
arm64/sysreg: Add definition of ISR_EL1
arm64/sysreg: Add definition for ICC_NMIAR1_EL1
arm64/cpufeature: Remove 4 bit assumption in ARM64_FEATURE_MASK()
arm64/sysreg: Fix errors in 32 bit enumeration values
arm64/cpufeature: Fix field sign for DIT hwcap detection
* for-next/sme:
: SME-related updates
arm64/sme: Optimise SME exit on syscall entry
arm64/sme: Don't use streaming mode to probe the maximum SME VL
arm64/ptrace: Use system_supports_tpidr2() to check for TPIDR2 support
* for-next/kselftest: (23 commits)
: arm64 kselftest fixes and improvements
kselftest/arm64: Don't require FA64 for streaming SVE+ZA tests
kselftest/arm64: Copy whole EXTRA context
kselftest/arm64: Fix enumeration of systems without 128 bit SME for SSVE+ZA
kselftest/arm64: Fix enumeration of systems without 128 bit SME
kselftest/arm64: Don't require FA64 for streaming SVE tests
kselftest/arm64: Limit the maximum VL we try to set via ptrace
kselftest/arm64: Correct buffer size for SME ZA storage
kselftest/arm64: Remove the local NUM_VL definition
kselftest/arm64: Verify simultaneous SSVE and ZA context generation
kselftest/arm64: Verify that SSVE signal context has SVE_SIG_FLAG_SM set
kselftest/arm64: Remove spurious comment from MTE test Makefile
kselftest/arm64: Support build of MTE tests with clang
kselftest/arm64: Initialise current at build time in signal tests
kselftest/arm64: Don't pass headers to the compiler as source
kselftest/arm64: Remove redundant _start labels from FP tests
kselftest/arm64: Fix .pushsection for strings in FP tests
kselftest/arm64: Run BTI selftests on systems without BTI
kselftest/arm64: Fix test numbering when skipping tests
kselftest/arm64: Skip non-power of 2 SVE vector lengths in fp-stress
kselftest/arm64: Only enumerate power of two VLs in syscall-abi
...
* for-next/misc:
: Miscellaneous arm64 updates
arm64/mm: Intercept pfn changes in set_pte_at()
Documentation: arm64: correct spelling
arm64: traps: attempt to dump all instructions
arm64: Apply dynamic shadow call stack patching in two passes
arm64: el2_setup.h: fix spelling typo in comments
arm64: Kconfig: fix spelling
arm64: cpufeature: Use kstrtobool() instead of strtobool()
arm64: Avoid repeated AA64MMFR1_EL1 register read on pagefault path
arm64: make ARCH_FORCE_MAX_ORDER selectable
* for-next/sme2: (23 commits)
: Support for arm64 SME 2 and 2.1
arm64/sme: Fix __finalise_el2 SMEver check
kselftest/arm64: Remove redundant _start labels from zt-test
kselftest/arm64: Add coverage of SME 2 and 2.1 hwcaps
kselftest/arm64: Add coverage of the ZT ptrace regset
kselftest/arm64: Add SME2 coverage to syscall-abi
kselftest/arm64: Add test coverage for ZT register signal frames
kselftest/arm64: Teach the generic signal context validation about ZT
kselftest/arm64: Enumerate SME2 in the signal test utility code
kselftest/arm64: Cover ZT in the FP stress test
kselftest/arm64: Add a stress test program for ZT0
arm64/sme: Add hwcaps for SME 2 and 2.1 features
arm64/sme: Implement ZT0 ptrace support
arm64/sme: Implement signal handling for ZT
arm64/sme: Implement context switching for ZT0
arm64/sme: Provide storage for ZT0
arm64/sme: Add basic enumeration for SME2
arm64/sme: Enable host kernel to access ZT0
arm64/sme: Manually encode ZT0 load and store instructions
arm64/esr: Document ISS for ZT0 being disabled
arm64/sme: Document SME 2 and SME 2.1 ABI
...
* for-next/tpidr2:
: Include TPIDR2 in the signal context
kselftest/arm64: Add test case for TPIDR2 signal frame records
kselftest/arm64: Add TPIDR2 to the set of known signal context records
arm64/signal: Include TPIDR2 in the signal context
arm64/sme: Document ABI for TPIDR2 signal information
* for-next/scs:
: arm64: harden shadow call stack pointer handling
arm64: Stash shadow stack pointer in the task struct on interrupt
arm64: Always load shadow stack pointer directly from the task struct
* for-next/compat-hwcap:
: arm64: Expose compat ARMv8 AArch32 features (HWCAPs)
arm64: Add compat hwcap SSBS
arm64: Add compat hwcap SB
arm64: Add compat hwcap I8MM
arm64: Add compat hwcap ASIMDBF16
arm64: Add compat hwcap ASIMDFHM
arm64: Add compat hwcap ASIMDDP
arm64: Add compat hwcap FPHP and ASIMDHP
* for-next/ftrace:
: Add arm64 support for DYNAMIC_FTRACE_WITH_CALL_OPS
arm64: avoid executing padding bytes during kexec / hibernation
arm64: Implement HAVE_DYNAMIC_FTRACE_WITH_CALL_OPS
arm64: ftrace: Update stale comment
arm64: patching: Add aarch64_insn_write_literal_u64()
arm64: insn: Add helpers for BTI
arm64: Extend support for CONFIG_FUNCTION_ALIGNMENT
ACPI: Don't build ACPICA with '-Os'
Compiler attributes: GCC cold function alignment workarounds
ftrace: Add DYNAMIC_FTRACE_WITH_CALL_OPS
* for-next/efi-boot-mmu-on:
: Permit arm64 EFI boot with MMU and caches on
arm64: kprobes: Drop ID map text from kprobes blacklist
arm64: head: Switch endianness before populating the ID map
efi: arm64: enter with MMU and caches enabled
arm64: head: Clean the ID map and the HYP text to the PoC if needed
arm64: head: avoid cache invalidation when entering with the MMU on
arm64: head: record the MMU state at primary entry
arm64: kernel: move identity map out of .text mapping
arm64: head: Move all finalise_el2 calls to after __enable_mmu
* for-next/ptrauth:
: arm64 pointer authentication cleanup
arm64: pauth: don't sign leaf functions
arm64: unify asm-arch manipulation
* for-next/pseudo-nmi:
: Pseudo-NMI code generation optimisations
arm64: irqflags: use alternative branches for pseudo-NMI logic
arm64: add ARM64_HAS_GIC_PRIO_RELAXED_SYNC cpucap
arm64: make ARM64_HAS_GIC_PRIO_MASKING depend on ARM64_HAS_GIC_CPUIF_SYSREGS
arm64: rename ARM64_HAS_IRQ_PRIO_MASKING to ARM64_HAS_GIC_PRIO_MASKING
arm64: rename ARM64_HAS_SYSREG_GIC_CPUIF to ARM64_HAS_GIC_CPUIF_SYSREGS
|
|
Subsequent patches will add more GIC-related cpucaps. When we do so, it
would be nice to give them a consistent HAS_GIC_* prefix.
In preparation for doing so, this patch renames the existing
ARM64_HAS_IRQ_PRIO_MASKING cap to ARM64_HAS_GIC_PRIO_MASKING.
The cpucaps file was hand-modified; all other changes were scripted
with:
find . -type f -name '*.[chS]' -print0 | \
xargs -0 sed -i 's/ARM64_HAS_IRQ_PRIO_MASKING/ARM64_HAS_GIC_PRIO_MASKING/'
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230130145429.903791-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Accessing AA64MMFR1_EL1 is expensive in KVM guests, since it is emulated
in the hypervisor. In fact, ARM documentation mentions some feature
registers are not supposed to be accessed frequently by the OS, and
therefore should be emulated for guests [1].
Commit 0388f9c74330 ("arm64: mm: Implement
arch_wants_old_prefaulted_pte()") introduced a read of this register in
the page fault path. But, even when the feature of setting faultaround
pages with the old flag is disabled for a given cpu, we are still paying
the cost of checking the register on every pagefault. This results in an
explosion of vmexit events in KVM guests, which directly impacts the
performance of virtualized workloads. For instance, running kernbench
yields a 15% increase in system time solely due to the increased vmexit
cycles.
This patch avoids the extra cost by using the sanitized cached value.
It should be safe to do so, since this register mustn't change for a
given cpu.
[1] https://developer.arm.com/-/media/Arm%20Developer%20Community/PDF/Learn%20the%20Architecture/Armv8-A%20virtualization.pdf?revision=a765a7df-1a00-434d-b241-357bfda2dd31
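A rough sketch of the idea, illustrated on the hardware-AF check used by the
faultaround path; the helper and field names come from the arm64 cpufeature
headers and are shown for illustration rather than quoted from the patch:
  static inline bool cpu_has_hw_af(void)
  {
      u64 mmfr1;

      if (!IS_ENABLED(CONFIG_ARM64_HW_AFDBM))
          return false;

      /* Use the sanitised, cached copy of the register instead of a raw
       * MRS that may trap to the hypervisor on every page fault. */
      mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
      return cpuid_feature_extract_unsigned_field(mmfr1,
                          ID_AA64MMFR1_EL1_HAFDBS_SHIFT);
  }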
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
Acked-by: Will Deacon <will@kernel.org>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20230109151955.8292-1-krisman@suse.de
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Add basic feature detection for SME2, detecting that the feature is present
and disabling traps for ZT0.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20221208-arm64-sme2-v4-8-f2fa0aef982f@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
On CPUs without FEAT_IDST, ID register emulation is slower than it needs
to be, as all threads contend for the same lock to perform the
emulation. This patch reworks the emulation to avoid this unnecessary
contention.
On CPUs with FEAT_IDST (which is mandatory from ARMv8.4 onwards), EL0
accesses to ID registers result in a SYS trap, and emulation of these is
handled with a sys64_hook. These hooks are statically allocated, and no
locking is required to iterate through the hooks and perform the
emulation, allowing emulation to occur in parallel with no contention.
On CPUs without FEAT_IDST, EL0 accesses to ID registers result in an
UNDEFINED exception, and emulation of these accesses is handled with an
undef_hook. When an EL0 MRS instruction is trapped to EL1, the kernel
finds the relevant handler by iterating through all of the undef_hooks,
requiring undef_lock to be held during this lookup.
This locking is only required to safely traverse the list of undef_hooks
(as it can be concurrently modified), and the actual emulation of the
MRS does not require any mutual exclusion. This locking is an
unfortunate bottleneck, especially given that MRS emulation is enabled
unconditionally and is never disabled.
This patch reworks the non-FEAT_IDST MRS emulation logic so that it can
be invoked directly from do_el0_undef(). This removes the bottleneck,
allowing MRS traps to be handled entirely in parallel, and is a stepping
stone to making all of the undef_hooks lock-free.
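A hedged sketch of the intended shape; the fetch and emulation helper names
below (user_insn_read(), try_emulate_mrs()) are assumptions for illustration,
not quotes from the patch:
  void do_el0_undef(struct pt_regs *regs, unsigned long esr)
  {
      u32 insn;

      /* Fetch the faulting instruction from user space (assumed helper). */
      if (user_insn_read(regs, &insn))
          goto out_err;

      /* Dispatch MRS emulation directly: no undef_lock and no hook-list
       * walk, so concurrent traps can be handled in parallel. */
      if (try_emulate_mrs(regs, insn))
          return;

  out_err:
      force_signal_inject(SIGILL, ILL_ILLOPC, regs->pc, 0);
  }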
I've tested this in a 64-vCPU VM on a 64-CPU ThunderX2 host, with a
benchmark which spawns a number of threads which each try to read
ID_AA64ISAR0_EL1 1000000 times. This is vastly more contention than will
ever be seen in realistic usage, but clearly demonstrates the removal of
the bottleneck:
| Threads || Time (seconds) |
| || Before || After |
| || Real | System || Real | System |
|---------++--------+---------++--------+---------|
| 1 || 0.29 | 0.20 || 0.24 | 0.12 |
| 2 || 0.35 | 0.51 || 0.23 | 0.27 |
| 4 || 1.08 | 3.87 || 0.24 | 0.56 |
| 8 || 4.31 | 33.60 || 0.24 | 1.11 |
| 16 || 9.47 | 149.39 || 0.23 | 2.15 |
| 32 || 19.07 | 605.27 || 0.24 | 4.38 |
| 64 || 65.40 | 3609.09 || 0.33 | 11.27 |
Aside from the speedup, there should be no functional change as a result
of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20221019144123.612388-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
* for-next/alternatives:
: Alternatives (code patching) improvements
arm64: fix the build with binutils 2.27
arm64: avoid BUILD_BUG_ON() in alternative-macros
arm64: alternatives: add shared NOP callback
arm64: alternatives: add alternative_has_feature_*()
arm64: alternatives: have callbacks take a cap
arm64: alternatives: make alt_region const
arm64: alternatives: hoist print out of __apply_alternatives()
arm64: alternatives: proton-pack: prepare for cap changes
arm64: alternatives: kvm: prepare for cap changes
arm64: cpufeature: make cpus_have_cap() noinstr-safe
|
|
Merge branches 'for-next/doc', 'for-next/sve', 'for-next/sysreg', 'for-next/gettimeofday', 'for-next/stacktrace', 'for-next/atomics', 'for-next/el1-exceptions', 'for-next/a510-erratum-2658417', 'for-next/defconfig', 'for-next/tpidr2_el0' and 'for-next/ftrace', remote-tracking branch 'arm64/for-next/perf' into for-next/core
* arm64/for-next/perf:
arm64: asm/perf_regs.h: Avoid C++-style comment in UAPI header
arm64/sve: Add Perf extensions documentation
perf: arm64: Add SVE vector granule register to user regs
MAINTAINERS: add maintainers for Alibaba's T-Head PMU driver
drivers/perf: add DDR Sub-System Driveway PMU driver for Yitian 710 SoC
docs: perf: Add description for Alibaba's T-Head PMU driver
* for-next/doc:
: Documentation/arm64 updates
arm64/sve: Document our actual ABI for clearing registers on syscall
* for-next/sve:
: SVE updates
arm64/sysreg: Add hwcap for SVE EBF16
* for-next/sysreg: (35 commits)
: arm64 system registers generation (more conversions)
arm64/sysreg: Fix a few missed conversions
arm64/sysreg: Convert ID_AA64AFRn_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64DFR1_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64DFR0_EL1 to automatic generation
arm64/sysreg: Use feature numbering for PMU and SPE revisions
arm64/sysreg: Add _EL1 into ID_AA64DFR0_EL1 definition names
arm64/sysreg: Align field names in ID_AA64DFR0_EL1 with architecture
arm64/sysreg: Add definition for ALLINT
arm64/sysreg: Convert SCXTNUM_EL1 to automatic generation
arm64/sysreg: Convert TPIDR_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64PFR1_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64PFR0_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64MMFR2_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64MMFR1_EL1 to automatic generation
arm64/sysreg: Convert ID_AA64MMFR0_EL1 to automatic generation
arm64/sysreg: Convert HCRX_EL2 to automatic generation
arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 SME enumeration
arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 BTI enumeration
arm64/sysreg: Standardise naming of ID_AA64PFR1_EL1 fractional version fields
arm64/sysreg: Standardise naming for MTE feature enumeration
...
* for-next/gettimeofday:
: Use self-synchronising counter access in gettimeofday() (if FEAT_ECV)
arm64: vdso: use SYS_CNTVCTSS_EL0 for gettimeofday
arm64: alternative: patch alternatives in the vDSO
arm64: module: move find_section to header
* for-next/stacktrace:
: arm64 stacktrace cleanups and improvements
arm64: stacktrace: track hyp stacks in unwinder's address space
arm64: stacktrace: track all stack boundaries explicitly
arm64: stacktrace: remove stack type from fp translator
arm64: stacktrace: rework stack boundary discovery
arm64: stacktrace: add stackinfo_on_stack() helper
arm64: stacktrace: move SDEI stack helpers to stacktrace code
arm64: stacktrace: rename unwind_next_common() -> unwind_next_frame_record()
arm64: stacktrace: simplify unwind_next_common()
arm64: stacktrace: fix kerneldoc comments
* for-next/atomics:
: arm64 atomics improvements
arm64: atomic: always inline the assembly
arm64: atomics: remove LL/SC trampolines
* for-next/el1-exceptions:
: Improve the reporting of EL1 exceptions
arm64: rework BTI exception handling
arm64: rework FPAC exception handling
arm64: consistently pass ESR_ELx to die()
arm64: die(): pass 'err' as long
arm64: report EL1 UNDEFs better
* for-next/a510-erratum-2658417:
: Cortex-A510: 2658417: remove BF16 support due to incorrect result
arm64: errata: remove BF16 HWCAP due to incorrect result on Cortex-A510
arm64: cpufeature: Expose get_arm64_ftr_reg() outside cpufeature.c
arm64: cpufeature: Force HWCAP to be based on the sysreg visible to user-space
* for-next/defconfig:
: arm64 defconfig updates
arm64: defconfig: Add Coresight as module
arm64: Enable docker support in defconfig
arm64: defconfig: Enable memory hotplug and hotremove config
arm64: configs: Enable all PMUs provided by Arm
* for-next/tpidr2_el0:
: arm64 ptrace() support for TPIDR2_EL0
kselftest/arm64: Add coverage of TPIDR2_EL0 ptrace interface
arm64/ptrace: Support access to TPIDR2_EL0
arm64/ptrace: Document extension of NT_ARM_TLS to cover TPIDR2_EL0
kselftest/arm64: Add test coverage for NT_ARM_TLS
* for-next/ftrace:
: arm64 ftrace updates/fixes
arm64: ftrace: fix module PLTs with mcount
arm64: module: Remove unused plt_entry_is_initialized()
arm64: module: Make plt_equals_entry() static
|
|
Currently we use a mixture of alternative sequences and static branches
to handle features detected at boot time. For ease of maintenance we
generally prefer to use static branches in C code, but this has a few
downsides:
* Each static branch has metadata in the __jump_table section, which is
not discarded after features are finalized. This wastes some space,
and slows down the patching of other static branches.
* The static branches are patched at a different point in time from the
alternatives, so changes are not atomic. This leaves a transient
period where there could be a mismatch between the behaviour of
alternatives and static branches, which could be problematic for some
features (e.g. pseudo-NMI).
* More (instrumentable) kernel code is executed to patch each static
branch, which can be risky when patching certain features (e.g.
irqflags management for pseudo-NMI).
* When CONFIG_JUMP_LABEL=n, static branches are turned into a load of a
flag and a conditional branch. This means it isn't safe to use such
static branches in an alternative address space (e.g. the NVHE/PKVM
hyp code), where the generated address isn't safe to access.
To deal with these issues, this patch introduces new
alternative_has_feature_*() helpers, which work like static branches but
are patched using alternatives. This ensures the patching is performed
at the same time as other alternative patching, allows the metadata to
be freed after patching, and is safe for use in alternative address
spaces.
Note that all supported toolchains have asm goto support, and since
commit:
a0a12c3ed057af57 ("asm goto: eradicate CC_HAS_ASM_GOTO")
... the CC_HAS_ASM_GOTO Kconfig symbol has been removed, so no feature
check is necessary, and we can always make use of asm goto.
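As a rough, self-contained illustration of the pattern in user-space C (not
the kernel implementation): the helper body is a single patchable instruction,
and asm goto tells the compiler about the 'true' return path.
  #include <stdbool.h>
  #include <stdio.h>

  /* Unpatched default: the NOP falls through and the helper returns false.
   * In the kernel, the alternatives framework would later rewrite that NOP
   * into a branch to l_yes for capabilities found to be present. */
  static bool has_feature_sketch(void)
  {
      asm goto("nop" : : : : l_yes);
      return false;
  l_yes:
      return true;
  }

  int main(void)
  {
      printf("feature path taken: %d\n", has_feature_sketch());
      return 0;
  }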
Additionally, note that:
* This has no impact on cpus_have_cap(), which is a dynamic check.
* This has no functional impact on cpus_have_const_cap(). The branches
are patched slightly later than before this patch, but these branches
are not reachable until caps have been finalised.
* It is now invalid to use cpus_have_final_cap() in the window between
feature detection and patching. All existing uses are only expected
after patching anyway, so this should not be a problem.
* The LSE atomics will now be enabled during alternatives patching
rather than immediately before. As the LL/SC and LSE atomics are
functionally equivalent this should not be problematic.
When building defconfig with GCC 12.1.0, the resulting Image is 64KiB
smaller:
| % ls -al Image-*
| -rw-r--r-- 1 mark mark 37108224 Aug 23 09:56 Image-after
| -rw-r--r-- 1 mark mark 37173760 Aug 23 09:54 Image-before
According to bloat-o-meter.pl:
| add/remove: 44/34 grow/shrink: 602/1294 up/down: 39692/-61108 (-21416)
| Function old new delta
| [...]
| Total: Before=16618336, After=16596920, chg -0.13%
| add/remove: 0/2 grow/shrink: 0/0 up/down: 0/-1296 (-1296)
| Data old new delta
| arm64_const_caps_ready 16 - -16
| cpu_hwcap_keys 1280 - -1280
| Total: Before=8987120, After=8985824, chg -0.01%
| add/remove: 0/0 grow/shrink: 0/0 up/down: 0/0 (0)
| RO Data old new delta
| Total: Before=18408, After=18408, chg +0.00%
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-8-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Today, callback alternatives are special-cased within
__apply_alternatives(), and are applied alongside patching for system
capabilities as ARM64_NCAPS is not part of the boot_capabilities feature
mask.
This special-casing is less than ideal. Giving special meaning to
ARM64_NCAPS for this requires some structures and loops to use
ARM64_NCAPS + 1 (AKA ARM64_NPATCHABLE), while others use ARM64_NCAPS.
It's also not immediately clear that callback alternatives are only applied
when applying alternatives for system-wide features.
To make this a bit clearer, this patch changes the way that callback alternatives
are identified to remove the special-casing of ARM64_NCAPS, and to allow
callback alternatives to be associated with a cpucap as with all other
alternatives.
New cpucaps, ARM64_ALWAYS_BOOT and ARM64_ALWAYS_SYSTEM are added which
are always detected alongside boot cpu capabilities and system
capabilities respectively. All existing callback alternatives are made
to use ARM64_ALWAYS_SYSTEM, and so will be patched at the same point
during the boot flow as before.
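Illustrative call-site shape after the change; the placeholder instruction
and callback name are assumed for illustration, not taken from the tree:
  u64 val;

  /* A callback alternative now names the cpucap it is tied to;
   * ARM64_ALWAYS_SYSTEM keeps the previous "patched along with the
   * system capabilities" behaviour. */
  asm volatile(ALTERNATIVE_CB("movz %0, #0\n",
                              ARM64_ALWAYS_SYSTEM,
                              example_patch_callback)
               : "=r" (val));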
Subsequent patches will make more use of these new cpucaps.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Currently it isn't safe to use cpus_have_cap() from noinstr code as
test_bit() is explicitly instrumented, and were cpus_have_cap() placed
out-of-line, cpus_have_cap() itself could be instrumented.
Make cpus_have_cap() noinstr safe by marking it __always_inline and
using arch_test_bit().
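The resulting helper looks roughly like this (the name of the cpucap bitmap
is assumed here):
  static __always_inline bool cpus_have_cap(unsigned int num)
  {
      if (num >= ARM64_NCAPS)
          return false;
      /* arch_test_bit() is the uninstrumented bitop, so noinstr callers
       * can safely use this helper. */
      return arch_test_bit(num, cpu_hwcaps);
  }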
Aside from the prevention of instrumentation, there should be no
functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20220912162210.3626215-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
get_arm64_ftr_reg() returns the properties of a system register based
on its instruction encoding.
This is needed by erratum workaround in cpu_errata.c to modify the
user-space visible view of id registers.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Link: https://lore.kernel.org/r/20220909165938.3931307-3-james.morse@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64DFR0_EL1 to follow the convention. No functional changes.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
The naming scheme the architecture uses for the fields in ID_AA64DFR0_EL1
does not align well with kernel conventions, using as it does a lot of
MixedCase in various arrangements. In preparation for automatically
generating the defines for this register rename the defines used to match
what is in the architecture.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In preparation for conversion to automatic generation refresh the names
given to the items in the MTE feature enumeration to reflect our standard
pattern for naming, corresponding to the architecture feature names they
reflect. No functional change.
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-17-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In preparation for converting the ID_AA64MMFR1_EL1 system register
defines to automatic generation, rename them to follow the conventions
used by other automatically generated registers:
* Add _EL1 in the register name.
* Rename fields to match the names in the ARM ARM:
* LOR -> LO
* HPD -> HPDS
* VHE -> VH
* HADBS -> HAFDBS
* SPECSEI -> SpecSEI
* VMIDBITS -> VMIDBits
There should be no functional change as a result of this patch.
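For example, the VHE field's define changes along these lines (the shift
value is shown purely for illustration):
  /* before */
  #define ID_AA64MMFR1_VHE_SHIFT       8
  /* after */
  #define ID_AA64MMFR1_EL1_VH_SHIFT    8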
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-11-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
For some reason we refer to ID_AA64MMFR0_EL1.BigEnd as BIGENDEL. Remove the
EL from the name, bringing the naming into sync with DDI0487H.a. Due to the
large amount of MixedCase in this register, which isn't really consistent
with either the kernel style or the majority of the architecture, the use of
upper case is preserved. No functional changes.
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-9-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Our standard is to include the _EL1 in the constant names for registers but
we did not do that for ID_AA64PFR1_EL1, update to do so in preparation for
conversion to automatic generation. No functional change.
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-8-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64PFR0_EL1 to follow the convention. No functional changes.
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-7-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64MMFR0_EL1 to follow the convention. No functional changes.
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-5-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
* for-next/boot: (34 commits)
arm64: fix KASAN_INLINE
arm64: Add an override for ID_AA64SMFR0_EL1.FA64
arm64: Add the arm64.nosve command line option
arm64: Add the arm64.nosme command line option
arm64: Expose a __check_override primitive for oddball features
arm64: Allow the idreg override to deal with variable field width
arm64: Factor out checking of a feature against the override into a macro
arm64: Allow sticky E2H when entering EL1
arm64: Save state of HCR_EL2.E2H before switch to EL1
arm64: Rename the VHE switch to "finalise_el2"
arm64: mm: fix booting with 52-bit address space
arm64: head: remove __PHYS_OFFSET
arm64: lds: use PROVIDE instead of conditional definitions
arm64: setup: drop early FDT pointer helpers
arm64: head: avoid relocating the kernel twice for KASLR
arm64: kaslr: defer initialization to initcall where permitted
arm64: head: record CPU boot mode after enabling the MMU
arm64: head: populate kernel page tables with MMU and caches on
arm64: head: factor out TTBR1 assignment into a macro
arm64: idreg-override: use early FDT mapping in ID map
...
|
|
* for-next/cpufeature:
arm64/hwcap: Support FEAT_EBF16
arm64/cpufeature: Store elf_hwcaps as a bitmap rather than unsigned long
arm64/hwcap: Document allocation of upper bits of AT_HWCAP
arm64: trap implementation defined functionality in userspace
|
|
When we added support for AT_HWCAP2 we took advantage of the fact that we
have limited hwcaps to the low 32 bits and stored it along with AT_HWCAP
in a single unsigned integer. Thanks to the ever expanding capabilities of
the architecture, we have now allocated all 64 of the bits in an unsigned
long, so in preparation for adding more hwcaps, convert elf_hwcap to be a
bitmap instead, with 64 bits allocated to each AT_HWCAP.
There should be no functional change from this patch.
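A rough sketch of the new representation (the accessor name is assumed for
illustration):
  /* One full 64-bit word per AT_HWCAP word, instead of packing AT_HWCAP
   * into the low 32 bits and AT_HWCAP2 above it in a single value. */
  DECLARE_BITMAP(elf_hwcap, 2 * 64);

  static inline unsigned long cpu_get_elf_hwcap(void)
  {
      return elf_hwcap[0];    /* the low word feeds AT_HWCAP */
  }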
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20220707103632.12745-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Normally we include the full register name in the defines for fields within
registers but this has not been followed for ID registers. In preparation
for automatic generation of defines add the _EL1s into the defines for
ID_AA64ISAR2_EL1 to follow the convention. No functional changes.
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220704170302.2609529-17-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
|