Separate the "core" TDP level handling from the nested EPT path to make
it clear that kvm_x86_ops.get_tdp_level() is used if and only if nested
EPT is not in use (kvm_init_shadow_ept_mmu() calculates the level from
the passed in vmcs12->eptp). Add a WARN_ON() to enforce that the
kvm_x86_ops hook is not called for nested EPT.
This sets the stage for snapshotting the non-"nested EPT" TDP page level
during kvm_cpuid_update() to avoid the retpoline associated with
kvm_x86_ops.get_tdp_level() when resetting the MMU, a relatively
frequent operation when running a nested guest.
No functional change intended.
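For illustration only, the resulting split looks roughly like the sketch
below (simplified, not the actual patch; mmu_is_nested() stands in for
whichever nested-EPT check the real code uses):

    /* Sketch: the vendor hook is reachable only when nested EPT is not in use. */
    static int kvm_mmu_get_tdp_level(struct kvm_vcpu *vcpu)
    {
        /* Nested EPT derives the level from vmcs12->eptp instead. */
        WARN_ON(mmu_is_nested(vcpu));

        return kvm_x86_ops.get_tdp_level(vcpu);
    }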
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200502043234.12481-10-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move CR0 caching into the standard register caching mechanism in order
to take advantage of the availability checks provided by regs_avail.
This avoids multiple VMREADs in the (uncommon) case where kvm_read_cr0()
is called multiple times in a single VM-Exit, and more importantly
eliminates a kvm_x86_ops hook, saves a retpoline on SVM when reading
CR0, and squashes the confusing naming discrepancy of "cache_reg" vs.
"decache_cr0_guest_bits".
No functional change intended.
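Purely as an illustration of the regs_avail pattern (a self-contained toy
model with made-up names, not KVM's code), repeated reads within a single
VM-Exit cost only one VMREAD:

    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of lazy register caching keyed off an "available" bitmask. */
    enum cached_reg { REG_CR0, REG_CR4, NR_CACHED_REGS };

    struct vcpu_model {
        uint64_t cache[NR_CACHED_REGS];
        uint32_t regs_avail;            /* bit set => cache[reg] is valid */
    };

    static uint64_t vmread_cr0(void)    /* stands in for the costly VMREAD */
    {
        puts("VMREAD GUEST_CR0");
        return 0x80000033;
    }

    static uint64_t read_cr0(struct vcpu_model *v)
    {
        if (!(v->regs_avail & (1u << REG_CR0))) {
            v->cache[REG_CR0] = vmread_cr0();
            v->regs_avail |= 1u << REG_CR0;
        }
        return v->cache[REG_CR0];
    }

    int main(void)
    {
        struct vcpu_model v = { .regs_avail = 0 };  /* cleared on VM-Exit */

        read_cr0(&v);   /* first read triggers the (one) VMREAD       */
        read_cr0(&v);   /* later reads in the same exit hit the cache */
        return 0;
    }

The CR4 change in the next entry follows the same pattern; the availability
bit is cleared on every VM-Exit, so the first read after an exit repopulates
the cache.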
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200502043234.12481-8-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move CR4 caching into the standard register caching mechanism in order
to take advantage of the availability checks provided by regs_avail.
This avoids multiple VMREADs and retpolines (when configured) during
nested VMX transitions as kvm_read_cr4_bits() is invoked multiple times
on each transition, e.g. when stuffing CR0 and CR3.
As an added bonus, this eliminates a kvm_x86_ops hook, saves a retpoline
on SVM when reading CR4, and squashes the confusing naming discrepancy
of "cache_reg" vs. "decache_cr4_guest_bits".
No functional change intended.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200502043234.12481-7-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Unconditionally check the validity of the incoming CR3 during nested
VM-Enter/VM-Exit to avoid invoking kvm_read_cr3() in the common case
where the guest isn't using PAE paging. If vmcs.GUEST_CR3 hasn't yet
been cached (common case), kvm_read_cr3() will trigger a VMREAD. The
VMREAD (~30 cycles) alone is likely slower than nested_cr3_valid()
(~5 cycles if vcpu->arch.maxphyaddr gets a cache hit), and the poor
exchange only gets worse when retpolines are enabled as the call to
kvm_x86_ops.cache_reg() will incur a retpoline (60+ cycles).
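The ordering rationale, as a hedged sketch (simplified; load_pdptrs_or_bail()
is a hypothetical placeholder, not a real helper):

    static int nested_load_cr3_sketch(struct kvm_vcpu *vcpu, unsigned long cr3,
                                      bool pae_paging)
    {
        /* ~5 cycles: pure arithmetic against vcpu->arch.maxphyaddr. */
        if (!nested_cr3_valid(vcpu, cr3))
            return -EINVAL;

        /*
         * Only for PAE paging is the current CR3 needed, which may require
         * a ~30 cycle VMREAD plus, with retpolines enabled, a 60+ cycle
         * retpoline for the kvm_x86_ops.cache_reg() call.
         */
        if (pae_paging && cr3 != kvm_read_cr3(vcpu))
            return load_pdptrs_or_bail(vcpu, cr3);  /* hypothetical */

        return 0;
    }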
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200502043234.12481-3-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Save L1's TSC offset in 'struct kvm_vcpu_arch' and drop the kvm_x86_ops
hook read_l1_tsc_offset(). This avoids a retpoline (when configured)
when reading L1's effective TSC, which is done at least once on every
VM-Exit.
No functional change intended.
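Roughly (a sketch based on the description above, not the exact diff), the
read side becomes a plain field access:

    u64 kvm_read_l1_tsc(struct kvm_vcpu *vcpu, u64 host_tsc)
    {
        /* Snapshot lives in kvm_vcpu_arch; no retpolined vendor hook. */
        return vcpu->arch.l1_tsc_offset + kvm_scale_tsc(vcpu, host_tsc);
    }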
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200502043234.12481-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Skip the Indirect Branch Prediction Barrier that is triggered on a VMCS
switch when temporarily loading vmcs02 to synchronize it to vmcs12, i.e.
give copy_vmcs02_to_vmcs12_rare() the same treatment as
vmx_switch_vmcs().
Make vmx_vcpu_load() static now that it's only referenced within vmx.c.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200506235850.22600-3-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Skip the Indirect Branch Prediction Barrier that is triggered on a VMCS
switch when running with spectre_v2_user=on/auto if the switch is
between two VMCSes in the same guest, i.e. between vmcs01 and vmcs02.
The IBPB is intended to prevent one guest from attacking another, which
is unnecessary in the nested case as it's the same guest from KVM's
perspective.
This all but eliminates the overhead observed for nested VMX transitions
when running with CONFIG_RETPOLINE=y and spectre_v2_user=on/auto, which
can be significant, e.g. roughly 3x on current systems.
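Conceptually (a sketch with an illustrative argument name, not the actual
code), the VMCS load path only issues the barrier when the new VMCS belongs
to a different guest:

    static void vmcs_load_sketch(struct kvm_vcpu *vcpu, struct loaded_vmcs *new,
                                 bool same_guest_switch)
    {
        /* vmcs01 <-> vmcs02 transitions stay within one guest: skip IBPB. */
        if (!same_guest_switch)
            indirect_branch_prediction_barrier();

        /* ... the per-CPU VMCS loading itself is unchanged ... */
    }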
Reported-by: Alexander Graf <graf@amazon.com>
Cc: KarimAllah Raslan <karahmed@amazon.de>
Cc: stable@vger.kernel.org
Fixes: 15d4507152 ("KVM/x86: Add IBPB support")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200501163117.4655-1-sean.j.christopherson@intel.com>
[Invert direction of bool argument. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use vmx_get_intr_info() when grabbing the cached vmcs.INTR_INFO in
handle_exception_nmi() to ensure the cache isn't stale. Bypassing the
caching accessor doesn't cause any known issues as the cache is always
refreshed by handle_exception_nmi_irqoff(), but the whole point of
adding the proper caching mechanism was to avoid such dependencies.
Fixes: 8791585837 ("KVM: VMX: Cache vmcs.EXIT_INTR_INFO using arch avail_reg flags")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200427171837.22613-1-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM does not handle the case where EIP wraps around the 32-bit address
space (that is, outside long mode). The fix is needed both in vmx.c
and in emulate.c. SVM with NRIPS is okay, but it can still print
an error to dmesg due to integer overflow.
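The gist of the handling, as a simplified sketch (not the actual patch):

    static unsigned long advance_rip_sketch(unsigned long rip, unsigned int len,
                                            bool long_mode)
    {
        rip += len;
        if (!long_mode)
            rip = (u32)rip;   /* wrap within the 32-bit address space */
        return rip;
    }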
Reported-by: Nick Peterson <everdox@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The '==' expression is already bool; there is no need to convert it to bool again.
This fixes the following coccicheck warning:
virt/kvm/eventfd.c:724:38-43: WARNING: conversion to bool not needed
here
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Message-Id: <20200420123805.4494-1-yanaijie@huawei.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The use of any sort of waitqueue (simple or regular) for
waiting/waking vCPUs has always been overkill and semantically
wrong. Because the wait is per-vCPU (which is blocked), there is
only ever a single waiting vCPU, and thus no need for any sort of
queue.
As such, make use of the rcuwait primitive, with the following
considerations:
- rcuwait already provides the proper barriers that serialize
concurrent waiter and waker.
- Task wakeup is done in an RCU read-side critical section, with a
stable task pointer.
- Because there is no concurrency among waiters, we need
not worry about rcuwait_wait_event() calls corrupting
wait->task. As a consequence, this saves the locking
done in swait when modifying the queue. This also applies
to the per-vcore wait for powerpc kvm-hv.
The x86 tscdeadline_latency test mentioned in 8577370fb0
("KVM: Use simple waitqueue for vcpu->wq") shows that, on avg,
latency is reduced by around 15-20% with this change.
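For orientation, a hedged sketch of the single-waiter pattern (field and
helper names are illustrative, and the exact rcuwait argument list differs
between kernel versions):

    struct vcpu_sketch {
        struct rcuwait wait;
        bool runnable;
    };

    static void vcpu_block_sketch(struct vcpu_sketch *v)
    {
        /* Only this vCPU ever sleeps on its own rcuwait: no queue needed. */
        rcuwait_wait_event(&v->wait, READ_ONCE(v->runnable),
                           TASK_INTERRUPTIBLE);
    }

    static void vcpu_kick_sketch(struct vcpu_sketch *v)
    {
        WRITE_ONCE(v->runnable, true);
        /* Lockless wakeup under RCU with a stable task pointer. */
        rcuwait_wake_up(&v->wait);
    }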
Cc: Paul Mackerras <paulus@ozlabs.org>
Cc: kvmarm@lists.cs.columbia.edu
Cc: linux-mips@vger.kernel.org
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Message-Id: <20200424054837.5138-6-dave@stgolabs.net>
[Avoid extra logic changes. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This call is lockless and thus should not be trusted blindly.
For example, the return value of rcuwait_wakeup() should be used to
track whether a process was woken up.
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Message-Id: <20200424054837.5138-5-dave@stgolabs.net>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This allows further flexibility for some callers to implement
ad-hoc versions of the generic rcuwait_wait_event(). For example,
KVM will need this to maintain tracing semantics. The naming
is of course similar to what the waitqueue APIs offer.
Also go ahead and make use of rcu_assign_pointer() for both task
writes, as it will make the __rcu sparse people happy - this will
be the special nil case, thus no added serialization.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Message-Id: <20200424054837.5138-4-dave@stgolabs.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Propagating the return value of wake_up_process() back to the caller
can come in handy for future users, such as for statistics or
accounting purposes.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Message-Id: <20200424054837.5138-3-dave@stgolabs.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The 'trywake' name was renamed to simply 'wake'; update the comment accordingly.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Message-Id: <20200424054837.5138-2-dave@stgolabs.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add an argument to interrupt_allowed and nmi_allowed to check whether
interrupt injection is blocked. Use the hook to handle the case where
an interrupt arrives between check_nested_events() and the injection
logic. Drop the retry of check_nested_events() that hack-a-fixed the
same condition.
Blocking injection is also a bit of a hack, e.g. KVM should do exiting
and non-exiting interrupt processing in a single pass, but it's a more
precise hack. The old comment is also misleading, e.g. KVM_REQ_EVENT is
purely an optimization; setting it on every run loop (which KVM doesn't
do) should not affect functionality, only performance.
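A sketch of the hook shape (names, types and comments are illustrative, not
the exact prototypes):

    struct kvm_x86_ops_sketch {
        /*
         * for_injection == true:  "may KVM inject an IRQ/NMI right now?"
         * for_injection == false: "is the event architecturally allowed?"
         */
        int (*interrupt_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
        int (*nmi_allowed)(struct kvm_vcpu *vcpu, bool for_injection);
    };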
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-13-sean.j.christopherson@intel.com>
[Extend to SVM, add SMI and NMI. Even though NMI and SMI cannot come
asynchronously right now, making the fix generic is easy and removes a
special case. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use vmx_get_rflags() instead of manually reading vmcs.GUEST_RFLAGS when
querying RFLAGS.IF so that multiple checks against interrupt blocking in
a single run loop only require a single VMREAD.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-14-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Use vmx_interrupt_blocked() instead of bouncing through
vmx_interrupt_allowed() when handling edge cases in vmx_handle_exit().
The nested_run_pending check in vmx_interrupt_allowed() should never
evaluate true in the VM-Exit path.
Hoist the WARN in handle_invalid_guest_state() up to vmx_handle_exit()
to enforce the above assumption for the !enable_vnmi case, and to detect
any other potential bugs with nested VM-Enter.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-12-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
WARN if a pending exception is coincident with an injected exception
before calling check_nested_events() so that the WARN will fire even if
inject_pending_event() bails early because check_nested_events() detects
the conflict. Bailing early isn't problematic (quite the opposite), but
suppressing the WARN is undesirable as it could mask a bug elsewhere in
KVM.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-11-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Short circuit svm_check_nested_events() if an unblocked IRQ/NMI/SMI is
pending and needs to be injected into L2, as priority between coincident
events is not dependent on exiting behavior.
Fixes: b518ba9fa6 ("KVM: nSVM: implement check_nested_events for interrupts")
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Report interrupts as allowed when the vCPU is in L2 and L2 is being run with
exit-on-interrupts enabled and EFLAGS.IF=1 (either on the host or on the guest
according to VINTR). Interrupts are always unblocked from L1's perspective
in this case.
While moving nested_exit_on_intr to svm.h, use INTERCEPT_INTR properly instead
of assuming it's zero (which it is of course).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Check for an unblocked SMI in vmx_check_nested_events() so that pending
SMIs are correctly prioritized over IRQs and NMIs when the latter events
will trigger VM-Exit. This also fixes an issue where an SMI that was
marked pending while processing a nested VM-Enter wouldn't trigger an
immediate exit, i.e. would be incorrectly delayed until L2 happened to
take a VM-Exit.
Fixes: 64d6067057 ("KVM: x86: stubs for SMM support")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-10-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Short circuit vmx_check_nested_events() if an unblocked IRQ/NMI is
pending and needs to be injected into L2, as priority between coincident
events is not dependent on exiting behavior.
Fixes: b6b8a1451f ("KVM: nVMX: Rework interception of IRQs and NMIs")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-9-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the architectural (non-KVM specific) interrupt/NMI/SMI blocking checks
to a separate helper so that they can be used in a future patch by
svm_check_nested_events().
No functional change intended.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the architectural (non-KVM specific) interrupt/NMI blocking checks
to a separate helper so that they can be used in a future patch by
vmx_check_nested_events().
No functional change intended.
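A simplified sketch of the split (not the exact code; for instance, the real
architectural check also has to consider L2 running with exit-on-interrupt):

    bool vmx_interrupt_blocked_sketch(struct kvm_vcpu *vcpu)
    {
        /* Purely architectural state: RFLAGS.IF and interruptibility. */
        return !(vmx_get_rflags(vcpu) & X86_EFLAGS_IF) ||
               (vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
                (GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
    }

    static bool vmx_interrupt_allowed_sketch(struct kvm_vcpu *vcpu)
    {
        /* KVM-specific condition layered on top of the architectural one. */
        if (to_vmx(vcpu)->nested.nested_run_pending)
            return false;

        return !vmx_interrupt_blocked_sketch(vcpu);
    }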
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-8-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Unlike VMX, SVM allows a hypervisor to take an SMI vmexit without having
any special SMM-monitor enablement sequence. Therefore, it has to be
handled like interrupts and NMIs. Check for an unblocked SMI in
svm_check_nested_events() so that pending SMIs are correctly prioritized
over IRQs and NMIs when the latter events will trigger VM-Exit.
Note that there is no need to test explicitly for SMI vmexits, because
guests always run outside SMM and therefore can never get an SMI while
SMIs are blocked.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Report NMIs as allowed when the vCPU is in L2 and L2 is being run with
Exit-on-NMI enabled, as NMIs are always unblocked from L1's perspective
in this case.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Report NMIs as allowed when the vCPU is in L2 and L2 is being run with
Exit-on-NMI enabled, as NMIs are always unblocked from L1's perspective
in this case.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-7-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Do not hardcode is_smm so that all the architectural conditions for
blocking SMIs are listed in a single place. Well, in two places because
this introduces some code duplication between Intel and AMD.
This ensures that nested SVM obeys GIF in kvm_vcpu_has_events.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Return an actual bool for kvm_x86_ops' {interrupt,nmi}_allowed() hooks to
better reflect the return semantics, and to avoid creating an even
bigger mess when the related VMX code is refactored in upcoming patches.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-5-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Re-request KVM_REQ_EVENT if vcpu_enter_guest() bails after processing
pending requests and an immediate exit was requested. This fixes a bug
where a pending event, e.g. VMX preemption timer, is delayed and/or lost
if the exit was deferred due to something other than a higher priority
_injected_ event, e.g. due to a pending nested VM-Enter. This bug only
affects the !injected case as kvm_x86_ops.cancel_injection() sets
KVM_REQ_EVENT to redo the injection, but that's purely serendipitous
behavior with respect to the deferred event.
Note, emulated preemption timer isn't the only event that can be
affected, it simply happens to be the only event where not re-requesting
KVM_REQ_EVENT is blatantly visible to the guest.
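The shape of the fix, sketched (simplified, not the actual diff against
vcpu_enter_guest()):

    static void bail_from_entry_sketch(struct kvm_vcpu *vcpu,
                                       bool req_immediate_exit)
    {
        /*
         * The immediate exit was requested to service a deferred event;
         * re-request event processing so the event is not delayed or lost.
         */
        if (req_immediate_exit)
            kvm_make_request(KVM_REQ_EVENT, vcpu);

        kvm_x86_ops.cancel_injection(vcpu);
    }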
Fixes: f4124500c2 ("KVM: nVMX: Fully emulate preemption timer")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-4-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add a kvm_x86_ops hook to detect a nested pending "hypervisor timer" and
use it to effectively open a window for servicing the expired timer.
Like pending SMIs on VMX, opening a window simply means requesting an
immediate exit.
This fixes a bug where an expired VMX preemption timer (for L2) will be
delayed and/or lost if a pending exception is injected into L2. The
pending exception is rightly prioritized by vmx_check_nested_events()
and injected into L2, with the preemption timer left pending. Because
no window opened, L2 is free to run uninterrupted.
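Sketched (the hook name below is illustrative, not necessarily the final
one), the request-immediate-exit decision becomes something like:

    static bool nested_timer_needs_exit_sketch(struct kvm_vcpu *vcpu)
    {
        /* Treat an expired L1 "hypervisor timer" like a pending SMI. */
        return is_guest_mode(vcpu) &&
               kvm_x86_ops.nested_hv_timer_pending &&
               kvm_x86_ops.nested_hv_timer_pending(vcpu);
    }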
Fixes: f4124500c2 ("KVM: nVMX: Fully emulate preemption timer")
Reported-by: Jim Mattson <jmattson@google.com>
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-3-sean.j.christopherson@intel.com>
[Check it in kvm_vcpu_has_events too, to ensure that the preemption
timer is serviced promptly even if the vCPU is halted and L1 is not
intercepting HLT. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Short circuit vmx_check_nested_events() if an exception is pending and
needs to be injected into L2, as priority between coincident events is not
dependent on exiting behavior. This fixes a bug where a single-step #DB
that is not intercepted by L1 is incorrectly dropped due to servicing a
VMX Preemption Timer VM-Exit.
Injected exceptions also need to be blocked if nested VM-Enter is
pending or an exception was already injected, otherwise injecting the
exception could overwrite an existing event injection from L1.
Technically, this scenario should be impossible, i.e. KVM shouldn't
inject its own exception during nested VM-Enter. This will be addressed
in a future patch.
Note, event priority between SMI, NMI and INTR is incorrect for L2, e.g.
SMI should take priority over VM-Exit on NMI/INTR, and NMI that is
injected into L2 should take priority over VM-Exit INTR. This will also
be addressed in a future patch.
Fixes: b6b8a1451f ("KVM: nVMX: Rework interception of IRQs and NMIs")
Reported-by: Jim Mattson <jmattson@google.com>
Cc: Oliver Upton <oupton@google.com>
Cc: Peter Shier <pshier@google.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200423022550.15113-2-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Migrate nested guest NMI intercept processing
to the new check_nested_events().
Signed-off-by: Cathy Avery <cavery@redhat.com>
Message-Id: <20200414201107.22952-2-cavery@redhat.com>
[Reorder clauses as NMIs have higher priority than IRQs; inject
immediate vmexit as is now done for IRQ vmexits. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We can immediately leave SVM guest mode in svm_check_nested_events
now that we have the nested_run_pending mechanism. This makes
things easier because we can run the rest of inject_pending_event
with GIF=0, and KVM will naturally end up requesting the next
interrupt window.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Similar to VMX, we need to leave the halted state when performing a vmexit.
Failure to do so will cause a hang after vmexit.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We want to inject vmexits immediately from svm_check_nested_events,
so that the interrupt/NMI window requests happen in inject_pending_event
right after it returns.
This however has the same issue as in vmx_check_nested_events, so
introduce a nested_run_pending flag with the exact same purpose
of delaying vmexit injection after the vmentry.
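A hedged sketch of how the flag gates vmexit injection (the helpers below
are hypothetical, for illustration only):

    static int svm_check_nested_events_sketch(struct vcpu_svm *svm)
    {
        /* A vmentry is still in flight: delay the synthesized vmexit. */
        if (svm->nested.nested_run_pending)
            return -EBUSY;

        if (nested_intr_pending_sketch(svm)) {          /* hypothetical */
            nested_svm_inject_intr_vmexit_sketch(svm);  /* hypothetical */
            return 0;
        }

        return 0;
    }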
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Though rdpkru and wrpkru are contingent upon CR4.PKE, the PKRU
resource isn't. It can be read with XSAVE and written with XRSTOR.
So, if we don't set the guest PKRU value here (kvm_load_guest_xsave_state),
the guest can read the host value.
In the case of kvm_load_host_xsave_state, a guest with CR4.PKE clear could
potentially use XRSTOR to change the host PKRU value.
While at it, move pkru state save/restore to common code and the
host_pkru field to kvm_vcpu_arch. This will let SVM support protection keys.
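A sketch of the common-code handling described above (simplified; accessor
names may differ slightly across kernel versions):

    void kvm_load_guest_xsave_state_sketch(struct kvm_vcpu *vcpu)
    {
        /* PKRU is reachable via XSAVE/XRSTOR even with CR4.PKE clear. */
        if (boot_cpu_has(X86_FEATURE_PKU) &&
            (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
             (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)) &&
            vcpu->arch.pkru != vcpu->arch.host_pkru)
            __write_pkru(vcpu->arch.pkru);
    }

    void kvm_load_host_xsave_state_sketch(struct kvm_vcpu *vcpu)
    {
        if (boot_cpu_has(X86_FEATURE_PKU) &&
            (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
             (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU))) {
            vcpu->arch.pkru = rdpkru();
            if (vcpu->arch.pkru != vcpu->arch.host_pkru)
                __write_pkru(vcpu->arch.host_pkru);
        }
    }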
Cc: stable@vger.kernel.org
Reported-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Babu Moger <babu.moger@amd.com>
Message-Id: <158932794619.44260.14508381096663848853.stgit@naples-babu.amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Merge tag 'x86-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Thomas Gleixner:
"A set of fixes for x86:
- Ensure that direct mapping alias is always flushed when changing
page attributes. The optimization for small ranges failed to do so
when the virtual address was in the vmalloc or module space.
- Unbreak the trace event registration for syscalls without arguments
caused by the refactoring of the SYSCALL_DEFINE0() macro.
- Move the printk in the TSC deadline timer code to a place where it
is guaranteed to only be called once during boot and cannot be
rearmed by clearing warn_once after boot. If it's invoked post boot
then lockdep rightfully complains about a potential deadlock as the
calling context is different.
- A series of fixes for objtool and the ORC unwinder addressing a
variety of small issues:
- Stack offset tracking for indirect CFAs in objtool ignored
subsequent pushes and pops
- Repair the unwind hints in the register clearing entry ASM code
- Make the unwinding in the low level exit to usermode code stop
after switching to the trampoline stack. The unwind hint is no
longer valid and the ORC unwinder emits a warning as it can't
find the registers anymore.
- Fix unwind hints in switch_to_asm() and rewind_stack_do_exit()
which caused objtool to generate bogus ORC data.
- Prevent unwinder warnings when dumping the stack of a
non-current task as there is no way to be sure about the
validity because the dumped stack can be a moving target.
- Make the ORC unwinder behave the same way as the frame pointer
unwinder when dumping an inactive task's stack and do not skip
the first frame.
- Prevent ORC unwinding before ORC data has been initialized
- Immediately terminate unwinding when an unknown ORC entry type
is found.
- Prevent premature stop of the unwinder caused by IRET frames.
- Fix another infinite loop in objtool caused by a negative
offset which was not caught.
- Address a few build warnings in the ORC unwinder and add
missing static/ro_after_init annotations"
* tag 'x86-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/unwind/orc: Move ORC sorting variables under !CONFIG_MODULES
x86/apic: Move TSC deadline timer debug printk
ftrace/x86: Fix trace event registration for syscalls without arguments
x86/mm/cpa: Flush direct map alias during cpa
objtool: Fix infinite loop in for_offset_range()
x86/unwind/orc: Fix premature unwind stoppage due to IRET frames
x86/unwind/orc: Fix error path for bad ORC entry type
x86/unwind/orc: Prevent unwinding before ORC initialization
x86/unwind/orc: Don't skip the first frame for inactive tasks
x86/unwind: Prevent false warnings for non-current tasks
x86/unwind/orc: Convert global variables to static
x86/entry/64: Fix unwind hints in rewind_stack_do_exit()
x86/entry/64: Fix unwind hints in __switch_to_asm()
x86/entry/64: Fix unwind hints in kernel exit path
x86/entry/64: Fix unwind hints in register clearing code
objtool: Fix stack offset tracking for indirect CFAs
Merge tag 'objtool-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull objtool fix from Thomas Gleixner:
"A single fix for objtool to prevent an infinite loop in the
jump table search which can be triggered when building the
kernel with '-ffunction-sections'"
* tag 'objtool-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
objtool: Fix infinite loop in find_jump_table()
Merge tag 'locking-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking fix from Thomas Gleixner:
"A single fix for the fallout of the recent futex uacess rework.
With those changes GCC9 fails to analyze arch_futex_atomic_op_inuser()
correctly and emits a 'maybe unitialized' warning. While we usually
ignore compiler stupidity the conditional store is pointless anyway
because the correct case has to store. For the fault case the extra
store does no harm"
* tag 'locking-urgent-2020-05-10' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
ARM: futex: Address build warning
Merge tag 'iommu-fixes-v5.7-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull iommu fixes from Joerg Roedel:
- Race condition fixes for the AMD IOMMU driver.
These are five patches fixing two race conditions around
increase_address_space(). The first race condition was around the
non-atomic update of the domain page-table root pointer and the
variable containing the page-table depth (called mode). This is fixed
now be merging page-table root and mode into one 64-bit field which
is read/written atomically.
The second race condition was around updating the page-table root
pointer and making it public before the hardware caches were flushed.
This could cause addresses to be mapped and returned to drivers which
are not reachable by IOMMU hardware yet, causing IO page-faults. This
is fixed too by adding the necessary flushes before a new page-table
root is published.
Related to the race condition fixes these patches also add a missing
domain_flush_complete() barrier to update_domain() and a fix to bail
out of the loop which tries to increase the address space when the
call to increase_address_space() fails.
Qian was able to trigger the race conditions under high load and
memory pressure within a few days of testing. He confirmed that he
has seen no issues anymore with the fixes included here.
- Fix for a list-handling bug in the VirtIO IOMMU driver.
* tag 'iommu-fixes-v5.7-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
iommu/virtio: Reverse arguments to list_add
iommu/amd: Do not flush Device Table in iommu_map_page()
iommu/amd: Update Device Table in increase_address_space()
iommu/amd: Call domain_flush_complete() in update_domain()
iommu/amd: Do not loop forever when trying to increase address space
iommu/amd: Fix race in increase_address_space()/fetch_pte()
Merge tag 'block-5.7-2020-05-09' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
- a small series fixing a use-after-free of bdi name (Christoph, Yufen)
- NVMe fix for a regression with the smaller CQ update (Alexey)
- NVMe fix for a hang at namespace scanning error recovery (Sagi)
- fix race with blk-iocost iocg->abs_vdebt updates (Tejun)
* tag 'block-5.7-2020-05-09' of git://git.kernel.dk/linux-block:
nvme: fix possible hang when ns scanning fails during error recovery
nvme-pci: fix "slimmer CQ head update"
bdi: add a ->dev_name field to struct backing_dev_info
bdi: use bdi_dev_name() to get device name
bdi: move bdi_dev_name out of line
vboxsf: don't use the source name in the bdi name
iocost: protect iocg->abs_vdebt with iocg->waitq.lock
It seems that for whatever reason, gcc-10 ends up not inlining a couple
of functions that used to be inlined before. Even if they only have one
single callsite - it looks like gcc may have decided that the code was
unlikely, and not worth inlining.
The code generation difference is harmless, but caused a few new section
mismatch errors, since the (now no longer inlined) function wasn't in
the __init section, but called other init functions:
Section mismatch in reference from the function kexec_free_initrd() to the function .init.text:free_initrd_mem()
Section mismatch in reference from the function tpm2_calc_event_log_size() to the function .init.text:early_memremap()
Section mismatch in reference from the function tpm2_calc_event_log_size() to the function .init.text:early_memunmap()
So add the appropriate __init annotation to make modpost not complain.
In both cases there were trivially just a single callsite from another
__init function.
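Illustratively (simplified; only the annotation matters, the body here is
trimmed):

    /*
     * Now that gcc-10 no longer inlines kexec_free_initrd(), it must itself
     * live in .init.text so its call to free_initrd_mem() (also __init)
     * doesn't trip modpost.
     */
    static void __init kexec_free_initrd(void)
    {
        free_initrd_mem(initrd_start, initrd_end);
    }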
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'riscv-for-linus-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux
Pull RISC-V fixes from Palmer Dabbelt:
"A smattering of fixes and cleanups:
- Dead code removal.
- Exporting riscv_cpuid_to_hartid_mask for modules.
- Per-CPU tracking of ISA features.
- Setting max_pfn correctly when probing memory.
- Adding a note to the VDSO so glibc can check the kernel's version
without a uname().
- A fix to force the bootloader to initialize the boot spin tables,
which still get used as a fallback when SBI-0.1 is enabled"
* tag 'riscv-for-linus-5.7-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
RISC-V: Remove unused code from STRICT_KERNEL_RWX
riscv: force __cpu_up_ variables to put in data section
riscv: add Linux note to vdso
riscv: set max_pfn to the PFN of the last page
RISC-V: Remove N-extension related defines
RISC-V: Add bitmap representing ISA features common across CPUs
RISC-V: Export riscv_cpuid_to_hartid_mask() API
gcc-10 has started warning about conflicting types for a few new
built-in functions, particularly 'free()'.
This results in warnings like:
crypto/xts.c:325:13: warning: conflicting types for built-in function ‘free’; expected ‘void(void *)’ [-Wbuiltin-declaration-mismatch]
because the crypto layer had its local freeing functions called
'free()'.
Gcc-10 is in the wrong here, since that function is marked 'static', and
thus there is no chance of confusion with any standard library function
namespace.
But the simplest thing to do is to just use a different name here, and
avoid this gcc mis-feature.
[ Side note: gcc knowing about 'free()' is in itself not the
mis-feature: the semantics of 'free()' are special enough that a
compiler can validly do special things when seeing it.
So the mis-feature here is that gcc thinks that 'free()' is some
restricted name, and you can't shadow it as a local static function.
Making the special 'free()' semantics be a function attribute rather
than tied to the name would be the much better model ]
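The fix itself is just a rename, e.g. (body simplified for illustration):

    /* was: static void free(struct skcipher_instance *inst) */
    static void free_inst(struct skcipher_instance *inst)
    {
        kfree(inst);    /* actual cleanup elided for brevity */
    }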
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
gcc-10 now warns about passing aliasing pointers to functions that take
restricted pointers.
That's actually a great warning, and if we ever start using 'restrict'
in the kernel, it might be quite useful. But right now we don't, and it
turns out that the only thing this warns about is an idiom where we have
declared a few functions to be "printf-like" (which seems to make gcc
pick up the restricted pointer thing), and then we print to the same
buffer that we also use as an input.
And people do that as an odd concatenation pattern, with code like this:
#define sysfs_show_gen_prop(buffer, fmt, ...) \
snprintf(buffer, PAGE_SIZE, "%s"fmt, buffer, __VA_ARGS__)
where we have 'buffer' as both the destination of the final result, and
as the initial argument.
Yes, it's a bit questionable. And outside of the kernel, people do have
standard declarations like
int snprintf( char *restrict buffer, size_t bufsz,
const char *restrict format, ... );
where that output buffer is marked as a restrict pointer that cannot
alias with any other arguments.
But in the context of the kernel, that 'use snprintf() to concatenate to
the end result' does work, and the pattern shows up in multiple places.
And we have not marked our own version of snprintf() as taking restrict
pointers, so the warning is incorrect for now, and gcc picks it up on
its own.
If we do start using 'restrict' in the kernel (and it might be a good
idea if people find places where it matters), we'll need to figure out
how to avoid this issue for snprintf and friends. But in the meantime,
this warning is not useful.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This is the final array bounds warning removal for gcc-10 for now.
Again, the warning is good, and we should re-enable all these warnings
when we have converted all the legacy array declaration cases to
flexible arrays. But in the meantime, it's just noise.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>