The hypervisor contains code to accelerate VGA memory accesses for HVM
guests when the (virtual) VGA is in "standard" mode. The locking
involved there follows an unusual discipline, leaving a lock acquired
past the return from the function that acquired it. This results in a
problem when emulating an instruction with two memory accesses, both of
which touch VGA memory (plus some further constraints which aren't
relevant here). When emulating the 2nd access, an attempt would be made
to re-acquire the already-held lock, resulting in a deadlock.
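
The failure mode can be illustrated with a minimal, self-contained C
sketch (using a POSIX mutex in place of Xen's spinlock; all names are
illustrative, not Xen code):

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-in for the lock guarding standard-VGA state. */
    static pthread_mutex_t vga_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Models the unusual discipline: the lock is still held on return. */
    static void vga_mem_access(unsigned int which)
    {
        pthread_mutex_lock(&vga_lock); /* 2nd call self-deadlocks here */
        printf("access %u emulated\n", which);
        /* Deliberately no unlock: release happens later, elsewhere. */
    }

    int main(void)
    {
        /* One instruction, two memory accesses, both touching VGA memory. */
        vga_mem_access(1);
        vga_mem_access(2); /* attempts to re-acquire the held lock: hangs */
        return 0;
    }

With a default (non-recursive) mutex the second acquisition never
succeeds, mirroring the deadlock described above.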
This deadlock was already found when the code was first introduced, but
it was analysed incorrectly and the fix was incomplete. Re-analysis in
light of the new finding has not found a way to make the existing
locking discipline work.
In staging, this logic has all been removed, because it was discovered
to have been accidentally disabled since Xen 4.7. Therefore, we are
fixing the locking problem by backporting the removal of most of the
feature. Note that even with the feature disabled, the lock would still
be acquired for any access to the VGA MMIO region.
Recent x86 CPUs offer functionality named Control-flow Enforcement
Technology (CET). One of its sub-features is Shadow Stacks (CET-SS).
CET-SS is a hardware feature designed to protect against Return Oriented
Programming attacks. When enabled, traditional stacks holding both data
and return addresses are accompanied by so-called "shadow stacks",
holding little more than return addresses. Shadow stacks aren't
writable by normal instructions, and upon function return their
contents are used to check for possible manipulation of the return
address coming from the traditional stack.
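
Conceptually, the hardware behaves roughly like the following toy
model (a simplified software rendition, not real CET microcode; all
names are made up):

    #include <stdint.h>
    #include <stdlib.h>

    #define SSTK_DEPTH 64

    static uint64_t shadow_stack[SSTK_DEPTH];
    static unsigned int ssp; /* shadow stack pointer (an index here) */

    /* CALL pushes the return address onto both stacks. */
    static void model_call(uint64_t ret_addr)
    {
        shadow_stack[ssp++] = ret_addr;
    }

    /* RET compares both copies; a mismatch raises #CP on real hardware. */
    static void model_ret(uint64_t ret_addr_from_data_stack)
    {
        uint64_t expected = shadow_stack[--ssp];

        if (ret_addr_from_data_stack != expected)
            abort(); /* models the control-protection (#CP) fault */
    }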
In particular, certain memory accesses need intercepting by Xen. In
various cases the necessary emulation involves a kind of replaying of
the instruction. Such replaying typically involves filling and then
invoking a stub. A replayed instruction may raise an exception, which
is expected and dealt with accordingly.
Unfortunately, the interaction of both of the above wasn't right:
recovery involves removal of a call frame from the (traditional) stack,
but the counterpart of this operation for the shadow stack was missing.
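
Continuing the toy model above, recovery after a faulting stub must
unwind one call frame from both stacks; the flaw was, in effect, the
missing second step (on real hardware the shadow stack pointer is
adjusted with the INCSSP instruction):

    /* Illustrative only: unwind one call frame after a stub faulted. */
    static void unwind_one_frame(uint64_t **rsp)
    {
        (*rsp)++; /* drop the return address from the traditional stack */
        ssp--;    /* the MISSING counterpart: pop the shadow stack too;
                   * otherwise the next RET compares against a stale
                   * shadow entry and faults with #CP. */
    }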
The current setup of the quarantine page tables assumes that the
quarantine domain (dom_io) has been initialized with an address width
of DEFAULT_DOMAIN_ADDRESS_WIDTH (48) and hence 4 page-table levels.
However, dom_io, being a PV domain, gets its AMD-Vi IOMMU page-table
levels set based on the maximum (hot-pluggable) RAM address, and hence
on systems with no RAM above the 512GB mark only 3 page-table levels
are configured in the IOMMU.
On systems without RAM above the 512GB boundary,
amd_iommu_quarantine_init() will set up page tables for the scratch
page with 4 levels, while the IOMMU will be configured to use only 3
levels, resulting in the last-level page directory entry (PDE)
effectively becoming a page-table entry (PTE), and hence a device in
quarantine mode gaining write access to the page that was destined to
be a page table.
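
A toy page-walk model shows the effect of the level mismatch (names
and layout are illustrative, not the real AMD-Vi walk):

    #include <stdint.h>

    typedef struct pagetable { uint64_t entries[512]; } pagetable_t;

    /* Walk 'levels' levels, returning what hardware treats as the PTE. */
    static uint64_t walk(pagetable_t *root, uint64_t dfn, unsigned int levels)
    {
        pagetable_t *pt = root;
        uint64_t entry = 0;

        while (levels--)
        {
            entry = pt->entries[(dfn >> (9 * levels)) & 0x1ff];
            if (levels)
                pt = (pagetable_t *)(uintptr_t)(entry & ~0xfffULL);
        }
        return entry;
    }

If software populated four levels but the walk is configured for only
three, the value returned is an entry software intended as a PDE, so
the page it points at (a page table) is what the device gets mapped
with read/write access.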
Due to this page-table level mismatch, the sink page the device gets
read/write access to is no longer cleared between device assignments,
possibly leading to data leaks.
The fixes for XSA-422 (Branch Type Confusion) and XSA-434 (Speculative
Return Stack Overflow) are not IRQ-safe. It was believed that the
mitigations always operated in contexts with IRQs disabled.
However, the original XSA-254 fix for Meltdown (XPTI) deliberately left
interrupts enabled on two entry paths; one unconditionally, and one
conditionally on whether XPTI was active.
As BTC/SRSO and Meltdown affect different CPU vendors, the mitigations
are not active together by default. Therefore, there is a race
condition whereby a malicious PV guest can bypass BTC/SRSO protections
and launch a BTC/SRSO attack against Xen.
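
The race can be sketched as follows (illustrative C with hypothetical
names, not Xen source):

    /* Stand-in for the BTC/SRSO entry-path mitigation, which must run
     * before any RET executes with guest-poisoned predictor state. */
    static void flush_branch_predictor_state(void)
    {
    }

    void entry_from_pv_guest(void)
    {
        /*
         * On the two XPTI entry paths, interrupts are still enabled at
         * this point.  An IRQ arriving here runs a handler containing
         * CALL/RET pairs before the mitigation below executes, so
         * those RETs consume guest-poisoned predictor state.
         */
        flush_branch_predictor_state();

        /* ... remaining entry handling, with IRQs eventually disabled ... */
    }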
Arm provides multiple helpers to clean & invalidate the cache
for a given region. This is, for instance, used when allocating
guest memory, to ensure any writes (such as the ones during scrubbing)
have reached memory before handing over the page to a guest.
Unfortunately, the arithmetic in the helpers can overflow, which would
result in the cache cleaning/invalidation being skipped. Therefore
there is no guarantee of when all the writes will reach memory.
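
A toy version of such a helper shows the overflow (illustrative only;
compare Xen's arch/arm cache-maintenance helpers):

    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE 64

    static void clean_and_invalidate(uintptr_t start, size_t size)
    {
        uintptr_t end = start + size; /* can wrap around to a small value */
        uintptr_t p;

        for (p = start & ~(uintptr_t)(CACHE_LINE - 1); p < end;
             p += CACHE_LINE)
            ; /* asm volatile("dc civac, %0" :: "r" (p) : "memory"); */

        /* If 'end' wrapped, 'p < end' is false immediately and no
         * cache line is ever cleaned/invalidated. */
    }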
This undefined behavior was meant to be addressed by XSA-437, but the
approach was not sufficient.
For migration, as well as to work around kernels unaware of L1TF (see
XSA-273), PV guests may be run in shadow paging mode. Since Xen itself
needs to be mapped when PV guests run, Xen and shadowed PV guests run
directly on the respective shadow page tables. For 64-bit PV guests
this means running on the shadow of the guest root page table.
In the course of dealing with a shortage of memory in the shadow pool
associated with a domain, shadows of page tables may be torn down. This
tearing down may include the shadow root page table that the CPU in
question is presently running on. While a precaution exists that is
supposed to prevent the tearing down of the underlying live page table,
the time window covered by that precaution isn't large enough.
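
The hazard resembles the following sketch (hypothetical names, not
Xen's actual shadow code):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct shadow_page {
        uint64_t entries[512];
        bool is_live_root; /* precaution: set while a CPU runs on it */
    };

    static void prune_shadow_pool(struct shadow_page *sh)
    {
        /*
         * The precaution is meant to keep a CPU's current root from
         * being freed.  The flaw: the window during which the flag
         * protects the shadow is smaller than the window during which
         * the CPU actually still runs on it, so the check can pass
         * while the page table is live -- freeing a running root.
         */
        if (!sh->is_live_root)
            free(sh);
    }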
When a transaction is committed, C Xenstored will first check that
the quota is correct before attempting to commit any nodes. The
accounting can be temporarily negative if a node has been removed
outside of the transaction.
Unfortunately, some versions of C Xenstored assume that the quota
cannot be negative and use assert() to confirm it. This will lead
to a C Xenstored crash when the tools are built without -DNDEBUG
(this is the default).
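
A minimal sketch of the crash (field and function names are
hypothetical):

    #include <assert.h>

    struct domain_acct {
        int nbentry; /* node count used for quota checks */
    };

    static void quota_update(struct domain_acct *d, int delta)
    {
        d->nbentry += delta;
        /*
         * If a node was removed outside the transaction, the count can
         * go transiently negative at commit time.  Without -DNDEBUG
         * (the default), this assert then kills the daemon.
         */
        assert(d->nbentry >= 0);
    }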
Closing of an event channel in the Linux kernel can result in a deadlock.
This happens when the close is being performed in parallel to an unrelated
Xen console action and the handling of a Xen console interrupt in an
unprivileged guest.
The closing of an event channel is, e.g., triggered by the removal of a
paravirtual device on the other side. As this action will quite often
cause console messages to be issued on the other side, the chance of
triggering the deadlock is not negligible.
Note that 32-bit Arm guests are not affected, as the 32-bit Linux kernel
on Arm doesn't use queued RW-locks, which are required to trigger the
issue (on Arm32 a waiting writer doesn't block further readers from
getting the lock).
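
The problematic interleaving can be sketched as follows (illustrative
only, not actual Linux source):

    /*
     *   CPU A (guest)                    CPU B
     *   read_lock(&lock);
     *                                    write_lock(&lock); // queues, waits
     *   --> console IRQ arrives
     *       read_lock(&lock); // with queued RW-locks this reader is
     *                         // queued BEHIND the waiting writer; it
     *                         // never succeeds, CPU A never drops its
     *                         // first read lock => deadlock
     */

On Arm32's non-queued implementation the inner read_lock() succeeds
despite the waiting writer, which is why the deadlock cannot occur
there.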
[This CNA information record relates to multiple CVEs; the
text explains which aspects/vulnerabilities correspond to which CVE.]
libfsimage contains parsing code for several filesystems, most of them
based on grub-legacy code. libfsimage is used by pygrub to inspect
guest disks.
Pygrub runs as the same user as the toolstack (root in a privileged
domain).
At least one issue has been reported to the Xen Security Team that
allows an attacker to trigger a stack buffer overflow in libfsimage.
After further analysis the Xen Security Team is no longer confident in
the suitability of libfsimage when run against guest-controlled input
with superuser privileges.
In order not to affect current deployments that rely on pygrub, patches
are provided in the resolution section of the advisory that allow
running pygrub in deprivileged mode.
CVE-2023-4949 refers to the original issue in the upstream grub
project ("An attacker with local access to a system (either through a
disk or external drive) can present a modified XFS partition to
grub-legacy in such a way to exploit a memory corruption in grub’s XFS
file system implementation.") CVE-2023-34325 refers specifically to
the vulnerabilities in Xen's copy of libfsimage, which is descended
from a very old version of grub.