GHSA-hrh9-752p-34vj
Vulnerability from GitHub
In the Linux kernel, the following vulnerability has been resolved:
KVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock
Use a dedicated mutex to guard kvm_usage_count to fix a potential deadlock on x86 due to a chain of locks and SRCU synchronizations. Translating the below lockdep splat, CPU1 #6 will wait on CPU0 #1, CPU0 #8 will wait on CPU2 #3, and CPU2 #7 will wait on CPU1 #4 (if there's a writer, due to the fairness of r/w semaphores).
          CPU0                      CPU1                      CPU2
1   lock(&kvm->slots_lock);
2                                                             lock(&vcpu->mutex);
3                                                             lock(&kvm->srcu);
4                             lock(cpu_hotplug_lock);
5                             lock(kvm_lock);
6                             lock(&kvm->slots_lock);
7                                                             lock(cpu_hotplug_lock);
8   sync(&kvm->srcu);
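The fix (see the git.kernel.org commits referenced in the record below) introduces a separate kvm_usage_lock so that bumping kvm_usage_count no longer requires kvm_lock. The following is only a simplified sketch of that pattern, not an excerpt of virt/kvm/kvm_main.c; enable_virt_on_all_cpus() is a placeholder name for the real enabling helper.

    #include <linux/cpu.h>
    #include <linux/mutex.h>

    /* Placeholder for the real "enable virtualization everywhere" helper. */
    static int enable_virt_on_all_cpus(void);

    /* kvm_lock continues to guard vm_list and other global KVM state. */
    DEFINE_MUTEX(kvm_lock);

    /*
     * New in the fix: a dedicated lock for kvm_usage_count.  Nothing that
     * holds kvm_usage_lock also needs kvm->slots_lock, kvm->srcu, or
     * kvm_lock, so the cycle shown in the table above cannot close here.
     */
    static DEFINE_MUTEX(kvm_usage_lock);
    static int kvm_usage_count;

    static int hardware_enable_all(void)
    {
            int r = 0;

            cpus_read_lock();             /* cpu_hotplug_lock, read side */
            mutex_lock(&kvm_usage_lock);  /* previously mutex_lock(&kvm_lock) */

            if (!kvm_usage_count++) {
                    r = enable_virt_on_all_cpus();
                    if (r)
                            kvm_usage_count--;
            }

            mutex_unlock(&kvm_usage_lock);
            cpus_read_unlock();

            return r;
    }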
Note, there are likely more potential deadlocks in KVM x86, e.g. the same pattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with __kvmclock_cpufreq_notifier():
cpuhp_cpufreq_online()
|
-> cpufreq_online()
   |
   -> cpufreq_gov_performance_limits()
      |
      -> __cpufreq_driver_target()
         |
         -> __target_index()
            |
            -> cpufreq_freq_transition_begin()
               |
               -> cpufreq_notify_transition()
                  |
                  -> ... __kvmclock_cpufreq_notifier()
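The callback at the bottom of that chain, __kvmclock_cpufreq_notifier(), walks vm_list under kvm_lock while the hotplug path above it already holds cpu_hotplug_lock. The heavily simplified sketch below (a made-up function name, per-VM body elided) is only meant to make the "cpu_hotplug_lock outside of kvm_lock" ordering concrete.

    #include <linux/kvm_host.h>   /* struct kvm, vm_list, kvm_lock */
    #include <linux/mutex.h>

    /*
     * Sketch: by the time this cpufreq callback runs in the hotplug-online
     * path, cpu_hotplug_lock is already held.  Taking kvm_lock here records
     * a cpu_hotplug_lock -> kvm_lock dependency with lockdep.
     */
    static void kvmclock_cpufreq_callback_sketch(int cpu)
    {
            struct kvm *kvm;

            mutex_lock(&kvm_lock);
            list_for_each_entry(kvm, &vm_list, vm_list) {
                    /* ... queue clock updates for vCPUs currently on @cpu ... */
            }
            mutex_unlock(&kvm_lock);
    }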
But, actually triggering such deadlocks is beyond rare due to the combination of dependencies and timings involved. E.g. the cpufreq notifier is only used on older CPUs without a constant TSC, mucking with the NX hugepage mitigation while VMs are running is very uncommon, and doing so while also onlining/offlining a CPU (necessary to generate contention on cpu_hotplug_lock) would be even more unusual.
The most robust solution to the general cpu_hotplug_lock issue is likely to switch vm_list to an RCU-protected list, e.g. so that x86's cpufreq notifier doesn't need to take kvm_lock. For now, settle for fixing the most blatant deadlock, as switching to an RCU-protected list is a much more involved change, but add a comment in locking.rst to call out that care needs to be taken when holding kvm_lock and walking vm_list.
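That RCU-protected vm_list is explicitly left as future work by the commit; purely as a hypothetical illustration of the direction it mentions (none of this is part of the fix), a lockless walk could look like the sketch below, with the understanding that vm_list insertion/removal would have to switch to list_add_rcu()/list_del_rcu() and a VM could only be freed after a grace period.

    #include <linux/kvm_host.h>   /* struct kvm, vm_list */
    #include <linux/rculist.h>
    #include <linux/rcupdate.h>

    /*
     * Hypothetical only: if vm_list were RCU-protected, notifiers such as
     * __kvmclock_cpufreq_notifier() could walk it without kvm_lock,
     * removing the cpu_hotplug_lock -> kvm_lock dependency entirely.
     */
    static void walk_vm_list_rcu_sketch(void)
    {
            struct kvm *kvm;

            rcu_read_lock();
            list_for_each_entry_rcu(kvm, &vm_list, vm_list) {
                    /* ... non-sleeping per-VM work ... */
            }
            rcu_read_unlock();
    }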
======================================================
WARNING: possible circular locking dependency detected
6.10.0-smp--c257535a0c9d-pip #330 Tainted: G S O
------------------------------------------------------
tee/35048 is trying to acquire lock:
ff6a80eced71e0a8 (&kvm->slots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm]

but task is already holding lock:
ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm]

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (kvm_lock){+.+.}-{3:3}:
       __mutex_lock+0x6a/0xb40
       mutex_lock_nested+0x1f/0x30
       kvm_dev_ioctl+0x4fb/0xe50 [kvm]
       __se_sys_ioctl+0x7b/0xd0
       __x64_sys_ioctl+0x21/0x30
       x64_sys_call+0x15d0/0x2e60
       do_syscall_64+0x83/0x160
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #2 (cpu_hotplug_lock){++++}-{0:0}:
       cpus_read_lock+0x2e/0xb0
       static_key_slow_inc+0x16/0x30
       kvm_lapic_set_base+0x6a/0x1c0 [kvm]
       kvm_set_apic_base+0x8f/0xe0 [kvm]
       kvm_set_msr_common+0x9ae/0xf80 [kvm]
       vmx_set_msr+0xa54/0xbe0 [kvm_intel]
       __kvm_set_msr+0xb6/0x1a0 [kvm]
       kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm]
       kvm_vcpu_ioctl+0x485/0x5b0 [kvm]
       __se_sys_ioctl+0x7b/0xd0
       __x64_sys_ioctl+0x21/0x30
       x64_sys_call+0x15d0/0x2e60
       do_syscall_64+0x83/0x160
       entry_SYSCALL_64_after_hwframe+0x76/0x7e

-> #1 (&kvm->srcu){.+.+}-{0:0}:
       __synchronize_srcu+0x44/0x1a0
---truncated---
{
  affected: [],
  aliases: [
    "CVE-2024-47744",
  ],
  database_specific: {
    cwe_ids: [
      "CWE-667",
    ],
    github_reviewed: false,
    github_reviewed_at: null,
    nvd_published_at: "2024-10-21T13:15:04Z",
    severity: "MODERATE",
  },
  details: "In the Linux kernel, the following vulnerability has been resolved:\n\nKVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock\n\nUse a dedicated mutex to guard kvm_usage_count to fix a potential deadlock\non x86 due to a chain of locks and SRCU synchronizations. Translating the\nbelow lockdep splat, CPU1 #6 will wait on CPU0 #1, CPU0 #8 will wait on\nCPU2 #3, and CPU2 #7 will wait on CPU1 #4 (if there's a writer, due to the\nfairness of r/w semaphores).\n\n CPU0 CPU1 CPU2\n1 lock(&kvm->slots_lock);\n2 lock(&vcpu->mutex);\n3 lock(&kvm->srcu);\n4 lock(cpu_hotplug_lock);\n5 lock(kvm_lock);\n6 lock(&kvm->slots_lock);\n7 lock(cpu_hotplug_lock);\n8 sync(&kvm->srcu);\n\nNote, there are likely more potential deadlocks in KVM x86, e.g. the same\npattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with\n__kvmclock_cpufreq_notifier():\n\n cpuhp_cpufreq_online()\n |\n -> cpufreq_online()\n |\n -> cpufreq_gov_performance_limits()\n |\n -> __cpufreq_driver_target()\n |\n -> __target_index()\n |\n -> cpufreq_freq_transition_begin()\n |\n -> cpufreq_notify_transition()\n |\n -> ... __kvmclock_cpufreq_notifier()\n\nBut, actually triggering such deadlocks is beyond rare due to the\ncombination of dependencies and timings involved. E.g. the cpufreq\nnotifier is only used on older CPUs without a constant TSC, mucking with\nthe NX hugepage mitigation while VMs are running is very uncommon, and\ndoing so while also onlining/offlining a CPU (necessary to generate\ncontention on cpu_hotplug_lock) would be even more unusual.\n\nThe most robust solution to the general cpu_hotplug_lock issue is likely\nto switch vm_list to be an RCU-protected list, e.g. so that x86's cpufreq\nnotifier doesn't to take kvm_lock. For now, settle for fixing the most\nblatant deadlock, as switching to an RCU-protected list is a much more\ninvolved change, but add a comment in locking.rst to call out that care\nneeds to be taken when walking holding kvm_lock and walking vm_list.\n\n ======================================================\n WARNING: possible circular locking dependency detected\n 6.10.0-smp--c257535a0c9d-pip #330 Tainted: G S O\n ------------------------------------------------------\n tee/35048 is trying to acquire lock:\n ff6a80eced71e0a8 (&kvm->slots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm]\n\n but task is already holding lock:\n ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm]\n\n which lock already depends on the new lock.\n\n the existing dependency chain (in reverse order) is:\n\n -> #3 (kvm_lock){+.+.}-{3:3}:\n __mutex_lock+0x6a/0xb40\n mutex_lock_nested+0x1f/0x30\n kvm_dev_ioctl+0x4fb/0xe50 [kvm]\n __se_sys_ioctl+0x7b/0xd0\n __x64_sys_ioctl+0x21/0x30\n x64_sys_call+0x15d0/0x2e60\n do_syscall_64+0x83/0x160\n entry_SYSCALL_64_after_hwframe+0x76/0x7e\n\n -> #2 (cpu_hotplug_lock){++++}-{0:0}:\n cpus_read_lock+0x2e/0xb0\n static_key_slow_inc+0x16/0x30\n kvm_lapic_set_base+0x6a/0x1c0 [kvm]\n kvm_set_apic_base+0x8f/0xe0 [kvm]\n kvm_set_msr_common+0x9ae/0xf80 [kvm]\n vmx_set_msr+0xa54/0xbe0 [kvm_intel]\n __kvm_set_msr+0xb6/0x1a0 [kvm]\n kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm]\n kvm_vcpu_ioctl+0x485/0x5b0 [kvm]\n __se_sys_ioctl+0x7b/0xd0\n __x64_sys_ioctl+0x21/0x30\n x64_sys_call+0x15d0/0x2e60\n do_syscall_64+0x83/0x160\n entry_SYSCALL_64_after_hwframe+0x76/0x7e\n\n -> #1 (&kvm->srcu){.+.+}-{0:0}:\n __synchronize_srcu+0x44/0x1a0\n \n---truncated---",
  id: "GHSA-hrh9-752p-34vj",
  modified: "2024-10-22T18:32:09Z",
  published: "2024-10-21T15:32:26Z",
  references: [
    {
      type: "ADVISORY",
      url: "https://nvd.nist.gov/vuln/detail/CVE-2024-47744",
    },
    {
      type: "WEB",
      url: "https://git.kernel.org/stable/c/44d17459626052a2390457e550a12cb973506b2f",
    },
    {
      type: "WEB",
      url: "https://git.kernel.org/stable/c/4777225ec89f52bb9ca16a33cfb44c189f1b7b47",
    },
    {
      type: "WEB",
      url: "https://git.kernel.org/stable/c/760a196e6dcb29580e468b44b5400171dae184d8",
    },
    {
      type: "WEB",
      url: "https://git.kernel.org/stable/c/a2764afce521fd9fd7a5ff6ed52ac2095873128a",
    },
  ],
  schema_version: "1.4.0",
  severity: [
    {
      score: "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H",
      type: "CVSS_V3",
    },
  ],
}