CVE-2024-47744
Vulnerability from cvelistv5
Published: 2024-10-21 12:14
Modified: 2024-12-19 09:27
Severity: 5.5 Medium (CVSS v3.1, per NVD)
Summary
In the Linux kernel, the following vulnerability has been resolved:

KVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock

Use a dedicated mutex to guard kvm_usage_count to fix a potential deadlock
on x86 due to a chain of locks and SRCU synchronizations. Translating the
below lockdep splat, CPU1 #6 will wait on CPU0 #1, CPU0 #8 will wait on
CPU2 #3, and CPU2 #7 will wait on CPU1 #4 (if there's a writer, due to the
fairness of r/w semaphores).

    CPU0                     CPU1                     CPU2
1   lock(&kvm->slots_lock);
2                                                    lock(&vcpu->mutex);
3                                                    lock(&kvm->srcu);
4                            lock(cpu_hotplug_lock);
5                            lock(kvm_lock);
6                            lock(&kvm->slots_lock);
7                                                    lock(cpu_hotplug_lock);
8   sync(&kvm->srcu);

Note, there are likely more potential deadlocks in KVM x86, e.g. the same
pattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with
__kvmclock_cpufreq_notifier():

  cpuhp_cpufreq_online()
  |
  -> cpufreq_online()
     |
     -> cpufreq_gov_performance_limits()
        |
        -> __cpufreq_driver_target()
           |
           -> __target_index()
              |
              -> cpufreq_freq_transition_begin()
                 |
                 -> cpufreq_notify_transition()
                    |
                    -> ... __kvmclock_cpufreq_notifier()

But actually triggering such deadlocks is beyond rare due to the
combination of dependencies and timings involved. E.g. the cpufreq
notifier is only used on older CPUs without a constant TSC, mucking with
the NX hugepage mitigation while VMs are running is very uncommon, and
doing so while also onlining/offlining a CPU (necessary to generate
contention on cpu_hotplug_lock) would be even more unusual.

The most robust solution to the general cpu_hotplug_lock issue is likely
to switch vm_list to an RCU-protected list, e.g. so that x86's cpufreq
notifier doesn't need to take kvm_lock. For now, settle for fixing the most
blatant deadlock, as switching to an RCU-protected list is a much more
involved change, but add a comment in locking.rst to call out that care
needs to be taken when holding kvm_lock and walking vm_list.

  ======================================================
  WARNING: possible circular locking dependency detected
  6.10.0-smp--c257535a0c9d-pip #330 Tainted: G S         O
  ------------------------------------------------------
  tee/35048 is trying to acquire lock:
  ff6a80eced71e0a8 (&kvm->slots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm]

  but task is already holding lock:
  ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm]

  which lock already depends on the new lock.

  the existing dependency chain (in reverse order) is:

  -> #3 (kvm_lock){+.+.}-{3:3}:
         __mutex_lock+0x6a/0xb40
         mutex_lock_nested+0x1f/0x30
         kvm_dev_ioctl+0x4fb/0xe50 [kvm]
         __se_sys_ioctl+0x7b/0xd0
         __x64_sys_ioctl+0x21/0x30
         x64_sys_call+0x15d0/0x2e60
         do_syscall_64+0x83/0x160
         entry_SYSCALL_64_after_hwframe+0x76/0x7e

  -> #2 (cpu_hotplug_lock){++++}-{0:0}:
         cpus_read_lock+0x2e/0xb0
         static_key_slow_inc+0x16/0x30
         kvm_lapic_set_base+0x6a/0x1c0 [kvm]
         kvm_set_apic_base+0x8f/0xe0 [kvm]
         kvm_set_msr_common+0x9ae/0xf80 [kvm]
         vmx_set_msr+0xa54/0xbe0 [kvm_intel]
         __kvm_set_msr+0xb6/0x1a0 [kvm]
         kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm]
         kvm_vcpu_ioctl+0x485/0x5b0 [kvm]
         __se_sys_ioctl+0x7b/0xd0
         __x64_sys_ioctl+0x21/0x30
         x64_sys_call+0x15d0/0x2e60
         do_syscall_64+0x83/0x160
         entry_SYSCALL_64_after_hwframe+0x76/0x7e

  -> #1 (&kvm->srcu){.+.+}-{0:0}:
         __synchronize_srcu+0x44/0x1a0
---truncated---
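The fix replaces the kvm_lock dependency on this path with a dedicated mutex around kvm_usage_count. A minimal user-space model of that pattern (Python threading stands in for kernel mutexes; the names mirror the kernel's but the code is illustrative only, not the actual patch):

```python
import threading

# User-space model: kvm_usage_count gets its own mutex (the fix's dedicated
# lock) instead of piggybacking on the global kvm_lock, so bumping the count
# never nests inside the kvm_lock -> cpu_hotplug_lock -> ... chain from the
# splat above.
kvm_lock = threading.Lock()        # models the global VM-list mutex
kvm_usage_lock = threading.Lock()  # models the dedicated mutex added by the fix
kvm_usage_count = 0

def hardware_enable_all():
    """Models VM creation bumping the usage count under the dedicated lock."""
    global kvm_usage_count
    with kvm_usage_lock:           # kvm_lock is no longer taken on this path
        kvm_usage_count += 1

def set_nx_huge_pages():
    """Models a kvm_lock holder walking vm_list concurrently."""
    with kvm_lock:
        pass                       # can no longer block hardware_enable_all()

threads = [threading.Thread(target=f)
           for f in [hardware_enable_all] * 8 + [set_nx_huge_pages] * 8]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert kvm_usage_count == 8
```

Because the two paths no longer share a mutex, the usage-count path drops out of the circular dependency; the remaining cpu_hotplug_lock ordering issues mentioned in the commit message are untouched by this model.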
Impacted products
Vendor  Product  Affected versions
Linux   Linux    from git commit 0bf50497f03b3d892c470c7d1a10a3e9c3c95821, before fix commits 4777225ec89f52bb9ca16a33cfb44c189f1b7b47, a2764afce521fd9fd7a5ff6ed52ac2095873128a, 760a196e6dcb29580e468b44b5400171dae184d8, and 44d17459626052a2390457e550a12cb973506b2f
Linux   Linux    from 6.3; fixed in 6.6.54, 6.10.13, 6.11.2, and mainline 6.12
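The version ranges above can be checked mechanically. A hypothetical sketch (the ranges are taken from this record; `is_affected` is an illustrative helper, not an official tool, and it ignores vendor backports of the fix):

```python
# Classify a kernel version against the ranges in this record:
# introduced in 6.3; fixed in 6.6.54, 6.10.13, 6.11.2, and mainline 6.12.
def is_affected(version: str) -> bool:
    parts = tuple(int(x) for x in version.split("."))
    v = (parts + (0, 0, 0))[:3]          # pad "6.3" -> (6, 3, 0)
    if v < (6, 3, 0):
        return False                      # bug introduced in 6.3
    # Stable series with a backported fix: (series, first fixed release).
    for series, fixed in (((6, 6), (6, 6, 54)),
                          ((6, 10), (6, 10, 13)),
                          ((6, 11), (6, 11, 2))):
        if v[:2] == series:
            return v < fixed              # affected until the stable fix
    if v >= (6, 12, 0):
        return False                      # fixed in mainline 6.12
    return True                           # in-range series with no listed backport
```

For example, `is_affected("6.6.53")` is true while `is_affected("6.6.54")` is false; end-of-life series in range, such as 6.4 or 6.7, report affected because no fixed release is listed for them.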


{
  "containers": {
    "adp": [
      {
        "metrics": [
          {
            "other": {
              "content": {
                "id": "CVE-2024-47744",
                "options": [
                  {
                    "Exploitation": "none"
                  },
                  {
                    "Automatable": "no"
                  },
                  {
                    "Technical Impact": "partial"
                  }
                ],
                "role": "CISA Coordinator",
                "timestamp": "2024-10-21T12:58:48.680064Z",
                "version": "2.0.3"
              },
              "type": "ssvc"
            }
          }
        ],
        "providerMetadata": {
          "dateUpdated": "2024-10-21T13:04:14.065Z",
          "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
          "shortName": "CISA-ADP"
        },
        "title": "CISA ADP Vulnrichment"
      }
    ],
    "cna": {
      "affected": [
        {
          "defaultStatus": "unaffected",
          "product": "Linux",
          "programFiles": [
            "Documentation/virt/kvm/locking.rst",
            "virt/kvm/kvm_main.c"
          ],
          "repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
          "vendor": "Linux",
          "versions": [
            {
              "lessThan": "4777225ec89f52bb9ca16a33cfb44c189f1b7b47",
              "status": "affected",
              "version": "0bf50497f03b3d892c470c7d1a10a3e9c3c95821",
              "versionType": "git"
            },
            {
              "lessThan": "a2764afce521fd9fd7a5ff6ed52ac2095873128a",
              "status": "affected",
              "version": "0bf50497f03b3d892c470c7d1a10a3e9c3c95821",
              "versionType": "git"
            },
            {
              "lessThan": "760a196e6dcb29580e468b44b5400171dae184d8",
              "status": "affected",
              "version": "0bf50497f03b3d892c470c7d1a10a3e9c3c95821",
              "versionType": "git"
            },
            {
              "lessThan": "44d17459626052a2390457e550a12cb973506b2f",
              "status": "affected",
              "version": "0bf50497f03b3d892c470c7d1a10a3e9c3c95821",
              "versionType": "git"
            }
          ]
        },
        {
          "defaultStatus": "affected",
          "product": "Linux",
          "programFiles": [
            "Documentation/virt/kvm/locking.rst",
            "virt/kvm/kvm_main.c"
          ],
          "repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
          "vendor": "Linux",
          "versions": [
            {
              "status": "affected",
              "version": "6.3"
            },
            {
              "lessThan": "6.3",
              "status": "unaffected",
              "version": "0",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "6.6.*",
              "status": "unaffected",
              "version": "6.6.54",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "6.10.*",
              "status": "unaffected",
              "version": "6.10.13",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "6.11.*",
              "status": "unaffected",
              "version": "6.11.2",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "*",
              "status": "unaffected",
              "version": "6.12",
              "versionType": "original_commit_for_fix"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nKVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock\n\nUse a dedicated mutex to guard kvm_usage_count to fix a potential deadlock\non x86 due to a chain of locks and SRCU synchronizations.  Translating the\nbelow lockdep splat, CPU1 #6 will wait on CPU0 #1, CPU0 #8 will wait on\nCPU2 #3, and CPU2 #7 will wait on CPU1 #4 (if there\u0027s a writer, due to the\nfairness of r/w semaphores).\n\n    CPU0                     CPU1                     CPU2\n1   lock(\u0026kvm-\u003eslots_lock);\n2                                                     lock(\u0026vcpu-\u003emutex);\n3                                                     lock(\u0026kvm-\u003esrcu);\n4                            lock(cpu_hotplug_lock);\n5                            lock(kvm_lock);\n6                            lock(\u0026kvm-\u003eslots_lock);\n7                                                     lock(cpu_hotplug_lock);\n8   sync(\u0026kvm-\u003esrcu);\n\nNote, there are likely more potential deadlocks in KVM x86, e.g. the same\npattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with\n__kvmclock_cpufreq_notifier():\n\n  cpuhp_cpufreq_online()\n  |\n  -\u003e cpufreq_online()\n     |\n     -\u003e cpufreq_gov_performance_limits()\n        |\n        -\u003e __cpufreq_driver_target()\n           |\n           -\u003e __target_index()\n              |\n              -\u003e cpufreq_freq_transition_begin()\n                 |\n                 -\u003e cpufreq_notify_transition()\n                    |\n                    -\u003e ... __kvmclock_cpufreq_notifier()\n\nBut, actually triggering such deadlocks is beyond rare due to the\ncombination of dependencies and timings involved.  E.g. 
the cpufreq\nnotifier is only used on older CPUs without a constant TSC, mucking with\nthe NX hugepage mitigation while VMs are running is very uncommon, and\ndoing so while also onlining/offlining a CPU (necessary to generate\ncontention on cpu_hotplug_lock) would be even more unusual.\n\nThe most robust solution to the general cpu_hotplug_lock issue is likely\nto switch vm_list to be an RCU-protected list, e.g. so that x86\u0027s cpufreq\nnotifier doesn\u0027t to take kvm_lock.  For now, settle for fixing the most\nblatant deadlock, as switching to an RCU-protected list is a much more\ninvolved change, but add a comment in locking.rst to call out that care\nneeds to be taken when walking holding kvm_lock and walking vm_list.\n\n  ======================================================\n  WARNING: possible circular locking dependency detected\n  6.10.0-smp--c257535a0c9d-pip #330 Tainted: G S         O\n  ------------------------------------------------------\n  tee/35048 is trying to acquire lock:\n  ff6a80eced71e0a8 (\u0026kvm-\u003eslots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm]\n\n  but task is already holding lock:\n  ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm]\n\n  which lock already depends on the new lock.\n\n   the existing dependency chain (in reverse order) is:\n\n  -\u003e #3 (kvm_lock){+.+.}-{3:3}:\n         __mutex_lock+0x6a/0xb40\n         mutex_lock_nested+0x1f/0x30\n         kvm_dev_ioctl+0x4fb/0xe50 [kvm]\n         __se_sys_ioctl+0x7b/0xd0\n         __x64_sys_ioctl+0x21/0x30\n         x64_sys_call+0x15d0/0x2e60\n         do_syscall_64+0x83/0x160\n         entry_SYSCALL_64_after_hwframe+0x76/0x7e\n\n  -\u003e #2 (cpu_hotplug_lock){++++}-{0:0}:\n         cpus_read_lock+0x2e/0xb0\n         static_key_slow_inc+0x16/0x30\n         kvm_lapic_set_base+0x6a/0x1c0 [kvm]\n         kvm_set_apic_base+0x8f/0xe0 [kvm]\n         kvm_set_msr_common+0x9ae/0xf80 [kvm]\n         vmx_set_msr+0xa54/0xbe0 
[kvm_intel]\n         __kvm_set_msr+0xb6/0x1a0 [kvm]\n         kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm]\n         kvm_vcpu_ioctl+0x485/0x5b0 [kvm]\n         __se_sys_ioctl+0x7b/0xd0\n         __x64_sys_ioctl+0x21/0x30\n         x64_sys_call+0x15d0/0x2e60\n         do_syscall_64+0x83/0x160\n         entry_SYSCALL_64_after_hwframe+0x76/0x7e\n\n  -\u003e #1 (\u0026kvm-\u003esrcu){.+.+}-{0:0}:\n         __synchronize_srcu+0x44/0x1a0\n      \n---truncated---"
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2024-12-19T09:27:15.269Z",
        "orgId": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
        "shortName": "Linux"
      },
      "references": [
        {
          "url": "https://git.kernel.org/stable/c/4777225ec89f52bb9ca16a33cfb44c189f1b7b47"
        },
        {
          "url": "https://git.kernel.org/stable/c/a2764afce521fd9fd7a5ff6ed52ac2095873128a"
        },
        {
          "url": "https://git.kernel.org/stable/c/760a196e6dcb29580e468b44b5400171dae184d8"
        },
        {
          "url": "https://git.kernel.org/stable/c/44d17459626052a2390457e550a12cb973506b2f"
        }
      ],
      "title": "KVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock",
      "x_generator": {
        "engine": "bippy-5f407fcff5a0"
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
    "assignerShortName": "Linux",
    "cveId": "CVE-2024-47744",
    "datePublished": "2024-10-21T12:14:11.830Z",
    "dateReserved": "2024-09-30T16:00:12.960Z",
    "dateUpdated": "2024-12-19T09:27:15.269Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1",
  "vulnerability-lookup:meta": {
    "nvd": "{\"cve\":{\"id\":\"CVE-2024-47744\",\"sourceIdentifier\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\",\"published\":\"2024-10-21T13:15:04.480\",\"lastModified\":\"2024-10-22T15:44:40.393\",\"vulnStatus\":\"Analyzed\",\"cveTags\":[],\"descriptions\":[{\"lang\":\"en\",\"value\":\"In the Linux kernel, the following vulnerability has been resolved:\\n\\nKVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock\\n\\nUse a dedicated mutex to guard kvm_usage_count to fix a potential deadlock\\non x86 due to a chain of locks and SRCU synchronizations.  Translating the\\nbelow lockdep splat, CPU1 #6 will wait on CPU0 #1, CPU0 #8 will wait on\\nCPU2 #3, and CPU2 #7 will wait on CPU1 #4 (if there\u0027s a writer, due to the\\nfairness of r/w semaphores).\\n\\n    CPU0                     CPU1                     CPU2\\n1   lock(\u0026kvm-\u003eslots_lock);\\n2                                                     lock(\u0026vcpu-\u003emutex);\\n3                                                     lock(\u0026kvm-\u003esrcu);\\n4                            lock(cpu_hotplug_lock);\\n5                            lock(kvm_lock);\\n6                            lock(\u0026kvm-\u003eslots_lock);\\n7                                                     lock(cpu_hotplug_lock);\\n8   sync(\u0026kvm-\u003esrcu);\\n\\nNote, there are likely more potential deadlocks in KVM x86, e.g. the same\\npattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with\\n__kvmclock_cpufreq_notifier():\\n\\n  cpuhp_cpufreq_online()\\n  |\\n  -\u003e cpufreq_online()\\n     |\\n     -\u003e cpufreq_gov_performance_limits()\\n        |\\n        -\u003e __cpufreq_driver_target()\\n           |\\n           -\u003e __target_index()\\n              |\\n              -\u003e cpufreq_freq_transition_begin()\\n                 |\\n                 -\u003e cpufreq_notify_transition()\\n                    |\\n                    -\u003e ... 
__kvmclock_cpufreq_notifier()\\n\\nBut, actually triggering such deadlocks is beyond rare due to the\\ncombination of dependencies and timings involved.  E.g. the cpufreq\\nnotifier is only used on older CPUs without a constant TSC, mucking with\\nthe NX hugepage mitigation while VMs are running is very uncommon, and\\ndoing so while also onlining/offlining a CPU (necessary to generate\\ncontention on cpu_hotplug_lock) would be even more unusual.\\n\\nThe most robust solution to the general cpu_hotplug_lock issue is likely\\nto switch vm_list to be an RCU-protected list, e.g. so that x86\u0027s cpufreq\\nnotifier doesn\u0027t to take kvm_lock.  For now, settle for fixing the most\\nblatant deadlock, as switching to an RCU-protected list is a much more\\ninvolved change, but add a comment in locking.rst to call out that care\\nneeds to be taken when walking holding kvm_lock and walking vm_list.\\n\\n  ======================================================\\n  WARNING: possible circular locking dependency detected\\n  6.10.0-smp--c257535a0c9d-pip #330 Tainted: G S         O\\n  ------------------------------------------------------\\n  tee/35048 is trying to acquire lock:\\n  ff6a80eced71e0a8 (\u0026kvm-\u003eslots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm]\\n\\n  but task is already holding lock:\\n  ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm]\\n\\n  which lock already depends on the new lock.\\n\\n   the existing dependency chain (in reverse order) is:\\n\\n  -\u003e #3 (kvm_lock){+.+.}-{3:3}:\\n         __mutex_lock+0x6a/0xb40\\n         mutex_lock_nested+0x1f/0x30\\n         kvm_dev_ioctl+0x4fb/0xe50 [kvm]\\n         __se_sys_ioctl+0x7b/0xd0\\n         __x64_sys_ioctl+0x21/0x30\\n         x64_sys_call+0x15d0/0x2e60\\n         do_syscall_64+0x83/0x160\\n         entry_SYSCALL_64_after_hwframe+0x76/0x7e\\n\\n  -\u003e #2 (cpu_hotplug_lock){++++}-{0:0}:\\n         cpus_read_lock+0x2e/0xb0\\n         
static_key_slow_inc+0x16/0x30\\n         kvm_lapic_set_base+0x6a/0x1c0 [kvm]\\n         kvm_set_apic_base+0x8f/0xe0 [kvm]\\n         kvm_set_msr_common+0x9ae/0xf80 [kvm]\\n         vmx_set_msr+0xa54/0xbe0 [kvm_intel]\\n         __kvm_set_msr+0xb6/0x1a0 [kvm]\\n         kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm]\\n         kvm_vcpu_ioctl+0x485/0x5b0 [kvm]\\n         __se_sys_ioctl+0x7b/0xd0\\n         __x64_sys_ioctl+0x21/0x30\\n         x64_sys_call+0x15d0/0x2e60\\n         do_syscall_64+0x83/0x160\\n         entry_SYSCALL_64_after_hwframe+0x76/0x7e\\n\\n  -\u003e #1 (\u0026kvm-\u003esrcu){.+.+}-{0:0}:\\n         __synchronize_srcu+0x44/0x1a0\\n      \\n---truncated---\"},{\"lang\":\"es\",\"value\":\"En el kernel de Linux, se ha resuelto la siguiente vulnerabilidad: KVM: usar mutex dedicado para proteger kvm_usage_count para evitar un bloqueo Use un mutex dedicado para proteger kvm_usage_count para reparar un posible bloqueo en x86 debido a una cadena de bloqueos y sincronizaciones SRCU. Traduciendo el siguiente splat lockdep, CPU1 #6 esperar\u00e1 a CPU0 #1, CPU0 #8 esperar\u00e1 a CPU2 #3 y CPU2 #7 esperar\u00e1 a CPU1 #4 (si hay un escritor, debido a la imparcialidad de los sem\u00e1foros de lectura/escritura). 
CPU0 CPU1 CPU2 1 lock(\u0026amp;kvm-\u0026gt;slots_lock); 2 lock(\u0026amp;vcpu-\u0026gt;mutex); 3 lock(\u0026amp;kvm-\u0026gt;srcu); 4 lock(cpu_hotplug_lock); 5 lock(kvm_lock); 6 lock(\u0026amp;kvm-\u0026gt;slots_lock); 7 lock(cpu_hotplug_lock); 8 sync(\u0026amp;kvm-\u0026gt;srcu); Tenga en cuenta que es probable que haya m\u00e1s bloqueos potenciales en KVM x86, por ejemplo, el mismo patr\u00f3n de tomar cpu_hotplug_lock fuera de kvm_lock probablemente exista con __kvmclock_cpufreq_notifier(): cpuhp_cpufreq_online() | -\u0026gt; cpufreq_online() | -\u0026gt; cpufreq_gov_performance_limits() | -\u0026gt; __cpufreq_driver_target() | -\u0026gt; __target_index() | -\u0026gt; cpufreq_freq_transition_begin() | -\u0026gt; cpufreq_notify_transition() | -\u0026gt; ... __kvmclock_cpufreq_notifier() Pero, en realidad, activar dichos bloqueos es m\u00e1s que raro debido a la combinaci\u00f3n de dependencias y tiempos involucrados. Por ejemplo, el notificador cpufreq solo se usa en CPU m\u00e1s antiguas sin un TSC constante, es muy poco com\u00fan alterar la mitigaci\u00f3n de p\u00e1ginas enormes de NX mientras las m\u00e1quinas virtuales se est\u00e1n ejecutando, y hacerlo mientras tambi\u00e9n se conecta o desconecta una CPU (necesario para generar contenci\u00f3n en cpu_hotplug_lock) ser\u00eda a\u00fan m\u00e1s inusual. La soluci\u00f3n m\u00e1s s\u00f3lida para el problema general de cpu_hotplug_lock es probablemente cambiar vm_list para que sea una lista protegida por RCU, por ejemplo, para que el notificador cpufreq de x86 no tome kvm_lock. Por ahora, conform\u00e9monos con arreglar el bloqueo m\u00e1s evidente, ya que cambiar a una lista protegida por RCU es un cambio mucho m\u00e1s complejo, pero agregue un comentario en locking.rst para indicar que se debe tener cuidado al recorrer manteniendo kvm_lock y recorrer vm_list. 
======================================================== ADVERTENCIA: posible dependencia de bloqueo circular detectada 6.10.0-smp--c257535a0c9d-pip #330 Tainted: GSO ------------------------------------------------------ tee/35048 est\u00e1 intentando adquirir el bloqueo: ff6a80eced71e0a8 (\u0026amp;kvm-\u0026gt;slots_lock){+.+.}-{3:3}, en: set_nx_huge_pages+0x179/0x1e0 [kvm] pero la tarea ya tiene el bloqueo: ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, en: set_nx_huge_pages+0x14a/0x1e0 [kvm] cuyo bloqueo ya depende del nuevo bloqueo. la cadena de dependencia existente (en orden inverso) es: -\u0026gt; #3 (kvm_lock){+.+.}-{3:3}: __mutex_lock+0x6a/0xb40 mutex_lock_nested+0x1f/0x30 kvm_dev_ioctl+0x4fb/0xe50 [kvm] __se_sys_ioctl+0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -\u0026gt; #2 (cpu_hotplug_lock){++++}-{0:0}: Bloqueo de lectura de CPU + 0x2e/0xb0 Clave est\u00e1tica lenta Inc + 0x16/0x30 Base de configuraci\u00f3n de lapic Lapic + 0x6a/0x1c0 [kvm] Base de configuraci\u00f3n de apic Lapic + 0x8f/0xe0 [kvm] MSR com\u00fan Lapic + 0x9ae/0xf80 [kvm] MSR vmx + 0xa54/0xbe0 [kvm_intel] MSR + 0xb6/0x1a0 [kvm] VCPUE ioctl + 0xeca/0x10c0 [kvm] VCPUE ioctl + 0x485/0x5b0 [kvm] SYS ioctl + 0x7b/0xd0 __x64_sys_ioctl+0x21/0x30 x64_sys_call+0x15d0/0x2e60 do_syscall_64+0x83/0x160 entry_SYSCALL_64_after_hwframe+0x76/0x7e -\u0026gt; #1 (\u0026amp;kvm-\u0026gt;srcu){.+.+}-{0:0}: __synchronize_srcu+0x44/0x1a0 
---truncado---\"}],\"metrics\":{\"cvssMetricV31\":[{\"source\":\"nvd@nist.gov\",\"type\":\"Primary\",\"cvssData\":{\"version\":\"3.1\",\"vectorString\":\"CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H\",\"baseScore\":5.5,\"baseSeverity\":\"MEDIUM\",\"attackVector\":\"LOCAL\",\"attackComplexity\":\"LOW\",\"privilegesRequired\":\"LOW\",\"userInteraction\":\"NONE\",\"scope\":\"UNCHANGED\",\"confidentialityImpact\":\"NONE\",\"integrityImpact\":\"NONE\",\"availabilityImpact\":\"HIGH\"},\"exploitabilityScore\":1.8,\"impactScore\":3.6}]},\"weaknesses\":[{\"source\":\"nvd@nist.gov\",\"type\":\"Primary\",\"description\":[{\"lang\":\"en\",\"value\":\"CWE-667\"}]}],\"configurations\":[{\"nodes\":[{\"operator\":\"OR\",\"negate\":false,\"cpeMatch\":[{\"vulnerable\":true,\"criteria\":\"cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*\",\"versionStartIncluding\":\"6.3\",\"versionEndExcluding\":\"6.6.54\",\"matchCriteriaId\":\"20B4A42E-C497-4CCC-8414-F646F1E472AD\"},{\"vulnerable\":true,\"criteria\":\"cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*\",\"versionStartIncluding\":\"6.7\",\"versionEndExcluding\":\"6.10.13\",\"matchCriteriaId\":\"CE94BB8D-B0AB-4563-9ED7-A12122B56EBE\"},{\"vulnerable\":true,\"criteria\":\"cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*\",\"versionStartIncluding\":\"6.11\",\"versionEndExcluding\":\"6.11.2\",\"matchCriteriaId\":\"AB755D26-97F4-43B6-8604-CD076811E181\"}]}]}],\"references\":[{\"url\":\"https://git.kernel.org/stable/c/44d17459626052a2390457e550a12cb973506b2f\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\",\"tags\":[\"Patch\"]},{\"url\":\"https://git.kernel.org/stable/c/4777225ec89f52bb9ca16a33cfb44c189f1b7b47\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\",\"tags\":[\"Patch\"]},{\"url\":\"https://git.kernel.org/stable/c/760a196e6dcb29580e468b44b5400171dae184d8\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\",\"tags\":[\"Patch\"]},{\"url\":\"https://git.kernel.org/stable/c/a2764afce521fd9fd7a5ff6ed52ac2095873128a\",\"source\":\"416baaa9-dc
9f-4396-8d5f-8c081fb06d67\",\"tags\":[\"Patch\"]}]}}",
    "vulnrichment": {
      "containers": "{\"adp\": [{\"title\": \"CISA ADP Vulnrichment\", \"metrics\": [{\"other\": {\"type\": \"ssvc\", \"content\": {\"id\": \"CVE-2024-47744\", \"role\": \"CISA Coordinator\", \"options\": [{\"Exploitation\": \"none\"}, {\"Automatable\": \"no\"}, {\"Technical Impact\": \"partial\"}], \"version\": \"2.0.3\", \"timestamp\": \"2024-10-21T12:58:48.680064Z\"}}}], \"providerMetadata\": {\"orgId\": \"134c704f-9b21-4f2e-91b3-4a467353bcc0\", \"shortName\": \"CISA-ADP\", \"dateUpdated\": \"2024-10-21T12:58:51.867Z\"}}], \"cna\": {\"title\": \"KVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock\", \"affected\": [{\"repo\": \"https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git\", \"vendor\": \"Linux\", \"product\": \"Linux\", \"versions\": [{\"status\": \"affected\", \"version\": \"0bf50497f03b3d892c470c7d1a10a3e9c3c95821\", \"lessThan\": \"4777225ec89f52bb9ca16a33cfb44c189f1b7b47\", \"versionType\": \"git\"}, {\"status\": \"affected\", \"version\": \"0bf50497f03b3d892c470c7d1a10a3e9c3c95821\", \"lessThan\": \"a2764afce521fd9fd7a5ff6ed52ac2095873128a\", \"versionType\": \"git\"}, {\"status\": \"affected\", \"version\": \"0bf50497f03b3d892c470c7d1a10a3e9c3c95821\", \"lessThan\": \"760a196e6dcb29580e468b44b5400171dae184d8\", \"versionType\": \"git\"}, {\"status\": \"affected\", \"version\": \"0bf50497f03b3d892c470c7d1a10a3e9c3c95821\", \"lessThan\": \"44d17459626052a2390457e550a12cb973506b2f\", \"versionType\": \"git\"}], \"programFiles\": [\"Documentation/virt/kvm/locking.rst\", \"virt/kvm/kvm_main.c\"], \"defaultStatus\": \"unaffected\"}, {\"repo\": \"https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git\", \"vendor\": \"Linux\", \"product\": \"Linux\", \"versions\": [{\"status\": \"affected\", \"version\": \"6.3\"}, {\"status\": \"unaffected\", \"version\": \"0\", \"lessThan\": \"6.3\", \"versionType\": \"semver\"}, {\"status\": \"unaffected\", \"version\": \"6.6.54\", \"versionType\": \"semver\", \"lessThanOrEqual\": 
\"6.6.*\"}, {\"status\": \"unaffected\", \"version\": \"6.10.13\", \"versionType\": \"semver\", \"lessThanOrEqual\": \"6.10.*\"}, {\"status\": \"unaffected\", \"version\": \"6.11.2\", \"versionType\": \"semver\", \"lessThanOrEqual\": \"6.11.*\"}, {\"status\": \"unaffected\", \"version\": \"6.12\", \"versionType\": \"original_commit_for_fix\", \"lessThanOrEqual\": \"*\"}], \"programFiles\": [\"Documentation/virt/kvm/locking.rst\", \"virt/kvm/kvm_main.c\"], \"defaultStatus\": \"affected\"}], \"references\": [{\"url\": \"https://git.kernel.org/stable/c/4777225ec89f52bb9ca16a33cfb44c189f1b7b47\"}, {\"url\": \"https://git.kernel.org/stable/c/a2764afce521fd9fd7a5ff6ed52ac2095873128a\"}, {\"url\": \"https://git.kernel.org/stable/c/760a196e6dcb29580e468b44b5400171dae184d8\"}, {\"url\": \"https://git.kernel.org/stable/c/44d17459626052a2390457e550a12cb973506b2f\"}], \"x_generator\": {\"engine\": \"bippy-5f407fcff5a0\"}, \"descriptions\": [{\"lang\": \"en\", \"value\": \"In the Linux kernel, the following vulnerability has been resolved:\\n\\nKVM: Use dedicated mutex to protect kvm_usage_count to avoid deadlock\\n\\nUse a dedicated mutex to guard kvm_usage_count to fix a potential deadlock\\non x86 due to a chain of locks and SRCU synchronizations.  
Translating the\\nbelow lockdep splat, CPU1 #6 will wait on CPU0 #1, CPU0 #8 will wait on\\nCPU2 #3, and CPU2 #7 will wait on CPU1 #4 (if there\u0027s a writer, due to the\\nfairness of r/w semaphores).\\n\\n    CPU0                     CPU1                     CPU2\\n1   lock(\u0026kvm-\u003eslots_lock);\\n2                                                     lock(\u0026vcpu-\u003emutex);\\n3                                                     lock(\u0026kvm-\u003esrcu);\\n4                            lock(cpu_hotplug_lock);\\n5                            lock(kvm_lock);\\n6                            lock(\u0026kvm-\u003eslots_lock);\\n7                                                     lock(cpu_hotplug_lock);\\n8   sync(\u0026kvm-\u003esrcu);\\n\\nNote, there are likely more potential deadlocks in KVM x86, e.g. the same\\npattern of taking cpu_hotplug_lock outside of kvm_lock likely exists with\\n__kvmclock_cpufreq_notifier():\\n\\n  cpuhp_cpufreq_online()\\n  |\\n  -\u003e cpufreq_online()\\n     |\\n     -\u003e cpufreq_gov_performance_limits()\\n        |\\n        -\u003e __cpufreq_driver_target()\\n           |\\n           -\u003e __target_index()\\n              |\\n              -\u003e cpufreq_freq_transition_begin()\\n                 |\\n                 -\u003e cpufreq_notify_transition()\\n                    |\\n                    -\u003e ... __kvmclock_cpufreq_notifier()\\n\\nBut, actually triggering such deadlocks is beyond rare due to the\\ncombination of dependencies and timings involved.  E.g. the cpufreq\\nnotifier is only used on older CPUs without a constant TSC, mucking with\\nthe NX hugepage mitigation while VMs are running is very uncommon, and\\ndoing so while also onlining/offlining a CPU (necessary to generate\\ncontention on cpu_hotplug_lock) would be even more unusual.\\n\\nThe most robust solution to the general cpu_hotplug_lock issue is likely\\nto switch vm_list to be an RCU-protected list, e.g. 
so that x86\u0027s cpufreq\\nnotifier doesn\u0027t to take kvm_lock.  For now, settle for fixing the most\\nblatant deadlock, as switching to an RCU-protected list is a much more\\ninvolved change, but add a comment in locking.rst to call out that care\\nneeds to be taken when walking holding kvm_lock and walking vm_list.\\n\\n  ======================================================\\n  WARNING: possible circular locking dependency detected\\n  6.10.0-smp--c257535a0c9d-pip #330 Tainted: G S         O\\n  ------------------------------------------------------\\n  tee/35048 is trying to acquire lock:\\n  ff6a80eced71e0a8 (\u0026kvm-\u003eslots_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x179/0x1e0 [kvm]\\n\\n  but task is already holding lock:\\n  ffffffffc07abb08 (kvm_lock){+.+.}-{3:3}, at: set_nx_huge_pages+0x14a/0x1e0 [kvm]\\n\\n  which lock already depends on the new lock.\\n\\n   the existing dependency chain (in reverse order) is:\\n\\n  -\u003e #3 (kvm_lock){+.+.}-{3:3}:\\n         __mutex_lock+0x6a/0xb40\\n         mutex_lock_nested+0x1f/0x30\\n         kvm_dev_ioctl+0x4fb/0xe50 [kvm]\\n         __se_sys_ioctl+0x7b/0xd0\\n         __x64_sys_ioctl+0x21/0x30\\n         x64_sys_call+0x15d0/0x2e60\\n         do_syscall_64+0x83/0x160\\n         entry_SYSCALL_64_after_hwframe+0x76/0x7e\\n\\n  -\u003e #2 (cpu_hotplug_lock){++++}-{0:0}:\\n         cpus_read_lock+0x2e/0xb0\\n         static_key_slow_inc+0x16/0x30\\n         kvm_lapic_set_base+0x6a/0x1c0 [kvm]\\n         kvm_set_apic_base+0x8f/0xe0 [kvm]\\n         kvm_set_msr_common+0x9ae/0xf80 [kvm]\\n         vmx_set_msr+0xa54/0xbe0 [kvm_intel]\\n         __kvm_set_msr+0xb6/0x1a0 [kvm]\\n         kvm_arch_vcpu_ioctl+0xeca/0x10c0 [kvm]\\n         kvm_vcpu_ioctl+0x485/0x5b0 [kvm]\\n         __se_sys_ioctl+0x7b/0xd0\\n         __x64_sys_ioctl+0x21/0x30\\n         x64_sys_call+0x15d0/0x2e60\\n         do_syscall_64+0x83/0x160\\n         entry_SYSCALL_64_after_hwframe+0x76/0x7e\\n\\n  -\u003e #1 
(\u0026kvm-\u003esrcu){.+.+}-{0:0}:\\n         __synchronize_srcu+0x44/0x1a0\\n      \\n---truncated---\"}], \"providerMetadata\": {\"orgId\": \"416baaa9-dc9f-4396-8d5f-8c081fb06d67\", \"shortName\": \"Linux\", \"dateUpdated\": \"2024-12-19T09:27:15.269Z\"}}}",
      "cveMetadata": "{\"cveId\": \"CVE-2024-47744\", \"state\": \"PUBLISHED\", \"dateUpdated\": \"2024-12-19T09:27:15.269Z\", \"dateReserved\": \"2024-09-30T16:00:12.960Z\", \"assignerOrgId\": \"416baaa9-dc9f-4396-8d5f-8c081fb06d67\", \"datePublished\": \"2024-10-21T12:14:11.830Z\", \"assignerShortName\": \"Linux\"}",
      "dataType": "CVE_RECORD",
      "dataVersion": "5.1"
    }
  }
}

