CVE-2024-49569
Vulnerability from cvelistv5
Published: 2025-01-11 12:25
Modified: 2025-01-20 06:19
Severity: not specified (no CVSS metrics in the record)
Summary
In the Linux kernel, the following vulnerability has been resolved:

nvme-rdma: unquiesce admin_q before destroy it

The kernel hangs while destroying admin_q when controller creation fails, as in the following calltrace:

PID: 23644    TASK: ff2d52b40f439fc0  CPU: 2    COMMAND: "nvme"
 #0 [ff61d23de260fb78] __schedule at ffffffff8323bc15
 #1 [ff61d23de260fc08] schedule at ffffffff8323c014
 #2 [ff61d23de260fc28] blk_mq_freeze_queue_wait at ffffffff82a3dba1
 #3 [ff61d23de260fc78] blk_freeze_queue at ffffffff82a4113a
 #4 [ff61d23de260fc90] blk_cleanup_queue at ffffffff82a33006
 #5 [ff61d23de260fcb0] nvme_rdma_destroy_admin_queue at ffffffffc12686ce
 #6 [ff61d23de260fcc8] nvme_rdma_setup_ctrl at ffffffffc1268ced
 #7 [ff61d23de260fd28] nvme_rdma_create_ctrl at ffffffffc126919b
 #8 [ff61d23de260fd68] nvmf_dev_write at ffffffffc024f362
 #9 [ff61d23de260fe38] vfs_write at ffffffff827d5f25
    RIP: 00007fda7891d574  RSP: 00007ffe2ef06958  RFLAGS: 00000202
    RAX: ffffffffffffffda  RBX: 000055e8122a4d90  RCX: 00007fda7891d574
    RDX: 000000000000012b  RSI: 000055e8122a4d90  RDI: 0000000000000004
    RBP: 00007ffe2ef079c0   R8: 000000000000012b   R9: 000055e8122a4d90
    R10: 0000000000000000  R11: 0000000000000202  R12: 0000000000000004
    R13: 000055e8122923c0  R14: 000000000000012b  R15: 00007fda78a54500
    ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b

This happens because admin_q is quiesced before the pending requests are cancelled, but it is never unquiesced before being destroyed. As a result the pending requests cannot be drained and the task hangs in blk_mq_freeze_queue_wait() forever. The fix reuses nvme_rdma_teardown_admin_queue() to resolve the hang and simplify the code.
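For context, the hang comes from destroying an admin queue that is still quiesced: the freeze performed during queue teardown waits for outstanding requests that can no longer be dispatched or drained. The sketch below is a condensed, illustrative view of the error path in nvme_rdma_setup_ctrl() (drivers/nvme/host/rdma.c) and of the fix's approach of reusing the existing teardown helper; it is not the literal upstream diff, and the helper internals and signatures vary across kernel versions.

/* Condensed illustration only; not the literal patch. */

/* Before the fix: the error path left admin_q quiesced when destroying it. */
destroy_admin:
	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);   /* stop dispatching new requests */
	nvme_cancel_admin_tagset(&ctrl->ctrl);      /* fail in-flight admin requests */
	/* Missing here: blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
	 * without it, the freeze done while destroying the queue waits in
	 * blk_mq_freeze_queue_wait() for requests that never drain. */
	nvme_rdma_destroy_admin_queue(ctrl);

/* After the fix: the same label reuses the existing teardown helper, which
 * takes care of unquiescing the admin queue before it is torn down and
 * simplifies this error path. */
destroy_admin:
	nvme_rdma_teardown_admin_queue(ctrl, new);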
Impacted products
Vendor   Product   Version
Linux    Linux     958dc1d32c80566f58d18f05ef1f05bd32d172c1 (git commit)
Linux    Linux     5.12


{
  "containers": {
    "cna": {
      "affected": [
        {
          "defaultStatus": "unaffected",
          "product": "Linux",
          "programFiles": [
            "drivers/nvme/host/rdma.c"
          ],
          "repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
          "vendor": "Linux",
          "versions": [
            {
              "lessThan": "05b436f3cf65c957eff86c5ea5ddfa2604b32c63",
              "status": "affected",
              "version": "958dc1d32c80566f58d18f05ef1f05bd32d172c1",
              "versionType": "git"
            },
            {
              "lessThan": "5858b687559809f05393af745cbadf06dee61295",
              "status": "affected",
              "version": "958dc1d32c80566f58d18f05ef1f05bd32d172c1",
              "versionType": "git"
            }
          ]
        },
        {
          "defaultStatus": "affected",
          "product": "Linux",
          "programFiles": [
            "drivers/nvme/host/rdma.c"
          ],
          "repo": "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
          "vendor": "Linux",
          "versions": [
            {
              "status": "affected",
              "version": "5.12"
            },
            {
              "lessThan": "5.12",
              "status": "unaffected",
              "version": "0",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "6.12.*",
              "status": "unaffected",
              "version": "6.12.5",
              "versionType": "semver"
            },
            {
              "lessThanOrEqual": "*",
              "status": "unaffected",
              "version": "6.13",
              "versionType": "original_commit_for_fix"
            }
          ]
        }
      ],
      "descriptions": [
        {
          "lang": "en",
          "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nnvme-rdma: unquiesce admin_q before destroy it\n\nKernel will hang on destroy admin_q while we create ctrl failed, such\nas following calltrace:\n\nPID: 23644    TASK: ff2d52b40f439fc0  CPU: 2    COMMAND: \"nvme\"\n #0 [ff61d23de260fb78] __schedule at ffffffff8323bc15\n #1 [ff61d23de260fc08] schedule at ffffffff8323c014\n #2 [ff61d23de260fc28] blk_mq_freeze_queue_wait at ffffffff82a3dba1\n #3 [ff61d23de260fc78] blk_freeze_queue at ffffffff82a4113a\n #4 [ff61d23de260fc90] blk_cleanup_queue at ffffffff82a33006\n #5 [ff61d23de260fcb0] nvme_rdma_destroy_admin_queue at ffffffffc12686ce\n #6 [ff61d23de260fcc8] nvme_rdma_setup_ctrl at ffffffffc1268ced\n #7 [ff61d23de260fd28] nvme_rdma_create_ctrl at ffffffffc126919b\n #8 [ff61d23de260fd68] nvmf_dev_write at ffffffffc024f362\n #9 [ff61d23de260fe38] vfs_write at ffffffff827d5f25\n    RIP: 00007fda7891d574  RSP: 00007ffe2ef06958  RFLAGS: 00000202\n    RAX: ffffffffffffffda  RBX: 000055e8122a4d90  RCX: 00007fda7891d574\n    RDX: 000000000000012b  RSI: 000055e8122a4d90  RDI: 0000000000000004\n    RBP: 00007ffe2ef079c0   R8: 000000000000012b   R9: 000055e8122a4d90\n    R10: 0000000000000000  R11: 0000000000000202  R12: 0000000000000004\n    R13: 000055e8122923c0  R14: 000000000000012b  R15: 00007fda78a54500\n    ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b\n\nThis due to we have quiesced admi_q before cancel requests, but forgot\nto unquiesce before destroy it, as a result we fail to drain the\npending requests, and hang on blk_mq_freeze_queue_wait() forever. Here\ntry to reuse nvme_rdma_teardown_admin_queue() to fix this issue and\nsimplify the code."
        }
      ],
      "providerMetadata": {
        "dateUpdated": "2025-01-20T06:19:17.469Z",
        "orgId": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
        "shortName": "Linux"
      },
      "references": [
        {
          "url": "https://git.kernel.org/stable/c/05b436f3cf65c957eff86c5ea5ddfa2604b32c63"
        },
        {
          "url": "https://git.kernel.org/stable/c/5858b687559809f05393af745cbadf06dee61295"
        }
      ],
      "title": "nvme-rdma: unquiesce admin_q before destroy it",
      "x_generator": {
        "engine": "bippy-5f407fcff5a0"
      }
    }
  },
  "cveMetadata": {
    "assignerOrgId": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
    "assignerShortName": "Linux",
    "cveId": "CVE-2024-49569",
    "datePublished": "2025-01-11T12:25:19.455Z",
    "dateReserved": "2025-01-09T09:50:31.772Z",
    "dateUpdated": "2025-01-20T06:19:17.469Z",
    "state": "PUBLISHED"
  },
  "dataType": "CVE_RECORD",
  "dataVersion": "5.1",
  "vulnerability-lookup:meta": {
    "nvd": "{\"cve\":{\"id\":\"CVE-2024-49569\",\"sourceIdentifier\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\",\"published\":\"2025-01-11T13:15:23.840\",\"lastModified\":\"2025-01-11T13:15:23.840\",\"vulnStatus\":\"Received\",\"cveTags\":[],\"descriptions\":[{\"lang\":\"en\",\"value\":\"In the Linux kernel, the following vulnerability has been resolved:\\n\\nnvme-rdma: unquiesce admin_q before destroy it\\n\\nKernel will hang on destroy admin_q while we create ctrl failed, such\\nas following calltrace:\\n\\nPID: 23644    TASK: ff2d52b40f439fc0  CPU: 2    COMMAND: \\\"nvme\\\"\\n #0 [ff61d23de260fb78] __schedule at ffffffff8323bc15\\n #1 [ff61d23de260fc08] schedule at ffffffff8323c014\\n #2 [ff61d23de260fc28] blk_mq_freeze_queue_wait at ffffffff82a3dba1\\n #3 [ff61d23de260fc78] blk_freeze_queue at ffffffff82a4113a\\n #4 [ff61d23de260fc90] blk_cleanup_queue at ffffffff82a33006\\n #5 [ff61d23de260fcb0] nvme_rdma_destroy_admin_queue at ffffffffc12686ce\\n #6 [ff61d23de260fcc8] nvme_rdma_setup_ctrl at ffffffffc1268ced\\n #7 [ff61d23de260fd28] nvme_rdma_create_ctrl at ffffffffc126919b\\n #8 [ff61d23de260fd68] nvmf_dev_write at ffffffffc024f362\\n #9 [ff61d23de260fe38] vfs_write at ffffffff827d5f25\\n    RIP: 00007fda7891d574  RSP: 00007ffe2ef06958  RFLAGS: 00000202\\n    RAX: ffffffffffffffda  RBX: 000055e8122a4d90  RCX: 00007fda7891d574\\n    RDX: 000000000000012b  RSI: 000055e8122a4d90  RDI: 0000000000000004\\n    RBP: 00007ffe2ef079c0   R8: 000000000000012b   R9: 000055e8122a4d90\\n    R10: 0000000000000000  R11: 0000000000000202  R12: 0000000000000004\\n    R13: 000055e8122923c0  R14: 000000000000012b  R15: 00007fda78a54500\\n    ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b\\n\\nThis due to we have quiesced admi_q before cancel requests, but forgot\\nto unquiesce before destroy it, as a result we fail to drain the\\npending requests, and hang on blk_mq_freeze_queue_wait() forever. Here\\ntry to reuse nvme_rdma_teardown_admin_queue() to fix this issue and\\nsimplify the code.\"}],\"metrics\":{},\"references\":[{\"url\":\"https://git.kernel.org/stable/c/05b436f3cf65c957eff86c5ea5ddfa2604b32c63\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"},{\"url\":\"https://git.kernel.org/stable/c/5858b687559809f05393af745cbadf06dee61295\",\"source\":\"416baaa9-dc9f-4396-8d5f-8c081fb06d67\"}]}}"
  }
}
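To make the affected ranges in the record above easier to apply (affected from 5.12, unaffected from 6.12.5 within the 6.12 stable series and from 6.13 mainline), here is a small, self-contained C program that checks a plain "major.minor.patch" kernel version string against those semver entries. It is only a reading aid under those assumptions: it ignores the git-commit ranges, -rc tags, and distribution backports, so vendor advisories remain authoritative.

#include <stdio.h>

/* Parse "major.minor[.patch]"; missing fields default to 0. */
static void parse_ver(const char *s, int v[3])
{
	v[0] = v[1] = v[2] = 0;
	sscanf(s, "%d.%d.%d", &v[0], &v[1], &v[2]);
}

/* Compare two parsed versions like strcmp: <0, 0, or >0. */
static int cmp_ver(const int a[3], const int b[3])
{
	for (int i = 0; i < 3; i++)
		if (a[i] != b[i])
			return a[i] < b[i] ? -1 : 1;
	return 0;
}

int main(int argc, char **argv)
{
	int v[3];
	const int first_affected[3] = {5, 12, 0};   /* "version": "5.12" */
	const int stable_fix[3]     = {6, 12, 5};   /* fixed in 6.12.5 */

	if (argc != 2) {
		fprintf(stderr, "usage: %s <kernel-version>\n", argv[0]);
		return 2;
	}
	parse_ver(argv[1], v);

	/* Per the semver entries: affected if >= 5.12 and < 6.12.5
	 * (6.13 and later already carry the mainline fix). */
	int affected = cmp_ver(v, first_affected) >= 0 &&
		       cmp_ver(v, stable_fix) < 0;

	printf("%s: %s by CVE-2024-49569 (semver ranges only)\n", argv[1],
	       affected ? "potentially affected" : "not listed as affected");
	return affected ? 1 : 0;
}

For example, 6.12.3 is reported as potentially affected, while 6.12.5 and 6.13 are not.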

