fkie_cve-2024-49569
Vulnerability from fkie_nvd
Published: 2025-01-11 13:15
Modified: 2025-01-11 13:15
Severity: not yet assigned (record awaiting analysis)
Summary
In the Linux kernel, the following vulnerability has been resolved:

nvme-rdma: unquiesce admin_q before destroy it

The kernel hangs on destroying admin_q when controller creation fails, as in the following calltrace:

PID: 23644    TASK: ff2d52b40f439fc0  CPU: 2    COMMAND: "nvme"
 #0 [ff61d23de260fb78] __schedule at ffffffff8323bc15
 #1 [ff61d23de260fc08] schedule at ffffffff8323c014
 #2 [ff61d23de260fc28] blk_mq_freeze_queue_wait at ffffffff82a3dba1
 #3 [ff61d23de260fc78] blk_freeze_queue at ffffffff82a4113a
 #4 [ff61d23de260fc90] blk_cleanup_queue at ffffffff82a33006
 #5 [ff61d23de260fcb0] nvme_rdma_destroy_admin_queue at ffffffffc12686ce
 #6 [ff61d23de260fcc8] nvme_rdma_setup_ctrl at ffffffffc1268ced
 #7 [ff61d23de260fd28] nvme_rdma_create_ctrl at ffffffffc126919b
 #8 [ff61d23de260fd68] nvmf_dev_write at ffffffffc024f362
 #9 [ff61d23de260fe38] vfs_write at ffffffff827d5f25
    RIP: 00007fda7891d574  RSP: 00007ffe2ef06958  RFLAGS: 00000202
    RAX: ffffffffffffffda  RBX: 000055e8122a4d90  RCX: 00007fda7891d574
    RDX: 000000000000012b  RSI: 000055e8122a4d90  RDI: 0000000000000004
    RBP: 00007ffe2ef079c0   R8: 000000000000012b   R9: 000055e8122a4d90
    R10: 0000000000000000  R11: 0000000000000202  R12: 0000000000000004
    R13: 000055e8122923c0  R14: 000000000000012b  R15: 00007fda78a54500
    ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b

This happens because admin_q is quiesced before the pending requests are cancelled, but is never unquiesced before being destroyed. As a result the pending requests cannot be drained, and the task hangs in blk_mq_freeze_queue_wait() forever. The fix reuses nvme_rdma_teardown_admin_queue(), which resolves the issue and simplifies the code.
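To make the failure mode concrete, here is a minimal sketch of the broken and fixed error-path shapes in kernel-style C. It is reconstructed from the calltrace and commit message above, not the literal upstream diff; the two wrapper function names are hypothetical, while blk_mq_quiesce_queue(), blk_mq_unquiesce_queue(), nvme_rdma_destroy_admin_queue() and nvme_rdma_teardown_admin_queue() are the real interfaces involved.

/*
 * Illustrative sketch only; the error-path wrappers are hypothetical
 * and do not reproduce the actual upstream patch.
 */

/* Buggy shape: the admin queue is quiesced so outstanding requests
 * can be cancelled, but it is never unquiesced. Destroying the queue
 * (blk_cleanup_queue() on this kernel) then freezes it and waits in
 * blk_mq_freeze_queue_wait() for requests that a quiesced queue can
 * never drain. */
static void nvme_rdma_error_path_buggy(struct nvme_rdma_ctrl *ctrl)
{
	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
	/* ... cancel outstanding admin requests ... */
	/* missing: blk_mq_unquiesce_queue(ctrl->ctrl.admin_q); */
	nvme_rdma_destroy_admin_queue(ctrl, false);	/* hangs forever */
}

/* Fixed shape, per the commit message: reuse the existing teardown
 * helper, which unquiesces the queue after cancelling requests so
 * pending requests drain before the queue is destroyed. */
static void nvme_rdma_error_path_fixed(struct nvme_rdma_ctrl *ctrl)
{
	nvme_rdma_teardown_admin_queue(ctrl, false);
}

The invariant being restored: a quiesced blk-mq queue cannot dispatch requests, so any freeze/drain of that queue must be preceded by an unquiesce.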
Impacted products: none listed.
{
  "cveTags": [],
  "descriptions": [
    {
      "lang": "en",
      "value": "In the Linux kernel, the following vulnerability has been resolved:\n\nnvme-rdma: unquiesce admin_q before destroy it\n\nKernel will hang on destroy admin_q while we create ctrl failed, such\nas following calltrace:\n\nPID: 23644    TASK: ff2d52b40f439fc0  CPU: 2    COMMAND: \"nvme\"\n #0 [ff61d23de260fb78] __schedule at ffffffff8323bc15\n #1 [ff61d23de260fc08] schedule at ffffffff8323c014\n #2 [ff61d23de260fc28] blk_mq_freeze_queue_wait at ffffffff82a3dba1\n #3 [ff61d23de260fc78] blk_freeze_queue at ffffffff82a4113a\n #4 [ff61d23de260fc90] blk_cleanup_queue at ffffffff82a33006\n #5 [ff61d23de260fcb0] nvme_rdma_destroy_admin_queue at ffffffffc12686ce\n #6 [ff61d23de260fcc8] nvme_rdma_setup_ctrl at ffffffffc1268ced\n #7 [ff61d23de260fd28] nvme_rdma_create_ctrl at ffffffffc126919b\n #8 [ff61d23de260fd68] nvmf_dev_write at ffffffffc024f362\n #9 [ff61d23de260fe38] vfs_write at ffffffff827d5f25\n    RIP: 00007fda7891d574  RSP: 00007ffe2ef06958  RFLAGS: 00000202\n    RAX: ffffffffffffffda  RBX: 000055e8122a4d90  RCX: 00007fda7891d574\n    RDX: 000000000000012b  RSI: 000055e8122a4d90  RDI: 0000000000000004\n    RBP: 00007ffe2ef079c0   R8: 000000000000012b   R9: 000055e8122a4d90\n    R10: 0000000000000000  R11: 0000000000000202  R12: 0000000000000004\n    R13: 000055e8122923c0  R14: 000000000000012b  R15: 00007fda78a54500\n    ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b\n\nThis due to we have quiesced admi_q before cancel requests, but forgot\nto unquiesce before destroy it, as a result we fail to drain the\npending requests, and hang on blk_mq_freeze_queue_wait() forever. Here\ntry to reuse nvme_rdma_teardown_admin_queue() to fix this issue and\nsimplify the code."
    }
  ],
  "id": "CVE-2024-49569",
  "lastModified": "2025-01-11T13:15:23.840",
  "metrics": {},
  "published": "2025-01-11T13:15:23.840",
  "references": [
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "url": "https://git.kernel.org/stable/c/05b436f3cf65c957eff86c5ea5ddfa2604b32c63"
    },
    {
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
      "url": "https://git.kernel.org/stable/c/5858b687559809f05393af745cbadf06dee61295"
    }
  ],
  "sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
  "vulnStatus": "Awaiting Analysis"
}

