
Hi,

I have an HP Z8 Fury G5 workstation running Proxmox VE. When I assign my NVIDIA RTX A4500 GPU to a VM using PCIe passthrough (VFIO), the chassis fans immediately ramp to 100% and remain at maximum speed even after shutting down the VM. Fan speed returns to normal only after rebooting the host.

This happens on every attempt and makes GPU passthrough unusable in practice: the noise is severe, and the fans stay pinned at maximum until the host is rebooted.


Observed Behavior

  • VM starts with GPU passthrough -> fans instantly ramp to 100%

  • VM shutdown -> fans stay at 100%

  • Host reboot -> fans return to normal

The GPU itself works correctly inside the VM (nvidia-smi works as expected), so this looks like a platform-level thermal/fan-control failsafe triggered when the host loses access to GPU telemetry after VFIO binds the device.
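For reference, this is how I confirm which kernel driver owns the GPU at each stage (vfio-pci while the VM holds it). The small awk helper just filters `lspci -k` output; the PCI address is the one given in the details below:

```shell
# Print the "Kernel driver in use" value from lspci -k style output.
driver_in_use() {
  awk -F': ' '/Kernel driver in use/ { print $2 }'
}

# On the host:
#   lspci -ks 47:00.0 | driver_in_use
# While the VM is running, this prints: vfio-pci
```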


Steps to Reproduce

  1. Boot host normally -> fans normal

  2. Start VM without GPU passthrough -> fans normal

  3. Start VM with GPU passthrough -> fans ramp to 100%

  4. Shutdown VM -> fans stay at 100%

  5. Reboot host -> fans return to normal
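For completeness, the passthrough configuration on the VM side is the standard Proxmox hostpci entry (the VMID below is a placeholder; passing the device without a function suffix hands over both .0 and .1):

```
# /etc/pve/qemu-server/<VMID>.conf (relevant lines)
machine: q35
hostpci0: 0000:47:00,pcie=1
```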


Host / Platform Details

System: HP Z8 Fury G5 Workstation
BIOS: HP U61 Ver. 01.02.01
BIOS Release Date: 03/07/2024
BIOS Revision: 2.1
Firmware Revision: 17.20
SMBIOS: 3.5.0
UEFI: Enabled
Hypervisor: Proxmox VE
Host Kernel: Linux 6.14.8-2-pve


GPU Details

GPU: NVIDIA RTX A4500
PCI IDs:

  • VGA: 10de:2232

  • Audio: 10de:1aef

PCI Address:

  • 0000:47:00.0 (GPU)

  • 0000:47:00.1 (GPU audio)

Detected by host:
pci 0000:47:00.0: [10de:2232] type 00 class 0x030000 PCIe Legacy Endpoint
pci 0000:47:00.1: [10de:1aef] type 00 class 0x040300 PCIe Endpoint
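The IDs above are bound to vfio-pci in the usual way. This is a sketch of my modprobe configuration; the exact softdep lines may differ depending on which host driver is installed:

```
# /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:2232,10de:1aef
softdep nvidia pre: vfio-pci
```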


Technical Evidence (VFIO reset loop)

Filtered dmesg (vfio/reset/flr/nvidia):

VFIO - User Level meta-driver version: 0.3
vfio-pci 0000:47:00.0: vgaarb: deactivate vga console
vfio_pci: add [10de:2232] class 0x000000/00000000
vfio_pci: add [10de:1aef] class 0x000000/00000000
vfio-pci 0000:47:00.0: resetting
vfio-pci 0000:47:00.0: reset done
vfio-pci 0000:47:00.1: resetting
vfio-pci 0000:47:00.1: reset done
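One workaround I am considering, sketched below, is handing the GPU back to a host driver after VM shutdown so platform fan control can see its telemetry again. This is untested on the Z8 Fury G5 and is only an assumption that restoring a host driver would clear the failsafe; the paths follow the standard Linux sysfs PCI interface (the sysfs root is a parameter purely so the function can be exercised without hardware):

```shell
# Hypothetical workaround: unbind the device from vfio-pci, clear any
# driver_override, and ask the kernel to re-probe it with a host driver.
rebind_gpu() {
  addr="$1"                # e.g. 0000:47:00.0
  sysfs="${2:-/sys}"       # overridable sysfs root (for dry-run testing)
  dev="$sysfs/bus/pci/devices/$addr"
  # Release the device from whichever driver currently owns it.
  if [ -e "$dev/driver/unbind" ]; then
    printf '%s' "$addr" > "$dev/driver/unbind"
  fi
  # Clear driver_override so vfio-pci does not immediately reclaim it.
  [ -e "$dev/driver_override" ] && printf '\n' > "$dev/driver_override"
  # Let the kernel re-probe the device with a matching host driver.
  printf '%s' "$addr" > "$sysfs/bus/pci/drivers_probe"
}

# On the affected host, for both functions from this post:
#   rebind_gpu 0000:47:00.0
#   rebind_gpu 0000:47:00.1
```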


BIOS / Fan-related Settings Checked

"Increase Idle Fan Speed (%)" = 0
"Increase PCIe Idle Fan Speed (%)" = 0

Performance Mode options available:

  • Performance Mode

  • Rack Mode

  • High Performance Mode

Workload Configuration: Balanced

Even with these settings, the behavior persists.


Questions

  1. Is this a known issue or expected behavior on HP Z8 Fury G5 systems when a GPU is assigned to a VM?

  2. Is there a BIOS/Firmware update that resolves this failsafe fan ramp behavior?

  3. Is there a BIOS setting or recommended configuration to prevent fan ramp when GPU telemetry is unavailable to the host?

  4. Are there HP best practices for VFIO GPU passthrough on this platform?

Thanks.
