Resurrecting a 2013 Desktop for 2026 AI Inference
How to bypass PCIe address limits to run a 16GB Blackwell GPU on an Asus B85M-E motherboard.
The "Frankenstein" Node Specs
| Component | Detail |
| --- | --- |
| GPU | NVIDIA RTX 5060 Ti (16GB VRAM) |
| Motherboard | Asus B85M-E (LGA 1150) |
| Primary Goal | Dedicated Headless AI Inference (Ollama/OpenClaw) |
The Challenge: PCIe Region Invalid
Modern 16GB GPUs expose a large 64-bit memory BAR — the address "window" through which the CPU reaches VRAM — far bigger than anything B85-era firmware was written to allocate. Without intervention, the BIOS fails to map the region, the NVIDIA driver can't claim it, and dmesg reports a `PCIe region invalid` error.
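You can see the problem in `lspci -v`: the card advertises a prefetchable BAR as large as its VRAM. Below is a minimal sketch that picks that BAR out of the output; the lspci excerpt is illustrative sample data, not captured from this machine.

```python
import re

# Sample lspci -v excerpt (illustrative, not from the actual node).
# A 16GB card exposes a 64-bit prefetchable BAR the firmware must map.
SAMPLE_LSPCI = """\
01:00.0 VGA compatible controller: NVIDIA Corporation Device 2d04
	Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
	Memory at c0000000 (64-bit, prefetchable) [size=16G]
	Memory at d0000000 (64-bit, prefetchable) [size=32M]
"""

def largest_prefetchable_bar(lspci_text: str) -> str:
    """Return the size string of the biggest 64-bit prefetchable BAR."""
    sizes = re.findall(r"64-bit, prefetchable\) \[size=(\d+)([MG])\]", lspci_text)
    in_mb = [(int(n) * (1024 if unit == "G" else 1), f"{n}{unit}")
             for n, unit in sizes]
    return max(in_mb)[1]

print(largest_prefetchable_bar(SAMPLE_LSPCI))  # -> 16G, the window the old BIOS can't place
```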
1. BIOS Configuration
Crucial tweaks to isolate the GPU for compute-only tasks:
- Primary Display: Set to `iGPU` (forces display output to the onboard VGA/HDMI).
- iGPU Multi-Monitor: `Enabled` (keeps the NVIDIA card visible to the OS).
- PCIEX16_1 Speed: `Gen3` (max throughput for model loading).
- Launch CSM: `Disabled` (pure UEFI required for CUDA 12.8).
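A quick post-BIOS sanity check is that `lspci` now lists both the Intel iGPU and the NVIDIA card. A small sketch of that check, using illustrative sample output rather than a real capture:

```python
# Sample `lspci` output (illustrative). With iGPU Multi-Monitor enabled,
# both display controllers should be enumerated on the bus.
SAMPLE = """\
00:02.0 VGA compatible controller: Intel Corporation 4th Gen Core Processor Integrated Graphics Controller
01:00.0 VGA compatible controller: NVIDIA Corporation Device 2d04
"""

def visible_gpus(lspci_text: str) -> list[str]:
    """Return vendor names of all VGA/3D controllers in lspci output."""
    vendors = []
    for line in lspci_text.splitlines():
        if "VGA compatible controller" in line or "3D controller" in line:
            vendors.append(line.split(": ", 1)[1].split()[0])
    return vendors

print(visible_gpus(SAMPLE))  # -> ['Intel', 'NVIDIA']
```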
2. The Kernel Workaround
Since the BIOS can't map the memory window, we force the Linux kernel to reallocate resources:
```shell
# Edit /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=realloc,nocrs"
```
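For scripted provisioning, the edit can be applied idempotently. A minimal sketch, assuming a Debian/Ubuntu-style `/etc/default/grub` (after writing the file you would still run `sudo update-grub` and reboot):

```python
import re

def add_kernel_params(grub_text: str, params: str) -> str:
    """Append params to GRUB_CMDLINE_LINUX_DEFAULT unless already present."""
    def patch(match: re.Match) -> str:
        current = match.group(1)
        if params in current:
            return match.group(0)  # already applied; leave the line untouched
        return f'GRUB_CMDLINE_LINUX_DEFAULT="{current} {params}"'
    return re.sub(r'GRUB_CMDLINE_LINUX_DEFAULT="([^"]*)"', patch, grub_text)

before = 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n'
after = add_kernel_params(before, "pci=realloc,nocrs")
print(after.strip())  # GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pci=realloc,nocrs"
```

Running the function twice produces the same line, so it is safe to re-run in a setup script.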
3. Results
With these flags, the kernel reallocates the PCI resources itself and `nvidia-smi` detects the card. By offloading the UI to the integrated graphics, we reclaim the full 16GB of VRAM for high-precision models like Gemma 4 E2B.
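To see why reclaiming every gigabyte matters, here is a back-of-envelope VRAM estimate for a model at a given precision. The parameter counts and overhead allowance are illustrative assumptions, not measured figures:

```python
def vram_needed_gb(params_billion: float, bytes_per_param: float,
                   overhead_gb: float = 1.5) -> float:
    """Rough estimate: weights + a flat allowance for KV cache / CUDA context."""
    return params_billion * bytes_per_param + overhead_gb

# e.g. a ~4B-parameter model held at FP16 (2 bytes/param):
print(vram_needed_gb(4, 2.0))  # 9.5 -> fits in 16GB with headroom
# the same model at FP32 (4 bytes/param):
print(vram_needed_gb(4, 4.0))  # 17.5 -> would spill past 16GB
```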
Update: Agent Testing Success
Following the initial hardware setup, we moved to the software stack with phenomenal results:
- Successfully ran Gemma 4 E2B on Ollama with 100% GPU utilization.
- Connected it as the agent model for OpenClaw and successfully interacted via chat.
- Pulled the larger Gemma 4 E4B model and reran the exact same flow successfully!
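The interaction above goes through Ollama's standard REST API (`/api/generate` on port 11434). A minimal sketch of building such a request with only the standard library; the model tag `"gemma"` is a placeholder assumption — use whatever `ollama list` reports for your pulled model:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

# Model tag is a placeholder; send with urllib.request.urlopen(req) on the node.
req = build_request("gemma", "Why did my old B85 board need pci=realloc?")
print(json.loads(req.data)["model"])  # -> gemma
```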