Active Alerts
Overview
Ollama
—
checking...
Models in VRAM
—
currently loaded
Active Inference
—
running requests
Open WebUI
—
checking...
PVE Nodes Up
—
of 3 expected
PVE VMs Running
—
across cluster
Total GPU Power
—
watts combined
Est. Cost / Hour
—
@ $0.12/kWh
Power Cost Estimate
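The Est. Cost / Hour card is a straightforward conversion of combined GPU wattage to an hourly cost. A minimal sketch, assuming the $0.12/kWh rate shown in the Overview card (adjust for your tariff):

```python
# Hourly electricity cost from combined GPU power draw.
# Rate defaults to the $0.12/kWh shown on the dashboard; this is an assumption,
# not a universal tariff.

def cost_per_hour(total_watts: float, rate_per_kwh: float = 0.12) -> float:
    """Convert an instantaneous combined draw (W) into an hourly cost estimate."""
    return round(total_watts / 1000.0 * rate_per_kwh, 4)

# Two Pascal cards averaging ~300 W combined under light load (illustrative):
print(cost_per_hour(300))  # 0.036
```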
GPU Telemetry — Tesla P100 & P40
Tesla P100 · 16GB HBM2
Temperature
—
°C / 83°C TJ
Power Draw
—
W / 250W TDP
Core Util
—
% utilized
VRAM Used
—
MB / 16384MB
Mem Util
—
% bandwidth
SM Clock
—
MHz
Tesla P40 · 24GB GDDR5
Temperature
—
°C / 83°C TJ
Power Draw
—
W / 250W TDP
Core Util
—
% utilized
VRAM Used
—
MB / 24576MB
Mem Util
—
% bandwidth
SM Clock
—
MHz
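The six metrics in each card above (temperature, power, core utilization, VRAM, memory-bandwidth utilization, SM clock) map directly onto `nvidia-smi` query fields. A sketch of how one polled line could be parsed; the sample values are illustrative, not real telemetry:

```python
# Metrics can be polled per GPU with, e.g.:
#   nvidia-smi --query-gpu=temperature.gpu,power.draw,utilization.gpu,\
#     memory.used,utilization.memory,clocks.sm --format=csv,noheader,nounits
# This parses one line of that CSV output into the fields shown on the cards.

def parse_gpu_csv(line: str) -> dict:
    temp, power, util, mem, mem_util, sm = (v.strip() for v in line.split(","))
    return {
        "temp_c": int(temp),          # °C, vs. 83°C TJ limit
        "power_w": float(power),      # W, vs. 250W TDP
        "core_util_pct": int(util),   # % utilized
        "vram_used_mb": int(mem),     # MB
        "mem_util_pct": int(mem_util),# % bandwidth
        "sm_clock_mhz": int(sm),      # MHz
    }

sample = "54, 131.42, 87, 11240, 63, 1189"  # illustrative P100 reading
print(parse_gpu_csv(sample)["power_w"])  # 131.42
```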
GPU Time Series — Last 30 Min
Temperature (°C) —
Power Draw (W) —
Core Utilization (%) —
VRAM Used (MB) —
Ollama — Active Models
| Model Name | Status | VRAM Allocated | Model Size | Load Events (1h) | Unload Events (1h) |
|---|---|---|---|---|---|
| Loading model data... | | | | | |
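The model table can be filled from Ollama's `/api/ps` endpoint (`GET http://localhost:11434/api/ps`), which lists currently loaded models with their VRAM allocation. A minimal parser over a sample response; the field names follow the Ollama API, but the model name and sizes here are illustrative:

```python
import json

def loaded_models(ps_body: str) -> list[dict]:
    """Extract name and MB sizes from an Ollama /api/ps response body."""
    models = json.loads(ps_body).get("models", [])
    return [
        {
            "name": m["name"],
            "vram_mb": m.get("size_vram", 0) // (1024 * 1024),
            "size_mb": m.get("size", 0) // (1024 * 1024),
        }
        for m in models
    ]

sample = '{"models": [{"name": "llama3:8b", "size": 8589934592, "size_vram": 8589934592}]}'
print(loaded_models(sample))  # [{'name': 'llama3:8b', 'vram_mb': 8192, 'size_mb': 8192}]
```

Load/unload event counts are not in `/api/ps`; they would come from counting changes between polls or from server logs.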
Open WebUI
—
Status
—
Active Sessions
—
Model Loads (1h)
—
Inference Running
Proxmox VE Cluster
Loading PVE data...
Proxmox — Resource Usage
CPU % — All Nodes
Memory % — All Nodes
VM Count per Node
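The "PVE Nodes Up" and "VM Count per Node" panels map onto Proxmox's `/api2/json/cluster/resources` endpoint, which returns every node and guest in one list. A sketch that summarizes a sample response body; node and VM identifiers are illustrative:

```python
import json
from collections import Counter

def cluster_summary(body: str) -> tuple[int, dict]:
    """Count online nodes and running QEMU VMs per node from /cluster/resources."""
    data = json.loads(body)["data"]
    nodes_up = sum(1 for r in data if r["type"] == "node" and r.get("status") == "online")
    vms_per_node = Counter(
        r["node"] for r in data if r["type"] == "qemu" and r.get("status") == "running"
    )
    return nodes_up, dict(vms_per_node)

sample = json.dumps({"data": [
    {"type": "node", "node": "pve1", "status": "online"},
    {"type": "node", "node": "pve2", "status": "online"},
    {"type": "node", "node": "pve3", "status": "offline"},
    {"type": "qemu", "node": "pve1", "vmid": 100, "status": "running"},
    {"type": "qemu", "node": "pve1", "vmid": 101, "status": "running"},
    {"type": "qemu", "node": "pve2", "vmid": 102, "status": "stopped"},
]})
print(cluster_summary(sample))  # (2, {'pve1': 2})
```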
n8n Workflow Automation — nexus.xinle.biz/n8n
n8n Workflows
Connecting...
Enter your n8n API key above and click Apply to load workflows.
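Behind that Apply button, n8n's public REST API exposes `GET /api/v1/workflows`, authenticated with the `X-N8N-API-KEY` header. A sketch of building the request and parsing a response, assuming the base URL shown in the panel title; the workflow names are illustrative:

```python
import json
import urllib.request

def workflows_request(api_key: str, base: str = "https://nexus.xinle.biz/n8n"):
    """Build the authenticated list-workflows request (not sent here)."""
    return urllib.request.Request(
        f"{base}/api/v1/workflows",
        headers={"X-N8N-API-KEY": api_key, "Accept": "application/json"},
    )

def active_workflow_names(body: str) -> list[str]:
    """Pull the names of active workflows out of a response body."""
    return [w["name"] for w in json.loads(body).get("data", []) if w.get("active")]

# Parsing a sample response body:
sample = '{"data": [{"name": "Daily backup", "active": true}, {"name": "Draft", "active": false}]}'
print(active_workflow_names(sample))  # ['Daily backup']
```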