I benchmarked 4 local LLMs on my Mac Studio

Everyone posts clean benchmark tables. Here’s what happens when you actually try to reproduce them. I’ve been running a Mac Studio M4 Max (64GB) as a local LLM server for a couple of weeks now. Four models, two backends, and way more debugging than expected. This post documents what I found, including the parts that didn’t work.

The hardware

Mac Studio M4 Max with 64GB unified memory. The unified memory architecture means the GPU and CPU share the same 64GB pool, so large models can actually fit without swapping. ...

March 29, 2026 · 8 min · Homelabcraft

The Mac Studio homelab: What I'd set up first (without overengineering)

My Mac Studio is arriving this week and, like every sane person in homelab land, my first instinct is to install 40 things and break all of them by Saturday night. So this is the version I wish someone had given me first: practical, boring, and actually maintainable. Note: Do you need a Mac Studio? No. Any old computer you already have works. I’ll just be using a Mac Studio for this guide. ...

March 6, 2026 · 3 min · Homelabcraft