1. The Screenpipe Screenshot

I worked 50 hours at my day job this week. Screenpipe says Claude ran 84.

I'll be real — when I first saw that number I thought it was a bug. I went back through the activity log expecting idle time counted as work. It wasn't. Every hour was a real run. Drafts written. Emails staged. Outreach enriched. Site health checks. Editorial pitches built. Carousels rendered. App store reviews monitored. Reply triage. Worker deploys verified. While I was in pipeline integrity meetings, while I was at my son's soccer game, while I was sleeping.

50 hours of W-2. 84 hours of Claude. Same week. Same person.

The math doesn't add up if you're still measuring the week the old way — butt in chair, hands on keys, eyes on screen. That measurement was lossy in 2018. In 2026 it's misleading. Most of the work isn't happening at the keyboard anymore. It's happening in places you're not looking — agents you configured weeks ago, cron jobs ticking over, folder watchers firing, scheduled flows running on free local compute. That's the layer nobody has a clean vocabulary for yet.

When I posted the screenshot on TikTok, the comment section went straight to "agents." Hundreds of replies. A few flat-out asking for the build sheet.

Here it is. Hands-on. Replicable. Specific.

If you're a TikTok commenter who landed here looking for the stack, scroll to the deep-dive section near the bottom — that's where the install steps and first workflow live. Otherwise read straight through. The framing makes the deep-dive land harder.

Done is better than perfect. Let's go.

2. This Is Not Me Working Harder

Let me kill the wrong read immediately.

I am not waking up at 3am grinding. I am not skipping family time. I am not hustling in any sense the productivity-bro internet would recognize. The 84 hours isn't me — it's infrastructure I built once, and the infrastructure pays back daily. I show up to it. I review what it produced. I approve what's good. I kill what's not. That's the loop.

Most people read "84 hours of AI" as "this guy is on his computer 12 hours a day." Wrong. Others read it as "passive income, set it and forget it." Also wrong. The right read is in the middle — there's a layer of work running while I'm not at the keyboard, and that layer handles the routine stuff so my actual hours can go to the work that requires me specifically.

The 84-hour layer ran while I was at the day job. It ran while I was eating dinner. It ran overnight while I slept. Once it's wired, it doesn't ask for permission. It just runs.

The trade is real, though. Building it took weeks. The compounding only kicks in after the friction is gone. That's the part most people quit before reaching. More on that at the end.

3. The Four-Piece Stack

Four tools. Three free. One paid. That's the whole runtime.

Screenpipe — continuous activity capture. Runs 24/7 on the Mac, screenshots every few seconds, OCRs the text, indexes it locally. Audio capture for meetings is opt-in. Nothing leaves the device. Your day becomes structured data your agents can read. When my Chief of Staff agent runs at 6am, the first thing it does is pull yesterday's Screenpipe summary so it knows what I actually shipped versus what I said I shipped. Real cost: $0 (free, local-only, open source).

Ollama — local LLM server. Runs open-source models on your laptop's GPU — qwen2.5:14b, mistral:7b, llama3.1:8b. Not as smart as Claude. Plenty smart for routine work. This is the cheap drudge layer — caption variants, file triage, classification, first-pass summaries, batch generation. Real cost: $0 (runs on the Mac you already own).

Claude — the judgment layer. Voice-sensitive synthesis. Long-form writing where every sentence is load-bearing. Strategic frameworks. Editorial pitches. Crisis debugging when the root cause is ambiguous. This is where I spend tokens deliberately — for work where the cost of getting it wrong is permanent. Real cost: $200/mo for Claude Max — the budget multiplier that makes the rest of the math work.

Cron + a drop-zone folder — the trigger mechanism. Either a scheduled cron job or a folder you drop files into that fires an agent. Drop a meeting transcript, an agent picks it up, summarizes it, drafts a follow-up email, stages it in Gmail Drafts. Drop a CSV of leads, an enrichment flow runs overnight. Real cost: $0 (built into macOS — crontab -e and a launchd plist).
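For the scheduled half of the trigger layer, a crontab sketch looks like this. The script paths, log paths, and times are placeholders I made up for illustration, not my actual jobs:

```shell
# Edit with: crontab -e
# Nightly lead-enrichment flow at 1:30am (script path is a placeholder)
30 1 * * * "$HOME/agents/enrich-leads.sh" >> "$HOME/agents/logs/enrich.log" 2>&1
# Morning brief assembled before the 6am read (also a placeholder)
45 5 * * * "$HOME/agents/morning-brief.sh" >> "$HOME/agents/logs/brief.log" 2>&1
```

Cron runs each line through sh with HOME set, so the variables expand; redirect stdout and stderr to a log or failures disappear silently.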

Free, free, $200, free. Total monthly cost — $200 for roughly 84 hours of work a week. About $0.55 per agent-hour. Not a typo. Fifty-five cents. For routine execution that, in 2007, would have required a small offshore team running thousands of dollars a month. We are living through a price compression most people haven't noticed yet because the framing is still wrong.

4. What the Stack Actually Does — This Week's Real Runs

The spec sheet is meaningless without a concrete example. Here's what ran this week, end of April 2026, while I was working the day job and writing this piece.

Overnight, Tuesday into Wednesday. Apollo enriched 50 fresh leads against my ICP filter — VPs of L&D and Directors of Digital Transformation at 500-to-5,000-employee companies. Ollama drafted personalized first-touch emails in my voice from the enrichment data. Gmail MCP staged the drafts. By 6am all I had to do was open Drafts and either send or kill each one. Twelve approved. Ten minutes.

Every morning, 6am. Chief of Staff brief lands on my phone. Pulls overnight Screenpipe activity, prior-day reply classifications, MRR snapshot from Stripe, app store reviews across all four apps, and the day's calendar. I read it at the kitchen table while coffee brews. By the time I'm in the truck for the commute I already know what mattered overnight.

Daytime, while I'm at the day job. Reply monitor polls Gmail every fifteen minutes. Five-class classifier — positive, neutral, out-of-office, not-interested, wrong-person — routes each reply. Positive surfaces to me with priority. Not-interested marks the lead so I never re-touch them. I'm in pipeline meetings. The triage happens anyway.

Late night, while I sleep. Ollama runs a batch job — newsletter draft variants for next Tuesday's send, Instagram caption variants pulled from this week's pillar, reformatted carousel slide-1 hooks. Zero Claude tokens. Free local compute. By morning I have a folder of options instead of a blank document.

Last weekend. Dropped this week's pillar piece into the social drop folder. Six platform-tuned variants got fanned out — Instagram carousel, TikTok carousel, LinkedIn long-form, Threads, X, Reddit. Publer scheduled them. I reviewed on a bus ride.

Those are the runs that hit the Screenpipe log this week. Sum them and you get 84 hours. None of it required me at the keyboard for execution. All of it required me at the keyboard for configuration — once, weeks ago, before any of it ran for the first time.

5. Deep Dive — The Replicator's Build Sheet

This is the section the TikTok commenters wanted. If you skipped down here, welcome. If you read the rest, here's the part with the steps.

Hardware and setup baseline

You need a Mac. M-series ideal — Apple Silicon runs Ollama meaningfully faster than Intel, and the unified memory architecture matters for the larger models. 32GB RAM helps; 16GB is fine for the 7B and 14B models that cover 90% of routine work. You should be comfortable in Terminal — not a developer, just willing to copy-paste a brew install line and edit a config file. If ~/.zshrc makes you nervous, this is a four-week ramp instead of one. That's fine. Most operators take longer than they expect.

Install steps for each tool

Screenpipe — five minutes.

brew install screenpipe
screenpipe

Or download the desktop app from screenpi.pe if you want the GUI. macOS will prompt for screen recording, accessibility, and microphone permissions. Grant all three through System Settings → Privacy & Security. If the database stays empty after an hour of capture, the issue is almost always TCC (Transparency, Consent, and Control) — the grant is tied to a specific binary identity, and the binary actually capturing isn't the one you approved. The fix is tccutil reset ScreenCapture, then re-grant from System Settings. Lost two days to this when I first set it up.

Ollama — thirty minutes depending on your bandwidth.

brew install ollama
ollama pull llama3.1:8b
ollama pull qwen2.5:14b
ollama pull mistral:7b
ollama serve

The pulls are 4 to 9 GB each. Start with mistral:7b if your bandwidth is slow — it covers most classification work. Add the bigger models later. The serve command starts a local API on port 11434 that everything else talks to.
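Once the server is up, anything that can make an HTTP request can use it. A minimal sketch of the /api/generate call; build_ollama_payload is a helper name I made up for illustration:

```shell
#!/bin/sh
# Build the JSON body Ollama's /api/generate endpoint expects.
# (Helper name is mine, not part of Ollama.)
build_ollama_payload() {
  # $1 = model name, $2 = prompt
  printf '{"model":"%s","prompt":"%s","stream":false}' "$1" "$2"
}

# With `ollama serve` running, this would return a JSON response whose
# "response" field holds the completion:
#   build_ollama_payload mistral:7b "Classify this reply: ..." |
#     curl -s http://localhost:11434/api/generate -d @-
```

Setting "stream" to false returns one JSON object instead of a token stream, which is what you want when a script is parsing the output.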

Claude — ten minutes.

A Claude Max subscription at claude.ai runs $200 a month. Then install the Claude Code CLI:

npm install -g @anthropic-ai/claude-code
claude login

That gets you the desktop app, the API access, and the CLI all on the same account. Claude Code is what you use for any agent work — run a command, drop a file, chain steps.

Cron and the drop zone — thirty minutes including the gotcha fix below.

crontab -e is already on your Mac. The drop zone is a folder you watch with launchd (the macOS user-mode equivalent of cron), so when a file lands, a script fires.

How the four pieces connect — the drop-zone pattern

A hidden dotfolder in your home directory plus a launchd plist that polls it every 30 seconds. Drop a file into ~/.claude-drops/, the watcher script picks it up, runs whatever command fits the file, moves the executed file to ~/.claude-drops/executed/ with a timestamp, logs the run.
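A minimal launchd plist for that pattern might look like the following. The label, script path, and log path are placeholders; save it under ~/Library/LaunchAgents/ and load it with launchctl:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.claude-drops</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/sh</string>
    <string>/Users/YOU/bin/drop-watcher.sh</string>
  </array>
  <key>StartInterval</key>
  <integer>30</integer>
  <key>StandardErrorPath</key>
  <string>/tmp/claude-drops.err</string>
</dict>
</plist>
```

Load it once with launchctl load ~/Library/LaunchAgents/com.example.claude-drops.plist and launchd fires the watcher every 30 seconds from then on.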

The launchd plist points at a watcher script. The watcher imports your environment variables (so ANTHROPIC_API_KEY and other secrets are available), then loops over *.command and *.sh files. Found one? Make it executable, run it, move it to executed or failed based on exit code, log everything.
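One pass of that watcher can be sketched like this. It's my reconstruction of the pattern described above, not the exact script; launchd re-invokes it on its 30-second interval, and DROP can be overridden for testing:

```shell
#!/bin/sh
# One polling pass over the drop zone: run every *.command / *.sh file,
# then file it under executed/ or failed/ based on exit code.
DROP="${DROP:-$HOME/.claude-drops}"

process_drops() {
  mkdir -p "$DROP/executed" "$DROP/failed"
  for f in "$DROP"/*.command "$DROP"/*.sh; do
    [ -e "$f" ] || continue              # glob matched nothing
    ts=$(date +%Y%m%d-%H%M%S)
    chmod +x "$f"
    if "$f" >> "$DROP/run.log" 2>&1; then
      mv "$f" "$DROP/executed/$ts-$(basename "$f")"
    else
      mv "$f" "$DROP/failed/$ts-$(basename "$f")"
    fi
  done
}
```

Moving every processed file out of the drop folder is what makes the pass idempotent: the next launchd invocation only sees new drops.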

The single biggest landmine — TCC blocks launchd from reading ~/Desktop/. Almost every tutorial puts the drop zone on Desktop. Don't. macOS Privacy & Security treats Desktop, Documents, and Downloads as protected, and a process spawned by launchd fails silently trying to read them. The watcher looks like it's running, the launchd job looks healthy, and nothing executes. You'll lose an evening.

The fix is the hidden dotfolder. ~/.claude-drops/ is in your home directory but starts with a dot. macOS does not treat dotfolders as TCC-protected. The watcher reads it fine. Migrate before you wire anything up. Trust me.

First workflow to build — drop a meeting transcript, get a summary email

Start with the simplest workflow that saves time. Drop a meeting transcript into the folder, get a summary email staged in Gmail Drafts within two minutes.

Wire it like this. Your watcher detects a *.transcript.txt file. It calls Ollama (qwen2.5:14b is fine) with a prompt template asking for a five-bullet summary plus three action items in your voice. The summary lands in a markdown file. A second step pipes the markdown into a Gmail MCP call that drafts an email to whoever was in the meeting. The draft sits until you open it on your phone and hit send.
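Sketched as a drop handler, under those assumptions: summarize_transcript is an illustrative name, SUMMARIZER defaults to a local Ollama call and can be swapped for any stdin-to-stdout command, and the Gmail MCP drafting step is left as a stub.

```shell
#!/bin/sh
# Sketch: turn a dropped *.transcript.txt into a summary markdown file.
# SUMMARIZER is the local-model command; override it to test or to swap models.
SUMMARIZER="${SUMMARIZER:-ollama run qwen2.5:14b}"

summarize_transcript() {
  in="$1"
  out="${in%.transcript.txt}.summary.md"
  {
    echo "Summarize this meeting as five bullets plus three action items, in my voice:"
    cat "$in"
  } | $SUMMARIZER > "$out"
  # Second step (not shown): pipe "$out" into the Gmail MCP call that
  # stages the draft email to the meeting attendees.
  echo "$out"
}
```

Keeping the model behind a variable is the cheap-drudge discipline in miniature: the pipeline doesn't care which model answers, so you can start on mistral:7b and trade up only if the summaries disappoint.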

Runtime — under two minutes. Cost — zero. Time saved — the twenty-five minutes you used to spend staring at a transcript trying to remember what got decided.

Build that one workflow first. Get it stable. Trust it for two weeks. Then add the next one. Don't try to build the whole 84-hour stack in week one — you'll burn out before any of it compounds.

The mistakes I made so you don't have to

TCC and launchd will eat you alive if you put the drop zone on Desktop. Use ~/.claude-drops/. If it's failing silently, that's the cause.

Ollama will exhaust RAM if you load too many models at once. Each loaded model stays resident in memory until it's unloaded or its keep-alive window expires. If your watcher pulls three Ollama models inside one job, you'll OOM a 16GB Mac. Keep models small (7B class), unload between calls, or upgrade RAM.

Claude budget burn from poor token discipline is the most expensive mistake. I made it for months. I was using Claude Opus to write social caption variants. Didn't realize I was overusing the expensive model until I asked Claude to audit my own usage — 95% of what I was doing didn't need Opus. Kind of embarrassing honestly. Rule of thumb — Ollama for drudge, Claude Sonnet for default judgment, Claude Opus reserved for work where the cost of getting it wrong is an order of magnitude greater than the cost of doing it right. SBIR narratives. Investor materials. Voice-sensitive pitches. Crisis debugging at 11pm. Almost nothing else.
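The rule of thumb can be encoded as a dispatch table. The task classes and model labels here are illustrative, not a real API:

```shell
#!/bin/sh
# Hypothetical tier router: map a task class to the cheapest model
# that is safe for it (all labels are made up for illustration).
model_for_task() {
  case "$1" in
    caption|triage|classify|batch-summary) echo "ollama/qwen2.5:14b" ;;  # drudge: free local
    draft|email|analysis)                  echo "claude/sonnet" ;;       # default judgment
    narrative|investor|crisis)             echo "claude/opus" ;;         # irreversible-cost work
    *)                                     echo "ollama/mistral:7b" ;;   # unknown: assume drudge
  esac
}
```

The point of writing it down, even as a toy, is that the default branch falls to the free tier: an unclassified task should have to earn its way up to Opus, not start there.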

The honest cost breakdown

Month 1. $200 (Claude Max). Plus 40 to 60 hours of one-time setup if you're new to Terminal, half that if you're a developer. The setup hours are real. Most people don't quote them honestly. I am.

Month 6. Still $200/month if you held the line on token discipline. Add $20-$120/month if you bolt on Apollo or Publer. Neither is required to run the core stack.

Year 1. $2,400 in Claude. Maybe another $500-$1,500 in optional add-ons. That's it — for a stack that does 4,000+ hours of agent execution per year.

The real cost is the setup time, not the cash. Don't trust anyone who calls it plug-and-play. It isn't. Once it's wired, the cash math is one of the most absurd compressions of cost-to-output that's existed in solo work.

6. What This Is NOT

The framing matters, so let me be careful.

This is not me replacing my job. I still work pipeline integrity. The day job funds the apps, funds the federal moat, and gives me operational discipline I don't want to lose. The 84-hour layer runs alongside the W-2, not instead of it.

This is not me building a startup-killer. I am not Pieter Levels. I am not a one-person billion-dollar company yet. I'm a solo operator with three apps shipped, a productized consulting offer in flight, and a federal contracting moat being assembled. The stack makes the math possible. The math is not yet proven at the billion-dollar end. I'm walking the path. I have not finished it.

This is not "passive income while you sleep." That framing is wrong. The agents don't earn money on their own. They handle the operating layer so my hours can go to the work that earns money — relationships, judgment, writing that compounds, federal narratives, calls. The agents don't replace the work that matters. They free the time to do more of it.

What it is — friction replacement. Routine work disappears. Judgment work compounds. Read it correctly and the rest of the math works. Read it wrong and you'll either over-promise to yourself or under-build to your potential.

7. Why Most People Won't Do This

Here's the discipline gate.

Building this stack takes 40 to 60 hours of one-time setup spread across 4 to 6 weeks if you're working a day job alongside it. It's not hard work — it's deliberate work. You install. You configure. You break things. You fix them. You build the first workflow. You trust it. You build the second. The setup is a willpower curve, not an intelligence curve.

Most operators won't do it. Three reasons.

The payoff is delayed. Months 2 and 3 are when you start to feel it. Month 1 you're mostly debugging. People quit at the debugging stage because the dopamine isn't there. The screenshot of 84 hours doesn't show the four evenings I spent fighting TCC before the first workflow ran clean.

The setup is invisible. Nobody else can see your stack. You can't show it off until it's running. That's harder to sustain than building a visible product where every commit is brag-worthy.

The framing is wrong in most people's heads. They want a tool that automates work. They don't want to build the operating system that does. Automation is a feature. The agent layer is an infrastructure decision.

Tim Ferriss called this the DEAL framework in The 4-Hour Work Week — Definition, Elimination, Automation, Liberation. The order matters. Most operators skip Definition (what do you actually want to eliminate, and why) and jump to Automation. They end up automating work that should never have existed. Define what to eliminate first. Automate second. The order is the discipline.

If you're still reading, you're not most operators. Use it.

8. Soft Close

I write a Tuesday newsletter at drjonesy.com. Real ship updates from the field. The deeper deep-dive series on the agent stack lives there — the next four weeks cover the Apollo MCP wiring, the Cowork dispatch pattern, the Cloudflare Worker pattern that makes my custom endpoints free, and the mistake matrix so you don't repeat the ones I already paid for.

If you want the done-with-you version — I open a small batch of two-week productized AI prototype sprints each month at drjonesy.com/sprint. Capped intentionally. The sprints are built around the same stack, scoped to your specific operating layer, and ship a working prototype in two weeks. If it doesn't prove the business case, you don't pay the back half.

Build the stack. Reserve your hours for the work that requires you. Run everything else for free.

Lead with AI. Not hype.

— Dr. Jonesy

About Dr. Renaldo Jones

Dr. Renaldo "Jonesy" Jones, PMP, is an AI Strategist, Anthropic Claude Champion, Microsoft Copilot Champion, combat veteran, Doctor of Strategic Leadership, and creator of the RACE Framework. He builds practical AI tools — RACEprompt, QuietPulse, F3 — and helps professionals use AI effectively through coaching and consulting.

Want the Tuesday newsletter?

Real ship updates from the field. No theory. No hype. Just what I built, what I shipped, and what I'm building next.

Subscribe at drjonesy.com