
Personal Ops Infrastructure

MJ Ops

Always-on infrastructure that lets one operator run Korean multi-channel publishing on LLMs.

Self-hosted Mini PC · Phase 5 (May 2026)

What it is

MJ Ops is the operational layer that makes one-person multi-channel publishing actually sustainable in Korea. A Telegram message or a scheduled cron tick flows through ingest, judgment, and drafting, and surfaces back through a 7-tab dashboard and Telegram itself.
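
In code terms, the flow is roughly the sketch below. Every function and field name here is hypothetical, not lifted from the actual codebase; the real stages live behind pm2 jobs, prompt files, and the dashboard.

```js
// Hypothetical sketch of the trigger → ingest → judgment → drafting → surface flow.
async function ingest(trigger) {
  return []; // deterministic: pull sources, dedupe into Postgres
}
async function judge(items) {
  return items; // LLM judgment: curation against a versioned prompt file
}
async function draft(picks) {
  return picks.map((p) => ({ channel: 'telegram', body: p.title })); // LLM drafting
}
async function surface(drafts) {
  console.log(drafts); // real version writes to the dashboard and replies on Telegram
}

// Both entry points funnel into the same pipeline:
// { kind: 'telegram', text: '...' } or { kind: 'cron', task: 'collect-news' }
async function handle(trigger) {
  const items = await ingest(trigger);
  const picks = await judge(items);
  const drafts = await draft(picks);
  await surface(drafts);
}

handle({ kind: 'cron', task: 'collect-news' });
```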

Built minimal on purpose. Heavier agent frameworks were on the table; this layer holds only what one operator actually uses, so the cost of running and reasoning about it stays low. The cleverness is in what was left out.

Always on

A dedicated Mini PC stays awake so the system can run without my involvement. Two kinds of work run side by side: pm2 cron on Node.js for deterministic ingest (35+ sources, deduped into Postgres), and scheduled LLM tasks for everything that needs judgment — curation, drafting, weekly retrospectives. The hourly process-queue picks up Telegram-driven jobs within the hour.
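
The dedupe itself is the standard Postgres idiom. A minimal sketch of that write path, assuming a hypothetical news_items table with a unique constraint on url:

```js
// Sketch of the deterministic ingest write. Re-running a collector is safe:
// duplicates simply don't land. Table and column names are assumptions.
const { Pool } = require('pg');

const pool = new Pool(); // connection from standard PG* env vars

async function store(items) {
  let inserted = 0;
  for (const { url, title, source } of items) {
    const res = await pool.query(
      `INSERT INTO news_items (url, title, source)
         VALUES ($1, $2, $3)
         ON CONFLICT (url) DO NOTHING`,
      [url, title, source],
    );
    inserted += res.rowCount ?? 0; // 0 when the row was a duplicate
  }
  return inserted; // the collector checks this to verify its own outcome
}
```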

Always-On — 24-hour automation ring (KST)

[Interactive visualisation: each scheduled job is a dot on a 24-hour clock, coloured pm2 · deterministic or LLM · scheduled; the dashed inner ring is the hourly process-queue dispatcher.]

Sat 03:00 · weekly-review
Sat 03:15 · weekly-linkedin
04:00 · collect-news
04:05 · collect-stats
05:15 · curate-daily-news
06:00 · safety-net
09:30 · collect-minbook
10:00 · daily-linkedin
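
A minimal pm2 mapping of a few of these slots. pm2's cron_restart field plus autorestart: false launches a script at each cron tick; the script paths here are assumptions.

```js
// ecosystem.config.js (sketch): pm2 runs each job script at its cron slot.
// Times assume the host clock is set to KST; paths are hypothetical.
module.exports = {
  apps: [
    { name: 'collect-news',  script: './jobs/collect-news.js',
      cron_restart: '0 4 * * *',  autorestart: false },  // 04:00 daily
    { name: 'safety-net',    script: './jobs/safety-net.js',
      cron_restart: '0 6 * * *',  autorestart: false },  // 06:00 daily
    { name: 'weekly-review', script: './jobs/weekly-review.js',
      cron_restart: '0 3 * * 6',  autorestart: false },  // Sat 03:00
  ],
};
```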

The split is deliberate. Deterministic work stays in Node.js where failure modes are obvious; judgment work runs against versioned prompt files in the same repo, so behaviour is editable as a flat file rather than buried in glue code. Each collector verifies its own outcome — a 06:00 safety-net re-runs ingest if the row didn't actually land, because a fired schedule is not the same thing as written data.
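
The verification itself is small. A sketch of the 06:00 safety-net, under the same assumed table and path names:

```js
// safety-net sketch: trust written rows, not fired schedules. If this
// morning's ingest left no rows for today, re-run the collector directly.
const { execFileSync } = require('node:child_process');
const { Pool } = require('pg');

async function safetyNet() {
  const pool = new Pool();
  const { rows } = await pool.query(
    `SELECT count(*)::int AS n
       FROM news_items
      WHERE created_at >= date_trunc('day', now())`,
  );
  if (rows[0].n === 0) {
    execFileSync('node', ['./jobs/collect-news.js'], { stdio: 'inherit' });
  }
  await pool.end();
}

safetyNet();
```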

Self-improving

Posting and drafting are the obvious part. The non-obvious work is that the system gets better at deciding what to post and how to write it, week over week. Every 👍 / 👎 / save / comment on the dashboard becomes training signal — and a Saturday retrospective reads that signal and rewrites the curation prompt for the following week.

Self-improving curation — weekly prompt refinement

👍 / 👎 / save / comment data is the training signal. weekly-review reads it and rewrites the curation prompt.

Week 1 — baseline prompt
curation hits = items that survived to dashboard

Mon · 11/20 · 👍3 👎4 💾1
Tue · 10/20 · 👍2 👎5 💾1
Wed · 12/20 · 👍4 👎3 💾2
Thu · 9/20 · 👍2 👎5 💾1
Fri · 11/20 · 👍3 👎4 💾1

total 👍 14 · total 👎 21 · saved 6

Numbers are illustrative for the visualisation; the loop itself runs on real news_feedback / saved_news tables.
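
The read side of that Saturday loop is a plain aggregation. A sketch against the news_feedback table, with assumed column names and prompt path:

```js
// weekly-review sketch: turn a week of reactions into the summary that is
// handed, together with the current prompt file, to the rewrite request.
// The news_feedback table is real; its columns here are assumptions.
const fs = require('node:fs');
const { Pool } = require('pg');

async function weeklySignal() {
  const pool = new Pool();
  const { rows } = await pool.query(
    `SELECT source,
            count(*) FILTER (WHERE reaction = 'up')   AS ups,
            count(*) FILTER (WHERE reaction = 'down') AS downs,
            count(*) FILTER (WHERE reaction = 'save') AS saves
       FROM news_feedback
      WHERE created_at >= now() - interval '7 days'
      GROUP BY source
      ORDER BY downs DESC`,
  );
  await pool.end();
  const prompt = fs.readFileSync('./prompts/curate-daily-news.md', 'utf8');
  return { signal: rows, prompt }; // LLM output overwrites the flat prompt file
}

weeklySignal().then((s) => console.log(s.signal));
```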

The same pattern runs for LinkedIn drafting, with a different signal. daily-linkedin generates two drafts a day in my own voice; weekly-linkedin reads the diff between each AI draft and the version I actually edited and posted — that edit rate is the training signal — and updates the drafting prompt accordingly. Posting is a single API call. The value is what the loop learns from the human edit.
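
One cheap way to turn the human edit into a number is word-level overlap between draft and posted text; a deliberately crude sketch, not necessarily the diff the real loop uses:

```js
// Edit-rate sketch: 0 means the AI draft was posted verbatim, 1 means it was
// fully rewritten. Word-level bag overlap is a crude but cheap proxy.
function editRate(draft, posted) {
  const tokens = (s) => s.toLowerCase().split(/\s+/).filter(Boolean);
  const a = tokens(draft);
  const b = tokens(posted);
  const counts = new Map();
  for (const w of a) counts.set(w, (counts.get(w) ?? 0) + 1);
  let kept = 0;
  for (const w of b) {
    const n = counts.get(w) ?? 0;
    if (n > 0) { kept += 1; counts.set(w, n - 1); }
  }
  return 1 - kept / Math.max(a.length, b.length, 1);
}

// A week of high edit rates tells weekly-linkedin which drafting habits to drop.
console.log(editRate('ai draft text here', 'what I actually posted'));
```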

Why it matters

One-operator publishing is the LLM era's biggest unlock, and the bottleneck is the operational layer most attempts skip — observability, catch-up, bidirectional input, retrospectives that close the loop. This is the layer that turns LLM drafting from a one-off demo into a system that survives a year.

Status

Live and stable. Phase 5 (May 2026) shipped daily LinkedIn auto-draft (2/day), a Minbook metadata tab, and refinements to the Naver blog drafting flow (mobile photo + caption → Vision + research-grounded draft → manual review). Runs at $0 incremental operating cost on self-hosted compute + bundled LLM subscriptions.

Links

Dashboard is private. Architecture walkthrough and demo available on request via email.