// How It Works

Seven workflows. Twelve innovations. One production infrastructure.

This is the full technical picture — what the system does, how it does it, and what makes it categorically different from every other content automation tool.

Seven Workflows. One Continuous Pipeline.
Every Stage of Production Automated.

The system is not a single tool. It is seven interconnected workflows, each with a specific job, each feeding the next. Here is what each one does and why it was built the way it was.

// Workflow 01

The News Monitor

Your 24/7 Editorial Sentinel

What it does: Monitors every RSS feed in your source list continuously. Runs a dual-agent AI analysis on every new article — one agent categorizes it and suggests angles, a second scores it against three editorial criteria for a combined score out of 30 points. Deduplicates against everything already in your queue. Delivers a ranked, scored editorial briefing to Google Sheets.

The architectural decision worth knowing: The two AI agents are separated deliberately. Categorization is a creative editorial task. Scoring is an analytical evaluation task. Combining them into one LLM call produces worse results from both. Two agents. Two cognitive modes. Better editorial intelligence.
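The split can be sketched in a few lines. This is a minimal illustration, not the production implementation: `call_llm` and both prompt texts are hypothetical stand-ins for the real workflow nodes.

```python
# Hypothetical sketch of the dual-agent split: two separate calls,
# two cognitive modes, never one combined prompt.

CATEGORIZER_PROMPT = (
    "You are an editor. Categorize this article and suggest an angle.\n"
    "Article: {article}"
)
SCORER_PROMPT = (
    "You are an analyst. Score this article on relevance, novelty, and "
    "audience fit (0-10 each, 30 max).\nArticle: {article}"
)

def analyze(article: str, call_llm) -> dict:
    # Creative categorization first, analytical scoring second.
    category = call_llm(CATEGORIZER_PROMPT.format(article=article))
    score = call_llm(SCORER_PROMPT.format(article=article))
    return {"category": category, "score": score}
```

The point of the structure: each agent sees only the prompt for its own task, so neither mode contaminates the other.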

What this means for you: Open your spreadsheet and see exactly which stories from the last 24 hours deserve your attention — scored, summarized, with a suggested angle. No manual monitoring. No missed stories.

// Workflow 02

The Research Engine

Self-Correcting Investigative Research. In 3 Minutes.

What it does: Takes any topic and produces a 500–1,200-word cited intelligence report from live web sources. Runs a 12-step research protocol including: dynamic prompt generation customized per topic, three-chain query formulation, dual Brave API searches with AI-evaluated self-correction between them, source quality ranking, full-page content extraction, and multi-model synthesis (Gemini for reasoning, Grok for final synthesis).

The architectural decision worth knowing: The Analyst Emulator — the first stage of the research engine — does not perform research. It writes the instructions for the AI that will perform research, dynamically tailored to the exact topic. The quality of the final report is not limited by a static prompt template. It is dynamically optimized for every research cycle.
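The two-stage pattern is easy to see in miniature. A hedged sketch, with `call_llm` and the prompt wording as hypothetical stand-ins for the real nodes:

```python
# Sketch of "prompts that write prompts": stage 1 produces instructions,
# stage 2 consumes them. All names and wording are illustrative.

EMULATOR_PROMPT = (
    "Write a research briefing for an AI analyst investigating the topic "
    "below. Specify angles to pursue, source types to prefer, and pitfalls "
    "to avoid.\nTopic: {topic}"
)

def build_research_instructions(topic: str, call_llm) -> str:
    # The Analyst Emulator does not research; it writes the researcher's brief.
    return call_llm(EMULATOR_PROMPT.format(topic=topic))

def run_research(topic: str, call_llm) -> str:
    instructions = build_research_instructions(topic, call_llm)
    # The downstream agent runs under instructions tailored to this topic.
    return call_llm(f"{instructions}\n\nNow research: {topic}")
```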

What this means for you: Research reports that cite 15–20 real sources, surface expert quotes, provide historical context, and cost $0.02 each to produce. Every piece of content this system creates — every thread, every Facebook post, every article — is grounded in a research report this engine produced.

Cost: $0.60/month for 30 research cycles. Everything except one xAI Grok API call is free tier.

// Workflow 03

The Twitter Thread Machine

Investigative Threads with Editorial Images. Three Times Daily.

What it does: Pulls high-scoring articles from the editorial queue, fires the Research Engine, writes a 6-tweet investigative thread using a journalism framework, generates 6 branded AI editorial images (FT-style cover + institutional body cards) via reference-image architecture with Google Search grounding, scrapes a real photograph from the source article, and delivers the complete thread to a review queue.

The architectural decision worth knowing: Before writing begins, the Thread Writer runs a mandatory reasoning protocol: counter-narrative identification, historical precedent research, stakeholder impact mapping, and audience relevance calibration. The thread is written after this reasoning is complete. The result is analysis, not summarization.

What this means for you: Six-tweet threads that cite real sources, name real institutions, provide the angle mainstream coverage misses, and carry branded editorial images — automatically, three times a day — for approximately $0.05–0.10 per thread.

The four visual modes: Control the visual presentation of every thread with one spreadsheet cell. Full AI image package. Featured article photo only. Hybrid. Or text-only for breaking news. One cell. No workflow editing.
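The cell-to-mode routing amounts to a small lookup. A minimal sketch, assuming four illustrative mode labels; the real sheet values may differ.

```python
# One spreadsheet cell selects the whole image strategy.
# Mode names and flags here are illustrative, not the real sheet values.

VISUAL_MODES = {
    "full_ai":    {"cover": True,  "body_cards": True,  "article_photo": True},
    "photo_only": {"cover": False, "body_cards": False, "article_photo": True},
    "hybrid":     {"cover": True,  "body_cards": False, "article_photo": True},
    "text_only":  {"cover": False, "body_cards": False, "article_photo": False},
}

def visual_plan(sheet_cell: str) -> dict:
    # Unknown values fall back to text-only, the safest mode for breaking news.
    return VISUAL_MODES.get(sheet_cell.strip().lower(), VISUAL_MODES["text_only"])
```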

// Workflow 04

The Facebook Post Machine

Platform-Native Facebook Posts. Built for the Algorithm.

What it does: Produces a single 250-450 word Facebook post — structured in five blocs engineered for Facebook's "See More" fold mechanics, emotional resonance triggers, and comment-driving question architecture — with a branded editorial collage image. Runs three times daily. Built on the same research engine as the Twitter workflow.

The architectural decision worth knowing: Facebook content is not repurposed Twitter content. The Facebook workflow was built from scratch for Facebook's algorithmic reality. The five-bloc structure — emotion anchor, hook, counter-narrative insight, human element, community question — is a journalism framework for social engagement, not a content template.
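The five-bloc structure is, mechanically, an ordered assembly. A minimal sketch, assuming each bloc has already been drafted; the bloc names follow the description above and the ordering is the point.

```python
# The five-bloc Facebook structure as an ordered assembly.
# Bloc keys are illustrative labels taken from the description above.

BLOC_ORDER = ["emotion_anchor", "hook", "counter_narrative",
              "human_element", "community_question"]

def assemble_post(blocs: dict) -> str:
    # The first two blocs sit above Facebook's "See More" fold; the
    # counter-narrative rewards the click; the question drives comments.
    return "\n\n".join(blocs[name] for name in BLOC_ORDER)
```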

What this means for you: Facebook posts that feel written, not generated. Counter-narrative depth that rewards the "See More" click. Closing questions that earn comments. Brand-consistent visuals. Three times daily.

// Workflow 05

The Article Writing Pipeline

Publication-Grade Articles. Written, Edited, Verified, WordPress-Ready.

What it does: Takes a research-backed topic through an 11-phase production pipeline: editorial planning → JSON blueprint generation → chapter-by-chapter writing with independent focus per section → key takeaways generation → assembly and stitching → two-pass editorial review (developmental + copy) → pre-format review → HTML conversion → 22-point structural verification → clean WordPress output.

The architectural decision worth knowing: Each chapter is written independently — the LLM gives its full context window to one section at a time rather than trying to write a 2,000-word article in a single pass. This is the difference between focused and diffused attention.
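The isolation-then-stitching loop can be sketched as follows. `call_llm` and the blueprint shape are hypothetical; the real pipeline uses separate workflow nodes.

```python
# Sketch of chapter isolation with assembly stitching.
# The blueprint format and call_llm helper are illustrative assumptions.

def write_article(blueprint: list, call_llm) -> str:
    chapters = []
    for section in blueprint:
        # One section per call: the model's full attention goes to a
        # single brief and word budget, not the whole 2,000-word article.
        prompt = (f"Write the section '{section['title']}' in about "
                  f"{section['words']} words. Brief: {section['brief']}")
        chapters.append(call_llm(prompt))
    # A dedicated assembly pass stitches sections with smooth transitions.
    return call_llm("Stitch these sections into one article with smooth "
                    "transitions:\n\n" + "\n\n".join(chapters))
```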

What this means for you: A full article that has been outlined by an editorial architect, written section by section, assembled, reviewed in two professional passes, converted to HTML, and verified — delivered as a WordPress draft for your review.

// Workflow 06

The Visual Enrichment Pipeline

AI Images and Live Data Charts. Automatically.

What it does: Takes the finished article HTML, runs it through an HTML structure enforcer, then sends it to the Imagenator — an AI art director that makes editorial decisions about 2-4 visual placements. Each placement is routed to either the Image Generation Lane or the Chart Production Lane. Assembles the visuals and publishes to WordPress.

The architectural decision worth knowing: The Chart Production Lane performs original data research, validates the data, confidence-scores every row, and only publishes a chart if the data meets confidence thresholds. If it doesn't, the chart is automatically replaced with an editorial image. The system will not publish a chart with unreliable data.
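The gate logic reduces to a threshold check with a fallback. A minimal sketch, assuming a 0-to-1 confidence score per data row and an illustrative 0.8 threshold; the system's actual threshold is not stated here.

```python
# Sketch of the confidence gate: chart or editorial image, never a shaky chart.
# The 0.8 threshold is an illustrative assumption.

CONFIDENCE_THRESHOLD = 0.8

def choose_visual(rows: list) -> str:
    # Every row must clear the bar; one weak row disqualifies the chart.
    if rows and all(r["confidence"] >= CONFIDENCE_THRESHOLD for r in rows):
        return "chart"
    return "editorial_image"  # automatic fallback to an editorial image
```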

What this means for you: Your articles have publication-grade visual enrichment — editorial images that look like a real publication's art direction and live interactive charts readers can hover over — from data the system researched and verified.

// Workflow 07

The Automated Publishing Layer

One Spreadsheet Cell. Live on Twitter and Facebook.

What it does: Takes approved content from the review queues and publishes it to Twitter/X and Facebook via the Blotato API. Manages content as a FIFO queue. Updates the source row's status to "posted" with a timestamp after every successful publication.

The architectural decision worth knowing: Nothing in this system auto-publishes. The publishing layer only activates when the operator changes the Status field. Editorial sovereignty is preserved at every stage. The automation executes the operator's decisions — it never makes them.
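The status-gated FIFO behavior looks roughly like this. Column names and row shape are illustrative, not the actual sheet schema.

```python
# Sketch of status-gated FIFO publishing: only operator-approved rows ship,
# oldest approval first, with the status written back as an audit trail.
from collections import deque

def publish_approved(rows: list, post) -> None:
    queue = deque(r for r in rows if r["status"] == "approved")
    while queue:
        row = queue.popleft()   # FIFO: oldest approval first
        post(row["content"])    # the automation executes the decision
        row["status"] = "posted"  # timestamped in the real sheet
```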

What this means for you: Change one cell in a spreadsheet. Your content goes live — correctly formatted, with the right images, on the right platform, with a complete audit trail.

Twelve Things This System Does That Nothing Else Does.

[ INV-01 ]

Self-Correcting Dual-Search Research

The research engine runs two independent web searches with an AI evaluation layer between them. If the first search doesn't deliver sufficient depth, the system generates a corrective second query targeted at the gaps. This replicates the iterative nature of real investigative research — where the first query is never the best query.

No commercial content tool does this.
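The dual-search loop described above can be sketched in a few lines. `search`, `evaluate_gaps`, and `refine` are hypothetical stand-ins for the Brave API call, the AI evaluation layer, and the corrective-query generator.

```python
# Sketch of self-correcting dual search: pass one, gap evaluation, and an
# optional corrective pass two. All helper names are illustrative.

def dual_search(query: str, search, evaluate_gaps, refine) -> list:
    first = search(query)
    gaps = evaluate_gaps(query, first)  # AI layer: what did pass one miss?
    if not gaps:
        return first
    second = search(refine(query, gaps))  # corrective query targets the gaps
    return first + second
```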

[ INV-02 ]

The Analyst Emulator — Prompts That Write Prompts

Before each research cycle, an AI agent generates a custom reasoning framework for the downstream synthesis agent — tailored to the exact topic, encoding twenty advanced prompt engineering techniques. The research quality is not limited by a static template. It improves with every topic.

The system writes better instructions for itself on every run.

[ INV-03 ]

The Constructive Pyramid Journalism Framework

The article writing pipeline is built around a journalism-native editorial framework — an extension of the inverted pyramid used since the 19th century. This system produces journalism, not content. The structure, the attribution discipline, the accountability — all journalism-native.

The only automated content pipeline built on a journalism framework.

[ INV-04 ]

Chapter Isolation with Assembly Stitching

Each section of an article is written independently, with the LLM's full context window focused on a specific word budget and brief. A dedicated Assembly Agent then stitches the sections into a coherent narrative with smooth transitions. Focused writing + global editing.

Every section of your article gets its own full AI focus.

[ INV-05 ]

Diagnose-Then-Treat Editing

The editing pipeline separates diagnosis from implementation. The editor produces a clinical diagnostic report of what needs fixing. A separate Revision Applier implements the fixes faithfully, without rewriting content in the editor's voice. Your prose style is preserved.

Two professional editorial passes that fix the article without destroying your voice.
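The diagnose-then-treat separation can be sketched as two chained calls. `call_llm` and the prompt wording are hypothetical stand-ins for the editor and Revision Applier agents.

```python
# Sketch of diagnose-then-treat editing: the editor diagnoses, a separate
# applier implements. Prompts are illustrative assumptions.

def edit(draft: str, call_llm) -> str:
    # Pass 1: diagnosis only. The editor lists problems; it never rewrites.
    report = call_llm("List the problems in this draft as numbered fixes. "
                      "Do not rewrite anything:\n" + draft)
    # Pass 2: a separate applier implements the fixes while preserving
    # the author's voice everywhere else.
    return call_llm("Apply exactly these fixes, preserving the author's "
                    f"voice and wording elsewhere:\n{report}\n\nDraft:\n{draft}")
```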

[ INV-06 ]

Reference-Image Visual Architecture

AI images are generated using a four-bloc prompt structure anchored to a real reference image. The reference image provides the style authority — composition, typography, color palette — so the AI generates content into that structure rather than inventing a new one.

Consistent brand visuals across hundreds of posts.
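The four-bloc prompt assembly might look like this. The bloc labels and payload shape are hypothetical; the real bloc names are not published here.

```python
# Sketch of the reference-anchored four-bloc image prompt. The reference
# image supplies style authority; the blocs fill content into that frame.
# All field and bloc names are illustrative.

def build_image_prompt(reference_url: str, subject: str,
                       headline: str, brand_notes: str) -> dict:
    return {
        "reference_image": reference_url,  # style authority: composition,
        "blocs": [                         # typography, color palette
            f"STYLE: match the reference image exactly. {brand_notes}",
            f"SUBJECT: {subject}",
            f"TEXT: headline reads '{headline}'",
            "CONSTRAINTS: editorial tone, no invented logos",
        ],
    }
```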

[ INV-07 ]

Google Search Grounding for Visuals

Every AI-generated image activates Gemini's web search grounding capability. When an image depicts an institution, a location, or a public figure, the system searches for a real visual reference and incorporates it. Images are grounded in verifiable reality.

Every image shows a real place or real institution — not a hallucinated version of one.

[ INV-08 ]

Live Interactive Data Charts

The chart production pipeline performs original data research, validates the data, confidence-scores every row, runs a three-layer integrity check, and publishes a live, interactive, embeddable Datawrapper chart directly in the article. This is automated data journalism.

The only automated content system that produces live interactive charts from original research.

[ INV-09 ]

Three-Layer Data Honesty Architecture

The Retrievability Router, the Confidence Gate, and the Verification Loop form a three-layer data integrity system. If the data isn't good enough to publish confidently, the system automatically replaces the chart with a high-quality editorial image.

It won't publish a chart with unreliable data. It will produce something better instead.

[ INV-10 ]

Status-Column Visual Publishing

Four visual modes for Twitter, three for Facebook — each controlling a different image strategy — all activated by changing one cell in a Google Sheet. No code. No workflow editing. Complete editorial flexibility from a spreadsheet.

Full visual publishing strategy control from a mobile phone.

[ INV-11 ]

Editorial DNA in Prompts, Not Retrieval

Your editorial identity — voice guidelines, brand standards, taxonomy, audience profile — is embedded in the system prompts of every agent. Every agent operates from the same editorial source of truth, at the same moment, with no retrieval variability.

Your voice is in the system's DNA — from the first query to the final HTML verification.
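Prompt-embedded identity, as opposed to retrieval, amounts to one shared constant. A minimal sketch with illustrative wording:

```python
# Sketch of editorial DNA embedded in prompts: one shared constant,
# prepended to every agent's system prompt. Content is illustrative.

EDITORIAL_DNA = (
    "Voice: plainspoken, investigative. Audience: independent publishers. "
    "Taxonomy: policy, markets, institutions. Always attribute claims."
)

def system_prompt(agent_role: str) -> str:
    # Every agent reads the same source of truth at build time, so there
    # is no retrieval step and therefore no retrieval variability.
    return f"{EDITORIAL_DNA}\n\nRole: {agent_role}"
```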

[ INV-12 ]

Cost Architecture as a Design Principle

The $0.60/month research cost is not an accident. It is the result of deliberate free-tier API stacking, strategic model selection, and self-hosted infrastructure. Cost was treated as a first-class design constraint. The result: newsroom-grade production at individual creator economics.

Not incrementally cheaper. Structurally, categorically cheaper.

What a Team Like This Costs.
What This System Costs.

This is not a "save time" argument. This is an economics argument.

The production work this system does — the research, the writing, the image creation, the editing, the publishing — has a market rate. That rate is $4,800–13,500 per month if you hire the human-equivalent team to do it.

Most independent publishers have never been able to afford that team. So they've done it manually — which means at reduced volume, reduced quality, or reduced sanity. Usually all three.

The Human Equivalent

Role Monthly rate
Research Assistant (2 hrs/day) $1,500–3,000
Social Media Producer $1,000–3,000
Editorial Image Designer $500–2,000
Copy Editor (2 passes) $300–1,000
Data Journalist $1,000–3,000
Web Developer / CMS Operator $500–1,500
Full team $4,800–13,500/mo
This system $497 once + ~$10/mo

$497 is not the price of a tool. It is the price of access to production infrastructure that was previously available only to operations with a team budget.

The ongoing API costs to run the system — the Brave searches, the Grok synthesis calls, the Gemini image generation — run approximately $10 per month. The research alone costs $0.60 per month for 30 deep investigations.

We did not price this at $497 because it does $497 worth of work. We priced it at $497 because we wanted it to be accessible to the people who need it most — the solo operators who have been trying to compete with teams using only their own two hands.

The Cost Per Output

Output Human This system
1 deep research report (15–20 sources, citations, expert quotes) $25–50 $0.02
1 six-tweet thread with 6 branded editorial images $150–400 $0.05–0.10
1 Facebook post with research + editorial collage image $80–200 $0.03–0.05
1 full article (research + 11-phase writing + 2 editorial passes + HTML) $300–800 under $1
1 article with 2 AI editorial images + 1 live interactive chart $400–1,200 under $2
30-day full operation (3 threads/day, 3 FB posts/day, 4 articles) $9,000–30,000 $497 once + ~$10/mo

"$0.60 per month. That is the total cost of 30 investigative research reports, each citing 15–20 live web sources, each synthesized by a multi-model AI research pipeline. The freelance equivalent: $750–1,500 per month."