BUILD_LOG 2026.04.01

Building Blurt Phase 1: Core Engine + API in 7 Days

Rails 8, 6 platform publishers, filesystem queue, parallel publishing, full REST API. No content database. 139 tests. Shipped and deployed in one week.

Tags: Build Log · Engineering · Rails 8 · Architecture

One week. That's how long it took to go from rails new to a deployed, tested, working social publishing engine that publishes to six platforms simultaneously.

This is the build log for Phase 1 of Blurt — the core engine and HTTP API. Not a tutorial. Not a retrospective. A raw account of what got built, how, and the architectural decisions that made it possible.

The bet: posts are files, not database rows

Every social scheduling tool stores your content in their database. Buffer, Hootsuite, Typefully — your posts are rows in PostgreSQL. When the company dies (and 1,736 YC startups already have), your content dies with it.

Blurt's core architectural bet is the opposite: posts live on the filesystem as markdown files. The Post model is a PORO (plain old Ruby object) that reads .md files. SQLite only stores PublishLog entries for fast queries — the filesystem is the source of truth.
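The shape of that PORO matters less than the idea, but a minimal sketch makes it concrete. This is an illustrative reduction, not Blurt's actual model — the real Post handles images, scheduling, and validation — but the core move is just: parse YAML frontmatter, keep the file as truth.

```ruby
require "yaml"

# Minimal sketch of a filesystem-backed Post PORO. The frontmatter keys
# (title, platforms) match the examples in this post; everything else
# about the real model is richer than this.
class Post
  attr_reader :path, :front_matter, :content

  FRONT_MATTER = /\A---\s*\n(.*?)\n---\s*\n/m

  def initialize(path)
    @path = path
    raw = File.read(path)
    if (m = raw.match(FRONT_MATTER))
      @front_matter = YAML.safe_load(m[1]) || {}
      @content = m.post_match
    else
      @front_matter = {}
      @content = raw
    end
  end

  def platforms
    Array(front_matter["platforms"])
  end

  def title
    front_matter["title"] || File.basename(path, ".md")
  end
end
```

No database, no migrations: deleting the app leaves every post intact and readable.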

This isn't just philosophy. It has real engineering consequences: your content survives the tool, debugging means reading a file, and there's no content schema to migrate.

Day 1: Foundation

rails new blurt --database=sqlite3 --skip-jbuilder. Rails 8.1, with a short gem list doing the heavy lifting: Faraday for HTTP, concurrent-ruby for parallel publishing, front_matter_parser for the markdown frontmatter.

The first thing I built was BlurtConfig — a singleton that loads platform credentials from environment variables. BlurtConfig.bluesky returns { identifier:, password: } or nil. BlurtConfig.configured_platforms returns only those with complete config. Simple.
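The two method names above are from the real config object; the rest of this sketch — which env variables each platform requires, the module shape — is my illustrative guess at how an env-backed config like this looks.

```ruby
# Sketch of an env-backed config singleton. BlurtConfig.bluesky and
# BlurtConfig.configured_platforms are the real method names from the
# post; the specific ENV variable names are assumptions.
module BlurtConfig
  PLATFORM_KEYS = {
    "bluesky"  => %w[BLUESKY_IDENTIFIER BLUESKY_PASSWORD],
    "mastodon" => %w[MASTODON_BASE_URL MASTODON_ACCESS_TOKEN]
  }.freeze

  module_function

  # Returns a credential hash, or nil if anything is missing.
  def bluesky
    id, pw = ENV["BLUESKY_IDENTIFIER"], ENV["BLUESKY_PASSWORD"]
    { identifier: id, password: pw } if id && pw
  end

  # Only platforms whose every required variable is present.
  def configured_platforms
    PLATFORM_KEYS.select { |_, keys| keys.all? { |k| ENV[k] } }.keys
  end
end
```

A platform with half its credentials set simply doesn't exist as far as the engine is concerned, which keeps failure modes out of the publish path.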

Then the Post PORO, the MarkdownProcessor service, and the three directories: queue/, sent/, failed/. Drop a markdown file in queue/ and the engine picks it up.

Day 2: The queue engine

The queue engine is four services working together:

QueueScanner scans the queue/ directory for pending posts. Supports flat .md files and subdirectories (for posts with local images). Skips scheduled posts that aren't due yet. Validates platforms against the known list.

PublishOrchestrator takes a post and publishes to all configured platforms in parallel using Concurrent::Future. All succeed? Move to sent/. Any fail? Move to failed/.
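The fan-out/join pattern is simple enough to sketch with plain threads. Blurt's actual orchestrator uses Concurrent::Future; this stdlib version just shows the same all-or-nothing routing decision, with an illustrative result shape.

```ruby
require "timeout"

# Sketch of the orchestrator's fan-out: publish to every configured
# platform in parallel, join the results, and route the post to sent/
# or failed/ based on whether anything errored. Plain threads stand in
# for Concurrent::Future here; the result hashes are illustrative.
def publish_all(post, publishers, timeout: 30)
  threads = publishers.map do |name, publisher|
    Thread.new do
      result = begin
        Timeout.timeout(timeout) { publisher.call(post) }
      rescue => e
        { error: e.message }
      end
      [name, result]
    end
  end
  results = threads.map(&:value).to_h
  destination = results.values.any? { |r| r[:error] } ? "failed" : "sent"
  [destination, results]
end
```

One slow platform can't delay the others, and one failing platform sends the whole post to failed/ for inspection rather than leaving it half-published and forgotten.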

PostMover handles the filesystem operations. The key feature: when moving to sent/, it enriches the frontmatter with platform permalinks and timestamps. Your published file becomes the system of record:

---
platforms: [bluesky, mastodon]
publishedAt: "2026-03-24T09:01:23Z"
results:
  bluesky:
    url: "https://bsky.app/profile/did:xxx/post/rkey"
    publishedAt: "2026-03-24T09:01:22Z"
  mastodon:
    url: "https://mastodon.social/@user/123456"
    publishedAt: "2026-03-24T09:01:23Z"
---

Your post content here.
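The enrichment step is a YAML round-trip: parse the existing header, merge in the results, write the file back. A simplified sketch of that merge (the real PostMover does more, and its parsing is sturdier than one regex):

```ruby
require "yaml"

# Sketch of PostMover's frontmatter enrichment: merge platform results
# and a publish timestamp into the existing YAML header, preserving the
# body untouched. Key names match the example above.
def enrich_front_matter(raw, results)
  m = raw.match(/\A---\s*\n(.*?)\n---\s*\n/m) or return raw
  fm = YAML.safe_load(m[1]) || {}
  fm["publishedAt"] = Time.now.utc.strftime("%FT%TZ")
  fm["results"] = results
  "---\n#{YAML.dump(fm).sub(/\A---\n/, "")}---\n#{m.post_match}"
end
```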

File locking prevents double-processing: files get renamed to .publishing while in progress. Solid Queue's recurring job polls every 60 seconds and fans out to PublishPostJob per post.
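The lock works because rename is atomic on POSIX filesystems: two workers racing for the same file means one rename succeeds and the other raises. A sketch of that claim step (function name mine):

```ruby
# Sketch of the .publishing lock. File.rename is atomic on POSIX
# filesystems, so exactly one worker can claim a given queue file;
# the loser gets Errno::ENOENT and moves on.
def claim(path)
  lock_path = "#{path}.publishing"
  File.rename(path, lock_path)
  lock_path
rescue Errno::ENOENT
  nil # another worker got there first
end
```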

Days 3-4: Six publishers

Six platforms, six publishers, two days. Each one implements Publishers::Base with a shared Faraday HTTP client.

Bluesky was the most complex. The AT Protocol has no Ruby SDK, so it's raw HTTP: create session, detect rich text facets (URLs, mentions, hashtags with UTF-8 byte offsets), upload images, create post record. Link previews fetch OG metadata and upload thumbnails as blobs.
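The byte-offset detail is the part that bites people: AT Protocol facets index into the UTF-8 byte string, not the character string, so any non-ASCII text before a URL shifts the offsets. A sketch of link facet detection (the facet record shape follows app.bsky.richtext.facet; the URL regex is simplified):

```ruby
# Sketch of Bluesky rich-text facet detection for links. Facets use
# UTF-8 *byte* offsets, so we compute bytesize of the prefix rather
# than using character indices directly.
def link_facets(text)
  facets = []
  text.scan(%r{https?://\S+}) do
    match = Regexp.last_match
    byte_start = text[0...match.begin(0)].bytesize
    byte_end   = byte_start + match[0].bytesize
    facets << {
      index: { byteStart: byte_start, byteEnd: byte_end },
      features: [{ "$type" => "app.bsky.richtext.facet#link", uri: match[0] }]
    }
  end
  facets
end
```

Get this wrong and your links render shifted by however many multi-byte characters precede them.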

Mastodon was clean — REST API with Bearer token auth. The one gotcha: bare domain auto-linking. blurt.sh needs to become https://blurt.sh before Mastodon will linkify it.
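The fix is a pre-processing pass that prepends a scheme to bare domains while leaving full URLs and @mentions alone. A sketch of one way to do it — the regex here (including the TLD list) is illustrative, not Blurt's actual implementation:

```ruby
# Sketch of the bare-domain fix: Mastodon only linkifies full URLs,
# so turn bare domains into https:// links before posting. The
# lookbehind skips domains already inside a URL or a @user@host handle.
def linkify_bare_domains(text)
  text.gsub(%r{(?<![\w/@.])((?:[\w-]+\.)+(?:sh|com|org|net|dev|io))\b}) do
    "https://#{$1}"
  end
end
```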

LinkedIn has the most annoying image upload flow: POST /rest/images?action=initializeUpload to get an upload URL and image URN, then PUT binary to an external URL. Link previews work the same way — upload OG thumbnail, get URN, attach as article content.

Medium — two calls: get user ID, create post. Content is HTML via MarkdownProcessor.to_html.

Dev.to — single call, raw markdown body. The simplest publisher.

Substack — SMTP email via Action Mailer. Posts arrive as drafts. Dynamic SMTP config per delivery, not global. Returns message-id as reference.
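The per-delivery trick is Action Mailer's delivery_method_options header, which overrides SMTP settings for one email without touching global config. A sketch of how that wiring might look — the mailer class, method, and credential keys here are my assumptions, only the delivery_method_options mechanism is Action Mailer's real API:

```ruby
# Hypothetical mailer sketch (Rails config fragment, not runnable alone).
# delivery_method_options scopes SMTP settings to this one delivery.
class SubstackMailer < ApplicationMailer
  def draft(post, smtp)
    mail(
      to: smtp[:draft_address], # your secret Substack posting address
      subject: post.title,
      body: post.content,
      delivery_method_options: {
        address: smtp[:host],
        port: smtp[:port] || 587,
        user_name: smtp[:username],
        password: smtp[:password],
        authentication: :plain,
        enable_starttls_auto: true
      }
    )
  end
end
```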

All six publishers handle image uploads with alt text passthrough, and all publish simultaneously via Concurrent::Future with a 30-second timeout.

Day 5: The HTTP API

The API turns Blurt from a background process into something you can control with curl:

# Create a post
curl -X POST https://your-server/api/posts \
  -H "Authorization: Bearer $BLURT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"title": "My post", "content": "Hello world", "platforms": ["bluesky", "mastodon"]}'

# Publish immediately (bypass the 60s poll)
curl -X POST https://your-server/api/posts/my-post.md/publish \
  -H "Authorization: Bearer $BLURT_API_KEY"

# Export all sent posts as ZIP
curl https://your-server/api/export \
  -H "Authorization: Bearer $BLURT_API_KEY" \
  -o posts.zip

Six controllers: Posts (CRUD + immediate publish), History (paginated, filterable), Platforms (configured status), Health (queue counts, Solid Queue connectivity), Export (ZIP stream of sent/). Bearer auth with timing-safe comparison. 53 tests, 163 assertions.
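Timing-safe here means the comparison takes the same time whether the token is wrong in the first byte or the last, so attackers can't binary-search the key. Rails apps typically reach for ActiveSupport::SecurityUtils.secure_compare; a stdlib sketch of the same idea:

```ruby
require "openssl"

# Sketch of timing-safe bearer auth. Hashing both sides first makes the
# inputs fixed-length, so fixed_length_secure_compare can run in
# constant time and token length leaks nothing.
def authorized?(header, api_key)
  return false unless header&.start_with?("Bearer ")
  token = header.delete_prefix("Bearer ")
  a = OpenSSL::Digest::SHA256.digest(token)
  b = OpenSSL::Digest::SHA256.digest(api_key)
  OpenSSL.fixed_length_secure_compare(a, b)
end
```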

Day 6: Docker, deploy, production hardening

Rails 8 generates a production-ready Dockerfile out of the box: Ruby 3.4 slim, libvips, jemalloc, non-root user, Thruster. I added docker-compose.yml with bind mounts for the three directories and a named volume for storage.

The production hardening pass added real timeouts everywhere: 15s/5s on Publishers::Base, 30s/5s on ad-hoc connections for image uploads, 30s Future timeout in PublishOrchestrator. Specific rescue clauses for Faraday::TimeoutError, connection failures, 401 token expiry, and 429 rate limits with retry-after headers.
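Mapping those failure classes to a small set of outcomes is what keeps six publishers uniform. A sketch of the response taxonomy — status handling only; the outcome hashes and the default retry delay are illustrative, and the real code rescues Faraday's typed errors on top of this:

```ruby
# Sketch of classifying publisher HTTP responses into uniform outcomes:
# success, expired token (401), rate limit with retry-after (429), or
# a generic HTTP error.
def classify_response(status, headers)
  case status
  when 200..299
    { ok: true }
  when 401
    { ok: false, reason: :token_expired }
  when 429
    { ok: false, reason: :rate_limited,
      retry_after: Integer(headers["retry-after"] || 60) }
  else
    { ok: false, reason: :http_error, status: status }
  end
end
```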

Rake tasks for local workflows: rake blurt:scan for synchronous queue processing, rake blurt:platforms to check configuration, and rake blurt:linkedin_auth for the full OAuth flow (WEBrick callback server, browser open, token exchange).

Deployed to a VPS the same day. Replaced the old social-queue Node.js container with Blurt at /opt/blurt.

Day 7: 139 tests

The final day was tests and bug fixes, covering the queue services, the markdown processing, all six publishers, and the API controllers.

Testing found real bugs: MarkdownProcessor.to_plaintext was stripping inline code before code blocks (order bug), and QueueScanner was rescuing FrontMatterParser::SyntaxError which doesn't exist — it's Psych::SyntaxError.
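The ordering bug is worth spelling out: if the inline-code pass runs first, it eats the fences' backticks and mangles the blocks. One plausible fix ordering, heavily simplified from whatever the real MarkdownProcessor does:

```ruby
# Sketch of the to_plaintext ordering fix: handle fenced code blocks
# *before* inline code, then unwrap links. Regexes are deliberately
# simplified; the real processor covers far more of markdown.
def to_plaintext(md)
  md.gsub(/^```.*?^```\s*/m, "")         # 1. drop fenced blocks first
    .gsub(/`([^`]*)`/, '\1')             # 2. then unwrap inline code
    .gsub(/\[([^\]]+)\]\([^)]*\)/, '\1') # 3. links -> their text
    .strip
end
```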

139 tests, 318 assertions, 0 failures. Tagged as v0.1.0.

What I'd do differently

Honestly? Not much. The filesystem-first architecture is paying off — it's simple to reason about, trivial to debug (just ls sent/), and the system of record feature where permalinks get written back into your files is genuinely useful.

If I had to nitpick: I'd have written publisher tests on day 3 instead of day 7. Testing Bluesky's UTF-8 byte offset facets after the fact was more painful than testing while building.

The one thing that surprised me: how much of Rails 8 you get for free. Solid Queue, concurrent-ruby, image processing, the production Dockerfile — the framework covered maybe 40% of the infrastructure I needed out of the box.

What's next

Phase 2 is the CLI and MCP server. A terminal tool to manage your queue: blurt publish post.md, blurt status, blurt history. And an MCP server so AI assistants can publish on your behalf.

But Phase 1 is the foundation. Drop a markdown file in a folder, and it shows up on six platforms with permalinks written back into your file. That's the whole product thesis, working end to end.


Blurt is open source and building in public. Check it out on GitHub or follow along on Bluesky and Mastodon.
