The previous session got the database migration done, the admin panel rebuilt with queue management, and both Edge Functions updated. But the actual cron scheduler that publishes posts automatically didn't exist yet. This session was about closing that loop.
The Cloudflare Worker
The Worker itself is small. It exports a scheduled handler that Cloudflare triggers on a cron expression (0 6 * * 1,3,5, which is 6am UTC on Monday, Wednesday, and Friday), and a fetch handler for manual testing via HTTP.
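The cron binding lives in the Worker's wrangler.toml. A minimal sketch (the name, entry point, and compatibility date are placeholders; the cron expression is the one above):

```toml
# Illustrative wrangler.toml for the publisher Worker.
name = "blog-publisher"          # placeholder name
main = "src/index.js"            # placeholder entry point
compatibility_date = "2024-01-01" # placeholder date

[triggers]
crons = ["0 6 * * 1,3,5"]  # 06:00 UTC on Mon, Wed, Fri
```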
The publish flow is five steps: query Supabase for the lowest queue_position post with status='queued', update it to status='published' with a published_at timestamp, resequence the remaining queued posts to close the gap, fire the Cloudflare Pages deploy hook, and post the result to a Discord webhook. If the queue is empty, it posts a "nothing to publish" message and exits.
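The resequencing step is the only fiddly part of that flow. A minimal sketch of the idea, written as a pure function for illustration (the real Worker would issue the equivalent updates against Supabase):

```javascript
// After publishing the post at the head of the queue, shift every
// remaining queued post up so queue_position runs 1..N with no gap.
function resequence(queuedPosts) {
  return queuedPosts
    .slice() // don't mutate the input array
    .sort((a, b) => a.queue_position - b.queue_position)
    .map((post, i) => ({ ...post, queue_position: i + 1 }));
}
```

Sorting first means the function also repairs any gaps left by earlier deletions, not just the one created by this publish.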
I put the Worker inside the portfolio repo under workers/blog-publisher/ rather than a separate repo. It's a standalone Cloudflare project with its own wrangler.toml and package.json, but keeping it in the same repo means everything stays in one place.
The Worker uses the Supabase service_role key to bypass RLS, since it needs write access to update post statuses. That key only exists as a Cloudflare Worker secret, never in client code or version control.
Secrets management
The Worker needs five secrets: SUPABASE_URL, SUPABASE_SERVICE_KEY, CLOUDFLARE_DEPLOY_HOOK, DISCORD_WEBHOOK_URL, and MANUAL_TRIGGER_SECRET. All set via wrangler secret put. The deploy hook is the same one the admin panel's deploy button uses, and the manual trigger secret is a random hex string for authenticating test requests.
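For reference, setting those five secrets is one wrangler command each; wrangler prompts for the value interactively, so nothing lands in shell history:

```shell
# Run from workers/blog-publisher/; each command prompts for the value.
npx wrangler secret put SUPABASE_URL
npx wrangler secret put SUPABASE_SERVICE_KEY
npx wrangler secret put CLOUDFLARE_DEPLOY_HOOK
npx wrangler secret put DISCORD_WEBHOOK_URL
npx wrangler secret put MANUAL_TRIGGER_SECRET
```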
I nearly pasted the Supabase service_role key into the conversation for Claude to set via the terminal. Caught myself just in time: sensitive credentials should never pass through a conversation history, even with an AI assistant. I ran the two Supabase secret commands myself.
The manual trigger test
The Worker has a /trigger endpoint that accepts POST requests with a bearer token. This lets me test the full pipeline without waiting for the cron to fire:
curl -X POST https://drewbs-blog-publisher.p3nd3rs.workers.dev/trigger \
  -H "Authorization: Bearer <secret>"

The response came back immediately: {"published":true,"deployed":true,"post":"The vault's cinematic entrance: building a page transition state machine"}. The first queued post was published, the deploy hook fired, and a Discord notification appeared in the channel. The remaining seven posts in the queue resequenced from positions 1 to 7.
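The bearer check itself is a one-liner. A sketch, assuming the secret is bound as env.MANUAL_TRIGGER_SECRET (the name from the secrets list above) and the function name is illustrative:

```javascript
// Reject any /trigger request whose Authorization header doesn't
// exactly match "Bearer <MANUAL_TRIGGER_SECRET>".
function isAuthorized(request, env) {
  const header = request.headers.get("Authorization") || "";
  return header === `Bearer ${env.MANUAL_TRIGGER_SECRET}`;
}
```

The fetch handler would return a 401 when this is false and otherwise run the same publish routine the scheduled handler calls.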
Seeing that JSON response and the Discord message pop up at the same time was genuinely satisfying. The whole pipeline, from database write to deploy trigger to notification, completed in under two seconds.
Discord notifications
The Worker handles its own notifications: success ("Published [title] - deploy triggered"), partial failure ("Published but deploy hook failed"), queue empty ("Nothing to publish"), and hard failure ("Cron publish failed - [error]"). That covers the scheduling side.
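Discord webhooks accept a JSON body with a content field, so the four cases reduce to one small message builder. A sketch (function names and the result shape are illustrative, not from the repo; the message strings mirror the four cases above):

```javascript
// Map a publish result onto one of the four notification messages.
function buildNotification(result) {
  if (result.error) {
    return { content: `Cron publish failed - ${result.error}` };
  }
  if (!result.post) {
    return { content: "Nothing to publish" };
  }
  if (!result.deployed) {
    return { content: `Published but deploy hook failed: ${result.post}` };
  }
  return { content: `Published ${result.post} - deploy triggered` };
}

// POST the message to the Discord webhook bound as a Worker secret.
async function notifyDiscord(env, result) {
  await fetch(env.DISCORD_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildNotification(result)),
  });
}
```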
For deploy status, Cloudflare Pages has its own notification system. I set up webhook notifications for deployment started, succeeded, and failed events, all pointing at the same Discord channel. That gives end-to-end visibility: the Worker confirms it published a post and triggered a deploy, then Cloudflare confirms the site actually rebuilt successfully. Two independent signals for the same event.
What's actually running now
The publish queue currently has seven posts lined up. The cron fires at 6am UTC on Monday, Wednesday, and Friday. That means:
The next automatic publish is Friday 20th March, then Monday 23rd, Wednesday 25th, and so on. Once those slots work through the backlog, I'll be back to publishing as I go, or queuing new posts for the next available slot.
The whole system cost nothing. Cloudflare Workers free tier includes cron triggers, Supabase is on the free tier, and Discord webhooks are free. The only thing that took time was building it.
Where this leaves things
The blog now has a fully automated publishing pipeline. Write in Obsidian, import via the CMS, queue in the admin panel, and the cron handles the rest. Discord gives me visibility on every publish and every deploy without checking the dashboard.
Next up is the vault black hole feature, and then eventually the self-hosted Supabase migration that'll give me a proper dev environment for testing Edge Function writes locally.