
LinkedIn content pipeline

Drafts LinkedIn posts in your voice from raw notes you drop in an inbox folder, queues them in Dock for review, and publishes one per posting day after you approve.

13 min read · from trydock.ai

Agent: read this first

You're working in the LinkedIn content pipeline workspace at trydock.ai/<org>/linkedin-content-pipeline.

## Your role

You read new files in the content inbox folder, draft a LinkedIn post per item using the Style guide, and queue each draft in Post queue at Status=Draft. You never flip Status to Approved and never publish to LinkedIn.

## Cadence: when to update Dock

- After EVERY tool call that changes state (row create / update, doc append), append an entry to the **Status** surface.
- After every milestone, update the canonical doc surface for that work.
- When you start a session, READ the last 5 Status entries first to know what happened since you last worked.

## User-loop protocol

- End EVERY reply to the user with: "Check Dock for the latest at trydock.ai/<org>/linkedin-content-pipeline."
- If you make a decision the user might want to override (priority, scope cut, tone change), append to **Status** AND mention it in your reply: "Decided X. If you'd rather Y, edit [surface] and I'll re-plan."
- If you're blocked, post to Status with prefix `BLOCKED:` and ask one specific question.

## Don't touch

- Canonical phase / column / surface titles.
- Anything in a section titled "Locked" or "Decisions sealed."
- The agentPrompt itself (this section).

## First tool calls

1. `list_surfaces(workspace_slug="linkedin-content-pipeline")`
2. `get_doc(workspace_slug="linkedin-content-pipeline", surface_slug="status")`
3. `list_rows(workspace_slug="linkedin-content-pipeline", surface_slug="post-queue")`

---

# LinkedIn content pipeline

A two-step pipeline that turns a folder of raw notes into a queue of ready-to-review LinkedIn posts. Open in Dock and you get four surfaces seeded:

- **Post queue** (table): one row per draft, with columns Draft, Hook, Source, Status, Scheduled For, Posted At.
- **Setup guide** (doc): how the agent connects to LinkedIn, plus the two Python scripts.
- **Style guide** (doc): your voice, topics, format preferences, and 1-3 example posts. Read by the agent on every draft. Edit any time to refine.
- **Status** (doc): the agent's working session log. At the end of every run: one paragraph on what happened and what's pending.

## Bring your own agent

Connect one agent to this workspace (Claude in Cursor, Claude Code, Codex, any MCP client). The agent reads the inbox folder weekly (e.g. Sunday evening) and drafts everything in batch; no manual trigger is needed once the cron is wired.

## The user-loop protocol

Your agent proposes; you decide.

- Sunday 6 PM (or whenever you batch): agent reads the content inbox folder. For each new file, fetches URL content if the file is a URL, then drafts a post using the Style guide. Adds a Post queue row at Status=Draft.
- You open Post queue, read the drafts, edit Draft in place, optionally set Scheduled For (YYYY-MM-DD), flip Status to Approved.
- Mon/Wed/Fri 9 AM: the publish script picks up Approved rows due today (or with no scheduled date), ships one to LinkedIn, stamps Posted At, flips Status to Posted.
- Agent NEVER flips Status to Approved or Posted. Agent NEVER calls the LinkedIn API. Those are operator decisions and operator-owned credentials.
- End of every working session, agent writes 1 paragraph to **Status**: files processed, drafts queued, next due Sunday.
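The publish-side selection rule above can be sketched in a few lines. This is illustrative, not the shipped publish_posts.py: the row shape and helper name are assumptions, but the column names match the Post queue table.

```python
from datetime import date

MAX_POSTS_PER_RUN = 1  # matches the .env default from the Setup guide

def eligible_rows(rows, today=None):
    """Pick Approved rows due today (or with no Scheduled For date),
    capped at MAX_POSTS_PER_RUN so at most one ships per run."""
    today = today or date.today().isoformat()
    due = [
        r for r in rows
        if r["Status"] == "Approved"
        and (not r.get("Scheduled For") or r["Scheduled For"] <= today)
    ]
    return due[:MAX_POSTS_PER_RUN]

rows = [
    {"Status": "Draft", "Scheduled For": ""},               # not yet approved
    {"Status": "Approved", "Scheduled For": "2099-01-01"},  # waits for its date
    {"Status": "Approved", "Scheduled For": ""},            # ships next run
]
print(eligible_rows(rows))  # only the undated Approved row
```

Because ISO dates (YYYY-MM-DD) sort lexicographically, the string comparison doubles as a date comparison with no parsing.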

## First run

1. Confirm voice and topics with the operator: 2-3 voice adjectives, 3-5 topics, format preference, things to avoid. Write into Style guide.
2. Get LinkedIn credentials: a LinkedIn Developer App with w_member_social + profile scopes, an access token, and your person ID.
3. Open Setup guide and follow the install steps. Configure .env, drop scripts.
4. Drop 1-2 test files into content-inbox/, run generate_posts.py once, watch Post queue + Status.

## Status

### [00:00 UTC] seeded by Dock
Workspace created from the LinkedIn content pipeline template. Agent: read the Setup guide, then ask the operator the Style guide questions before the first draft run.
Next: confirm voice + topics + LinkedIn credentials with operator.

Outcome

A weekly content pipeline where you drop raw notes in a folder, the agent drafts LinkedIn posts in your voice, you review and approve in Dock, and the publish script ships one per posting day to your LinkedIn profile.

Estimated time: 45 min setup, ongoing ~15 min/week review
Difficulty: intermediate
For: Founders, operators, and execs who want a Mon/Wed/Fri LinkedIn cadence without the daily writing cost.

What you'll need

Pre-register or install before you start.

  • Dock (Free plan covers this template; Pro $19/mo unlocks more workspaces.) — The workspace itself, the Post queue table, the Style guide doc the agent reads on every draft.
  • Anthropic API (Pay per token, ~$0.01 per drafted post on Sonnet.) — Claude drafts each post from the source material plus the Style guide.
  • LinkedIn Posts API (Free. Access tokens expire every 60 days; the publish script fails loudly with refresh instructions.) — Official LinkedIn Posts API endpoint used by the publish script. Requires a Developer App with w_member_social + profile scopes.
  • CueAPI (optional) (Free tier covers weekly + tri-weekly cadence.) — Cloud-scheduled run so the Sunday draft and Mon/Wed/Fri publish keep working when your laptop is closed.

The template · 5 steps

Step 1: Define your voice in the Style guide

Estimated time: 15 min

The single biggest lever on draft quality is the Style guide. The doc surface is pre-seeded with sections for VOICE (2-3 adjectives), TOPICS (3-5 you want to be known for), FORMAT (narrative / list / punchy / mix), AVOID (phrases or clichés you hate), and EXAMPLES (1-3 LinkedIn posts that match the tone you want). The agent reads this on every draft.

Tasks

  • Open Style guide (doc). Fill in VOICE with 2-3 adjectives (e.g. direct and opinionated, conversational and story-driven, data-focused).
  • List 3-5 TOPICS you want to be known for (e.g. operational scaling, building remote teams, PE portfolio management).
  • Pick a FORMAT preference (narrative posts, list posts, short punchy takes, or a mix).
  • List things to AVOID: clichés, hashtags, em dashes, generic openers, anything else.
  • Paste 1-3 EXAMPLE posts you like, even if they aren't yours. These do more for voice match than every other knob combined.

[!CAUTION] Gotchas

  • Skipping the EXAMPLES section is the #1 reason drafts feel off. Add 1-3 examples first; tune everything else after.
  • AVOID rules are honored when explicit. Saying 'no em dashes' is more reliable than 'don't be cringe'.

Step 2: Get LinkedIn credentials

Estimated time: 15 min

The publish script uses the official LinkedIn Posts API. You need a LinkedIn Developer App with the right scopes, an access token, and your LinkedIn person ID. Tokens expire every 60 days; the publish script fails loudly with refresh instructions.

Tasks

  • Go to https://www.linkedin.com/developers/apps and create a new app
  • Under Auth, add w_member_social and profile to OAuth 2.0 scopes
  • Use the OAuth 2.0 tools tab to generate an access token, both scopes selected
  • Get your person ID: run `curl -H 'Authorization: Bearer YOUR_TOKEN' https://api.linkedin.com/v2/me` and copy the `id` field
  • Set LINKEDIN_ACCESS_TOKEN and LINKEDIN_PERSON_ID in .env
  • Set a calendar reminder for 55 days from now to refresh the token before it expires

[!CAUTION] Gotchas

  • Access tokens expire after 60 days. The publish script catches 401s and prints the refresh URL, but you still need to act on the reminder.
  • If you get a 422 on publish, your person ID is wrong. Re-run the /v2/me curl to verify.

Step 3: Wire .env + the two Python scripts

Estimated time: 10 min

Two scripts: generate_posts.py drafts on Sunday, publish_posts.py ships on Mon/Wed/Fri. Both read .env, and both run as standalone CueAPI handlers. Install the anthropic, requests, and python-dotenv packages, then drop in the scripts from the Setup guide.

Tasks

  • Open Setup guide (doc) and copy generate_posts.py + publish_posts.py into a local folder
  • Run pip install anthropic requests python-dotenv
  • Create .env with DOCK_API_KEY, DOCK_WORKSPACE_SLUG, ANTHROPIC_API_KEY, LINKEDIN_ACCESS_TOKEN, LINKEDIN_PERSON_ID, CONTENT_INBOX_DIR=./content-inbox, STYLE_GUIDE_FILE=./style_guide.txt, MAX_POSTS_PER_RUN=1, CLAUDE_MODEL=claude-sonnet-4-6
  • Generate a Dock API key at trydock.ai/settings/api
  • mkdir content-inbox; drop 1-2 test files (a short note, a URL on its own line)
  • Run python generate_posts.py once. Confirm Post queue rows appear at Status=Draft.
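A minimal .env matching the variable list above. Every value here is a placeholder; swap in your own keys and IDs.

```shell
DOCK_API_KEY=your-dock-api-key            # from trydock.ai/settings/api
DOCK_WORKSPACE_SLUG=linkedin-content-pipeline
ANTHROPIC_API_KEY=your-anthropic-key
LINKEDIN_ACCESS_TOKEN=your-linkedin-token # expires every 60 days
LINKEDIN_PERSON_ID=your-person-id         # from the /v2/me curl in Step 2
CONTENT_INBOX_DIR=./content-inbox
STYLE_GUIDE_FILE=./style_guide.txt
MAX_POSTS_PER_RUN=1                       # keep at 1; see the gotcha below
CLAUDE_MODEL=claude-sonnet-4-6
```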

[!CAUTION] Gotchas

  • MAX_POSTS_PER_RUN defaults to 1 on purpose. Bumping it to 5 means you can rapid-fire 5 posts per run, which reads like a bot to LinkedIn's algorithm. Keep it at 1.
  • CONTENT_INBOX_DIR must exist before the first run. The script creates it if missing, but the run that creates it has nothing to draft.

Agent prompt for this step

Run a first draft of the inbox. Read every new file in CONTENT_INBOX_DIR (skip names in processed_content.json or starting with a dot). For URL files, fetch the page and use the text as source. For text files, use the raw content. Read Style guide doc for voice + format. Draft a post per file via Claude. Add each as a Post queue row at Status=Draft. Append file IDs to processed_content.json. Post a Status entry summarizing: files processed, drafts queued.
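That loop can be sketched as below. The helper names and the stubbed Claude/Dock calls are illustrative; the real generate_posts.py ships in the Setup guide.

```python
import json
import os

def load_processed(path="processed_content.json"):
    """File names already drafted, so the same source never produces two drafts."""
    if os.path.exists(path):
        with open(path) as f:
            return set(json.load(f))
    return set()

def is_url_source(text):
    """Per the template's rule, a file whose body starts with http:// gets fetched."""
    return text.lstrip().startswith("http://")

def new_files(names, processed):
    """Skip dotfiles and anything already recorded in processed_content.json."""
    return sorted(n for n in names if not n.startswith(".") and n not in processed)

# Per-run skeleton; the Claude and Dock calls are stubs:
#
# for name in new_files(os.listdir(inbox_dir), load_processed()):
#     body = open(os.path.join(inbox_dir, name)).read()
#     source = fetch_page_text(body.strip()) if is_url_source(body) else body
#     draft = draft_with_claude(source, style_guide)   # one Claude call per file
#     add_post_queue_row(draft, status="Draft")        # Dock API, Status=Draft
```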

Step 4: Schedule the Sunday draft + Mon/Wed/Fri publish

Estimated time: 10 min

Two cron tasks: generate weekly on Sunday at 6 PM, publish Mon/Wed/Fri at 9 AM. With MAX_POSTS_PER_RUN=1, this gives a natural 3 posts/week even if more are approved. CueAPI is the right pick for cloud-scheduled runs.

Tasks

  • Option A, cron: run `crontab -e` and add `0 18 * * 0 cd /path && source .env && python3 generate_posts.py >> generate.log 2>&1` and `0 9 * * 1,3,5 cd /path && source .env && python3 publish_posts.py >> publish.log 2>&1`
  • Option B, CueAPI: `pip install cueapi cueapi-worker`, then `cueapi create --schedule '0 18 * * 0' --name 'LinkedIn Generate'` and `cueapi create --schedule '0 9 * * 1,3,5' --name 'LinkedIn Publish'`. Start the worker pointing at each handler.
  • Confirm next Sunday: Status has a fresh session entry, Post queue has fresh Draft rows for everything you dropped that week.
  • Confirm next Monday: one Approved row flips to Posted + Posted At gets stamped + the post appears on your LinkedIn profile.
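Option A's two entries, laid out as they'd appear in the crontab (`/path` is a placeholder for your script folder). One caveat worth knowing: cron runs jobs under sh by default, where `source` is unavailable, so set SHELL at the top of the crontab.

```shell
SHELL=/bin/bash  # `source` is a bash builtin; cron's default shell is sh
# Sunday 6 PM: draft everything new in the inbox
0 18 * * 0 cd /path && source .env && python3 generate_posts.py >> generate.log 2>&1
# Mon/Wed/Fri 9 AM: ship one approved post
0 9 * * 1,3,5 cd /path && source .env && python3 publish_posts.py >> publish.log 2>&1
```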

[!CAUTION] Gotchas

  • If you've approved 5 posts and only want 3 to ship that week, the script handles that automatically with MAX_POSTS_PER_RUN=1 + the Mon/Wed/Fri schedule. Approved posts wait their turn.
  • Scheduled For is optional. Blank means publish on the next eligible run. Setting a future date means the post waits until that date.

Step 5: Set the weekly batch + review cadence

Estimated time: 15 min/week ongoing

The whole pipeline assumes one batch session per week. Sunday evening is the typical pattern: drop 5 files in the inbox, the agent drafts everything overnight, you spend 15 minutes Monday reviewing and approving. Then Mon/Wed/Fri ship without further input.

Tasks

  • Block 15 min on the calendar Sunday evening to drop content + 15 min Monday morning to review
  • Drop 3-5 inbox files: a meeting insight as a .txt note, an article URL on its own line in a .txt file, etc.
  • Monday morning: open Post queue. Read the drafts. Edit Draft in place if needed. Set Scheduled For if you want a specific date. Flip Status to Approved.
  • Walk away. Mon/Wed/Fri 9 AM the publish script does the rest.
  • Tune the Style guide as the drafts go live: if you notice a phrasing pattern you don't like, add it to AVOID; if drafts feel generic, add another EXAMPLE post.

[!CAUTION] Gotchas

  • Skipping the Monday review means nothing ships. The publish script no-ops when nothing is Approved.
  • Editing a draft after it's been published does nothing (LinkedIn doesn't sync from Dock). Edit before the publish day.

Hand the template to your agent

Paste the prompt below into your agent's permanent system prompt so the agent reads, writes, and maintains this workspace as you work through the steps.

You are the agent running on the "LinkedIn content pipeline" template workspace, connected via MCP at your-org/linkedin-content-pipeline.

Your job: read the content inbox folder, draft a LinkedIn post per item using the Style guide doc, queue each draft in Post queue at Status=Draft. Never publish.

User-loop protocol:
- You propose. The operator decides. Never flip Status past Draft. Never call the LinkedIn Posts API.
- Sunday evening (or "draft posts"): list files in CONTENT_INBOX_DIR. Skip names already in processed_content.json or starting with a dot. For each new file, read the contents. If the file body starts with http://, fetch the URL and use the page text as source. Otherwise use the raw file content.
- Read Style guide (doc) for voice, topics, format preference, things to avoid, example posts. Pass everything into a single Claude prompt that returns JSON with hook, full_text, source_summary.
- Add a Post queue row: Draft (full text), Hook (first 1-2 lines), Source (filename or URL), Status=Draft, Scheduled For=blank, Posted At=blank.
- Append the file ID to processed_content.json so the same source never produces two drafts.
- End of every working session, write 1 paragraph to Status: files processed, drafts queued, next due Sunday.

Don't touch:
- Post queue.Status (Draft / Approved / Posted is the operator's flow).
- Post queue.Posted At (the publish script stamps this).
- LinkedIn credentials in .env (those are operator-only, refreshed every 60 days).

First MCP tool calls:
1. list_surfaces(workspace_slug="linkedin-content-pipeline")
2. get_doc(workspace_slug="linkedin-content-pipeline", surface_slug="style-guide")
3. get_doc(workspace_slug="linkedin-content-pipeline", surface_slug="status")

FAQ

How do I get the voice right?

Add 1-3 example LinkedIn posts you like to the EXAMPLES section of the Style guide. They don't have to be yours, but they should match the tone you want. Examples beat every other knob (adjectives, AVOID rules, format pref) for voice match. Read drafts after a few weeks; if a phrase keeps appearing that you'd never say, add it to AVOID.

Does the agent post to LinkedIn on its own?

No. The agent drafts and queues. You read, edit, optionally set a date, flip Status to Approved. Only then does the publish script ship via the LinkedIn Posts API. The agent never calls the LinkedIn API directly.

What if I want more than 3 posts per week?

Bump MAX_POSTS_PER_RUN above 1, or change the publish cron to fire daily instead of Mon/Wed/Fri. Keep MAX_POSTS_PER_RUN low, though: posting 5 in one run reads as bot-like. The default is deliberately conservative.

Can I drop URLs in the inbox instead of writing notes?

Yes. A file containing only a URL (one line, starts with http://) gets fetched. The script strips HTML and uses the page text as source material for the draft. Useful for riffing on articles without copy-pasting them yourself.
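The strip-HTML step can be sketched with only the standard library. This is illustrative; the real script's implementation may differ, and the network fetch itself is omitted here.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1
    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def strip_html(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

print(strip_html("<html><body><h1>Title</h1><script>x()</script><p>Body text</p></body></html>"))
# Title Body text
```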

What happens when my LinkedIn token expires?

Tokens expire every 60 days. The publish script catches 401s and prints the exact refresh URL plus next steps. Generate a new token from your LinkedIn Developer App, paste into LINKEDIN_ACCESS_TOKEN, re-run. Set a 55-day calendar reminder so the token never lapses without you noticing.
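The "fails loudly" behavior can be sketched as a small error mapper. The helper name is illustrative, not the shipped publish_posts.py; the two status codes match the gotchas in Steps 2 and 4.

```python
REFRESH_URL = "https://www.linkedin.com/developers/apps"

def explain_publish_error(status_code):
    """Map the failure modes the gotchas call out to actionable messages."""
    if status_code == 401:
        return ("Access token expired (LinkedIn tokens last 60 days). "
                f"Generate a new one at {REFRESH_URL} and update "
                "LINKEDIN_ACCESS_TOKEN in .env.")
    if status_code == 422:
        return ("Person ID looks wrong. Re-run the /v2/me curl and fix "
                "LINKEDIN_PERSON_ID in .env.")
    return f"Unexpected LinkedIn API response: {status_code}"

print(explain_publish_error(401))
```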
