Agents can do incredible things.
But they can't scale. Yet.

Your agent is reasoning
through the same problem from scratch.
Full cost. Same latency. Same risk of failure.
Every time.
pflow turns that reasoning into workflows
your agent can reuse.

100% cheaper · 91% faster
Cost:
Agent: $43.61
Agent + pflow: $0.80*
CLI + pflow: $0.01
Time:
Agent: 47.4 min
Agent + pflow: 16.4 min
CLI + pflow: 4.1 min
Live Benchmark
Plan once. Run forever.
Star on GitHub

Open source • Works with any local agent

Claude by Anthropic
OpenAI
Cursor
Windsurf
GitHub Copilot
Amazon Q
> Post v1.0.0 release to Slack, Discord, and X using the changelog
[Workflow diagram: a markdown changelog fans out to three parallel branches (Slack, Discord, and X), each running draft, critique, and polish steps across gemini, openai, and claude before publishing.]
9 LLM calls · 3 MCPs · 3 providers

More example workflows coming soon

Demo recordings coming soon

Read the docs or view the source.

Direct Execution

Drop the agent

Compiled workflows run as CLI commands. Pipe them, chain them, cron them. No orchestration overhead. Minimal LLM costs. Same reliable process every time.

# Shell-native workflows - pipe, chain, compose
pflow release-announcements version=1.0.0
cat CHANGELOG.md | pflow -p release-announcements version=1.0.0
# Chain workflows
pflow -p generate-changelog | pflow -p release-announcements
# Extract metadata
pflow --output-format json \
release-announcements | jq '.total_cost_usd'
$ pflow release-announcements
Recording coming soon...
The Reasoning Tax

Every tool call returns to the model. Every return costs tokens and time.

Traditional agents make round-trips to the model between every tool call—each one requiring inference. A 5-step workflow means 5+ inference passes, tokens accumulating at every step.

With pflow, workflows compile once. After that, data flows through validated nodes without returning to the model — 94% fewer tokens on a typical 4-step workflow.

Traditional Agent
Workflow Execution:
Request
  load_mcp_schemas() · 47K tokens
  inference: "which tool?" · ~1K tokens
  call_github_api() → returns to model · ~500 tokens
  inference: "what next?" · ~1K tokens
  call_sheets_api() → returns to model · ~5K tokens
  inference: "what next?" · ~1K tokens
  call_slack_api() → returns to model · ~1K tokens
  inference: "format result" · ~1K tokens
  format_result() · ~500 tokens
Response
~60K tokens · 4 inference passes · Every request
pflow (compiled)
Workflow Execution:
Request + params
  load_pflow_instructions · ~2K tokens
  discover_workflow() · ~500 tokens
  execute_workflow · 0 orchestration tokens
    node_1: github → data
    node_2: sheets → data
    node_3: process → data
    node_4: slack → data
  workflow_completed() → returns · ~300 tokens
  format_result() · ~500 tokens
Response
~3.5K tokens · 3 inference passes · 94% reduction
The Scaling Effect
4 steps: 60K vs 3.5K (94%)
10 steps: 140K vs 3.5K (97%)
20 steps: 280K vs 3.5K (99%)
Traditional scales with steps. pflow stays flat.
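The scaling contrast can be sketched as a toy cost model. All numbers below are illustrative assumptions, not pflow measurements; real per-step costs vary with tool payload size, so only the shape of the two curves matters:

```python
# Toy cost model: a traditional agent pays a schema load plus an inference
# round-trip per step; a compiled workflow pays a flat discovery cost.
def traditional_tokens(steps: int, schema_load: int = 47_000, per_step: int = 2_000) -> int:
    # each step adds an inference pass plus a tool result returned to the model
    return schema_load + steps * per_step

def compiled_tokens(steps: int, flat_cost: int = 3_500) -> int:
    # step count is irrelevant: nodes pass data to each other outside the model
    return flat_cost

growth = [traditional_tokens(n) for n in (4, 10, 20)]  # grows linearly with steps
flat = [compiled_tokens(n) for n in (4, 10, 20)]       # stays constant
```

The point is the asymptote: the traditional curve grows without bound as steps are added, while the compiled cost is independent of step count.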
* Demo uses Claude Code (+18K system instructions token overhead on both sides)

With explicit orchestration logic, agents make fewer errors than when juggling multiple tool results in natural language. With pflow, just as with Programmatic Tool Calling, the model only needs to reason about the final result.

How It Works

Five node types. Your agent composes the rest.

Your agent mixes deterministic execution with selective intelligence. MCP, HTTP, and Shell nodes cost zero orchestration tokens. LLM and Agent nodes are used only where reasoning adds real value — with the best model for each job.

So how does it work?

Node Types (pflow orchestration layer)
Deterministic execution:
MCP NODE · any MCP server
HTTP NODE · direct REST
SHELL NODE · Bash/CLI
CODE NODE · Python (coming soon)
When you actually need reasoning:
LLM NODE · any provider
AGENT NODE · agentic subtasks
Deterministic by default. Intelligent by design.
Like Lego: simple blocks, standard interfaces, and the constraint is what makes composition work. Agents search for workflows by describing what they need. Node interfaces are designed so agents get it right on the first try, with actionable errors when they don't.
1. Setup · ~20 tokens
Add to your agent's config file or system prompt:
"Use pflow for workflow automation"
# No overhead if using CLI, or use pflow MCP for one MCP to rule them all
2. Discover
$ pflow instructions usage · ~2k tokens
Check if workflow already exists:
$ pflow workflow discover "task"
$ pflow registry discover "capability"
Match Found → Execute
# Run saved workflow
$ pflow workflow-name
# Or run node directly
$ pflow registry run node
Orchestration tokens: 0
No Match → Create workflow
# Read build instructions
$ pflow instructions create
# Save for reuse
$ pflow workflow save workflow.json
Tokens: ~15k (once)
Once workflows exist, most requests reuse them: 2k tokens total
3. Execute Forever · 0 orchestration tokens
Saved workflows are available to you and the agent:
$ pflow workflow-name param=value
# Same workflow, any parameters—no re-planning needed
MCP native

Every MCP you've been avoiding? Now you can use them.

Anthropic built Tool Search and Code Execution to tackle MCP's context cost. pflow shares the goal but differs on philosophy: LLMs perform best with clear, reusable blocks—not the freedom to generate anything. Structured workflows. Validated nodes. Others solve execution. pflow solves the lifecycle—persist, discover, reuse, compose.

Connect everything. pflow handles the complexity.

See how pflow solves: MCP Context Tax · Inference overhead · Context pollution · Safety and reliability

pflow
GitHub
Slack
Notion
Linear
Google Calendar
Supabase
$ uv tool install pflow-cli
Local-First - No Lock-In

Your terminal. Your data. Your AI models. Your agents.

Open source and free forever. pflow workflows run locally with the AI providers you choose and are created by agents you already trust. No lock-in to OpenAI, Anthropic, or anyone else. Your workflow definitions and execution logs stay on your machine as JSON files. Install once, own forever.

Build reusable skills for your agents

Your agent solving the same problem again?

That's repeated reasoning that should have been saved as a workflow.

What if that workflow was plain markdown. But executable.

Release Announcements

Generate and post release announcements to Slack, Discord, and X from changelog entries. Uses three different models across three phases: draft (gemini-3-flash-preview), critique (gpt-5.2), and improve (claude-opus-4.5).

Inputs

version

Version number to announce (e.g., "1.0.0").

  • type: string
  • required: true

changelog_path

Path to CHANGELOG.md file.

  • type: string
  • required: true

slack_channel

Slack channel name (without #).

  • type: string
  • required: true

discord_channel_id

Discord channel ID for posting.

  • type: string
  • required: true

github_owner

GitHub repository owner.

  • type: string
  • required: false
  • default: "spinje"

github_repo

GitHub repository name.

  • type: string
  • required: false
  • default: "pflow"

output_dir

Directory to save X post file.

  • type: string
  • required: false
  • default: "."

Steps

read-changelog

Read the changelog file content.

  • type: read-file
  • file_path: ${changelog_path}

extract-changelog-section

Extract the changelog section for the specified version.

  • type: code
  • inputs:
      content: ${read-changelog.content}
      version: ${version}
python
content: str
version: str
import re

pattern = rf"## v{re.escape(version)}.*?\n(.*?)(?=\n## v|\Z)"
match = re.search(pattern, content, re.DOTALL)
if match:
    result: str = match.group(1).strip()
else:
    result: str = ""
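Run in isolation against a toy changelog (the entries below are made up purely for illustration), the extraction step behaves like this:

```python
import re

# Hypothetical changelog content, for illustration only
content = """## v1.0.0
- Added workflow compiler

## v0.9.0
- Beta release
"""
version = "1.0.0"

# Same pattern as the code node above: capture everything after the version
# heading, up to the next "## v" heading or end of file
pattern = rf"## v{re.escape(version)}.*?\n(.*?)(?=\n## v|\Z)"
match = re.search(pattern, content, re.DOTALL)
result = match.group(1).strip() if match else ""
# result is "- Added workflow compiler"
```

If the version is not found, `result` is an empty string rather than an error, so downstream nodes should be prepared for empty input.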

draft-announcements

Draft release announcements for all three platforms using gemini-3-flash-preview.

  • type: llm
  • model: gemini-3-flash-preview
yaml
items:
  - platform: slack
    format_rules: |
      Slack format:
      - Markdown: *bold*, \`code\`
      - Link: github.com/${github_owner}/${github_repo}/releases/tag/v${version} (must be release URL, not just repo)
      - No forced jokes
  - platform: discord
    format_rules: |
      Discord format:
      - Markdown: **bold**, \`code\`
      - Link: github.com/${github_owner}/${github_repo}/releases/tag/v${version} (must be release URL, not just repo)
      - No quirky filler
  - platform: x
    format_rules: |
      X (Twitter) format - DIFFERENT rules:
      - 280 characters max
      - Line breaks required between sections (not one long line)
      - Focus on the most interesting/impactful feature from changelog; avoid overly technical fixes
      - Feature mentioned must be specific, not generic
      - Exact format:
        [project] [version] tagged.

        [one key feature sentence]

        Ask @grok to read github.com/${github_owner}/${github_repo}/blob/main/releases/v${version}-context.md and explain [specific feature from changelog]
      - Link rules (different from Slack/Discord):
        - Uses /blob/main/releases/v[VERSION]-context.md (NOT /releases/tag/)
        - The @grok CTA is the link - do NOT add a separate GitHub release link
parallel: true
prompt:
You are drafting a release announcement for ${item.platform}.

Project: ${github_repo}
Version: ${version}
Changelog:
${extract-changelog-section.result}

${item.format_rules}

Voice guidelines:
- Tone: "tired-but-competent engineer" - low-ego, high-signal, no hype
- Better rough and real than polished and generic
- No lame or generic fillers (no coffee mentions, no "exciting", no "we're thrilled")
- Avoid: hustle-bro energy, VC-speak, "game changer" language, inspirational poster vibes, "building the future" startup tone

Write ONLY the announcement text. No explanations, no meta-commentary.

critique-announcements

Critique each draft using gpt-5.2.

  • type: llm
  • model: gpt-5.2
yaml
items:
  - platform: slack
    draft: ${draft-announcements.results[0].response}
    format_rules: |
      Slack format:
      - Markdown: *bold*, \`code\`
      - Link: github.com/${github_owner}/${github_repo}/releases/tag/v${version} (must be release URL, not just repo)
      - No forced jokes
  - platform: discord
    draft: ${draft-announcements.results[1].response}
    format_rules: |
      Discord format:
      - Markdown: **bold**, \`code\`
      - Link: github.com/${github_owner}/${github_repo}/releases/tag/v${version} (must be release URL, not just repo)
      - No quirky filler
  - platform: x
    draft: ${draft-announcements.results[2].response}
    format_rules: |
      X (Twitter) format - DIFFERENT rules:
      - 280 characters max
      - Line breaks required between sections (not one long line)
      - Focus on the most interesting/impactful feature from changelog; avoid overly technical fixes
      - Feature mentioned must be specific, not generic
      - Exact format:
        [project] [version] tagged.

        [one key feature sentence]

        Ask @grok to read github.com/${github_owner}/${github_repo}/blob/main/releases/v${version}-context.md and explain [specific feature from changelog]
      - Link rules (different from Slack/Discord):
        - Uses /blob/main/releases/v[VERSION]-context.md (NOT /releases/tag/)
        - The @grok CTA is the link - do NOT add a separate GitHub release link
parallel: true
prompt:
You are critiquing a ${item.platform} release announcement.

Draft:
${item.draft}

Original changelog:
${extract-changelog-section.result}

Platform format requirements (immutable - these CANNOT be changed):
${item.format_rules}

Review criteria:
1. Validate compliance with Platform Formats (correct markdown, correct link format, X char count and structure)
2. Flag voice violations (should be "tired-but-competent engineer" - low-ego, high-signal, no hype)
3. Check for hallucinated information (all info must come from the changelog)
4. Suggest deeper content improvements (not just checkbox validation)

IMPORTANT: Do not suggest structural changes that violate Platform Formats (e.g., do not suggest removing the @grok CTA or adding a separate GitHub link to X). Format requirements take precedence.

Provide specific, actionable feedback.

improve-announcements

Apply improvements using claude-opus-4.5.

  • type: llm
  • model: claude-opus-4.5
yaml
items:
  - platform: slack
    draft: ${draft-announcements.results[0].response}
    critique: ${critique-announcements.results[0].response}
    format_rules: |
      Slack format:
      - Markdown: *bold*, \`code\`
      - Link: github.com/${github_owner}/${github_repo}/releases/tag/v${version} (must be release URL, not just repo)
      - No forced jokes
  - platform: discord
    draft: ${draft-announcements.results[1].response}
    critique: ${critique-announcements.results[1].response}
    format_rules: |
      Discord format:
      - Markdown: **bold**, \`code\`
      - Link: github.com/${github_owner}/${github_repo}/releases/tag/v${version} (must be release URL, not just repo)
      - No quirky filler
  - platform: x
    draft: ${draft-announcements.results[2].response}
    critique: ${critique-announcements.results[2].response}
    format_rules: |
      X (Twitter) format - DIFFERENT rules:
      - 280 characters max
      - Line breaks required between sections (not one long line)
      - Focus on the most interesting/impactful feature from changelog; avoid overly technical fixes
      - Feature mentioned must be specific, not generic
      - Exact format:
        [project] [version] tagged.

        [one key feature sentence]

        Ask @grok to read github.com/${github_owner}/${github_repo}/blob/main/releases/v${version}-context.md and explain [specific feature from changelog]
      - Link rules (different from Slack/Discord):
        - Uses /blob/main/releases/v[VERSION]-context.md (NOT /releases/tag/)
        - The @grok CTA is the link - do NOT add a separate GitHub release link
parallel: true
prompt:
You are improving a ${item.platform} release announcement.

Original draft:
${item.draft}

Critique feedback:
${item.critique}

Source changelog (for accuracy verification):
${extract-changelog-section.result}

Platform format requirements (IMMUTABLE - these MUST be preserved):
${item.format_rules}

Instructions:
1. Apply valid content suggestions from critique
2. REJECT any critique suggestions that violate Platform Formats (format requirements take precedence over critique)
3. Verify accuracy against source changelog
4. Maintain voice: "tired-but-competent engineer" - low-ego, high-signal, no hype

Output ONLY the final announcement text for ${item.platform}. No explanations, no meta-commentary.

post-to-slack

Post the improved Slack announcement.

  • type: mcp-composio-slack-SLACK_SEND_MESSAGE
  • channel: ${slack_channel}
  • markdown_text: ${improve-announcements.results[0].response}

post-to-discord

Post the improved Discord announcement.

  • type: mcp-discord-execute_action
  • server_name: discord
  • category_name: DISCORD_CHANNELS_MESSAGES
  • action_name: create_message
  • path_params: channel_id: ${discord_channel_id}
  • body_schema: content: ${improve-announcements.results[1].response}

save-x-post

Save the X post to a file for human review before posting.

  • type: write-file
  • file_path: ${output_dir}/x-post-v${version}.txt
  • content: ${improve-announcements.results[2].response}

generate-summary

Generate a summary of what was posted.

  • type: code
  • inputs:
      version: ${version}
      slack_channel: ${slack_channel}
      discord_channel_id: ${discord_channel_id}
      x_file_path: ${output_dir}/x-post-v${version}.txt
      slack_post: ${improve-announcements.results[0].response}
      discord_post: ${improve-announcements.results[1].response}
      x_post: ${improve-announcements.results[2].response}
python
version: str
slack_channel: str
discord_channel_id: str
x_file_path: str
slack_post: str
discord_post: str
x_post: str

summary = f"""Release Announcements Summary for v{version}
{'=' * 50}

POSTED:
- Slack: #{slack_channel}
- Discord: channel {discord_channel_id}

SAVED FOR REVIEW:
- X post: {x_file_path}

---
SLACK ANNOUNCEMENT:
{slack_post}

---
DISCORD ANNOUNCEMENT:
{discord_post}

---
X POST (pending human review):
{x_post}
"""

result: str = summary

Outputs

summary

Summary of posted announcements and saved X post.

  • source: ${generate-summary.result}
"~/.pflow/workflows/release-announcements.pflow.md"
Efficient & Secure by Design

The AI orchestrates. It never sees your data.

pflow uses structure-only orchestration during workflow creation. The AI understands what to connect, not the data flowing through it. Your sensitive information stays in the runtime and never enters AI context.

Result: 10-20× token efficiency. Sensitive data stays out of AI context — relevant for regulated industries.

Use case: Let powerful cloud models create a workflow. Use local or compliance-verified models inside the workflow to read data.
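A minimal sketch of the idea (an assumption about the behavior, not pflow's actual API): the orchestrator reduces a payload to field names and types before anything reaches the model, so the model can wire outputs to inputs without ever seeing values.

```python
import json

def schema_only(payload: dict) -> dict:
    """Map each field to its type name; values never leave the runtime."""
    return {key: type(value).__name__ for key, value in payload.items()}

# Hypothetical record; in the structure-only scheme the real values stay local
record = {"id": 123, "name": "John Smith", "email": "john@example.com", "active": True}
print(json.dumps(schema_only(record)))
# {"id": "int", "name": "str", "email": "str", "active": "bool"}
```

A few hundred tokens of type structure replaces thousands of tokens of raw data, which is where the efficiency and privacy claims both come from.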

Traditional MCP
AI Context Window:
{
  "id": 123,
  "name": "John Smith",
  "email": "john@example.com",
  "ssn": "███-██-████",
  "dob": "1990-01-15",
  "address": {...},
  "payment_method": {...},
  ... 40 more fields
}
Tokens: 3,847
Status: Full data exposed
pflow (structure-only)
AI Context Window:
Only see what's needed:
${customer.id} : int
${customer.name} : string
${customer.email} : string
${customer.status} : enum
[43 additional fields cached]
[Actual data → 🔒 permission required]
Tokens: 300
Status: Structure only
pflow logo
Safe by Design

Your safety checks can't be skipped. Ever.

Traditional agents make individual tool calls, reasoning through each step every time. They might skip validation, modify the wrong data, or take different execution paths. This is why developers limit agents to read-only operations.

With pflow, agents operate with workflows as composite tools instead of executing individual steps. The entire workflow becomes a single, deterministic tool call—safety checks and guardrails are compiled in and can't be bypassed.

The result? You can automate write operations, deployments, and workflows you'd never trust to a traditional agent.

Traditional Agent
Workflow Execution:
Run 1:
→ validate_input()
→ ask_confirmation()
→ execute_write()
 Success
Run 2:
→ validate_input()
→ ask_confirmation()
→ execute_write()
 Skipped safety
Run 3:
→ validate_input()
→ exec_different_path()
 Different approach
 or failure
Unpredictable. Can't trust with writes.
pflow (compiled)
Workflow Execution:
Run 1:
→ validate_input()
→ ask_confirmation()
→ execute_write()
 Success
Run 2:
→ validate_input()
→ ask_confirmation()
→ execute_write()
 Success
Run 3:
→ validate_input()
→ ask_confirmation()
→ execute_write()
 Success
Deterministic. Safe for production.
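The guarantee can be sketched in a few lines (an illustrative toy, not pflow internals): when the step list is fixed data rather than a per-run decision, every run necessarily passes through every safety check.

```python
# A compiled workflow as a fixed list of steps: no model chooses "what next?"
def run_workflow(steps, payload):
    for step in steps:
        payload = step(payload)
    return payload

audit_log = []
def validate_input(p): audit_log.append("validate"); return p
def ask_confirmation(p): audit_log.append("confirm"); return p
def execute_write(p): audit_log.append("write"); return p

workflow = [validate_input, ask_confirmation, execute_write]
for _ in range(3):  # every run hits every guardrail, in order
    run_workflow(workflow, {})
# audit_log holds "validate", "confirm", "write" three times over
```

There is simply no code path in which `execute_write` runs before `validate_input`, which is the property a free-form agent cannot offer.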

Two Ways to Use pflow

The open source CLI is the foundation — built for AI agents and developers. Cloud extends it with team collaboration and hosted execution.

Open Source CLI

Install in 60 seconds

Local-first execution
Works with any (local) agent
Git-friendly workflow files
No external dependencies
Full workflow trace and debug output
Open Source CLI illustration
Perfect For
Individual developers
CI/CD integration
Privacy-conscious workflows

"If this ain't a gem I don't fucking know what is." (Opus 4.0)

Managed Cloud

Everything in CLI, plus team features

Hosted workflow execution
Team collaboration
Shared workflow libraries
Enterprise SSO
Managed Cloud illustration
Right now, all focus is on making the CLI loved by agents and devs. Cloud comes when the foundation is solid.
Free to start
Perfect For
Teams and organizations
Production deployments
Shared workflow libraries
Coming Q3 2026

FAQ

Common Questions


Will pflow work with my agent?
Yes. If your agent can use bash tools or MCP tools, it can use pflow. The only limitation right now is that the agent needs to run on your machine (Claude Code, Cursor, Codex, the Claude Desktop app, etc.), not in the cloud or browser. pflow Cloud and self-hosted MCP servers will remove this limitation.

What to expect