Quick Start¶
This guide gets you from zero to a generated podcast episode in about 10 minutes.
Prerequisites¶
- A Google Cloud project with billing enabled
- A Google OAuth 2.0 client ID and secret (create one)
- The Vertex AI API enabled in your project
- An MCP-compatible client (Claude Desktop, Cursor, Zed, or mcp-remote)
1. Deploy to Cloud Run¶
The fastest path is deploying via the CI/CD pipeline. Fork the repo, configure secrets in GitHub, and push to main — the Deploy workflow handles the rest.
See Infrastructure → Terraform for the full setup.
After deployment, note your Cloud Run service URL — you'll need it in the next step.
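If you lose track of the URL, you can print it with gcloud. A minimal sketch, assuming the service is named `the-curator` and deployed in `us-central1` (substitute your own service name and region):

```shell
# Print the HTTPS URL of the deployed Cloud Run service.
# "the-curator" and "us-central1" are assumptions; use your values.
gcloud run services describe the-curator \
  --region=us-central1 \
  --format='value(status.url)'
```

The same URL also appears in the Deploy workflow's logs and in the Cloud Run console.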
2. Connect an MCP client¶
For Claude Desktop on macOS, edit ~/Library/Application Support/Claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "the-curator": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-cloud-run-url/sse"]
    }
  }
}
```
Restart Claude Desktop. On first use, a browser window will open for Google OAuth.
For any other client, point it at https://your-cloud-run-url/sse as an SSE MCP server. Consult your client's docs for the exact configuration format.
3. Generate a transcript¶
In your MCP client, call the transcript-generation tool.
The server calls Gemini and returns a list of (speaker, text) turns. Review the transcript — edit it if you like — before proceeding.
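Under the hood this is an MCP `tools/call` request over JSON-RPC. A minimal sketch of what the client sends, assuming a hypothetical tool name `generate_transcript` with a `topic` argument (check the server's tool list for the real names and parameters):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_transcript",
    "arguments": { "topic": "The Voyager probes" }
  }
}
```

The result carries the (speaker, text) turns, e.g. `[{"speaker": "Host", "text": "..."}, ...]`, which you can edit freely before synthesis.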
4. Synthesize the episode¶
The server synthesizes each turn with Vertex AI TTS, writes a .wav file, and uploads it to GCS. You'll get back a GCS path like episodes/2026-05-16_18-30/Voyager.wav.
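Synthesis is likewise a tool call. A sketch assuming a hypothetical `synthesize_episode` tool that takes a title and the transcript turns (again, consult the server's tool list for the actual schema):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "synthesize_episode",
    "arguments": {
      "title": "Voyager",
      "turns": [
        { "speaker": "Host", "text": "Welcome back to the show." },
        { "speaker": "Guest", "text": "Glad to be here." }
      ]
    }
  }
}
```

On success the result contains the GCS object path of the rendered .wav, in the `episodes/<timestamp>/<title>.wav` layout shown above.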
What's next?¶
- MCP Clients — connecting additional clients and troubleshooting OAuth
- Configuration — environment variables reference
- Infrastructure — Terraform setup and CI/CD pipeline