I built an MCP server that gives Claude Code a "phone a friend" lifeline. Instead of relying on one model's perspective, Claude can pull in GPT, Gemini, DeepSeek, or any OpenAI-compatible model for a structured multi-round debate — and participate as an active debater itself.
How it works:
You ask Claude to brainstorm a topic
All configured models respond in parallel (Round 1)
Claude reads their responses and pushes back with its own take
Models see each other's responses and refine across rounds
A synthesizer produces the final consolidated output
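The loop above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual brainstorm-mcp source: the `ModelFn` type, the `debate` function, and the prompt format are all assumptions made for the sketch.

```typescript
// Hypothetical sketch of the multi-round debate loop described above.
// Names and signatures are illustrative, not the real brainstorm-mcp API.

type ModelFn = (prompt: string) => Promise<string>;

async function debate(
  topic: string,
  models: Record<string, ModelFn>,
  rounds: number,
  synthesize: (transcript: string[]) => Promise<string>
): Promise<string> {
  const transcript: string[] = [];
  let context = `Topic: ${topic}`;

  for (let round = 1; round <= rounds; round++) {
    // All models respond in parallel. allSettled means one model
    // failing is dropped for the round instead of killing the debate.
    const results = await Promise.allSettled(
      Object.entries(models).map(
        async ([name, fn]) => `${name}: ${await fn(context)}`
      )
    );
    const responses = results
      .filter((r): r is PromiseFulfilledResult<string> => r.status === "fulfilled")
      .map((r) => r.value);
    transcript.push(...responses);

    // Next round, every model sees everyone's prior responses.
    context = `Topic: ${topic}\nPrior responses:\n${transcript.join("\n")}`;
  }

  // Final consolidated output from the synthesizer model.
  return synthesize(transcript);
}
```

The key design point is `Promise.allSettled` rather than `Promise.all`: a rejected model call is filtered out for that round while the surviving responses still feed the shared context.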
Claude isn't just orchestrating — it has full conversation context, so it knows what you're working on and argues its position alongside the other models. They genuinely build on and challenge each other's ideas.
A 3-round debate with 3 models costs roughly $0.02-0.05. The debate is also fault-tolerant: if one model fails, the remaining models carry on and you still get a synthesized result.
npm: npx brainstorm-mcp
GitHub: https://github.com/spranab/brainstorm-mcp
Sample debate (GPT-5.2 vs DeepSeek vs Claude): https://gist.github.com/spranab/c1770d0bfdff409c33cc9f985043...
Free, MIT licensed. Works with any OpenAI-compatible API including local Ollama.
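For a local-only setup, the server can be registered like any other MCP server. The JSON below is a sketch: the server name, env var names, and config keys are assumptions for illustration (check the README for the real ones); `http://localhost:11434/v1` is Ollama's standard OpenAI-compatible endpoint.

```json
{
  "mcpServers": {
    "brainstorm": {
      "command": "npx",
      "args": ["brainstorm-mcp"],
      "env": {
        "OPENAI_BASE_URL": "http://localhost:11434/v1",
        "OPENAI_API_KEY": "ollama"
      }
    }
  }
}
```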