Every CRM I've looked at does the same thing when it adds "AI": it puts a chat interface in front of its data. You type a question, the AI turns it into a query, the query runs, you get results. It's a SQL query dressed in a trench coat.
The question I kept asking was different: what would it look like if an AI agent could actually operate a CRM? Not answer questions from a human, but autonomously manage pipeline — the way a great sales ops person does, running in the background while you're on calls.
That's what I built. Here's how it works.
## What MCP Actually Is
Model Context Protocol (MCP) is Anthropic's open standard for tool interfaces. The concept is simple: every tool has a name, a description, and a JSON Schema describing its parameters. An LLM sees these tools, decides when to call one, sends a call with matching parameters, gets the result back, and uses it in its next reasoning step.
Mechanically, this is similar to OpenAI's function calling. What's different is intent. MCP is designed for agents that compose multiple tools to complete a goal — not for wrapping a single API call. Tool discovery, chaining, and structured error handling are first-class concerns in the protocol design.
For a CRM, this changes the entire architecture question. Instead of building a chat interface that translates natural language to queries, you build a set of tools that represent every meaningful operation: create a deal, update a stage, log an activity, search for a contact. An agent calling those tools is operating the CRM — not asking it questions through a human intermediary.
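To make that concrete, here's a minimal sketch of what an MCP-style tool registry looks like as a data structure. The tool name, schema, and handler here are illustrative, not Supersonic's actual implementation:

```typescript
// Minimal sketch of an MCP-style tool registry. Names and handlers are
// illustrative only -- not Supersonic's real code.
type JsonSchema = Record<string, unknown>;

interface Tool {
  name: string;
  description: string;
  parameters: JsonSchema;
  // The function an agent's tool call ultimately dispatches to.
  handler: (args: Record<string, unknown>) => unknown;
}

const tools: Tool[] = [
  {
    name: "update_deal_stage",
    description: "Move a deal to a new pipeline stage.",
    parameters: {
      type: "object",
      properties: {
        deal_id: { type: "string" },
        stage: { type: "string", enum: ["lead", "qualified", "proposal"] },
      },
      required: ["deal_id", "stage"],
    },
    handler: (args) => ({ ok: true, moved_to: args.stage }),
  },
];

// Dispatch a tool call by name, the way an MCP server routes a request.
function callTool(name: string, args: Record<string, unknown>): unknown {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.handler(args);
}
```

The key property is that the schema travels with the tool: a client can list the registry and know the full API surface without any out-of-band documentation.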
## The 19 Tools
Supersonic's MCP endpoint exposes 19 tools across three domains. Every operation you'd do manually in a CRM UI is a named, schema-defined tool callable by any MCP-compatible agent.
Here's what the create_deal tool schema actually looks like — the JSON Schema that any MCP client receives at discovery time:
```json
{
  "name": "create_deal",
  "description": "Create a new deal in the pipeline.",
  "parameters": {
    "type": "object",
    "properties": {
      "name": { "type": "string", "description": "Deal name (required)" },
      "company": { "type": "string", "description": "Company name" },
      "contact_name": { "type": "string", "description": "Primary contact name" },
      "contact_email": { "type": "string", "description": "Primary contact email" },
      "stage": {
        "type": "string",
        "enum": ["lead", "qualified", "proposal", "negotiation", "closed_won", "closed_lost"],
        "description": "Pipeline stage (default: lead)"
      },
      "value": { "type": "number", "description": "Deal value in dollars" },
      "mrr": { "type": "number", "description": "Monthly recurring revenue" },
      "close_date": { "type": "string", "format": "date" },
      "probability": { "type": "integer", "minimum": 0, "maximum": 100 },
      "notes": { "type": "string" }
    },
    "required": ["name"]
  }
}
```
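A client consuming this schema can reject malformed calls before they ever hit the server. Here's a tiny illustrative validator for the `create_deal` schema above: it checks only required fields and the `stage` enum, not full JSON Schema (a real client would use a proper JSON Schema library):

```typescript
// Tiny illustrative validator for the create_deal schema above.
// Checks required fields and the stage enum only -- not full JSON Schema.
const createDealSchema = {
  required: ["name"],
  stageEnum: ["lead", "qualified", "proposal", "negotiation", "closed_won", "closed_lost"],
};

function validateCreateDeal(args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const field of createDealSchema.required) {
    if (!(field in args)) errors.push(`missing required field: ${field}`);
  }
  if ("stage" in args && !createDealSchema.stageEnum.includes(args.stage as string)) {
    errors.push(`invalid stage: ${args.stage}`);
  }
  return errors;
}
```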
Clean. Self-describing. An LLM can read this schema and know exactly what it can and can't do — no prompt engineering needed to explain the API surface.
## Why Sub-100ms Matters for Agents
When a human uses a CRM, a 400ms response is invisible. When an agent chains 10 tool calls to qualify a lead, log the activity, create a deal, link a contact, and schedule a follow-up, latency stacks. At 400ms per call, 10 calls is 4 seconds. At 80ms, it's 800ms.
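The arithmetic is worth making explicit, because sequential tool calls multiply rather than average:

```typescript
// Latency stacking: total tool-execution time for a chain of sequential calls.
function chainLatencyMs(perCallMs: number, calls: number): number {
  return perCallMs * calls;
}

// 10 sequential calls at 400ms/call vs 80ms/call:
chainLatencyMs(400, 10); // 4000ms -- 4 seconds of pure tool time
chainLatencyMs(80, 10);  // 800ms
```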
For human-facing UIs, that difference is nice-to-have. For agent workflows running dozens of times a day, it compounds into real cost and real time. Here are the architectural decisions that keep Supersonic fast:
| Decision | Impact | Typical latency |
|---|---|---|
| Direct `pool.query()`, no ORM | Predictable query plans, no translation overhead | ~5ms |
| GIN index on full-text search | Contact search scales to 100k+ rows | ~12ms |
| B-tree indexes on `stage`, `created_at` | Filter queries avoid full table scans | ~4ms |
| Shaped response payloads | Returns only fields the agent needs | Smaller JSON → less token burn |
| `pg.Pool` connection pooling | No connection setup on each request | ~2ms saved/call |
The result: median end-to-end response time under 80ms on warm connections. For an agent chaining 10 calls, that's under 1 second of tool execution time — leaving most of the latency budget for LLM reasoning.
## A Real Workflow: Pipeline Hygiene on Autopilot
Here's the concrete case this was designed for. It's 6am Monday. Before you open your laptop, an agent has already run a pipeline hygiene pass:
- `get_pipeline_summary` → identifies 3 deals stuck in `negotiation` with probability still at 60%, no activity logged in 14+ days.
- `get_deal_timeline` (for each) → confirms zero logged touchpoints in two weeks. Not a slow deal — a forgotten one.
- `create_activity` (for each) → logs a note: "No contact for 14 days. Stale deal — follow-up required." Sets `needs_follow_up: true`.
- `update_deal` (for each) → drops probability from 60% to 20% to reflect the stall in weighted pipeline math.
- Generates a summary: company name, original close date, last activity date, and suggested next action for each stale deal.
- Sends you the summary via email or Slack before you're awake.
Total: 10 tool calls. About 800ms of execution time at median latency. You wake up to a prioritized list of stale deals with context, not a raw pipeline report you have to manually audit.
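The hygiene pass above can be sketched as a short orchestration loop. This version runs against a stub client that just records calls — the tool names match the article, but the orchestration code itself is hypothetical, not Supersonic's agent:

```typescript
// Sketch of the pipeline-hygiene pass, run against a stub MCP client
// that records calls instead of hitting a server. Orchestration logic
// is illustrative; tool names match the article.
type ToolCall = { tool: string; args: Record<string, unknown> };

function runHygienePass(
  call: (tool: string, args: Record<string, unknown>) => void,
  staleDealIds: string[]
): void {
  call("get_pipeline_summary", {});
  for (const id of staleDealIds) {
    call("get_deal_timeline", { deal_id: id });
  }
  for (const id of staleDealIds) {
    call("create_activity", {
      deal_id: id,
      note: "No contact for 14 days. Stale deal — follow-up required.",
      needs_follow_up: true,
    });
  }
  for (const id of staleDealIds) {
    call("update_deal", { deal_id: id, probability: 20 });
  }
}

const log: ToolCall[] = [];
runHygienePass((tool, args) => log.push({ tool, args }), ["d1", "d2", "d3"]);
// With 3 stale deals: 1 summary call + 3 x 3 per-deal calls = 10 total.
```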
The same pattern applies to post-call logging. After a sales call, you dictate a 30-second voice note. The agent parses it, runs create_activity, updates the deal stage if you mentioned moving forward, and sets a follow-up date. You never touched the CRM UI. The pipeline stays accurate because the agent keeps it accurate.
## The MCP Endpoint
The tool registry is public and machine-readable:
```
GET https://supersonicos-2.polsia.app/api/mcp/tools
```
Returns the full schema for all 19 tools. Any MCP-compatible client — Claude Desktop, a custom agent, a LangChain pipeline — can hit this endpoint and immediately know what operations are available and how to call them.
All write operations require authentication via Bearer token. Generate an API key from the dashboard, pass it as Authorization: Bearer sk_..., and the agent has full read/write access.
```shell
curl https://supersonicos-2.polsia.app/api/mcp/deals \
  -H "Authorization: Bearer sk_your_key_here" \
  -G --data-urlencode "stage=negotiation" \
  --data-urlencode "sort=updated"
```
The response is a clean JSON array of deals — no envelope, no metadata noise, just the data the agent needs to reason about next.
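An agent written in TypeScript would build the same request programmatically. This helper only constructs the URL and headers — the endpoint and query parameters come from the curl example above, and the key value is a placeholder:

```typescript
// Build an authenticated request to the MCP deals endpoint.
// Mirrors the curl example; the API key value is a placeholder.
function dealsRequest(
  apiKey: string,
  stage: string
): { url: string; headers: Record<string, string> } {
  const url = new URL("https://supersonicos-2.polsia.app/api/mcp/deals");
  url.searchParams.set("stage", stage);
  url.searchParams.set("sort", "updated");
  return {
    url: url.toString(),
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}
```

From there it's one `fetch(req.url, { headers: req.headers })` away from a deal list the agent can reason over.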
## Who This Is For
The companies this makes sense for are running founder-led sales. You're personally on every call. You have 30–80 deals in flight. You don't have time to keep the CRM up to date, so it falls behind, becomes unreliable, and you stop trusting it. At that point, the CRM is costing you money — not making you money.
An agent that can natively operate the CRM changes the equation entirely. The agent handles pipeline hygiene between calls. It surfaces what's stale, what needs follow-up, what's at risk before close. You spend your time closing, not doing data entry.
This isn't a vision for 2027. The MCP endpoint is live. The tools are production-ready. The only thing between you and an agent-operated pipeline is an API key.
## Try It
The agent demo shows a live simulation of an agent working a pipeline — tool calls, reasoning steps, outputs — in real time. The MCP explorer lets you browse all 19 tool schemas and run sample requests interactively.