Cicaddy and MCP: How Model Context Protocol Powers AI Agents Inside CI/CD Pipelines
Most MCP demos end at the developer's laptop. Cicaddy puts MCP where it actually matters — inside the pipeline.
The Model Context Protocol has spent the last year earning its reputation as the standard for connecting AI agents to external tools and data. The spec is solid. The ecosystem is growing. But the vast majority of MCP deployments live in one place: local development environments. An agent running in Claude Desktop or Cursor calls an MCP server, gets some data, and the developer nods approvingly.
CI/CD pipelines are a different animal. They run in ephemeral containers. They enforce strict network boundaries. They process every commit, every merge request, every deployment — automatically, at scale, with no human sitting in front of a terminal. When Red Hat's Cicaddy project wires MCP into this environment, it is not a demo. It is a stress test of whether MCP's architecture can survive contact with production infrastructure [1].
The answer turns out to be yes — but only if you understand which MCP transport to use, when pre-processing beats a live MCP call, and how to handle secrets in an environment where one misconfigured variable can end up in a pipeline log for everyone in the organization to read.
Four Transports, Four Different Tradeoffs
MCP is not a single wire protocol. It supports multiple transport mechanisms, and the choice between them determines whether your CI agent can actually reach the data it needs.
Cicaddy's MCP integration supports four: HTTP, stdio, SSE (Server-Sent Events), and WebSocket [1]. Each transport solves a different connectivity problem, and CI/CD pipelines surface those problems more aggressively than any other environment.
HTTP is the default for remote MCP servers. The CI runner sends a request, the MCP server responds, the connection closes. Stateless, cacheable, compatible with every proxy and firewall configuration your infrastructure team has already deployed. When your MCP server lives outside the pipeline — a managed service, a team-shared instance, anything with its own hostname — HTTP is the transport you reach for first.
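Under the hood, an HTTP MCP call is a JSON-RPC 2.0 request POSTed to the server's endpoint. A minimal sketch of that request-response cycle using only Python's standard library (the endpoint, method name, and token handling are illustrative, not Cicaddy's actual client):

```python
import json
import urllib.request

def build_mcp_request(method: str, params: dict, request_id: int = 1) -> bytes:
    """Build a JSON-RPC 2.0 payload, the message format MCP uses over HTTP."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode("utf-8")

def call_mcp_http(endpoint: str, token: str, method: str,
                  params: dict, timeout: int = 300) -> dict:
    """POST a single request-response MCP call; the connection closes after the reply."""
    req = urllib.request.Request(
        endpoint,
        data=build_mcp_request(method, params),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())
```

Because each call is an independent request, the pipeline's existing proxies, retries, and egress rules apply without any special handling.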
stdio is the opposite end of the spectrum. The MCP server runs as a subprocess inside the same container as the CI agent. Communication happens over standard input/output streams. No network involved. This matters in CI for two reasons: first, some pipeline runners have restricted egress — they cannot make outbound network calls without explicit allowlisting. A stdio MCP server sidesteps that entirely. Second, stdio servers start fast. When your pipeline allocates a fresh container for every job, you cannot afford a thirty-second handshake with a remote server before the agent begins work.
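The stdio transport can be sketched in a few lines: spawn the server as a subprocess and exchange newline-delimited JSON-RPC messages over its pipes. This is a simplified illustration of the mechanics, not Cicaddy's implementation:

```python
import json
import subprocess

class StdioMCPClient:
    """Minimal stdio transport sketch: newline-delimited JSON-RPC
    over a subprocess's stdin/stdout. No network involved."""

    def __init__(self, command: list[str]):
        # The MCP server runs inside the same container as the agent.
        self.proc = subprocess.Popen(
            command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
        )

    def request(self, method: str, params: dict, request_id: int = 1) -> dict:
        msg = {"jsonrpc": "2.0", "id": request_id,
               "method": method, "params": params}
        self.proc.stdin.write(json.dumps(msg) + "\n")
        self.proc.stdin.flush()
        return json.loads(self.proc.stdout.readline())

    def close(self):
        self.proc.stdin.close()
        self.proc.wait()
```

The only startup cost is the subprocess spawn, which is why stdio fits ephemeral CI containers so well.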
SSE enables server-push patterns. The CI agent opens a connection, and the MCP server streams events as they become available. This is useful for MCP servers that aggregate data over time — monitoring dashboards, build status feeds, log streams — where the agent needs to observe rather than query. In a pipeline context, SSE connections need careful timeout management. A long-lived SSE connection in an ephemeral CI container is a resource leak waiting to happen.
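SSE framing is plain text: `event:` and `data:` fields terminated by a blank line. A minimal parser sketch makes the format concrete (a real client would also handle reconnection and the read timeouts discussed above):

```python
def parse_sse(stream_lines):
    """Parse Server-Sent Events framing from an iterable of lines.
    'event:' names the event, 'data:' lines carry the payload,
    and a blank line terminates each event."""
    event = {"event": "message", "data": []}
    for line in stream_lines:
        line = line.rstrip("\n")
        if line == "":
            if event["data"]:
                yield {"event": event["event"], "data": "\n".join(event["data"])}
            event = {"event": "message", "data": []}
        elif line.startswith("event:"):
            event["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            event["data"].append(line[len("data:"):].strip())
```

In CI, the iterable feeding this parser should come from a connection opened with an explicit read timeout, so a silent server ends the job instead of stalling it.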
WebSocket provides full-duplex communication. Both the client and server can send messages at any time. This is the transport for conversational MCP interactions — scenarios where the agent and the server exchange multiple rounds of data before reaching a conclusion. In CI, WebSocket connections face the same lifecycle management challenges as SSE, with the added complexity that both sides can initiate messages.
The practical rule for CI: start with HTTP for remote servers, use stdio for local tools, and only reach for SSE or WebSocket when your specific MCP server's interaction pattern demands it. Most CI workloads are request-response, not streaming.
The Configuration That Makes It Work
Cicaddy defines MCP server connections in YAML, and the configuration format reveals how seriously the project takes production deployment [1]:
```yaml
MCP_SERVERS_CONFIG: >-
  [
    {"name": "devlake", "protocol": "http",
     "endpoint": "https://devlake-mcp.example.com/mcp",
     "headers": {"Authorization": "Bearer ${DEVLAKE_TOKEN}"},
     "timeout": 300}
  ]
```
Three details in this configuration matter more than they first appear.
First, the protocol field. Every MCP server declaration explicitly states its transport. This is not inferred from the endpoint URL or left to auto-detection. In CI, auto-detection is a reliability hazard — the agent needs to know exactly how it will communicate before the job starts, not discover it at runtime when the pipeline is already burning minutes.
Second, ${DEVLAKE_TOKEN}. The authorization header references an environment variable, not a literal token. This is the only acceptable pattern in CI. Secrets live in the pipeline's secret store — GitLab CI/CD variables, GitHub Actions secrets, Jenkins credentials — and get injected into the environment at runtime. A hardcoded token in a YAML config is a security incident waiting to be discovered by your next audit.
Third, the timeout field. Five minutes. That is an eternity in a synchronous API call and completely reasonable for an MCP server that is querying a data platform like DevLake, aggregating metrics across repositories, and returning a structured analysis. CI agents that call MCP servers need generous timeouts because the work those servers perform is often heavier than a typical API request. But generous is not infinite — every timeout should reflect the actual expected response time of the server, plus a margin. An unlimited timeout in CI means a hung MCP server can silently stall your entire pipeline.
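The `${DEVLAKE_TOKEN}` placeholder implies an expansion step when the config is loaded. A minimal sketch of that step, which fails fast when a secret is missing rather than sending a literal `${...}` string as a header (an illustration of the pattern, not Cicaddy's actual loader):

```python
import json
import os
import re

def expand_secrets(raw_config: str) -> list[dict]:
    """Replace ${VAR} placeholders with environment values at load time.
    The token only ever exists in the runner's environment, never on disk."""
    def lookup(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            # Fail the job immediately instead of authenticating with garbage.
            raise KeyError(f"missing CI secret: {name}")
        return os.environ[name]

    return json.loads(re.sub(r"\$\{(\w+)\}", lookup, raw_config))
```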
You can declare multiple MCP servers in the same config array, each with its own protocol, endpoint, and credentials. A single CI agent might connect to DevLake over HTTP for metrics data and to Context7 over HTTP for library documentation, using different authentication tokens for each.
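A two-server declaration might look like this; the Context7 endpoint and token name below are illustrative placeholders following the same format as the DevLake example:

```yaml
MCP_SERVERS_CONFIG: >-
  [
    {"name": "devlake", "protocol": "http",
     "endpoint": "https://devlake-mcp.example.com/mcp",
     "headers": {"Authorization": "Bearer ${DEVLAKE_TOKEN}"},
     "timeout": 300},
    {"name": "context7", "protocol": "http",
     "endpoint": "https://context7-mcp.example.com/mcp",
     "headers": {"Authorization": "Bearer ${CONTEXT7_TOKEN}"},
     "timeout": 120}
  ]
```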
Two Agents, Two MCP Patterns
Cicaddy's architecture splits CI intelligence across specialized agents, and the way each agent uses MCP illustrates different integration patterns.
The MR Agent runs on every merge request. Its job: review code changes, check for deprecated patterns, validate that the proposed changes align with current library APIs. This agent connects to the Context7 MCP server to pull current documentation for the libraries used in the project [1]. When a developer submits a merge request that calls a function deprecated in the latest release of a dependency, the MR Agent catches it — not because someone hardcoded a list of deprecated functions, but because the agent checked live documentation through MCP at review time.
This is a pattern worth internalizing. Static linting rules catch known-bad patterns. MCP-connected agents catch patterns that became bad since the last time someone updated the rules. The difference matters in any codebase with active dependencies.
The Task Agent handles operational intelligence. It connects to the DevLake MCP server to pull DORA metrics — deployment frequency, lead time for changes, change failure rate, mean time to recovery — and uses them to assess developer and team health [1]. A sudden spike in change failure rate after a deployment triggers a different agent response than a gradual increase over weeks. The metrics are not decorative. They drive the agent's decisions about whether to flag a deployment for human review or let it proceed.
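The distinction between a sudden spike and a gradual increase is easy to make concrete. A hypothetical sketch of how an agent might classify a change-failure-rate history before deciding whether to escalate (the function and thresholds are illustrative, not part of Cicaddy):

```python
def classify_failure_trend(rates: list[float], spike_factor: float = 2.0) -> str:
    """Classify a change-failure-rate history, ordered oldest to newest.
    A latest value far above the prior baseline is a spike; a slow climb
    is drift; anything else is steady."""
    if len(rates) < 2:
        return "steady"
    baseline = sum(rates[:-1]) / len(rates[:-1])
    latest = rates[-1]
    if baseline > 0 and latest >= spike_factor * baseline:
        return "spike"   # flag the deployment for human review
    if latest > rates[0]:
        return "drift"   # worth surfacing in a report, not blocking
    return "steady"
```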
DevLake is the interesting case here because it demonstrates MCP as a bridge between CI's isolated execution environment and the broader engineering intelligence that lives outside the pipeline. A CI runner, by design, knows almost nothing about the world beyond the current repository and the current job. MCP servers give it peripheral vision.
The Pre-Processing Pattern: When MCP Is Not Available
Not every data source speaks MCP. Not every internal tool has an MCP server sitting in front of it. Cicaddy addresses this gap with a pre-processing pattern that deserves attention because it solves a problem every team deploying MCP in CI will eventually face [1].
The pattern: before the AI agent begins its work, a deterministic pre-processing step runs. This step collects data from sources that lack MCP support — internal APIs, custom databases, proprietary tools — and writes the results to local files. The agent then accesses this data through a local file-reading MCP tool, treating pre-collected data the same way it treats live MCP server responses.
```yaml
# Pre-processing step collects data before the agent runs
pre_process:
  - name: collect-coverage
    script: ./scripts/fetch-coverage-report.sh
    output: /tmp/agent-context/coverage.json
  - name: collect-dependency-audit
    script: ./scripts/audit-deps.sh
    output: /tmp/agent-context/deps-audit.json

# Agent sees pre-collected data via local file tools
MCP_SERVERS_CONFIG: >-
  [
    {"name": "local-files", "protocol": "stdio",
     "command": "mcp-file-server",
     "args": ["--root", "/tmp/agent-context"]}
  ]
```
This is pragmatic architecture. The agent does not care whether data arrived via a live MCP connection or a pre-processing script. Its interface is the same: MCP tools that return structured data. The pre-processing step handles the messiness of integrating with systems that were built before MCP existed and may never get MCP support.
The pattern also has a security advantage. Pre-processing scripts run with their own credentials and their own network access, separate from the agent's MCP connections. You can grant the pre-processing step access to a sensitive internal database without giving the AI agent — or its MCP servers — any direct access to that database. The data flows one way: from the pre-processing step into local files, from local files into the agent's context.
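A pre-processing runner in this style is only a few lines. The sketch below is an assumption about the mechanics, not Cicaddy's actual implementation: it runs each collector script and writes its stdout into the agent's context directory, which the file-reading MCP tool later serves:

```python
import pathlib
import subprocess

def run_preprocessing(steps: list[dict], context_dir: str = "/tmp/agent-context"):
    """Run deterministic collector scripts before the agent starts.
    Each step's stdout lands in a file the agent reads through a local
    file MCP tool; the agent never gets the collectors' credentials."""
    pathlib.Path(context_dir).mkdir(parents=True, exist_ok=True)
    for step in steps:
        result = subprocess.run(
            step["script"], shell=True,
            capture_output=True, text=True, check=True,
        )
        pathlib.Path(step["output"]).write_text(result.stdout)
```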
Security in the CI Context
Running MCP inside CI pipelines introduces security considerations that do not exist in local development.
Secret injection. Every MCP server that requires authentication needs credentials. In CI, those credentials must come from the pipeline's secret store, never from configuration files checked into the repository. The ${DEVLAKE_TOKEN} pattern in Cicaddy's YAML config is the correct approach — the token value exists only in the runner's environment at execution time and never touches disk in plaintext.
Network boundaries. CI runners often operate in restricted network segments. An MCP server running as a stdio subprocess has zero network exposure. An MCP server accessed over HTTP requires an explicit egress rule. Teams deploying MCP in CI need to map every MCP server connection to a firewall rule and ensure those rules are as narrow as possible — specific hostnames, specific ports, no wildcards.
Output sanitization. AI agents connected to MCP servers generate logs. Those logs may contain data returned by MCP servers — and that data may include sensitive information. DORA metrics might reveal deployment patterns. Library documentation queries might reveal which internal libraries a project uses. Pipeline logs in most CI systems are visible to anyone with repository access. Sanitizing agent output before it hits the pipeline log is not optional — it is a security requirement.
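One way to implement that requirement is a redaction pass over agent output before it is echoed to the job log. The patterns below are illustrative examples of common secret shapes; a real deployment would extend them for its own token formats:

```python
import re

# Illustrative patterns for secret-shaped substrings; extend for your own formats.
SECRET_PATTERNS = [
    re.compile(r"Bearer\s+[A-Za-z0-9._\-]+"),  # Authorization header values
    re.compile(r"glpat-[A-Za-z0-9\-]{20,}"),   # GitLab personal access tokens
    re.compile(r"ghp_[A-Za-z0-9]{36,}"),       # GitHub personal access tokens
]

def sanitize_log(text: str) -> str:
    """Redact secret-shaped substrings before agent output reaches the pipeline log."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```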
Token scoping. The token used by a CI agent to authenticate with an MCP server should have the minimum permissions required for that agent's tasks. The MR Agent's token for Context7 needs read access to documentation. It does not need write access. It does not need access to billing APIs. If your MCP server supports scoped tokens, scope them. If it does not, raise that as a gap with the server maintainer.
MCP as the CI Intelligence Layer
The deeper lesson from Cicaddy's architecture is not about any specific MCP server or transport protocol. It is about what happens when you treat MCP as an infrastructure primitive rather than a developer convenience.
CI/CD pipelines have been deterministic for decades. That determinism is their strength — the same inputs produce the same outputs, every time. But determinism also means rigidity. A pipeline cannot adapt to context it cannot see. It cannot reason about whether a change is risky or routine. It cannot look at a merge request and understand that the function being modified is called by seventeen other services in production.
MCP does not replace the determinism. Cicaddy's pre-processing pattern makes that explicit: deterministic data collection happens first, and the AI agent operates on the results. What MCP adds is a standardized way for the intelligent layer of the pipeline — the agent — to reach external context without custom integrations for every data source.
This is MCP doing what it was designed to do, in an environment that tests every assumption the protocol makes about connectivity, authentication, and lifecycle management. The fact that it works — that Red Hat is running this in production, not presenting it at a conference with a "coming soon" slide [1] — validates a thesis that the MCP community has been asserting for months: the protocol is ready for infrastructure, not just applications.
Where to Start
If you are running CI/CD pipelines and want to connect your first MCP server, start with a documentation server like Context7.
The reasoning: documentation servers are read-only, low-risk, and immediately useful. Connecting a documentation MCP server to your CI agent means every merge request review has access to current API references for your dependencies. No secrets beyond an API key. No write operations. No risk of the agent modifying external state. And the value is obvious the first time it catches a deprecated function call that would have shipped to production.
Once the documentation server is running, add DevLake or a similar metrics server. Now your agent can make decisions informed by both current library state and historical project health. Two MCP connections, two different data domains, and your pipeline has gone from blind execution to informed execution.
The jump from zero MCP servers to one is the hard part. Not technically — the YAML configuration is straightforward. The hard part is convincing your team that a CI agent should have access to data beyond the repository. Once that first connection is live and catching real issues, the second connection is an easy conversation.
References
[1] Red Hat Developer, "How to develop agentic workflows in a CI pipeline with cicaddy."