In the modern DevOps landscape, efficiency is king. We live in a world of YAML files, complex CLI commands, and sprawling dashboards. While powerful, these tools can create a steep learning curve and add friction to daily operations. What if you could manage your CI/CD pipelines as easily as having a conversation?
This is no longer science fiction. By combining the power of a large language model (LLM) like Anthropic’s Claude with a lightweight control server, we can create an intuitive, natural language interface for Azure DevOps. Imagine typing “run the staging build for the web-api” into a chat window and seeing it happen automatically.
In this tutorial, we’ll build a complete AIOps solution from the ground up. You will learn how to:
- Design an AI-driven automation architecture.
- Build a “Master Control Program” (MCP) server using Python to act as a secure bridge between Claude and Azure DevOps.
- Craft effective prompts to make Claude a reliable DevOps assistant.
- Connect everything to trigger and monitor Azure DevOps build pipelines using simple English commands.
Let’s dive in and revolutionize your CI/CD workflow.
The AIOps Vision: Natural Language for CI/CD
AIOps, or AI for IT Operations, aims to use artificial intelligence to automate and enhance operational workflows. In the context of CI/CD, this means moving beyond manual button-clicks and script-wrangling.
The goal is to create a system where team members, regardless of their deep technical knowledge of Azure DevOps, can interact with the delivery lifecycle. This approach offers several key benefits:
- Accessibility: Product managers, QA testers, and even junior developers can trigger builds or check statuses without needing to navigate the Azure DevOps UI.
- Speed: Conversational commands are often faster than navigating multiple web pages or recalling complex CLI syntax.
- Auditability: The MCP server provides a central point for logging all automated actions, creating a clear audit trail.
- Flexibility: The system is easily extendable. Once the foundation is in place, adding new capabilities like “deploy to production” or “get build logs” is straightforward.
Solution Architecture: Claude, MCP, and Azure DevOps
Our system consists of three core components working in harmony:
- Claude (The Brain): The LLM interprets the user’s natural language request. We’ll give it a “system prompt” that defines its role and the specific actions it can take, instructing it to respond with structured JSON.
- MCP Server (The Nervous System): This is a custom API server we’ll build with Python and FastAPI. It receives the structured JSON from Claude, validates the command, and translates it into the appropriate Azure DevOps REST API calls. This server is a critical security and logic layer—it ensures the LLM can’t execute arbitrary commands and only performs pre-defined, safe operations.
- Azure DevOps (The Hands): This is our target platform, containing all the build and release pipelines we want to manage. The MCP server will interact with it via its comprehensive REST API.
The workflow is simple:
```
User Command (Text) -> MCP Server -> Claude API -> Structured Command (JSON) -> MCP Server -> Azure DevOps API -> Action
```
Prerequisites
Before we start building, make sure you have the following:
- Python 3.8+ and `pip` installed.
- An Azure DevOps account with an organization and a project containing at least one build pipeline.
- An API key for Anthropic’s Claude.
- Basic familiarity with REST APIs, Python, and the command line.
Step 1: Setting Up Your Azure DevOps PAT
To allow our MCP server to communicate with Azure DevOps, we need a Personal Access Token (PAT).
- Log in to your Azure DevOps organization.
- Go to User settings > Personal Access Tokens.
- Click + New Token.
- Give your token a name (e.g., `mcp-server-pat`).
- Set the expiration date. For security, choose the shortest duration that works for your needs.
- Under Scopes, select Custom defined and grant the Build (Read & execute) permission. This gives our server the ability to list and queue builds.
- Click Create.
- Immediately copy the generated token. This is the only time you will see it. Store it in a secure location; we’ll add it to our environment file shortly.
Security Warning: A PAT is as powerful as your password. Never commit it to source control. Always use environment variables or a secret manager.
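Once the PAT is stored safely, it’s worth a quick sanity check before building the server. The sketch below (assuming the `requests` library and placeholder organization/project names) hits the pipelines endpoint; Azure DevOps expects HTTP Basic auth with an empty username and the PAT as the password:

```python
import os
from base64 import b64encode


def azdo_basic_auth(pat: str) -> str:
    """Azure DevOps Basic auth: empty username, PAT as the password."""
    return "Basic " + b64encode(f":{pat}".encode()).decode()


def check_pat(org_url: str, project: str, pat: str) -> int:
    """Call the pipelines endpoint; a 200 means the PAT and scope are valid."""
    import requests  # deferred so the auth helper has no third-party dependency
    resp = requests.get(
        f"{org_url}/{project}/_apis/pipelines?api-version=7.1-preview.1",
        headers={"Authorization": azdo_basic_auth(pat)},
    )
    return resp.status_code


if __name__ == "__main__":
    # Placeholder names -- substitute your own organization and project.
    print(check_pat(
        "https://dev.azure.com/YourOrganizationName",
        "YourProjectName",
        os.environ.get("AZDO_PAT", ""),
    ))
```

A 401 or 403 here usually means the token was mistyped or the Build scope wasn’t granted.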
Step 2: Building the MCP Server with Python and FastAPI
Now for the exciting part: building the server that orchestrates everything.
Project Setup
First, let’s set up our project directory and install the necessary libraries.
```bash
# Create and navigate to the project directory
mkdir mcp-server && cd mcp-server

# Create a Python virtual environment
python -m venv venv
source venv/bin/activate  # On Windows, use `venv\Scripts\activate`

# Install dependencies
pip install fastapi "uvicorn[standard]" requests python-dotenv anthropic
```
Next, create a file named .env in the root of your mcp-server directory. This file will hold our secrets. Fill it with your credentials:
```bash
# .env file
ANTHROPIC_API_KEY="sk-ant-..."
AZDO_ORG_URL="https://dev.azure.com/YourOrganizationName"
AZDO_PAT="your-azure-devops-pat-here"
AZDO_PROJECT_NAME="YourProjectName"
```
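A missing variable in `.env` tends to surface later as a confusing authentication error. One optional hardening (not part of the tutorial code, just a suggestion) is to fail fast at startup:

```python
import os

# The four variables main.py expects to find in the environment.
REQUIRED_VARS = ("ANTHROPIC_API_KEY", "AZDO_ORG_URL", "AZDO_PAT", "AZDO_PROJECT_NAME")


def check_env(env=os.environ) -> list:
    """Return the names of any required variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]


if __name__ == "__main__":
    missing = check_env()
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```

Call `check_env()` right after `load_dotenv()` and you get a clear error message instead of a cryptic 401 later on.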
The Core Logic: MCP Server Code
Create a file named main.py. This will contain all our server logic. We will build it up piece by piece.
The code below sets up a FastAPI server with a single /command endpoint. This endpoint takes a natural language command, passes it to Claude for interpretation, and then executes the corresponding function to interact with Azure DevOps.
```python
# main.py
import json
import os
from base64 import b64encode

import anthropic
import requests
from dotenv import load_dotenv
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

# Load environment variables from .env file
load_dotenv()

# --- Configuration ---
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
AZDO_ORG_URL = os.getenv("AZDO_ORG_URL")
AZDO_PROJECT_NAME = os.getenv("AZDO_PROJECT_NAME")
AZDO_PAT = os.getenv("AZDO_PAT")

# --- Initialize Clients ---
app = FastAPI()
claude_client = anthropic.Anthropic(api_key=ANTHROPIC_API_KEY)


# --- Pydantic Models for Request Body ---
class CommandRequest(BaseModel):
    command: str


# --- Azure DevOps Helper Functions ---
def get_azdo_headers():
    """Returns headers for Azure DevOps API authentication."""
    credentials = f":{AZDO_PAT}"
    b64_creds = b64encode(credentials.encode()).decode()
    return {
        'Content-Type': 'application/json',
        'Authorization': f'Basic {b64_creds}'
    }


def get_pipeline_id_by_name(pipeline_name: str) -> int:
    """Finds a pipeline's ID by its name."""
    api_url = f"{AZDO_ORG_URL}/{AZDO_PROJECT_NAME}/_apis/pipelines?api-version=7.1-preview.1"
    headers = get_azdo_headers()
    response = requests.get(api_url, headers=headers)
    response.raise_for_status()
    pipelines = response.json().get('value', [])
    for p in pipelines:
        if p['name'].lower() == pipeline_name.lower():
            return p['id']
    raise HTTPException(status_code=404, detail=f"Pipeline '{pipeline_name}' not found.")


def queue_build(pipeline_id: int, branch: str = 'main'):
    """Queues a new build for a given pipeline ID."""
    api_url = f"{AZDO_ORG_URL}/{AZDO_PROJECT_NAME}/_apis/build/builds?api-version=7.1-preview.7"
    headers = get_azdo_headers()
    body = {
        "definition": {"id": pipeline_id},
        "sourceBranch": f"refs/heads/{branch}"
    }
    response = requests.post(api_url, headers=headers, data=json.dumps(body))
    response.raise_for_status()
    return response.json()


def list_pipelines():
    """Lists all pipelines in the project."""
    api_url = f"{AZDO_ORG_URL}/{AZDO_PROJECT_NAME}/_apis/pipelines?api-version=7.1-preview.1"
    headers = get_azdo_headers()
    response = requests.get(api_url, headers=headers)
    response.raise_for_status()
    pipelines = response.json().get('value', [])
    return [p['name'] for p in pipelines]


# --- Main Logic ---
def get_claude_command(user_prompt: str) -> dict:
    """Sends a prompt to Claude and gets a structured JSON command back."""
    # This system prompt is crucial for guiding Claude's output.
    system_prompt = """
    You are an AIOps assistant for Azure DevOps. Your goal is to interpret user requests
    and translate them into a structured JSON format. You must only respond with valid JSON.
    Do not include any text before or after the JSON object.

    The available actions are:
    1. 'list_pipelines': List all build pipelines in the project. Does not require parameters.
    2. 'queue_build': Trigger a new build for a specific pipeline.
       Requires 'pipeline_name' and optionally 'branch' (defaults to 'main').

    Example 1:
    User: "Start a build for the backend-api on the develop branch."
    Your JSON response:
    {
        "action": "queue_build",
        "parameters": {
            "pipeline_name": "backend-api",
            "branch": "develop"
        }
    }

    Example 2:
    User: "what builds can i run"
    Your JSON response:
    {
        "action": "list_pipelines",
        "parameters": {}
    }
    """
    message = claude_client.messages.create(
        model="claude-3-sonnet-20240229",  # Or another suitable model
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user", "content": user_prompt}]
    )
    # Extract the JSON content from Claude's response
    response_text = message.content[0].text
    return json.loads(response_text)


@app.post("/command")
async def handle_command(request: CommandRequest):
    """Main endpoint to process natural language commands."""
    try:
        # 1. Get structured command from Claude
        claude_response = get_claude_command(request.command)
        action = claude_response.get("action")
        params = claude_response.get("parameters", {})

        # 2. Execute the corresponding action
        if action == "list_pipelines":
            pipelines = list_pipelines()
            return {"status": "success", "message": "Available pipelines:", "data": pipelines}

        elif action == "queue_build":
            pipeline_name = params.get("pipeline_name")
            if not pipeline_name:
                raise HTTPException(status_code=400, detail="pipeline_name is required for queue_build.")
            pipeline_id = get_pipeline_id_by_name(pipeline_name)
            branch = params.get("branch", "main")  # Default to main if not specified
            build_info = queue_build(pipeline_id, branch)
            build_url = build_info['_links']['web']['href']
            return {
                "status": "success",
                "message": f"Successfully queued build for '{pipeline_name}' on branch '{branch}'.",
                "build_url": build_url
            }

        else:
            raise HTTPException(status_code=400, detail=f"Unknown action: {action}")

    except HTTPException:
        # Re-raise our own HTTP errors (400/404) instead of masking them as 500s
        raise
    except requests.exceptions.HTTPError as e:
        # Handle Azure DevOps API errors
        raise HTTPException(status_code=e.response.status_code, detail=f"Azure DevOps API Error: {e.response.text}")
    except Exception as e:
        # Handle other errors (e.g., JSON parsing, Claude API)
        raise HTTPException(status_code=500, detail=f"An internal error occurred: {str(e)}")
```
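A note on robustness: despite the “JSON only” instruction, LLMs occasionally wrap their answer in markdown fences or a short preamble. If you ever hit `json.loads` errors, you could swap the bare parse in `get_claude_command` for a more tolerant helper; this is a sketch of one approach, not part of the server above:

```python
import json
import re


def extract_json(text: str) -> dict:
    """Parse a model reply, tolerating markdown fences or stray prose.

    Tries a direct parse first, then a fenced ```json block, then the
    outermost {...} span as a last resort.
    """
    text = text.strip()
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Strip ```json ... ``` fences if present
    fenced = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if fenced:
        return json.loads(fenced.group(1))
    # Last resort: grab from the first '{' to the last '}'
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end > start:
        return json.loads(text[start:end + 1])
    raise ValueError(f"No JSON object found in model reply: {text[:80]!r}")
```

With this in place, `get_claude_command` would end with `return extract_json(response_text)` instead of `return json.loads(response_text)`.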
Running the Server
With main.py and .env in place, you can start the server:
```bash
uvicorn main:app --reload
```
Your MCP server is now running locally on http://127.0.0.1:8000. The --reload flag will automatically restart the server whenever you make changes to the code.
Step 3: The Art of the System Prompt
The most critical part of our main.py code is the system_prompt variable inside the get_claude_command function. This is where we perform prompt engineering.
A well-crafted system prompt constrains the LLM, turning it from a creative chatterbot into a predictable, reliable tool. Our prompt does three things:
- Defines the Role: “You are an AIOps assistant for Azure DevOps.”
- Sets the Rules: “You must only respond with valid JSON.” This is key to making the output machine-readable.
- Provides a Function “Menu”: It explicitly lists the actions (`list_pipelines`, `queue_build`) and their required parameters. The examples act as a few-shot guide, showing Claude exactly what we expect.
If you want to extend the server’s functionality, you must update this prompt to teach Claude about the new actions.
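As a concrete illustration, adding a hypothetical `get_build_status` action would mean appending a third entry to the prompt’s menu and writing a matching helper. The query parameters below (`definitions`, `$top`, `queryOrder`) follow the Azure DevOps Builds API, but treat this sketch as a starting point rather than tested code:

```python
# 1. Teach Claude the new action by extending the system prompt's menu:
NEW_ACTION_PROMPT = """
3. 'get_build_status': Report the status of the most recent build of a pipeline.
   Requires 'pipeline_name'.
"""


def summarize_builds(payload: dict) -> dict:
    """Reduce a Builds API response to the latest build's status and result."""
    builds = payload.get("value", [])
    if not builds:
        return {"status": "none", "result": None}
    latest = builds[0]
    return {"status": latest.get("status"), "result": latest.get("result")}


# 2. A thin caller that fetches the newest build for a pipeline definition.
def get_latest_build_status(org_url: str, project: str, headers: dict,
                            definition_id: int) -> dict:
    import requests  # deferred so summarize_builds stays dependency-free
    api_url = (f"{org_url}/{project}/_apis/build/builds"
               f"?definitions={definition_id}&$top=1"
               f"&queryOrder=queueTimeDescending&api-version=7.1-preview.7")
    resp = requests.get(api_url, headers=headers)
    resp.raise_for_status()
    return summarize_builds(resp.json())
```

The final wiring step would be a new `elif action == "get_build_status":` branch in `handle_command`.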
Step 4: Putting It All Together - A Test Run
Let’s test our system. Open a new terminal window (while the server is still running) and use curl to send a command to your MCP server.
Test 1: List all available pipelines
```bash
curl -X POST -H "Content-Type: application/json" \
  -d '{"command": "Show me all the build pipelines"}' \
  http://127.0.0.1:8000/command
```
You should receive a JSON response from your MCP server listing the names of the pipelines in your Azure DevOps project.
```json
{
  "status": "success",
  "message": "Available pipelines:",
  "data": [
    "my-awesome-app-ci",
    "backend-api-build",
    "frontend-unit-tests"
  ]
}
```
Test 2: Queue a new build
Let’s assume you have a pipeline named my-awesome-app-ci.
```bash
curl -X POST -H "Content-Type: application/json" \
  -d '{"command": "run a new build for my-awesome-app-ci on the feature/new-login branch"}' \
  http://127.0.0.1:8000/command
```
If successful, the response will look something like this, and you’ll see a new build running in your Azure DevOps project!
```json
{
  "status": "success",
  "message": "Successfully queued build for 'my-awesome-app-ci' on branch 'feature/new-login'.",
  "build_url": "https://dev.azure.com/YourOrganizationName/YourProjectName/_build/results?buildId=12345"
}
```
Expanding the Possibilities: Next Steps
This implementation is a powerful proof of concept, but it’s just the beginning. Here are some ways you can extend it:
- More Actions: Add functions to get build status, retrieve build logs, or even manage release pipelines and approvals. Remember to update the system prompt for Claude with each new action.
- ChatOps Integration: Integrate the MCP server with a chatbot in Slack or Microsoft Teams. This would create a truly conversational interface for your entire team.
- Enhanced Security: For production use, deploy the MCP server to a secure cloud service like Azure App Service and place it behind an API gateway with proper authentication (e.g., OAuth2).
- Asynchronous Operations: For long-running tasks, you could modify the server to use background tasks (a feature supported by FastAPI) to poll for build completion and send a notification back to the user.
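To make that last idea concrete, here is a sketch of a poller that FastAPI’s `BackgroundTasks` could run after queuing a build. The status-fetching and notification callables are injected and entirely hypothetical; in practice they would GET `/_apis/build/builds/{build_id}` and post to a Slack or Teams webhook:

```python
import time
from typing import Callable


def poll_build(build_id: int,
               fetch_status: Callable[[int], str],
               notify: Callable[[int, str], None],
               interval: float = 15.0,
               timeout: float = 1800.0,
               sleep: Callable[[float], None] = time.sleep) -> str:
    """Poll until a build reaches the 'completed' status, then notify.

    Returns the final status, or 'timeout' if the deadline passes first.
    The sleep function is injectable so the loop is testable.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(build_id)
        if status == "completed":  # completed builds carry a 'result' field
            notify(build_id, status)
            return status
        sleep(interval)
    return "timeout"
```

Inside the endpoint you would register it with `background_tasks.add_task(poll_build, build_id, fetch_status, notify)` so the HTTP response returns immediately while the watch continues in the background.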
Conclusion
You have successfully built a bridge between human language and machine execution, creating an AIOps-powered assistant for your Azure DevOps workflows. By defining a clear architecture with a secure MCP server as the intermediary, you’ve unlocked a way to make CI/CD pipeline management more accessible, faster, and more intuitive for everyone on your team.
This pattern isn’t limited to Azure DevOps. You can adapt the MCP server to control any platform with a REST API, from GitHub Actions to AWS deployments. The future of operations is conversational, and you’re now equipped to build it.
What other operations would you automate with this setup? Share your ideas in the comments below.