
The DevOps Guide to the Model Context Protocol (MCP): Connecting LLMs to Your Local Tools

Large Language Models (LLMs) like Google’s Gemini and Anthropic’s Claude are transforming how we interact with information. But for DevOps engineers, their true power isn’t just in answering trivia—it’s in their potential to become an active partner in our daily operations. The biggest hurdle? These cloud-based models have no idea what’s happening on your local machine, inside your CI/CD pipeline, or within your private Kubernetes cluster.

Manually copying and pasting kubectl describe outputs, terraform plan results, or lengthy log files into a chat window is slow, error-prone, and doesn’t scale. We need a systematic way to bridge this context gap.

This is where the Model Context Protocol (MCP) comes in. In this guide, we’ll define this powerful pattern, explore its game-changing applications in DevOps, and walk you through building a simple implementation using shell scripts to supercharge your AI-assisted workflows.

What is the Model Context Protocol (MCP)?

First, let’s be clear: the Model Context Protocol isn’t a formal, IETF-standardized network protocol like TCP/IP or HTTP. Instead, MCP is a design pattern or a structured approach for providing LLMs with relevant, scoped, and securely handled information from your local environment.

Think of it as an init script for your conversation with an AI. Before you ask your question, you run a process that gathers all the necessary background information—the context—and bundles it with your prompt.

A robust MCP implementation consists of three core components:

  1. Context Scoping: Intelligently defining what information the LLM needs and, just as importantly, what it shouldn’t see. This involves selecting specific files, command outputs, and environment variables.
  2. Data Formatting: Structuring the collected context in a clean, machine-readable format that the LLM can easily parse. Markdown is an excellent choice for this, as it allows for clear headings and code blocks.
  3. Interaction Mechanism: The toolchain that packages the context and the user’s prompt, sends it to the LLM API, and presents the response. This is often a command-line interface (CLI) tool.

By standardizing how we provide this context, we move from ad-hoc, manual interactions to repeatable, automated, and powerful AIOps workflows.
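To make these three components concrete, here is a minimal sketch that scopes context to recent Kubernetes events, formats it as Markdown, and pipes it to a hypothetical ask-ai CLI (the same stand-in used later in this guide):

#!/bin/bash

# mcp-mini.sh - hypothetical sketch of the three MCP components in one pipeline

{
  # 1. Context Scoping: only the 20 most recent cluster events, nothing else.
  # 2. Data Formatting: wrap the output in a Markdown heading and code block.
  echo "## Recent Kubernetes events"
  echo "\`\`\`"
  kubectl get events --sort-by=.lastTimestamp | tail -n 20
  echo "\`\`\`"
  echo "## Question"
  echo "Do any of these events indicate a problem I should investigate?"
} | ask-ai -   # 3. Interaction Mechanism: send the bundle to the LLM CLI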

Why MCP is a Game-Changer for DevOps

Adopting an MCP mindset fundamentally changes the dynamic between an engineer and an AI. The LLM evolves from a passive knowledge base into an active, context-aware collaborator.

From Manual Copy-Paste to Automated AIOps

The traditional workflow for debugging with an LLM is cumbersome:

  1. An alert fires from Prometheus or Datadog.
  2. You SSH into a server or connect to your cluster.
  3. You run a series of diagnostic commands: journalctl, kubectl logs, docker inspect.
  4. You painstakingly copy the relevant snippets, paste them into a web UI, and try to explain the situation.

With an MCP-driven workflow, this becomes a single command:

# Hypothetical script
./diagnose-ai.sh --pod-name my-failing-pod-xyz123

This script would automatically gather logs, pod descriptions, and recent events, bundle them into a structured prompt, and return a concise analysis and suggested fix from the LLM. This is the essence of practical AIOps.

Use Case: Debugging a Failing Kubernetes Pod

Imagine a CrashLoopBackOff error for a critical service. An MCP script can automate the initial triage process (a sketch follows these steps):

  1. Gather Context:

    • Run kubectl describe pod <pod-name> -n <namespace> to get events and container statuses.
    • Run kubectl logs <pod-name> -n <namespace> --previous to get logs from the last terminated container.
    • Fetch the pod’s manifest (kubectl get pod <pod-name> -o yaml) to check resource limits and configuration.
  2. Format and Prompt: The script formats this data into Markdown:

    I am a DevOps engineer debugging a Kubernetes pod in a CrashLoopBackOff state.
    Here is the context from my environment.
    
    ## kubectl describe pod
    
    <Output of describe command>
    
    ## kubectl logs --previous
    
    <Output of logs command>
    
    ## Pod Manifest (YAML)
    
    <Output of get yaml command>
    
    Based on this information, what are the 3 most likely root causes and the corresponding commands to verify them?
  3. Interact: The script sends this prompt to an LLM like Google’s Gemini via its CLI, giving you an actionable starting point in seconds.
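Putting those three steps together, a minimal triage script (essentially the diagnose-ai.sh idea from earlier) might look like this sketch, which assumes the hypothetical ask-ai CLI and defaults to the default namespace:

#!/bin/bash

# triage-pod.sh - hypothetical sketch: gather pod context and ask the LLM for likely root causes
# Usage: ./triage-pod.sh <pod-name> [namespace]

POD_NAME=$1
NAMESPACE=${2:-default}

if [ -z "$POD_NAME" ]; then
  echo "Usage: $0 <pod-name> [namespace]" >&2
  exit 1
fi

{
  echo "I am a DevOps engineer debugging a Kubernetes pod in a CrashLoopBackOff state."
  echo "Here is the context from my environment."
  echo
  echo "## kubectl describe pod"
  echo "\`\`\`"
  kubectl describe pod "$POD_NAME" -n "$NAMESPACE"
  echo "\`\`\`"
  echo
  echo "## kubectl logs --previous"
  echo "\`\`\`"
  kubectl logs "$POD_NAME" -n "$NAMESPACE" --previous --tail=100
  echo "\`\`\`"
  echo
  echo "## Pod Manifest (YAML)"
  echo "\`\`\`"
  kubectl get pod "$POD_NAME" -n "$NAMESPACE" -o yaml
  echo "\`\`\`"
  echo
  echo "Based on this information, what are the 3 most likely root causes and the corresponding commands to verify them?"
} | ask-ai -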

Use Case: Infrastructure as Code (IaC) Generation

MCP is also incredibly powerful for development tasks. Suppose you need to add a new Redis instance to your Terraform project. Instead of starting from scratch, you can use MCP to make the LLM “aware” of your existing codebase.

Your prompt could be:

“Analyze the attached Terraform files. Note our provider versions, variable naming conventions, and module structure. Now, generate a new module for an AWS ElastiCache (Redis) cluster. It should be placed in modules/redis/ and follow our existing style.”

An MCP script would gather *.tf files, versions.tf, and perhaps a tree view of the directory structure to provide the necessary context.
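The context-gathering step for this use case can stay simple. The following sketch assumes the Terraform project lives in the current directory and that the tree and find utilities are available:

#!/bin/bash

# tf-context.sh - hypothetical sketch: bundle a Terraform project into Markdown context

echo "## Directory structure"
echo "\`\`\`"
tree -L 3 --noreport .
echo "\`\`\`"

# Include every .tf file under its own heading so the LLM can see provider
# versions, variable naming conventions, and module structure.
find . -name "*.tf" -not -path "./.terraform/*" | while read -r tf_file; do
  echo "### $tf_file"
  echo "\`\`\`hcl"
  cat "$tf_file"
  echo "\`\`\`"
done

Its output can be prepended to the prompt above and piped to whatever LLM CLI you use.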

Implementing a Simple MCP with Shell and a CLI

Let’s build a proof-of-concept MCP using simple bash scripts and a hypothetical LLM CLI. We’ll use a generic ask-ai command, which you can substitute with tools like the gemini CLI, aichat, or a custom script that calls the Anthropic Claude API.
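If you don’t already have such a CLI installed, a minimal stand-in is a few lines of shell around your provider’s HTTP API. The sketch below assumes the Anthropic Messages API, an ANTHROPIC_API_KEY environment variable, and jq; adjust the endpoint and model name for your provider:

#!/bin/bash

# ask-ai - hypothetical stand-in: read a prompt from stdin and send it to the Anthropic Messages API
# Assumes ANTHROPIC_API_KEY is set and jq is installed; substitute a current model name.
# The trailing '-' argument used later in ask.sh is accepted and simply ignored.

PROMPT=$(cat)   # read the full prompt from stdin

jq -n --arg prompt "$PROMPT" \
  '{model: "claude-sonnet-4-20250514", max_tokens: 1024, messages: [{role: "user", content: $prompt}]}' \
  | curl -s https://api.anthropic.com/v1/messages \
      -H "x-api-key: $ANTHROPIC_API_KEY" \
      -H "anthropic-version: 2023-06-01" \
      -H "content-type: application/json" \
      -d @- \
  | jq -r '.content[0].text'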

Step 1: The Context Provider Script (context.sh)

This script is the heart of our MCP. It’s responsible for gathering information based on input arguments.

#!/bin/bash

# context.sh - A simple context gathering script for MCP

# Ensure a command is provided
if [ -z "$1" ]; then
  echo "Usage: $0 <command> [args...]"
  echo "Commands: git, k8s-pod <pod-name>"
  exit 1
fi

COMMAND=$1
shift

echo "---"
echo "## Context from local environment"
echo "Generated at: $(date -u +"%Y-%m-%dT%H:%M:%SZ")"
echo "---"

case $COMMAND in
  "git")
    echo "### Git Status"
    echo "\`\`\`"
    git status
    echo "\`\`\`"

    echo "### Recent Git Log (3 commits)"
    echo "\`\`\`"
    git log -n 3 --oneline
    echo "\`\`\`"
    ;;

  "k8s-pod")
    POD_NAME=$1
    if [ -z "$POD_NAME" ]; then
      echo "Error: Kubernetes pod name is required."
      exit 1
    fi
    echo "### Kubernetes Pod: $POD_NAME"
    echo "#### Describe Output"
    echo "\`\`\`"
    kubectl describe pod "$POD_NAME"
    echo "\`\`\`"

    echo "#### Logs (last 50 lines)"
    echo "\`\`\`"
    kubectl logs "$POD_NAME" --tail=50
    echo "\`\`\`"
    ;;

  *)
    echo "Error: Unknown command '$COMMAND'"
    exit 1
    ;;
esac

This script can now be run as ./context.sh git or ./context.sh k8s-pod my-app-123 to generate formatted context.
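The case statement makes the script easy to extend. For example, you could add a docker command (a sketch, assuming the Docker CLI is available) as another branch before the catch-all *) block:

  "docker")
    CONTAINER_NAME=$1
    if [ -z "$CONTAINER_NAME" ]; then
      echo "Error: Docker container name is required."
      exit 1
    fi
    echo "### Docker Container: $CONTAINER_NAME"
    echo "#### Status"
    echo "\`\`\`"
    docker ps --all --filter "name=$CONTAINER_NAME"
    echo "\`\`\`"

    echo "#### Logs (last 50 lines)"
    echo "\`\`\`"
    docker logs --tail 50 "$CONTAINER_NAME"
    echo "\`\`\`"
    ;;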

Step 2: The Main Interaction Script (ask.sh)

This script combines a user’s question with the context from context.sh and sends it to the LLM.

#!/bin/bash

# ask.sh - Main MCP interaction script

if [ "$#" -lt 2 ]; then
  echo "Usage: ./ask.sh <context-command> \"<Your Question>\" [context-args...]"
  echo "Example: ./ask.sh git \"Summarize the current state of my repo.\""
  echo "Example: ./ask.sh k8s-pod \"Why is this pod failing?\" my-failing-pod"
  exit 1
fi

CONTEXT_COMMAND=$1
QUESTION=$2
shift 2 # The rest of the arguments are for the context script

# 1. Gather the context using our script
CONTEXT=$(./context.sh "$CONTEXT_COMMAND" "$@")
if [ $? -ne 0 ]; then
  echo "Failed to gather context."
  exit 1
fi

# 2. Build the final prompt
# We prepend the user's question and a role-playing instruction.
FINAL_PROMPT=$(cat <<EOF
You are an expert DevOps assistant. A user has a question about their system.
First, review the context provided below, which was gathered from their local machine.
Then, answer their question clearly and concisely.

$CONTEXT

## User Question

$QUESTION
EOF
)

# 3. Send to the LLM CLI
# This assumes a command 'ask-ai' that reads from stdin.
# Replace with your actual LLM CLI tool, e.g., 'gemini ask -'
echo "$FINAL_PROMPT" | ask-ai -

Now, you can put it all together:

# Grant execute permissions
chmod +x context.sh ask.sh

# Run the full workflow
./ask.sh git "Based on the git status and log, write a commit message for these changes."

The ask.sh script orchestrates the entire MCP flow: it gathers, formats, and sends the prompt, effectively giving the LLM a window into your local Git repository.
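The Kubernetes path works the same way; any arguments after the question are passed through to context.sh (the pod name here is hypothetical):

./ask.sh k8s-pod "Why is this pod restarting, and what should I check first?" my-failing-pod-xyz123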

Security and Best Practices for MCP

Piping data from your local environment to an external service demands strict security measures.

  • Principle of Least Privilege: Your context scripts should be highly specific. Don’t run ls -R /; instead, run ls my-project/. Never include secrets, API keys, or personal data. Consider using a .mcpignore file (similar to .gitignore) to explicitly block sensitive files and directories from being included.
  • Human-in-the-Loop for Execution: Never allow an LLM to execute commands on your system directly. If the LLM suggests a command (e.g., kubectl delete pod ...), your script should print it to the console and require explicit user confirmation ([y/N]) before running it (see the sketch after this list).
  • Audit Trails: Log the final prompts (with context) and the LLM’s responses. This is invaluable for debugging the AI’s reasoning and for security auditing.
  • Cost Management: Be aware of your LLM provider’s pricing model. Sending massive log files or entire codebases as context can become expensive. Use tools like tail, head, and grep to trim the context to only the most relevant information.
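A confirmation gate is only a few lines of shell. In this sketch, SUGGESTED_CMD is a placeholder for a command extracted from the LLM’s response:

# Human-in-the-loop gate: print the suggested command, run it only on explicit confirmation.
SUGGESTED_CMD="kubectl delete pod my-failing-pod-xyz123"   # hypothetical, extracted from the LLM response

echo "The assistant suggests running:"
echo "  $SUGGESTED_CMD"
read -r -p "Execute this command? [y/N] " REPLY
if [[ "$REPLY" =~ ^[Yy]$ ]]; then
  bash -c "$SUGGESTED_CMD"
else
  echo "Aborted."
fi

For audit trails, something as simple as tee works: pipe the final prompt through tee -a mcp-audit.log on its way into the LLM CLI, and do the same with the response.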

The Future is Context-Aware: Your Next Steps

The Model Context Protocol (MCP) isn’t a single product but a crucial pattern for unlocking the next level of AI-powered DevOps automation. By treating context gathering as a formal, scriptable step in your workflow, you create a powerful and scalable bridge between your local tools and the intelligence of LLMs.

Start small. Identify a repetitive diagnostic or code-generation task in your daily routine.

  1. Write a simple context.sh script to gather the data you usually look at manually.
  2. Integrate it with your favorite LLM’s CLI.
  3. Refine your prompts and your context script over time.

You’ll be surprised at how quickly this AIOps pattern becomes an indispensable part of your toolkit.

How are you planning to use LLMs in your DevOps workflows? Share your ideas and context-gathering scripts in the comments below.