# mcp-helm

Helm MCP · v0.1.4 · by Kubedoll-Heavy-Industries

Give your AI assistant access to real Helm chart data. No more hallucinated `values.yaml` files.
## What is this?
When you ask Claude, Cursor, or other AI assistants to help with Kubernetes deployments, they don't have access to Helm chart schemas. So they guess — and the guesses look plausible but don't match reality.
Without mcp-helm:
- :x: Hallucinates field names that look right but don't exist
- :x: Suggests stale or deprecated chart versions
- :x: Wastes tokens on web fetches and guesswork
With mcp-helm:
- :white_check_mark: Queries actual Helm repositories for real chart data
- :white_check_mark: Gets the latest chart version automatically
- :white_check_mark: Gets configurations right the first time
mcp-helm implements the Model Context Protocol (MCP) — a standard way for AI assistants to access external data sources.
## Try It Now
Add this to your editor's MCP config to use our public instance (rate limited, no install required):
```json
{
  "mcpServers": {
    "helm": {
      "type": "http",
      "url": "https://helm-mcp.kubedoll.com/mcp"
    }
  }
}
```
Then ask your AI: "What values can I configure for the bitnami/postgresql chart?"
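Under the hood, your assistant answers that question by sending a JSON-RPC 2.0 `tools/call` request over MCP. A rough sketch of what that request looks like — note the argument names `repository` and `chart` are illustrative assumptions here, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_values",
    "arguments": {
      "repository": "https://charts.bitnami.com/bitnami",
      "chart": "postgresql"
    }
  }
}
```

You never write these messages yourself; the editor's MCP client handles the protocol.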
## Editor Setup

### Claude Code

Edit `~/.claude/mcp.json`:
```json
{
  "mcpServers": {
    "helm": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "ghcr.io/kubedoll-heavy-industries/mcp-helm", "--transport=stdio"]
    }
  }
}
```
### Claude Desktop

Edit `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "helm": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "ghcr.io/kubedoll-heavy-industries/mcp-helm", "--transport=stdio"]
    }
  }
}
```
### Cursor

Edit MCP settings in Cursor's configuration:
```json
{
  "mcpServers": {
    "helm": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "ghcr.io/kubedoll-heavy-industries/mcp-helm", "--transport=stdio"]
    }
  }
}
```
### VS Code + Continue

Add to your Continue config (`~/.continue/config.json`):
```json
{
  "experimental": {
    "modelContextProtocolServers": [
      {
        "transport": {
          "type": "stdio",
          "command": "docker",
          "args": ["run", "--rm", "-i", "ghcr.io/kubedoll-heavy-industries/mcp-helm", "--transport=stdio"]
        }
      }
    ]
  }
}
```
### Without Docker

If you prefer to run the binary directly, install mcp-helm and replace the Docker config with:
```json
{
  "mcpServers": {
    "helm": {
      "command": "mcp-helm"
    }
  }
}
```
## Available Tools
| Tool | What it does |
|---|---|
| `search_charts` | Search for charts in a Helm repository |
| `get_versions` | Get available versions of a chart (newest first; use `limit=1` for the latest) |
| `get_values` | Get a chart's `values.yaml`, optionally with its JSON schema (`include_schema=true`) |
| `get_dependencies` | Get chart dependencies from `Chart.yaml` |
| `get_notes` | Get the chart's `NOTES.txt` (post-install instructions) |
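For example, asking for only the newest version of a chart maps to a `tools/call` request with the `limit` parameter from the table above. A sketch of such a request — the `repository` and `chart` argument names are illustrative assumptions, while `limit=1` is the documented way to fetch just the latest version:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_versions",
    "arguments": {
      "repository": "https://charts.bitnami.com/bitnami",
      "chart": "postgresql",
      "limit": 1
    }
  }
}
```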
## Install
Docker (recommended — no install required, used in Editor Setup above):
```sh
docker pull ghcr.io/kubedoll-heavy-industries/mcp-helm:latest
```
Binary:
```sh
curl -fsSL https://github.com/kubedoll-heavy-industries/helm-mcp/releases/latest/download/mcp-helm_$(uname -s)_$(uname -m).tar.gz | tar xz
sudo mv mcp-helm /usr/local/bin/
```
Go:
```sh
go install github.com/kubedoll-heavy-industries/helm-mcp/cmd/mcp-helm@latest
```
## Self-Hosting
For shared deployments or when you need an HTTP endpoint:
```sh
docker run -p 8012:8012 ghcr.io/kubedoll-heavy-industries/mcp-helm:latest \
  --transport=http --listen=:8012
# Connect to http://localhost:8012/mcp
```
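To smoke-test a self-hosted instance, you can POST an MCP `initialize` request (plain JSON-RPC 2.0) to the `/mcp` path; a healthy server responds with its server info and capabilities. A minimal request body sketch — the `protocolVersion` string and `clientInfo` values below are illustrative examples from the MCP spec, not requirements of this server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "smoke-test", "version": "0.0.0" }
  }
}
```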
See `docs/self-hosting.md` for health endpoints and production recommendations.
## Documentation
- Configuration Reference — CLI flags, env vars, transport modes
- Self-Hosting Guide — Docker HTTP, health endpoints, production tips
- Troubleshooting — common issues and fixes
- Contributing — development setup, testing, PR guidelines
- Security Policy — reporting vulnerabilities
## License
MIT — see LICENSE.