Show HN: Markdown Rules MCP Server – The Portable Alternative to Cursor Rules

2 dannychickenegg 0 6/12/2025, 1:01:19 PM github.com ↗
Hey HN,

I've built Markdown Rules MCP Server (https://github.com/valstro/markdown-rules-mcp).

I was frustrated when writing extensive Cursor rules for our large and bespoke codebase. Cursor frequently failed to follow the links inside its .mdc files to other relevant documentation and live code examples, or followed them inconsistently. Our codebase has many internal libraries and tools that LLMs haven't been trained on, so good context is crucial. Even after I spent time writing detailed Cursor rules, the results were often non-deterministic.

Additionally, as we're still trialing Cursor against other tools, I wasn't comfortable with the vendor lock-in of its proprietary .mdc file format.

So, I built this alternative. It uses regular Markdown files that live with your project. Your existing docs can work as-is, but you can enrich them with simple YAML frontmatter to support the same types of rules Cursor does (Agent Requested, Always Apply, Auto-Attached based on file globs, or available for manual inclusion). The goal is to provide consistent, reliable context to AI coding assistants, portably.
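As a rough sketch, a rule file is just a Markdown doc with a small frontmatter block. The exact field names below (description, globs, alwaysApply) mirror the concepts in Cursor's .mdc format and are illustrative; check the repo's README for the authoritative schema:

```markdown
---
description: How to use our internal HTTP client wrapper
globs: ["src/api/**/*.ts"]
alwaysApply: false
---

# Internal HTTP Client

Always use the wrapper in `src/lib/http.ts` instead of calling `fetch` directly.
```

A file like this would be auto-attached when the agent touches matching files, or surfaced on request via its description.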

Key Features & Benefits:

Universal Compatibility: Write docs once, use them with Cursor, Claude Desktop, or any future MCP-enabled AI tool. Escape vendor lock-in.
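For reference, wiring an MCP server into a client like Claude Desktop or Cursor typically looks like the standard mcpServers config below. The command and package name here are assumptions for illustration; see the repo for the actual install instructions:

```json
{
  "mcpServers": {
    "markdown-rules": {
      "command": "npx",
      "args": ["-y", "markdown-rules-mcp"]
    }
  }
}
```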

Reliable Dependency Resolution: If your Markdown links to other files ([my lib](./lib.ts?md-link=true)), the server reliably traverses and pulls them in. This was a major pain point with other solutions.

Precision Context Control: Embed specific code snippets or sections of files directly into your Markdown context using line ranges ([config example](./config.json?md-embed=10-15)), instead of dumping entire files and creating noise.
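Putting the two mechanisms together, a doc can pull in a whole linked file and embed just a slice of another. File names here are illustrative:

```markdown
# Payments Service

Request helpers live in the shared client ([my lib](./lib.ts?md-link=true)).

Default retry settings ([config example](./config.json?md-embed=10-15))
apply unless overridden per call.
```

The server resolves both references when the doc is served, so the model sees lib.ts in full plus only lines 10-15 of config.json.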

Great for Complex/Proprietary Codebases: Specifically designed to give AI models the detailed context they need for your project's custom tooling, internal libraries, or unique architecture.

A Few Caveats/Downsides:

Potentially Large Context: Markdown Rules will diligently parse through all markdown links (?md-link=true) and embeds (e.g., ?md-embed=1-10) to include referenced content. This comprehensiveness can lead to using a significant portion of the AI's context window, especially with deeply linked documentation. However, I find this to be a necessary trade-off for providing complete context in the large, bespoke codebases this tool is designed for.

MCP Tool Invocation Variance: Occasionally, depending on the specific LLM you're using, the model might not call the tool to fetch relevant docs as consistently as one might hope without explicit prompting. This behavior can often be improved by tweaking the usage instructions in your markdown-rules.md file or by directly asking the AI to consult the docs. I've personally found Anthropic models tend to call the tool very consistently without needing explicit prompts.

This is still new, and I'd love to get some feedback, especially if you've faced similar challenges or have thoughts on these trade-offs.

Check out the GitHub repo for full installation, configuration, and examples: https://github.com/valstro/markdown-rules-mcp

Thanks!
