MCPs (Model Context Protocol servers) are services that expose tools for LLMs to use. CLIs (Command Line Interfaces) are shell utilities that let programmers interact with a tool from the terminal instead of through something like a website. Both are available to many agentic LLMs like Claude Code. Which should you pick?
My take on this is that you should nearly always prefer a CLI.
MCP#
There are three benefits to using an MCP: auth, updates, and preconfiguration.
The biggest reason to use an MCP, in my mind, is that it centralizes the auth concern. We don’t each have to get Jira API tokens and plug them into the magic config file on disk. I can stand up an MCP that knows how to do Jira auth, then allow anyone at the company to do reads from it. This ensures we have consistent access to information across multiple people. This single-auth source does assume that everyone SHOULD have the same level of access. It also makes writes a little tricky, because they can all show up as the same user. The MCP spec does support OAuth 2.0, allowing per-user credentials, but in practice many teams use a shared service account for ease of setup.
The second reason is that having the MCP as a centralized source means you have a single thing to update. If you expose an internal tool for people to use via a CLI, updating it means you need a way to distribute the new CLI, or suffer with people using the old one. This fragments the experience and increases your testing surface area, as people might be running ancient versions of the client.
Partially piggybacking on the ease of auth, MCPs can be preconfigured for the entire company. This means we can set up a “connector” in the Claude Console and every teammate (on Claude Code, Claude Web, or Claude Cowork) gets access to the same thing. This really democratizes access, especially for less technical stakeholders.
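For the Claude Code side of this, one sketch of what shared preconfiguration can look like is a project-scoped `.mcp.json` checked into the repo, so everyone on the team inherits the same server (the server name and URL below are made up, not a real endpoint):

```json
{
  "mcpServers": {
    "jira": {
      "type": "http",
      "url": "https://jira-mcp.example.internal/mcp"
    }
  }
}
```

The point is that the configuration lives in one place, rather than in N copies of per-user dotfiles.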
CLI#
With all of these benefits, why not use an MCP? It really all boils down to context and the management of it.
First, MCPs tend to expose a lot of tools, and those tool definitions can bloat the context window you have to work with (the scarce resource that allows the LLM to think well). Agentic harnesses (like Claude Code) are working on this problem through things like lazy loading the full tool definitions and allowing the MCP server to progressively disclose that information.
Even after you load those tools efficiently, your MCP’s tool results will be plopped into your context. These can be big, especially when dealing with content you don’t necessarily control (like a PM who dumped an entire PRD into your Jira ticket). By using a CLI, you can peek at or filter out information like this using normal bash tools. This is very similar to how Recursive Language Models purport to work.
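As a sketch of that peek-and-filter move (the file and its contents are invented; no real Jira CLI’s output format is assumed):

```shell
# Pretend this is a huge tool result a CLI just wrote out: a ticket whose
# description field contains an entire pasted-in PRD.
cat > /tmp/ticket.json <<'EOF'
{"key": "PROJ-123", "summary": "Add CSV export button", "description": "PRD: goals, non-goals, personas, rollout plan, success metrics, open questions, appendix A, appendix B, and several thousand more words of spec..."}
EOF

# Peek at just the start instead of loading the whole thing into context:
head -c 80 /tmp/ticket.json

# Or pull out only the field you actually need:
grep -o '"summary": "[^"]*"' /tmp/ticket.json
# prints "summary": "Add CSV export button"
```

With an MCP, the entire result lands in the context window; with a CLI, the model can decide how much of it to look at.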
Lastly, CLIs are less operationally complex. Their latency is consistent and negligible. There is no connectivity to go wrong. There’s no “forgetting to have it running”. You just have an executable on disk waiting for you. Very useful, especially for long-tail tools that the LLM might want to reach for.
All of this said, this is an evolving landscape. Maybe when RLMs get more commonplace, context bloat will be less of an issue. For now? I’m sticking with CLIs whenever I can.
Comments
Reply on Bluesky to join the conversation.