LLM Tools & MCP - What are they and how to get started?


LLMs, as powerful as they are, have a few disadvantages common to all of them:

1. Their knowledge is limited by their training data, so they are unaware of any developments after their training cutoff date.

2. By themselves, they are unable to take any action based on the reasoning they have developed.

Tools

This is where the concept of tools comes in to address both of these issues. Tools are callable interfaces that can be bound to an LLM: functions, APIs, scripts, system calls, or anything else callable whose interface we expose to the LLM.

Let's consider a function for simplicity. When a tool is bound to an LLM, we pass the LLM the function's name, its description, and its input parameters, along with each parameter's data type and description. From this metadata, especially the descriptions, the LLM now has context about the existence of a function it can call to retrieve certain information or take some action.
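For example, with an OpenAI-style chat completions API, a tool is described to the model as a JSON schema alongside the actual function. The get_weather function and schema below are made-up illustrations; the exact wire format varies by provider.

```python
# A hypothetical tool: a plain Python function plus the JSON schema
# we hand to the LLM so it knows the tool exists and how to call it.
def get_weather(city: str) -> str:
    """Return a short weather summary for the given city."""
    # A real tool would call a weather API here; we stub it out.
    return f"It is currently 22°C and sunny in {city}."

# OpenAI-style tool schema (other providers use similar formats).
weather_tool_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "Name of the city, e.g. 'Paris'",
                },
            },
            "required": ["city"],
        },
    },
}
```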


Based on the user input, the LLM decides whether it needs to invoke the tool. The tool then executes some action, which may or may not have a lasting impact, and returns some information. The tool's output gets added to the conversation as a tool message, giving the LLM new context about the tool's impact and/or the information it was looking for. Based on this new information, the LLM then continues the conversation with a message to the user.
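Here is a rough sketch of that loop using the OpenAI Python SDK, reusing the hypothetical get_weather tool and schema from above (the model name is an assumption too; other providers follow a very similar pattern):

```python
import json
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# First call: the model may decide to invoke the bound tool.
response = client.chat.completions.create(
    model="gpt-4o-mini",          # assumed model name
    messages=messages,
    tools=[weather_tool_schema],  # schema from the earlier snippet
)
msg = response.choices[0].message

if msg.tool_calls:
    messages.append(msg)  # keep the assistant's tool-call request in history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)  # actually execute the tool
        # The tool's output goes back into the conversation as a tool message.
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": result,
        })
    # Second call: the model answers the user using the tool's output.
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)
```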


Features available in the chat interfaces of popular LLMs, such as searching the web or running code and returning its output, are implemented through tool calls.


The introduction of tools has enabled agentic workflows and has been an incredible step forward in the field of AI. One of the most prominent developments building on this is the Model Context Protocol (MCP).


MCP

Just as a server exposes a set of APIs performing related operations to its clients, it would be very useful to package a bunch of LLM tools into a server and expose a similar interface to an LLM, giving it access to a whole range of related operations. Any LLM could then easily connect to this tool server and instantly gain the ability to perform all of the actions, and retrieve all of the information, that the server provides.


Similar to how we have the standardised HTTP/REST protocol for clients to interact with server APIs, a standardised protocol for LLMs to connect to such tool servers would also be extremely useful. It would let anybody connect to any server in a uniform way without having to write separate code for each different server.


This creates a plug-and-play sort of infrastructure where any LLM can simply plug into a tool server when needed, disconnect, and just as easily connect to another.


This is exactly what MCP enables. As the official documentation puts it, MCP is like a USB-C port for AI applications. It gives us a standardised protocol for creating MCP servers that host tools, and MCP clients that can connect to those servers simply by being given the server's URL.
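A minimal server sketch using the FastMCP package (the server name and the add tool are made-up examples):

```python
from fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```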


MCP actually goes a couple of steps beyond this: servers can also expose resources, which are simply stores of information, and even prompt templates.
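Continuing the sketch above, FastMCP exposes resources and prompt templates through decorators much like tools (the URI and the contents here are invented):

```python
# A resource: read-only information a client can fetch by URI.
@mcp.resource("data://app-version")
def app_version() -> str:
    return "1.2.3"

# A prompt template: a reusable, parameterised prompt the client can request.
@mcp.prompt
def summarise(text: str) -> str:
    return f"Please summarise the following text in two sentences:\n\n{text}"
```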


For communication between MCP servers and clients, MCP uses JSON-RPC 2.0 as its message protocol, which can be carried over two transport modes: HTTP and stdio.
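For instance, invoking a tool boils down to a JSON-RPC 2.0 request/response exchange shaped roughly like this (the method name comes from the MCP spec; the tool and its arguments are illustrative):

```python
# What an MCP client sends to invoke a tool (a JSON-RPC 2.0 request):
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "add", "arguments": {"a": 2, "b": 3}},
}

# And the rough shape of the server's reply (a JSON-RPC 2.0 response):
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "5"}]},
}
```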


The stdio transport can be used when the client launches the server as a local process, typically on the same machine, so the two can speak over stdin/stdout. This also means you need to be careful with print statements in your server code, since anything written to stdout will get mixed into the protocol messages and corrupt the communication. Use a logging library, or some other method that sends logs somewhere other than stdout.
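A common pattern, for example, is to route logs to stderr, which the stdio transport leaves untouched (a minimal sketch):

```python
import logging
import sys

# stdout is reserved for JSON-RPC traffic, so send logs to stderr instead.
logging.basicConfig(stream=sys.stderr, level=logging.INFO)
log = logging.getLogger("my-mcp-server")

log.info("server starting")   # safe: goes to stderr
# print("server starting")    # unsafe: would corrupt the stdio stream
```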


The real power of MCP lies in its HTTP transport. In this mode, MCP operates as a higher-level application protocol on top of HTTP, connecting remote servers and clients over the Internet.
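With FastMCP, for example, the earlier server can be served over HTTP, and a client can then reach it with nothing but a URL (the host, port, and path are assumptions; the transport name can vary between FastMCP versions):

```python
import asyncio
from fastmcp import Client

# Server side, in one process: serve the earlier FastMCP server over HTTP.
#   mcp.run(transport="http", host="127.0.0.1", port=8000)

async def main():
    # Client side: connect to any MCP server just by giving its URL.
    async with Client("http://127.0.0.1:8000/mcp") as client:
        tools = await client.list_tools()
        print("available tools:", [t.name for t in tools])
        result = await client.call_tool("add", {"a": 2, "b": 3})
        print("result:", result)

asyncio.run(main())
```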



To conclude, I would highly recommend getting your hands dirty with all of this, as it leads to a much better understanding of how everything actually works. LangGraph is a good framework for orchestrating LLM workflows and tool calls, and FastMCP is a good package for quickly implementing MCP servers and clients over either transport mode.
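As a starting point, a LangGraph agent with a bound tool can be as short as this sketch (the model string and the tool are assumptions; LangGraph converts plain functions into tools):

```python
from langgraph.prebuilt import create_react_agent

def get_weather(city: str) -> str:
    """Return a short weather summary for the given city."""
    return f"It is currently 22°C and sunny in {city}."

# create_react_agent wires up the tool-calling loop described earlier.
agent = create_react_agent("openai:gpt-4o-mini", tools=[get_weather])

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]}
)
print(result["messages"][-1].content)
```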
