Understanding the Model Context Protocol: Levi9’s Experience with MCP Architecture and Implementation 

Our TuesdAI workshops continue to explore cutting-edge AI technologies that can transform how we work. TuesdAI is a hands-on workshop series held on Tuesdays, designed to make AI a practical, everyday tool for our teams. Each session focuses on specific tools, trending topics, and real-world skills that participants can apply immediately. More about TuesdAI sessions can be found here. 

 

Our latest session focused on the Model Context Protocol (MCP), where we shared insights from actual proof-of-concept work and explored how this protocol works under the hood. Here’s what we covered and the practical lessons we took away. 

What is MCP and Why Does It Matter?

The Model Context Protocol is an open-source standard for connecting AI applications to external systems. While it’s relatively new, we’re already seeing widespread adoption across different use cases, from IDE integrations to custom client implementations. Think of MCP as the bridge between your AI applications (like Claude Desktop or Cursor) and external systems (like file servers, GitHub, Jira, or PostgreSQL databases). The protocol provides a standardized way to expose tools, resources, and prompts that AI models can leverage. 

The Problem MCP Solves

Before MCP, our team faced the familiar challenge of building custom integrations for each AI application. For example, when developing our Assets Bot, we went through several iterations, starting with custom code built from scratch, then moving to LangChain for better structure. Each time we wanted to add new functionality, like an API call to check user vacation days, we had to implement it in a platform-specific way. 

 

MCP changes this paradigm entirely by bringing standardization, accessibility, and reusability.  

 

We no longer need to reinvent the wheel for each implementation; instead, we follow a clear protocol. AI applications gain access to external systems through MCP servers that end users can plug in themselves, and once you build an MCP server for your backend system, you can use it with any LLM application that supports the protocol. 

From Theory to Practice: Our Real-World Implementation

We recently put these concepts into practice with a client who envisions becoming “AI first.” Their goal is to have all processes, from opening to closing contracts, handled by AI agents. This project gave us valuable insights into what works well with MCP and what considerations are critical for success. 

 

Our approach was to build MCP servers on top of their existing systems, with AI agents above those servers utilizing the tools and resources. At the top level, we implemented AI agent orchestration using agent cards to determine which agent gets called for each request. The key to this architecture is robust authorization and deterministic filtering. We need to know which client is calling the orchestration level and which agents we’re permitted to expose to them. 
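As a rough sketch of that routing layer, the logic boils down to deterministic permission filtering followed by a match against the agent cards. All names, cards, and permission tables below are hypothetical placeholders, not the client’s actual configuration:

```python
# Hypothetical agent cards and per-client permissions; a real system
# would load these from configuration and the authorization layer.
AGENT_CARDS = [
    {"name": "leasing-agent", "skills": {"leasing", "vehicles"}},
    {"name": "contracts-agent", "skills": {"contracts"}},
]

CLIENT_PERMISSIONS = {
    "portal-app": {"leasing-agent"},
    "back-office": {"leasing-agent", "contracts-agent"},
}

def route(client_id: str, topic: str) -> str:
    """Pick the agent for a request: deterministic filtering first
    (which agents may this client see?), then matching the topic
    against the agent cards."""
    allowed = CLIENT_PERMISSIONS.get(client_id, set())
    for card in AGENT_CARDS:
        if card["name"] in allowed and topic in card["skills"]:
            return card["name"]
    raise LookupError(f"No permitted agent handles '{topic}' for '{client_id}'")
```

The important property is that the filtering step is deterministic code, not a model decision: an agent a client is not permitted to see is never even a candidate.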

 

We used the MCP SDK with Python and TypeScript, which abstracts much of the protocol complexity, allowing us to focus on registering tools on the server and calling them from the client. For our vehicle leasing example, we created specific tools for searching vehicles, getting Financial Leasing information, and getting Hire Purchase options. Through this hands-on experience, we learned that each tool should represent a single user action, similar to how you’d design a gRPC service. Instead of creating a generic “database query” tool, we found that specific actions like “search vehicles” or “get leasing details” give you better control over how the AI interacts with your systems. While registries offer ready-made servers for SQL databases and other common services, custom, specific actions provide better results and predictable behavior. 
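To make the “one tool per user action” idea concrete, here is a minimal sketch in plain Python. The vehicle data, field names, and terms are invented for illustration; the point is the shape: narrow, named tools with explicit inputs instead of one generic query endpoint.

```python
# Toy in-memory data standing in for the backend system.
VEHICLES = [
    {"id": 1, "make": "Toyota", "model": "Corolla", "monthly_rate": 310},
    {"id": 2, "make": "Toyota", "model": "RAV4", "monthly_rate": 480},
    {"id": 3, "make": "Skoda", "model": "Octavia", "monthly_rate": 350},
]

def search_vehicles(make: str, max_monthly_rate: float) -> list[dict]:
    """One user action: find vehicles of a given make within a budget."""
    return [
        v for v in VEHICLES
        if v["make"].lower() == make.lower()
        and v["monthly_rate"] <= max_monthly_rate
    ]

def get_leasing_details(vehicle_id: int) -> dict:
    """Another discrete action: leasing terms for exactly one vehicle."""
    vehicle = next((v for v in VEHICLES if v["id"] == vehicle_id), None)
    if vehicle is None:
        raise ValueError(f"Unknown vehicle id {vehicle_id}")
    return {"vehicle_id": vehicle_id, "term_months": 48,
            "monthly_rate": vehicle["monthly_rate"]}

# A registry like this is what the MCP server exposes to the model:
# each entry becomes a named tool the LLM can invoke.
TOOLS = {
    "search_vehicles": search_vehicles,
    "get_leasing_details": get_leasing_details,
}
```

With the SDK, each of these functions would be registered as an MCP tool; the narrow signatures are what give you predictable behavior compared to a generic “run this SQL” tool.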

 

Each tool has a clear description that helps the LLM understand when to invoke it. We used Zod for schema definition, ensuring type safety and proper validation. Input schema validation proved essential: when you hand your systems to an AI, proper validation isn’t just good practice; it’s critical. Meaningful error messages are equally important because they help the AI understand what went wrong and how to proceed. 
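In our TypeScript servers Zod plays this role. To show the same idea in Python (with made-up field names, not our actual schema), a minimal hand-rolled check might look like this; note how the error messages tell the model how to fix its next call:

```python
from dataclasses import dataclass

@dataclass
class SearchVehiclesInput:
    """Hypothetical input schema for a search_vehicles tool."""
    make: str
    max_monthly_rate: float

    def __post_init__(self) -> None:
        # Meaningful errors help the model correct itself on the next call.
        if not self.make.strip():
            raise ValueError("'make' must be a non-empty string, e.g. 'Toyota'")
        if self.max_monthly_rate <= 0:
            raise ValueError(
                f"'max_monthly_rate' must be positive, got {self.max_monthly_rate}"
            )
```

In practice you would reach for Zod (TypeScript) or a library like Pydantic (Python) rather than hand-rolling this, but the principle is the same: validate at the boundary, and fail with messages an LLM can act on.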

 

Security was another crucial consideration. We implemented proper authorization using the OAuth protocol, with an Authorization Server managing access and the MCP server acting as a Resource Server. We were careful not to expose sensitive data unnecessarily and used caching where appropriate to improve performance. 
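A toy sketch of the resource-server side of that flow follows. A real implementation would validate signed tokens or call the Authorization Server’s introspection endpoint; the in-memory token table here merely stands in for that, and all identifiers and scopes are invented:

```python
# Stand-in for token introspection against the Authorization Server.
ACTIVE_TOKENS = {
    "abc123": {"sub": "client-42", "scope": {"vehicles:read"}},
}

def authorize(headers: dict, required_scope: str) -> dict:
    """Check the bearer token on an incoming request before the MCP
    server (acting as Resource Server) executes any tool call."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        raise PermissionError("Missing bearer token")
    claims = ACTIVE_TOKENS.get(auth.removeprefix("Bearer "))
    if claims is None:
        raise PermissionError("Token not recognised by the Authorization Server")
    if required_scope not in claims["scope"]:
        raise PermissionError(f"Scope '{required_scope}' not granted")
    return claims
```

The useful habit this encodes: authorization is checked once, at the protocol boundary, before any tool logic runs, so no tool ever has to trust the model’s claims about who is asking.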

 

The beauty of MCP revealed itself in its flexibility. The same MCP server we built can be configured in an IDE like Cursor where developers can interact with it directly, integrated into a custom web application with a Chrome extension for natural chat interaction, or used programmatically through an API with an MCP client. In our web application, we even customized the UI based on which tool was called, for example, displaying a comparison view when the AI retrieves leasing options. 
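For the IDE case, pointing an MCP-capable client at a server is typically a small JSON entry like the one below. The server name, command, and environment variable are illustrative, and the exact file location depends on the client (for Cursor it is a project-level `.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "vehicle-leasing": {
      "command": "node",
      "args": ["dist/server.js"],
      "env": { "API_BASE_URL": "https://example.internal/api" }
    }
  }
}
```

The same server binary, unchanged, also sits behind the web application and the API client; only the transport configuration differs.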

MCP vs. Alternatives and the Growing Ecosystem

How does MCP compare to other solutions? AWS Bedrock, for instance, lets you define agent functions (tools) and Knowledge Bases (resources). The advantage there is getting RAG implementation out of the box. With MCP, you define your own chunking strategy, embeddings, and response formatting, which means more work but also more control. 
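To give a taste of what “define your own chunking strategy” means in practice, here is a naive fixed-size chunker with overlap. The sizes are arbitrary placeholders; production strategies usually split on semantic boundaries (sentences, sections) rather than raw character counts:

```python
def chunk(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into fixed-size chunks with overlap, so content that
    straddles a chunk boundary still appears whole in at least one chunk."""
    assert 0 <= overlap < size
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks
```

Owning this layer is the extra work MCP asks of you compared to Bedrock Knowledge Bases, but it also means the chunking, embedding model, and response format are all yours to tune.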

 

The MCP ecosystem is growing rapidly. Official servers exist for GitHub, Slack, Jira, and PostgreSQL, among others. However, the registry isn’t yet centralized or fully trusted, so caution is advised when selecting third-party servers. The MCP roadmap includes promising developments like an official registry with a vetting process, video support (currently not available), and enhanced security features. 

Key Takeaways

The Model Context Protocol represents a significant shift in how we build AI integrations. Instead of custom implementations for each platform, we now have a standardized protocol that promotes reusability and interoperability. From our hands-on experience, the most important considerations are designing tools as discrete, well-defined actions, implementing robust validation and error handling, thinking carefully about authorization and data exposure, leveraging the SDK to handle protocol complexity, and testing across different integration points. 

 

As the protocol matures and the ecosystem expands, we expect MCP to become the standard way of connecting AI applications to external systems. For teams building AI-powered solutions, understanding MCP is no longer optional; it’s essential. 

 

 

***This article is part of the AI9 series, where we walk the talk on AI innovation.*** 

Published: 9 January 2026