The Model Context Protocol (MCP) is an open standard that enables large language models to
interact with external tools and data sources. Unlike traditional API integrations, MCP
provides a standardized interface for AI models to access specialized functionality without
requiring custom implementations for each use case.
MCP addresses a basic limitation: AI models can generate text, but on their own they cannot read files, query databases, or call APIs. It defines a standard way for any model to invoke external functions, so you write a tool once and any MCP-capable model can use it. Tool use of this kind is a prerequisite for automating nearly any real-world task.
How MCP Works
MCP follows a client-server architecture: the AI model runs on the client side and can discover and invoke tools provided by remote MCP servers. The protocol handles tool discovery, parameter validation, and result formatting automatically. This separation allows domain experts to build specialized tools while maintaining compatibility across different AI platforms.
The communication flow is simple:
- Client connects to MCP server
- Server advertises available tools with their schemas to the client
- AI model invokes remote tools based on user input
- Server processes tool requests and returns structured results to the client
- AI model incorporates tool results into the final response
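Under the hood, MCP frames these steps as JSON-RPC 2.0 messages. A rough sketch of what discovery and invocation look like on the wire (the method names follow the MCP specification; the tool itself is a hypothetical example):

```python
import json

# Step 1-2: the client asks the server which tools it offers ("tools/list"),
# and the server advertises each tool with a JSON Schema for its parameters.
discovery_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

discovery_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "anonymize_text",  # hypothetical tool name
                "description": "Mask PII in the given text",
                "inputSchema": {
                    "type": "object",
                    "properties": {"text": {"type": "string"}},
                    "required": ["text"],
                },
            }
        ]
    },
}

# Step 3-4: the model invokes a tool via "tools/call" with arguments that
# conform to the advertised schema; the server returns a structured result.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "anonymize_text",
        "arguments": {"text": "Call me at +1-555-123-4567"},
    },
}

print(json.dumps(call_request, indent=2))
```

Because both sides speak this one wire format, any MCP client can drive any MCP server without custom glue code.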
In the example in this post I implement a simple anonymization tool, exposed by a local MCP server and driven by a locally running large language model (Qwen3). A nice side effect is that the setup not only serves as a privacy engineering MCP demonstrator, but also runs entirely on a local LLM, so no data ever leaves the machine.
Automating anonymization via LLMs
The setup pairs a privacy processing server built with Python/FastMCP with a qwen-agent client.
The server provides two demonstration functions: text anonymization and privacy risk assessment. The anonymization tool applies pattern matching to mask personally identifiable information like email addresses, phone numbers, credit card numbers, and social security numbers. The risk assessment tool calculates a privacy score based on the quantity and sensitivity of detected PII.
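The pattern-matching core of both tools can be sketched in plain Python. The regexes, masks, and scoring weights below are illustrative assumptions, not the exact server code:

```python
import re

# Illustrative PII patterns -- a production system would need far more
# robust detection than these simple regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d{1,2}[-\s]?\d{3}[-\s]?\d{3}[-\s]?\d{4}"),
    "credit_card": re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

MASKS = {
    "email": "@.com",
    "phone": "+XX XXX XXX XXX",
    "credit_card": "XXXX-XXXX-XXXX-XXXX",
    "ssn": "XXX-XX-XXXX",
}

def anonymize(text: str) -> str:
    """Replace every detected PII match with a fixed mask."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(MASKS[kind], text)
    return text

def risk_score(text: str) -> int:
    """Crude 0-10 score: weight each PII hit by sensitivity, capped at 10."""
    weights = {"email": 2, "phone": 2, "credit_card": 4, "ssn": 4}
    hits = sum(len(p.findall(text)) * weights[k] for k, p in PII_PATTERNS.items())
    return min(hits, 10)
```

With FastMCP, functions like these would be registered as tools on the server so the model can discover and call them; the masking and scoring logic itself stays ordinary Python.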
When a user submits a query, the Qwen3 model processes the input, determines which remote tools to call, sends requests to the MCP server, and incorporates the server's responses into the final answer.
The flow is as follows:
- User → issues a query to Qwen3 (LLM)
- Qwen3 → decides to use tools and sends requests to MCP server
- MCP server → processes tools and sends results back to Qwen3
- Qwen3 → incorporates results and responds to user
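The loop above can be sketched without any frameworks. In the actual setup this wiring is handled by qwen-agent and FastMCP; the toy dispatcher below (all names and the scoring rule are hypothetical) just makes the control flow explicit:

```python
# Toy stand-ins for the three parties: model, MCP server, and the round trip.

def server_dispatch(name: str, arguments: dict) -> str:
    """MCP server side: route a tool call to its handler, return the result."""
    tools = {
        "assess_privacy_risk": lambda args: (
            "Privacy Risk Score: 8/10 (Risk: High)"
            if "@" in args["text"]  # stand-in check: treat an email as PII
            else "Privacy Risk Score: 0/10 (Risk: Low)"
        ),
    }
    return tools[name](arguments)

def model_decide(query: str) -> dict:
    """Model side: pick a tool and arguments (a real LLM emits this as JSON)."""
    return {"name": "assess_privacy_risk", "arguments": {"text": query}}

def answer(query: str) -> str:
    """Full round trip: query -> tool decision -> server call -> final answer."""
    call = model_decide(query)
    result = server_dispatch(call["name"], call["arguments"])
    return f'Analyzed "{query}". {result}'

print(answer("kitty 111"))
```

The real system differs only in scale: Qwen3 produces the tool-call JSON, and the MCP protocol carries it to the FastMCP server and the structured result back.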
Example
For the query "kitty 111", the complete interaction shows the local model's reasoning process while calling remote tools:
Query: kitty 111
The local model's internal reasoning:
"kitty 111" could be a username or some text that might contain PII. But it's pretty
short. Let me check if there's any PII here. "Kitty" is a name, but maybe it's a
nickname. "111" could be part of a phone number or a credit card, but it's just three
1s. Not sure. The user might want to check the privacy risk of this text.
Remote tool invocation:
Function Call: {'name': 'privacy-calculate_entropy', 'arguments': '{"text": "kitty 111"}'}
Server tool response:
Privacy Risk Score: 0/10 (Risk: Low)
PII Found: No PII detected
Local model's final response to user:
The text "kitty 111" has been analyzed and contains no personally identifiable
information (PII). Privacy Risk Score: 0/10 (Low Risk). No emails, phone numbers, credit cards, or SSNs were detected.
Or:
Query: Hi, I'm Cecille. Email me at cecille.noname@europa.eu or call +1-555-123-4567. My credit card is 4532-1234-5678-9012
Answer: Hi, I'm Cecille. Email me at @.com or call +XX XXX XXX XXX. My credit card is XXXX-XXXX-XXXX-XXXX
[The anonymization patterns are basic and only handle common PII types like emails, phones, and credit cards - production systems would need more comprehensive detection]
Practical benefits for local deployments
MCP and local LLMs make a great tandem, offering significant advantages for privacy-sensitive (or national security...) applications:
- Separation of AI reasoning from domain-specific processing
- Standardized tool discovery and invocation
- Local deployment for sensitive operations
- Verifiable tool results, rather than hallucinated model output
Conclusion
MCP represents a practical approach to extending local AI capabilities, particularly for privacy and data protection applications. Naturally, this simple example can be greatly extended and tailored towards production-grade environments, though a production deployment would require a different, hardened configuration.
P.S. Looking for a privacy architect, engineer, a DPO, or a consultant? Contact me at me@lukaszolejnik.com