---
title: "How to Build Custom AI Tools in Drupal (The MCP Plugin Guide)"
date: "2026-03-31T13:35:22+00:00"
summary:
image:
type: "article"
url: "/acquia-cloud-platform/help/96886-how-build-custom-ai-tools-drupal-mcp-plugin-guide"
id: "ac89412c-7163-4e5f-8570-0b48d01ce793"
---

Background
----------

The Model Context Protocol (MCP) is an open standard that allows AI agents to discover and invoke external tools at runtime. For a Drupal site, this means your CMS is no longer just a content repository; it becomes an active participant in an AI’s reasoning process.

An AI agent on its own can reason, but it lacks access to your private data, business logic, or real-time state. By implementing MCP tools in Drupal, you provide the "hands" the AI needs to interact with your system securely and predictably. To get started with the foundation of this integration, you can explore the official guide on [implementing MCP for Drupal](/acquia-cloud-platform/help/96496-bridging-drupal-and-ai-developers-guide-model-context-protocol-mcp "Bridging Drupal and AI: A Developer’s Guide to the Model Context Protocol (MCP)").

The "Bridge" Architecture
-------------------------

The most effective way to build MCP tools in Drupal is to follow a Service-Oriented Architecture (SOA). We treat the MCP layer as a thin communication bridge between the AI and your internal Drupal services.

1.  ### The Thin Plugin (The Interface)
    
    In Drupal, MCP tools are implemented as Plugins. This layer should be kept "thin." Its only responsibilities are:
    
    *   **Schema Definition:** Providing a JSON Schema that describes the tool’s purpose and arguments to the AI.
    *   **Input Validation:** Ensuring the arguments sent by the LLM match the expected types.
    *   **Delegation:** Passing the validated data to a dedicated Drupal Service.
2.  ### The Service Layer (The Logic)
    
    This is where the heavy lifting happens. By keeping the business logic in a standard Drupal Service (registered in `services.yml`), you ensure that the code is:
    
    *   **Testable:** You can run unit tests on the logic without involving the AI.
    *   **Reusable:** The same logic can be used by a REST API, a Drush command, or a standard Drupal Controller.
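
As an illustration, registering the service layer is plain Drupal configuration. This is a minimal sketch: the module name `my_module`, the service ID `my_module.my_service`, the class `MyService`, and the injected dependency are all placeholders to adapt to your own logic.

```yaml
# my_module.services.yml
services:
  my_module.my_service:
    class: Drupal\my_module\Service\MyService
    arguments:
      - '@entity_type.manager'   # Example dependency; inject whatever the logic needs.
```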

Step-by-Step Guide: Creating a New MCP Plugin
---------------------------------------------

The `drupal/mcp` module uses a plugin manager to discover all classes annotated with the `#[Mcp]` attribute. When an AI agent connects, the manager uses `getTools()` to build the tool manifest and routes calls through `executeTool()`.

1.  **Create the Plugin File and Structure**
    
    The key rule for maintaining a future-proof interface is to **keep the plugin thin**: Validate → call service → format output. Everything else belongs in the service.
    
        my_module/
        ├── my_module.info.yml         (declare mcp:mcp as dependency)
        ├── my_module.services.yml     (register your business logic service)
        └── src/
            ├── Plugin/
            │   └── Mcp/
            │       └── YourPlugin.php   (the MCP plugin: thin wrapper)
            └── Service/
                └── MyService.php      (business logic goes here)
    
2.  **Implement the Plugin Class**
    *   **Create the plugin file:** Place it at `src/Plugin/Mcp/YourPlugin.php` in your custom module.
    *   **Add the `#[Mcp]` attribute:** Discovery is automatic via this attribute.
        
            #[Mcp(
              id: 'your-plugin-id',          // unique, kebab-case
              name: new TranslatableMarkup('Human Name'),
              description: new TranslatableMarkup('What this plugin does.'),
            )]
            class YourPlugin extends McpPluginBase {
              // ...
            }
        
    *   **Extend `McpPluginBase` and inject services via `create()`:**
        
            public static function create(ContainerInterface $container, ...): static {
              $instance = parent::create(...);
              $instance->myService = $container->get('my_module.my_service');
              return $instance;
            }
        
3.  **Declare and Execute Tools**
    
    *   **Declare tools in `getTools()`:** Return one or more `Tool` objects, defining the name, description, and the essential `inputSchema`.
        
    *   **Implement `executeTool()`:**
        
        *   Check the tool ID matches (using `sanitizeToolName()`).
            
        *   Validate the required arguments.
            
        *   Call your service (`$this->myService->doLogic($args)`).
            
        *   Return a structured response: `['structuredContent' => [...], 'content' => [['type' => 'text', 'text' => '...']]]`.
            
4.  **Final Steps for Deployment and Testing**
    
    *   **Clear cache:** `ddev drush cr` (the plugin manager needs to rediscover the new class).
    *   **Test with Drush:** Verify your service logic and plugin integration without expensive LLM calls. This is a crucial verification step.
        
            ddev drush php-eval '
            $plugin = \Drupal::service("plugin.manager.mcp")->createInstance("your-plugin-id", []);
            $result = $plugin->executeTool("your_tool_name", ["arg" => "value"]);
            print_r($result);
            '
            
        
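Putting the steps together, the skeleton below shows how the pieces line up. It is a condensed sketch, not a drop-in plugin: the exact namespaces, constructor signatures, and the `Tool` value object depend on your version of the `drupal/mcp` module, and names such as `YourPlugin`, `my_module.my_service`, `doLogic()`, and `echo_text` are placeholders.

```php
<?php

namespace Drupal\my_module\Plugin\Mcp;

use Drupal\Core\StringTranslation\TranslatableMarkup;
use Drupal\mcp\Attribute\Mcp;
use Drupal\mcp\Plugin\McpPluginBase;
use Drupal\mcp\ServerFeatures\Tool;
use Symfony\Component\DependencyInjection\ContainerInterface;

#[Mcp(
  id: 'your-plugin-id',
  name: new TranslatableMarkup('Example Tools'),
  description: new TranslatableMarkup('Demonstrates the thin-plugin pattern.'),
)]
class YourPlugin extends McpPluginBase {

  protected $myService;

  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition): static {
    $instance = parent::create($container, $configuration, $plugin_id, $plugin_definition);
    // Pull the business-logic service out of the container; the plugin
    // itself stays thin.
    $instance->myService = $container->get('my_module.my_service');
    return $instance;
  }

  public function getTools(): array {
    return [
      new Tool(
        name: 'echo_text',
        description: 'Returns the supplied text. Use when the user asks to echo a value.',
        inputSchema: [
          'type' => 'object',
          'properties' => [
            'text' => ['type' => 'string', 'description' => 'The text to echo back.'],
          ],
          'required' => ['text'],
        ],
      ),
    ];
  }

  public function executeTool(string $toolId, mixed $arguments): array {
    // sanitizeToolName() normalizes the incoming tool identifier.
    if ($this->sanitizeToolName($toolId) !== 'echo_text') {
      throw new \InvalidArgumentException("Unknown tool: $toolId");
    }
    // Validate the arguments the LLM sent before trusting them.
    if (empty($arguments['text']) || !is_string($arguments['text'])) {
      throw new \InvalidArgumentException('The "text" argument is required.');
    }
    // Delegate to the service; the plugin only validates and formats.
    $result = $this->myService->doLogic($arguments);
    return [
      'structuredContent' => $result,
      'content' => [['type' => 'text', 'text' => json_encode($result)]],
    ];
  }

}
```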

![Flowchart of Generalised Drupal MCP Tool Execution, detailing interactions between AI Agent, MCP Server, Plugin, Service Layer, and External Data Source.](https://acquia.widen.net/content/79e3d3ef-7f64-4bad-8852-f4f977bbab2d/web/Drupal-mcp-server-tool-execution-flow.jpg?w=720&itok=ss5YUQsL)

Case Study Example: Semantic Search Tool
----------------------------------------

While the pattern applies to any tool (e.g., checking order status, creating a node, or fetching user profile data), a common use case is Semantic Search using a Vector Database.

In this scenario, we use the architecture to bridge an AI agent with a vector engine (like AWS Bedrock or Pinecone):

1.  **The AI requests a search:** The agent invokes the `vector_search` tool with a natural language query.
2.  **Plugin processing:** The MCP plugin validates the query and calls the `SearchService`.
3.  **Vector retrieval:** The service converts the query into an embedding, fetches the top matches from the vector DB, and, crucially, performs a reranking pass so that the most relevant context is returned.
4.  **Structured response:** The service returns a clean array of results, which the plugin formats into a JSON response for the AI to read.
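
The service side of this case study might look like the sketch below. `EmbeddingClientInterface` and `VectorStoreInterface` are hypothetical abstractions standing in for whichever backend (AWS Bedrock, Pinecone, etc.) you use, and the reranking step is only stubbed:

```php
<?php

namespace Drupal\my_module\Service;

/**
 * Business logic for the vector_search MCP tool (illustrative sketch).
 */
class SearchService {

  public function __construct(
    protected EmbeddingClientInterface $embedder,
    protected VectorStoreInterface $vectorStore,
  ) {}

  public function search(string $query, int $limit = 5): array {
    // 1. Convert the natural-language query into an embedding vector.
    $vector = $this->embedder->embed($query);

    // 2. Fetch a larger candidate set from the vector database.
    $candidates = $this->vectorStore->query($vector, $limit * 4);

    // 3. Rerank: a real implementation would score candidates against the
    //    original query with a reranking model; sorting by the retrieval
    //    score is only a placeholder.
    usort($candidates, fn($a, $b) => $b['score'] <=> $a['score']);

    // 4. Return a clean, structured array for the plugin to format.
    return array_map(fn($hit) => [
      'title' => $hit['title'],
      'url' => $hit['url'],
      'snippet' => $hit['text'],
    ], array_slice($candidates, 0, $limit));
  }

}
```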

Key Implementation Lessons
--------------------------

### Write Schemas for the AI

The `description` field in your tool schema is the "documentation" the LLM reads at runtime.

*   **Bad:** `"id": "The entity ID."`
*   **Good:** `"id": "The unique UUID of the article you wish to retrieve. Use this when the user asks for a specific document by name."`
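
In a full `inputSchema`, those descriptions sit on each property. A hypothetical example (the `limit` property is illustrative):

```json
{
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "description": "The unique UUID of the article you wish to retrieve. Use this when the user asks for a specific document by name."
    },
    "limit": {
      "type": "integer",
      "description": "Maximum number of results to return. Defaults to 5; raise it only when the user explicitly asks for more.",
      "default": 5
    }
  },
  "required": ["id"]
}
```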

### Structured vs. Text Content

Always return data in a **structured format** (JSON). While most MCP clients can handle plain text, providing structured objects allows the AI to parse specific fields (like URLs or IDs) more reliably, reducing "hallucinations."
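
For example, a response can carry the structured payload alongside a text fallback. This fragment assumes the response shape shown in the step-by-step guide above; the field names are illustrative:

```php
// Structured payload: the AI can pick out fields like 'url' reliably.
$results = [
  ['title' => 'Getting started', 'url' => '/docs/start', 'score' => 0.92],
];

return [
  // Machine-readable content for clients that support it.
  'structuredContent' => ['results' => $results],
  // Plain-text fallback for clients that only render text.
  'content' => [
    ['type' => 'text', 'text' => json_encode(['results' => $results])],
  ],
];
```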

### Performance & Token Usage

AI agents perform better when given full context rather than snippets. When returning data from a tool, avoid aggressive truncation. It is better to return three high-quality, full paragraphs than ten one-sentence snippets.

Conclusion
----------

Building MCP tools in Drupal is a strategic shift toward Agentic Workflows. By keeping your plugins thin and your services robust, you create a future-proof interface that allows any AI agent to understand and interact with your Drupal ecosystem.