In this blog, I will explain in simple terms:
- What MCP is
- How MCP works in enterprise systems
- How we can build an MCP solution for Oracle JD Edwards (JDE)
- The architecture using OCI Generative AI, VBCS, OIC, and a custom MCP server
- The two execution modes: SQL Mode vs AIS Mode
What is MCP?
MCP (Model Context Protocol) is an open protocol that allows AI models (such as LLMs) to interact safely with real business systems.
Instead of letting the AI access databases or APIs directly, MCP introduces a structured approach:
- The AI does not execute logic directly
- The AI chooses a capability (tool/service)
- A backend server (MCP Server) executes the request safely
Simple MCP Concept
AI is the brain → MCP Server is the hands → Enterprise system is the data
Flow:
- User asks a question
- AI understands the request
- AI selects a service
- MCP server executes it
- Result is returned
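The flow above can be sketched in a few lines of Python. The tool names and the `select_tool` heuristic are illustrative stand-ins (a real implementation would let the LLM choose the tool); the point is that the model only picks a capability, while the MCP server is the only component that executes anything.

```python
# Minimal sketch of the MCP flow: the model picks a tool, the server runs it.
# Tool names and the select_tool heuristic are illustrative, not a real LLM.

TOOLS = {
    "get_sales_order": lambda q: {"order": 12345, "status": "Shipped"},
    "get_item_stock":  lambda q: {"item": "WIDGET", "on_hand": 40},
}

def select_tool(question: str) -> str:
    """Stand-in for the LLM step: map the question to a registered tool."""
    return "get_item_stock" if "stock" in question.lower() else "get_sales_order"

def mcp_execute(question: str) -> dict:
    """The MCP server executes only tools it knows; the model never touches data."""
    tool = select_tool(question)
    return TOOLS[tool](question)
```

Because every capability must be registered in `TOOLS`, the AI can never reach data or logic outside the catalogue the server exposes.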
Implementing MCP for JD Edwards
- Create a Chatbot UI Using VBCS
Build a chatbot user interface in VBCS where users can ask questions in natural language and receive responses.
AIS Mode Authentication:
When the user selects AIS mode, authentication with JD Edwards is required. The user must enter their JDE username and password.
The MCP server will use these credentials to securely invoke JD Edwards Orchestrator services and standard AIS APIs on behalf of the user.
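As a sketch of this pass-through authentication, the MCP server can exchange the user's credentials for an AIS token and reuse that token on subsequent orchestration calls. The endpoint path, payload fields, and response shape below follow the common AIS `tokenrequest` convention, but verify them against your JDE tools release.

```python
# Sketch of AIS authentication pass-through: the MCP server exchanges the
# user's JDE credentials for an AIS token, then calls services on their behalf.
# Endpoint path ("/jderest/v2/tokenrequest") and response field names are
# assumptions based on the usual AIS REST API shape; confirm for your release.
import json
import urllib.request

def build_token_request(ais_base: str, username: str, password: str) -> urllib.request.Request:
    """Build the POST request that asks the AIS server for a session token."""
    payload = {"username": username, "password": password, "deviceName": "mcp-server"}
    return urllib.request.Request(
        url=f"{ais_base}/jderest/v2/tokenrequest",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ais_login(ais_base: str, username: str, password: str) -> str:
    """Send the token request and return the AIS token for later calls."""
    with urllib.request.urlopen(build_token_request(ais_base, username, password)) as resp:
        return json.load(resp)["userInfo"]["token"]
```

Keeping the credential exchange inside the MCP server means the LLM and the chatbot UI never see or store a JDE session token.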
- Create OIC integration
Build an OIC integration that receives the user’s question and forwards it to the MCP server running on-premise.
- Build MCP Server
Develop an MCP server using Python or Java that orchestrates communication between the LLM, enterprise services, and backend systems.
Responsibilities:
- Receive the user’s question from OIC
- Build a prompt using available service definitions from the metadata configuration file and send it to OCI GenAI to identify the required service
- Receive the selected service from OCI GenAI
- Generate another prompt to prepare the service request payload based on the user’s question
- Invoke the corresponding service/API defined in the metadata configuration file
- Receive the response from the service/API
- Send the response to OCI GenAI to generate a user-friendly message
- Return the final response to OIC, which then sends it back to the VBCS application
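The responsibilities above boil down to a small pipeline. In the sketch below, `genai` stands in for an OCI Generative AI chat call and `invoke` for the execution layer (AIS orchestration or SQL); both are passed in as callables so you can wire in the real clients. Prompt wording and the metadata shape are illustrative assumptions.

```python
# Sketch of the MCP server's request pipeline. genai() is a stand-in for an
# OCI Generative AI call; invoke() is a stand-in for the AIS/SQL executor.
def handle_question(question: str, metadata: dict, genai, invoke) -> str:
    # 1) Ask the LLM which service fits, given the metadata catalogue.
    catalogue = "\n".join(f"{name}: {svc['description']}" for name, svc in metadata.items())
    service = genai(f"Services:\n{catalogue}\n\nQuestion: {question}\nReturn the service name only.")

    # 2) Ask the LLM to build the request payload for that service.
    payload = genai(f"Build the JSON payload for {service} "
                    f"using schema {metadata[service]['inputs']} and question: {question}")

    # 3) Execute via the backend, then ask the LLM to phrase the raw result.
    result = invoke(service, payload)
    return genai(f"Summarise this result for the user: {result}")
```

Note that the LLM is consulted three times (select, build payload, summarise) but never executes anything itself; only `invoke` touches JD Edwards.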
Note:
You can implement the prompt handling logic in two ways:
- Use OIC to build the prompts, invoke the OCI GenAI service, and send the results to the MCP server for execution
- Implement the full prompt generation and GenAI interaction directly within the MCP server
Both approaches are valid, depending on your architecture, security, and design preferences.
- MCP Metadata File
The MCP metadata file provides the necessary context to help OCI GenAI generate accurate SQL statements or select the correct AIS APIs based on the user’s question.
Details:
- To achieve accurate responses from GenAI, the prompt must include all relevant metadata and context
- The more detailed and clear the metadata is, the better the LLM can generate correct results
SQL Mode:
- Provide database table names
- Include column names, data types, and descriptions
This enables the LLM to generate accurate SQL SELECT statements based on the user’s question.
AIS Mode:
- List all available JD Edwards orchestrations that the chatbot can access
- Include the required payload structure for each orchestration
- Provide a clear description of each service
This structured metadata ensures that GenAI can correctly identify the required service and generate valid requests.
Example of metadata file for AIS and SQL:
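A sketch of what such a metadata file could contain is shown below (as a Python dict for readability; in practice you would store it as JSON or YAML). The table F4211 and its column aliases are standard JDE sales-order names, but the descriptions and the orchestration name are illustrative examples only.

```python
# Illustrative MCP metadata covering both modes. The orchestration name and
# all descriptions are examples; adapt to the objects your chatbot may access.
MCP_METADATA = {
    "sql": {
        "tables": {
            "F4211": {
                "description": "Sales Order Detail",
                "columns": {
                    "SDDOCO": {"type": "NUMBER",  "description": "Order number"},
                    "SDLITM": {"type": "VARCHAR", "description": "Item number"},
                    "SDUORG": {"type": "NUMBER",  "description": "Quantity ordered"},
                },
            }
        }
    },
    "ais": {
        "orchestrations": {
            "ORCH_GetItemAvailability": {
                "description": "Returns on-hand quantity for an item and branch",
                "inputs": {"ItemNumber": "string", "BranchPlant": "string"},
            }
        }
    },
}
```

Whatever format you choose, the key point is that this file is the single catalogue of what the chatbot is allowed to query or invoke, which makes it the natural place to enforce scope.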
Conclusion
MCP (Model Context Protocol) provides a secure and structured way to integrate AI models with enterprise systems without exposing sensitive data or allowing direct access. By separating decision-making from execution, MCP ensures that AI models focus on understanding user intent, while backend systems handle the actual processing in a controlled and governed manner.
This approach not only improves security but also enhances scalability and maintainability, as all business logic, validations, and integrations remain centralized within the MCP server and enterprise services.
Summary
In this architecture, the user interacts with a chatbot UI (VBCS) using natural language. The request is routed through OIC to the MCP server, which collaborates with OCI GenAI to understand the intent, identify the required service, and prepare the necessary request.
The MCP server then securely invokes backend systems such as JD Edwards Orchestrator or database queries, processes the response, and uses GenAI again to generate a user-friendly output. Finally, the response is returned to the user through the same flow.
Overall, MCP enables organizations to leverage the power of AI while maintaining full control, security, and governance over their enterprise systems.
