Imixs-AI – LLM Tool Calling
With the latest version of Imixs-AI we just shipped a new feature extending BPMN with AI: Tool Calling support for the OpenAI API. This powerful new feature introduces a fundamental shift in what an LLM can do inside a workflow application.
From answering questions to starting business processes
Until now, an LLM integrated into Imixs could analyze data, generate text, evaluate conditions, and assist users with information. All of that is great. But the LLM was always just talking. It could tell you “you should start a vacation request” — but it couldn’t actually do it. Tool Calling changes this completely.
With Tool Calling, the LLM no longer just responds with text. It can respond with a structured request to execute a function in your application. Consider this scenario:
A user types into the workflow application:

"I need next week off."

Imixs-AI now doesn’t explain how to submit a vacation request. It doesn’t ask for clarification. It fires a tool call:
{
"finish_reason": "tool_calls",
"tool_calls": [{
"function": {
"name": "load_skill",
"arguments": "{\"process_id\": \"urlaubsantrag\"}"
}
}]
}

Now a business application based on Imixs-AI can handle this response and start the correct BPMN process — all from a single natural language sentence. No menus, no navigation, no searching. Just say what you want and it happens.
How it works
The ImixsAIContextHandler now supports defining functions per request via the new addFunction() method. Functions are intentionally not persisted as part of the conversation context — they are set dynamically before each request, typically loaded from the current BPMN process context:
contextHandler.addFunction(
"load_skill",
"Loads details about an available BPMN workflow process",
"""
{
"type": "object",
"properties": {
"process_id": {
"type": "string",
"description": "The ID of the BPMN process"
}
},
"required": ["process_id"]
}
""");

You control how aggressively the LLM uses tools with setToolChoice():
contextHandler.setToolChoice("auto"); // LLM decides (default)
contextHandler.setToolChoice("required"); // LLM must call a tool
contextHandler.setToolChoice("none"); // No tool calls allowedAnd tool results flow back into the conversation with addToolResult() — keeping the full context intact for long-running workflow processes that can span days, weeks, or even months:
contextHandler.addToolResult(toolCallId, "Process details: start date, end date required.");

New CDI Event: ImixsAIToolCallEvent
When the LLM responds with a tool call, the OpenAIAPIService fires a CDI event of type ImixsAIToolCallEvent. This follows the exact same pattern already established by ImixsAIPromptEvent and ImixsAIResultEvent — clean, loosely coupled, and immediately familiar to every Imixs developer:
@ApplicationScoped
public class WorkflowToolCallObserver {
@Inject
WorkflowService workflowService;
public void onToolCall(@Observes ImixsAIToolCallEvent event) {
if ("load_skill".equals(event.getToolName())) {
String processId = event.getArguments().getString("process_id");
String skillContent = workflowService.loadSkill(processId);
event.setResult(skillContent);
}
}
}

Want to add a SQL observer? A document search function? A custom API integration? Just implement an observer. The architecture stays clean regardless of how far you take it.
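For example, a document search tool could be handled by a second observer. This is only a minimal sketch: the tool name search_documents, its query argument, and the injected MyDocumentSearchService are hypothetical placeholders for your own application code (imports omitted as in the example above):

@ApplicationScoped
public class DocumentSearchToolObserver {

    @Inject
    MyDocumentSearchService searchService; // hypothetical application service

    public void onToolCall(@Observes ImixsAIToolCallEvent event) {
        // only react to the tool this observer is responsible for
        if ("search_documents".equals(event.getToolName())) {
            String query = event.getArguments().getString("query");
            // hand the search result back to the LLM as the tool result
            event.setResult(searchService.search(query));
        }
    }
}

Each observer only checks the tool name it is responsible for; everything else is plain CDI.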
New method in OpenAIAPIService
The new processToolCallResult() method handles the full parsing of a tool call response: it checks the finish_reason, extracts all tool calls, fires the CDI events, collects the results, and adds everything back into the conversation context — ready for the next LLM request.
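In terms of the OpenAI chat format, the conversation context after such a round trip looks roughly like this (the tool call id is illustrative; the exact internal representation is managed by the ImixsAIContextHandler):

[
  { "role": "user", "content": "I need next week off." },
  { "role": "assistant", "tool_calls": [{
      "id": "call_1",
      "type": "function",
      "function": { "name": "load_skill", "arguments": "{\"process_id\": \"urlaubsantrag\"}" }
  }] },
  { "role": "tool", "tool_call_id": "call_1",
    "content": "Process details: start date, end date required." }
]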
Security by design — not by accident
The AI agent space right now is full of demos showing LLMs with unrestricted access to shell commands, databases, and arbitrary APIs. We deliberately took a different path.
In Imixs-AI, all tool calls are routed through CDI observers that execute within the existing Imixs Workflow security context. Every tool call is subject to the workflow engine’s permission model. The agent can only do what the current user is allowed to do. There are no accidental security holes — only conscious architectural decisions.
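In practice this means a tool call observer simply delegates to the workflow engine and lets its ACL model decide. A minimal sketch, assuming a hypothetical start_process tool and placeholder model, task, and event ids (imports omitted):

@ApplicationScoped
public class StartProcessToolObserver {

    @Inject
    WorkflowService workflowService;

    public void onToolCall(@Observes ImixsAIToolCallEvent event) {
        if ("start_process".equals(event.getToolName())) {
            ItemCollection workitem = new ItemCollection()
                    .model("1.0").task(100).event(10); // placeholder ids
            try {
                // runs in the current user's security context -
                // the engine's permission model decides if this is allowed
                workflowService.processWorkItem(workitem);
                event.setResult("Process started.");
            } catch (AccessDeniedException e) {
                // the LLM cannot bypass the workflow ACLs
                event.setResult("Not permitted for the current user.");
            } catch (Exception e) {
                event.setResult("Processing failed: " + e.getMessage());
            }
        }
    }
}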
This is what makes this feature production-ready, not just demo-ready.
Try it out
The feature is available now in the imixs-ai-workflow module. Check out the README for the full documentation and get started.
We are just getting started with what Tool Calling makes possible inside Imixs. Stay tuned.
