Platform-Provided LLM, Mistral AI Integration, and More with Cognigy.AI v4.99

Nhu Ho · April 24, 2025

The latest release delivers substantial GenAI enhancements across the board, including out-of-the-box LLM availability for design-time features, native support for the Model Context Protocol (MCP), and turnkey integration with Mistral AI.

Platform-Provided LLM for Agent Development without Extra Licensing 

Cognigy.AI offers various LLM-powered design-time features to help you accelerate agent development with Generative AI. These include the auto-generation of Adaptive Cards, Flows, Lexicons, and Intent example sentences.

Starting with v4.99, you can choose to run these capabilities on our new built-in LLM, eliminating additional licensing fees and reducing setup complexity to expedite design cycles from day one.

Select Platform-provided LLM in Generative AI Settings to activate it for all design-time features.

Upskill AI Agents with the Model Context Protocol (MCP)

Introduced by Anthropic, MCP is an open, universal protocol for connecting AI systems with data sources. In other words, it standardizes the communication between AI (models and agents) and external systems.

Cognigy has always championed a model-agnostic, integration-friendly approach to LLM orchestration. The latest release doubles down on that commitment, as we adopt MCP as the industry-standard method for tool and data integration.

How it works

Cognigy.AI can now act as an MCP client, allowing your AI Agents to interface with any MCP-compliant servers directly. To enable this:

  1. Add an MCP Tool node to your agent.
  2. Provide the URL of an SSE (Server-Sent Events) endpoint on the remote MCP server.
  3. Optionally whitelist or blacklist specific tools and define the conditions under which an action may execute.
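The optional filtering in step 3 can be pictured as a simple allow/deny pass over the tool names a remote MCP server advertises. The helper below is an illustrative sketch of that idea only; it is not Cognigy's implementation or API:

```python
def filter_tools(advertised, whitelist=None, blacklist=None):
    """Return the MCP tools the agent may call, sorted by name.

    advertised: tool names exposed by the remote MCP server.
    whitelist:  if given, only these tools are considered at all.
    blacklist:  tools that are always excluded.
    """
    allowed = set(advertised) if whitelist is None else set(advertised) & set(whitelist)
    if blacklist:
        allowed -= set(blacklist)
    return sorted(allowed)

# Example: a server exposes three tools; we block the one that writes data.
tools = ["search_orders", "update_order", "get_invoice"]
print(filter_tools(tools, blacklist=["update_order"]))
# ['get_invoice', 'search_orders']
```

In practice the execution conditions from step 3 would be evaluated per call on top of such a filter, so a tool can be visible to the agent yet still gated at runtime.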

Read this blog post to learn more about MCP

Harness Mistral AI for Data Sovereignty 

Cognigy.AI v4.99 introduces native integration with Mistral AI, the latest addition to our robust LLM orchestration layer. 

Mistral markedly differentiates itself through an open-source approach. Many of its models are released under the permissive Apache 2.0 license, which allows free commercial use, modification, and distribution. Businesses can self-host, fine-tune, and fully control the deployment, making it a strong contender in privacy-conscious and regulated sectors.

As regulations such as the EU AI Act emerge and take effect, Mistral’s openness allows organizations to adapt flexibly and remain compliant within an increasingly complex global regulatory landscape.

Other Improvements

Cognigy.AI

Cognigy Voice Gateway

  • Added the Enable Immediate Hangup parameter to the Hang Up Node. Activating this parameter immediately ends the call when a Hang Up Node is triggered, bypassing active operations. This parameter is useful for automatically ending calls when an answering machine is detected.
  • Improved api-server by adding support for the nova-3 model provided by Deepgram. You can now specify this model in the following Nodes:
    • Set Session Config Node. Go to the Recognizer (STT) settings, select Deepgram as the vendor, then select Custom from the Deepgram Model list
    • Nodes with Set Activity Parameters and Voice Gateway channel, such as Say, Question, and Optional Question
    • AI Agent Node. Go to the Voice settings, select Deepgram as the vendor, then select Custom from the Deepgram Model list. Alternatively, you can specify the model in the AI Agent wizard
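Conceptually, the steps above resolve to a recognizer configuration in which the vendor is Deepgram, the model list entry is "Custom", and the custom value names the nova-3 model. The snippet below illustrates that resolution logic; the key names are hypothetical and do not reflect Cognigy's actual Session Config schema:

```python
# Hypothetical sketch only: key names are illustrative, not Cognigy's schema.
session_config = {
    "recognizer": {
        "vendor": "deepgram",    # Deepgram selected as the STT vendor
        "model": "custom",       # "Custom" chosen from the Deepgram Model list
        "customModel": "nova-3", # the Deepgram model identifier to use
    }
}

def stt_model(config):
    """Return the effective STT model name for a session config."""
    rec = config["recognizer"]
    return rec["customModel"] if rec["model"] == "custom" else rec["model"]

print(stt_model(session_config))
# nova-3
```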

Cognigy Webchat

  • Added parameters for activating the deletion of the current conversation or all conversations to the Webchat v3 Endpoint settings
  • Added support for progressive rendering of mixed text, AI streams, and rich messages in order, with updated quick reply behavior and full backwards compatibility
  • Updated Webchat v3 to v3.18.0

Cognigy Insights

  • Changed the pre-aggregation error notification so that it doesn't block the loading of other charts

For further information, check out our complete Release Notes here.
