April 2026

Version 3.32.0

Released on April 1st, 2026

✨ New Features & Enhancements

New: Voice Mode for Webchat Integration

Introduction of a dedicated Voice Mode within the Webchat configuration. This feature allows assistants to engage in interactive voice conversations.

  • Select Voice Assistant: Choose a specific neural voice model for the integration.
  • Welcome Manager: Configure a custom spoken greeting that plays when the voice session starts.
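
As an illustration, a voice-enabled Webchat configuration might take a shape like the following. All field names and values here are assumptions for demonstration, not the platform's actual schema:

```typescript
// Hypothetical shape of a Webchat Voice Mode configuration.
// Field names are illustrative assumptions, not the real schema.
interface VoiceModeConfig {
  enabled: boolean;
  voiceAssistant: string; // selected neural voice model
  welcomeMessage: string; // spoken greeting played when the session starts
}

const voiceConfig: VoiceModeConfig = {
  enabled: true,
  voiceAssistant: "neural-voice-1",
  welcomeMessage: "Hi! How can I help you today?",
};
```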

New: Base Model Customization in Realtime Mode

Addition of the ability to modify the base model when using the Realtime (Chat Completion) API mode within LLM Proxy. This provides greater flexibility for selecting and swapping underlying models while maintaining low-latency streaming interactions.
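A minimal sketch of what a per-request base-model override could look like; the request shape, parameter names, and model identifier are assumptions for illustration, not the LLM Proxy's actual API:

```typescript
// Illustrative Realtime (Chat Completion) request with a user-selected
// base model. Field names are hypothetical, not the proxy's real contract.
interface RealtimeRequest {
  mode: "chat-completion";
  baseModel: string; // now swappable per the release note
  stream: boolean;   // keep low-latency streaming while changing models
  messages: { role: string; content: string }[];
}

function buildRealtimeRequest(baseModel: string, userText: string): RealtimeRequest {
  return {
    mode: "chat-completion",
    baseModel,
    stream: true,
    messages: [{ role: "user", content: userText }],
  };
}
```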

🛡️ Platform & Engine Reliability

  • **Improved: Execution Timeout Calibration**
    1. Reduction of the Node.js code execution timeout to 60s (previously 90s/120s) to optimize resource allocation. A notification message has been added to the Node.js editor to inform users of this constraint during the save process.
    2. Adjustment of the Zeus engine timeout to 8s for classical NLU operations, ensuring faster response times.
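
As a minimal sketch, an execution ceiling like the new 60s limit can be enforced with a timeout wrapper around the user code's promise. The 60 000 ms constant mirrors the release note; the wrapper itself is an illustrative pattern, not the platform's implementation:

```typescript
// Reject any piece of work that runs past the configured ceiling.
const NODE_EXECUTION_TIMEOUT_MS = 60_000; // new 60s limit from this release

function withTimeout<T>(work: Promise<T>, ms: number = NODE_EXECUTION_TIMEOUT_MS): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Execution exceeded ${ms} ms timeout`)),
      ms,
    );
    work.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); },
    );
  });
}
```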

🐞 Bug Fixes

  • Fixed: Intent Prioritization in Hybrid NLU & Microbot Architectures

    Resolution of intent selection conflicts when the same intent exists across multiple microbots or the masterbot. The Hybrid NLU engine now follows a strict prioritization logic:

    1. Contextual Priority: Intents matching the current active context are prioritized.
    2. Response Linkage: Intents associated with a defined bot response are favored over unlinked intents.
    3. Local Priority: Preference is given to intents defined in the masterbot or the current microbot, ensuring conversational consistency.
  • Fixed: Hybrid NLU Embedding Configuration Synchronization

    Resolution of a persistence issue where the Hybrid NLU engine would continue to use legacy embedding configurations after the linked AI Deployment model was updated. The system now ensures that any change to the deployment model is automatically reflected in the Hybrid NLU’s vectorization process, maintaining consistency across the RAG pipeline.
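
The three-step prioritization described in the intent-selection fix above can be sketched as a simple scoring function. Types and field names are illustrative assumptions, not the Hybrid NLU engine's internals:

```typescript
// Candidate intent with the three attributes the prioritization rules inspect.
interface IntentCandidate {
  name: string;
  matchesActiveContext: boolean; // rule 1: contextual priority
  hasLinkedResponse: boolean;    // rule 2: response linkage
  isLocal: boolean;              // rule 3: masterbot / current microbot
}

// Higher score wins; rule 1 outranks rule 2, which outranks rule 3.
function pickIntent(candidates: IntentCandidate[]): IntentCandidate | undefined {
  const score = (c: IntentCandidate) =>
    (c.matchesActiveContext ? 4 : 0) +
    (c.hasLinkedResponse ? 2 : 0) +
    (c.isLocal ? 1 : 0);
  return [...candidates].sort((a, b) => score(b) - score(a))[0];
}
```

Encoding the rules as weighted bits (4, 2, 1) guarantees the strict ordering: a contextual match always beats any combination of the two lower-priority rules.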