With Multimodal Capabilities, your AI Agent can now process and respond to audio messages, images, and documents sent by users over WhatsApp, unlocking powerful new automation use cases like reading receipts, CVs, invoices, and more.
IMPORTANT:
Responding to WhatsApp audio messages is enabled by default and has no extra cost.
Sticker processing is not supported; your agent will ignore incoming stickers.
✨ What this feature does
When enabled, your Agent can:
Understand the content of incoming images (e.g. photos of receipts, IDs, or handwritten notes)
Read and extract text from PDF files and documents
Access both the media URL and OCR analysis during the conversation
Use this information to route flows, capture data, or take action automatically
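To make the last two points concrete, here is a minimal sketch of the two pieces of information the agent can work with for an incoming file. The interface and field names below are assumptions made for illustration, not the product's actual schema:

```typescript
// Illustrative sketch only: these field names are assumptions,
// not the product's documented internal representation.
interface IncomingMediaMessage {
  mediaUrl: string;      // temporary URL of the uploaded file
  mediaAnalysis: string; // OCR / extracted text the agent can reason over
}

// Example: what the agent might "see" for a photo of a receipt.
const receiptMessage: IncomingMediaMessage = {
  mediaUrl: "https://media.example.com/uploads/receipt-4821.jpg",
  mediaAnalysis: "ACME Store | Total: $42.10 | Paid by card | 2024-05-01",
};
```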
⚙️ How to enable it
This feature is off by default and can be enabled per channel. To turn it on:
1. Go to Channels → WhatsApp
2. Under Multimodal Capabilities, enable:
• ✅ Image understanding
• ✅ Document reading
3. Click Save changes
Once enabled, any incoming image or file message will be analyzed automatically.
🧠 Using AI to route based on files
You can use Agentic Routing to trigger specific flows when a file is received — or even based on the content inside the file.
For example:
If someone sends a CV, you can route them to a “Job Applicants” flow.
If someone sends a bank receipt, you can route to a payment confirmation flow.
Set this up in the Intent Node, using instructions like:
"Route to this flow whenever a user applies for a job or sends a CV file."
📥 Capturing file data as conversation variables
Inside your flow, you can use the Agent Capture node to extract and store media-related data using custom variables.
Your agent will have access to:
The media file URL (e.g. curriculum_vitae_file)
The OCR/extracted content from the file (e.g. curriculum_vitae_analysis)
Example setup:
Create a variable: curriculum_vitae_file
Description: Store the media file URL for the CV
Create another variable: curriculum_vitae_analysis
Description: Store the extracted text from the CV
Your Agent will automatically fill these variables when a file is detected and understood.
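Once filled, the two variables might look like this for a processed CV. The values below are invented for illustration, and the exact storage format is product-specific:

```typescript
// Hypothetical snapshot of the conversation variables after a CV
// is detected. Variable names match the setup above; values are invented.
const conversationVariables = {
  curriculum_vitae_file: "https://media.example.com/uploads/cv-jane-doe.pdf",
  curriculum_vitae_analysis:
    "Jane Doe. Software Engineer. 5 years of experience in backend development...",
};
```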
🔍 Auditing file analysis
You can review how your AI Agent interpreted the file using the Conversation Audit panel. There, you’ll find:
The file’s OCR content
The media URL
How and where it was routed
Any captured variables
This gives you full transparency into how multimodal processing is working in real time.
Use Case Example: Job Applicants
Let’s say you want to automate CV collection via WhatsApp:
A candidate sends a message and attaches a CV.
Frontline detects the file, extracts its content, and triggers a “Job Applicants” flow using agentic routing.
Your Agent captures:
First and last name
The file URL (curriculum_vitae_file)
The extracted CV content (curriculum_vitae_analysis)
This data can now be reviewed, stored in a Table, or sent via API to your ATS.
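As a sketch of that final API step, here is one way a backend could forward the captured variables to an ATS. The endpoint URL, bearer token, payload field names, and the first_name/last_name variable names are all placeholders, not a documented integration:

```typescript
// Minimal sketch, assuming a generic JSON ATS endpoint. The URL, token,
// and payload shape are placeholders, not a documented integration.
async function sendToAts(vars: Record<string, string>): Promise<void> {
  const response = await fetch("https://ats.example.com/api/candidates", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer YOUR_ATS_TOKEN", // placeholder credential
    },
    body: JSON.stringify({
      firstName: vars.first_name,             // captured by the Agent
      lastName: vars.last_name,               // captured by the Agent
      cvUrl: vars.curriculum_vitae_file,      // media file URL
      cvText: vars.curriculum_vitae_analysis, // extracted CV content
    }),
  });
  if (!response.ok) {
    throw new Error(`ATS request failed with status ${response.status}`);
  }
}
```

Running this on Node 18+ (which ships a global fetch) after your flow completes would push each applicant's data into your hiring pipeline automatically.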