WhatsApp Audio, Image, and Document Processing

Enable your AI Agent to understand and respond to incoming audio messages, images, PDFs, and other files shared via WhatsApp

Written by Alvaro Vargas

With Multimodal Capabilities, your AI Agent can now process and respond to audio messages, images, and documents sent by users over WhatsApp — unlocking powerful new automation use cases like reading receipts, CVs, invoices, and more.

IMPORTANT:

  1. Responding to WhatsApp audio messages is enabled by default and incurs no extra cost.

  2. Sticker processing is not supported; your Agent will ignore stickers.


✨ What this feature does

When enabled, your Agent can:

  • Understand the content of incoming images (e.g. photos of receipts, IDs, or handwritten notes)

  • Read and extract text from PDF files and documents

  • Access both the media URL and OCR analysis during the conversation

  • Use this information to route flows, capture data, or take action automatically

⚙️ How to enable it

Image and document processing is off by default and can be enabled per channel. To turn it on:

1. Go to Channels → WhatsApp

2. Under Multimodal Capabilities, enable:

• ✅ Image understanding

• ✅ Document reading

3. Click Save changes

Once enabled, any incoming image or file message will be analyzed automatically.

🧠 Using AI to route based on files

You can use Agentic Routing to trigger specific flows when a file is received — or even based on the content inside the file.

For example:

  • If someone sends a CV, you can route them to a “Job Applicants” flow.

  • If someone sends a bank receipt, you can route to a payment confirmation flow.

Set this up in the Intent Node, using instructions like:

"Route to this flow whenever a user applies for a job or sends a CV file."

📥 Capturing file data as conversation variables

Inside your flow, you can use the Agent Capture node to extract and store media-related data using custom variables.

Your agent will have access to:

  • The media file URL (e.g. curriculum_vitae_file)

  • The OCR/extracted content from the file (e.g. curriculum_vitae_analysis)

Example setup:

  • Create a variable: curriculum_vitae_file

    • Description: Store the media file URL for the CV

  • Create another variable: curriculum_vitae_analysis

    • Description: Store the extracted text from the CV

Your Agent will automatically fill these variables when a file is detected and understood.
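
To make this concrete, here is a minimal Python sketch of how your own backend might consume these two variables once you export them (for example via a webhook or API integration you build yourself). The payload shape below is hypothetical and illustrative only, not a documented Frontline format:

```python
import urllib.request

# Hypothetical payload: conversation variables captured by the Agent,
# as they might arrive at your own backend (shape is illustrative only).
captured_variables = {
    "curriculum_vitae_file": "https://example.com/media/cv-123.pdf",       # media file URL
    "curriculum_vitae_analysis": "Jane Doe\n5 years of experience in ...",  # extracted text
}

# The media URL can be used to download the original file for archiving...
with urllib.request.urlopen(captured_variables["curriculum_vitae_file"]) as response:
    pdf_bytes = response.read()

# ...while the extracted text is immediately usable, e.g. for a keyword check.
if "experience" in captured_variables["curriculum_vitae_analysis"].lower():
    print("Candidate mentions relevant experience.")
```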

🔍 Auditing file analysis

You can review how your AI Agent interpreted the file using the Conversation Audit panel. There, you’ll find:

  • The file’s OCR content

  • The media URL

  • How and where it was routed

  • Any captured variables

This gives you full transparency into how multimodal processing is working in real time.

Use Case Example: Job Applicants

Let’s say you want to automate CV collection via WhatsApp:

  1. A candidate sends a message and attaches a CV.

  2. Frontline detects the file, extracts its content, and triggers a “Job Applicants” flow using agentic routing.

  3. Your Agent captures:

    1. First and last name

    2. The file URL (curriculum_vitae_file)

    3. The extracted CV content (curriculum_vitae_analysis)

  4. This data can now be reviewed, stored in a Table, or sent via API to your ATS, as shown in the sketch below.
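
For illustration, here is a minimal Python sketch of that final hand-off, forwarding the captured data to an ATS. The endpoint, API key, and payload shape are hypothetical placeholders, not part of Frontline or any specific ATS API:

```python
import json
import urllib.request

# Data captured by the Agent during the "Job Applicants" flow
# (variable names follow the example above; values are illustrative).
applicant = {
    "first_name": "Jane",
    "last_name": "Doe",
    "curriculum_vitae_file": "https://example.com/media/cv-123.pdf",
    "curriculum_vitae_analysis": "Jane Doe\n5 years of experience in ...",
}

# Hypothetical ATS endpoint and API key (replace with your ATS's real API).
request = urllib.request.Request(
    "https://ats.example.com/api/candidates",
    data=json.dumps(applicant).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_ATS_API_KEY",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print("ATS responded with status", response.status)
```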
