Advanced settings let you fine‑tune the behavior and quality of your agents.
You control which AI model is used, how fast the agent speaks, how it handles silence, whether it remembers context, and more.

Voice, language, and audio

Voice

Select which voice this agent will use on calls (for example, “Heather Rey”).
  • Each option in the dropdown is a predefined voice in your Fluents.ai account.
  • Changing the selection changes how the agent sounds, but does not change its logic or skills.
  • You can switch voices at any time if you find one that fits your brand better.
For more about browsing and testing voices, see Voices.

Language

Set the language your agent will speak and understand (e.g., English).
  • Make sure this matches the language of your customers.
  • It should also align with the voice you selected.

Background sound

If you choose a background sound, the call will include a subtle ambient noise (for example, light office sounds).
  • This can make calls feel more natural, as if the agent is in a real environment.
  • You currently have a small set of background options to choose from.
  • Leave this empty if you prefer a completely quiet background.

AI model and provider

These settings control which AI engine powers your agent’s responses.

LLM Provider

Choose the LLM provider (for example, OpenAI).
  • This is the company that provides the language model.
  • Different providers may offer different quality, speed, and pricing.

Model Name

Choose the specific model from that provider (for example, GPT‑4.1 Mini).
  • Larger models may be more capable but can be slower or more expensive.
  • Smaller models may be faster and cheaper for simple use cases.
If you’re unsure, start with the recommended default in the dropdown.

Speech to Text

Select the Speech‑to‑Text (STT) provider (for example, Deepgram).
  • This controls how the caller’s voice is converted into text for the AI.
  • A good STT provider improves accuracy and reduces misunderstandings.

Webhooks and actions

Add Webhook

Webhooks let your agent send an HTTP request to an endpoint you control during or after a call.
  • Fluents.ai calls your URL with call data (for example, caller number, outcome, transcript snippets, or custom fields).
  • Your server, automation tool, or integration then uses that data to update other systems.
Typical uses (implemented on your side):
  • Receive call results and then update a CRM record.
  • Create a ticket in a helpdesk tool when certain conditions are met.
  • Trigger automations (for example, via Zapier, Make.com, or a custom backend).
You configure the actual webhook endpoints in the Webhooks and Actions areas of the app, then select which ones this agent is allowed to call from Advanced Settings.
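To make this concrete, here is a minimal sketch of a receiving endpoint on your side, written in Python with Flask. The payload field names used here (`caller_number`, `outcome`, `transcript`) are assumptions for illustration only; see the Webhook Feature Guide for the exact fields your account sends.

```python
# Minimal sketch of a webhook receiver, assuming a JSON payload.
# The field names below are illustrative, not a documented Fluents.ai schema.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/fluents-webhook", methods=["POST"])
def handle_call_event():
    data = request.get_json(silent=True) or {}

    caller_number = data.get("caller_number")  # hypothetical field
    outcome = data.get("outcome")              # hypothetical field
    transcript = data.get("transcript", "")    # hypothetical field

    # Example follow-up on your side: update a CRM record or open a ticket.
    if outcome == "callback_requested":
        print(f"Schedule a callback for {caller_number}")

    # Respond quickly so the webhook is not retried unnecessarily.
    return jsonify({"status": "received"}), 200

if __name__ == "__main__":
    app.run(port=8080)
```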

Actions

Actions are reusable behaviors your agent can perform, such as:
  • Ending the conversation under certain conditions
  • Triggering a callback
  • Updating external systems
In Advanced Settings, you select which actions this agent is allowed to run. For a deeper dive, see Actions Feature Guide and Webhook Feature Guide.

Conversation tuning (temperature & speed)

LLM Temperature

LLM Temperature controls how creative vs. strict the agent is:
  • Lower values (toward 0): more predictable, consistent answers
  • Higher values: more varied, creative responses
For most customer‑facing use cases, a lower to medium temperature is recommended so the agent stays on‑script and reliable.
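For background, temperature is a standard parameter on most LLM APIs. The sketch below shows where it appears in a typical OpenAI-style request; it is purely an illustration of the concept, since in Fluents.ai you set this with the LLM Temperature control rather than through code.

```python
# Illustration only: where "temperature" appears in a typical LLM request.
# In Fluents.ai you adjust this with the LLM Temperature setting instead.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Summarize our return policy."}],
    temperature=0.2,  # low value: predictable, on-script answers
)
print(response.choices[0].message.content)
```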

Conversation Speed

Conversation Speed controls how fast the agent speaks and responds.
  • Slide left for slower, more deliberate speech.
  • Slide right for faster, more energetic speech.
  • If Auto is enabled, Fluents.ai may adjust speed automatically.
You can adjust this to match your brand and audience:
  • Slower for support or sensitive topics
  • Faster for casual or sales‑oriented conversations

Silence and idle behavior

These settings determine how the agent handles silence from the caller.

Idle Time (seconds)

Idle Time is how long the agent will wait in silence before checking if the caller is still there. Example:
  • Idle Time = 7 seconds
    → After 7 seconds of no response, the agent asks something like “Are you still there?”

Max Idle Check Count

Max Idle Check Count is how many times the agent will make that check before ending the call. Example:
  • Idle Time = 7 seconds
  • Max Idle Check Count = 3
The agent might:
  1. Wait 7 seconds → ask if the caller is still there
  2. Wait another 7 seconds → ask again
  3. Wait another 7 seconds → give a final message and end the call
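Put as code, the interaction of these two settings looks roughly like the sketch below (illustrative logic only, not Fluents.ai’s actual implementation):

```python
# Illustrative sketch of how Idle Time and Max Idle Check Count interact.
import time

IDLE_TIME_SECONDS = 7
MAX_IDLE_CHECK_COUNT = 3

def handle_silence(caller_responded) -> None:
    """caller_responded: callable returning True once the caller speaks again."""
    for check in range(1, MAX_IDLE_CHECK_COUNT + 1):
        time.sleep(IDLE_TIME_SECONDS)      # wait out the silence window
        if caller_responded():
            return                         # conversation continues normally
        if check < MAX_IDLE_CHECK_COUNT:
            print("Agent: Are you still there?")
        else:
            print("Agent: I'll let you go for now. Goodbye!")
            # the call would end here
```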
Use these controls to balance:
  • Being patient with callers
  • Avoiding very long, silent calls that go nowhere

Call duration and recording

Call Duration (seconds) and Extend Call

Call Duration sets the maximum length of a call in seconds.
For example, 600 seconds = 10 minutes.
  • When the limit is reached, the agent will end the call gracefully.
  • Extend Call lets the system extend the call beyond this limit in specific cases (depending on how your account is configured).
This is useful to:
  • Prevent runaway calls
  • Keep costs and call times under control

Enable Recording

Turn Enable Recording on if you want to record calls for later review.
  • Recordings can help with quality control, training, and compliance.
  • Make sure to follow any applicable laws or regulations about recording calls in your region.

Memory options

These settings control how much your agent remembers across calls.

Outbound Context Memory

Outbound Context Memory is used for outbound campaigns that include context variables for each contact. When this is enabled:
  • Fluents.ai remembers the context variables that were attached to the contact in the campaign
    (for example: `{first_name}`, `{last_name}`, `{email}`, or any other custom context fields you use).
  • If that person calls back the same number, the agent can access those same context variables again.
This allows the agent to say things like:
“Hi {first_name}, I’m following up about the message we sent you earlier.”
…because it can reuse the outbound campaign context (variables) tied to that contact, even though the new call is inbound.
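As a rough illustration, substituting those context variables into a greeting works along these lines (the variable names and values here are made up for the example):

```python
# Illustrative sketch: filling campaign context variables into a greeting.
# Use whatever custom fields your own campaign defines.
contact_context = {
    "first_name": "Jordan",
    "last_name": "Lee",
    "email": "jordan.lee@example.com",
}

greeting_template = (
    "Hi {first_name}, I'm following up about the message we sent you earlier."
)

print(greeting_template.format(**contact_context))
# -> Hi Jordan, I'm following up about the message we sent you earlier.
```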

Conversation Memory (BETA)

Conversation Memory (BETA) gives the agent a more advanced way to retain details across the turns of a single conversation.
  • This can help with longer or more complex calls.
  • Because it’s in BETA, behavior and availability may evolve over time.
If you’re unsure, you can start with this off and enable it later for advanced use cases.

Voicemail detection (BETA)

Dynamic Voicemail Detection (BETA)

Dynamic Voicemail Detection helps the agent detect when it has reached a voicemail greeting instead of a live person (mainly for outbound calls). With this enabled, the system can:
  • Recognize voicemail greetings more reliably
  • Decide whether to leave a voicemail message at the right time or handle it differently
This is useful when you:
  • Run outbound campaigns where many calls go to voicemail
  • Want to control how and when voicemails are left
Because it’s a BETA feature, you may want to test it with a small list first and adjust based on results.

Where to go next