
Voice AI Best Practices

Voice AI is one of the most powerful ways to scale processes that used to be human-capital intensive. However, if your voice AI can't replicate what a human would do on the call, the tangible benefit is much smaller. This guide covers best practices, tips and tricks, and recommended configurations for scaling your voice AI deployment.

Properly handle background noise

In real-world scenarios, phone calls often face audio quality challenges such as background noise, echo, or other unwanted sounds.

While our orchestration model is designed to handle background noise effectively, you may want to fine-tune the settings for optimal performance in challenging environments.

  1. Adjust the responsive speed
    1. Navigate to your assistant's settings
    2. Set the “**Responsive Speed (Queuing)**” setting to 0.9
    3. This adjustment makes the assistant more deliberate in its responses, reducing false triggers from background noise.
  2. Configure the sensitivity to interruption (Noise Sensitivity)
    1. In the same settings panel
    2. Set the “**Sensitivity to Interruption (Noise Sensitivity)**” to 0.8
    3. This setting helps the assistant distinguish between intentional speech and background interference.

For extremely noisy environments like construction sites or busy streets, you may need to experiment with even higher settings.
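For reference, here is a minimal sketch of these two settings expressed as a configuration object, assuming you track assistant settings in code; the field names are illustrative assumptions, not a documented API.

```python
# Illustrative sketch only: field names are assumptions, not a documented API.
# Values mirror the recommendations above for noisy environments.
noisy_environment_settings = {
    "responsive_speed_queuing": 0.9,         # more deliberate responses, fewer false triggers
    "interruption_noise_sensitivity": 0.8,   # distinguish speech from background interference
}

# For extremely noisy environments (construction sites, busy streets),
# experiment with values above these recommendations.
print(noisy_environment_settings)
```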

 

Handle voicemail

Your phone assistant will commonly encounter voicemails. You can configure your assistant to either hang up or leave a message automatically when it detects a voicemail.

The system runs voicemail detection continuously in the background. If you leave the voicemail message blank, the assistant hangs up the moment voicemail is detected. If you choose to leave a message, the system waits for its turn to speak and then leaves that message. If the voicemail detection timeout is reached, the system stops voicemail detection and continues the call as normal (a sketch of this decision flow follows the list below).

  1. Enable Voicemail Detection
    1. In your assistant's call settings, there is a setting for "Enable Voicemail & Message" - This setting enables voicemail detection and message leaving. Turning it on starts a timer at the start of the call for voicemail detection. If the message input is blank, the assistant hangs up when voicemail is detected. If the message input includes a message, that message is left when voicemail is detected.
      1. Hang-up on voicemail logic - The assistant listens for a voicemail; if one is found, the assistant goes to leave a message, but the message is blank, so the AI hangs up because its "job" is complete.
      2. Voicemail message - Leaving a voicemail message requesting a callback is a common use case. The voicemail message can also take advantage of variables and custom variables to dynamically change the message based on context.
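To make the behavior above concrete, here is a minimal sketch of the voicemail-handling decision flow; the call object and methods such as detect_voicemail(), wait_for_turn_to_speak(), and speak() are hypothetical stand-ins for the internal orchestration, not a public API.

```python
import time

def handle_voicemail(call, voicemail_message: str, detection_timeout_s: float = 30.0) -> str:
    """Sketch of the voicemail logic: returns 'hung_up', 'message_left', or 'no_voicemail'."""
    deadline = time.monotonic() + detection_timeout_s
    while time.monotonic() < deadline:
        if call.detect_voicemail():           # hypothetical: detection runs continuously
            if not voicemail_message:         # blank message -> the "job" is done, hang up
                call.hang_up()
                return "hung_up"
            call.wait_for_turn_to_speak()     # hypothetical: wait for the greeting to finish
            call.speak(voicemail_message)     # message may include variables / custom variables
            return "message_left"
        time.sleep(0.1)
    # Detection timeout reached: stop looking for voicemail and continue the call as normal.
    return "no_voicemail"
```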

Take advantage of custom variables

In the "Make AI Call" action node, there are 5 custom variables that can be used in your prompt to dynamically change it based on context/conditions.

Use cases:

  • You can feed historical interactions into a variable and reference it in the prompt - in the action, you can use another AI call or a custom field/value to feed in context left by a post-call webhook, transcript, form fill, email trigger, and more. Make sure your prompt includes the variable that you are filling.
    • Using the post-call webhook, you can run the transcript through an OpenAI call to produce a markdown-formatted, LLM-specific context summary and save it to a custom field on the contact. The next time a call is made, feed that custom field into the custom variable to include the context. On the first call the variable will be blank (unless you pre-populate it), but on the second call the assistant will be contextually aware and optimized for the interaction (see the sketch after this list).
    • Using a custom variable as a task-based framework lets you change how your AI follows instructions on a campaign call. For example, if you structure the task-based framework in your prompt so that the first one to four tasks are replaced by {{custom.one}}, you can feed those tasks in dynamically on an outbound call using the custom one variable input.
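As an illustration of the summarize-and-store flow above, here is a hedged sketch of a post-call webhook receiver that summarizes the transcript with OpenAI and writes the result to a contact custom field. The endpoint URL and field name are placeholders; substitute your own CRM/API details.

```python
from flask import Flask, request, jsonify
import requests
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder endpoint for writing a custom field back to the contact.
CRM_CUSTOM_FIELD_URL = "https://example-crm.invalid/contacts/{contact_id}/custom-fields"

@app.post("/post-call-webhook")
def post_call_webhook():
    payload = request.get_json(force=True)
    transcript = payload.get("full_transcript", "")
    contact_id = payload.get("contact_id")

    # Produce a markdown-formatted context summary of the call for the next interaction.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize this call transcript as a concise, "
                                          "markdown-formatted context window for the next AI call."},
            {"role": "user", "content": transcript},
        ],
    )
    summary = completion.choices[0].message.content

    # Save the summary to a custom field; feed it into a custom variable on the next call.
    requests.put(
        CRM_CUSTOM_FIELD_URL.format(contact_id=contact_id),
        json={"field": "last_call_context", "value": summary},
        timeout=10,
    )
    return jsonify({"status": "ok"})
```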

Post-call tagging

Whenever an AI call finishes, tags are added to the contact to signify the call's outcome and disposition. You can use these tags instead of, or in tandem with, the post-call webhook to drive campaigns based on outcome/disposition (see the sketch after the tag list below).

Tags Include:

  • answered
  • not answered
  • ai voice appointment booked
  • voicemail reached
  • contact hangup
  • ai hangup
  • dial failed
  • dial no answer
  • call transfer
  • machine detected
  • max duration reached
  • dial busy
  • inactivity
  • scam detected

… and more to come.
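As a sketch of what tag-driven campaigning might look like in code, here is a small routing function keyed on the tags above; the follow-up action names are placeholders for whatever your campaign tooling uses.

```python
def route_follow_up(tags: set[str]) -> str:
    """Map post-call tags to a (placeholder) follow-up campaign action."""
    if "ai voice appointment booked" in tags:
        return "send_booking_confirmation"
    if "voicemail reached" in tags or "machine detected" in tags:
        return "schedule_retry_call"
    if "not answered" in tags or "dial no answer" in tags or "dial busy" in tags:
        return "add_to_redial_queue"
    if "call transfer" in tags:
        return "notify_sales_team"
    return "no_action"

print(route_follow_up({"answered", "ai voice appointment booked"}))  # send_booking_confirmation
```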

Post- / pre-call webhooks

Whenever an AI is making or receiving a call, a webhook notification is sent to any URL you specify for that event. When an outbound or inbound call is initiated, a pre-call webhook is sent to the URL specified in the call settings of the AI making or receiving the call. This can be used for call tracking or for notifications that a call is starting.

Likewise, after a call finishes - both inbound and outbound - a post-call webhook with call data and analysis is sent to the webhook URL specified under the assistant that made or received the call. The payload includes things like the full transcript, call time in milliseconds, contact sentiment analysis, and more.

Payload pre-call webhook:

{
  "to": "null",
  "from": "null",
  "contactId": "null"
}

Payload post-call webhook:

{
  "call_id": "null",
  "call_type": "null",
  "direction": "null",
  "to": "null",
  "from": "null",
  "contact_id": "null",
  "disconnection_reason": "null",
  "user_sentiment": "null",
  "call_summary": "null",
  "call_completion": "null",
  "call_completion_reason": "null",
  "assistant_task_completion": "null",
  "recording_url": "null",
  "call_time_ms": "null",
  "call_time_seconds": "null",
  "full_transcript": "null",
  "start_timestamp": "null",
  "end_timestamp": "null",
  "added_to_wallet": true
}
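Once the post-call body has been parsed into a dictionary, reading the fields shown above is straightforward; the example below uses placeholder values purely to show the shape of the data.

```python
def summarize_outcome(payload: dict) -> str:
    """Build a one-line report from a parsed post-call webhook payload."""
    return (
        f"Call {payload.get('call_id')} ({payload.get('direction')}) "
        f"lasted {payload.get('call_time_seconds')}s: "
        f"sentiment={payload.get('user_sentiment')}, "
        f"completion={payload.get('call_completion')}, "
        f"disconnect={payload.get('disconnection_reason')}"
    )

# Placeholder values, only to illustrate the payload shape.
print(summarize_outcome({
    "call_id": "example-id", "direction": "outbound", "call_time_seconds": 42,
    "user_sentiment": "positive", "call_completion": True,
    "disconnection_reason": "ai hangup",
}))
```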

