rough-hairdresser-81331
07/08/2025, 2:59 PM
# Assumed imports for this snippet; update_ai_intake_structured_call_data is an
# application-specific helper, not part of LiveKit.
import logging

from livekit.agents import RunContext, get_job_context

logger = logging.getLogger(__name__)


async def contact_details(
    context: RunContext, raw_arguments: dict[str, object]
) -> dict:
    """
    Updates the customer's contact information in the structured call data.

    Arguments:
        name: str | None, the customer's name; can be None if not provided
        email: str | None, the customer's email; can be None if not provided
        phone: str | None, the customer's phone number; can be None if not provided

    Use this function when the customer provides or updates any of their contact details.
    The customer may spell out their name, email, or phone number for clarity.

    Guidelines for handling contact details:
    - If a customer spells out their name (e.g., "J-O-H-N"), capture it accurately
    - If a customer spells out their email, ensure it is formatted without spaces
    - If you can infer part of their email from their name, or vice versa, do so when appropriate
    - When a customer spells out an email that includes an "o" (pronounced "oh"),
      use the letter "o" in the email, not the digit 0 (zero)
    - Only fields that are provided will be updated; other fields remain unchanged

    Examples:
    - "My name is Juno Smith, J-U-N-O S-M-I-T-H"
    - "My email is junosmith@example.com, that's j-u-n-o dot s-m-i-t-h at example dot com"
    - "You can reach me at 555-123-4567"

    Returns a dictionary with the updated contact details.
    """
    name = raw_arguments.get("name")
    email = raw_arguments.get("email")
    phone = raw_arguments.get("phone")

    # Get the call SID from the job context (the room name is used as the call SID)
    ctx = get_job_context()
    if ctx and ctx.room:
        call_sid = ctx.room.name
        contact_info = {
            "contact_details": {
                "full_name": name,
                "email": email,
                "phone": phone,
            }
        }
        # Save to the AIIntakeVoiceCall structured_call_data JSON column
        await update_ai_intake_structured_call_data(
            call_sid=call_sid, call_data=contact_info
        )
        logger.info(f"Saved contact details for call: {call_sid} {contact_info}")
    else:
        logger.error("No job context or room found, cannot save contact details")

    return {
        "full_name": name,
        "email": email,
        "phone": phone,
    }
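For context, a handler that receives raw_arguments like this is normally registered with the agent as a raw function tool. The snippet below is only a guess at that wiring under LiveKit Agents 1.x (function_tool with a raw_schema, passed via the Agent tools list); the schema contents are made up for illustration, and the exact raw_schema shape and handler parameter order should be verified against the installed version.

from livekit.agents import Agent, function_tool

# Hypothetical registration sketch; the schema fields below are illustrative only.
contact_details_tool = function_tool(
    contact_details,
    raw_schema={
        "name": "contact_details",
        "description": "Update the customer's contact information.",
        "parameters": {
            "type": "object",
            "properties": {
                "name": {"type": ["string", "null"]},
                "email": {"type": ["string", "null"]},
                "phone": {"type": ["string", "null"]},
            },
        },
    },
)

intake_agent = Agent(
    instructions="Collect the caller's name, email, and phone number.",
    tools=[contact_details_tool],
)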
tall-airline-16756
07/08/2025, 3:00 PM
• Set allowInterruptions to false to disable user interruptions entirely.
• Increase interruptMinWords to require more transcribed words before allowing interruptions.
• Adjust interruptSpeechDuration to require a longer user speech duration before interruption is allowed.
• Increase min_endpointing_delay to wait longer before the system considers the user has finished speaking.
• Use max_endpointing_delay to set how long the system waits after silence before responding.
• Optionally, manually control interruptions using a speech handle interface for specific conversation parts.
These changes help create a smoother conversation flow without premature AI interruptions during user input.
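In the Python agents SDK these options are constructor parameters on AgentSession, and per-utterance control is available through the speech handle returned by session.say(). The sketch below is a rough illustration assuming LiveKit Agents 1.x, where the camelCase names above show up as snake_case parameters; exact names and defaults should be checked against the installed version, and the STT/LLM/TTS setup is omitted.

from livekit.agents import AgentSession

session = AgentSession(
    # stt=..., llm=..., tts=... omitted; only turn-taking settings are shown.
    # Disable barge-in entirely (the allowInterruptions = false case above):
    allow_interruptions=False,
    # Or keep interruptions enabled but make them harder to trigger
    # (likely Python equivalents of interruptSpeechDuration / interruptMinWords):
    # min_interruption_duration=1.0,
    # min_interruption_words=3,
    # Wait longer before deciding the user has finished speaking:
    min_endpointing_delay=1.0,
    # Upper bound on the wait after silence before the agent responds:
    max_endpointing_delay=6.0,
)

# Per-utterance control (after session.start(...) in your entrypoint): the returned
# speech handle lets you protect specific turns from interruption, e.g.
#   handle = session.say("Please hold on while I look that up.", allow_interruptions=False)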
Sources: Configuring turn detection and interruptions | LiveKit Docs | Improving voice AI's turn detection with transformers | Agent speech | LiveKit Docs