red-dentist-96118
08/29/2025, 1:10 PM

tall-airline-16756
08/29/2025, 1:10 PM
The agent places the outbound call using the CreateSIPParticipant method.
For example, to dispatch the agent via CLI, you’d run:
lk dispatch create \
--new-room \
--agent-name outbound-caller \
--metadata '{"phone_number": "+1234567890", "transfer_to": "+9876543210"}'
This command creates a new room, dispatches the agent named "outbound-caller," and passes along the phone number to call in the metadata. Then, the agent uses that info to place the call through your configured SIP trunk.
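On the agent side, a rough sketch of that flow (following the standard entrypoint pattern; the trunk ID and participant identity below are placeholders you'd replace with your own values):

import json
from livekit import api
from livekit.agents import JobContext

async def entrypoint(ctx: JobContext):
    await ctx.connect()
    # Metadata passed at dispatch time arrives as a JSON string on the job
    dial_info = json.loads(ctx.job.metadata)
    # Place the outbound call through your SIP trunk into the agent's room
    await ctx.api.sip.create_sip_participant(
        api.CreateSIPParticipantRequest(
            room_name=ctx.room.name,
            sip_trunk_id="ST_xxxx",  # placeholder: your outbound trunk ID
            sip_call_to=dial_info["phone_number"],
            participant_identity="phone_user",
        )
    )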
Sources: Making calls using SIP | LiveKit Docs | agent.py | LiveKit Docs

red-dentist-96118
08/29/2025, 1:12 PM

tall-airline-16756
08/29/2025, 1:13 PM
1. First, call the agent_dispatch.create_dispatch method to get the agent going, including some metadata like the phone number you want to call.
2. Once your agent is dispatched, it'll use the CreateSIPParticipant method to place the outbound call through your configured SIP trunk.
Here’s a simple example in Python to illustrate:
import asyncio
from livekit import api

async def main():
    # LiveKitAPI reads LIVEKIT_URL, LIVEKIT_API_KEY, and LIVEKIT_API_SECRET from the environment
    lkapi = api.LiveKitAPI()
    # Dispatch the agent into the room, passing the number to call as metadata
    dispatch = await lkapi.agent_dispatch.create_dispatch(
        api.CreateAgentDispatchRequest(
            agent_name="outbound-caller",
            room="my-room",
            metadata='{"phone_number": "+1234567890"}',
        )
    )
    # Dial out through your SIP trunk and add the callee to the same room
    await lkapi.sip.create_sip_participant(
        api.CreateSIPParticipantRequest(
            room_name="my-room",
            sip_trunk_id="ST_xxxx",
            sip_call_to="+1234567890",
            participant_identity="caller",
        )
    )
    await lkapi.aclose()

asyncio.run(main())
About transcripts: LiveKit doesn't offer a direct API to grab them after the call ends. Instead, transcription happens live through the agent using a speech-to-text (STT) plugin like Deepgram, OpenAI, or Gladia. The plugin streams transcription data during the call, and you'll get the final transcripts as messages marked with is_final: true.
Here’s how you can get those transcripts:
1. Build some logic in your agent to catch and save the final utterances coming from the STT plugin.
2. Store those transcripts somewhere safe — a database, a file, or any external service — while the call is happening.
For example, you might handle final transcripts in your agent like this:
transcript = ""
async for message in stt_stream:
    # Accumulate finalized utterances as they arrive
    if message.type == "transcript" and message.data.is_final:
        transcript += message.data.utterance
    # The stream signals post_final_transcript once transcription is complete
    elif message.type == "post_final_transcript":
        await save_to_storage(transcript)
        break
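Note that save_to_storage isn't part of LiveKit; it stands in for whatever persistence you choose. A minimal sketch using aiofiles (an assumption on my part; a database write or API call works just as well):

import aiofiles

async def save_to_storage(transcript: str) -> None:
    # Append the finished transcript to a local file; swap in a database
    # or external service for production use
    async with aiofiles.open("transcripts.log", mode="a") as f:
        await f.write(transcript + "\n")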
If you want me to walk you through any part of this or need more details, just let me know — I’m here to help!
Sources: livekit-plugins/livekit-plugins-gladia/livekit/plugins/gladia/stt.py | README.md | telephony/make_call/make_call.py | packages/sip--make-a-call/README.md | Making calls using SIP | LiveKit Docs

red-dentist-96118
08/29/2025, 5:02 PM

tall-airline-16756
08/29/2025, 5:03 PM
• Use the user_input_transcribed event or the conversation_item_added callback to grab transcriptions as they happen (see the sketch after this list).
• Then, send those transcripts to your API using HTTP requests or webhooks.
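Here's a rough sketch of the first bullet, assuming an AgentSession named session is already in scope and using a placeholder endpoint URL:

import asyncio
import aiohttp

async def forward_to_api(text: str) -> None:
    # Placeholder endpoint; replace with your own API
    async with aiohttp.ClientSession() as http:
        await http.post("https://example.com/utterances", json={"text": text})

@session.on("user_input_transcribed")
def on_user_input_transcribed(event):
    # Event handlers are synchronous; hand final transcripts off to an async task
    if event.is_final:
        asyncio.create_task(forward_to_api(event.transcript))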
Plus, if you want, you can grab the full conversation history using session.history and send it over once the session wraps up.
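For instance, a minimal sketch (assuming you're inside the agent entrypoint with ctx and session in scope; the endpoint URL is a placeholder):

import aiohttp

async def send_history():
    # session.history holds the full conversation; to_dict() serializes it
    async with aiohttp.ClientSession() as http:
        await http.post(
            "https://example.com/transcripts",  # placeholder endpoint
            json=session.history.to_dict(),
        )

# Runs when the job shuts down, after the session wraps up
ctx.add_shutdown_callback(send_history)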
If you’d like, I’m here to help you get this all set up—just let me know!
Sources: Transcriptions | LiveKit Docs | Transcription Applications | livekit/livekit_composite | DeepWiki | Session recording and transcripts | LiveKit Docs | LiveKit | AI voice agents | LiveKit Docs

red-dentist-96118
08/29/2025, 5:03 PM

tall-airline-16756
08/29/2025, 5:04 PM

red-dentist-96118
08/29/2025, 5:38 PM

tall-airline-16756
08/29/2025, 5:39 PM

red-dentist-96118
08/29/2025, 5:39 PM

tall-airline-16756
08/29/2025, 5:40 PM
The lk load-test command can help you benchmark your specific agent setup.
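For example, a run simulating a handful of publishers and a larger pool of subscribers might look like this (flag names are from lk load-test --help as I recall them; double-check against your CLI version):

lk load-test \
  --room load-test \
  --duration 1m \
  --video-publishers 5 \
  --subscribers 100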
Feel free to reach out if you want a hand with load testing or figuring out the best scaling approach!
Sources: Deploying to custom environments | LiveKit Docs | An open source stack for real-time multimodal AI | LiveKit Pricing