important-judge-78504
03/26/2025, 4:11 PM
tall-airline-16756
03/26/2025, 4:12 PM
tall-airline-16756
03/26/2025, 4:12 PM
Instead of calling `create_dispatch`, you can include the room configuration in the token itself. Here's an example in Python:
from livekit.api import (
    AccessToken,
    RoomAgentDispatch,
    RoomConfiguration,
    VideoGrants,
)

room_name = "my-room"

def create_token_with_agent_dispatch() -> str:
    token = (
        AccessToken()
        .with_identity("my_participant")
        .with_grants(VideoGrants(room_join=True, room=room_name))
        .with_room_config(
            RoomConfiguration(
                agents=[
                    RoomAgentDispatch(agent_name="test-agent", metadata="my_metadata")
                ],
            ),
        )
        .to_jwt()
    )
    return token
This code creates an access token with a specific participant identity, grants them permission to join a room, and configures the room to dispatch a specific agent when the user joins.
For Node.js, you can also create a token with room configuration for agent dispatch (the snippet was truncated here; the grant and room configuration lines below complete it, following the same pattern as the Python example):
import { AccessToken } from 'livekit-server-sdk';
import { RoomAgentDispatch, RoomConfiguration } from '@livekit/protocol';

const roomName = 'name-of-room';
const participantName = 'user-name';
const agentName = 'my-agent';

const at = new AccessToken('api-key', 'secret-key', {
  identity: participantName,
});
at.addGrant({ roomJoin: true, room: roomName });
at.roomConfig = new RoomConfiguration({
  agents: [
    new RoomAgentDispatch({ agentName, metadata: 'my-metadata' }),
  ],
});
const token = await at.toJwt();
Let me know if you need further assistance!
Sources: Dispatching agents | LiveKit Docs | Authentication | LiveKit Docs
important-judge-78504
03/26/2025, 4:14 PM
tall-airline-16756
03/26/2025, 4:14 PM
Whether `download-files` is necessary to run every time before initializing workers via `cli.run_app` may depend on your specific setup or use case.
If you have further details or context about your workflow, feel free to share, and I can assist further. Alternatively, I can escalate this to a human specialist for more in-depth assistance.
important-judge-78504
03/26/2025, 4:23 PM
tall-airline-16756
03/26/2025, 4:23 PM
• Purpose: `create_explicit_dispatch` allows you to manually dispatch an agent to a room at a specific time via the API.
• Implementation Method: The token approach configures the agent dispatch within the access token itself using `RoomConfiguration`. Explicit dispatch, on the other hand, uses the `AgentDispatchService` API to dispatch an agent to an existing room.
• Timing: With the token approach, agent dispatch is configured when the room is created. Explicit dispatch can be triggered at any time after the agent is registered with an `agent_name`.
Explicit dispatch also requires setting the `agent_name` field in `WorkerOptions` when registering the agent, which prevents automatic dispatch to newly created rooms.
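For reference, the explicit path can be sketched like this in Python. This is a hedged sketch assuming the `livekit-api` package; `LiveKitAPI`, `agent_dispatch.create_dispatch`, and `CreateAgentDispatchRequest` follow the pattern in the LiveKit docs, but verify them against your SDK version:

```python
async def dispatch_agent(room_name: str) -> None:
    # imported here so the sketch stays importable without the SDK installed;
    # requires the livekit-api package plus LIVEKIT_URL / API key env vars
    from livekit import api

    lkapi = api.LiveKitAPI()
    try:
        await lkapi.agent_dispatch.create_dispatch(
            api.CreateAgentDispatchRequest(
                agent_name="test-agent",  # must match agent_name in WorkerOptions
                room=room_name,
                metadata="my_metadata",
            )
        )
    finally:
        await lkapi.aclose()

# to run against your LiveKit server:
# import asyncio; asyncio.run(dispatch_agent("my-room"))
```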
Let me know if you need further clarification!
Sources: Dispatching agents | LiveKit Docs | Authentication | LiveKit Docs
important-judge-78504
03/26/2025, 4:25 PM
tall-airline-16756
03/26/2025, 4:25 PM
While `create_token_with_agent_dispatch` and `create_dispatch` may seem similar, they differ in timing and approach:
• `create_token_with_agent_dispatch`: This function creates an access token with agent dispatch configuration embedded in it. When a user uses this token to join a room, the agent is automatically dispatched to that room. The dispatch is configured at the time of token creation, before the room is joined.
• `create_dispatch`: This is an API method that explicitly dispatches an agent to a room. It requires parameters like the room name, agent name, and optional metadata, and is used to programmatically send an agent to a specific room after the room is already created.
The key difference lies in when and how the agent dispatch is configured. `create_token_with_agent_dispatch` is pre-configured in the token, while `create_dispatch` is a manual API call for an existing room.
Let me know if you have further questions!
Sources: Dispatching agents | LiveKit Docs | Authentication | LiveKit Docs
important-judge-78504
03/26/2025, 4:26 PM
tall-airline-16756
03/26/2025, 4:26 PM
With `create_token_with_agent_dispatch`, the room is created and configured with the agent dispatch settings embedded in the token. When a participant uses the token to join the room, the agent is automatically dispatched to that room at that time.
This approach ensures that the agent is only dispatched when the room is actively joined by a participant, streamlining the process.
Let me know if you need further clarification!
important-judge-78504
03/26/2025, 5:50 PM
tall-airline-16756
03/26/2025, 5:50 PM
important-judge-78504
03/26/2025, 5:50 PM
tall-airline-16756
03/26/2025, 5:51 PM
• Workers Leave a Room:
◦ When explicitly shut down using `ctx.shutdown(reason="Session ended")`.
◦ When the room automatically closes after the last non-agent participant has left, causing the worker to disconnect.
• Agents Leave a Room:
◦ When explicitly disconnected using `ctx.shutdown()`.
◦ When the room is deleted using the server API's `deleteRoom` function, which disconnects all participants, including agents.
◦ When the last human participant leaves the room, causing the room to close automatically and disconnect any remaining agents.
Let me know if you need further clarification or additional details!
Sources: Inside a session | LiveKit Docs
important-judge-78504
03/26/2025, 5:53 PM
tall-airline-16756
03/26/2025, 5:53 PM
The error `botocore.exceptions.ClientError: An error occurred (UnrecognizedClientException) when calling the ConverseStream operation: The security token included in the request is invalid` typically indicates an issue with the credentials or security token being used for authentication.
Here are some common causes and steps to resolve this:
• Ensure that the AWS credentials (Access Key ID and Secret Access Key) being used are correct and have the necessary permissions for the operation.
• Verify that the security token is valid and has not expired. If you are using temporary credentials, you may need to refresh them.
• Check that the AWS region specified in your configuration matches the region of the Bedrock service you are trying to access.
• Ensure that the AWS SDK or CLI is configured correctly with the appropriate profile or environment variables.
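As a quick diagnostic, a hedged sketch (assuming `boto3` is installed): `sts.get_caller_identity` is a standard way to confirm which credentials your environment actually resolves, independent of Bedrock:

```python
def check_aws_identity(region: str = "us-east-1") -> str:
    # imported inside the function so the sketch stays importable without boto3
    import boto3
    from botocore.exceptions import ClientError

    sts = boto3.client("sts", region_name=region)
    try:
        identity = sts.get_caller_identity()
        return f"Authenticated as {identity['Arn']}"
    except ClientError as err:
        # e.g. UnrecognizedClientException or ExpiredToken points at the
        # credentials themselves rather than the Bedrock call
        return f"Credential problem: {err.response['Error']['Code']}"
```

If this check fails with the same UnrecognizedClientException, the problem is in how credentials are resolved (env vars, profile, region) rather than in your LiveKit or Bedrock code.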
If the issue persists, please provide additional details about your setup, and I can assist further or escalate the issue to a specialist.
important-judge-78504
03/26/2025, 7:44 PM
def prewarm_fnc(proc: JobProcess):
    # load silero weights and store to process userdata
    proc.userdata["vad"] = silero.VAD.load()

async def entrypoint(ctx: JobContext):
    # access the loaded silero instance
    vad: silero.VAD = ctx.proc.userdata["vad"]
What's the point of the prewarm function - does it run before the entrypoint function? Without the prewarm function, would you instead have to load the weights directly in the entrypoint? And could that take unnecessary time?
tall-airline-16756
03/26/2025, 7:44 PM
The `prewarm` function is designed to initialize resources, such as loading model files or other time-consuming operations, before the `entrypoint` function is executed. Here's how it works and why it's useful:
• Purpose: The `prewarm` function prepares the environment by loading resources (e.g., Silero weights) and storing them in the process userdata. This ensures that these resources are ready and available when the `entrypoint` function is called.
• Execution Order: The `prewarm` function runs before the `entrypoint` function. The `entrypoint` function can then access the preloaded resources via the context's process userdata.
• Performance Benefits: Without the `prewarm` function, you would need to load the weights directly in the `entrypoint` function, which could introduce unnecessary delays during job processing. By preloading the resources, the `entrypoint` function can execute more quickly.
In your example, the `prewarm` function loads the Silero VAD weights and stores them in `proc.userdata`, making them accessible to the `entrypoint` function when needed.
Let me know if you have further questions!
Sources: Anatomy of an Agent | LiveKit Docs
important-judge-78504
03/26/2025, 7:45 PM
tall-airline-16756
03/26/2025, 7:45 PM
The `prewarm` function runs when a new worker process is created, which happens before the agent is dispatched and before the `entrypoint` function is executed. Its purpose is to initialize resources and prepare the process environment so that it is ready to handle jobs efficiently.
This ensures that any time-consuming setup, such as loading model files, is completed before the agent session begins, improving performance and reducing delays when the `entrypoint` function is called.
Let me know if you have further questions!
Sources: Anatomy of an Agent | LiveKit Docs
important-judge-78504
03/26/2025, 7:56 PM
tall-airline-16756
03/26/2025, 7:56 PM
• Approach: You can modify your `entrypoint` function to handle multiple participants and create an agent for each participant. Each agent can be stored in a dictionary or list for management.
• Example: While the provided sources don't include a specific example, you can extend your current code to create and manage multiple agents dynamically based on the number of participants or predefined rules.
• Recommendations: Deploying multiple agents in one worker is feasible, but it may be more efficient to deploy multiple workers, especially if you expect high concurrency or resource-intensive operations. Tools like Render.com or Kubernetes can help with scaling and deployment.
If you'd like, I can help draft a code example for managing multiple agents in one worker. Let me know!
Sources: Deployment and scaling | LiveKit Docs | Dispatching agents | LiveKit Docs
important-judge-78504
03/26/2025, 8:01 PM
tall-airline-16756
03/26/2025, 8:01 PM
important-judge-78504
03/26/2025, 8:03 PM
tall-airline-16756
03/26/2025, 8:03 PM
important-judge-78504
03/26/2025, 8:04 PM
tall-airline-16756
03/26/2025, 8:04 PM
important-judge-78504
03/26/2025, 8:19 PM
tall-airline-16756
03/26/2025, 8:19 PM
important-judge-78504
03/26/2025, 8:22 PM
tall-airline-16756
03/26/2025, 8:22 PM
• Process Managers: Use tools like `tmux`, `screen`, or `systemd` to manage multiple instances of the script.
• Custom Script: Write a wrapper script to launch multiple instances of the worker script. For example:
import subprocess

# Number of workers to deploy
num_workers = 5

# Path to your worker script
worker_script = "path/to/your_worker_script.py"

# Launch multiple workers
processes = []
for i in range(num_workers):
    process = subprocess.Popen(["python", worker_script])
    processes.append(process)

# Optionally, wait for all processes to complete
for process in processes:
    process.wait()
This script launches 5 instances of your worker script. Adjust the `num_workers` variable as needed.
Let me know if you'd like further assistance or clarification!
important-judge-78504
03/26/2025, 8:44 PM
tall-airline-16756
03/26/2025, 8:45 PM
important-judge-78504
03/26/2025, 8:45 PM
tall-airline-16756
03/26/2025, 8:46 PM
important-judge-78504
03/26/2025, 8:46 PM
tall-airline-16756
03/26/2025, 8:46 PM
important-judge-78504
03/26/2025, 8:48 PM
tall-airline-16756
03/26/2025, 8:48 PM
import subprocess

# Number of workers/agents to deploy
num_workers = 30

# Path to your worker script
worker_script = "path/to/your_worker_script.py"

# Launch multiple workers
processes = []
for i in range(num_workers):
    process = subprocess.Popen(["python", worker_script])
    processes.append(process)

# Optionally, wait for all processes to complete
for process in processes:
    process.wait()
This script launches 30 instances of your worker script, each running independently and managing its own agent. Each worker operates in its own process, ensuring isolation and concurrency.
Let me know if you need further clarification or additional examples!
tall-airline-16756
03/26/2025, 8:49 PM
important-judge-78504
03/26/2025, 8:51 PM
tall-airline-16756
03/26/2025, 8:51 PM
The choice between `subprocess` and `multiprocessing` depends on your specific use case:
• `subprocess`: This is used to spawn completely independent processes. It is ideal when you want to run separate Python scripts or external programs, as each process runs in its own Python interpreter and has its own memory space. This is why `subprocess` is often used for deploying multiple workers, as each worker operates independently.
• `multiprocessing`: This is used to create multiple processes within the same Python program. It is better suited for parallelizing tasks within a single script, as it allows for shared memory and easier communication between processes. However, it may not be ideal for running completely independent worker scripts.
In the context of deploying multiple LiveKit workers, `subprocess` is preferred because each worker is designed to run as an independent process with its own lifecycle, and `subprocess` provides a straightforward way to achieve this.
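To make the contrast concrete, here is a minimal, self-contained sketch using only the standard library:

```python
import subprocess
import sys
from multiprocessing import Pool

def square(x: int) -> int:
    # work defined in the parent program: multiprocessing can call it directly
    return x * x

if __name__ == "__main__":
    # multiprocessing: child processes within the same program,
    # with easy argument/result passing
    with Pool(processes=2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]

    # subprocess: a fully independent interpreter with its own memory space,
    # the same isolation each worker gets when launched via Popen
    result = subprocess.run(
        [sys.executable, "-c", "print('independent process')"],
        capture_output=True,
        text=True,
    )
    print(result.stdout.strip())  # independent process
```

The `multiprocessing` children can only run code the parent already has, while the `subprocess` child could just as well be a different script entirely, which is why it suits independent worker deployment.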
Let me know if you need further clarification!
important-judge-78504
03/26/2025, 8:57 PM
tall-airline-16756
03/26/2025, 8:57 PM
important-judge-78504
03/26/2025, 8:58 PM
tall-airline-16756
03/26/2025, 8:59 PM
Worker load is measured using `load_fnc` and `load_threshold` in the `WorkerOptions`:
• `load_fnc`: This function returns a value between 0 and 1, indicating how busy a worker is. By default, it measures the CPU usage of the worker.
• `load_threshold`: This is the load value (default is 0.75) at which a worker will stop accepting new jobs.
These metrics are used by LiveKit to automatically distribute load across available workers, ensuring efficient load balancing. Additionally, LiveKit Agents provides detailed metrics for observability into performance and model usage.
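For illustration, a custom load function could report normalized load average instead of instantaneous CPU. This is a hedged sketch: the exact `load_fnc` signature expected by `WorkerOptions` may differ by SDK version, so verify it before wiring this in via `WorkerOptions(load_fnc=..., load_threshold=0.75)`:

```python
import os

def my_load_fnc() -> float:
    # report the 1-minute load average normalized by CPU count,
    # clamped to the 0..1 range that load_threshold is compared against
    load1, _, _ = os.getloadavg()  # Unix-only
    return min(load1 / (os.cpu_count() or 1), 1.0)

print(my_load_fnc())
```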
Let me know if you need further details or assistance!
Sources: Deployment and scaling | LiveKit Docs | Capturing metrics | LiveKit Docs
important-judge-78504
03/26/2025, 9:00 PM
tall-airline-16756
03/26/2025, 9:00 PM
The `load_fnc` can still be useful for fine-grained load balancing within a container, but for system-wide monitoring and scaling, container-level metrics are more relevant.
Let me know if you'd like recommendations on tools or strategies for monitoring container-level metrics!
important-judge-78504
03/26/2025, 10:26 PM
tall-airline-16756
03/26/2025, 10:26 PM
important-judge-78504
03/26/2025, 10:27 PM
tall-airline-16756
03/26/2025, 10:27 PM
important-judge-78504
03/26/2025, 10:30 PM
tall-airline-16756
03/26/2025, 10:30 PM
{
  "rule": {
    "dispatchRuleIndividual": {
      "roomPrefix": "call-"
    }
  },
  "roomConfig": {
    "agents": [
      {
        "agentName": "inbound-agent",
        "metadata": "job dispatch metadata"
      }
    ]
  }
}
In this configuration:
• `dispatchRuleIndividual`: Creates a new room for each caller, with the room name consisting of a prefix (e.g., `call-`) followed by the caller's phone number and a random suffix.
• `roomConfig`: Automatically dispatches an agent (e.g., `inbound-agent`) to the custom-named room.
This allows you to create rooms with custom names and automatically dispatch agents to handle calls in those rooms. Let me know if you need further assistance!
Sources: SIP dispatch rule | LiveKit Docs | Accepting incoming calls | LiveKit Docs
important-judge-78504
03/26/2025, 10:31 PM
tall-airline-16756
03/26/2025, 10:31 PM
• Automatic dispatch is enabled when the `agent_name` field is not set in the `WorkerOptions`. This allows the system to deploy and dispatch agents in the same process.
• If you set the `agent_name` field in `WorkerOptions`, automatic dispatch is disabled, and you would need to explicitly dispatch the agent via the API.
For your use case, you can avoid setting the `agent_name` field to enable automatic dispatch, which will deploy and assign agents to rooms as needed. Let me know if you'd like further clarification or assistance!
Sources: Dispatching agents | LiveKit Docs | Making calls using SIP | LiveKit Docs
important-judge-78504
03/26/2025, 10:32 PM
tall-airline-16756
03/26/2025, 10:32 PM
The room name comes from the job context (`ctx.room.name`), which is typically set when the room is created automatically by LiveKit.
If you want to enforce custom room names, you would need to configure the dispatch rules on the server side. For example, you can use a `dispatchRuleIndividual` with a `roomPrefix` to create rooms with a specific naming pattern:
{
  "rule": {
    "dispatchRuleIndividual": {
      "roomPrefix": "custom-room-"
    }
  }
}
This configuration would create rooms with names starting with `custom-room-`, followed by a unique identifier.
Alternatively, if you want to dynamically set the room name in your Python code, you would need to use explicit dispatch instead of automatic dispatch, as automatic dispatch does not provide direct control over room naming in the worker script.
Let me know if you'd like further clarification or assistance!
important-judge-78504
03/26/2025, 10:33 PM
tall-airline-16756
03/26/2025, 10:33 PM