# ask-ai
g
Okay, give me an example of different AI agents in a single agent process. Is it possible? I would like to have agent-dev and agent-prod.
t
Thanks for reaching out, SGS. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Could you clarify what platform or context you are referring to for implementing multiple AI agents in a single process? Are you asking about LiveKit's capabilities, or is this related to a different system or framework?
g
LiveKit AI Agent for Python.
t
Hey there! I'd love to help you get clear on using agent-dev and agent-prod in LiveKit's Python AI Agent. Could you tell me a bit more about what you're trying to accomplish? Are you hoping to run both environments at the same time, or are you curious about how to smoothly transition between development and production modes? I'm all ears and happy to walk you through this! 😊
g
Okay, I need to connect to the right agent by the agent name. I don't care about a single process. Right now I don't have an agent name, so during local development I get connected to prod.
t
Hey there! 👋 Want to connect to a specific agent by name during local development in LiveKit Python? No worries, I've got you covered with some super simple steps:

1. First up, set the `agent_name` field in the `WorkerOptions` when creating your agent. It's totally easy! For example: `opts = WorkerOptions(..., agent_name="test-agent")`. Or if you're using the CLI template approach:
```python
if __name__ == "__main__":
    cli.run_app(
        WorkerOptions(
            entrypoint_fnc=entrypoint,
            prewarm_fnc=prewarm,
            agent_name="inbound-agent",
        ),
    )
```
2. Next, start your agent in development mode; just run `python3 agent.py dev`.
3. Now for the fun part - use explicit dispatch to connect to your named agent via the API. Here's a friendly example:
```python
import asyncio
from livekit import api

room_name = "my-room"
agent_name = "test-agent"

async def create_explicit_dispatch():
    lkapi = api.LiveKitAPI()
    dispatch = await lkapi.agent_dispatch.create_dispatch(
        api.CreateAgentDispatchRequest(
            agent_name=agent_name, room=room_name, metadata='{"user_id": "12345"}'
        )
    )
    print("created dispatch", dispatch)
    dispatches = await lkapi.agent_dispatch.list_dispatch(room_name=room_name)
    print(f"there are {len(dispatches)} dispatches in {room_name}")
    await lkapi.aclose()

asyncio.run(create_explicit_dispatch())
```
Pro tip: When you set the `agent_name`, automatic dispatch gets disabled, which means you'll have total control over when and how your agent joins rooms. Pretty cool, right? 😎

Sources: Agent dispatch | LiveKit Docs | Accepting incoming calls | LiveKit Docs
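To tie this back to your agent-dev / agent-prod question: one option (not from the docs, just a sketch) is to pick the registered agent name from an environment variable, so your local worker registers as agent-dev while the deployed worker registers as agent-prod, and each client dispatches by the matching name. The AGENT_NAME variable below is a made-up name for illustration:

```python
import os

from livekit.agents import JobContext, WorkerOptions, cli

async def entrypoint(ctx: JobContext):
    await ctx.connect()
    # ... your agent logic ...

# Hypothetical env var: set AGENT_NAME=agent-prod in production,
# leave it unset (or set agent-dev) on your laptop.
AGENT_NAME = os.environ.get("AGENT_NAME", "agent-dev")

if __name__ == "__main__":
    cli.run_app(
        WorkerOptions(
            entrypoint_fnc=entrypoint,
            agent_name=AGENT_NAME,
        ),
    )
```

Your dispatch call (or token) then just passes the same name, so a locally running dev worker never picks up production rooms.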
g
I don't fully get the explicit dispatch, can you elaborate?
t
How to Enable Explicit Dispatch

Hey there! Let me break down explicit dispatch in LiveKit for you. Think of it like having a remote control for your agent's room assignments, instead of letting things happen automatically. To get started, you'll want to set the `agent_name` field in the `WorkerOptions` when registering your agent:
```python
opts = WorkerOptions(
    entrypoint_fnc=entrypoint,
    agent_name="test-agent",
)
```
By doing this, your agent won't jump into rooms on its own – you're taking the driver's seat!

Dispatching an Agent via API

Once your agent is up and running with an `agent_name`, you can personally guide it into rooms using the `AgentDispatchService`. It's like calling your agent's personal taxi service!
```python
import asyncio
from livekit import api

room_name = "my-room"
agent_name = "test-agent"

async def create_explicit_dispatch():
    lkapi = api.LiveKitAPI()
    dispatch = await lkapi.agent_dispatch.create_dispatch(
        api.CreateAgentDispatchRequest(
            agent_name=agent_name, room=room_name, metadata="my_job_metadata"
        )
    )
    print("created dispatch", dispatch)
    dispatches = await lkapi.agent_dispatch.list_dispatch(room_name=room_name)
    print(f"there are {len(dispatches)} dispatches in {room_name}")
    await lkapi.aclose()

asyncio.run(create_explicit_dispatch())
```
Don't worry – if the room doesn't exist yet, it'll be created automatically during dispatch. Super convenient, right?

Handling Job Metadata

You can also add some extra context when dispatching an agent. It's like giving your agent a little note about its mission:
```python
async def entrypoint(ctx: JobContext):
    logger.info(f"job metadata: {ctx.job.metadata}")
    # Rest of your agent code
```
Use Case: Telephony Integration

For telephony setups, explicit dispatch is a game-changer. It ensures no surprise room assignments, giving you total control:
```python
if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(
        entrypoint_fnc=entrypoint,
        agent_name="my-telephony-agent"
    ))
```
Need any help working through this? I'm here to guide you! Just give me a shout if something isn't crystal clear. 😊

Sources: Dispatching agents | LiveKit Docs | Agents telephony integration | LiveKit Docs
g
Is explicit dispatch a must when the agent has a name? Can I not specify it on the client side?
t
When you set the `agent_name` field in `WorkerOptions`, things get a bit more hands-on with dispatching agents. Think of it like switching from automatic pilot to manual mode – you'll need to take the wheel and guide your agents into rooms yourself. But don't worry! You've got several cool ways to do this:

1. Using the API's `AgentDispatchService` – super straightforward.
2. Leveraging SIP dispatch rules for inbound calls – neat trick!
3. Configuring participant tokens with `RoomAgentDispatch` entries, which lets you set up dispatch right from the client side.

That last method is especially handy – it means you can plan out agent assignments when participants first connect. Sounds complicated, but it's actually pretty simple once you get the hang of it. Want me to walk you through the details? I'm happy to help!

Sources: Dispatching agents | LiveKit Docs
g
Give me examples for 1 and 3.
t
1. API Dispatch Example (AgentDispatchService)
```python
import asyncio
from livekit import api

room_name = "my-room"
agent_name = "test-agent"

async def create_explicit_dispatch():
    lkapi = api.LiveKitAPI()
    dispatch = await lkapi.agent_dispatch.create_dispatch(
        api.CreateAgentDispatchRequest(
            agent_name=agent_name, room=room_name, metadata="my_job_metadata"
        )
    )
    print("created dispatch", dispatch)
    dispatches = await lkapi.agent_dispatch.list_dispatch(room_name=room_name)
    print(f"there are {len(dispatches)} dispatches in {room_name}")
    await lkapi.aclose()

asyncio.run(create_explicit_dispatch())
```
3. Client-Side Token Configuration Example (RoomAgentDispatch)
```typescript
import { AccessToken, VideoGrant } from 'livekit-server-sdk';
import { RoomAgentDispatch, RoomConfiguration } from '@livekit/protocol';

const roomName = 'name-of-room';
const participantName = 'user-name';
const agentName = 'test-agent';

const at = new AccessToken('api-key', 'secret-key', {
  identity: participantName,
});
const videoGrant: VideoGrant = {
  room: roomName,
  roomJoin: true,
};

at.addGrant(videoGrant);
at.roomConfig = new RoomConfiguration({
  agents: [
    new RoomAgentDispatch({
      agentName: agentName,
      metadata: 'test-metadata'
    })
  ]
});

const token = await at.toJwt();
console.log('access token', token);
```
Hey there! I've got two awesome ways for you to set up agent dispatches – one using the API and another with client-side token configuration. The cool part? Both methods let you add job metadata, so you can pass along extra context like user IDs. Sounds helpful, right? If anything's not clicking, just give me a shout, and I'll break it down further!

Sources: Dispatching agents | LiveKit Docs | Authentication | LiveKit Docs | Agent dispatch | LiveKit Docs
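If you'd rather mint that token from Python to match the rest of your stack, here's a rough equivalent sketch. It assumes your installed livekit-api version exposes RoomConfiguration / RoomAgentDispatch and the AccessToken.with_room_config() helper, so double-check against your SDK version:

```python
from livekit.api import (
    AccessToken,
    RoomAgentDispatch,
    RoomConfiguration,
    VideoGrants,
)

room_name = "name-of-room"

# Participant token that asks LiveKit to dispatch "test-agent" into the room
# as soon as this participant connects.
token = (
    AccessToken("api-key", "secret-key")
    .with_identity("user-name")
    .with_grants(VideoGrants(room_join=True, room=room_name))
    .with_room_config(
        RoomConfiguration(
            agents=[RoomAgentDispatch(agent_name="test-agent", metadata="test-metadata")]
        )
    )
)

print("access token", token.to_jwt())
```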
g
Okay, how would you set up multiple people and a single AI agent in one room?
t
Overview:

Hey there! Let me break down how LiveKit's Room works - think of it like a super flexible conference call where everyone can jump in and share audio, video, or data. The cool part? There's no cap on how many people can join, and everyone can easily share and listen to tracks.

Example Use Cases:

1. AI Voice Agent Scenario: Imagine a setup where each person can talk through their mic, and an AI friend is right there listening and ready to respond. Users share their audio, the AI tunes in, and then speaks back - how neat is that?
2. AI Camera Monitoring: Picture multiple cameras all streaming into one room, with a smart AI agent keeping an eye on everything. Users can even pop in to see what's happening in real-time!

Implementation Notes:

• Just make sure each participant has their own unique identity - like giving everyone a special name tag.
• The room comes to life when the first person joins and gently closes when the last person leaves - super smooth!
• Your AI agent can easily join the party using the methods we've chatted about before.

Excited to help you dive deeper into this! Need me to walk you through some code or explain anything else? Just ask!

Sources: Connecting to LiveKit | LiveKit Docs | Realtime media | LiveKit Docs
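For that first use case, here's a rough sketch of what the agent side could look like, assuming the livekit-agents 1.x AgentSession API and the OpenAI realtime plugin (swap in whatever STT/LLM/TTS you actually use; depending on your version you may need RoomInputOptions to control which participants the session listens to):

```python
from livekit.agents import Agent, AgentSession, JobContext, WorkerOptions, cli
from livekit.plugins import openai

async def entrypoint(ctx: JobContext):
    # Join the room; the human participants connect with normal client tokens.
    await ctx.connect()

    session = AgentSession(
        # Assumption: a realtime model that handles speech-to-speech in one step.
        llm=openai.realtime.RealtimeModel(),
    )

    await session.start(
        room=ctx.room,
        agent=Agent(
            instructions=(
                "You are a helpful assistant in a group call. "
                "Respond to whoever spoke last and keep the shared context."
            )
        ),
    )

if __name__ == "__main__":
    # Named agent, so it only joins rooms it is explicitly dispatched to.
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint, agent_name="group-agent"))
```

Each person connects with their own identity and token, and you dispatch the agent into the same room using either of the methods shown earlier.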
g
Group meeting with AI as moderator: give me the core code and explain.
t
Hey there! I'm really curious to understand exactly what you're looking for with the AI moderator. Could you walk me through the specific roles you want it to play during the group meeting? Are you thinking about things like helping manage who speaks when, creating a live transcript, or something else entirely? The more details you can share, the better I can help craft a solution that fits your needs perfectly. 😊
g
Okay, imagine couple counselling. I need that.
t
I don't have the answer you're looking for. You could also try asking your question:
• in one of the other Slack channels, or
• to https://deepwiki.com/livekit/livekit_composite which is trained on all LiveKit source code

If you find the answer, please post it here to help others!
g
Don't worry, I will break it down: I need two participants with an AI agent in one room. That's all, whoever speaks, the AI responds with context.
There?