witty-yak-62929
07/09/2025, 1:40 AM
tall-airline-16756
07/09/2025, 1:40 AM
tall-airline-16756
07/09/2025, 1:41 AM
witty-yak-62929
07/09/2025, 1:41 AM
tall-airline-16756
07/09/2025, 1:42 AM
The ParallelExec function works like a task manager that splits work across CPU cores, spreading the load so no single core gets overwhelmed.
• Concurrent Job Execution: Agents can run multiple jobs in isolation, like separate rooms where work happens without anyone bumping into each other.
• LLM Parallel Tool Calls: Several operations, such as transcribing audio, generating responses, and making API calls, can run simultaneously.
• Process Isolation: Each agent runs in its own process, which keeps state separate and prevents unwanted interactions between agents.
• Event-Driven Architecture: Components like LLMStream let you run background tasks smoothly and efficiently.
• Pipeline Architecture: For voice agents, the processing steps (speech-to-text, language processing, text-to-speech) can overlap in time, enabling smooth real-time interactions.
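To make the pipeline idea concrete, here's a minimal asyncio sketch of overlapping stages. This is plain Python, not LiveKit's API; the stage names (transcribe, generate, synthesize) are illustrative stand-ins for real STT/LLM/TTS calls.

```python
import asyncio

# Simulated pipeline stages; a real agent would call STT/LLM/TTS services here.
async def transcribe(chunk: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for network latency
    return f"text({chunk})"

async def generate(text: str) -> str:
    await asyncio.sleep(0.01)
    return f"reply({text})"

async def synthesize(reply: str) -> str:
    await asyncio.sleep(0.01)
    return f"audio({reply})"

async def handle(chunk: str) -> str:
    # One chunk flows through all three stages in order.
    text = await transcribe(chunk)
    reply = await generate(text)
    return await synthesize(reply)

async def main() -> list[str]:
    # gather() overlaps the chunks instead of processing them one by one,
    # so total latency approaches that of a single chunk.
    return await asyncio.gather(*(handle(c) for c in ["a", "b", "c"]))

results = asyncio.run(main())
print(results)
```

The same shape generalizes to parallel tool calls: independent coroutines awaited together with asyncio.gather.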
This pattern is a game-changer for scenarios where you need to handle complex workflows with lots of moving parts. Curious how to actually implement this? Let me know what you're working on and I'd be happy to help you dive deeper. Go, TypeScript, and JavaScript code snippets are ready as well. All of this comes straight from analyzing LiveKit's official code and docs.
Sources: utils/parallel_test.go | agents/src/pipeline/pipeline_agent.ts | .github/next-release/changeset-5292c7c6.md | An open source stack for real-time multimodal AI | LLMStream | LiveKit Agents
witty-yak-62929
07/09/2025, 1:43 AM
tall-airline-16756
07/09/2025, 1:44 AM
agent1.py:
from dotenv import load_dotenv
from livekit.agents import Agent, AgentSession, JobContext, WorkerOptions, cli
from livekit.plugins import deepgram, openai, silero

load_dotenv()

class Agent1(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="You are Agent 1",
            stt=deepgram.STT(),
            llm=openai.LLM(model="gpt-4o"),
            tts=openai.TTS(voice="nova"),
            vad=silero.VAD.load()
        )

async def entrypoint(ctx: JobContext):
    await ctx.connect()
    session = AgentSession()
    await session.start(
        agent=Agent1(),
        room=ctx.room
    )

if __name__ == "__main__":
    # agent_name is required for explicit dispatch with `lk dispatch create --agent-name`
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint, agent_name="agent1"))
agent2.py:
from dotenv import load_dotenv
from livekit.agents import Agent, AgentSession, JobContext, WorkerOptions, cli
from livekit.plugins import deepgram, openai, silero

load_dotenv()

class Agent2(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions="You are Agent 2",
            stt=deepgram.STT(),
            llm=openai.LLM(model="gpt-4o"),
            tts=openai.TTS(voice="echo"),
            vad=silero.VAD.load()
        )

async def entrypoint(ctx: JobContext):
    await ctx.connect()
    session = AgentSession()
    await session.start(
        agent=Agent2(),
        room=ctx.room
    )

if __name__ == "__main__":
    # agent_name is required for explicit dispatch with `lk dispatch create --agent-name`
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint, agent_name="agent2"))
Run each agent in separate terminal windows:
python agent1.py start
python agent2.py start
Then dispatch the agents to their rooms:
lk dispatch create --agent-name agent1 --room room1
lk dispatch create --agent-name agent2 --room room2
This setup runs both agents independently and in parallel. I'm here if you want to chat more about integration or need help with specific deployment details. Don't hesitate to reach out!
Sources: complex-agents/medical_office_triage/triage.py