gifted-tomato-63690
02/13/2025, 7:12 PM

gray-monkey-83525
02/13/2025, 7:12 PM
To allow the VoicePipelineAgent to receive typed text messages, you can use the ChatManager to listen for incoming chat messages and process them. Here's a basic example of how you can achieve this:
1. Set up a ChatManager: Use the ChatManager to listen for incoming text messages in the room.
2. Process incoming messages: When a message is received, append it to the agent's chat context and generate a response using the LLM.
Here's a code snippet based on the provided data sources:
import asyncio

from livekit import rtc
from livekit.agents import AutoSubscribe, JobContext, WorkerOptions, cli, llm
from livekit.agents.pipeline import VoicePipelineAgent


async def entrypoint(ctx: JobContext):
    # Connect to the room, subscribing to audio tracks only
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)

    # Wait for a participant to join
    participant = await ctx.wait_for_participant()

    # Initialize the agent (vad/stt/llm/tts are loaded in a prewarm function)
    agent = VoicePipelineAgent(
        vad=ctx.proc.userdata["vad"],
        stt=ctx.proc.userdata["stt"],
        llm=ctx.proc.userdata["llm"],
        tts=ctx.proc.userdata["tts"],
        chat_ctx=llm.ChatContext(),
    )

    # Start the agent for the joined participant
    agent.start(ctx.room, participant)

    # Set up the chat manager to receive typed text messages
    chat = rtc.ChatManager(ctx.room)

    async def answer_from_text(txt: str):
        # Append the text message to a copy of the chat context,
        # run it through the LLM, and speak the streamed response
        chat_ctx = agent.chat_ctx.copy()
        chat_ctx.append(role="user", text=txt)
        stream = agent.llm.chat(chat_ctx=chat_ctx)
        await agent.say(stream)

    @chat.on("message_received")
    def on_chat_received(msg: rtc.ChatMessage):
        if msg.message:
            asyncio.create_task(answer_from_text(msg.message))


# Run the application
if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
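The snippet above assumes ctx.proc.userdata was populated by a prewarm function passed to WorkerOptions. A minimal sketch of such a function follows; the specific plugin choices (Silero VAD, Deepgram STT, OpenAI LLM/TTS) are assumptions, so swap in whichever providers you actually use:

from livekit.agents import JobProcess, WorkerOptions, cli
from livekit.plugins import deepgram, openai, silero


def prewarm(proc: JobProcess):
    # Load the components once per process and share them with the
    # entrypoint via proc.userdata (provider choices are assumptions)
    proc.userdata["vad"] = silero.VAD.load()
    proc.userdata["stt"] = deepgram.STT()
    proc.userdata["llm"] = openai.LLM()
    proc.userdata["tts"] = openai.TTS()


if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint, prewarm_fnc=prewarm))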
This setup allows the VoicePipelineAgent to handle both voice and text inputs by appending text messages to the chat context and generating responses using the LLM.
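On the sending side, the typed messages can come from any other participant in the room, for example via the ChatManager in the Python SDK or the chat component in the JS SDK. Here is a minimal sketch of the Python sending side, assuming a room object that is already connected:

from livekit import rtc

async def send_text(room: rtc.Room, text: str):
    # Publishes the message on the room's chat topic; the agent's
    # "message_received" handler above will pick it up
    chat = rtc.ChatManager(room)
    await chat.send_message(text)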
Would you like more details on any specific part of this setup?
Sources:
- LiveKit Slack Discussion
- LiveKit Agents Overview