# ask-ai
p
Do you have any examples of real voice apps using few shots?
t
Thanks for reaching out, Mario. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Thanks for reaching out! Just to make sure I understand you correctly, could you please clarify what you mean by "few-shot" in this context? Are you talking about few-shot learning for model training, inference-time prompt engineering, or setting up agent behavior with just a few examples? Knowing this will help me give you the most useful information or examples. Looking forward to your reply!

Sources: Video Conferencing Applications | livekit/livekit_composite | DeepWiki | LiveKit Docs | templates.yaml | Building the all-in-one platform for voice AI agents
p
few shots, as in, llm instructions with examples
t
Thanks for your question! LiveKit's example voice agents mainly use zero-shot prompting: the LLM follows instructions that describe the desired behavior without multiple input-output examples. Agents like the Meditation Coach or Customer Support Agent work from clear, descriptive instructions, but they don't include structured few-shot examples in their prompts.

That said, LiveKit offers features like dynamic context injection and multi-agent workflows that are useful if you want to try few-shot prompting. For instance, you could dynamically add input-output pairs to the LLM's context during a session to guide its responses more precisely. If you'd like, I'd be happy to walk you through how to set this up or explore these options together. Just let me know, and we can take it from there!

Sources: web/src/data/presets.ts | examples/src/gemini_realtime_agent.ts | pipeline-tts/playai_tts.py | examples/voice_agents/multi_agent.py
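For a quick preview of what that could look like, here is a minimal sketch using the LiveKit Agents Python SDK (this assumes the v1.x `Agent`/`ChatContext` API; the instructions text and the example turns are just placeholders):

```python
from livekit.agents import Agent, llm


def build_few_shot_ctx() -> llm.ChatContext:
    # Seed the chat history with a few input/output pairs so the LLM can
    # imitate their tone and structure (few-shot prompting).
    ctx = llm.ChatContext.empty()
    ctx.add_message(role="user", content="I want to book an appointment.")
    ctx.add_message(
        role="assistant",
        content="Sure! When would you like to come in? We have openings tomorrow morning or Thursday evening.",
    )
    ctx.add_message(role="user", content="Can I get a callback later?")
    ctx.add_message(role="assistant", content="Of course. What time works best for you?")
    return ctx


# The examples appear to the model as prior conversation turns.
support_agent = Agent(
    instructions="You are a helpful customer support agent. Keep answers short and conversational.",
    chat_ctx=build_few_shot_ctx(),
)
```

You would then hand `support_agent` to an `AgentSession` as usual; because the examples live in the chat context rather than in the instructions, you can swap them out per session.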
p
Sure
t
I'd be happy to help you implement few-shot prompting in your LiveKit voice apps! A great way to do this is to structure your LLM instructions with input-output examples. Here's a guide to get you started:

1. Define the Prompt: Begin with a clear description of the agent's role and behavior, then include a few input-output examples to guide the model. For instance:
Role: You are a helpful customer support agent.

Example 1:
User: I want to book an appointment.
Assistant: Sure! When would you like to come in? We have openings tomorrow morning or Thursday evening.

Example 2:
User: Can I get a callback later?
Assistant: Of course. What time works best for you?
2. Inject Context Dynamically: Use LiveKit's dynamic context injection features to add these examples into the LLM's context during a session. This might involve overriding the `llm_node` function or using similar hooks to update the instructions on the fly based on what the user says (see the sketches after this list).

3. Test and Iterate: Run some tests to see how the LLM responds, and tweak the examples or instructions until the agent behaves the way you want.

If you'd like help with any specific step, such as using `llm_node` for context injection or crafting prompts tailored to your needs, just let me know and I'll gladly guide you through it!
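To make step 1 concrete, here is a minimal sketch of an agent whose instructions embed the example prompt above (assuming the LiveKit Agents v1.x Python API; the class name and the greeting behavior in `on_enter` are placeholders):

```python
from livekit.agents import Agent

# The few-shot examples live directly in the instructions string, so every
# LLM call sees them alongside the role description.
FEW_SHOT_INSTRUCTIONS = """\
Role: You are a helpful customer support agent.

Example 1:
User: I want to book an appointment.
Assistant: Sure! When would you like to come in? We have openings tomorrow morning or Thursday evening.

Example 2:
User: Can I get a callback later?
Assistant: Of course. What time works best for you?

Follow the tone and brevity of the examples above.
"""


class SupportAgent(Agent):
    def __init__(self) -> None:
        super().__init__(instructions=FEW_SHOT_INSTRUCTIONS)

    async def on_enter(self) -> None:
        # Greet the caller as soon as the agent joins the session.
        self.session.generate_reply(instructions="Greet the caller and offer to help.")
```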
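And for step 2, here is a rough sketch of injecting examples at inference time by overriding the agent's `llm_node`, based on my understanding of the v1.x pipeline-node API (`Agent.default.llm_node`); the example bank and the "reference exchanges" framing are assumptions, just one way to do it:

```python
from livekit.agents import Agent

# Hypothetical example bank; in practice you might load this from a file or
# choose different examples depending on the topic of the conversation.
FEW_SHOT_EXAMPLES = [
    ("I want to book an appointment.",
     "Sure! When would you like to come in? We have openings tomorrow morning or Thursday evening."),
    ("Can I get a callback later?",
     "Of course. What time works best for you?"),
]


class SupportAgent(Agent):
    def __init__(self) -> None:
        super().__init__(instructions="You are a helpful customer support agent.")

    async def llm_node(self, chat_ctx, tools, model_settings):
        # Work on a copy so the injection only shapes this inference call.
        ctx = chat_ctx.copy()

        # Format the few-shot pairs and add them as extra context right
        # before inference, framed as reference exchanges to imitate.
        examples_text = "\n\n".join(
            f"User: {user}\nAssistant: {assistant}"
            for user, assistant in FEW_SHOT_EXAMPLES
        )
        ctx.add_message(
            role="assistant",
            content="Reference exchanges to imitate for tone and phrasing:\n\n" + examples_text,
        )

        # Delegate to the default LLM node with the augmented context.
        async for chunk in Agent.default.llm_node(self, ctx, tools, model_settings):
            yield chunk
```

Compared with baking the examples into the instructions, this keeps the base prompt short and lets you vary which examples are injected from turn to turn.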