# ask-ai
d
```python
async def on_enter(self):
    await self.session.generate_reply(
        instructions=(
            "Your name is Anna-Leah. Your job: AI assistant for painting "
            "assistance for Hue-Line. Be friendly and give color/layout "
            "advice. Greet customer then call verify_caller function"
        )
    )
    return self.verify_caller
```
Is this a good way to force-call the next function?
t
Thanks for reaching out, Billy. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
I understand you're exploring the use of `return self.verify_caller` in an async function to trigger the next function call. Returning a bound method like this isn't a standard or documented pattern in Python or the LiveKit framework; by itself it does nothing. There are, however, recommended ways to guide the workflow in async functions by returning specific values:
• Return another `Agent` instance to hand off control to a new workflow.
• Return a tuple of a result message and the new instance to provide some output before the handoff.
• Return `None` or a plain string to send output without triggering a handoff.
If you're working within a particular framework or have more details about the `verify_caller` function, please feel free to share! Sources: Docs::Recipes | Workflows | LiveKit Docs | livekit-agents/livekit/agents/voice/agent.py
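To make the return-value convention concrete, here is a framework-agnostic sketch in plain asyncio. The `Agent` class and `dispatch` helper are illustrative stand-ins, not real LiveKit APIs; they only model how a framework could interpret each return type:

```python
import asyncio

class Agent:
    """Stand-in for the framework's Agent class (illustrative only)."""
    def __init__(self, name):
        self.name = name

async def dispatch(result):
    # Mimics how a framework could interpret a return value.
    if isinstance(result, Agent):
        return f"handoff -> {result.name}"                    # hand off control
    if isinstance(result, tuple):
        reply, agent = result
        return f"say {reply!r} then handoff -> {agent.name}"  # speak, then hand off
    return f"say {result!r}"                                  # None or string: output only

async def main():
    print(await dispatch(("One moment.", Agent("verifier"))))
    print(await dispatch("All set!"))

asyncio.run(main())
```

Running this prints `say 'One moment.' then handoff -> verifier` followed by `say 'All set!'`, showing how each return shape maps to a different behavior.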
d
```python
@function_tool
async def verify_caller(self, ctx: RunContext[HelperSessionInfo], name: str = "") -> str:
    """Get the caller's name and send a verification SMS."""
    verification_code = str(uuid.uuid4().int)[:6]
    ctx.session.userdata.verification_code = verification_code
    ctx.session.userdata.user_name = name
    ctx.session.userdata.call_status = "verification_pending"
    logger.info(f"verification code: {verification_code}, name: {name}")
    if name:
        await send_verification_sms(
            ctx.session.userdata.phone_number,
            verification_code,
            ctx.session.userdata.from_number,
        )
        return "Verification code sent successfully. Please check your phone for a 6-digit code."
    # Don't claim success when no SMS was sent
    return "I didn't catch your name, so no code was sent yet."
```
t
Good question about how the `@function_tool`-decorated async function `verify_caller` runs in sequence! You don't need to call it manually, because the LiveKit framework manages execution order based on the LLM's decisions:
• Define the tool: decorating your function with `@function_tool` and writing a clear docstring helps the LLM understand exactly when to use it.
• LLM decision-making: the LLM decides the order of function calls based on the conversation and what the agent needs to do next.
• Automatic execution: the framework runs each function to completion before starting the next one.
So your `verify_caller` function will be triggered whenever the LLM decides it's the right time; there's no need for `return self.verify_caller` to force it. If anything's still unclear or you want to dive deeper, just let me know! Sources: Tool definition and use | LiveKit Docs | tests/test_evals.py | realtime/openai-realtime-tools.py
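The steps above can be sketched without any LiveKit imports. This toy registry models the pattern: a decorator registers tools, and a runner awaits each requested tool in order, so the next one never starts before the current one finishes (`TOOLS`, `run_turn`, and the tool bodies are all illustrative stand-ins):

```python
import asyncio

# Toy tool registry, mimicking what a @function_tool decorator does.
TOOLS = {}

def function_tool(fn):
    TOOLS[fn.__name__] = fn
    return fn

@function_tool
async def greet():
    return "hello"

@function_tool
async def verify_caller():
    return "code sent"

async def run_turn(tool_calls):
    # The framework awaits each requested tool in order; the next
    # call does not start until the current one returns.
    results = []
    for name in tool_calls:
        results.append(await TOOLS[name]())
    return results

print(asyncio.run(run_turn(["greet", "verify_caller"])))
```

In the real framework the list of tool calls comes from the LLM's output rather than being hard-coded.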
d
If a tool call takes long, what's the command to play some music?
t
To keep things lively during long tool executions, here are a few ways to play music or provide feedback:
• Background audio: use the `BackgroundAudioPlayer` to play built-in sounds like typing noises or ambient music. For example:

```python
background_audio = BackgroundAudioPlayer(
    thinking_sound=[
        AudioConfig(BuiltinAudioClip.KEYBOARD_TYPING, volume=0.8),
        AudioConfig(BuiltinAudioClip.KEYBOARD_TYPING2, volume=0.7),
    ],
)
await background_audio.start(room=ctx.room, agent_session=session)
```

• Pre-recorded audio: for a personal touch, play a custom audio file such as music or a sound effect:

```python
await play_wav_once("let_me_check_that.wav", ctx.room)
```

• Verbal updates: have the agent give verbal feedback such as "Let me look that up..." right before starting the tool execution.
These touches help keep users engaged during longer processes. Sources: External data and RAG | LiveKit Docs | delay-handling/README.mdoc | Web and mobile frontends | LiveKit Docs
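For intuition, the "play filler while the tool runs" pattern can be sketched in plain asyncio, with no LiveKit APIs (`thinking_sound`, `slow_tool`, and `with_filler` are illustrative names; the sleeps stand in for real audio and a real lookup):

```python
import asyncio

async def thinking_sound():
    # Stand-in for looping a typing/ambient clip until cancelled.
    try:
        while True:
            await asyncio.sleep(0.01)
    except asyncio.CancelledError:
        pass  # playback stops cleanly when the tool finishes

async def slow_tool():
    await asyncio.sleep(0.05)  # stand-in for a multi-second lookup
    return "result"

async def with_filler(tool_coro):
    filler = asyncio.create_task(thinking_sound())
    try:
        return await tool_coro  # run the real work
    finally:
        filler.cancel()         # stop the filler the moment it returns

print(asyncio.run(with_filler(slow_tool())))
```

The key design point is that the filler runs as a background task and is cancelled in a `finally` block, so it stops even if the tool raises.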
d
My code, e.g.:

```python
async def function_name(ctx):
    result = await might_take_7_seconds(ctx)
    if not result:
        return
    await play_wav("some_file.wav", ctx.room)
    await do_something_with_result(ctx, result)
    return "Call next_function"
```
t
I'm happy to help you play a WAV file during an async function using the `WavPlayer` utility. Here's a simple way to do it:

```python
from agent_extensions.utils import WavPlayer

# Initialize the player
wav_player = WavPlayer()

async def my_async_function(ctx):
    # Play a WAV file and wait for it to finish
    await wav_player.play_once("path/to/audio.wav", ctx.room, volume=0.3)

    # This part runs only after the audio finishes playing
    print("Audio playback finished, continuing with next steps...")
    # ... rest of your logic
```

Some helpful features to keep in mind:
• Audio track management: `WavPlayer` handles creating and publishing the audio track to the room for you.
• Volume control: adjust playback volume with the `volume` parameter.
• Error handling: resources are cleaned up if anything goes wrong during playback.
• Reusability: the player caches audio files, so playing the same file again is faster.
Because `play_once` is awaited, your function continues only after the audio finishes. If you have any questions or want me to walk you through it further, just let me know! Sources: delay-handling/README.mdoc
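The caching behavior mentioned above can be modeled in a few lines. This is a toy sketch with no real audio I/O (`CachingWavPlayer` and its byte payload are invented for illustration): the first play of a path "loads" the file, later plays hit the in-memory cache:

```python
class CachingWavPlayer:
    """Toy model of a caching audio player; no real audio I/O."""
    def __init__(self):
        self._cache = {}
        self.disk_reads = 0  # counts simulated file loads

    def _load(self, path):
        if path not in self._cache:
            self.disk_reads += 1                # only the first play hits "disk"
            self._cache[path] = b"\x00" * 1024  # stand-in for WAV bytes
        return self._cache[path]

    def play_once(self, path):
        data = self._load(path)
        return len(data)  # stand-in for streaming the frames

player = CachingWavPlayer()
player.play_once("chime.wav")
player.play_once("chime.wav")
print(player.disk_reads)  # second play is served from cache
```

This prints `1`: the file was loaded once and reused, which is the speedup the reusability bullet describes.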
d
So the LLM can use `print("___")` as instructions?
t
The LLM can trigger function tools that contain `print()` statements, but those statements aren't instructions for the LLM itself; `print()` is for logging or debugging while the backend logic runs. A simple example:

```python
@function_tool
async def print_to_console(self, context: RunContext):
    print("Console Print Success!")  # Outputs to the agent's runtime console
    return "I've printed to the console."
```

Key points:
• Execution context: `print()` sends output to the console or log stream, which is helpful for debugging and monitoring.
• LLM behavior: the LLM doesn't interpret `print()` as a direct instruction; it just triggers the function tool, and the print happens behind the scenes.
• Feedback to users: to give the user feedback, include a return value that the agent can vocalize or display.
For more structured logging in production, prefer Python's `logging` module over raw `print()` statements. If you have any questions or want me to walk you through this further, just let me know! Sources: tool_calling/update_tools.py
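The split between console output and the value handed back to the agent can be demonstrated directly, using only the standard library (the function name mirrors the example above; `redirect_stdout` stands in for the runtime's log capture):

```python
import io
from contextlib import redirect_stdout

def print_to_console():
    # print() goes to the process's stdout (console/logs), not to the LLM.
    print("Console Print Success!")
    # Only this return value is handed back for the agent to vocalize.
    return "I've printed to the console."

buf = io.StringIO()
with redirect_stdout(buf):
    spoken = print_to_console()

print(f"logged: {buf.getvalue().strip()!r}")
print(f"spoken: {spoken!r}")
```

The two final prints show the two separate channels: what landed in the log stream versus what the agent would actually say.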
d
```
build failed for iad.ocir.io/axyci3pr8vxm/production-cloud-agents:p-06bxocvm4h1-ca-e7cq9ndbvgrf-v20250912042126: failed to solve: failed to push iad.ocir.io/axyci3pr8vxm/production-cloud-agents:p-06bxocvm4h1-ca-e7cq9ndbvgrf-v20250912042126: unknown: Tenant storage limit exceeded. totalBytes: 10000159369087 limitBytes: 10000000000000
```
Got this crazy error... do I need to delete the agent before redeploying to the cloud each time?
t
The error "Tenant storage limit exceeded" means your tenant's storage has gone over its quota: `totalBytes: 10000159369087` against `limitBytes: 10000000000000` puts you roughly 160 MB over a 10 TB limit. The documentation doesn't say whether agents must be deleted before redeploying, but note that each deploy pushes a freshly tagged image (the tag in your error ends in a timestamp), so old images can accumulate. A few steps to try:
• Check storage usage: review your storage usage in the LiveKit Cloud dashboard to see what's taking up space.
• Clean up resources: delete unused agents, images, or other stored data to free up room.
• Quota increase: if you hit the limit regularly, consider requesting a storage quota increase.
I'm happy to help with any of these steps. Sources: Quotas and limits | LiveKit Docs | Deploying to production | LiveKit Docs | Deployment and scaling | LiveKit Docs
d
So who do I contact for this?
t
Thanks for reaching out! You're already in touch with support, which is the right contact for a definitive answer on tenant storage quotas. In the meantime, the same steps apply: check your storage usage in the LiveKit Cloud dashboard to see what's taking up space, delete unused agents, images, or other stored data, and request a quota increase if you regularly exceed the limit. If you'd like, share more details about your current storage usage or how you're deploying, and I'd be happy to assist further.
d
I need a definite answer not a guess
t
I don't have the answer you're looking for. You could also try asking your question: • in one of the other Slack channels or • to https://deepwiki.com/livekit/livekit_composite which is trained on all LiveKit source code If you find the answer, please post it here to help others!