# ask-ai
I'm working on LiveKit Agents and looking into how in-between fillers are handled. I saw options like generate_reply and say in the agent tools, which seem useful for generating filler responses. Just wanted to check—besides those, is there any built-in support for automatically handling fillers during conversations?
Thanks for reaching out, Akshay. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
LiveKit Agents does not include automatic or built-in support for conversational fillers beyond the core agent tools like `generate_reply` and `say`. If you want to inject filler responses (such as quick, context-neutral acknowledgments like "let me check that for you…"), you need to implement these manually within your agent logic. Approaches from the documentation include:

- Using a fast, secondary LLM to generate short, generic fillers while the main response is processed in parallel (e.g., a fast pre-response agent pattern)
- Predefined static filler messages chosen at random
- System prompts instructing the LLM to tell users when a lookup or delay is occurring
- Leveraging speech synthesis options: some TTS providers (like Deepgram) support filler words natively via prompting, but this also requires manual integration

The underlying framework provides tools such as `session.say()` (with `add_to_chat_ctx=False`) and timing controls like `min_consecutive_speech_delay` to help you build these patterns, but fully automatic handling of fillers is not part of the core agent implementation. If you would like more details or sample code for any of these approaches, let me know which method you're most interested in.

Sources: Deepgram TTS integration guide | LiveKit Docs | Agent speech | LiveKit Docs | tests/test_agent_session.py | delay-handling/README.mdoc