# ask-ai
r
when using a LangGraph agent as the LLM adapter with LiveKit, how can we use filler words while the agent is busy?
t
Thanks for reaching out, Umair. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Hey there! Just wanted to check in and see how you’re planning to use filler words with the Langgraph agent in LiveKit. Are you thinking about setting up automated responses for those moments when there’s a bit of a delay, or is there a particular feature in Langgraph or LiveKit you have in mind? I’m here to help!
r
let's say a tool call takes some time to complete the request. The tool call generates some signal, and LiveKit says something like "I am still checking your record, it will take some time." The goal is to detect a long-running task from the graph and fill that silence.
t
When dealing with long-running tasks with the LangGraph agent in LiveKit, here are a few ways to fill those quiet moments:

• Verbal Status Updates: You can set up the agent to give little verbal updates while it's working. For example, after a short pause, it might say something like, "I'm still checking your record, this might take a bit." It's a nice way to keep the conversation flowing using the agent's speech features.
• Background Audio: Adding some background sounds, like typing or thinking noises, can help show that the agent is busy processing the request. It makes the whole experience feel more alive and engaging.
• Filler Words in Speech-to-Text (STT): You can turn on the `filler_words` option in the STT settings so that natural filler words like "um" or "uh" are kept in transcriptions. It adds a touch of realism to the interaction.

These little touches can make waiting feel smoother and keep things more engaging for users. If you want, I'm happy to help you with the specifics on how to set any of these up!
r
I really like option one and three, please guide me how we can do that
t
Here's a guide on how you can set up options one (Verbal Status Updates) and three (Filler Words in Speech-to-Text) using the LangGraph agent in LiveKit:

• Verbal Status Updates: If you want to keep users in the loop during longer tasks, you can have the agent send a quick status update after a short pause. Here's a simple example to get you started:
```python
import asyncio
from livekit.agents import function_tool, RunContext


@function_tool()
async def search_knowledge_base(
    self,
    context: RunContext,
    query: str,
) -> str:
    # Send a verbal status update to the user after a short delay
    async def _speak_status_update(delay: float = 0.5):
        await asyncio.sleep(delay)
        await context.session.generate_reply(instructions=f"""
        You are searching the knowledge base for "{query}" but it is taking a little while.
        Update the user on your progress, but be very brief.
        """)

    status_update_task = asyncio.create_task(_speak_status_update(0.5))

    try:
        # Perform search (function definition omitted for brevity)
        result = await _perform_search(query)
    finally:
        # Cancel the pending update if the search finishes (or fails) first
        status_update_task.cancel()

    return result
```
• Filler Words in Speech-to-Text (STT): To keep natural filler words like "um" or "uh" in the transcription, turn on the `filler_words` option in your STT settings. Here's how you can do it:
```python
from livekit import agents
from livekit.agents import AgentSession
from livekit.plugins import deepgram


async def entrypoint(ctx: agents.JobContext):
    session = AgentSession(
        # filler_words is a provider-specific option (supported by the Deepgram plugin)
        stt=deepgram.STT(
            filler_words=True,
            # other STT options
        ),
        # ... other components
    )
    await session.start(room=ctx.room)
```
These tweaks should help you add both verbal status updates and filler words smoothly. If you run into any questions or want to chat more about these, I’m here to help!
r
could you please provide this according to langgraph?
t
Here's how you can set up verbal status updates and configure filler words specifically for the LangGraph agent in LiveKit:

• Verbal Status Updates: You can use the `generate_reply` method to keep users in the loop with verbal feedback during longer operations. Check out this example:
```python
import asyncio
from livekit.agents import function_tool, RunContext


@function_tool()
async def search_knowledge_base(
    self,
    context: RunContext,
    query: str,
) -> str:
    # Helper to send a quick verbal update after a short pause
    async def _speak_status_update(delay: float = 0.5):
        await asyncio.sleep(delay)
        await context.session.generate_reply(instructions=f"""
        You are searching the knowledge base for "{query}" but it is taking a little while.
        Update the user on your progress, but be very brief.
        """)

    status_update_task = asyncio.create_task(_speak_status_update(0.5))

    try:
        # Now, do the actual search
        result = await _perform_search(query)
        return result
    finally:
        # If the search finishes before the update fires, cancel it to keep things tidy
        status_update_task.cancel()
```
• Filler Words in Speech-to-Text: To make your speech-to-text output more natural, you can turn on filler words in the Deepgram STT plugin by setting the `filler_words` parameter to `True`. Here's how you do it:
```python
from livekit.plugins import deepgram

# Initialize the STT with filler_words enabled
stt = deepgram.STT(
    model="nova-3",
    language="en-US",
    filler_words=True,  # keep "um"/"uh" in transcriptions
)

# Or, if you want to update the setting on the fly:
await stt.update_options(filler_words=True)
```
With these steps, you'll be all set to add helpful verbal status updates and manage filler words smoothly for LangGraph. If you have any questions or want to dive deeper, just let me know!

Sources: External data and RAG | LiveKit Docs | Handling RAG Delays in Voice Agents | LiveKit Docs | Deepgram integration guide | LiveKit Docs | STTOptions | LiveKit Agents
r
how can I play or say something while the agent is working and the user is waiting for a response?
t
Hey there! To let users know the agent is working while they wait for a response in LiveKit, here are some ways you can do it:

• System Prompt Instructions: You can have the agent say something like, "Let me look that up for you," by adding specific directions in the system prompt. It's a nice way to keep things conversational.
• Static Text Messages: Simple, pre-set messages like "I'm checking on that for you, please hold on," work great to keep users in the loop.
• Dynamic LLM Responses: For a more personalized touch, let the LLM create real-time updates that fit the context of what's happening.
• Audio File Playback: You could play short audio clips, like little "thinking" sounds, using the BackgroundAudioPlayer. You can use built-in sounds or add your own custom ones.
• UI Indications: If your app has a frontend, adding visual cues like a "thinking" indicator or progress bar is a great way to show the agent's status.

Just let me know which option sounds good to you, and I'd be happy to help with more details or code examples!

Sources: delay-handling/README.mdoc | External data and RAG | LiveKit Docs | Web and mobile frontends | LiveKit Docs
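The static-message option above can be sketched as a small helper. This is a sketch, not a LiveKit API: it assumes a started `AgentSession` whose `say` method speaks a fixed line, and the helper name `with_holding_line` is illustrative.

```python
async def with_holding_line(
    session,
    slow_coro,
    text: str = "I'm checking on that for you, please hold on.",
):
    """Speak a fixed holding line, then await the slow work and return its result."""
    await session.say(text)  # speak the static message first
    return await slow_coro   # then run the long operation
```

Called as `await with_holding_line(session, run_slow_tool())`, the user hears the holding line before the slow tool call completes.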
r
where can I put that text in LiveKit?
t
Hey there! If you're looking to set up where and how status updates show up or play in LiveKit, here are some tips to get you going:

• Agent Speech: You can have your agent give verbal feedback using its speech APIs, for example `session.say` on an `AgentSession`.
• Text and Audio Modalities: When you initialize `RealtimeModel`, decide which modalities you want, like text, audio, or both. For example, use `modalities=["text", "audio"]` to enable them together.
• Room Input/Output Options: You can turn text and audio inputs or outputs on or off by setting up `RoomInputOptions` and `RoomOutputOptions`. Here's a quick example:
```python
from livekit.agents import RoomInputOptions, RoomOutputOptions

# Enable text input
input_options = RoomInputOptions(
    text_enabled=True,
)

# Enable audio output with transcription
output_options = RoomOutputOptions(
    audio_enabled=True,
    transcription_enabled=True,
)
```
• Model Parameters: Feel free to customize the voice and instructions for your audio responses by adjusting the `RealtimeModel` parameters. Like this:
```python
from livekit.plugins import openai

model = openai.realtime.RealtimeModel(
    api_key="your-api-key",
    voice="alloy",  # choices include "alloy", "echo", "shimmer"
    instructions="Provide concise and clear status updates.",
    modalities=["text", "audio"],
)
```
These setups give you the flexibility to decide how your agent shares status updates—whether by text, audio, or both. If you want a hand with anything else, just give me a shout!
r
I mean, when the agent is busy and some tools are being called, and I have to use some filler text to fill that silence until the tool calls end and the agent responds back, what text can I say during that time?
t
Hey there! If you want to provide some filler text or speech while the agent is busy working with tools in LiveKit, here are a few ways to keep your users in the loop:

• Verbal Status Updates: You can use the `generate_reply` method to send little updates during longer tasks. Here's a quick example:
```python
import asyncio
from livekit.agents import function_tool, RunContext


@function_tool()
async def perform_slow_operation(
    self,
    context: RunContext,
    query: str,
) -> str:
    async def _speak_status_update(delay: float = 0.5):
        await asyncio.sleep(delay)
        await context.session.generate_reply(instructions=f"""
            Currently processing your request about "{query}".
        """)

    status_update_task = asyncio.create_task(_speak_status_update(0.5))

    try:
        # Perform the slow operation
        result = await _perform_slow_operation(query)
    finally:
        # Cancel the status update if the operation finished first
        status_update_task.cancel()

    return result
```
"Thinking" Sounds: Another neat trick is to play some background audio so users know the agent is hard at work. Check this out:
```python
from livekit import agents
from livekit.agents import (
    AgentSession,
    AudioConfig,
    BackgroundAudioPlayer,
    BuiltinAudioClip,
)


async def entrypoint(ctx: agents.JobContext):
    session = AgentSession(
        # ... initializations
    )
    await session.start(room=ctx.room)

    background_audio = BackgroundAudioPlayer(
        thinking_sound=[
            AudioConfig(BuiltinAudioClip.KEYBOARD_TYPING, volume=0.8),
            AudioConfig(BuiltinAudioClip.KEYBOARD_TYPING2, volume=0.7),
        ],
    )
    await background_audio.start(room=ctx.room, agent_session=session)
```
• Preemptive Generation: You can also have a secondary, faster model whip up quick responses while the main task is still running. Here's how:
```python
import asyncio
from livekit.agents import Agent, ChatContext, ChatMessage


class MyAgent(Agent):
    def __init__(self):
        super().__init__(instructions="Your instructions here.")
        self.fast_llm = FastLanguageModel()  # placeholder: replace with an actual fast model

    async def on_user_turn_completed(self, turn_ctx: ChatContext, new_message: ChatMessage):
        # Generate a short filler response and add it to the chat context
        filler_response = await self.generate_fast_response()
        turn_ctx.add_message(role="assistant", content=[filler_response])

    async def generate_fast_response(self) -> str:
        # Generate a short response using the fast model
        return "Let me look into that for you."
```
These little tricks help make sure your users feel heard and get quick feedback while waiting for the agent to finish its work. If you want to chat more about this or need a hand setting it up, just let me know—I’m here to help!
r
can I use Redis? For example, when a tool starts its execution, I put the tool name and conversation in Redis; then in LiveKit I somehow pull that from Redis, and if I find that a tool is running, I make the agent say something. I am not using LiveKit's function_tool, so please don't mention that; I am using LLMAdapter with LangGraph.
t
I don't have the answer you're looking for. You could also try asking your question: • in one of the other Slack channels or • to https://deepwiki.com/livekit/livekit_composite which is trained on all LiveKit source code If you find the answer, please post it here to help others!
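For reference, the Redis pattern described in the question can be sketched without `function_tool`: the LangGraph tool node writes a status key to Redis when it starts and deletes it when it finishes, while a LiveKit-side watcher polls the key and speaks a filler line as long as it exists. Everything here is an illustrative sketch, not a LiveKit or LangGraph API: the key layout, the helper names, and the assumption that a `session.say`-style speech method is available on the LiveKit side.

```python
import asyncio
import json
import time

# Hypothetical key layout: one status entry per room
STATUS_KEY = "tool_status:{room}"


def mark_tool_running(redis_client, room: str, tool_name: str) -> None:
    """Called from the LangGraph tool node when execution starts."""
    redis_client.set(
        STATUS_KEY.format(room=room),
        json.dumps({"tool": tool_name, "started": time.time()}),
        ex=120,  # expire so stale entries clean themselves up
    )


def mark_tool_done(redis_client, room: str) -> None:
    """Called from the tool node when execution finishes."""
    redis_client.delete(STATUS_KEY.format(room=room))


async def filler_watcher(redis_client, room: str, session, interval: float = 3.0):
    """LiveKit-side loop: while a tool is marked running, speak a filler line."""
    while True:
        raw = redis_client.get(STATUS_KEY.format(room=room))
        if raw is None:
            break  # tool finished (or key expired); stop filling the silence
        status = json.loads(raw)
        await session.say(f"I'm still working on {status['tool']}, one moment.")
        await asyncio.sleep(interval)
```

You would start `filler_watcher` as an `asyncio` task when the graph dispatches a tool call; any client with `set`/`get`/`delete` (such as `redis.Redis`) works as `redis_client`.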
r
can I convert ctx values to JSON and get the job id and room id, and after that recreate the session object from that information?

```python
async def entrypoint(ctx: JobContext):
    .
```
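On the identifier half of this last question: a `JobContext` exposes `ctx.job.id` and `ctx.room.name`, and those can be serialized. The live session itself (room connection, audio tracks) is not JSON-serializable, so you can store the identifiers but not literally rebuild the old session object from them. A minimal sketch, with `job_snapshot` as an illustrative helper name:

```python
import json


def job_snapshot(ctx) -> str:
    """Serialize just the identifiers from a JobContext.

    The live objects (room connection, tracks) cannot be round-tripped
    through JSON; only the ids can be stored for later use.
    """
    return json.dumps({"job_id": ctx.job.id, "room": ctx.room.name})
```

On the other side you would parse the JSON and use the room name to dispatch or connect a new job, rather than reconstructing the original session object.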