# ask-ai
m
i got this error from this code:
  File "C:\Users\YUSUF\miniconda3\envs\newagents\lib\site-packages\livekit\agents\voice\agent.py", line 408, in tts_node
    async for ev in stream:
  File "C:\Users\YUSUF\miniconda3\envs\newagents\li
  File "C:\Users\YUSUF\miniconda3\envs\newagents\lib\site-packages\opentelemetry\util\_decorator.py", line 71, in async_wrapper
    return await func(*args, **kwargs)  # type: ignore
  File "C:\Users\YUSUF\miniconda3\envs\newagents\lib\site-packages\livekit\agents\tts\tts.py", line 368, in _main_task
    raise APIError(f"no audio frames were pushed for text: {self._pushed_text}")
livekit.agents._exceptions.APIError: no audio frames were pushed for text: I shall find that for you, sir. Let me just run a quick search. (body=None, retryable=True)
Copy code
class Assistant(Agent):
    def __init__(self) -> None:
        super().__init__(
            instructions=AGENT_INSTRUCTION,
            tools=[
                get_weather,
                search_web,
                send_email
            ],
        )


def prewarm(proc: JobProcess):
    proc.userdata["vad"] = silero.VAD.load()

async def entrypoint(ctx: agents.JobContext):
    ctx.log_context_fields = {
        "room": ctx.room.name,
        "user_id": "nano_user",
    }

    await ctx.connect()
    logger.info("Connected to Nano AI room")

    # ✅ Ensure 'vad' is prewarmed
    session = AgentSession(
        vad=ctx.proc.userdata["vad"],
        llm=openai.LLM.with_together(
            model="Qwen/Qwen2.5-7B-Instruct-Turbo",
            temperature=0.3,
        ),
        stt=cartesia.STT(model="ink-whisper"),
        tts=elevenlabs.TTS(
            voice_id="ODq5zmih8GrVes37Dizd",
            model="eleven_multilingual_v2",
        ),
        # tts=groq.TTS(
        #     model="playai-tts",
        #     voice="Arista-PlayAI",
        # ),
        allow_interruptions=True,
    )

    agent = Assistant()

    try:
        await session.start(
            room=ctx.room,
            agent=agent,
            room_input_options=RoomInputOptions(text_enabled=True, video_enabled=True, audio_enabled=True),
            room_output_options=RoomOutputOptions(transcription_enabled=True, audio_enabled=True)
        )
        logger.info("Nano AI session started successfully")
    except Exception as e:
        logger.error(f"Failed to start Nano AI session: {str(e)}", exc_info=True)
        raise

    await session.say("Hi, my name is Nano, built by Yusuf. How may I help you?", allow_interruptions=True)

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint, prewarm_fnc=prewarm))
t
Thanks for reaching out, Anigilaje. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
The error "no audio frames were pushed for text" happens when the text-to-speech (TTS) service can’t generate audio frames for the text you provided. Don’t worry, here are some friendly steps to help you fix this:
• Make sure the TTS service is set up correctly and that it’s reachable.
• Double-check your input text to see that it’s valid: it shouldn’t be empty or have any weird formatting.
• Confirm that your TTS provider is properly initialized with the right credentials and settings.
• Try increasing the retry count and interval by tweaking `max_retry` and `retry_interval` in `APIConnectOptions`.
• Turn on logging for the TTS provider so you can get detailed error messages to help with debugging.
If you’re using something like FakeTTS for testing, just check that `fake_audio_duration` isn’t set to 0.0, unless you’re purposely trying to test this error. Feel free to reach out if you want me to walk you through any of these steps or if you need a hand with anything else!
Sources: livekit-plugins/livekit-plugins-neuphonic/livekit/plugins/neuphonic/tts.py
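One way to apply the second bullet (valid, non-empty input text) is a small pre-check before handing a string to the TTS stage. This is a generic sketch; `is_speakable` is a hypothetical helper, not part of the LiveKit API:

```python
import re

def is_speakable(text: str) -> bool:
    """Return True if the text is worth sending to a TTS engine:
    non-empty after stripping whitespace, and containing at least one
    letter or digit (whitespace-only or punctuation-only strings are
    exactly the kind of input that yields zero audio frames)."""
    stripped = text.strip()
    return bool(stripped) and re.search(r"[A-Za-z0-9]", stripped) is not None
```

You could guard `session.say(...)` with this check so empty or punctuation-only strings are skipped instead of triggering the `APIError`.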
m
2025-08-11 15:59:31,213 - WARNING livekit.agents - The room connection was not established within 10 seconds after calling job_entry. This may indicate that job_ctx.connect() was not called.
t
The warning "The room connection was not established within 10 seconds after calling job_entry" means that the `job_ctx.connect()` method either wasn’t called or didn’t manage to connect in time. No worries, here are some friendly steps to help you get this sorted out:
• Make sure you’re calling `job_ctx.connect()` right at the start of your job entry. This step is key to getting that room connection up and running.
• Double-check that your server has enough resources (CPU, memory, and network bandwidth) to handle everything smoothly.
• Take a look at your network and server settings to ensure things like room URLs, tokens, and credentials are all correct.
• Follow LiveKit’s deployment tips, such as running multiple server instances and setting up load balancing and auto-scaling to keep things running reliably.
• Peek at the logs for any extra error messages or warnings that might give clues about what’s going on.
If you want, I’m here to walk you through any of these steps or help with specific details. Just let me know!
Sources: How to Get Help from LiveKit | livekit-agents/livekit/agents/ipc/job_proc_lazy_main.py
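The connect-early advice can be paired with an explicit timeout guard so a slow connection fails loudly instead of just tripping the 10-second warning. This is a plain-asyncio sketch; `connect_with_timeout` is a hypothetical wrapper, not a LiveKit API:

```python
import asyncio

async def connect_with_timeout(connect_coro_factory, timeout: float = 10.0, retries: int = 2):
    """Run the coroutine returned by connect_coro_factory, failing fast if it
    exceeds `timeout` seconds, and retrying up to `retries` extra times."""
    last_exc: Exception | None = None
    for _attempt in range(retries + 1):
        try:
            return await asyncio.wait_for(connect_coro_factory(), timeout=timeout)
        except asyncio.TimeoutError as exc:
            last_exc = exc  # try again until attempts are exhausted
    raise last_exc
```

Inside the entrypoint you would then call it as `await connect_with_timeout(ctx.connect, timeout=10.0)`, since `ctx.connect` is a coroutine function.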
m
is this the best setup for connecting after job entry?
Copy code
async def entrypoint(ctx: agents.JobContext):
    ctx.log_context_fields = {
        "room": ctx.room.name,
        "user_id": "nano_user",
    }

    await ctx.connect()
    logger.info("Connected to Nano AI room")

    # Ensure 'vad' is prewarmed
    session = AgentSession(
        vad=ctx.proc.userdata["vad"],
        llm=openai.LLM.with_together(
            model="Qwen/Qwen2.5-7B-Instruct-Turbo",
            temperature=0.3,
        ),
        stt=cartesia.STT(model="ink-whisper"),
        tts=elevenlabs.TTS(
            voice_id="ODq5zmih8GrVes37Dizd",
            model="eleven_multilingual_v2",
        ),
        # tts=groq.TTS(
        #     model="playai-tts",
        #     voice="Arista-PlayAI",
        # ),
        allow_interruptions=True,
    )

    agent = Assistant()

    try:
        await session.start(
            room=ctx.room,
            agent=agent,
            room_input_options=RoomInputOptions(text_enabled=True, video_enabled=True, audio_enabled=True),
            room_output_options=RoomOutputOptions(transcription_enabled=True, audio_enabled=True),
        )
t
The setup you shared for connecting to a room after job entry looks pretty solid! Just a few friendly tips to make sure everything runs smoothly:
• Be sure to call `ctx.connect()` right after you’ve set up any event handlers for room events, like when participants join or when tracks get subscribed to.
• Use the `auto_subscribe` parameter in `ctx.connect()` to control how tracks get subscribed automatically. For example:
◦ `AutoSubscribe.SUBSCRIBE_ALL`: This will subscribe to all tracks automatically.
◦ `AutoSubscribe.AUDIO_ONLY`: This one subscribes only to audio tracks automatically.
• If you want, you can also pass `e2ee` options for end-to-end encryption or `rtc_config` for any custom RTC settings you might need.
• Don’t forget to include proper error handling and logging; it really helps when you’re debugging connection issues.
Your current setup is already on the right track! If you want me to clarify anything or help tweak it further, just let me know!
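The handlers-before-connect ordering in the first bullet can be illustrated with a plain Python event emitter. `MiniEmitter` below is a stand-in for illustration only, not the LiveKit `Room` class:

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class MiniEmitter:
    """A tiny stand-in for an event-emitting room object."""

    def __init__(self) -> None:
        self._handlers: DefaultDict[str, List[Callable]] = defaultdict(list)

    def on(self, event: str, handler: Callable) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, *args) -> None:
        for handler in self._handlers[event]:
            handler(*args)

# Register handlers first, then "connect" (here simulated by emitting the
# event). A handler registered only after the event fires would miss it,
# which is why ctx.connect() should come after the handler setup.
room = MiniEmitter()
seen: list[str] = []
room.on("participant_connected", lambda identity: seen.append(identity))
room.emit("participant_connected", "alice")  # seen == ["alice"]
```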
m
rewrite the code to follow your recommendations
t
Here is the updated code following the recommended best practices:
Copy code
async def entrypoint(ctx: agents.JobContext):
    ctx.log_context_fields = {
        "room": ctx.room.name,
        "user_id": "nano_user",
    }

    # Set up event handlers before connecting.
    # Note: room event callbacks must be synchronous; spawn a task with
    # asyncio.create_task if you need to await inside one.
    @ctx.room.on("participant_connected")
    def on_participant_connected(participant):
        logger.info(f"Participant connected: {participant.identity}")

    # Connect to the room with auto_subscribe set to AUDIO_ONLY
    await ctx.connect(auto_subscribe=agents.AutoSubscribe.AUDIO_ONLY)
    logger.info("Connected to Nano AI room")

    # Ensure 'vad' is prewarmed
    session = AgentSession(
        vad=ctx.proc.userdata["vad"],
        llm=openai.LLM.with_together(
            model="Qwen/Qwen2.5-7B-Instruct-Turbo",
            temperature=0.3
        ),
        stt=cartesia.STT(
            model="ink-whisper"
        ),
        tts=elevenlabs.TTS(
            voice_id="ODq5zmih8GrVes37Dizd",
            model="eleven_multilingual_v2"
        ),
        allow_interruptions=True
    )

    agent = Assistant()

    try:
        await session.start(
            room=ctx.room,
            agent=agent,
            room_input_options=RoomInputOptions(text_enabled=True, video_enabled=True, audio_enabled=True),
            room_output_options=RoomOutputOptions(transcription_enabled=True, audio_enabled=True)
        )
        logger.info("Nano AI session started successfully")
    except Exception as e:
        logger.error(f"Failed to start Nano AI session: {str(e)}", exc_info=True)
        raise

    await session.say("Hi, my name is Nano, built by Yusuf. How may I help you?", allow_interruptions=True)
This version makes sure event handlers are all set up before calling ctx.connect(), uses auto_subscribe=agents.AutoSubscribe.AUDIO_ONLY to keep things efficient with track subscription, and adds clear logging to help with any debugging. If you want me to tweak anything else or have questions, just let me know!