tall-tailor-23658
07/12/2025, 12:48 PM
2025-07-12 14:19:48,504 - ERROR livekit - failed to emit event response_content_added
Traceback (most recent call last):
  File "D:\AIVoiveAgent\livekit-front\voice-assistant-frontend-warx7u\venv\Lib\site-packages\livekit\rtc\event_emitter.py", line 58, in emit
    callback(*callback_args)
  File "D:\AIVoiveAgent\livekit-front\voice-assistant-frontend-warx7u\venv\Lib\site-packages\livekit\agents\multimodal\multimodal_agent.py", line 277, in _on_content_added
    tr_fwd = transcription.TTSSegmentsForwarder(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AIVoiveAgent\livekit-front\voice-assistant-frontend-warx7u\venv\Lib\site-packages\livekit\agents\transcription\tts_forwarder.py", line 110, in __init__
    track = _utils.find_micro_track_id(room, identity)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AIVoiveAgent\livekit-front\voice-assistant-frontend-warx7u\venv\Lib\site-packages\livekit\agents\transcription\_utils.py", line 26, in find_micro_track_id
    raise ValueError(f"participant {identity} does not have a microphone track")
ValueError: participant agent-AJ_bCAkN5MvsBCy does not have a microphone track
and here is the agent file:
from __future__ import annotations
import logging
import os
import asyncio
from dotenv import load_dotenv
from livekit import rtc
from livekit.agents import (
    AutoSubscribe,
    JobContext,
    WorkerOptions,
    cli,
    llm,
)
from livekit.agents.multimodal import MultimodalAgent
from livekit.plugins import openai
# Import clinic data functions
from clinic_data import fetch_specialties
from actions import Appointment
load_dotenv(dotenv_path=".env.local")
logger = logging.getLogger("my-worker")
logger.setLevel(logging.INFO)
# Format clinic data for LLM context
def format_clinic_data(specialties) -> str:
    if not specialties or not specialties.specialties:
        # Arabic: "No clinic data is currently available."
        return "لا توجد بيانات متاحة للعيادات حالياً."
    clinic_info = []
    for specialty in specialties.specialties:
        doctors_info = []
        for doctor in specialty.doctors:
            # Arabic: "Day {day}: {from} - {to}"
            schedule_info = [f"يوم {s.day}: {s.from_time} - {s.to}" for s in doctor.schedule]
            doctors_info.append(
                # Arabic: "Dr. {name} (level: {level}) / schedule: ..."
                f"- الدكتور/ة {doctor.name} (المستوى: {doctor.level})\n"
                f" المواعيد: {', '.join(schedule_info)}"
            )
        clinic_info.append(
            # Arabic: "Clinic: {specialty} / Doctors: ..."
            f"العيادة: {specialty.name}\n"
            f"الأطباء:\n" + "\n".join(doctors_info)
        )
    return (
        # Arabic: "Current data for available clinics: ... When the user asks
        # about clinics or doctors, use this data."
        "البيانات الحالية للعيادات المتاحة:\n\n" +
        "\n\n".join(clinic_info) +
        "\n\nعند سؤال المستخدم عن العيادات أو الأطباء، استخدم هذه البيانات."
    )
async def entrypoint(ctx: JobContext):
    logger.info(f"connecting to room {ctx.room.name}")
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)
    participant = await ctx.wait_for_participant()
    run_multimodal_agent(ctx, participant)
    logger.info("agent started")
def run_multimodal_agent(ctx: JobContext, participant: rtc.RemoteParticipant):
    logger.info("starting multimodal agent")
    # Fetch clinic data from API
    api_url = os.getenv("CLINIC_API_URL", "https://localhost:63801/api/Raya/medical/specialties")
    specialties = fetch_specialties(api_url)
    clinic_data = format_clinic_data(specialties)
    # Create background task to refresh data periodically
    async def refresh_data():
        while True:
            await asyncio.sleep(600)  # Refresh every 10 minutes
            new_specialties = fetch_specialties(api_url)
            if new_specialties:
                nonlocal clinic_data
                clinic_data = format_clinic_data(new_specialties)
                logger.info("Clinic data refreshed")
    asyncio.create_task(refresh_data())
    fnc_ctx = Appointment()
    # Arabic system prompt: "You are an Arabic voice assistant specialized in
    # booking clinic appointments..." followed by the formatted clinic data and
    # style guidelines (friendly tone, Arabic only, short direct sentences).
    model = openai.realtime.RealtimeModel(
        instructions=(
            "أنت مساعد صوتي عربي متخصص في حجز مواعيد العيادات. "
            "يجب أن تساعد المستخدمين في:"
            "\n1. معرفة العيادات والأطباء المتاحين"
            "\n2. حجز المواعيد بناءً على التخصص والطبيب والوقت"
            "\n3. الإجابة على استفساراتهم حول العيادات"
            "\n\n" + clinic_data + "\n\n"
            "إرشادات هامة:"
            "\n- استخدم أسلوباً ودوداً وبسيطاً"
            "\n- استخدم اللغة العربية"
            "\n- استخدم جمل مباشرة و قصيرة"
            "\n- استفسر عن: الاسم، التخصص المطلوب، الطبيب المفضل، اليوم والوقت"
            "\n- قدم خيارات من البيانات عند السؤال عن العيادات"
            "\n- تجنب استخدام الإنجليزية أو علامات الترقيم غير المنطوقة"
        ),
        modalities=["audio", "text"],
    )
    # Initial chat context. Arabic greeting: "Hello! I am your voice assistant
    # for booking clinic appointments. How can I help you today?"
    chat_ctx = llm.ChatContext()
    chat_ctx.append(
        text=(
            "مرحباً! أنا مساعدك الصوتي لحجز مواعيد العيادات. "
            "كيف يمكنني مساعدتك اليوم؟"
        ),
        role="assistant",
    )
    agent = MultimodalAgent(
        model=model,
        chat_ctx=chat_ctx,
        fnc_ctx=fnc_ctx,
    )
    agent.start(ctx.room, participant)
    agent.generate_reply()

if __name__ == "__main__":
    cli.run_app(
        WorkerOptions(
            entrypoint_fnc=entrypoint,
        )
    )
tall-airline-16756
07/12/2025, 12:49 PM
The error
ValueError: participant agent-AJ_bCAkN5MvsBCy does not have a microphone track
indicates that the system attempted to access a microphone audio track for this participant, but none was found at that moment.
• Common causes include the participant not publishing an audio track (microphone off, device permissions, or hardware issues), or timing/race conditions where the track hasn't finished publishing before your code tries to access it.
• Best practice is not to assume a microphone track is immediately available. Instead, listen for the track publication event, or implement retry logic that waits until the track is published before attempting to access it.
• Example approach, registering a handler for the publication event:
@room.on("track_published")
def on_track_published(publication: rtc.RemoteTrackPublication, participant: rtc.RemoteParticipant):
    if publication.source == rtc.TrackSource.SOURCE_MICROPHONE:
        # Now it's safe to link or process the microphone track
        ...
Or use retry logic that waits for the track for up to a desired timeout (see the retry sketch below).
• Consider wrapping find_micro_track_id calls in try/except and gracefully handling the absence of a microphone track (see the second sketch below).
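A minimal retry sketch along those lines, assuming the livekit.rtc Python SDK; the helper name, polling interval, and timeout are illustrative rather than anything the SDK provides:

import asyncio
from livekit import rtc

async def wait_for_mic_track(participant: rtc.Participant, timeout: float = 10.0) -> rtc.TrackPublication:
    # Poll the participant's publications until a microphone track appears,
    # or give up once the timeout elapses.
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while loop.time() < deadline:
        for pub in participant.track_publications.values():
            if pub.source == rtc.TrackSource.SOURCE_MICROPHONE:
                return pub
        await asyncio.sleep(0.2)
    raise TimeoutError(f"no microphone track from {participant.identity} within {timeout}s")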
To resolve your issue, ensure that you only attempt actions that require a microphone track after confirming the track is present on the participant. If you need concrete code patterns for error handling or further troubleshooting steps for device permissions, let me know your context and stack.
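For the try/except pattern, a sketch of the shape it could take (note that _utils is a private module inside livekit-agents, so treat this as an illustration rather than a supported API; room and identity are assumed to be in scope at the call site):

from livekit.agents.transcription import _utils

try:
    track_sid = _utils.find_micro_track_id(room, identity)
except ValueError:
    # No microphone track yet; skip transcription forwarding for now
    # instead of letting the event handler crash.
    track_sid = None

Also worth noting: in your traceback the failing identity is the agent itself (agent-AJ_bCAkN5MvsBCy), which suggests agent.generate_reply() ran before the agent's own audio track finished publishing; inserting a short wait between agent.start(...) and agent.generate_reply() is one way to test that theory.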
Sources: livekit-agents/livekit/agents/voice/room_io/_output.py | quickstart/src/userfriendlyerror.js | agents/src/multimodal/multimodal_agent.ts | agent/main.py | Understanding "silence detected on local audio track" warning