# ask-ai
**User:**
I am trying to create a conversational agent with both text and voice modes. Voice mode works fine, but text-only mode does not work properly: the incoming text is received and processed, but the output never arrives on the front end. I am using `audio_enabled=False` in `RoomOutputOptions`.

```python
async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()
    participant = await ctx.wait_for_participant()
    user_id = participant.identity
    agent_name, first_name, agent_personality = "Spark", "John", "funny"
    session = AgentSession[MyUserInfo](
        userdata=personal,
        stt=deepgram.STT(model="nova-3", language="en-US"),
        llm=openai.LLM(model=os.environ["ONBOARDING_MODEL"]),
        tts=deepgram.TTS(model="aura-asteria-en"),
        vad=silero.VAD.load(),
        turn_detection=MultilingualModel(),
    )
    await session.start(
        room=ctx.room,
        agent=Assistant(),
        room_input_options=RoomInputOptions(
            noise_cancellation=noise_cancellation.BVC(),
            text_enabled=True,
        ),
        room_output_options=RoomOutputOptions(
            audio_enabled=True,
            # transcription_enabled=True
        ),
    )
```
**AI Assistant:**
Thanks for reaching out, Muhammad. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Thank you for providing the details of your implementation. Based on the information available, when you set `audio_enabled=False` in `RoomOutputOptions`, the agent should publish text responses to the `lk.transcription` text stream topic without audio output. This configuration is typically used when audio is forwarded elsewhere or not required. To troubleshoot the issue where text responses are not received on the front end, please verify the following:

- Ensure that the `text_enabled` option in `RoomInputOptions` is set to `True`, as you have done.
- Check whether the text responses are being published to the `lk.transcription` text stream topic as expected.
- Confirm that your front end is correctly subscribed to the `lk.transcription` topic to receive the text responses.
- Ensure there are no errors or exceptions in the backend logs that might indicate issues with text stream publishing.

If the issue persists, please let me know, and I can assist further or escalate the matter to a specialist.

Sources: Text and transcriptions | LiveKit Docs
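*(Editor's note: as an aside on subscribing to the topic from a web front end, the sketch below shows the general shape. `registerTextStreamHandler` is the text-stream API documented for the LiveKit JS client, but treat the exact signature as something to verify against your SDK version; the chunk-accumulation helper is a hypothetical addition, not part of the SDK.)*

```javascript
// Hypothetical helper: accumulate streamed text chunks per stream id, so
// partial transcription updates for one utterance merge into one string.
function makeTranscriptStore() {
  const segments = new Map();
  return {
    append(id, chunk) {
      const text = (segments.get(id) ?? '') + chunk;
      segments.set(id, text);
      return text;
    },
    get(id) {
      return segments.get(id) ?? '';
    },
  };
}

// Illustrative SDK usage on a connected Room (not executed here; verify the
// handler signature against your livekit-client version):
//
// const store = makeTranscriptStore();
// room.registerTextStreamHandler('lk.transcription', async (reader, info) => {
//   for await (const chunk of reader) {
//     console.log('from', info.identity, store.append(reader.info.id, chunk));
//   }
// });
```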
**User:**
How can I check if the message was successfully sent?
**AI Assistant:**
Unfortunately, the available documentation does not provide specific details on how to verify that a message was successfully sent to the `lk.transcription` topic. However, here are some general steps you can take to troubleshoot and confirm message delivery:

- Check the backend logs for any errors or confirmation messages related to publishing text responses to the `lk.transcription` topic.
- Ensure that your front end is correctly subscribed to the `lk.transcription` topic and is capable of receiving and displaying the messages.
- Use debugging tools or logging within your application to confirm that the text responses are being generated and sent to the topic as expected.

If you need further assistance or if the issue persists, please let me know, and I can escalate this to a specialist for a more in-depth investigation.
**User:**
How can I check all logs? I am unable to see the text logs.
**AI Assistant:**
To troubleshoot and check all logs, including text logs, you can use the logging capabilities provided by the LiveKit SDK. Here's how you can capture and analyze logs:

1. Use the `setLogExtension` function in the LiveKit JavaScript SDK to hook into the SDK's logging system. This allows you to capture all log messages for troubleshooting. For example:

```javascript
setLogExtension((level, msg, context) => {
  // Capture all logs for troubleshooting
  console.log(level, msg, context);

  // Optionally send logs to a logging service
  // yourLoggingService.send(level, msg, context);
});
```

2. The `setLogExtension` callback receives three parameters:
   - `level`: the log level (e.g. debug, info, warning, error).
   - `msg`: the log message text.
   - `context`: an object containing additional context information.

3. For room-specific logs, you can access room information in the context object once a connection is established:

```javascript
setLogExtension((lvl, logmsg, ctx) => {
  if ('room' in ctx) {
    // Room-specific context is available here
    console.log(`Room ${ctx.room}: ${logmsg}`, ctx);
  }
});
```

4. Enhance logs with additional information, such as timestamps, if needed:

```javascript
setLogExtension((level, msg, context) => {
  const enhancedContext = { ...context, timeStamp: Date.now() };
  if (level >= LogLevel.debug) {
    console.log(level, msg, enhancedContext);
  }
});
```

By implementing this, you should be able to capture and analyze all logs, including text logs, for troubleshooting purposes. If you need further assistance, feel free to ask!

Sources: How to Send Logs from the JavaScript SDK to Providers like Datadog
**User:**
How should the front end listen for messages sent by a text-only backend agent, in React Native?
**AI Assistant:**
I'm sorry, but I don't have the information you need at the moment. If you would like a member of the LiveKit team to respond, please ask your question in one of the other channels.
**User:**
Can you show integration code for React Native?
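*(Editor's note: the transcript ends without an answer. For readers with the same question, here is a rough sketch of the React Native side. It assumes the `@livekit/react-native` package and that the `registerTextStreamHandler` API from `livekit-client` is available on the connected `Room`; all SDK names in the commented section are assumptions to verify against the official React Native quickstart. Only the pure `applyChunk` reducer is concrete.)*

```javascript
// Pure helper: fold incoming transcription chunks into a map of messages,
// keyed by stream id so partial chunks of one utterance merge together.
function applyChunk(messages, streamId, chunk) {
  const next = { ...messages };
  next[streamId] = (next[streamId] ?? '') + chunk;
  return next;
}

// Illustrative React Native usage (not executed here; hook and handler
// names are assumptions based on the LiveKit JS SDK, which
// @livekit/react-native wraps):
//
// import { useEffect, useState } from 'react';
// import { useRoomContext } from '@livekit/react-native';
//
// function Transcript() {
//   const room = useRoomContext();
//   const [messages, setMessages] = useState({});
//   useEffect(() => {
//     room.registerTextStreamHandler('lk.transcription', async (reader) => {
//       for await (const chunk of reader) {
//         setMessages((m) => applyChunk(m, reader.info.id, chunk));
//       }
//     });
//   }, [room]);
//   // ...render Object.values(messages)...
// }
```

Keeping the accumulation logic in a pure function like `applyChunk` makes the streaming behavior testable without a live room connection.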