swift-garage-98561
08/06/2025, 7:05 PMTimeoutError
File: /usr/local/lib/python3.13/site-packages/livekit/agents/ipc/supervised_proc.py, line 169
Container stats:
CPU: 132.64% (maxing out both cores)
Memory: 276MiB / 3.821GiB (7.05%)
Agent logs:
appuser@8b1bc5f9b849:~$ PYTHONPATH=/home/appuser python -u src/agent.py start
INFO:google_genai._api_client:The user provided project/location will take precedence over the Vertex AI API key from the environment variable.
INFO:livekit.agents:starting worker
{"message": "starting worker", "level": "INFO", "name": "livekit.agents", "version": "1.2.2", "rtc-version": "1.0.12", "timestamp": "2025-08-06T18:56:35.202302+00:00"}
INFO:livekit.agents:preloading plugins
{"message": "preloading plugins", "level": "INFO", "name": "livekit.agents", "packages": ["livekit.plugins.cartesia", "livekit.plugins.deepgram", "livekit.plugins.silero", "livekit.plugins.assemblyai", "livekit.plugins.openai", "livekit.plugins.groq", "livekit.plugins.elevenlabs", "livekit.plugins.aws", "livekit.plugins.turn_detector", "livekit.plugins.google", "livekit.plugins.anthropic", "av"], "timestamp": "2025-08-06T18:56:35.204163+00:00"}
INFO:livekit.agents:starting inference executor
{"message": "starting inference executor", "level": "INFO", "name": "livekit.agents", "timestamp": "2025-08-06T18:56:35.476252+00:00"}
INFO:livekit.agents:initializing process
{"message": "initializing process", "level": "INFO", "name": "livekit.agents", "pid": 86, "inference": true, "timestamp": "2025-08-06T18:57:04.267543+00:00"}
INFO:livekit.agents:killing process
{"message": "killing process", "level": "INFO", "name": "livekit.agents", "pid": 86, "inference": true, "timestamp": "2025-08-06T18:57:14.402054+00:00"}
ERROR:livekit.agents:worker failed
Traceback (most recent call last):
File "/usr/local/lib/python3.13/asyncio/tasks.py", line 507, in wait_for
return await fut
^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/livekit/agents/ipc/channel.py", line 47, in arecv_message
return _read_message(await dplx.recv_bytes(), messages)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/livekit/agents/utils/aio/duplex_unix.py", line 35, in recv_bytes
len_bytes = await self._reader.readexactly(4)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/asyncio/streams.py", line 769, in readexactly
await self._wait_for_data('readexactly')
File "/usr/local/lib/python3.13/asyncio/streams.py", line 539, in _wait_for_data
await self._waiter
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.13/site-packages/livekit/agents/cli/_run.py", line 79, in _worker_run
await worker.run()
File "/usr/local/lib/python3.13/site-packages/livekit/agents/worker.py", line 387, in run
await self._inference_executor.initialize()
File "/usr/local/lib/python3.13/site-packages/livekit/agents/ipc/supervised_proc.py", line 169, in initialize
init_res = await asyncio.wait_for(
^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
)
^
File "/usr/local/lib/python3.13/asyncio/tasks.py", line 506, in wait_for
async with timeouts.timeout(timeout):
~~~~~~~~~~~~~~~~^^^^^^^^^
File "/usr/local/lib/python3.13/asyncio/timeouts.py", line 116, in __aexit__
raise TimeoutError from exc_val
TimeoutError
{"message": "worker failed", "level": "ERROR", "name": "livekit.agents", "exc_info": "Traceback (most recent call last):\n File \"/usr/local/lib/python3.13/asyncio/tasks.py\", line 507, in wait_for\n return await fut\n ^^^^^^^^^\n File \"/usr/local/lib/python3.13/site-packages/livekit/agents/ipc/channel.py\", line 47, in arecv_message\n return _read_message(await dplx.recv_bytes(), messages)\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.13/site-packages/livekit/agents/utils/aio/duplex_unix.py\", line 35, in recv_bytes\n len_bytes = await self._reader.readexactly(4)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.13/asyncio/streams.py\", line 769, in readexactly\n await self._wait_for_data('readexactly')\n File \"/usr/local/lib/python3.13/asyncio/streams.py\", line 539, in _wait_for_data\n await self._waiter\nasyncio.exceptions.CancelledError\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.13/site-packages/livekit/agents/cli/_run.py\", line 79, in _worker_run\n await worker.run()\n File \"/usr/local/lib/python3.13/site-packages/livekit/agents/worker.py\", line 387, in run\n await self._inference_executor.initialize()\n File \"/usr/local/lib/python3.13/site-packages/livekit/agents/ipc/supervised_proc.py\", line 169, in initialize\n init_res = await asyncio.wait_for(\n ^^^^^^^^^^^^^^^^^^^^^^^\n ...<2 lines>...\n )\n ^\n File \"/usr/local/lib/python3.13/asyncio/tasks.py\", line 506, in wait_for\n async with timeouts.timeout(timeout):\n ~~~~~~~~~~~~~~~~^^^^^^^^^\n File \"/usr/local/lib/python3.13/asyncio/timeouts.py\", line 116, in __aexit__\n raise TimeoutError from exc_val\nTimeoutError", "timestamp": "2025-08-06T18:57:14.409090+00:00"}
INFO:livekit.agents:draining worker
{"message": "draining worker", "level": "INFO", "name": "livekit.agents", "id": "unregistered", "timeout": 1800, "timestamp": "2025-08-06T18:57:14.436013+00:00"}
appuser@8b1bc5f9b849:~$
What I've tried:
- All API keys are set correctly
- Network connectivity works
Questions:
1. Is 2 vCPUs enough for LiveKit agent initialization?
2. How can I increase the inference executor timeout?
3. Any LiveKit config options to reduce initialization load?
The agent initializes multiple AI services (STT, TTS, LLM) simultaneously and seems to timeout after 10 seconds. Should I upgrade to 4+ vCPUs or is there a configuration fix?
Thanks!
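A minimal sketch of one way to attack this, assuming your livekit-agents version exposes initialize_process_timeout on WorkerOptions (the ten-second gap between "initializing process" and "killing process" in the log matches its 10 s default). On a 2-vCPU host the inference executor, which loads the turn-detector model among other things, can miss that window, so raising the timeout is often cheaper than adding cores:

from livekit.agents import JobContext, WorkerOptions, cli

async def entrypoint(ctx: JobContext):
    await ctx.connect()

if __name__ == "__main__":
    cli.run_app(
        WorkerOptions(
            entrypoint_fnc=entrypoint,
            # Assumed knob: allow more time for the inference process to
            # start on a CPU-constrained host (default is 10 seconds).
            initialize_process_timeout=60.0,
        )
    )

Trimming plugins you don't actually use also helps, since every package in the "preloading plugins" log line adds import and model-load work during initialization.
wooden-beard-26644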
08/06/2025, 7:18 PMpython main.py console
? We configure a lot of behavior via call metadata and I'd like to be able to test it when running locally.
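A small local-testing sketch, not an official LiveKit feature: in console mode ctx.job.metadata is typically empty, so the entrypoint can fall back to an environment variable; LOCAL_JOB_METADATA is a hypothetical name chosen for this example:

import json
import os

from livekit.agents import JobContext

async def entrypoint(ctx: JobContext):
    # In production the metadata arrives with the dispatched job; in
    # console mode it is usually empty, so read a local JSON blob instead.
    raw = ctx.job.metadata or os.environ.get("LOCAL_JOB_METADATA", "{}")
    meta = json.loads(raw)
    await ctx.connect()
    # ...configure STT/TTS/LLM from `meta` as you would in production

acceptable-motorcycle-5430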
08/06/2025, 10:50 PMrapid-salesclerk-34950
08/07/2025, 12:24 AMrough-pizza-5956
08/07/2025, 3:38 AMbumpy-student-61140
08/07/2025, 7:43 AMflaky-beard-91685
08/07/2025, 7:50 AMrhythmic-plumber-379
08/07/2025, 8:44 AMambitious-ram-96835
08/07/2025, 11:49 AMmetadata
to AccessToken
the way to go?
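A hedged sketch of the AccessToken route, assuming the Python livekit-api package: metadata set on the token becomes participant metadata, which the agent can read from participant.metadata once the caller joins. Key, secret, room, and payload below are placeholders:

from livekit import api

token = (
    api.AccessToken("LK_API_KEY", "LK_API_SECRET")
    .with_identity("caller-123")
    .with_grants(api.VideoGrants(room_join=True, room="my-room"))
    .with_metadata('{"language": "en", "campaign": "demo"}')
    .to_jwt()
)
print(token)

For agent-level configuration, job metadata passed at dispatch time (what the outbound-caller example further down parses as ctx.job.metadata) is the other common route.
polite-oil-10264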
08/07/2025, 2:24 PMpurple-rainbow-1246
08/07/2025, 2:43 PMmin_endpointing_delay
to 1.0, and made some changes to the silero.VAD.load
options like min_speech_duration
and min_silence_duration
with no luck.
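For reference, a hedged sketch of those knobs together, assuming livekit-agents 1.x and the livekit-plugins-silero option names; the endpointing delay sits on top of VAD, so it usually shapes turn-taking more than the VAD options alone:

from livekit.agents import AgentSession
from livekit.plugins import silero

vad = silero.VAD.load(
    min_speech_duration=0.10,   # ignore very short audio blips
    min_silence_duration=0.80,  # wait longer before closing a speech segment
    activation_threshold=0.6,   # be less sensitive to background noise
)

session = AgentSession(
    vad=vad,
    # Extra wait after a candidate end of turn before the agent replies.
    min_endpointing_delay=1.0,
)

rhythmic-plumber-379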
08/07/2025, 4:09 PMancient-judge-59849
08/07/2025, 4:23 PMdelightful-mechanic-38378
08/07/2025, 4:26 PMnarrow-chef-82684
08/07/2025, 5:09 PMmany-hair-70963
08/07/2025, 5:26 PMcreamy-judge-56458
08/07/2025, 6:38 PM2025-08-07T18:36:47.3107265Z stdout F {"message": "worker failed", "level": "ERROR", "name": "livekit.agents", "exc_info": "Traceback (most recent call last):\n File \"/usr/local/lib/python3.11/asyncio/tasks.py\", line 500, in wait_for\n return fut.result()\n ^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.11/site-packages/livekit/agents/ipc/channel.py\", line 47, in arecv_message\n return _read_message(await dplx.recv_bytes(), messages)\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.11/site-packages/livekit/agents/utils/aio/duplex_unix.py\", line 35, in recv_bytes\n len_bytes = await self._reader.readexactly(4)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/asyncio/streams.py\", line 750, in readexactly\n await self._wait_for_data('readexactly')\n File \"/usr/local/lib/python3.11/asyncio/streams.py\", line 543, in _wait_for_data\n await self._waiter\nasyncio.exceptions.CancelledError\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/opt/venv/lib/python3.11/site-packages/livekit/agents/cli/_run.py\", line 79, in _worker_run\n await worker.run()\n File \"/opt/venv/lib/python3.11/site-packages/livekit/agents/worker.py\", line 387, in run\n await self._inference_executor.initialize()\n File \"/opt/venv/lib/python3.11/site-packages/livekit/agents/ipc/supervised_proc.py\", line 169, in initialize\n init_res = await asyncio.wait_for(\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/asyncio/tasks.py\", line 502, in wait_for\n raise exceptions.TimeoutError() from exc\nTimeoutError", "timestamp": "2025-08-07T18:36:47.292634+00:00"}
2025-08-07T18:36:47.311745748Z stdout F {"message": "draining worker", "level": "INFO", "name": "livekit.agents", "id": "unregistered", "timeout": 1800, "timestamp": "2025-08-07T18:36:47.309834+00:00"}
2025-08-07T18:56:21.689725611Z stdout F {"message": "Exception in callback Future.set_result(None)\nhandle: <Handle Future.set_result(None)>", "level": "ERROR", "name": "asyncio", "exc_info": "Traceba.....
ambitious-ram-96835
08/07/2025, 6:51 PM36.58 E: Failed to fetch http://deb.debian.org/debian/pool/main/p/python3.11/libpython3.11-stdlib_3.11.2-6%2bdeb12u6_amd64.deb Hash Sum mismatch
36.58 Hashes of expected file:
36.58 - SHA256:409f354d3d5d5b605a5d2d359936e6c2262b6c8f2bb120ec530bc69cb318fac4
36.58 - MD5Sum:a45c8d12a11e8ca44e191331917d6c37 [weak]
36.58 - Filesize:1798500 [weak]
36.58 Hashes of received file:
36.58 - SHA256:a1ddb99826b09a928d6556bbf35f059c8ce643057d28ed4c5b46448a5733edd6
36.58 - MD5Sum:c7fe270c74141b29950a0eadde47272a [weak]
36.58 - Filesize:1798500 [weak]
36.58 Last modification reported: Sat, 03 May 2025 18:08:56 +0000
36.58 E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
cuddly-cartoon-47334
08/07/2025, 7:35 PMawait ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)
participant = await ctx.wait_for_participant()
lkapi = LiveKitAPI()
?
Also, how do we add shutdown callbacks in agents 1.x? It seems the 0.x way does not work.
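A minimal sketch, assuming agents 1.x still exposes JobContext.add_shutdown_callback; note also that ctx.api already gives you a server API client inside a job, so constructing LiveKitAPI() by hand is usually unnecessary:

from livekit.agents import JobContext, WorkerOptions, cli

async def entrypoint(ctx: JobContext):
    async def on_shutdown():
        # Flush transcripts, close connections, etc.
        print("job shutting down")

    ctx.add_shutdown_callback(on_shutdown)
    await ctx.connect()

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))

flaky-rain-27278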
08/07/2025, 8:53 PMbetter-horse-7195
08/07/2025, 9:08 PMbetter-horse-7195
08/07/2025, 9:08 PMfrom __future__ import annotations
import asyncio
import json
import logging
import os
from typing import Any, Set
from dotenv import load_dotenv
from livekit import api, rtc
from livekit.agents import (
Agent, AgentSession, JobContext, JobProcess, RunContext,
cli, WorkerOptions, RoomInputOptions, function_tool,
BackgroundAudioPlayer, AudioConfig, BuiltinAudioClip, get_job_context,
)
from livekit.plugins import deepgram, openai, cartesia, silero, noise_cancellation
from livekit.plugins.noise_cancellation import BVCTelephony
from livekit.plugins.turn_detector.multilingual import MultilingualModel
from metadata import JobMetadata # your pydantic model
load_dotenv(".env.local")
logger = logging.getLogger("outbound-caller")
logger.setLevel(logging.INFO)
PLEASANTRIES: Set[str] = {"hi", "hello", "hey", "yes"}
# ───────────────────────────── Agent ──────────────────────────────── #
class OutboundCaller(Agent):
"""Minimal agent β greeting & hang-up handled in entrypoint."""
def __init__(self, *, instructions: str):
super().__init__(instructions=instructions)
self.participant: rtc.RemoteParticipant | None = None
def set_participant(self, participant: rtc.RemoteParticipant):
self.participant = participant
# ------ LLM-visible tools ----------------------------------------
@function_tool()
async def end_call(self, ctx: RunContext):
"""Hang up when user or LLM decides the call is over."""
await get_job_context().api.room.delete_room(
api.DeleteRoomRequest(
room=get_job_context().room.name,
)
)
@function_tool()
async def detected_answering_machine(self, ctx: RunContext):
"""Hang up if voicemail is detected."""
logger.info("AMD Detected")
await get_job_context().api.room.delete_room(
api.DeleteRoomRequest(
room=get_job_context().room.name,
)
)
# ────────────────────────── Pre-warm VAD ────────────────────────── #
def prewarm(proc: JobProcess):
proc.userdata["vad"] = silero.VAD.load()
# ─────────────────────────── Entry point ──────────────────────────── #
async def entrypoint(ctx: JobContext):
session: AgentSession | None = None
try:
# 0 Parse metadata & inject API keys
meta = JobMetadata(**json.loads(ctx.job.metadata))
os.environ.update(
DEEPGRAM_API_KEY=meta.deepgram_api_key,
CARTESIA_API_KEY=meta.cartesia_api_key,
OPENAI_API_KEY=meta.openai_api_key,
)
# 1 Build agent & session
agent = OutboundCaller(instructions=meta.instructions)
session = AgentSession(
vad=ctx.proc.userdata["vad"],
llm=openai.LLM(model="gpt-4o-mini"),
stt=deepgram.STT(model="nova-3", interim_results=True, language="multi"),
tts=cartesia.TTS(model="sonic-2", voice="694f9389-aac1-45b6-b726-9d9369183238"),
turn_detection="stt",
preemptive_generation=True,
allow_interruptions=True,
)
# Pleasantry filter & first-turn latch
first_turn = asyncio.Event()
@session.on("user_input_transcribed")
def _filter(ev):
if ev.is_final:
if ev.transcript.strip().lower() in PLEASANTRIES:
ev.add_to_chat_ctx = False
first_turn.set()
# Background ambience
background_audio = BackgroundAudioPlayer(
ambient_sound=AudioConfig(BuiltinAudioClip.OFFICE_AMBIENCE, volume=0.8),
thinking_sound=[
AudioConfig(BuiltinAudioClip.KEYBOARD_TYPING, volume=0.8),
AudioConfig(BuiltinAudioClip.KEYBOARD_TYPING2, volume=0.7),
],
)
# 2 Start session as background task & dial
session_started = asyncio.create_task(
session.start(
agent=agent,
room=ctx.room,
room_input_options=RoomInputOptions(
noise_cancellation=noise_cancellation.BVCTelephony()
),
)
)
await ctx.api.sip.create_sip_participant(
api.CreateSIPParticipantRequest(
room_name=ctx.room.name,
sip_trunk_id=meta.sip_outbound_trunk_id,
sip_call_to=meta.phone_number,
participant_identity=meta.phone_number,
wait_until_answered=True,
)
)
# 3 Wait for session start and participant join
await session_started
participant = await ctx.wait_for_participant(identity=meta.phone_number)
agent.set_participant(participant)
# Start background audio after session is fully established
await background_audio.start(room=ctx.room, agent_session=session)
try:
await asyncio.wait_for(first_turn.wait(), timeout=1.5)
except asyncio.TimeoutError:
pass # silent pick-up
# 4 Deterministic greeting (valid SSML)
greeting = (
meta.greeting or "Hello, this is Sara from ABC Finance."
)
await session.say(greeting, allow_interruptions=True, add_to_chat_ctx=False)
# Session will continue running naturally - no session.run() needed
except Exception as exc:
logger.exception(f"Outbound-caller fatal error: {exc}")
# best-effort room cleanup
try:
await ctx.api.room.delete_room(api.DeleteRoomRequest(room=ctx.room.name))
except Exception:
pass
finally:
# Dump conversation history (works on all SDK versions)
if session and getattr(session, "history", None):
h = session.history
try:
out = json.dumps(h.to_dict(), indent=2)  # >= 1.0.2
except AttributeError:
out = getattr(h, "to_json", lambda **_: str(h))(indent=2)
print("\n--- Call Transcript ---")
print(out)
print("--- End Transcript ---\n")
# ─────────────────────────── CLI runner ──────────────────────────── #
if __name__ == "__main__":
cli.run_app(
WorkerOptions(
entrypoint_fnc=entrypoint,
agent_name="outbound-caller",
prewarm_fnc=prewarm, # drop if cold-start latency isn't a concern
)
)
dry-france-22717
08/07/2025, 10:16 PMacceptable-psychiatrist-80817
08/07/2025, 11:24 PMcareful-analyst-10302
08/08/2025, 4:21 AMrefined-toddler-89382
08/08/2025, 7:45 AMpublic async createAndConnectRoom(userName?: string, version?: string) {
try {
const queryOptions: { ROOM_NAME: string, USER_NAME?: string, VERSION?: string } = {
ROOM_NAME: this.roomName
}
// Only add 'name' if userName is not null or empty
if (userName && userName.trim() !== '') {
queryOptions.USER_NAME = userName
}
queryOptions.VERSION = version ?? 'prod'
const query = new URLSearchParams(queryOptions)
console.log('Trying to retrieve LiveKit Token...')
const response = await fetch(`/api/liveKitToken?${query}`)
if (!response.ok) {
throw new Error(`Failed to fetch token: ${response.statusText}`)
}
this.liveKitToken = await response.text()
}
catch (error) {
this.logAndSend(`Error retrieving LiveKit token: ${error}`)
}
if (this.liveKitToken === undefined) {
this.logAndSend('Failed to retrieve LiveKit token!')
return
}
console.log('Retrieved LiveKit Token.')
try {
// Register VoicePipeline Agent readiness
this.registerAgentReady()
let url = this.productionUrl
switch (version) {
case 'test':
url = this.testUrl
break
case 'dev':
url = this.devUrl
break
default:
url = this.productionUrl
break
}
console.log(`Connecting to LiveKit room at ${url} with environment: ${version ?? 'prod'}`)
await this.room.connect(url, this.liveKitToken)
// Set up event listeners after connecting
this.setupParticipantEventListeners()
// Handle already connected participants (e.g., TTS agent)
console.log(this.room.remoteParticipants.size + ' remote participants already connected')
this.room.remoteParticipants.forEach((participant) => {
this.setupVoicePipelineAgent(participant)
})
}
catch (error) {
this.logAndSend(`Error connecting to LiveKit room: ${error}`)
}
}
1a.:
private setupParticipantEventListeners() {
// Listener for transcriptions received
this.room.registerTextStreamHandler('lk.transcription', async (reader, participantInfo) => {
const info = reader.info
if (info.attributes !== undefined) {
const transcriptionId = info.attributes['lk.transcribed_track_id']
const transcriptionFinal = info.attributes['lk.transcription_final'] === 'true'
const participantIdentity = participantInfo.identity
// Option 1: Process the stream incrementally using a for-await loop.
for await (const chunk of reader) {
// Process only if the transcription is from your own participant
if (participantIdentity === this.room.localParticipant.identity) {
// STT
this.onOwnTranscriptionReceivedListeners.forEach(listener =>
listener(chunk, transcriptionFinal, transcriptionId)
)
}
else {
// TTS
this.onForeignTranscriptionReceivedListeners.forEach(listener =>
listener(chunk, transcriptionFinal, transcriptionId)
)
}
}
// TTS finished
if (participantInfo.identity !== this.room.localParticipant.identity) {
this.onForeignTranscriptionReceivedListeners.forEach(listener =>
listener(undefined, true, transcriptionId)
)
}
}
})
}
1b.:
private setupVoicePipelineAgent(participant: RemoteParticipant) {
console.log('Setting up VoicePipeLine-Agent:', participant.identity)
// Listener for track subscriptions
participant.on(
ParticipantEvent.TrackSubscribed,
(track/* , publication */) => {
if (track.kind === Track.Kind.Audio) {
this.handleTTSAudioTrack(track as RemoteAudioTrack)
}
}
)
// Listener for track unsubscriptions
participant.on(
ParticipantEvent.TrackUnsubscribed,
(track/* , publication */) => {
if (track.kind === Track.Kind.Audio) {
this.cleanupTTSAudioTrack(track as RemoteAudioTrack)
}
}
)
// Subscribe to existing audio tracks
participant.trackPublications.forEach((publication: TrackPublication) => {
if (
publication.track
&& publication.track.kind === Track.Kind.Audio
&& publication.isSubscribed
) {
this.handleTTSAudioTrack(publication.track as RemoteAudioTrack)
}
})
}
2.:
public async publishMicrophoneTrack() {
if (this.room.state !== ConnectionState.Connected) {
this.logAndSend('Not connected to room yet!')
return
}
try {
// This will prompt the user for microphone permissions
this.microphoneTrack = await createLocalAudioTrack()
// Publish the track if permission is granted
const audioTrack = await this.room.localParticipant.publishTrack(this.microphoneTrack, { name: 'microphone' })
this.publishedAudioTracks.set('microphone', audioTrack)
console.log('Microphone access granted and track published.')
}
catch (error) {
this.logAndSend(`Microphone access denied: ${error}`)
}
}
3.:
public async toggleMicrophoneEnabled(enabled: boolean) {
if (this.microphoneTrack !== undefined) {
await this.room.localParticipant.setMicrophoneEnabled(enabled)
}
else {
this.logAndSend('No microphone track to toggle!')
}
}
-> Of course I await all functions.
I even added this when the user clicks the play button:
await this.room.startAudio()
But it did not help either.
It only plays the sound if the user grants the microphone permission very quickly, or if they reload the page after the permission has already been granted.magnificent-dusk-62723
08/08/2025, 11:04 AMcalm-article-62769
08/08/2025, 12:06 PMimport asyncio
import base64
import logging
from dotenv import load_dotenv
from livekit.agents import (
Agent,
AgentSession,
JobContext,
RoomInputOptions,
WorkerOptions,
cli,
get_job_context,
)
from livekit.agents.llm import ImageContent
from livekit.plugins import openai, silero
# Load environment variables from .env
load_dotenv()
logger = logging.getLogger("vision-assistant")
class VisionAssistant(Agent):
def __init__(self) -> None:
self._tasks = []
super().__init__(
instructions=""" You are a helpful voice assistant Tom.""",
llm=openai.LLM(model="gpt-4o-mini"),
stt=openai.STT(model="whisper-1"),
tts=openai.TTS(model="tts-1", voice="nova"),
vad=silero.VAD.load(),
)
async def on_enter(self):
def _image_received_handler(reader, participant_identity):
task = asyncio.create_task(
self._image_received(reader, participant_identity)
)
self._tasks.append(task)
task.add_done_callback(lambda t: self._tasks.remove(t))
get_job_context().room.register_byte_stream_handler("test", _image_received_handler)
self.session.generate_reply(
instructions="Briefly greet the user and offer your assistance."
)
async def _image_received(self, reader, participant_identity):
logger.info("Received image from %s: '%s'", participant_identity, reader.info.name)
try:
image_bytes = bytes()
async for chunk in reader:
image_bytes += chunk
chat_ctx = self.chat_ctx.copy()
chat_ctx.add_message(
role="user",
content=[
ImageContent(
image=f"data:image/png;base64,{base64.b64encode(image_bytes).decode('utf-8')}"
)
],
)
await self.update_chat_ctx(chat_ctx)
print("Image received", self.chat_ctx.copy().to_dict(exclude_image=False))
except Exception as e:
logger.error("Error processing image: %s", e)
async def entrypoint(ctx: JobContext):
await ctx.connect()
session = AgentSession()
await session.start(
agent=VisionAssistant(),
room=ctx.room,
room_input_options=RoomInputOptions(
video_enabled=True
),
)
if __name__ == "__main__":
cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
For the UI, I'm using the agent-starter-react
package from here.
My agent is joining the room and can communicate with me properly, but whenever I share my screen and ask the bot if it's visible, it says it can't see my screen or anything I'm showing. It keeps replying with something like, "I can't see anything."
So, is there any issue with my agent.py
file, or could it be something else?
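One hedged possibility: RoomInputOptions(video_enabled=True) may be forwarding the camera-style track rather than the screen share, in which case the agent has to watch the screen-share track itself and attach its latest frame to the chat context before replying. A sketch of that idea, assuming ImageContent accepts an rtc.VideoFrame in your agents version:

import asyncio
from typing import Optional

from livekit import rtc
from livekit.agents.llm import ImageContent

latest_screen_frame: Optional[rtc.VideoFrame] = None

async def _watch_screen_share(track: rtc.Track) -> None:
    # Keep only the most recent frame of the shared screen.
    global latest_screen_frame
    async for event in rtc.VideoStream(track):
        latest_screen_frame = event.frame

def on_track_subscribed(track, publication, participant):
    # React to screen-share video only, not the camera.
    if publication.source == rtc.TrackSource.SOURCE_SCREENSHARE:
        asyncio.create_task(_watch_screen_share(track))

# Inside the agent: register the handler on the room, e.g.
#   get_job_context().room.on("track_subscribed", on_track_subscribed)
# and, before generating a reply, add ImageContent(image=latest_screen_frame)
# to the chat context the same way the byte-stream handler above does.

gorgeous-gpu-30432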
08/08/2025, 12:47 PMmysterious-van-40803
08/08/2025, 1:25 PM