# ask-ai
m
response = await self.job_context.api.sip.create_sip_participant(
    api.CreateSIPParticipantRequest(
        sip_trunk_id=sip_trunk_id,
        participant_identity=identity,
        participant_name="Human Agent",
        sip_call_to=transfer_to,
        room_name=room_name,
        play_dialtone=True,
        wait_until_answered=True,
    )
)
The participant added after the transfer does not trigger this handler:

@session.on("user_input_transcribed")
def on_transcript(event: UserInputTranscribedEvent):
    if event.is_final:
        agent_instance = getattr(session, "agent_instance", None)
        is_transfer_done = agent_instance.transfer_done if agent_instance is not None else False
        label = "warm" if is_transfer_done else "non-transfer"
        # Build the log line
        log_line = f"[{label}] {event.transcript}\n"
        write_to_file(log_line)
t
Thanks for reaching out, Usama. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Can you provide more details about your setup? Specifically, are there any errors or logs generated during the SIP transfer process, and is the SIP participant successfully added to the room? This will help in diagnosing the issue.
m
I am writing the transcript to a txt file whenever user_input_transcribed is triggered, but when a second participant is added, that event does not trigger for them.
t
To handle the user_input_transcribed event for multiple participants, it's a good idea to give each participant their own session to keep things organized. Here's a rundown of how you can tackle this:
• Create individual sessions: set up a separate session for each participant so their events don't get mixed up.
• Attach event handlers: for every session, add a user_input_transcribed handler to keep track of what's being transcribed.
• Enable speaker diarization: if your STT plugin offers it, turn on speaker diarization to easily tell who's who.
• Dynamic session management: create sessions when participants join and close them when they leave to keep everything neat and tidy.
This way, each transcription event is linked to the right person. If you want more details or some code examples, just let me know!
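The per-participant handler idea above can be sketched in plain Python before wiring in any SDK. Here, `EventHub` is a hypothetical stand-in for a session's `.on()`/emit mechanism, just to show how each participant gets their own handler and label:

```python
class EventHub:
    """Hypothetical stand-in for a session's event-emitter API."""
    def __init__(self):
        self._handlers = {}

    def on(self, event: str):
        # Decorator-style registration, mirroring session.on("...")
        def register(fn):
            self._handlers.setdefault(event, []).append(fn)
            return fn
        return register

    def emit(self, event: str, payload):
        for fn in self._handlers.get(event, []):
            fn(payload)

lines = []
hubs = {}
for identity in ("caller", "human-agent"):
    hub = EventHub()

    @hub.on("user_input_transcribed")
    def on_transcript(text, identity=identity):  # default arg binds identity per loop
        lines.append(f"[{identity}] {text}")

    hubs[identity] = hub

hubs["caller"].emit("user_input_transcribed", "hello")
hubs["human-agent"].emit("user_input_transcribed", "hi there")
# lines now holds one labeled entry per participant
```

The default-argument trick on `on_transcript` matters: without it, every closure created in the loop would see the final value of `identity`.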
m
how to set up seperate session for them?
t
To set up separate sessions for each participant, here's a step-by-step guide:
1. Initialize a session for each participant: when someone joins, create a new session just for them, keyed by their identity.
sessions: dict[str, AgentSession] = {}

async def on_participant_connected(participant):
    if participant.identity in sessions:
        return  # session already exists

    session = await create_transcription_session(ctx, participant)
    sessions[participant.identity] = session
2. Create a transcription session: define a function that sets up and configures the session, attaching the user_input_transcribed event handler to handle the transcriptions.
async def create_transcription_session(ctx, participant):
    session = AgentSession()

    @session.on("user_input_transcribed")
    def on_transcription(event):
        if event.is_final:
            print(f"[{participant.identity}] {event.transcript}")

    # Configure and start the session, scoped to this participant
    # (the exact option for targeting a participant may vary by SDK version)
    await session.start(
        agent=Agent(instructions="You transcribe this participant."),
        room=ctx.room,
        room_input_options=RoomInputOptions(participant_identity=participant.identity),
    )
    return session
3. Manage Sessions Dynamically: Add event handlers to create sessions when participants join and to close them when they leave.
import asyncio

async def on_participant_disconnected(participant):
    session = sessions.pop(participant.identity, None)
    if session:
        await session.aclose()

# Room event callbacks must be synchronous, so schedule the async work
ctx.room.on(
    "participant_connected",
    lambda p: asyncio.create_task(on_participant_connected(p)),
)
ctx.room.on(
    "participant_disconnected",
    lambda p: asyncio.create_task(on_participant_disconnected(p)),
)
This way, each participant gets their own session to handle transcription smoothly. If anything’s unclear or you want to dive deeper, just let me know—I’m here to help!
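The dynamic session management in step 3 boils down to a registry keyed by participant identity. Here is a framework-agnostic sketch of that lifecycle; `TranscriptionSession` and `SessionRegistry` are hypothetical stand-ins for `AgentSession` and the `sessions` dict above:

```python
import asyncio

class TranscriptionSession:
    """Stand-in for AgentSession: holds one participant's state."""
    def __init__(self, identity: str):
        self.identity = identity
        self.closed = False

    async def aclose(self):
        self.closed = True

class SessionRegistry:
    """Create one session per participant identity; close it on disconnect."""
    def __init__(self):
        self.sessions: dict[str, TranscriptionSession] = {}

    async def on_connected(self, identity: str) -> TranscriptionSession:
        if identity in self.sessions:
            return self.sessions[identity]  # already tracked, don't duplicate
        session = TranscriptionSession(identity)
        self.sessions[identity] = session
        return session

    async def on_disconnected(self, identity: str):
        # pop() makes cleanup idempotent if the event fires twice
        session = self.sessions.pop(identity, None)
        if session:
            await session.aclose()

async def demo():
    reg = SessionRegistry()
    await reg.on_connected("caller")
    await reg.on_connected("human-agent")  # e.g. the SIP transfer target
    assert len(reg.sessions) == 2
    await reg.on_disconnected("caller")
    assert "caller" not in reg.sessions

asyncio.run(demo())
```

The idempotent connect/disconnect handling matters in practice: a warm-transfer participant can reconnect, and disconnect events can race with room shutdown.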
m
how do I add a participant to the current session?
t
I don't have the answer you're looking for. You could also try asking your question:
• in one of the other Slack channels, or
• at https://deepwiki.com/livekit/livekit_composite, which is trained on all LiveKit source code.
If you find the answer, please post it here to help others!