# ask-ai
User:
@millions-hair-24956 I have SIP trunking set up with Telnyx and want to actually be able to hang up an inbound call. I think I need to call Telnyx directly for this, since just removing a room or a participant from a room doesn't work either. Is there any way that I can hang up the inbound call with Telnyx?
Bot:
It seems likely that simply removing a participant or deleting a room in LiveKit does not automatically hang up the SIP call with Telnyx. According to LiveKit documentation, SIP participants can be managed using participant management commands, but hanging up a call might require additional steps. My understanding is that you might need to ensure that the SIP trunk configuration is correct and that a SIP BYE message is being sent to Telnyx. One potential issue could be the transport protocol: as noted in a Slack discussion, using `;transport=tcp` in the SIP URL can prevent the call from hanging up properly, and removing this suffix might resolve the issue. Would you like more detailed guidance on configuring your SIP trunk or handling SIP messages? Sources: LiveKit documentation, Slack discussion
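For reference, a minimal sketch of a trunk definition without the `;transport=tcp` suffix, assuming the LiveKit CLI's JSON trunk format (the name, address, and number here are placeholders, not values from this thread):

```json
{
  "trunk": {
    "name": "Telnyx trunk",
    "address": "sip.telnyx.com",
    "numbers": ["+15551230000"]
  }
}
```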
User:
@millions-hair-24956 I need to get the `call_control_id` from Telnyx so that I can call the Telnyx API to stop the call. Is there a possibility to get that variable through SIP?
Bot:
I'm not able to find any data sources to answer your question. You might want to provide details on how you're currently integrating LiveKit with Telnyx or any specific configurations you've set up. If this doesn't help, please reach out to the LiveKit support team for further assistance.
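One avenue worth exploring, not confirmed anywhere in this thread: LiveKit inbound trunks support a `headers_to_attributes` map that copies selected incoming SIP headers onto the SIP participant's attributes. If Telnyx could be configured to attach the call control ID as a custom SIP header (the header name below is hypothetical, and whether Telnyx exposes `call_control_id` this way is an open question), it would then be readable as a participant attribute:

```json
{
  "trunk": {
    "name": "Telnyx inbound",
    "numbers": ["+15551230000"],
    "headers_to_attributes": {
      "X-Call-Control-Id": "telnyx.call_control_id"
    }
  }
}
```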
User:
@millions-hair-24956 I have created the following function, which can be called by the VoicePipelineAgent:

```python
@llm.ai_callable(description="Called when it is clear that the conversation is ended by the user and the user says goodbye or something similar and the user is not asking for anything else")
async def end_conversation(
    self,
    query: Annotated[
        str,
        llm.TypeInfo(description="The user has ended the conversation or wants to end the conversation and is not asking for anything else")
    ]
):
    print(f"-----------Ending conversation {query}")
    try:
        # Get room and participant info
        room_name = self.ctx.room.name
        participant_identity = self.participant.identity
        # Remove the participant first
        try:
            await self.lkapi.room.remove_participant(api.RoomParticipantIdentity(
                room=room_name,
                identity=participant_identity
            ))
            logger.info(f"Successfully removed participant {participant_identity} from room {room_name}")
            return "Conversation ended successfully."
        except Exception as e:
            logger.error(f"Error removing participant: {e}")
            raise
    except Exception as e:
        logger.error(f"Error in end_conversation: {e}")
        return f"Failed to end conversation: {str(e)}"
```

However, it is only called when the user explicitly says "you can hang up", not when the user says goodbye or anything similar. How can I ensure that this function is called when the user says goodbye or something like that, and that the LLM calls this function?
Bot:
To ensure that your function is called when the user says "goodbye" or similar phrases, you need to enhance the intent detection capabilities of your LLM. This involves configuring the LLM to recognize a broader set of phrases that indicate the end of a conversation. Here are some steps you can take:

1. Expand the system prompt: Update the system prompt to include a wider range of phrases that should trigger the function, for example "bye", "see you", "talk to you later", etc.
2. Use function calling: According to the LiveKit documentation, you can define functions that the LLM can call based on detected intents. Ensure that your function is properly registered and that the LLM is configured to call it when the relevant intent is detected.
3. Test and iterate: Test the function with various phrases to ensure it is triggered correctly. You may need to adjust the LLM's configuration or the system prompt based on the results.

Would you like more detailed guidance on configuring the LLM or examples of how to expand the system prompt? Sources: LiveKit documentation
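The first step above can be sketched as plain prompt construction. This is only an illustration: the phrase list and wording are made up for this example, and you would adapt them to your own system prompt.

```python
# Hypothetical farewell phrases that should trigger the end_conversation tool
FAREWELL_EXAMPLES = ["goodbye", "bye", "see you", "talk to you later", "that's all, thanks"]

# Build a system prompt that explicitly enumerates the trigger phrases,
# so the LLM is nudged to call the function on any of them
SYSTEM_PROMPT = (
    "You are a helpful voice assistant. "
    "When the user signals that the conversation is over with any farewell such as "
    + ", ".join(f'"{p}"' for p in FAREWELL_EXAMPLES)
    + ", and is not asking for anything else, call the end_conversation function "
    "instead of replying with a farewell of your own."
)
```

Enumerating concrete examples in the prompt tends to make tool-call triggering less dependent on the exact wording the user happens to use.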
User:
@millions-hair-24956 I want the end_conversation function to also be called if the user has not said anything for 30 seconds. How can I do that?
Bot:
To call the `end_conversation` function if the user is silent for 30 seconds, you can implement a timeout mechanism using a timer. Here's a general approach:

1. Set up a timer: Use a timer to track the duration of user silence. If the timer reaches 30 seconds without any user input, trigger the `end_conversation` function.
2. Reset the timer on user activity: Reset the timer whenever the user starts speaking. This can be done by listening to events such as `user_started_speaking` or `user_stopped_speaking`.
3. Implement the timeout logic: If the timer expires (i.e., the user is silent for 30 seconds), call the `end_conversation` function.

Here's a basic example using Python's `asyncio`:
```python
import asyncio

TIMEOUT_SECONDS = 30
timeout_task = None

async def start_timeout():
    try:
        await asyncio.sleep(TIMEOUT_SECONDS)
        # 30 seconds elapsed without the task being cancelled: end the call
        await end_conversation("User was silent for 30 seconds.")
    except asyncio.CancelledError:
        # Task was cancelled because of new user activity
        pass

def reset_timeout():
    global timeout_task
    # Cancel the existing timeout if there is one
    if timeout_task and not timeout_task.done():
        timeout_task.cancel()
    # Start a new timeout
    timeout_task = asyncio.create_task(start_timeout())

@assistant.on("user_started_speaking")
def on_user_started_speaking(*args):
    reset_timeout()

@assistant.on("user_stopped_speaking")
def on_user_stopped_speaking(*args):
    reset_timeout()

# Call reset_timeout() once when the session starts, so the timer
# also fires if the user never speaks at all.
```
This code sets up a timeout that triggers the `end_conversation` function if the user is silent for 30 seconds; the timer is reset whenever the user starts or stops speaking. Would you like more detailed guidance on implementing this in your specific setup? Sources: Slack discussion on timeout implementation
User:
@millions-hair-24956 When the VoicePipelineAgent is unable to respond to the user's question, it should propose connecting the user to an employee by forwarding the call to a different number. What is the best way and approach to implement this?
Bot:
To forward a user to a different number when the VoicePipelineAgent cannot respond, you can use the `CreateSIPParticipant` API to make an outbound call to a human representative. Here's a concise approach:

1. Detect unanswered queries: Implement logic to detect when the agent cannot answer a user's question. This can be done by analyzing the LLM's response or using a fallback mechanism.
2. Initiate call forwarding: Use the `CreateSIPParticipant` API to forward the call to a human representative. This API allows you to make an outbound call from the LiveKit room to a specified phone number.
3. Handle the call transfer: Once the human representative joins the room, you can choose to disconnect the agent or keep it in the room, depending on your requirements.

Here's a basic example using Python:
```python
from livekit import api
from livekit.protocol import sip as proto_sip

async def forward_to_human():
    lkapi = api.LiveKitAPI()
    try:
        # Protobuf request fields must be passed as keyword arguments;
        # replace the placeholder values with your own trunk, number, and room.
        sip_participant_info = await lkapi.sip.create_sip_participant(
            proto_sip.CreateSIPParticipantRequest(
                sip_trunk_id="sip_trunk_id",
                sip_call_to="number_to_call",
                room_name="room_name",
                participant_identity="human-id",
                participant_name="human-name",
            )
        )
        return sip_participant_info
    finally:
        await lkapi.aclose()
```
Would you like more detailed guidance on implementing this in your specific setup? Sources: Slack discussion on call forwarding
User:
@millions-hair-24956 Please, especially for step 1, detecting unanswered queries: how is this best done? Give me an example.
Bot:
It seems likely that detecting unanswered queries can be achieved by analyzing the LLM's response or using a fallback mechanism. One approach is to define specific criteria or keywords that indicate the LLM is unable to provide a satisfactory answer. Here's a basic example:

1. Define unanswered criteria: Identify phrases or patterns in the LLM's response that suggest it cannot answer the query, such as "I don't know", "I'm not sure", or "Can you rephrase?".
2. Implement detection logic: Use these criteria to detect when the LLM's response is unsatisfactory. This can be done by checking the response text for these phrases.
3. Trigger call forwarding: If an unsatisfactory response is detected, initiate the call forwarding process to connect the user to a human representative.

Here's a simple example in Python:
```python
def is_unanswered(response_text):
    # Define criteria for unanswered queries (compared case-insensitively)
    unanswered_phrases = ["i don't know", "i'm not sure", "can you rephrase?"]
    text = response_text.lower()
    # Check if the response contains any of the criteria
    return any(phrase in text for phrase in unanswered_phrases)

async def handle_response(response_text):
    if is_unanswered(response_text):
        print("Unanswered query detected. Forwarding to human representative.")
        await forward_to_human()
    else:
        print("Response is satisfactory.")

# Example usage (from within an async context):
#   await handle_response("I'm not sure about that.")
```
This code checks if the LLM's response contains any predefined phrases indicating an unanswered query, and triggers the call forwarding process if necessary. Would you like more detailed guidance on implementing this in your specific setup? Sources: https://docs.livekit.io/agents/voice-agent/