# ask-ai
i
Can I run the AI agent built into LiveKit within a FastAPI process? I found that we should run the AI agent in a separate process. I already built a chatbot with RAG, but I hope to use LiveKit for the voice part. Is it possible?
g
Running an AI agent within a FastAPI process is not recommended. According to LiveKit documentation, agents are designed to be stateful and long-running, which contrasts with the short-lived nature of HTTP requests handled by FastAPI. The recommended approach is to run the AI agent as a separate process or service, which can be triggered by your FastAPI application via APIs. This separation ensures that the agent can handle long-running tasks, such as voice interactions, without being constrained by the lifecycle of HTTP requests. If you need more detailed guidance on setting up this architecture, let me know!

Sources:
- LiveKit documentation
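[Editor's note: the process separation described above can be sketched with only the standard library. The `agent.py` entrypoint named in the comment is hypothetical; substitute whatever start command your agent worker actually uses. The demo runs a stand-in command so the sketch is executable as-is.]

```python
import subprocess
import sys

def launch_agent(cmd: list[str]) -> subprocess.Popen:
    """Start the voice agent as its own OS process, outside FastAPI.

    `cmd` would normally be something like ["python", "agent.py", "start"]
    (a hypothetical entrypoint); the agent then runs with its own
    lifecycle, unconstrained by any HTTP request.
    """
    return subprocess.Popen(cmd)

# Demo with a stand-in command so the sketch is runnable as-is.
proc = launch_agent([sys.executable, "-c", "print('agent running')"])
proc.wait()
```

In production you would typically let a process supervisor (systemd, Docker, etc.) own the agent rather than spawning it from application code, but the boundary is the same: the agent is a sibling process, not a request handler.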
i
If so, how can I integrate a voice AI agent with existing chatbots in FastAPI?
g
To integrate a voice AI agent with an existing chatbot in FastAPI, you can follow these steps:

1. Separate Processes: Run the voice AI agent as a separate process or service. This can be done using LiveKit's Agents framework, which allows the agent to handle voice interactions independently of the FastAPI process. The agent can be triggered via API calls from your FastAPI application.
2. API Communication: Use FastAPI to handle HTTP requests and communicate with the voice AI agent. When a voice interaction is needed, FastAPI can send a request to the agent's API, passing necessary data such as user input or session context.
3. WebRTC Integration: Utilize LiveKit's WebRTC capabilities to manage real-time voice data. The voice AI agent can join a LiveKit room to interact with users, while FastAPI handles the logic for when and how these interactions occur.
4. Data Exchange: Ensure that the chatbot and voice agent can exchange data seamlessly. This might involve using a shared database or message queue to maintain session state and context between the two systems.
5. RAG Integration: If your chatbot uses Retrieval-Augmented Generation (RAG), ensure that the voice agent can access the same knowledge base or retrieval system to provide consistent responses.

Would you like more detailed guidance on any of these steps?

Sources:
- https://livekit-users.slack.com/archives/C07FYCB7FRQ/p1730222742.728759
- https://docs.livekit.io/agents/openai
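[Editor's note: one concrete piece the FastAPI side usually owns in this architecture is minting the room access token that lets a user (or the agent) join a LiveKit room. The official `livekit-api` package provides an `AccessToken` helper for this; the stdlib sketch below only illustrates what such a token contains, assuming LiveKit's documented HS256-JWT layout (`iss` = API key, `sub` = identity, a `video` grant claim). Treat the claim names as assumptions to verify against current LiveKit docs.]

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_livekit_token(api_key: str, api_secret: str, identity: str, room: str) -> str:
    """Hand-rolled HS256 JWT with a LiveKit-style video grant (illustrative)."""
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {
        "iss": api_key,                     # LiveKit API key
        "sub": identity,                    # participant identity
        "exp": int(time.time()) + 3600,     # 1 hour validity
        "video": {"roomJoin": True, "room": room},  # assumed grant shape
    }
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(payload).encode())}"
    )
    sig = hmac.new(api_secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = mint_livekit_token("devkey", "secret", "user-1", "voice-room")
```

In a real app, prefer the maintained `livekit-api` SDK over hand-rolling tokens; the sketch exists only to make the FastAPI/agent split concrete.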
i
If possible, could you share detailed code?
g
To integrate a voice AI agent with an existing chatbot in FastAPI, you can follow these steps with code examples:

1. Run the Voice AI Agent Separately: Use LiveKit's framework to run the voice AI agent as a separate service. Here's a basic setup using LiveKit's Python SDK:
```python
from livekit.agents import AutoSubscribe, JobContext, WorkerOptions, cli, llm
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import openai, deepgram, silero

async def entrypoint(ctx: JobContext):
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)
    participant = await ctx.wait_for_participant()

    agent = VoicePipelineAgent(
        vad=silero.VAD.load(),
        stt=deepgram.STT(),
        llm=openai.LLM(),
        tts=openai.TTS(),
        # `llm` must be imported from livekit.agents for ChatContext
        chat_ctx=llm.ChatContext().append(role="system", text="You are a helpful assistant."),
    )
    agent.start(ctx.room, participant)

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
```
2. FastAPI Setup: Create a FastAPI application to handle HTTP requests and communicate with the voice AI agent.
```python
from fastapi import FastAPI, HTTPException
import httpx  # async HTTP client; blocking `requests` would stall the event loop

app = FastAPI()

@app.post("/send-to-agent/")
async def send_to_agent(user_input: str):
    # Example of sending data to the voice AI agent
    try:
        async with httpx.AsyncClient() as client:
            response = await client.post(
                "http://voice-agent-service/api/process",
                json={"input": user_input},
            )
        return response.json()
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.get("/")
async def read_root():
    return {"Hello": "World"}
```
3. Communication Between FastAPI and Voice Agent: Use HTTP requests or WebSockets to send data between FastAPI and the voice AI agent. The example above shows a simple HTTP POST request to send user input to the agent.
4. WebRTC Integration: Ensure that the voice AI agent can join a LiveKit room to handle real-time voice interactions. This is managed by the LiveKit framework and does not require additional code in FastAPI.
5. Data Exchange: Use a shared database or message queue if needed to maintain session state and context between the chatbot and the voice agent.

This setup allows you to keep the voice AI agent and FastAPI application separate, ensuring that each can handle its specific tasks efficiently. If you need more detailed guidance on any specific part, let me know!

Sources:
- https://docs.livekit.io/sip/examples/sip-participant-attributes/
- https://livekit-users.slack.com/archives/C07FY8WHGPM/p1733211463.305549
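[Editor's note: the "Data Exchange" step above can be made concrete with a minimal, thread-safe in-process stand-in for the shared store. In a real deployment this role is played by Redis, a database, or a message queue; the class name and method names here are illustrative only, but they show the interface that lets the text chatbot and the voice agent read the same conversation context.]

```python
import threading

class SessionStore:
    """Minimal thread-safe stand-in for a shared session store.

    Both the chatbot (FastAPI process) and the voice agent (separate
    process) would read and write the same conversation history so
    their responses stay consistent.
    """

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._sessions: dict[str, list[dict]] = {}

    def append(self, session_id: str, role: str, text: str) -> None:
        with self._lock:
            self._sessions.setdefault(session_id, []).append(
                {"role": role, "text": text}
            )

    def history(self, session_id: str) -> list[dict]:
        with self._lock:
            return list(self._sessions.get(session_id, []))

store = SessionStore()
store.append("sess-1", "user", "hello")       # written by the text chatbot
store.append("sess-1", "assistant", "hi!")    # written by the voice agent
```

Because the two components live in different processes, an in-memory dict cannot actually be shared between them; the sketch only fixes the interface you would back with an external store.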
i
@millions-hair-24956 3. Communication Between FastAPI and Voice Agent: Use HTTP requests or WebSockets to send data between FastAPI and the voice AI agent. The example above shows a simple HTTP POST request to send user input to the agent. Implement this part
g
I'm not able to find any data sources to answer your question. You might want to specify whether you need a synchronous or asynchronous communication method and any specific requirements for data format or error handling. If this doesn't help, please reach out to the LiveKit support team for further assistance.
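[Editor's note: the HTTP POST communication pattern the thread keeps referring to (step 3) can be sketched end to end with only the standard library. The agent side is stubbed with an echo handler, and the `/api/process` route is a hypothetical name carried over from the earlier example; a real agent service would expose whatever route you define.]

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AgentHandler(BaseHTTPRequestHandler):
    """Stand-in for the voice agent's HTTP endpoint (echoes the input)."""

    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = json.dumps({"echo": body["input"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Port 0 lets the OS pick a free port.
server = HTTPServer(("127.0.0.1", 0), AgentHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def send_to_agent(user_input: str) -> dict:
    """What the FastAPI endpoint would do: POST the user's text to the agent."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{server.server_port}/api/process",
        data=json.dumps({"input": user_input}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

result = send_to_agent("hello agent")
server.shutdown()
```

Inside FastAPI you would make the client side async (e.g. with `httpx`) as in the earlier example; the request/response shape is the same.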