melodic-zebra-52773
08/17/2025, 1:19 PM
tall-airline-16756
08/17/2025, 1:20 PM
1. Install the Plugin:
First, install the LiveKit LangChain plugin:
pip install "livekit-plugins-langchain~=1.1"
2. Import Necessary Modules:
Next, bring in the modules you’ll need from LiveKit and LangGraph:
from langgraph.graph import StateGraph
from livekit.agents import Agent, AgentSession
from livekit.plugins import langchain
3. Create and Compile Your LangGraph Workflow:
Now, define your LangGraph workflow by adding the nodes and edges you want, then compile it:
def create_workflow():
    workflow = StateGraph(...)
    # Add nodes and edges as needed
    return workflow.compile()
4. Use the Workflow as an LLM:
Wrap your compiled graph with LLMAdapter and plug it into an `AgentSession`:
graph = create_workflow()
session = AgentSession(
    llm=langchain.LLMAdapter(graph),
    # Add other necessary components like STT, TTS, etc.
)
5. Start the Agent Session:
Finally, create the agent and kick off the session:
agent = Agent(
    instructions="You are a helpful voice assistant.",
    llm=langchain.LLMAdapter(graph),
)
await session.start(agent=agent)
Just a heads-up: make sure you have the right dependencies installed, like langgraph and langchain-core. The great thing about LLMAdapter is that it smoothly converts LiveKit’s chat context into LangChain messages, so everything fits together nicely.
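For intuition, here’s a rough, hypothetical sketch of the kind of role mapping that conversion involves. The real adapter operates on LiveKit chat-context and LangChain message objects; the plain dicts and tuples below are illustrative stand-ins only:

```python
# Hypothetical illustration of a chat-history role mapping.
# LiveKit-style roles on the left, LangChain-style message types on the right.
ROLE_MAP = {
    "system": "system",
    "user": "human",
    "assistant": "ai",
}

def to_langchain_messages(chat_ctx):
    """Convert a list of {'role', 'content'} dicts to (type, text) tuples."""
    return [(ROLE_MAP[m["role"]], m["content"]) for m in chat_ctx]

history = [
    {"role": "system", "content": "You are a helpful voice agent."},
    {"role": "user", "content": "Hello!"},
]
converted = to_langchain_messages(history)
```

The point is simply that each turn in the voice session’s history ends up as a message the graph can consume, so your LangGraph nodes see a normal LangChain conversation.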
If anything’s unclear or you want to dive deeper into any part, just let me know—I’m here to help!
melodic-zebra-52773
08/17/2025, 1:21 PM
tall-airline-16756
08/17/2025, 1:21 PM
melodic-zebra-52773
08/17/2025, 1:23 PM
tall-airline-16756
08/17/2025, 1:24 PM
1. Create the LangGraph Workflow:
First, define the interview flow as a graph and compile it:
from langchain_core.messages import AIMessage
from langgraph.graph import START, StateGraph, MessagesState
def create_interview_workflow():
    workflow = StateGraph(MessagesState)
    workflow.add_node("welcome", lambda state: {"messages": [AIMessage("Hello! Let's begin the interview.")]})
    workflow.add_node("question1", lambda state: {"messages": [AIMessage("What is your current role?")]})
    workflow.add_edge(START, "welcome")
    workflow.add_edge("welcome", "question1")
    return workflow.compile()
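A quick way to sanity-check the flow above is a dependency-free sketch of the same two-node traversal. The dicts below are plain-Python stand-ins for the graph, not langgraph itself, which handles this traversal for you when you invoke the compiled graph:

```python
# Plain-Python stand-in for the linear interview graph above.
# Each node appends its prompt to the message list; edges give the order.
NODES = {
    "welcome": lambda state: {"messages": state["messages"] + ["Hello! Let's begin the interview."]},
    "question1": lambda state: {"messages": state["messages"] + ["What is your current role?"]},
}
EDGES = {"__start__": "welcome", "welcome": "question1", "question1": None}

def run_flow(state):
    node = EDGES["__start__"]
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node]
    return state

final = run_flow({"messages": []})
```

Running it yields the two prompts in order, which is exactly the sequence the compiled graph walks through for the candidate.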
2. Integrate LangGraph with LiveKit:
Next, use the LiveKit LangChain plugin to wrap the workflow as an LLM. Here's how you can do it:
from livekit.agents import AgentSession
from livekit.plugins import langchain
graph = create_interview_workflow()
session = AgentSession(
    llm=langchain.LLMAdapter(graph),
    # Add other required components like STT, TTS, etc.
)
3. Set Up the LiveKit Agent:
Now, initialize an agent session to run the interview smoothly:
from livekit.agents.voice import Agent
class InterviewAgent(Agent):
    def __init__(self):
        super().__init__(
            instructions="You are an interviewer. Ask questions from the LangGraph flow and wait for the candidate's answers.",
            llm=langchain.LLMAdapter(create_interview_workflow())
        )

async def entrypoint(ctx):
    await session.start(
        agent=InterviewAgent(),
        room=ctx.room
    )
4. Handle Candidate Responses:
Finally, process the candidate's responses and update the frontend in real-time to keep everything interactive:
from livekit.agents import function_tool

class InterviewAgent(Agent):
    @function_tool
    async def handle_response(self, context, response):
        # send_form_update_to_frontend is an app-specific helper,
        # not a built-in AgentSession method
        await self.session.send_form_update_to_frontend("updateField", {
            "field": "currentRole",
            "value": response
        })
        await self.session.send_form_update_to_frontend("highlightSection", {
            "section": "experience"
        })
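Since send_form_update_to_frontend is an app-specific helper rather than a built-in AgentSession method, here is a minimal sketch of the JSON envelope such a helper might serialize before publishing it to the frontend (the envelope shape and field names are assumptions, not a LiveKit API):

```python
import json

def build_form_update(action, payload):
    # Wrap the update in a typed envelope the frontend can dispatch on.
    return json.dumps({"type": "form_update", "action": action, "payload": payload})

msg = build_form_update("updateField", {"field": "currentRole", "value": "Engineer"})
```

On the client side, your frontend would parse messages of type "form_update" and apply each action to the form state.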
With this setup, your interviewer will ask questions one by one from the LangGraph flow, wait for answers, and keep the frontend updated dynamically. If you have any questions or need a hand with anything else, just let me know—I'm here to help!
Sources: examples/voice_agents/langgraph_agent.py | avatars/tavus/tavus.py