# ask-ai
s
Does LiveKit have integrations with LangChain/LangGraph or Agno? Has anyone integrated their LangChain/LangGraph workflows/agents with LiveKit? I found a langgraph-livekit-plugin but it's not working. Would be grateful if someone could send example code/resources for integrating both in workflows.
t
Thanks for reaching out, Abbas. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
LiveKit has an official integration with LangChain and LangGraph through the `livekit-plugins-langchain` package. This enables you to embed LangGraph workflows and LangChain agents directly as the LLM component within LiveKit voice agents. The core adapter is the `LLMAdapter` class, which allows seamless communication between LiveKit voice constructs and LangGraph workflows, including support for streaming, state management, and message format conversion. You can install the official integration with:
Copy code
pip install livekit-plugins-langchain
Here is an official code example adapted from the integration documentation (`examples/voice_agents/langgraph_agent.py`):
Copy code
from typing import Annotated, TypedDict

from langchain.chat_models import init_chat_model
from langchain_core.messages import BaseMessage
from langgraph.graph import START, StateGraph
from langgraph.graph.message import add_messages
from livekit.agents import Agent
from livekit.plugins import langchain


# Shared conversation state: add_messages appends new messages on each update
class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]


def create_graph():
    openai_llm = init_chat_model(model="openai:gpt-4o")

    def chatbot_node(state: State):
        # Run the chat model over the conversation so far
        return {"messages": [openai_llm.invoke(state["messages"])]}

    builder = StateGraph(State)
    builder.add_node("chatbot", chatbot_node)
    builder.add_edge(START, "chatbot")
    return builder.compile()


async def entrypoint(ctx):
    graph = create_graph()
    agent = Agent(
        instructions="",
        llm=langchain.LLMAdapter(graph),
    )
    # ...rest of the setup
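For completeness, the "rest of the setup" typically means starting an `AgentSession` (shown in the follow-up messages below) and registering the entrypoint with the worker CLI. A minimal sketch of the runner, matching the pattern used later in this thread:
Copy code
from livekit.agents import WorkerOptions, cli

if __name__ == "__main__":
    # Registers the entrypoint so the worker can accept jobs and join rooms
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))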
This integration uses the `LLMAdapter` to bridge between LiveKit Agents and LangGraph, so you can use your custom workflows as the core logic for your voice agents. There is no information about an official Agno integration, nor about a distinct `langgraph-livekit-plugin`; `livekit-plugins-langchain` is the recommended and maintained integration from LiveKit. For further detail and comprehensive resources, refer to the official package documentation and `examples/voice_agents/langgraph_agent.py` inside the LiveKit examples directory. (Supported versions include `langchain-core >= 0.3.0`, `langgraph >= 0.3.0`, and `livekit-agents >= 1.1.5`.)
Sources: livekit-plugins/livekit-plugins-langchain/livekit/plugins/langchain/py.type... | LLM Integrations | livekit/livekit_composite | DeepWiki
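Since the adapter handles message conversion for you, a quick way to rule out graph problems before wiring in voice is to exercise the compiled graph directly with LangChain messages. A minimal sketch (the prompt text here is just an example):
Copy code
from langchain_core.messages import HumanMessage

graph = create_graph()
# The graph should return an AIMessage with non-empty text content;
# that text is what the voice agent will ultimately speak.
result = graph.invoke({"messages": [HumanMessage(content="Say hello in one sentence.")]})
print(result["messages"][-1].content)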
s
Yes, that's what I provided, but there is no example code to start with, and the graph I've made isn't integrating with this adapter: the TTS and the graph aren't being connected to the main workflow.
t
Hey there! Let's walk through how to connect your custom LangGraph workflow with LiveKit's `LLMAdapter` to create a smooth voice agent integration. I'll break it down in a friendly, easy-to-understand way.
Copy code
from typing import Annotated, TypedDict

from langchain.chat_models import init_chat_model
from langchain_core.messages import BaseMessage
from langgraph.graph import START, StateGraph
from langgraph.graph.message import add_messages
from livekit.agents import Agent, AgentSession, JobContext
from livekit.plugins import langchain, deepgram, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel


class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]


def chatbot_node(state: State):
    # Any LangChain chat model works here; swap in your own node logic
    llm = init_chat_model(model="openai:gpt-4o")
    return {"messages": [llm.invoke(state["messages"])]}


def create_graph():
    # Define your custom workflow
    builder = StateGraph(State)
    builder.add_node("chatbot", chatbot_node)
    builder.add_edge(START, "chatbot")
    return builder.compile()


async def entrypoint(ctx: JobContext):
    graph = create_graph()
    agent = Agent(
        instructions="",
        llm=langchain.LLMAdapter(graph),
    )
    session = AgentSession(
        vad=silero.VAD.load(),
        stt=deepgram.STT(model="nova-3", language="multi"),
        tts=deepgram.TTS(),
        turn_detection=MultilingualModel(),
    )
    await session.start(
        agent=agent,
        room=ctx.room,
    )
Let me walk you through the key integration points:
• `LLMAdapter` is like a friendly translator that connects your LangGraph workflow with the LiveKit agent. It helps route messages smoothly.
• Make sure to include compatible STT, TTS, and VAD components in `AgentSession`. Don't worry - non-streaming components get automatically wrapped to play nice together.
• Set up turn detection to keep the conversation flowing naturally.
• Just double-check that your workflow nodes can handle the message format (LiveKit and LLM messages get converted automatically, so it's pretty seamless).
If you're still scratching your head about TTS or graph connections, no worries! Double-check that your workflow is compiled correctly and that `AgentSession` components are playing well together; there's also a quick TTS sanity check sketched right after this message. The official example `voice_agents/langgraph_agent.py` is a great reference - it shows a tried-and-true setup (using `deepgram` and `silero` as plugin solutions, but you can swap these out). Stuck on something specific? Drop me the details about your graph structure or any errors you're seeing, and I'll be happy to help you troubleshoot!
Sources: livekit-agents/livekit/agents/voice/agent.py | LLM Integrations | livekit/livekit_composite | DeepWiki
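One quick way to check the TTS wiring independently of your graph is to have the session speak a fixed line right after it starts. A short sketch, assuming livekit-agents 1.x, where `AgentSession` exposes `say()`:
Copy code
    # ...inside entrypoint, right after session.start(...)
    # If you hear this line, TTS and audio output are wired correctly;
    # silence afterwards points at the graph -> LLMAdapter path instead.
    await session.say("Voice pipeline is up and running.")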
s
Copy code
from langgraph.graph import StateGraph, START, END
from livekit.agents import Agent, AgentSession, JobContext, cli, WorkerOptions
from livekit.plugins import langchain, deepgram, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel
from typing import TypedDict, Annotated
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
from langgraph.graph import add_messages
from langchain_groq import ChatGroq
import os
from dotenv import load_dotenv

load_dotenv()


class State(TypedDict):
    messages: Annotated[list[BaseMessage], add_messages]


def chatbot_node(state: State):
    """
    This node acts as a normal LLM call to Groq.
    """
    llm = ChatGroq(model="llama3-8b-8192", temperature=0.7)  # You can choose other Groq models
    response = llm.invoke(state["messages"])
    return {"messages": [response]}


def create_graph():
    # Define your custom workflow
    builder = StateGraph(State)
    builder.add_node("chatbot", chatbot_node)
    builder.add_edge(START, "chatbot")
    # You might want to add a conditional edge or END here based on your flow
    # For a simple chat, you could loop back to chatbot or define an END condition.
    return builder.compile()


async def entrypoint(ctx: JobContext):
    graph = create_graph()
    agent = Agent(
        instructions="",
        llm=langchain.LLMAdapter(graph),
    )
    session = AgentSession(
        vad=silero.VAD.load(),
        stt=deepgram.STT(model="nova-3", language="multi"),
        tts=deepgram.TTS(),
        turn_detection=MultilingualModel(),
    )
    await session.start(
        agent=agent,
        room=ctx.room,
    )


if __name__ == "__main__":
    cli.run_app(
        WorkerOptions(
            entrypoint_fnc=entrypoint
        )
    )
Hey, this is a sample code. Here the STT is working and the LangGraph agent is also working, but why am I not getting any output back from the TTS? Can you look into how I can get the TTS to work for the output?
t
I don't have the answer you're looking for. You could also try asking your question:
• in one of the other Slack channels, or
• to https://deepwiki.com/livekit/livekit_composite which is trained on all LiveKit source code
If you find the answer, please post it here to help others!
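For anyone who lands on this thread with the same silent-TTS symptom, two things worth comparing against the official voice-agent quickstart pattern (a hedged sketch, not a confirmed fix): the quickstart's entrypoint calls `await ctx.connect()` explicitly, which the code above omits, and it triggers a first spoken reply with `generate_reply()`, which makes it obvious whether the TTS path works at all:
Copy code
from livekit.agents import Agent, AgentSession, JobContext, WorkerOptions, cli
from livekit.plugins import langchain, deepgram, silero
from livekit.plugins.turn_detector.multilingual import MultilingualModel

# create_graph() as defined in the message above


async def entrypoint(ctx: JobContext):
    session = AgentSession(
        vad=silero.VAD.load(),
        stt=deepgram.STT(model="nova-3", language="multi"),
        tts=deepgram.TTS(),
        turn_detection=MultilingualModel(),
    )
    await session.start(
        agent=Agent(instructions="", llm=langchain.LLMAdapter(create_graph())),
        room=ctx.room,
    )
    await ctx.connect()  # the quickstart calls this explicitly; it's missing above
    # Force one spoken reply on join: if this is audible, audio output and
    # TTS are fine and the issue is in the graph -> adapter hand-off instead.
    await session.generate_reply(instructions="Greet the user briefly.")


if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))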