# ask-ai
h
@millions-hair-24956 How to use a CustomLLM in LiveKit voice Agent. Detailed Steps and the custom llm response format
g
It seems likely that to use a custom LLM in a LiveKit voice agent, you would need to create a subclass of the `llm.LLM` class and override its `chat` method to implement your custom logic. This approach allows you to integrate your own LLM logic into the LiveKit framework. The custom LLM should return an instance of `LLMStream`, which is used to handle the streaming of responses. Here's a basic outline of how you might implement this:

1. Create a Custom LLM Class: Subclass the `llm.LLM` class and override the `chat` method. This method should return an instance of a custom `LLMStream` class that you define.
2. Implement the LLMStream: Your custom `LLMStream` class should handle the logic for streaming responses back to the agent. This involves implementing the `_main_task` method to process the input and generate the output.
3. Integrate with VoicePipelineAgent: Use your custom LLM class as the `llm` parameter when creating an instance of `VoicePipelineAgent`.

Here is a simplified example based on the provided data sources:
```python
import aiohttp

from livekit.agents import llm


class CustomLLM(llm.LLM):
    def __init__(self):
        super().__init__()
        self._client = aiohttp.ClientSession()
        self._api_key = "your_api_key"

    def chat(self, *, chat_ctx: llm.ChatContext, fnc_ctx: llm.FunctionContext = None, temperature: float = None, n: int = 1, parallel_tool_calls: bool = None) -> "CustomLLMStream":
        return CustomLLMStream(llm=self, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx, client=self._client)

    async def aclose(self):
        await self._client.close()

class CustomLLMStream(llm.LLMStream):
    def __init__(self, *, llm: CustomLLM, chat_ctx: llm.ChatContext, fnc_ctx: llm.FunctionContext, client: aiohttp.ClientSession):
        super().__init__(llm=llm, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx)
        self._client = client

    async def _main_task(self):
        # Implement your custom logic here
        # For example, call an external API and process the response
        response = await self._client.get("your_api_endpoint")
        data = await response.text()
        chat_chunk = llm.ChatChunk(request_id="your_request_id", choices=[llm.Choice(delta=llm.ChoiceDelta(role="assistant", content=data))])
        self._event_ch.send_nowait(chat_chunk)
```
Would you like more detailed guidance on any specific part of this process?

Sources:
- LiveKit Slack Discussion
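For step 3, a sketch of the wiring might look like the following, assuming the 0.x `VoicePipelineAgent` pipeline API; the Silero, Deepgram, and OpenAI plugins here are only stand-ins for whichever VAD, STT, and TTS providers you actually use, and `CustomLLM` is the class from the snippet above.

```python
from livekit.agents import AutoSubscribe, JobContext, WorkerOptions, cli
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import deepgram, openai, silero


async def entrypoint(ctx: JobContext):
    # Connect to the room and wait for a participant before starting the pipeline
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)
    participant = await ctx.wait_for_participant()

    agent = VoicePipelineAgent(
        vad=silero.VAD.load(),  # voice activity detection (placeholder plugin)
        stt=deepgram.STT(),     # speech-to-text (placeholder plugin)
        llm=CustomLLM(),        # the custom LLM class defined in the snippet above
        tts=openai.TTS(),       # text-to-speech (placeholder plugin)
    )
    agent.start(ctx.room, participant)


if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint))
```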
h
@millions-hair-24956 provide me guidance
g
I'm here to help with questions related to LiveKit. If you have a specific question or need guidance on a particular aspect of LiveKit, such as setting up a voice agent, integrating a custom LLM, or using LiveKit's features, please provide more details or rephrase your question. This will help me assist you better!
h
@millions-hair-24956 provide me guidance in the process, I'm getting this error: TypeError: Can't instantiate abstract class CustomLLMStream with abstract method _run
g
It seems likely that the error you're encountering is due to the `CustomLLMStream` class missing an implementation for an abstract method, specifically `_run`. In LiveKit, the `LLMStream` class is an abstract base class, and any subclass must implement all its abstract methods. Here's a step-by-step guide to address this issue:

1. Identify Abstract Methods: Check the `LLMStream` class to identify any abstract methods that need to be implemented. The `_run` method is likely one of them.
2. Implement the `_run` Method: In your `CustomLLMStream` class, implement the `_run` method. This method should contain the logic for processing the input and generating the output. It typically involves handling the streaming of responses.
3. Example Implementation: Below is an example of how you might implement the `_run` method in your `CustomLLMStream` class:
```python
class CustomLLMStream(llm.LLMStream):
    def __init__(self, *, llm: CustomLLM, chat_ctx: llm.ChatContext, fnc_ctx: llm.FunctionContext, client: aiohttp.ClientSession):
        super().__init__(llm=llm, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx)
        self._client = client

    async def _main_task(self):
        # Implement your custom logic here
        response = await self._client.get("your_api_endpoint")
        data = await response.text()
        chat_chunk = llm.ChatChunk(request_id="your_request_id", choices=[llm.Choice(delta=llm.ChoiceDelta(role="assistant", content=data))])
        self._event_ch.send_nowait(chat_chunk)

    async def _run(self):
        # Implement the logic for running the stream
        await self._main_task()
```
4. Test Your Implementation: After implementing the `_run` method, test your `CustomLLMStream` to ensure it works as expected without raising the `TypeError`.

If you need further assistance or more detailed guidance on implementing specific parts of the `CustomLLMStream`, feel free to ask!

Sources:
- https://docs.livekit.io/python/livekit/agents/llm/index.html#livekit.agents.llm.LLMStream
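As a reference point, here is a toy, hedged sketch of a stream that overrides only `_run`: it echoes the last user message back word by word, using the same `ChatChunk` shape as the snippets above. It assumes the base class exposes `self._chat_ctx` and `self._event_ch` as in the 0.x source, and that `utils.shortuuid` exists in `livekit.agents.utils` for generating a request id; in recent `livekit-agents` releases the base class also appears to implement `_main_task` itself and call `_run`, so overriding only `_run` may be enough.

```python
from livekit.agents import llm, utils


class EchoLLMStream(llm.LLMStream):
    """Toy stream that echoes the last user message back word by word."""

    async def _run(self):
        # chat_ctx.messages holds llm.ChatMessage objects in the 0.x API.
        last_user_msg = next(
            (
                m.content
                for m in reversed(self._chat_ctx.messages)
                if m.role == "user" and isinstance(m.content, str)
            ),
            "",
        )
        request_id = utils.shortuuid()

        for word in last_user_msg.split():
            # Emit one chunk per word, mirroring the chunk format used above.
            self._event_ch.send_nowait(
                llm.ChatChunk(
                    request_id=request_id,
                    choices=[
                        llm.Choice(
                            delta=llm.ChoiceDelta(role="assistant", content=word + " ")
                        )
                    ],
                )
            )
```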
h
@millions-hair-24956 now im getting this TypeError: LLMStream.__init__() missing 1 required keyword-only argument: 'conn_options'
g
It seems likely that the `LLMStream` class requires a `conn_options` argument during initialization, which is not being provided in your `CustomLLMStream` class. This argument is likely used to configure connection options for the stream. Here's how you can address this issue:

1. Identify Required Arguments: Check the `LLMStream` class documentation or source code to understand what `conn_options` should contain. It might be a dictionary or a specific object that configures connection settings.
2. Modify the `CustomLLMStream` Constructor: Update your `CustomLLMStream` class to accept and pass the `conn_options` argument to the superclass constructor.
3. Example Implementation: Here's an example of how you might modify your `CustomLLMStream` class to include `conn_options`:
```python
class CustomLLMStream(llm.LLMStream):
    def __init__(self, *, llm: CustomLLM, chat_ctx: llm.ChatContext, fnc_ctx: llm.FunctionContext, client: aiohttp.ClientSession, conn_options: dict):
        super().__init__(llm=llm, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx, conn_options=conn_options)
        self._client = client

    async def _main_task(self):
        # Implement your custom logic here
        response = await self._client.get("your_api_endpoint")
        data = await response.text()
        chat_chunk = llm.ChatChunk(request_id="your_request_id", choices=[llm.Choice(delta=llm.ChoiceDelta(role="assistant", content=data))])
        self._event_ch.send_nowait(chat_chunk)

    async def _run(self):
        # Implement the logic for running the stream
        await self._main_task()
```
4. Provide `conn_options` When Instantiating: When you create an instance of `CustomLLMStream`, make sure to provide the `conn_options` argument with appropriate values.

If you need more detailed guidance on what `conn_options` should contain, please refer to the LiveKit documentation or the source code for the `LLMStream` class.

Sources:
- https://docs.livekit.io/python/livekit/plugins/openai/index.html#livekit.plugins.openai.LLMStream
- https://docs.livekit.io/python/livekit/agents/llm/llm.html#livekit.agents.llm.llm.LLMStream

Note: I wasn't able to find highly relevant data sources, but above are a few potentially relevant links.
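On what `conn_options` is likely to be: in recent `livekit-agents` releases it appears to be an `APIConnectOptions` dataclass (timeout/retry settings) rather than a plain dict, with a ready-made default exported next to it. A minimal sketch follows, assuming both names are importable from `livekit.agents` (they may live in `livekit.agents.types` in your version) and with the field names (`max_retry`, `timeout`) quoted from memory, so verify them against your installed release.

```python
from livekit.agents import APIConnectOptions, DEFAULT_API_CONNECT_OPTIONS

# The exported default is usually sufficient:
conn_options = DEFAULT_API_CONNECT_OPTIONS

# ...or construct your own, e.g. a longer timeout with a single retry:
conn_options = APIConnectOptions(max_retry=1, timeout=30.0)
```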
h
@millions-hair-24956 TypeError: CustomLLMStream.__init__() missing 1 required keyword-only argument: 'conn_options' when calling return CustomLLMStream(llm=self, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx, client=self._client)
g
I'm not able to find any data sources to answer your question. You might want to check the `LLMStream` class documentation or source code to understand the `conn_options` parameter requirements. If this doesn't help, please reach out to the LiveKit support team for further assistance.
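For what it's worth, the traceback points at `chat()`: it builds the stream without a `conn_options` value even though `CustomLLMStream.__init__` (and the base `LLMStream.__init__`) now requires one. A hedged sketch of the fix, assuming the `APIConnectOptions` / `DEFAULT_API_CONNECT_OPTIONS` names mentioned above, is to accept `conn_options` in `chat()` with the exported default and forward it through:

```python
from __future__ import annotations

import aiohttp

from livekit.agents import APIConnectOptions, DEFAULT_API_CONNECT_OPTIONS, llm


class CustomLLM(llm.LLM):
    def __init__(self) -> None:
        super().__init__()
        self._client = aiohttp.ClientSession()

    def chat(
        self,
        *,
        chat_ctx: llm.ChatContext,
        conn_options: APIConnectOptions = DEFAULT_API_CONNECT_OPTIONS,
        fnc_ctx: llm.FunctionContext | None = None,
        temperature: float | None = None,
        n: int = 1,
        parallel_tool_calls: bool | None = None,
    ) -> "CustomLLMStream":
        # Accept conn_options (the agent passes it, or the default applies)
        # and forward it so the base LLMStream.__init__ receives it.
        return CustomLLMStream(
            llm=self,
            chat_ctx=chat_ctx,
            fnc_ctx=fnc_ctx,
            conn_options=conn_options,
            client=self._client,
        )

    async def aclose(self) -> None:
        await self._client.close()


class CustomLLMStream(llm.LLMStream):
    def __init__(
        self,
        *,
        llm: CustomLLM,
        chat_ctx: llm.ChatContext,
        fnc_ctx: llm.FunctionContext | None,
        conn_options: APIConnectOptions,
        client: aiohttp.ClientSession,
    ) -> None:
        super().__init__(llm=llm, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx, conn_options=conn_options)
        self._client = client

    async def _run(self) -> None:
        # Implement the actual streaming here, e.g. call your API and emit
        # ChatChunk objects via self._event_ch as in the earlier snippets.
        ...
```

With that in place, the agent can keep calling `custom_llm.chat(chat_ctx=...)` without supplying `conn_options` explicitly, and the default is forwarded down to the base stream.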