helpful-sugar-58738
02/26/2025, 12:25 PM

gray-monkey-83525
02/26/2025, 12:25 PM
You can subclass the `llm.LLM` class and override its `chat` method to implement your custom logic. This approach allows you to integrate your own LLM logic into the LiveKit framework. Your `chat` implementation should return an instance of `LLMStream`, which is used to handle the streaming of responses.
Here's a basic outline of how you might implement this:
1. Create a Custom LLM Class: Subclass the `llm.LLM` class and override the `chat` method. This method should return an instance of a custom `LLMStream` class that you define.
2. Implement the LLMStream: Your custom `LLMStream` class should handle the logic for streaming responses back to the agent. This involves implementing the `_main_task` method to process the input and generate the output.
3. Integrate with VoicePipelineAgent: Use your custom LLM class as the `llm` parameter when creating an instance of `VoicePipelineAgent` (a usage sketch follows the example below).
Here is a simplified example based on the provided data sources:
import aiohttp

from livekit.agents import llm


class CustomLLM(llm.LLM):
    def __init__(self):
        super().__init__()
        self._client = aiohttp.ClientSession()
        self._api_key = "your_api_key"

    def chat(self, *, chat_ctx: llm.ChatContext, fnc_ctx: llm.FunctionContext = None, temperature: float = None, n: int = 1, parallel_tool_calls: bool = None) -> "CustomLLMStream":
        return CustomLLMStream(llm=self, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx, client=self._client)

    async def aclose(self):
        await self._client.close()


class CustomLLMStream(llm.LLMStream):
    def __init__(self, *, llm: CustomLLM, chat_ctx: llm.ChatContext, fnc_ctx: llm.FunctionContext, client: aiohttp.ClientSession):
        super().__init__(llm=llm, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx)
        self._client = client

    async def _main_task(self):
        # Implement your custom logic here
        # For example, call an external API and process the response
        response = await self._client.get("your_api_endpoint")
        data = await response.text()
        chat_chunk = llm.ChatChunk(request_id="your_request_id", choices=[llm.Choice(delta=llm.ChoiceDelta(role="assistant", content=data))])
        self._event_ch.send_nowait(chat_chunk)
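For step 3, a minimal sketch of wiring the custom class into `VoicePipelineAgent` could look like the following. The STT/TTS/VAD plugins and the `ctx.room`/`participant` variables are placeholders taken from the standard LiveKit Agents examples, not part of your code, so swap in whichever providers and entrypoint variables you actually use:

from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import deepgram, openai, silero

# Inside your agent's entrypoint: pass the custom LLM as the `llm` parameter.
agent = VoicePipelineAgent(
    vad=silero.VAD.load(),      # placeholder VAD plugin
    stt=deepgram.STT(),         # placeholder STT plugin
    llm=CustomLLM(),            # your subclass from above
    tts=openai.TTS(),           # placeholder TTS plugin
    chat_ctx=llm.ChatContext(),
)
agent.start(ctx.room, participant)  # ctx/participant come from your job entrypoint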
Would you like more detailed guidance on any specific part of this process?
Sources:
- LiveKit Slack Discussion

helpful-sugar-58738
02/26/2025, 12:32 PM

gray-monkey-83525
02/26/2025, 12:32 PM

helpful-sugar-58738
02/26/2025, 12:35 PM

gray-monkey-83525
02/26/2025, 12:35 PM
The error indicates that your `CustomLLMStream` class is missing an implementation for an abstract method, specifically `_run`. In LiveKit, the `LLMStream` class is an abstract base class, and any subclass must implement all of its abstract methods.
Here's a step-by-step guide to address this issue:
1. Identify Abstract Methods: Check the `LLMStream` class to identify any abstract methods that need to be implemented. The `_run` method is likely one of them (see the short inspection snippet after this list).
2. Implement the `_run` Method: In your `CustomLLMStream` class, implement the `_run` method. This method should contain the logic for processing the input and generating the output. It typically drives the streaming of responses.
3. Example Implementation: Below is an example of how you might implement the `_run` method in your `CustomLLMStream` class:
class CustomLLMStream(llm.LLMStream):
    def __init__(self, *, llm: CustomLLM, chat_ctx: llm.ChatContext, fnc_ctx: llm.FunctionContext, client: aiohttp.ClientSession):
        super().__init__(llm=llm, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx)
        self._client = client

    async def _main_task(self):
        # Implement your custom logic here
        response = await self._client.get("your_api_endpoint")
        data = await response.text()
        chat_chunk = llm.ChatChunk(request_id="your_request_id", choices=[llm.Choice(delta=llm.ChoiceDelta(role="assistant", content=data))])
        self._event_ch.send_nowait(chat_chunk)

    async def _run(self):
        # Implement the logic for running the stream
        await self._main_task()
4. Test Your Implementation: After implementing the `_run` method, test your `CustomLLMStream` to ensure it works as expected without raising the `TypeError`.
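For step 1, a quick way to list the methods that still need an implementation is standard Python ABC introspection rather than anything LiveKit-specific. This assumes `LLMStream` is an `abc.ABC` subclass, as the `TypeError` suggests:

from livekit.agents import llm

# Every abstract method a subclass must override is listed here,
# e.g. frozenset({'_run'}) in this case.
print(llm.LLMStream.__abstractmethods__)

# After subclassing, anything still missing shows up the same way;
# this is empty once _run (and any other abstract methods) are implemented.
print(CustomLLMStream.__abstractmethods__)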
If you need further assistance or more detailed guidance on implementing specific parts of the `CustomLLMStream`, feel free to ask!
Sources:
- https://docs.livekit.io/python/livekit/agents/llm/index.html#livekit.agents.llm.LLMStream

helpful-sugar-58738
02/26/2025, 12:46 PM

helpful-sugar-58738
02/26/2025, 12:46 PM

gray-monkey-83525
02/26/2025, 12:46 PM
The error suggests that the `LLMStream` class requires a `conn_options` argument during initialization, which is not being provided in your `CustomLLMStream` class. This argument is likely used to configure connection options for the stream.
Here's how you can address this issue:
1. Identify Required Arguments: Check the `LLMStream` class documentation or source code to understand what `conn_options` should contain. It might be a dictionary or a specific object that configures connection settings.
2. Modify the `CustomLLMStream` Constructor: Update your `CustomLLMStream` class to accept and pass the `conn_options` argument to the superclass constructor.
3. Example Implementation: Here's an example of how you might modify your `CustomLLMStream` class to include `conn_options`:
class CustomLLMStream(llm.LLMStream):
    def __init__(self, *, llm: CustomLLM, chat_ctx: llm.ChatContext, fnc_ctx: llm.FunctionContext, client: aiohttp.ClientSession, conn_options: dict):
        super().__init__(llm=llm, chat_ctx=chat_ctx, fnc_ctx=fnc_ctx, conn_options=conn_options)
        self._client = client

    async def _main_task(self):
        # Implement your custom logic here
        response = await self._client.get("your_api_endpoint")
        data = await response.text()
        chat_chunk = llm.ChatChunk(request_id="your_request_id", choices=[llm.Choice(delta=llm.ChoiceDelta(role="assistant", content=data))])
        self._event_ch.send_nowait(chat_chunk)

    async def _run(self):
        # Implement the logic for running the stream
        await self._main_task()
4. Provide `conn_options` When Instantiating: When you create an instance of `CustomLLMStream`, make sure to provide the `conn_options` argument with appropriate values, as sketched below.
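One way to satisfy step 4 is to forward a default value from the `chat` method. Recent livekit-agents versions expose `DEFAULT_API_CONNECT_OPTIONS` (an `APIConnectOptions` instance) for this, but those names are an assumption here; verify them against `livekit.agents.types` in your installed version:

# Assumes your livekit-agents version exports this name; check the
# livekit.agents.types module if the import fails.
from livekit.agents import DEFAULT_API_CONNECT_OPTIONS, llm

class CustomLLM(llm.LLM):
    ...  # __init__ / aclose unchanged from the earlier example

    def chat(self, *, chat_ctx: llm.ChatContext, fnc_ctx: llm.FunctionContext = None,
             conn_options=DEFAULT_API_CONNECT_OPTIONS, **kwargs) -> "CustomLLMStream":
        # Pass the connection options straight through to the stream
        return CustomLLMStream(
            llm=self,
            chat_ctx=chat_ctx,
            fnc_ctx=fnc_ctx,
            client=self._client,
            conn_options=conn_options,
        )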
If you need more detailed guidance on what `conn_options` should contain, please refer to the LiveKit documentation or the source code for the `LLMStream` class.
Sources:
- https://docs.livekit.io/python/livekit/plugins/openai/index.html#livekit.plugins.openai.LLMStream
- https://docs.livekit.io/python/livekit/agents/llm/llm.html#livekit.agents.llm.llm.LLMStream
Note: I wasn't able to find highly relevant data sources, but above are a few potentially relevant links.

helpful-sugar-58738
02/26/2025, 12:53 PM

gray-monkey-83525
02/26/2025, 12:53 PM
Please check the `LLMStream` class documentation or source code to understand the `conn_options` parameter requirements. If this doesn't help, please reach out to the LiveKit support team for further assistance.