able-jackal-84140
07/07/2025, 5:42 AM

tall-airline-16756
07/07/2025, 5:42 AM
To pass additional parameters to the LLM, you can:

1. Create an `extra` dictionary with the parameters you want to set. For example:
```python
extra = {
    "temperature": 0.7,
    "max_output_tokens": 100,
    "top_p": 0.9,
}
```
2. Pass this `extra` dictionary to the LLMStream constructor or directly to the API call of the LLM client. For instance, in the OpenAI implementation:
```python
self._oai_stream = stream = await self._client.chat.completions.create(
    messages=chat_ctx,
    tools=fnc_ctx,
    model=self._model,
    stream_options={"include_usage": True},
    stream=True,
    timeout=httpx.Timeout(self._conn_options.timeout),
    **extra,
)
```
3. Ensure that the parameters in the `extra` dictionary are supported by the LLM provider you are using (e.g., OpenAI, Google, Mistral); each provider accepts its own set of parameters such as `temperature`, `max_output_tokens`, and `top_p`. A self-contained sketch of this flow follows below.
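To see the whole flow end to end, here's a minimal, runnable sketch of the `**extra` unpacking pattern using the plain `openai` Python client directly rather than the LiveKit plugin internals. The function name, model choice, and parameter values are illustrative assumptions, not plugin code:
```python
# Sketch of forwarding optional sampling parameters via an `extra` dict.
# Uses the plain `openai` AsyncOpenAI client for illustration; the LiveKit
# plugin applies the same idea inside its own LLMStream implementation.
import asyncio

from openai import AsyncOpenAI


async def stream_completion(prompt: str, **extra: object) -> str:
    client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment
    # Everything in `extra` is forwarded verbatim, so only include keys the
    # provider supports. Note: OpenAI's Chat Completions API expects
    # `max_tokens`, not `max_output_tokens`; names vary by provider (step 3).
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
        **extra,
    )
    parts: list[str] = []
    async for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts)


if __name__ == "__main__":
    extra = {"temperature": 0.7, "top_p": 0.9}
    print(asyncio.run(stream_completion("Say hello!", **extra)))
```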
Let me know if you need further clarification or assistance!
Sources: GitHub::livekit::agents | livekit-plugins/livekit-plugins-openai/livekit/pl...

able-jackal-84140
07/07/2025, 5:43 AM

tall-airline-16756
07/07/2025, 5:43 AM

If you want to include only the parameters that were actually configured, you can build the dictionary conditionally:

1. First, create an empty `extra` dictionary:
```python
extra = {}
```
2. Then, check each configuration option with a helper like `is_given()` and copy over only the ones that were actually set. Here's how that might look:
```python
if is_given(self._opts.temperature):
    extra["temperature"] = self._opts.temperature
if is_given(self._opts.max_output_tokens):
    extra["max_output_tokens"] = self._opts.max_output_tokens
```
3. The key here is only adding parameters that are actually provided or valid. This keeps the request clean and prevents any unnecessary clutter.
4. When you're ready, just pass this `extra` dictionary to your LLMStream constructor or API call, and you're all set! There's a standalone sketch of this conditional pattern right after this list.
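Here's a minimal, self-contained sketch of that conditional pattern. The `NOT_GIVEN` sentinel, `is_given()` helper, and `LLMOptions`/`build_extra` names are reimplemented here purely for illustration; LiveKit's actual utilities may differ in name and import path:
```python
# Sketch of the "only forward what was actually set" pattern.
# NOT_GIVEN / is_given are illustrative stand-ins, not LiveKit imports.
from dataclasses import dataclass
from typing import Any


class NotGiven:
    """Sentinel distinguishing 'not set' from explicit None/0/False values."""

    def __repr__(self) -> str:
        return "NOT_GIVEN"


NOT_GIVEN = NotGiven()


def is_given(value: Any) -> bool:
    return not isinstance(value, NotGiven)


@dataclass
class LLMOptions:
    # Defaults are NOT_GIVEN, so the provider's own defaults apply unless
    # the caller explicitly sets a value.
    temperature: float | NotGiven = NOT_GIVEN
    max_output_tokens: int | NotGiven = NOT_GIVEN
    top_p: float | NotGiven = NOT_GIVEN


def build_extra(opts: LLMOptions) -> dict[str, Any]:
    extra: dict[str, Any] = {}
    if is_given(opts.temperature):
        extra["temperature"] = opts.temperature
    if is_given(opts.max_output_tokens):
        extra["max_output_tokens"] = opts.max_output_tokens
    if is_given(opts.top_p):
        extra["top_p"] = opts.top_p
    return extra


# Usage: only temperature was set, so only temperature is forwarded.
print(build_extra(LLMOptions(temperature=0.7)))  # {'temperature': 0.7}
```
A sentinel is used instead of `None` so that explicitly passing falsy values like `0` or `0.0` still counts as "given".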
Stuck on any part of this? No worries! I'm happy to break it down further or help you troubleshoot. Just give me a shout if you need anything else! 😊

Sources: GitHub::livekit::agents | livekit-plugins/livekit-plugins-openai/livekit/pl...