# ask-ai
@millions-hair-24956 Is there a reason we cannot pass max_output_tokens in the azure-openai plugin's openai.LLM.with_azure?
It seems likely that the `max_output_tokens` parameter is not explicitly mentioned in the context of the `openai.LLM.with_azure` method. According to the LiveKit documentation, the `with_azure` method does not list `max_output_tokens` as a parameter. Instead, it focuses on parameters like `model`, `azure_endpoint`, `azure_deployment`, `api_version`, `api_key`, and `temperature`. Would you like more detail on how to work with the available parameters or any alternative approaches?

Sources:
- LiveKit documentation

💡 Hint: Mention @runllm in the thread for followups.