# ask-ai
a
Sometimes the OpenAI LLM does not follow the annotated JSON variables and instead passes a raw string to the action params. This happens roughly 1 out of 10 times. What can be done here?
g
It seems likely that the inconsistency comes from how the function calling is set up or how the LLM interprets the instructions. Ensuring the LLM has clear, explicit instructions on when and how to use the function can improve consistency. According to LiveKit documentation, you can also lower model parameters such as temperature to make the output more deterministic, which reduces variability in function calls.

Additionally, review the function's annotations and make them as descriptive as possible; this helps the LLM understand the expected input and output formats. If the issue persists, log the inputs and outputs to identify patterns or the specific conditions under which the LLM fails to follow the annotations.

Would you like more detailed guidance on setting up function calls or adjusting model parameters?

Sources:
- LiveKit documentation
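To make the advice above concrete, here is a minimal sketch (plain OpenAI-style tool schema, not LiveKit-specific) of tightening a tool definition so the model has less room to emit a raw string: typed parameters, a per-field description, an enum of allowed values, and a lowered temperature. The tool name, fields, and model name are hypothetical examples, not from the thread.

```python
def build_request(user_message: str) -> dict:
    """Build a chat-completion request dict with a tightly specified tool.

    Everything named here (set_thermostat, degrees, mode, gpt-4o) is a
    hypothetical example; substitute your own action and parameters.
    """
    tool = {
        "type": "function",
        "function": {
            "name": "set_thermostat",
            "description": (
                "Set the target temperature. Always call this with JSON "
                "arguments matching the schema exactly; never pass a raw string."
            ),
            "parameters": {
                "type": "object",
                "properties": {
                    "degrees": {
                        "type": "integer",
                        "description": "Target temperature in whole degrees Celsius.",
                    },
                    "mode": {
                        "type": "string",
                        "enum": ["heat", "cool", "off"],
                        "description": "Operating mode; one of the listed values only.",
                    },
                },
                "required": ["degrees", "mode"],
            },
        },
    }
    return {
        "model": "gpt-4o",      # assumed model name
        "temperature": 0.1,     # lower temperature -> more deterministic tool calls
        "messages": [{"role": "user", "content": user_message}],
        "tools": [tool],
    }
```

The enum and `required` fields are the important parts: they give the model an unambiguous target instead of a loosely described parameter it might fill with free text.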
a
@millions-hair-24956 OK, so apart from better descriptions and a lower temperature there is not much one can do? What about structured outputs?
g
Structured outputs can indeed help ensure that the LLM adheres to a specific format when generating responses. However, as of the current information, LiveKit does not yet support structured outputs with OpenAI, which could be a limitation in your case. According to a Slack conversation, there are plans to add support, but it is not available at the moment.

In the meantime, clear and detailed function annotations and a lower model temperature remain the primary methods for improving consistency. Additionally, you might implement a post-processing step to validate the output format and trigger a retry when it is wrong.

Would you like more information on how to implement post-processing or any other aspect of function calling?

Sources:
- Slack conversation on structured outputs