Structured outputs can indeed help ensure that the LLM adheres to a specific format when generating responses. However, LiveKit does not currently support structured outputs with OpenAI, which could be a limitation in your case. According to a Slack conversation, there are plans to add support for structured outputs, but it is not available at the moment.
In the meantime, focusing on clear and detailed function annotations and adjusting the model's temperature are the primary methods to improve consistency. Additionally, you might consider implementing a post-processing step to validate and correct the output format if necessary.
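As a rough sketch of that post-processing step, you could parse the model's raw text as JSON and check for the keys you expect before acting on it, retrying or correcting when validation fails. The key names (`intent`, `city`) and the `validate_output` helper below are illustrative assumptions, not part of LiveKit's or OpenAI's API:

```python
import json

def validate_output(raw: str, required_keys: set) -> "dict | None":
    """Return the parsed dict if `raw` is JSON containing all required
    keys; return None to signal that the output should be retried or
    corrected. Key names are illustrative only."""
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(parsed, dict) or not required_keys.issubset(parsed):
        return None
    return parsed

# Well-formed output passes; free-form text is rejected.
ok = validate_output('{"intent": "book_flight", "city": "Paris"}',
                     {"intent", "city"})
bad = validate_output('Sure! Here is the booking...', {"intent", "city"})
```

On failure you might re-prompt the model with the validation error appended, which often nudges it back into the expected format.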
Would you like more information on how to implement post-processing or any other aspect of function calling?
Sources:
- Slack conversation on structured outputs
Hint: Mention @runllm in the thread for followups.