# ask-ai
b
👋 Hello, team! LiveKit with OpenAI multimodal is working perfectly; however, the input and output token metrics collected after user and agent interactions are incorrect. When comparing the data, we found that OpenAI reports the correct values, while the discrepancy in LiveKit's reported metrics grows incrementally as the conversation progresses.
g
It is possible that the discrepancy in token metrics between LiveKit and OpenAI stems from differences in how each system calculates or reports them. LiveKit's metrics module provides detailed insight into the various stages of a session, including token usage, but these metrics may be aggregated or logged differently than in OpenAI's accounting. According to the LiveKit metrics documentation, metrics are captured and logged during active sessions, so discrepancies can arise from differences in the timing or method of data collection. Would you like more detailed information on how to investigate or address this issue? Sources: - LiveKit metrics documentation
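As a hypothetical illustration (not LiveKit's actual implementation), one way a discrepancy that grows incrementally with the conversation can arise is if per-response input token counts, which with a realtime API may each cover the entire conversation context so far, are summed naively by an aggregator:

```python
# Hypothetical sketch: why naively summing per-response "input token"
# counts can exceed the provider's true total. Each response's input
# usage is assumed to include the FULL conversation context so far,
# so simple summation re-counts earlier turns on every new response.

# Assumed per-response input token counts (cumulative context per turn):
# turn 1 sees 100 tokens, turn 2 sees 100+150, turn 3 sees 100+150+200.
per_turn_context = [100, 250, 450]

naive_sum = sum(per_turn_context)       # what a simple aggregator would log
final_context = per_turn_context[-1]    # tokens actually in the final context

print(naive_sum)                # 800 -- inflated total
print(final_context)            # 450
print(naive_sum - final_context)  # 350 -- gap widens as turns accumulate
```

The numbers are invented; the point is only that this aggregation pattern produces exactly the symptom described, a gap that widens with each turn, which may help narrow down where to compare LiveKit's collected values against OpenAI's per-response usage.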
b
I collected metrics the way mentioned here, but we are getting wrong data. On the LiveKit side we are getting higher values than OpenAI reports.