witty-kangaroo-6889
08/06/2025, 2:13 AM
tall-airline-16756
08/06/2025, 2:14 AM
witty-kangaroo-6889
08/06/2025, 2:46 AM
tall-airline-16756
08/06/2025, 2:46 AM
Here's what the UsageSummary fields mean for your session:
• llm_prompt_tokens: This counts the tokens in the input prompt you sent to the LLM.
• llm_prompt_cached_tokens: Prompt tokens served from the provider's cache instead of being processed fresh.
• llm_completion_tokens: Tokens that the LLM generated in its response.
• tts_characters_count: The number of characters in the text you fed into the Text-to-Speech (TTS) model.
• tts_audio_duration: How long (in seconds) the audio output from the TTS model is.
• stt_audio_duration: How long (in seconds) the audio input processed by the Speech-to-Text (STT) model is.
These little metrics give you a handy way to keep track of how you’re using the LLM, TTS, and STT features during your session. If anything’s unclear or you want to dive deeper, just let me know—I’m here to help!
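To make the accounting concrete, here's a minimal sketch of how those fields accumulate over a session. Note this mocks the UsageSummary shape described above rather than calling the real livekit-agents API; the field names match the list, but the dataclass and the add_turn helper are illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative mock of the UsageSummary shape described above; the real
# object comes from the LiveKit Agents metrics helpers, not this sketch.
@dataclass
class UsageSummary:
    llm_prompt_tokens: int = 0
    llm_prompt_cached_tokens: int = 0
    llm_completion_tokens: int = 0
    tts_characters_count: int = 0
    tts_audio_duration: float = 0.0   # seconds of synthesized audio
    stt_audio_duration: float = 0.0   # seconds of transcribed audio

def add_turn(summary: UsageSummary, prompt: int, cached: int,
             completion: int, tts_chars: int, tts_sec: float,
             stt_sec: float) -> None:
    """Accumulate one conversational turn's usage into the session summary."""
    summary.llm_prompt_tokens += prompt
    summary.llm_prompt_cached_tokens += cached
    summary.llm_completion_tokens += completion
    summary.tts_characters_count += tts_chars
    summary.tts_audio_duration += tts_sec
    summary.stt_audio_duration += stt_sec

s = UsageSummary()
add_turn(s, prompt=120, cached=40, completion=55,
         tts_chars=210, tts_sec=12.5, stt_sec=4.2)
add_turn(s, prompt=150, cached=90, completion=60,
         tts_chars=180, tts_sec=10.0, stt_sec=3.8)
print(s.llm_prompt_tokens, s.llm_prompt_cached_tokens)  # 270 130
```

The takeaway: each field is a running total across turns, so the session summary grows monotonically as the agent keeps talking.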
Sources: livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/llm.py
witty-kangaroo-6889
08/06/2025, 2:52 AM
tall-airline-16756
08/06/2025, 2:52 AM
A higher-than-expected llm_prompt_tokens count might be happening because of things like tokenization overhead, cached tokens, system prompts, or extra formatting tokens. If you could share a bit more about your input (how it's structured, any special formatting, and whether it includes special characters or templates), that would really help us figure out what's going on.
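As a rough illustration of the overhead sources mentioned above: chat APIs wrap each message in role/format tokens and prepend any system prompt, so the billed prompt tokens exceed a naive count of your user text. The per-message overhead constant and the word-based token heuristic below are illustrative assumptions, not any specific provider's accounting.

```python
# Why llm_prompt_tokens can exceed a naive count of the user's text:
# the system prompt and per-message wrapper tokens are billed too.

TOKENS_PER_MESSAGE_OVERHEAD = 4   # hypothetical wrapper tokens per message
SYSTEM_PROMPT = "You are a helpful voice assistant."

def rough_token_count(text: str) -> int:
    # Crude heuristic: ~1 token per word. Real tokenizers split on
    # subwords and punctuation, so actual counts differ.
    return len(text.split())

def estimated_prompt_tokens(messages: list[str]) -> int:
    full = [SYSTEM_PROMPT] + messages
    return sum(rough_token_count(m) + TOKENS_PER_MESSAGE_OVERHEAD for m in full)

user_only = rough_token_count("What is the weather today?")
billed = estimated_prompt_tokens(["What is the weather today?"])
print(user_only, billed)  # the billed estimate is larger than the raw user text
```

Under these toy numbers, 5 words of user text become 19 estimated prompt tokens once the system prompt and message wrappers are added, which is the kind of gap the answer above is describing.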
Sources: livekit-plugins/livekit-plugins-anthropic/livekit/plugins/anthropic/llm.py | Capturing metrics | LiveKit Docs | tools/tts/tts.cpp
witty-kangaroo-6889
08/06/2025, 2:54 AM
tall-airline-16756
08/06/2025, 2:55 AM