# ask-ai
p
Hi, I'm using LiveKit linked with Langfuse, and I'm also using FallbackAdapter. But when it comes to Langfuse, this is what the model reports: gen_ai.request.model: "unknown". Is there any way to solve this issue without dropping FallbackAdapter?
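(For context, a minimal sketch of the kind of setup being described, assuming livekit-agents' `llm.FallbackAdapter` accepts a list of LLM instances and that the Google/OpenAI plugins are in use; the poster's actual code is not shown in the thread.)

```python
# Hedged sketch of the setup described above, not the poster's actual code.
# Assumes llm.FallbackAdapter takes a list of LLM instances to try in order.
from livekit.agents import llm
from livekit.plugins import google, openai

fallback_llm = llm.FallbackAdapter(
    [
        google.LLM(model="gemini-2.5-flash"),  # primary
        openai.LLM(model="gpt-4o-mini"),       # fallback (illustrative choice)
    ]
)
# Symptom: spans exported to Langfuse show gen_ai.request.model = "unknown".
```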
t
Thanks for reaching out, Ahmed. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Hey there! The issue comes up because the LLM base class in LiveKit has a `model` property that defaults to `"unknown"` unless a plugin sets it explicitly. That value is then used in telemetry spans and reported to Langfuse through attributes like `gen_ai.request.model`. To fix this without leaving the FallbackAdapter, make sure the `model` property is properly set on the LLM instances you're using. Usually this means overriding the `model` property in your LLM plugin or configuration to specify the right model name. If you'd like help overriding this property or getting your setup configured just right, just let me know!

Sources: tests/test_tts_fallback.py
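(A minimal sketch of the override the answer suggests. It assumes the base `llm.LLM` class exposes `model` as a read-only property, as described above, and that `FallbackAdapter` can be subclassed; `NamedFallbackAdapter` and `model_name` are hypothetical names, not LiveKit API.)

```python
# Minimal sketch of the suggested fix, based on the answer above: the base
# llm.LLM class reports model = "unknown" unless something sets it, and the
# FallbackAdapter apparently does not forward the active plugin's name.
from livekit.agents import llm
from livekit.plugins import google, openai


class NamedFallbackAdapter(llm.FallbackAdapter):
    """FallbackAdapter that pins the model name reported to telemetry."""

    def __init__(self, *args, model_name: str, **kwargs) -> None:
        super().__init__(*args, **kwargs)
        self._model_name = model_name

    @property
    def model(self) -> str:
        # Assumption: telemetry reads this property when populating the
        # gen_ai.request.model span attribute sent to Langfuse.
        return self._model_name


fallback_llm = NamedFallbackAdapter(
    [
        google.LLM(model="gemini-2.5-flash"),
        openai.LLM(model="gpt-4o-mini"),
    ],
    model_name="gemini-2.5-flash",
)
```

If the span attribute really does come from this property, as the answer claims, the spans exported to Langfuse should then carry the pinned name while the fallback behavior stays intact.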
p
I've added it in the metadata like this, but still nothing: `metadata.update({"gen_ai.request.model": "gemini-2.5-flash"})`
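(A hedged alternative sketch, assuming the value is ultimately an OpenTelemetry span attribute, as the attribute name suggests: set it on the current span directly using the standard opentelemetry-api. Where to invoke this inside a LiveKit agent is an assumption, not a documented LiveKit hook, and the thread does not confirm this approach works.)

```python
# Hedged alternative sketch: write the attribute onto the active OpenTelemetry
# span directly, since gen_ai.request.model is a span attribute in the export.
# The call site inside a LiveKit agent is an assumption; opentelemetry-api
# itself is standard.
from opentelemetry import trace


def pin_model_attribute(model_name: str = "gemini-2.5-flash") -> None:
    span = trace.get_current_span()
    if span.is_recording():
        # Overwrites whatever the instrumentation set ("unknown" here).
        span.set_attribute("gen_ai.request.model", model_name)
```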
t
I don't have the answer you're looking for. You could also try asking your question:
• in one of the other Slack channels, or
• to https://deepwiki.com/livekit/livekit_composite which is trained on all LiveKit source code
If you find the answer, please post it here to help others!