glamorous-angle-75455
06/06/2025, 12:58 PM
tall-airline-16756
06/06/2025, 12:58 PM
06/06/2025, 12:59 PM
session = AgentSession(
llm=google.LLM(
model="gemini-2.0-flash-exp",
temperature=0.8,
max_output_tokens=1500, # Set your desired maximum token limit here
),
# ... other parameters
)
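Since max_output_tokens is counted in tokens rather than words, a rough heuristic can help size the limit. This sketch is an assumption for back-of-the-envelope estimates only (roughly four characters per token for English text), not the model's real tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic (an assumption, not Gemini's actual tokenizer):
    about four characters of English text per token."""
    return max(1, len(text) // 4)

# Under this heuristic, a 1500-token budget allows roughly
# 6000 characters of generated output.
```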
The max_output_tokens parameter caps the number of tokens the model may generate, which helps control response length. A token is roughly four characters of English text (about three-quarters of a word on average), not a full word. If the parameter is not set, the model's default limit applies. Sources: Google Gemini integration guide | LiveKit Docs | Adjusting Model Parameters | LiveKit Docs
glamorous-angle-75455
06/06/2025, 12:59 PM
tall-airline-16756
06/06/2025, 1:00 PM
from livekit.plugins import google
google_llm = google.LLM(
model="gemini-2.0-flash-exp",
temperature=0.8,
)
For authentication:
• For Google AI: set the api_key parameter or use the GOOGLE_API_KEY environment variable.
• For Vertex AI: set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your service account key file.
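As a minimal sketch of the two options above (the environment variable names come from the docs; the helper function itself is hypothetical, not part of the LiveKit API), you can check which auth path is configured before starting the agent:

```python
import os

def resolve_google_auth() -> str:
    """Hypothetical helper: report which Google auth path is configured."""
    if os.environ.get("GOOGLE_API_KEY"):
        return "google-ai"   # Google AI Studio API key
    if os.environ.get("GOOGLE_APPLICATION_CREDENTIALS"):
        return "vertex-ai"   # Vertex AI service account key file
    return "unconfigured"
```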
You can then use this LLM instance in your LiveKit agent:
session = AgentSession(
llm=google.LLM(
model="gemini-2.0-flash",
),
# ... other components like stt, tts, etc.
)
To install the Google plugin, use the following command:
pip install "livekit-agents[google]~=1.0"
Let me know if you need further assistance! Sources: Google integration guide | LiveKit Docs
glamorous-angle-75455