steep-balloon-41261
03/26/2025, 8:08 AM

tall-airline-16756
03/26/2025, 8:08 AM
The discrepancy between the model and model_id parameter names might be due to differences in the TTS implementation or library version.
Here are some steps to resolve the issue:
1. Verify the exact TTS class you're using. For example, some implementations use model_id instead of model (a quick way to check is shown below).
2. Ensure that your code matches the documentation for the specific version of the ElevenLabs TTS library you have installed. If the documentation specifies model, make sure you're using the latest version of the library.
3. If you're using an older version of the library, update it to the latest version to align with the documentation.
If you continue to face issues, please let me know, and I can assist further.
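As a quick sanity check, you can inspect the constructor signature of the installed plugin to see which keyword it actually accepts. This is only a sketch; it assumes the livekit-plugins-elevenlabs package is installed and importable:
# Sketch: check whether your installed plugin expects "model" or "model_id".
# Update first if needed: pip install -U livekit-plugins-elevenlabs
import inspect

from livekit.plugins import elevenlabs

# Print the TTS constructor signature and look for "model" or "model_id" in the output.
print(inspect.signature(elevenlabs.TTS.__init__))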
Sources: ElevenLabs integration guide | LiveKit Docs | livekit.plugins.elevenlabs API documentation

tall-airline-16756
03/26/2025, 8:09 AM

tall-airline-16756
03/26/2025, 8:10 AM
The language parameter might not be supported in the way you're using it.
Here are some key points to consider:
• The language parameter is only valid when using the "eleven_turbo_v2_5" model. If you're using a different model, this parameter may not be supported.
• If you're not using "eleven_turbo_v2_5", try removing the language parameter from your code (see the sketch after this list for a valid combination).
• Ensure that your ElevenLabs TTS library is updated to the latest version, as older versions might not support this parameter.
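For example, a supported combination would look roughly like this. This is a sketch only; depending on your installed plugin version the model keyword may be model or model_id:
from livekit.plugins import elevenlabs

# language is only honored together with the multilingual "eleven_turbo_v2_5" model;
# with other models, omit the language argument entirely.
tts = elevenlabs.TTS(model="eleven_turbo_v2_5", language="en")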
Let me know if you need further assistance!
Sources: livekit.plugins.elevenlabs API documentation

tall-airline-16756
03/26/2025, 8:11 AM
Here's an example of how to use the model_id and language parameters:
from livekit import agents, rtc
from livekit.agents.tts import SynthesizedAudio
from livekit.plugins import elevenlabs
from typing import AsyncIterable

ctx: agents.JobContext = ...
text_stream: AsyncIterable[str] = ...

audio_source = rtc.AudioSource(44100, 1)
track = rtc.LocalAudioTrack.create_audio_track("agent-audio", audio_source)
await ctx.room.local_participant.publish_track(track)

# Initialize ElevenLabs TTS with model_id and language
tts = elevenlabs.TTS(model_id="eleven_turbo_v2", language="en")
tts_stream = tts.stream()

# Create a task to consume and publish audio frames
ctx.create_task(send_audio(tts_stream))

# Push text into the stream
async for text in text_stream:
    tts_stream.push_text(text)

tts_stream.end_input()
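The snippet above calls a send_audio helper that isn't shown. Here is a minimal sketch of what it could look like, assuming the synthesized audio events carry the frame in a data attribute (the attribute name may differ across plugin versions):
async def send_audio(audio_stream: AsyncIterable[SynthesizedAudio]) -> None:
    # Consume synthesized audio events and push each frame to the published source.
    async for audio in audio_stream:
        await audio_source.capture_frame(audio.data)  # may be audio.frame in newer versions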
Alternatively, for a more detailed initialization:
eleven_tts = elevenlabs.tts.TTS(
    model="eleven_turbo_v2_5",
    voice=elevenlabs.tts.Voice(
        id="EXAVITQu4vr4xnSDxMaL",
        name="Bella",
        category="premade",
        settings=elevenlabs.tts.VoiceSettings(
            stability=0.71,
            similarity_boost=0.5,
            style=0.0,
            use_speaker_boost=True,
        ),
    ),
    language="en",
    streaming_latency=3,
    enable_ssml_parsing=False,
    chunk_length_schedule=[80, 120, 200, 260],
)
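Once configured, the instance is used just like the tts object in the earlier example. A short sketch reusing the same streaming API:
# Stream a single utterance through the configured TTS instance.
stream = eleven_tts.stream()
ctx.create_task(send_audio(stream))
stream.push_text("Hello! This uses the Bella voice with custom settings.")
stream.end_input()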
Let me know if you need further assistance!
Sources: ElevenLabs integration guide | LiveKit Docs | livekit.plugins.elevenlabs API documentation | Working with plugins | LiveKit Docs

tall-airline-16756
03/26/2025, 8:19 AM

cuddly-cartoon-47334
03/26/2025, 10:51 PM