polite-dog-12152
08/14/2025, 10:43 AM

tall-airline-16756
08/14/2025, 10:44 AM
1. Install the Plugin: First, install the LiveKit Agents package with the AWS plugin:
pip install "livekit-agents[aws]~=1.2"
2. Set Up AWS Credentials: Next, set up your AWS credentials by adding them as environment variables in a .env file:
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
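If you're running the agent locally, one way to make sure those variables actually get picked up is python-dotenv. A minimal sketch, assuming you've installed the python-dotenv package:

from dotenv import load_dotenv  # assumes python-dotenv is installed

# Load AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION from the
# .env file into the environment before any AWS client is created.
load_dotenv()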
3. Initialize the Bedrock LLM: Now, use the LLM class from the LiveKit AWS plugin to create an instance with your preferred model:
from livekit.plugins import aws
llm = aws.LLM(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",
    region="us-east-1",
    temperature=0.8,
)
4. Use in AgentSession: Finally, plug the LLM into an AgentSession to get things rolling with your voice AI applications:
from livekit.agents import AgentSession

session = AgentSession(
    llm=llm,
    # Add other components like TTS, STT, etc. as needed
)
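If it helps, here's a rough sketch of what a fuller session might look like. The aws.STT (Amazon Transcribe), aws.TTS (Amazon Polly), and silero.VAD pieces are assumptions about your setup, so swap in whatever components you actually use:

from livekit.agents import AgentSession
from livekit.plugins import aws, silero  # silero is an assumed extra dependency

session = AgentSession(
    llm=llm,                # the Bedrock LLM from the previous step
    stt=aws.STT(),          # Amazon Transcribe, assumed default settings
    tts=aws.TTS(),          # Amazon Polly, assumed default settings
    vad=silero.VAD.load(),  # voice activity detection
)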
If you’re aiming for real-time applications using Amazon Nova Sonic, just install the optional real-time package like this:
pip install "livekit-plugins-aws[realtime]"
Then, you can use the RealtimeModel like so:
from livekit.plugins import aws
realtime_llm = aws.realtime.RealtimeModel(
    voice="tiffany",  # specify the voice name
    region="us-east-1",
    temperature=0.7,
)
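Worth noting: the realtime model takes the place of the separate STT/LLM/TTS stack, since Nova Sonic handles speech in and out itself. A minimal sketch of wiring it up:

from livekit.agents import AgentSession

# Nova Sonic is speech-to-speech, so no separate STT or TTS components are needed.
session = AgentSession(
    llm=realtime_llm,
)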
Just a quick reminder: make sure your AWS credentials are set up correctly and have the right permissions to access Bedrock models. If you want to dive deeper, check out the LiveKit AWS Bedrock Documentation. Happy coding!

polite-dog-12152
08/14/2025, 11:11 AM

tall-airline-16756
08/14/2025, 11:12 AM
1. AWS Credentials: Double-check that your credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) are set up correctly and have the right permissions to access Bedrock (there's a quick sanity-check snippet after this list).
2. Network Connectivity: Make sure there aren’t any network hiccups or firewall blocks stopping the connection to AWS Bedrock. Also, confirm that you’re using the correct AWS region.
3. Model Configuration: Take a quick look at the model name or ARN to ensure it’s valid and accessible. If you’re working with a custom model, verify it’s properly deployed.
4. Bedrock Service Status: It’s a good idea to check the AWS Bedrock status dashboard to see if there are any issues in your region or if throttling errors are popping up.
5. Logs and Errors: Peek into your application logs for any errors during session setup or while processing responses. Keep an eye out for authentication, network, or model-related issues.
6. Session Management: Make sure your Bedrock session is properly initialized and active before sending requests, and handle session restarts gracefully when errors do occur.
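For the credentials, connectivity, and model checks above, a quick boto3 sanity check can save a lot of guesswork. A minimal sketch, assuming boto3 is installed; if this call succeeds, your credentials, region, and Bedrock permissions are all in working order:

import boto3

# If this call succeeds, credentials, region, and Bedrock access all work.
bedrock = boto3.client("bedrock", region_name="us-east-1")  # use your region
models = bedrock.list_foundation_models()["modelSummaries"]
print(f"{len(models)} models accessible, e.g. {models[0]['modelId']}")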
If you’re still running into trouble, just share any error messages or logs you have, and I’d be happy to help you figure it out!

few-dog-48918
08/16/2025, 11:34 PM

tall-airline-16756
08/16/2025, 11:34 PM
It looks like the model meta.llama3-3-70b-instruct-v1:0 can't be used with on-demand throughput right now. No worries though, here are some steps that should help you get it sorted out:
1. Use an Inference Profile: Try sending your request again, but this time include the ID or ARN of an inference profile that covers the model you want (there's a snippet for listing these after this list). Just make sure the inference profile is set up correctly in your AWS Bedrock configuration.
2. Verify Model Configuration: Take a moment to double-check the model ID and confirm it supports the operation you're trying. Some models need specific setups or profiles to work.
3. Check AWS Region: Make sure the AWS region in your settings matches where the model is actually deployed.
4. Update Permissions: Confirm your AWS credentials have the right permissions to access both the model and the inference profiles.
5. Review Logs: Peek at your application logs for any extra clues—sometimes they reveal misconfigurations or missing details.
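As mentioned in step 1, here's a quick way to list the inference profiles available in your account so you can grab the right ARN. A sketch assuming a reasonably recent boto3 with the Bedrock ListInferenceProfiles API:

import boto3

# List inference profiles so you can copy the ARN for your model.
bedrock = boto3.client("bedrock", region_name="us-east-1")  # use your region
for profile in bedrock.list_inference_profiles()["inferenceProfileSummaries"]:
    print(profile["inferenceProfileName"], "->", profile["inferenceProfileArn"])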
If you'd like, I’m here to help you step-by-step with setting up the inference profile or troubleshooting anything else. Just let me know!
Sources: livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/experimental/realti...

few-dog-48918
08/16/2025, 11:35 PM

tall-airline-16756
08/16/2025, 11:35 PM
1. Install the Plugin: First, install the AWS plugin with the Bedrock and realtime extras:
pip install "livekit-plugins-aws[bedrock,realtime]"
2. Set Up AWS Credentials: Next, pop your AWS credentials into a .env file like this:
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=your_region
3. Initialize the LLM with an Inference Profile: Use your inference profile’s ARN to get things rolling:
from livekit.plugins import aws
llm = aws.LLM(
    model="arn:aws:bedrock:REGION:ACCOUNT_ID:inference-profile/PROFILE_NAME",
    region="REGION",  # e.g., "us-east-1"
    temperature=0.8,  # adjust this to tweak response randomness
    tool_choice="auto",  # options: "auto", "required", "none"
)
Just swap out REGION, ACCOUNT_ID, and PROFILE_NAME with your actual AWS region, account ID, and inference profile name.
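For example, for the meta.llama3-3-70b-instruct-v1:0 model from earlier, a filled-in cross-region inference profile might look something like this (the account ID is hypothetical, and the "us." profile ID is an assumption based on the usual cross-region naming):

llm = aws.LLM(
    model="arn:aws:bedrock:us-east-1:123456789012:inference-profile/us.meta.llama3-3-70b-instruct-v1:0",
    region="us-east-1",
)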
4. Integrate with AgentSession: Finally, use the LLM in your session like this:
from livekit.agents import AgentSession

session = AgentSession(
    llm=llm,
    # Feel free to add other components like TTS, STT, and more here!
)
If you want to dive deeper, check out the LiveKit AWS Bedrock documentation. And hey, if you hit any bumps or have questions, just give me a shout—I’m here to help!