# ask-ai
p
how to connect aws bedrock models
t
Thanks for reaching out, arpan. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Hey there! If you’re looking to connect AWS Bedrock models with LiveKit, I’ve got you covered. Just follow these simple steps:

1. Install the AWS Plugin: First up, install the LiveKit AWS plugin using pip:

```bash
pip install "livekit-agents[aws]~=1.2"
```
2. Set Up AWS Credentials: Next, set up your AWS credentials by adding them as environment variables in a `.env` file:

```bash
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
```
3. Initialize the Bedrock LLM: Now, use the `LLM` class from the LiveKit AWS plugin to create an instance with your preferred model:

```python
from livekit.plugins import aws

llm = aws.LLM(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",
    region="us-east-1",
    temperature=0.8,
)
```
4. Use in AgentSession: Finally, plug the LLM into an `AgentSession` to get things rolling with your voice AI applications:

```python
from livekit.agents import AgentSession

session = AgentSession(
    llm=llm,
    # Add other components like TTS, STT, etc. as needed
)
```
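Putting it all together, here's a minimal worker sketch of how that session might run inside a LiveKit agent. Treat it as a sketch rather than a drop-in: the Deepgram STT, AWS (Polly) TTS, and Silero VAD choices below are assumptions, so swap in whatever providers you actually use (and install their plugins), and `load_dotenv()` is only there to pick up the `.env` file from step 2:

```python
from dotenv import load_dotenv

from livekit import agents
from livekit.agents import Agent, AgentSession
from livekit.plugins import aws, deepgram, silero

load_dotenv()  # picks up the AWS_* variables from the .env file in step 2


async def entrypoint(ctx: agents.JobContext):
    # Voice pipeline: the Bedrock LLM from step 3, with assumed STT/TTS/VAD providers.
    session = AgentSession(
        stt=deepgram.STT(),
        llm=aws.LLM(
            model="anthropic.claude-3-5-sonnet-20240620-v1:0",
            region="us-east-1",
        ),
        tts=aws.TTS(),  # Amazon Polly via the same AWS plugin (assumed choice)
        vad=silero.VAD.load(),
    )

    await session.start(
        room=ctx.room,
        agent=Agent(instructions="You are a helpful voice assistant."),
    )
    await ctx.connect()


if __name__ == "__main__":
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))
```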
If you’re aiming for real-time applications using Amazon Nova Sonic, just install the optional realtime extra like this:

```bash
pip install "livekit-plugins-aws[realtime]"
```
Then, you can use the `RealtimeModel` like so:

```python
from livekit.plugins import aws

realtime_llm = aws.realtime.RealtimeModel(
    voice="tiffany",  # Specify the voice name
    region="us-east-1",
    temperature=0.7,
)
```
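And just to show where that goes: since Nova Sonic is a speech-to-speech model, it usually stands in for the separate STT/LLM/TTS pieces when you hand it to `AgentSession`. A quick sketch, using the same assumed settings as above:

```python
from livekit.agents import AgentSession
from livekit.plugins import aws

# Nova Sonic handles audio in and out itself, so no separate STT or TTS is wired in.
session = AgentSession(
    llm=aws.realtime.RealtimeModel(voice="tiffany", region="us-east-1"),
)
```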
Just a quick reminder—make sure your AWS credentials are set up correctly and have the right permissions to access Bedrock models. If you want to dive deeper, check out the LiveKit AWS Bedrock Documentation. Happy coding!
p
it's not giving any response
t
Hey there! If you’re not getting any response from AWS Bedrock models in LiveKit, no worries, let’s walk through some troubleshooting steps together:

1. Authentication and Authorization: First, double-check that your AWS credentials (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) are set up correctly and have the right permissions to access Bedrock (a quick way to sanity-check this is sketched below).
2. Network Connectivity: Make sure there aren’t any network hiccups or firewall blocks stopping the connection to AWS Bedrock. Also, confirm that you’re using the correct AWS region.
3. Model Configuration: Take a quick look at the model name or ARN to ensure it’s valid and accessible. If you’re working with a custom model, verify it’s properly deployed.
4. Bedrock Service Status: It’s a good idea to check the AWS Bedrock status dashboard to see if there are any issues in your region or if throttling errors are popping up.
5. Logs and Errors: Peek into your application logs for any errors during session setup or while processing responses. Keep an eye out for authentication, network, or model-related issues.
6. Session Management: Make sure your Bedrock session is properly initialized and active before sending requests. If errors occur, try handling session restarts smoothly.

If you’re still running into trouble, just share any error messages or logs you have, and I’d be happy to help you figure it out!
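For the credentials and permissions check in step 1, here's a quick sanity-check sketch (assuming `boto3` and `python-dotenv` are installed). It simply lists the foundation models your key can see, so if this fails, the agent won't get a response either:

```python
import boto3
from dotenv import load_dotenv

load_dotenv()  # loads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from your .env

# The "bedrock" client is the control plane (listing models, profiles);
# the agent itself calls "bedrock-runtime" for inference.
bedrock = boto3.client("bedrock", region_name="us-east-1")

for summary in bedrock.list_foundation_models()["modelSummaries"]:
    print(summary["modelId"])
```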
f
i am getting this error: syslog.host:livekit syslog.appname:livekit-rime-deepgram-agent-31ma {"message": "Error in _llm_inference_task\nTraceback (most recent call last):\n File \"/home/appuser/.local/lib/python3.11/site-packages/livekit/plugins/aws/llm.py\", line 215, in _run\n response = await client.converse_stream(**self._opts)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/appuser/.local/lib/python3.11/site-packages/aiobotocore/context.py\", line 36, in wrapper\n return await func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/appuser/.local/lib/python3.11/site-packages/aiobotocore/client.py\", line 424, in _make_api_call\n raise error_class(parsed_response, operation_name)\nbotocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the ConverseStream operation: Invocation of model ID meta.llama3-3-70b-instruct-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model.\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/home/appuser/.local/lib/python3.11/site-packages/livekit/agents/llm/llm.py\", line 177, in _main_task\n return await self._run()\n ^^^^^^^^^^^^^^^^^\n File \"/home/appuser/.local/lib/python3.11/site-packages/livekit/plugins/aws/llm.py\", line 231, in _run\n raise APIConnectionError(\nlivekit.agents._exceptions.APIConnectionError: aws bedrock llm: error generating content: An error occurred (ValidationException) when calling the ConverseStream operation: Invocation of model ID meta.llama3-3-70b-instruct-v1:0 with on-demand throughput isn’t supported. Retry your request with the ID or ARN of an inference profile that contains this model. (body=None, retryable=True)\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/ho
t
It looks like the error you're seeing means the model ID `meta.llama3-3-70b-instruct-v1:0` can't be used with on-demand throughput right now. No worries though, here are some steps that should help you get it sorted out:

1. Use an Inference Profile: Try sending your request again, but this time include the ID or ARN of an inference profile that contains this model (see the sketch just below). Just make sure the inference profile is set up correctly in your AWS Bedrock configuration.
2. Verify Model Configuration: Take a moment to double-check the model ID and confirm it supports the operation you're trying. Some models need specific setups or profiles to work.
3. Check AWS Region: Make sure the AWS region in your settings matches where the model is actually deployed.
4. Update Permissions: Confirm your AWS credentials have the right permissions to access both the model and the inference profiles.
5. Review Logs: Peek at your application logs for any extra clues; sometimes they reveal misconfigurations or missing details.

If you'd like, I’m here to help you step-by-step with setting up the inference profile or troubleshooting anything else. Just let me know!

Sources: livekit-plugins/livekit-plugins-aws/livekit/plugins/aws/experimental/realti...
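For your specific model, the usual fix is to pass a cross-region inference profile ID instead of the bare model ID. Here's a sketch: the `us.`-prefixed ID below follows the normal convention for US cross-region profiles, but please confirm the exact ID under Inference profiles in your Bedrock console:

```python
from livekit.plugins import aws

# The bare ID "meta.llama3-3-70b-instruct-v1:0" isn't supported with on-demand
# throughput, which is exactly what the ValidationException is complaining about.
# A cross-region inference profile ID (assumed here) routes the request for you:
llm = aws.LLM(
    model="us.meta.llama3-3-70b-instruct-v1:0",
    region="us-east-1",
    temperature=0.8,
)
```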
f
how to use inference profile in livekit agent code?
t
Hey there! If you want to use an inference profile in your LiveKit agent code with AWS Bedrock, here’s a friendly step-by-step guide to help you out:

1. Install the LiveKit AWS Plugin: First, make sure you have the plugin installed by running this:

```bash
pip install "livekit-plugins-aws[bedrock,realtime]"
```
2. Set Up AWS Credentials: Next, pop your AWS credentials into a `.env` file like this:

```bash
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=your_region
```
3. Initialize the LLM with an Inference Profile: Use your inference profile’s ARN (or its ID) to get things rolling:

```python
from livekit.plugins import aws

llm = aws.LLM(
    model="arn:aws:bedrock:REGION:ACCOUNT_ID:inference-profile/PROFILE_ID",
    region="REGION",  # e.g., "us-east-1"
    temperature=0.8,  # Adjust this to tweak response randomness
    tool_choice="auto",  # Options: "auto", "required", "none"
)
```

Just swap out `REGION`, `ACCOUNT_ID`, and `PROFILE_ID` with your actual AWS region, account ID, and inference profile ID.

4. Integrate with AgentSession: Finally, use the LLM in your session like this:
```python
from livekit.agents import AgentSession

session = AgentSession(
    llm=llm,
    # Feel free to add other components like TTS, STT, and more here!
)
```
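If you're not sure which ID or ARN to drop into the `model` field, here's one way to look it up from Python. This is a sketch: `list_inference_profiles` is a Bedrock control-plane call available in recent boto3 releases, so upgrade boto3 if it's missing:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Print the inference profiles your account can use, highlighting the
# Llama 3.3 70B one from the earlier error.
for profile in bedrock.list_inference_profiles()["inferenceProfileSummaries"]:
    if "llama3-3-70b" in profile["inferenceProfileId"]:
        print(profile["inferenceProfileId"], "->", profile["inferenceProfileArn"])
```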
If you want to dive deeper, check out the LiveKit AWS Bedrock documentation. And hey, if you hit any bumps or have questions, just give me a shout—I’m here to help!