# ask-ai
When I use OpenAI as the LLM with my `OPENAI_API_KEY`, does it use the existing RAG data we have uploaded, or does it create separate data for it?
Thanks for reaching out, Vivek. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
Hi there! I wanted to share a little clarification about using OpenAI as the LLM with your `OPENAI_API_KEY` in LiveKit. It doesn't automatically handle or create separate RAG (Retrieval-Augmented Generation) data for you. To make RAG work, you'll need to manage and integrate that data explicitly within your agent's logic. Here's a friendly guide on what you'd need to do to use RAG:

- First, build a knowledge base by creating vector embeddings from your documents and storing them in a vector index (tools like Annoy can be really helpful here!).
- Then, during runtime, retrieve relevant data by performing a similarity search against your pre-built index, which helps enrich the prompts you send to the LLM.

This approach keeps your RAG data nicely separate from the OpenAI model's internal knowledge, giving you full control within your application. If you have any questions or want to dive deeper, I'm here to help!

Sources: OpenAI LLM integration guide | LiveKit Docs | rag/rag_db_builder.py | agent_extensions/agent_extensions/rag/builder.py
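The two steps above (embed documents into an index up front, then do a similarity search at runtime to enrich the prompt) can be sketched roughly as follows. This is a minimal, self-contained illustration, not LiveKit's actual `rag_db_builder` code: the `embed` function here is a toy stand-in for a real embedding model (e.g. an OpenAI embeddings call), and the brute-force cosine search stands in for an Annoy index.

```python
import math

def embed(text: str, dim: int = 8) -> list:
    """Toy stand-in for a real embedding model: hash characters
    into a fixed-size vector and L2-normalise it."""
    vec = [0.0] * dim
    for i, ch in enumerate(text.lower()):
        vec[i % dim] += ord(ch)
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list, b: list) -> float:
    # Vectors are normalised, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Step 1 (offline): build the knowledge base by embedding each
# document once and storing (vector, text) pairs as the "index".
documents = [
    "LiveKit agents can stream audio and video in real time.",
    "RAG enriches LLM prompts with retrieved context.",
    "Annoy builds approximate nearest-neighbour indexes.",
]
index = [(embed(doc), doc) for doc in documents]

# Step 2 (runtime): retrieve the most similar documents for a user
# query and prepend them as context to the prompt sent to the LLM.
def build_prompt(query: str, top_k: int = 1) -> str:
    query_vec = embed(query)
    ranked = sorted(index, key=lambda item: cosine(item[0], query_vec),
                    reverse=True)
    context = "\n".join(doc for _, doc in ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I add retrieval to my agent?")
print(prompt)
```

In a real agent you would swap `embed` for actual embedding-API calls, persist the vectors in an Annoy (or similar) index, and pass the enriched prompt to the OpenAI chat completion as extra context; the key point is that this retrieval layer lives entirely in your application, separate from the model itself.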