LiveKit-server curl https://e1.local returns OK ...
# ask-ai
s
LiveKit-server: curl https://e1.local returns OK. How to open a GUI or manage projects/rooms?
t
Thanks for reaching out, Oskar. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
It seems that LiveKit does not provide a built-in GUI for managing servers, projects, or rooms. Instead, you can interact with LiveKit using command-line tools, APIs, or SDKs. For example:
• Use the command-line interface to connect to rooms with commands like `connect` and parameters such as `--room`, `--url`, `--api-key`, and `--api-secret`.
• Leverage the RoomServiceClient API to programmatically create, list, and delete rooms (see the Python sketch below).
• Explore SDKs (e.g., for Go) to integrate LiveKit functionality into your applications.
If you need further assistance or have specific requirements, feel free to let me know!
Sources: Managing rooms | LiveKit Docs | lksdk package - github.com/livekit/server-sdk-go/v2 - Go Packages
s
Why does LiveKit.ai have a GUI? Is it not available in a self-hosted Kubernetes installation?
t
It appears that the GUI available on LiveKit.ai might be specific to their hosted service. For self-hosted installations, such as those on Kubernetes, a GUI does not seem to be included by default based on the available information. Self-hosted deployments typically rely on APIs, SDKs, or command-line tools for management and interaction. If you are looking for a similar GUI experience, you may need to develop a custom interface or explore third-party tools that integrate with LiveKit. If you have further questions or need assistance with your self-hosted setup, feel free to ask!
s
Show me an example of Python code to use the LiveKit server.
t
Here is an example Python snippet that uses the LiveKit Agents framework to run an agent against a LiveKit server:
```python
import asyncio
from dotenv import load_dotenv
from livekit.agents import (
    Agent,
    AgentSession,
    JobContext,
    WorkerOptions,
    WorkerType,
    cli,
)
from livekit.plugins import openai

# Load environment variables
load_dotenv()

async def entrypoint(ctx: JobContext):
    # Connect to the LiveKit room
    await ctx.connect()
    
    # Create an agent session with OpenAI model
    session = AgentSession(
        llm=openai.realtime.RealtimeModel(voice="alloy"),
    )
    
    # Start the agent in the room
    await session.start(
        agent=Agent(instructions="Talk to me!"),
        room=ctx.room,
    )

if __name__ == "__main__":
    cli.run_app(WorkerOptions(entrypoint_fnc=entrypoint, worker_type=WorkerType.ROOM))
```
To run this code, you need to set up environment variables in a `.env` file:
```bash
# LiveKit config
export LIVEKIT_API_KEY="..."
export LIVEKIT_API_SECRET="..."
export LIVEKIT_URL="..."
# OpenAI config
export OPENAI_API_KEY="..."
```
Then, execute the script with:
```bash
python your_script_name.py dev
```
If you need further assistance or have specific requirements, feel free to ask!
s
Why the OpenAI key?
t
The OpenAI API key is required in the provided Python example because it integrates OpenAI's services with LiveKit. Specifically, the example uses OpenAI's Realtime model (`openai.realtime.RealtimeModel`) as the agent's voice model; other LiveKit integrations use OpenAI for text-to-speech (TTS), speech-to-text (STT), or LLM inference. To enable these features, the OpenAI API key is needed to authenticate with OpenAI's services. If you do not plan to use OpenAI integrations, you can modify the code to exclude these dependencies (see the sketch below). If you need help customizing the example or have further questions, feel free to ask!
Sources: OpenAI LLM integration guide | LiveKit Docs | OpenAI TTS integration guide | LiveKit Docs | OpenAI STT integration guide | LiveKit Docs | OpenAI Realtime API integration guide | LiveKit Docs
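If you only want to verify connectivity to your self-hosted server without any OpenAI dependency, a minimal sketch along these lines can be used instead. It assumes the livekit and livekit-api Python packages are installed and that LIVEKIT_URL, LIVEKIT_API_KEY, and LIVEKIT_API_SECRET are set in your environment; the room name and participant identity are arbitrary placeholders:
```python
import asyncio
import os

from livekit import api, rtc


async def main():
    url = os.environ["LIVEKIT_URL"]

    # Mint an access token locally from the server's API key and secret.
    token = (
        api.AccessToken(os.environ["LIVEKIT_API_KEY"], os.environ["LIVEKIT_API_SECRET"])
        .with_identity("connectivity-test")  # arbitrary participant identity
        .with_grants(api.VideoGrants(room_join=True, room="test-room"))
        .to_jwt()
    )

    # Join the room as a plain participant; no agent or OpenAI involved.
    room = rtc.Room()
    await room.connect(url, token)
    print("connected to room:", room.name)

    await room.disconnect()


if __name__ == "__main__":
    asyncio.run(main())
```
Running this from any machine that can reach the server (for example, a macOS laptop) is a quick end-to-end test of the deployment.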
s
What GUI can I add to my Kubernetes Helm deployment on my AKS cluster?
t
LiveKit provides a Helm chart for deploying to Kubernetes environments, which can be adapted for your AKS (Azure Kubernetes Service) cluster. Here are some key points about the deployment:
• The Helm chart sets up a distributed deployment, including a Service and Ingress for routing traffic.
• LiveKit pods require direct access to the network with host networking, meaning the RTC UDP/TCP ports on nodes are directly handled by the LiveKit server. This limits deployment to one LiveKit pod per node.
• TLS/SSL termination is managed by the Ingress, and the Helm chart configures this for supported load balancers.
• LiveKit supports graceful restarts during upgrades, allowing instances to drain before shutting down.
• Serverless or private clusters are not supported due to additional layers of NAT, which are unsuitable for WebRTC traffic.
While the Helm chart is designed for platforms like GKE, EKS, and DOKS, it can serve as a guide for deploying on AKS. If you need further assistance or specific guidance for AKS, feel free to ask!
Sources: Deploy to Kubernetes | LiveKit Docs
s
yes, where is the GUI?
t
Currently, there doesn't appear to be a built-in GUI provided by LiveKit for managing or configuring your Kubernetes Helm deployment. The deployment is typically managed through command-line tools, APIs, or SDKs. If you are looking for a GUI, you might consider integrating third-party Kubernetes management tools like Lens or Rancher, which can provide a graphical interface for managing your AKS cluster and deployed applications. However, these tools would not be specific to LiveKit but could help with general Kubernetes management. If you have further questions or need assistance setting up such tools, feel free to ask!
s
How to test a LiveKit server installed on AKS with Python code on macOS?
t
I'm sorry, but I don't have the information you need at the moment. If you would like a member of the LiveKit team to respond, please ask your question in one of the other channels.
s
How to create a project in LiveKit server if I do not have any GUI?