# ask-ai
d
I am following the AI Agent starter, which contains voice-pipeline-agent-python for the server and voice-assistant-frontend for the frontend. How do I send custom data, such as the user's first name, to the server? Please provide example code.
g
To send custom data like a user's first name from the frontend to the server in a LiveKit setup, you can use the `publishData` method on the `localParticipant` object. This method allows you to send arbitrary data messages to other participants in the same room. Here's a basic example:

Frontend (JavaScript/React)
```javascript
import { useRoomContext } from '@livekit/components-react';

// useRoomContext is a React hook, so it must be called from a component
// rendered inside <LiveKitRoom>, not at module scope.
function MyComponent() {
  const room = useRoomContext();

  function sendUserData(firstName) {
    const data = JSON.stringify({ type: 'user_info', firstName });
    room.localParticipant.publishData(
      new TextEncoder().encode(data),
      { reliable: true, topic: 'user_data' }
    );
  }

  // ... call sendUserData from an event handler or effect
}
```
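By default, `publishData` broadcasts to every participant in the room. If you only want the agent to receive the message, recent versions of livekit-client also accept a `destinationIdentities` option in the publish options. A minimal sketch, where 'my-agent' is a placeholder for your agent's actual identity:

```javascript
// Send only to the agent instead of broadcasting to the whole room.
// 'my-agent' is a placeholder; use your agent's real participant identity.
room.localParticipant.publishData(
  new TextEncoder().encode(data),
  { reliable: true, topic: 'user_data', destinationIdentities: ['my-agent'] }
);
```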
Server (Python)

On the server side, you can listen for these data messages using the room's `data_received` event. Here's an example of how you might handle this in your Python agent:
```python
from livekit import rtc
from livekit.agents import AutoSubscribe, JobContext


async def entrypoint(ctx: JobContext):
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)

    # Handlers registered with room.on should be synchronous; the event
    # delivers a single rtc.DataPacket argument.
    @ctx.room.on("data_received")
    def on_data_received(packet: rtc.DataPacket):
        message = packet.data.decode("utf-8")
        print(f"Received data from {packet.participant.identity}: {message}")

    # Wait for the first participant to connect
    participant = await ctx.wait_for_participant()
    print(f"Connected to room {ctx.room.name} with participant {participant.identity}")
```
This setup allows you to send custom data from the frontend to the server, where it can be processed as needed. Would you like more details on any specific part of this process?

Sources:
- Data messages | LiveKit Docs
- Slack discussion on sending data
d
@millions-hair-24956 Here’s my client code:
```tsx
"use client";

import { AnimatePresence, motion } from "framer-motion";
import {
    LiveKitRoom,
    useVoiceAssistant,
    BarVisualizer,
    RoomAudioRenderer,
    VoiceAssistantControlBar,
    AgentState,
    DisconnectButton,
} from "@livekit/components-react";
import { useCallback, useEffect, useState } from "react";
import { MediaDeviceFailure } from "livekit-client";
import type { ConnectionDetails } from "./api/connection-details/route";
import { NoAgentNotification } from "@/components/NoAgentNotification";
import { CloseIcon } from "@/components/CloseIcon";
import { useKrispNoiseFilter } from "@livekit/components-react/krisp";

export default function Page() {
    const [connectionDetails, updateConnectionDetails] = useState<ConnectionDetails | undefined>(
        undefined
    );
    const [agentState, setAgentState] = useState<AgentState>("disconnected");

    const onConnectButtonClicked = useCallback(async () => {
        // Generate room connection details, including:
        //   - A random Room name
        //   - A random Participant name
        //   - An Access Token to permit the participant to join the room
        //   - The URL of the LiveKit server to connect to
        //
        // In real-world application, you would likely allow the user to specify their
        // own participant name, and possibly to choose from existing rooms to join.

        const url = new URL(
            process.env.NEXT_PUBLIC_CONN_DETAILS_ENDPOINT ?? "/api/connection-details",
            window.location.origin
        );
        const response = await fetch(url.toString());
        const connectionDetailsData = await response.json();
        updateConnectionDetails(connectionDetailsData);
    }, []);

    return (
        <main data-lk-theme="default" className="h-full grid content-center bg-[var(--lk-bg)]">
            {/* <p>{JSON.stringify(connectionDetails, null, 4)}</p> */}
            <LiveKitRoom
                token={connectionDetails?.participantToken}
                serverUrl={connectionDetails?.serverUrl}
                connect={connectionDetails !== undefined}
                audio={true}
                video={false}
                onMediaDeviceFailure={onDeviceFailure}
                onDisconnected={() => {
                    updateConnectionDetails(undefined);
                }}
                className="grid grid-rows-[2fr_1fr] items-center"
            >
                <SimpleVoiceAssistant onStateChange={setAgentState} />
                <ControlBar
                    onConnectButtonClicked={onConnectButtonClicked}
                    agentState={agentState}
                />
                <RoomAudioRenderer />
                <NoAgentNotification state={agentState} />
            </LiveKitRoom>
        </main>
    );
}

function SimpleVoiceAssistant(props: { onStateChange: (state: AgentState) => void }) {
    const { state, audioTrack } = useVoiceAssistant();
    useEffect(() => {
        props.onStateChange(state);
    }, [props, state]);
    return (
        <div className="h-[300px] max-w-[90vw] mx-auto">
            <BarVisualizer
                state={state}
                barCount={5}
                trackRef={audioTrack}
                className="agent-visualizer"
                options={{ minHeight: 24 }}
            />
        </div>
    );
}

function ControlBar(props: { onConnectButtonClicked: () => void; agentState: AgentState }) {
    /**
     * Use Krisp background noise reduction when available.
     * Note: This is only available on Scale plan, see {@link https://livekit.io/pricing | LiveKit Pricing} for more details.
     */
    const krisp = useKrispNoiseFilter();
    useEffect(() => {
        krisp.setNoiseFilterEnabled(true);
    }, []);

    return (
        <div className="relative h-[100px]">
            <AnimatePresence>
                {props.agentState === "disconnected" && (
                    <motion.button
                        initial={{ opacity: 0, top: 0 }}
                        animate={{ opacity: 1 }}
                        exit={{ opacity: 0, top: "-10px" }}
                        transition={{ duration: 1, ease: [0.09, 1.04, 0.245, 1.055] }}
                        className="uppercase absolute left-1/2 -translate-x-1/2 px-4 py-2 bg-white text-black rounded-md"
                        onClick={() => props.onConnectButtonClicked()}
                    >
                        Start a conversation
                    </motion.button>
                )}
            </AnimatePresence>
            <AnimatePresence>
                {props.agentState !== "disconnected" && props.agentState !== "connecting" && (
                    <motion.div
                        initial={{ opacity: 0, top: "10px" }}
                        animate={{ opacity: 1, top: 0 }}
                        exit={{ opacity: 0, top: "-10px" }}
                        transition={{ duration: 0.4, ease: [0.09, 1.04, 0.245, 1.055] }}
                        className="flex h-8 absolute left-1/2 -translate-x-1/2  justify-center"
                    >
                        <VoiceAssistantControlBar controls={{ leave: false }} />
                        <DisconnectButton>
                            <CloseIcon />
                        </DisconnectButton>
                    </motion.div>
                )}
            </AnimatePresence>
        </div>
    );
}

function onDeviceFailure(error?: MediaDeviceFailure) {
    console.error(error);
    alert(
        "Error acquiring camera or microphone permissions. Please make sure you grant the necessary permissions in your browser and reload the tab"
    );
}
```
Here's my server code:
```python
import logging

from dotenv import load_dotenv
from livekit.agents import (
    AutoSubscribe,
    JobContext,
    JobProcess,
    WorkerOptions,
    cli,
    llm,
    metrics,
)
from livekit.agents.pipeline import VoicePipelineAgent
from livekit.plugins import cartesia, openai, deepgram, silero, turn_detector


load_dotenv(dotenv_path=".env.local")
logger = logging.getLogger("voice-agent")


def prewarm(proc: JobProcess):
    proc.userdata["vad"] = silero.VAD.load()


async def entrypoint(ctx: JobContext):
    initial_ctx = llm.ChatContext().append(
        role="system",
        text=(
            "You are a voice assistant created by LiveKit. Your interface with users will be voice. "
            "You should use short and concise responses, and avoiding usage of unpronouncable punctuation. "
            "You were created as a demo to showcase the capabilities of LiveKit's agents framework."
        ),
    )

    logger.info(f"connecting to room {ctx.room.name}")
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)

    # Wait for the first participant to connect
    participant = await ctx.wait_for_participant()
    logger.info(f"starting voice assistant for participant {participant.identity}")

    # This project is configured to use Deepgram STT, OpenAI LLM and Cartesia TTS plugins
    # Other great providers exist like Cerebras, ElevenLabs, Groq, Play.ht, Rime, and more
    # Learn more and pick the best one for your app:
    # https://docs.livekit.io/agents/plugins
    agent = VoicePipelineAgent(
        vad=ctx.proc.userdata["vad"],
        stt=deepgram.STT(),
        llm=openai.LLM(model="gpt-4o-mini"),
        tts=cartesia.TTS(),
        turn_detector=turn_detector.EOUModel(),
        # minimum delay for endpointing, used when turn detector believes the user is done with their turn
        min_endpointing_delay=0.5,
        # maximum delay for endpointing, used when turn detector does not believe the user is done with their turn
        max_endpointing_delay=5.0,
        chat_ctx=initial_ctx,
    )

    usage_collector = metrics.UsageCollector()

    @agent.on("metrics_collected")
    def on_metrics_collected(agent_metrics: metrics.AgentMetrics):
        metrics.log_metrics(agent_metrics)
        usage_collector.collect(agent_metrics)

    agent.start(ctx.room, participant)

    # The agent should be polite and greet the user when it joins :)
    await agent.say("Hey, how can I help you today?", allow_interruptions=True)


if __name__ == "__main__":
    cli.run_app(
        WorkerOptions(
            entrypoint_fnc=entrypoint,
            prewarm_fnc=prewarm,
        ),
    )
```
How do I send custom data from the client to the server?
g
To send custom data, such as a user's first name, from the client to the server in a LiveKit setup, you can use the `publishData` method on the `localParticipant` object. This method allows you to send arbitrary data messages to other participants in the same room. Here's how you can implement it:

Client Code (JavaScript/React)

You can modify your client code to send custom data using the `publishData` method. Here's an example of how you might send a user's first name:
```tsx
import { useRoomContext, useConnectionState } from "@livekit/components-react";
import { ConnectionState } from "livekit-client";
import { useEffect } from "react";

// Render this inside <LiveKitRoom> so the hooks can find the room.
function UserInfoPublisher({ firstName }: { firstName: string }) {
    const room = useRoomContext();
    const connectionState = useConnectionState(room);

    useEffect(() => {
        // Wait until the room is actually connected before publishing.
        if (connectionState !== ConnectionState.Connected) return;
        const data = JSON.stringify({ type: "user_info", firstName });
        room.localParticipant.publishData(new TextEncoder().encode(data), {
            reliable: true,
            topic: "user_data",
        });
    }, [connectionState, room, firstName]);

    return null;
}
```
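You can then render this component inside your existing `<LiveKitRoom>` alongside the other children. A minimal sketch against your Page component, where the literal "John" is just a placeholder for wherever the name actually comes from in your app:

```tsx
<LiveKitRoom
    token={connectionDetails?.participantToken}
    serverUrl={connectionDetails?.serverUrl}
    connect={connectionDetails !== undefined}
    audio={true}
    video={false}
>
    <UserInfoPublisher firstName="John" />
    {/* ...existing children: SimpleVoiceAssistant, ControlBar, etc. */}
</LiveKitRoom>
```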
Server Code (Python)

On the server side, you can listen for these data messages using the room's `data_received` event. Here's an example of how you might handle this in your Python agent:
```python
from livekit import rtc
from livekit.agents import AutoSubscribe, JobContext


async def entrypoint(ctx: JobContext):
    await ctx.connect(auto_subscribe=AutoSubscribe.AUDIO_ONLY)

    # Handlers registered with room.on should be synchronous; the event
    # delivers a single rtc.DataPacket argument.
    @ctx.room.on("data_received")
    def on_data_received(packet: rtc.DataPacket):
        message = packet.data.decode("utf-8")
        print(f"Received data from {packet.participant.identity}: {message}")

    # Wait for the first participant to connect
    participant = await ctx.wait_for_participant()
    print(f"Connected to room {ctx.room.name} with participant {participant.identity}")
```
This setup allows you to send custom data from the frontend to the server, where it can be processed as needed. Would you like more details on any specific part of this process?

Sources:
- Data messages | LiveKit Docs