important-hair-38324
08/19/2025, 3:02 AM
tall-airline-16756
08/19/2025, 3:02 AM
These audio processing features can be toggled through the AudioCaptureOptions struct. Just a heads-up, though: it only lets you enable or disable them, not adjust how strong or intense they are.
Sources: Runtime/Scripts/Room/Track/Options.cs
important-hair-38324
08/19/2025, 3:03 AM
tall-airline-16756
08/19/2025, 3:03 AM
On the web side, you could use the browser's voiceIsolation constraint (where it's available) or even create your own custom audio processing with the Web Audio API. Just let me know if you want a hand exploring these options. I'm happy to help!
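As a tiny illustration of what "custom audio processing" could mean (the function name here is my own, not part of any LiveKit or browser API), a noise gate over a raw sample buffer might look like this. In a real app this logic would live inside an AudioWorkletProcessor; it's written as a pure function here so the idea is easy to see:

```javascript
// Hypothetical sketch of custom processing: a simple noise gate that
// silences samples whose absolute amplitude falls below a threshold.
// Inside an AudioWorkletProcessor you would apply this per render quantum.
function noiseGate(samples, threshold = 0.02) {
  return samples.map((s) => (Math.abs(s) < threshold ? 0 : s));
}

// Quiet samples are zeroed, louder ones pass through untouched:
// processed is [0.5, 0, -0.3, 0]
const processed = noiseGate([0.5, 0.01, -0.3, -0.005], 0.02);
```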
Sources: packages/livekit-rtc/src/proto/track_pb.ts | libwebrtc/src/native/audio_source.rs | lib/src/proto/livekit_models.pbjson.dart | src/room/track/LocalTrackPublication.ts
important-hair-38324
08/19/2025, 3:03 AM
important-hair-38324
08/19/2025, 3:05 AM
tall-airline-16756
08/19/2025, 3:05 AM
You can use a PannerNode from the Web Audio API to create some cool spatial audio effects.
Here’s a simple way to get started:
1. First, create an AudioContext by calling new AudioContext().
2. Then, attach this AudioContext to your track using the setAudioContext method on the RemoteAudioTrack.
3. Next, create and set up a PannerNode to position your audio source in 3D space.
4. Finally, adjust the position of the PannerNode dynamically depending on where you want the sound to come from.
If you want to dive deeper, LiveKit has a great spatial audio tutorial and example repo that you can check out. And of course, feel free to ask if you want me to walk you through some code or help out with implementation!
Sources: src/room/participant/RemoteParticipant.ts | Using WebRTC + React + WebAudio to create spatial audio | src/controller/SpatialAudioController.tsx | Audio rendering with React Components | LiveKit Docs
important-hair-38324
08/19/2025, 3:06 AM
tall-airline-16756
08/19/2025, 3:06 AM
On the Unity side, the main built-in tool is the SetVolume() method on RemoteAudioTrack, which helps simulate spatial audio by making sounds quieter the farther away they are.
If you’re looking for more advanced panning or full 3D spatialization, you’d want to add some custom logic on the JavaScript/WebAudio side since the Unity WebGL SDK connects through a bridge to WebRTC and WebAudio for handling audio. Feel free to reach out if you want me to walk you through it or share more details—I’m happy to help!
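If you go the SetVolume() route, one reasonable choice is to borrow the Web Audio "inverse" distance model so the volume curve matches what a PannerNode would do. A sketch (the helper name and parameter defaults are my own choices, not SDK API):

```javascript
// Gain per the Web Audio "inverse" distance model:
//   gain = refDistance / (refDistance + rolloffFactor * (d - refDistance))
// where d is the distance clamped to at least refDistance (no boost up close).
// The result could be fed to RemoteAudioTrack.SetVolume() on the Unity side.
function inverseDistanceGain(distance, refDistance = 1, rolloffFactor = 1) {
  const d = Math.max(distance, refDistance);
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}

console.log(inverseDistanceGain(1));  // 1 (full volume at the reference distance)
console.log(inverseDistanceGain(3));  // ~0.33
console.log(inverseDistanceGain(11)); // ~0.09
```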
Sources: unity-project/ProjectSettings/boot.config | Runtime/Scripts/Room/Track/Track.cs | Real-time audio and video in the Metaverse
important-hair-38324
08/19/2025, 3:08 AM
tall-airline-16756
08/19/2025, 3:08 AM
The approach is to pass position data from Unity to the JavaScript side, where the PannerNode works its magic to create spatial effects. Here's a quick rundown:
1. Use the RemoteAudioTrack in Unity to handle your audio tracks.
2. Extend the Unity-WebGL bridge to send position data to JavaScript using JSNative.CallMethod.
3. Then, in JavaScript, update the PannerNode's positionX and positionZ values based on the positions coming from Unity to control the audio direction.
Although the SDK doesn’t offer a direct stereo pan setting, this method lets you achieve spatial audio panning nicely. If you want, I’d be happy to help with more details or share some code examples to get you started!
Sources: Real-time audio and video in the Metaverse | Using WebRTC + React + WebAudio to create spatial audio | Subscribing to tracks | LiveKit Docs
important-hair-38324
08/19/2025, 3:09 AM
tall-airline-16756
08/19/2025, 3:09 AM
1. Send positions from Unity: Use JSNative.CallMethod to send positional data from Unity over to JavaScript. For instance, create a method in Unity that passes the position of the audio source like this:
JSNative.CallMethod("updateAudioPosition", new object[] { audioSourceId, position.x, position.y, position.z });
2. Implement the JavaScript function: In your WebGL template or JavaScript file, define the updateAudioPosition function so it updates the PannerNode for the right audio track:
function updateAudioPosition(audioSourceId, x, y, z) {
const audioTrack = audioTracks[audioSourceId]; // Grab the audio track by ID
if (audioTrack && audioTrack.pannerNode) {
audioTrack.pannerNode.positionX.setValueAtTime(x, audioContext.currentTime);
audioTrack.pannerNode.positionY.setValueAtTime(y, audioContext.currentTime);
audioTrack.pannerNode.positionZ.setValueAtTime(z, audioContext.currentTime);
}
}
3. Attach the PannerNode to the audio track: When you're setting up the audio track in JavaScript, create a PannerNode and connect it within the WebAudio graph like this:
const audioContext = new AudioContext();
const pannerNode = audioContext.createPanner();
pannerNode.panningModel = "HRTF"; // This gives you realistic spatial audio
pannerNode.distanceModel = "inverse"; // Feel free to tweak this based on what you need
const sourceNode = audioContext.createMediaStreamSource(audioTrack.stream);
sourceNode.connect(pannerNode).connect(audioContext.destination);
// Keep the pannerNode handy for updates later
audioTracks[audioSourceId] = { pannerNode, sourceNode };
4. Update positions dynamically: Keep sending updated positions from Unity to JavaScript as your audio source or listener moves around.
This setup should help you get spatial audio panning going smoothly by combining Unity's positional data with the WebAudio API's processing power. If anything's unclear or you want more examples, just let me know. I'm here to help!
important-hair-38324
08/19/2025, 3:11 AM
tall-airline-16756
08/19/2025, 3:12 AM
You can create an audio track in C# with the LocalAudioTrack.CreateAudioTrack method. Here's a quick example:
var track = LocalAudioTrack.CreateAudioTrack("my-audio-track", rtcSource, room);
Now, while you can easily adjust the volume of remote audio tracks using track.SetVolume(float volume), the SDK doesn't offer a direct way to control panning in C#. To get that panning effect, you'd want to use the Unity-WebGL bridge to send positional data over to JavaScript. From there, the WebAudio API's PannerNode can take care of the spatial audio effects for you. If you'd like, I'm happy to help you set this up. Just let me know!
Sources: LivekitUnitySampleApp/Assets/LivekitSamples.cs | Runtime/Scripts/Room/Track/TSInterop/IAudioTrack.cs
important-hair-38324
08/19/2025, 3:13 AM
tall-airline-16756
08/19/2025, 3:13 AM
You can create the track in C# and then attach a PannerNode to it using JavaScript. Here's a friendly step-by-step on how to do this:
1. Create the audio track in C#: Use the LocalAudioTrack.CreateAudioTrack method in Unity like this:
var track = LocalAudioTrack.CreateAudioTrack("my-audio-track", rtcSource, room);
2. Send the track ID over to JavaScript: You can use the Unity-WebGL bridge to pass the track ID. For example:
JSNative.CallMethod("attachPannerNode", new object[] { track.Id });
3. Attach the PannerNode in JavaScript: Here's how you can set up the attachPannerNode function to create and connect a PannerNode to your audio track:
const audioContext = new AudioContext(); // One shared context, so position updates can reference it too

function attachPannerNode(trackId) {
const audioTrack = audioTracks[trackId]; // Grab the audio track by its ID
if (audioTrack) {
const pannerNode = audioContext.createPanner();
pannerNode.panningModel = "HRTF";
pannerNode.distanceModel = "inverse";
const sourceNode = audioContext.createMediaStreamSource(audioTrack.stream);
sourceNode.connect(pannerNode).connect(audioContext.destination);
// Keep a reference to the pannerNode for updates later
audioTrack.pannerNode = pannerNode;
}
}
4. Update the PannerNode position dynamically: You can send position updates from Unity to JavaScript and adjust the PannerNode like this:
function updateAudioPosition(trackId, x, y, z) {
const audioTrack = audioTracks[trackId];
if (audioTrack && audioTrack.pannerNode) {
audioTrack.pannerNode.positionX.setValueAtTime(x, audioContext.currentTime);
audioTrack.pannerNode.positionY.setValueAtTime(y, audioContext.currentTime);
audioTrack.pannerNode.positionZ.setValueAtTime(z, audioContext.currentTime);
}
}
This way, you get the best of both worlds: Unity's C# for creating and managing your audio tracks, plus the WebAudio API's cool spatial audio features. If you want me to walk you through any part or have more questions, just let me know. I'm here to help!
important-hair-38324
08/19/2025, 3:19 AM
tall-airline-16756
08/19/2025, 3:19 AM
Out of the box, there's no built-in way to attach a PannerNode to a subscribed audio track. The SDK takes care of audio playback using the Track.Attach() method, and for spatial audio effects, it usually adjusts volume with track.SetVolume() based on where things are positioned.
If you're looking for more advanced spatial audio features like panning, you'd need to tweak the Unity-WebGL bridge a bit to send the subscribed track's ID over to JavaScript. From there, you can use the Web Audio API to attach a PannerNode to that track. If you want, I'm happy to help you figure out how to set this up. Just let me know!
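For that to work, the JavaScript side needs somewhere to look tracks up by ID when the bridge calls arrive. A minimal registry sketch (all names here are mine, not part of the SDK; this is the `audioTracks` object the earlier snippets read from):

```javascript
// Hypothetical registry mapping track IDs to their WebAudio plumbing,
// filled in when a track is subscribed and cleaned up when it ends.
const audioTracks = {};

function registerTrack(trackId, entry) {
  audioTracks[trackId] = entry; // e.g. { stream, pannerNode, sourceNode }
}

function unregisterTrack(trackId) {
  const entry = audioTracks[trackId];
  if (entry && entry.pannerNode) {
    entry.pannerNode.disconnect(); // detach from the audio graph
  }
  delete audioTracks[trackId];
}

registerTrack("TR_example", { stream: null, pannerNode: null });
// audioTracks["TR_example"] now exists; unregisterTrack removes it again.
```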
Sources: Real-time audio and video in the Metaverse | Subscribing to tracks | LiveKit Docs | unity-project/Assets/Scripts/GameManager.cs | src/controller/SpatialAudioController.tsx | Using WebRTC + React + WebAudio to create spatial audio
important-hair-38324
08/19/2025, 3:20 AM
tall-airline-16756
08/19/2025, 3:21 AM
important-hair-38324
08/19/2025, 3:22 AM
important-hair-38324
08/19/2025, 3:26 AM
important-hair-38324
08/19/2025, 3:27 AM