# ask-ai
i
Is there a way to control the strength of automatic gain control, noise suppression, and echo cancellation using the Unity WebGL SDK?
t
Thanks for reaching out, Greg. I'm an AI assistant and I'm here to help. Give me a moment while I research this topic.
With the Unity WebGL SDK, you can turn features like automatic gain control, noise suppression, and echo cancellation on or off using the `AudioCaptureOptions` struct. One caveat: it doesn't let you adjust how strong these features are, only whether they're enabled or disabled. Sources: Runtime/Scripts/Room/Track/Options.cs
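If it helps, here's a rough C# sketch of toggling those options. The field names and the enable call are assumptions based on how the struct mirrors the JS SDK's capture options, so verify them against Options.cs in your SDK version:

```csharp
using LiveKit;

// Sketch only: field names assume the C# struct mirrors the JS SDK's
// AudioCaptureOptions; verify in Runtime/Scripts/Room/Track/Options.cs.
var audioOptions = new AudioCaptureOptions
{
    EchoCancellation = true, // on/off only; there is no strength setting
    NoiseSuppression = true,
    AutoGainControl = true,
};

// Hypothetical usage: pass the options wherever you create/publish the mic
// track, e.g. room.LocalParticipant.SetMicrophoneEnabled(true, audioOptions);
```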
i
Is this still the case for the native JavaScript SDK?
t
The LiveKit JavaScript SDK lets you turn automatic gain control, noise suppression, and echo cancellation on or off through audio capture constraints. That said, it doesn't currently let you adjust how strong these features are; this is down to how the WebRTC API is designed and the current capabilities of LiveKit's SDKs. If you're looking for stronger noise suppression, you could try experimental browser features like the `voiceIsolation` constraint (where available) or build your own custom audio processing with the Web Audio API. Just let me know if you want a hand exploring these options! Sources: packages/livekit-rtc/src/proto/track_pb.ts | libwebrtc/src/native/audio_source.rs | lib/src/proto/livekit_models.pbjson.dart | src/room/track/LocalTrackPublication.ts
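If it helps, here's a minimal sketch using livekit-client; the `voiceIsolation` constraint is experimental and non-standard, so treat that part as an assumption that unsupported browsers will simply ignore:

```javascript
import { createLocalAudioTrack } from 'livekit-client';

async function publishMic(room) {
  // Standard processing toggles: on/off only, no strength control.
  const track = await createLocalAudioTrack({
    echoCancellation: true,
    noiseSuppression: true,
    autoGainControl: true,
  });
  await room.localParticipant.publishTrack(track);
}

// Experimental: ask the browser for stronger, non-standard voice isolation.
// Unsupported browsers ignore unknown constraints like this one.
async function getIsolatedMicStream() {
  return navigator.mediaDevices.getUserMedia({
    audio: { voiceIsolation: true },
  });
}
```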
i
Hello?
How about panning the audio track? How would I go about that?
t
If you want to pan audio tracks in the LiveKit JavaScript SDK, you can do it by combining the Web Audio API with how LiveKit handles audio tracks. LiveKit doesn't have a built-in panning method, but you can use the Web Audio API's `PannerNode` to create spatial audio effects. Here's a simple way to get started (see the sketch after the sources below):
1. Create an `AudioContext` by calling `new AudioContext()`.
2. Attach this `AudioContext` to your track using the `setAudioContext` method on the `RemoteAudioTrack`.
3. Create and configure a `PannerNode` to position your audio source in 3D space.
4. Adjust the position of the `PannerNode` dynamically depending on where you want the sound to come from.
If you want to dive deeper, LiveKit has a spatial audio tutorial and example repo you can check out. And of course, feel free to ask if you want me to walk you through some code! Sources: src/room/participant/RemoteParticipant.ts | Using WebRTC + React + WebAudio to create spatial audio | src/controller/SpatialAudioController.tsx | Audio rendering with React Components | LiveKit Docs
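Here's a minimal sketch of those four steps. It assumes a subscribed `RemoteAudioTrack` called `track`, and that your livekit-client version has `setWebAudioPlugins` for inserting nodes into the track's playback graph (check your version):

```javascript
// Minimal sketch: pan a RemoteAudioTrack with a PannerNode.
const audioContext = new AudioContext();
const panner = audioContext.createPanner();
panner.panningModel = 'HRTF';     // head-related spatialization
panner.distanceModel = 'inverse'; // natural volume falloff with distance

track.setAudioContext(audioContext); // step 2: hand LiveKit your context
track.setWebAudioPlugins([panner]);  // step 3: LiveKit routes source -> panner -> destination

// Step 4: move the source; negative X pans toward the listener's left.
panner.positionX.setValueAtTime(-1, audioContext.currentTime);
panner.positionZ.setValueAtTime(1, audioContext.currentTime);
```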
i
Can this be achieved with the Unity WebGL SDK?
t
Hey there! Just a heads-up: the Unity WebGL SDK doesn't have built-in methods for panning audio tracks (adjusting left/right stereo balance or 3D spatialization) directly. You can still approximate spatial audio by adjusting the volume of audio tracks based on distance. The Unity WebGL demo app shows how to do this using the `SetVolume()` method on `RemoteAudioTrack`, which simulates spatial audio by making sounds quieter the farther away they are. If you're after more advanced panning or full 3D spatialization, you'd need to add custom logic on the JavaScript/WebAudio side, since the Unity WebGL SDK connects to WebRTC and WebAudio through a bridge for audio handling. Feel free to reach out if you want me to walk you through it! Sources: unity-project/ProjectSettings/boot.config | Runtime/Scripts/Room/Track/Track.cs | Real-time audio and video in the Metaverse
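For reference, the volume-based approach looks roughly like this in C# (a sketch along the lines of what the demo app does; `maxDistance` and the method name are illustrative, not SDK parameters):

```csharp
using UnityEngine;
using LiveKit;

// Sketch: simple linear distance attenuation for a remote track.
// `maxDistance` is an illustrative tuning value, not an SDK parameter.
void UpdateSpatialVolume(RemoteAudioTrack track, Transform listener, Transform speaker)
{
    const float maxDistance = 10f; // beyond this the track is silent
    float distance = Vector3.Distance(listener.position, speaker.position);
    track.SetVolume(Mathf.Clamp01(1f - distance / maxDistance));
}
```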
i
Yes, I already control the volume that way. Can you show me how to use the bridge to control the pan setting?
t
Hey there! To control panning in the Unity WebGL SDK, you can use the bridge to pass positional data from Unity (C#) over to JavaScript, where the WebAudio API's `PannerNode` creates the spatial effect. Here's a quick rundown:
1. Use the `RemoteAudioTrack` in Unity to handle your audio tracks.
2. Extend the Unity-WebGL bridge to send position data to JavaScript using `JSNative.CallMethod`.
3. In JavaScript, update the `PannerNode`'s `positionX` and `positionZ` values based on the positions coming from Unity to control the audio direction.
Although the SDK doesn't offer a direct stereo pan setting, this approach gets you spatial audio panning. If you want, I'd be happy to share some code examples to get you started! Sources: Real-time audio and video in the Metaverse | Using WebRTC + React + WebAudio to create spatial audio | Subscribing to tracks | LiveKit Docs
i
Yes please
t
Here's a step-by-step guide to implementing spatial audio panning using the Unity-WebGL bridge and the WebAudio API:

1. Set up the Unity-WebGL bridge: use `JSNative.CallMethod` to send positional data from Unity over to JavaScript. For instance, create a method in Unity that passes the position of the audio source like this:

```csharp
JSNative.CallMethod("updateAudioPosition", new object[] { audioSourceId, position.x, position.y, position.z });
```

2. Implement the JavaScript function: in your WebGL template or JavaScript file, define the `updateAudioPosition` function so it updates the `PannerNode` for the right audio track:

```javascript
function updateAudioPosition(audioSourceId, x, y, z) {
  const audioTrack = audioTracks[audioSourceId]; // grab the audio track by ID
  if (audioTrack && audioTrack.pannerNode) {
    // Use the node's own context so this works no matter where the
    // AudioContext was created.
    const now = audioTrack.pannerNode.context.currentTime;
    audioTrack.pannerNode.positionX.setValueAtTime(x, now);
    audioTrack.pannerNode.positionY.setValueAtTime(y, now);
    audioTrack.pannerNode.positionZ.setValueAtTime(z, now);
  }
}
```

3. Attach the `PannerNode` to the audio track: when you're setting up the audio track in JavaScript, create a `PannerNode` and connect it within the WebAudio graph like this:

```javascript
const audioContext = new AudioContext();
const audioTracks = {}; // registry of tracks, keyed by audioSourceId

const pannerNode = audioContext.createPanner();
pannerNode.panningModel = "HRTF"; // realistic spatial audio
pannerNode.distanceModel = "inverse"; // tweak this based on what you need

// audioTrack comes from wherever you receive the track (e.g. a subscription handler)
const sourceNode = audioContext.createMediaStreamSource(audioTrack.stream);
sourceNode.connect(pannerNode).connect(audioContext.destination);

// Keep the pannerNode handy for updates later
audioTracks[audioSourceId] = { pannerNode, sourceNode };
```

4. Update positions dynamically: keep sending updated positions from Unity to JavaScript as your audio source or listener moves around.

This setup gets spatial audio panning going by combining Unity's positional data with the WebAudio API's processing. If anything's unclear or you want more examples, just let me know!
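If it helps, here's a rough Unity-side sketch of step 4. It reuses the `JSNative.CallMethod` call from step 1 (whose exact signature may vary by SDK version), and `AudioSourcePositionSync` is just an illustrative name:

```csharp
using UnityEngine;
using LiveKit; // assumed namespace exposing JSNative

// Illustrative sketch: push this object's position to the JS side each frame.
// In production you'd likely throttle this or only send on change.
public class AudioSourcePositionSync : MonoBehaviour
{
    public string audioSourceId; // must match the key in the JS audioTracks map

    void Update()
    {
        var p = transform.position;
        JSNative.CallMethod("updateAudioPosition",
            new object[] { audioSourceId, p.x, p.y, p.z });
    }
}
```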
i
What if I'm creating my audio track in C#?
t
When you're creating an audio track in C# with the Unity WebGL SDK, you can use the `LocalAudioTrack.CreateAudioTrack` method. Here's a quick example:

```csharp
var track = LocalAudioTrack.CreateAudioTrack("my-audio-track", rtcSource, room);
```

While you can adjust the volume of remote audio tracks using `track.SetVolume(float volume)`, the SDK doesn't offer a direct way to control panning in C#. To get a panning effect, you'd use the Unity-WebGL bridge to send positional data over to JavaScript, where the WebAudio API's `PannerNode` can handle the spatial processing. If you'd like, I'm happy to help you set this up! Sources: LivekitUnitySampleApp/Assets/LivekitSamples.cs | Runtime/Scripts/Room/Track/TSInterop/IAudioTrack.cs
i
Can I create my audio track in C# and attach the `PannerNode` to that track using JavaScript?
t
Absolutely, you can create your audio track in C# and then attach a `PannerNode` to it using JavaScript. Here's a step-by-step on how to do this:

1. Create the audio track in C#: use the `LocalAudioTrack.CreateAudioTrack` method in Unity like this:

```csharp
var track = LocalAudioTrack.CreateAudioTrack("my-audio-track", rtcSource, room);
```

2. Send the track ID over to JavaScript: use the Unity-WebGL bridge to pass the track ID. For example:

```csharp
JSNative.CallMethod("attachPannerNode", new object[] { track.Id });
```

3. Attach the `PannerNode` in JavaScript: here's how you can set up the `attachPannerNode` function to create and connect a `PannerNode` to your audio track:

```javascript
const audioContext = new AudioContext(); // one shared context for all tracks

function attachPannerNode(trackId) {
  const audioTrack = audioTracks[trackId]; // grab the audio track by its ID
  if (audioTrack) {
    const pannerNode = audioContext.createPanner();
    pannerNode.panningModel = "HRTF";
    pannerNode.distanceModel = "inverse";

    const sourceNode = audioContext.createMediaStreamSource(audioTrack.stream);
    sourceNode.connect(pannerNode).connect(audioContext.destination);

    // Keep a reference to the pannerNode for updates later
    audioTrack.pannerNode = pannerNode;
  }
}
```

4. Update the `PannerNode` position dynamically: send position updates from Unity to JavaScript and adjust the `PannerNode` like this:

```javascript
function updateAudioPosition(trackId, x, y, z) {
  const audioTrack = audioTracks[trackId];
  if (audioTrack && audioTrack.pannerNode) {
    audioTrack.pannerNode.positionX.setValueAtTime(x, audioContext.currentTime);
    audioTrack.pannerNode.positionY.setValueAtTime(y, audioContext.currentTime);
    audioTrack.pannerNode.positionZ.setValueAtTime(z, audioContext.currentTime);
  }
}
```

This way you get the best of both worlds: Unity's C# for creating and managing your audio tracks, plus the WebAudio API's spatial audio features. If you want me to walk you through any part or have more questions, just let me know!
i
What if I want this to apply to a subscribed audio track?
t
Hey there! Just a quick heads-up: in the Unity WebGL SDK, you can't directly attach a `PannerNode` to a subscribed audio track. The SDK handles audio playback through the `Track.Attach()` method, and for spatial audio effects it typically adjusts volume with `track.SetVolume()` based on position. For more advanced spatial audio like panning, you'd need to extend the Unity-WebGL bridge to send the subscribed track's ID over to JavaScript, and then use the Web Audio API to attach a `PannerNode` to that track. If you want, I'm happy to help you figure out how to set this up! Sources: Real-time audio and video in the Metaverse | Subscribing to tracks | LiveKit Docs | unity-project/Assets/Scripts/GameManager.cs | src/controller/SpatialAudioController.tsx | Using WebRTC + React + WebAudio to create spatial audio
i
Right, so it's just the same as for a local track, but using the subscribed track ID?
t
I don't have the answer you're looking for. You could also try asking your question:
• in one of the other Slack channels, or
• to https://deepwiki.com/livekit/livekit_composite which is trained on all LiveKit source code
If you find the answer, please post it here to help others!
i
Can I subscribe to an audio track in C# and attach a panner node in JavaScript, then send positional information to the audio context to control the pan?
Let me rephrase: can I subscribe and attach to an audio track in C#, and then send positional information to the audio context in JavaScript to control the pan?
Maybe I'll ask CoPilot