white-sugar-85462
08/06/2025, 10:42 PM
I'm hitting a TypeError: Cannot read property 'toLowerCase' of undefined error.
## Environment
- SDK Version: @livekit/react-native v2.7.6
- Platform: iOS (iPhone)
- React Native: Latest
- Device: iPhone (any model)
## Error Details
TypeError: Cannot read property 'toLowerCase' of undefined

This error occurs when:
1. Calling room.localParticipant.setMicrophoneEnabled(true)
2. Calling room.localParticipant.publishTrack(audioTrack)
3. Using the `audio={true}` prop on the LiveKitRoom component
## Minimal Reproduction Code
### 1. App.tsx (Main App)
```tsx
import React from 'react';
import { registerGlobals } from '@livekit/react-native';
// Assumes LiveKitTest.tsx (section 2) sits alongside App.tsx
import LiveKitTest from './LiveKitTest';

// Register globals first
registerGlobals();

export default function App() {
  return <LiveKitTest />;
}
```
### 2. LiveKitTest.tsx (Minimal Test Component)
```tsx
import React, { useState, useEffect } from 'react';
import { View, Text, Button, Alert } from 'react-native';
import { LiveKitRoom, AudioSession } from '@livekit/react-native';
import { Room, Track } from 'livekit-client';

const LIVEKIT_URL = 'wss://your-livekit-server.com';
const TOKEN = 'your-livekit-token';

function LiveKitTest() {
  const [isConnected, setIsConnected] = useState(false);
  // Create the Room up front and hand it to LiveKitRoom below so the
  // test buttons can reach the same instance.
  const [room] = useState(() => new Room());

  useEffect(() => {
    // Start audio session
    AudioSession.startAudioSession();
    return () => AudioSession.stopAudioSession();
  }, []);

  const testMicrophone = async () => {
    try {
      console.log(':microphone: Testing setMicrophoneEnabled...');
      await room.localParticipant.setMicrophoneEnabled(true);
      console.log(':white_check_mark: setMicrophoneEnabled succeeded');
    } catch (error) {
      console.error(':x: setMicrophoneEnabled failed:', error);
      Alert.alert('Error', `setMicrophoneEnabled failed: ${error}`);
    }
  };

  const testPublishTrack = async () => {
    try {
      console.log(':microphone: Testing publishTrack...');
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const audioTrack = stream.getAudioTracks()[0];
      await room.localParticipant.publishTrack(audioTrack, {
        name: 'microphone',
        source: Track.Source.Microphone,
      });
      console.log(':white_check_mark: publishTrack succeeded');
    } catch (error) {
      console.error(':x: publishTrack failed:', error);
      Alert.alert('Error', `publishTrack failed: ${error}`);
    }
  };

  return (
    <View style={{ flex: 1, padding: 20, justifyContent: 'center' }}>
      <Text style={{ fontSize: 18, marginBottom: 20 }}>
        LiveKit React Native Microphone Test
      </Text>
      <LiveKitRoom
        room={room}
        serverUrl={LIVEKIT_URL}
        token={TOKEN}
        connect={true}
        audio={true}
        onConnected={() => {
          console.log(':white_check_mark: Connected to LiveKit');
          setIsConnected(true);
        }}
        onDisconnected={() => {
          console.log(':x: Disconnected from LiveKit');
          setIsConnected(false);
        }}
      >
        <View style={{ flex: 1, justifyContent: 'center' }}>
          <Text>Connection Status: {isConnected ? 'Connected' : 'Disconnected'}</Text>
          <Button
            title="Test setMicrophoneEnabled"
            onPress={testMicrophone}
            disabled={!isConnected}
          />
          <Button
            title="Test publishTrack"
            onPress={testPublishTrack}
            disabled={!isConnected}
          />
        </View>
      </LiveKitRoom>
    </View>
  );
}

export default LiveKitTest;
```
### 3. package.json Dependencies
```json
{
  "dependencies": {
    "@livekit/react-native": "^2.7.6",
    "@livekit/react-native-webrtc": "^125.0.11",
    "livekit-client": "^1.15.0",
    "react-native": "0.72.0"
  }
}
```
### 4. iOS Info.plist (Required Permissions)
```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access for voice chat</string>
```
## Steps to Reproduce
1. Install dependencies:
   ```bash
   npm install @livekit/react-native@2.7.6 @livekit/react-native-webrtc@125.0.11 livekit-client@1.15.0
   ```
2. Set up iOS:
   - Add the microphone permission to Info.plist
   - Run `cd ios && pod install`
3. Get a LiveKit token:
   - Replace `LIVEKIT_URL` and `TOKEN` with your LiveKit server details
4. Run the app:
   ```bash
   npx react-native run-ios
   ```
5. Test the bug:
   - Connect to the LiveKit room
   - Press the "Test setMicrophoneEnabled" button
   - Press the "Test publishTrack" button
   - Both fail with the toLowerCase error
## Expected Behavior
- `setMicrophoneEnabled(true)` should enable and publish the microphone
- `publishTrack(audioTrack)` should publish the audio track
- The `audio={true}` prop should auto-publish the microphone
## Actual Behavior
- All methods fail with TypeError: Cannot read property 'toLowerCase' of undefined
- No microphone tracks are published
- Audio functionality is completely broken
## Debug Information
The error occurs in the LiveKit SDK when it tries to process track metadata or device labels. The undefined value being passed to `toLowerCase()` suggests the SDK is not handling track properties correctly.
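To narrow down which field is undefined, a quick diagnostic along these lines can be dropped into the repro. This is only a sketch: `inspectMicrophoneTrack` is a hypothetical helper name, and it assumes `registerGlobals()` has already been called so `navigator.mediaDevices` exists.

```tsx
// Hypothetical diagnostic helper: log the MediaStreamTrack fields the SDK
// most plausibly lowercases (kind, label, id) before any publish attempt.
async function inspectMicrophoneTrack() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const track = stream.getAudioTracks()[0];
  console.log('kind:', track.kind);   // expected: 'audio'
  console.log('label:', track.label); // undefined here would point at device labels
  console.log('id:', track.id);
  return track;
}
```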
## Workarounds Attempted
1. ✅ Added AudioSession initialization
2. ❌ Direct MediaStream API - same error
3. ❌ LocalAudioTrack creation - same error
4. ❌ createLocalAudioTrack helper - same error (see the sketch after this list)
5. ❌ audio={true} prop - auto-publish fails with same error
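Workaround 4 was essentially the following sketch (`publishWithHelper` is just an illustrative wrapper around livekit-client's `createLocalAudioTrack` helper); it failed with the same toLowerCase error as the other approaches:

```tsx
import { createLocalAudioTrack, Room } from 'livekit-client';

// Sketch of workaround 4: create the audio track with livekit-client's
// helper and publish it explicitly.
async function publishWithHelper(room: Room) {
  const track = await createLocalAudioTrack();
  await room.localParticipant.publishTrack(track, { name: 'microphone' });
}
```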
## Impact
This bug completely prevents microphone usage in LiveKit React Native apps, making voice chat impossible.
## Request
Please investigate and fix the toLowerCase error in the track publication process. This appears to be a regression in v2.7.6, as it affects all microphone-related functionality.

tall-airline-16756
08/06/2025, 10:43 PM
This error, TypeError: Cannot read property 'toLowerCase' of undefined, typically occurs due to issues with track publication or microphone access. Here are steps to resolve this:
• Check Microphone Permissions:
Ensure your app has the necessary permissions:
◦ iOS: Add the following to your `Info.plist`:
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access for voice chat</string>
◦ Android: Add this to your `AndroidManifest.xml`:
<uses-permission android:name="android.permission.FOREGROUND_SERVICE_MICROPHONE"/>
• Verify Audio Session Configuration:
Ensure the audio session is properly configured. For Android, set up the audio type in `MainApplication.java`:
LiveKitReactNative.setup(this, new AudioType.CommunicationAudioType());
• Resolve Dependency Conflicts:
Use npm instead of yarn if possible; if you do use yarn, run yarn-deduplicate to resolve duplicate versions:
npm install yarn-deduplicate
npx yarn-deduplicate
• Check Track Publication Code:
Ensure you're publishing the microphone track correctly using:
room.localParticipant.setMicrophoneEnabled(true);
(A combined sketch of the permission, audio-session, and publish steps follows this list.)
• Additional Troubleshooting:
Double-check your setup against the LiveKit React Native SDK installation guide and ensure all dependencies are correctly versioned.
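Putting the permission, audio-session, and publication steps together, a minimal sketch could look like the following. It assumes React Native's built-in `PermissionsAndroid` API and the `AudioSession` helper already shown; `enableMicrophone` is just an illustrative name, not part of the SDK.

```tsx
import { PermissionsAndroid, Platform } from 'react-native';
import { AudioSession } from '@livekit/react-native';
import type { Room } from 'livekit-client';

// Illustrative helper: request the Android microphone permission at runtime,
// start the audio session, then enable the mic. iOS prompts automatically
// based on the NSMicrophoneUsageDescription entry above.
async function enableMicrophone(room: Room) {
  if (Platform.OS === 'android') {
    const result = await PermissionsAndroid.request(
      PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
    );
    if (result !== PermissionsAndroid.RESULTS.GRANTED) {
      throw new Error('Microphone permission denied');
    }
  }
  await AudioSession.startAudioSession();
  await room.localParticipant.setMicrophoneEnabled(true);
}
```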
Following these steps should help resolve the issue. Let me know if you need further assistance!

white-sugar-85462
08/06/2025, 10:46 PM
The toLowerCase error persists. Here's what I've done and the current status:
## ✅ Implemented Your Suggestions
### 1. Microphone Permissions - ✅ Already Configured
iOS Info.plist (already present):
```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app needs access to the microphone for voice conversation practice and speech recognition features.</string>
```
Android AndroidManifest.xml (already present):
```xml
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
```
### 2. Audio Session Configuration - ✅ Already Implemented
iOS: AudioSession is properly initialized in the component:
```tsx
useEffect(() => {
  const startAudioSession = async () => {
    try {
      console.log(':loud_sound: Starting audio session...');
      await AudioSession.startAudioSession();
      console.log(':white_check_mark: Audio session started');
    } catch (error) {
      console.error(':x: Failed to start audio session:', error);
    }
  };
  startAudioSession();
  return () => AudioSession.stopAudioSession();
}, []);
```
Android: Need to add LiveKit setup to MainApplication.kt:
```kotlin
import com.livekit.reactnative.LiveKitReactNative
import com.livekit.reactnative.audio.AudioType

class MainApplication : Application(), ReactApplication {
  override fun onCreate() {
    super.onCreate()
    // Add this line before other RN initialization
    LiveKitReactNative.setup(this, AudioType.CommunicationAudioType())
    // ... rest of onCreate
  }
}
```
### 3. Track Publication Code - ✅ Already Using Correct Method
```tsx
await room.localParticipant.setMicrophoneEnabled(true);
```
## ❌ The Bug Persists Despite All Fixes
Even with all your suggestions implemented, the error still occurs:
```
INFO  publishing track {"enabled": true, "kind": "audio", "muted": false, "pID": "PA_xxx", "participant": "xxx", "room": "xxx", "roomID": "", "source": "microphone", "streamID": "xxx", "streamTrackID": "xxx", "trackID": undefined}
WARN  [TypeError: Cannot read property 'toLowerCase' of undefined]
```
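The `trackID: undefined` field in that log stands out. As a further check, device labels could be inspected with livekit-client's `Room.getLocalDevices()` helper; this is just a sketch (`logAudioInputDevices` is an illustrative name) and assumes the helper behaves the same under react-native-webrtc:

```tsx
import { Room } from 'livekit-client';

// Sketch: enumerate audio input devices and log their labels to see
// whether an undefined/empty label could be what the SDK lowercases.
async function logAudioInputDevices() {
  const devices = await Room.getLocalDevices('audioinput');
  devices.forEach((d) => console.log(d.deviceId, d.label || '(no label)'));
}
```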
## 🔍 Additional Debugging Information
### The Error Occurs In Multiple Contexts:
1. setMicrophoneEnabled(true) - fails with toLowerCase error
2. publishTrack(audioTrack) - fails with same error
3. audio={true} prop on LiveKitRoom - auto-publish fails with same error
4. createLocalAudioTrack() - fails with same error
### Track Publication Status:
```javascript
// After connection, the trackPublications array is empty:
trackPublications: []
// No microphone track is successfully published
```
### Permission Status:
- ✅ Microphone permissions are granted
- ✅ AudioSession is started successfully
- ✅ Room connects successfully
- ✅ Agent joins the room
- ❌ Microphone track publication fails with toLowerCase error
## 🐛 This Appears to be a SDK Bug
The error suggests the LiveKit SDK is trying to call `toLowerCase()` on an undefined value during track publication. This happens regardless of:
- How the track is created (MediaStream, LocalAudioTrack, etc.)
- How it’s published (setMicrophoneEnabled, publishTrack, auto-publish)
- Platform configuration (iOS/Android)
- Permission status
## 📋 Request for Further Investigation
Could you please:
1. Check if this is a known issue in v2.7.6
2. Test the minimal reproduction code I provided in the bug report
3. Verify if this is a regression from earlier versions
4. Provide a workaround if the bug can't be fixed immediately

tall-airline-16756
08/06/2025, 10:46 PM