Compare commits

..

27 Commits

Author SHA1 Message Date
lukasIO
12cee3ed06
Update livekit dependencies (#512) 2026-02-19 17:07:12 +01:00
renovate[bot]
2220072d47
chore(deps): update dependency node to v24 (#491)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-02-19 16:56:32 +01:00
renovate[bot]
392ca136de
fix(deps): update dependency @livekit/krisp-noise-filter to v0.4.1 (#505)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-02-19 16:54:36 +01:00
renovate[bot]
3a75f3222f
fix(deps): update livekit dependencies (non-major) (#499)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-02-10 16:24:35 +01:00
vercel[bot]
f80673aba8
Fix React Server Components CVE vulnerabilities (#503)
Updated dependencies to fix Next.js and React CVE vulnerabilities.

The fix-react2shell-next tool automatically updated the following packages to their secure versions:
- next
- react-server-dom-webpack
- react-server-dom-parcel  
- react-server-dom-turbopack

All package.json files have been scanned and vulnerable versions have been patched to the correct fixed versions based on the official React advisory.

Co-authored-by: Vercel <vercel[bot]@users.noreply.github.com>
2025-12-26 11:35:52 +01:00
vercel[bot]
690dc1011a
Update Next.js/React Flight RCE vulnerability patches (#501)
## React Flight / Next.js RCE Advisory - Security Update

### Summary
Updated the project to address the React Flight / Next.js RCE advisory (CVE-2024-50383) by upgrading Next.js to the patched version.

### Vulnerability Assessment
**Project is affected by the advisory:**
- Uses **Next.js 15.2.x** (vulnerable version range)
- Does NOT use React Flight packages (react-server-dom-webpack, react-server-dom-parcel, react-server-dom-turbopack)
- Uses React 18.3.1 (not vulnerable React 19.x versions)
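The range check described above can be sketched as a small predicate. This is a hypothetical illustration (the `isVulnerableNext` helper is not part of the advisory tooling or this repo), assuming plain `x.y.z` version strings with no range operators or pre-release tags:

```typescript
// Hypothetical sketch of the vulnerable-range check described above.
// Assumes a plain x.y.z version string (no ^/~ ranges, no pre-release tags).
function isVulnerableNext(version: string): boolean {
  const [major, minor, patch] = version.split('.').map(Number);
  // Per the commit message: the 15.2.x line is vulnerable below the 15.2.6 patch.
  return major === 15 && minor === 2 && patch < 6;
}

console.log(isVulnerableNext('15.2.4')); // true  — the version this project was on
console.log(isVulnerableNext('15.2.6')); // false — the patched release
```

A real scanner would also have to normalize semver range prefixes (`^15.2.4`) before comparing; this sketch deliberately skips that.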

### Changes Made

#### Modified Files:
1. **package.json**
   - Upgraded `next` from `15.2.4` to `15.2.6` (patched version for 15.2.x)
   - No React or React DOM changes required (Next.js manages its own patched React versions)

2. **pnpm-lock.yaml**
   - Updated lockfile to reflect `next@15.2.6` installation
   - All dependencies resolved correctly with patched versions

### Implementation Details
- This project is a Next.js 15 application without React Server Components/Flight
- The RCE vulnerability in Next.js 15.2.x is addressed by upgrading to 15.2.6
- No React Flight packages required updating since they are not used
- React versions (18.3.1) are not affected by this vulnerability

### Build Status
⚠️ **Note on Pre-existing Issue:**
The build fails due to corrupted image files in `public/background-images/` (pre-existing issue):
- `ali-kazal-tbw_KQE3Cbg-unsplash.jpg` (130 bytes - should be larger)
- `samantha-gades-BlIhVfXbi9s-unsplash.jpg` (132 bytes - should be larger)

This image corruption issue exists in the original codebase and is unrelated to the security update. The Next.js upgrade to 15.2.6 itself is successful and the patched version is correctly installed.

### Testing
- Verified dependency installation with `pnpm install`
- Confirmed lockfile contains `next@15.2.6`
- Confirmed no React Flight packages are used
- Pre-existing image corruption prevents full build, but dependency upgrade is verified

### Security Impact
**Successfully patched against CVE-2024-50383**
- Next.js upgraded to 15.2.6 (patched version for 15.2.x)
- No vulnerable React Flight packages in use
- React versions remain compatible and secure

Co-authored-by: Vercel <vercel[bot]@users.noreply.github.com>
2025-12-08 12:50:11 +01:00
renovate[bot]
6de1bc8cc6
fix(deps): update dependency livekit-server-sdk to v2.14.2 (#495)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-04 09:33:18 +00:00
renovate[bot]
563925f757
chore(deps): update actions/checkout action to v6 (#497)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-12-03 22:31:46 -08:00
renovate[bot]
0b62ed930e
chore(deps): update devdependencies (non-major) (#480)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-29 20:58:17 -08:00
lukasIO
baa4e787a2
Default to dual peer connection for custom tab (#496) 2025-11-20 15:55:51 +01:00
renovate[bot]
dc82cc23b9
fix(deps): update dependency livekit-client to v2.16.0 (#494)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-19 09:56:42 +01:00
renovate[bot]
e9b037bac1
fix(deps): update livekit dependencies (non-major) (#492)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-15 08:20:55 +01:00
lukasIO
49b83637dc
Enable singlePC mode for meet also on prod (#493)
* Enable singlePC mode for meet also on prod

* fix
2025-11-10 11:04:29 +01:00
renovate[bot]
aa9be8cdc0
fix(deps): update dependency livekit-client to v2.15.13 (#487)
* fix(deps): update dependency livekit-client to v2.15.12

* bump

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: lukasIO <mail@lukasseiler.de>
2025-10-21 19:25:41 +02:00
lukasIO
03aac6591a
Enable single pc connection on staging (#488)
* Enable single pc connection on staging

* fix deps

* 'security'

* vp9

* use util
2025-10-16 10:32:49 +02:00
lukasIO
83424b27d5
Revert "Use single pc (#483)" (#484)
* Revert "Update livekit client and use single pc (#483)"

This reverts commit 55adec00d31c25ef40e10f67ef7dd4880c9e81a6.

* still update livekit client
2025-10-13 17:53:27 +02:00
lukasIO
55adec00d3
Update livekit client and use single pc (#483) 2025-10-13 16:57:59 +02:00
renovate[bot]
5ff6fa32ac
chore(deps): update pnpm to v10.18.2 (#408)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-13 15:38:57 +02:00
renovate[bot]
8e66391a01
fix(deps): update livekit dependencies (non-major) (#481)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-02 17:20:43 +02:00
renovate[bot]
e9dba9861a
fix(deps): update dependency react-hot-toast to v2.6.0 (#473)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-15 13:21:40 +02:00
renovate[bot]
76234cdf93
fix(deps): update livekit dependencies (non-major) (#475)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-11 14:28:39 +02:00
Tobias Fried
0b4af83a3f
chore(ci): tag deployment versions (#478) 2025-09-09 00:48:31 -06:00
renovate[bot]
6fdf7f0b9a
fix(deps): update livekit dependencies (non-major) (#474)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-29 14:04:10 +02:00
renovate[bot]
372cdfe760
chore(deps): update dependency node to v22 (#470)
* chore(deps): update dependency node to v22

* Update test.yaml

---------

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: lukasIO <mail@lukasseiler.de>
2025-08-15 12:37:50 +02:00
renovate[bot]
fcec3a2459
chore(deps): update devdependencies (non-major) (#451)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-15 12:36:16 +02:00
renovate[bot]
7d1d62b6c3
fix(deps): update dependency livekit-client to v2.15.5 (#472)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-15 12:35:54 +02:00
renovate[bot]
aa310ade64
fix(deps): update livekit dependencies (non-major) (#463)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 17:11:12 +02:00
16 changed files with 681 additions and 1652 deletions

View File

@@ -1,33 +1,16 @@
# .github/workflows/sync-to-production.yaml
name: Sync main to sandbox-production
on:
push:
branches:
- main
permissions:
contents: write
pull-requests: write
workflow_dispatch:
jobs:
sync:
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: Checkout code
uses: actions/checkout@v4
- uses: livekit-examples/sandbox-deploy-action@v1
with:
fetch-depth: 0 # Fetch all history so we can force push
- name: Set up Git
run: |
git config --global user.name 'github-actions[bot]'
git config --global user.email 'github-actions[bot]@livekit.io'
- name: Sync to sandbox-production
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
git checkout sandbox-production || git checkout -b sandbox-production
git merge --strategy-option theirs main
git push origin sandbox-production
production_branch: 'sandbox-production'
token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -11,12 +11,12 @@ jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v6
- uses: pnpm/action-setup@v4
- name: Use Node.js 20
- name: Use Node.js 22
uses: actions/setup-node@v4
with:
node-version: 20
node-version: 24
cache: 'pnpm'
- name: Install dependencies
@@ -29,4 +29,4 @@ jobs:
run: pnpm format:check
- name: Run Tests
run: pnpm test
run: pnpm test

View File

@@ -2,7 +2,6 @@ import { randomString } from '@/lib/client-utils';
import { getLiveKitURL } from '@/lib/getLiveKitURL';
import { ConnectionDetails } from '@/lib/types';
import { AccessToken, AccessTokenOptions, VideoGrant } from 'livekit-server-sdk';
import { RoomAgentDispatch, RoomConfiguration } from '@livekit/protocol';
import { NextRequest, NextResponse } from 'next/server';
const API_KEY = process.env.LIVEKIT_API_KEY;
@@ -18,11 +17,9 @@ export async function GET(request: NextRequest) {
const participantName = request.nextUrl.searchParams.get('participantName');
const metadata = request.nextUrl.searchParams.get('metadata') ?? '';
const region = request.nextUrl.searchParams.get('region');
const language = request.nextUrl.searchParams.get('language') ?? 'en';
if (!LIVEKIT_URL) {
throw new Error('LIVEKIT_URL is not defined');
}
const livekitServerUrl = region ? getLiveKitURL(LIVEKIT_URL, region) : LIVEKIT_URL;
let randomParticipantPostfix = request.cookies.get(COOKIE_KEY)?.value;
if (livekitServerUrl === undefined) {
@@ -36,6 +33,7 @@ export async function GET(request: NextRequest) {
return new NextResponse('Missing required query parameter: participantName', { status: 400 });
}
// Generate participant token
if (!randomParticipantPostfix) {
randomParticipantPostfix = randomString(4);
}
@@ -44,15 +42,10 @@
identity: `${participantName}__${randomParticipantPostfix}`,
name: participantName,
metadata,
attributes: {
language,
}
},
roomName,
);
console.info("token:", participantToken);
// Return connection details
const data: ConnectionDetails = {
serverUrl: livekitServerUrl,
@@ -82,14 +75,8 @@ function createParticipantToken(userInfo: AccessTokenOptions, roomName: string)
canPublish: true,
canPublishData: true,
canSubscribe: true,
canUpdateOwnMetadata: true,
};
at.addGrant(grant);
at.roomConfig = new RoomConfiguration({
agents: [new RoomAgentDispatch({
agentName: "translator",
})],
})
return at.toJwt();
}

View File

@@ -21,6 +21,7 @@ export function VideoConferenceClientImpl(props: {
liveKitUrl: string;
token: string;
codec: VideoCodec | undefined;
singlePeerConnection: boolean | undefined;
}) {
const keyProvider = new ExternalE2EEKeyProvider();
const { worker, e2eePassphrase } = useSetupE2EE();
@@ -43,6 +44,7 @@ export function VideoConferenceClientImpl(props: {
worker,
}
: undefined,
singlePeerConnection: props.singlePeerConnection,
};
}, [e2eeEnabled, props.codec, keyProvider, worker]);

View File

@@ -7,9 +7,10 @@ export default async function CustomRoomConnection(props: {
liveKitUrl?: string;
token?: string;
codec?: string;
singlePC?: string;
}>;
}) {
const { liveKitUrl, token, codec } = await props.searchParams;
const { liveKitUrl, token, codec, singlePC } = await props.searchParams;
if (typeof liveKitUrl !== 'string') {
return <h2>Missing LiveKit URL</h2>;
}
@@ -22,7 +23,12 @@ export default async function CustomRoomConnection(props: {
return (
<main data-lk-theme="default" style={{ height: '100%' }}>
<VideoConferenceClientImpl liveKitUrl={liveKitUrl} token={token} codec={codec} />
<VideoConferenceClientImpl
liveKitUrl={liveKitUrl}
token={token}
codec={codec}
singlePeerConnection={singlePC === 'true'}
/>
</main>
);
}
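The `singlePC` handling in the diff above follows a common query-string convention: only the literal string `'true'` enables the flag, since every search-param value arrives as a string. A minimal sketch of that convention (the `parseSinglePC` helper is hypothetical, not part of the diff):

```typescript
// Hypothetical helper mirroring the searchParams handling above: only the
// exact string 'true' turns the flag on; any other value, or absence, is false.
function parseSinglePC(searchParams: { singlePC?: string }): boolean {
  return searchParams.singlePC === 'true';
}

console.log(parseSinglePC({ singlePC: 'true' })); // true
console.log(parseSinglePC({ singlePC: '1' }));    // false — only 'true' counts
console.log(parseSinglePC({}));                   // false — param absent
```

Keeping the comparison strict avoids surprises like `?singlePC=false` being truthy, which would happen with a bare `Boolean(searchParams.singlePC)` check.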

View File

@@ -44,8 +44,6 @@ function Tabs(props: React.PropsWithChildren<{}>) {
function DemoMeetingTab(props: { label: string }) {
const router = useRouter();
const [e2ee, setE2ee] = useState(false);
// TODO(dz): we need to set this to the default language of the browser
const [language, setLanguage] = useState("en")
const [sharedPassphrase, setSharedPassphrase] = useState(randomString(64));
const startMeeting = () => {
if (e2ee) {
@@ -62,12 +60,6 @@ function DemoMeetingTab(props: { label: string }) {
</button>
<div style={{ display: 'flex', flexDirection: 'column', gap: '1rem' }}>
<div style={{ display: 'flex', flexDirection: 'row', gap: '1rem' }}>
<select
id="language"
onChange={(ev) => setLanguage(ev.target.value)}
>
</select>
<input
id="use-e2ee"
type="checkbox"

View File

@@ -6,12 +6,13 @@ import { DebugMode } from '@/lib/Debug';
import { KeyboardShortcuts } from '@/lib/KeyboardShortcuts';
import { RecordingIndicator } from '@/lib/RecordingIndicator';
import { SettingsMenu } from '@/lib/SettingsMenu';
import { ConnectionDetails, LocalUserChoices } from '@/lib/types';
import { VideoConference } from './VideoConference';
import { PreJoin } from './PreJoin';
import { ConnectionDetails } from '@/lib/types';
import {
formatChatMessageLinks,
LocalUserChoices,
PreJoin,
RoomContext,
VideoConference,
} from '@livekit/components-react';
import {
ExternalE2EEKeyProvider,
@@ -42,7 +43,6 @@ export function PageClientImpl(props: {
const [preJoinChoices, setPreJoinChoices] = React.useState<LocalUserChoices | undefined>(
undefined,
);
const preJoinDefaults = React.useMemo(() => {
return {
username: '',
@@ -50,7 +50,6 @@ export function PageClientImpl(props: {
audioEnabled: true,
};
}, []);
const [connectionDetails, setConnectionDetails] = React.useState<ConnectionDetails | undefined>(
undefined,
);
@@ -60,9 +59,6 @@ export function PageClientImpl(props: {
const url = new URL(CONN_DETAILS_ENDPOINT, window.location.origin);
url.searchParams.append('roomName', props.roomName);
url.searchParams.append('participantName', values.username);
if (values.language) {
url.searchParams.append('language', values.language);
}
if (props.region) {
url.searchParams.append('region', props.region);
}
@@ -133,6 +129,7 @@ function VideoConferenceComponent(props: {
adaptiveStream: true,
dynacast: true,
e2ee: keyProvider && worker && e2eeEnabled ? { keyProvider, worker } : undefined,
singlePeerConnection: true,
};
}, [props.userChoices, props.options.hq, props.options.codec]);

View File

@@ -1,542 +0,0 @@
import type {
CreateLocalTracksOptions,
LocalAudioTrack,
LocalTrack,
LocalVideoTrack,
TrackProcessor,
} from 'livekit-client';
import {
createLocalAudioTrack,
createLocalTracks,
createLocalVideoTrack,
facingModeFromLocalTrack,
Track,
VideoPresets,
Mutex,
} from 'livekit-client';
import * as React from 'react';
import { MediaDeviceMenu, ParticipantPlaceholder } from '@livekit/components-react';
import { TrackToggle } from '@livekit/components-react';
import { log } from '@livekit/components-core';
import { useMediaDevices, usePersistentUserChoices } from '@livekit/components-react/hooks';
import { LocalUserChoices } from '@/lib/types';
/**
* Props for the PreJoin component.
* @public
*/
export interface PreJoinProps
extends Omit<React.HTMLAttributes<HTMLDivElement>, 'onSubmit' | 'onError'> {
/** This function is called with the `LocalUserChoices` if validation is passed. */
onSubmit?: (values: LocalUserChoices) => void;
/**
* Provide your custom validation function. Only if validation is successful are the user choices passed to the onSubmit callback.
*/
onValidate?: (values: LocalUserChoices) => boolean;
onError?: (error: Error) => void;
/** Prefill the input form with initial values. */
defaults?: Partial<LocalUserChoices>;
/** Display a debug window for your convenience. */
debug?: boolean;
joinLabel?: string;
micLabel?: string;
camLabel?: string;
userLabel?: string;
languageLabel?: string;
/**
* If true, user choices are persisted across sessions.
* @defaultValue true
* @alpha
*/
persistUserChoices?: boolean;
videoProcessor?: TrackProcessor<Track.Kind.Video>;
}
/** @public */
export function usePreviewTracks(
options: CreateLocalTracksOptions,
onError?: (err: Error) => void,
) {
const [tracks, setTracks] = React.useState<LocalTrack[]>();
const trackLock = React.useMemo(() => new Mutex(), []);
React.useEffect(() => {
let needsCleanup = false;
let localTracks: Array<LocalTrack> = [];
trackLock.lock().then(async (unlock) => {
try {
if (options.audio || options.video) {
localTracks = await createLocalTracks(options);
if (needsCleanup) {
localTracks.forEach((tr) => tr.stop());
} else {
setTracks(localTracks);
}
}
} catch (e: unknown) {
if (onError && e instanceof Error) {
onError(e);
} else {
log.error(e);
}
} finally {
unlock();
}
});
return () => {
needsCleanup = true;
localTracks.forEach((track) => {
track.stop();
});
};
}, [JSON.stringify(options, roomOptionsStringifyReplacer), onError, trackLock]);
return tracks;
}
/**
* @public
* @deprecated use `usePreviewTracks` instead
*/
export function usePreviewDevice<T extends LocalVideoTrack | LocalAudioTrack>(
enabled: boolean,
deviceId: string,
kind: 'videoinput' | 'audioinput',
) {
const [deviceError, setDeviceError] = React.useState<Error | null>(null);
const [isCreatingTrack, setIsCreatingTrack] = React.useState<boolean>(false);
const devices = useMediaDevices({ kind });
const [selectedDevice, setSelectedDevice] = React.useState<MediaDeviceInfo | undefined>(
undefined,
);
const [localTrack, setLocalTrack] = React.useState<T>();
const [localDeviceId, setLocalDeviceId] = React.useState<string>(deviceId);
React.useEffect(() => {
setLocalDeviceId(deviceId);
}, [deviceId]);
const createTrack = async (deviceId: string, kind: 'videoinput' | 'audioinput') => {
try {
const track =
kind === 'videoinput'
? await createLocalVideoTrack({
deviceId,
resolution: VideoPresets.h720.resolution,
})
: await createLocalAudioTrack({ deviceId });
const newDeviceId = await track.getDeviceId(false);
if (newDeviceId && deviceId !== newDeviceId) {
prevDeviceId.current = newDeviceId;
setLocalDeviceId(newDeviceId);
}
setLocalTrack(track as T);
} catch (e) {
if (e instanceof Error) {
setDeviceError(e);
}
}
};
const switchDevice = async (track: LocalVideoTrack | LocalAudioTrack, id: string) => {
await track.setDeviceId(id);
prevDeviceId.current = id;
};
const prevDeviceId = React.useRef(localDeviceId);
React.useEffect(() => {
if (enabled && !localTrack && !deviceError && !isCreatingTrack) {
log.debug('creating track', kind);
setIsCreatingTrack(true);
createTrack(localDeviceId, kind).finally(() => {
setIsCreatingTrack(false);
});
}
}, [enabled, localTrack, deviceError, isCreatingTrack]);
// switch camera device
React.useEffect(() => {
if (!localTrack) {
return;
}
if (!enabled) {
log.debug(`muting ${kind} track`);
localTrack.mute().then(() => log.debug(localTrack.mediaStreamTrack));
} else if (selectedDevice?.deviceId && prevDeviceId.current !== selectedDevice?.deviceId) {
log.debug(`switching ${kind} device from`, prevDeviceId.current, selectedDevice.deviceId);
switchDevice(localTrack, selectedDevice.deviceId);
} else {
log.debug(`unmuting local ${kind} track`);
localTrack.unmute();
}
}, [localTrack, selectedDevice, enabled, kind]);
React.useEffect(() => {
return () => {
if (localTrack) {
log.debug(`stopping local ${kind} track`);
localTrack.stop();
localTrack.mute();
}
};
}, []);
React.useEffect(() => {
setSelectedDevice(devices?.find((dev) => dev.deviceId === localDeviceId));
}, [localDeviceId, devices]);
return {
selectedDevice,
localTrack,
deviceError,
};
}
/**
* The `PreJoin` prefab component is normally presented to the user before they enter a room.
* This component allows the user to check and select their preferred media devices (camera and microphone).
* On submit, the user's choices are returned and can then be passed to the `LiveKitRoom` so that the user enters the room with the correct media devices.
*
* @remarks
* This component is independent of the `LiveKitRoom` component and should not be nested within it.
* Because it only accesses local media tracks, this component is self-contained and works without a connection to the LiveKit server.
*
* @example
* ```tsx
* <PreJoin />
* ```
* @public
*/
export function PreJoin({
defaults = {},
onValidate,
onSubmit,
onError,
debug,
joinLabel = 'Join Room',
micLabel = 'Microphone',
camLabel = 'Camera',
userLabel = 'Username',
languageLabel = 'Language',
persistUserChoices = true,
videoProcessor,
...htmlProps
}: PreJoinProps) {
const [browserLanguage, setBrowserLanguage] = React.useState<string>('en');
React.useEffect(() => {
setBrowserLanguage(getBrowserLanguage());
}, []);
const {
userChoices: initialUserChoices,
saveAudioInputDeviceId,
saveAudioInputEnabled,
saveVideoInputDeviceId,
saveVideoInputEnabled,
saveUsername,
} = usePersistentUserChoices({
defaults,
preventSave: !persistUserChoices,
preventLoad: !persistUserChoices,
});
// Cast initialUserChoices to our extended LocalUserChoices type
const extendedInitialChoices = initialUserChoices as unknown as LocalUserChoices;
const [userChoices, setUserChoices] = React.useState({
...initialUserChoices,
language: extendedInitialChoices.language || browserLanguage,
});
// Initialize device settings
const [audioEnabled, setAudioEnabled] = React.useState<boolean>(userChoices.audioEnabled);
const [videoEnabled, setVideoEnabled] = React.useState<boolean>(userChoices.videoEnabled);
const [audioDeviceId, setAudioDeviceId] = React.useState<string>(userChoices.audioDeviceId);
const [videoDeviceId, setVideoDeviceId] = React.useState<string>(userChoices.videoDeviceId);
const [username, setUsername] = React.useState(userChoices.username);
const [language, setLanguage] = React.useState(userChoices.language || browserLanguage);
// use the browser's default language if we can discover it
React.useEffect(() => {
if (browserLanguage && !extendedInitialChoices.language) {
setLanguage(browserLanguage);
}
}, [browserLanguage, extendedInitialChoices.language]);
// Save user choices to persistent storage.
React.useEffect(() => {
saveAudioInputEnabled(audioEnabled);
}, [audioEnabled, saveAudioInputEnabled]);
React.useEffect(() => {
saveVideoInputEnabled(videoEnabled);
}, [videoEnabled, saveVideoInputEnabled]);
React.useEffect(() => {
saveAudioInputDeviceId(audioDeviceId);
}, [audioDeviceId, saveAudioInputDeviceId]);
React.useEffect(() => {
saveVideoInputDeviceId(videoDeviceId);
}, [videoDeviceId, saveVideoInputDeviceId]);
React.useEffect(() => {
saveUsername(username);
}, [username, saveUsername]);
// Save language preference to local storage
React.useEffect(() => {
if (persistUserChoices) {
try {
localStorage.setItem('lk-user-language', language);
} catch (e) {
console.warn('Failed to save language preference to local storage', e);
}
}
}, [language, persistUserChoices]);
const tracks = usePreviewTracks(
{
audio: audioEnabled ? { deviceId: initialUserChoices.audioDeviceId } : false,
video: videoEnabled
? { deviceId: initialUserChoices.videoDeviceId, processor: videoProcessor }
: false,
},
onError,
);
const videoEl = React.useRef(null);
const videoTrack = React.useMemo(
() => tracks?.filter((track) => track.kind === Track.Kind.Video)[0] as LocalVideoTrack,
[tracks],
);
const facingMode = React.useMemo(() => {
if (videoTrack) {
const { facingMode } = facingModeFromLocalTrack(videoTrack);
return facingMode;
} else {
return 'undefined';
}
}, [videoTrack]);
const audioTrack = React.useMemo(
() => tracks?.filter((track) => track.kind === Track.Kind.Audio)[0] as LocalAudioTrack,
[tracks],
);
React.useEffect(() => {
if (videoEl.current && videoTrack) {
videoTrack.unmute();
videoTrack.attach(videoEl.current);
}
return () => {
videoTrack?.detach();
};
}, [videoTrack]);
const [isValid, setIsValid] = React.useState<boolean>();
const handleValidation = React.useCallback(
(values: LocalUserChoices) => {
if (typeof onValidate === 'function') {
return onValidate(values);
} else {
return values.username !== '';
}
},
[onValidate],
);
React.useEffect(() => {
const newUserChoices = {
username,
videoEnabled,
videoDeviceId,
audioEnabled,
audioDeviceId,
language,
};
setUserChoices(newUserChoices);
setIsValid(handleValidation(newUserChoices));
}, [username, videoEnabled, handleValidation, audioEnabled, audioDeviceId, videoDeviceId, language]);
function handleSubmit(event: React.FormEvent) {
event.preventDefault();
if (handleValidation(userChoices)) {
if (typeof onSubmit === 'function') {
onSubmit(userChoices);
}
} else {
log.warn('Validation failed with: ', userChoices);
}
}
return (
<div className="lk-prejoin" {...htmlProps}>
<div className="lk-video-container">
{videoTrack && (
<video ref={videoEl} width="1280" height="720" data-lk-facing-mode={facingMode} />
)}
{(!videoTrack || !videoEnabled) && (
<div className="lk-camera-off-note">
<ParticipantPlaceholder />
</div>
)}
</div>
<div className="lk-button-group-container">
<div className="lk-button-group audio">
<TrackToggle
initialState={audioEnabled}
source={Track.Source.Microphone}
onChange={(enabled) => setAudioEnabled(enabled)}
>
{micLabel}
</TrackToggle>
<div className="lk-button-group-menu">
<MediaDeviceMenu
initialSelection={audioDeviceId}
kind="audioinput"
disabled={!audioTrack}
tracks={{ audioinput: audioTrack }}
onActiveDeviceChange={(_, id) => setAudioDeviceId(id)}
/>
</div>
</div>
<div className="lk-button-group video">
<TrackToggle
initialState={videoEnabled}
source={Track.Source.Camera}
onChange={(enabled) => setVideoEnabled(enabled)}
>
{camLabel}
</TrackToggle>
<div className="lk-button-group-menu">
<MediaDeviceMenu
initialSelection={videoDeviceId}
kind="videoinput"
disabled={!videoTrack}
tracks={{ videoinput: videoTrack }}
onActiveDeviceChange={(_, id) => setVideoDeviceId(id)}
/>
</div>
</div>
</div>
<form className="lk-username-container">
<input
className="lk-form-control"
id="username"
name="username"
type="text"
defaultValue={username}
placeholder={userLabel}
onChange={(inputEl) => setUsername(inputEl.target.value)}
autoComplete="off"
/>
<div className="lk-form-control-wrapper">
<label htmlFor="language" className="lk-form-label">
{languageLabel}
</label>
<select
className="lk-form-control"
id="language"
name="language"
value={language}
onChange={(e) => setLanguage(e.target.value)}
>
{availableLanguages.map((lang) => (
<option key={lang.code} value={lang.code}>
{lang.name}
</option>
))}
</select>
</div>
<button
className="lk-button lk-join-button"
type="submit"
onClick={handleSubmit}
disabled={!isValid}
>
{joinLabel}
</button>
</form>
{debug && (
<>
<strong>User Choices:</strong>
<ul className="lk-list" style={{ overflow: 'hidden', maxWidth: '15rem' }}>
<li>Username: {`${userChoices.username}`}</li>
<li>Video Enabled: {`${userChoices.videoEnabled}`}</li>
<li>Audio Enabled: {`${userChoices.audioEnabled}`}</li>
<li>Video Device: {`${userChoices.videoDeviceId}`}</li>
<li>Audio Device: {`${userChoices.audioDeviceId}`}</li>
<li>Language: {`${userChoices.language}`}</li>
</ul>
</>
)}
</div>
);
}
// copied because it's not exported
function roomOptionsStringifyReplacer(key: string, val: unknown) {
if (key === 'processor' && val && typeof val === 'object' && 'name' in val) {
return val.name;
}
if (key === 'e2ee' && val) {
return 'e2ee-enabled';
}
return val;
}
/**
* Get the user's preferred language as a two-character code
* First checks local storage for a saved preference,
* then falls back to the browser's language,
* and finally defaults to 'en' if neither is available or supported
*/
export function getBrowserLanguage(): string {
if (typeof window === 'undefined') {
return 'en'; // Default for server-side rendering
}
// First check if there's a saved preference
try {
const savedLanguage = localStorage.getItem('lk-user-language');
if (savedLanguage) {
const isSupported = availableLanguages.some(lang => lang.code === savedLanguage);
if (isSupported) {
return savedLanguage;
}
}
} catch (e) {
console.warn('Failed to read language preference from local storage', e);
}
// Fall back to browser language
const browserLang = navigator.language.substring(0, 2).toLowerCase();
// Check if the browser language is in our supported languages
const isSupported = availableLanguages.some(lang => lang.code === browserLang);
return isSupported ? browserLang : 'en';
}
export const availableLanguages = [
{ code: 'en', name: 'English' },
{ code: 'es', name: 'Español' },
{ code: 'fr', name: 'Français' },
{ code: 'de', name: 'Deutsch' },
{ code: 'ja', name: 'Japanese' },
{ code: 'zh', name: 'Chinese' },
];

View File

@@ -1,60 +0,0 @@
import { getTrackReferenceId } from '@livekit/components-core';
import { Track, ParticipantKind } from 'livekit-client';
import * as React from 'react';
import { useLocalParticipant, useTracks } from '@livekit/components-react/hooks';
import { AudioTrack, TrackReference } from '@livekit/components-react';
export function RoomAudioRenderer() {
const tracks = useTracks(
[Track.Source.Microphone, Track.Source.ScreenShareAudio, Track.Source.Unknown],
{
updateOnlyOn: [],
onlySubscribed: true,
},
).filter((ref) => !ref.participant.isLocal && ref.publication.kind === Track.Kind.Audio);
const {localParticipant} = useLocalParticipant();
const currentLanguage = localParticipant?.attributes?.language;
// no language is set, so we don't know how to choose among the multiple audio tracks;
// this should not happen
if (!currentLanguage) {
return null;
}
const matchingTracks: TrackReference[] = [];
const originalTracks: TrackReference[] = [];
for (const track of tracks) {
if (track.participant.attributes?.language === currentLanguage ||
(track.participant.kind === ParticipantKind.AGENT && track.publication.trackName.endsWith(`-${currentLanguage}`))
) {
matchingTracks.push(track);
} else if (track.participant.kind !== ParticipantKind.AGENT) {
originalTracks.push(track);
}
}
return (
<div style={{ display: 'none' }}>
{matchingTracks.map((trackRef) => (
<AudioTrack
key={getTrackReferenceId(trackRef)}
trackRef={trackRef}
volume={1.0}
muted={false}
/>
))}
{originalTracks.map((trackRef) => (
<AudioTrack
key={getTrackReferenceId(trackRef)}
trackRef={trackRef}
volume={0.5}
muted={false}
/>
))}
</div>
);
}

View File

@@ -1,179 +0,0 @@
import * as React from 'react';
import { useEnsureRoom, useLocalParticipant } from '@livekit/components-react';
export interface Transcript {
id: string;
text: string;
isTranslation: boolean;
participantId?: string;
timestamp: number;
complete?: boolean;
}
export interface TranscriptDisplayProps {
}
/**
* TranscriptDisplay component shows captions of what users are saying
* It displays up to two different transcripts (original and translation)
* and removes them after 5 seconds of no changes or when new transcripts arrive
*/
export function TranscriptDisplay() {
const [visibleTranscripts, setVisibleTranscripts] = React.useState<Transcript[]>([]);
const timeoutRef = React.useRef<NodeJS.Timeout | null>(null);
const transcriptsRef = React.useRef<Record<string, Transcript>>({});
const room = useEnsureRoom();
const {localParticipant} = useLocalParticipant();
const currentLanguage = localParticipant?.attributes?.language;
const updateTranscriptState = React.useCallback(() => {
const allTranscripts = Object.values(transcriptsRef.current);
// Sort by timestamp (newest first) and take the most recent 2
// One original and one translation if available
const sortedTranscripts = allTranscripts
.sort((a, b) => b.timestamp - a.timestamp);
// Find the most recent original transcript
const originalTranscript = sortedTranscripts.find(t => !t.isTranslation);
// Find the most recent translation transcript
const translationTranscript = sortedTranscripts.find(t => t.isTranslation);
// Combine them into the visible transcripts array
const newVisibleTranscripts: Transcript[] = [];
if (originalTranscript) newVisibleTranscripts.push(originalTranscript);
if (translationTranscript) newVisibleTranscripts.push(translationTranscript);
setVisibleTranscripts(newVisibleTranscripts);
// Reset the timeout
if (timeoutRef.current) {
clearTimeout(timeoutRef.current);
}
// Set timeout to clear transcripts after 5 seconds
timeoutRef.current = setTimeout(() => {
setVisibleTranscripts([]);
// Also clear the transcripts reference
transcriptsRef.current = {};
}, 5000);
}, []);
React.useEffect(() => {
if (room) {
room.registerTextStreamHandler('lk.transcription', async (reader, participantInfo) => {
const info = reader.info;
const isTranslation = info.attributes?.translated === "true";
// ignore translations for other languages
if (isTranslation && info.attributes?.language !== currentLanguage) {
return;
}
const id = info.id;
const participantId = participantInfo?.identity;
const isFinal = info.attributes?.["lk.transcription_final"] === "true";
console.log("transcript", id, isFinal);
// Create or update the transcript in our reference object
if (!transcriptsRef.current[id]) {
transcriptsRef.current[id] = {
id,
text: '',
isTranslation,
participantId,
timestamp: Date.now(),
};
}
try {
for await (const chunk of reader) {
// Update the transcript with the new chunk
if (chunk) {
const transcript = transcriptsRef.current[id];
transcript.text += chunk;
transcript.timestamp = Date.now();
transcript.complete = isFinal;
updateTranscriptState();
}
}
if (transcriptsRef.current[id]) {
transcriptsRef.current[id].complete = true;
updateTranscriptState();
}
} catch (e) {
console.error('Error processing transcript stream:', e);
}
});
return () => {
room.unregisterTextStreamHandler('lk.transcription');
if (timeoutRef.current) {
clearTimeout(timeoutRef.current);
}
};
}
}, [room, currentLanguage, updateTranscriptState]);
React.useEffect(() => {
return () => {
if (timeoutRef.current) {
clearTimeout(timeoutRef.current);
}
};
}, []);
if (!currentLanguage) {
return null;
}
if (visibleTranscripts.length === 0) {
return null;
}
return (
<div className="lk-transcript-container">
{visibleTranscripts.map((transcript) => (
<div
key={transcript.id}
className={`lk-transcript ${transcript.isTranslation ? 'lk-transcript-translation' : 'lk-transcript-original'}`}
>
{transcript.text}
</div>
))}
<style jsx>{`
.lk-transcript-container {
position: absolute;
bottom: 80px;
left: 20%;
right: 20%;
display: flex;
flex-direction: column;
align-items: center;
z-index: 10;
}
.lk-transcript {
background-color: rgba(0, 0, 0, 0.7);
color: white;
padding: 8px 16px;
margin-bottom: 8px;
border-radius: 4px;
max-width: 100%;
text-align: center;
font-size: 1rem;
line-height: 1.5;
}
.lk-transcript-translation {
font-style: italic;
background-color: rgba(0, 0, 0, 0.6);
}
`}</style>
</div>
);
}
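
A minimal sketch of the chunk-merge step performed inside the stream handler above: the handler accumulates text-stream chunks into a per-id transcript record, updating the timestamp and completion flag on each chunk. The `applyChunk` helper and its `TranscriptChunkState` type are hypothetical (not part of this diff); they factor that merge logic into a pure, testable function.

```typescript
// Hypothetical helper mirroring the per-chunk update in the handler above:
// append the chunk text, refresh the timestamp, and carry the final flag.
interface TranscriptChunkState {
  id: string;
  text: string;
  timestamp: number;
  complete: boolean;
}

function applyChunk(
  store: Record<string, TranscriptChunkState>,
  id: string,
  chunk: string,
  isFinal: boolean,
  now: number,
): Record<string, TranscriptChunkState> {
  // Create the record on first sight of this id, then merge the chunk in.
  const prev = store[id] ?? { id, text: '', timestamp: now, complete: false };
  return {
    ...store,
    [id]: { ...prev, text: prev.text + chunk, timestamp: now, complete: isFinal },
  };
}
```

The real component mutates `transcriptsRef.current` in place for the same effect; the immutable variant here is only for clarity.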


@@ -1,176 +0,0 @@
import * as React from 'react';
import type {
MessageDecoder,
MessageEncoder,
TrackReferenceOrPlaceholder,
WidgetState,
} from '@livekit/components-core';
import { isEqualTrackRef, isTrackReference, isWeb, log } from '@livekit/components-core';
import { ParticipantKind, RoomEvent, Track } from 'livekit-client';
import { RoomAudioRenderer } from './RoomAudioRenderer';
import { TranscriptDisplay } from './TranscriptDisplay';
import {
CarouselLayout,
ConnectionStateToast,
FocusLayout,
FocusLayoutContainer,
GridLayout,
LayoutContextProvider,
ParticipantTile,
useCreateLayoutContext,
Chat,
ControlBar,
MessageFormatter,
} from '@livekit/components-react';
import { usePinnedTracks, useTracks } from '@livekit/components-react/hooks';
/**
* @public
*/
export interface VideoConferenceProps extends React.HTMLAttributes<HTMLDivElement> {
chatMessageFormatter?: MessageFormatter;
chatMessageEncoder?: MessageEncoder;
chatMessageDecoder?: MessageDecoder;
/** @alpha */
SettingsComponent?: React.ComponentType;
}
/**
* The `VideoConference` ready-made component is your drop-in solution for a classic video conferencing application.
* It provides functionality such as focusing on one participant, grid view with pagination to handle large numbers
* of participants, basic non-persistent chat, screen sharing, and more.
*
* @remarks
* The component is implemented with other LiveKit components like `FocusContextProvider`,
* `GridLayout`, `ControlBar`, `FocusLayoutContainer` and `FocusLayout`.
* You can use these components as a starting point for your own custom video conferencing application.
*
* @example
* ```tsx
* <LiveKitRoom>
* <VideoConference />
* </LiveKitRoom>
* ```
* @public
*/
export function VideoConference({
chatMessageFormatter,
chatMessageDecoder,
chatMessageEncoder,
SettingsComponent,
...props
}: VideoConferenceProps) {
const [widgetState, setWidgetState] = React.useState<WidgetState>({
showChat: false,
unreadMessages: 0,
showSettings: false,
});
const lastAutoFocusedScreenShareTrack = React.useRef<TrackReferenceOrPlaceholder | null>(null);
let tracks = useTracks(
[
{ source: Track.Source.Camera, withPlaceholder: true },
{ source: Track.Source.ScreenShare, withPlaceholder: false },
],
{ updateOnlyOn: [RoomEvent.ActiveSpeakersChanged], onlySubscribed: false },
);
tracks = tracks.filter((track) => track.participant.kind !== ParticipantKind.AGENT);
const widgetUpdate = (state: WidgetState) => {
log.debug('updating widget state', state);
setWidgetState(state);
};
const layoutContext = useCreateLayoutContext();
const screenShareTracks = tracks
.filter(isTrackReference)
.filter((track) => track.publication.source === Track.Source.ScreenShare);
const focusTrack = usePinnedTracks(layoutContext)?.[0];
const carouselTracks = tracks.filter((track) => !isEqualTrackRef(track, focusTrack));
React.useEffect(() => {
// If screen share tracks are published, and no pin is set explicitly, auto set the screen share.
if (
screenShareTracks.some((track) => track.publication.isSubscribed) &&
lastAutoFocusedScreenShareTrack.current === null
) {
log.debug('Auto set screen share focus:', { newScreenShareTrack: screenShareTracks[0] });
layoutContext.pin.dispatch?.({ msg: 'set_pin', trackReference: screenShareTracks[0] });
lastAutoFocusedScreenShareTrack.current = screenShareTracks[0];
} else if (
lastAutoFocusedScreenShareTrack.current &&
!screenShareTracks.some(
(track) =>
track.publication.trackSid ===
lastAutoFocusedScreenShareTrack.current?.publication?.trackSid,
)
) {
log.debug('Auto clearing screen share focus.');
layoutContext.pin.dispatch?.({ msg: 'clear_pin' });
lastAutoFocusedScreenShareTrack.current = null;
}
if (focusTrack && !isTrackReference(focusTrack)) {
const updatedFocusTrack = tracks.find(
(tr) =>
tr.participant.identity === focusTrack.participant.identity &&
tr.source === focusTrack.source,
);
if (updatedFocusTrack !== focusTrack && isTrackReference(updatedFocusTrack)) {
layoutContext.pin.dispatch?.({ msg: 'set_pin', trackReference: updatedFocusTrack });
}
}
}, [
screenShareTracks
.map((ref) => `${ref.publication.trackSid}_${ref.publication.isSubscribed}`)
.join(),
focusTrack?.publication?.trackSid,
tracks,
]);
return (
<div className="lk-video-conference" {...props}>
{isWeb() && (
<LayoutContextProvider
value={layoutContext}
onWidgetChange={widgetUpdate}
>
<RoomAudioRenderer />
<div className="lk-video-conference-inner">
{!focusTrack ? (
<div className="lk-grid-layout-wrapper">
<GridLayout tracks={tracks}>
<ParticipantTile />
</GridLayout>
</div>
) : (
<div className="lk-focus-layout-wrapper">
<FocusLayoutContainer>
<CarouselLayout tracks={carouselTracks}>
<ParticipantTile />
</CarouselLayout>
{focusTrack && <FocusLayout trackRef={focusTrack} />}
</FocusLayoutContainer>
</div>
)}
<TranscriptDisplay />
<ControlBar controls={{ chat: false, settings: !!SettingsComponent }} />
</div>
{SettingsComponent && (
<div
className="lk-settings-menu-modal"
style={{ display: widgetState.showSettings ? 'block' : 'none' }}
>
<SettingsComponent />
</div>
)}
</LayoutContextProvider>
)}
<ConnectionStateToast />
</div>
);
}
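
The effect above keys its dependency array on a string joining each screen-share track's SID with its subscription state, so the auto-focus logic re-runs only when one of those two things changes. As a sketch, that key computation can be isolated into a pure function (`screenShareDepKey` and `ScreenShareRef` are hypothetical names, not part of this diff):

```typescript
// Hypothetical helper matching the dependency key built in the effect above:
// one "<sid>_<subscribed>" entry per track, comma-joined (Array.join default).
interface ScreenShareRef {
  trackSid: string;
  isSubscribed: boolean;
}

function screenShareDepKey(refs: ScreenShareRef[]): string {
  return refs.map((ref) => `${ref.trackSid}_${ref.isSubscribed}`).join();
}
```

Because the key is a primitive string, React's shallow dependency comparison catches both a new screen share appearing and an existing one flipping its subscription state.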


@@ -23,3 +23,7 @@ export function randomString(length: number): string {
export function isLowPowerDevice() {
return navigator.hardwareConcurrency < 6;
}
export function isMeetStaging() {
return location.host === 'meet.staging.livekit.io';
}
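
Both helpers above read browser globals (`navigator`, `location`), which makes them awkward to exercise outside a browser. A sketch of pure variants that take those values as arguments (the `isStagingHost`/`isLowPower` names are hypothetical, not part of this diff):

```typescript
// Hypothetical pure variants of the utils above, parameterized on the
// browser-provided values so the threshold and host check are testable.
function isStagingHost(host: string): boolean {
  return host === 'meet.staging.livekit.io';
}

function isLowPower(hardwareConcurrency: number): boolean {
  // Mirrors isLowPowerDevice: fewer than 6 logical cores counts as low power.
  return hardwareConcurrency < 6;
}
```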


@@ -1,15 +1,5 @@
import { LocalAudioTrack, LocalVideoTrack, videoCodecs } from 'livekit-client';
import { VideoCodec } from 'livekit-client';
import { LocalUserChoices as LiveKitLocalUserChoices } from '@livekit/components-core';
// Extend the LocalUserChoices type with our additional properties
export interface LocalUserChoices extends LiveKitLocalUserChoices {
/**
* The language code selected by the user.
* @defaultValue 'en'
*/
language?: string;
}
export interface SessionProps {
roomName: string;


@@ -14,33 +14,31 @@
},
"dependencies": {
"@datadog/browser-logs": "^5.23.3",
"@livekit/components-core": "^0.12.9",
"@livekit/components-react": "2.9.13",
"@livekit/components-styles": "1.1.6",
"@livekit/krisp-noise-filter": "0.3.4",
"@livekit/protocol": "^1.39.3",
"@livekit/track-processors": "^0.5.4",
"livekit-client": "2.15.2",
"livekit-server-sdk": "2.13.1",
"next": "15.2.4",
"@livekit/components-react": "2.9.19",
"@livekit/components-styles": "1.2.0",
"@livekit/krisp-noise-filter": "0.4.1",
"@livekit/track-processors": "^0.7.0",
"livekit-client": "2.17.2",
"livekit-server-sdk": "2.15.0",
"next": "15.2.8",
"react": "18.3.1",
"react-dom": "18.3.1",
"react-hot-toast": "^2.5.2",
"tinykeys": "^3.0.0"
},
"devDependencies": {
"@types/node": "22.15.31",
"@types/react": "18.3.23",
"@types/node": "24.10.13",
"@types/react": "18.3.27",
"@types/react-dom": "18.3.7",
"eslint": "9.29.0",
"eslint-config-next": "15.3.3",
"prettier": "3.5.3",
"eslint": "9.39.1",
"eslint-config-next": "15.5.6",
"prettier": "3.7.3",
"source-map-loader": "^5.0.0",
"typescript": "5.8.3",
"typescript": "5.9.3",
"vitest": "^3.2.4"
},
"engines": {
"node": ">=18"
},
"packageManager": "pnpm@10.9.0"
"packageManager": "pnpm@10.18.2"
}

1233
pnpm-lock.yaml generated

File diff suppressed because it is too large


@@ -65,15 +65,3 @@ h2 a {
h2 a {
text-decoration: none;
}
.lk-form-control-wrapper {
margin-top: 10px;
width: 100%;
}
.lk-form-label {
display: block;
margin-bottom: 5px;
font-size: 0.9rem;
color: #666;
}