Compare commits

...

95 Commits
v3.5.0 ... main

Author SHA1 Message Date
juancarmore
029802787a Reapply "Revert commits 6c7bfd4 5638025 da7759d ba374ce cf84de4 39a9b7d e990c19"
This reverts commit 450aa85b887e2ce56052c8abe75fbe4722a2ef69.
2026-03-06 17:00:20 +01:00
GitHub Actions
ad413d2791 Bump version to 3.6.0 2026-03-06 11:37:26 +00:00
CSantosM
d025a35e15 frontend: update participant role notification handling in MeetingEventHandlerService 2026-03-04 16:05:13 +01:00
CSantosM
1761517afc backend: update '@livekit/track-processors' to version 0.7.2 2026-03-04 13:59:18 +01:00
juancarmore
9278260837 frontend: improve recording action buttons with consistent attributes and structure 2026-03-04 13:52:35 +01:00
CSantosM
53f62708ce backend(test): increase sleep duration for webhook processing in expired rooms GC test 2026-03-04 12:56:42 +01:00
CSantosM
ba0d0b10c4 backend: Enhances LiveKit agent robustness
LiveKit agent service calls now gracefully handle errors instead of throwing exceptions.

`listAgents` and `getAgent` return sensible defaults (empty array, undefined) on failure, preventing disruption to calling services. `stopAgent` now logs errors during deletion without halting the process.

An early exit condition is also added in the AI assistant service to prevent unnecessary processing if no agents are found, further improving resilience.
2026-03-04 12:33:36 +01:00
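The error-handling pattern this commit describes can be sketched framework-free. `AgentClient` and its method names below are hypothetical stand-ins for the actual LiveKit agent service calls, not the real API:

```typescript
// Hypothetical client interface standing in for the LiveKit agent API.
interface AgentClient {
    listAgents(roomId: string): Promise<string[]>;
    deleteAgent(agentId: string): Promise<void>;
}

// listAgents: swallow errors and return a sensible default (empty array)
// so calling services are not disrupted by transient LiveKit failures.
async function listAgentsSafe(client: AgentClient, roomId: string): Promise<string[]> {
    try {
        return await client.listAgents(roomId);
    } catch (error) {
        console.error(`Error listing agents for room '${roomId}':`, error);
        return [];
    }
}

// stopAgent: log deletion errors without halting the surrounding process.
async function stopAgentSafe(client: AgentClient, agentId: string): Promise<void> {
    try {
        await client.deleteAgent(agentId);
    } catch (error) {
        console.error(`Error stopping agent '${agentId}':`, error);
    }
}

// Early-exit guard as in the AI assistant service: skip all processing
// when no agents are found.
async function processAgents(client: AgentClient, roomId: string): Promise<number> {
    const agents = await listAgentsSafe(client, roomId);
    if (agents.length === 0) return 0; // nothing to do
    await Promise.all(agents.map((id) => stopAgentSafe(client, id)));
    return agents.length;
}
```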
CSantosM
8953e9891c backend(test): Extends test sleep for webhook processing
Increases the sleep duration in room deletion test helpers from 1 second to 5 seconds. This ensures that webhooks triggered by room deletion have enough time to process before test execution continues, addressing intermittent test failures caused by premature assertions. This is a temporary measure to improve test stability.
2026-03-04 12:28:35 +01:00
CSantosM
02703b1f83 frontend: enhance captions button functionality and integrate AI assistant for live captions 2026-03-03 19:13:42 +01:00
CSantosM
c808e98820 backend(ai-assistant): implement AI assistant creation and management
- Add OpenAPI components for creating and responding to AI assistant requests.
- Implement AI assistant service for managing live captions capability.
- Create routes and controllers for AI assistant operations (create and cancel).
- Introduce request validation middleware for AI assistant requests.
- Update Redis helper to manage AI assistant locks.
- Integrate AI assistant cleanup in webhook service.
- Enhance LiveKit service to manage agent dispatch for AI assistants.
- Update token service to remove unnecessary parameters related to captions.
- Add typings for AI assistant requests and responses.
2026-03-03 18:43:35 +01:00
CSantosM
56d5126acb frontend: update @livekit/track-processors dependency to version 0.7.2 2026-03-02 17:15:29 +01:00
juancarmore
ac3a728591 Revert "frontend: Refactor user management components and update routes"
This reverts commit f677b18879bb13acf063de6a3366059a3a49d3ed.
2026-02-17 17:40:50 +01:00
juancarmore
90a1c6fde9 backend: refactor migration transforms to return updated document instances and improve MigrationService to execute all transforms sequentially for each document 2026-02-17 16:03:05 +01:00
juancarmore
7378a8f53e backend: create schema migrations for room and recording from v1 to v2 2026-02-17 12:47:31 +01:00
juancarmore
3142f9fe79 backend: update migration README to clarify schema versioning and MIGRATION_REV timestamp requirements 2026-02-17 12:46:38 +01:00
juancarmore
2a1575768f backend: use collection names in schema migration name generation 2026-02-17 12:32:01 +01:00
juancarmore
396c23aa3c backend: remove unused repository injections from MigrationService 2026-02-17 12:30:26 +01:00
juancarmore
96e441726c refactor(migrations): overhaul migration system to use schema transformation maps
- Removed the BaseSchemaMigration class and replaced it with a more flexible schema migration approach using transformation functions.
- Updated global-config, recording, room, and user migrations to utilize the new schema migration map structure.
- Introduced runtime migration registry for better management of migration execution.
- Enhanced migration service to handle schema migrations more efficiently, including improved error handling and logging.
- Added utility functions for generating and validating schema migration names.
- Updated migration repository methods to streamline migration status tracking.
2026-02-17 11:32:34 +01:00
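A schema migration map like the one this overhaul describes can be sketched as follows. The collection names, transform bodies, and `schemaVersion` field are illustrative assumptions, not the repository's actual definitions:

```typescript
// A document transform takes a document and returns the updated instance.
type Transform = (doc: Record<string, unknown>) => Record<string, unknown>;

// Schema migration map: for each collection, an ordered list of
// transformation functions (v1 -> v2 -> v3 ...). All names here are
// illustrative.
const schemaMigrations: Record<string, Transform[]> = {
    rooms: [
        (doc) => ({ ...doc, schemaVersion: 2 }) // v1 -> v2
    ],
    recordings: [
        (doc) => ({ ...doc, schemaVersion: 2, layout: doc['layout'] ?? 'grid' }) // v1 -> v2
    ]
};

// Execute all transforms sequentially for a document, feeding each
// transform the output of the previous one.
function migrateDocument(collection: string, doc: Record<string, unknown>): Record<string, unknown> {
    const transforms = schemaMigrations[collection] ?? [];
    return transforms.reduce((current, transform) => transform(current), doc);
}
```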
cruizba
a853aa02a2 backend: apply dynamic base path to OpenAPI docs server URLs
When deployed under a base path (e.g. /meet), the Stoplight "Try It"
requests were hitting /api/v1 instead of /meet/api/v1. This applies
the base path to the embedded OpenAPI spec's servers array at serve time,
following the same pattern used for the frontend index.html.

Also renames html-injection.utils to html-dynamic-base-path.utils and
updates function names for better wording.
2026-02-11 19:18:30 +01:00
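Rewriting the embedded OpenAPI spec's `servers` array at serve time can be sketched like this. The function name and the assumption that server URLs are relative paths are mine, not the commit's:

```typescript
// Minimal shape of the relevant part of an OpenAPI document.
interface OpenApiSpec {
    servers?: { url: string }[];
}

// Prefix each server URL with the deployment base path at serve time,
// so "Try It" requests hit e.g. /meet/api/v1 instead of /api/v1.
function applyBasePathToServers(spec: OpenApiSpec, basePath: string): OpenApiSpec {
    if (!basePath || basePath === '/') return spec;
    const normalized = basePath.endsWith('/') ? basePath.slice(0, -1) : basePath;
    return {
        ...spec,
        servers: (spec.servers ?? []).map((s) => ({ ...s, url: `${normalized}${s.url}` }))
    };
}
```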
cruizba
366632741c fix: strip basePath prefix in redirectTo method
Strip basePath prefix if present, since Angular router operates relative to <base href>
2026-02-05 19:40:00 +01:00
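The stripping described above amounts to a small helper along these lines (an illustrative sketch, not the actual `redirectTo` code):

```typescript
// Strip the basePath prefix from a URL path if present, since the
// Angular router resolves routes relative to <base href>.
function stripBasePath(path: string, basePath: string): string {
    if (basePath && basePath !== '/' && path.startsWith(basePath)) {
        const stripped = path.slice(basePath.length);
        return stripped.startsWith('/') ? stripped : `/${stripped}`;
    }
    return path;
}
```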
CSantosM
1046b5a0dd backend: enhance build script to generate API documentation after compilation 2026-02-05 12:40:49 +01:00
CSantosM
59d722f882 backend: enhance recording start process sending FAILED event to client when an error occurs 2026-02-04 15:21:25 +01:00
cruizba
b08bb10f63 backend: fix URL path extraction to remove basePath prefix 2026-02-03 01:56:58 +01:00
cruizba
177648e6d5 tests: Use getFullPath for constructing recording URLs in assertions 2026-02-02 21:09:34 +01:00
cruizba
b059b88be4 fix tests: Update API path construction with new path 2026-02-02 20:51:03 +01:00
cruizba
4e634dac54 Move LiveKit Webhook route to app level 2026-02-02 19:58:38 +01:00
cruizba
b0c7dcbc9a Introduce base path configuration and update related services 2026-02-02 19:47:18 +01:00
CSantosM
ba7600bfc5 test: Fix default captions config enabled state to false 2026-02-02 17:01:41 +01:00
CSantosM
accb35c7e1 Adds recording encoding options to room config and start recording
Adds configuration options for recording encoding, including presets and advanced settings, allowing users to customize video and audio quality.

This enhancement introduces new schemas for recording encoding presets and advanced options, enabling users to select from predefined encoding profiles or fine-tune specific video and audio parameters.

A conversion helper is implemented to translate between the internal encoding configurations and the format required by the LiveKit SDK.

backend: Adds recording encoding configuration options

Allows users to specify custom audio and video encoding settings for recordings, overriding room defaults.

This enhancement provides greater flexibility in controlling recording quality and file size. It introduces new schema definitions for encoding options and validates these configurations through Zod schemas.

Enforces complete video/audio encoding options

Requires both video and audio configurations with all their properties
when using advanced encoding options for recordings. This change ensures
complete encoding setups and prevents potential recording failures due to
missing encoding parameters. It also corrects a typo in keyframeInterval.

Add video depth option to recording encoding settings
2026-02-02 17:00:01 +01:00
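The conversion helper mentioned above can be sketched as below. The `EgressEncoding` target shape, the preset values, and the function name are assumptions; the real helper translates to the LiveKit SDK's egress encoding types instead:

```typescript
// Internal encoding config: either a preset name or advanced options.
type EncodingPreset = 'H264_720P_30' | 'H264_1080P_30';
interface EncodingOptions {
    video: { width: number; height: number; framerate: number; codec: string };
    audio: { codec: string; bitrate: number };
}

// Hypothetical flat target shape standing in for the LiveKit SDK format.
interface EgressEncoding {
    width: number; height: number; framerate: number;
    videoCodec: string; audioCodec: string; audioBitrate: number;
}

// Illustrative preset table (values follow the documented presets).
const PRESETS: Record<EncodingPreset, EgressEncoding> = {
    H264_720P_30: { width: 1280, height: 720, framerate: 30, videoCodec: 'H264_MAIN', audioCodec: 'OPUS', audioBitrate: 128 },
    H264_1080P_30: { width: 1920, height: 1080, framerate: 30, videoCodec: 'H264_MAIN', audioCodec: 'OPUS', audioBitrate: 128 }
};

// Translate the internal configuration to the egress format: presets
// resolve through the table, advanced options map field by field.
function toEgressEncoding(config: EncodingPreset | EncodingOptions): EgressEncoding {
    if (typeof config === 'string') return PRESETS[config];
    return {
        width: config.video.width,
        height: config.video.height,
        framerate: config.video.framerate,
        videoCodec: config.video.codec,
        audioCodec: config.audio.codec,
        audioBitrate: config.audio.bitrate
    };
}
```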
CSantosM
1add921ce0 backend: Allows overriding recording layout
Enables users to override the default recording layout for a room
when starting a recording. This allows customization of the recording
appearance on a per-recording basis, instead of being tied solely to the
room's configuration.
2026-01-28 18:14:29 +01:00
CSantosM
2fe720c90b test: Remove moderatorToken from start and stop recording tests 2026-01-28 18:07:00 +01:00
CSantosM
3f91e281b3 frontend: Controls captions button based on admin config
Updates the captions button to respect the global admin configuration.

The button now displays a disabled state and tooltip when captions are disabled globally, preventing users from toggling them.

The UI is updated to show the disabled state and a dedicated subtitle-off icon,
reflecting whether captions are enabled at the system level.
2026-01-28 16:27:30 +01:00
CSantosM
4ac182c244 backend: Update captions agent name and improve environment checks in TokenService 2026-01-28 16:25:43 +01:00
CSantosM
43f7ff5001 backend: Exposes captions config via internal API
Adds an internal API endpoint to retrieve the captions configuration,
allowing the frontend to determine whether captions are enabled.
The configuration is read from the MEET_CAPTIONS_ENABLED environment variable.
2026-01-28 15:21:00 +01:00
CSantosM
30bd4b5a41 Enable captions by default in room configurations and related tests 2026-01-28 14:50:32 +01:00
CSantosM
00433c75a4 Updated pnpm-lock.yaml 2026-01-26 17:21:30 +01:00
CSantosM
becf3070b0 test: Enhance E2EE UI tests by improving chat panel visibility checks and adding wait for animations 2026-01-26 16:46:16 +01:00
CSantosM
21e939e09c Update Jest configuration for integration tests and improve command line options 2026-01-26 14:10:53 +01:00
CSantosM
659cdcaf73 webcomponent: Updates dependencies and improves end-to-end tests
Upgrades Playwright dependency to the latest version.

Removes unnecessary test cleanup functions and simplifies test structure.
Improves test stability by properly handling browser resources.
2026-01-26 13:59:40 +01:00
CSantosM
dbcc9bbb25 test: Improve room closure check with retry logic in delete room tests 2026-01-26 11:09:00 +01:00
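Retry logic for a check like room closure typically polls until the condition holds or attempts run out; a generic sketch (helper name and defaults are assumptions):

```typescript
// Poll an async condition with retries; returns true as soon as the
// check passes, false once all attempts are exhausted.
async function waitFor(check: () => Promise<boolean>, attempts = 5, delayMs = 1000): Promise<boolean> {
    for (let i = 0; i < attempts; i++) {
        if (await check()) return true;
        // Wait before the next attempt, but not after the last one.
        if (i < attempts - 1) await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
    return false;
}
```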
CSantosM
215b11e93f Moves recording API to public endpoint
This commit refactors the recording API endpoints from the internal API to the public API.

This change allows users to start and stop recordings using API keys, enabling more secure and flexible access control for recording functionality. It also centralizes recording-related logic in the public API, simplifying the codebase and improving maintainability.
2026-01-23 17:32:18 +01:00
CSantosM
55aab084b0 backend: Add captions configuration to room tests 2026-01-23 16:38:50 +01:00
CSantosM
96b5cd249e testapp: Add captions configuration to room settings and UI 2026-01-23 14:28:30 +01:00
CSantosM
4751e7e989 dockerignore: Remove audio file extensions from ignore list 2026-01-23 10:23:49 +01:00
CSantosM
0a56a74433 frontend: Enables user-controlled live captions
Allows users to toggle live captions on or off.
Introduces a room configuration setting to enable/disable the captions feature.
The captions button visibility is now controlled by the 'showCaptions' feature flag.
2026-01-22 19:28:18 +01:00
CSantosM
cb12d9a8fe backend: Add captions configuration test 2026-01-22 18:35:35 +01:00
CSantosM
9ae27bf32a backend: Adds live captions functionality to rooms
Adds support for live captions in meet rooms.
This includes schema definitions, API configurations,
and LiveKit integration for dispatching captions agents.
Captions are disabled by default and can be enabled per room.
2026-01-22 18:24:50 +01:00
CSantosM
f677b18879 frontend: Refactor user management components and update routes 2026-01-21 18:22:09 +01:00
CSantosM
f95b02e42b frontend: Comment out handleRoomConfigUpdated method for future reference 2026-01-21 17:31:51 +01:00
CSantosM
5f8af67ac6 frontend: Enhances user experience on role updates
Adds a notification and sound effect to inform users when their role is updated.

This provides immediate feedback to the user and improves the overall user experience.
2026-01-21 17:31:34 +01:00
CSantosM
1ef813e509 frontend: Moves sound effects to dedicated service
Refactors sound effect logic into a dedicated `SoundService`.

This change centralizes audio playback functionality, promoting
better code organization and reusability. Removes sound effect logic
from the meeting service.
2026-01-21 17:30:30 +01:00
CSantosM
011e44b4f9 frontend: Removed allowSignalWrites flag because it is deprecated 2026-01-21 17:28:52 +01:00
CSantosM
ed057612a0 frontend: Update moderation tooltips and move display properties interface 2026-01-21 17:01:23 +01:00
CSantosM
5cdc49d90c frontend: Adds live captions component
Introduces a live captions feature using LiveKit's transcription service.

This adds a new component that displays real-time transcriptions of the meeting audio. It manages caption lifecycles, handles both interim and final transcriptions, and provides reactive signals for UI updates.
2026-01-21 12:47:09 +01:00
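Handling both interim and final transcriptions usually means replacing a segment in place rather than appending duplicates; a minimal sketch, with the segment shape modeled loosely on LiveKit's transcription segments (field names are assumptions):

```typescript
// Minimal transcription segment: an id, its text, and a final flag.
interface CaptionSegment { id: string; text: string; final: boolean; }

// Keep segments keyed by id so a final transcription replaces its
// interim versions instead of appending duplicate text.
class CaptionBuffer {
    private segments = new Map<string, CaptionSegment>();

    update(segment: CaptionSegment): void {
        // Map.set preserves the original insertion position for an
        // existing key, so caption order stays stable.
        this.segments.set(segment.id, segment);
    }

    // Current caption text, in segment insertion order.
    text(): string {
        return [...this.segments.values()].map((s) => s.text).join(' ');
    }
}
```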
CSantosM
073f0dc640 backend: Adds speech processing agent support
Enables the capability to integrate speech processing agents by adding room configuration to the token when the agent processing name is set in the environment.
This allows specifying the agent to be dispatched on room creation.
2026-01-21 12:37:06 +01:00
CSantosM
bbd4d5fbaf frontend: Remove unused DestroyRef injection in MeetingContextService 2026-01-21 09:02:20 +01:00
CSantosM
5f722c36e7 frontend: Add sound notification for participant joining the meeting 2026-01-19 17:08:21 +01:00
CSantosM
b5ccd7c087 frontend: Enhance meeting component to manage participant left events and update state 2026-01-18 16:38:55 +01:00
Carlos Santos
13a339eb87 backend: Add main entry point and organize module exports 2026-01-15 16:57:18 +01:00
Carlos Santos
1d5cd9be26 backend: Fix import order and improve path resolution logic 2026-01-15 16:57:01 +01:00
Carlos Santos
68a10ff901 frontend: Refactors selection logic in lists
Ensures the selected items in lists are correctly updated
when the underlying data changes by using `untracked` to avoid circular dependencies.

Introduces a utility function to compare sets for equality,
preventing unnecessary updates and improving performance.
2026-01-15 16:56:19 +01:00
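A set-equality utility of the kind this commit introduces is straightforward; this is a generic sketch, not the repository's actual helper:

```typescript
// Compare two sets for equality, used to skip updates when the
// selection has not actually changed.
function setsEqual<T>(a: Set<T>, b: Set<T>): boolean {
    if (a.size !== b.size) return false;
    for (const item of a) {
        if (!b.has(item)) return false;
    }
    return true;
}
```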
Carlos Santos
eabb559a82 frontend: Reset nextPageToken before refreshing recordings and rooms 2026-01-15 15:36:58 +01:00
Carlos Santos
e70dc6619f frontend: Refactors rooms lists component to use Angular signals
Migrates the rooms lists component to leverage Angular's signal-based inputs.
This improves change detection and simplifies data flow within the component.

Updates the component's template to reflect the use of signal accessors.
Ensures initial filters are correctly applied.
2026-01-15 14:55:43 +01:00
Carlos Santos
520816b983 frontend: Refactors recording list component to use signals
Migrates the recording list component to use Angular signals for input properties and data binding.
This improves performance and simplifies the component's change detection.

- Converts input properties to input signals.
- Uses computed signals for derived values.
- Introduces effect for side effects related to recordings changes.
- Moves recording list model interfaces to shared location.
2026-01-15 14:43:19 +01:00
Carlos Santos
b24992ad24 frontend: Improves HTTP error handling
Refactors error handling to allow handlers to directly return a response.

Updates the error handler service to return null when no handler can process an error.
2026-01-15 13:11:18 +01:00
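The handler contract described above — each handler either returns a response or declines — can be sketched as a first-match chain. Types and names here are illustrative, not the service's actual API:

```typescript
// An error handler either produces a response or declines with null.
type ErrorHandler = (error: Error) => string | null;

// Run handlers in order; the first one that returns a response wins.
// A null result signals that no handler could process the error.
function handleError(handlers: ErrorHandler[], error: Error): string | null {
    for (const handler of handlers) {
        const response = handler(error);
        if (response !== null) return response;
    }
    return null;
}
```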
Carlos Santos
d9e064e971 frontend: Rename initializeTheme method to init for consistency 2026-01-15 12:12:15 +01:00
Carlos Santos
8af00ab6ee frontend: Refactor app component and meeting component to use inline templates and remove unused files 2026-01-15 12:06:49 +01:00
Carlos Santos
2ec42f701d frontend: Enhance E2EE UI tests by adding visibility checks for panel close button and more options menu 2026-01-15 10:59:16 +01:00
Carlos Santos
f77630d1e0 Revert "frontend: Improve E2EE UI tests by enhancing panel close interactions and timeout handling"
This reverts commit 3cb163deee6c306b52ce0bb6d96d4151a1aa8e6e.
2026-01-15 10:26:39 +01:00
Carlos Santos
3cb163deee frontend: Improve E2EE UI tests by enhancing panel close interactions and timeout handling 2026-01-14 19:37:53 +01:00
Carlos Santos
4ecd086f21 backend: Adds layout property to recording info
Adds the 'layout' property to recording information.

This allows tracking the layout used during a recording, enhancing recording metadata.

Updates recording schema and adds layout information to API responses.
2026-01-14 18:46:38 +01:00
Carlos Santos
a1acc9ba22 frontend: Refactor webhook event storage to use sessionStorage and enhance webhook handling in E2E tests 2026-01-14 18:27:33 +01:00
Carlos Santos
0368ab83e6 frontend: Enhance unit tests for WebComponent attributes, commands, events, and lifecycle handling 2026-01-14 17:22:02 +01:00
Carlos Santos
082aa8480c frontend: Configures the application routing
Sets up domain-based routing for different app features.

This change introduces a structured approach to managing application routes,
making it easier to add, modify, and maintain different sections of the application.
It configures routes for authentication, meetings, rooms, recordings, and the console.
2026-01-14 13:57:03 +01:00
Carlos Santos
ca2d41b05e frontend: Remove unused service dependencies from RecordingService 2026-01-14 12:57:26 +01:00
Carlos Santos
db62cf0e1c frontend: Provides adapter interfaces for shared guards
Creates adapter interfaces for meeting context and room member operations.

This allows shared guards to interact with meeting context and room member context without directly depending on domain services, improving modularity and testability.

Adds providers to supply the adapters using existing services, enabling the use of the adapter interface within the guards.
2026-01-14 11:37:33 +01:00
Carlos Santos
3eb06e41e2 frontend: Add confirm dialog component with customizable actions and styles 2026-01-13 18:28:04 +01:00
Carlos Santos
56e025d23d frontend: Moves delete room dialog to rooms domain
Relocates the delete room dialog component to the rooms domain for better organization and separation of concerns.

Updates imports and references to reflect the new location of the component.
2026-01-13 18:21:07 +01:00
Carlos Santos
6c730a6dbc frontend: Remove unused README files for Auth and Rooms interceptor handlers 2026-01-13 17:37:11 +01:00
Carlos Santos
163e0d5f99 frontend: Update meeting route to use dynamic component loading for AppCeMeetingComponent 2026-01-13 17:36:57 +01:00
Carlos Santos
834dc2be42 frontend: Update E2EE UI tests to reflect changes in expected video poster counts 2026-01-13 17:36:31 +01:00
Carlos Santos
7cddb59e2d frontend: Refactors recording URL generation
Moves recording URL generation to the component using the URL.

This provides more flexibility in how the URL is generated,
allowing the component to handle different scenarios.
The service is no longer responsible for generating the URL.
2026-01-12 20:13:26 +01:00
Carlos Santos
8afed3a2f8 frontend: Refactors project structure for better organization
Moves code into domain-specific folders for better modularity.
Updates imports to reflect the new directory structure.
2026-01-12 19:51:49 +01:00
Carlos Santos
dac64bb1a9 frontend: project restructuring by domain
- centralize HTTP interceptor error handling via registry and handlers
2026-01-12 17:52:46 +01:00
Carlos Santos
c42a3ce1cf frontend: refactor layout option IDs to use constants from MeetRecordingLayout 2026-01-09 11:57:41 +01:00
Carlos Santos
6455a4937c frontend: refactor GlobalConfigService to use Angular's inject for dependency injection 2026-01-09 11:51:23 +01:00
Carlos Santos
4bee373e85 frontend: refactor ConfigComponent to use Angular's inject for services 2026-01-09 11:50:01 +01:00
Carlos Santos
e0c0453a02 frontend: refactor ColorField and ThemeColors types for color configuration 2026-01-09 11:48:30 +01:00
Carlos Santos
7a83cc57fd frontend: remove unused imports in MeetingComponent 2026-01-09 11:38:03 +01:00
Carlos Santos
7d6f61e12c frontend: update default layout mode to SMART_MOSAIC in MeetLayoutService 2026-01-09 11:37:51 +01:00
Carlos Santos
6f841eb254 Adds recording layout configuration
Enables configuration of recording layouts.

Specifies the recording layout in the room configuration.
Now supports different layouts, such as grid, speaker, and single-speaker.
Updated zod validation schemas
Updated integration tests
2026-01-08 19:51:04 +01:00
Carlos Santos
be5e3ffb1d backend: enhance room configuration schemas for updates and creation 2026-01-07 19:51:35 +01:00
Carlos Santos
0ba9d0b297 backend: Simplifies room configuration
The default configuration is assigned in the API validator middleware.
Streamlines the room configuration process by directly using the provided configuration options.
2026-01-07 18:01:14 +01:00
juancarmore
caad4bc550 backend: remove legacy storage service and migration process 2026-01-07 14:07:40 +01:00
juancarmore
450aa85b88 Revert "Revert commits 6c7bfd4 5638025 da7759d ba374ce cf84de4 39a9b7d e990c19"
This reverts commit 0ab6a48e13ec15267de4373f2647745cc184bb87.
2026-01-07 10:13:08 +01:00
430 changed files with 9053 additions and 3994 deletions

View File

@@ -141,8 +141,6 @@
**/*.mov
**/*.mkv
**/*.webm
**/*.mp3
**/*.wav
**/*.flac
# ====================================================

View File

@@ -1,8 +1,9 @@
{
"jest.jestCommandLine": "node --experimental-vm-modules ../../node_modules/.bin/jest",
"jest.jestCommandLine": "node --experimental-vm-modules ../../node_modules/.bin/jest --config jest.integration.config.mjs",
"jest.rootPath": "backend",
"jest.nodeEnv": {
"NODE_OPTIONS": "--experimental-vm-modules"
},
"jest.runMode": "on-demand"
}

meet-ce/backend/index.ts Normal file
View File

@@ -0,0 +1,15 @@
// Main entry point for @openvidu-meet/backend package
export * from './src/config/internal-config.js';
export * from './src/environment.js';
export * from './src/server.js';
// Export other modules as needed
export * from './src/config/index.js';
export * from './src/controllers/index.js';
export * from './src/helpers/index.js';
export * from './src/middlewares/index.js';
export * from './src/models/index.js';
export * from './src/routes/index.js';
export * from './src/services/index.js';
export * from './src/utils/index.js';

View File

@@ -15,16 +15,19 @@ const jestConfig = {
'^(\\.{1,2}/.*)\\.js$': '$1' // Allow importing js files and resolving to ts files
},
transform: {
'^.+\\.tsx?$': ['ts-jest', {
tsconfig: {
module: 'esnext',
moduleResolution: 'node16',
esModuleInterop: true,
allowSyntheticDefaultImports: true,
isolatedModules: true
},
useESM: true
}]
'^.+\\.tsx?$': [
'ts-jest',
{
tsconfig: {
module: 'esnext',
moduleResolution: 'node16',
esModuleInterop: true,
allowSyntheticDefaultImports: true,
isolatedModules: true
},
useESM: true
}
]
}
};

View File

@@ -0,0 +1,12 @@
import baseConfig from './jest.config.mjs';
const integrationConfig = {
...baseConfig,
runInBand: true,
forceExit: true,
detectOpenHandles: true,
testMatch: ['**/tests/integration/**/*.(spec|test).ts'],
};
export default integrationConfig;

View File

@@ -0,0 +1,6 @@
description: Create AI assistant activation request
required: true
content:
application/json:
schema:
$ref: '../../schemas/internal/ai-assistant-create-request.yaml'

View File

@@ -1,11 +0,0 @@
description: Room to record
required: true
content:
application/json:
schema:
type: object
properties:
roomId:
type: string
description: The unique identifier of the room to record.
example: 'room-123'

View File

@@ -0,0 +1,35 @@
description: Room to record
required: true
content:
application/json:
schema:
type: object
properties:
roomId:
type: string
description: The unique identifier of the room to record.
example: 'room-123'
config:
type: object
description: |
Optional configuration to override the room's recording configuration for this specific recording.
If not provided, the recording will use the configuration defined in the room's config.
properties:
layout:
type: string
enum:
- grid
- speaker
- single-speaker
example: speaker
description: |
Defines the layout of the recording. This will override the room's default recording layout.
Options are:
- `grid`: All participants are shown in a grid layout.
- `speaker`: The active speaker is shown prominently, with other participants in smaller thumbnails.
- `single-speaker`: Only the active speaker is shown in the recording.
encoding:
description: Defines the encoding settings for the recording. This will override the room's default recording encoding.
oneOf:
- $ref: '../schemas/meet-room-config.yaml#/MeetRecordingEncodingPreset'
- $ref: '../schemas/meet-room-config.yaml#/MeetRecordingEncodingOptions'

View File

@@ -11,6 +11,7 @@ content:
chat:
enabled: true
recording:
enabled: false
enabled: true
encoding: H264_720P_30
virtualBackground:
enabled: true

View File

@@ -2,7 +2,7 @@ description: Conflict — The recording cannot be started due to resource state
content:
application/json:
schema:
$ref: '../../schemas/error.yaml'
$ref: '../schemas/error.yaml'
examples:
already_recording:
summary: Room is already being recorded

View File

@@ -2,7 +2,7 @@ description: Conflict — The recording is starting or already stopped
content:
application/json:
schema:
$ref: '../../schemas/error.yaml'
$ref: '../schemas/error.yaml'
examples:
starting_recording:
summary: Recording is starting

View File

@@ -2,7 +2,7 @@ description: Service Unavailable — The recording service is unavailable
content:
application/json:
schema:
$ref: '../../schemas/error.yaml'
$ref: '../schemas/error.yaml'
examples:
starting_timeout:
summary: Recording service timed out

View File

@@ -0,0 +1,5 @@
description: Successfully created or reused AI assistant activation
content:
application/json:
schema:
$ref: '../../schemas/internal/ai-assistant-create-response.yaml'

View File

@@ -0,0 +1,5 @@
description: Successfully retrieved captions config
content:
application/json:
schema:
$ref: '../../schemas/internal/global-captions-config.yaml'

View File

@@ -11,6 +11,7 @@ content:
roomId: 'room-123'
roomName: 'room'
status: 'complete'
layout: 'grid'
filename: 'room-123--XX445.mp4'
startDate: 1600000000000
endDate: 1600000003600
@@ -25,5 +26,6 @@ content:
roomId: 'room-456'
roomName: 'room'
status: 'active'
layout: 'grid'
filename: 'room-456--QR789.mp4'
startDate: 1682500000000

View File

@@ -19,6 +19,7 @@ content:
roomId: 'room-123'
roomName: 'room'
status: 'active'
layout: 'grid'
filename: 'room-123--XX445.mp4'
startDate: 1620000000000
endDate: 1620000003600
@@ -29,6 +30,7 @@ content:
roomId: 'room-456'
roomName: 'room'
status: 'complete'
layout: 'grid'
filename: 'room-456--XX678.mp4'
startDate: 1625000000000
endDate: 1625000007200

View File

@@ -19,10 +19,15 @@ content:
enabled: true
recording:
enabled: false
layout: grid
encoding: H264_720P_30
allowAccessTo: admin_moderator_speaker
virtualBackground:
enabled: true
e2ee:
enabled: false
captions:
enabled: true
moderatorUrl: 'http://localhost:6080/room/room-123?secret=123456'
speakerUrl: 'http://localhost:6080/room/room-123?secret=654321'
status: open
@@ -45,6 +50,17 @@ content:
enabled: true
recording:
enabled: false
layout: grid
encoding:
video:
width: 1920
height: 1080
framerate: 30
codec: H264_MAIN
audio:
codec: OPUS
bitrate: 128
allowAccessTo: admin_moderator_speaker
virtualBackground:
enabled: true
e2ee:

View File

@@ -28,10 +28,15 @@ content:
enabled: true
recording:
enabled: false
layout: grid
encoding: H264_720P_30
allowAccessTo: admin_moderator_speaker
virtualBackground:
enabled: true
e2ee:
enabled: false
captions:
enabled: true
moderatorUrl: 'http://localhost:6080/room/room-123?secret=123456'
speakerUrl: 'http://localhost:6080/room/room-123?secret=654321'
status: open
@@ -48,6 +53,17 @@ content:
enabled: false
recording:
enabled: true
layout: grid
encoding:
video:
width: 1280
height: 720
framerate: 60
codec: H264_HIGH
audio:
codec: AAC
bitrate: 192
allowAccessTo: admin_moderator_speaker
virtualBackground:
enabled: false
e2ee:

View File

@@ -2,12 +2,13 @@ description: Successfully created the OpenVidu Meet recording
content:
application/json:
schema:
$ref: '../../schemas/meet-recording.yaml'
$ref: '../schemas/meet-recording.yaml'
example:
recordingId: 'room-123--EG_XYZ--XX445'
roomId: 'room-123'
roomName: 'room'
status: 'active'
layout: 'speaker'
filename: 'room-123--XX445.mp4'
startDate: 1600000000000
headers:

View File

@@ -8,12 +8,13 @@ headers:
content:
application/json:
schema:
$ref: '../../schemas/meet-recording.yaml'
$ref: '../schemas/meet-recording.yaml'
example:
recordingId: 'room-123--EG_XYZ--XX445'
roomId: 'room-123'
roomName: 'room'
status: 'ending'
layout: 'speaker'
filename: 'room-123--XX445.mp4'
startDate: 1600000000000
details: 'End reason: StopEgress API'

View File

@@ -0,0 +1,37 @@
type: object
required:
# - scope
- capabilities
properties:
# scope:
# type: object
# required:
# - resourceType
# - resourceIds
# properties:
# resourceType:
# type: string
# enum: ['meeting']
# description: Scope resource type where assistant will be activated.
# example: meeting
# resourceIds:
# type: array
# minItems: 1
# items:
# type: string
# minLength: 1
# description: List of target resource ids.
# example: ['meeting_123']
capabilities:
type: array
minItems: 1
items:
type: object
required:
- name
properties:
name:
type: string
enum: ['live_captions']
description: AI capability to activate.
example: live_captions

View File

@@ -0,0 +1,14 @@
type: object
required:
- id
- status
properties:
id:
type: string
description: Identifier of the assistant activation.
example: asst_123
status:
type: string
enum: ['active']
description: Current assistant activation state.
example: active

View File

@@ -0,0 +1,8 @@
type: object
properties:
enabled:
type: boolean
description: Indicates whether captions are enabled in the system
example: true
required:
- enabled

View File

@@ -22,6 +22,58 @@ properties:
enum: ['starting', 'active', 'ending', 'complete', 'failed', 'aborted', 'limit_reached']
example: 'active'
description: The status of the recording.
layout:
type: string
example: 'grid'
description: The layout of the recording.
encoding:
oneOf:
- type: string
enum: ['H264_720P_30', 'H264_720P_60', 'H264_1080P_30', 'H264_1080P_60', 'PORTRAIT_H264_720P_30', 'PORTRAIT_H264_720P_60', 'PORTRAIT_H264_1080P_30', 'PORTRAIT_H264_1080P_60']
description: Encoding preset
- type: object
properties:
video:
type: object
properties:
width:
type: integer
example: 1920
height:
type: integer
example: 1080
framerate:
type: integer
example: 30
codec:
type: string
enum: ['DEFAULT_VC', 'H264_BASELINE', 'H264_MAIN', 'H264_HIGH', 'VP8']
bitrate:
type: integer
example: 4500
keyFrameInterval:
type: number
example: 2
depth:
type: integer
example: 24
audio:
type: object
properties:
codec:
type: string
enum: ['DEFAULT_AC', 'OPUS', 'AAC', 'AC_MP3']
bitrate:
type: integer
example: 128
frequency:
type: integer
example: 48000
description: Advanced encoding options
description: |
The encoding configuration used for this recording.
Can be either a preset string or advanced encoding options.
example: 'H264_720P_30'
filename:
type: string
example: 'room-123--XX445.mp4'

View File

@@ -13,6 +13,9 @@ MeetRoomConfig:
e2ee:
$ref: '#/MeetE2EEConfig'
description: Config for End-to-End Encryption (E2EE) in the room.
captions:
$ref: '#/MeetCaptionsConfig'
description: Config for live captions in the room.
MeetChatConfig:
type: object
properties:
@@ -29,6 +32,29 @@ MeetRecordingConfig:
default: true
example: true
description: If true, the room will be allowed to record the video of the participants.
layout:
type: string
enum:
- grid
- speaker
- single-speaker
# - grid-light
# - speaker-light
# - single-speaker-light
default: grid
example: grid
description: |
Defines the layout of the recording. Options are:
- `grid`: All participants are shown in a grid layout.
- `speaker`: The active speaker is shown prominently, with other participants in smaller thumbnails.
- `single-speaker`: Only the active speaker is shown in the recording.
# - `grid-light`: Similar to `grid` but with a light-themed background.
# - `speaker-light`: Similar to `speaker` but with a light-themed background.
# - `single-speaker-light`: Similar to `single-speaker` but with a light-themed background.
encoding:
oneOf:
- $ref: '#/MeetRecordingEncodingPreset'
- $ref: '#/MeetRecordingEncodingOptions'
allowAccessTo:
type: string
enum:
@@ -61,3 +87,173 @@ MeetE2EEConfig:
If true, the room will have End-to-End Encryption (E2EE) enabled.<br/>
This ensures that the media streams are encrypted from the sender to the receiver, providing enhanced privacy and security for the participants.<br/>
**Enabling E2EE will disable the recording feature for the room**.
MeetCaptionsConfig:
type: object
properties:
enabled:
type: boolean
default: true
example: true
description: >
If true, the room will have live captions enabled.<br/>
This allows participants to see real-time captions of all participants' speech during the meeting.<br/>
MeetRecordingEncodingPreset:
type: string
enum:
- H264_720P_30
- H264_720P_60
- H264_1080P_30
- H264_1080P_60
- PORTRAIT_H264_720P_30
- PORTRAIT_H264_720P_60
- PORTRAIT_H264_1080P_30
- PORTRAIT_H264_1080P_60
description: |
Predefined encoding presets for recordings. Each preset defines a combination of resolution, frame rate, and codec:
- `H264_720P_30`: 1280x720, 30fps, 3000kbps, H.264_MAIN / OPUS **(default)**
- `H264_720P_60`: 1280x720, 60fps, 4500kbps, H.264_MAIN / OPUS
- `H264_1080P_30`: 1920x1080, 30fps, 4500kbps, H.264_MAIN / OPUS
- `H264_1080P_60`: 1920x1080, 60fps, 6000kbps, H.264_MAIN / OPUS
- `PORTRAIT_H264_720P_30`: 720x1280, 30fps, 3000kbps, H.264_MAIN / OPUS
- `PORTRAIT_H264_720P_60`: 720x1280, 60fps, 4500kbps, H.264_MAIN / OPUS
- `PORTRAIT_H264_1080P_30`: 1080x1920, 30fps, 4500kbps, H.264_MAIN / OPUS
- `PORTRAIT_H264_1080P_60`: 1080x1920, 60fps, 6000kbps, H.264_MAIN / OPUS
example: H264_720P_30
MeetRecordingVideoCodec:
type: string
enum:
- DEFAULT_VC
- H264_BASELINE
- H264_MAIN
- H264_HIGH
- VP8
description: |
Video codec options for recording encoding:
- `DEFAULT_VC`: Use the default video codec (H.264_MAIN)
- `H264_BASELINE`: H.264 Baseline profile
- `H264_MAIN`: H.264 Main profile
- `H264_HIGH`: H.264 High profile
- `VP8`: VP8 codec
example: H264_MAIN
MeetRecordingAudioCodec:
type: string
enum:
- DEFAULT_AC
- OPUS
- AAC
- AC_MP3
description: |
Audio codec options for recording encoding:
- `DEFAULT_AC`: Use the default audio codec (OPUS)
- `OPUS`: Opus codec
- `AAC`: AAC codec
- `AC_MP3`: MP3 codec
example: OPUS
MeetRecordingVideoEncodingOptions:
type: object
required:
- width
- height
- framerate
- codec
- bitrate
- keyFrameInterval
- depth
properties:
width:
type: integer
minimum: 1
example: 1280
description: |
Video width in pixels
height:
type: integer
minimum: 1
example: 720
description: |
Video height in pixels
framerate:
type: integer
minimum: 1
example: 30
description: |
Frame rate in fps
codec:
$ref: '#/MeetRecordingVideoCodec'
description: |
Video codec
bitrate:
type: integer
minimum: 1
example: 4500
description: |
Video bitrate in kbps
keyFrameInterval:
type: number
minimum: 0
example: 4
description: |
Keyframe interval in seconds
depth:
type: integer
minimum: 1
example: 24
description: |
Video depth (pixel format) in bits
description: |
Advanced video encoding options for recordings.
MeetRecordingAudioEncodingOptions:
type: object
required:
- codec
- bitrate
- frequency
properties:
codec:
$ref: '#/MeetRecordingAudioCodec'
description: |
Audio codec (required when audio is provided)
bitrate:
type: integer
minimum: 1
example: 128
description: |
Audio bitrate in kbps (required when audio is provided)
frequency:
type: integer
minimum: 1
example: 44100
description: |
Audio sample rate in Hz (required when audio is provided)
description: |
Advanced audio encoding options for recordings.
When audio encoding is provided, all fields are required.
MeetRecordingEncodingOptions:
type: object
required:
- video
- audio
properties:
video:
$ref: '#/MeetRecordingVideoEncodingOptions'
description: Video encoding configuration
audio:
$ref: '#/MeetRecordingAudioEncodingOptions'
description: Audio encoding configuration
description: |
Advanced encoding options for recordings.
Use this for fine-grained control over video and audio encoding parameters.
Both video and audio configurations are required when using advanced options.
For common scenarios, consider using encoding presets instead.
example:
video:
width: 1280
height: 720
framerate: 30
codec: H264_MAIN
bitrate: 3000
keyFrameInterval: 4
audio:
codec: OPUS
bitrate: 128
frequency: 44100


@@ -22,6 +22,11 @@ properties:
status:
type: string
description: The status of the recording.
example: active
layout:
type: string
description: The layout of the recording.
example: grid
filename:
type: string
description: The name of the recording file.


@@ -27,6 +27,8 @@ paths:
$ref: './paths/recordings.yaml#/~1recordings~1{recordingId}~1media'
/recordings/{recordingId}/url:
$ref: './paths/recordings.yaml#/~1recordings~1{recordingId}~1url'
/recordings/{recordingId}/stop:
$ref: './paths/recordings.yaml#/~1recordings~1{recordingId}~1stop'
components:
securitySchemes:
$ref: './components/security.yaml'


@@ -28,22 +28,24 @@ paths:
$ref: './paths/internal/meet-global-config.yaml#/~1config~1security'
/config/rooms/appearance:
$ref: './paths/internal/meet-global-config.yaml#/~1config~1rooms~1appearance'
/config/captions:
$ref: './paths/internal/meet-global-config.yaml#/~1config~1captions'
/rooms/{roomId}/token:
$ref: './paths/internal/rooms.yaml#/~1rooms~1{roomId}~1token'
/rooms/{roomId}/roles:
$ref: './paths/internal/rooms.yaml#/~1rooms~1{roomId}~1roles'
/rooms/{roomId}/roles/{secret}:
$ref: './paths/internal/rooms.yaml#/~1rooms~1{roomId}~1roles~1{secret}'
/recordings:
$ref: './paths/internal/recordings.yaml#/~1recordings'
/recordings/{recordingId}/stop:
$ref: './paths/internal/recordings.yaml#/~1recordings~1{recordingId}~1stop'
/meetings/{roomId}:
$ref: './paths/internal/meetings.yaml#/~1meetings~1{roomId}'
/meetings/{roomId}/participants/{participantIdentity}:
$ref: './paths/internal/meetings.yaml#/~1meetings~1{roomId}~1participants~1{participantIdentity}'
/meetings/{roomId}/participants/{participantIdentity}/role:
$ref: './paths/internal/meetings.yaml#/~1meetings~1{roomId}~1participants~1{participantIdentity}~1role'
/ai/assistants:
$ref: './paths/internal/ai-assistant.yaml#/~1ai~1assistants'
/ai/assistants/{assistantId}:
$ref: './paths/internal/ai-assistant.yaml#/~1ai~1assistants~1{assistantId}'
/analytics:
$ref: './paths/internal/analytics.yaml#/~1analytics'
@@ -71,5 +73,9 @@ components:
$ref: components/schemas/internal/meet-analytics.yaml
MeetRecording:
$ref: components/schemas/meet-recording.yaml
AiAssistantCreateRequest:
$ref: components/schemas/internal/ai-assistant-create-request.yaml
AiAssistantCreateResponse:
$ref: components/schemas/internal/ai-assistant-create-response.yaml
Error:
$ref: components/schemas/error.yaml


@@ -0,0 +1,56 @@
/ai/assistants:
post:
operationId: createAiAssistant
summary: Create AI assistant
description: |
Activates AI assistance.
> Currently only the meeting AI assistant with the `live_captions` capability is supported.
tags:
- Internal API - AI Assistants
security:
- roomMemberTokenHeader: []
requestBody:
$ref: '../../components/requestBodies/internal/create-ai-assistant-request.yaml'
responses:
'200':
$ref: '../../components/responses/internal/success-create-ai-assistant.yaml'
'401':
$ref: '../../components/responses/unauthorized-error.yaml'
'403':
$ref: '../../components/responses/forbidden-error.yaml'
'422':
$ref: '../../components/responses/validation-error.yaml'
'500':
$ref: '../../components/responses/internal-server-error.yaml'
/ai/assistants/{assistantId}:
delete:
operationId: cancelAiAssistant
summary: Cancel AI assistant
description: |
Cancels an AI assistant.
The assistant process (`live_captions`) is stopped only when the last participant cancels it.
tags:
- Internal API - AI Assistants
security:
- roomMemberTokenHeader: []
parameters:
- in: path
name: assistantId
required: true
schema:
type: string
minLength: 1
description: Identifier of the assistant activation returned by the create operation.
example: asst_123
responses:
'204':
description: AI assistant canceled successfully.
'401':
$ref: '../../components/responses/unauthorized-error.yaml'
'422':
$ref: '../../components/responses/validation-error.yaml'
'500':
$ref: '../../components/responses/internal-server-error.yaml'


@@ -139,3 +139,20 @@
$ref: '../../components/responses/validation-error.yaml'
'500':
$ref: '../../components/responses/internal-server-error.yaml'
/config/captions:
get:
operationId: getCaptionsConfig
summary: Get captions config
description: >
Retrieves the captions configuration from the `MEET_CAPTIONS_ENABLED` environment variable.
This endpoint returns whether captions are enabled in the system.
tags:
- Internal API - Global Config
responses:
'200':
$ref: '../../components/responses/internal/success-get-captions-config.yaml'
'401':
$ref: '../../components/responses/unauthorized-error.yaml'
'500':
$ref: '../../components/responses/internal-server-error.yaml'


@@ -1,58 +0,0 @@
/recordings:
post:
operationId: startRecording
summary: Start a recording
description: >
Start a new recording for an OpenVidu Meet room with the specified room ID.
tags:
- Internal API - Recordings
security:
- roomMemberTokenHeader: []
requestBody:
$ref: '../../components/requestBodies/internal/start-recording-request.yaml'
responses:
'201':
$ref: '../../components/responses/internal/success-start-recording.yaml'
'401':
$ref: '../../components/responses/unauthorized-error.yaml'
'403':
$ref: '../../components/responses/forbidden-not-allowed-error.yaml'
'404':
$ref: '../../components/responses/error-room-not-found.yaml'
'409':
$ref: '../../components/responses/internal/error-recording-conflict.yaml'
'422':
$ref: '../../components/responses/validation-error.yaml'
'500':
$ref: '../../components/responses/internal-server-error.yaml'
'503':
$ref: '../../components/responses/internal/error-service-unavailable.yaml'
/recordings/{recordingId}/stop:
post:
operationId: stopRecording
summary: Stop a recording
description: |
Stops a recording with the specified recording ID.
> **Note:** The recording must be in an `active` state; otherwise, a 409 error is returned.
tags:
- Internal API - Recordings
security:
- roomMemberTokenHeader: []
parameters:
- $ref: '../../components/parameters/recording-id.yaml'
responses:
'202':
$ref: '../../components/responses/internal/success-stop-recording.yaml'
'401':
$ref: '../../components/responses/unauthorized-error.yaml'
'403':
$ref: '../../components/responses/forbidden-error.yaml'
'404':
$ref: '../../components/responses/error-recording-not-found.yaml'
'409':
$ref: '../../components/responses/internal/error-recording-not-active.yaml'
'422':
$ref: '../../components/responses/validation-error.yaml'
'500':
$ref: '../../components/responses/internal-server-error.yaml'


@@ -1,4 +1,42 @@
/recordings:
post:
operationId: startRecording
summary: Start a recording
description: >
Start a new recording for an OpenVidu Meet room with the specified room ID.
By default, the recording will use the configuration defined in the room's settings.
However, you can optionally provide a configuration override in the request body to customize
the recording settings (e.g., layout) for this specific recording session.
If a configuration override is provided, those values will take precedence over the room's configuration.
tags:
- OpenVidu Meet - Recordings
security:
- apiKeyHeader: []
- roomMemberTokenHeader: []
requestBody:
$ref: '../components/requestBodies/start-recording-request.yaml'
responses:
'201':
$ref: '../components/responses/success-start-recording.yaml'
'401':
$ref: '../components/responses/unauthorized-error.yaml'
'403':
$ref: '../components/responses/forbidden-not-allowed-error.yaml'
'404':
$ref: '../components/responses/error-room-not-found.yaml'
'409':
$ref: '../components/responses/error-recording-conflict.yaml'
'422':
$ref: '../components/responses/validation-error.yaml'
'500':
$ref: '../components/responses/internal-server-error.yaml'
'503':
$ref: '../components/responses/error-service-unavailable.yaml'
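The override precedence described for `startRecording` can be sketched as a deep merge of the request's `config` over the room's recording config. The backend adds `lodash.merge` as a dependency, which is consistent with this behavior, but the helper below is a hand-rolled stand-in for illustration, not the actual implementation:

```typescript
type Cfg = Record<string, unknown>;

// Hand-rolled deep merge: values from `override` win over `base`,
// recursing into plain objects so a partial override keeps sibling keys.
function mergeRecordingConfig(base: Cfg, override?: Cfg): Cfg {
  if (!override) return { ...base };
  const out: Cfg = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const current = out[key];
    const bothObjects =
      value !== null && typeof value === 'object' && !Array.isArray(value) &&
      current !== null && typeof current === 'object' && !Array.isArray(current);
    out[key] = bothObjects ? mergeRecordingConfig(current as Cfg, value as Cfg) : value;
  }
  return out;
}

// Room config from settings, with a per-session layout override:
const roomConfig = { enabled: true, layout: 'grid', encoding: 'H264_720P_30' };
const effective = mergeRecordingConfig(roomConfig, { layout: 'speaker' });
// `effective` keeps the room's encoding but records with the 'speaker' layout
```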
get:
operationId: getRecordings
summary: Get all recordings
@@ -249,6 +287,36 @@
$ref: '../components/responses/validation-error.yaml'
'500':
$ref: '../components/responses/internal-server-error.yaml'
/recordings/{recordingId}/stop:
post:
operationId: stopRecording
summary: Stop a recording
description: |
Stops a recording with the specified recording ID.
> **Note:** The recording must be in an `active` state; otherwise, a 409 error is returned.
tags:
- OpenVidu Meet - Recordings
security:
- apiKeyHeader: []
- roomMemberTokenHeader: []
parameters:
- $ref: '../components/parameters/recording-id.yaml'
responses:
'202':
$ref: '../components/responses/success-stop-recording.yaml'
'401':
$ref: '../components/responses/unauthorized-error.yaml'
'403':
$ref: '../components/responses/forbidden-error.yaml'
'404':
$ref: '../components/responses/error-recording-not-found.yaml'
'409':
$ref: '../components/responses/error-recording-not-active.yaml'
'422':
$ref: '../components/responses/validation-error.yaml'
'500':
$ref: '../components/responses/internal-server-error.yaml'
/recordings/{recordingId}/url:
get:
operationId: getRecordingUrl


@@ -16,5 +16,7 @@
description: Operations related to managing OpenVidu Meet rooms
- name: Internal API - Meetings
description: Operations related to managing meetings in OpenVidu Meet rooms
- name: Internal API - AI Assistants
description: High-level operations to manage AI assistance capabilities in meetings
- name: Internal API - Recordings
description: Operations related to managing OpenVidu Meet recordings


@@ -1,6 +1,6 @@
{
"name": "@openvidu-meet/backend",
"version": "3.5.0",
"version": "3.6.0",
"description": "OpenVidu Meet Backend",
"author": "OpenVidu",
"license": "Apache-2.0",
@@ -27,7 +27,7 @@
"package.json"
],
"scripts": {
"build": "tsc -p tsconfig.prod.json",
"build": "tsc -p tsconfig.prod.json && pnpm run doc:api",
"build:watch": "tsc -p tsconfig.prod.json --watch",
"doc:api": "mkdir -p public/openapi && cd openapi && openapi-generate-html -i openvidu-meet-api.yaml --ui=stoplight --theme=light --title 'OpenVidu Meet REST API' --description 'OpenVidu Meet REST API' -o ../public/openapi/public.html",
"doc:internal-api": "mkdir -p public/openapi && cd openapi && openapi-generate-html -i openvidu-meet-internal-api.yaml --ui=stoplight --theme=dark --title 'OpenVidu Meet Internal REST API' --description 'OpenVidu Meet Internal REST API' -o ../public/openapi/internal.html",
@@ -73,6 +73,7 @@
"ioredis": "5.6.1",
"jwt-decode": "4.0.0",
"livekit-server-sdk": "2.13.3",
"lodash.merge": "4.6.2",
"mongoose": "8.19.4",
"ms": "2.1.3",
"uid": "2.0.2",
@@ -87,6 +88,7 @@
"@types/cors": "2.8.19",
"@types/express": "4.17.25",
"@types/jest": "29.5.14",
"@types/lodash.merge": "4.6.9",
"@types/ms": "2.1.0",
"@types/node": "22.16.5",
"@types/supertest": "6.0.3",


@@ -41,7 +41,6 @@ import { StorageInitService } from '../services/storage/storage-init.service.js'
import { StorageKeyBuilder, StorageProvider } from '../services/storage/storage.interface.js';
import { StorageFactory } from '../services/storage/storage.factory.js';
import { BlobStorageService } from '../services/storage/blob-storage.service.js';
import { LegacyStorageService } from '../services/storage/legacy-storage.service.js';
import { MigrationService } from '../services/migration.service.js';
import { LiveKitService } from '../services/livekit.service.js';
@@ -55,6 +54,7 @@ import { LivekitWebhookService } from '../services/livekit-webhook.service.js';
import { RoomScheduledTasksService } from '../services/room-scheduled-tasks.service.js';
import { RecordingScheduledTasksService } from '../services/recording-scheduled-tasks.service.js';
import { AnalyticsService } from '../services/analytics.service.js';
import { AiAssistantService } from '../services/ai-assistant.service.js';
export const container: Container = new Container();
@@ -101,7 +101,6 @@ export const registerDependencies = () => {
container.bind(StorageFactory).toSelf().inSingletonScope();
container.bind(BlobStorageService).toSelf().inSingletonScope();
container.bind(StorageInitService).toSelf().inSingletonScope();
container.bind(LegacyStorageService).toSelf().inSingletonScope();
container.bind(MigrationService).toSelf().inSingletonScope();
container.bind(FrontendEventService).toSelf().inSingletonScope();
@@ -115,6 +114,7 @@ export const registerDependencies = () => {
container.bind(RoomScheduledTasksService).toSelf().inSingletonScope();
container.bind(RecordingScheduledTasksService).toSelf().inSingletonScope();
container.bind(AnalyticsService).toSelf().inSingletonScope();
container.bind(AiAssistantService).toSelf().inSingletonScope();
};
const configureStorage = (storageMode: string) => {


@@ -0,0 +1,2 @@
export * from './dependency-injector.config.js';
export * from './internal-config.js';


@@ -49,14 +49,18 @@ export const INTERNAL_CONFIG = {
PARTICIPANT_MAX_CONCURRENT_NAME_REQUESTS: '20', // Maximum number of concurrent requests allowed for the same participant name
PARTICIPANT_NAME_RESERVATION_TTL: '12h' as StringValue, // Time-to-live for participant name reservations
CAPTIONS_AGENT_NAME: 'speech-processing',
// MongoDB Schema Versions
// These define the current schema version for each collection
// Increment when making breaking changes to the schema structure
GLOBAL_CONFIG_SCHEMA_VERSION: 1 as SchemaVersion,
USER_SCHEMA_VERSION: 1 as SchemaVersion,
API_KEY_SCHEMA_VERSION: 1 as SchemaVersion,
ROOM_SCHEMA_VERSION: 1 as SchemaVersion,
RECORDING_SCHEMA_VERSION: 1 as SchemaVersion
// IMPORTANT: whenever you increment a schema version, update the MIGRATION_REV timestamp too.
// This helps surface merge conflicts when multiple branches create schema migrations concurrently.
GLOBAL_CONFIG_SCHEMA_VERSION: 1 as SchemaVersion, // MIGRATION_REV: 1771328577054
USER_SCHEMA_VERSION: 1 as SchemaVersion, // MIGRATION_REV: 1771328577054
API_KEY_SCHEMA_VERSION: 1 as SchemaVersion, // MIGRATION_REV: 1771328577054
ROOM_SCHEMA_VERSION: 2 as SchemaVersion, // MIGRATION_REV: 1771328577054
RECORDING_SCHEMA_VERSION: 2 as SchemaVersion // MIGRATION_REV: 1771328577054
};
// This function is used to set private configuration values for testing purposes.


@@ -0,0 +1,69 @@
import { Request, Response } from 'express';
import { container } from '../config/dependency-injector.config.js';
import { handleError } from '../models/error.model.js';
import { AiAssistantService } from '../services/ai-assistant.service.js';
import { LoggerService } from '../services/logger.service.js';
import { RequestSessionService } from '../services/request-session.service.js';
import { TokenService } from '../services/token.service.js';
import { getRoomMemberToken } from '../utils/token.utils.js';
const getRoomMemberIdentityFromRequest = async (req: Request): Promise<string> => {
const tokenService = container.get(TokenService);
const token = getRoomMemberToken(req);
if (!token) {
throw new Error('Room member token not found');
}
const claims = await tokenService.verifyToken(token);
if (!claims.sub) {
throw new Error('Room member token does not include participant identity');
}
return claims.sub;
};
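The helper above relies on the verified token's standard `sub` claim carrying the participant identity. Purely as an illustration of where that claim lives (decoding without verification, unlike the real `TokenService.verifyToken`, which must validate the signature first):

```typescript
// Decode (NOT verify) a JWT payload to inspect the standard `sub` claim.
// Production code must verify the signature first, as the controller above does.
function decodeJwtSub(token: string): string | undefined {
  const parts = token.split('.');
  if (parts.length !== 3) return undefined; // not a JWT shape
  const json = Buffer.from(parts[1], 'base64url').toString('utf8');
  const claims = JSON.parse(json) as { sub?: string };
  return claims.sub;
}
```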
export const createAssistant = async (req: Request, res: Response) => {
const logger = container.get(LoggerService);
const requestSessionService = container.get(RequestSessionService);
const aiAssistantService = container.get(AiAssistantService);
// const payload: MeetCreateAssistantRequest = req.body;
const roomId = requestSessionService.getRoomIdFromToken();
if (!roomId) {
return handleError(res, new Error('Could not resolve room from token'), 'creating assistant');
}
try {
const participantIdentity = await getRoomMemberIdentityFromRequest(req);
logger.verbose(`Creating assistant for participant '${participantIdentity}' in room '${roomId}'`);
const assistant = await aiAssistantService.createLiveCaptionsAssistant(roomId, participantIdentity);
return res.status(200).json(assistant);
} catch (error) {
handleError(res, error, `creating assistant in room '${roomId}'`);
}
};
export const cancelAssistant = async (req: Request, res: Response) => {
const logger = container.get(LoggerService);
const requestSessionService = container.get(RequestSessionService);
const aiAssistantService = container.get(AiAssistantService);
const { assistantId } = req.params;
const roomId = requestSessionService.getRoomIdFromToken();
if (!roomId) {
return handleError(res, new Error('Could not resolve room from token'), 'canceling assistant');
}
try {
const participantIdentity = await getRoomMemberIdentityFromRequest(req);
logger.verbose(
`Canceling assistant '${assistantId}' for participant '${participantIdentity}' in room '${roomId}'`
);
await aiAssistantService.cancelAssistant(assistantId, roomId, participantIdentity);
return res.status(204).send();
} catch (error) {
handleError(res, error, `canceling assistant '${assistantId}' in room '${roomId}'`);
}
};


@@ -1,6 +1,7 @@
import { MeetAppearanceConfig, SecurityConfig, WebhookConfig } from '@openvidu-meet/typings';
import { Request, Response } from 'express';
import { container } from '../config/dependency-injector.config.js';
import { MEET_ENV } from '../environment.js';
import { handleError } from '../models/error.model.js';
import { GlobalConfigService } from '../services/global-config.service.js';
import { LoggerService } from '../services/logger.service.js';
@@ -108,3 +109,16 @@ export const getRoomsAppearanceConfig = async (_req: Request, res: Response) =>
handleError(res, error, 'getting rooms appearance config');
}
};
export const getCaptionsConfig = async (_req: Request, res: Response) => {
const logger = container.get(LoggerService);
logger.verbose('Getting captions config');
try {
const captionsEnabled = MEET_ENV.CAPTIONS_ENABLED === 'true';
return res.status(200).json({ enabled: captionsEnabled });
} catch (error) {
handleError(res, error, 'getting captions config');
}
};
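As the controller above shows, captions are considered enabled only when `MEET_CAPTIONS_ENABLED` is exactly the string `'true'` (the env default is `'false'`); any other value, including `'TRUE'` or `'1'`, disables them. A minimal sketch of that check:

```typescript
// Mirrors `MEET_ENV.CAPTIONS_ENABLED === 'true'`: a strict, case-sensitive
// string comparison against the raw environment value.
function captionsEnabled(raw: string | undefined): boolean {
  return (raw ?? 'false') === 'true';
}
```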


@@ -0,0 +1,10 @@
export * from './analytics.controller.js';
export * from './api-key.controller.js';
export * from './auth.controller.js';
export * from './global-config.controller.js';
export * from './livekit-webhook.controller.js';
export * from './meeting.controller.js';
export * from './recording.controller.js';
export * from './room.controller.js';
export * from './user.controller.js';


@@ -18,11 +18,11 @@ import { getBaseUrl } from '../utils/url.utils.js';
export const startRecording = async (req: Request, res: Response) => {
const logger = container.get(LoggerService);
const recordingService = container.get(RecordingService);
const { roomId } = req.body;
const { roomId, config } = req.body;
logger.info(`Starting recording in room '${roomId}'`);
try {
const recordingInfo = await recordingService.startRecording(roomId);
const recordingInfo = await recordingService.startRecording(roomId, config);
res.setHeader(
'Location',
`${getBaseUrl()}${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingInfo.recordingId}`


@@ -23,6 +23,7 @@ export const MEET_ENV = {
LOG_LEVEL: process.env.MEET_LOG_LEVEL || 'info',
NAME_ID: process.env.MEET_NAME_ID || 'openviduMeet',
BASE_URL: process.env.MEET_BASE_URL ?? '',
BASE_PATH: process.env.MEET_BASE_PATH || '/meet',
EDITION: process.env.MEET_EDITION || 'CE',
// Authentication configuration
@@ -79,10 +80,14 @@ export const MEET_ENV = {
REDIS_SENTINEL_PASSWORD: process.env.MEET_REDIS_SENTINEL_PASSWORD ?? '',
REDIS_SENTINEL_MASTER_NAME: process.env.MEET_REDIS_SENTINEL_MASTER_NAME ?? 'openvidu',
// Live Captions configuration
CAPTIONS_ENABLED: process.env.MEET_CAPTIONS_ENABLED || 'false',
// Deployment configuration
MODULES_FILE: process.env.MODULES_FILE || undefined,
MODULE_NAME: process.env.MODULE_NAME || 'openviduMeet',
ENABLED_MODULES: process.env.ENABLED_MODULES ?? ''
ENABLED_MODULES: process.env.ENABLED_MODULES ?? '',
};
export function checkModuleEnabled() {


@@ -0,0 +1,192 @@
import {
MeetRecordingAudioCodec,
MeetRecordingEncodingOptions,
MeetRecordingEncodingPreset,
MeetRecordingVideoCodec
} from '@openvidu-meet/typings';
import { AudioCodec, EncodingOptions, EncodingOptionsPreset, VideoCodec } from 'livekit-server-sdk';
/**
* Helper class for converting encoding configurations between OpenVidu Meet and LiveKit formats.
* Provides bidirectional conversion for presets, codecs, and advanced encoding options.
*/
export class EncodingConverter {
private constructor() {
// Prevent instantiation of this utility class
}
// Bidirectional mappings for encoding conversions
private static readonly PRESET_MAP = new Map<MeetRecordingEncodingPreset, EncodingOptionsPreset>([
[MeetRecordingEncodingPreset.H264_720P_30, EncodingOptionsPreset.H264_720P_30],
[MeetRecordingEncodingPreset.H264_720P_60, EncodingOptionsPreset.H264_720P_60],
[MeetRecordingEncodingPreset.H264_1080P_30, EncodingOptionsPreset.H264_1080P_30],
[MeetRecordingEncodingPreset.H264_1080P_60, EncodingOptionsPreset.H264_1080P_60],
[MeetRecordingEncodingPreset.PORTRAIT_H264_720P_30, EncodingOptionsPreset.PORTRAIT_H264_720P_30],
[MeetRecordingEncodingPreset.PORTRAIT_H264_720P_60, EncodingOptionsPreset.PORTRAIT_H264_720P_60],
[MeetRecordingEncodingPreset.PORTRAIT_H264_1080P_30, EncodingOptionsPreset.PORTRAIT_H264_1080P_30],
[MeetRecordingEncodingPreset.PORTRAIT_H264_1080P_60, EncodingOptionsPreset.PORTRAIT_H264_1080P_60]
]);
private static readonly VIDEO_CODEC_MAP = new Map<MeetRecordingVideoCodec, VideoCodec>([
[MeetRecordingVideoCodec.H264_BASELINE, VideoCodec.H264_BASELINE],
[MeetRecordingVideoCodec.H264_MAIN, VideoCodec.H264_MAIN],
[MeetRecordingVideoCodec.H264_HIGH, VideoCodec.H264_HIGH],
[MeetRecordingVideoCodec.VP8, VideoCodec.VP8]
]);
private static readonly AUDIO_CODEC_MAP = new Map<MeetRecordingAudioCodec, AudioCodec>([
[MeetRecordingAudioCodec.OPUS, AudioCodec.OPUS],
[MeetRecordingAudioCodec.AAC, AudioCodec.AAC]
]);
/**
* Converts OpenVidu Meet encoding options to LiveKit encoding options.
* Used when starting a recording to translate from Meet format to LiveKit SDK format.
*
* @param encoding - The encoding configuration in OpenVidu Meet format
* @returns The encoding options in LiveKit format (preset or advanced)
*/
static toLivekit(
encoding: MeetRecordingEncodingPreset | MeetRecordingEncodingOptions | undefined
): EncodingOptions | EncodingOptionsPreset | undefined {
if (!encoding) return undefined;
// If it's a preset string
if (typeof encoding === 'string') {
return this.convertPresetToLivekit(encoding);
}
// It's advanced encoding options
return this.convertAdvancedOptionsToLivekit(encoding);
}
/**
* Converts LiveKit encoding options back to OpenVidu Meet format.
* Used when receiving webhook information about a recording.
*
* @param encodingOptions - The encoding options from LiveKit
* @returns The encoding configuration in OpenVidu Meet format
*/
static fromLivekit(
encodingOptions: EncodingOptions | EncodingOptionsPreset | undefined
): MeetRecordingEncodingPreset | MeetRecordingEncodingOptions | undefined {
// When undefined, recording is using default preset but EgressInfo does not specify it.
// Return default preset.
if (encodingOptions === undefined) return MeetRecordingEncodingPreset.H264_720P_30;
// If it's a preset (number enum from LiveKit)
if (typeof encodingOptions === 'number') {
return this.convertPresetFromLivekit(encodingOptions);
}
// It's an EncodingOptions object
return this.convertAdvancedOptionsFromLivekit(encodingOptions);
}
/**
* Converts OpenVidu Meet encoding preset to LiveKit preset.
*/
private static convertPresetToLivekit(preset: MeetRecordingEncodingPreset): EncodingOptionsPreset {
return this.PRESET_MAP.get(preset) ?? EncodingOptionsPreset.H264_720P_30;
}
/**
* Converts LiveKit encoding preset to OpenVidu Meet preset.
*/
private static convertPresetFromLivekit(preset: EncodingOptionsPreset): MeetRecordingEncodingPreset {
for (const [meetPreset, lkPreset] of this.PRESET_MAP) {
if (lkPreset === preset) return meetPreset;
}
return MeetRecordingEncodingPreset.H264_720P_30;
}
/**
* Converts OpenVidu Meet advanced encoding options to LiveKit EncodingOptions.
*/
private static convertAdvancedOptionsToLivekit(options: MeetRecordingEncodingOptions): EncodingOptions {
const encodingOptions = new EncodingOptions();
const { video, audio } = options;
if (video) {
Object.assign(encodingOptions, {
width: video.width,
height: video.height,
framerate: video.framerate,
videoBitrate: video.bitrate,
videoCodec: this.convertVideoCodecToLivekit(video.codec),
keyFrameInterval: video.keyFrameInterval,
depth: video.depth
});
}
if (audio) {
Object.assign(encodingOptions, {
audioBitrate: audio.bitrate,
audioFrequency: audio.frequency,
audioCodec: this.convertAudioCodecToLivekit(audio.codec)
});
}
return encodingOptions;
}
/**
* Converts LiveKit EncodingOptions to OpenVidu Meet advanced encoding options.
*/
private static convertAdvancedOptionsFromLivekit(options: EncodingOptions): MeetRecordingEncodingOptions {
// In Meet, both video and audio are required with all their properties
return {
video: {
width: options.width || 1920,
height: options.height || 1080,
framerate: options.framerate || 30,
codec: this.convertVideoCodecFromLivekit(options.videoCodec),
bitrate: options.videoBitrate || 128,
keyFrameInterval: options.keyFrameInterval || 4,
depth: options.depth || 24 // Use 24 as default when LiveKit returns 0 or undefined
},
audio: {
codec: this.convertAudioCodecFromLivekit(options.audioCodec),
bitrate: options.audioBitrate || 128,
frequency: options.audioFrequency || 44100
}
};
}
/**
* Converts OpenVidu Meet video codec to LiveKit video codec.
*/
private static convertVideoCodecToLivekit(codec: MeetRecordingVideoCodec): VideoCodec {
return this.VIDEO_CODEC_MAP.get(codec) ?? VideoCodec.H264_MAIN;
}
/**
* Converts LiveKit video codec to OpenVidu Meet video codec.
*/
private static convertVideoCodecFromLivekit(codec: VideoCodec): MeetRecordingVideoCodec {
for (const [meetCodec, lkCodec] of this.VIDEO_CODEC_MAP) {
if (lkCodec === codec) return meetCodec;
}
return MeetRecordingVideoCodec.H264_MAIN;
}
/**
* Converts OpenVidu Meet audio codec to LiveKit audio codec.
*/
private static convertAudioCodecToLivekit(codec: MeetRecordingAudioCodec): AudioCodec {
return this.AUDIO_CODEC_MAP.get(codec) ?? AudioCodec.OPUS;
}
/**
* Converts LiveKit audio codec to OpenVidu Meet audio codec.
*/
private static convertAudioCodecFromLivekit(codec: AudioCodec): MeetRecordingAudioCodec {
for (const [meetCodec, lkCodec] of this.AUDIO_CODEC_MAP) {
if (lkCodec === codec) return meetCodec;
}
return MeetRecordingAudioCodec.OPUS;
}
}
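`EncodingConverter` keeps each mapping in a single `Map` and derives the inverse by iterating it, so the two directions cannot drift apart, and unknown values fall back to the defaults (`H264_720P_30`, `H264_MAIN`, `OPUS`). A stripped-down sketch of that pattern, using string/number stand-ins rather than the real enums:

```typescript
// One source of truth for the mapping; forward lookup is O(1),
// the inverse lookup scans the same Map. Unknown inputs fall back to a default.
const PRESET_MAP = new Map<string, number>([
  ['H264_720P_30', 1],   // stand-in numeric values, not the real LiveKit enum
  ['H264_1080P_30', 3]
]);

function toLivekitPreset(preset: string): number {
  return PRESET_MAP.get(preset) ?? 1; // default preset
}

function fromLivekitPreset(lkPreset: number): string {
  for (const [meet, lk] of PRESET_MAP) {
    if (lk === lkPreset) return meet;
  }
  return 'H264_720P_30'; // default when LiveKit reports an unmapped preset
}
```

Because the inverse iterates the forward map, adding one entry extends both directions at once.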


@@ -0,0 +1,6 @@
export * from './ov-components-adapter.helper.js';
export * from './password.helper.js';
export * from './recording.helper.js';
export * from './redis.helper.js';
export * from './room.helper.js';


@@ -1,8 +1,15 @@
import { EgressStatus } from '@livekit/protocol';
import { MeetRecordingInfo, MeetRecordingStatus } from '@openvidu-meet/typings';
import {
MeetRecordingEncodingOptions,
MeetRecordingEncodingPreset,
MeetRecordingInfo,
MeetRecordingLayout,
MeetRecordingStatus
} from '@openvidu-meet/typings';
import { EgressInfo } from 'livekit-server-sdk';
import { container } from '../config/dependency-injector.config.js';
import { RoomService } from '../services/room.service.js';
import { EncodingConverter } from './encoding-converter.helper.js';
export class RecordingHelper {
private constructor() {
@@ -19,7 +26,8 @@ export class RecordingHelper {
const filename = RecordingHelper.extractFilename(egressInfo);
const recordingId = RecordingHelper.extractRecordingIdFromEgress(egressInfo);
const { roomName: roomId, errorCode, error, details } = egressInfo;
const layout = RecordingHelper.extractRecordingLayout(egressInfo);
const encoding = RecordingHelper.extractRecordingEncoding(egressInfo);
const roomService = container.get(RoomService);
const { roomName } = await roomService.getMeetRoom(roomId);
@@ -28,6 +36,8 @@
roomId,
roomName,
// outputMode,
layout,
encoding,
status,
filename,
startDate: startDateMs,
@@ -138,6 +148,44 @@
return `${meetRoomId}--${egressId}--${uid}`;
}
static extractRecordingLayout(egressInfo: EgressInfo): MeetRecordingLayout | undefined {
if (egressInfo.request.case !== 'roomComposite') return undefined;
const { layout } = egressInfo.request.value;
switch (layout) {
case 'grid':
return MeetRecordingLayout.GRID;
case 'speaker':
return MeetRecordingLayout.SPEAKER;
case 'single-speaker':
return MeetRecordingLayout.SINGLE_SPEAKER;
default:
return MeetRecordingLayout.GRID; // Default layout
}
}
/**
* Extracts the encoding configuration from EgressInfo.
* Converts LiveKit encoding options back to OpenVidu Meet format.
*
* @param egressInfo - The egress information from LiveKit
* @returns The encoding configuration in OpenVidu Meet format (preset or advanced options)
*/
static extractRecordingEncoding(
egressInfo: EgressInfo
): MeetRecordingEncodingPreset | MeetRecordingEncodingOptions | undefined {
if (egressInfo.request.case !== 'roomComposite') return undefined;
const { options } = egressInfo.request.value;
// Extract encoding based on type (preset or advanced)
const encodingOptions =
options.case === 'preset' ? options.value : options.case === 'advanced' ? options.value : undefined;
return EncodingConverter.fromLivekit(encodingOptions);
}
/**
* Extracts the room name, egressId, and UID from the given recordingId.
* @param recordingId ${roomId}--${egressId}--${uid}

View File

@ -45,4 +45,16 @@ export class MeetLock {
return `${RedisLockPrefix.BASE}${RedisLockName.WEBHOOK}_${webhookEvent.event}_${webhookEvent.id}`;
}
static getAiAssistantLock(roomId: string, capabilityName: string): string {
if (!roomId) {
throw new Error('roomId must be a non-empty string');
}
if (!capabilityName) {
throw new Error('capabilityName must be a non-empty string');
}
return `${RedisLockPrefix.BASE}${RedisLockName.AI_ASSISTANT}_${roomId}_${capabilityName}`;
}
}

View File

@ -0,0 +1,18 @@
// Middlewares
export * from './auth.middleware.js';
export * from './base-url.middleware.js';
export * from './content-type.middleware.js';
export * from './participant.middleware.js';
export * from './recording.middleware.js';
export * from './request-context.middleware.js';
export * from './room.middleware.js';
// Request validators
export * from './request-validators/auth-validator.middleware.js';
export * from './request-validators/ai-assistant-validator.middleware.js';
export * from './request-validators/config-validator.middleware.js';
export * from './request-validators/meeting-validator.middleware.js';
export * from './request-validators/recording-validator.middleware.js';
export * from './request-validators/room-validator.middleware.js';
export * from './request-validators/user-validator.middleware.js';

View File

@ -46,9 +46,20 @@ export const withCanRecordPermission = async (req: Request, res: Response, next:
const requestSessionService = container.get(RequestSessionService);
const tokenRoomId = requestSessionService.getRoomIdFromToken();
/**
* If there is no token, the user is allowed to access the resource for one of the following reasons:
*
* - The request is invoked using the API key.
* - The user is admin.
*/
if (!tokenRoomId) {
return next();
}
const permissions = requestSessionService.getRoomMemberMeetPermissions();
if (!tokenRoomId || !permissions) {
if (!permissions) {
const error = errorInsufficientPermissions();
return rejectRequestFromMeetError(res, error);
}

View File

@ -0,0 +1,26 @@
import { NextFunction, Request, Response } from 'express';
import { rejectUnprocessableRequest } from '../../models/error.model.js';
import { AssistantIdSchema, CreateAssistantReqSchema } from '../../models/zod-schemas/ai-assistant.schema.js';
export const validateCreateAssistantReq = (req: Request, res: Response, next: NextFunction) => {
const { success, error, data } = CreateAssistantReqSchema.safeParse(req.body);
if (!success) {
return rejectUnprocessableRequest(res, error);
}
req.body = data;
next();
};
export const validateAssistantIdPathParam = (req: Request, res: Response, next: NextFunction) => {
const { success, error, data } = AssistantIdSchema.safeParse(req.params.assistantId);
if (!success) {
error.errors[0].path = ['assistantId'];
return rejectUnprocessableRequest(res, error);
}
req.params.assistantId = data;
next();
};

View File

@ -21,7 +21,6 @@ The schema migration system enables safe evolution of MongoDB document structure
- ✅ **HA-safe** (distributed locking prevents concurrent migrations)
- ✅ **Batch processing** (efficient handling of large collections)
- ✅ **Progress tracking** (migrations stored in `MeetMigration` collection)
- ✅ **Version validation** (optional runtime checks in repositories)
---
@ -47,7 +46,6 @@ Each document includes a `schemaVersion` field:
```
src/
├── migrations/
│ ├── base-migration.ts # Base class for migrations
│ ├── migration-registry.ts # Central registry of all collections
│ ├── room-migrations.ts # Room-specific migrations
│ ├── recording-migrations.ts # Recording-specific migrations
@ -59,100 +57,13 @@ src/
└── migration.model.ts # Migration types and interfaces
```
**Note**: All migration types and interfaces (`ISchemaMigration`, `MigrationContext`, `MigrationResult`, `SchemaVersion`, `CollectionMigrationRegistry`) are defined in `src/models/migration.model.ts` for better code organization.
---
## Adding New Migrations
### Step 1: Update Schema Version in Configuration
### Step 1: Update TypeScript Interface
In `src/config/internal-config.ts`, increment the version constant:
```typescript
// internal-config.ts
export const INTERNAL_CONFIG = {
// ... other config
ROOM_SCHEMA_VERSION: 2 // Was 1
// ...
};
```
### Step 2: Create Migration Class
```typescript
import { BaseSchemaMigration } from './base-migration.js';
import { MeetRoomDocument } from '../repositories/schemas/room.schema.js';
import { MigrationContext } from '../models/migration.model.js';
import { Model } from 'mongoose';
class RoomMigrationV1ToV2 extends BaseSchemaMigration<MeetRoomDocument> {
fromVersion = 1;
toVersion = 2;
description = 'Add maxParticipants field with default value of 100';
protected async transform(document: MeetRoomDocument): Promise<Partial<MeetRoomDocument>> {
// Return fields to update (schemaVersion is handled automatically)
return {
maxParticipants: 100
};
}
// Optional: Add validation before migration runs
async validate(model: Model<MeetRoomDocument>, context: MigrationContext): Promise<boolean> {
// Check prerequisites, data integrity, etc.
return true;
}
}
```
### Step 3: Register Migration
Add the migration instance to the migrations array in `room-migrations.ts`:
```typescript
import { ISchemaMigration } from '../models/migration.model.js';
import { MeetRoomDocument } from '../repositories/schemas/room.schema.js';
export const roomMigrations: ISchemaMigration<MeetRoomDocument>[] = [
new RoomMigrationV1ToV2()
// Future migrations will be added here
];
```
### Step 4: Update Schema Definition
Update the Mongoose schema default version in `internal-config.ts`:
```typescript
// config/internal-config.ts
export const INTERNAL_CONFIG = {
// ... other config
ROOM_SCHEMA_VERSION: 2 // Updated from 1
// ...
};
```
If adding new required fields, update the Mongoose schema:
```typescript
// repositories/schemas/room.schema.ts
import { INTERNAL_CONFIG } from '../../config/internal-config.js';
const MeetRoomSchema = new Schema<MeetRoomDocument>({
schemaVersion: {
type: Number,
required: true,
default: INTERNAL_CONFIG.ROOM_SCHEMA_VERSION // Uses config value (2)
},
// ... existing fields ...
maxParticipants: { type: Number, required: true, default: 100 } // New field
});
```
### Step 5: Update TypeScript Interface
Update the domain interface to include new fields:
Update the domain interface to include new fields or changes:
```typescript
// typings/src/room.ts
@ -163,6 +74,66 @@ export interface MeetRoom extends MeetRoomOptions {
}
```
### Step 2: Update Schema Version in Configuration
In `src/config/internal-config.ts`, increment the version constant and update the `MIGRATION_REV` timestamp comment on the same line:
```typescript
// internal-config.ts
export const INTERNAL_CONFIG = {
// ... other config
ROOM_SCHEMA_VERSION: 2 as SchemaVersion // MIGRATION_REV: 1771328577054
// ...
};
```
`MIGRATION_REV` is a unique marker (current timestamp in milliseconds) used to make concurrent schema-version bumps more visible during Git merges.
If a merge conflict appears on that line, it means multiple migrations were created in parallel; resolve it by:
1. Keeping all migration code changes.
2. Re-evaluating the final schema version number.
3. Updating `MIGRATION_REV` again with a new timestamp.
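For illustration, a conflict on that line might look like this (branch names and timestamps are made up for the example):

```
<<<<<<< feature-a
	ROOM_SCHEMA_VERSION: 2 as SchemaVersion // MIGRATION_REV: 1771328577054
=======
	ROOM_SCHEMA_VERSION: 2 as SchemaVersion // MIGRATION_REV: 1771328581122
>>>>>>> feature-b
```

Here both branches bumped the room schema to version 2, so the resolution would keep both branches' migration code, renumber one migration chain so the final version is 3, and write a fresh timestamp into `MIGRATION_REV`.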
### Step 3: Update Mongoose Schema
Update the Mongoose schema to reflect the changes (new fields, etc.):
```typescript
// models/mongoose-schemas/room.schema.ts
const MeetRoomSchema = new Schema<MeetRoomDocument>({
// ... existing fields ...
maxParticipants: { type: Number, required: true, default: 100 } // New field
});
```
### Step 4: Create Migration Definition
```typescript
import { SchemaTransform, generateSchemaMigrationName } from '../models/migration.model.js';
import { meetRoomCollectionName, MeetRoomDocument } from '../models/mongoose-schemas/room.schema.js';
const roomMigrationV1ToV2Name = generateSchemaMigrationName(meetRoomCollectionName, 1, 2);
const roomMigrationV1ToV2Transform: SchemaTransform<MeetRoomDocument> = (room) => {
room.maxParticipants = 100;
return room;
};
```
`transform` must return the updated document instance.
It can mutate the received document by adding, removing, or modifying fields as needed to conform to the new schema version.
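As a sketch of these semantics, the following self-contained example uses a hypothetical `LegacyRoomDoc` shape (real transforms operate on Mongoose documents such as `MeetRoomDocument`) to show a transform that both adds and removes fields:

```typescript
// Hypothetical document shape for illustration only.
interface LegacyRoomDoc {
	schemaVersion?: number;
	maxParticipants?: number;
	deprecatedFlag?: boolean;
}

type SchemaTransform<TDocument> = (document: TDocument) => TDocument;

const exampleTransform: SchemaTransform<LegacyRoomDoc> = (room) => {
	room.maxParticipants = 100; // add a new field with its default value
	delete room.deprecatedFlag; // drop a field removed in the new version
	return room; // always return the mutated instance
};

const migrated = exampleTransform({ schemaVersion: 1, deprecatedFlag: true });
console.log(migrated.maxParticipants); // 100
```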
### Step 5: Register Migration
Add the migration to the map initialization in `room-migrations.ts`:
```typescript
export const roomMigrations: SchemaMigrationMap<MeetRoomDocument> = new Map([
[roomMigrationV1ToV2Name, roomMigrationV1ToV2Transform]
]);
```
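The map key produced by `generateSchemaMigrationName` follows the `schema_{collection}_v{from}_to_v{to}` convention. A self-contained sketch mirroring the helper defined in `migration.model.ts`:

```typescript
type SchemaMigrationName = `schema_${string}_v${number}_to_v${number}`;

function generateSchemaMigrationName(
	collectionName: string,
	fromVersion: number,
	toVersion: number
): SchemaMigrationName {
	// Remove the 'Meet' prefix, then lowercase the remainder
	const simpleName = collectionName.replace(/^Meet/, '').toLowerCase();
	return `schema_${simpleName}_v${fromVersion}_to_v${toVersion}`;
}

console.log(generateSchemaMigrationName('MeetRoom', 1, 2)); // 'schema_room_v1_to_v2'
```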
### Step 6: Test Migration
1. Start the application - migrations run automatically
@ -187,7 +158,6 @@ Each migration is tracked in the `MeetMigration` collection:
"fromVersion": 1,
"toVersion": 2,
"migratedCount": 1523,
"skippedCount": 0,
"failedCount": 0,
"durationMs": 123000
}

View File

@ -1,24 +1,19 @@
import { ISchemaMigration } from '../models/migration.model.js';
import { SchemaMigrationMap } from '../models/migration.model.js';
import { MeetApiKeyDocument } from '../models/mongoose-schemas/api-key.schema.js';
/**
* All migrations for the MeetApiKey collection in chronological order.
* Add new migrations to this array as the schema evolves.
* Schema migrations for MeetApiKey.
* Key format: schema_{collection}_v{from}_to_v{to}
*
* Example migration (when needed in the future):
* Example:
*
* class ApiKeyMigrationV1ToV2 extends BaseSchemaMigration<MeetApiKeyDocument> {
* fromVersion = 1;
* toVersion = 2;
* description = 'Add expirationDate field for API key expiration';
* const apiKeyMigrationV1ToV2Name = generateSchemaMigrationName(meetApiKeyCollectionName, 1, 2);
*
* protected async transform(document: MeetApiKeyDocument): Promise<Partial<MeetApiKeyDocument>> {
* return {
* expirationDate: undefined // No expiration for existing keys
* };
* }
* }
* const apiKeyMigrationV1ToV2Transform: SchemaTransform<MeetApiKeyDocument> = (apiKey) => {
* apiKey.expirationDate = undefined;
* return apiKey;
* };
*/
export const apiKeyMigrations: ISchemaMigration<MeetApiKeyDocument>[] = [
// Migrations will be added here as the schema evolves
];
export const apiKeyMigrations: SchemaMigrationMap<MeetApiKeyDocument> = new Map([
// [apiKeyMigrationV1ToV2Name, apiKeyMigrationV1ToV2Transform]
]);

View File

@ -1,124 +0,0 @@
import { Model } from 'mongoose';
import { ISchemaMigration, MigrationContext, MigrationResult, SchemaVersion } from '../models/migration.model.js';
/**
* Base class for schema migrations providing common functionality.
* Extend this class to implement specific migrations for collections.
*/
export abstract class BaseSchemaMigration<TDocument> implements ISchemaMigration<TDocument> {
abstract fromVersion: SchemaVersion;
abstract toVersion: SchemaVersion;
abstract description: string;
/**
* Default batch size for processing documents.
* Can be overridden in subclasses for collections with large documents.
*/
protected readonly defaultBatchSize = 50;
/**
* Executes the migration in batches.
* Processes all documents at fromVersion and upgrades them to toVersion.
*/
async execute(model: Model<TDocument>, context: MigrationContext): Promise<MigrationResult> {
const startTime = Date.now();
const batchSize = context.batchSize || this.defaultBatchSize;
let migratedCount = 0;
const skippedCount = 0;
let failedCount = 0;
context.logger.info(
`Starting schema migration: ${this.description} (v${this.fromVersion} -> v${this.toVersion})`
);
try {
// Find all documents at the source version
const totalDocs = await model.countDocuments({ schemaVersion: this.fromVersion }).exec();
if (totalDocs === 0) {
context.logger.info('No documents to migrate');
return {
migratedCount: 0,
skippedCount: 0,
failedCount: 0,
durationMs: Date.now() - startTime
};
}
context.logger.info(`Found ${totalDocs} documents to migrate`);
// Process documents in batches
let processedCount = 0;
while (processedCount < totalDocs) {
const documents = await model.find({ schemaVersion: this.fromVersion }).limit(batchSize).exec();
if (documents.length === 0) {
break;
}
// Transform and update each document
for (const doc of documents) {
try {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
const updates = await this.transform(doc as any);
// Update the document with new fields and version
await model
.updateOne(
{ _id: doc._id },
{
$set: {
...updates,
schemaVersion: this.toVersion
}
}
)
.exec();
migratedCount++;
} catch (error) {
failedCount++;
context.logger.warn(`Failed to migrate document ${doc._id}:`, error);
}
}
processedCount += documents.length;
context.logger.debug(`Processed ${processedCount}/${totalDocs} documents`);
}
const durationMs = Date.now() - startTime;
context.logger.info(
`Migration completed: ${migratedCount} migrated, ${failedCount} failed (${durationMs}ms)`
);
return {
migratedCount,
skippedCount,
failedCount,
durationMs
};
} catch (error) {
context.logger.error('Migration failed:', error);
throw error;
}
}
/**
* Transform a single document from source version to target version.
* Override this method to implement the specific transformation logic.
*
* @param document - The document to transform
* @returns Object with fields to update (excluding schemaVersion which is handled automatically)
*/
protected abstract transform(document: TDocument): Promise<Partial<TDocument>>;
/**
* Optional validation before running migration.
* Default implementation always returns true.
*/
// eslint-disable-next-line @typescript-eslint/no-unused-vars
async validate(_model: Model<TDocument>, _context: MigrationContext): Promise<boolean> {
return true;
}
}

View File

@ -1,27 +1,19 @@
import { ISchemaMigration } from '../models/migration.model.js';
import { SchemaMigrationMap } from '../models/migration.model.js';
import { MeetGlobalConfigDocument } from '../models/mongoose-schemas/global-config.schema.js';
/**
* All migrations for the MeetGlobalConfig collection in chronological order.
* Add new migrations to this array as the schema evolves.
* Schema migrations for MeetGlobalConfig.
* Key format: schema_{collection}_v{from}_to_v{to}
*
* Example migration (when needed in the future):
* Example:
*
* class GlobalConfigMigrationV1ToV2 extends BaseSchemaMigration<MeetGlobalConfigDocument> {
* fromVersion = 1;
* toVersion = 2;
* description = 'Add new branding configuration section';
* const globalConfigMigrationV1ToV2Name = generateSchemaMigrationName(meetGlobalConfigCollectionName, 1, 2);
*
* protected async transform(document: MeetGlobalConfigDocument): Promise<Partial<MeetGlobalConfigDocument>> {
* return {
* brandingConfig: {
* logoUrl: '',
* companyName: 'OpenVidu Meet'
* }
* };
* }
* }
* const globalConfigMigrationV1ToV2Transform: SchemaTransform<MeetGlobalConfigDocument> = (globalConfig) => {
* globalConfig.newField = 'defaultValue';
* return globalConfig;
* };
*/
export const globalConfigMigrations: ISchemaMigration<MeetGlobalConfigDocument>[] = [
// Migrations will be added here as the schema evolves
];
export const globalConfigMigrations: SchemaMigrationMap<MeetGlobalConfigDocument> = new Map([
// [globalConfigMigrationV1ToV2Name, globalConfigMigrationV1ToV2Transform]
]);

View File

@ -0,0 +1,6 @@
export * from './api-key-migrations.js';
export * from './global-config-migrations.js';
export * from './migration-registry.js';
export * from './recording-migrations.js';
export * from './room-migrations.js';
export * from './user-migrations.js';

View File

@ -1,13 +1,22 @@
import { INTERNAL_CONFIG } from '../config/internal-config.js';
import { CollectionMigrationRegistry } from '../models/migration.model.js';
import { meetApiKeyCollectionName, MeetApiKeyModel } from '../models/mongoose-schemas/api-key.schema.js';
import { CollectionMigrationRegistry, SchemaMigratableDocument } from '../models/migration.model.js';
import {
meetApiKeyCollectionName,
MeetApiKeyDocument,
MeetApiKeyModel
} from '../models/mongoose-schemas/api-key.schema.js';
import {
meetGlobalConfigCollectionName,
MeetGlobalConfigDocument,
MeetGlobalConfigModel
} from '../models/mongoose-schemas/global-config.schema.js';
import { meetRecordingCollectionName, MeetRecordingModel } from '../models/mongoose-schemas/recording.schema.js';
import { meetRoomCollectionName, MeetRoomModel } from '../models/mongoose-schemas/room.schema.js';
import { meetUserCollectionName, MeetUserModel } from '../models/mongoose-schemas/user.schema.js';
import {
meetRecordingCollectionName,
MeetRecordingDocument,
MeetRecordingModel
} from '../models/mongoose-schemas/recording.schema.js';
import { meetRoomCollectionName, MeetRoomDocument, MeetRoomModel } from '../models/mongoose-schemas/room.schema.js';
import { meetUserCollectionName, MeetUserDocument, MeetUserModel } from '../models/mongoose-schemas/user.schema.js';
import { apiKeyMigrations } from './api-key-migrations.js';
import { globalConfigMigrations } from './global-config-migrations.js';
import { recordingMigrations } from './recording-migrations.js';
@ -16,12 +25,18 @@ import { userMigrations } from './user-migrations.js';
/**
* Central registry of all collection migrations.
* Defines the current version and migration path for each collection.
* Defines the current version and migration map for each collection.
*
* Order matters: collections should be listed in dependency order.
* For example, if recordings depend on rooms, rooms should come first.
*/
export const migrationRegistry: CollectionMigrationRegistry[] = [
const migrationRegistry: [
CollectionMigrationRegistry<MeetGlobalConfigDocument>,
CollectionMigrationRegistry<MeetUserDocument>,
CollectionMigrationRegistry<MeetApiKeyDocument>,
CollectionMigrationRegistry<MeetRoomDocument>,
CollectionMigrationRegistry<MeetRecordingDocument>
] = [
// GlobalConfig - no dependencies, can run first
{
collectionName: meetGlobalConfigCollectionName,
@ -59,3 +74,10 @@ export const migrationRegistry: CollectionMigrationRegistry[] = [
migrations: recordingMigrations
}
];
/**
* Homogeneous runtime view of the migration registry.
* Used by migration execution code that iterates over all collections.
*/
export const runtimeMigrationRegistry =
migrationRegistry as unknown as CollectionMigrationRegistry<SchemaMigratableDocument>[];

View File

@ -1,24 +1,19 @@
import { ISchemaMigration } from '../models/migration.model.js';
import { MeetRecordingDocument } from '../models/mongoose-schemas/recording.schema.js';
import { MeetRecordingEncodingPreset, MeetRecordingLayout } from '@openvidu-meet/typings';
import { generateSchemaMigrationName, SchemaMigrationMap, SchemaTransform } from '../models/migration.model.js';
import { meetRecordingCollectionName, MeetRecordingDocument } from '../models/mongoose-schemas/recording.schema.js';
const recordingMigrationV1ToV2Name = generateSchemaMigrationName(meetRecordingCollectionName, 1, 2);
const recordingMigrationV1ToV2Transform: SchemaTransform<MeetRecordingDocument> = (recording) => {
recording.layout = MeetRecordingLayout.GRID;
recording.encoding = MeetRecordingEncodingPreset.H264_720P_30;
return recording;
};
/**
* All migrations for the MeetRecording collection in chronological order.
* Add new migrations to this array as the schema evolves.
*
* Example migration (when needed in the future):
*
* class RecordingMigrationV1ToV2 extends BaseSchemaMigration<MeetRecordingDocument> {
* fromVersion = 1;
* toVersion = 2;
* description = 'Add new optional field "quality" for recording quality tracking';
*
* protected async transform(document: MeetRecordingDocument): Promise<Partial<MeetRecordingDocument>> {
* return {
* quality: 'standard' // Default quality for existing recordings
* };
* }
* }
* Schema migrations for MeetRecording.
* Key format: schema_{collection}_v{from}_to_v{to}
*/
export const recordingMigrations: ISchemaMigration<MeetRecordingDocument>[] = [
// Migrations will be added here as the schema evolves
];
export const recordingMigrations: SchemaMigrationMap<MeetRecordingDocument> = new Map([
[recordingMigrationV1ToV2Name, recordingMigrationV1ToV2Transform]
]);

View File

@ -1,26 +1,20 @@
import { ISchemaMigration } from '../models/migration.model.js';
import { MeetRoomDocument } from '../models/mongoose-schemas/room.schema.js';
import { MeetRecordingEncodingPreset, MeetRecordingLayout } from '@openvidu-meet/typings';
import { generateSchemaMigrationName, SchemaMigrationMap, SchemaTransform } from '../models/migration.model.js';
import { meetRoomCollectionName, MeetRoomDocument } from '../models/mongoose-schemas/room.schema.js';
const roomMigrationV1ToV2Name = generateSchemaMigrationName(meetRoomCollectionName, 1, 2);
const roomMigrationV1ToV2Transform: SchemaTransform<MeetRoomDocument> = (room) => {
room.config.captions = { enabled: true };
room.config.recording.layout = MeetRecordingLayout.GRID;
room.config.recording.encoding = MeetRecordingEncodingPreset.H264_720P_30;
return room;
};
/**
* All migrations for the MeetRoom collection in chronological order.
* Add new migrations to this array as the schema evolves.
*
* Example migration (when needed in the future):
*
* class RoomMigrationV1ToV2 extends BaseSchemaMigration<MeetRoomDocument> {
* fromVersion = 1;
* toVersion = 2;
* description = 'Add new required field "maxParticipants" with default value';
*
* protected async transform(document: MeetRoomDocument): Promise<Partial<MeetRoomDocument>> {
* return {
* maxParticipants: 100 // Add default value for existing rooms
* };
* }
* }
* Schema migrations for MeetRoom.
* Key format: schema_{collection}_v{from}_to_v{to}
*/
export const roomMigrations: ISchemaMigration<MeetRoomDocument>[] = [
// Migrations will be added here as the schema evolves
// Example: new RoomMigrationV1ToV2(),
// Example: new RoomMigrationV2ToV3(),
];
export const roomMigrations: SchemaMigrationMap<MeetRoomDocument> = new Map([
[roomMigrationV1ToV2Name, roomMigrationV1ToV2Transform]
]);

View File

@ -1,24 +1,19 @@
import { ISchemaMigration } from '../models/migration.model.js';
import { SchemaMigrationMap } from '../models/migration.model.js';
import { MeetUserDocument } from '../models/mongoose-schemas/user.schema.js';
/**
* All migrations for the MeetUser collection in chronological order.
* Add new migrations to this array as the schema evolves.
* Schema migrations for MeetUser.
* Key format: schema_{collection}_v{from}_to_v{to}
*
* Example migration (when needed in the future):
* Example:
*
* class UserMigrationV1ToV2 extends BaseSchemaMigration<MeetUserDocument> {
* fromVersion = 1;
* toVersion = 2;
* description = 'Add email field for user notifications';
* const userMigrationV1ToV2Name = generateSchemaMigrationName(meetUserCollectionName, 1, 2);
*
* protected async transform(document: MeetUserDocument): Promise<Partial<MeetUserDocument>> {
* return {
* email: undefined // Email will be optional initially
* };
* }
* }
* const userMigrationV1ToV2Transform: SchemaTransform<MeetUserDocument> = (user) => {
* user.newField = 'defaultValue';
* return user;
* };
*/
export const userMigrations: ISchemaMigration<MeetUserDocument>[] = [
// Migrations will be added here as the schema evolves
];
export const userMigrations: SchemaMigrationMap<MeetUserDocument> = new Map([
// [userMigrationV1ToV2Name, userMigrationV1ToV2Transform]
]);

View File

@ -0,0 +1,26 @@
// Core models
export * from './db-pagination.model.js';
export * from './distributed-event.model.js';
export * from './error.model.js';
export * from './migration.model.js';
export * from './ov-components-signal.model.js';
export * from './redis.model.js';
export * from './request-context.model.js';
export * from './task-scheduler.model.js';
// Mongoose schemas
export * from './mongoose-schemas/api-key.schema.js';
export * from './mongoose-schemas/global-config.schema.js';
export * from './mongoose-schemas/migration.schema.js';
export * from './mongoose-schemas/recording.schema.js';
export * from './mongoose-schemas/room.schema.js';
export * from './mongoose-schemas/user.schema.js';
// Zod schemas
export * from './zod-schemas/auth.schema.js';
export * from './zod-schemas/global-config.schema.js';
export * from './zod-schemas/meeting.schema.js';
export * from './zod-schemas/recording.schema.js';
export * from './zod-schemas/room.schema.js';
export * from './zod-schemas/user.schema.js';

View File

@ -1,5 +1,4 @@
import { Model } from 'mongoose';
import { LoggerService } from '../services/logger.service.js';
import { Document, Model } from 'mongoose';
/**
* Interface representing a migration document in MongoDB.
@ -58,19 +57,11 @@ export enum MigrationStatus {
}
/**
* Enum defining all possible migration names in the system.
* Each migration should have a unique identifier.
*
* Schema migrations follow the pattern: schema_{collection}_v{from}_to_v{to}
* Example: 'schema_room_v1_to_v2', 'schema_recording_v2_to_v3'
*/
export enum MigrationName {
/**
* Migration from legacy storage (S3, ABS, GCS) to MongoDB.
* Includes: GlobalConfig, Users, ApiKeys, Rooms, and Recordings.
*/
LEGACY_STORAGE_TO_MONGODB = 'legacy_storage_to_mongodb'
}
export type SchemaMigrationName = `schema_${string}_v${number}_to_v${number}`;
export type MigrationName = SchemaMigrationName;
/**
* Generates a migration name for schema version upgrades.
@ -83,12 +74,49 @@ export enum MigrationName {
* @example
* generateSchemaMigrationName('MeetRoom', 1, 2) // Returns: 'schema_room_v1_to_v2'
*/
export function generateSchemaMigrationName(collectionName: string, fromVersion: number, toVersion: number): string {
export function generateSchemaMigrationName(
collectionName: string,
fromVersion: number,
toVersion: number
): SchemaMigrationName {
// Convert collection name to lowercase and remove 'Meet' prefix
const simpleName = collectionName.replace(/^Meet/, '').toLowerCase();
return `schema_${simpleName}_v${fromVersion}_to_v${toVersion}`;
}
/**
* Checks whether a string matches the schema migration naming convention.
*/
export function isSchemaMigrationName(name: string): name is SchemaMigrationName {
return /^schema_[a-z0-9_]+_v\d+_to_v\d+$/.test(name);
}
/**
* Parses a schema migration name and extracts entity and versions.
*/
export function parseSchemaMigrationName(
name: string
): { collectionName: string; fromVersion: SchemaVersion; toVersion: SchemaVersion } | null {
const match = /^schema_([a-z0-9_]+)_v(\d+)_to_v(\d+)$/.exec(name);
if (!match) {
return null;
}
return {
collectionName: match[1],
fromVersion: Number(match[2]),
toVersion: Number(match[3])
};
}
/**
* Base document shape required for schema migrations.
*/
export interface SchemaMigratableDocument extends Document {
schemaVersion?: number;
}
/**
* Represents a schema version number.
* Versions start at 1 and increment sequentially.
@ -96,14 +124,41 @@ export function generateSchemaMigrationName(collectionName: string, fromVersion:
export type SchemaVersion = number;
/**
* Context provided to migration functions.
* Contains utilities and services needed during migration.
* Function that transforms a document and returns the updated document.
*/
export interface MigrationContext {
/** Logger service for tracking migration progress */
logger: LoggerService;
/** Batch size for processing documents (default: 50) */
batchSize?: number;
export type SchemaTransform<TDocument extends SchemaMigratableDocument> = (document: TDocument) => TDocument;
/**
* Map of schema migration names to transform functions.
*/
export type SchemaMigrationMap<TDocument extends SchemaMigratableDocument> = Map<
SchemaMigrationName,
SchemaTransform<TDocument>
>;
/**
* Resolved migration step ready to be executed.
*/
export interface SchemaMigrationStep<TDocument extends SchemaMigratableDocument> {
name: SchemaMigrationName;
fromVersion: SchemaVersion;
toVersion: SchemaVersion;
transform: SchemaTransform<TDocument>;
}
/**
* Registry entry for a collection's migrations.
* Groups all migrations for a specific collection.
*/
export interface CollectionMigrationRegistry<TDocument extends SchemaMigratableDocument> {
/** Name of the collection */
collectionName: string;
/** Mongoose model for the collection */
model: Model<TDocument>;
/** Current schema version expected by the application */
currentVersion: SchemaVersion;
/** Map of migration names to their transform functions */
migrations: SchemaMigrationMap<TDocument>;
}
/**
@ -113,60 +168,8 @@ export interface MigrationContext {
export interface MigrationResult {
/** Number of documents successfully migrated */
migratedCount: number;
/** Number of documents skipped (already at target version) */
skippedCount: number;
/** Number of documents that failed migration */
failedCount: number;
/** Total time taken in milliseconds */
durationMs: number;
}
/**
* Interface for a single schema migration handler.
* Each migration transforms documents from one version to the next.
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
export interface ISchemaMigration<TDocument = any> {
/** The source schema version this migration upgrades from */
fromVersion: SchemaVersion;
/** The target schema version this migration upgrades to */
toVersion: SchemaVersion;
/** Short description of what this migration does */
description: string;
/**
* Executes the migration on a batch of documents.
* Should update documents using MongoDB bulk operations for efficiency.
*
* @param model - Mongoose model for the collection
* @param context - Migration context with logger and configuration
* @returns Migration result with statistics
*/
execute(model: Model<TDocument>, context: MigrationContext): Promise<MigrationResult>;
/**
* Optional validation to check if migration is safe to run.
* Can verify prerequisites or data integrity before migration starts.
*
* @param model - Mongoose model for the collection
* @param context - Migration context with logger and configuration
* @returns true if migration can proceed, false otherwise
*/
validate?(model: Model<TDocument>, context: MigrationContext): Promise<boolean>;
}
/**
* Registry entry for a collection's migrations.
* Groups all migrations for a specific collection.
*/
export interface CollectionMigrationRegistry {
/** Name of the collection */
collectionName: string;
/** Mongoose model for the collection */
// eslint-disable-next-line @typescript-eslint/no-explicit-any
model: Model<any>;
/** Current schema version expected by the application */
currentVersion: SchemaVersion;
/** Array of migrations in chronological order */
migrations: ISchemaMigration[];
}

View File

@ -1,5 +1,5 @@
import { Document, model, Schema } from 'mongoose';
import { MeetMigration, MigrationName, MigrationStatus } from '../migration.model.js';
import { isSchemaMigrationName, MeetMigration, MigrationStatus } from '../migration.model.js';
/**
* Mongoose Document interface for MeetMigration.
@ -16,7 +16,10 @@ const MigrationSchema = new Schema<MeetMigrationDocument>(
name: {
type: String,
required: true,
enum: Object.values(MigrationName)
validate: {
validator: (value: string) => isSchemaMigrationName(value),
message: 'Invalid migration name format'
}
},
status: {
type: String,

View File

@ -1,4 +1,4 @@
import { MeetRecordingInfo, MeetRecordingStatus } from '@openvidu-meet/typings';
import { MeetRecordingInfo, MeetRecordingLayout, MeetRecordingStatus } from '@openvidu-meet/typings';
import { Document, model, Schema } from 'mongoose';
import { INTERNAL_CONFIG } from '../../config/internal-config.js';
@ -43,6 +43,15 @@ const MeetRecordingSchema = new Schema<MeetRecordingDocument>(
enum: Object.values(MeetRecordingStatus),
required: true
},
layout: {
type: String,
enum: Object.values(MeetRecordingLayout),
required: true
},
encoding: {
type: Schema.Types.Mixed,
required: true
},
filename: {
type: String,
required: false

View File

@ -1,5 +1,6 @@
import {
MeetRecordingAccess,
MeetRecordingLayout,
MeetRoom,
MeetRoomDeletionPolicyWithMeeting,
MeetRoomDeletionPolicyWithRecordings,
@ -49,10 +50,19 @@ const MeetRecordingConfigSchema = new Schema(
type: Boolean,
required: true
},
layout: {
type: String,
enum: Object.values(MeetRecordingLayout),
required: true
},
encoding: {
type: Schema.Types.Mixed,
required: true
},
allowAccessTo: {
type: String,
enum: Object.values(MeetRecordingAccess),
required: false
required: true
}
},
{ _id: false }
@ -91,8 +101,20 @@ const MeetE2EEConfigSchema = new Schema(
{
enabled: {
type: Boolean,
required: true,
default: false
required: true
}
},
{ _id: false }
);
/**
* Mongoose schema for MeetRoom captions configuration.
*/
const MeetCaptionsConfigSchema = new Schema(
{
enabled: {
type: Boolean,
required: true
}
},
{ _id: false }
@ -172,8 +194,11 @@ const MeetRoomConfigSchema = new Schema(
},
e2ee: {
type: MeetE2EEConfigSchema,
required: true,
default: { enabled: false }
required: true
},
captions: {
type: MeetCaptionsConfigSchema,
required: true
}
},
{ _id: false }
@ -225,14 +250,12 @@ const MeetRoomSchema = new Schema<MeetRoomDocument>(
status: {
type: String,
enum: Object.values(MeetRoomStatus),
required: true,
default: MeetRoomStatus.OPEN
required: true
},
meetingEndAction: {
type: String,
enum: Object.values(MeetingEndAction),
required: true,
default: MeetingEndAction.NONE
required: true
}
},
{

View File

@ -1,18 +1,13 @@
export const REDIS_KEY_PREFIX = 'ov_meet:';
export const enum RedisKeyName {
GLOBAL_CONFIG = `${REDIS_KEY_PREFIX}global_config`,
ROOM = `${REDIS_KEY_PREFIX}room:`,
RECORDING = `${REDIS_KEY_PREFIX}recording:`,
RECORDING_SECRETS = `${REDIS_KEY_PREFIX}recording_secrets:`,
ARCHIVED_ROOM = `${REDIS_KEY_PREFIX}archived_room:`,
USER = `${REDIS_KEY_PREFIX}user:`,
API_KEYS = `${REDIS_KEY_PREFIX}api_keys:`,
// Tracks all currently reserved participant names per room (with TTL for auto-expiration).
ROOM_PARTICIPANTS = `${REDIS_KEY_PREFIX}room_participants:`,
// Stores released numeric suffixes (per base name) in a sorted set, so that freed numbers
// can be reused efficiently instead of always incrementing to the next highest number.
PARTICIPANT_NAME_POOL = `${REDIS_KEY_PREFIX}participant_pool:`
PARTICIPANT_NAME_POOL = `${REDIS_KEY_PREFIX}participant_pool:`,
// Tracks participant-level assistant capability state in a room.
AI_ASSISTANT_PARTICIPANT_STATE = `${REDIS_KEY_PREFIX}ai_assistant:participant_state:`
}
export const enum RedisLockPrefix {
@ -25,5 +20,6 @@ export const enum RedisLockName {
SCHEDULED_TASK = 'scheduled_task',
STORAGE_INITIALIZATION = 'storage_initialization',
MIGRATION = 'migration',
WEBHOOK = 'webhook'
WEBHOOK = 'webhook',
AI_ASSISTANT = 'ai_assistant'
}
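The new `AI_ASSISTANT_PARTICIPANT_STATE` prefix composes with a room id and capability; the per-participant key layout below is an assumption inferred from the scan pattern used in `cleanupState` (`{prefix}{roomId}:{capability}:*`), not confirmed API.

```typescript
// Sketch of key and scan-pattern construction for the assistant participant state.
// The participant segment at the end of the key is an assumption.
const REDIS_KEY_PREFIX = 'ov_meet:';
const AI_ASSISTANT_PARTICIPANT_STATE = `${REDIS_KEY_PREFIX}ai_assistant:participant_state:`;

function participantStateKey(roomId: string, capability: string, participant: string): string {
	return `${AI_ASSISTANT_PARTICIPANT_STATE}${roomId}:${capability}:${participant}`;
}

function participantStatePattern(roomId: string, capability: string): string {
	// Matches every participant's state for one capability in one room
	return `${AI_ASSISTANT_PARTICIPANT_STATE}${roomId}:${capability}:*`;
}
```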

View File

@ -0,0 +1,34 @@
import { MeetAssistantCapabilityName } from '@openvidu-meet/typings';
import { z } from 'zod';
export const CreateAssistantReqSchema = z.object({
// scope: z.object({
// resourceType: z.nativeEnum(MeetAssistantScopeResourceType),
// resourceIds: z.array(z.string().trim().min(1)).min(1)
// }),
capabilities: z
.array(
z.object({
name: z.string()
})
)
.min(1)
.transform((capabilities) => {
const validValues = Object.values(MeetAssistantCapabilityName);
// Filter out invalid capabilities
const filtered = capabilities.filter((cap) =>
validValues.includes(cap.name as MeetAssistantCapabilityName)
);
// Remove duplicates based on capability name
const unique = Array.from(new Map(filtered.map((cap) => [cap.name, cap])).values());
return unique;
})
.refine((caps) => caps.length > 0, {
message: 'At least one valid capability is required'
})
});
export const AssistantIdSchema = z.string().trim().min(1);
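The transform in `CreateAssistantReqSchema` drops unknown capability names and de-duplicates by name before the final non-empty check. Stripped of zod, that step amounts to the following sketch (the valid names here are placeholders, not the real `MeetAssistantCapabilityName` values):

```typescript
// Standalone sketch of the filter + dedupe step; validValues are placeholders.
const validValues = ['live_captions', 'summaries'];

function normalizeCapabilities(capabilities: { name: string }[]): { name: string }[] {
	// Filter out invalid capabilities
	const filtered = capabilities.filter((cap) => validValues.includes(cap.name));
	// Remove duplicates based on capability name (Map keeps the last occurrence per key)
	return Array.from(new Map(filtered.map((cap) => [cap.name, cap])).values());
}
```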

View File

@ -1,6 +1,6 @@
import { MeetRecordingFilters, MeetRecordingStatus } from '@openvidu-meet/typings';
import { MeetRecordingFilters, MeetRecordingLayout, MeetRecordingStatus } from '@openvidu-meet/typings';
import { z } from 'zod';
import { nonEmptySanitizedRoomId } from './room.schema.js';
import { encodingValidator, nonEmptySanitizedRoomId } from './room.schema.js';
export const nonEmptySanitizedRecordingId = (fieldName: string) =>
z
@ -50,7 +50,13 @@ export const nonEmptySanitizedRecordingId = (fieldName: string) =>
);
export const StartRecordingReqSchema = z.object({
roomId: nonEmptySanitizedRoomId('roomId')
roomId: nonEmptySanitizedRoomId('roomId'),
config: z
.object({
layout: z.nativeEnum(MeetRecordingLayout).optional(),
encoding: encodingValidator.optional()
})
.optional()
});
export const RecordingFiltersSchema: z.ZodType<MeetRecordingFilters> = z.object({

View File

@ -4,8 +4,14 @@ import {
MeetE2EEConfig,
MeetPermissions,
MeetRecordingAccess,
MeetRecordingAudioCodec,
MeetRecordingConfig,
MeetRecordingEncodingOptions,
MeetRecordingEncodingPreset,
MeetRecordingLayout,
MeetRecordingVideoCodec,
MeetRoomAutoDeletionPolicy,
MeetRoomCaptionsConfig,
MeetRoomConfig,
MeetRoomDeletionPolicyWithMeeting,
MeetRoomDeletionPolicyWithRecordings,
@ -34,23 +40,124 @@ export const nonEmptySanitizedRoomId = (fieldName: string) =>
message: `${fieldName} cannot be empty after sanitization`
});
// Encoding options validation - both video and audio are required with all their fields
export const EncodingOptionsSchema: z.ZodType<MeetRecordingEncodingOptions> = z.object({
video: z.object({
width: z.number().positive('Video width must be a positive number'),
height: z.number().positive('Video height must be a positive number'),
framerate: z.number().positive('Video framerate must be a positive number'),
codec: z.nativeEnum(MeetRecordingVideoCodec),
bitrate: z.number().positive('Video bitrate must be a positive number'),
keyFrameInterval: z.number().positive('Video keyFrameInterval must be a positive number'),
depth: z.number().positive('Video depth must be a positive number')
}),
audio: z.object({
codec: z.nativeEnum(MeetRecordingAudioCodec),
bitrate: z.number().positive('Audio bitrate must be a positive number'),
frequency: z.number().positive('Audio frequency must be a positive number')
})
});
/**
* Custom encoding validator to handle both preset strings and encoding objects.
* Used in RecordingConfigSchema
*/
export const encodingValidator = z.any().superRefine((value, ctx) => {
// If undefined, skip validation (it's optional)
if (value === undefined) {
return;
}
// Check if it's a string preset
if (typeof value === 'string') {
const presetValues = Object.values(MeetRecordingEncodingPreset);
if (!presetValues.includes(value as MeetRecordingEncodingPreset)) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: `Invalid encoding preset. Must be one of: ${presetValues.join(', ')}`
});
}
return;
}
// If it's not a string, it must be an encoding object
if (typeof value !== 'object' || value === null) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: 'Encoding must be either a preset string or an encoding configuration object'
});
return;
}
// Both video and audio must be provided
if (!value.video || !value.audio) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: 'Both video and audio configuration must be provided when using encoding options'
});
return;
}
if (value.video === null || typeof value.video !== 'object') {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: 'Video encoding must be a valid object'
});
return;
}
if (value.audio === null || typeof value.audio !== 'object') {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: 'Audio encoding must be a valid object'
});
return;
}
// Check video fields
const requiredVideoFields = ['width', 'height', 'framerate', 'codec', 'bitrate', 'keyFrameInterval', 'depth'];
const missingVideoFields = requiredVideoFields.filter((field) => !(field in value.video));
if (missingVideoFields.length > 0) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: `When video encoding is provided, required fields are missing: ${missingVideoFields.join(', ')}`,
path: ['video']
});
}
// Check audio fields
const requiredAudioFields = ['codec', 'bitrate', 'frequency'];
const missingAudioFields = requiredAudioFields.filter((field) => !(field in value.audio));
if (missingAudioFields.length > 0) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: `When audio encoding is provided, required fields are missing: ${missingAudioFields.join(', ')}`,
path: ['audio']
});
}
// Validate the actual types and values using the schema
const result = EncodingOptionsSchema.safeParse(value);
if (!result.success) {
result.error.issues.forEach((issue) => {
ctx.addIssue(issue);
});
}
});
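The dual-mode check the validator performs (preset string vs. full encoding object) can be exercised in isolation. A minimal sketch, with placeholder preset names rather than the real `MeetRecordingEncodingPreset` values:

```typescript
// Sketch of the preset-or-object branch; PRESETS are placeholder names.
const PRESETS = ['H264_720P_30', 'H264_1080P_30'];

function checkEncoding(value: unknown): string[] {
	// undefined is allowed: the field is optional
	if (value === undefined) {
		return [];
	}
	// String branch: must be a known preset
	if (typeof value === 'string') {
		return PRESETS.includes(value)
			? []
			: [`Invalid encoding preset. Must be one of: ${PRESETS.join(', ')}`];
	}
	// Otherwise it must be an object carrying both video and audio config
	if (typeof value !== 'object' || value === null) {
		return ['Encoding must be either a preset string or an encoding configuration object'];
	}
	const v = value as { video?: object; audio?: object };
	if (!v.video || !v.audio) {
		return ['Both video and audio configuration must be provided when using encoding options'];
	}
	return [];
}
```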
const RecordingAccessSchema: z.ZodType<MeetRecordingAccess> = z.nativeEnum(MeetRecordingAccess);
const RecordingConfigSchema: z.ZodType<MeetRecordingConfig> = z
.object({
enabled: z.boolean(),
allowAccessTo: RecordingAccessSchema.optional()
})
.refine(
(data) => {
// If recording is enabled, allowAccessTo must be provided
return !data.enabled || data.allowAccessTo !== undefined;
},
{
message: 'allowAccessTo is required when recording is enabled',
path: ['allowAccessTo']
}
);
const RecordingConfigSchema: z.ZodType<MeetRecordingConfig> = z.object({
enabled: z.boolean(),
layout: z.nativeEnum(MeetRecordingLayout).optional(),
encoding: encodingValidator.optional(),
allowAccessTo: RecordingAccessSchema.optional()
});
const ChatConfigSchema: z.ZodType<MeetChatConfig> = z.object({
enabled: z.boolean()
@ -64,6 +171,10 @@ const E2EEConfigSchema: z.ZodType<MeetE2EEConfig> = z.object({
enabled: z.boolean()
});
const CaptionsConfigSchema: z.ZodType<MeetRoomCaptionsConfig> = z.object({
enabled: z.boolean()
});
const ThemeModeSchema: z.ZodType<MeetRoomThemeMode> = z.nativeEnum(MeetRoomThemeMode);
const hexColorSchema = z
@ -92,15 +203,20 @@ export const AppearanceConfigSchema: z.ZodType<MeetAppearanceConfig> = z.object(
themes: z.array(RoomThemeSchema).length(1, 'There must be exactly one theme defined')
});
const RoomConfigSchema: z.ZodType<Partial<MeetRoomConfig>> = z
/**
* Schema for updating room config (partial updates allowed)
* Used when updating an existing room's config - missing fields keep their current values
*/
const UpdateRoomConfigSchema: z.ZodType<Partial<MeetRoomConfig>> = z
.object({
recording: RecordingConfigSchema.optional(),
chat: ChatConfigSchema.optional(),
virtualBackground: VirtualBackgroundConfigSchema.optional(),
e2ee: E2EEConfigSchema.optional()
e2ee: E2EEConfigSchema.optional(),
captions: CaptionsConfigSchema.optional()
// appearance: AppearanceConfigSchema,
})
.transform((data) => {
.transform((data: Partial<MeetRoomConfig>) => {
// Automatically disable recording when E2EE is enabled
if (data.e2ee?.enabled && data.recording?.enabled) {
data.recording = {
@ -112,6 +228,54 @@ const RoomConfigSchema: z.ZodType<Partial<MeetRoomConfig>> = z
return data;
});
/**
* Schema for creating room config (applies defaults for missing fields)
* Used when creating a new room - missing fields get default values
*
* IMPORTANT: Using functions in .default() to avoid shared mutable state.
* Each call creates a new object instance instead of reusing the same reference.
*/
const CreateRoomConfigSchema = z
.object({
recording: RecordingConfigSchema.optional().default(() => ({
enabled: true,
layout: MeetRecordingLayout.GRID,
encoding: MeetRecordingEncodingPreset.H264_720P_30,
allowAccessTo: MeetRecordingAccess.ADMIN_MODERATOR_SPEAKER
})),
chat: ChatConfigSchema.optional().default(() => ({ enabled: true })),
virtualBackground: VirtualBackgroundConfigSchema.optional().default(() => ({ enabled: true })),
e2ee: E2EEConfigSchema.optional().default(() => ({ enabled: false })),
captions: CaptionsConfigSchema.optional().default(() => ({ enabled: true }))
// appearance: AppearanceConfigSchema,
})
.transform((data) => {
// Apply default layout if not provided
if (data.recording.layout === undefined) {
data.recording.layout = MeetRecordingLayout.GRID;
}
// Apply default encoding if not provided
if (data.recording.encoding === undefined) {
data.recording.encoding = MeetRecordingEncodingPreset.H264_720P_30;
}
// Apply default allowAccessTo if not provided
if (data.recording.allowAccessTo === undefined) {
data.recording.allowAccessTo = MeetRecordingAccess.ADMIN_MODERATOR_SPEAKER;
}
// Automatically disable recording when E2EE is enabled
if (data.e2ee.enabled && data.recording.enabled) {
data.recording = {
...data.recording,
enabled: false
};
}
return data as MeetRoomConfig;
});
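The IMPORTANT note above matters because a plain object passed to `.default()` is created once and then shared by every parse. The hazard reproduces without zod; `withDefault` below is a made-up helper that mimics how a captured default gets reused, not zod API.

```typescript
// Made-up helper mimicking a captured default value (object or factory).
function withDefault<T>(def: T | (() => T)): (value?: T) => T {
	return (value?: T): T => value ?? (typeof def === 'function' ? (def as () => T)() : def);
}

interface RecCfg {
	enabled: boolean;
	layout: string;
}

// Shared object default: every call without a value returns the SAME object.
const parseShared = withDefault<RecCfg>({ enabled: true, layout: 'grid' });
const a = parseShared();
a.layout = 'speaker';
const b = parseShared(); // b.layout is now 'speaker' too

// Factory default: each call builds a fresh object, so mutations don't leak.
const parseFresh = withDefault<RecCfg>(() => ({ enabled: true, layout: 'grid' }));
const c = parseFresh();
c.layout = 'speaker';
const d = parseFresh(); // d.layout is still 'grid'
```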
const RoomDeletionPolicyWithMeetingSchema: z.ZodType<MeetRoomDeletionPolicyWithMeeting> = z.nativeEnum(
MeetRoomDeletionPolicyWithMeeting
);
@ -141,10 +305,10 @@ export const RoomOptionsSchema: z.ZodType<MeetRoomOptions> = z.object({
)
.optional(),
autoDeletionPolicy: RoomAutoDeletionPolicySchema.optional()
.default({
.default(() => ({
withMeeting: MeetRoomDeletionPolicyWithMeeting.WHEN_MEETING_ENDS,
withRecordings: MeetRoomDeletionPolicyWithRecordings.CLOSE
})
}))
.refine(
(policy) => {
return !policy || policy.withMeeting !== MeetRoomDeletionPolicyWithMeeting.FAIL;
@ -163,11 +327,17 @@ export const RoomOptionsSchema: z.ZodType<MeetRoomOptions> = z.object({
path: ['withRecordings']
}
),
config: RoomConfigSchema.optional().default({
recording: { enabled: true, allowAccessTo: MeetRecordingAccess.ADMIN_MODERATOR_SPEAKER },
config: CreateRoomConfigSchema.optional().default({
recording: {
enabled: true,
layout: MeetRecordingLayout.GRID,
encoding: MeetRecordingEncodingPreset.H264_720P_30,
allowAccessTo: MeetRecordingAccess.ADMIN_MODERATOR_SPEAKER
},
chat: { enabled: true },
virtualBackground: { enabled: true },
e2ee: { enabled: false }
e2ee: { enabled: false },
captions: { enabled: true }
})
// maxParticipants: z
// .number()
@ -241,7 +411,7 @@ export const BulkDeleteRoomsReqSchema = z.object({
});
export const UpdateRoomConfigReqSchema = z.object({
config: RoomConfigSchema
config: UpdateRoomConfigSchema
});
export const UpdateRoomStatusReqSchema = z.object({

View File

@ -95,7 +95,7 @@ export class MigrationRepository extends BaseRepository<MeetMigration, MeetMigra
{
$set: {
status: MigrationStatus.FAILED,
completedAt: new Date(),
completedAt: Date.now(),
error
}
}
@ -103,27 +103,6 @@ export class MigrationRepository extends BaseRepository<MeetMigration, MeetMigra
return this.toDomain(document);
}
/**
* Get all migrations with their current status.
*
* @returns Array of all migration documents
*/
async getAllMigrations(): Promise<MeetMigration[]> {
const documents = await this.findAll();
return documents;
}
/**
* Get a specific migration by name.
*
* @param name - The name of the migration
* @returns The migration document or null if not found
*/
async getMigration(name: MigrationName): Promise<MeetMigration | null> {
const document = await this.findOne({ name });
return document ? this.toDomain(document) : null;
}
/**
* Check if a migration has been completed successfully.
*

View File

@ -2,6 +2,7 @@ import { MeetRoom, MeetRoomFilters, MeetRoomStatus } from '@openvidu-meet/typing
import { inject, injectable } from 'inversify';
import { MeetRoomDocument, MeetRoomModel } from '../models/mongoose-schemas/room.schema.js';
import { LoggerService } from '../services/logger.service.js';
import { getBasePath } from '../utils/html-dynamic-base-path.utils.js';
import { getBaseUrl } from '../utils/url.utils.js';
import { BaseRepository } from './base.repository.js';
@ -211,20 +212,35 @@ export class RoomRepository<TRoom extends MeetRoom = MeetRoom> extends BaseRepos
}
/**
* Extracts the path from a URL, removing the base URL if present.
* Extracts the path from a URL, removing the base URL and basePath if present.
* This ensures only the route path is stored in the database, without the basePath prefix.
*
* @param url - The URL to process
* @returns The path portion of the URL
* @returns The path portion of the URL without the basePath prefix
*/
private extractPathFromUrl(url: string): string {
// If already a path, return as-is
const basePath = getBasePath();
// Remove trailing slash from basePath for comparison (e.g., '/meet/' -> '/meet')
const basePathWithoutTrailingSlash = basePath.endsWith('/') ? basePath.slice(0, -1) : basePath;
// Helper to strip basePath from a path
const stripBasePath = (path: string): string => {
if (basePathWithoutTrailingSlash !== '' && path.startsWith(basePathWithoutTrailingSlash)) {
return path.slice(basePathWithoutTrailingSlash.length) || '/';
}
return path;
};
// If already a path, strip basePath and return
if (url.startsWith('/')) {
return url;
return stripBasePath(url);
}
try {
const urlObj = new URL(url);
return urlObj.pathname + urlObj.search + urlObj.hash;
const pathname = stripBasePath(urlObj.pathname);
return pathname + urlObj.search + urlObj.hash;
} catch {
// If URL parsing fails, assume it's already a path
return url;
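The stripping logic can be exercised in isolation; this sketch injects the base path as a parameter instead of calling `getBasePath()`, so the function name and signature are adapted for the example.

```typescript
// Adapted sketch of extractPathFromUrl, with basePath passed in explicitly.
function extractPath(url: string, basePath: string): string {
	// Remove trailing slash from basePath for comparison (e.g. '/meet/' -> '/meet')
	const base = basePath.endsWith('/') ? basePath.slice(0, -1) : basePath;

	const stripBasePath = (path: string): string =>
		base !== '' && path.startsWith(base) ? path.slice(base.length) || '/' : path;

	// If already a path, strip basePath and return
	if (url.startsWith('/')) {
		return stripBasePath(url);
	}

	try {
		const urlObj = new URL(url);
		return stripBasePath(urlObj.pathname) + urlObj.search + urlObj.hash;
	} catch {
		// If URL parsing fails, assume it's already a path
		return url;
	}
}
```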

View File

@ -0,0 +1,26 @@
import bodyParser from 'body-parser';
import { Router } from 'express';
import * as aiAssistantCtrl from '../controllers/ai-assistant.controller.js';
import { roomMemberTokenValidator, withAuth } from '../middlewares/auth.middleware.js';
import {
validateAssistantIdPathParam,
validateCreateAssistantReq
} from '../middlewares/request-validators/ai-assistant-validator.middleware.js';
export const aiAssistantRouter: Router = Router();
aiAssistantRouter.use(bodyParser.urlencoded({ extended: true }));
aiAssistantRouter.use(bodyParser.json());
aiAssistantRouter.post(
'/assistants',
withAuth(roomMemberTokenValidator),
validateCreateAssistantReq,
aiAssistantCtrl.createAssistant
);
aiAssistantRouter.delete(
'/assistants/:assistantId',
withAuth(roomMemberTokenValidator),
validateAssistantIdPathParam,
aiAssistantCtrl.cancelAssistant
);

View File

@ -41,3 +41,6 @@ configRouter.put(
globalConfigCtrl.updateRoomsAppearanceConfig
);
configRouter.get('/rooms/appearance', withAuth(allowAnonymous), globalConfigCtrl.getRoomsAppearanceConfig);
// Captions config
configRouter.get('/captions', withAuth(allowAnonymous), globalConfigCtrl.getCaptionsConfig);

View File

@ -0,0 +1,11 @@
export * from './analytics.routes.js';
export * from './ai-assistant.routes.js';
export * from './api-key.routes.js';
export * from './auth.routes.js';
export * from './global-config.routes.js';
export * from './livekit.routes.js';
export * from './meeting.routes.js';
export * from './recording.routes.js';
export * from './room.routes.js';
export * from './user.routes.js';

View File

@ -79,25 +79,24 @@ recordingRouter.get(
withCanRetrieveRecordingsPermission,
recordingCtrl.getRecordingUrl
);
// Internal Recording Routes
export const internalRecordingRouter: Router = Router();
internalRecordingRouter.use(bodyParser.urlencoded({ extended: true }));
internalRecordingRouter.use(bodyParser.json());
internalRecordingRouter.post(
recordingRouter.post(
'/',
withAuth(apiKeyValidator, roomMemberTokenValidator),
validateStartRecordingReq,
withRecordingEnabled,
withAuth(roomMemberTokenValidator),
withCanRecordPermission,
recordingCtrl.startRecording
);
internalRecordingRouter.post(
recordingRouter.post(
'/:recordingId/stop',
withAuth(apiKeyValidator, roomMemberTokenValidator),
withValidRecordingId,
withRecordingEnabled,
withAuth(roomMemberTokenValidator),
withCanRecordPermission,
recordingCtrl.stopRecording
);
// Internal Recording Routes
// export const internalRecordingRouter: Router = Router();
// internalRecordingRouter.use(bodyParser.urlencoded({ extended: true }));
// internalRecordingRouter.use(bodyParser.json());

View File

@ -1,7 +1,7 @@
import chalk from 'chalk';
import cookieParser from 'cookie-parser';
import cors from 'cors';
import express, { Express, Request, Response } from 'express';
import express, { Express, Request, Response, Router } from 'express';
import { initializeEagerServices, registerDependencies } from './config/dependency-injector.config.js';
import { INTERNAL_CONFIG } from './config/internal-config.js';
import { MEET_ENV, logEnvVars } from './environment.js';
@ -9,14 +9,16 @@ import { setBaseUrlFromRequest } from './middlewares/base-url.middleware.js';
import { jsonSyntaxErrorHandler } from './middlewares/content-type.middleware.js';
import { initRequestContext } from './middlewares/request-context.middleware.js';
import { analyticsRouter } from './routes/analytics.routes.js';
import { aiAssistantRouter } from './routes/ai-assistant.routes.js';
import { apiKeyRouter } from './routes/api-key.routes.js';
import { authRouter } from './routes/auth.routes.js';
import { configRouter } from './routes/global-config.routes.js';
import { livekitWebhookRouter } from './routes/livekit.routes.js';
import { internalMeetingRouter } from './routes/meeting.routes.js';
import { internalRecordingRouter, recordingRouter } from './routes/recording.routes.js';
import { recordingRouter } from './routes/recording.routes.js';
import { internalRoomRouter, roomRouter } from './routes/room.routes.js';
import { userRouter } from './routes/user.routes.js';
import { getBasePath, getHtmlWithBasePath, getOpenApiHtmlWithBasePath } from './utils/html-dynamic-base-path.utils.js';
import {
frontendDirectoryPath,
frontendHtmlPath,
@ -27,6 +29,7 @@ import {
const createApp = () => {
const app: Express = express();
const basePath = getBasePath();
// Enable CORS support
if (MEET_ENV.SERVER_CORS_ORIGIN) {
@ -38,9 +41,6 @@ const createApp = () => {
);
}
// Serve static files
app.use(express.static(frontendDirectoryPath));
// Configure trust proxy based on deployment topology
// This is important for rate limiting and getting the real client IP
// Can be: true, false, a number (hops), or a custom function/string
@ -69,55 +69,81 @@ const createApp = () => {
app.use(setBaseUrlFromRequest);
}
// Create a router for all app routes (to be mounted under base path)
const appRouter: Router = express.Router();
// Serve static files (disable automatic index.html serving so our catch-all can inject config)
appRouter.use(express.static(frontendDirectoryPath, { index: false }));
// Public API routes
app.use(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/docs`, (_req: Request, res: Response) =>
res.sendFile(publicApiHtmlFilePath)
appRouter.use(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/docs`, (_req: Request, res: Response) =>
res.type('html').send(getOpenApiHtmlWithBasePath(publicApiHtmlFilePath, INTERNAL_CONFIG.API_BASE_PATH_V1))
);
app.use(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`, /*mediaTypeValidatorMiddleware,*/ roomRouter);
app.use(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`, /*mediaTypeValidatorMiddleware,*/ recordingRouter);
appRouter.use(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`, /*mediaTypeValidatorMiddleware,*/ roomRouter);
appRouter.use(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`, /*mediaTypeValidatorMiddleware,*/ recordingRouter);
// Internal API routes
if (process.env.NODE_ENV === 'development') {
// Serve internal API docs only in development mode
app.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/docs`, (_req: Request, res: Response) =>
res.sendFile(internalApiHtmlFilePath)
appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/docs`, (_req: Request, res: Response) =>
res
.type('html')
.send(getOpenApiHtmlWithBasePath(internalApiHtmlFilePath, INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1))
);
}
app.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth`, authRouter);
app.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`, apiKeyRouter);
app.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/users`, userRouter);
app.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/rooms`, internalRoomRouter);
app.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/meetings`, internalMeetingRouter);
app.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/recordings`, internalRecordingRouter);
app.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config`, configRouter);
app.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/analytics`, analyticsRouter);
appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth`, authRouter);
appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`, apiKeyRouter);
appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/users`, userRouter);
appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/rooms`, internalRoomRouter);
appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/meetings`, internalMeetingRouter);
// appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/recordings`, internalRecordingRouter);
appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config`, configRouter);
appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/analytics`, analyticsRouter);
appRouter.use(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/ai`, aiAssistantRouter);
app.use('/health', (_req: Request, res: Response) => res.status(200).send('OK'));
appRouter.use('/health', (_req: Request, res: Response) => res.status(200).send('OK'));
// LiveKit Webhook route
app.use('/livekit/webhook', livekitWebhookRouter);
// Serve OpenVidu Meet webcomponent bundle file
app.get('/v1/openvidu-meet.js', (_req: Request, res: Response) => res.sendFile(webcomponentBundlePath));
// Serve OpenVidu Meet index.html file for all non-API routes
app.get(/^(?!.*\/(api|internal-api)\/).*$/, (_req: Request, res: Response) => res.sendFile(frontendHtmlPath));
appRouter.get('/v1/openvidu-meet.js', (_req: Request, res: Response) => res.sendFile(webcomponentBundlePath));
// Serve OpenVidu Meet index.html file for all non-API routes (with dynamic base path injection)
appRouter.get(/^(?!.*\/(api|internal-api)\/).*$/, (_req: Request, res: Response) => {
res.type('html').send(getHtmlWithBasePath(frontendHtmlPath));
});
// Catch all other routes and return 404
app.use((_req: Request, res: Response) =>
appRouter.use((_req: Request, res: Response) =>
res.status(404).json({ error: 'Path Not Found', message: 'API path not implemented' })
);
// LiveKit Webhook route - mounted directly on app (not under base path)
// This allows webhooks to always be received at /livekit/webhook regardless of base path configuration
app.use('/livekit/webhook', livekitWebhookRouter);
// Mount all routes under the configured base path
app.use(basePath, appRouter);
return app;
};
const startServer = (app: express.Application) => {
const basePath = getBasePath();
const basePathDisplay = basePath === '/' ? '' : basePath.slice(0, -1);
app.listen(MEET_ENV.SERVER_PORT, async () => {
console.log(' ');
console.log('---------------------------------------------------------');
console.log(' ');
console.log(`OpenVidu Meet ${MEET_ENV.EDITION} is listening on port`, chalk.cyanBright(MEET_ENV.SERVER_PORT));
if (basePath !== '/') {
console.log('Base Path:', chalk.cyanBright(basePath));
}
console.log(
'REST API Docs: ',
chalk.cyanBright(`http://localhost:${MEET_ENV.SERVER_PORT}${INTERNAL_CONFIG.API_BASE_PATH_V1}/docs`)
chalk.cyanBright(
`http://localhost:${MEET_ENV.SERVER_PORT}${basePathDisplay}${INTERNAL_CONFIG.API_BASE_PATH_V1}/docs`
)
);
logEnvVars();
});
@ -144,8 +170,8 @@ const isMainModule = (): boolean => {
if (isMainModule()) {
registerDependencies();
const app = createApp();
startServer(app);
await initializeEagerServices();
startServer(app);
}
export { createApp, registerDependencies };

View File

@ -0,0 +1,252 @@
import { MeetAssistantCapabilityName, MeetCreateAssistantResponse } from '@openvidu-meet/typings';
import { inject, injectable } from 'inversify';
import ms from 'ms';
import { INTERNAL_CONFIG } from '../config/internal-config.js';
import { MEET_ENV } from '../environment.js';
import { MeetLock } from '../helpers/redis.helper.js';
import { errorInsufficientPermissions } from '../models/error.model.js';
import { RedisKeyName } from '../models/redis.model.js';
import { LiveKitService } from './livekit.service.js';
import { LoggerService } from './logger.service.js';
import { MutexService } from './mutex.service.js';
import { RedisService } from './redis.service.js';
import { RoomService } from './room.service.js';
@injectable()
export class AiAssistantService {
private readonly ASSISTANT_STATE_LOCK_TTL = ms('15s');
constructor(
@inject(LoggerService) protected logger: LoggerService,
@inject(RoomService) protected roomService: RoomService,
@inject(LiveKitService) protected livekitService: LiveKitService,
@inject(MutexService) protected mutexService: MutexService,
@inject(RedisService) protected redisService: RedisService
) {}
/**
* Creates a live captions assistant for the specified room.
* If an assistant already exists for the room, it will be reused.
* @param roomId - ID of the room the assistant joins
* @param participantIdentity - Identity of the participant enabling captions
* @returns The assistant id and its status
*/
async createLiveCaptionsAssistant(
roomId: string,
participantIdentity: string
): Promise<MeetCreateAssistantResponse> {
// ! For now, we are assuming that the only capability is live captions.
const capability = MeetAssistantCapabilityName.LIVE_CAPTIONS;
const lockName = MeetLock.getAiAssistantLock(roomId, capability);
try {
await this.validateCreateConditions(roomId, capability);
const lock = await this.mutexService.acquire(lockName, this.ASSISTANT_STATE_LOCK_TTL);
if (!lock) {
this.logger.error(`Could not acquire lock '${lockName}' for creating assistant in room '${roomId}'`);
throw new Error('Could not acquire lock for creating assistant. Please try again.');
}
const existingAgent = await this.livekitService.getAgent(roomId, INTERNAL_CONFIG.CAPTIONS_AGENT_NAME);
if (existingAgent) {
await this.setParticipantAssistantState(roomId, participantIdentity, capability, true);
return { id: existingAgent.id, status: 'active' };
}
const assistant = await this.livekitService.createAgent(roomId, INTERNAL_CONFIG.CAPTIONS_AGENT_NAME);
await this.setParticipantAssistantState(roomId, participantIdentity, capability, true);
return {
id: assistant.id,
status: 'active'
};
} finally {
await this.mutexService.release(lockName);
}
}
/**
* Stops the specified assistant for the given participant and room.
* If the assistant is not used by any other participants in the room, it will be stopped in LiveKit.
* @param assistantId - ID of the assistant to stop
* @param roomId - ID of the room the assistant is running in
* @param participantIdentity - Identity of the participant disabling the assistant
*/
async cancelAssistant(assistantId: string, roomId: string, participantIdentity: string): Promise<void> {
const capability = MeetAssistantCapabilityName.LIVE_CAPTIONS;
// The lock only protects the atomic "count → stop dispatch" decision.
const lockName = MeetLock.getAiAssistantLock(roomId, capability);
try {
await this.setParticipantAssistantState(roomId, participantIdentity, capability, false);
const lock = await this.mutexService.acquire(lockName, this.ASSISTANT_STATE_LOCK_TTL);
if (!lock) {
this.logger.warn(
`Could not acquire lock '${lockName}' for stopping assistant in room '${roomId}'. Participant state saved as disabled.`
);
return;
}
const enabledParticipants = await this.getEnabledParticipantsCount(roomId, capability);
if (enabledParticipants > 0) {
this.logger.debug(
`Skipping assistant stop for room '${roomId}'. Remaining enabled participants: ${enabledParticipants}`
);
return;
}
const assistant = await this.livekitService.getAgent(roomId, assistantId);
if (!assistant) {
this.logger.warn(`Captions assistant not found in room '${roomId}'. Skipping stop request.`);
return;
}
await this.livekitService.stopAgent(assistantId, roomId);
} finally {
await this.mutexService.release(lockName);
}
}
/**
* Cleanup assistant state in a room.
* - If participantIdentity is provided, removes only that participant state.
* - If participantIdentity is omitted, removes all assistant state in the room.
*
* If no enabled participants remain after cleanup, captions agent dispatch is stopped.
*/
async cleanupState(roomId: string, participantIdentity?: string): Promise<void> {
const capability = MeetAssistantCapabilityName.LIVE_CAPTIONS;
const lockName = MeetLock.getAiAssistantLock(roomId, capability);
try {
if (participantIdentity) {
await this.setParticipantAssistantState(roomId, participantIdentity, capability, false);
}
// acquireWithRetry because this is called from webhooks (participantLeft / roomFinished).
// The agent may run indefinitely with no further opportunity to stop it.
const lock = await this.mutexService.acquireWithRetry(lockName, this.ASSISTANT_STATE_LOCK_TTL);
if (!lock) {
const scope = participantIdentity ? `participant '${participantIdentity}'` : `room '${roomId}'`;
this.logger.error(
`Could not acquire lock '${lockName}' for dispatch cleanup (${scope}) after retries. ` +
(participantIdentity
? 'Participant state was saved but dispatch stop may be skipped.'
: 'Room state cleanup and dispatch stop were skipped.')
);
return;
}
if (!participantIdentity) {
const pattern = `${RedisKeyName.AI_ASSISTANT_PARTICIPANT_STATE}${roomId}:${capability}:*`;
const keys = await this.redisService.getKeys(pattern);
if (keys.length > 0) {
await this.redisService.delete(keys);
}
}
const enabledParticipants = await this.getEnabledParticipantsCount(roomId, capability);
if (enabledParticipants > 0) {
return;
}
await this.stopCaptionsAssistantIfRunning(roomId);
} catch (error) {
this.logger.error(`Error occurred while cleaning up assistant state for room '${roomId}': ${error}`);
} finally {
await this.mutexService.release(lockName);
}
}
protected async validateCreateConditions(roomId: string, capability: MeetAssistantCapabilityName): Promise<void> {
if (capability === MeetAssistantCapabilityName.LIVE_CAPTIONS) {
if (MEET_ENV.CAPTIONS_ENABLED !== 'true') {
throw errorInsufficientPermissions();
}
const room = await this.roomService.getMeetRoom(roomId);
if (!room.config.captions.enabled) {
throw errorInsufficientPermissions();
}
}
}
/**
 * Sets or clears the assistant state for a participant in Redis.
 * @param roomId - The room the participant belongs to.
 * @param participantIdentity - The identity of the participant whose state is updated.
 * @param capability - The assistant capability the state refers to.
 * @param enabled - When false, the participant's state key is deleted instead of set.
 */
protected async setParticipantAssistantState(
roomId: string,
participantIdentity: string,
capability: MeetAssistantCapabilityName,
enabled: boolean
): Promise<void> {
const key = this.getParticipantAssistantStateKey(roomId, participantIdentity, capability);
if (!enabled) {
await this.redisService.delete(key);
return;
}
await this.redisService.setIfNotExists(
key,
JSON.stringify({
enabled: true,
updatedAt: Date.now()
})
);
}
/**
 * Gets the count of participants that have the specified assistant capability enabled in the given room.
 * @param roomId - The room to inspect.
 * @param capability - The assistant capability to count.
 * @returns The number of participants with the capability enabled.
 */
protected async getEnabledParticipantsCount(
roomId: string,
capability: MeetAssistantCapabilityName
): Promise<number> {
const pattern = `${RedisKeyName.AI_ASSISTANT_PARTICIPANT_STATE}${roomId}:${capability}:*`;
const keys = await this.redisService.getKeys(pattern);
return keys.length;
}
protected getParticipantAssistantStateKey(
roomId: string,
participantIdentity: string,
capability: MeetAssistantCapabilityName
): string {
return `${RedisKeyName.AI_ASSISTANT_PARTICIPANT_STATE}${roomId}:${capability}:${participantIdentity}`;
}
protected async stopCaptionsAssistantIfRunning(roomId: string): Promise<void> {
const assistants = await this.livekitService.listAgents(roomId);
if (assistants.length === 0) return;
const captionsAssistant = assistants.find(
(assistant) => assistant.agentName === INTERNAL_CONFIG.CAPTIONS_AGENT_NAME
);
if (!captionsAssistant) return;
await this.livekitService.stopAgent(captionsAssistant.id, roomId);
}
}
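The enabled-participant bookkeeping in this service is effectively reference counting via per-participant Redis keys: enabling writes a key, disabling deletes it, and the assistant is stopped once a pattern scan finds no keys left for the room and capability. A minimal in-memory sketch of that scheme (a `Map` stands in for Redis, and the `PREFIX` value is hypothetical, not the real `RedisKeyName` constant):

```typescript
// In-memory stand-in for the Redis-backed participant state keys.
// Assumed key shape: <prefix><roomId>:<capability>:<participantIdentity>
const store = new Map<string, string>();
const PREFIX = "ai_assistant_participant_state:"; // hypothetical prefix value

function stateKey(roomId: string, capability: string, identity: string): string {
  return `${PREFIX}${roomId}:${capability}:${identity}`;
}

function setEnabled(roomId: string, capability: string, identity: string, enabled: boolean): void {
  const key = stateKey(roomId, capability, identity);
  if (enabled) {
    store.set(key, JSON.stringify({ enabled: true, updatedAt: Date.now() }));
  } else {
    store.delete(key); // disabling removes the key entirely
  }
}

function enabledCount(roomId: string, capability: string): number {
  // Mirrors the Redis pattern scan: count keys under the room/capability prefix
  const pattern = `${PREFIX}${roomId}:${capability}:`;
  return Array.from(store.keys()).filter((k) => k.startsWith(pattern)).length;
}
```

When `enabledCount` drops to zero, the service knows it is safe to stop the shared captions agent.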


@ -0,0 +1,43 @@
// Core services
export * from './analytics.service.js';
export * from './api-key.service.js';
export * from './base-url.service.js';
export * from './distributed-event.service.js';
export * from './frontend-event.service.js';
export * from './global-config.service.js';
export * from './livekit-webhook.service.js';
export * from './livekit.service.js';
export * from './logger.service.js';
export * from './migration.service.js';
export * from './mutex.service.js';
export * from './openvidu-webhook.service.js';
export * from './participant-name.service.js';
export * from './recording-scheduled-tasks.service.js';
export * from './recording.service.js';
export * from './redis.service.js';
export * from './request-session.service.js';
export * from './room-member.service.js';
export * from './room-scheduled-tasks.service.js';
export * from './room.service.js';
export * from './task-scheduler.service.js';
export * from './token.service.js';
export * from './user.service.js';
// Storage
export * from './storage/blob-storage.service.js';
export * from './storage/mongodb.service.js';
export * from './storage/storage-init.service.js';
export * from './storage/storage.factory.js';
export * from './storage/storage.interface.js';
// Storage providers
export * from './storage/providers/abs/abs-storage.provider.js';
export * from './storage/providers/abs/abs.service.js';
export * from './storage/providers/gcp/gcs-storage.provider.js';
export * from './storage/providers/gcp/gcs.service.js';
export * from './storage/providers/s3/s3-storage-key.builder.js';
export * from './storage/providers/s3/s3-storage.provider.js';
export * from './storage/providers/s3/s3.service.js';


@ -9,6 +9,7 @@ import { MeetRoomHelper } from '../helpers/room.helper.js';
import { DistributedEventType } from '../models/distributed-event.model.js';
import { RecordingRepository } from '../repositories/recording.repository.js';
import { RoomRepository } from '../repositories/room.repository.js';
import { AiAssistantService } from './ai-assistant.service.js';
import { DistributedEventService } from './distributed-event.service.js';
import { FrontendEventService } from './frontend-event.service.js';
import { LiveKitService } from './livekit.service.js';
@ -33,6 +34,7 @@ export class LivekitWebhookService {
@inject(DistributedEventService) protected distributedEventService: DistributedEventService,
@inject(FrontendEventService) protected frontendEventService: FrontendEventService,
@inject(RoomMemberService) protected roomMemberService: RoomMemberService,
@inject(AiAssistantService) protected aiAssistantService: AiAssistantService,
@inject(LoggerService) protected logger: LoggerService
) {
this.webhookReceiver = new WebhookReceiver(MEET_ENV.LIVEKIT_API_KEY, MEET_ENV.LIVEKIT_API_SECRET);
@ -163,8 +165,8 @@ export class LivekitWebhookService {
* @param participant - Information about the newly joined participant.
*/
async handleParticipantJoined(room: Room, participant: ParticipantInfo) {
// Skip if the participant is an egress participant
if (this.livekitService.isEgressParticipant(participant)) return;
// Skip if the participant is not a standard participant
if (!this.livekitService.isStandardParticipant(participant)) return;
try {
const { recordings } = await this.recordingService.getAllRecordings({ roomId: room.name });
@ -185,12 +187,14 @@ export class LivekitWebhookService {
* @param participant - Information about the participant who left.
*/
async handleParticipantLeft(room: Room, participant: ParticipantInfo) {
// Skip if the participant is an egress participant
if (this.livekitService.isEgressParticipant(participant)) return;
// Skip if the participant is not a standard participant
if (!this.livekitService.isStandardParticipant(participant)) return;
try {
// Release the participant's reserved name
await this.roomMemberService.releaseParticipantName(room.name, participant.name);
await Promise.all([
this.roomMemberService.releaseParticipantName(room.name, participant.name),
this.aiAssistantService.cleanupState(room.name, participant.identity)
]);
this.logger.verbose(`Released name for participant '${participant.name}' in room '${room.name}'`);
} catch (error) {
this.logger.error('Error releasing participant name on participant left:', error);
@ -282,7 +286,8 @@ export class LivekitWebhookService {
tasks.push(
this.roomMemberService.cleanupParticipantNames(roomId),
this.recordingService.releaseRecordingLockIfNoEgress(roomId)
this.recordingService.releaseRecordingLockIfNoEgress(roomId),
this.aiAssistantService.cleanupState(roomId)
);
await Promise.all(tasks);
} catch (error) {


@ -1,5 +1,7 @@
import { AgentDispatch, ParticipantInfo_Kind } from '@livekit/protocol';
import { inject, injectable } from 'inversify';
import {
AgentDispatchClient,
CreateOptions,
DataPacket_Kind,
EgressClient,
@ -30,6 +32,7 @@ import { LoggerService } from './logger.service.js';
export class LiveKitService {
private egressClient: EgressClient;
private roomClient: RoomServiceClient;
private agentClient: AgentDispatchClient;
constructor(@inject(LoggerService) protected logger: LoggerService) {
const livekitUrlHostname = MEET_ENV.LIVEKIT_URL_PRIVATE.replace(/^ws:/, 'http:').replace(/^wss:/, 'https:');
@ -39,6 +42,11 @@ export class LiveKitService {
MEET_ENV.LIVEKIT_API_KEY,
MEET_ENV.LIVEKIT_API_SECRET
);
this.agentClient = new AgentDispatchClient(
livekitUrlHostname,
MEET_ENV.LIVEKIT_API_KEY,
MEET_ENV.LIVEKIT_API_SECRET
);
}
async createRoom(options: CreateOptions): Promise<Room> {
@ -269,6 +277,66 @@ export class LiveKitService {
}
}
/**
 * Starts an agent for a specific room.
 * @param roomName - The room to dispatch the agent to.
 * @param agentName - The registered name of the agent to start.
 * @returns The created AgentDispatch
 */
async createAgent(
roomName: string,
agentName: string /*, options: CreateDispatchOptions*/
): Promise<AgentDispatch> {
try {
return await this.agentClient.createDispatch(roomName, agentName);
} catch (error) {
this.logger.error(`Error creating agent dispatch for room '${roomName}':`, error);
throw internalError(`creating agent dispatch for room '${roomName}'`);
}
}
/**
* Lists all agents in a LiveKit room.
* @param roomName
* @returns An array of agents in the specified room
*/
async listAgents(roomName: string): Promise<AgentDispatch[]> {
try {
return await this.agentClient.listDispatch(roomName);
} catch (error) {
this.logger.error(`Error listing agents for room '${roomName}':`, error);
return [];
}
}
/**
* Gets an agent dispatch by its ID in a LiveKit room.
* @param roomName
* @param agentId
* @returns The agent if found, otherwise undefined
*/
async getAgent(roomName: string, agentId: string): Promise<AgentDispatch | undefined> {
try {
return await this.agentClient.getDispatch(agentId, roomName);
} catch (error) {
this.logger.error(`Error getting agent dispatch '${agentId}' for room '${roomName}':`, error);
return undefined;
}
}
/**
* Stops an agent in a LiveKit room.
* @param agentId
* @param roomName
*/
async stopAgent(agentId: string, roomName: string): Promise<void> {
try {
await this.agentClient.deleteDispatch(agentId, roomName);
} catch (error) {
this.logger.error(`Error deleting agent dispatch '${agentId}' for room '${roomName}':`, error);
}
}
async startRoomComposite(
roomName: string,
output: EncodedFileOutput | StreamOutput,
@ -400,8 +468,11 @@ export class LiveKitService {
}
}
isEgressParticipant(participant: ParticipantInfo): boolean {
// TODO: Remove deprecated warning by using ParticipantInfo_Kind: participant.kind === ParticipantInfo_Kind.EGRESS;
return participant.identity.startsWith('EG_') && participant.permission?.recorder === true;
/**
* Checks if a participant is a standard participant (web clients).
* @param participant
*/
isStandardParticipant(participant: ParticipantInfo): boolean {
return participant.kind === ParticipantInfo_Kind.STANDARD;
}
}
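Per the commit message above, `listAgents` and `getAgent` deliberately swallow client errors and return a sensible default so that calling services (such as the webhook cleanup paths) are never disrupted by a failing dispatch lookup. A minimal sketch of that graceful-degradation pattern, with `fetchDispatches` as a hypothetical injected stand-in for the `AgentDispatchClient` call:

```typescript
// Graceful-degradation wrapper: log the failure, return an empty default.
async function listAgentsSafe<T>(
  fetchDispatches: () => Promise<T[]>, // stand-in for agentClient.listDispatch(roomName)
  logError: (msg: string) => void
): Promise<T[]> {
  try {
    return await fetchDispatches();
  } catch (error) {
    logError(`Error listing agents: ${error}`);
    return []; // degrade to an empty list instead of propagating the error
  }
}
```

Callers can then treat "lookup failed" and "no agents" identically, which is exactly what the early-exit in the AI assistant service relies on.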

File diff suppressed because it is too large


@ -105,6 +105,33 @@ export class MutexService {
return locks;
}
/**
* Attempts to acquire a lock, retrying up to `maxAttempts` times with a fixed delay between
* attempts. Intended for fire-and-forget flows (e.g. webhooks) where the caller has no
* opportunity to retry externally and a missed lock acquisition would leave the system in an
* inconsistent state.
*
* @param key - The resource to acquire a lock for.
* @param ttl - The time-to-live for the lock in milliseconds.
* @param maxAttempts - Maximum number of acquisition attempts. Defaults to 3.
* @param delayMs - Fixed delay in milliseconds between attempts. Defaults to 200.
* @returns A Promise that resolves to the acquired Lock, or null if all attempts fail.
*/
async acquireWithRetry(key: string, ttl: number = this.TTL_MS, maxAttempts = 3, delayMs = 200): Promise<Lock | null> {
for (let attempt = 1; attempt <= maxAttempts; attempt++) {
const lock = await this.acquire(key, ttl);
if (lock) return lock;
if (attempt < maxAttempts) {
this.logger.warn(`Lock '${key}' attempt ${attempt}/${maxAttempts} failed. Retrying in ${delayMs}ms...`);
await new Promise((resolve) => setTimeout(resolve, delayMs));
}
}
return null;
}
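The retry loop above can be sketched in isolation. In the sketch below, `tryAcquire` is an injected stand-in for a single `acquire` attempt, not the real `MutexService` API:

```typescript
// Minimal sketch of fixed-delay lock retry (assumed shapes, not the real service).
type Lock = { key: string };

async function acquireWithRetry(
  tryAcquire: () => Promise<Lock | null>, // one acquisition attempt
  maxAttempts = 3,
  delayMs = 200
): Promise<Lock | null> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const lock = await tryAcquire();
    if (lock) return lock;
    if (attempt < maxAttempts) {
      // Fixed (non-backoff) delay between attempts, as in the service above
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return null; // all attempts exhausted
}
```

A fixed delay is a reasonable fit here because contention comes from short-lived webhook handlers rather than sustained load, where exponential backoff would matter more.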
lockExists(key: string): Promise<boolean> {
const registryKey = MeetLock.getRegistryLock(key);
return this.redisService.exists(registryKey);


@ -1,4 +1,13 @@
import { MeetRecordingFilters, MeetRecordingInfo, MeetRecordingStatus } from '@openvidu-meet/typings';
import {
MeetRecordingEncodingOptions,
MeetRecordingEncodingPreset,
MeetRecordingFilters,
MeetRecordingInfo,
MeetRecordingLayout,
MeetRecordingStatus,
MeetRoom,
MeetRoomConfig
} from '@openvidu-meet/typings';
import { inject, injectable } from 'inversify';
import { EgressStatus, EncodedFileOutput, EncodedFileType, RoomCompositeOptions } from 'livekit-server-sdk';
import ms from 'ms';
@ -6,6 +15,7 @@ import { Readable } from 'stream';
import { uid } from 'uid';
import { INTERNAL_CONFIG } from '../config/internal-config.js';
import { MEET_ENV } from '../environment.js';
import { EncodingConverter } from '../helpers/encoding-converter.helper.js';
import { RecordingHelper } from '../helpers/recording.helper.js';
import { MeetLock } from '../helpers/redis.helper.js';
import { DistributedEventType } from '../models/distributed-event.model.js';
@ -45,7 +55,13 @@ export class RecordingService {
@inject(LoggerService) protected logger: LoggerService
) {}
async startRecording(roomId: string): Promise<MeetRecordingInfo> {
async startRecording(
roomId: string,
configOverride?: {
layout?: MeetRecordingLayout;
encoding?: MeetRecordingEncodingPreset | MeetRecordingEncodingOptions;
}
): Promise<MeetRecordingInfo> {
let acquiredLock: RedisLock | null = null;
let eventListener!: (info: Record<string, unknown>) => void;
let recordingId = '';
@ -58,7 +74,7 @@ export class RecordingService {
if (!acquiredLock) throw errorRecordingAlreadyStarted(roomId);
await this.validateRoomForStartRecording(roomId);
const room = await this.validateRoomForStartRecording(roomId);
// Manually send the recording signal to OpenVidu Components to avoid a missing event if a timeout occurs
// and the egress_started webhook is not received.
@ -69,19 +85,21 @@ export class RecordingService {
status: MeetRecordingStatus.STARTING
});
// Promise that rejects after timeout
const timeoutPromise = new Promise<never>((_, reject) => {
timeoutId = setTimeout(() => {
if (isOperationCompleted) return;
isOperationCompleted = true;
//Clean up the event listener and timeout
// Clean up the event listener and timeout
this.systemEventService.off(DistributedEventType.RECORDING_ACTIVE, eventListener);
this.handleRecordingTimeout(recordingId, roomId).catch(() => {});
reject(errorRecordingStartTimeout(roomId));
}, ms(INTERNAL_CONFIG.RECORDING_STARTED_TIMEOUT));
});
// Promise that resolves when RECORDING_ACTIVE event is received
const activeEgressEventPromise = new Promise<MeetRecordingInfo>((resolve) => {
eventListener = (info: Record<string, unknown>) => {
// Process the event only if it belongs to the current room.
@ -98,9 +116,10 @@ export class RecordingService {
this.systemEventService.on(DistributedEventType.RECORDING_ACTIVE, eventListener);
});
// Promise that starts the recording process
const startRecordingPromise = (async (): Promise<MeetRecordingInfo> => {
try {
const options = this.generateCompositeOptionsFromRequest();
const options = this.generateCompositeOptionsFromRequest(room.config, configOverride);
const output = this.generateFileOutputFromRequest(roomId);
const egressInfo = await this.livekitService.startRoomComposite(roomId, output, options);
@ -128,6 +147,16 @@ export class RecordingService {
} catch (error) {
if (isOperationCompleted) {
this.logger.warn(`startRoomComposite failed after timeout: ${error}`);
// Manually send the recording FAILED signal to OpenVidu Components to avoid a missing event
await this.frontendEventService.sendRecordingSignalToOpenViduComponents(roomId, {
recordingId,
roomId,
roomName: roomId,
status: MeetRecordingStatus.FAILED,
error: (error as Error).message
});
throw errorRecordingStartTimeout(roomId);
}
@ -542,7 +571,14 @@ export class RecordingService {
}
}
protected async validateRoomForStartRecording(roomId: string): Promise<void> {
/**
* Validates that a room exists and has participants before starting a recording.
*
* @param roomId
* @returns The MeetRoom object if validation passes.
* @throws Will throw an error if the room does not exist or has no participants.
*/
protected async validateRoomForStartRecording(roomId: string): Promise<MeetRoom> {
const room = await this.roomRepository.findByRoomId(roomId);
if (!room) throw errorRoomNotFound(roomId);
@ -550,6 +586,8 @@ export class RecordingService {
const hasParticipants = await this.livekitService.roomHasParticipants(roomId);
if (!hasParticipants) throw errorRoomHasNoParticipants(roomId);
return room;
}
/**
@ -683,13 +721,32 @@ export class RecordingService {
}
}
protected generateCompositeOptionsFromRequest(layout = 'grid'): RoomCompositeOptions {
/**
* Generates composite options for recording based on the provided room configuration.
* If configOverride is provided, its values will take precedence over room configuration.
*
* @param roomConfig The room configuration
* @param configOverride Optional configuration override from the request
* @returns The generated RoomCompositeOptions object.
*/
protected generateCompositeOptionsFromRequest(
roomConfig: MeetRoomConfig,
configOverride?: {
layout?: MeetRecordingLayout;
encoding?: MeetRecordingEncodingPreset | MeetRecordingEncodingOptions;
}
): RoomCompositeOptions {
const roomRecordingConfig = roomConfig.recording;
const layout = configOverride?.layout ?? roomRecordingConfig.layout;
const encoding = configOverride?.encoding ?? roomRecordingConfig.encoding;
const encodingOptions = EncodingConverter.toLivekit(encoding);
return {
layout: layout
layout,
encodingOptions
// customBaseUrl: customLayout,
// audioOnly: false,
// videoOnly: false
// encodingOptions
};
}
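The precedence rule in `generateCompositeOptionsFromRequest` is plain nullish coalescing: a value supplied in the request override wins, otherwise the room's stored recording config applies. A self-contained sketch (the field types are illustrative strings, not the full `MeetRoomConfig` shapes):

```typescript
// Resolve effective recording options: request override first, room config as fallback.
interface RecordingConfig {
  layout: string;
  encoding: string;
}

function resolveRecordingOptions(
  roomRecording: RecordingConfig,
  override?: Partial<RecordingConfig>
): RecordingConfig {
  return {
    // `??` keeps the room value unless the override explicitly provides one
    layout: override?.layout ?? roomRecording.layout,
    encoding: override?.encoding ?? roomRecording.encoding
  };
}
```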


@ -1,6 +1,5 @@
import {
MeetingEndAction,
MeetRecordingAccess,
MeetRoom,
MeetRoomConfig,
MeetRoomDeletionErrorCode,
@ -14,6 +13,7 @@ import {
} from '@openvidu-meet/typings';
import { inject, injectable } from 'inversify';
import { CreateOptions, Room } from 'livekit-server-sdk';
import merge from 'lodash.merge';
import ms from 'ms';
import { uid as secureUid } from 'uid/secure';
import { uid } from 'uid/single';
@ -34,6 +34,7 @@ import { LoggerService } from './logger.service.js';
import { RecordingService } from './recording.service.js';
import { RequestSessionService } from './request-session.service.js';
/**
* Service for managing OpenVidu Meet rooms.
*
@ -54,7 +55,7 @@ export class RoomService {
/**
* Creates an OpenVidu Meet room with the specified options.
*
* @param {MeetRoomOptions} options - The options for creating the OpenVidu room.
* @param {MeetRoomOptions} roomOptions - The options for creating the OpenVidu room.
* @returns {Promise<MeetRoom>} A promise that resolves to the created OpenVidu room.
*
* @throws {Error} If the room creation fails.
@ -66,22 +67,7 @@ export class RoomService {
// Generate a unique room ID based on the room name
const roomIdPrefix = MeetRoomHelper.createRoomIdPrefixFromRoomName(roomName!) || 'room';
const roomId = `${roomIdPrefix}-${uid(15)}`;
const defaultConfig: MeetRoomConfig = {
recording: { enabled: true, allowAccessTo: MeetRecordingAccess.ADMIN_MODERATOR_SPEAKER },
chat: { enabled: true },
virtualBackground: { enabled: true },
e2ee: { enabled: false }
};
const roomConfig = {
...defaultConfig,
...config
};
// Disable recording if E2EE is enabled
if (roomConfig.e2ee.enabled && roomConfig.recording.enabled) {
roomConfig.recording.enabled = false;
}
const roomConfig: MeetRoomConfig = config as MeetRoomConfig;
const meetRoom: MeetRoom = {
roomId,
@ -147,11 +133,8 @@ export class RoomService {
throw errorRoomActiveMeeting(roomId);
}
// Merge the partial config with the existing config
room.config = {
...room.config,
...config
};
// Merge existing config with new config (partial update)
room.config = merge({}, room.config, config);
// Disable recording if E2EE is enabled
if (room.config.e2ee.enabled && room.config.recording.enabled) {
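The switch from an object spread to `lodash.merge` matters because the config is nested: a shallow spread replaces whole sections, so a partial update like `{ recording: { enabled: false } }` would silently drop sibling fields such as the layout. A minimal recursive merge sketch illustrating the behavior (`lodash.merge` itself handles more edge cases, such as arrays and prototype safety):

```typescript
// Minimal deep merge: nested plain objects are merged key by key,
// everything else (primitives, arrays) is replaced by the patch value.
type Obj = Record<string, unknown>;

function deepMerge(target: Obj, patch: Obj): Obj {
  const out: Obj = { ...target };
  for (const [key, value] of Object.entries(patch)) {
    const current = out[key];
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      current && typeof current === "object" && !Array.isArray(current)
    ) {
      out[key] = deepMerge(current as Obj, value as Obj); // merge nested sections
    } else {
      out[key] = value; // primitives and arrays are replaced outright
    }
  }
  return out;
}
```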


@ -1,474 +0,0 @@
import { GlobalConfig, MeetApiKey, MeetRecordingInfo, MeetRoom, MeetUser } from '@openvidu-meet/typings';
import { inject, injectable } from 'inversify';
import { OpenViduMeetError } from '../../models/error.model.js';
import { RedisKeyName } from '../../models/redis.model.js';
import { LoggerService } from '../logger.service.js';
import { RedisService } from '../redis.service.js';
import { StorageFactory } from './storage.factory.js';
import { StorageKeyBuilder, StorageProvider } from './storage.interface.js';
/**
* Legacy storage service for reading and migrating data from S3/ABS/GCS to MongoDB.
*
* This service is used during the migration process to:
* - Read existing data from legacy storage (S3/Azure Blob Storage/Google Cloud Storage)
* - Access data cached in Redis that originated from legacy storage
* - Clean up legacy data after successful migration to MongoDB
*
* **Important**: This service is read-only for migration purposes. New data should be
* created directly in MongoDB using the appropriate repositories (RoomRepository,
* RecordingRepository, UserRepository, etc.).
*
* Legacy storage structure:
* - Rooms: Stored as JSON files in blob storage with Redis cache
* - Recordings: Metadata as JSON files, binary media as separate blob files
* - Users: Stored as JSON files with Redis cache
* - API Keys: Stored as JSON files with Redis cache
* - Global Config: Stored as JSON files with Redis cache
*/
@injectable()
export class LegacyStorageService {
protected storageProvider: StorageProvider;
protected keyBuilder: StorageKeyBuilder;
constructor(
@inject(LoggerService) protected logger: LoggerService,
@inject(StorageFactory) protected storageFactory: StorageFactory,
@inject(RedisService) protected redisService: RedisService
) {
const { provider, keyBuilder } = this.storageFactory.create();
this.storageProvider = provider;
this.keyBuilder = keyBuilder;
}
// ==========================================
// GLOBAL CONFIG DOMAIN LOGIC
// ==========================================
/**
* Retrieves the global configuration from legacy storage.
*
* @returns A promise that resolves to the global configuration, or null if not found
*/
async getGlobalConfig(): Promise<GlobalConfig | null> {
const redisKey = RedisKeyName.GLOBAL_CONFIG;
const storageKey = this.keyBuilder.buildGlobalConfigKey();
const config = await this.getFromCacheAndStorage<GlobalConfig>(redisKey, storageKey);
return config;
}
/**
* Deletes the global configuration from legacy storage.
*/
async deleteGlobalConfig(): Promise<void> {
const redisKey = RedisKeyName.GLOBAL_CONFIG;
const storageKey = this.keyBuilder.buildGlobalConfigKey();
await this.deleteFromCacheAndStorage(redisKey, storageKey);
}
// ==========================================
// ROOM DOMAIN LOGIC
// ==========================================
/**
* Retrieves a paginated list of rooms from legacy storage.
*
* @param maxItems - Optional maximum number of rooms to retrieve per page
* @param nextPageToken - Optional token for pagination to get the next set of results
* @returns Promise that resolves to an object containing:
* - rooms: Array of MeetRoom objects retrieved from storage
* - isTruncated: Boolean indicating if there are more results available
* - nextPageToken: Optional token for retrieving the next page of results
*/
async getRooms(
roomName?: string,
maxItems?: number,
nextPageToken?: string
): Promise<{
rooms: MeetRoom[];
isTruncated: boolean;
nextPageToken?: string;
}> {
try {
const searchKey = this.keyBuilder.buildAllMeetRoomsKey(roomName);
const { Contents, IsTruncated, NextContinuationToken } = await this.storageProvider.listObjects(
searchKey,
maxItems,
nextPageToken
);
const rooms: MeetRoom[] = [];
if (Contents && Contents.length > 0) {
const roomPromises = Contents.map(async (item) => {
if (item.Key && item.Key.endsWith('.json')) {
try {
const room = await this.storageProvider.getObject<MeetRoom>(item.Key);
return room;
} catch (error) {
this.logger.warn(`Failed to load room from ${item.Key}: ${error}`);
return null;
}
}
return null;
});
const roomResults = await Promise.all(roomPromises);
rooms.push(...roomResults.filter((room): room is Awaited<MeetRoom> => room !== null));
}
return {
rooms,
isTruncated: IsTruncated || false,
nextPageToken: NextContinuationToken
};
} catch (error) {
this.handleError(error, 'Error retrieving rooms');
throw error;
}
}
/**
* Deletes multiple rooms by roomIds from legacy storage.
*
* @param roomIds - Array of room identifiers to delete
*/
async deleteRooms(roomIds: string[]): Promise<void> {
const roomKeys = roomIds.map((roomId) => this.keyBuilder.buildMeetRoomKey(roomId));
const redisKeys = roomIds.map((roomId) => RedisKeyName.ROOM + roomId);
await this.deleteFromCacheAndStorageBatch(redisKeys, roomKeys);
}
/**
* Deletes archived room metadata for a given roomId from legacy storage.
*
* @param roomId - The unique room identifier
*/
async deleteArchivedRoomMetadata(roomId: string): Promise<void> {
const redisKey = RedisKeyName.ARCHIVED_ROOM + roomId;
const storageKey = this.keyBuilder.buildArchivedMeetRoomKey(roomId);
await this.deleteFromCacheAndStorage(redisKey, storageKey);
}
// ==========================================
// RECORDING DOMAIN LOGIC
// ==========================================
/**
* Retrieves a paginated list of recordings from legacy storage
*
* @param maxItems - Optional maximum number of items to return per page for pagination.
* @param nextPageToken - Optional token for pagination to retrieve the next page of results.
*
* @returns A promise that resolves to an object containing:
* - `recordings`: Array of recording metadata objects (MeetRecordingInfo)
* - `isTruncated`: Optional boolean indicating if there are more results available
* - `nextContinuationToken`: Optional token to retrieve the next page of results
*/
async getRecordings(
roomId?: string,
maxItems?: number,
nextPageToken?: string
): Promise<{ recordings: MeetRecordingInfo[]; isTruncated?: boolean; nextContinuationToken?: string }> {
try {
const searchKey = this.keyBuilder.buildAllMeetRecordingsKey(roomId);
const { Contents, IsTruncated, NextContinuationToken } = await this.storageProvider.listObjects(
searchKey,
maxItems,
nextPageToken
);
const recordings: MeetRecordingInfo[] = [];
if (Contents && Contents.length > 0) {
const recordingPromises = Contents.map(async (item) => {
if (!item.Key || !item.Key.endsWith('.json')) {
return null;
}
try {
const recording = await this.storageProvider.getObject<MeetRecordingInfo>(item.Key!);
return recording;
} catch (error) {
this.logger.warn(`Failed to load recording metadata from ${item.Key}: ${error}`);
return null;
}
});
const recordingResults = await Promise.all(recordingPromises);
recordings.push(
...recordingResults.filter(
(recording): recording is Awaited<MeetRecordingInfo> => recording !== null
)
);
}
return {
recordings: recordings,
isTruncated: Boolean(IsTruncated),
nextContinuationToken: NextContinuationToken
};
} catch (error) {
this.handleError(error, 'Error retrieving recordings');
throw error;
}
}
/**
* Retrieves access secrets for a specific recording from legacy storage.
*
* @param recordingId - The unique identifier of the recording
* @returns A promise that resolves to an object containing public and private access secrets,
* or null if no secrets are found for the given recordingId
*/
async getRecordingAccessSecrets(
recordingId: string
): Promise<{ publicAccessSecret: string; privateAccessSecret: string } | null> {
try {
const redisKey = RedisKeyName.RECORDING_SECRETS + recordingId;
const secretsKey = this.keyBuilder.buildAccessRecordingSecretsKey(recordingId);
const secrets = await this.getFromCacheAndStorage<{
publicAccessSecret: string;
privateAccessSecret: string;
}>(redisKey, secretsKey);
if (!secrets) {
this.logger.warn(`No access secrets found for recording ${recordingId}`);
return null;
}
return secrets;
} catch (error) {
this.handleError(error, `Error fetching access secrets for recording ${recordingId}`);
throw error;
}
}
/**
* Deletes multiple recordings by recordingIds from legacy storage.
*
* @param recordingIds - Array of recording identifiers to delete
*/
async deleteRecordings(recordingIds: string[]): Promise<void> {
if (recordingIds.length === 0) {
this.logger.debug('No recordings to delete');
return;
}
try {
// Build all paths from recordingIds
const redisKeys: string[] = [];
const storageKeys: string[] = [];
for (const recordingId of recordingIds) {
redisKeys.push(RedisKeyName.RECORDING + recordingId);
redisKeys.push(RedisKeyName.RECORDING_SECRETS + recordingId);
storageKeys.push(this.keyBuilder.buildMeetRecordingKey(recordingId));
storageKeys.push(this.keyBuilder.buildAccessRecordingSecretsKey(recordingId));
}
await this.deleteFromCacheAndStorageBatch(redisKeys, storageKeys);
} catch (error) {
this.handleError(error, `Error deleting recordings: ${recordingIds.join(', ')}`);
throw error;
}
}
// ==========================================
// USER DOMAIN LOGIC
// ==========================================
/**
* Retrieves user data for a specific username from legacy storage.
*
* @param username - The username of the user to retrieve
* @returns A promise that resolves to the user data, or null if not found
*/
async getUser(username: string): Promise<MeetUser | null> {
const redisKey = RedisKeyName.USER + username;
const storageKey = this.keyBuilder.buildUserKey(username);
const user = await this.getFromCacheAndStorage<MeetUser>(redisKey, storageKey);
return user;
}
/**
* Deletes user data for a specific username from legacy storage.
*
* @param username - The username of the user to delete
*/
async deleteUser(username: string): Promise<void> {
const redisKey = RedisKeyName.USER + username;
const storageKey = this.keyBuilder.buildUserKey(username);
await this.deleteFromCacheAndStorage(redisKey, storageKey);
}
// ==========================================
// API KEY DOMAIN LOGIC
// ==========================================
/**
* Retrieves all API keys from legacy storage.
*
* @returns A promise that resolves to an array of MeetApiKey objects
*/
async getApiKeys(): Promise<MeetApiKey[]> {
const redisKey = RedisKeyName.API_KEYS;
const storageKey = this.keyBuilder.buildApiKeysKey();
const apiKeys = await this.getFromCacheAndStorage<MeetApiKey[]>(redisKey, storageKey);
if (!apiKeys) {
return [];
}
return apiKeys;
}
/**
* Deletes all API keys from legacy storage.
*/
async deleteApiKeys(): Promise<void> {
const redisKey = RedisKeyName.API_KEYS;
const storageKey = this.keyBuilder.buildApiKeysKey();
await this.deleteFromCacheAndStorage(redisKey, storageKey);
}
// ==========================================
// PRIVATE HYBRID CACHE METHODS (Redis + Storage)
// ==========================================
/**
* Retrieves data from Redis cache first, falls back to storage if not found.
*
* @param redisKey - The Redis key to check first
* @param storageKey - The storage key/path as fallback
* @returns Promise that resolves with the data or null if not found
*/
protected async getFromCacheAndStorage<T>(redisKey: string, storageKey: string): Promise<T | null> {
try {
// 1. Try Redis first (fast cache)
this.logger.debug(`Attempting to get data from Redis cache: ${redisKey}`);
const cachedData = await this.redisService.get(redisKey);
if (cachedData) {
this.logger.debug(`Cache HIT for key: ${redisKey}`);
try {
return JSON.parse(cachedData) as T;
} catch (parseError) {
this.logger.warn(`Failed to parse cached data for key ${redisKey}: ${parseError}`);
// Continue to storage fallback
}
} else {
this.logger.debug(`Cache MISS for key: ${redisKey}`);
}
// 2. Fallback to persistent storage
this.logger.debug(`Attempting to get data from storage: ${storageKey}`);
const storageData = await this.storageProvider.getObject<T>(storageKey);
if (!storageData) {
this.logger.debug(`Data not found in storage for key: ${storageKey}`);
}
return storageData;
} catch (error) {
this.handleError(error, `Error in hybrid cache get for keys: ${redisKey}, ${storageKey}`);
throw error;
}
}
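The read path above is a classic cache-aside lookup: try Redis, fall back to persistent storage, and treat an unparseable cache entry as a miss rather than an error. A minimal, self-contained sketch of that flow, with `Map`s standing in for Redis and the storage provider (the helper name `readThrough` is hypothetical):

```typescript
// Minimal cache-aside sketch; Maps stand in for Redis and persistent storage.
const cache = new Map<string, string>();
const storage = new Map<string, unknown>();

async function readThrough<T>(redisKey: string, storageKey: string): Promise<T | null> {
	const cached = cache.get(redisKey);
	if (cached !== undefined) {
		try {
			return JSON.parse(cached) as T; // cache HIT
		} catch {
			// Corrupt cache entry: fall through to the storage fallback
		}
	}
	// Cache MISS (or unparseable entry): read from persistent storage
	return (storage.get(storageKey) as T) ?? null;
}
```

Swallowing the parse error and continuing to storage is what keeps a corrupted cache entry from turning into a user-visible failure.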
/**
* Deletes data from both Redis cache and persistent storage.
*
* @param redisKey - The Redis key to delete
* @param storageKey - The storage key to delete
*/
protected async deleteFromCacheAndStorage(redisKey: string, storageKey: string): Promise<void> {
return await this.deleteFromCacheAndStorageBatch([redisKey], [storageKey]);
}
/**
* Deletes data from both Redis cache and persistent storage in batch.
*
* @param redisKeys - Array of Redis keys to delete
* @param storageKeys - Array of storage keys to delete
*/
protected async deleteFromCacheAndStorageBatch(redisKeys: string[], storageKeys: string[]): Promise<void> {
if (redisKeys.length === 0 && storageKeys.length === 0) {
this.logger.debug('No keys to delete in batch');
return;
}
this.logger.debug(`Batch deleting ${redisKeys.length} Redis keys and ${storageKeys.length} storage keys`);
const operations = [
// Batch delete from Redis (only if there are keys to delete)
redisKeys.length > 0
? this.redisService.delete(redisKeys).catch((error) => {
this.logger.warn(`Redis batch delete failed: ${error}`);
return Promise.reject({ type: 'redis', error, affectedKeys: redisKeys });
})
: Promise.resolve(0),
// Batch delete from storage (only if there are keys to delete)
storageKeys.length > 0
? this.storageProvider.deleteObjects(storageKeys).catch((error) => {
this.logger.warn(`Storage batch delete failed: ${error}`);
return Promise.reject({ type: 'storage', error, affectedKeys: storageKeys });
})
: Promise.resolve()
];
try {
const results = await Promise.allSettled(operations);
const redisResult = results[0];
const storageResult = results[1];
const redisSuccess = redisResult.status === 'fulfilled';
const storageSuccess = storageResult.status === 'fulfilled';
if (redisKeys.length > 0) {
if (redisSuccess) {
const deletedCount = (redisResult as PromiseFulfilledResult<number>).value;
this.logger.debug(`Redis batch delete succeeded: ${deletedCount} keys deleted`);
} else {
const redisError = (redisResult as PromiseRejectedResult).reason;
this.logger.warn(`Redis batch delete failed:`, redisError.error);
}
}
if (storageKeys.length > 0) {
if (storageSuccess) {
this.logger.debug(`Storage batch delete succeeded: ${storageKeys.length} keys deleted`);
} else {
const storageError = (storageResult as PromiseRejectedResult).reason;
this.logger.warn(`Storage batch delete failed:`, storageError.error);
}
}
this.logger.debug(`Batch delete completed: Redis=${redisSuccess}, Storage=${storageSuccess}`);
} catch (error) {
this.handleError(error, `Error in batch delete operation`);
throw error;
}
}
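The batch delete relies on `Promise.allSettled` so that each backend is attempted independently: a Redis failure does not prevent the storage delete, and vice versa. A stripped-down sketch of that pattern (the function and parameter names are illustrative, not from the codebase):

```typescript
// Each backend is attempted independently; one failing does not abort the other.
async function batchDelete(
	deleteFromRedis: () => Promise<number>,
	deleteFromStorage: () => Promise<void>
): Promise<{ redisOk: boolean; storageOk: boolean }> {
	const [redisResult, storageResult] = await Promise.allSettled([deleteFromRedis(), deleteFromStorage()]);
	return {
		redisOk: redisResult.status === 'fulfilled',
		storageOk: storageResult.status === 'fulfilled'
	};
}
```

Contrast this with `Promise.all`, which would reject as soon as either backend failed, leaving the other delete's outcome unknown.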
protected handleError(error: unknown, context: string): void {
if (error instanceof OpenViduMeetError) {
this.logger.error(`${context}: ${error.message}`);
} else {
this.logger.error(`${context}: ${error}`);
}
}
}

View File

@ -2,48 +2,8 @@ import { RecordingHelper } from '../../../../helpers/recording.helper.js';
import { StorageKeyBuilder } from '../../storage.interface.js';
export class S3KeyBuilder implements StorageKeyBuilder {
buildGlobalConfigKey(): string {
return `global-config.json`;
}
buildMeetRoomKey(roomId: string): string {
return `rooms/${roomId}/${roomId}.json`;
}
buildAllMeetRoomsKey(roomName?: string): string {
const roomSegment = roomName ? `/${roomName}` : '';
return `rooms${roomSegment}`;
}
buildArchivedMeetRoomKey(roomId: string): string {
return `recordings/.room_metadata/${roomId}/room_metadata.json`;
}
buildMeetRecordingKey(recordingId: string): string {
const { roomId, egressId, uid } = RecordingHelper.extractInfoFromRecordingId(recordingId);
return `recordings/.metadata/${roomId}/${egressId}/${uid}.json`;
}
buildBinaryRecordingKey(recordingId: string): string {
const { roomId, uid } = RecordingHelper.extractInfoFromRecordingId(recordingId);
return `recordings/${roomId}/${roomId}--${uid}.mp4`;
}
buildAllMeetRecordingsKey(roomId?: string): string {
const roomSegment = roomId ? `/${roomId}` : '';
return `recordings/.metadata${roomSegment}`;
}
buildAccessRecordingSecretsKey(recordingId: string): string {
const { roomId, egressId, uid } = RecordingHelper.extractInfoFromRecordingId(recordingId);
return `recordings/.secrets/${roomId}/${egressId}/${uid}.json`;
}
buildUserKey(userId: string): string {
return `users/${userId}.json`;
}
buildApiKeysKey(): string {
return `api_keys.json`;
}
}
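The builder centralizes the bucket layout, so every service derives the same object keys. Two of the builders from the class above, copied as standalone functions so the shapes can be shown concretely:

```typescript
// Key shapes produced by S3KeyBuilder (taken verbatim from the class above).
const buildMeetRoomKey = (roomId: string) => `rooms/${roomId}/${roomId}.json`;
const buildUserKey = (userId: string) => `users/${userId}.json`;

console.log(buildMeetRoomKey('daily-standup')); // rooms/daily-standup/daily-standup.json
console.log(buildUserKey('alice'));             // users/alice.json
```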

View File

@ -108,69 +108,10 @@ export interface StorageProvider {
* Provides methods to generate standardized keys for different types of data storage operations.
*/
export interface StorageKeyBuilder {
/**
* Builds the key for global config storage.
*/
buildGlobalConfigKey(): string;
/**
* Builds the key for a specific room.
*
* @param roomId - The unique identifier of the meeting room
*/
buildMeetRoomKey(roomId: string): string;
/**
* Builds the key for all meeting rooms.
*
* @param roomName - Optional name of the meeting room to filter by
*/
buildAllMeetRoomsKey(roomName?: string): string;
/**
* Builds the key for archived room metadata.
*
* @param roomId - The unique identifier of the meeting room
*/
buildArchivedMeetRoomKey(roomId: string): string;
/**
* Builds the key for a specific recording.
*
* @param recordingId - The unique identifier of the recording
*/
buildBinaryRecordingKey(recordingId: string): string;
/**
* Builds the key for a specific recording metadata.
*
* @param recordingId - The unique identifier of the recording
*/
buildMeetRecordingKey(recordingId: string): string;
/**
* Builds the key for all recordings in a room or globally.
*
* @param roomId - Optional room identifier to filter recordings by room
*/
buildAllMeetRecordingsKey(roomId?: string): string;
/**
* Builds the key for access recording secrets.
*
* @param recordingId - The unique identifier of the recording
*/
buildAccessRecordingSecretsKey(recordingId: string): string;
/**
* Builds the key for a specific user.
*
* @param userId - The unique identifier of the user
*/
buildUserKey(userId: string): string;
/**
* Builds the key for API keys storage.
*/
buildApiKeysKey(): string;
}

View File

@ -0,0 +1,147 @@
import chalk from 'chalk';
import fs from 'fs';
import { MEET_ENV } from '../environment.js';
let cachedHtml: string | null = null;
const cachedOpenApiHtml = new Map<string, string>();
let configValidated = false;
/**
* Normalizes the base path to ensure it starts and ends with /
* @param basePath The base path to normalize
* @returns Normalized base path (e.g., '/', '/meet/', '/app/path/')
*/
export function normalizeBasePath(basePath: string): string {
let normalized = basePath.trim();
// Handle empty string
if (!normalized) {
return '/';
}
// Ensure it starts with /
if (!normalized.startsWith('/')) {
normalized = '/' + normalized;
}
// Ensure it ends with /
if (!normalized.endsWith('/')) {
normalized = normalized + '/';
}
return normalized;
}
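A few concrete inputs and outputs make the invariant clear: whatever comes in, the result always starts and ends with `/`. The function is copied verbatim so the snippet runs standalone:

```typescript
// normalizeBasePath always yields a path with leading and trailing slashes.
function normalizeBasePath(basePath: string): string {
	let normalized = basePath.trim();
	if (!normalized) return '/';
	if (!normalized.startsWith('/')) normalized = '/' + normalized;
	if (!normalized.endsWith('/')) normalized = normalized + '/';
	return normalized;
}

console.log(normalizeBasePath(''));       // '/'
console.log(normalizeBasePath('meet'));   // '/meet/'
console.log(normalizeBasePath('/app/x')); // '/app/x/'
```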
/**
* Validates the BASE_URL and BASE_PATH configuration and warns about potential issues.
* Only runs once per process.
*/
function validateBasePathConfig(): void {
if (configValidated) return;
configValidated = true;
const baseUrl = MEET_ENV.BASE_URL;
const basePath = MEET_ENV.BASE_PATH;
if (baseUrl) {
try {
const url = new URL(baseUrl);
// Check if BASE_URL contains a path (other than just /)
if (url.pathname && url.pathname !== '/') {
console.warn(chalk.yellow('⚠️ WARNING: MEET_BASE_URL contains a path segment:'), chalk.cyan(url.pathname));
console.warn(chalk.yellow(' MEET_BASE_URL should contain only the protocol and host (e.g., https://example.com)'));
console.warn(chalk.yellow(' Use MEET_BASE_PATH for the deployment path (e.g., /meet/)'));
if (basePath && basePath !== '/') {
console.warn(chalk.red(` This may cause issues: BASE_URL path "${url.pathname}" + BASE_PATH "${basePath}"`));
}
}
} catch {
console.warn(chalk.yellow('⚠️ WARNING: MEET_BASE_URL is not a valid URL:'), chalk.cyan(baseUrl));
}
}
}
/**
* Gets the configured base path, normalized
* @returns The normalized base path from MEET_BASE_PATH environment variable
*/
export function getBasePath(): string {
validateBasePathConfig();
return normalizeBasePath(MEET_ENV.BASE_PATH);
}
/**
* Applies runtime base path configuration to the index.html
* - Replaces the <base href="/"> tag with the configured base path
* - Adds a script with window.__OPENVIDU_MEET_CONFIG__ for frontend access
*
* @param htmlPath Path to the index.html file
* @returns The modified HTML content
*/
export function getHtmlWithBasePath(htmlPath: string): string {
// In production, cache the result for performance
if (process.env.NODE_ENV === 'production' && cachedHtml) {
return cachedHtml;
}
const basePath = getBasePath();
let html = fs.readFileSync(htmlPath, 'utf-8');
// Replace the base href - handle both possible formats
html = html.replace(/<base href="[^"]*"\s*\/?>/i, `<base href="${basePath}">`);
// Inject runtime configuration script before the closing </head> tag
const configScript = `<script>window.__OPENVIDU_MEET_CONFIG__={basePath:"${basePath}"};</script>`;
html = html.replace('</head>', `${configScript}\n</head>`);
if (process.env.NODE_ENV === 'production') {
cachedHtml = html;
}
return html;
}
/**
* Applies the runtime base path to the OpenAPI documentation HTML.
* Replaces the servers URL in the embedded OpenAPI spec so that "Try It" requests
* use the correct path when deployed under a base path (e.g. /meet/api/v1).
*
* @param htmlPath Path to the OpenAPI HTML file
* @param apiBasePath The API base path (e.g. /api/v1 or /internal-api/v1)
* @returns The modified HTML content
*/
export function getOpenApiHtmlWithBasePath(htmlPath: string, apiBasePath: string): string {
if (process.env.NODE_ENV === 'production' && cachedOpenApiHtml.has(htmlPath)) {
return cachedOpenApiHtml.get(htmlPath)!;
}
const basePath = getBasePath();
// Build full server URL: strip trailing slash from basePath to avoid double slashes
const fullServerUrl = basePath.replace(/\/$/, '') + apiBasePath;
let html = fs.readFileSync(htmlPath, 'utf-8');
// Replace the servers URL in the embedded OpenAPI JSON
// Matches "servers":[{"url":"<any-url>" and replaces the URL with the full path
html = html.replace(
/("servers":\[\{"url":")[^"]*(")/,
`$1${fullServerUrl}$2`
);
if (process.env.NODE_ENV === 'production') {
cachedOpenApiHtml.set(htmlPath, html);
}
return html;
}
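The core of the OpenAPI rewrite is a single regex replace over the embedded spec JSON. A self-contained sketch of just that step, using the same regex as above (the sample HTML string is invented for illustration):

```typescript
// Sketch of the servers-URL rewrite applied to the OpenAPI HTML.
function rewriteServersUrl(html: string, fullServerUrl: string): string {
	// Matches "servers":[{"url":"<any-url>" and swaps in the full path
	return html.replace(/("servers":\[\{"url":")[^"]*(")/, `$1${fullServerUrl}$2`);
}

const sampleHtml = '<script>const spec={"servers":[{"url":"/api/v1"}]};</script>';
console.log(rewriteServersUrl(sampleHtml, '/meet/api/v1'));
// <script>const spec={"servers":[{"url":"/meet/api/v1"}]};</script>
```

The capture groups keep the surrounding JSON intact; only the URL between the quotes is replaced.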
/**
* Clears the cached HTML (useful for testing or config changes)
*/
export function clearHtmlCache(): void {
cachedHtml = null;
cachedOpenApiHtml.clear();
}

View File

@ -0,0 +1,5 @@
export * from './array.utils.js';
export * from './path.utils.js';
export * from './token.utils.js';
export * from './url.utils.js';

View File

@ -1,5 +1,5 @@
import path from 'path';
import fs from 'fs';
import path from 'path';
import { fileURLToPath } from 'url';
/**
@ -24,8 +24,9 @@ const isDevEnvironment = (): boolean => {
// Helper: walk up the directory tree looking for a predicate
const findUp = (startDir: string, predicate: (d: string) => boolean): string | null => {
let dir = path.resolve(startDir);
let parent = path.dirname(dir);
while (true) {
while (dir !== parent) {
try {
if (predicate(dir)) {
return dir;
@ -34,12 +35,11 @@ const findUp = (startDir: string, predicate: (d: string) => boolean): string | n
// ignore fs errors and continue climbing
}
const parent = path.dirname(dir);
if (parent === dir) return null;
dir = parent;
parent = path.dirname(dir);
}
return null;
};
const getBackendRoot = (): string => {
@ -57,13 +57,19 @@ const getBackendRoot = (): string => {
}
// Otherwise, try to find upward a directory containing package.json and src
const pkgRoot = findUp(cwd, (d) => fs.existsSync(path.join(d, 'package.json')) && fs.existsSync(path.join(d, 'src')));
const pkgRoot = findUp(
cwd,
(d) => fs.existsSync(path.join(d, 'package.json')) && fs.existsSync(path.join(d, 'src'))
);
if (pkgRoot) return pkgRoot;
// Try using the file's directory as a fallback starting point
const fileDir = path.dirname(fileURLToPath(import.meta.url));
const pkgRootFromFile = findUp(fileDir, (d) => fs.existsSync(path.join(d, 'package.json')) && fs.existsSync(path.join(d, 'src')));
const pkgRootFromFile = findUp(
fileDir,
(d) => fs.existsSync(path.join(d, 'package.json')) && fs.existsSync(path.join(d, 'src'))
);
if (pkgRootFromFile) return pkgRootFromFile;
@ -71,7 +77,6 @@ const getBackendRoot = (): string => {
return path.resolve(fileDir, '../..');
};
/**
* Resolves the project root dynamically based on current environment.
* It assumes the backend directory exists in the current project (CE or PRO).

View File

@ -1,33 +1,46 @@
import { container } from '../config/dependency-injector.config.js';
import { MEET_ENV } from '../environment.js';
import { BaseUrlService } from '../services/base-url.service.js';
import { getBasePath } from './html-dynamic-base-path.utils.js';
/**
* Returns the base URL for the application.
* Returns the base URL for the application, including the configured base path.
*
* If the global `BASE_URL` variable is defined, it returns its value,
* ensuring there is no trailing slash and removing default ports (443 for HTTPS, 80 for HTTP).
* Otherwise, it retrieves the base URL from the `HttpContextService` instance.
*
* @returns {string} The base URL as a string.
* The configured BASE_PATH is appended to the URL (without trailing slash).
*
* @returns {string} The base URL as a string (e.g., 'https://example.com/meet').
*/
export const getBaseUrl = (): string => {
let hostUrl: string;
if (MEET_ENV.BASE_URL) {
let baseUrl = MEET_ENV.BASE_URL.endsWith('/') ? MEET_ENV.BASE_URL.slice(0, -1) : MEET_ENV.BASE_URL;
hostUrl = MEET_ENV.BASE_URL.endsWith('/') ? MEET_ENV.BASE_URL.slice(0, -1) : MEET_ENV.BASE_URL;
// Remove default port 443 for HTTPS URLs
if (baseUrl.startsWith('https://') && baseUrl.includes(':443')) {
baseUrl = baseUrl.replace(':443', '');
if (hostUrl.startsWith('https://') && hostUrl.includes(':443')) {
hostUrl = hostUrl.replace(':443', '');
}
// Remove default port 80 for HTTP URLs
if (baseUrl.startsWith('http://') && baseUrl.includes(':80')) {
baseUrl = baseUrl.replace(':80', '');
if (hostUrl.startsWith('http://') && hostUrl.includes(':80')) {
hostUrl = hostUrl.replace(':80', '');
}
return baseUrl;
} else {
const baseUrlService = container.get(BaseUrlService);
hostUrl = baseUrlService.getBaseUrl();
}
const baseUrlService = container.get(BaseUrlService);
return baseUrlService.getBaseUrl();
// Append the base path (without trailing slash)
const basePath = getBasePath();
if (basePath === '/') {
return hostUrl;
}
// Remove trailing slash from base path for the final URL
return `${hostUrl}${basePath.slice(0, -1)}`;
};
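The URL assembly reduces to a pure function: strip a trailing slash and default ports from the host URL, then append the base path without its trailing slash. A sketch under that reading (the name `assembleBaseUrl` is hypothetical; the port-stripping logic mirrors the function above):

```typescript
// Pure sketch of getBaseUrl's assembly: strip default ports, append base path.
function assembleBaseUrl(hostUrl: string, basePath: string): string {
	let url = hostUrl.endsWith('/') ? hostUrl.slice(0, -1) : hostUrl;
	if (url.startsWith('https://') && url.includes(':443')) url = url.replace(':443', '');
	if (url.startsWith('http://') && url.includes(':80')) url = url.replace(':80', '');
	return basePath === '/' ? url : `${url}${basePath.slice(0, -1)}`;
}

console.log(assembleBaseUrl('https://example.com:443/', '/meet/')); // https://example.com/meet
console.log(assembleBaseUrl('http://localhost:5080', '/'));         // http://localhost:5080
```

Note that the substring test `includes(':80')` also matches non-default ports such as `:8080`, so a check anchored to the end of the host (or `URL.port`) may be safer.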

View File

@ -2,7 +2,10 @@ import { expect } from '@jest/globals';
import {
MeetingEndAction,
MeetRecordingAccess,
MeetRecordingEncodingOptions,
MeetRecordingEncodingPreset,
MeetRecordingInfo,
MeetRecordingLayout,
MeetRecordingStatus,
MeetRoom,
MeetRoomAutoDeletionPolicy,
@ -17,6 +20,10 @@ import { Response } from 'supertest';
import { container } from '../../src/config/dependency-injector.config';
import { INTERNAL_CONFIG } from '../../src/config/internal-config';
import { TokenService } from '../../src/services/token.service';
import { getFullPath } from './request-helpers';
export const DEFAULT_RECORDING_ENCODING_PRESET = MeetRecordingEncodingPreset.H264_720P_30;
export const DEFAULT_RECORDING_LAYOUT = MeetRecordingLayout.GRID;
export const expectErrorResponse = (
response: Response,
@ -150,16 +157,20 @@ export const expectValidRoom = (
expect(room.config).toBeDefined();
if (config !== undefined) {
expect(room.config).toEqual(config);
// Use toMatchObject to allow encoding defaults to be added without breaking tests
expect(room.config).toMatchObject(config as any);
} else {
expect(room.config).toEqual({
recording: {
enabled: true,
layout: DEFAULT_RECORDING_LAYOUT,
encoding: DEFAULT_RECORDING_ENCODING_PRESET,
allowAccessTo: MeetRecordingAccess.ADMIN_MODERATOR_SPEAKER
},
chat: { enabled: true },
virtualBackground: { enabled: true },
e2ee: { enabled: false }
e2ee: { enabled: false },
captions: { enabled: true }
});
}
@ -193,6 +204,34 @@ export const expectValidRecording = (
expect(recording.status).toBe(status);
expect(recording.filename).toBeDefined();
expect(recording.details).toBeDefined();
expect(recording.layout).toBeDefined();
// Validate layout is a valid value
if (recording.layout !== undefined) {
expect(Object.values(MeetRecordingLayout)).toContain(recording.layout);
}
// Validate encoding is present and has a valid value
expect(recording.encoding).toBeDefined();
if (recording.encoding !== undefined) {
if (typeof recording.encoding === 'string') {
// Encoding preset: should match the default H264_720P_30
expect(recording.encoding).toBe('H264_720P_30');
} else {
// Advanced encoding options: should have valid codec values
expect(typeof recording.encoding).toBe('object');
const encodingObj = recording.encoding as MeetRecordingEncodingOptions;
if (encodingObj.video?.codec) {
expect(['H264_BASELINE', 'H264_MAIN', 'H264_HIGH', 'VP8']).toContain(encodingObj.video.codec);
}
if (encodingObj.audio?.codec) {
expect(['OPUS', 'AAC']).toContain(encodingObj.audio.codec);
}
}
}
};
export const expectValidRoomWithFields = (room: MeetRoom, fields: string[] = []) => {
@ -222,7 +261,7 @@ export const expectValidRecordingLocationHeader = (response: Response) => {
expect(locationHeader).toBeDefined();
const locationHeaderUrl = new URL(locationHeader);
expect(locationHeaderUrl.pathname).toBe(
`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${response.body.recordingId}`
getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${response.body.recordingId}`)
);
};
@ -356,7 +395,13 @@ export const expectSuccessRecordingMediaResponse = (
}
};
export const expectValidStartRecordingResponse = (response: Response, roomId: string, roomName: string) => {
export const expectValidStartRecordingResponse = (
response: Response,
roomId: string,
roomName: string,
expectedLayout?: MeetRecordingLayout,
expectedEncoding?: MeetRecordingEncodingPreset | MeetRecordingEncodingOptions
) => {
expect(response.status).toBe(201);
expect(response.body).toHaveProperty('recordingId');
@ -371,19 +416,47 @@ export const expectValidStartRecordingResponse = (response: Response, roomId: st
expect(response.body).toHaveProperty('startDate');
expect(response.body).toHaveProperty('status', 'active');
expect(response.body).toHaveProperty('filename');
expect(response.body).toHaveProperty('layout');
expect(response.body).not.toHaveProperty('duration');
expect(response.body).not.toHaveProperty('endDate');
expect(response.body).not.toHaveProperty('size');
expect(response.body.layout).toBeDefined();
expect(response.body.encoding).toBeDefined();
// Validate expected layout if provided
if (expectedLayout) {
expect(response.body.layout).toEqual(expectedLayout);
} else {
// Default layout
expect(response.body.layout).toEqual(DEFAULT_RECORDING_LAYOUT);
}
if (expectedEncoding !== undefined) {
if (typeof expectedEncoding === 'string') {
// Encoding preset
expect(response.body.encoding).toEqual(expectedEncoding);
} else {
// Advanced encoding options
expect(response.body.encoding).toMatchObject(expectedEncoding as any);
}
} else {
// Default encoding preset
expect(response.body.encoding).toEqual(DEFAULT_RECORDING_ENCODING_PRESET);
}
};
export const expectValidStopRecordingResponse = (
response: Response,
recordingId: string,
roomId: string,
roomName: string
roomName: string,
expectedLayout?: MeetRecordingLayout,
expectedEncoding?: MeetRecordingEncodingPreset | MeetRecordingEncodingOptions
) => {
expect(response.status).toBe(202);
expect(response.body).toBeDefined();
expectValidRecordingLocationHeader(response);
expect(response.body).toHaveProperty('recordingId', recordingId);
expect([MeetRecordingStatus.COMPLETE, MeetRecordingStatus.ENDING]).toContain(response.body.status);
expect(response.body).toHaveProperty('roomId', roomId);
@ -391,30 +464,86 @@ export const expectValidStopRecordingResponse = (
expect(response.body).toHaveProperty('filename');
expect(response.body).toHaveProperty('startDate');
expect(response.body).toHaveProperty('duration', expect.any(Number));
expect(response.body).toHaveProperty('layout');
expect(response.body).toHaveProperty('encoding');
expectValidRecordingLocationHeader(response);
// Validate layout is a valid value
if (expectedLayout) {
expect(response.body.layout).toEqual(expectedLayout);
} else {
// Default layout
expect(response.body.layout).toEqual(DEFAULT_RECORDING_LAYOUT);
}
// Validate encoding property
if (expectedEncoding) {
expect(response.body.encoding).toEqual(expectedEncoding);
} else {
// Default encoding preset
expect(response.body.encoding).toEqual(DEFAULT_RECORDING_ENCODING_PRESET);
}
};
export const expectValidGetRecordingResponse = (
response: Response,
recordingId: string,
roomId: string,
roomName: string,
status?: MeetRecordingStatus,
maxSecDuration?: number
expectedConfig: {
recordingId: string;
roomId: string;
roomName: string;
recordingStatus?: MeetRecordingStatus;
recordingDuration?: number;
recordingLayout?: MeetRecordingLayout;
recordingEncoding?: MeetRecordingEncodingPreset | MeetRecordingEncodingOptions;
}
) => {
expect(response.status).toBe(200);
expect(response.body).toBeDefined();
const body = response.body;
const { recordingId, roomId, roomName, recordingStatus, recordingDuration, recordingLayout, recordingEncoding } =
expectedConfig;
expect(body).toMatchObject({ recordingId, roomId, roomName });
// Validate layout property
expect(body).toHaveProperty('layout');
expect(body.layout).toBeDefined();
if (recordingLayout !== undefined) {
expect(body.layout).toBe(recordingLayout);
} else {
// Default layout
expect(body.layout).toBe(DEFAULT_RECORDING_LAYOUT);
}
// Validate encoding property
expect(body).toHaveProperty('encoding');
expect(body.encoding).toBeDefined();
// Validate encoding property is present and coherent
if (recordingEncoding !== undefined) {
if (typeof recordingEncoding === 'string') {
expect(body.encoding).toBe(recordingEncoding);
} else {
expect(body.encoding).toMatchObject(recordingEncoding as any);
}
} else {
// Default encoding preset
expect(body.encoding).toBe(DEFAULT_RECORDING_ENCODING_PRESET);
}
expect(body.status).toBeDefined();
if (recordingStatus !== undefined) {
expect(body.status).toBe(recordingStatus);
}
const isRecFinished =
status &&
(status === MeetRecordingStatus.COMPLETE ||
status === MeetRecordingStatus.ABORTED ||
status === MeetRecordingStatus.FAILED ||
status === MeetRecordingStatus.LIMIT_REACHED);
recordingStatus &&
(recordingStatus === MeetRecordingStatus.COMPLETE ||
recordingStatus === MeetRecordingStatus.ABORTED ||
recordingStatus === MeetRecordingStatus.FAILED ||
recordingStatus === MeetRecordingStatus.LIMIT_REACHED);
expect(body).toEqual(
expect.objectContaining({
recordingId: expect.stringMatching(new RegExp(`^${recordingId}$`)),
@ -430,22 +559,16 @@ export const expectValidGetRecordingResponse = (
})
);
expect(body.status).toBeDefined();
if (status !== undefined) {
expect(body.status).toBe(status);
}
if (isRecFinished) {
expect(body.endDate).toBeGreaterThanOrEqual(body.startDate);
expect(body.duration).toBeGreaterThanOrEqual(0);
}
if (isRecFinished && maxSecDuration) {
expect(body.duration).toBeLessThanOrEqual(maxSecDuration);
if (isRecFinished && recordingDuration) {
expect(body.duration).toBeLessThanOrEqual(recordingDuration);
const computedSec = (body.endDate - body.startDate) / 1000;
const diffSec = Math.abs(maxSecDuration - computedSec);
const diffSec = Math.abs(recordingDuration - computedSec);
// Allow 5 seconds of tolerance for the time it takes to start/stop the recording
expect(diffSec).toBeLessThanOrEqual(5);
}
@ -484,7 +607,7 @@ export const expectValidGetRecordingUrlResponse = (response: Response, recording
expect(recordingUrl).toBeDefined();
const parsedUrl = new URL(recordingUrl);
expect(parsedUrl.pathname).toBe(`/recording/${recordingId}`);
expect(parsedUrl.pathname).toBe(getFullPath(`/recording/${recordingId}`));
expect(parsedUrl.searchParams.get('secret')).toBeDefined();
};

View File

@ -3,6 +3,8 @@ import {
AuthMode,
MeetAppearanceConfig,
MeetRecordingAccess,
MeetRecordingEncodingOptions,
MeetRecordingEncodingPreset,
MeetRecordingInfo,
MeetRecordingStatus,
MeetRoom,
@ -29,6 +31,22 @@ import { ApiKeyService } from '../../src/services/api-key.service.js';
import { GlobalConfigService } from '../../src/services/global-config.service.js';
import { RecordingService } from '../../src/services/recording.service.js';
import { RoomScheduledTasksService } from '../../src/services/room-scheduled-tasks.service.js';
import { getBasePath } from '../../src/utils/html-dynamic-base-path.utils.js';
/**
* Constructs the full API path by prepending the base path.
* Handles trailing/leading slashes to avoid double slashes.
*/
export const getFullPath = (apiPath: string): string => {
const basePath = getBasePath();
// Remove trailing slash from base path if apiPath starts with /
if (basePath.endsWith('/') && apiPath.startsWith('/')) {
return basePath.slice(0, -1) + apiPath;
}
return basePath + apiPath;
};
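The slash handling in `getFullPath` can be shown with two concrete joins. A standalone copy of the join logic (the name `joinPaths` is illustrative):

```typescript
// Joins the deployment base path and an API path without producing double slashes.
const joinPaths = (basePath: string, apiPath: string): string =>
	basePath.endsWith('/') && apiPath.startsWith('/') ? basePath.slice(0, -1) + apiPath : basePath + apiPath;

console.log(joinPaths('/meet/', '/internal-api/v1/auth/login')); // /meet/internal-api/v1/auth/login
console.log(joinPaths('/', '/api/v1/rooms'));                    // /api/v1/rooms
```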
const CREDENTIALS = {
admin: {
@ -60,7 +78,7 @@ export const generateApiKey = async (): Promise<string> => {
const accessToken = await loginUser();
const response = await request(app)
.post(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`)
.post(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`))
.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
.send();
expect(response.status).toBe(201);
@ -73,7 +91,7 @@ export const getApiKeys = async () => {
const accessToken = await loginUser();
const response = await request(app)
.get(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`)
.get(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`))
.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
.send();
return response;
@ -84,7 +102,7 @@ export const deleteApiKeys = async () => {
const accessToken = await loginUser();
const response = await request(app)
.delete(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`)
.delete(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`))
.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
.send();
return response;
@ -107,7 +125,7 @@ export const getRoomsAppearanceConfig = async () => {
checkAppIsRunning();
const response = await request(app)
.get(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/rooms/appearance`)
.get(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/rooms/appearance`))
.send();
return response;
};
@ -117,7 +135,7 @@ export const updateRoomsAppearanceConfig = async (config: { appearance: MeetAppe
const accessToken = await loginUser();
const response = await request(app)
.put(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/rooms/appearance`)
.put(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/rooms/appearance`))
.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
.send(config);
return response;
@ -128,7 +146,7 @@ export const getWebbhookConfig = async () => {
const accessToken = await loginUser();
const response = await request(app)
.get(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/webhooks`)
.get(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/webhooks`))
.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
.send();
return response;
@ -139,7 +157,7 @@ export const updateWebbhookConfig = async (config: WebhookConfig) => {
const accessToken = await loginUser();
const response = await request(app)
.put(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/webhooks`)
.put(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/webhooks`))
.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
.send(config);
@ -150,7 +168,7 @@ export const testWebhookUrl = async (url: string) => {
checkAppIsRunning();
const response = await request(app)
.post(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/webhooks/test`)
.post(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/webhooks/test`))
.send({ url });
return response;
};
@ -158,7 +176,7 @@ export const testWebhookUrl = async (url: string) => {
export const getSecurityConfig = async () => {
checkAppIsRunning();
const response = await request(app).get(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/security`).send();
const response = await request(app).get(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/security`)).send();
return response;
};
@ -167,7 +185,7 @@ export const updateSecurityConfig = async (config: SecurityConfig) => {
const accessToken = await loginUser();
const response = await request(app)
.put(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/security`)
.put(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/security`))
.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
.send(config);
return response;
@ -184,6 +202,13 @@ export const changeSecurityConfig = async (authMode: AuthMode) => {
expect(response.status).toBe(200);
};
export const getCaptionsConfig = async () => {
checkAppIsRunning();
const response = await request(app).get(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/config/captions`)).send();
return response;
};
export const restoreDefaultGlobalConfig = async () => {
const configService = container.get(GlobalConfigService);
 	const defaultGlobalConfig = configService['getDefaultConfig']();
@@ -197,7 +222,7 @@ export const loginUser = async (): Promise<string> => {
 	checkAppIsRunning();

 	const response = await request(app)
-		.post(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth/login`)
+		.post(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth/login`))
 		.send(CREDENTIALS.admin)
 		.expect(200);
@@ -209,7 +234,7 @@ export const getProfile = async (accessToken: string) => {
 	checkAppIsRunning();

 	return await request(app)
-		.get(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/users/profile`)
+		.get(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/users/profile`))
 		.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
 		.send();
 };
@@ -218,7 +243,7 @@ export const changePassword = async (currentPassword: string, newPassword: strin
 	checkAppIsRunning();

 	return await request(app)
-		.post(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/users/change-password`)
+		.post(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/users/change-password`))
 		.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
 		.send({ currentPassword, newPassword });
 };
@@ -227,7 +252,7 @@ export const createRoom = async (options: MeetRoomOptions = {}): Promise<MeetRoo
 	checkAppIsRunning();

 	const response = await request(app)
-		.post(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`)
+		.post(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 		.send(options)
 		.expect(201);
@@ -238,7 +263,7 @@ export const getRooms = async (query: Record<string, unknown> = {}) => {
 	checkAppIsRunning();

 	return await request(app)
-		.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`)
+		.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 		.query(query);
 };
@@ -255,7 +280,7 @@ export const getRooms = async (query: Record<string, unknown> = {}) => {
 export const getRoom = async (roomId: string, fields?: string, roomMemberToken?: string) => {
 	checkAppIsRunning();

-	const req = request(app).get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}`).query({ fields });
+	const req = request(app).get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}`)).query({ fields });

 	if (roomMemberToken) {
 		req.set(INTERNAL_CONFIG.ROOM_MEMBER_TOKEN_HEADER, roomMemberToken);
@@ -270,7 +295,7 @@ export const getRoomConfig = async (roomId: string): Promise<Response> => {
 	checkAppIsRunning();

 	return await request(app)
-		.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}/config`)
+		.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}/config`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 		.send();
 };
@@ -279,7 +304,7 @@ export const updateRoomConfig = async (roomId: string, config: Partial<MeetRoomC
 	checkAppIsRunning();

 	return await request(app)
-		.put(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}/config`)
+		.put(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}/config`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 		.send({ config });
 };
@@ -298,7 +323,7 @@ export const updateRoomStatus = async (roomId: string, status: MeetRoomStatus) =
 	checkAppIsRunning();

 	return await request(app)
-		.put(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}/status`)
+		.put(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}/status`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 		.send({ status });
 };
@@ -307,10 +332,10 @@ export const deleteRoom = async (roomId: string, query: Record<string, unknown>
 	checkAppIsRunning();

 	const result = await request(app)
-		.delete(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}`)
+		.delete(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms/${roomId}`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 		.query(query);

-	await sleep('1s');
+	await sleep('5s'); // TODO - replace with a more robust solution to ensure webhook is processed before proceeding with the tests
 	return result;
 };
@@ -318,10 +343,10 @@ export const bulkDeleteRooms = async (roomIds: string[], withMeeting?: string, w
 	checkAppIsRunning();

 	const result = await request(app)
-		.delete(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`)
+		.delete(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 		.query({ roomIds: roomIds.join(','), withMeeting, withRecordings });

-	await sleep('1s');
+	await sleep('5s'); // TODO - replace with a more robust solution to ensure webhook is processed before proceeding with the tests
 	return result;
 };
@@ -361,7 +386,7 @@ export const runExpiredRoomsGC = async () => {
 	const roomTaskScheduler = container.get(RoomScheduledTasksService);
 	await roomTaskScheduler['deleteExpiredRooms']();

-	await sleep('1s');
+	await sleep('5s'); // TODO - replace with a more robust solution to ensure webhook is processed before proceeding with the tests
 };

 /**
@@ -390,7 +415,7 @@ export const getRoomMemberRoles = async (roomId: string) => {
 	checkAppIsRunning();

 	const response = await request(app)
-		.get(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/rooms/${roomId}/roles`)
+		.get(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/rooms/${roomId}/roles`))
 		.send();
 	return response;
 };
@@ -399,7 +424,7 @@ export const getRoomMemberRoleBySecret = async (roomId: string, secret: string)
 	checkAppIsRunning();

 	const response = await request(app)
-		.get(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/rooms/${roomId}/roles/${secret}`)
+		.get(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/rooms/${roomId}/roles/${secret}`))
 		.send();
 	return response;
 };
@@ -412,7 +437,7 @@ export const generateRoomMemberTokenRequest = async (roomId: string, tokenOption
 	// Generate the room member token
 	const response = await request(app)
-		.post(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/rooms/${roomId}/token`)
+		.post(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/rooms/${roomId}/token`))
 		.send(tokenOptions);
 	return response;
 };
@@ -542,7 +567,7 @@ export const disconnectFakeParticipants = async () => {
 	});
 	fakeParticipantsProcesses.clear();

-	await sleep('1s');
+	await sleep('1s'); // TODO - replace with a more robust solution to ensure webhook is processed before proceeding with the tests
 };

 export const updateParticipant = async (
@@ -554,7 +579,7 @@ export const updateParticipant = async (
 	checkAppIsRunning();

 	const response = await request(app)
-		.put(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/meetings/${roomId}/participants/${participantIdentity}/role`)
+		.put(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/meetings/${roomId}/participants/${participantIdentity}/role`))
 		.set(INTERNAL_CONFIG.ROOM_MEMBER_TOKEN_HEADER, moderatorToken)
 		.send({ role: newRole });
 	return response;
@@ -564,7 +589,7 @@ export const kickParticipant = async (roomId: string, participantIdentity: strin
 	checkAppIsRunning();

 	const response = await request(app)
-		.delete(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/meetings/${roomId}/participants/${participantIdentity}`)
+		.delete(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/meetings/${roomId}/participants/${participantIdentity}`))
 		.set(INTERNAL_CONFIG.ROOM_MEMBER_TOKEN_HEADER, moderatorToken)
 		.send();
 	return response;
@@ -574,30 +599,48 @@ export const endMeeting = async (roomId: string, moderatorToken: string) => {
 	checkAppIsRunning();

 	const response = await request(app)
-		.delete(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/meetings/${roomId}`)
+		.delete(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/meetings/${roomId}`))
 		.set(INTERNAL_CONFIG.ROOM_MEMBER_TOKEN_HEADER, moderatorToken)
 		.send();

-	await sleep('1s');
+	await sleep('5s'); // TODO - replace with a more robust solution to ensure webhook is processed before proceeding with the tests
 	return response;
 };

-export const startRecording = async (roomId: string, moderatorToken: string) => {
+export const startRecording = async (
+	roomId: string,
+	config?: {
+		layout?: string;
+		encoding?: MeetRecordingEncodingPreset | MeetRecordingEncodingOptions;
+	}
+) => {
 	checkAppIsRunning();

+	const body: {
+		roomId: string;
+		config?: {
+			layout?: string;
+			encoding?: MeetRecordingEncodingPreset | MeetRecordingEncodingOptions;
+		};
+	} = { roomId };
+
+	if (config) {
+		body.config = config;
+	}
+
 	return await request(app)
-		.post(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/recordings`)
-		.set(INTERNAL_CONFIG.ROOM_MEMBER_TOKEN_HEADER, moderatorToken)
-		.send({ roomId });
+		.post(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`))
+		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
+		.send(body);
 };

-export const stopRecording = async (recordingId: string, moderatorToken: string) => {
+export const stopRecording = async (recordingId: string) => {
 	checkAppIsRunning();

 	const response = await request(app)
-		.post(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/recordings/${recordingId}/stop`)
-		.set(INTERNAL_CONFIG.ROOM_MEMBER_TOKEN_HEADER, moderatorToken)
+		.post(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}/stop`))
+		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 		.send();

-	await sleep('2.5s');
+	await sleep('2.5s'); // TODO - replace with a more robust solution to ensure webhook is processed before proceeding with the tests
 	return response;
 };
@@ -606,7 +649,7 @@ export const getRecording = async (recordingId: string) => {
 	checkAppIsRunning();

 	return await request(app)
-		.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}`)
+		.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY);
 };
@@ -614,7 +657,7 @@ export const getRecordingMedia = async (recordingId: string, range?: string) =>
 	checkAppIsRunning();

 	const req = request(app)
-		.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}/media`)
+		.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}/media`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY);

 	if (range) {
@@ -628,7 +671,7 @@ export const getRecordingUrl = async (recordingId: string, privateAccess = false
 	checkAppIsRunning();

 	return await request(app)
-		.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}/url`)
+		.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}/url`))
 		.query({ privateAccess })
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY);
 };
@@ -637,7 +680,7 @@ export const deleteRecording = async (recordingId: string) => {
 	checkAppIsRunning();

 	return await request(app)
-		.delete(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}`)
+		.delete(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY);
 };
@@ -645,7 +688,7 @@ export const bulkDeleteRecordings = async (recordingIds: string[], roomMemberTok
 	checkAppIsRunning();

 	const req = request(app)
-		.delete(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`)
+		.delete(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`))
 		.query({ recordingIds: recordingIds.join(',') });

 	if (roomMemberToken) {
@@ -665,7 +708,7 @@ export const downloadRecordings = async (
 	checkAppIsRunning();

 	const req = request(app)
-		.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/download`)
+		.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/download`))
 		.query({ recordingIds: recordingIds.join(',') });

 	if (roomMemberToken) {
@@ -685,7 +728,7 @@ export const downloadRecordings = async (
 	return await req;
 };

-export const stopAllRecordings = async (moderatorToken: string) => {
+export const stopAllRecordings = async () => {
 	checkAppIsRunning();

 	const response = await getAllRecordings();
@@ -701,8 +744,8 @@ export const stopAllRecordings = async (moderatorToken: string) => {
 	console.log(`Stopping ${recordingIds.length} recordings...`, recordingIds);
 	const tasks = recordingIds.map((recordingId: string) =>
 		request(app)
-			.post(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/recordings/${recordingId}/stop`)
-			.set(INTERNAL_CONFIG.ROOM_MEMBER_TOKEN_HEADER, moderatorToken)
+			.post(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings/${recordingId}/stop`))
+			.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 			.send()
 	);
 	const results = await Promise.all(tasks);
@@ -718,7 +761,7 @@ export const getAllRecordings = async (query: Record<string, unknown> = {}) => {
 	checkAppIsRunning();

 	return await request(app)
-		.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`)
+		.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`))
 		.set(INTERNAL_CONFIG.API_KEY_HEADER, MEET_ENV.INITIAL_API_KEY)
 		.query(query);
 };
@@ -727,7 +770,7 @@ export const getAllRecordingsFromRoom = async (roomMemberToken: string) => {
 	checkAppIsRunning();

 	return await request(app)
-		.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`)
+		.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`))
 		.set(INTERNAL_CONFIG.ROOM_MEMBER_TOKEN_HEADER, roomMemberToken);
 };
@@ -762,7 +805,7 @@ export const getAnalytics = async () => {
 	const accessToken = await loginUser();

 	const response = await request(app)
-		.get(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/analytics`)
+		.get(getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/analytics`))
 		.set(INTERNAL_CONFIG.ACCESS_TOKEN_HEADER, accessToken)
 		.send();
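Several hunks above lengthen a fixed `sleep` and leave a TODO asking for a more robust way to wait for webhook processing. A common replacement is a generic polling helper; the sketch below is a hypothetical illustration (not part of this diff), assuming the test can probe for the webhook's observable effect via some async `check` function:

```typescript
// Hypothetical sketch: poll until a condition holds instead of sleeping a
// fixed duration. 'check' is any async probe the test can make, e.g.
// querying the room list until the deleted room disappears.
export const waitUntil = async (
	check: () => Promise<boolean>,
	timeoutMs = 10_000,
	intervalMs = 250
): Promise<void> => {
	const deadline = Date.now() + timeoutMs;
	while (Date.now() < deadline) {
		if (await check()) {
			return; // condition met, stop waiting
		}
		await new Promise((resolve) => setTimeout(resolve, intervalMs));
	}
	throw new Error(`Condition not met within ${timeoutMs}ms`);
};
```

With such a helper, `await sleep('5s')` could become `await waitUntil(...)`, which both shortens the happy path and fails loudly when the webhook never arrives.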


@@ -100,7 +100,7 @@ export const setupSingleRoomWithRecording = async (
 	roomName = 'TEST_ROOM'
 ): Promise<RoomData> => {
 	const roomData = await setupSingleRoom(true, roomName);
-	const response = await startRecording(roomData.room.roomId, roomData.moderatorToken);
+	const response = await startRecording(roomData.room.roomId);
 	expectValidStartRecordingResponse(response, roomData.room.roomId, roomData.room.roomName);

 	roomData.recordingId = response.body.recordingId;
@@ -110,7 +110,7 @@ export const setupSingleRoomWithRecording = async (
 	}

 	if (stopRecordingCond) {
-		await stopRecording(roomData.recordingId!, roomData.moderatorToken);
+		await stopRecording(roomData.recordingId!);
 	}

 	return roomData;
@@ -145,7 +145,7 @@ export const setupMultiRecordingsTestContext = async (
 		}

 		// Send start recording request
-		const response = await startRecording(roomData.room.roomId, roomData.moderatorToken);
+		const response = await startRecording(roomData.room.roomId);
 		expectValidStartRecordingResponse(response, roomData.room.roomId, roomData.room.roomName);

 		// Store the recordingId in context
@@ -162,7 +162,7 @@ export const setupMultiRecordingsTestContext = async (
 	// Stop recordings for the first numStops rooms
 	const stopPromises = startedRooms.slice(0, numStops).map(async (roomData) => {
 		if (roomData.recordingId) {
-			await stopRecording(roomData.recordingId, roomData.moderatorToken);
+			await stopRecording(roomData.recordingId);
 			console.log(`Recording stopped for room ${roomData.room.roomId}`);
 			return roomData.recordingId;
 		}


@@ -5,12 +5,13 @@ import { INTERNAL_CONFIG } from '../../../../src/config/internal-config.js';
 import {
 	generateApiKey,
 	getApiKeys,
+	getFullPath,
 	loginUser,
 	restoreDefaultApiKeys,
 	startTestServer
 } from '../../../helpers/request-helpers.js';

-const API_KEYS_PATH = `${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`;
+const API_KEYS_PATH = getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/api-keys`);

 describe('API Keys API Tests', () => {
 	let app: Express;
@@ -27,7 +28,7 @@ describe('API Keys API Tests', () => {
 	const getRoomsWithApiKey = async (apiKey: string) => {
 		return request(app)
-			.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`)
+			.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`))
 			.set(INTERNAL_CONFIG.API_KEY_HEADER, apiKey);
 	};


@@ -6,6 +6,7 @@ import {
 	deleteApiKeys,
 	generateApiKey,
 	getApiKeys,
+	getFullPath,
 	restoreDefaultApiKeys,
 	startTestServer
 } from '../../../helpers/request-helpers.js';
@@ -23,7 +24,7 @@ describe('API Keys API Tests', () => {
 	const getRoomsWithApiKey = async (apiKey: string) => {
 		return request(app)
-			.get(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`)
+			.get(getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/rooms`))
 			.set(INTERNAL_CONFIG.API_KEY_HEADER, apiKey);
 	};


@@ -3,9 +3,9 @@ import { Express } from 'express';
 import request from 'supertest';
 import { INTERNAL_CONFIG } from '../../../../src/config/internal-config.js';
 import { expectValidationError } from '../../../helpers/assertion-helpers.js';
-import { startTestServer } from '../../../helpers/request-helpers.js';
+import { getFullPath, startTestServer } from '../../../helpers/request-helpers.js';

-const AUTH_PATH = `${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth`;
+const AUTH_PATH = getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth`);

 describe('Authentication API Tests', () => {
 	let app: Express;


@@ -2,9 +2,9 @@ import { beforeAll, describe, expect, it } from '@jest/globals';
 import { Express } from 'express';
 import request from 'supertest';
 import { INTERNAL_CONFIG } from '../../../../src/config/internal-config.js';
-import { startTestServer } from '../../../helpers/request-helpers.js';
+import { getFullPath, startTestServer } from '../../../helpers/request-helpers.js';

-const AUTH_PATH = `${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth`;
+const AUTH_PATH = getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth`);

 describe('Authentication API Tests', () => {
 	let app: Express;


@@ -2,9 +2,9 @@ import { beforeAll, describe, expect, it } from '@jest/globals';
 import { Express } from 'express';
 import request from 'supertest';
 import { INTERNAL_CONFIG } from '../../../../src/config/internal-config.js';
-import { startTestServer } from '../../../helpers/request-helpers.js';
+import { getFullPath, startTestServer } from '../../../helpers/request-helpers.js';

-const AUTH_PATH = `${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth`;
+const AUTH_PATH = getFullPath(`${INTERNAL_CONFIG.INTERNAL_API_BASE_PATH_V1}/auth`);

 describe('Authentication API Tests', () => {
 	let app: Express;


@@ -0,0 +1,23 @@
+import { beforeAll, describe, expect, it } from '@jest/globals';
+import { getCaptionsConfig, startTestServer } from '../../../helpers/request-helpers.js';
+
+describe('Captions Config API Tests', () => {
+	beforeAll(async () => {
+		await startTestServer();
+	});
+
+	describe('Get captions config', () => {
+		it('should return captions config when not authenticated', async () => {
+			const response = await getCaptionsConfig();
+			expect(response.status).toBe(200);
+			expect(response.body).toHaveProperty('enabled');
+			expect(typeof response.body.enabled).toBe('boolean');
+		});
+
+		it('should return enabled false by default', async () => {
+			const response = await getCaptionsConfig();
+			expect(response.status).toBe(200);
+			expect(response.body).toEqual({ enabled: false });
+		});
+	});
+});


@@ -6,6 +6,7 @@ import {
 	deleteAllRecordings,
 	deleteAllRooms,
 	disconnectFakeParticipants,
+	getFullPath,
 	getRecordingUrl,
 	startTestServer
 } from '../../../helpers/request-helpers.js';
@@ -31,7 +32,7 @@ describe('Recording API Tests', () => {
 	});

 	describe('Get Recording URL Tests', () => {
-		const RECORDINGS_PATH = `${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`;
+		const RECORDINGS_PATH = getFullPath(`${INTERNAL_CONFIG.API_BASE_PATH_V1}/recordings`);

 		it('should get public recording URL', async () => {
 			const response = await getRecordingUrl(recordingId);


@@ -36,34 +36,28 @@ describe('Recording API Tests', () => {
 		it('should return 200 when recording exists', async () => {
 			const response = await getRecording(recordingId);

-			expectValidGetRecordingResponse(
-				response,
+			expectValidGetRecordingResponse(response, {
 				recordingId,
-				room.roomId,
-				room.roomName,
-				MeetRecordingStatus.COMPLETE,
-				1
-			);
+				roomId: room.roomId,
+				roomName: room.roomName,
+				recordingStatus: MeetRecordingStatus.COMPLETE,
+				recordingDuration: 1
+			});
 		});

 		it('should get an ACTIVE recording status', async () => {
 			const contextAux = await setupMultiRecordingsTestContext(1, 1, 0);
-			const {
-				room: roomAux,
-				recordingId: recordingIdAux = '',
-				moderatorToken: moderatorTokenAux
-			} = contextAux.getRoomByIndex(0)!;
+			const { room: roomAux, recordingId: recordingIdAux = '' } = contextAux.getRoomByIndex(0)!;

 			const response = await getRecording(recordingIdAux);
-			expectValidGetRecordingResponse(
-				response,
-				recordingIdAux,
-				roomAux.roomId,
-				roomAux.roomName,
-				MeetRecordingStatus.ACTIVE
-			);
+			expectValidGetRecordingResponse(response, {
+				recordingId: recordingIdAux,
+				roomId: roomAux.roomId,
+				roomName: roomAux.roomName,
+				recordingStatus: MeetRecordingStatus.ACTIVE
+			});

-			await stopAllRecordings(moderatorTokenAux);
+			await stopAllRecordings();
 		});
it('should return 404 when recording does not exist', async () => {

Some files were not shown because too many files have changed in this diff.
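The recurring change across these hunks wraps every request path in a `getFullPath` helper imported from `request-helpers.js`; the helper's own body is not shown in this diff. A hypothetical sketch of what such a helper might look like, assuming the app can be mounted under a configurable base path (e.g. behind a reverse proxy) read from an environment variable whose name is also assumed here:

```typescript
// Hypothetical sketch only — the real getFullPath lives in the test helpers
// file and its implementation is not part of this diff.
// Assumption: MEET_BASE_PATH holds an optional mount prefix such as '/meet'.
export const getFullPath = (
	path: string,
	basePath: string = process.env.MEET_BASE_PATH ?? ''
): string => {
	// Trim a trailing slash from the base so joining never doubles slashes.
	const base = basePath.endsWith('/') ? basePath.slice(0, -1) : basePath;
	return `${base}${path}`;
};
```

Centralizing the prefix this way lets the whole test suite follow the server when it is served from a sub-path, instead of hard-coding root-relative URLs in every request.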