Overall explanation

API Routes

/api/video
    /stream
        /get-all: List all videos available for streaming
    /upload
        /set-up: Initialize the upload process for the chunk-upload API below
        /: API for uploading each chunk
        /clean-all: Clear all uploaded files on the server
    /process
        /get-all: Get detailed info for all videos (bitrate, encodings, resolution, etc.)

/static/hls: serves `.m3u8` files for HLS streaming
/static: serves raw uploaded videos
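
For orientation, here is a minimal sketch of how these routes could be wired up, assuming an Express app. The handler bodies and the HTTP verbs for set-up and clean-all are my assumptions (only POST /api/video/upload is confirmed by the flow described below):

import express from 'express';

const app = express();
const video = express.Router();

// Stub handlers; the real project wires these to its own controllers.
video.get('/stream/get-all', (_req, res) => res.json([]));      // streamable videos
video.post('/upload/set-up', (_req, res) => res.json({}));      // init chunked upload
video.post('/upload', (_req, res) => res.sendStatus(201));      // receive one chunk
video.delete('/upload/clean-all', (_req, res) => res.sendStatus(204));
video.get('/process/get-all', (_req, res) => res.json([]));     // bitrate, resolution, ...

app.use('/api/video', video);
app.use('/static/hls', express.static('streams'));       // HLS playlists (.m3u8)
app.use('/static', express.static('upload/videos'));     // raw uploaded videos
app.listen(3000);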

Video upload to video stream flow chart

The flowchart below demonstrates the steps from the moment the client uploads a file chunk with a POST /api/video/upload request to the final HLS master playlist generation with an FFMPEG command.

[Figure: flow-diagram]

The upload and streaming process revolves around 3 abstract directories: upload/chunks holds the uploaded video chunks, upload/videos stores the uploaded files after the chunks are merged, and streams/ contains the HLS master playlists generated by FFMPEG. The upload-to-stream flow consists of the following steps (a client-side sketch follows the list):

  • The front-end divides the video into chunks (the chunk size is determined by the server).
  • Each chunk is sent to the server, which saves it into the upload/chunks folder of the S3 bucket.
  • The server reads each chunk from the S3 bucket and writes it sequentially into a complete final file in upload/videos.
  • The server runs FFMPEG to generate the HLS master playlist.
  • The front-end web application fetches the master.m3u8 file and hands it to the hls.js plugin to start the adaptive streaming process.
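
As a client-side sketch of the first two steps, assuming the set-up call returns an upload id and the server's chunk size (the response shape, field names, and the uploadInChunks helper are my assumptions):

// Split a File into fixed-size chunks and POST them one by one.
async function uploadInChunks(file: File): Promise<void> {
  // Assumed response shape: the server hands back an id and its chunk size.
  const setup = await fetch('/api/video/upload/set-up', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ fileName: file.name, fileSize: file.size }),
  });
  const { uploadId, chunkSize } = await setup.json();

  const chunkNums = Math.ceil(file.size / chunkSize);
  for (let chunkIdx = 0; chunkIdx < chunkNums; chunkIdx += 1) {
    const blob = file.slice(chunkIdx * chunkSize, (chunkIdx + 1) * chunkSize);
    const form = new FormData();
    form.append('uploadId', uploadId);
    form.append('chunkIdx', String(chunkIdx));
    form.append('chunk', blob, `${uploadId}-${chunkIdx}`);
    await fetch('/api/video/upload', { method: 'POST', body: form });
  }
}

On the playback side, the front-end then points hls.js at the master.m3u8 served under /static/hls and lets it handle rendition switching.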

Some notable classes

File system path

The project was developed for local file system usage first, then migrated to an AWS S3 bucket. Hence, some boilerplate class implementations were necessary for a smooth transition between the local file system and the AWS S3 bucket. The file system path class is responsible for generating file paths, whether relative or absolute.

[Figure: file-system-path]
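
A rough sketch of what such a path class could look like, built around the three directories described above (the interface name and method set are my assumptions, not the project's actual API):

import path from 'node:path';

// Hypothetical abstraction: one path builder per storage backend.
interface FileSystemPath {
  chunkPath(uploadId: string, chunkIdx: number): string;
  videoPath(fileName: string): string;
  streamDir(videoId: string): string;
}

class LocalPath implements FileSystemPath {
  constructor(private readonly root: string) {}
  chunkPath(uploadId: string, chunkIdx: number): string {
    return path.join(this.root, 'upload/chunks', `${uploadId}-${chunkIdx}`);
  }
  videoPath(fileName: string): string {
    return path.join(this.root, 'upload/videos', fileName);
  }
  streamDir(videoId: string): string {
    return path.join(this.root, 'streams', videoId);
  }
}

class S3Path implements FileSystemPath {
  // S3 keys are always "relative": no leading slash, forward slashes only.
  chunkPath(uploadId: string, chunkIdx: number): string {
    return `upload/chunks/${uploadId}-${chunkIdx}`;
  }
  videoPath(fileName: string): string {
    return `upload/videos/${fileName}`;
  }
  streamDir(videoId: string): string {
    return `streams/${videoId}`;
  }
}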

File system action

The file system action class is responsible for actions such as reading from a directory, creating a file, removing a file, removing a directory, creating a write stream, creating a read stream, piping read and write streams together, etc.

[Figure: file-system-action]
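
Sketched as an interface with a local implementation (method names are my assumptions; an S3 implementation would back the same methods with SDK calls):

import fs from 'node:fs';
import { Readable, Writable } from 'node:stream';
import { pipeline } from 'node:stream/promises';

// Hypothetical contract shared by the local and S3 backends.
interface FileSystemAction {
  readDir(dir: string): Promise<string[]>;
  removeFile(filePath: string): Promise<void>;
  removeDir(dir: string): Promise<void>;
  createReadStream(filePath: string): Readable;
  createWriteStream(filePath: string): Writable;
  pipe(src: Readable, dst: Writable): Promise<void>;
}

class LocalFileSystemAction implements FileSystemAction {
  readDir(dir: string) { return fs.promises.readdir(dir); }
  removeFile(filePath: string) { return fs.promises.unlink(filePath); }
  removeDir(dir: string) { return fs.promises.rm(dir, { recursive: true }); }
  createReadStream(filePath: string) { return fs.createReadStream(filePath); }
  createWriteStream(filePath: string) { return fs.createWriteStream(filePath); }
  pipe(src: Readable, dst: Writable) { return pipeline(src, dst); }
}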

Multer engine

The local file system version uses the multer.diskStorage class to handle file upload parsing and storing; the AWS S3 version has to implement a custom engine using multipart upload commands from the AWS S3 SDK.

[Figure: multer]
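
A condensed sketch of such an engine, using the Upload helper from @aws-sdk/lib-storage, which drives S3's multipart upload commands behind one call (the key naming and bucket name here are my assumptions):

import { Request } from 'express';
import multer, { StorageEngine } from 'multer';
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

// Hypothetical custom engine: streams each incoming file into S3.
class S3StorageEngine implements StorageEngine {
  constructor(private readonly client: S3Client, private readonly bucket: string) {}

  _handleFile(
    _req: Request,
    file: Express.Multer.File,
    cb: (error?: any, info?: Partial<Express.Multer.File>) => void,
  ): void {
    const key = `upload/chunks/${file.originalname}`;
    new Upload({
      client: this.client,
      params: { Bucket: this.bucket, Key: key, Body: file.stream },
    })
      .done()
      .then(() => cb(null, { path: key }))
      .catch((err) => cb(err));
  }

  _removeFile(_req: Request, _file: Express.Multer.File, cb: (error: Error | null) => void): void {
    // Left out of the sketch: delete the partially uploaded object.
    cb(null);
  }
}

const upload = multer({ storage: new S3StorageEngine(new S3Client({}), 'my-bucket') });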

File write stream and read stream

In the local file system version, the step of merging uploaded chunks into a complete upload file was handled by this section of the code:

import fs from 'node:fs';

const writeStream = fs.createWriteStream('<complete file path>');
for (let chunkIdx = 0; chunkIdx < chunkNums; chunkIdx += 1) {
    const readStream = fs.createReadStream('<chunkIdx chunk path>');
    await new Promise<void>((resolve, reject) => {
        // Keep the write stream open across chunks; only end it after the last one.
        readStream.pipe(writeStream, { end: false });
        readStream.on('end', () => resolve());
        readStream.on('error', (err) => reject(err));
    });
}
writeStream.end(); // close the final file once every chunk has been appended

To switch easily back and forth between the local file system and an AWS S3 bucket, custom write and read streams were implemented, inheriting from the Writable and Readable classes of node:stream.
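
As one possible shape for the read side, here is a sketch of a Readable that pulls an S3 object one byte range at a time (the class name, chunking strategy, and range size are my assumptions, not the project's actual implementation):

import { Readable } from 'node:stream';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

// Hypothetical sketch: a Readable backed by ranged S3 GetObject calls.
class S3ReadStream extends Readable {
  private offset = 0;
  private fetching = false;

  constructor(
    private readonly client: S3Client,
    private readonly bucket: string,
    private readonly key: string,
    private readonly rangeSize = 1024 * 1024, // 1 MiB per GetObject call
  ) {
    super();
  }

  async _read(): Promise<void> {
    if (this.fetching) return; // one in-flight range request at a time
    this.fetching = true;
    try {
      const res = await this.client.send(new GetObjectCommand({
        Bucket: this.bucket,
        Key: this.key,
        Range: `bytes=${this.offset}-${this.offset + this.rangeSize - 1}`,
      }));
      // ContentRange looks like "bytes 0-1048575/5242880"; the tail is the total size.
      const total = Number(res.ContentRange?.split('/')[1]);
      const body = Buffer.from(await res.Body!.transformToByteArray());
      this.offset += body.length;
      this.push(body);
      if (this.offset >= total) this.push(null); // signal end of stream
    } catch (err) {
      this.destroy(err as Error);
    } finally {
      this.fetching = false;
    }
  }
}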

Sidenote: The solution of using s3fs-fuse mounted storage was found after I had implemented these custom streams. The stream classes work as expected, so I kept them. For the later step of FFMPEG HLS playlist generation, I utilized s3fs for an easier transition to AWS S3.

[Figure: stream]
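
For the FFMPEG step itself, a sketch of what the playlist generation could look like when streams/ is an s3fs-fuse mount, spawned from Node (the single-rendition layout, segment naming, and the generateHls helper are illustrative assumptions, not the project's exact command):

import { spawn } from 'node:child_process';

// Hypothetical: write one HLS rendition plus a master playlist into outDir,
// where outDir lives on the s3fs-fuse mounted streams/ directory.
function generateHls(inputPath: string, outDir: string): Promise<void> {
  const args = [
    '-i', inputPath,
    '-map', '0:v:0', '-map', '0:a:0',        // first video and audio stream
    '-c:v', 'libx264', '-c:a', 'aac',
    '-f', 'hls',
    '-hls_time', '6',                        // ~6-second segments
    '-hls_playlist_type', 'vod',
    '-hls_segment_filename', `${outDir}/v%v_seg%03d.ts`,
    '-master_pl_name', 'master.m3u8',        // written next to the variant playlists
    '-var_stream_map', 'v:0,a:0',
    `${outDir}/v%v.m3u8`,
  ];
  return new Promise((resolve, reject) => {
    const ffmpeg = spawn('ffmpeg', args, { stdio: 'inherit' });
    ffmpeg.on('error', reject);
    ffmpeg.on('close', (code) =>
      code === 0 ? resolve() : reject(new Error(`ffmpeg exited with code ${code}`)),
    );
  });
}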