- Server now has a configurable MAX_WS_BUFFER_BYTES defaulting to 2097152, and skips JPEG frames when the WebSocket is backed up instead of queueing stale frames on the socket (server/index.js:30, server/index.js:1439).
- Browser frame handling now decodes frames sequentially, drops late frames against the audio clock, caps pending/decoded queues, and draws only the latest due frame per animation tick (public/app.js:280, public/app.js:381).
- Relay/split normal EOF closes are no longer mislabeled as client_disconnect, which should make logs around ffmpeg decode warnings less misleading (server/index.js:797, server/index.js:1071).
- Documented MAX_WS_BUFFER_BYTES in README, Compose, and AGENTS.
# Frame Stream Player
A small web app that plays a remote video stream without using browser video decoding. The server uses `ffmpeg` to decode the input URL into:
- an MP3 audio stream served to a normal `<audio>` element
- timed JPEG image frames sent over a WebSocket and painted onto a `<canvas>`
This is meant for machines where image and audio decoding work but browser video decoding is unavailable or unreliable.
## Run
```sh
npm install
npm start
```
Open `http://localhost:3000`, paste a direct HTTP(S) stream URL, and click `Next`.
You need an `ffmpeg` binary with decoders for the stream's video and audio codecs; if your stream is H.264 or HEVC, confirm your installed build actually includes those decoders. You can point the app at a different binary with:
```sh
FFMPEG_PATH=/path/to/ffmpeg npm start
```
## Docker
```sh
docker build -t frame-stream-player .
docker run --rm -p 3000:3000 frame-stream-player
```
Then open `http://localhost:3000`.
For Docker Compose:
```sh
docker compose -f docker-compose-example.yml up --build
```
The app uses CPU decoding by default, so no video device is required. The compose example includes commented VAAPI/NVIDIA passthrough options for future hardware-accelerated `ffmpeg` setups, but hardware acceleration is usually only useful when server CPU is saturated.
Recently played URLs are stored globally by the backend. In Docker Compose, they are persisted in the `frame-stream-data` named volume.
`ffmpeg` worker lifecycle, stderr warnings/errors, and source proxy open/close events are written to stdout/stderr, so they appear in `docker logs`. For more detail while debugging a stream, set `FFMPEG_LOG_LEVEL=info` in Docker Compose and run:
```sh
docker logs -f frame-stream-player
```
The app sets `FFMPEG_INPUT_SEEKABLE=0` by default so `ffmpeg` reads stream inputs sequentially and avoids extra HTTP range connections. If a specific VOD file requires seeking for metadata, set `FFMPEG_INPUT_SEEKABLE=-1` to restore ffmpeg's automatic behavior.
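
Following the same pattern as `FFMPEG_PATH` above, the override is just an environment variable at launch:

```sh
# -1 restores ffmpeg's automatic seekability probing for VOD files.
FFMPEG_INPUT_SEEKABLE=-1 npm start
```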
JPEG frames are dropped when the browser WebSocket falls behind instead of letting stale frames queue indefinitely. Tune the server-side backlog cap with `MAX_WS_BUFFER_BYTES`; the default is `2097152`.
In single mode, audio output from `ffmpeg` is buffered before it is written to the browser so short HTTP backpressure pauses are less likely to stall frame generation. Tune the cap with `MAX_AUDIO_QUEUE_BYTES`; the default is `16777216`.
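
As a sketch, both caps can be raised together for high-bitrate streams. The doubled values below are illustrative, not recommendations:

```sh
# Doubled defaults: 2 MiB -> 4 MiB WebSocket backlog, 16 MiB -> 32 MiB audio queue.
MAX_WS_BUFFER_BYTES=4194304 \
MAX_AUDIO_QUEUE_BYTES=33554432 \
npm start
```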
Playback uses `PLAYBACK_CONNECTION_MODE=split` by default. The Docker Compose example sets `PLAYBACK_CONNECTION_MODE=relay` so IPTV-style streams can be tested with one upstream connection.
Available playback modes:
- `split`: Separate source connections and separate `ffmpeg` workers for audio and JPEG frames. This is usually the smoothest mode.
- `relay`: One source connection from the backend, then the compressed input bytes are teed into separate audio and frame `ffmpeg` workers. This is intended for IPTV hosts that stop early or reject multiple active connections.
- `single`: One source connection and one `ffmpeg` worker with both audio and frame outputs. This is the simplest one-connection fallback, but audio and frame delivery can affect each other.
Relay mode uses bounded per-worker input queues so one branch can briefly lag without immediately stalling the other. Tune the cap with `MAX_RELAY_BRANCH_QUEUE_BYTES`; the default is `16777216`.
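
For example, to test against an IPTV host that rejects multiple connections, relay mode can be selected outside of Compose the same way (the doubled queue cap is illustrative):

```sh
PLAYBACK_CONNECTION_MODE=relay \
MAX_RELAY_BRANCH_QUEUE_BYTES=33554432 \
npm start
```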
## Tuning
The UI intentionally hides these settings, but the backend still supports them through `POST /api/session`.
- Frame rate defaults to `24fps`. Lower it if the client cannot keep up.
- Max width defaults to `960px`. Lower it first if bandwidth or image decode is the bottleneck.
- JPEG quality uses ffmpeg's `-q:v` scale (roughly `2`–`31`), where lower is better. `5` is the default, `2` is high quality, and `18` is rough but lighter.
- Audio defaults to MP3 at `160k`.
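
A sketch of a direct `POST /api/session` call; the endpoint comes from this README, but the field names in the body (`url`, `fps`, `maxWidth`, `jpegQuality`, `audioBitrate`) are assumptions and may not match the server's actual schema:

```sh
# Field names below are hypothetical; check the backend for the real schema.
curl -X POST http://localhost:3000/api/session \
  -H 'Content-Type: application/json' \
  -d '{"url":"https://example.com/stream.ts","fps":15,"maxWidth":640,"jpegQuality":8,"audioBitrate":"128k"}'
```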
## Tradeoffs
JPEG frames are used instead of PNG or GIF. PNG is usually too large for 24fps video, and GIF has poor quality and weak timing control. JPEG is simple, browser-native, streamable per frame, and lets the audio element act as the playback clock.
The default split mode starts separate `ffmpeg` workers for audio and frames. That is simple and usually smoother for direct files and many HTTP streams, but live streams can have small startup offset differences and some hosts only allow one active connection. Relay mode avoids that host-side issue while keeping separate audio/frame workers, but it works best with sequential stream containers such as MPEG-TS. Single mode is kept as a fallback. The input URL is proxied or relayed by the backend before it is handed to `ffmpeg`, so query-string tokens are not exposed in `ffmpeg` process arguments.
Arbitrary URLs are still fetched by your server, so do not expose this app publicly without adding authentication and URL allowlisting.