# Frame Stream Player

A small web app that plays a remote video stream without using browser video decoding. The server uses ffmpeg to decode the input URL into:

- an MP3 audio stream served to a normal `<audio>` element
- timed JPEG image frames sent over a WebSocket and painted onto a `<canvas>`

This is meant for machines where image and audio decoding work but browser video decoding is unavailable or unreliable.
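The split described above suggests two ffmpeg workers, one per output. As a rough sketch (the function names, flags, and defaults here are illustrative assumptions, not the app's actual implementation), the argument lists might be built like this:

```javascript
// Illustrative ffmpeg argument builders for the two workers.
// Binary path resolution mirrors the FFMPEG_PATH override described below.
const FFMPEG = process.env.FFMPEG_PATH || "ffmpeg";

// Audio worker: drop video, re-encode as a continuous MP3 stream on stdout.
function audioArgs(inputUrl, bitrate = "160k") {
  return [
    "-i", inputUrl,
    "-vn",                      // no video
    "-acodec", "libmp3lame",
    "-b:a", bitrate,
    "-f", "mp3",
    "pipe:1",
  ];
}

// Frame worker: drop audio, scale down, and emit timed JPEGs on stdout.
function frameArgs(inputUrl, { fps = 24, maxWidth = 960, quality = 5 } = {}) {
  return [
    "-i", inputUrl,
    "-an",                      // no audio
    "-vf", `fps=${fps},scale=min(${maxWidth}\\,iw):-2`,
    "-q:v", String(quality),    // JPEG quality: lower is better
    "-f", "image2pipe",
    "-vcodec", "mjpeg",
    "pipe:1",
  ];
}

// Usage sketch:
// const { spawn } = require("child_process");
// const frames = spawn(FFMPEG, frameArgs(url));
```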

## Run

```sh
npm install
npm start
```

Open http://localhost:3000, paste a direct HTTP(S) stream URL, and click Next.

You need an ffmpeg binary with decoders for the stream's video and audio codecs. If your stream is H.264 or HEVC, make sure your installed ffmpeg actually includes those decoders. You can point the app at a different binary with:

```sh
FFMPEG_PATH=/path/to/ffmpeg npm start
```

## Docker

```sh
docker build -t frame-stream-player .
docker run --rm -p 3000:3000 frame-stream-player
```

Then open http://localhost:3000.

For Docker Compose:

```sh
docker compose -f docker-compose-example.yml up --build
```

The app uses CPU decoding by default, so no video device is required. The compose example includes commented VAAPI/NVIDIA passthrough options for future hardware-accelerated ffmpeg setups.

Recently played URLs are stored globally by the backend. In Docker Compose, they are persisted in the `frame-stream-data` named volume.

ffmpeg worker lifecycle, stderr warnings/errors, and source proxy open/close events are written to stdout/stderr, so they appear in `docker logs`. For more detail while debugging a stream, set `FFMPEG_LOG_LEVEL=info` in Docker Compose and run:

```sh
docker logs -f frame-stream-player
```

The app sets `FFMPEG_INPUT_SEEKABLE=0` by default so ffmpeg reads stream inputs sequentially and avoids extra HTTP range connections. If a specific VOD file requires seeking for metadata, set `FFMPEG_INPUT_SEEKABLE=-1` to restore ffmpeg's automatic behavior.
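This env var plausibly maps onto ffmpeg's `-seekable` HTTP protocol option, which must precede `-i`. A minimal sketch of that mapping (function name and structure are assumptions):

```javascript
// Translate FFMPEG_INPUT_SEEKABLE into ffmpeg input args (illustrative).
// "0" = sequential reads only, "1" = seekable, "-1" = omit the flag and
// let ffmpeg decide automatically (ffmpeg's own default).
function inputArgs(inputUrl, seekable = process.env.FFMPEG_INPUT_SEEKABLE ?? "0") {
  const args = [];
  if (seekable !== "-1") {
    args.push("-seekable", String(seekable)); // protocol option: before -i
  }
  return [...args, "-i", inputUrl];
}
```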

Audio output from ffmpeg is buffered before it is written to the browser so short HTTP backpressure pauses do not stall frame generation. Tune the cap with `MAX_AUDIO_QUEUE_BYTES`; the default is 16777216 bytes (16 MiB).
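One way such a byte-capped buffer could work is sketched below; the class name and the drop-oldest overflow policy are assumptions for illustration, not necessarily what the app does when the cap is hit:

```javascript
// Byte-capped audio buffer sketch. Overflow drops the oldest chunks so a
// slow HTTP client can never grow the queue without bound.
const MAX_AUDIO_QUEUE_BYTES =
  Number(process.env.MAX_AUDIO_QUEUE_BYTES || 16 * 1024 * 1024); // 16777216

class AudioQueue {
  constructor(cap = MAX_AUDIO_QUEUE_BYTES) {
    this.cap = cap;
    this.chunks = [];
    this.bytes = 0;
  }
  // Buffer a chunk from ffmpeg stdout.
  push(chunk) {
    this.chunks.push(chunk);
    this.bytes += chunk.length;
    while (this.bytes > this.cap && this.chunks.length > 1) {
      this.bytes -= this.chunks.shift().length; // drop oldest on overflow
    }
  }
  // Drain one chunk toward the HTTP response when it is writable again.
  shift() {
    const c = this.chunks.shift();
    if (c) this.bytes -= c.length;
    return c;
  }
}
```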

## Tuning

The UI intentionally hides these settings, but the backend still supports them through `POST /api/session`.

- Frame rate defaults to 24 fps. Lower it if the client cannot keep up.
- Max width defaults to 960px. Lower it first if bandwidth or image decode is the bottleneck.
- JPEG quality uses ffmpeg's `-q:v` scale, where lower is better. 5 is the default, 2 is high quality, and 18 is rough but lighter.
- Audio defaults to MP3 at 160k.
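A request body for `POST /api/session` might bundle these settings as follows. The field names are guesses for illustration; the actual schema is not documented here:

```javascript
// Hypothetical session request body with the documented defaults.
function sessionBody(url, overrides = {}) {
  return {
    url,
    fps: 24,            // lower if the client cannot keep up
    maxWidth: 960,      // lower first for bandwidth/decode bottlenecks
    jpegQuality: 5,     // ffmpeg -q:v scale: lower is better
    audioBitrate: "160k",
    ...overrides,
  };
}

// Usage sketch:
// fetch("/api/session", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(sessionBody(streamUrl, { fps: 15 })),
// });
```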

## Tradeoffs

JPEG frames are used instead of PNG or GIF. PNG is usually too large for 24fps video, and GIF has poor quality and weak timing control. JPEG is simple, browser-native, streamable per frame, and lets the audio element act as the playback clock.
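Using the audio element as the clock could look roughly like the sketch below: the client keeps a small queue of timestamped frames and, on each render tick, paints the latest frame whose presentation time has passed, skipping late ones. This is an assumed scheduling scheme, not necessarily the app's exact logic:

```javascript
// Pick the latest frame due at `audioTime` (audio.currentTime on the
// client); any older frames are discarded rather than drawn late.
function nextFrame(frames, audioTime) {
  let chosen = null;
  while (frames.length && frames[0].pts <= audioTime) {
    chosen = frames.shift(); // late frames are skipped, not shown
  }
  return chosen; // null if nothing is due yet
}
```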

The current implementation starts separate ffmpeg workers for audio and frames. That is simple and works well for direct files and many HTTP streams, but live streams can have small startup offset differences. The input URL is proxied through a short local URL before it is handed to ffmpeg, so query-string tokens are not exposed in ffmpeg process arguments.

Arbitrary URLs are still fetched by your server, so do not expose this app publicly without adding authentication and URL allowlisting.