Video Composition
Surface — Architecture & Concepts
Canvas Mirroring: Efficient Multi-Display Architecture
Surface elements use canvas mirroring to display the same video content multiple times without duplicating expensive video decoding and rendering operations. This architecture provides significant performance and resource efficiency advantages.
The Problem: Displaying the Same Video Multiple Times
Many video compositions need to show the same video source multiple times: picture-in-picture overlays, split-screen effects, multiple filtered variations of the same source, or thumbnail previews alongside main playback.
Using multiple video elements for the same source presents these engineering challenges:
Multiple decoders:
- Each video element requires its own decoder instance
- Decoding is CPU-intensive, especially for high-resolution video
- Multiple decoders compete for CPU resources, causing performance degradation
- Memory usage multiplies with each additional video element
Multiple network streams:
- Each video element makes independent network requests
- Even with caching, initial requests duplicate bandwidth usage
- Network overhead increases linearly with the number of video elements
- Buffering strategies must be coordinated across multiple elements
Synchronization challenges:
- Multiple independent video elements can drift out of sync
- Frame timing differences create visual artifacts
- Maintaining perfect synchronization requires complex coordination logic
- Scrubbing or seeking must update multiple elements simultaneously
The Solution: Canvas Mirroring Architecture
Surface elements solve this by mirroring canvas content from a single target video element. One video element handles all decoding and rendering; surfaces copy the rendered canvas content. This approach provides:
Single video decode:
- CPU usage remains constant regardless of surface count
- Memory footprint stays minimal with one video source
- No performance degradation from multiple decoders competing for resources
Single network stream:
- Only one video element makes network requests
- Bandwidth usage is independent of surface count
- Caching benefits all surfaces automatically
Perfect synchronization:
- Surfaces copy frames after the target video has rendered them
- All surfaces show identical frames at identical times
- Synchronization is automatic and guaranteed
Efficient styling:
- CSS filters applied to surfaces don't require re-decoding
- Each surface can have different filters without performance cost
- Multiple styled variations from a single decode operation
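The core of the mirroring architecture can be sketched as a small model. The class names below (TargetVideo, Surface) are hypothetical stand-ins for a real video element and Editframe surfaces, not the actual API: a single target performs the expensive per-frame decode, and any number of surfaces copy the already-rendered result.

```typescript
// Toy model of canvas mirroring: the expensive decode happens once per
// frame on the target, and every surface copies the rendered result.

class TargetVideo {
  decodeCount = 0;
  private frame: string | null = null;

  // Decode and render one frame (stands in for the real <video> + canvas work).
  renderFrame(timestamp: number): void {
    this.decodeCount += 1; // expensive work happens exactly once per frame
    this.frame = `frame@${timestamp}`;
  }

  currentFrame(): string | null {
    return this.frame;
  }
}

class Surface {
  private copied: string | null = null;
  constructor(private target: TargetVideo) {}

  // Copying rendered pixels (a drawImage call in a real browser) is cheap:
  // no extra decoder, no extra network stream.
  copyFromTarget(): void {
    this.copied = this.target.currentFrame();
  }

  displayed(): string | null {
    return this.copied;
  }
}

// One decode serves any number of surfaces.
const target = new TargetVideo();
const surfaces = Array.from({ length: 12 }, () => new Surface(target));

target.renderFrame(0);
surfaces.forEach((s) => s.copyFromTarget());

console.log(target.decodeCount);                                 // 1
console.log(surfaces.every((s) => s.displayed() === "frame@0")); // true
```

Because each surface only copies pixels, per-surface styling such as CSS filters adds no decode cost — twelve differently filtered surfaces still trigger one decode per frame.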
Concrete Benefits
CPU usage: Constant regardless of surface count — one decoder vs. N decoders.
Memory usage: Approximately 90% reduction when the same video is displayed three or more times — one video buffer vs. N buffers.
Network bandwidth: Single stream shared across all surfaces.
Frame rate: No degradation from multiple decoders competing for resources.
Developer experience: Simple API — just set target to a CSS selector. Automatic synchronization without manual coordination. CSS filters work seamlessly without performance concerns.
Scalability: Dozens of surfaces can be created from one video without performance impact. Ideal for thumbnail grids, multi-view displays, and complex compositions; performance characteristics stay consistent as the surface count grows.
Frame Synchronization: Perfect Timing Guarantees
Surface elements ensure perfect synchronization with their target video elements by waiting for frame rendering completion before copying canvas content. This architecture eliminates frame tearing, timing drift, and visual artifacts.
The Problem: Ensuring Correct Frame Display
When copying canvas content from a video element, timing is critical:
- Copy too early: surface shows a partially rendered frame
- Copy too late: surface shows a stale frame from the previous render cycle
- No coordination: multiple surfaces copy at different times, causing visual inconsistencies
Using polling or event listeners instead presents its own engineering challenges: timing uncertainty, potential race conditions, and the difficulty of coordinating multiple surfaces that copy from the same target.
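The copy-too-early hazard can be made concrete with a toy two-phase render (hypothetical names; a real canvas is mutated in place the same way): a naive observer that polls the canvas at an arbitrary moment can capture a partially rendered frame.

```typescript
// Toy illustration of the timing hazard: rendering is not atomic, so a
// poll that lands mid-render observes a torn (partial) frame.

class TwoPhaseTarget {
  private pixels = "frame@0";

  beginRender(t: number): void {
    // The canvas briefly holds an incomplete frame while drawing.
    this.pixels = `partial@${t}`;
  }

  endRender(t: number): void {
    this.pixels = `frame@${t}`;
  }

  // Polling reads whatever happens to be on the canvas right now.
  poll(): string {
    return this.pixels;
  }
}

const target = new TwoPhaseTarget();
target.beginRender(1);
const tooEarly = target.poll(); // "partial@1": a torn frame
target.endRender(1);
const onTime = target.poll();   // "frame@1": correct only by timing luck

console.log(tooEarly, onTime);
```

Polling can only ever be probabilistically correct here; the frame-task approach described next removes the hazard by making the copy wait for render completion.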
The Solution: Frame Task Synchronization
Surface elements integrate with Editframe's frame scheduling system to guarantee synchronization: each surface waits for its target video's frame rendering to complete before copying canvas content.
Frame task coordination:
- Surfaces wait for the target video's frame rendering to complete
- Canvas copying happens only after the target has fully rendered its frame
- All surfaces copy the same, complete frame simultaneously
- Multiple surfaces can safely wait on the same target without conflicts
Automatic updates:
- Surfaces also copy frames when the target element updates
- This handles cases where frame tasks aren't actively running
- Ensures surfaces stay current even during initialization or manual updates
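The coordination above can be sketched with a callback-registry model (the names are illustrative, not Editframe's internal API): the target finishes rendering first, then notifies every waiting surface, so all copies happen after the frame is complete and all surfaces see the same frame.

```typescript
// Sketch of frame-task coordination: surfaces register as waiters, and the
// target flushes them only after its frame is fully rendered.

class Target {
  private frame: string | null = null;
  private waiters: Array<(frame: string) => void> = [];

  // A surface asks to copy once the next frame has fully rendered.
  onFrameRendered(copy: (frame: string) => void): void {
    this.waiters.push(copy);
  }

  renderFrame(timestamp: number): void {
    this.frame = `frame@${timestamp}`; // rendering completes first...
    const ready = this.waiters;
    this.waiters = [];
    ready.forEach((copy) => copy(this.frame as string)); // ...then everyone copies
  }
}

class Surface {
  copied: string | null = null;
  constructor(target: Target) {
    target.onFrameRendered((frame) => (this.copied = frame));
  }
}

const target = new Target();
const surfaces = [new Surface(target), new Surface(target), new Surface(target)];

target.renderFrame(42);
console.log(surfaces.map((s) => s.copied)); // all three show "frame@42"
```

In a browser, the "frame fully rendered" signal could plausibly come from HTMLVideoElement.requestVideoFrameCallback, though the source does not specify which mechanism Editframe's scheduler uses internally.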
Benefits
Perfect synchronization: All surfaces display identical frames at identical times. No frame tearing or partial frame displays. Deterministic frame updates eliminate visual artifacts.
No race conditions: Frame task coordination eliminates timing uncertainty. Surfaces can't copy frames before they're ready. Synchronization is guaranteed, not probabilistic.
Efficient operation: The frame task system is lightweight and integrated with Editframe's rendering pipeline. No polling overhead or event listener management. Synchronization happens as part of the natural frame rendering cycle.
Reliable behavior: Works consistently across different browsers and performance conditions. Handles edge cases like rapid seeking or playback state changes. Surfaces automatically recover if the target video updates outside the normal frame cycle.
This approach leverages Editframe's existing frame scheduling infrastructure, ensuring surfaces work seamlessly with timegroups, sequences, and other temporal composition features.