Video Composition
ef-video Concepts
JIT Transcoding: Instant Playback Without Preprocessing
Editframe's Just-In-Time (JIT) transcoding enables playback of video URLs without downloading or preprocessing the entire file.
Engineering Challenges with Native Video Playback
Using native video playback with original files presents these challenges:
- Full file download is required before seeking, making previews slow
- Supporting multiple quality levels typically requires preprocessing infrastructure to transcode and store variants upfront
- Seeking is often slow and unresponsive, discouraging exploration in editing workflows
The Solution: On-Demand Video Generation
Instead of preprocessing entire videos, Editframe generates video data on-demand as users need it:
- Fast startup: Playback typically begins within seconds, without downloading the entire file
- Efficient bandwidth: Only watched portions are fetched, reducing costs significantly
- Fast seeking: Scrubbing responds immediately with visual feedback
- No preprocessing: Works with most standard video URLs without infrastructure setup
- Automatic optimization: Quality levels are selected automatically for optimal playback
Simply provide a video URL with the src attribute — no preprocessing required:
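For example (the URL here is a placeholder):

```html
<ef-video src="https://example.com/video.mp4"></ef-video>
```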
The video starts playing within seconds. Scrubbing through the timeline is very fast.
Time Coordinates
Editframe provides three different time coordinate systems. A single coordinate system forces compromises — different use cases genuinely need different perspectives on time.
The Problem with Single Time Systems
- Trimming breaks tracking: If you track an object using playback time, adding a trim offset breaks the tracking data
- Composition complexity: When videos are combined in sequences, local timing gets lost
- Effect scoping: Animations tied to playback time can't be reused across different compositions
Three Coordinate Systems
Timeline Time (Root Timegroup Relative)
Properties: startTimeMs, endTimeMs, currentTimeMs, durationMs
Answers: "When does this video play relative to other elements?"
Enables precise coordination between multiple elements on a shared timeline.
Element Time (Element-Scoped)
Property: ownCurrentTimeMs
Use case: "Animate this video's opacity from 0 to 1 over its duration, regardless of when it appears in the timeline."
Starts at 0 when the video begins — independent of its position in the composition. Enables reusable effects.
Source Time (Trimming-Agnostic)
Property: currentSourceTimeMs
Use case: "Track an object at source time 5.2s — this tracking remains valid even if I trim the video later."
Always references the original source file, unaffected by trim operations. Enables robust data association.
Why This Architecture Matters
Without three coordinate systems, developers must manually calculate offsets when trimming, maintain separate tracking that breaks on edits, or choose between reusable effects and timeline awareness. The three systems let each concern operate in its natural coordinate space.
```html
<!-- startTimeMs=2000: video enters at 2s on the timeline -->
<!-- When timeline is at 3s: currentTimeMs=3000, ownCurrentTimeMs=1000 -->
<!-- With trimstart="2s": currentSourceTimeMs = 2000 + ownCurrentTimeMs -->
<ef-video src="video.mp4" start-time-ms="2000" trimstart="2s"></ef-video>
```
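The relationships between the coordinate systems reduce to simple arithmetic. A minimal sketch of the conversions implied above (the function names here are illustrative, not part of the element's API):

```javascript
// Convert a root-timeline time into the element's own time.
// ownCurrentTimeMs starts at 0 when the element begins playing.
function toOwnTime(timelineMs, startTimeMs) {
  return timelineMs - startTimeMs;
}

// Map element time back to the original source file,
// accounting for a trimmed-off leading portion.
function toSourceTime(ownMs, trimStartMs) {
  return trimStartMs + ownMs;
}

// Values from the markup above: start-time-ms="2000", trimstart="2s".
// Timeline at 3s → element time 1000ms → source time 3000ms.
const own = toOwnTime(3000, 2000);
const source = toSourceTime(own, 2000);
```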
Trimming Semantics
Editframe supports two mental models for trimming, each serving different workflows. Both produce identical visual output — they differ in time tracking and ergonomics.
Why Two Approaches?
- UI builders need relative trimming (drag handles inward from edges)
- Professional workflows need absolute timecode (precise frame references)
- Data tracking needs trimming that doesn't break source-time alignment
Relative Trimming: trimstart / trimend
Mental model: "Remove X seconds from the start/end of the clip."
This matches consumer video editors where you drag handles inward.
```html
<!-- 10s source → 6s clip (remove 2s from each end) -->
<ef-video src="video.mp4" trimstart="2s" trimend="2s"></ef-video>
```
Duration formula: sourceDuration - trimstart - trimend
Effect on source tracking: currentSourceTimeMs = trimstart + ownCurrentTimeMs
Absolute Trimming: sourcein / sourceout
Mental model: "Show frames from timestamp A to timestamp B."
This matches professional editors that use absolute source timecode.
```html
<!-- Show exactly 2s to 4s from source (2s clip) -->
<ef-video src="video.mp4" sourcein="2s" sourceout="4s"></ef-video>
```
Duration formula: sourceout - sourcein
Effect on source tracking: currentSourceTimeMs = sourcein + ownCurrentTimeMs
When to Use Each
| Use trimstart/trimend | Use sourcein/sourceout |
|---|---|
| Building UI with drag handles or sliders that adjust trim length | Working with timecode from video editing software |
| Thinking "how much to cut off" | Referencing specific moments by timestamp |
| Working with durations rather than absolute times | Frame-perfect accuracy with known time positions |
Both approaches survive trim changes correctly because currentSourceTimeMs always references the original source file.
Timegroup Hierarchy
Video elements can exist within a hierarchy of timegroups, but timegroups are not required — a standalone <ef-video> plays immediately without any wrapper.
Flexible Composition Without Forced Complexity
Traditional video systems force a single composition model: either everything must be in a timeline (adding complexity for simple cases) or everything is standalone (preventing multi-element compositions). Editframe's optional hierarchy provides both.
Standalone video — works without any wrapper:
```html
<ef-video src="video.mp4"></ef-video>
<!-- parentTimegroup: null -->
<!-- rootTimegroup: null -->
```
Video within a timegroup:
```html
<ef-timegroup duration="10s">
  <ef-video src="video.mp4" start-time-ms="2000"></ef-video>
  <!-- parentTimegroup → references the timegroup -->
  <!-- rootTimegroup → same timegroup (it's the root) -->
  <!-- Video starts at 2s on the timegroup's timeline -->
</ef-timegroup>
```
Nested timegroups:
```html
<ef-timegroup duration="20s"> <!-- Root timegroup -->
  <ef-timegroup mode="sequence" duration="5s"> <!-- Parent timegroup -->
    <ef-video src="video.mp4"></ef-video>
    <!-- parentTimegroup → sequence timegroup -->
    <!-- rootTimegroup → outer timegroup -->
  </ef-timegroup>
</ef-timegroup>
```
parentTimegroup vs rootTimegroup
parentTimegroup — the closest containing timegroup, or null for standalone. Used for immediate containment, positioning relative to parent, local timing context.
rootTimegroup — the outermost timegroup defining the main timeline, or null for standalone. Used for global timeline positioning and coordinating with other elements.
startTimeMs is relative to rootTimegroup when present; otherwise relative to the video's own timeline.
When Timegroups Are Required
Timegroups are needed when:
- Combining multiple elements (videos, text, graphics) in a composition
- Creating sequences that play one after another
- Layering content with precise timing relationships
For the timegroup element itself, see the Timegroup documentation.
Audio Analysis
Video elements expose real-time audio frequency data, enabling waveform visualizations and audio-reactive effects without separate audio processing pipelines.
Why Real-Time Audio Analysis Matters
Extracting audio frequency data typically requires separate analysis tools or preprocessing steps. These add preprocessing overhead before visualization can happen, produce static pre-computed waveforms that can't respond to real-time playback changes, and require separate tools and formats for audio vs video data.
Editframe integrates audio analysis directly into the video element:
- Real-time frequency data: Magnitude data updates continuously during playback
- Unified API: Audio data accessed through the same element that handles video
- No preprocessing: Waveforms and audio-reactive effects work immediately
Configuration
Configure the FFT resolution and smoothing:
```html
<ef-video
  src="video-with-audio.mp4"
  fft-size="2048"
  audio-buffer-duration="0.8"
></ef-video>
```
| Attribute | Description |
|---|---|
| fft-size | FFT bin count. Higher values give more frequency resolution. Must be a power of 2. |
| audio-buffer-duration | Smoothing time constant (0–1). Higher values produce smoother transitions. |
Access frequency data via the DOM element's fftData property — a Float32Array that updates continuously during playback.
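As a sketch of how such magnitude data is typically interpreted, each bin spans sampleRate / fftSize Hz (the standard FFT bin spacing). The sample rate below is an assumed value, not something read from the element:

```javascript
// Frequency covered by a given bin, using the standard FFT spacing.
function binFrequencyHz(binIndex, sampleRate, fftSize) {
  return (binIndex * sampleRate) / fftSize;
}

// Index of the loudest bin in a magnitude array
// (e.g. the element's fftData Float32Array).
function peakBin(magnitudes) {
  let peak = 0;
  for (let i = 1; i < magnitudes.length; i++) {
    if (magnitudes[i] > magnitudes[peak]) peak = i;
  }
  return peak;
}

// With fft-size="2048" and an assumed 48 kHz track,
// bin 1024 corresponds to 24 kHz.
const hz = binFrequencyHz(1024, 48000, 2048);
```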
Use Cases
- Waveform visualizations that match audio content
- Audio-reactive effects that respond to frequency changes
- Real-time audio visualization without separate processing
For complete property specifications, see the Video Element Reference.
Production vs Development Workflows (src vs asset-id)
The src and asset-id attributes serve different architectural needs: real-time preview requires flexibility, while production rendering requires parallel processing.
Development and Preview: src
```html
<ef-video src="https://example.com/video.mp4"></ef-video>
```
No preprocessing required. Provide any video URL and it works immediately via JIT transcoding. Perfect for development, testing, and real-time preview interfaces.
Limitation: Not suitable for production rendering due to parallel processing requirements.
Production Rendering: asset-id
```html
<ef-video asset-id="upload-123abc"></ef-video>
```
Enables parallel rendering across a worker fleet. Multiple workers render different timeline slices simultaneously, reducing render time and scaling with the number of workers. Each worker processes only the portion it needs, with precise segment-level access.
Why src can't work for production: Production rendering requires parallel processing where multiple workers render simultaneously. This only works with pre-processed, uploaded assets — not arbitrary remote URLs.
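The parallel-rendering idea can be illustrated with a simple timeline-slicing sketch. This is an illustrative model, not Editframe's actual scheduler:

```javascript
// Split a composition into contiguous slices, one per worker.
// Each worker only needs segment-level access to its own slice.
function sliceTimeline(durationMs, workerCount) {
  const sliceMs = Math.ceil(durationMs / workerCount);
  const slices = [];
  for (let start = 0; start < durationMs; start += sliceMs) {
    slices.push({ startMs: start, endMs: Math.min(start + sliceMs, durationMs) });
  }
  return slices;
}

// A 60s render across 4 workers: each renders a 15s slice in parallel.
const slices = sliceTimeline(60000, 4);
```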
When to Use Each
| Use case | src | asset-id |
|---|---|---|
| Development | ✓ | — |
| Real-time preview UI | ✓ | — |
| Testing with arbitrary URLs | ✓ | — |
| Production rendering jobs | — | ✓ |
| Parallel worker fleet rendering | — | ✓ |
Complete workflow pattern:
```js
// Development: instant preview with src
// <ef-video src="https://user-video.example.com/v.mp4"></ef-video>

// Production: upload first, then render with asset-id
const asset = await uploadVideo(videoFile);
// <ef-video asset-id="${asset.id}"></ef-video>
```