Broadcast-Quality Sports Graphics with Alpha Transparency in Editframe
How to composite athletes over animated backgrounds, layer graphics behind and in front of subjects, and render data-driven highlight packages — all from HTML.
The Effect
You've seen it on ESPN, NFL Network, and every college sports broadcast: a freeze-frame moment where the player is suspended mid-action, their team colors explode behind them, and their name slams in letter by letter. Graphic elements appear both in front of and behind the athlete simultaneously.
That's alpha compositing. And building it — for any sport, any team, any player — used to require Adobe After Effects, a dedicated motion graphics operator, and hours per clip.
This guide shows you how to do it entirely in HTML using Editframe, rembg, and ffmpeg. The result is a data-driven template that takes a JSON object and renders a broadcast-quality highlight package in minutes.
Here is what the final output looks like across five different sports and team identities — all rendered from one HTML template:
- Baseball pitcher on a royal blue stadium background, 102 MPH radar, name animating in
- Martial arts athlete on crimson, mid-high-kick frozen frame, bout record displayed
- Tennis player on forest green, backhand follow-through, serve speed readout
- Basketball player on navy, spinning the ball on one finger
- Baseball batter on dark navy, mid-swing freeze with batting stats
Every one of these is the same index.html — different --data-file arguments.
How It Works
The key insight is that <ef-video alpha> renders VP9 alpha WebM natively in the browser. That means the athlete's video (with a transparent background) composites over whatever is behind it in the DOM. Put a gradient, a canvas animation, or large text before the video in the DOM and it appears behind the athlete. Put overlays and lower-third graphics after the video and they appear in front.
Scene layers (bottom to top):
① Team color gradient background ← <ef-timegroup style="background: …">
② Animated light burst canvas ← <canvas> + addFrameTask
③ Ghost jersey number (huge, faint) ← <div class="ghost-number">
④ Athlete in motion ← <ef-video alpha src="*.webm">
⑤ Freeze-frame graphics + lower third ← <div class="lower-third">
⑥ FASTBALL / 126 MPH HUD ← <div class="pitch-hud">
No compositing software. No pre-renders. Just HTML stacking order.
Step 1: Extract the Alpha Channel
The source footage is a regular MP4. We need to remove the background and encode the result as a VP9 alpha WebM — a format Chrome can decode with per-pixel transparency.
We use rembg with the u2net_human_seg model for clean person edges, unioned with the general u2net model to capture held objects (the ball, the bat, the racket):
from rembg import remove, new_session
from PIL import Image
import numpy as np
session_human = new_session("u2net_human_seg")
session_general = new_session("u2net")
def union_alpha(img):
    # Each model returns an RGBA image; index 3 is the alpha channel
    a_human = np.array(remove(img, session=session_human))[:, :, 3]
    a_general = np.array(remove(img, session=session_general))[:, :, 3]
    # Per-pixel union: clean human edges plus held objects
    return np.maximum(a_human, a_general)
Taking the per-pixel maximum of both models gives you clean human edges plus whatever the athlete is holding. A tennis racket, a basketball on a fingertip, a bat mid-swing — all included.
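The union step itself is nothing more than a per-pixel maximum. Here it is on two tiny synthetic masks — no rembg needed — so you can see exactly what `np.maximum` recovers:

```python
import numpy as np

# Two synthetic 4x4 alpha masks: the "human" model catches the body,
# while the general model also catches a held object the human model missed.
a_human = np.array([
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
], dtype=np.uint8)
a_general = np.array([
    [0,   0, 200,   0],   # held-object pixels, absent from a_human
    [0, 200, 200,   0],
    [0,   0,   0,   0],
    [0,   0,   0,   0],
], dtype=np.uint8)

# Per-pixel union: wherever either model says "opaque", keep it.
union = np.maximum(a_human, a_general)
```

The body pixels keep their full 255 alpha, the held-object pixels come through at the general model's 200, and the background stays fully transparent.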
After extracting per-frame alpha masks, we compose the final WebM with ffmpeg's alphamerge filter:
ffmpeg \
-i source.mp4 \
-framerate 24000/1001 -start_number 1 -i masks/%04d.png \
-filter_complex "[0:v]format=rgba[rgb];[1:v]format=gray[a];[rgb][a]alphamerge" \
-c:v libvpx-vp9 -pix_fmt yuva420p -auto-alt-ref 0 \
-g 72 -keyint_min 72 \
-b:v 0 -crf 15 -deadline good \
output_alpha.webm
Critical detail — keyframe interval: The -g 72 flag sets a keyframe every 72 frames (~3 seconds at 23.976 fps). This matters because the freeze-frame feature (current-time attribute on <ef-video>) requires a VP9 keyframe within seek distance of the freeze timestamp. If your keyframe interval is too large, the freeze frame will fail to render. Choose your freezeAt time to align with a keyframe boundary — multiples of (g / fps) seconds.
Also critical — using the source MP4 for timing: Feed the original MP4 as the primary video input (for correct PTS timestamps) rather than a raw frame sequence. Frame-sequence inputs can produce VP9 streams with incorrect timestamps that cause the Editframe renderer to crash.
# Right: use source MP4 for timing
ffmpeg -i source.mp4 -i masks/%04d.png ...
# Wrong: timestamps come out broken
ffmpeg -framerate 24000/1001 -i frames/%04d.jpg -i masks/%04d.png ...
Step 2: The Composition Template
The entire composition is one index.html. All player data — video source, freeze time, scene durations, team colors, stats — flows in through EF_RENDER_DATA.
The Template Structure
Three scenes in a mode="sequence" composition:
<ef-timegroup workbench mode="sequence" class="w-[1920px] h-[1080px]">
<!-- Scene A: Live action (0s → freezeAt) -->
<ef-timegroup id="scene-a" mode="fixed" duration="4.367s"
style="background: linear-gradient(160deg, var(--team-deeper) 0%, var(--team-primary) 100%);">
<!-- Optional grayscale background video -->
<ef-video id="video-bg" mute style="filter: grayscale(1); display: none;"></ef-video>
<!-- Animated speed streaks canvas -->
<canvas id="canvas-a"></canvas>
<!-- Alpha athlete — composites over everything above -->
<ef-video id="video-a" mute alpha></ef-video>
<!-- Overlays appear IN FRONT of athlete -->
<div class="live-badge">LIVE</div>
<div class="speed-radar">...</div>
</ef-timegroup>
<!-- Scene B: Freeze frame (2s) -->
<ef-timegroup id="scene-b" mode="fixed" duration="2s"
style="background: linear-gradient(...);">
<!-- Ghost number and light burst BEHIND athlete -->
<div class="ghost-number" data-dyn="jersey-num">33</div>
<canvas id="canvas-b"></canvas>
<!-- Frozen athlete -->
<ef-video id="video-b" mute alpha></ef-video>
<!-- Lower third IN FRONT of athlete -->
<div class="lower-third">...</div>
</ef-timegroup>
<!-- Scene C: Resume (freezeAt → end) -->
<ef-timegroup id="scene-c" mode="fixed" duration="3.633s"
style="background: linear-gradient(...);">
<div class="mega-name-glitch"></div>
<ef-video id="video-c" mute alpha></ef-video>
<div class="pitch-hud">...</div>
<div class="lower-third lower-third-static">...</div>
</ef-timegroup>
</ef-timegroup>
The Mega Name Effect
The JORDAN RILEY text slamming in behind the player — that's drawn directly on canvas-b using addFrameTask. Each character is revealed with a spring-eased translateY + blur:
sceneB.addFrameTask((t, d) => {
// ... light burst drawing ...
// Mega name: character-by-character reveal BEHIND the athlete
const pd = window.EF_RENDER_DATA || {};
const LINES = [pd.firstName || '', pd.lastName || ''].map(s => s.toUpperCase());
const FS = 480, STAGGER_MS = 48, REVEAL_MS = 650;
ctx.font = `900 ${FS}px 'Bebas Neue', Impact, sans-serif`;
let globalIdx = 0;
LINES.forEach((line, li) => {
const charWidths = Array.from(line).map(ch => ctx.measureText(ch).width);
const totalW = charWidths.reduce((a, b) => a + b, 0);
let x = (W - totalW) / 2;
Array.from(line).forEach((ch, i) => {
const progress = Math.min(1, Math.max(0, t - globalIdx * STAGGER_MS) / REVEAL_MS);
if (progress > 0) {
// Approximation of cubic-bezier(0.22, 1, 0.36, 1)
const e = 1 - Math.pow(1 - progress, 2.8);
const ty = (1 - e) * 140; // slides up from 140px below
ctx.save();
ctx.globalAlpha = e * 0.14;
ctx.fillStyle = '#fff';
ctx.fillText(ch, x, baseY + li * lineHeight + ty);
ctx.globalAlpha = e * 0.22;
ctx.strokeStyle = '#fff';
ctx.lineWidth = 1.5;
ctx.strokeText(ch, x, baseY + li * lineHeight + ty);
ctx.restore();
}
x += charWidths[i];
globalIdx++;
});
});
});
Because addFrameTask receives (ownCurrentTimeMs, durationMs), every frame is a deterministic function of time. No Date.now(), no Math.random() — the render is perfectly reproducible.
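The same stagger-and-clamp timing math, extracted into pure functions (Python here purely for illustration — the constants match the canvas snippet above, and determinism falls out of taking time as the only input):

```python
STAGGER_MS = 48   # delay between successive characters
REVEAL_MS = 650   # duration of each character's reveal

def char_progress(t_ms: float, char_index: int) -> float:
    """Reveal progress (0..1) for one character — a pure function of time."""
    p = (t_ms - char_index * STAGGER_MS) / REVEAL_MS
    return min(1.0, max(0.0, p))

def ease(p: float) -> float:
    """Approximation of cubic-bezier(0.22, 1, 0.36, 1), as in the canvas code."""
    return 1 - (1 - p) ** 2.8
```

Evaluate these at the same `t` and you always get the same frame — which is exactly what makes the render reproducible.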
Dynamic Theming with CSS Custom Properties
Team colors live on :root. The entire composition re-themes by setting three variables:
:root {
--team-primary: #134A8C; /* main blue */
--team-dark: #0e3060; /* mid shade */
--team-deeper: #071628; /* deep shadow */
}
All scene backgrounds, position badges, and lower-third accents use var(--team-primary). One JSON field change transforms the entire visual identity.
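The schema supplies all three shades explicitly, but if your data source only carries a single brand color, the darker variants can be derived. A sketch (this helper is hypothetical, not part of the template):

```python
def shade(hex_color: str, factor: float) -> str:
    """Darken a #RRGGBB color by scaling each channel by `factor` (0..1)."""
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return "#{:02x}{:02x}{:02x}".format(*(round(c * factor) for c in (r, g, b)))

# e.g. derive --team-dark and --team-deeper from --team-primary:
primary = "#134a8c"
dark = shade(primary, 0.65)
deeper = shade(primary, 0.30)
```

Whether derived shades look as good as hand-picked ones depends on the brand color; the explicit three-field schema sidesteps that question.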
Step 3: Data-Driven Rendering with EF_RENDER_DATA
The Editframe CLI sets window.EF_RENDER_DATA after the page loads but before rendering starts. By installing a property setter, we intercept that assignment and apply all dynamic content to the DOM immediately — before the first frame is captured:
(function() {
var _data;
function applyData(d) {
if (!d) return;
// Theme colors
document.documentElement.style.setProperty('--team-primary', d.teamPrimary);
document.documentElement.style.setProperty('--team-dark', d.teamDark);
document.documentElement.style.setProperty('--team-deeper', d.teamDeeper);
// Scene durations
document.getElementById('scene-a').setAttribute('duration', d.sceneADuration);
document.getElementById('scene-b').setAttribute('duration', d.sceneBDuration);
document.getElementById('scene-c').setAttribute('duration', d.sceneCDuration);
// Video sources + timing
const freeze = d.freezeAt;
const va = document.getElementById('video-a');
va.setAttribute('src', d.videoFile);
va.setAttribute('sourceout', freeze);
const vb = document.getElementById('video-b');
vb.setAttribute('src', d.videoFile);
vb.setAttribute('current-time', freeze); // ← freeze frame
vb.setAttribute('duration', d.sceneBDuration);
const vc = document.getElementById('video-c');
vc.setAttribute('src', d.videoFile);
vc.setAttribute('sourcein', freeze); // ← resume from freeze
// Optional background video (grayscale original footage)
const vbg = document.getElementById('video-bg');
if (d.bgVideoFile) {
vbg.setAttribute('src', d.bgVideoFile);
vbg.setAttribute('sourceout', freeze);
vbg.style.display = '';
}
// Player text content
const set = (key, val) =>
document.querySelectorAll(`[data-dyn="${key}"]`)
.forEach(el => el.textContent = val);
set('player-name', `${d.firstName} ${d.lastName}`);
set('school', d.school);
set('position', d.position);
set('jersey', `#${d.jersey}`);
set('stat1-val', d.stat1Val); set('stat1-lbl', d.stat1Lbl);
// ... etc.
}
// Intercept the CLI's data injection — fires before rendering starts
Object.defineProperty(window, 'EF_RENDER_DATA', {
enumerable: true, configurable: true,
get: () => _data,
set: (val) => { _data = val; applyData(val); },
});
// Apply defaults for browser preview
applyData({ videoFile: '/assets/pitcher_alpha.webm', ... });
}());
Because ef-video uses Lit's reactive property system, every setAttribute call triggers a re-initialization with the new value. The Editframe composition won't signal "ready" until all media finishes loading — so the render engine naturally waits for the video to buffer before capturing any frames. No polling, no timeouts, no mid-render hacks.
Step 4: The Data Schema
Each sport gets a JSON file. The schema is generic — no sport-specific field names:
{
"videoFile": "/assets/basketball_alpha.webm",
"bgVideoFile": "/assets/basketballer.mp4",
"freezeAt": "3s",
"sceneADuration": "3s",
"sceneBDuration": "2s",
"sceneCDuration": "4.51s",
"firstName": "Jordan", "lastName": "Blake",
"jersey": "67", "position": "Point Guard",
"school": "Roosevelt University Basketball",
"metricValue": 24, "actionType": "3-Pointer",
"metricUnit": "PPG", "metricLabel": "Scoring Avg",
"stat1Val": "24.3", "stat1Lbl": "Points Per Game",
"stat2Val": "8.1", "stat2Lbl": "Rebounds Per Game",
"stat3Val": "4.2", "stat3Lbl": "Assists Per Game",
"stat4Val": "Conf. POY", "stat4Lbl": "2024 Honors",
"teamPrimary": "#1A3F8B",
"teamDark": "#0e2456",
"teamDeeper": "#061228"
}
metricValue, metricUnit, and metricLabel replace sport-specific fields like velocity or PPG — the template doesn't know or care what the number means. It just displays it.
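Since a typo'd JSON file only surfaces mid-render, a pre-flight check is cheap insurance. A minimal sketch — the required-field set below is an assumption based on the fields the template reads, not an official schema:

```python
REQUIRED = {
    "videoFile", "freezeAt",
    "sceneADuration", "sceneBDuration", "sceneCDuration",
    "firstName", "lastName",
    "teamPrimary", "teamDark", "teamDeeper",
}

def validate_player(data: dict) -> list:
    """Return the sorted list of missing required fields (empty means valid)."""
    return sorted(REQUIRED - data.keys())
```

Run it over each file in `players/` before invoking the CLI and fail fast on anything incomplete.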
Step 5: Rendering
With the template and data files in place:
# Render a single sport
npx editframe render . --data-file players/basketball.json -o renders/basketball.mp4
# Render all at once
./render-all.sh
The render-all.sh script is just one render command per sport:
#!/usr/bin/env bash
set -e
cd "$(dirname "$0")"
mkdir -p renders
npx editframe render . --data-file players/pitcher.json -o renders/pitcher.mp4
npx editframe render . --data-file players/karate.json -o renders/karate.mp4
npx editframe render . --data-file players/tennis.json -o renders/tennis.mp4
npx editframe render . --data-file players/basketball.json -o renders/basketball.mp4
npx editframe render . --data-file players/batter.json -o renders/batter.mp4
Each render starts its own Vite dev server, loads the composition, applies the data, and exports to MP4. Total time for five 10-second 1920×1080 renders on an M-series Mac: under two minutes.
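The output length is simply the sum of the three scene durations. A quick sanity check (the `'s'`-suffix parser is an assumption about the duration-string format used in the schema):

```python
def parse_seconds(value: str) -> float:
    """Parse Editframe-style duration strings like '4.367s' into seconds."""
    return float(value.rstrip("s"))

def total_duration(data: dict) -> float:
    """Total clip length implied by the three scene durations."""
    return sum(parse_seconds(data[k])
               for k in ("sceneADuration", "sceneBDuration", "sceneCDuration"))
```

For the pitcher data above, 4.367 + 2 + 3.633 comes out to exactly the 10-second clip length.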
Gotchas and Hard-Won Lessons
VP9 alpha requires alpha_mode=1 in stream tags. ffmpeg sets this automatically when you use -pix_fmt yuva420p -auto-alt-ref 0. If you see athletes rendering as black rectangles, check your WebM with ffprobe -show_streams and look for the TAG:alpha_mode=1 entry.
The SeekHead must be voided. VP9 WebM files written by ffmpeg contain a SeekHead element with absolute byte offsets into the original file. Editframe's JIT segment server patches this on first access — replacing the SeekHead with a VOID element so the decoder doesn't try to seek to positions that don't exist in a sliced segment response.
Freeze frames need keyframe alignment. The current-time attribute (which pins playback to a single frame) requires the target timestamp to be reachable by decoding forward from a nearby keyframe. Use -g 72 (~3s interval at 23.976 fps) and choose your freezeAt at a multiple of 72/fps ≈ 3.003s.
Use the source MP4 for PTS timestamps. When combining frame-sequence JPEG inputs with alpha mask PNGs in ffmpeg, the output VP9 stream can have incorrect timestamps. Feed the original MP4 as the primary video input and the alpha masks as a secondary sequence input — the MP4's embedded PTS values transfer correctly to the WebM output.
rembg drops held objects. The u2net_human_seg model segments the person body — not what they're holding. A basketball on a fingertip, a tennis racket, or a baseball bat will be partially or fully transparent. Taking the per-pixel union with the general u2net model recovers held objects at the cost of slightly noisier edges near the hands. For the demo quality required here, this tradeoff is acceptable.
What You Need
- Editframe — the composition and rendering engine
- rembg — background removal (Python)
- ffmpeg — frame extraction and VP9 alpha encoding
- Source video — filmed against a clean background (green screen, black studio, or outdoor footage with distinct subject separation)
The full source code for this project — including the HTML template, all player JSON files, extract_alpha.py, and render-all.sh — demonstrates how a single template can generate an entire broadcast graphics package across multiple sports and team identities.
The Bigger Picture
What makes this approach genuinely powerful for a sports software company is the data-driven architecture. Adding a new athlete doesn't require an After Effects operator, a motion designer, or even opening a design tool. It requires filling in a JSON object:
{
"videoFile": "/assets/new_athlete_alpha.webm",
"firstName": "Alex", "lastName": "Torres",
"school": "Westview High School Soccer",
"teamPrimary": "#2E7D32",
...
}
The template handles the rest — team colors, name animations, mega-name background text, stats panel, action HUD — all consistent with every other output in the package.
Scale that to a platform serving hundreds of high school and college programs, each with their own colors, players, and stats, and you have a compelling product story: professional broadcast graphics, generated at the push of a button, at a fraction of the cost of traditional production.
That's what programmatic video makes possible.