Add an audio waveform visualizer to a video using FFmpeg and Editframe
If you, like many people, first encountered audio visualizers in the late aughts or early 2010s when Apple’s iTunes brought them to the world, there will always be a special place in your heart for this wonder of audio technology. Perhaps you spent hours transfixed by the undulating hues that Radiohead’s OK Computer produced on your laptop screen. Maybe you projected these hypnotic waves on your dorm room wall to set a mood for your Friday night pregame. Are these anecdotes too specific to be relatable? Maybe, though given that you’re currently reading technical content on a video infrastructure platform’s docs site, we’re willing to go out on that limb.
Suffice it to say, audio visualizers ruled back in 2008, and in 2023… they’re still totally rad. And what’s even cooler, you ask? Adding audio visualizers to a video, that’s what. Embedding a visual element that responds to the audio in a video file helps your viewers follow your music or audio more easily, and brings another interesting dimension to your content.
Many online videos (think slideshows, product videos, or podcast recordings) are static and repetitive, and fail to deliver any visual stimulation to the viewer. Adding a waveform that responds to the audio track is an easy way to bring this much-needed dimension to your content and quickly up your video game.
In this tutorial, we will show you how to quickly add a waveform to your videos using two methods: FFmpeg and Editframe.
Let’s get started!
Part 1: Using FFmpeg
First, we’ll walk through the process of adding an audio waveform overlay to a video with FFmpeg.
Required Tools
- Sample video files: provided by pexels.com
- FFmpeg: (You’ll need to install FFmpeg and set up the appropriate environment variables before beginning this tutorial; a quick way to verify your install is shown below)
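If you’re not sure whether FFmpeg is installed and available on your PATH, a quick version check will confirm it (any reasonably recent build includes the showwaves filter used below):
ffmpeg -version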
Download the required assets
Here is a sample video file (provided by pexels.com, with music by pixabay.com) to use in this tutorial:
video.mp4
Using FFmpeg’s filter complex
Here’s the FFmpeg command to add an audio waveform to a video file:
ffmpeg -i video.mp4 \
-filter_complex "[0:a]showwaves=s=1080x200:colors=White\
:mode=line,format=yuv420p[v];\
[0:v][v]overlay=(W-w)/2:H-h[outv]" \
-map "[outv]" -pix_fmt yuv420p -map 0:a \
-c:v libx264 -c:a copy waveform.mp4
Let’s break down what this code is doing.
- In this line, we pass the video.mp4 file as the input:
ffmpeg -i video.mp4 \
- Here, we take the audio stream [0:a], generate a white, 1080x200-pixel waveform with the showwaves filter, set the waveform mode to line, set the pixel format to yuv420p, and label the result [v]:
:mode=line,format=yuv420p[v];\
- In this line, we overlay the audio waveform [v] on the video [0:v], set its x position to the horizontal center (W-w)/2 and its y position to the bottom of the frame (H-h), and label the resulting composition [outv]:
[0:v][v]overlay=(W-w)/2:H-h[outv]" \
- In this line, we map the [outv] composition and the audio stream from the first input into the output:
-map "[outv]" -pix_fmt yuv420p -map 0:a \
- Here, we encode the video stream with libx264, copy the audio stream as-is, and write the result to the waveform.mp4 output file:
-c:v libx264 -c:a copy waveform.mp4
Here’s the output video using this FFmpeg command:
waveform.mp4
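As an aside, showwaves supports a few other options worth experimenting with. The variation below is purely illustrative (the output name waveform-cline.mp4 is just an example): it draws the waveform as filled, centered vertical lines (mode=cline) in cyan, 300 pixels tall:
ffmpeg -i video.mp4 \
-filter_complex "[0:a]showwaves=s=1080x300:colors=cyan\
:mode=cline,format=yuv420p[v];\
[0:v][v]overlay=(W-w)/2:H-h[outv]" \
-map "[outv]" -pix_fmt yuv420p -map 0:a \
-c:v libx264 -c:a copy waveform-cline.mp4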
Part 2: Using Editframe
Now we’ll add the same audio waveform overlay again, only this time we will use Editframe instead of FFmpeg.
Required tools:
- Node.js installed on your machine
- No need to have FFmpeg installed on your machine
- Editframe API Token (you can create an account from this link)
Let’s get started:
- Create a folder for your project and move into it:
mkdir editframe-audio-waveform
cd editframe-audio-waveform
- Initialize a Node.js project in your new directory:
yarn init -y
- Install the Editframe Node.js SDK:
yarn add @editframe/editframe-js
- Create an index.js file that will add the waveform to the video:
const { Editframe } = require("@editframe/editframe-js");
const path = require("path");

(async () => {
  // Initialize the Editframe client with your API token
  const editframe = new Editframe({
    token: process.env.EDITFRAME_TOKEN,
  });

  // Create a new 1080x1920 composition with a black background
  const composition = await editframe.videos.new({
    backgroundColor: "#000",
    dimensions: {
      height: 1920,
      width: 1080,
    },
  });

  // Add the source video, scaled to fill the composition
  const full = path.join(__dirname, "video.mp4");
  const videoLayer = await composition.addVideo(full, {
    size: { format: "fill" },
  });

  // Add the audio waveform layer: white bars at the bottom center of the frame
  await composition.addWaveform(
    // options
    null,
    { backgroundColor: "#000", color: "#fff", style: "bars" },
    // config
    {
      position: {
        x: "center",
        y: "bottom",
      },
      size: {
        height: 200,
      },
      timeline: {
        start: 0,
      },
    }
  );

  await composition.addSequence([videoLayer]);

  console.log("Pre-encoding...");
  const video = await composition.encodeSync();
  console.log(video);
})();
Let’s break down what the code above is doing.
- In this line, we’ll initialize an Editframe instance with the Editframe API Token (obtained by creating an Editframe application):
const editframe = new Editframe({
  token: process.env.EDITFRAME_TOKEN,
});
- Here, we’ll create a new video composition with 1080x1920 dimensions and a black background:
const composition = await editframe.videos.new({
  backgroundColor: "#000",
  dimensions: {
    height: 1920,
    width: 1080,
  },
});
- In this line, we’ll add the video file to the composition using the composition.addVideo method:
const full = path.join(__dirname, "video.mp4");
const videoLayer = await composition.addVideo(
  // file
  full,
  // config
  {
    size: { format: "fill" },
  }
);
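- Here, we’ll add the audio waveform to the composition using the composition.addWaveform method, styling it as white bars on a black background, positioning it at the bottom center of the frame at 200 pixels tall, and starting it at the beginning of the timeline:
await composition.addWaveform(
  null,
  { backgroundColor: "#000", color: "#fff", style: "bars" },
  {
    position: { x: "center", y: "bottom" },
    size: { height: 200 },
    timeline: { start: 0 },
  }
);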
- Here, we’ll add the video layer to the composition as a sequence. After that, we’ll encode the video synchronously (encoding can also be done asynchronously with a webhook):
await composition.addSequence([videoLayer]);
const video = await composition.encodeSync();
console.log(video);
- Now, run the video script:
node index
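Note that the script reads your API token from the EDITFRAME_TOKEN environment variable, so it needs to be set in the shell you run the script from. For example, in a Unix-like shell (replacing the placeholder with your actual token):
EDITFRAME_TOKEN="your-api-token" node index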
Here’s the output video from the Editframe API:
editframe-waveform.mp4
Here’s the video comparison between using Editframe and FFmpeg:
compare.mp4
Note: You can add transitions, filters, trim videos, and more. You can learn more here in the Editframe API docs.
You can request access to the waveform feature by contacting team@editframe.com