Automatically turn images into a slideshow using FFmpeg and Editframe
You might not associate the word “slideshow” with new, modern, technology-forward presentations. But from sales and marketing assets to social media content, this tried-and-true storytelling method is still alive and well in the digital age. With the right editing, slideshows can make for an eye-popping way to show off your brand, promote an event, or elevate the look of your social media feeds.
In this tutorial, we’ll show you how to programmatically create a slideshow by combining multiple images into a single video using two methods: FFmpeg and Editframe. The final output will be an MP4 video. You can even take this a step further and use either FFmpeg or Editframe (whichever method suits you best) to incorporate audio and transition effects that take your slideshow to the next level.
By the end of this tutorial, you’ll know how to make an amazing photo or video slideshow programmatically and at scale, and you’ll forget all about the dusty projector in your grandma’s basement.
Let’s get started.
File assets
Here are the sample image files provided by unsplash.com that we will use in our tutorial:
file1.jpg
file2.jpg
file3.jpg
Part 1: Using FFmpeg
First, we’ll walk through this workflow using FFmpeg.
Required Tools
- Sample image files (provided above)
- FFmpeg: (You’ll need to install FFmpeg and set up the appropriate environment variables before beginning this tutorial)
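Before starting, you can confirm FFmpeg is installed and on your PATH with a quick check:

```shell
# Prints the installed FFmpeg version and build configuration;
# if this fails, revisit your installation and environment variables.
ffmpeg -version
```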
- Create a video.txt file.
touch video.txt
- Paste the paths of the images you want in the video into video.txt. For example:
file '/Users/mac/dev/ffmpeg-tutorials/file1.jpg'
duration 4
file '/Users/mac/dev/ffmpeg-tutorials/file2.jpg'
duration 2
file '/Users/mac/dev/ffmpeg-tutorials/file3.jpg'
duration 2
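Writing video.txt by hand is fine for three images, but for larger batches you can generate it with a short shell loop. This is a sketch assuming all slides share a single duration; adjust the duration value (and the *.jpg glob) to fit your files:

```shell
# Generate video.txt from every .jpg in the current directory,
# giving each slide a 3-second duration.
for f in *.jpg; do
  printf "file '%s'\nduration 3\n" "$PWD/$f"
done > video.txt
```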
- Run this script to import all images and make a video from them:
ffmpeg -safe 0 -f concat -i video.txt -c:v libx264 \
-vf "scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920" -pix_fmt yuv420p slideshow.mp4
Let’s break down what the code above is doing.
- In this line, we concatenate all of the images listed inside video.txt, disabling file-path safety checks with -safe 0. We also set the video encoder to libx264:
ffmpeg -safe 0 -f concat -i video.txt -c:v libx264
- In this line, we resize and crop all images to 1080x1920 while keeping their aspect ratio. We also set the pixel format with -pix_fmt yuv420p:
-vf "scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920" -pix_fmt yuv420p
slideshow.mp4
Using numbered file names
- Our project folder contains file1.jpg, file2.jpg, and file3.jpg. Instead of listing each path manually, we can use a filename pattern to import all of them:
ffmpeg -framerate 1 -i file%d.jpg -vf "scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920" -c:v libx264 -pix_fmt yuv420p output.mp4
Let’s break down the code above.
- In this line, we specify a frame rate of one frame per second and import all of our files with the file%d.jpg pattern:
ffmpeg -framerate 1 -i file%d.jpg
- In this line, we resize and crop all images to 1080x1920 while keeping their aspect ratio. We also specify the pixel format for the output with -pix_fmt yuv420p:
-vf "scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920" -c:v libx264 -pix_fmt yuv420p
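One caveat on the file%d.jpg pattern: FFmpeg’s image sequence demuxer expects the numbers to be sequential and, by default, to start at 0 or 1. If your sequence starts elsewhere (say, at file5.jpg), you can pass the -start_number flag. Here is a sketch of the same command with that flag added:

```shell
# Same slideshow command, but reading a sequence that starts at file5.jpg.
ffmpeg -framerate 1 -start_number 5 -i file%d.jpg \
  -vf "scale=1080:1920:force_original_aspect_ratio=increase,crop=1080:1920" \
  -c:v libx264 -pix_fmt yuv420p output.mp4
```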
Here is the output video from the FFmpeg command:
slideshow.mp4
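The intro mentioned adding audio; here is a hedged sketch of one way to do that with FFmpeg, muxing a background track into the finished slideshow (music.mp3 is a placeholder for your own audio file):

```shell
# Copy the video stream as-is, encode the audio to AAC, and stop at
# whichever input ends first so the audio doesn't outlast the slides.
ffmpeg -i slideshow.mp4 -i music.mp3 \
  -c:v copy -c:a aac -shortest slideshow-with-audio.mp4
```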
Part 2: Using Editframe
Now let’s perform the same task using Editframe instead of FFmpeg.
Required tools:
- Node.js installed on your machine (v16+)
- Editframe API Token (you can create an account from this link)
*Note: there is no need to have FFmpeg installed on your machine.
- Create a folder for your project:
mkdir editframe-slideshow
- Initialize a Node.js project:
yarn init -y
- Install the Editframe Node.js SDK:
yarn add @editframe/editframe-js
- Create a create-video.js file to combine the images into a single video:
const { Editframe } = require("@editframe/editframe-js");

const main = async () => {
  const editframe = new Editframe({
    develop: true,
    clientId: "YOUR_EDITFRAME_CLIENT_ID",
    token: "YOUR_EDITFRAME_TOKEN",
  });

  const composition = await editframe.videos.new({
    backgroundColor: "#000",
    dimensions: {
      height: 1920,
      width: 1080,
    },
  });

  const image1 = await composition.addImage(`${__dirname}/file1.jpg`, {
    trim: { end: 4 },
    size: { format: "fill" },
  });

  const image2 = await composition.addImage(`${__dirname}/file2.jpg`, {
    trim: { end: 2 },
    size: { format: "fill" },
  });

  const image3 = await composition.addImage(`${__dirname}/file3.jpg`, {
    trim: { end: 2 },
    size: { format: "fill" },
  });

  await composition.addSequence([image1, image2, image3]);

  const video = await composition.encodeSync();
  console.log(video);
};

main();
Let’s dive into what the code in this file is doing.
- Here, we initialize an Editframe instance with our Editframe token (which we obtained by creating an Editframe application). We also set develop to true so the output video opens in a new tab when encoding has finished:
const editframe = new Editframe({
  develop: true,
  clientId: "YOUR_EDITFRAME_CLIENT_ID",
  token: "YOUR_EDITFRAME_TOKEN",
});
- Here, we create a new video composition with 1080x1920 dimensions and a black background:
const composition = await editframe.videos.new({
  backgroundColor: "#000",
  dimensions: {
    height: 1920,
    width: 1080,
  },
});
- In each of these object declarations, we:
- Add the selected image to the Editframe video composition using composition.addImage.
- Set the image duration using the trim layer configuration object.
- Fill the video frame (to prevent black bars at the top and bottom or left and right) using the size layer configuration object.
const image1 = await composition.addImage(`${__dirname}/file1.jpg`, {
  trim: { end: 4 },
  size: { format: "fill" },
});

const image2 = await composition.addImage(`${__dirname}/file2.jpg`, {
  trim: { end: 2 },
  size: { format: "fill" },
});

const image3 = await composition.addImage(`${__dirname}/file3.jpg`, {
  trim: { end: 2 },
  size: { format: "fill" },
});
- Here, we add the image layers to the video composition in order using the composition.addSequence method. Then we encode the video synchronously (this can also be done asynchronously using a webhook):
await composition.addSequence([image1, image2, image3]);
const video = await composition.encodeSync();
console.log(video);
- Run the video script:
node create-video
Here’s the output video using the Editframe API method:
editframe-slideshow.mp4
Note: You can also add transitions and filters, trim videos, and much more using the Editframe compositional API. Learn more at the Editframe API docs.
Comparison video between FFmpeg and Editframe API
Here is a comparison of the videos created with FFmpeg (left) and Editframe (right):