How to Generate Open Graph Videos Using the Editframe API and Next.js
Use the Editframe API to programmatically generate Open Graph videos for your app or website content.
Open Graph (OG) tags present a golden opportunity for content owners to catch the eye of a casual scroller. The preview images and headlines that people see while browsing social networks often determine whether something is liked and shared, or skipped over immediately. However, manually generating Open Graph tags for a large content library can be extremely time- and labor-intensive, and many brands do not have the resources to execute this at scale.
Here, we’ll show you how to automate the process of generating Open Graph video metadata using the Editframe API along with assets that already live on your webpages, and reap the benefits of a next-generation Open Graph experience without all the repetitive, time-consuming work.
What is Open Graph?
Open Graph is a protocol that controls how metadata is shared between websites and social networks. First developed by Facebook in 2010, the Open Graph (OG) protocol is now the primary way in which link preview data travels between web domains and places like LinkedIn, Slack, Discord, iMessage, and basically everywhere else on the internet.
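In practice, OG metadata is just a handful of meta tags in a page's <head>. Here is a minimal illustrative example (the URLs are placeholders):
<head>
  <meta property="og:title" content="Editframe for Beginners" />
  <meta property="og:description" content="Learn how to use Editframe" />
  <meta property="og:image" content="https://example.com/blog.jpeg" />
  <meta property="og:url" content="https://example.com/blog/functions" />
</head>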
Why use Video Open Graph Metadata?
On the web, first impressions are everything. By adding video content to your link previews (perhaps even with a GIF or Lottie overlay), you can stand out from your competitors and grab the attention of more viewers than you would with monotonous static images. Also, detailed and relevant OG metadata is like catnip for search engine robots. Going the extra mile with the indexable tags on your site's links will help increase the discoverability of your content.
Use cases
After completing this tutorial, you will be able to autogenerate Open Graph assets including:
- Twitter Card videos
- Pinterest videos for SEO
- Videos for Webflow
- Videos for WordPress
- Videos for Shopify stores and products
Tutorial introduction
In this tutorial, we are going to build a Next.js blog that generates a custom dynamic Open Graph video using the Editframe API and Node.js. We will build these videos from the images and data that already live on our website, but if we didn't have these assets on hand, we could certainly use third-party sources like stock image repositories.
Here is an example of the output of our template using photos from a blog post:
[Video: editframe-open-graph.mp4]
The GitHub repository for this project is also available. Feel free to clone and reference it along with this tutorial.
Requirements
To work on this tutorial you will need the following items:
- Node.js installed on your machine (v16+).
- An Editframe API account
- Redis installed on your machine (optional)
Let’s dive in!
Initialize a new Next.js project
- The first step is to set up a new Next.js project:
npx create-next-app open-graph-editframe
- Create a `posts` directory to store markdown files:
mkdir posts
- Create a `functions.mdx` file:
touch posts/functions.mdx
- Paste the content below into `functions.mdx`:
---
title: Editframe for Beginners
date: 'July 22, 2022'
description: Learn how to use Editframe
thumbnailUrl: '/blog.jpeg'
---
<div>
This is a blog post about <strong>Editframe</strong>.
</div>
<br/>
- Add a thumbnail image named blog.jpeg to your public folder; it will be used as the blog post thumbnail.
- Next, add two packages to your project: `next-mdx-remote` to render markdown content, and `gray-matter` to extract info like the title and thumbnail from the markdown files:
yarn add next-mdx-remote gray-matter
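For reference, here is a quick sketch of what gray-matter gives us when it parses functions.mdx; it separates the front matter data from the markdown body:
import fs from 'fs'
import matter from 'gray-matter'

const raw = fs.readFileSync('posts/functions.mdx', 'utf-8')
const { data, content } = matter(raw)
// data.title        -> 'Editframe for Beginners'
// data.thumbnailUrl -> '/blog.jpeg'
// content           -> the MDX body below the front matter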
- Update `pages/index.js` with the code below:
import fs from 'fs'
import path from 'path'
import matter from 'gray-matter'
import Link from 'next/link'
import Image from 'next/image'
const Home = ({ posts }) => {
return (
<div className="mt-5">
<div className="relative px-4 pt-16 pb-20 sm:px-6 lg:px-8 lg:pt-24 lg:pb-28">
<div className="relative mx-auto max-w-7xl">
<div className="text-center">
<h2 className="text-3xl font-extrabold tracking-tight text-gray-900 sm:text-4xl">
From the blog
</h2>
<p className="mx-auto mt-3 max-w-2xl text-xl text-gray-500 sm:mt-4">
Lorem ipsum dolor sit amet consectetur, adipisicing elit. Ipsa
libero labore natus atque, ducimus sed.
</p>
</div>
<div className="mx-auto mt-12 grid max-w-lg gap-5 lg:max-w-none lg:grid-cols-3">
{posts.map((post, index) => (
<div
key={index}
className="flex flex-col overflow-hidden rounded-lg shadow-lg"
>
<div className="flex-shrink-0">
<Image
src={post.frontMatter.thumbnailUrl}
className="h-48 w-full object-cover"
alt="thumbnail"
width={500}
height={400}
objectFit="cover"
/>
</div>
<div className="flex flex-1 flex-col justify-between bg-white p-6">
<div className="flex-1">
<p className="text-sm font-medium text-indigo-600">
{post.frontMatter.date}
</p>
<Link
href={'/blog/' + post.slug}
passHref
className="mt-2 block"
>
<p className="text-xl font-semibold text-gray-900">
{post.frontMatter.title}
</p>
</Link>
<p className="mt-3 text-base text-gray-500">
{post.frontMatter.description}
</p>
</div>
</div>
</div>
))}
</div>
</div>
</div>
</div>
)
}
export const getStaticProps = async () => {
// Read posts folder to get all file names
const files = fs.readdirSync(path.join('posts'))
// Map each file using their name to get file data
const posts = files.map((filename) => {
const markdownWithMeta = fs.readFileSync(
path.join('posts', filename),
'utf-8'
)
// Parse the markdown content to extract the front matter (title, date, etc.)
const { data: frontMatter } = matter(markdownWithMeta)
return {
frontMatter,
slug: filename.split('.')[0],
}
})
return {
props: {
posts,
},
}
}
export default Home
Let's break down the code we just added. In the lines below, we're using the Next.js `getStaticProps` function to read all the markdown files in the posts folder, map over each file by name, and return its front matter (title, date, thumbnail file path, etc.) along with a slug:
export const getStaticProps = async () => {
// Read posts folder to get all file names
const files = fs.readdirSync(path.join('posts'))
// Map each file using their name to get file data
const posts = files.map((filename) => {
const markdownWithMeta = fs.readFileSync(
path.join('posts', filename),
'utf-8'
)
// Parse the markdown content to extract the front matter (title, date, etc.)
const { data: frontMatter } = matter(markdownWithMeta)
return {
frontMatter,
slug: filename.split('.')[0],
}
})
return {
props: {
posts,
},
}
}
Add a blog post page template
- Create a `blog` folder in the pages directory:
mkdir pages/blog
- Create a `[slug].js` file inside the newly created blog folder:
touch "pages/blog/[slug].js"
- Paste the code below inside `[slug].js`:
import { MDXRemote } from 'next-mdx-remote'
import fs from 'fs'
import path from 'path'
import matter from 'gray-matter'
import { serialize } from 'next-mdx-remote/serialize'
const PostPage = ({ frontMatter: { title, date, description }, mdxSource }) => {
return (
<>
<div className="relative overflow-hidden bg-white py-16">
<div className="relative px-4 sm:px-6 lg:px-8">
<div className="mx-auto max-w-prose text-lg">
<h1 className="text-2xl font-bold">{title}</h1>
<p className="mt-8 text-xl leading-8 text-gray-500">
{description}
</p>
<div className="prose prose-lg prose-indigo mx-auto mt-6 text-gray-500">
<MDXRemote {...mdxSource} />
</div>
</div>
</div>
</div>
</>
)
}
const getServerSideProps = async ({ params: { slug } }) => {
const file = path.join(path.resolve('posts'), slug + '.mdx') // resolve the post's .mdx file path
const markdownWithMeta = fs.readFileSync(file, 'utf-8')
const { data: frontMatter, content } = matter(markdownWithMeta)
const mdxSource = await serialize(content)
return {
props: {
frontMatter,
slug,
mdxSource,
},
}
}
export { getServerSideProps }
export default PostPage
Let's examine some of the code we just added to our project. In these lines, we're using `getServerSideProps` to fetch blog post data such as the front matter and post content, using the slug as a parameter.
Note: We used `getServerSideProps` instead of `getStaticProps` because the Redis client doesn't work in the `getStaticProps` function.
const getServerSideProps = async ({ params: { slug } }) => {
const file = path.join(path.resolve('posts'), slug + '.mdx') // resolve the post's .mdx file path
const markdownWithMeta = fs.readFileSync(file, 'utf-8')
const { data: frontMatter, content } = matter(markdownWithMeta)
const mdxSource = await serialize(content)
return {
props: {
frontMatter,
slug,
mdxSource,
},
}
}
In this section, we're rendering a PostPage React component using the data we got from `getServerSideProps`. Then we're rendering the markdown content using the `next-mdx-remote` package:
const PostPage = ({ frontMatter: { title, date, description }, mdxSource }) => {
return (
<>
<div className="relative overflow-hidden bg-white py-16">
<div className="relative px-4 sm:px-6 lg:px-8">
<div className="mx-auto max-w-prose text-lg">
<h1 className="text-2xl font-bold">{title}</h1>
<p className="mt-8 text-xl leading-8 text-gray-500">
{description}
</p>
<div className="prose prose-lg prose-indigo mx-auto mt-6 text-gray-500">
<MDXRemote {...mdxSource} />
</div>
</div>
</div>
</div>
</>
)
}
Add Tailwind CSS to style the blog
- Install Tailwind CSS, PostCSS, and Autoprefixer as dev dependencies:
yarn add -D tailwindcss postcss autoprefixer
- Initialize the `tailwind.config.js` file to configure Tailwind CSS:
npx tailwindcss init -p
- Add a content option so Tailwind only generates the CSS class names you actually use:
module.exports = {
content: ['./pages/**/*.{js,ts,jsx,tsx}', './components/**/*.{js,ts,jsx,tsx}'],
theme: {
extend: {},
},
plugins: [],
}
- Update the `globals.css` file with the Tailwind CSS directives:
@tailwind base;
@tailwind components;
@tailwind utilities;
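At this point you can start the dev server and preview the styled blog at http://localhost:3000:
yarn dev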
Integrate Editframe with the blog file
- Install the Editframe SDK and the Redis Node.js client:
yarn add @editframe/editframe-js ioredis
- Initialize a new Editframe client in the `[slug].js` file (remember to import `Editframe` from `@editframe/editframe-js` at the top, as in the final version at the end of this tutorial):
const getServerSideProps = async ({ params: { slug } }) => {
const file = path.join(path.resolve('posts'), slug + '.mdx') // resolve the post's .mdx file path
const markdownWithMeta = fs.readFileSync(file, 'utf-8')
const { data: frontMatter, content } = matter(markdownWithMeta)
const mdxSource = await serialize(content)
// start of new lines
const editframe = new Editframe({
clientId: process.env.EDITFRAME_CLIENT_ID,
token: process.env.EDITFRAME_TOKEN,
})
// end of new lines
return {
props: {
frontMatter,
slug,
mdxSource,
},
}
}
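The client reads its credentials from environment variables. Next.js loads these automatically from a .env.local file in the project root, so add one with your own values (the values below are placeholders):
EDITFRAME_CLIENT_ID=your-client-id
EDITFRAME_TOKEN=your-api-token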
- Initialize a new Editframe video with a dark background color, and add a blog post title:
const getServerSideProps = async ({ params: { slug } }) => {
const file = path.join(path.resolve('posts'), slug + '.mdx') // resolve the post's .mdx file path
const markdownWithMeta = fs.readFileSync(file, 'utf-8')
const { data: frontMatter, content } = matter(markdownWithMeta)
const mdxSource = await serialize(content)
const editframe = new Editframe({
clientId: process.env.EDITFRAME_CLIENT_ID,
token: process.env.EDITFRAME_TOKEN,
})
// start of new lines
const composition = await editframe.videos.new(
// options
{
// any solid hexadecimal, rgb, or named color
backgroundColor: '#000000',
dimensions: {
// Height in pixels
height: 418,
// Width in pixels
width: 800,
},
duration: 15,
}
)
composition.addText(
{
text: frontMatter.title,
fontSize: 40,
color: '#ffffff',
},
{
position: {
x: 'center',
y: 'center',
},
timeline: {
start: 3,
},
trim: {
end: 15,
},
}
)
// end of new lines
return {
props: {
frontMatter,
slug,
mdxSource,
},
}
}
Let's examine some of the code we just added. In these lines, we're creating a new Editframe video composition with a dark background, 800×418 pixel dimensions (width × height), and a 15-second duration:
const composition = await editframe.videos.new(
// options
{
// any solid hexadecimal, rgb, or named color
backgroundColor: '#000000',
dimensions: {
// Height in pixels
height: 418,
// Width in pixels
width: 800,
},
duration: 15,
}
)
Here, we're adding text to the composition object using the composition.addText method, positioning it in the center, and specifying that the text appears at the 3-second mark and remains until the 15-second trim point:
composition.addText(
{
text: frontMatter.title,
fontSize: 40,
color: '#ffffff',
},
{
position: {
x: 'center',
y: 'center',
},
timeline: {
start: 3,
},
trim: {
end: 15,
},
}
)
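Text isn't the only layer type you can add. The SDK also provides an addImage method, which we could use to place the post thumbnail in the video. The sketch below is illustrative rather than part of this tutorial's code; it assumes addImage accepts a publicly reachable image URL plus options similar to addText, so check the Editframe docs for the exact signature:
// Illustrative sketch: add the blog thumbnail as an image layer.
// 'https://your-site.com' is a placeholder; the image URL must be publicly reachable.
await composition.addImage(`https://your-site.com${frontMatter.thumbnailUrl}`, {
  position: { x: 'center', y: 'top' },
  size: { height: 200, width: 300 },
  timeline: { start: 0 },
})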
- Encode the video synchronously, without using webhooks (an asynchronous alternative is sketched after this code block):
const getServerSideProps = async ({ params: { slug } }) => {
const file = path.join(path.resolve('posts'), slug + '.mdx') // resolve the post's .mdx file path
const markdownWithMeta = fs.readFileSync(file, 'utf-8')
const { data: frontMatter, content } = matter(markdownWithMeta)
const mdxSource = await serialize(content)
const editframe = new Editframe({
clientId: process.env.EDITFRAME_CLIENT_ID,
token: process.env.EDITFRAME_TOKEN,
})
const composition = await editframe.videos.new(
// options
{
// any solid hexadecimal, rgb, or named color
backgroundColor: '#000000',
dimensions: {
// Height in pixels
height: 418,
// Width in pixels
width: 800,
},
duration: 15,
}
)
composition.addText(
{
text: frontMatter.title,
fontSize: 40,
color: '#ffffff',
},
{
position: {
x: 'center',
y: 'center',
},
timeline: {
start: 3,
},
trim: {
end: 15,
},
}
)
// start of new lines
const video = await composition.encodeSync()
console.log(video)
// end of new lines
return {
props: {
frontMatter,
slug,
mdxSource,
},
}
}
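If you'd rather not block the request while the video renders, here is a minimal sketch of the asynchronous path, assuming encode() queues the render and delivers the finished video to the webhook configured in your Editframe account:
// Sketch: queue the encode and return immediately.
// The finished video is delivered later via your configured webhook.
const queued = await composition.encode()
console.log(queued) // metadata for the queued encoding job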
Add video caching using Redis
- Create a `lib` folder in the root directory:
mkdir lib
- Create a `redis.js` file inside the lib folder:
touch lib/redis.js
- Paste the code below into `redis.js` to connect to Redis (make sure you have Redis running):
import Redis from 'ioredis'
const redis = new Redis(process.env.REDIS_URL)
export default redis
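ioredis accepts a standard Redis connection string. For a local instance, add something like this to .env.local:
REDIS_URL=redis://localhost:6379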
- Update the `[slug].js` file with the new cached version:
import Head from 'next/head'
import redis from '../../lib/redis'
const getServerSideProps = async ({ params: { slug } }) => {
const file = path.join(path.resolve('posts'), slug + '.mdx') // resolve the post's .mdx file path
const markdownWithMeta = fs.readFileSync(file, 'utf-8')
const editframe = new Editframe({
clientId: process.env.EDITFRAME_CLIENT_ID,
token: process.env.EDITFRAME_TOKEN,
})
const { data: frontMatter, content } = matter(markdownWithMeta)
const mdxSource = await serialize(content)
const composition = await editframe.videos.new(
// options
{
// any solid hexadecimal, rgb, or named color
backgroundColor: '#000000',
dimensions: {
// Height in pixels
height: 418,
// Width in pixels
width: 800,
},
duration: 15,
}
)
composition.addText(
{
text: frontMatter.title,
fontSize: 40,
color: '#ffffff',
},
{
position: {
x: 'center',
y: 'center',
},
timeline: {
start: 3,
},
trim: {
end: 15,
},
}
)
console.log('Encoding')
// start of new lines
let video
let videoCached = await redis.get(JSON.stringify({ slug }))
if (videoCached == null) {
video = await composition.encodeSync()
if (video && video.streamUrl) {
await redis.set(JSON.stringify({ slug }), JSON.stringify(video))
}
} else {
video = JSON.parse(videoCached)
}
console.log(video)
// end of new lines
return {
props: {
frontMatter,
slug,
mdxSource,
video,
},
}
}
Let's take a look at some of the code we just added. In these lines, we use the Redis client to check for a cached version of the blog post's video. If there isn't one, we encode a new video and store it in Redis so the next render can reuse it:
let video
let videoCached = await redis.get(JSON.stringify({ slug }))
if (videoCached == null) {
video = await composition.encodeSync()
if (video && video.streamUrl) {
await redis.set(JSON.stringify({ slug }), JSON.stringify(video))
}
} else {
video = JSON.parse(videoCached)
}
console.log(video)
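Optionally, you can give cached entries an expiry so that videos are eventually regenerated, for instance after a post changes. ioredis supports Redis's EX argument for this:
// Cache the encoded video for 24 hours instead of indefinitely.
await redis.set(
  JSON.stringify({ slug }),
  JSON.stringify(video),
  'EX',
  60 * 60 * 24 // TTL in seconds
)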
- Render the video's Open Graph tags using the `next/head` component:
const PostPage = ({
frontMatter: { title, description },
mdxSource,
video,
}) => {
return (
<>
<Head>
<meta property="og:title" content={title} />
<meta property="og:type" content="video.episode" />
<meta property="og:description" content={description} />
<meta property="og:url" content={video.streamUrl} />
<meta property="og:type" content="video" />
<meta property="og:image" content={video.thumbnailUrl} />
<meta property="og:video" content={video.streamUrl} />
<meta property="og:video:type" content="video/mp4" />
<meta property="og:video:width" content="398" />
<meta property="og:video:height" content="224" />
<meta name="twitter:card" content="player" />
<meta name="twitter:title" content={title} />
<meta name="twitter:description" content={description} />
<meta name="twitter:player" content={video.streamUrl} />
<meta name="twitter:player:width" content="360" />
<meta name="twitter:player:height" content="200" />
<meta name="twitter:image" content={video.thumbnailUrl} />
</Head>
<div className="relative overflow-hidden bg-white py-16">
<div className="relative px-4 sm:px-6 lg:px-8">
<div className="mx-auto max-w-prose text-lg">
<h1 className="text-2xl font-bold">{title}</h1>
<p className="mt-8 text-xl leading-8 text-gray-500">
{description}
</p>
<div className="prose prose-lg prose-indigo mx-auto mt-6 text-gray-500">
<MDXRemote {...mdxSource} />
</div>
</div>
</div>
</div>
</>
)
}
Let's explore some of the code we added above. Here, we're using the `next/head` component to render the Open Graph meta tags for social networks like Twitter, Facebook, and LinkedIn:
<Head>
<meta property="og:title" content={title} />
<meta property="og:type" content="video.episode" />
<meta property="og:description" content={description} />
<meta property="og:url" content={video.streamUrl} />
<meta property="og:type" content="video" />
<meta property="og:image" content={video.thumbnailUrl} />
<meta property="og:video" content={video.streamUrl} />
<meta property="og:video:type" content="application/x-shockwave-flash" />
<meta property="og:video:width" content="398" />
<meta property="og:video:height" content="224" />
<meta name="twitter:card" content="player" />
<meta name="twitter:title" content={title} />
<meta name="twitter:description" content={description} />
<meta name="twitter:player" content={video.streamUrl} />
<meta name="twitter:player:width" content="360" />
<meta name="twitter:player:height" content="200" />
<meta name="twitter:image" content={video.thumbnailUrl} />
</Head>
Here’s the final version of the [slug].js file:
import { MDXRemote } from 'next-mdx-remote'
import fs from 'fs'
import path from 'path'
import matter from 'gray-matter'
import { serialize } from 'next-mdx-remote/serialize'
import { Editframe } from '@editframe/editframe-js'
import Head from 'next/head'
import redis from '../../lib/redis'
const PostPage = ({
frontMatter: { title, description },
mdxSource,
video,
}) => {
return (
<>
<Head>
<meta property="og:title" content={title} />
<meta property="og:type" content="video.episode" />
<meta property="og:description" content={description} />
<meta property="og:url" content={video.streamUrl} />
<meta property="og:type" content="video" />
<meta property="og:image" content={video.thumbnailUrl} />
<meta property="og:video" content={video.streamUrl} />
<meta property="og:video:type" content="video/mp4" />
<meta property="og:video:width" content="398" />
<meta property="og:video:height" content="224" />
<meta name="twitter:card" content="player" />
<meta name="twitter:title" content={title} />
<meta name="twitter:description" content={description} />
<meta name="twitter:player" content={video.streamUrl} />
<meta name="twitter:player:width" content="360" />
<meta name="twitter:player:height" content="200" />
<meta name="twitter:image" content={video.thumbnailUrl} />
</Head>
<div className="relative overflow-hidden bg-white py-16">
<div className="relative px-4 sm:px-6 lg:px-8">
<div className="mx-auto max-w-prose text-lg">
<h1 className="text-2xl font-bold">{title}</h1>
<p className="mt-8 text-xl leading-8 text-gray-500">
{description}
</p>
<div className="prose prose-lg prose-indigo mx-auto mt-6 text-gray-500">
<MDXRemote {...mdxSource} />
</div>
</div>
</div>
</div>
</>
)
}
const getServerSideProps = async ({ params: { slug } }) => {
const file = path.join(path.resolve('posts'), slug + '.mdx') // resolve the post's .mdx file path
const markdownWithMeta = fs.readFileSync(file, 'utf-8')
const { data: frontMatter, content } = matter(markdownWithMeta)
const mdxSource = await serialize(content)
const editframe = new Editframe({
clientId: process.env.EDITFRAME_CLIENT_ID,
token: process.env.EDITFRAME_TOKEN,
})
const composition = await editframe.videos.new(
// options
{
// any solid hexadecimal, rgb, or named color
backgroundColor: '#000000',
dimensions: {
// Height in pixels
height: 418,
// Width in pixels
width: 800,
},
duration: 15,
}
)
composition.addText(
{
text: frontMatter.title,
fontSize: 40,
color: '#ffffff',
},
{
position: {
x: 'center',
y: 'center',
},
timeline: {
start: 3,
},
trim: {
end: 15,
},
}
)
let video
let videoCached = await redis.get(JSON.stringify({ slug }))
if (videoCached == null) {
video = await composition.encodeSync()
if (video && video.streamUrl) {
await redis.set(JSON.stringify({ slug }), JSON.stringify(video))
}
} else {
video = JSON.parse(videoCached)
}
console.log(video)
return {
props: {
frontMatter,
slug,
mdxSource,
video,
},
}
}
export { getServerSideProps }
export default PostPage
Conclusion and next steps
We did it! Now you can take what we built above and add other functionality to your videos, or integrate the code into your own existing projects.
Here's the final GitHub repo. Feel free to clone, fork, and have fun building your own extensions!