How to use FFmpeg (with examples)

Have you ever had a video file that you needed to modify or optimise? You might have a video that is taking up too much space on your hard drive, or you might need to trim a small section from a long video or reduce its resolution. The go-to tool in these situations is FFmpeg, a software utility used by professionals and home users alike. In this article, we'll explain what FFmpeg is, how to install it, and look at some of the most common and useful FFmpeg commands.

What is FFmpeg?

FFmpeg is a free and open-source video and audio processing tool that you run from the command-line.

FFmpeg is the tool of choice for multiple reasons:

  • Free: It's a completely free option.
  • Open-source: It has an active and dedicated open-source community continually deploying fixes, improvements, and new features.
  • Platform compatibility: FFmpeg is available for Windows, Mac, and Linux.
  • Command-line interface: It is a lightweight solution offering a vast array of options through a command-line interface.

How to install FFmpeg

Some operating systems, such as Ubuntu, install FFmpeg by default, so you might already have it on your computer.

Check if it's installed with the following command:
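
ffmpeg -version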

If it gives you a version number and build information, you already have it installed.

If not, or if you are using Windows or a Mac, you will need to download a static or compiled binary executable from a third-party vendor. Unfortunately, FFmpeg only provides the source code, not ready-to-run software.

Here are the key steps you'll need to follow:

  • Navigate to the FFmpeg download page.
  • Under Get packages & executable files, select your operating system to display a list of vendors.
  • Visit the most suitable vendor and follow the instructions on their website. Typically you will either need to run a set of commands, or you will need to download a zipped file (.zip, .7z, .tar.gz, etc.) containing the FFmpeg executable.
  • If downloading, extract the contents of the zipped file to your chosen location. If you browse the extracted files you should find a file called ffmpeg or ffmpeg.exe in a bin folder.

To run FFmpeg you will need to use the command-line; open a new terminal and navigate to the directory where you extracted the ffmpeg file, then type and run the following command again:
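
ffmpeg -version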

If installed correctly, you should see the FFmpeg version number and build configuration printed to your terminal.

One last step to make FFmpeg more useful and available from any folder is to add it to your system PATH. This is different for each operating system, but typically involves adding the directory where the ffmpeg executable is located to the PATH environment variable.
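
For example, on Linux or macOS you might add a line like the following to your shell profile, adjusting the path to wherever you extracted FFmpeg:

export PATH="$PATH:/path/to/ffmpeg/bin"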

Now that FFmpeg is successfully installed, let's look at how to use FFmpeg, with examples!

FFmpeg examples and common uses

Let's look at some of the most common and useful commands in FFmpeg.

You will need a sample video file to test the commands with. You can use any video file you have on your computer, or you can download this test file, which is named scott-ko.mp4.

Convert video formats using FFmpeg

One of the simplest and easiest commands to get started with is converting one video format to another. This is a common task when you need to make a video compatible with a specific device, player or platform.

A basic command to convert a video from one format to another, using our scott-ko.mp4 sample file is:
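
ffmpeg -i scott-ko.mp4 scott-ko.webm

(The output filename is up to you; the .webm extension tells FFmpeg which format to convert to.)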

This simple command will convert the video from the MP4 format to WEBM. FFmpeg is smart enough to know that the video and audio codecs should be converted to ones compatible with the new file type: for example, h264 (MP4) to vp9 (WEBM) for video, and aac (MP4) to opus (WEBM) for audio.

It is also possible to convert from one video format to another and have full control over the encoding options. The command to do that uses the following template:
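
ffmpeg -i input.mp4 -c:v video_codec -c:a audio_codec output.ext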

Here are the options and the placeholders you can replace with your own values:

  • -i input.mp4 : Replace input.mp4 with the path to your input video file.
  • -c:v video_codec : Specify the video codec for the output. Replace video_codec with the desired video codec (e.g., libx265 for H.265).
  • -c:a audio_codec : Specify the audio codec for the output. Replace audio_codec with the desired audio codec (e.g., aac for AAC audio).
  • output.ext : Replace this with the desired output file name and extension (e.g., output.mp4).

Here's an example of converting an MP4 video to an MKV video, using the H.264 codec for video and the AAC codec for audio:
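
ffmpeg -i scott-ko.mp4 -c:v libx264 -c:a aac scott-ko.mkv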

Trim a video using FFmpeg

If you have a long video and want to extract a small portion, you can trim the video using FFmpeg. You use the -ss (start time) and -t (duration) options.

Here's an example template command:
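
ffmpeg -i input.mp4 -ss start_time -t duration -c copy output.mp4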

Use the following options and replace the placeholders with your specific values:

  • -ss start_time : Replace start_time with the start time to trim from. You can use various time formats like HH:MM:SS or seconds. For example, if you want to start trimming from 1 minute and 30 seconds, you can use -ss 00:01:30 or drop the hour and use -ss 01:30 .
  • -t duration : Specify the duration of the trim. Again, you can use various time formats. For example, if you want to trim 20 seconds, you can use -t 20 .
  • -c copy : This option copies the video and audio codecs without re-encoding, which is faster and preserves the original quality. If you need to re-encode, you can specify different codecs or omit this option.

Here's an example command that trims 20 seconds from a video, starting at 1 minute and 30 seconds:
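
ffmpeg -i scott-ko.mp4 -ss 00:01:30 -t 20 -c copy trimmed.mp4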

For more information and examples, see how to trim a video using FFmpeg.

Crop a video using FFmpeg

In the age of smartphones and social networks, cropping videos to different sizes and aspect ratios has become an essential requirement when working with video. To crop a video using FFmpeg, use the crop filter.

Here's an example template:
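
ffmpeg -i input.mp4 -filter:v "crop=w:h:x:y" output.mp4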

The options and placeholders are described below:

  • input.mp4 : Replace this with the filename or path of your input video.
  • -filter:v "crop=w:h:x:y" : Use the crop video filter and specify the cropping parameters w (width), h (height), x (cropping x coordinate), and y (cropping y coordinate) according to your requirements.
  • output.mp4 : Replace this with the desired filename or path for the output video.

Here's an example command cropping a video to a width of 640 pixels, a height of 640 pixels, and starting the crop from coordinates 900 pixels across and 50 pixels down:
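
ffmpeg -i scott-ko.mp4 -filter:v "crop=640:640:900:50" cropped.mp4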

If you run this command using the provided test file, you'll see it creates a square video cropped to the speaker's face.

For more information and examples, see how to crop and resize videos using FFmpeg.

Extract or remove the audio from a video using FFmpeg

There are two common scenarios where you might want to work with a video's audio: extracting the audio so there is no video, or removing the audio from a video so it is silent, or muted.

To extract and save the audio from a video file using FFmpeg, use this command template:
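
ffmpeg -i input.mp4 -vn output.mp3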

The following options are used; replace the placeholders with your own preferences:

  • input.mp4 : Replace this with the path to your input video file.
  • -vn : This option disables video processing.
  • output.mp3 : Replace this with the desired output audio file name and extension. In this example, the output is saved as an MP3 file.

Here is an example command using our test file:
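
ffmpeg -i scott-ko.mp4 -vn scott-ko.mp3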

To remove the audio from (or mute) a video file using FFmpeg, you can use the -an option, which disables audio processing. Here's an example command template:
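
ffmpeg -i input.mp4 -an -c:v copy output.mp4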

Here is an explanation of the options used:

  • -an : This option disables audio processing.
  • -c:v copy : This option copies the video stream without re-encoding, which is faster and preserves the original video quality. If you want to re-encode the video, you can specify a different video codec.

Here is an example using the test file:
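
ffmpeg -i scott-ko.mp4 -an -c:v copy scott-ko-silent.mp4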

Concatenate videos using FFmpeg

Concatenating videos is the technical term FFmpeg uses to describe joining, merging or stitching multiple video clips together. To concatenate (or join) multiple video files together in FFmpeg, you can use the concat demuxer.

First, create a text file containing the list of video files you want to concatenate. Each line should use the file directive followed by the path to a video file.

For example, create a file named filelist.txt and include a list of video files on your hard drive:
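
file '/path/to/video1.mp4'
file '/path/to/video2.mp4'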

Then, use the following FFmpeg command to concatenate the videos:
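
ffmpeg -f concat -safe 0 -i filelist.txt -c copy merged.mp4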

Here is a summary of the options used:

  • -f concat : This specifies the format (concat) to be used.
  • -safe 0 : This allows using absolute paths in the file list.
  • -i filelist.txt : This specifies the input file list.
  • -c copy : This copies the streams (video, audio) without re-encoding, preserving the original quality. If you need to re-encode, you can specify different codecs or omit this option.
  • merged.mp4 : Replace this with the desired output file name and extension.

Adjust the file paths in filelist.txt according to your specific file names and paths. The order in which you list the files in the text file determines the order of concatenation.

For more information and examples, see merge videos using FFmpeg concat.

Resize a video using FFmpeg

You might need to resize a video if the resolution is very high; for example, you have a 4K video but your player only supports 1080p. To resize a video using FFmpeg, you can use the scale filter, set using the -vf (video filter) option.
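
Here's an example template:

ffmpeg -i input.mp4 -vf "scale=w:h" resized.mp4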

Replace the placeholders with your specific values:

  • -vf "scale=w:h" : Replace w and h with the desired width and height of the output video. You can also set a single dimension, such as -vf "scale=-1:720" to maintain the original aspect ratio.
  • resized.mp4 : Replace this with the desired output video file name and extension.

Here's an example command resizing our test video to 720p resolution and maintaining the aspect ratio:
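
ffmpeg -i scott-ko.mp4 -vf "scale=-1:720" resized.mp4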

Compress a video using FFmpeg

Video files are typically large and can take up a lot of space on your hard drive or cloud storage, and can take a long time to download. To compress a video using FFmpeg, you typically need to re-encode it using a more efficient video codec or adjusting other encoding parameters.

There are many different ways to do this but here's an example template to get you started:
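
ffmpeg -i input.mp4 -c:v libx264 -crf 25 -c:a aac -b:a 128k compressed.mp4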

Here are the options and placeholders you can replace:

  • -c:v libx264 : This option sets the video codec to H.264 (libx264). H.264 is a widely used and efficient video codec.
  • -crf 25 : This controls the video quality. A lower CRF (constant rate factor) value results in higher quality but larger file size. Typical values range from 18 to 28, with 23 being a reasonable default.
  • -c:a aac -b:a 128k : These options set the audio codec to AAC with a bitrate of 128 kbps. Adjust the bitrate according to your preferences.
  • compressed.mp4 : Replace this with the desired output file name and extension.

For more information and examples, see how to compress video using FFmpeg.

Using our test file, we can compress the video from 31.9MB to 6.99MB using this command:
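
ffmpeg -i scott-ko.mp4 -c:v libx264 -crf 25 -c:a aac -b:a 128k compressed.mp4

(This assumes the same settings as the template above; exact file sizes will vary with the settings you choose.)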

Convert a series of images to a video using FFmpeg

Who doesn't love a video montage? With FFmpeg it's easy to create a video from a series of images: simply use a wildcard (glob) input pattern along with the -framerate option.

Here's an example command:
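
ffmpeg -framerate 1 -pattern_type glob -i 'path/to/images/*.jpg' -c:v libx264 -pix_fmt yuv420p montage.mp4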

  • -framerate 1 : This sets the frame rate of the output video. Adjust the value according to your preference (e.g., 1 picture per second). Omitting the framerate will default to a framerate of 25.
  • -pattern_type glob -i 'path/to/images/*.jpg' : This specifies the input images using a glob pattern. Adjust the pattern and path to the location of your image files.
  • -c:v libx264 -pix_fmt yuv420p : These options specify the video codec (libx264) and pixel format. Adjust these options based on your preferences and compatibility requirements.
  • montage.mp4 : Replace this with the desired output file name and extension.

For more information and examples, see How to use FFmpeg to convert images to video.

Convert video to GIF using FFmpeg

GIFs are a popular animation format, used for memes in messaging applications like WhatsApp or Facebook Messenger, and a great way to send animations in emails, among other use cases. There are a number of ways to convert and optimise a video to a GIF using FFmpeg, but here is a simple command template to get started with:
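
ffmpeg -i input.mp4 -vf "fps=10,scale=320:-1:flags=lanczos" -c:v gif animation.gif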

Here's a breakdown of the options used and what to replace:

  • -vf "fps=10,scale=320:-1:flags=lanczos" : This sets the video filters for the GIF conversion. The fps option sets the frames per second (adjust the value as needed), and scale specifies the output dimensions. The flags=lanczos part is for quality optimization.
  • -c:v gif : This specifies the video codec for the output, in this case, GIF.
  • animation.gif : Replace this with the desired output file name and extension.

For more information and examples, see how to convert video to animated GIF using FFmpeg.

Speed up and slow down videos using FFmpeg

To speed up or slow down a video in FFmpeg, you can use the setpts filter. The setpts filter adjusts the presentation timestamp of video frames, effectively changing the speed of the video. Here are examples of both speeding up and slowing down a video.

Speed up a video

To double the speed of a video, use a setpts value of 0.5:
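
ffmpeg -i input.mp4 -filter:v "setpts=0.5*PTS" fast.mp4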

Slow down a video

To slow down a video by a factor (e.g., 2x slower), you can use a setpts value greater than 1:
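
ffmpeg -i input.mp4 -filter:v "setpts=2.0*PTS" slow.mp4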

These commands adjust the video speed by manipulating the presentation timestamps (PTS). The values 0.5 and 2.0 in the examples represent the speed factor. You can experiment with different speed factors to achieve the desired result.

Here is an example command that doubles the speed of our test file:
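
ffmpeg -i scott-ko.mp4 -filter:v "setpts=0.5*PTS" scott-ko-fast.mp4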

Note that only the video is sped up; the audio is not adjusted.

Go forth and explore

This guide provides a quick primer on how to get started and use FFmpeg for various video processing tasks, along with some simple examples. The number of options and possibilities with FFmpeg is vast, and it's worth exploring the FFmpeg documentation and FFmpeg wiki to learn more about the tool and its capabilities.

FFmpeg's major strength is its versatility. However, it has a steep learning curve, with cryptic commands and an intimidating array of options. If you want to run FFmpeg commercially as part of a workflow, pipeline or application you'll also need to consider hosting the software, managing updates and security, and scaling the infrastructure to meet demand.

Shotstack was created to streamline automated video editing and video processing without having to learn complicated commands or worry about scaling infrastructure. Shotstack is an FFmpeg alternative offered as a collection of APIs and SDKs that allow you to programmatically create, edit and render videos in the cloud. It's a great way to get started with video processing without having to worry about the complexities of FFmpeg.

BY ANDREW BONE, 5th February 2024


How to Speed Up or Slow Down a Video using FFmpeg

In this article, we explain how to speed up or slow down a video using FFmpeg. Whether you’re a video editor, a developer dealing with media files, or an enthusiast curious about video manipulation, you’ll find value in this guide 🙂

We’ll begin with setting up FFmpeg on your system, delve into understanding some essential commands, and then move towards our main focus – slowing down and speeding up videos using FFmpeg.

Let’s dive right in.


Setting up FFmpeg

Before we learn to speed up or slow down videos using FFmpeg, we first have to install it on our computers. Setting up FFmpeg on your computer, whether it’s Windows, macOS, or Linux-based, is a straightforward process. However, the steps vary slightly depending on the platform. Here are the steps to install it –

For Windows users:

  • Download the latest FFmpeg static build from the official website. Ensure you select the correct architecture (32-bit or 64-bit) for your system.
  • Extract the downloaded ZIP file. You’ll find a folder named ‘ffmpeg-<version>-essentials_build’.
  • Add the ‘bin’ directory within this folder to your system’s PATH. This step allows you to run FFmpeg from the command line, irrespective of the directory you’re in.

For macOS users:

  • Open Terminal.
  • If you have Homebrew installed, simply type in brew install ffmpeg. If you don't, consider installing Homebrew first. It simplifies the software installation process on macOS.

For Linux users:

The FFmpeg package is generally included in the standard repository of most Linux distributions. Use your distribution's package manager to install FFmpeg. For example, on Ubuntu, use sudo apt-get install ffmpeg.

After installation, verify it by typing ffmpeg -version in the command line. If you see details of the installed FFmpeg version, you've successfully set it up. If you are interested, see our guide for more options on installing FFmpeg.

Now, let’s learn to slow down a video using FFmpeg. After that, we’ll learn how to speed up a video.

How to Slow Down Video with FFmpeg

We’ll now explore the process of slowing down a video using the setpts video filter in FFmpeg. ‘setpts’ stands for “set presentation timestamp” and it adjusts the frame timestamps, which can effectively slow down or speed up your video playback.

Here’s the basic command to slow down a video using FFmpeg and the setpts parameter:

ffmpeg -i input.mp4 -vf "setpts=2.0*PTS" output.mp4

In this command,

  • The -vf option tells FFmpeg that we are going to apply a video filter.
  • The "setpts=2.0*PTS" portion is our filter of choice. PTS stands for Presentation TimeStamp in the video stream; by multiplying it by 2.0, we are effectively doubling the length of the video, thus halving its playback speed.

The setpts filter can take any positive floating-point number as an argument. If you want to slow the video down even more, simply increase the value. For instance, using setpts=4.0*PTS would make the video play at quarter speed.

How to Speed Up Video with FFmpeg

Speeding up a video involves reducing its overall playback duration. So, if you have a 10 min video (600 seconds) and you speed it up by a factor of 10, then the output is going to be 1 min long (60 seconds). We can easily speed up a video using the setpts filter in FFmpeg as follows.

As you might recall, the ‘setpts’ filter adjusts the frame timestamps, which affects the playback speed. To speed up the video, we use a value less than 1.0. Here’s how to do it:

ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4

This command reduces the timestamps by half, effectively doubling the video speed. You can adjust the multiplier according to your needs. A smaller value will speed up the video more.

In the example below, I speed up the original video by a factor of 4 using setpts=0.25*PTS:
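
ffmpeg -i input.mp4 -vf "setpts=0.25*PTS" output.mp4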

Before we end this article, I’d like to briefly touch upon presentation timestamps so that you know what they are when you are changing their values.

setpts Will Re-encode Your Videos

Just as a cautionary note: when you use setpts in FFmpeg to speed up a video, it will drop frames to achieve the requested speed-up, and this forces FFmpeg to re-encode your content. Always remember that you can set the video quality you need using CBR, capped VBR, or CRF techniques while speeding up or slowing down your videos.

Appendix: What is PTS or Presentation Time Stamp?

The Presentation Time Stamp, or PTS, is part of the data in video and audio streams that indicates when a frame (video) or packet (audio) should be presented to the viewer or listener.

Essentially, it’s a timestamp that tells the media player, “This is the exact moment when this particular frame or packet should be displayed or played.”

To better understand, consider watching a movie. Each frame of the movie has a specific time when it should appear on your screen. This timing ensures that all the frames are shown in the correct sequence and at the right speed, giving you a smooth viewing experience. The timing of these frames is dictated by their PTS values.

Now, the PTS values are expressed in “timebase units,” which are fractions of a second. The timebase is determined by the video or audio stream’s frame or sample rate. For instance, if a video has a frame rate of 30 frames per second (fps), each frame will have a duration of 1/30 of a second, and the PTS will increment by this amount for each subsequent frame.

So, if we have a sequence of frames with PTS values like 0, 1/30, 2/30, 3/30, and so forth, the video player will present each frame precisely 1/30 of a second after the previous one, resulting in a smoothly playing video at the correct speed.

When we manipulate the PTS values, as we do when slowing down or speeding up a video using FFmpeg, we’re altering these timestamps. For instance, if we slow down a video by a factor of two (using setpts=2.0*PTS ), we’re effectively doubling the PTS values for each frame. This makes the video player display each frame for twice as long, halving the video’s playback speed. Conversely, if we speed up a video by a factor of two (using setpts=0.5*PTS ), we’re halving the PTS values, making the frames display twice as quickly and doubling the playback speed.

It’s important to note that manipulating PTS values doesn’t alter the actual media content (i.e., the images in the video frames or the audio samples), but rather the timing of when they are presented, which is how the speed changes are achieved.

It’s also worth noting that PTS values can be presented in different ways. They are generally represented in timebase units, as previously explained, but they can also be represented as real time, depending on the context. The pkt_pts_time field shows the PTS in seconds, which is the real time representation.

By now, you’ve gained a solid understanding of how to speed up or slow down a video using FFmpeg using the setpts filter. Remember, that it involves re-encoding and you can always adjust the encoding parameters to achieve your desired output video quality.

To learn more about FFmpeg, head over to our Recipes in FFmpeg section.

Until next time, happy streaming!

Krishna Rao Vijayanagar, Ph.D., is the Editor-in-Chief of OTTVerse, a news portal covering tech and business news in the OTT industry.

An ffmpeg and SDL Tutorial

Tutorial 05: Synching Video

How Video Syncs

So this whole time, we've had an essentially useless movie player. It plays the video, yeah, and it plays the audio, yeah, but it's not quite yet what we would call a movie. So what do we do?

PTS and DTS

Fortunately, both the audio and video streams have the information about how fast and when you are supposed to play them inside of them. Audio streams have a sample rate, and the video streams have a frames per second value. However, if we simply synced the video by just counting frames and multiplying by frame rate, there is a chance that it will go out of sync with the audio. Instead, packets from the stream might have what is called a decoding time stamp (DTS) and a presentation time stamp (PTS). To understand these two values, you need to know about the way movies are stored. Some formats, like MPEG, use what they call "B" frames (B stands for "bidirectional"). The two other kinds of frames are called "I" frames and "P" frames ("I" for "intra" and "P" for "predicted"). I frames contain a full image. P frames depend upon previous I and P frames and are like diffs or deltas. B frames are the same as P frames, but depend upon information found in frames that are displayed both before and after them! This explains why we might not have a finished frame after we call avcodec_decode_video2.

So let's say we had a movie, and the frames were displayed like: I B B P. Now, we need to know the information in P before we can display either B frame. Because of this, the frames might be stored like this: I P B B. This is why we have a decoding timestamp and a presentation timestamp on each frame. The decoding timestamp tells us when we need to decode something, and the presentation time stamp tells us when we need to display something. So, in this case, our stream might look like this:

PTS:    1 4 2 3
DTS:    1 2 3 4
Stream: I P B B

Generally the PTS and DTS will only differ when the stream we are playing has B frames in it.

When we get a packet from av_read_frame(), it will contain the PTS and DTS values for the information inside that packet. But what we really want is the PTS of our newly decoded raw frame, so we know when to display it.

Fortunately, FFmpeg supplies us with a "best effort" timestamp, which you can get via av_frame_get_best_effort_timestamp().

Now, while it's all well and good to know when we're supposed to show a particular video frame, how do we actually do so? Here's the idea: after we show a frame, we figure out when the next frame should be shown. Then we simply set a new timeout to refresh the video again after that amount of time. As you might expect, we check the value of the PTS of the next frame against the system clock to see how long our timeout should be. This approach works, but there are two issues that need to be dealt with.

First is the issue of knowing when the next PTS will be. Now, you might think that we can just add the video rate to the current PTS — and you'd be mostly right. However, some kinds of video call for frames to be repeated. This means that we're supposed to repeat the current frame a certain number of times. This could cause the program to display the next frame too soon. So we need to account for that.

The second issue is that as the program stands now, the video and the audio are chugging away happily, not bothering to sync at all. We wouldn't have to worry about that if everything worked perfectly. But your computer isn't perfect, and a lot of video files aren't, either. So we have three choices: sync the audio to the video, sync the video to the audio, or sync both to an external clock (like your computer). For now, we're going to sync the video to the audio.

Coding it: getting the frame PTS

Now let's get into the code to do all this. We're going to need to add some more members to our big struct, but we'll do this as we need to. First let's look at our video thread. Remember, this is where we pick up the packets that were put on the queue by our decode thread. What we need to do in this part of the code is get the PTS of the frame given to us by avcodec_decode_video2. The first way we talked about was getting the DTS of the last packet processed, which is pretty easy:

double pts;

for(;;) {
  if(packet_queue_get(&is->videoq, packet, 1) < 0) {
    // means we quit getting packets
    break;
  }
  // Decode video frame
  avcodec_decode_video2(is->video_st->codec, pFrame, &frameFinished, packet);
  if(packet->dts != AV_NOPTS_VALUE) {
    pts = av_frame_get_best_effort_timestamp(pFrame);
  } else {
    pts = 0;
  }
  pts *= av_q2d(is->video_st->time_base);

We set the PTS to 0 if we can't figure out what it is.

Well, that was easy. A technical note: you may have noticed we're using an int64 for the PTS. This is because the PTS is stored as an integer. This value is a timestamp that corresponds to a measurement of time in that stream's time_base unit. For example, if a stream has 24 frames per second, a PTS of 42 is going to indicate that the frame should go where the 42nd frame would be if we had a frame every 1/24 of a second (certainly not necessarily true).

We can convert this value to seconds by dividing by the framerate. The time_base value of the stream is going to be 1/framerate (for fixed-fps content), so to get the PTS in seconds, we multiply by the time_base .

Coding: Synching and using the PTS

So now we've got our PTS all set. Now we've got to take care of the two synchronization problems we talked about above. We're going to define a function called synchronize_video that will update the PTS to be in sync with everything. This function will also finally deal with cases where we don't get a PTS value for our frame. At the same time we need to keep track of when the next frame is expected so we can set our refresh rate properly. We can accomplish this by using an internal video_clock value which keeps track of how much time has passed according to the video. We add this value to our big struct:

typedef struct VideoState {
  ...
  double video_clock; // pts of last decoded frame / predicted pts of next decoded frame
  ...
} VideoState;

Here's the synchronize_video function, which is pretty self-explanatory:

double synchronize_video(VideoState *is, AVFrame *src_frame, double pts) {
  double frame_delay;

  if(pts != 0) {
    /* if we have pts, set video clock to it */
    is->video_clock = pts;
  } else {
    /* if we aren't given a pts, set it to the clock */
    pts = is->video_clock;
  }
  /* update the video clock */
  frame_delay = av_q2d(is->video_st->codec->time_base);
  /* if we are repeating a frame, adjust clock accordingly */
  frame_delay += src_frame->repeat_pict * (frame_delay * 0.5);
  is->video_clock += frame_delay;
  return pts;
}

You'll notice we account for repeated frames in this function, too.

Now let's get our proper PTS and queue up the frame using queue_picture, adding a new pts argument:

// Did we get a video frame?
if(frameFinished) {
  pts = synchronize_video(is, pFrame, pts);
  if(queue_picture(is, pFrame, pts) < 0) {
    break;
  }
}

The only thing that changes about queue_picture is that we save that pts value to the VideoPicture structure that we queue up. So we have to add a pts variable to the struct and add a line of code:

typedef struct VideoPicture {
  ...
  double pts;
}

int queue_picture(VideoState *is, AVFrame *pFrame, double pts) {
  ... stuff ...
  if(vp->bmp) {
    ... convert picture ...
    vp->pts = pts;
    ... alert queue ...
  }
}

So now we've got pictures lining up onto our picture queue with proper PTS values, so let's take a look at our video refreshing function. You may recall from last time that we just faked it and put a refresh of 80ms. Well, now we're going to find out how to actually figure it out.

Our strategy is going to be to predict the time of the next PTS by simply measuring the time between the previous pts and this one. At the same time, we need to sync the video to the audio. We're going to make an audio clock: an internal value that keeps track of what position the audio we're playing is at. It's like the digital readout on any mp3 player. Since we're synching the video to the audio, the video thread uses this value to figure out if it's too far ahead or too far behind.

We'll get to the implementation later; for now let's assume we have a get_audio_clock function that will give us the time on the audio clock. Once we have that value, though, what do we do if the video and audio are out of sync? It would be silly to simply try to leap to the correct packet through seeking or something. Instead, we're just going to adjust the value we've calculated for the next refresh: if the PTS is too far behind the audio time, we double our calculated delay, and if the PTS is too far ahead of the audio time, we simply refresh as quickly as possible. Now that we have our adjusted refresh time, or delay, we're going to compare that with our computer's clock by keeping a running frame_timer. This frame timer will sum up all of our calculated delays while playing the movie. In other words, this frame_timer is what time it should be when we display the next frame. We simply add the new delay to the frame timer, compare it to the time on our computer's clock, and use that value to schedule the next refresh. This might be a bit confusing, so study the code carefully:

void video_refresh_timer(void *userdata) {
  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;
  double actual_delay, delay, sync_threshold, ref_clock, diff;

  if(is->video_st) {
    if(is->pictq_size == 0) {
      schedule_refresh(is, 1);
    } else {
      vp = &is->pictq[is->pictq_rindex];

      delay = vp->pts - is->frame_last_pts; /* the pts from last time */
      if(delay <= 0 || delay >= 1.0) {
        /* if incorrect delay, use previous one */
        delay = is->frame_last_delay;
      }
      /* save for next time */
      is->frame_last_delay = delay;
      is->frame_last_pts = vp->pts;

      /* update delay to sync to audio */
      ref_clock = get_audio_clock(is);
      diff = vp->pts - ref_clock;

      /* Skip or repeat the frame. Take delay into account.
         FFPlay still doesn't "know if this is the best guess." */
      sync_threshold = (delay > AV_SYNC_THRESHOLD) ? delay : AV_SYNC_THRESHOLD;
      if(fabs(diff) < AV_NOSYNC_THRESHOLD) {
        if(diff <= -sync_threshold) {
          delay = 0;
        } else if(diff >= sync_threshold) {
          delay = 2 * delay;
        }
      }
      is->frame_timer += delay;

      /* compute the REAL delay */
      actual_delay = is->frame_timer - (av_gettime() / 1000000.0);
      if(actual_delay < 0.010) {
        /* Really it should skip the picture instead */
        actual_delay = 0.010;
      }
      schedule_refresh(is, (int)(actual_delay * 1000 + 0.5));

      /* show the picture! */
      video_display(is);

      /* update queue for next picture! */
      if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
        is->pictq_rindex = 0;
      }
      SDL_LockMutex(is->pictq_mutex);
      is->pictq_size--;
      SDL_CondSignal(is->pictq_cond);
      SDL_UnlockMutex(is->pictq_mutex);
    }
  } else {
    schedule_refresh(is, 100);
  }
}

There are a few checks we make: first, we make sure that the delay between the PTS and the previous PTS makes sense. If it doesn't, we just guess and use the last delay. Next, we make sure we have a synch threshold because things are never going to be perfectly in synch. ffplay uses 0.01 for its value. We also make sure that the synch threshold is never smaller than the gaps in between PTS values. Finally, we make the minimum refresh value 10 milliseconds*.

* Really here we should skip the frame, but we're not going to bother.

We added a bunch of variables to the big struct so don't forget to check the code. Also, don't forget to initialize the frame timer and the initial previous frame delay in stream_component_open:

is->frame_timer = (double)av_gettime() / 1000000.0;
is->frame_last_delay = 40e-3;

Synching: The Audio Clock

Now it's time for us to implement the audio clock. We can update the clock time in our audio_decode_frame function, which is where we decode the audio. Now, remember that we don't always process a new packet every time we call this function, so there are two places we have to update the clock at. The first place is where we get the new packet: we simply set the audio clock to the packet's PTS. Then, if a packet has multiple frames, we keep the audio clock up to date by counting the number of samples and multiplying by the given samples-per-second rate. So once we have the packet:

/* if update, update the audio clock w/pts */
if(pkt->pts != AV_NOPTS_VALUE) {
  is->audio_clock = av_q2d(is->audio_st->time_base) * pkt->pts;
}

And once we are processing the packet:

/* Keep audio_clock up-to-date */
pts = is->audio_clock;
*pts_ptr = pts;
n = 2 * is->audio_st->codec->channels;
is->audio_clock += (double)data_size /
                   (double)(n * is->audio_st->codec->sample_rate);

A few fine details: the template of the function has changed to include pts_ptr, so make sure you change that. pts_ptr is a pointer we use to inform audio_callback of the pts of the audio packet. This will be used next time for synchronizing the audio with the video.

Now we can finally implement our get_audio_clock function. It's not as simple as getting the is->audio_clock value, though. Notice that we set the audio PTS every time we process it, but if you look at the audio_callback function, it takes time to move all the data from our audio packet into our output buffer. That means that the value in our audio clock could be too far ahead. So we have to check how much we have left to write. Here's the complete code:

double get_audio_clock(VideoState *is) {
  double pts;
  int hw_buf_size, bytes_per_sec, n;

  pts = is->audio_clock; /* maintained in the audio thread */
  hw_buf_size = is->audio_buf_size - is->audio_buf_index;
  bytes_per_sec = 0;
  n = is->audio_st->codec->channels * 2;
  if(is->audio_st) {
    bytes_per_sec = is->audio_st->codec->sample_rate * n;
  }
  if(bytes_per_sec) {
    pts -= (double)hw_buf_size / bytes_per_sec;
  }
  return pts;
}

You should be able to tell why this function works by now ;)

So that's it! Go ahead and compile it:

gcc -o tutorial05 tutorial05.c -lavutil -lavformat -lavcodec -lswscale -lz -lm \
`sdl-config --cflags --libs`

And finally! You can watch a movie on your own movie player. Next time we'll look at audio synching, and then the tutorial after that we'll talk about seeking.
