FFmpeg – Extract Blu-Ray Audio

Install Required Packages

Although MPlayer can also be used, FFmpeg seems more refined when dumping or clipping specific audio chapters from DVD or Blu-Ray media.

root #emerge --ask media-video/ffmpeg

(If somebody successfully uses MPlayer/MPlayer2 to dump PCM from specified chapters, feel free to add it to this Wiki page and retitle appropriately. I've only experienced MPlayer seeking to the beginning chapter and then not recognizing or stopping at the specified end chapter, i.e. "mplayer -ao pcm:fast:file=audio.wav -chapter 2-2 -vo null -vc null input_file".)

Mount Blu-Ray Disc

Blu-Rays use UDF and need to be mounted as such. It's probably best to edit the following file to provide the mount points. (I use AutoFS, so incorporate as needed.)

FILE /etc/fstab
/dev/sr0       /mnt/dvd        iso9660         noauto,user,ro  0 0
/dev/sr0       /mnt/dvd-udf    udf             noauto,user,rw  0 0

Or the following entry will let mount detect the filesystem automatically, with little to no difference in access time,

FILE /etc/fstab
/dev/sr0       /mnt/dvd        auto            noauto,user,ro  0 0

Create the mount folders if you don’t have them already,

root #mkdir /mnt/dvd /mnt/dvd-udf

Mount the disc,

user $sudo mount /mnt/dvd-udf

Find Available Stream Types

You've likely found your main large media stream file on your Blu-Ray, something similar to ./BDMV/STREAM/00000.m2ts.

Using ffplay, you’ll likely see something like this within stdout,

user $ffplay ...
Stream #0:0[0x1011]: Video: h264 (High) (HDMV / 0x564D4448), yuv420p, 1920x1080 [SAR 1:1 DAR 16:9], 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
Stream #0:1[0x1100]: Audio: pcm_bluray (HDMV / 0x564D4448), 48000 Hz, stereo, s32, 2304 kb/s
Stream #0:2[0x1101]: Audio: pcm_bluray (HDMV / 0x564D4448), 48000 Hz, 5.1(side), s32, 6912 kb/s
Stream #0:3[0x1102]: Audio: dts (DTS-HD MA) ([134][0][0][0] / 0x0086), 48000 Hz, 5.1(side), s16, 1536 kb/s
...
Stream #0 on this audio-only Blu-Ray is just a black screen with song titles. We'll skip this stream since we only want PCM WAV audio.
Stream #1 is the PCM two channel stereo mix.
Stream #2 is the PCM 5.1 high resolution mix.
Stream #3 is the DTS mix.

Keep an eye on the Hz, s16/s24/s32 and kb/s, as they’re indicators of audio quality.

Extract Audio Streams

Extract Full Audio Streams

To extract each of the three streams in full, one large file per stream, you can use FFmpeg. (Although this is likely undesirable due to file size limits on VFAT filesystems.)

user $ffmpeg -i ./BDMV/STREAM/00000.m2ts -map 0:1 -acodec pcm_s24le music.wav
user $ffmpeg -i ./BDMV/STREAM/00000.m2ts -map 0:2 -acodec pcm_s24le music-pcm51.wav
user $ffmpeg -i ./BDMV/STREAM/00000.m2ts -map 0:3 -acodec copy music.dts

Verify you have successfully extracted the streams using ffplay or mplayer. Monitor the stdout messages to ensure proper drivers and codecs are used for the stream types specified.
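
For example, a quick spot check of the files created above:

user $ffplay music.wav
user $ffplay music.dts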

(For DTS playback using MPlayer, you'll likely need to specify -ac hwdts so MPlayer passes the DTS through to your HDMI/SPDIF audio receiver. MPlayer uses the following syntax for selecting streams: "mplayer -aid 1 -demuxer lavf ./BDMV/STREAM/00000.m2ts".)

Devices with only 16 Bit Microsoft PCM Audio Support

Some audio receivers and devices will only play 16 bit Microsoft PCM WAV files! If you have 24 bit audio files as produced above and one of these limited devices, you will unfortunately need to convert down to 16 bit for the files to be playable on those devices. The aforementioned conversion produces 24 bit PCM RIFF/AIFF files, while the ffmpeg incantation below produces 16 bit Microsoft PCM WAV files.

user $ffmpeg -i ./BDMV/STREAM/00000.m2ts -map 0:1 -acodec pcm_s16le music.wav

Another workaround is to play the 24 bit PCM WAV files using a software media player such as FFplay or MPlayer, and route the sound to your audio receiver over HDMI or S/PDIF. One other option: buy a receiver capable of playing the 24 bit PCM files via USB media!

Note
If this section applies to you, then you will need to augment the further FFmpeg incantations below with the "-acodec pcm_s16le" option.

Extract Individual Chapters

Find Chapters

FFprobe will display the chapters to stdout, if they are preserved within the media file.

user $ffprobe ./music.mkv
Note
If you use MakeMKV, make sure you extract to a format that preserves chapters by using "makemkvcon mkv". Using "makemkvcon backup" does not preserve chapter information as of this writing!
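
For instance, a sketch of that invocation (the disc index and output directory here are placeholders for your own setup):

user $makemkvcon mkv disc:0 all ~/Music/rip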

(MPlayer can also identify chapters using "midentify", however the chapter times do not appear compatible with FFmpeg.)

Extract a Chapter

At this point, we'll assume we want Stream #1, the standard two channel stereo PCM mix (i.e. -map 0:1), and the chapter labelled #0.2.

FFprobe’s snipped output:

user $ffprobe ...
...
Chapter #0.2: start 534.934400, end 888.087200

The FFmpeg incantation we'll use for extracting this individual chapter takes the start time and duration in seconds.

user $ffmpeg -ss [start] -i in.dts -t [duration] -c:v copy -c:a copy out.wav

With this example, the start time will be 534.934400 and duration will be 888.087200 minus 534.934400.
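
bc (the CLI calculator used again below) can do that subtraction for you:

user $echo "888.087200 - 534.934400" | bc
353.152800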

For example,

user $ffmpeg -ss 534.934400 -i ./BDMV/STREAM/00000.m2ts -t 353.152800 -c:v copy -c:a copy out.wav

Extract Multiple Chapters

So far I have only piped the stdout of the CLI tools into a series of text files and used grep and bc (the CLI calculator), alongside Vi/Vim for line duplication and editing, to create one-time scripts for extracting multiple files at once.
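
As a rough sketch of that workflow (the input file, stream selection and output naming here are illustrative and follow the examples above), a loop over ffprobe's chapter list can generate one ffmpeg call per chapter:

FILE extract-chapters.sh
#!/bin/sh
# Rough sketch: list every chapter as "start,end", then cut that range
# from the chosen audio stream into its own 24 bit PCM WAV file.
in=./music.mkv
i=1
ffprobe -v error -show_entries chapter=start_time,end_time -of csv=p=0 "$in" |
while IFS=, read -r start end; do
    # Duration = chapter end minus chapter start, computed with bc.
    dur=$(echo "$end - $start" | bc)
    # -nostdin stops ffmpeg from consuming the chapter list the loop is reading.
    ffmpeg -nostdin -ss "$start" -i "$in" -t "$dur" -map 0:1 -acodec pcm_s24le "$(printf '%02d' "$i").track.wav"
    i=$((i + 1))
done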

Someday, this will likely be automated and integrated into abcde.sh.

Tips

Cover Art

Cover art is usually found within the /mnt/dvd/BDMV/META/DL folder. For example:

user $cp /mnt/dvd/BDMV/META/DL/discinfo_640x360.jpg ${HOME}/Music/My_Album/cover.jpg

MPlayer Upmix When 24bit Decoding Not Available

My receiver is apparently not capable of decoding 24 bit PCM WAV, but will decode 16 and 32 bit PCM WAV through HDMI.

The PCM 5.1 WAV files are encoded at 24 bit PCM 5.1 WAV 48000 Hz.

The workaround here is to up-convert to 32 bit using s32le or floatle, since MPlayer by default down-converts to 16 bit (s16le). MPlayer also by default downmixes to two channels.

user $mplayer -af format=s32le,channels=8 PCM51-24bit/01.my_music_track.dts
user $mplayer2 -af format=s32le,channels=8 PCM51-24bit/01.my_music_track.dts

No DTS-HD Master?

My receiver shows it's decoding a DTS-HD Master stream when bit perfect or high definition audio decoding is selected within my Windows player, but it only reports the usual "DTS" decoding while playing streams within Linux. From reports on the web, bit perfect or high definition streaming to the receiver isn't possible within Linux. Other reports state it is possible using Intel's HDMI. (NVidia's video card HDMI using the Linux binary drivers isn't passing through DTS-HD Master here.)

Gapless Playback

When splitting long streams into tracks, it's nice to have gapless playback to prevent interruptions between tracks.

FIXME: The following is from Snipplr, but doesn’t work for me. :-/

user $mkfifo /tmp/aufifo
user $aplay -t raw -c 2 -f S16_LE -r 44100 /tmp/aufifo &> /tmp/aplayfifo.log &
user $mplayer -ao pcm:nowaveheader:file=/tmp/aufifo 01.track.wav 02.track.wav 03.track.wav &

Or use MPlayer2:

user $mplayer2 -ac hwdts -af channels=8 -ao alsa:device=hw=1.3 -gapless-audio DTS/*.dts

Additional Tools

Additional tools which might be useful, but not utilized within this Wiki:

  • media-sound/shntool – A multi-purpose WAVE data processing and reporting utility, ie. splitting WAV files.
  • MPlayer – Media Player for Linux, as an alternative to FFmpeg
  • media-video/tsmuxer – Utility to create and demux TS and M2TS files

References

Properly configure ALSA for pass-through digital audio, including specifying default decoding codecs for hardware digital decoders when using MPlayer.


MPEG-DASH Content Generation with MP4Box and x264

The Situation: Your pre-MP4Box DASH file

A video is given in some container format, with a certain codec, probably including one or more audio tracks. Let’s call this file inputvideo.mkv. This video should be prepared for MPEG-DASH playout.

H.264/AVC for video will be used within segmented mp4 containers.

The tools: X264 and MP4Box

Two tools will be used: x264 to prepare the video content, and MP4Box to segment the file and create a Media Presentation Description (MPD). Alternatively it is possible to generate MPEG-DASH & HLS content out of this mkv with our Bitmovin Encoding Service, which perfectly integrates with our Bitmovin Player.

Preparing the video file

If the source video is already in the correct format, this step can be skipped. However, that is rarely the case.

The following command (re-) encodes the video in H.264/AVC with the properties we will need. All the command line parameters are explained after the code.

x264 --output intermediate_2400k.264 --fps 24 --preset slow --bitrate 2400 --vbv-maxrate 4800 --vbv-bufsize 9600 --min-keyint 96 --keyint 96 --scenecut 0 --no-scenecut --pass 1 --video-filter "resize:width=1280,height=720" inputvideo.mkv
Parameter explanation:

  • --output intermediate_2400k.264: Specifies the output filename. The file extension is .264 as it is a raw H.264/AVC stream.
  • --fps 24: Specifies the framerate which shall be used, here 24 frames per second.
  • --preset slow: Presets can be used to trade encoding speed against compression efficiency and quality. Slow is a good default.
  • --bitrate 2400: The bitrate this representation should achieve, in kbps.
  • --vbv-maxrate 4800: Rule of thumb: set this value to double the --bitrate.
  • --vbv-bufsize 9600: Rule of thumb: set this value to double the --vbv-maxrate.
  • --keyint 96: Sets the maximum interval between keyframes. This setting is important as we will later split the video into segments, and each segment should begin with a keyframe. Therefore, --keyint should match the desired segment length in seconds multiplied by the frame rate. Here: 4 seconds * 24 frames/second = 96 frames.
  • --min-keyint 96: Sets the minimum interval between keyframes. See --keyint for more information. We achieve a constant segment length by setting the minimum and maximum keyframe interval to the same value, and furthermore by disabling scenecut detection with the --no-scenecut parameter.
  • --no-scenecut: Completely disables adaptive keyframe decisions.
  • --pass 1: Only one-pass encoding is used. Can be set to 2 to further improve quality, but takes a long time.
  • --video-filter "resize:width=1280,height=720": Used to change the resolution. Can be omitted if the resolution should stay the same as in the source video.
  • inputvideo.mkv: The source video.

Note that these are only example values. Depending on the use case, you might need to use totally different options. For more details and options consult x264’s documentation.

Segmenting

Now we add the previously created h264 raw video to an mp4 container as this is our container format of choice.

MP4Box -add intermediate_2400k.264 -fps 24 output_2400k.mp4
Parameter explanation:

  • intermediate_2400k.264: The H.264/AVC raw video we want to put into an mp4 container.
  • -fps 24: Specifies the framerate. H.264 doesn't provide meta information about the framerate, so it's recommended to specify it. The number (in this example 24 frames per second) must match the framerate used in the x264 command.
  • output_2400k.mp4: The output file name.

What follows is the step that actually creates the segments and the corresponding MPD.

MP4Box -dash 4000 -frag 4000 -rap -segment-name segment_ output_2400k.mp4
Parameter explanation:

  • -dash 4000: Segments the given file into 4000 ms chunks.
  • -frag 4000: Creates subsegments within segments; the duration must not be longer than the duration given to -dash. By setting it to the same value, there will be only one subsegment per segment. Please refer to this GPAC post for more information on fragmentation, segmentation, splitting and interleaving.
  • -rap: Forces segments to start at random access points, i.e. keyframes. Segment duration may vary depending on where keyframes are in the video; that's why we (re-)encoded the video beforehand with the appropriate settings!
  • -segment-name segment_: The name of the segments. An increasing number and the file extension are added automatically. So in this case, the segments will be named like this: segment_1.m4s, segment_2.m4s, …
  • output_2400k.mp4: The video we have just created, which should be segmented.

For more details please refer to the MP4Box documentation.

The output is one video representation, in the form of segments. Additionally, there is one initialization segment, called output_2400k_dash.mp4. Finally, there is an MPD.

And that’s it.

What’s next?

Just put the segments, the initialization segment, and the MPD onto a web server. Then point the Bitmovin Player config to the MPD on the web server and enjoy your content.

In case of problems with the player, please refer to the FAQ.

What about more representations?

The steps explained in this post can be repeated over and over again, just pass another bitrate to x264. And make sure previously created files are not overwritten.
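
For instance (the bitrate and resolution here are only illustrative), a second representation could be produced and segmented by mirroring the commands above, with new file and segment names so nothing from the first pass is overwritten:

x264 --output intermediate_1200k.264 --fps 24 --preset slow --bitrate 1200 --vbv-maxrate 2400 --vbv-bufsize 4800 --min-keyint 96 --keyint 96 --scenecut 0 --no-scenecut --pass 1 --video-filter "resize:width=854,height=480" inputvideo.mkv
MP4Box -add intermediate_1200k.264 -fps 24 output_1200k.mp4
MP4Box -dash 4000 -frag 4000 -rap -segment-name segment_1200k_ output_1200k.mp4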

For each representation another MPD will be created. As it is just an XML file, it is possible to open it with a text editor and copy & paste the representation into the video AdaptationSet of another MPD file until all needed representations are in the same MPD.

What about audio?

The same MP4Box command as for video can be used for audio. If you have audio already in a separate file, just use this as input.

MP4Box -dash 4000 -frag 4000 -rap -segment-name segment_ audiofile.mp4

If audio is in the video file, you can use the #audio selector:

MP4Box -dash 4000 -frag 4000 -rap -segment-name segment_ video.mp4#audio

To have the audio in the same MPD as the video, copy & paste the whole AdaptationSet.

Encoding MPEG-DASH & HLS?

Learn more about MPEG-DASH & HLS encoding!

All the best,
Daniel & the Bitmovin Team
Follow us on Twitter: @bitmovin

How to encode multi-bitrate videos in MPEG-DASH for MSE based media players

INTRODUCTION

Online video streaming best-practices have evolved significantly since the introduction of the html5 <video> tag in 2008. To overcome the limitations of progressive download streaming, online video industry leaders created proprietary Adaptive Bitrate Streaming formats like Microsoft Smooth Streaming, Apple HLS and Adobe HDS. In recent years, a new standard has emerged to replace these legacy formats and unify the video delivery workflow: MPEG-DASH.

In this article, we’d like to talk about why Adaptive Bitrate Streaming technology is a must-have for any VOD or Live online publisher, and how to encode Multi-bitrate videos mp4 files with ffmpeg to be compatible with MPEG-DASH streaming. In a subsequent post, we will show you how to package videos and generate the MPEG-DASH .mpd manifests that will allow you to deliver high quality adaptive streaming to your users.

ADVANTAGES OF ADAPTIVE STREAMING

Adaptive streaming technologies have become so popular because they greatly enhance the end-user experience: the video quality dynamically adjusts to the viewer’s network conditions, to deliver the best possible quality the user can receive at any given moment. It reduces buffering and optimizes delivery across a wide range of devices.

[Figure: Adaptive streaming schema]

But Adaptive Bitrate Streaming is trickier on the back-end side: distributors need to re-encode videos into multiple qualities and create a manifest file containing information on the location of the different quality segments, which the user's player then uses to obtain the segments it needs. Good news, this guide will show you how to do it the right way for MPEG-DASH!

TOOLS

As we saw before, there are two steps to providing adaptive streaming. First, encode your file into different qualities, and second, divide those files into segments and create the manifest that will link to the files.

Today we'll take a look at how to properly re-encode your files into different qualities. To do so, we'll use the very well-known video tool FFmpeg. This library is the Swiss army knife of video encoding and packaging. You can download FFmpeg from www.ffmpeg.org.

For the second step, you can use GPAC’s MP4Box, or do it on-the-fly with a Media Streaming Server such as Wowza, USP or Mist Server. We’ll provide an overview of the different solutions in the second post on this subject.

VIDEO ENCODING: THE IMPORTANCE OF I-FRAMES

Even if the MPEG-DASH standard is codec agnostic, we will encode our videos with h264/AAC codecs with fMP4 packaging, which is the most commonly supported format in today’s browsers.

The biggest trick in encoding is to align the I-frames between all the qualities. In the encoding language, I-frames are the frames that can be reconstructed without having any reference to other frames of the video. As they incorporate all of the information about the pixels in each image, I-Frames take up a lot more space and are much less frequent than the other types of encoding frames. Plus, they are not necessarily at the same place across different qualities if we try to encode with the default parameters. The issue is that after the encoding, we will need to divide the videos into short segments, and each segment has to start with an I-frame. If the I-frames are not aligned between different qualities, the lengths of the segments will not match, rendering quality switching impossible.

To ensure that users will be able to switch between the different qualities without issues, we need to force regular I-Frames in our file while we encode the video into different qualities.

FFMPEG COMMAND LINE

The cleanest way to force I-frame positions using FFmpeg is to use the -x264opts 'keyint=24:min-keyint=24:no-scenecut' argument.

  • -x264opts allows you to use additional options for the x264 encoding lib.
  • keyint sets the maximum GOP (Group of Pictures) size, which is the group of frames contained between two I-Frames. More info on GOPs.
  • min-keyint sets the minimum GOP size.
  • no-scenecut prevents extra keyframes from being inserted at scene cuts.

Let’s look at an example of encoding a 720p file with FFmpeg, while forcing I-Frames.
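
A command along these lines does the job; the bitrate, audio settings and filenames here are illustrative, the important parts are the fixed frame rate and the -x264opts argument:

ffmpeg -i inputvideo.mkv -r 24 -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 1500k -maxrate 1500k -bufsize 3000k -vf "scale=-2:720" -c:a aac -b:a 128k outputvideo_1500k.mp4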

As you can see, we use a framerate of 24 images per second and then a GOP size of 24 images, which means that our I-Frames are located every second. Using the same command with a different bitrate, we can create files of three different qualities with the same I-Frames position:
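
For instance (again, the bitrates and resolutions are only illustrative):

ffmpeg -i inputvideo.mkv -r 24 -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 2400k -maxrate 2400k -bufsize 4800k -vf "scale=-2:1080" -c:a aac -b:a 128k outputvideo_2400k.mp4
ffmpeg -i inputvideo.mkv -r 24 -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 1500k -maxrate 1500k -bufsize 3000k -vf "scale=-2:720" -c:a aac -b:a 128k outputvideo_1500k.mp4
ffmpeg -i inputvideo.mkv -r 24 -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 800k -maxrate 800k -bufsize 1600k -vf "scale=-2:480" -c:a aac -b:a 128k outputvideo_800k.mp4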

To choose the value of the GOP size, you'll need to take into account the length of the segments you want to generate: segment length * framerate has to be a multiple of the GOP size.

Example: if the framerate is 24 and you want 2-second segments, the GOP size needs to be either 48 or 24. Know that if your GOP size is too big, seeking might not work properly in some players: as with quality switching, the player has to seek to an I-frame to resume the streaming.

To learn more about H.264 encoding with FFmpeg, check out their guide.

CONCLUSION

Congratulations! You've successfully encoded your video into different qualities with aligned I-frames. Now, simply fragment them into video segments and generate the MPEG-DASH Manifest file. We'll show you how to do this in our next blog post, so stay tuned!


PART 2


In our previous article How to encode Multi-bitrate videos in MPEG-DASH for MSE based media players (1/2), we examined how to encode a video file in different qualities with FFmpeg encoder. If all has gone well, you now have different files with different bitrates for your video. Because we have chosen an adequate framerate and GOP size, we can now create a functional Multibitrate Stream with several video qualities for the player of your choice, according to different end-user parameters (network connection, CPU power, etc.).

As we saw before, there are several Adaptive Bitrate Streaming technologies out there. For this tutorial, we chose to focus on MPEG-DASH, which we strongly believe will become a ubiquitous format in upcoming years. We are not alone in this belief. The DASH working group has the support of a range of companies such as Apple, Adobe, Microsoft, Netflix, Qualcomm, and many others. Unlike other RTMP-based flash streaming technologies, MPEG-DASH uses the usual HTTP protocol.

The advantage of HTTP is that it doesn't need an expensive, near-continuous connection with a streaming server (as is the case for RTMP) and that it is firewall friendly and can take advantage of HTTP caching mechanisms on the web. It is also interesting to compare the DASH format with existing streaming formats like Apple HTTP Live Streaming or Microsoft Smooth Streaming. All of these implementations use different manifest and segment formats; therefore, to receive content from each server, a device must support its corresponding proprietary client protocol. DASH was created to be an open standard (supposedly royalty-free) format to stream content from any standards-based server to any type of client. In a nutshell, there are a host of benefits to using MPEG-DASH, and we hope you are now convinced that if you need to deliver Adaptive Bitrate Streaming on the web, MPEG-DASH is the way to go.

In this article, we'll see:

  • first, what the MPEG-DASH format is and how it works;
  • then, the different tools you can use to generate MPEG-DASH manifests;
  • and finally, how to play the MPEG-DASH files you will have just created.

At the end of your reading, you will be in possession of re-encoded files and an MPEG-DASH .mpd manifest generated from the different video files you packaged. This manifest is the most important file for delivering high quality adaptive streams to users, so stay focused!

 

1. LET’S DASHIFY!

MPEG-DASH, like other Adaptive Bitrate Streaming technologies, has two main components: the encoded streams that will be played for the user and the manifest file. This file contains the metadata necessary to stream the correct video file to the user. More precisely, it splits the video into several time periods, which are in turn split into adaptation sets. An adaptation set contains media content. Generally there are two adaptation sets: one contains the video and the other the audio. However, there may be more adaptation sets (if there are several languages, for instance) or only one adaptation set. In this case, the single set contains both the video and the audio; the content is said to be muxed.

[Figure: example MPD structure with adaptation sets and representations]

Each adaptation set contains one or several representations, each a single stream in the adaptive streaming experience. In the figure, Representation 1 is 640×480@500Kbps and Representation 2 is 640×480@250Kbps. Each representation is divided into media segments called chunks. These data chunks can be represented by a list of their urls, or in time-based or index-based url templates. They can also be represented by byte-range in a single media file. In this case, no need to split the media file into thousands of little files for each data chunk (as is required for HLS, for example).

The DASH manifest, a .mpd file (Media Presentation Description), is an XML file providing the identification and location of the above items, particularly the urls where the media files are hosted. To play the stream, the DASH player simply needs the manifest, as it fetches each part of the video needed from the information contained in the manifest. According to the network and CPU status, the player will choose the segment from the most suitable representation to deliver a stream with no buffering.

Let’s look at what tools we can use to create this manifest file from our mp4 video files, and how to use these tools!

 

2. WHAT TOOLS SHOULD I USE TO PACKAGE IN MPEG-DASH?

  • DO IT YOURSELF WITH MP4BOX:

MP4Box is a very useful multimedia packager distributed by GPAC. You can get the binary files here. Once you have installed it, you can execute the following command to Dashify your file, with the correct options and arguments. Watch out though: it is important that when you encoded your files to mp4, you entered the right parameters to get one I-Frame per segment. You can refer to our previous article about encoding with ffmpeg to learn how to do it properly.

  • -dash [DURATION]: enables MPEG-DASH segmentation, creating segments of the given duration (in milliseconds). We advise you to set the duration to 2 seconds for Live and short VOD files, and 5 seconds for long VOD videos.
  • -rap -frag-rap: forces segments to begin with Random Access Points. Mandatory to have a working playback.
  • -profile [PROFILE]: MPEG-DASH profile. Set it to onDemand for VOD videos, and live for live streams.
  • -out [path/to/output.file]: output file location. This parameter is optional: by default, MP4Box will create an output.mpd file and the corresponding output.mp4 files in the current directory.
  • [path/to/input1.file]…: indicates where your input mp4 files are. They can be video or audio files.
    (More options are available here.)

Let’s take a look at an example. For a set with 2 video qualities (vid1.mp4 and vid2.mp4) and 2 audio qualities (aud1.mp4 and aud2.mp4), you would need to run the following command:
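
Using the options just described (the 2-second segment duration and manifest name are illustrative), the command would look something like this:

MP4Box -dash 2000 -rap -frag-rap -profile onDemand -out mystream.mpd vid1.mp4 vid2.mp4 aud1.mp4 aud2.mp4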

Be careful: for this to work, you need vid1.mp4 and vid2.mp4 to have only video. If not, you can isolate the video track by adding #video after the file name: vid1.mp4#video and vid2.mp4#video. It might be better to generate a manifest with at least one video and one audio track, as muxed content isn’t always supported (Wowza isn’t able to manage muxed content, for instance).

After doing that, you should get one manifest file and one file per track (in the previous example, you should get four mp4 files: two audio files and two video files). That's it, you have just Dashified your video files! You can then put them on a server and provide the manifest url to your player. The urls of the mp4 files are contained in the manifest; just open it with a text editor if you ever wish to change them.

  • THE EASY WAY: WOWZA

Wowza is a Media Server providing easy solutions to stream your Videos On Demand or your Live streams in the format of your choice. Wowza has paid services to stream videos, but also offers a Free Trial if you’d like to try the Wowza Streaming Engine. This service has a lot of benefits if you need to get your videos online easily and quickly, and one of them is that Wowza provides DASH for your videos. Simply load your mp4 files (encoded using the methods from the first part of our article) to your server and Wowza will take care of creating the DASH manifest from the files. As simple as that. To get a url to your manifest, read this part of the Wowza documentation explaining the Wowza url syntax. Another advantage is that it is very quick to install Streamroot solutions if your streams are created using the Wowza Streaming Engine, so go for it!

You can find on their website the Wowza quick start guide and Wowza pricing. Moreover, Wowza can also be used to transcode your files (meaning to re-encode them) with a paid add-on.

  • OTHER FREE TOOLS:

Of course, other tools exist to re-package your videos in DASH format.

  • Nimble is a free media server that is extremely easy to use and totally compatible with Streamroot. Linux-based, it is a very reliable and efficient server that works on all operating systems. For easy integration with Streamroot, check out our tutorial on configuring Nimble and Streamroot.
  • The Bento4 Packager is a software tool for content packaging and parsing that works with several DRM platforms like CENC, PlayReady or Marlin. Like MP4box, it allows you to package your mp4 in DASH, generating a .mpd manifest. You can download it here and find some useful documentation here.
  • Nginx is an open-source HTTP Server that can be turned into a Media Server thanks to the nginx-rtmp-module. It takes a Live RTMP stream as input and on the other side provides a Live stream in HLS or DASH format. Nginx is free but has some constraints: it is only for live streams, your input stream has to be an RTMP stream, and the setup can be quite painful.
  • Unified Streaming Platform is a very efficient platform to encode and stream your media. With a host of output formats, it can be used to deliver MPEG-DASH and HLS for VOD or Live streaming. Among its benefits are smooth integration into your CDN and DRM management. Users must pay a license to use USP, but a Free Trial installation is available here to help you make up your mind.
  • If you’d rather use a Free Media Server solution, take a look at MistServer. This solution is said to be highly customizable and potentially very effective for many output formats (including HLS & DASH). The one trade-off: you might have to go through a complex configuration process. Be aware, however, that DASH and DRM are only available in the non-free MistServer Pro version.
  • There are also other (paid) solutions to obtain DASH streams from your videos: Zencoder and encoding.com are two cloud-encoding services which allow you to create DASH streams from any video files.

To sum it up, here is a comparative table to help you make your choice:

[Table: comparison of the DASH packaging tools discussed above]

* available with a paying add-on
** a Pro version exists including DASH and DRM management.

 

3. AND NOW THAT I HAVE MY DASH, HOW DO I PLAY IT?

If you have your manifest and mp4 files online and want to test them quickly, why not try the Streamroot demo player? Just enter the url of your manifest, click on load, and your VOD or Live stream will be played by our player. If you do this, open the demo page with your stream in several tabs: you will see the Streamroot p2p module start working with your stream and a graph will show you how much bandwidth you could save with Streamroot's technology. (PS: our player also supports HLS and Smooth Streaming streams!)

Be careful though, you will have to allow CORS requests, Range requests and OPTIONS requests on your webserver (quick and easy with Wowza!) if you want the p2p module to work. This is necessary because all the new HTML5-based players request video segments with XHR requests. For more information on how to configure the CORS parameters on your webserver, you can check out this website.

If you have your DASH files hosted on a server, you can also use the dash.js player to play them on a web page. Check out the documentation on github to install it. If you just want to test your stream, go to this page and enter the url of your manifest to play your media.

Lastly, if your file is stored locally, you can use a local player like the one offered by GPAC (the same guys who did MP4Box). This player can be configured in many ways and is perfect for testing the DASH files you just created. You can download it here and find some interesting configuration information here.

 

CONCLUSION

Congrats! Now that you've read both parts of our article, you know exactly how to encode your files at different bitrates, how to package them as MPEG-DASH files, and even how to test them with the Streamroot Demo Player. At this point, you're ready to integrate the Streamroot p2p module into your streaming delivery system!

Either way, you are now ready to deliver high quality multi-bitrate streaming in a format that is expected to become the next international streaming standard.

Stay tuned for other news about MPEG-DASH and video streaming – we still have a lot to say about it. Finally, please feel free to send us feedback about this article via comments or Twitter. We're also very curious about which hot topic you would like to read about in next month's blog post (DASH? HLS? HTML5 streaming and Media Source Extensions? Or more WebRTC tutorials?), so don't hesitate to send us your suggestions!

Cut a video without reducing quality

ffmpeg -i largefile.mp4 -t 00:50:00 -c copy smallfile1.mp4 -ss 00:50:00 -c copy smallfile2.mp4

This splits the input into 2 files: file 1 from the start up to minute 50, file 2 from minute 50 to the end.

ffmpeg -i %1 -ss 00:00:07.600 -vcodec copy -acodec copy -movflags +faststart %~n1_1.mp4

This creates a new file running from 00:00:07.600 to the end of the file.

Correcting for audio/video sync issues with the ffmpeg program’s ITSOFFSET switch

From https://wjwoodrow.wordpress.com/2013/02/04/correcting-for-audiovideo-sync-issues-with-the-ffmpeg-programs-itsoffset-switch/

The ffmpeg program has numerous “switches” that help to adjust and convert audio and video files. Some of them are not explained very well in the documentation, and many websites have confusing postings by well-meaning people trying to make use of the switches. I will try to explain how to use a couple of these switches to correct common sync problems with videos. It will take some time to learn, but is very powerful once you understand it.

The itsoffset switch is used to nudge (forward or backward) the start time of either an audio or video “stream”. A typical video camera will record one video stream and one audio stream which are merged into one file. On my camera, they merge into an MTS high-def formatted file. But sometimes during a conversion to another file format (such as mp4), the audio and video will not remain in sync and the itsoffset switch can be used to adjust them.

The itsoffset switch is nearly always used in conjunction with the “map” switch, since this tells ffmpeg which stream you want to affect, and what streams you wish to merge into a new output file.

For our purposes, we will deal with just one input file that has two streams out of sync (the most common problem). We will use this one input file twice, once for its audio portion, and once for its video portion. We will use itsoffset and map to delay one of the streams, and then merge them back together into another file.


There are a few different ways to accomplish the same result with minor variations, and I will try to demonstrate them. First I will demonstrate the syntax of the “map” and “itsoffset” switches and what they mean. Here is a picture to better clarify the description (click pic for better view):
[Figure: annotated ffmpeg command line showing the itsoffset and map switches]
Map syntax:
-map “input file number”:”stream number”
The input file number will be 0 or 1, and stream will be 0 (video) or 1 (audio)

An important side note on file numbering with ffmpeg: 0 is the first, 1 is the second

“-map 0:1” means first input file mentioned on the command-line and its stream 1 (audio)
“-map 1:0” means second input file and stream 0 (video)
“-map 1:1” means second input file and stream 1 (audio)

“Itsoffset” is used with a specific amount of time that you want to apply to a file. If the audio is off by 1 second, you might type -itsoffset 1.0 (or -itsoffset 00:00:01.0000). Itsoffset applies to both streams of a file, and we use “map” to split out the stream we want to change. This is why we have to specify the input file twice, once for the stream we don’t change, and once for the stream we do change.

I’ll talk more about how to find the correct time shortly.

“-itsoffset 1.0 -i clip.mts” means to apply a 1 second delay to the input file clip.mts

Also, it matters WHERE you put the itsoffset switch in the command-line. It applies to the input file that comes just after it.

Trial and Error with a small clip
Finding the correct adjustment time can be tricky. Sometimes it may be out of sync by a tiny amount like 0.150 seconds, but it makes all the difference in the world when you get it correct. Trial and error is the only way I know to get it, so by working with a 1 minute clip instead of the whole video you can get a fast answer. Once you have the clip fixed the way you like, you can apply the settings to the whole video.

To extract just a 1 minute portion of a video, try this:

ffmpeg -ss 15:30 -i 00001.MTS -vcodec copy -acodec copy -t 1:00 clip.mts

(takes the video 00001.MTS, goes to fifteen minutes and thirty seconds [-ss 15:30] and then takes 1 minute [-t 1:00] from there and creates a new file called clip.mts. There is often more action in the middle of a video, so I chose to start there.)

So we take the short clip and use it to adjust the sync. Go ahead and create a clip so you can experiment with it.

Examples
The following examples move a stream by 2.0 seconds so you can better perceive the change (assuming that you follow the examples with a clip of your own).

The following command lines all result in the same thing: "delay the audio by 2 seconds". This means that in the output file, you will see the video start and then 2 seconds later the audio will start. The differences are the location of "itsoffset" and which stream is mapped:

ffmpeg -i clip.mts -itsoffset 2.0 -i clip.mts -vcodec copy -acodec copy -map 0:0 -map 1:1 delay1.mts
Applies itsoffset to file “1” (because it is placed just before the 2nd input), and the map for file 1 points to stream 1 (audio)

ffmpeg -i clip.mts -itsoffset 2.0 -i clip.mts -vcodec copy -acodec copy -map 1:1 -map 0:0 delay2.mts
Applies itsoffset to file “1” (because it is placed just before the 2nd input), and the map for
file 1 points to stream 1 (audio). I just changed the order of which map came first, it doesn’t matter.

ffmpeg -itsoffset 2.0 -i clip.mts -i clip.mts -vcodec copy -acodec copy -map 0:1 -map 1:0 delay3.mts
Applies itsoffset to file “0” (because it is placed just before the 1st input), and the map for
file 0 points to stream 1 (audio). So I changed the location of itsoffset and the mapping.

ffmpeg -i clip.mts -itsoffset -2.0 -i clip.mts -vcodec copy -acodec copy -map 0:1 -map 1:0 delay4.mts
This one adjusts the video forward 2 seconds rather than delaying the audio, but accomplishes the same thing. I gave a negative 2.0 value to itsoffset. Itsoffset is just before file 1, and map for file 1 points to stream 0 (video). That is, instead of waiting two seconds to start the audio, we tell the video to nudge back two seconds.

*Note: “-vcodec copy -acodec copy” can be shortened to “-c:v copy -c:a copy” This command keeps the same video and audio format in the output file as was in the input file.

That’s it for experiments with the clip.

Now let's deal with the two most common sync problems. Remember that we are using the out-of-sync file as the input twice, splitting out just one stream from each input, applying a delay to one of the streams, and then merging the streams back into an output file.

[Figure: audio ahead of video]

[Figure: video ahead of audio]

CASE 1: Audio happens before video (aka “need to delay audio stream 1”):
ffmpeg -i clip.mp4 -itsoffset 0.150 -i clip.mp4 -vcodec copy -acodec copy -map 0:0 -map 1:1 output.mp4

The “itsoffset” in the above example is placed before file 1 (remember that linux counts from 0, so 0 is the first and 1 is the second), so when the mapping happens, it says “Take the video of file 0 and the audio of file 1, leave the video of file 0 alone and apply the offset to the audio of file 1 and merge them into a new output file”. The delay is only .15 seconds.

CASE 2: Video happens before audio (aka “need to delay video stream 0”):
ffmpeg -i clip.mp4 -itsoffset 0.150 -i clip.mp4 -vcodec copy -acodec copy -map 0:1 -map 1:0 output.mp4

The “itsoffset” in the above example is placed before file 1. When the mapping happens, it says “Take the audio of file 0 and the video of file 1, leave the audio of file 0 alone and apply the offset to the video of file 1 and merge them into a new output file”. The delay is only .15 seconds.

I hope this all made sense to you and helps clarify what can be a very confusing command-line.

How to watermark a video using FFmpeg

This article explains how to add a watermark image to a video file using FFmpeg (www.ffmpeg.org). Typically a watermark is used to protect ownership/credit of the video and for Marketing/Branding the video with a Logo.  One of the most common areas where watermarks appear is the bottom right hand corner of a video.  I’m going to cover all four corners for you, since these are generally the ideal placements for watermarks.  Plus, if you want to get really creative I’ll let you in on an alternative.

FFmpeg is a free software / open source project that produces libraries and programs for handling multimedia such as video. Many of its developers are also part of the MPlayer project. Primarily this project is geared towards the Linux OS; however, much of it has been ported over to work with Windows 32bit and 64bit. FFmpeg is being utilized in a number of software applications, including web applications such as PHPmotion (www.phpmotion.com). Not only does it provide handy tools, it also provides extremely useful features and functionality that can be added to a variety of software applications.

FFmpeg on Windows
If you want to use FFmpeg on Windows, I recommend checking out the FFmpeg Windows builds at Zeranoe (http://ffmpeg.zeranoe.com/builds/) for compiled binaries, executables and source code. Everything you need to get FFmpeg working on Windows is there. If you're looking for a handy Windows GUI command line tool, check out WinFF www.winff.org. You can configure WinFF to work with whatever builds of FFmpeg you have installed on Windows. You can also customize your own presets (stored command lines) to work with FFmpeg.

Getting familiar with it.
Perhaps one of the best ways to get familiar with using FFmpeg on Windows is to create a .bat script file that you can modify and experiment with. Retyping command lines over again from scratch becomes a tedious process, especially when working with a command line tool you're trying to become more familiar with. If you're on Linux you'll be working with shell scripts instead of .bat files.

Please keep in mind that FFmpeg has been, and still is, a rather experimental project. Working with FFmpeg's Command Line Interface (CLI) is not easy at first and will take some time to get familiar with. You need to be familiar with the basics of opening a video file, converting it, and saving the output to a new video file. I strongly recommend creating and working with FFmpeg in shell/bat script files while learning the functionality of its Command Line Interface.

-vhook (Video Hook)
Please note that the functionality of "-vhook" (video hook) in older versions of FFmpeg has been replaced with "-vf" (video filters) from libavfilter. You'll need to use -vf instead of -vhook in the command line. This applies to both Linux and Windows builds.

What we’re going to do
In a nutshell: we're going to load a .png image as a Video Source "Movie" and use the Overlay filter to position it. While it might seem a little absurd to load an image file as a Video Source "Movie" to overlay, this is the way it's done. (i.e. movie=watermarklogo.png)

What’s awesome about working with png (portable network graphics) files is that they support background transparency and are excellent to use in overlaying on top of videos and other images.

The Overlay Filter  overlay=x:y
This filter is used to overlay one video on top of another. It accepts the parameters x:y, where x and y are the top left position of the overlaid video on the main video. In this case, the top left position of the watermark graphic on the main video.

To position the watermark 10 pixels to the right and 10 pixels down from the top left corner of the main video, we would use “overlay=10:10”

The following expression variables represent the size properties of the Main and overlay videos.

  • main_w (main video width)
  • main_h (main video height)
  • overlay_w (overlay video width)
  • overlay_h (overlay video height)

For example, if the main video is 640×360 and the overlay video is 120×60 then

  • main_w = 640
  • main_h = 360
  • overlay_w = 120
  • overlay_h = 60

We can get the actual size (width and height in pixels) of both the watermark and the video file, and use this information to calculate the desired positioning of things.  These properties are extremely handy for building expressions to programmatically set the x:y position of the overlay on top of the main video. (see examples below)

Watermark Overlay Examples


The following 4 video filter (-vf) examples embed an image named “watermarklogo.png” into one of the four corners of the video file, the image is placed 10 pixels away from the sides (offset for desired padding/margin).

Top left corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:10 [out]" outputvideo.flv

Top right corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" outputvideo.flv

Bottom left corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:main_h-overlay_h-10 [out]" outputvideo.flv

Bottom right corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" outputvideo.flv

These examples use something known as Filter Chains.  The pad names for streams used in the filter chain are contained in square brackets [watermark],[in] and [out].  The ones labeled [in] and [out] are specific to video input and output.  The one labeled [watermark] is a custom name given to the stream for the video overlay.  You can change [watermark] to another name if you like.  We are taking the output of the [watermark] and merging it into the input [in] stream for final output [out].

Padding Filter vs. Offset
A padding filter is available to add padding to a video overlay (watermark), however it’s a little complicated and confusing to work with.  In the examples above I used an offset value of 10 pixels in the expressions for x and y.

For instance, when calculating the x position for placing the watermark overlay at the right side of the video, 10 pixels away from the edge:

x=main_w-overlay_w-10
or rather
x=((main video width)-(watermark width)-(offset))

Another Watermark positioning Technique
Is to create a .png with the same size as the converted video (i.e. 640×360). Set its background to transparent and place your watermark/logo where you want it to appear over top of the video. This is what's known as a "full overlay". You can actually get rather creative with your watermark design and branding using this technique.

ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=0:0 [out]" outputvideo.flv

Full command line example
This is a more realistic example of what a full FFmpeg command line looks like with various switches enabled. The examples in this article are extremely minified so you can get the basic idea.

ffmpeg -i test.mts -vcodec flv -f flv -r 29.97 -aspect 16:9 -b 300k -g 160 -cmp dct -subcmp dct -mbd 2 -flags +aic+cbp+mv0+mv4 -trellis 1 -ac 1 -ar 22050 -ab 56k -s 640x360 -vf "movie=dv_sml.png [wm]; [in][wm] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" test.flv

Windows users – Please Note
On the Windows OS the file paths used in video filters, such as "C:\graphics\watermarklogo.png", should be modified to be "/graphics/watermarklogo.png". I myself experienced errors being thrown while using the Windows builds of FFmpeg. This behavior may or may not change in the future. Please keep in mind that FFmpeg is a Linux based project that has been ported over to work on Windows.

Watermarks and Branding in General
You can get some really great ideas for watermarking and branding by simply watching TV or videos online. One thing that many people tend to overlook is including their website address in the watermark. Simply displaying it at the end or start of the video is not as effective. So some important elements would be a Logo, perhaps even a phone number or email address. The goal is to give people some piece of useful information for contacting or following you. If you display it as part of your watermark, they have plenty of time to make note of your website URL, phone number or email address. A well designed logo is effective as well. The more professional looking your logo is, the more professional you come off as being to your audience.

If you are running a video portal service and wish to brand the videos in addition to the watermark branding done by your users, it's wise to pick a corner such as the top right or top left to display your watermark. Perhaps even go so far as to give them an option of specifying which corner to display your watermark in, so it does not conflict with their own branding. I thought this was worthwhile to mention since FFmpeg is used in web applications such as PHPmotion.

If you’re working with “full overlays” you can get pretty creative. You can get some really amazing ideas from watching the Major News networks on TV.  Even the Home shopping networks such as QVC.  These are just a few ideas for creative sources to watch and pull ideas from.

Comments
I’ve tried to make this article somewhat useful, however it’s by no means all encompassing.  If there is any interest, I have examples of how to chain a Text Draw Filter to display text along with a Watermark overlay. Even how to incorporate a video fade-in filter.  Working with filter chains can prove to be rather challenging at times.


 

Examples to overlay/watermark image on video:

Centered


ffmpeg -i input.mp4 -i logo.png -filter_complex \
"overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" \
-codec:a copy output.mp4

or with the shortened overlay options:

overlay=(W-w)/2:(H-h)/2

Top left

This is the easy one because the default, if you provide no options to overlay, is to place the image in the top left.

This example adds 5 pixels of padding so the image is not touching the edges:

overlay=5:5

Top right

With 5 pixels of padding:

overlay=main_w-overlay_w-5:5

or with the shortened options:

overlay=W-w-5:5

Bottom right

With 5 pixels of padding:

overlay=main_w-overlay_w-5:main_h-overlay_h-5

or with the shortened options:

overlay=W-w-5:H-h-5

Bottom left

With 5 pixels of padding:

overlay=5:main_h-overlay_h-5

or with the shortened options:

overlay=5:H-h-5

Notes

  • The audio is simply stream copied (remuxed) in this example with -codec:a copy instead of being re-encoded. You may have to re-encode depending on your output container format.
  • See the documentation on the overlay video filter for more information and examples.
  • See the FFmpeg H.264 Video Encoding Guide for more information on getting a good quality output.
  • If your image being overlaid is RGB colorspace (such as most PNG images) you may see a visual improvement if you add format=rgb to your overlay. Note that if you do this and you're outputting H.264, then you will have to add format=yuv420p (this is another filter; it is different from the similarly named option in the overlay filter). So it may look like this:
    overlay=5:H-h-5:format=rgb,format=yuv420p