The ffmpeg program has numerous “switches” that help to adjust and convert audio and video files. Some of them are not explained very well in the documentation, and many websites have confusing postings by well-meaning people trying to make use of the switches. I will try to explain how to use a couple of these switches to correct common sync problems with videos. It will take some time to learn, but is very powerful once you understand it.
The itsoffset switch is used to nudge (forward or backward) the start time of either an audio or video “stream”. A typical video camera will record one video stream and one audio stream which are merged into one file. On my camera, they merge into an MTS high-def formatted file. But sometimes during a conversion to another file format (such as mp4), the audio and video will not remain in sync and the itsoffset switch can be used to adjust them.
The itsoffset switch is nearly always used in conjunction with the “map” switch, since this tells ffmpeg which stream you want to affect, and what streams you wish to merge into a new output file.
For our purposes, we will deal with just one input file that has two streams out of sync (the most common problem). We will use this one input file twice, once for its audio portion, and once for its video portion. We will use itsoffset and map to delay one of the streams, and then merge them back together into another file.
There are a few different ways to accomplish the same result with minor variations, and I will try to demonstrate them. First I will demonstrate the syntax of the “map” and “itsoffset” switches and what they mean.
-map "input file number":"stream number"
The input file number will be 0 or 1, and stream will be 0 (video) or 1 (audio)
An important side note on file numbering with ffmpeg: 0 is the first, 1 is the second
“-map 0:1” means first input file mentioned on the command-line and its stream 1 (audio)
“-map 1:0” means second input file and stream 0 (video)
“-map 1:1” means second input file and stream 1 (audio)
“Itsoffset” is used with a specific amount of time that you want to apply to a file. If the audio is off by 1 second, you might type -itsoffset 1.0 (or -itsoffset 00:00:01.0000). Itsoffset applies to both streams of a file, and we use “map” to split out the stream we want to change. This is why we have to specify the input file twice, once for the stream we don’t change, and once for the stream we do change.
I’ll talk more about how to find the correct time shortly.
“-itsoffset 1.0 -i clip.mts” means to apply a 1 second delay to the input file clip.mts
Also, it matters WHERE you put the itsoffset switch in the command-line.
It applies to the input file that comes just after it.
Trial and Error with a small clip
Finding the correct adjustment time can be tricky. Sometimes it may be out of sync by a tiny amount like 0.150 seconds, but it makes all the difference in the world when you get it correct. Trial and error is the only way I know to find it, so work with a 1-minute clip instead of the whole video to get a fast answer. Once you have the clip fixed the way you like, you can apply the settings to the whole video.
To extract just a 1 minute portion of a video, try this:
ffmpeg -ss 15:30 -i 00001.MTS -vcodec copy -acodec copy -t 1:00 clip.mts
(takes the video 00001.MTS, goes to fifteen minutes and thirty seconds [-ss 15:30] and then takes 1 minute [-t 1:00] from there and creates a new file called clip.mts. There is often more action in the middle of a video, so I chose to start there.)
So we take the short clip and use it to adjust the sync. Go ahead and create a clip so you can experiment with it.
The following examples move a stream by 2.0 seconds so you can better perceive the change (assuming that you follow the examples with a clip of your own).
The following command lines all produce the same result: “delay the audio by 2 seconds”. This means that in the output file, you will see the video start and then 2 seconds later the audio will start. The differences are the location of “itsoffset” and which stream is mapped:
ffmpeg -i clip.mts -itsoffset 2.0 -i clip.mts -vcodec copy -acodec copy -map 0:0 -map 1:1 delay1.mts
Applies itsoffset to file “1” (because it is placed just before the 2nd input), and the map for file 1 points to stream 1 (audio)
ffmpeg -i clip.mts -itsoffset 2.0 -i clip.mts -vcodec copy -acodec copy -map 1:1 -map 0:0 delay2.mts
Applies itsoffset to file “1” (because it is placed just before the 2nd input), and the map for file 1 points to stream 1 (audio). I just changed the order of the two maps; it doesn’t matter.
ffmpeg -itsoffset 2.0 -i clip.mts -i clip.mts -vcodec copy -acodec copy -map 0:1 -map 1:0 delay3.mts
Applies itsoffset to file “0” (because it is placed just before the 1st input), and the map for file 0 points to stream 1 (audio). So I changed both the location of itsoffset and the mapping.
ffmpeg -i clip.mts -itsoffset -2.0 -i clip.mts -vcodec copy -acodec copy -map 0:1 -map 1:0 delay4.mts
This one adjusts the video forward 2 seconds rather than delaying the audio, but accomplishes the same thing. I gave a negative 2.0 value to itsoffset. Itsoffset is just before file 1, and map for file 1 points to stream 0 (video). That is, instead of waiting two seconds to start the audio, we tell the video to nudge back two seconds.
*Note: “-vcodec copy -acodec copy” can be shortened to “-c:v copy -c:a copy”. Either form keeps the same video and audio format in the output file as was in the input file.
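Since finding the right offset is trial and error, a short shell loop can save some retyping: it prints one ready-to-run audio-delay command per candidate offset, which you can review and then pipe to sh to execute. This is just a sketch; clip.mts and the offset values are placeholders based on the examples above.

```shell
#!/bin/sh
# Print one "delay the audio" command per candidate offset.
# Each output file is named after its offset, e.g. delay_0.15.mts,
# so the rendered clips are easy to compare side by side.
for off in 0.05 0.10 0.15 0.20; do
    echo ffmpeg -i clip.mts -itsoffset "$off" -i clip.mts \
        -c:v copy -c:a copy -map 0:0 -map 1:1 "delay_${off}.mts"
done
```

Play each delay_*.mts file, keep the offset whose clip looks in sync, and then apply that value to the full video.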
That’s it for experiments with the clip.
Now let’s deal with the two most common sync problems. Remember that we are using the out-of-sync file as the input twice, splitting out just one stream from each input, applying a delay to one of the streams, and then merging the streams back into an output file.
CASE 1: Audio happens before video (aka “need to delay audio stream 1”):
ffmpeg -i clip.mp4 -itsoffset 0.150 -i clip.mp4 -vcodec copy -acodec copy -map 0:0 -map 1:1 output.mp4
The “itsoffset” in the above example is placed before file 1 (remember that ffmpeg counts from 0, so 0 is the first and 1 is the second), so when the mapping happens, it says “Take the video of file 0 and the audio of file 1, leave the video of file 0 alone, apply the offset to the audio of file 1, and merge them into a new output file”. The delay is only .15 seconds.
CASE 2: Video happens before audio (aka “need to delay video stream 0”):
ffmpeg -i clip.mp4 -itsoffset 0.150 -i clip.mp4 -vcodec copy -acodec copy -map 0:1 -map 1:0 output.mp4
The “itsoffset” in the above example is placed before file 1. When the mapping happens, it says “Take the audio of file 0 and the video of file 1, leave the audio of file 0 alone and apply the offset to the video of file 1 and merge them into a new output file”. The delay is only .15 seconds.
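Both cases assume that stream 0 is video and stream 1 is audio, which is typical for camera files but not guaranteed by every container. Before mapping by index, you can verify with ffprobe (it ships alongside ffmpeg). The snippet below only builds and prints the command; clip.mp4 is a placeholder filename.

```shell
#!/bin/sh
# Build an ffprobe command that lists each stream's index and codec type.
# Run the printed command against a real file; typical camera output is:
#   0,video
#   1,audio
probe_cmd='ffprobe -v error -show_entries stream=index,codec_type -of csv=p=0 clip.mp4'
echo "$probe_cmd"
```

If the audio turns out to be stream 0 on your file, swap the stream numbers in the -map switches accordingly.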
I hope this all made sense to you and helps clarify what can be a very confusing command-line.
You most probably know how to assign a drive letter to a network share, but that’s not impressive. Why not change that by mapping your network shares from the command prompt?
Mapping a Network Drive in Windows
To map a network drive from the command prompt we have to use the net use command. If you already know the UNC path to your share, then you’re good to go–the following command will map the movies share to the S drive.
net use s: \\tower\movies
Your share will probably be protected with some sort of authentication, and the user switch allows us to specify a username and password. The following example assumes:
- The username you authenticate with on the remote machine is HTG
- The password for the HTG account is Pa$$word
net use s: \\tower\movies /user:HTG Pa$$word
The previous commands are not persistent, which means that the minute you reboot your PC the shares will disappear from your computer. To get the shares to survive a reboot, you need to add the persistent switch. Note that you can also just use /P instead.
net use s: \\tower\movies /P:Yes
If you ever need to delete a mapped network drive, you can use the delete switch after specifying its drive letter.
net use s: /Delete
Alternatively, if you ever want to get rid of all the shares on your machine, you can use the wildcard character instead of specifying a specific drive letter.
net use * /Delete
That’s all there is to it.
Batch files have been around since before Windows existed… they are really old! Old or not, I still find myself frequently creating batch files to help me automate common tasks. One common task is uploading files to a remote FTP server. Here’s the way that I handle it.
First, you will have to create a file called fileup.bat in your Windows directory, or at least inside some directory included in your path. You can use the “path” command to see what the current path is.
Inside the batch file, you will want to paste the following:
echo user MyUserName> ftpcmd.dat
echo MyPassword>> ftpcmd.dat
echo bin>> ftpcmd.dat
echo put %1>> ftpcmd.dat
echo quit>> ftpcmd.dat
ftp -n -s:ftpcmd.dat SERVERNAME.COM
You will want to replace the MyUserName, MyPassword and SERVERNAME.COM with the correct values for your ftp server. What this batch file is doing is scripting the ftp utility using the -s option for the command line utility.
The batch file uses the “echo” command to send text to the ftp server as if you had typed it. In the middle of the file you can add extra commands, for example a change-directory command:
echo cd /pathname/>>ftpcmd.dat
To run it, call the batch file by the fileup name that we gave it and pass the name of a file as the parameter. You don’t have to type the .bat part of the filename to make it work, either.
> fileup FileToUpload.zip
Connected to ftp.myserver.com.
220 Microsoft FTP Service
ftp> user myusername
331 Password required for myusername.
230 User myusername logged in.
200 Type set to I.
ftp> put FileToUpload.zip
200 PORT command successful.
150 Opening BINARY mode data connection for FileToUpload.zip
226 Transfer complete.
ftp: 106 bytes sent in 0.01Seconds 7.07Kbytes/sec.
And that’s all there is to it. Now your file should be sitting on the remote server.
This article explains how to add a watermark image to a video file using FFmpeg (www.ffmpeg.org). Typically a watermark is used to protect ownership/credit of the video and for Marketing/Branding the video with a Logo. One of the most common areas where watermarks appear is the bottom right hand corner of a video. I’m going to cover all four corners for you, since these are generally the ideal placements for watermarks. Plus, if you want to get really creative I’ll let you in on an alternative.
FFmpeg is a free software / open source project that produces libraries and programs for handling multimedia such as video. Many of its developers are also part of the MPlayer project. Primarily this project is geared toward the Linux OS; however, much of it has been ported over to work with Windows 32-bit and 64-bit. FFmpeg is being utilized in a number of software applications, including web applications such as PHPmotion (www.phpmotion.com). Not only does it provide handy tools, it also provides extremely useful features and functionality that can be added to a variety of software applications.
FFmpeg on Windows
If you want to use FFmpeg on Windows, I recommend checking out the FFmpeg Windows builds at Zeranoe (http://ffmpeg.zeranoe.com/builds/) for compiled binaries, executables and source code. Everything you need to get FFmpeg working on Windows is there. If you’re looking for a handy Windows GUI command-line tool, check out WinFF (www.winff.org). You can configure WinFF to work with whatever builds of FFmpeg you have installed on Windows. You can also customize your own presets (stored command lines) to work with FFmpeg.
Getting familiar with it.
Perhaps one of the best ways to get familiar with using FFmpeg on Windows is to create a .bat script file that you can modify and experiment with. Retyping command lines from scratch becomes a tedious process, especially when working with a command-line tool you’re trying to become more familiar with. If you’re on Linux you’ll be working with shell scripts instead of .bat files.
Please keep in mind that FFmpeg has been, and still is, a rather experimental project. Working with FFmpeg’s Command Line Interface (CLI) is not easy at first and will take some time to get used to. You need to be familiar with the basics of opening a video file, converting it, and saving the output to a new video file. I strongly recommend creating and working with FFmpeg shell/bat script files while learning the functionality of its Command Line Interface.
-vhook (Video Hook)
Please note that the functionality of “-vhook” (video hook) in older versions of FFmpeg has been replaced with “-vf” (video filters) from libavfilter. You’ll need to use -vf instead of -vhook in the command line. This applies to both Linux and Windows builds.
What we’re going to do
In a nutshell: we’re going to load a .png image as a Video Source “Movie” and use the Overlay filter to position it. While it might seem a little absurd to load an image file as a Video Source “Movie” to overlay, this is the way it’s done. (i.e. movie=watermarklogo.png)
What’s awesome about working with png (portable network graphics) files is that they support background transparency and are excellent to use in overlaying on top of videos and other images.
The Overlay Filter
This filter is used to overlay one video on top of another. It accepts the parameters x:y, where x and y give the top-left position of the overlaid video on the main video. In this case, the top-left position of the watermark graphic on the main video.
To position the watermark 10 pixels to the right and 10 pixels down from the top left corner of the main video, we would use “overlay=10:10”
The following expression variables represent the size properties of the Main and overlay videos.
- main_w (main video width)
- main_h (main video height)
- overlay_w (overlay video width)
- overlay_h (overlay video height)
For example, if the main video is 640×360 and the overlay video is 120×60, then:
- main_w = 640
- main_h = 360
- overlay_w = 120
- overlay_h = 60
We can get the actual size (width and height in pixels) of both the watermark and the video file, and use this information to calculate the desired positioning of things. These properties are extremely handy for building expressions to programmatically set the x:y position of the overlay on top of the main video. (see examples below)
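To see how these variables compose into a filter string, here is a small shell sketch that assembles the -vf argument for a bottom-right placement. The watermarklogo.png name and the 10-pixel margin are simply the values used in this article’s examples.

```shell
#!/bin/sh
# Build the -vf filter string from its parts so the margin and the
# logo filename are easy to change in one place.
logo="watermarklogo.png"
margin=10
vf="movie=${logo} [watermark]; [in][watermark] overlay=main_w-overlay_w-${margin}:main_h-overlay_h-${margin} [out]"
echo "$vf"
```

The resulting string is exactly what gets quoted after -vf on the ffmpeg command line.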
Watermark Overlay Examples
The following 4 video filter (-vf) examples embed an image named “watermarklogo.png” into one of the four corners of the video file; the image is placed 10 pixels away from the sides (offset for desired padding/margin).
Top left corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:10 [out]" outputvideo.flv
Top right corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" outputvideo.flv
Bottom left corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=10:main_h-overlay_h-10 [out]" outputvideo.flv
Bottom right corner
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" outputvideo.flv
These examples use something known as Filter Chains. The pad names for streams used in the filter chain are contained in square brackets [watermark],[in] and [out]. The ones labeled [in] and [out] are specific to video input and output. The one labeled [watermark] is a custom name given to the stream for the video overlay. You can change [watermark] to another name if you like. We are taking the output of the [watermark] and merging it into the input [in] stream for final output [out].
Padding Filter vs. Offset
A padding filter is available to add padding to a video overlay (watermark), however it’s a little complicated and confusing to work with. In the examples above I used an offset value of 10 pixels in the expressions for x and y.
For instance, when calculating the x position that places the watermark overlay on the right side of the video, 10 pixels away from the edge:

x=main_w-overlay_w-10, or rather x=((main video width)-(watermark width)-(offset))

With the example sizes above, that works out to x = 640 - 120 - 10 = 510.
Another Watermark positioning Technique
Another technique is to create a .png the same size as the converted video (e.g. 640×360). Set its background to transparent and place your watermark/logo where you want it to appear over the video. This is what’s known as a “full overlay”. You can get rather creative with your watermark design and branding using this technique.
ffmpeg -i inputvideo.avi -vf "movie=watermarklogo.png [watermark]; [in][watermark] overlay=0:0 [out]" outputvideo.flv
Full command line example
This is a more realistic example of what a full FFmpeg command line looks like with various switches enabled. The examples in this article are extremely minified so you can get the basic idea.
ffmpeg -i test.mts -vcodec flv -f flv -r 29.97 -aspect 16:9 -b 300k -g 160 -cmp dct -subcmp dct -mbd 2 -flags +aic+cbp+mv0+mv4 -trellis 1 -ac 1 -ar 22050 -ab 56k -s 640x360 -vf "movie=dv_sml.png [wm]; [in][wm] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" test.flv
Windows users – Please Note
On the Windows OS, file paths used in video filters such as “C:\graphics\watermarklogo.png” should be modified to be “/graphics/watermarklogo.png”. I myself experienced errors being thrown while using the Windows builds of FFmpeg. This behavior may or may not change in the future. Please keep in mind that FFmpeg is a Linux-based project that has been ported over to work on Windows.
Watermarks and Branding in General
You can get some really great ideas for watermarking and branding by simply watching TV or videos online. One thing that many people tend to overlook is including their website address in the watermark. Simply displaying it at the start or end of the video is not as effective. So some important elements would be a logo, and perhaps a phone number or email address. The goal is to give people some piece of useful information for contacting or following you. If you display it as part of your watermark, they have plenty of time to make note of your website URL, phone number or email address. A well-designed logo is effective as well. The more professional your logo looks, the more professional you come off as being to your audience.
If you are running a video portal service and wish to brand videos in addition to the watermark branding done by your users, it’s wise to pick a corner such as the top right or top left to display your watermark. Perhaps even go so far as to give users an option to specify which corner your watermark appears in, so it does not conflict with their own branding. I thought this was worthwhile to mention since FFmpeg is used in web applications such as PHPmotion.
If you’re working with “full overlays” you can get pretty creative. You can get some really amazing ideas from watching the Major News networks on TV. Even the Home shopping networks such as QVC. These are just a few ideas for creative sources to watch and pull ideas from.
I’ve tried to make this article somewhat useful, however it’s by no means all encompassing. If there is any interest, I have examples of how to chain a Text Draw Filter to display text along with a Watermark overlay. Even how to incorporate a video fade-in filter. Working with filter chains can prove to be rather challenging at times.
Examples to overlay/watermark an image on a video:

Centered:

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=(main_w-overlay_w)/2:(main_h-overlay_h)/2" -codec:a copy output.mp4

or with the shortened overlay options (W and H stand for the main video’s width and height, w and h for the overlay’s):

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=(W-w)/2:(H-h)/2" -codec:a copy output.mp4

Top left corner

This is the easy one because the default, if you provide no options to overlay, is to place the image in the top left. This example adds 5 pixels of padding so the image is not touching the edges:

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=5:5" -codec:a copy output.mp4

Top right corner

With 5 pixels of padding:

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=main_w-overlay_w-5:5" -codec:a copy output.mp4

or with the shortened options:

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=W-w-5:5" -codec:a copy output.mp4

Bottom left corner

With 5 pixels of padding:

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=5:main_h-overlay_h-5" -codec:a copy output.mp4

or with the shortened options:

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=5:H-h-5" -codec:a copy output.mp4

Bottom right corner

With 5 pixels of padding:

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=main_w-overlay_w-5:main_h-overlay_h-5" -codec:a copy output.mp4

or with the shortened options:

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=W-w-5:H-h-5" -codec:a copy output.mp4
- The audio is simply stream copied (remuxed) in this example with -codec:a copy instead of being re-encoded. You may have to re-encode depending on your output container format.
- See the documentation on the overlay video filter for more information and examples.
- See the FFmpeg H.264 Video Encoding Guide for more information on getting a good quality output.
- If your image being overlaid is in an RGB colorspace (such as most PNG images) you may see a visual improvement if you add format=rgb to your overlay. Note that if you do this and you’re outputting H.264, then you will have to add format=yuv420p (this is another filter; it is different from the similarly named option in the overlay filter). So it may look like this:

ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=x=5:y=5:format=rgb,format=yuv420p" -codec:a copy output.mp4