Project status: Abandoned
Last update: 2017/08/01
I no longer maintain this guide, and it may be outdated.
Read the official documentation instead.
No support is provided for failures or questions.
What exactly does it mean?
Before encoding, you have to filter the source video. In other words, you make it less bad and actually improve the source. It’s more complicated than that, but this sums it up pretty well. Filtering (what the person does) and encoding (what a tool like x264 does) are actually completely different things, but in fansub slang both are usually lumped together as “encoding”.
What editors are there to use?
Most functions are documented in the script itself; plugins are documented on their GitHub pages (some functions as well; use Google for that).
What you have to know about Vapoursynth:
Vapoursynth scripts are written in Python. If you are familiar with Python, you probably won’t have any problems. If you aren’t, that’s still no problem; you really just need some minimal basic knowledge. The most important thing is probably this: GIVE YOUR VARIABLES PROPER NAMES. That is a huge difference compared to Avisynth.
Here is an example:
your_variable = doShit()
However, the first step is to initialise Vapoursynth. Take a look at “What you have to know about vsedit & yuuno” below, because that step varies depending on the editor you use. After doing that, the next step is to load the scripts. With Avisynth and AvsPmod both are loaded by default. With Vapoursynth only plugins are loaded automatically (from the “plugins64” folder). Scripts you have to import manually and give them names (they live in the “site-packages” folder). For example:
import fvsfunc as fvf
import mvsfunc as mvf
import kagefunc as kgf
import havsfunc as hvf
import muvsfunc as muvf
Calling them also differs from calling plugins:
out = fvf.otherShit(src) #variable = your_custom_scriptname.function(your_clip)
Look at the example above: my custom name for fvsfunc.py is “fvf”.
For plugins you can rely on “core” instead of a name:
out = core.std.someShit(src) #variable = core.pluginname.function(your_clip)
You can also put a variable (like your video input) in front of the call:
out = src.std.someShit() #variable = your_clip.pluginname.function()
Keep in mind that this only works if you already have a clip/src and your plugin doesn’t need more than one clip.
Another possibility is to reuse the same variable name:
out = src.some.filtering()
out = out.some.morefiltering()
But giving them different names can have benefits, for example if you have to work with the clip without “morefiltering()” later in your script.
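The benefit is easy to see in plain Python (a toy sketch, not VapourSynth code; lists stand in for clips and the “filters” are made up): once you overwrite a name, the earlier intermediate is gone unless you kept it under its own name.

```python
# Toy stand-ins for filters: each one just appends a tag to a list.
def filtering(clip):
    return clip + ["filtered"]

def morefiltering(clip):
    return clip + ["morefiltered"]

src = ["source"]

# Reusing one name: the intermediate result is overwritten.
out = filtering(src)
out = morefiltering(out)

# Distinct names: the intermediate stays available for later use,
# e.g. as a reference clip for masking or scenefiltering.
filtered = filtering(src)
final = morefiltering(filtered)

print(out)       # ['source', 'filtered', 'morefiltered']
print(filtered)  # ['source', 'filtered'] -- still usable later
```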
What you’ve to know about vsedit & yuuno:
These two steps are for vsedit; yuuno does them automatically:
Always write the following in the first line of your script (not necessary for yuuno, but recommended):
import vapoursynth as vs
from vapoursynth import core
And the following always in the last line of your script:
your_last_variable.set_output()
“your_last_variable” should contain the clip you want to encode.
Now that you know the very basics of Vapoursynth, you have to start by loading a video. This step is the “Basic” part of the guide.
Loading video input:
src = core.lsmas.LWLibavSource(r"Y:\Trash\Anime.m2ts") #Recommended for .m2ts
src = core.ffms2.Source(r"Y:\Trash\Anime.mkv") #Recommended for .mkv and .avi
src = core.d2v.Source(r"Y:\Trash\Anime.d2v") #Recommended for .d2v (indexed from .ts with d2vwitch)
Forcing 23.976 FPS on your video (because the container reports a wrong fps, or just to be safe):
src = src.std.AssumeFPS(fpsnum=24000,fpsden=1001)
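Why 24000/1001 instead of just 23.976? The NTSC film rate is an exact fraction, and 23.976 only approximates it; a quick check in plain Python (not a VapourSynth script):

```python
from fractions import Fraction

# The exact NTSC film rate, as passed to AssumeFPS via fpsnum/fpsden.
ntsc_film = Fraction(24000, 1001)

print(float(ntsc_film))                 # ~23.976024, not exactly 23.976
print(ntsc_film == Fraction("23.976"))  # False: 23.976 is only an approximation
```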
IVTC a telecined 29.97 FPS source to 23.976 FPS:
out = src.vivtc.VFM().vivtc.VDecimate()
If your source isn’t a Japanese cartoon and isn’t 29.97 FPS, you can look HERE for help.
Later in the guide I explain this in more detail and show another method (which only works for anime).
Trimming black frames (for example the usual 24 black frames at the beginning of Blu-rays):
src = src.std.Trim(24,34070)
Using the last frame as the second value:
src = src.std.Trim(24,src.num_frames - 1)
src = src.std.Trim(24,len(src) - 1)
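The indexing is easy to get wrong, so here is a plain-Python sketch (a list stands in for a 100-frame clip): Trim keeps both endpoints, which is why the second value is the frame count minus one.

```python
# A list of frame numbers stands in for a 100-frame clip.
clip = list(range(100))

first = 24
last = len(clip) - 1  # index of the last frame

# std.Trim(first, last) keeps frames first..last inclusive;
# with a Python list that is a slice up to last + 1.
trimmed = clip[first:last + 1]

print(len(trimmed))  # 76 frames remain
print(trimmed[0])    # 24: the old frame 24 becomes the new frame 0
```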
Removing black borders:
src = src.std.CropRel(left=0, right=0, top=0, bottom=0) #Change the values
Fixing the borders (instead of removing them; this works for small borders):
src = src.edgefixer.ContinuityFixer(left=[2,1,1],top=[2,1,1],bottom=[2,1,1], right=[2,1,1]) #Change the values
Burning subtitles into your video (hardsubbing):
sub = src.sub.TextFile("Test.ass")
Adding typeset (typecuts):
typecut1 = core.ffms2.Source("Typecuts/test_2000.avi")
out = fvf.InsertSign(out, typecut1, 2000)
It’s recommended to use the “Create_TypecutsCode.bat” from the Encode-Pack instead of writing this by hand.
Resizing the source to another resolution:
res = src.resize.Spline36(1280,720)
Later in the guide I explain this in more detail and show another method.
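When you pick a target resolution, keep the aspect ratio consistent; a quick plain-Python sanity check (assuming a 1920×1080 source):

```python
# Source and target dimensions for a proportional resize.
src_w, src_h = 1920, 1080
dst_w = 1280

# For the same aspect ratio the height follows from the width.
dst_h = dst_w * src_h // src_w

print(dst_h)                           # 720
print(src_w * dst_h == src_h * dst_w)  # True: aspect ratio preserved
```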
Removing a frame:
out = src.std.DeleteFrames(99999)
Duplicating a frame:
out = src.std.DuplicateFrames(99999)
Freezing a frame:
out = src.std.FreezeFrames(99999,99999,100000) #Replaces frames 99999-99999 (you can set a range here) with frame 100000
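A plain-Python sketch of what these three frame edits do to the timeline (a six-frame list stands in for the clip; this only mimics the behaviour, it is not VapourSynth code):

```python
# A tiny 6-frame "clip"; each element is a frame number.
clip = [0, 1, 2, 3, 4, 5]

# DeleteFrames(2): frame 2 is gone, later frames shift down by one.
deleted = clip[:2] + clip[3:]

# DuplicateFrames(2): frame 2 appears twice, later frames shift up.
duplicated = clip[:3] + clip[2:]

# FreezeFrames(1, 3, 5): frames 1..3 are replaced by copies of
# frame 5; the clip length does not change.
frozen = clip[:]
for i in range(1, 4):
    frozen[i] = clip[5]

print(deleted)     # [0, 1, 3, 4, 5]
print(duplicated)  # [0, 1, 2, 2, 3, 4, 5]
print(frozen)      # [0, 5, 5, 5, 4, 5]
```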
Some functions that are important for you might be missing here; check the documentation in that case: http://www.vapoursynth.com/doc/functions.html
Converting to 16-Bit instead of staying with 8-Bit/10-Bit:
src = fvf.Depth(src, 16)
Write this at the beginning of your script (after loading the video, of course).
Dithering 16-Bit back to 10-Bit/8-Bit for the final encode:
final = fvf.Depth(out, 10)
You usually put this at the end of your script. Depending on whether you use 8-bit or 10-bit x264, you have to pick the output bit depth.
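Why filter in 16-bit at all? A plain-Python sketch of the underlying precision problem: if every intermediate step rounds back to integers (as an 8-bit pipeline effectively does), small changes get rounded away, which is exactly what causes banding in smooth gradients.

```python
# Apply a tiny gain to a pixel ten times: once rounding to integers
# after every step (like staying in 8-bit), once keeping precision
# until the end (like working in 16-bit and dithering down once).
gain = 1.004
pixel = 100

rounded_each_step = pixel
for _ in range(10):
    rounded_each_step = round(rounded_each_step * gain)

rounded_once = round(pixel * gain ** 10)

print(rounded_each_step)  # 100: every step rounded the change away
print(rounded_once)       # 104: the accumulated change survived
```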
IVTC a telecined 29.97 FPS source to 23.976 FPS:
out = video.vivtc.VFM(order=1, cthresh=10)
out = video.vivtc.VDecimate()
src = core.d2v.Source(r"Test.ts.d2v")
c0 = fvf.JIVTC(src, 0, draft=False, thr=15)
c1 = fvf.JIVTC(src, 1, draft=False, thr=15)
c2 = fvf.JIVTC(src, 2, draft=False, thr=15)
c3 = fvf.JIVTC(src, 3, draft=False, thr=15)
c4 = fvf.JIVTC(src, 4, draft=False, thr=15)
out = c3.std.Trim(1204,4345)+c0.std.Trim(4586,17099)+c2.std.Trim(17101,35419)+c2.std.Trim(35660,35731)
The first option uses VIVTC (VFM for field matching, VDecimate for decimation); the second uses JIVTC. I personally recommend JIVTC over Wobbly: it’s faster and the results are similar.
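The `+` in the JIVTC example above splices clips, and std.Trim keeps both endpoints. A plain-Python sketch with small made-up ranges (lists stand in for the clips):

```python
# Lists stand in for two of the JIVTC clips; '+' splices trims
# together just like it does for VapourSynth clips.
c0 = [("c0", n) for n in range(100)]
c2 = [("c2", n) for n in range(100)]

def trim(clip, first, last):
    # std.Trim keeps frames first..last inclusive.
    return clip[first:last + 1]

out = trim(c0, 10, 19) + trim(c2, 40, 44)

print(len(out))  # 15 frames: 10 from c0, 5 from c2
print(out[0])    # ('c0', 10)
print(out[10])   # ('c2', 40): the first frame of the second trim
```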
A basic Vapoursynth script is HERE. You can use it, and it also explains what to do (Step 1 & Step 2).
out = fag3kdb.Fag3kdb(out,radiusy=12, radiusc=8, thry=60, thrc=40, grainy=15, grainc=0, output_depth=16) #Deband+Mask+Merge; Change the values
ref = src #Everything above
db = out.f3kdb.Deband(range=12, y=60, cb=40, cr=40, grainy=15, grainc=0, output_depth=16) #Deband; Change the values
mask = kgf.retinex_edgemask(ref).std.Binarize(5000).std.Inflate() #Mask
merged = core.std.MaskedMerge(db, ref, mask) #Merge all together
out = src.dfttest.DFTTest(sigma=3,tbsize=1)
The first option, fag3kdb, combines f3kdb with the default gradfun3 mask. It works for most cases.
If you want better results in darker scenes and less detail loss there, use the second option: f3kdb combined with a retinex_edgemask. However, it’s not recommended for values above y=60; in those cases take the first option.
The third option, DFTTest, smooths the source very strongly and results in heavy detail loss (use it only for very strong banding, combined with scenefiltering).
out = mvf.BM3D(src, sigma=1, radius1=1) #radius1=0 to make it faster with the risk losing details
out = kgf.hybriddenoise(src, knl=0.5, sigma=1, radius1=0) #radius1=0 to make it faster with the risk losing details
out = src.knlm.KNLMeansCL(a=2, h=0.25, d=3, channels="YUV", device_type='gpu', device_id=0)
out = src.dfttest.DFTTest(sigma=0.5, tbsize=1, sbsize=24, sosize=18)
The first option uses BM3D (the best denoiser) for luma and chroma. It’s slow, but the results are fantastic.
Much faster (but still slow) is the second option, hybriddenoise, which uses BM3D on luma and KNLMeansCL on chroma. You need a good GPU for that.
The third option (and the recommended one) uses KNLMeansCL on luma and chroma. Here as well, you need a good GPU.
The last option, DFTTest, is only useful for sources with little noise. It works “quick & dirty”, so it can result in detail loss and should only be used when you don’t have a GPU.
ref = src.dfttest.DFTTest(sigma=3,tbsize=1)
out = mvf.BM3D(src, ref=ref, sigma=1, radius1=1)
Instead of the default ref clip (identical to the source), you can set up your own ref clip. That gives more accurate results (but may also cost detail). It doesn’t work for hybriddenoise.
out = taa.TAAmbk(src,aatype='Nnedi3')
out = src.deblock.Deblock()
out = src.dfttest.DFTTest(sigma=8,tbsize=1)
out = hvf.Deblock_QED(src)
out = fvf.AutoDeblock(src)
The first option, Deblock, applies a basic and mostly strong deblock. It works for blocked sources. Don’t use it on the whole clip; you have to scenefilter.
The second option, DFTTest, smooths the source very strongly, which fixes blocking.
The third option, Deblock_QED, is a great, soft deblocker. It works pretty well for all kinds of blocking and can be applied to the whole clip with the right strength (but that’s not recommended).
The fourth option, AutoDeblock, tries to detect blocked frames and won’t touch non-blocked frames. It’s recommended for MPEG-2 input, but also works with H.264 input. You can use it on the whole clip.
The topic of resizers is so individual and so big that I’ve written everything about it on the linked extra website.
out = kgf.adaptive_grain(src,0.30)
db = out.f3kdb.Deband(y=20, cb=10, cr=10, grainy=40, grainc=0, output_depth=16)
(just some stuff that’s worth mentioning here):
deband = src.dfttest.DFTTest(sigma=8,tbsize=1)
scenefiltered = fvf.ReplaceFramesSimple(src, deband, mappings="[X Y][X Y]")
It’s recommended to use ReplaceFramesSimple for that.
[X Y] is your [FIRST_FRAME LAST_FRAME]. You can add as many ranges as you want.
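A plain-Python sketch of what such a mappings string does (lists stand in for clips; this toy parser only mimics fvsfunc’s behaviour for simple "[first last]" ranges):

```python
import re

def replace_frames(base, replacement, mappings):
    # Toy version of ReplaceFramesSimple: for every "[first last]"
    # range, take those frames from the replacement clip instead.
    out = list(base)
    for first, last in re.findall(r"\[(\d+)\s+(\d+)\]", mappings):
        for i in range(int(first), int(last) + 1):
            out[i] = replacement[i]
    return out

base = ["src"] * 30
deband = ["deband"] * 30

out = replace_frames(base, deband, "[10 12][20 21]")

print(out[10:13])  # ['deband', 'deband', 'deband']
print(out[13])     # 'src': untouched outside the ranges
```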
masksubtile = kgf.hardsubmask(hardsubsource,cleansource)
mixvideo = core.std.MaskedMerge(hardsubsource,cleansource,masksubtile)
masksubtile2 = kgf.hardsubmask_fades(hardsubsource,cleansource, highpass=4500)
mixvideo2 = core.std.MaskedMerge(hardsubsource,cleansource,masksubtile2)
truemixvideo = fvf.ReplaceFramesSimple(mixvideo, mixvideo2, mappings="[X Y]")
The first one, hardsubmask, is only for normal subtitles without fades or coloured typesets.
The second one, hardsubmask_fades, works for fades and coloured typesets, but is not as accurate as the first one. It’s recommended to use it together with scenefiltering.
Stacking two clips:
out = core.std.StackHorizontal([clip1,clip2])
out = core.std.StackVertical([clip1,clip2])
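A plain-Python sketch of what stacking does to the frame geometry (tiny 2×2 single-plane “frames” as nested lists of pixel values):

```python
# Two 2x2 single-plane "frames" as rows of pixel values.
clip1 = [[1, 2],
         [3, 4]]
clip2 = [[5, 6],
         [7, 8]]

# StackHorizontal: widths add, rows are joined side by side.
horizontal = [r1 + r2 for r1, r2 in zip(clip1, clip2)]

# StackVertical: heights add, the second frame goes below the first.
vertical = clip1 + clip2

print(horizontal)  # [[1, 2, 5, 6], [3, 4, 7, 8]]
print(vertical)    # [[1, 2], [3, 4], [5, 6], [7, 8]]
```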
To sum everything up:
Keep in mind that some steps depend on the source or on what you really want.
For example: not every source needs dehalo/dering, or maybe you don’t want to resize. Steps marked with ( ) depend on preference or aren’t necessary.