hevc encoding presets
Posted: Sat Sep 21, 2024 6:28 am
i think encodes are a good fit for i2p, so here's some info on the settings i use. It's heavily opinionated toward how i personally like things, and my eyesight lol.
i usually leave a description of the settings used, like "software encode using CRF 21", but i've never explained what any of it means. Generally i use a handful of different presets for most content. i say "software encode" because most of my encodes use the CPU, which gives higher quality; some use the GPU (what i call "hardware encode"). GPU encodes can often be done in a couple of minutes, whereas CPU encodes can take a couple of hours depending on the content. The majority of this process is fully automated, minus the handful i do manually.
Single pass and double pass
CQ/CRF is a single pass encode where a lower number means better quality. Below 22-23 is "relatively" transparent to my eyes, in the sense that from a normal viewing distance on a normal sized tv i can't reliably tell it's an encode. This is something i've actually tested, flipping back and forth between the source and the encode to find a sweet spot, and it's just my opinion. My aim is to be able to watch something and not notice artifacts; i'm not really trying to have the smallest file size.
If you're actively looking for artifacts, standing 6 inches from the screen or frame peeping, you will see compression. Depending on the scene you may see compression without looking for it; it's not a truly transparent encode, i make some tradeoffs.
i don't think i've shared double pass encodes here, since they're easy to find (megusta, psa, tgx, rarbg: i think all do this or used to). Double pass encodes are smaller and hit a target average bitrate. There are a lot of positives: the size of the encode can be determined in advance and they're more space efficient overall. But i don't personally like them because high movement scenes become bit starved and you see the compression. Often for slightly more space you can do a better looking encode with single pass.
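To make the single versus double pass distinction concrete, here's a rough sketch of the two approaches as ffmpeg/libx265 command lines. My actual tooling differs; the filenames and the 2500k target bitrate are placeholders, not my real values.

```python
# Illustrative only: single pass CRF vs two pass ABR, expressed as
# ffmpeg/libx265 argument lists. Filenames and the 2500k target are
# placeholders, not my actual settings.

def single_pass_crf(src, dst, crf=21, preset="medium"):
    """Quality-targeted: file size varies with how hard the content is."""
    return ["ffmpeg", "-i", src,
            "-c:v", "libx265", "-crf", str(crf), "-preset", preset,
            dst]

def double_pass_abr(src, dst, bitrate="2500k"):
    """Bitrate-targeted: predictable size, but busy scenes can end up
    bit starved at low targets."""
    first = ["ffmpeg", "-y", "-i", src, "-c:v", "libx265",
             "-b:v", bitrate, "-x265-params", "pass=1",
             "-an", "-f", "null", "/dev/null"]   # analysis pass, no output file
    second = ["ffmpeg", "-i", src, "-c:v", "libx265",
              "-b:v", bitrate, "-x265-params", "pass=2",
              dst]
    return [first, second]
```

The tradeoff in one line: with CRF you pick a quality and accept whatever size falls out; with two pass you pick a size and accept whatever quality falls out, which is why the high movement scenes suffer.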
Presets
For most new content, some variation of these settings is what i use. The majority are done automatically like this based on the content type.
i group content by how it compresses: lots of movement, dark scenes, and grain are all bad for encode efficiency. Bright images, little movement, no grain, and large patches of the screen in the same color are good for encode efficiency.
Live Action
CPU, CRF 19-21 (most often 21), preset medium (for most things), variable bit rate, 10 bit
for the average show, CRF 21 medium is a good tradeoff between encode time, file size, and quality. fast gives worse quality at a smaller file size (megusta uses this, for example, i think); slow gives the best quality. For shows i know will be visually "pretty" or have a lot of dark scenes i may reduce the CRF, and a handful i might do on slow. very slow is never worth it; there are diminishing returns.
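As an ffmpeg/libx265 approximation of the live action preset above (the yuv420p10le pixel format for the 10 bit part and the stream-copy flags are my assumptions about a sane default, not my exact pipeline):

```python
# Rough ffmpeg equivalent of the live action preset (CRF 21, preset
# medium, 10 bit). Pixel format and copy flags are assumptions, not
# a copy of the actual pipeline.

def live_action_cmd(src, dst, crf=21, preset="medium"):
    return ["ffmpeg", "-i", src,
            "-c:v", "libx265",
            "-crf", str(crf),
            "-preset", preset,
            "-pix_fmt", "yuv420p10le",   # 10 bit
            "-c:a", "copy",              # pass audio through untouched
            "-c:s", "copy",              # pass subs through untouched
            dst]
```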
Reality
GPU, CRF 36, variable bit rate, 10 bit
this number is sort of meaningless on its own because it depends on the graphics card you use. But most reality shows compress fine like this, and i don't personally like or care about reality tv, no offense. So i crunch it with the least amount of resources needed to avoid (super) obvious compression artifacts. To do this on your own GPU, just try a couple of CRF values and see what it looks like; newer cards look better.
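If your card is NVIDIA, the hardware path looks roughly like this with ffmpeg's hevc_nvenc encoder. The -cq value is the knob to experiment with, and p010le is NVENC's 10 bit pixel format; treat this as a starting point, not my exact settings.

```python
# Sketch of a GPU ("hardware") encode via ffmpeg's hevc_nvenc.
# -cq is a constant-quality target inside VBR rate control;
# lower = better quality. Try a few values on your own card.

def gpu_cmd(src, dst, cq=36):
    return ["ffmpeg", "-i", src,
            "-c:v", "hevc_nvenc",
            "-rc", "vbr", "-cq", str(cq),
            "-pix_fmt", "p010le",        # 10 bit
            "-c:a", "copy", "-c:s", "copy",
            dst]
```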
For filters you can do a denoise with NLMeans preset high motion and sharpen with lapsharp, but the space saving isn't worth the quality cost in my opinion. The compression will be more noticeable and you'll have saved like 50mb.
Daily/Night shows
GPU, CRF 48, variable bit rate, 10 bit
i stopped posting these, but you can bump the quality down a lot on this type of content and it will still look ok. they don't tend to have a lot of movement and are mostly well lit, whereas reality tv usually has a lot of movement.
Animation
GPU, CRF 50, variable bit rate, 10 bit
animation has large blocks of solid color, is often bright, and large parts of the image don't change frame to frame, so it compresses very well. i find CPU encoding just isn't worth it here; the difference isn't noticeable enough to me, personally. i can often tell when live action has gone through a GPU encode, but with animation it's really only noticeable to me on very fast movements. To me that's enough to justify not tying up a CPU.
filters:
denoise HQDN3D preset medium - this reduces file size quite a bit. It reduces the noise/grain of the image, which isn't very noticeable on most new animation, though it does make the image appear a bit blurry at times.
lapsharp preset medium, tuned for animation - this sharpens the image. The denoise filter is primarily there to increase compression efficiency; sharpening brings the image back closer to the original.
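HQDN3D and lapsharp are HandBrake filter names, so the animation preset maps most directly onto a HandBrakeCLI invocation. A sketch of the whole thing as one command; the flag spellings are from memory and can vary a bit by HandBrake version, so double-check them against HandBrakeCLI --help.

```python
# Sketch of the animation preset as a HandBrakeCLI command: NVENC
# HEVC 10 bit at quality 50, denoise then sharpen. Flag names are
# best-effort from memory; verify against your HandBrake version.

def animation_cmd(src, dst, q=50):
    return ["HandBrakeCLI", "-i", src, "-o", dst,
            "-e", "nvenc_h265_10bit",     # GPU encoder, 10 bit
            "-q", str(q),                 # constant quality 50
            "--hqdn3d=medium",            # denoise: shrinks the file
            "--lapsharp=medium",          # sharpen back toward source
            "--lapsharp-tune=animation",
            "--all-audio", "--all-subtitles"]  # pass everything through
```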
Other
Like i said, the majority of this is fully automated, but in some cases i spend manual effort on filters and tuning. i'll encode short snippets, compare frames with the source, and tweak a bit. The final test is putting it on a full sized tv and watching it normally. i do this primarily for older shows, which is why i don't put out a lot of those; it takes a lot of time to get right.
You'll also notice longer running shows switch from film to digital cameras at some point. The extra grain warrants some tweaking on the seasons shot on film. It's a similar story with older animation.
Audio/subs
The audio isn't a big part of the file unless we're talking a multi-audio release, so i don't personally think it's worth re-compressing it to something like AAC. In most cases i simply pass the audio and subs through untouched from the source.
In a handful of cases i'll mux subs in if they're missing from the source for some reason, or fix the audio if it's out of sync and i catch it. Older bluray rips seem to have that issue more; webdl content is mostly fine the way it comes from groups like NTb (btn), flux (bhd), edith (scene), etc. For webdl the group doesn't matter much, other than scene releasing fastest and p2p going more for quality.
Source content
the better the source, the better the encode will be, so where a group rips from is worth checking. A webdl (or web) should be the same regardless of group, but amzn webdls will almost always be the best source; hulu, netflix, peacock, etc. are lower quality. i'm unsure about HBO, at times i think it has been the highest quality option, and apple also releases in high quality. But nflx/hulu are consistently pretty low in comparison. At release time there may only be one option though. Scene groups like edith release the fastest.
Bluray is better if available, and remuxes ensure the best quality. A non-remux is likely already an encode, and re-encoding an encode is better avoided if possible. Still, it will likely be higher quality than a webdl if the previous encode had a good bitrate and settings. If its size is in the range of the webdl i wouldn't touch it; something several times the size i'd consider if there were no other options.
If you have the physical disc you can always rip it yourself too. Unless it's obscure, old, or just came out, there's a high chance it's already been done though.