hevc encoding presets
i think encodes are a good fit for i2p, so here's some info on the settings i use. it's heavily opinionated toward how i personally like things and my eyesight lol.
I leave a description of the settings used, like "software encode using CRF 21", but i never explained what any of it means. Generally i use a handful of different presets for most content. I specify software encode because most use the CPU, which gives higher quality, but some use the GPU (which i call "hardware encode"). GPU encodes can often be done in a couple of minutes, whereas CPU encodes can take a couple of hours depending on the content. The majority of this process is fully automated, minus the handful i do manually.
Single pass and double pass
cq/crf is a single pass encode where a lower number means better quality. Below 22-23 is "relatively" transparent to my eyes, in the sense that from a normal viewing distance on a normal sized tv i can't reliably tell it's an encode every time. This is something i've actually tested, flipping back and forth between source and encode to find a sweet spot, and it's just my opinion. My aim is to be able to watch something and not notice artifacts; i'm not really trying to get the smallest file size.
If you're actively looking for artifacts, standing 6 inches from the screen or frame peeping, you will see compression. Depending on the scene you may see compression without looking for it; it's not a truly transparent encode, i make some tradeoffs.
i don't think i've shared double pass encodes here, since they're easy to find (megusta, psa, tgx, rarbg: i think all do this or used to). Double pass encodes are smaller and hit a predictable average bitrate. There are a lot of positives: the size of the encode can be determined up front and they're more space efficient overall. But I don't personally like them because high movement scenes become bit starved and you see the compression. Often, for slightly more space, single pass gives a better looking encode.
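The "size can be easily determined" part is just arithmetic: pick a target size and the average bitrate falls out, which is what the first pass then distributes across scenes. A minimal sketch (the example numbers are hypothetical):

```python
def two_pass_video_kbps(target_mib, duration_min, audio_kbps=128):
    """Average video bitrate (kbit/s) needed to hit a target file size.

    Two-pass encoding picks this number up front; the first pass
    measures scene complexity so the second pass can distribute the
    bits, which is why the final size is predictable.
    """
    seconds = duration_min * 60
    total_kbit = target_mib * 8 * 1024      # MiB -> kbit
    video_kbps = total_kbit / seconds - audio_kbps
    return round(video_kbps)

# e.g. a 42-minute episode squeezed into ~350 MiB alongside 128 kbps audio
print(two_pass_video_kbps(350, 42))
```

This predictability is exactly the tradeoff described above: the rate control must honor the average, so a frantic action scene can't borrow many bits beyond what the first pass budgeted for it.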
Presets
For most new content, some variation of these settings is what i use. The majority are done automatically like this based on the content type.
I group content and think in terms of stuff that has lots of movement, dark scenes, grainy, all bad for encode efficiency. Things that are bright, not a lot of movement, not grainy, large patches of the screen the same color, good for encode efficiency.
Live Action
CPU crf 19-21 (most often 21) medium (for most things), variable bit rate, 10 bit
for the average show, crf 21 medium is a good tradeoff between encode time, file size and quality. fast gives worse quality at a smaller file size (megusta uses this, i think); slow gives the best quality. For shows i know will be visually "pretty" or have a lot of dark scenes i may reduce the crf. A handful i might do on slow. very slow is never worth it; there are diminishing returns.
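The post doesn't name the encoding tool, but as a sketch, this live action preset roughly maps to an ffmpeg/x265 invocation like the following (the helper function and file names are made up for illustration):

```python
def x265_cmd(src, dst, crf=21, preset="medium"):
    """Approximation of the live-action preset: software x265,
    single-pass CRF, 10-bit, everything else passed through."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",
        "-crf", str(crf),                # single-pass quality target
        "-preset", preset,               # speed vs. efficiency tradeoff
        "-pix_fmt", "yuv420p10le",       # 10-bit
        "-c:a", "copy", "-c:s", "copy",  # pass audio/subs untouched
        dst,
    ]

print(" ".join(x265_cmd("episode.mkv", "episode.x265.mkv")))
```

Dropping `crf` to 19 or swapping `preset` to `"slow"` covers the "pretty"/dark-scene cases mentioned above.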
Reality
GPU crf 36, variable bit rate, 10 bit
this number is sort of meaningless because it depends on the graphics card you use. But most reality shows compress fine like this, and i don't personally like or care about reality tv, no offense. So i crunch it with the least amount of resources needed to avoid (super) obvious compression artifacts. To do this on your own gpu you just need to try a couple of crf values and see how they look; newer cards look better.
For filters you can do a denoise with NLMeans (preset high motion) and sharpen with lapsharp, but the space saving isn't worth the quality cost in my opinion. The compression will be more noticeable and you'll have saved like 50mb.
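As a hedged sketch of what the GPU presets could look like with ffmpeg and NVENC (an assumption about tooling, not necessarily what the author runs; NVENC's constant-quality scale is not comparable to x265's CRF, which is why the post says to test values on your own card):

```python
def nvenc_cmd(src, dst, cq=36):
    """Hypothetical hevc_nvenc sketch for the GPU presets
    (cq=36 reality, 48 daily shows, 50 animation). Results
    vary by GPU generation."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "hevc_nvenc",
        "-rc", "vbr", "-cq", str(cq),    # variable bitrate, quality target
        "-pix_fmt", "p010le",            # 10-bit for NVENC
        "-c:a", "copy", "-c:s", "copy",
        dst,
    ]

print(" ".join(nvenc_cmd("reality.mkv", "reality.hevc.mkv")))
```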
Daily/Night shows
GPU crf 48, variable bit rate, 10 bit
i stopped posting these, but you can drop the quality a lot on this type of content and it will look ok. They don't tend to have a lot of movement and are mostly well lit, whereas reality tv usually has a lot of movement.
Animation
GPU crf 50, variable bit rate, 10 bit
animation has large blocks of solid color, is often bright, and large parts of the image don't change frame to frame, so it compresses very well. I find cpu encoding just isn't worth it here; the difference isn't noticeable enough to me, personally. I can often tell when live action has gone through gpu encoding, but with animation it's really only noticeable to me on very fast movement. To me that's enough to justify not tying up a CPU.
filters:
denoise HQDN3D preset medium - This reduces file size quite a bit. It reduces the noise/grain of the image, which isn't very noticeable on most new animation, but it does make the image appear a bit blurry at times.
lapsharp medium tuned for animation - This sharpens the image. The denoise filter is primarily there to increase compression efficiency; sharpening brings the image back closer to the original.
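HQDN3D, NLMeans and lapsharp are HandBrake filter names, so the animation preset plus filters plausibly corresponds to a HandBrakeCLI invocation like this sketch (flag spellings are from memory and should be verified against `HandBrakeCLI --help`):

```python
def animation_cmd(src, dst):
    """Sketch of the animation preset: GPU hevc at quality 50 with
    the denoise + sharpen filter pair described above."""
    return [
        "HandBrakeCLI", "-i", src, "-o", dst,
        "--encoder", "nvenc_h265_10bit",
        "-q", "50",                   # constant-quality target
        "--hqdn3d=medium",            # denoise: big size win on animation
        "--lapsharp=medium",          # sharpen back toward the source
        "--lapsharp-tune=animation",
    ]

print(" ".join(animation_cmd("cartoon.mkv", "cartoon.hevc.mkv")))
```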
Other
Like i said, the majority of this is fully automated, but in some cases i spend manual effort on filters and tuning. I'll encode short snippets, compare frames with the source, and tweak a bit. The final test is putting it on a full sized tv and watching it normally. I do this primarily for older shows, which is why i don't put out a lot of those; it takes a lot of time to get right.
You'll also notice that longer running shows switch from film to digital cameras at some point. The extra grain from film warrants some tweaking on the seasons shot on film. It's a similar story with older animation.
Audio/subs
The audio isn't a big part of the file unless we're talking about a multi, so i don't personally think it's worth re-compressing to something like aac. In most cases i simply pass through the audio and subs untouched from the source.
In a handful of cases i'll mux subs in if they're missing from the source for some reason, or fix the audio if it's not synced and i catch it. Older bluray rips seem to have that issue more; webdl content is mostly fine the way it is from groups like NTb (btn), flux (bhd), edith (scene) etc. For webdl it doesn't matter much beyond the fact that scene releases fastest and p2p goes more for quality.
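Muxing missing subs in without touching anything else is a pure stream copy; a hypothetical ffmpeg sketch (file names invented):

```python
def mux_subs_cmd(video, subs, dst):
    """Remux that adds an external subtitle file while leaving every
    existing stream untouched, per the pass-through approach above."""
    return [
        "ffmpeg", "-i", video, "-i", subs,
        "-map", "0",       # all streams from the original file
        "-map", "1:s",     # plus the subtitle track from the second input
        "-c", "copy",      # no re-encoding anywhere
        dst,
    ]

print(" ".join(mux_subs_cmd("episode.mkv", "episode.srt", "episode.subbed.mkv")))
```

Since nothing is re-encoded, this runs at disk speed and loses no quality.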
Source content
the better the source, the better the encode, so it's worth checking where a group rips from. A webdl (or web) should be the same regardless of group, but amzn webdls will almost always be the best source. hulu, netflix, peacock etc are lower quality. I'm unsure about HBO; at times i think it has been the highest quality option, and apple also releases in high quality. But nflx/hulu are consistently pretty low in comparison, though at release time there may only be one option. Scene groups like edith release the fastest.
Bluray is better if available, and remuxes ensure the best quality. A non remux is likely already an encode, and it's better to avoid encoding an encode if possible. But it will still likely be higher quality than a webdl if the previous encode had a good bitrate and settings. If it's in the size range of the webdl i wouldn't touch it; something several times the size i'd consider if there were no other options.
If you have the physical disc you can always rip it yourself too. Unless it's obscure, old or just came out there's a high chance it's already been done though.
Re: hevc encoding presets
why even transcode things for i2p? can't you just take the existing data?
Re: hevc encoding presets
Some of the groups I mentioned are low quality with fast settings and two pass, which is fine; I just don't like seeing compression like I can with micro encodes. Encodes the way I like aren't always available, or might be too heavy for i2p in my opinion, or they crunch the audio, which I don't like. Sometimes it exists, but often enough it doesn't, so I just do my own.
Also, my primary motivation is to try and get more people using i2p and move capability into i2p. If more people do this sort of thing i2p can become less reliant on clearnet cross seeding and stand on its own.
Re: hevc encoding presets
Just passing by, thanks a lot for this description on the work you do.
I also thank you for the added things; you seem to be really active lately (the most active, i would say).
There hasn't been much activity on the tracker lately, compared to some months ago.
Just wanted to add that some WebDL TV shows can be compressed a lot; some time ago i compressed one 10x and barely saw a difference. Audio tracks in ac3 can take a lot of space if you go for multi; it's a good idea to downmix to 128k 2 channels if size is an issue.
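That downmix suggestion, as a hedged ffmpeg sketch (assuming aac as the target codec, which the thread mentions earlier; file names invented):

```python
def downmix_audio_cmd(src, dst):
    """Keep video and subs untouched, downmix the audio track
    to 128 kbps stereo aac to save space on multi releases."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "copy",                  # video untouched
        "-c:a", "aac",
        "-b:a", "128k", "-ac", "2",      # 128 kbps, 2 channels
        "-c:s", "copy",
        dst,
    ]

print(" ".join(downmix_audio_cmd("episode.mkv", "episode.small-audio.mkv")))
```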
I try to add files from time to time, but i cannot keep up with your pace
By the way, have you implemented something to improve your seeding speed (like several router instances)? I noticed several seed counts on your TV show episodes even before anybody started to download them.
Or was it constant restarting of the torrent client?
Re: hevc encoding presets
Nothing is more subjective than the CRF and the bit rates …
Re: hevc encoding presets
You're welcome, thought it could be useful for people doing their own encodes or thinking about it. It's a personal preference thing; a lot of people prefer lighter encodes for all kinds of reasons, so variety is a good thing. Plenty of people will be happy with crunched audio. Encoding anything reduces quality, so it's always a question of how much quality you're ok with losing.
Automation is how I can post as much as I do; it's the only way to hope to keep up with the deluge of content available in the clear, which is also automated and has been for many years. That's something I've complained about with the postman tracker in the past, so I'm working on a solution, at least for single episodes. It can be time consuming and unreliable, which can require manual input from me and slow the process down until I can sit and do it. If we want i2p to be an alternative to clearnet torrenting (it just isn't now, not even close), my opinion is that this is necessary to expand content offerings. Should add that I understand why postman doesn't want this.
And thanks for the kind words and for adding some things. I've used multiple clients for a while, but I've been changing things up with multiple snark instances. Snark is low resource, so it's easier to take advantage of more tunnels that way. I made a multisnark tool to manage this limitation and have shifted focus to the other limitation for now.
Re: hevc encoding presets
Why not use the more efficient AV1 codec?
»Tests by Moscow State University have shown that AV1 can outperform the encoding and decoding efficiency of HEVC by around 28%. AV1 is able to deliver the same quality as X264 at 55% of the average bitrate, while the best HEVC encoder (x265 in three-pass placebo mode) runs at 67% of the bitrate. In other words, with AV1, distributors can send streams faster and cheaper, and we can enjoy higher resolution content over the same bandwidth.« (from "AV1-Codec ist 30% effizienter als H.265" — "AV1 codec is 30% more efficient than H.265")
Re: hevc encoding presets
Support for hardware decoding is something i'm not sure about. I feel like the average i2p torrent user is less likely to have a compatible device than, say, someone torrenting with a vpn or running a seedbox on private trackers. I get the vibe that some people are here because it's free, or can't afford those things, so maybe they wouldn't have a compatible device. Not sure how many fit that category versus people who are here more for nerdy or privacy reasons. I'll just have to do a couple and see if people are ok with it; maybe it's not a big deal.
on my end there's still some to do. i rebuilt some of my hardware to be modular, so it should be easy to add cpus for the extra load i'd anticipate, and i need to learn it more and fiddle with the presets. i'll also admit i'm not as motivated now as i might have been some years ago to switch to a new codec, since space is relatively cheap now, so it's not going to benefit me much there. i could potentially increase quality some for a similar file size, which i would like, or go for smaller file sizes, which would help with seeding time.
Re: hevc encoding presets
Up to and including FullHD, no hardware decoding is required. Any standard office PC with a GPU integrated in the CPU should be able to play this (AV1) without stuttering.
People's motivation is generally the same everywhere, they are hoping for a small competitive advantage. However, they then use the money saved elsewhere, someone once said, for higher-quality consumer goods. I don't know whether this is the case, but the money saved could also be invested in a more powerful PC.
What is striking is the consistently conservative attitude in the global file sharing scene. While the industry has already declared new codecs to be the standard, the scene is sticking to yesterday's stuff. Just as it took almost ages to establish the switch from Xvid to AVC. Today, AVC is still the preferred codec.
Perhaps it should be kept similar to the German public broadcasting system. Everyone who has a permanent residence has to pay fees, but the fee payers are free to buy the necessary equipment to be able to use the program. What I'm saying is, just do it, people will move, especially if they have to.
Re: hevc encoding presets
I'm talking about tvs, phones, set top boxes. I don't want to assume everyone is watching on a pc. Decode capability outside pcs is still spotty; pcs should be able to play it back fine regardless, but other devices more than 3-4 years old might have issues. On mobiles and laptops with no dedicated hardware support, playback is going to suck battery life, i'd think.
On the encoding side, nvidia released their first gpus with a hardware AV1 encoder last year, and i don't have one yet. I don't care to be first in line for new hardware; i'd rather wait a little and see how they perform and improve. Depending on the content, the efficiency gain isn't as drastic as you might think: i consistently see 10-30%, occasionally up to around 40-45%. I need to do more testing, but that's what i've seen so far. I still have more to learn to increase the efficiency, but i've been hearing similar things from other encoders.
So i need hardware upgrades, it will take much longer to encode, and an unknown number of i2p users can't play it on their device of choice, all for something in the range of a 10-30% space savings/seeding time gain. No doubt av1 is the future and better in every way, but right now the effort/cost just doesn't match the gain enough for me to be highly motivated to switch.
It was kind of the same with hevc. The space savings from avc to hevc were more drastic, and although those encodes took more cpu power and hardware support was spotty, space was more valuable then, so people really pushed to maximize it any way they could.