torrent compiler for season packs (python, chaotic)



Post by cumlord »

It's not pretty and a work in progress, but this is what I've been using to compile torrents. I've been working on modularization and a queue system/GUI to monitor new single episodes, plus one for movies, but this one was made for season packs. Hopefully someone can get it to work, and I'd also appreciate any help.

It can be retooled for movies, with the biggest changes needed in how the .json info is fetched from TMDB. I was working on this but got busy.

The naming scheme should be "/name of series/season 1" or "/name of series (year)/season 1". The script searches TMDB for the containing folder's name, or you can force manual search terms in the "settings" area.
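As a sketch of how that folder layout maps to a search query (`folder_to_search_terms` is a hypothetical helper for illustration, not part of the script below):

```python
import os
import re

def folder_to_search_terms(season_path):
    """Split '/x/Some Series (2019)/season 1' into a TMDB search
    title and an optional first-air year taken from the parentheses."""
    parent = os.path.basename(os.path.dirname(season_path))
    match = re.match(r"^(?P<title>.+?)\s*\((?P<year>\d{4})\)$", parent)
    if match:
        return match.group("title"), match.group("year")
    return parent, ""
```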

Add the full location of each season to "season_path_list". You can add multiple seasons as a list and it will cycle through all of them when run (but as of now the other settings remain the same for each batch, which is why I'm working on a queue system).

If the episodes are encoded, they need to be renamed to include e.g. "_1080p_hevc"; if you use tdarr, the "rename based on codec and resolution" plugin will do this for you. The script then renames each episode from its media info. If a file hasn't been renamed with "_1080p_hevc" it is left as is, but the torrent is still named appropriately (usually). You can set it to rename as your release group and give credit to the original source.
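The script does the swap with a chain of string replacements, one per tdarr marker; the same idea condenses to a single regex (a hypothetical rewrite, the marker list is taken from the script):

```python
import re

# tdarr-style markers the script looks for
TDARR_SUFFIX = re.compile(r"_(1080p|720p|576p|480p|4KUHD|DCI4K|8KUHD|Other)_hevc")

def rename_episode(filename, codec_info, group):
    """Swap a '_<res>_hevc' marker for the detected codec string and
    release group; filenames without a marker pass through unchanged."""
    return TDARR_SUFFIX.sub(" " + codec_info + group, filename)
```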

You need a safely acquired TMDB API key, which is free. The script routes API requests through Tor (one to search for the series, another to get that series' details). Any images found in the season folder have their metadata removed and are resized to under 100 KB for the postman info page (the full-resolution images are kept in their folder).
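The posted chunk strips metadata by re-pasting each image onto a fresh canvas; getting under the 100 KB target could look like this (a sketch only; `shrink_for_info_page` and the quality steps are my assumptions, not the script's exact method):

```python
import io
from PIL import Image

def shrink_for_info_page(src_path, out_path, max_bytes=100_000):
    """Drop metadata by pasting onto a fresh canvas, then step the
    JPEG quality down until the copy fits under max_bytes."""
    with Image.open(src_path) as img:
        clean = Image.new("RGB", img.size)
        clean.paste(img)
    for quality in range(85, 5, -10):
        buf = io.BytesIO()
        clean.save(buf, format="JPEG", quality=quality)
        if buf.tell() <= max_bytes:
            break
    with open(out_path, "wb") as f:
        f.write(buf.getvalue())
```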

MediaInfo is run to calculate the average episode length, then run on a random episode to pick up the audio/video codec and resolution for renaming and for the general info text.
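MediaInfo's `%Duration/String1%` output comes back as text like "45 min 30 s" or "1 h 2 min"; a more robust hypothetical parser than the character-slicing the script currently uses:

```python
import re

def duration_to_minutes(text):
    """Parse mediainfo's '%Duration/String1%' form, e.g. '45 min 30 s'
    or '1 h 2 min', into whole minutes (seconds are dropped)."""
    hours = re.search(r"(\d+)\s*h", text)
    mins = re.search(r"(\d+)\s*min", text)
    total = 0
    if hours:
        total += int(hours.group(1)) * 60
    if mins:
        total += int(mins.group(1))
    return total
```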

It copies everything to a local area, determines the appropriate number of pieces for the size, and runs mktorrent to build the torrent. From there it places a copy/symlink of the files and the .torrent in two places, a "long-term seed" and a "short-term seed" location; there is a setting to do only one or the other. I have it set to copy the files to an SSD and make a symlink in another folder that links back to the original location.
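The piece-size table in the script (`-l 18` below 512 MB, doubling up to `-l 24` above 16 GB) amounts to capping the torrent at roughly 2048 pieces; the same rule as a formula (a sketch of the idea, not the script's code):

```python
import math

def piece_size_exponent(total_bytes, lo=18, hi=24, target_pieces=2048):
    """mktorrent -l exponent: smallest piece length 2**l that keeps
    the piece count at or under target_pieces, clamped to [lo, hi]."""
    if total_bytes <= 0:
        return lo
    exponent = math.ceil(math.log2(total_bytes / target_pieces))
    return max(lo, min(hi, exponent))
```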

If this is overly confusing: my goal was to have it copy to a separate instance and seed from an SSD for the first 3 days or so, to spare my HDDs the load of the initial swarm, and to make symlinks for a separate permaseed that draws from the original file on an HDD. In my case it copies the original from a NAS to an SSD seeder while I/O demand is high, then makes a symlink from the original location on the NAS for the long-term permaseed to see. The .torrent file is placed in a folder (such as "season packs") where BiglyBT will tag it as a "season pack" and begin seeding it.
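In outline, that staging step amounts to the following (a sketch; the paths and the `stage_for_seeding` name are illustrative, not from the script):

```python
import os
import shutil

def stage_for_seeding(payload_dir, ssd_dir, permaseed_dir):
    """Copy the payload onto fast storage for the initial swarm, and
    drop a symlink back to the original for the long-term seed."""
    name = os.path.basename(payload_dir)
    short_term = os.path.join(ssd_dir, name)
    shutil.copytree(payload_dir, short_term, dirs_exist_ok=True)
    long_term = os.path.join(permaseed_dir, name)
    if not os.path.islink(long_term):
        os.symlink(payload_dir, long_term)
    return short_term, long_term
```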

You can do a similar thing with qBittorrent (I would 100% prefer it personally), but it doesn't seem to work as well as BiglyBT with I2P yet.

From there the compiled info text, the .torrent, and any resized images are sent to a "waiting area" where they need to be uploaded to postman manually, but the torrent is reachable from other trackers/DHT by then.

There are some major dependencies it calls from the CLI: mktorrent, mediainfo, and tor. You may need to adjust the commands for these based on your system.
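A quick way to fail early when one of those tools is missing (hypothetical helper, not in the script):

```python
import shutil

def missing_dependencies(tools=("mktorrent", "mediainfo", "tor")):
    """Return the required CLI tools that are not on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]
```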

But this is what I've got so far. Hopefully it's useful, or someone out there who knows more than I do can add on.

Code:

import requests
import json
import subprocess
import os
import shutil
from tqdm import tqdm
import time
import glob
import random
import re
from PIL import Image
import io
import math
from fake_useragent import UserAgent
from stem import Signal
from stem.control import Controller
import threading

season_path_list = ["/some/directory/some series/season 1"]

#Self release group, if left blank it will default to the release group entered below
release_group_name = "NOGRP"

# Release group of source (if encode)
release_group = "EDITH"

#This message will appear at the end of the overview at top of nfo
message_addendum = ""

#This message will interject in the title ex "REPACK"
title_addendum = ""

#WEBDL, WEBRIP, Bluray, HDTV, DVD, etc.
source_type = "WEBDL"

###is this an encode to hevc? y/n.
reencoded = "y"

##if yes, edit the following, do not end any of these with a "."
encode_hw_sw = "software"
cq_or_bitrate = "CQ 21"
encode_msg = "reencoded to hevc"
audio_encode_info = "E-AC3 passthru"

#These aren't needed, don't end any of these messages with a "."
subtitle_encode_info = "subtitle passthru"
encode_filters = ""
encode_addendum = ""

#Set Use_manual_search to Y to automatically use the other values
#Otherwise the manual search items entered here will be applied if no results found
use_manual_search = "n"
manual_search = ""
search_air_date = ""

# sign off in .nfo
sign_off = "please give seed ಥ_ಥ"
tag_line = ""

# trackers
tracker_list = ["http://ahsplxkbhemefwvvml7qovzl5a2b5xo5i7lyai7ntdunvcyfdtna.b32.i2p/announce.php", "http://tracker2.postman.i2p/announce.php", "http://tu5skej67ftbxjghnx3r2txp6fqz6ulkolkejc77be2er5v5zrfq.b32.i2p/announce.php", "http://lnQ6yoBTxQuQU8EQ1FlF395ITIQF-HGJxUeFvzETLFnoczNjQvKDbtSB7aHhn853zjVXrJBgwlB9sO57KakBDaJ50lUZgVPhjlI19TgJ-CxyHhHSCeKx5JzURdEW-ucdONMynr-b2zwhsx8VQCJwCEkARvt21YkOyQDaB9IdV8aTAmP~PUJQxRwceaTMn96FcVenwdXqleE16fI8CVFOV18jbJKrhTOYpTtcZKV4l1wNYBDwKgwPx5c0kcrRzFyw5~bjuAKO~GJ5dR7BQsL7AwBoQUS4k1lwoYrG1kOIBeDD3XF8BWb6K3GOOoyjc1umYKpur3G~FxBuqtHAsDRICkEbKUqJ9mPYQlTSujhNxiRIW-oLwMtvayCFci99oX8MvazPS7~97x0Gsm-onEK1Td9nBdmq30OqDxpRtXBimbzkLbR1IKObbg9HvrKs3L-kSyGwTUmHG9rSQSoZEvFMA-S0EXO~o4g21q1oikmxPMhkeVwQ22VHB0-LZJfmLr4SAAAA.i2p/announce.php", "http://w7tpbzncbcocrqtwwm3nezhnnsw4ozadvi2hmvzdhrqzfxfum7wa.b32.i2p/a", "http://opentracker.dg2.i2p/a", "http://opentracker.skank.i2p/a", "http://omitracker.i2p/announce.php",]       

#set up an account with TMDB and an API key safely (yes it's TMDB despite the vars saying imdb)
#then the script will route api requests through tor
imdb_api = "?api_key="
imdb_access_token = ""


### CHANGE THIS: local folder the files are copied to for creating the torrent
destination_directory = '/some/directory/creating'

#symlink directory, long-term, .torrent only
symlink_directory_long = "/some/directory/season packs"
#symlink directory, long-term, file location
symlink_directory_file = "/some/directory/torrent"

### SSD torrent file location (use if you want a separate SSD seeder for the initial swarm)
ssd_location = "/some/directory/SSD/torrents"
#symlink directory file location short-term
symlink_directory_short = "/some/directory/torrent local"
#symlink directory short-term .torrent file location (in BiglyBT, for example, you can place .torrent files in different folders and they will be given a separate tag for organization, like "season packs")
symlink_directory_short_torrent = "/some/directory/torrents local/season packs"

#specify the folder for torrents and infos waiting to be uploaded
file_path_waiting = "/some/directory/waiting/"

# media info template location (this is the template for media info to use)
video_text_location = "/some/directory/video.txt"
video_text_command_prefix = "--Output=file://"
video_text = (video_text_command_prefix + video_text_location)

video_text_command_prefix_x = "--Output="
codec_command = 'Video;"%Width% %Format%"'
audio_command = 'Audio;"%Format%,c:%Channel(s)%"'
length_command = 'Video;"%Duration/String1%"'
video_info = (video_text_command_prefix_x + codec_command)
audio_info = (video_text_command_prefix_x + audio_command)
length_info = (video_text_command_prefix_x + length_command)

### tor start and stop commands (adjust for your init system)
tor_command_stop = "service tor stop"
tor_command_start = "service tor start"

# delay before placing .torrent file in the local directory
delay_time = 5
#######################################################################

####start tor proxy for api calls
output = subprocess.check_output(tor_command_start, shell=True, text=True)
print(output)
proxies = {
    'http': 'socks5://127.0.0.1:9050',
    'https': 'socks5://127.0.0.1:9050'
}

def worker(exit_event):
    while not exit_event.is_set():
        print("Will continue to change ip address every 10 minutes....\n\n")
        headers = { 'User-Agent': UserAgent().random }
        time.sleep(600)
        with Controller.from_port(port = 9051) as c:
            c.authenticate()
            c.signal(Signal.NEWNYM)
            print(f"Your IP is : {requests.get('https://ident.me', proxies=proxies, headers=headers).text}  ||  User Agent is : {headers['User-Agent']}")
        time.sleep(1)

    print("Thread running tor received exit signal and is terminating")

exit_event = threading.Event()
thread = threading.Thread(target=worker, args=(exit_event,))
thread.start()

#Wait a couple seconds for tor to start
time.sleep(5)
#######################################################################
#this assembles the message and will be placed only if reencoded is "y"
#######################################################################
for item in season_path_list:
    season_path = item

    if encode_filters == "":
        encode_filters_x = ". "
    else:
        encode_filters_x = " with " + encode_filters + ". "

    if subtitle_encode_info == "":
        subtitle_encode_info_x = ". "
    else:
        subtitle_encode_info_x = " with " + subtitle_encode_info + "."
    if encode_addendum == "":
        encode_addendum_x = ""
    else:
        encode_addendum_x = " " + encode_addendum + ". "       
    if message_addendum == "":
        message_addendum_x = ""
    else:
        message_addendum_x = " " + message_addendum     
    if release_group_name == "":
        release_group_name = release_group
    reencode_msg_nfo = encode_msg + ".\n\nThis is a " + encode_hw_sw + " encode using " + cq_or_bitrate + encode_filters_x + audio_encode_info + subtitle_encode_info_x + encode_addendum_x
    print(reencode_msg_nfo)

    if reencoded == "y":
        rencoded_msg = " " + reencode_msg_nfo
    else:
        #reencoded_x = " "
        rencoded_msg = "." 

    if release_group == "":
        release_group_x = ". "
    else:
        release_group_x = " from " + release_group + ". "     

    if source_type == "":
        source_type_x = "Unknown source"
    else:
        source_type_x = source_type + " source"  

    #### determine season number to set as season_number
    def extract_last_two_digits(string):
        last_two_digits = ''.join(filter(str.isdigit, string[-2:]))
        return int(last_two_digits)

    season_number = extract_last_two_digits(os.path.basename(season_path))

    parent_folder_name = os.path.basename(os.path.dirname(season_path))

    print("Season containing folder name:", parent_folder_name)
#######################################################################    
#number of media files    
#######################################################################

    media_file_extensions = ['.mkv', '.mp4', '.avi', '.mpeg', '.mov', '.wmv']

    file_list = []

    search_patterns = [os.path.join(season_path, f"*{ext}") for ext in media_file_extensions]
    for pattern in search_patterns:
        file_list.extend(glob.glob(pattern))

    print("Number of episodes found:", len(file_list))
#######################################################################
# search json with tor proxy
#######################################################################    
    def tmdb_search(query, air_date):
        """Query the TMDB TV search endpoint through the tor proxy."""
        additional_params = "&include_adult=false&language=en-US&page=1"
        search_url = ("https://api.themoviedb.org/3/search/tv?query={}".format(query.replace(" ", "%20"))
                      + "&first_air_date_year={}".format(air_date) + additional_params)
        headers = {
            "accept": "application/json",
            "Authorization": "Bearer " + imdb_access_token
        }
        response = requests.get(search_url, headers=headers, proxies=proxies)
        return json.loads(response.text)

    # Extract the "id" from the "results" list, falling back to the
    # manual search term when the folder-name search finds nothing
    if use_manual_search == "y":
        data = tmdb_search(manual_search, search_air_date)
        result_id = data["results"][0]["id"]
        result_name = data["results"][0]["name"]
    else:
        data = tmdb_search(parent_folder_name, search_air_date)
        print(data)
        try:
            result_id = data["results"][0]["id"]
            result_name = data["results"][0]["name"]
        except IndexError:
            print('Nothing found, falling back to manual search string instead')
            data = tmdb_search(manual_search, search_air_date)
            result_id = data["results"][0]["id"]
            result_name = data["results"][0]["name"]


#######################################################################
# run media info for codec info and renaming the individual files
#######################################################################

    # Print the extracted "id"
    print("TMDB ID: ", result_id)
    print("TMDB given name: ", result_name)
    print("Running media info...")

    ### runs full mediainfo for text output
    media_file_to_use = random.choice(file_list)
    try:
        result = subprocess.run(['mediainfo', video_text, media_file_to_use], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, check=True)
        mediainfo_output = result.stdout
    except subprocess.CalledProcessError as e:
        print("Error running MediaInfo:", e)
    ####get audio info
    try:
        result = subprocess.run(['mediainfo', audio_info, media_file_to_use], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, check=True)
        audio_info_output = result.stdout
        audio_codec_info = audio_info_output.split(",")[0]
        if "E-AC-3" in audio_codec_info:
            audio_codec_info = audio_codec_info.replace("E-AC-3", "DDP")
        if "AC-3" in audio_codec_info:
            audio_codec_info = audio_codec_info.replace("AC-3", "DD")        
        audio_channel_info = re.search(r'c:\s*(\d+)', audio_info_output)
        if audio_channel_info:
            audio_channel_number = audio_channel_info.group(1)
        else:
            audio_channel_number = ""
        if "6" in audio_channel_number:
            audio_channel_number = audio_channel_number.replace("6", "5.1")
        if "8" in audio_channel_number:
            audio_channel_number = audio_channel_number.replace("8", "7.1")     
        audio_channel_number = " " + audio_channel_number  
        if "2" in audio_channel_number: 
            audio_channel_number = audio_channel_number.replace(" 2", "2")              
    except subprocess.CalledProcessError as e:
        print("Error running MediaInfo:", e)

    ### get video info
    try:
        result = subprocess.run(['mediainfo', video_info, media_file_to_use], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, check=True)
        
        codec_x = result.stdout.replace("\n", " ")
        video_codec_info = codec_x
        print(codec_x)
        # find horizontal resolution and convert it to a standard label
        space_index = codec_x.find(" ")
        substring_before_space = codec_x[:space_index]
        if substring_before_space.isdigit():
            found_resolution = int(substring_before_space)
            print("Found horizontal resolution: " + str(found_resolution) + ", converting...")
            if found_resolution < 480:
                resolution_label = "360p"
            elif found_resolution < 1000:
                resolution_label = "480p"
            elif found_resolution < 1800:
                resolution_label = "720p"
            elif found_resolution < 3000:
                resolution_label = "1080p"
            elif found_resolution < 7000:
                resolution_label = "4k"
            else:
                resolution_label = "8k"
            video_codec_info = codec_x.replace(str(found_resolution), resolution_label)
        else:
            print("No valid number found while searching for horizontal resolution")

        print(video_codec_info)

        title_codec_info_for_files = video_codec_info + audio_codec_info + audio_channel_number + "-"
        title_codec_info = video_codec_info + source_type + " " + audio_codec_info + audio_channel_number + "-" + release_group_name
        nfo_message = " in " + video_codec_info + audio_codec_info + audio_channel_number + ". " + release_group + " " + source_type_x + rencoded_msg + message_addendum_x
        print(nfo_message)
        print(rencoded_msg)
    except subprocess.CalledProcessError as e:
        print("Error running MediaInfo:", e)

    print("Calculating average episode runtime...")
    time_x = []
    for string in file_list:
        media_file_path_y = (f"{string}")
        try:
            result = subprocess.run(['mediainfo', length_info, media_file_path_y], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, check=True)
            time_episode_y = result.stdout[:3]
            print("time episode y: " + time_episode_y)
            try:
                time_episode = int(time_episode_y)
            except ValueError:
                time_episode = int(time_episode_y[:1]) * 60
            time_x.append(time_episode)
        except subprocess.CalledProcessError as e:
            print("Error running MediaInfo:", e)  
    average_episode_length_media_info_x = math.ceil(sum(time_x) / len(file_list))
    average_episode_length_media_info_string = str(math.ceil(sum(time_x) / len(file_list))) + " min"

    print("Average episode runtime: " + average_episode_length_media_info_string)    

#######################################################################
#JSON TMDB query through tor
#######################################################################
    url = ("https://api.themoviedb.org/3/tv/{}".format(result_id) + imdb_api)

    response = requests.get(url, proxies=proxies)

    name_of_torrent = result_name + " Season " + str(season_number) + " S" + str(f"{season_number:02}") + " " + title_addendum + " " + title_codec_info
    print("Title given for torrent: " + name_of_torrent)

    # Check if the request was successful 
    genres_list = ""
    genres_text = ""
    first_air_date = ""
    spoken_languages_list = ""
    spoken_languages_text = ""
    networks_list = ""
    networks_text = ""
    status_text = ""
    last_episode_to_air_list = ""
    last_episode_to_air_text = ""
    series_overview_text = ""
    runtime = average_episode_length_media_info_x

    if response.status_code == 200:
        try:
            data = response.json()
            genres_list = data.get("genres", [])
            genres_text = ", ".join(genre["name"] for genre in genres_list)
            first_air_date = data.get("first_air_date")
            #episode_run_time_list = data.get("episode_run_time", [])
            #runtime = data["last_episode_to_air"]["runtime"]
            spoken_languages_list = data.get("spoken_languages", [])
            spoken_languages_text = ", ".join(language["name"] for language in spoken_languages_list)
            networks_list = data.get("networks", [])
            networks_text = ", ".join(networks["name"] for networks in networks_list)
            status_text = data.get("status")
            last_episode_to_air_list = data.get("last_episode_to_air") or {}
            last_episode_to_air_text = last_episode_to_air_list.get("overview", "")
            series_overview_text = data.get("overview")
        except Exception as e:
            print("Error while processing JSON data:", e)    
    else:
        print("Error: HTTP request returned a non-200 status, unable to connect to TMDB")    

    ######### setting vars to use in output NFO

    first_air_date_x = ("First air date: " + first_air_date)
    genres_x = ("Genres: " + genres_text)
    networks_x = ("Networks: " + networks_text)
    runtime_x = ("Runtime: {} minutes".format(runtime))
    status_x = ("Status: " + status_text)
    overview_x = ("Season " + str(season_number) + " of " + result_name + nfo_message)
    series_overview_x = ("Overview: " + series_overview_text)
    languages_x = ("Languages: " + spoken_languages_text)
    tmdb_url_x = "TMDB: https://www.themoviedb.org/tv/" + str(result_id)
    episodes_found_x = "Episodes: " + str(len(file_list))
    sign_off_x = sign_off
    line_bar_x = "=============================================="
    line_bar_thin_x = "----------------------------------------------"

    ######### THINGS TO SAVE TO TEXT FILE
    ##### Name of torrent

    text_strings_to_save = [
    "Torrent title: " + "\n" + name_of_torrent + "\n",
    line_bar_x,
    "                   I N F O",
    line_bar_thin_x + "\n",
    overview_x + "\n",
    line_bar_x + "\n",
    series_overview_x + "\n",
    genres_x,
    first_air_date_x,
    runtime_x,
    status_x,
    networks_x,
    languages_x,
    episodes_found_x,
    tmdb_url_x + "\n",
    line_bar_x,
    "                   N O T E S",
    line_bar_thin_x + "\n",
    sign_off_x,
    mediainfo_output,
    ]


    file_path = file_path_waiting + name_of_torrent + ".txt"   

    with open(file_path, 'w') as file:
        for text_string in text_strings_to_save:
            file.write(text_string + '\n') 

    file_path = season_path + "/" + result_name + " season " + str(season_number) + " info.txt"   

    with open(file_path, 'w') as file:
        for text_string in text_strings_to_save[1:]:
            file.write(text_string + '\n')

    count_rename = 0
    for string in file_list:
        if reencoded =="y":
            media_file_path = (f"{string}")
            media_name = os.path.basename(media_file_path)

            result_string1 = media_name.replace("_1080p_hevc", " " + title_codec_info_for_files + release_group_name)
            result_string2 = result_string1.replace("_720p_hevc", " " + title_codec_info_for_files + release_group_name)
            result_string3 = result_string2.replace("_480p_hevc", " " + title_codec_info_for_files + release_group_name)
            result_string4 = result_string3.replace("_576p_hevc", " " + title_codec_info_for_files + release_group_name)
            result_string5 = result_string4.replace("_4KUHD_hevc", " " + title_codec_info_for_files + release_group_name)
            result_string6 = result_string5.replace("_DCI4K_hevc", " " + title_codec_info_for_files + release_group_name)
            result_string7 = result_string6.replace("_8KUHD_hevc", " " + title_codec_info_for_files + release_group_name)
            result_string8 = result_string7.replace("_Other_hevc", " " + title_codec_info_for_files + release_group_name)
            name_of_torrent_file = result_string8

            current_file_name = media_file_path
            new_file_name = os.path.dirname(media_file_path) + "/" + name_of_torrent_file
            try:
                os.rename(current_file_name, new_file_name)
                media_file_path = new_file_name
                file = media_file_path
                count_rename = 1 + count_rename
                print("Renaming episode " + str(count_rename) + "/" + str(len(file_list)))
            except FileNotFoundError:
                print(f"File '{current_file_name}' not found.")
            except FileExistsError:
                print(f"File '{new_file_name}' already exists.")
            except Exception as e:
                print(f"An error occurred: {e}")
        else:
            media_file_path = (f"{string}")
            media_name = os.path.basename(media_file_path)
            parts = os.path.splitext(media_name)
            if release_group not in media_name:
                name_of_torrent_file = parts[0] + "-" + release_group + parts[1]
                current_file_name = media_file_path
                new_file_name = os.path.dirname(media_file_path) + "/" + name_of_torrent_file
            else:
                current_file_name = media_file_path
                new_file_name = media_file_path
            try:
                os.rename(current_file_name, new_file_name)
                media_file_path = new_file_name
                file = media_file_path
                count_rename = 1 + count_rename
                print("Renaming episode " + str(count_rename) + "/" + str(len(file_list)))
            except FileNotFoundError:
                print(f"File '{current_file_name}' not found.")
            except FileExistsError:
                print(f"File '{new_file_name}' already exists.")
            except Exception as e:
                print(f"An error occurred: {e}")
    print("Renaming complete")

    ## copy
    print("Info txt file saved to ", file_path)
    print("Deleting extraneous files in 5 seconds..")

    time.sleep(5)
    # Delete any "._" files
    for filename_y in os.listdir(season_path):
        if filename_y.startswith("._"):
            file_path_y = os.path.join(season_path, filename_y)
            
            if os.path.exists(file_path_y):
                os.remove(file_path_y)
                print(f"Deleted file: {file_path_y}")
            else:
                print(f"File not found: {file_path_y}")            

    print("Deletion of '._' files completed.")

    ### Remove metadata from images
    def remove_metadata_in_place(folder_path):
        files = os.listdir(folder_path)

        for file_name in files:
            if file_name.lower().endswith(('.png', '.jpg', '.jpeg', '.gif', '.bmp')):
                file_path = os.path.join(folder_path, file_name)
                print("removing metadata from images...")
                with Image.open(file_path) as img:
                    img_without_metadata = Image.new("RGB", img.size)
                    img_without_metadata.paste(img)
                    img_without_metadata.save(file_path)
    remove_metadata_in_place(season_path)

#######################################################################
#copy files locally to run mktorrent
#######################################################################
    print("Copying files locally to be processed...")

    source_directory = season_path
    destination_directory_x = destination_directory + "/" + name_of_torrent 
    print("Name of torrent: " + name_of_torrent)

    if not os.path.exists(destination_directory_x):
        os.makedirs(destination_directory_x)

    chunk_size = 1024 

    for filename in os.listdir(source_directory):
        source_file = os.path.join(source_directory, filename)
        if os.path.isfile(source_file): 
            file_size = os.path.getsize(source_file)
            destination_file = os.path.join(destination_directory_x, filename)
            
            print("Copying " + filename)
            with tqdm(total=file_size, unit='B', unit_scale=True, unit_divisor=1024) as pbar:
                with open(source_file, 'rb') as src_file, open(destination_file, 'wb') as dest_file:
                    while True:
                        data = src_file.read(chunk_size)
                        if not data:
                            break
                        dest_file.write(data)
                        pbar.update(len(data))

    print("Location of the copied files: " + destination_directory_x)   

    print("Deleting extraneous files in 5 seconds..")

    time.sleep(5)
    for filename_y in os.listdir(destination_directory_x):
        if filename_y.startswith("._"):
            file_path_y = os.path.join(destination_directory_x, filename_y)
            if os.path.exists(file_path_y):
                os.remove(file_path_y)
                print(f"Deleted file: {file_path_y}")
            else:
                print(f"File not found: {file_path_y}")

    # mktorrent
    #This adjusts the piece size for mktorrent based on total size in GB
    total_size = 0
    for dirpath, dirnames, filenames in os.walk(season_path):
        for filename in filenames:
            file_path = os.path.join(dirpath, filename)
            total_size += os.path.getsize(file_path)
    total_size = total_size / 1000000000
    print("total size: " + str(total_size))
    if total_size < 0.512:
        piece_size = "-l 18"
    elif total_size <= 1.024:
        piece_size = "-l 19"
    elif total_size <= 2:
        piece_size = "-l 20"
    elif total_size <= 4:
        piece_size = "-l 21"
    elif total_size <= 8:
        piece_size = "-l 22"
    elif total_size <= 16:
        piece_size = "-l 23"
    else:
        piece_size = "-l 24"
    print("For file size " + str(total_size) + "G, " + piece_size + " will be used in MKtorrent")

    tracker_string = ' '.join('-a ' + tracker for tracker in tracker_list)
    torrent_file_name = '"' + destination_directory_x + ".torrent" + '"'
    file_name_quoted = '"' + destination_directory_x + '"'

    mktorrent_command = "mktorrent -v " + piece_size + " " + tracker_string + " -o " + torrent_file_name + " " + file_name_quoted

    try:
        completed_process = subprocess.Popen(mktorrent_command, text=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        stdout, stderr = completed_process.communicate()
        print("Standard Output:")
        print(stdout)
        
        if stderr == "":
            print("Success. No errors reported.")
        else:
            print("Standard Error:")    
            print(stderr)
    except subprocess.CalledProcessError as e:
        print(f"Command failed with return code {e.returncode}:")
        print(e.stderr)

    torrent_file_name_no_qoute = destination_directory_x + ".torrent"

#######################################################################
    ######## copy this to SSD
#######################################################################
    source_directory = destination_directory_x
    destination_directory_ssd = ssd_location + "/" + name_of_torrent 

    if not os.path.exists(destination_directory_ssd):
        os.makedirs(destination_directory_ssd)
    chunk_size = 1024 
    for filename in os.listdir(source_directory):
        source_file = os.path.join(source_directory, filename)
        if os.path.isfile(source_file): 
            file_size = os.path.getsize(source_file)
            destination_file = os.path.join(destination_directory_ssd, filename)
            
            print("Copying " + filename)
            with tqdm(total=file_size, unit='B', unit_scale=True, unit_divisor=1024) as pbar:
                with open(source_file, 'rb') as src_file, open(destination_file, 'wb') as dest_file:
                    while True:
                        data = src_file.read(chunk_size)
                        if not data:
                            break
                        dest_file.write(data)
                        pbar.update(len(data))


    # Display the location of the copied file
    print("Location on SSD: " + destination_directory_ssd)   
#######################################################################
    #### Copy .torrent files
#######################################################################
    source_file = torrent_file_name_no_qoute
    source_file_base = os.path.basename(source_file)

    print("Copying .torrent to ssd...")
    file_size = os.path.getsize(source_file)
    chunk_size = 1024  # 1 KB
    with tqdm(total=file_size, unit='B', unit_scale=True, unit_divisor=1024) as pbar:
        with open(source_file, 'rb') as src_file, open(os.path.join(ssd_location, os.path.basename(source_file)), 'wb') as dest_file:
            while True:
                data = src_file.read(chunk_size)
                if not data:
                    break
                dest_file.write(data)
                pbar.update(len(data))

    torrent_ssd_destination = os.path.join(ssd_location, source_file_base)
    print("SSD location of .torrent: " + torrent_ssd_destination)
#######################################################################
    ####### MAKE SYMLINK TO SHORT-TERM LOCAL DIRECTORY
#######################################################################
    ### destination_directory_ssd is torrent directory on ssd
    ### torrent_ssd_destination is location of .torrent file on ssd

    # grab the names of the directory and torrent
    filename_symlink_file = os.path.basename(destination_directory_ssd)
    filename_symlink_torrent = os.path.basename(torrent_ssd_destination)

    symlink_name = os.path.join(symlink_directory_short, filename_symlink_file)
    containing_folder = os.path.dirname(symlink_name)
    try:
        os.symlink(destination_directory_ssd, symlink_name)
        print("Short-term symlink location: " + containing_folder)
    except FileExistsError:
        # Replace the stale link itself; os.replace() on the target
        # would move the SSD directory rather than update the link
        os.remove(symlink_name)
        os.symlink(destination_directory_ssd, symlink_name)
        print("Symbolic link replaced in directory " + containing_folder)

#######################################################################
    ####### MAKE SYMLINK TO LONG-TERM DIRECTORY
#######################################################################

    symlink_name = os.path.join(symlink_directory_file, name_of_torrent)
    containing_folder = os.path.dirname(symlink_name)
    try:
        os.symlink(season_path, symlink_name)
        print("Long-term symlink location: " + containing_folder)
    except FileExistsError:
        # Replace the stale link itself rather than moving season_path
        os.remove(symlink_name)
        os.symlink(season_path, symlink_name)
        print("Symbolic link replaced in directory " + containing_folder)


#######################################################################
    # COPY TORRENT FILE TO LONG-TERM SEED
#######################################################################
    print("Copying torrent file to long-term seed location...")
    source_file = torrent_file_name_no_qoute
    source_file_base = os.path.basename(source_file)
    file_size = os.path.getsize(source_file)
    chunk_size = 1024  
    with tqdm(total=file_size, unit='B', unit_scale=True, unit_divisor=1024) as pbar:
        with open(source_file, 'rb') as src_file, open(os.path.join(symlink_directory_long, os.path.basename(source_file)), 'wb') as dest_file:
            while True:
                data = src_file.read(chunk_size)
                if not data:
                    break
                dest_file.write(data)
                pbar.update(len(data))
    torrent_destination_final_long = os.path.join(symlink_directory_long, source_file_base)
    print(f"Location of torrent file on long-term seed: {torrent_destination_final_long}")

#######################################################################
    # COPY TORRENT FILE TO SHORT-TERM SEED
#######################################################################
    print("Copying torrent file to short-term seed location...")
    source_file = torrent_file_name_no_qoute
    source_file_base = os.path.basename(source_file)
    file_size = os.path.getsize(source_file)
    chunk_size = 1024  # 1 KB

    with tqdm(total=file_size, unit='B', unit_scale=True, unit_divisor=1024) as pbar:
        with open(source_file, 'rb') as src_file, open(os.path.join(symlink_directory_short_torrent, os.path.basename(source_file)), 'wb') as dest_file:
            while True:
                data = src_file.read(chunk_size)
                if not data:
                    break
                dest_file.write(data)
                pbar.update(len(data))
    torrent_destination_final_short = os.path.join(symlink_directory_short_torrent, source_file_base)
    print(f"Location of torrent file on short-term seed: {torrent_destination_final_short}")


#######################################################################
    # COPY TORRENT FILE TO WAITING AREA
#######################################################################
    print("Copying torrent file to waiting area...")
    source_file = torrent_file_name_no_qoute
    source_file_base = os.path.basename(source_file)
    file_size = os.path.getsize(source_file)
    chunk_size = 1024  # 1 KB
    with tqdm(total=file_size, unit='B', unit_scale=True, unit_divisor=1024) as pbar:
        with open(source_file, 'rb') as src_file, open(os.path.join(file_path_waiting, os.path.basename(source_file)), 'wb') as dest_file:
            while True:
                data = src_file.read(chunk_size)
                if not data:
                    break
                dest_file.write(data)
                pbar.update(len(data))
    torrent_destination_final_waiting = os.path.join(file_path_waiting, source_file_base)
    print(f"Location of torrent file in waiting area: {torrent_destination_final_waiting}")

#######################################################################
#delete files in creation area
#######################################################################
    directory_to_delete = destination_directory_x
    try:
        shutil.rmtree(directory_to_delete)
        print(f"{directory_to_delete} and its contents have been deleted.")
    except FileNotFoundError:
        print(f"{directory_to_delete} does not exist.")

    #delete torrent file
    if os.path.exists(torrent_file_name_no_qoute):
        os.remove(torrent_file_name_no_qoute)
        print(f"{torrent_file_name_no_qoute} has been deleted.")
    else:
        print(f"{torrent_file_name_no_qoute} does not exist.")      

#######################################################################
#resize images under 100kb
#######################################################################
    search_word = ''
    exclude_phrases = ['._']
    image_extensions = ['.jpg', '.jpeg', '.png']
    found_image_files = []

    def file_name_contains_exclude_phrases(file_name, exclude_phrases):
        return any(phrase in file_name for phrase in exclude_phrases)
    for root, dirs, files in os.walk(season_path):
        for file in files:
            _, file_extension = os.path.splitext(file)
            if file_extension.lower() in image_extensions:
                if search_word in file and not file_name_contains_exclude_phrases(file, exclude_phrases):
                    found_image_files.append(os.path.join(root, file))
    from PIL import Image
    import io
    def resize_image_to_target_size(image, target_size_kb):
        while True:
            img_buffer = io.BytesIO()
            image = image.convert("RGB") 
            image.save(img_buffer, format="JPEG", quality=40)
            image_size_bytes = len(img_buffer.getvalue())
            if image_size_bytes <= target_size_kb * 1024:
                return image
            width, height = image.size
            new_width = int(width * 0.6)
            new_height = int(height * 0.6)
            image = image.resize((new_width, new_height), Image.Resampling.LANCZOS)
    if len(found_image_files) == 0:
        print("No season image file found, continuing without. If one was included, please make sure the image is named (Series name) Season x.png/jpeg")
    else:
        print("Images found: " + str(len(found_image_files)) + ", resizing them now and placing copies in the waiting area. Make sure the season images are named (Series name) Season x.png/jpeg")
        for input_image in found_image_files:
            # splitext keeps dots elsewhere in the path intact,
            # unlike partition('.') which cut at the first dot
            base_name, _ = os.path.splitext(os.path.basename(input_image))
            image_name = os.path.join(file_path_waiting, base_name + ".jpeg")
            original_image = Image.open(input_image)
            target_size_kb = 180
            resized_image = resize_image_to_target_size(original_image, target_size_kb)
            resized_image.save(image_name, format="JPEG", quality=15)
            print(os.path.basename(input_image) + " resized and moved to " + file_path_waiting)
  
exit_event.set()
thread.join() 
output = subprocess.check_output(tor_command_stop, shell=True, text=True)
print(output)
print("Success")
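A note on the piece-size ladder near the top of the script: the chain of if statements could be collapsed into a single table lookup. A sketch (the function name and threshold table are mine, mirroring the script's cut-offs):

```python
def piece_size_flag(total_size_gib):
    # (upper bound in GiB, mktorrent -l exponent), same cut-offs as the script
    thresholds = [(0.512, 18), (1.024, 19), (2, 20), (4, 21), (8, 22), (16, 23)]
    for bound, exponent in thresholds:
        if total_size_gib <= bound:
            return "-l " + str(exponent)
    return "-l 24"  # anything over 16 GiB gets 16 MiB pieces
```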
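The mktorrent command line is pasted together with manual double quotes, which breaks on file names that themselves contain quotes or dollar signs. shlex.quote handles all the shell-special characters; a sketch (function name and signature are mine):

```python
import shlex

def build_mktorrent_command(piece_size, trackers, payload_path):
    # piece_size is e.g. "-l 20"; every path and URL is shell-quoted so the
    # shell=True call survives spaces, quotes and $ in names
    tracker_args = " ".join("-a " + shlex.quote(t) for t in trackers)
    return ("mktorrent -v " + piece_size + " " + tracker_args
            + " -o " + shlex.quote(payload_path + ".torrent")
            + " " + shlex.quote(payload_path))
```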
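The chunked read/write/tqdm loop is pasted five times (the SSD copy plus the four .torrent copies); it could be factored into one helper. A sketch, assuming tqdm is installed as the script already requires; the 1 MiB chunk size is my choice and copies far faster than 1 KB reads:

```python
import os
from tqdm import tqdm

def copy_with_progress(src, dst_dir, chunk_size=1024 * 1024):
    """Copy src into dst_dir with a progress bar; return the destination path."""
    os.makedirs(dst_dir, exist_ok=True)
    dst = os.path.join(dst_dir, os.path.basename(src))
    with tqdm(total=os.path.getsize(src), unit='B',
              unit_scale=True, unit_divisor=1024) as pbar, \
         open(src, 'rb') as fin, open(dst, 'wb') as fout:
        while chunk := fin.read(chunk_size):
            fout.write(chunk)
            pbar.update(len(chunk))
    return dst
```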
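On the symlink sections: replacing an existing symlink safely means removing or renaming the link itself, never the target directory. A reusable helper could look like this (the name is mine):

```python
import os

def replace_symlink(target, link_path):
    # Build the new link under a temporary name, then rename it over any
    # existing link: the rename swaps the link, not the target it points to.
    tmp = link_path + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(target, tmp)
    os.replace(tmp, link_path)
    return link_path
```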
This is the video.txt file, a template for MediaInfo output:


General;"
==============================================
		M E D I A I N F O
----------------------------------------------
General
==============================================
Title.................: %Title%
Size..................: %FileSize/String4%
Last Modified.........: %File_Modified_Date_Local%
Overall Bitrate.......: %OverallBitRate/String%
Frame Rate............: %FrameRate/String%
Format................: %Format%
Format Version........: %Format_Version%
Duration..............: %Duration/String3%
Video Codecs..........: %Video_Format_WithHint_List%
Audio Codecs..........: %Audio_Format_WithHint_List%
Text Codecs...........: %Text_Format_WithHint_List%
Video Count...........: %VideoCount%
Audio Count...........: %AudioCount%
Text Count............: %TextCount%"\n


Video;"
==============================================
Video %StreamKindPos%
==============================================
ID....................: %ID/String%
Format................: %Format_Commercial% %Format/Info%
Resolution............: %Width%x%Height%
Aspect Ratio..........: %DisplayAspectRatio/String%
Codec.................: %Codec/String%%Format_Profile%
Codec ID..............: %CodecID%
Duration..............: %Duration/String1%
Bit Rate..............: %BitRate/String%
Bit Depth.............: %BitDepth/String%
Framerate.............: %FrameRate% fps
Frame Rate Mode.......: %FrameRate_Mode%\n"


Audio;"
==============================================
Audio %StreamKindPos%
==============================================
ID....................: %ID/String%
Audio.................: %Language/String%
Format................: %Format_Commercial% %Format/Info%
Duration..............: %Duration/String1%
Frame Rate............: %FrameRate%
Bitrate...............: %BitRate/String%
Bitrate Mode..........: %BitRate_Mode%
Channel Layout........: %ChannelPositions%
Channels..............: %Channel(s)/String%"\n

Text;"
==============================================
Subtitle %StreamKindPos%
==============================================
Stream Kind...........: %StreamKind/String%
ID....................: %ID%
Format................: %Format_Commercial% %Format/Info%
Codec ID..............: %CodecID%
Default...............: %Default%
Forced................: %Forced%
Language..............: %Language/String%\n"

Menu;"
==============================================
Chapters %StreamKindPos%
==============================================
Chapter Pos Begin.....: %Chapters_Pos_Begin%
Chapter Pos End.......: %Chapters_Pos_End%"
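For reference, a template file like the one above is handed to MediaInfo through its --Inform=file://<path> option. A minimal sketch of building that invocation from Python (function name is mine; it assumes the template is saved as video.txt next to the script):

```python
def mediainfo_command(media_path, template_path="video.txt"):
    # MediaInfo loads a custom output template via --Inform=file://<path>
    return ["mediainfo", "--Inform=file://" + template_path, media_path]
```

Passing a list to subprocess.run avoids the shell quoting issues entirely.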