Just use yt-dlp instead of relying on websites that shove ads in your face and may do whatever they want to the files you’re downloading?
If predb.net is anything to go by, there hasn’t been a scene release for that series yet: https://predb.net/search/rey mysterio?page=1. Either it’s too new, interest is too low, or a mix of both. Or something else entirely, who’s to say.
Hm, I have yet to mess around with matrix. As with anything fediverse, the added complexity is a little overwhelming for me, and since none of the communities I’m a part of pull me toward matrix, I haven’t been forced to make any decisions yet. I mainly hang out on discord, if that’s something you use.
I’d love to have everything centralized at home, but my net connection tends to fail a lot and I don’t want critical services (AdGuard, Vaultwarden and a bunch of others that aren’t listed) running off of flaky internet, so those will remain in a datacenter. Other stuff might move around, or maybe not. Only time will tell; I’m still at the beginning of my journey, after all!
Pretty sure ruTorrent is a typical download client. The real reason is that it came preinstalled and I never had a reason to change it ¯\_(ツ)_/¯
The rclone mount works via SSH credentials. Torrent files and tracker searches run over simple HTTPS, since both my torrent client and jackett expose public APIs for these purposes, so I can just enter the web addresses of those endpoints into the apps running on my homelab.
Sidenote, since you said sshfs mount: I tried sshfs, but it had significantly lower copy speeds than the rclone mount. Might have been a misconfiguration, but it was more time-efficient to use rclone than to debug my sshfs connection speed.
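For reference, a minimal sketch of what such an rclone-over-SSH setup can look like (remote name, host, paths and key file are made-up placeholders, not my actual config):

# ~/.config/rclone/rclone.conf: an sftp remote pointing at the seedbox
[seedbox]
type = sftp
host = seedbox.example.com
user = myuser
key_file = ~/.ssh/id_ed25519

# mount it on the homelab; the vfs cache helps with copy speeds
rclone mount seedbox:/downloads /mnt/seedbox --vfs-cache-mode writes --daemon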
Allow me to cross-post my recent post about my own infrastructure, which describes pretty much exactly this setup: lemmy.dbzer0.com/post/13552101.
At the homelab (A in your case), I have tailscale running on the host and caddy in docker exposing port 8443 (though the exact port doesn’t matter). The external VPS (B in your case) runs docker-less caddy and tailscale (it probably also works with caddy in docker if you run it in network: host mode). Caddy takes in all web requests to my domain and reverse-proxies them to the tailscale hostname of my homelab on port 8443. It does so with a wildcard entry (*.mydomain.com), forwarding everything, and that way it also handles the wildcard TLS certificate for the domain. The caddy instance on the homelab then checks for specific subdomains or paths, and reverse-proxies the requests again to the targeted docker container.
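To illustrate, the homelab-side Caddyfile can look roughly like this (subdomains, container names and ports are placeholders, not my actual services):

http://:8443 {
    @jellyfin host jellyfin.mydomain.com
    handle @jellyfin {
        reverse_proxy jellyfin:8096
    }

    @vault host vault.mydomain.com
    handle @vault {
        reverse_proxy vaultwarden:80
    }
}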
The original source IP is available to your local docker containers by making use of the X-Forwarded-For header, which caddy handles beautifully. Simply add this block at the top of your Caddyfile on server A:
{
    servers {
        trusted_proxies static 192.168.144.1/24 100.111.166.92
    }
}
replacing the first IP with the gateway in the docker network, and the second IP with the “virtual” IP of server A inside the tailnet. Your containers, if they’re written properly, should automatically read this value and display the real source IP in their logs.
Let me know if you have any further questions.
May I present to you: Caddy but for docker and with labels so kind of like traefik but the labels are shorter 👏 https://github.com/lucaslorentz/caddy-docker-proxy
Jokes aside, I did actually use this for a while and it worked great. The concept of having my reverse proxy config in the same place as my docker container config is intriguing. But managing labels is horrible on unraid, so I moved to classic caddy instead.
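For anyone curious, the labels look roughly like this (domain and service are placeholders; the caddy-docker-proxy container just needs to share a docker network with the target):

services:
  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.mydomain.com
      caddy.reverse_proxy: "{{upstreams 80}}"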
It’s basically a VPS that comes with torrenting software preinstalled. Depending on the provider and package, you’ll be able to install all kinds of webapps on the server. Some even enable Plex/Jellyfin on the more expensive plans.
Nope, don’t have that yet. But since all my compose and config files are neatly organized on the file system, by domain and then by service, I tar up that entire docker dir once a week and pull it to the homelab, just in case.
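The weekly pull is nothing fancy, just a cron entry on the homelab along these lines (SSH alias and paths are placeholders):

# every Sunday at 03:00: tar up the VPS docker dir over SSH and store it dated on the homelab
0 3 * * 0 ssh vps "tar czf - -C /srv docker" > /mnt/user/backups/vps-docker-$(date +\%F).tar.gz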
How have you setup your provisioning script? Any special services or just some clever batch scripting?
Absolutely! To be honest, I don’t even want to have countless machines under my umbrella, and I constantly have consolidation in mind - but right now, each machine fulfills a separate purpose and feels justified in itself (the homelab for large data, the main VPS for anything that’s operation-critical and can’t afford power/network outages, and so on). So unless I find another purpose that none of the current machines can serve, I’ll probably scale vertically instead of horizontally (is that even how you use that expression?)
The crowdsec agent running on my homelab (8 Cores, 16GB RAM) is currently sitting idle at 96.86MiB RAM and between 0.4 and 1.5% CPU usage. I have a separate crowdsec agent running on the Main VPS, which is a 2 vCPU 4GB RAM machine. There, it’s using 1.3% CPU and around 2.5% RAM. All in all, very manageable.
There is definitely a learning curve to it. When I first dove into the docs, I was overwhelmed by all the new terminology, and wrapping my head around it was not super straightforward. Now that I’ve had some time with it though, it’s become more and more clear. I’ve even written my own simple parsers for apps that aren’t on the hub!
What I find especially helpful are features like explain, which lets me pass in logs and simulate which step of the pipeline picks them up and how they get processed. That’s great when trying to diagnose why something is or isn’t happening.
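For example, something along these lines (the log path and type here are just an illustration):

# run a log file (or a single line via --log) through the parsing pipeline and see what matches
cscli explain --file /var/log/caddy/access.log --type caddy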
The crowdsec agent on my homelab runs from the docker container and uses pretty much exactly the stock configuration. This is how the container is launched:
crowdsec:
  image: crowdsecurity/crowdsec
  container_name: crowdsec
  restart: always
  networks:
    socket-proxy:
  ports:
    - "8080:8080"
  environment:
    DOCKER_HOST: tcp://socketproxy:2375
    COLLECTIONS: "schiz0phr3ne/radarr schiz0phr3ne/sonarr"
    BOUNCER_KEY_caddy: as8d0h109das9d0
    USE_WAL: "true"
  volumes:
    - /mnt/user/appdata/crowdsec/db:/var/lib/crowdsec/data
    - /mnt/user/appdata/crowdsec/acquis:/etc/crowdsec/acquis.d
    - /mnt/user/appdata/crowdsec/config:/etc/crowdsec
Then there’s the Caddyfile on the LabProxy, which is where I handle banned IPs so that their traffic doesn’t even hit my homelab. This is the file:
{
    crowdsec {
        api_url http://homelab:8080
        api_key as8d0h109das9d0
        ticker_interval 10s
    }
}

*.mydomain.com {
    tls {
        dns cloudflare skPTIe-qA_9H2_QnpFYaashud0as8d012qdißRwCq
    }
    encode gzip
    route {
        crowdsec
        reverse_proxy homelab:8443
    }
}
Keep in mind that the two machines are connected via tailscale, which is why I can point the bouncer at the crowdsec agent using its tailnet hostname. If the two machines weren’t linked like this, you’d need to expose the agent’s REST API over the web.
I hope this helps clear up some of your confusion! Let me know if you need any further help with understanding it. It only gets easier the more you interact with it!
Don’t worry, all credentials in the two files are randomized, never the actual tokens.
Of course! here you go: https://files.catbox.moe/hy713z.png. The image has the raw excalidraw data embedded, so you can import it to the website like a save file and play around with the sorting if need be.
Oh, that! That app proxies the docker socket connection over a TCP channel, which gives more granular control over which app gets access to which functionality of the docker socket. Directly mounting the socket into an app technically grants full root access to the host system in case of a breach, so this is the advised way to do it.
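In compose terms it’s something like this (I’m using the widely used tecnativa/docker-socket-proxy image as an example here; which endpoints you enable depends on what the consuming app actually needs):

socketproxy:
  image: tecnativa/docker-socket-proxy
  container_name: socketproxy
  restart: always
  networks:
    socket-proxy:
  environment:
    CONTAINERS: "1"   # expose only the (read-only) containers endpoint
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro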
In addition to the other commenter and their great points, here are some more things I like:
Gosh, that’s cute. Probably how I’ll end up too. Right now I’m not ready to let friends use my services. I already have friends and family on adguard and vaultwarden, that’s enough responsibility for now.
If you’re referring to the “LabProxy VPS”: so that I don’t have to point a public domain, one that I (plan to) use more and more in online spaces, at my personal IP address, allowing anyone and everyone to pinpoint my location. Also, I really don’t want to mess with the intricacies of DynDNS. This solution is safer and more reliable than DynDNS plus open ports on a router that’s not at all equipped to fend off attacks from cyberspace.
If you’re referring to the caddy reverse proxy on the LabProxy VPS: I point the domains that I want funneled into my homelab at the external IP of the proxy VPS. The caddy server on that VPS receives these requests and reverse-proxies them to the caddy port on the homelab, using the hostname of my homelab inside my tailscale network. That’s how I make use of the tunnel. This also allows me to send the crowdsec ban decisions from the homelab to the proxy VPS, which then denies all incoming requests from banned source IPs before they ever hit my homelab. Clean and safe!
I’m still on the fence if I want to expose Jellyfin publicly or not. On the one hand, I never really want to stream movies or shows from abroad, so there’s no real need. And in desperate times I can always connect to Tailscale and watch that way. But on the other, it’s really cool to simply have a web accessible Netflix. Idk.
I haven’t researched this, but my gut tells me one should be able to connect the two servers via WireGuard (a direct tunnel, tailscale, zerotier, what have you) and access the docker API over that link without ever making it publicly available.
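Untested, but the idea would be something along these lines (the tailnet IP and context name are placeholders):

# on the remote host: bind the docker API to the tailnet IP only, never a public interface
dockerd -H unix:///var/run/docker.sock -H tcp://100.64.0.5:2375

# on the local machine: create a docker context pointing at that tailnet address and use it
docker context create seedbox --docker "host=tcp://100.64.0.5:2375"
docker --context seedbox ps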
Because using a containerization system to run multiple services on the same machine is vastly superior to running everything on bare metal? Both from a security and an ease-of-use standpoint. Why wouldn’t you use docker?