  • The rclone mount authenticates via SSH credentials. Torrent files and tracker searches go over plain HTTPS: both my torrent client and jackett expose public APIs for these purposes, so I can just enter the web addresses of those endpoints into the apps running on my homelab.

    Sidenote, since you mentioned an sshfs mount: I tried sshfs, but it gave significantly lower copy speeds than the rclone mount. That might have been a misconfiguration on my end, but it was more time-efficient to use rclone than to debug my sshfs connection speed.
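
    In case a concrete example of the rclone side helps, here's a minimal sketch of such a setup; the remote name, host, user and paths are placeholders, not my actual values:

    # ~/.config/rclone/rclone.conf: an SFTP remote that authenticates over SSH
    [homelab]
    type = sftp
    host = homelab.example.com
    user = labuser
    key_file = ~/.ssh/id_ed25519

    # mount it; VFS write caching helped my copy speeds considerably
    rclone mount homelab:/mnt/storage /mnt/remote --vfs-cache-mode writes --daemon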


  • Allow me to cross-post my recent post about my own infrastructure, which has pretty much exactly this setup established: lemmy.dbzer0.com/post/13552101.

    At the homelab (A in your case), I have tailscale running on the host and caddy in docker exposing port 8443 (though the exact port doesn't matter). The external VPS (B in your case) runs docker-less caddy and tailscale (it probably also works with caddy in docker if you run it in network: host mode). Caddy on the VPS takes in all web requests to my domain and reverse_proxies them to the tailscale hostname of my homelab on port 8443. It does so with a wildcard entry (*.mydomain.com) and forwards everything; that way it also handles the wildcard TLS certificate for the domain. The caddy instance on the homelab then checks for specific subdomains or paths and reverse_proxies each request again to the targeted docker container.
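
    To make that concrete, here's a rough sketch of the two Caddyfiles (hostnames, the port and the app container are placeholders, not my actual config):

    # Caddyfile on the external VPS (B): terminate TLS, forward everything
    *.mydomain.com {
    	# plus a tls/dns block for the wildcard certificate, as shown further down
    	reverse_proxy homelab:8443
    }

    # Caddyfile on the homelab (A): listen on plain HTTP, route by subdomain
    http://app.mydomain.com:8443 {
    	reverse_proxy appcontainer:3000
    }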

    The original source IP is available to your local docker containers by making use of the X-Forwarded-For header, which caddy handles beautifully. Simply add this block at the top of your Caddyfile on server A:

    {
            servers {
                    trusted_proxies static 192.168.144.1/24 100.111.166.92
            }
    }
    

    replacing the first entry with the gateway of the docker network, and the second with the “virtual” tailnet IP of the machine the forwarded requests actually come from; that's the proxy VPS (B), not the homelab itself. Your containers, if they're written properly, should automatically pick up the header and log the real source IP.
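
    To sanity-check that this works, you can add a throwaway site block on server A (the subdomain is a placeholder) and let caddy echo back the resolved address; the {client_ip} placeholder respects trusted_proxies as of caddy 2.7:

    http://whoami.mydomain.com:8443 {
    	respond "real client ip: {client_ip}"
    }

    Requesting that subdomain through the proxy VPS should then print your actual public IP rather than a docker or tailnet address.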

    Let me know if you have any further questions.





  • Absolutely! To be honest, I don't even want to have countless machines under my umbrella, and constantly have consolidation in mind - but right now, each machine fulfills a separate purpose and feels justified in itself (the homelab for large data, the main VPS for anything that's operation-critical and can't afford power/network outages, and so on). So unless I find another purpose that none of the current machines can serve, I'll probably scale vertically instead of horizontally (is that even how you use that expression?)


  • The crowdsec agent running on my homelab (8 cores, 16GB RAM) is currently sitting idle at 96.86MiB of RAM and between 0.4% and 1.5% CPU usage. I have a separate crowdsec agent running on the main VPS, which is a 2 vCPU / 4GB RAM machine. There, it's using 1.3% CPU and around 2.5% RAM. All in all, very manageable.
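
    Those numbers come straight from docker, by the way; you can check your own with the container name from the compose file below:

    docker stats crowdsec --no-stream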

    There is definitely a learning curve to it. When I first dove into the docs, I was overwhelmed by all the new terminology, and wrapping my head around it was not super straightforward. Now that I’ve had some time with it though, it’s become more and more clear. I’ve even written my own simple parsers for apps that aren’t on the hub!

    What I find especially helpful are features like explain, which lets me pass in logs and simulate which step of the pipeline picks them up and how they get processed. That's great when trying to diagnose why something is or isn't happening.
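
    For example (the file path and log type are placeholders for whatever you're parsing):

    # replay a single log line through the parsing pipeline
    cscli explain --log "192.0.2.10 - - [13/May/2024:10:00:00 +0000] ..." --type caddy

    # or replay an entire log file
    cscli explain --file /var/log/caddy/access.log --type caddy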

    The crowdsec agent on my homelab runs from the docker container and uses pretty much exactly the stock configuration. This is how the container is launched:

      crowdsec:
        image: crowdsecurity/crowdsec
        container_name: crowdsec
        restart: always
        networks:
          socket-proxy:
        ports:
          - "8080:8080"   # the LAPI, so remote bouncers can query decisions
        environment:
          DOCKER_HOST: tcp://socketproxy:2375   # container logs come in via the socket proxy
          COLLECTIONS: "schiz0phr3ne/radarr schiz0phr3ne/sonarr"
          BOUNCER_KEY_caddy: as8d0h109das9d0    # registers a bouncer named "caddy" with this key
          USE_WAL: "true"                       # quoted, since compose env values must be strings
        volumes:
          - /mnt/user/appdata/crowdsec/db:/var/lib/crowdsec/data
          - /mnt/user/appdata/crowdsec/acquis:/etc/crowdsec/acquis.d
          - /mnt/user/appdata/crowdsec/config:/etc/crowdsec
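
    The DOCKER_HOST entry assumes a docker socket proxy running as a container named socketproxy on the same socket-proxy network. In case that part is unclear, here's a hedged sketch of such a companion service, using tecnativa/docker-socket-proxy as an example:

      socketproxy:
        image: tecnativa/docker-socket-proxy
        container_name: socketproxy
        restart: always
        networks:
          socket-proxy:
        environment:
          CONTAINERS: 1   # read-only access to the container endpoints is enough here
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro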
    

    Then there’s the Caddyfile on the LabProxy, which is where I handle banned IPs so that their traffic doesn’t even hit my homelab. This is the file:

    {
    	crowdsec {
    		api_url http://homelab:8080
    		api_key as8d0h109das9d0
    		ticker_interval 10s
    	}
    }
    
    *.mydomain.com {
    	tls {
    		dns cloudflare skPTIe-qA_9H2_QnpFYaashud0as8d012qdißRwCq
    	}
    	encode gzip
    	route {
    		crowdsec
    		reverse_proxy homelab:8443
    	}
    }
    

    Keep in mind that the two machines are connected via tailscale, which is why I can point the bouncer at the crowdsec agent using its tailnet hostname. If the two machines weren't connected like that, you'd need to expose the agent's REST API (the LAPI on port 8080) over the web.
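
    One more detail: the api_key in that Caddyfile has to match a bouncer key known to the agent. With the docker image above, the BOUNCER_KEY_caddy environment variable registers a bouncer named caddy automatically; you can verify that from the agent side:

    # list the bouncers the agent knows about
    docker exec crowdsec cscli bouncers list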

    I hope this helps clear up some of your confusion! Let me know if you need any further help with understanding it. It only gets easier the more you interact with it!

    (Don't worry, all credentials in the two files are randomized; they're never the actual tokens.)




  • In addition to the other commenter and their great points, here are some more things I like:

    • resource efficient: I'm running all my stuff on low-end servers, and can't afford my reverse proxy wasting gigabytes of RAM (looking at you, NPM)
    • very easy syntax: the Caddyfile uses simple, easy-to-remember directives, and the documentation is precise and quickly tells me what to do to achieve something. I tried traefik and couldn't handle the long, complicated tag names required to set anything up.
    • plugin ecosystem: caddy is written in go and very easy to extend. There are tons of plugins for different functionalities that are (mostly) well documented and easy to use. Building a custom caddy executable takes one command, as shown below.
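
    For example, with xcaddy (module paths from memory, so double-check them against each plugin's README):

    # build a caddy binary with the crowdsec bouncer and the cloudflare DNS plugin
    xcaddy build \
    	--with github.com/hslatman/caddy-crowdsec-bouncer/http \
    	--with github.com/caddy-dns/cloudflare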


  • If you're referring to the “LabProxy VPS”: so that I don't have to point a public domain that I (plan to) use more and more in online spaces at my personal IP address, allowing anyone and everyone to pinpoint my location. Also, I really don't want to mess with the intricacies of DynDNS. This solution is safer and more reliable than DynDNS plus open ports on a home router that's not at all equipped to fend off attacks from the open internet.

    If you're referring to the caddy reverse proxy on the LabProxy VPS: I point the domains that I want to funnel into my homelab at the external IP of the proxy VPS. The caddy server on that VPS reads these requests and reverse-proxies them onto the caddy port of the homelab, using the homelab's hostname inside my tailscale network. That's how I make use of the tunnel. It also lets me push crowdsec ban decisions from the homelab to the proxy VPS, which then denies all incoming requests from those source IPs before they ever hit my homelab. Clean and safe!