IMO, this is a discussion that should be taking place on the project’s GitHub. I’m going to lock the comments so I don’t get any more reports about commenters’ behavior.
With the disclaimer that Proxmox has nothing to do with this question, I’m forced to assume this is just a networking issue that happens to use OPNsense as the router. Because of that, I must advise that you seek help from a networking-focused community. There’s no clear link to self-hosting in this post, which is required per Rule 3.
If the connections are already tagged as they come into the Proxmox server, then you only need to create bridges for them in Proxmox (vmbr1, vmbr2, etc.). EDIT: if you’re doing PCI passthrough of the physical NICs, ignore this step.
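If you’d rather script that step than click through the GUI, here’s a rough sketch using the proxmoxer Python client against the Proxmox API. The node name, NIC name, VLAN tags, and credentials are all placeholders for your own values.

```python
# Rough sketch, not a drop-in script: creates one Linux bridge per incoming
# VLAN via the Proxmox API using proxmoxer (pip install proxmoxer).
# "pve", "enp1s0", the VLAN tags, and the credentials are all placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI("pve.example.lan", user="root@pam",
                     password="changeme", verify_ssl=False)

node = "pve"
vlans = {10: "vmbr1", 20: "vmbr2"}  # VLAN tag -> bridge name (examples)

for tag, bridge in vlans.items():
    proxmox.nodes(node).network.post(
        iface=bridge,
        type="bridge",
        bridge_ports=f"enp1s0.{tag}",  # VLAN subinterface of the physical NIC
        autostart=1,
    )

# Apply the pending network changes from the Proxmox GUI (or from the node
# shell) before attaching the new bridges to the OPNsense VM.
```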
Then, in OPNsense, you just add the individual interfaces. There’s no need to assign a VLAN inside OPNsense because the traffic is already tagged on the network (per your earlier statement).
Whether or not the managed switch that tags each port is also providing VLAN isolation, you can simply let the OPNsense firewall handle the isolation, which it does by default. You’ll then add rules to give each network access to the fiber WAN gateway.
You’ll need to be far more descriptive than “I can’t get it to work.” I can almost guarantee you that Fedora is not the problem.
I’m a little lost on how a container would mess with your boot loader (GRUB). That aside, most of what you’re describing has to do with the containers themselves, and those are OS-agnostic. What do the container logs tell you?
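If they happen to be Docker containers, the quickest check is `docker logs --tail 100 <name>`. Here’s the same thing via the Docker SDK for Python, with "myapp" standing in for your container’s name:

```python
# Minimal sketch using the official Docker SDK (pip install docker).
# Assumes the containers run under Docker; "myapp" is a placeholder name.
import docker

client = docker.from_env()                  # talks to the local Docker daemon
container = client.containers.get("myapp")  # look the container up by name
print(container.logs(tail=100).decode())    # last 100 log lines
```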
This is really more of a home networking issue than anything having to do with self-hosting, especially since it centers on a consumer router. Please consider posting this in one of the many Lemmy home networking communities.
I’m going to allow this post, despite its age and likely obsolescence. I encourage community members to use up and down votes to judge its value to the community.
If you really want to serve the self-hosting community, please improve your documentation. As someone unfamiliar with this product, I have no idea what to do with this once I clone the repo. I hunted and found a compose.yaml file, but it’s not clear if this is all I need.
Except when the ONLY pi-hole is down, which was the OP’s whole question.
Yes, your experience will be different if your DNS is being provided by another kind of DNS resolver. If you want a consistent pi-hole experience (and you can’t avoid downtime of your current pi-hole), add another pi-hole to your network and let that be your secondary DNS resolver.
Add another DNS server (1.1.1.1, for instance) to your DHCP options. Your DHCP clients will use 1.1.1.1 when the pi-hole isn’t responsive.
Way to ruin the day of all the Apple-haters.
On the contrary, he shot down the legislation that would end net metering, which is critical for me to be productive with the excess power I generate.
Optimus Energy in Mt. Dora. Went with them because they had the best overall ‘bang for the buck’ AND their core values center on green energy. They aren’t electricians or roofers trying to jump on the solar train.
No additional stress to the roof. You do have to remove and then reinstall the panels if you get the roof redone. The cost is approx. $100/panel, so with 42 panels that’s an extra $4,200 for a roof job. But that’s the only real consideration.
The monthly payment on my 25-year, 7% loan for my solar installation is less than the average power bill. My solar system generates more than I need. Assuming rates never go down, I’m in good shape.
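For anyone wanting to sanity-check that kind of claim against their own numbers, the standard amortization formula is easy to run. The $30,000 principal below is a made-up example, not my actual loan amount:

```python
# Standard fixed-rate amortization: payment = P * r / (1 - (1 + r) ** -n).
# The $30,000 principal is a made-up example; substitute your own loan amount.
principal = 30_000        # hypothetical loan amount in dollars
annual_rate = 0.07        # 7% APR
years = 25

r = annual_rate / 12      # monthly interest rate
n = years * 12            # total number of monthly payments
payment = principal * r / (1 - (1 + r) ** -n)
print(f"${payment:,.2f}/month")  # roughly $212/month for these example inputs
```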
Add “-vvv” to your mount command and see what else it tells you.
Based on the vaultwarden wiki, the default DB engine is SQLite. Therefore, all the data is in the sqlite file(s) contained in your data volume. This backup utility seems to take that into account and only focuses on the data volume.
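If you’d rather roll your own instead of relying on that utility, SQLite can be copied consistently while vaultwarden is running by using its online backup API. The paths below are just examples of where the data volume and backup directory might live:

```python
# Minimal sketch of a consistent online backup of vaultwarden's SQLite DB
# using Python's built-in sqlite3 backup API (Python 3.7+).
# Both paths are examples; point them at your actual data volume and backup dir.
import sqlite3

src = sqlite3.connect("/srv/vaultwarden/data/db.sqlite3")
dst = sqlite3.connect("/backups/vaultwarden-db.sqlite3")
with dst:
    src.backup(dst)   # copies the database safely, even if it's being written to
src.close()
dst.close()
```

The rest of the data volume (attachments, RSA keys, config) is plain files, so a normal file copy covers those.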
A Home Assistant integration could accomplish this for you. Not sure if it’s less work than regular mobile clients, though.