𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍

       🅸 🅰🅼 🆃🅷🅴 🅻🅰🆆. 
 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍 𝖋𝖊𝖆𝖙𝖍𝖊𝖗𝖘𝖙𝖔𝖓𝖊𝖍𝖆𝖚𝖌𝖍 
  • 1 Post
  • 18 Comments
Joined 2 years ago
Cake day: August 26th, 2022

  • I started with rootless podman when I set up All My Things, and I have never had an issue with either maintaining or running it. Most Docker instructions are transposable, except that podman doesn’t assume everything lives on Docker Hub, so you always have to specify the registry host. I’ve run into a couple of edge cases where arguments are not 1:1 and I’ve had to dig to figure out what the equivalent argument is in podman. I don’t know if I’m actually more secure, but I feel more secure, and I really like not having the docker service running as root in the background. All in all, I think my experience with rootless podman has been better than my experience with docker, but at this point, I’ve had far more experience with podman.
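
    The registry thing is the main transposition you hit in practice; a quick illustration (the image name is just an example):

    # docker resolves short names against Docker Hub automatically
    docker pull nginx
    # podman wants the registry spelled out (unless you configure unqualified-search registries)
    podman pull docker.io/library/nginx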

    Podman-compose gives me indigestion, but docker-compose didn’t exist or wasn’t yet common back when I used docker; and by the time I was setting up a homelab, I’d already settled on podman. So I just don’t use it most of the time, and wire things up by hand when necessary. Again, I don’t know whether that’s just me, or whether podman-compose is flakier than docker-compose. Podman-compose is certainly much younger and less battle-tested. So is podman but, as I said, I’ve been happy with it.
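
    “Wiring things up by hand” mostly amounts to putting related containers in one pod; a minimal sketch (names and ports are made up):

    # a pod gives its containers a shared network namespace, much like a compose project
    podman pod create --name myapp -p 8080:80
    podman run -d --pod myapp --name web docker.io/library/nginx
    podman run -d --pod myapp --name cache docker.io/library/redis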

    I really like running containers as separate users without that daemon - I can’t even remember what about the daemon was causing me grief; I think it may have been the fact that it was always running and consuming resources, even when I wasn’t running a container, though that’s less of a consideration for an always-on homelab. However, I’d rather deeply know one tool than kind of know two that do the same thing, and since I run containers in several different situations, using podman everywhere gives me a familiarity I wouldn’t have if I were using docker in some places and podman in others.
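
    In case “separate users, no daemon” sounds abstract, here’s roughly what it looks like (a sketch; the user and image names are my assumptions):

    # each service gets its own unprivileged account; nothing runs as root
    sudo useradd --create-home svc-web
    sudo loginctl enable-linger svc-web   # let its containers keep running without a login session
    sudo -iu svc-web podman run -d --name web -p 8080:80 docker.io/library/nginx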


  • I have no opinion about rsync.net. I’d check which services restic supports; there are several, and if it supports rsync.net and that’s what you want to use, you’re golden. Or, use another backup tool that has encryption-by-default and does support rsync.net - there are a couple of options.
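
    For what it’s worth, rsync.net access is plain SFTP, which restic speaks natively; a sketch (host and path are placeholders):

    # initialize an encrypted repository over SFTP, then back up into it
    restic -r sftp:user@host.rsync.net:backups init
    restic -r sftp:user@host.rsync.net:backups backup ~/documents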

    I would just never store any data that wasn’t meant for public consumption unencrypted on someone else’s servers. I make an exception for my VPS, but that’s only because I’m more paranoid about exposing my LAN than putting my email on a VPS.

    restic, and other backup tools, are generally not always on. You run them; they back up. If you run them only once a month, that’s how often they run. The remote mounting is just a nice feature when you want to grab a single file from one of the backups.
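
    Scheduling is up to you - cron is the usual answer; a sketch (paths are made up):

    # run a backup at 03:00 on the first of every month
    0 3 1 * * restic -r /mnt/backup/restic --password-file /root/.restic-pass backup /home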

    What you’re describing is a classic backup use-case. I’m recommending the easiest, cheapest, most reliable offsite solution I’ve used. restic has been around for years, has a lot of users and a lot of eyeballs on it, and it’s OSS. There are even GUIs for it, if you’re not comfortable with the CLI. B2 is generally well-regarded, is fairly easy to figure out, and has also been around for ages. Together, they make a solid combo. I also back up with restic to a local disk and use that for accessing history - B2 is just, as you say, in case of a fire, or theft, I suppose.
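
    Pointing restic at B2 looks roughly like this (the bucket name and keys are placeholders, not gospel):

    # restic’s built-in B2 backend; the keys come from the Backblaze console
    export B2_ACCOUNT_ID="<key-id>"            # placeholder
    export B2_ACCOUNT_KEY="<application-key>"  # placeholder
    restic -r b2:my-backup-bucket:myhost init
    restic -r b2:my-backup-bucket:myhost backup /home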




  • They can’t, tho. There are two reasons for this.

    Geolocating with cell towers requires trilateration, and needs special hardware on the cell towers. Companies used to install this hardware for emergency services, but stopped doing so as soon as they legally could, as it’s very expensive. Cell towers can’t do triangulation by themselves, as it requires even more expensive hardware to measure angles; and trilateration doesn’t work without special equipment, because wave propagation delays between the cellular antenna and the computers recording the signal are big enough to utterly throw off any estimate.
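
    To put a number on that (my own back-of-the-envelope arithmetic, not a spec): radio travels at about 3×10^8 m/s, so a timing error of a single microsecond corresponds to 3×10^8 × 10^-6 = 300 meters of range error per tower - and the position estimate is only as good as those ranges.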

    An additional factor making trilateration impractical (or even triangulation, in the rural cases where they did sometimes install triangulation antenna arrays on the towers) is that, since the UMTS standard, cell chips work really hard to minimize their radio signal strength. They find the closest antenna and then reduce their power until they can just barely talk to the tower; and except in certain cases they only talk to one tower at a time. This means that, at any given point, only one tower is responsible for handling traffic for the phone, and for triangulation you need at least three. In addition to saving battery power, this saves the cell companies money, because of traffic congestion: a single tower can only handle so much traffic, and they have to put in more antennas and computers if the mobile density gets too high.

    The reason phones can use cellular signals to improve accuracy is that each phone can do its own triangulation, although it’s still not great and can be impossible because of power attenuation (being able to see only one tower - or maybe two - at a time). This is why Google and Apple use WiFi signals to improve accuracy, and why in-phone triangulation isn’t good enough: in any sufficiently dense urban or suburban environment, the combined information from all the WiFi routers the phone can see, and the cell towers it can hear, can be enough to give a good, accurate position without having to turn on the GPS chip, obtain a satellite fix (which may be impossible indoors), and suck down power. But this is all done inside and from the phone - it isn’t something cell carriers can do themselves most of the time. Your phone has to send its location out somewhere.

    TL;DR: Cell carriers usually can’t locate you with any real accuracy without the help of your phone actively reporting its calculated location. This is largely because it’s very expensive for carriers to install the hardware needed to get accuracy better than hundreds of meters; they are loath to spend that money, and the legislation requiring them to do so no longer exists, or is no longer enforced.

    Source: me. I worked for several years at a company that made all of the expensive equipment - hardware and software - and sold it to The Big Three carriers in the US. We also paid lobbyists to ensure that there were laws requiring cell providers to be able to locate phones for emergency services. We sent a bunch of our people and equipment to NYC on 9/11 and helped locate phones. I have no doubt law enforcement also used the capability, but that was between the cops and the cell providers. I know companies stopped doing this because we owned all of the patents on the technology and ruthlessly and successfully sued the only one or two competitors in the market, and yet we were still going out of business at the end as, one by one, cell companies found ways to argue their way out of buying, installing, and maintaining all of this equipment. In the end, the competitors we couldn’t beat were Google and Apple, and the cell phones themselves.



  • This is good information. I had a complete failure with flashing Tasmota once, and bricked a $100 device.

    I like the project, though. My biggest complaint is that - at least for what I was trying to flash - the Linux support was iffy. I was trying to flash something for HA, and the instructions assumed I either had access to the computer running HA (which is a headless device in a closet in the basement - entirely impractical for doing fiddly pinning while trying to flash) or was using a web browser with WebUSB - which Firefox on Linux doesn’t support. So eventually I found a completely unrelated set of instructions I could run from the CLI on my desktop, over a cable connected to that desktop, and while the flash appeared successful, the device is bricked. I can’t even get it into flash mode anymore.
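
    For anyone else headed down the CLI route: the usual terminal tool for ESP-based devices is esptool; something along these lines (port and firmware filename are assumptions - and, obviously, no promises it won’t brick yours too):

    # put the device into flash mode first (usually: hold the boot button while powering on)
    pip install esptool
    esptool.py --port /dev/ttyUSB0 erase_flash
    esptool.py --port /dev/ttyUSB0 write_flash 0x0 tasmota.bin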

    I don’t think any of this has to do with Tasmota, except that the Linux tooling seems either weak, or assumes people are running Chrome; and if you’re security conscious enough to be flashing a device to run Tasmota, you’re not running Chrome.

    So I’m not doing that again. It’s a hundred bucks and two days of digging around for tooling and instructions I’d like back.

    Again, not Tasmota’s fault, but it’s not super accessible.



  • I once owned a bunch of WiFi connected devices. One day I inspected my router logs and found out that they were all making calls to a bunch of services that weren’t the vendor - things like Google, and Facebook.
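
    If you want to check your own devices, watching DNS from the router is the quick way to see who they call home to; a sketch (the interface name is an assumption):

    # log every DNS lookup leaving the LAN
    sudo tcpdump -i br-lan -n udp port 53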

    WiFi connected devices require connecting to a router; in most homes, this is going to be one that’s also connected to the internet - most people aren’t going to buy a second router just for their smart home, or set up a disconnected second LAN on their one router. And nearly all of these devices come with an app, which talks to the device through an external service (I’m looking at you, Honeywell, and you, Rainbird). This is a privacy shit-show. WiFi is a terrible option for smart home devices.

    ZigBee, well, I haven’t had any luck with it - pairing problems which are certainly just a learning curve on my part and not an issue with the protocol. I chose ZWave myself because I read about the size and range limitations of ZigBee versus ZWave, but honestly I could have gone either way. Back then, there was no appreciable price difference in devices. Most hubs support both, though, and I can’t see why I wouldn’t mix them (other than I need to figure out how to get ZigBee to work).

    In any case, low-power BT, ZigBee, or Zwave are all options, whereas I will not allow more WiFi smart devices in my house. I’m stuck with Honeywell and Rainbird, for… reasons… but that’s it. I don’t need to be poking more holes in my LAN security.


  • Do. The ErgoDox (also from ZSA) comes pretty close, and it’s programmable with their web app. All you need to do is reprogram it to swap the “Y” key, and pop off the key caps and swap those.

    Or, do you mean keys in the same order, but only the “Y” key is moved to the other half? Like, next to the “T”? If so, you’re in luck, in a way, because the ErgoDox(en) come with an extra column of keys on the inside; you could program the big key next to the “T” to be “Y”. Then do whatever you want with the spare “Y” key. I think ZSA sends you a couple of extra key caps with the keyboard, so if you really wanted to, you could swap the “Y” out with a blank.

    You can choose your switches when you order, IIRC. Mine was no buckling spring, but it clicked just fine.



  • Thing is, outsourcing never stopped. It’s still going strong, sending jobs to whichever country is cheapest.

    India is losing out to Indonesia, to Mexico, and to South American countries.

    It’s a really stupid race to the bottom, and you always get what you pay for. Want a good development team in Bengaluru? It might be cheaper than in the US, but not that much cheaper. Want good developers in Mexico? You can get them, but they’re not the cheapest. And when a company outsources like this, it has already admitted it’s willing to sacrifice quality for cost savings, and you - as a manager - won’t be getting those good, more expensive developers. You’ll be getting whoever is cheapest.

    It is among the most stupid business practices I’ve had to fight with in my long career, and one of the things I hate the most.

    Developers are not cogs. You can’t swap them out as if they were, and any executive who thinks you can is a fool and an incompetent idiot.




  • It’s true some things are harder to do in the container configuration; it’s easier installed as an OS, especially for integrations like Z-Wave, ZigBee, RTSP, Eufy, ESP, and so on. All of these require running other software, and in containers that means a fair bit of fussing with ports and host-OS device pass-through.

    I’ve always run it in a container, without issue. It works fine, but I’m comfortable with the command line and LXC. That said, flashing an ESP hardware device and getting it connected to HA running in a container has so far defeated me: I have to grant access to the device in the container’s configuration before I run it, but the flashing process itself is time-limited and expects a process to be waiting for the device when it’s connected. It’s a chicken-and-egg problem I haven’t yet solved, and one that wouldn’t exist if I were running the HA OS.
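
    For ordinary (non-flashing) hardware, though, the pass-through itself is simple enough; a sketch with made-up paths:

    # hand a USB radio stick to the container at creation time
    podman run -d --name homeassistant \
      --device /dev/ttyUSB0 \
      -v /opt/ha/config:/config \
      --net host \
      ghcr.io/home-assistant/home-assistant:latest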

    HA isn’t the only software that just works better when it controls the whole OS. Kodi is another that heavily encourages users to run it as an OS.

    Regardless, it runs fine via

    podman pull ghcr.io/home-assistant/home-assistant:latest
    

    and there’s a package in the AUR that wraps the container up with a systemd service - it’s as close to a bare package install as you’re likely to get.
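
    You can get most of the way there by hand, too; a sketch of the same idea (not necessarily what the AUR package actually does):

    # have podman emit a systemd unit for the existing container, then enable it
    podman generate systemd --new --name homeassistant > ~/.config/systemd/user/homeassistant.service
    systemctl --user daemon-reload
    systemctl --user enable --now homeassistant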

    What’s a little funny to me is that, even though I’ve been running HA in a container for the past 4 years, I’m working towards getting a dedicated device and running HA OS on it. If we ever move out of this house, I’m not going to spend weeks replacing all of the hardware - smart sockets, lights, garage door opener, security, etc. etc. - with dumb devices; and for any of that to be worth anything to a buyer, it’s going to need a controller configured for it. So I’m planning on selling the HA server with the house, and for that case, I don’t want anything but HA running on that device - it’d just be easier and smoother to run HAOS.

    My advice is to run HA in a container until you are sure that’s the direction you want to go, but not for so long that it’s going to be a PITA to migrate to a dedicated server. But - hey, just IMHO - plan on running HAOS. If I knew then what I know now, that’s what I would have done.