• 0 Posts
  • 23 Comments
Joined 2 years ago
Cake day: August 2nd, 2023


  • The RNG mechanics are definitely frustrating for some, but the game is way deeper than it first looks. Getting to 46 rolls the credits, but you’re left with so many unanswered questions. Some people stop there and feel satisfied, but others are curious about the world.

    My advice is to push through the initial frustration with the RNG on the drafting side. You’ll eventually find that there are roguelite mechanics to help you along, and it will feel less RNG-dependent.


  • This would depend on whether the limit is defined as ingress, egress, or both. For example, AWS ingress traffic from the internet is free, but egress traffic to the internet is billed.

    A better option would be to find an unmetered service, which means you have a fixed port speed (e.g. 500 Mbit/s) but unlimited data transfer. OVH offers this in their VPS products.
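
    As a rough sanity check on what “unmetered” buys you (assuming a 500 Mbit/s port saturated around the clock, decimal units, purely illustrative):

    ```python
    # Back-of-the-envelope: data a 500 Mbit/s unmetered port could move in a month
    # if it were saturated 24/7 (illustrative numbers only).
    link_mbit_per_s = 500
    seconds_per_month = 30 * 24 * 3600          # ~30-day month

    total_megabits = link_mbit_per_s * seconds_per_month
    total_tb = total_megabits / 8 / 1_000_000   # Mbit -> MB -> TB (decimal units)

    print(f"~{total_tb:.0f} TB of transfer per month")   # ~162 TB
    ```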





  • I bought it personally, but I would hardly call it expensive. The three-year license works out to roughly 67 USD a year for both CRT and FX.

    I love it mainly because it’s multi-platform, but I wish it had more features. They boast about their tight integration with VShell, but it would be much better if they just had better support for OpenSSH, like being able to push SSH keys to a host.
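
    For reference, “pushing” a key with plain OpenSSH really just means appending your public key to the remote authorized_keys file, which is what ssh-copy-id automates. A minimal sketch of the idea, with the hostname and key path as placeholders:

    ```python
    # Rough sketch of pushing an SSH public key to a remote host, essentially
    # what OpenSSH's ssh-copy-id does. Host and key path are placeholders.
    import subprocess
    from pathlib import Path

    pubkey = (Path.home() / ".ssh" / "id_ed25519.pub").read_text().strip()

    # Append the key to the remote authorized_keys, creating ~/.ssh if needed.
    remote_cmd = (
        "mkdir -p ~/.ssh && chmod 700 ~/.ssh && "
        f"printf '%s\\n' '{pubkey}' >> ~/.ssh/authorized_keys && "
        "chmod 600 ~/.ssh/authorized_keys"
    )
    subprocess.run(["ssh", "user@example-host", remote_cmd], check=True)
    ```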




  • Make or find yourself a cart to drag around (g or G to drag it). If it doesn’t have wheels it’ll be quite loud, and sound = attraction = death in most cases.

    Don’t bother with cars for a long while, even one that actually runs. They take a lot to maintain and make a lot of noise (see above). You’re better off starting with a bike for mid-range transportation (or, if using mods, a foldable bike).

    When you start building or find a nice base area, make a crafting nook and drop all your items right next to it. When crafting, you can pull ingredients from adjacent tiles (1-2 tiles away).



  • You seem to be misinformed about how the internet works. Nothing is “free”. ISPs have to buy equipment, pay for expensive physical connectivity (without disturbing existing infrastructure), and usually have to deal with constant, ever-increasing bandwidth requirements.

    I’m all for a bit of net neutrality, but ISPs tend to get a lot of flak for policies like this for seemingly no reason. For example, say ISP A and Upstream B have a mutual bandwidth-sharing arrangement (called peering) where both sides benefit equally from the connectivity. ISP A determines that N is using all the bandwidth to Upstream B. ISP A now has three options: let N have all the bandwidth to Upstream B (disturbing other traffic to/from that network), throttle N so that all traffic gets an equal share, or expand the network together with Upstream B (new equipment, new physical links), which costs a lot of money. N doesn’t even pay ISP A or Upstream B; they just pay their own ISP C.

    In the end, ISP A had to throttle N, and N was the one who had to expand and change their business model to deliver content to their customers: they went out and bought transit from many upstream providers to spread the load, and designed caching boxes to install inside each ISP’s datacenter so their traffic could reach end users without going upstream.




  • For the disks, you may have a small issue mixing multiple types of disks in a single RAID10, as those disks might have slightly different physical attributes. ZFS is an option here: you can create a mirror vdev for each drive type and add both to the same zpool, which effectively gives you the RAID10 you’re looking for. You would typically not use LVM on top of ZFS, but if you go with a conventional RAID10 instead, LVM on top of it would let you create logical volumes that can be expanded easily at a later time.

    Another ZFS option is RAIDZ1 with all 4 disks in a single vdev. The vdev reserves one disk’s worth of space, spread across all the disks, for parity. You will have 12TB of usable storage out of your 16TB of raw storage, and you can lose any one drive with no data loss.
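
    To put rough numbers on the two layouts (assuming four 4TB disks, which is what the 16TB raw figure implies, and ignoring ZFS overhead):

    ```python
    # Usable capacity for four 4 TB disks (assumed sizes), ignoring ZFS overhead:
    # two mirror vdevs ("RAID10") vs a single RAIDZ1 vdev.
    disks_tb = [4, 4, 4, 4]
    raw = sum(disks_tb)                      # 16 TB

    mirrors_usable = raw // 2                # two 2-way mirrors: half of raw
    raidz1_usable = raw - max(disks_tb)      # RAIDZ1: one disk's worth of parity

    print(f"raw capacity: {raw} TB")
    print(f"mirror vdevs: {mirrors_usable} TB")   # 8 TB, one failure per mirror tolerated
    print(f"RAIDZ1:       {raidz1_usable} TB")    # 12 TB, any single failure tolerated
    ```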


  • Since we don’t know what server or VM tech you’re using, the advice will be pretty generic. For self-hosting, you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you will want iSCSI-optimized NICs and jumbo frames turned on (an MTU of 9000 is the standard here). This requires a switch that supports jumbo frames as well.
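
    If you do go the jumbo-frames route, it’s worth verifying that the NIC actually came up with the larger MTU; a quick check on Linux (the interface name is just a placeholder) could look like:

    ```python
    # Quick sanity check on Linux that an interface is actually using jumbo frames.
    # The interface name is a placeholder; point it at your iSCSI NIC.
    from pathlib import Path

    iface = "eth1"
    mtu = int(Path(f"/sys/class/net/{iface}/mtu").read_text())

    if mtu >= 9000:
        print(f"{iface}: MTU {mtu}, jumbo frames enabled")
    else:
        print(f"{iface}: MTU {mtu}, still standard frames; check NIC and switch config")
    ```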

    For Windows, I find the iSCSI support to be very lacking. Every time I have used it I have had sporadic loss of connectivity, failures to mount on boot, and other issues. I would avoid it.

    For ESXi you can map an iSCSI LUN as a datastore and create VMDKs on top. This works the same way as with actual FC LUNs or NFS mounts, and I have had no issues with reliability. There’s also RDM (raw device mapping), which presents the iSCSI LUN directly as a disk of the VM. If you’re using vSphere I would advise against this, as you lose the ability to use vMotion or DRS.


  • I believe ZFS works best when it has direct access to the disks, so putting an md array underneath it is not best practice. I’m not sure how well ZFS handles external disks, but that is something to consider. As for the drive sizes and redundancy, each size should have its own vdev: a mirror vdev of the 2x6TB and a mirror vdev of the 2x12TB gives you maximum redundancy against drive failure, totaling 18TB usable in your pool. Later on, if you need more space, you can create new vdevs and add them to the pool.

    If you’re not worried about redundancy, you could bypass ZFS and just set up a RAID-0 through mdadm, or add the disks to an LVM VG, to use all the capacity; but remember that you can lose the whole volume if a single disk dies, and with external drives that includes accidentally unplugging one.
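
    To put rough numbers on the trade-off between the two approaches (ignoring filesystem overhead):

    ```python
    # Capacity of the two layouts for 2x6 TB + 2x12 TB drives, ignoring overhead:
    # mirrored vdevs in one zpool vs striping everything with mdadm RAID-0 / LVM.
    sizes_tb = [6, 6, 12, 12]

    mirrored_usable = 6 + 12          # mirror(6,6) + mirror(12,12): each pair stores one copy
    striped_usable = sum(sizes_tb)    # RAID-0 / linear LVM: all capacity, no redundancy

    print(f"mirrored vdevs: {mirrored_usable} TB usable, survives one failure per mirror")
    print(f"RAID-0 / LVM:   {striped_usable} TB usable, any single failure loses the volume")
    ```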