  • Make or find yourself a cart to drag around (g or G to drag it). If it doesn’t have wheels it’ll be quite loud. Sound = attraction = death in most cases.

    Don’t bother with cars for a long while, even if you find one that actually runs. They take a lot to maintain and make a lot of noise (see above). You’re better off starting with a bike for midrange transportation (or, if using mods, a foldable bike).

    When you start building or find a nice base area, make a crafting nook and drop all your items next to it. When crafting you can pull ingredients from anything within 1-2 adjacent tiles.



  • You seem to be misinformed on how the internet works. Nothing is “free”. ISPs have to buy equipment, pay for expensive physical connectivity (without disturbing existing infrastructure), and usually have to deal with constant, ever-increasing bandwidth requirements.

    I’m all for a bit of net neutrality, but ISPs tend to get a lot of flak for policies like this for seemingly no reason. For example, say ISP A and Upstream B have a mutual bandwidth-sharing arrangement (called peering) where both sides benefit equally from the connectivity. ISP A determines that content provider N is using all the bandwidth to Upstream B. ISP A has three options: let N consume all the bandwidth to Upstream B (disturbing other traffic to/from that network), throttle N so all traffic is handled equally, or expand the network with Upstream B again (new equipment, new physical links), which costs a lot of money. N doesn’t even pay ISP A or Upstream B; they just pay their own ISP C.

    In the end, ISP A has to throttle N, and N is the one who has to expand and change their business model to deliver content to their customers: buying transit from many upstream providers to even out the load, and designing caching boxes to install inside each ISP’s datacenter so their traffic can reach end users without going upstream at all.




  • For the disks, you may have a small issue mixing multiple models of disk in a single RAID10, as those disks might have slightly different physical attributes. ZFS is an option here: you can create one mirror vdev per drive type and add both vdevs to the same zpool, which effectively gives you the RAID10 you’re looking for. You would typically not run LVM on top of ZFS, but if you go with a conventional RAID10 instead, LVM on top would let you create logical volumes that can be expanded easily at a later time.
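
    As a rough sketch, the two-mirror layout is a single zpool create (the pool name and device paths here are placeholders, not your actual disks):

      # one mirror vdev per drive type; ZFS stripes writes across
      # the two mirrors, which behaves like RAID10
      zpool create tank \
          mirror /dev/sda /dev/sdb \
          mirror /dev/sdc /dev/sdd

      # confirm the layout
      zpool status tank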

    Another ZFS option is RAIDZ1 with the 4 disks in a single vdev. The vdev will spend one disk’s worth of capacity, spread across all the disks, on parity, leaving you 12TB of usable storage out of your 16TB raw. This allows you to lose any one drive with no data loss.
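
    If you go the RAIDZ1 route instead, it’s one vdev spanning all four disks (again, pool name and devices are placeholders):

      # single raidz1 vdev: one disk's worth of parity, any one disk can fail
      zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd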


  • Since we don’t know what server or VM tech you’re using, the advice will be pretty generic. For self hosting, you can likely get away with your iSCSI traffic sharing the LAN interface with your usual VM traffic, but if you need high throughput you will want iSCSI-optimized NICs and jumbo frames turned on (an MTU of 9000 is the standard here). This requires a switch that supports jumbo frames as well.
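
    On a Linux host, for example, the MTU change is one line per interface, and a non-fragmenting ping is a quick end-to-end test (interface name and target IP are placeholders; persist the MTU in your distro’s network config):

      # jumbo frames on the iSCSI-facing interface
      ip link set dev eth0 mtu 9000

      # 8972-byte payload + 28 bytes of ICMP/IP headers = 9000;
      # -M do forbids fragmentation, so it fails if any hop lacks jumbo support
      ping -M do -s 8972 192.168.1.10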

    For Windows, I find the iSCSI support to be very lacking. Every time I have used it I have had sporadic loss of connectivity, failure to mount on boot, and other issues. I would avoid it.

    For ESXi you can map an iSCSI LUN as a datastore and create VMDKs on top. This functions the same as with actual FC LUNs or NFS mounts, and I have had no issues with reliability. There’s also RDM (raw device mapping), which mounts the iSCSI LUN directly as a disk of the VM. If you’re using vSphere I would advise against this, as you lose the ability to vMotion or use DRS.
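
    A sketch of the datastore route from the ESXi shell (adapter name and portal address are placeholders; the vSphere client does the same thing graphically):

      # enable the software iSCSI initiator
      esxcli iscsi software set --enabled=true

      # point dynamic discovery at the target portal
      esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.1.10:3260

      # rescan so the discovered LUN shows up for datastore creation
      esxcli storage core adapter rescan --adapter=vmhba64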


  • I believe ZFS works best when it has direct access to the disks, so putting an md device underneath it is not best practice. I’m not sure how well ZFS handles external disks either, but that is something to consider. As for the drive sizes and redundancy, each size should get its own vdev: a mirror vdev of the 2x6TB and a mirror vdev of the 2x12TB gives you maximum redundancy against drive failure, totaling 18TB usable in your pool. Later on, if you need more space, you can create new vdevs and add them to the pool, as sketched below.
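
    Growing the pool later is one command per new vdev (pool name and device paths are placeholders; by-id paths are safer than /dev/sdX for external disks):

      # add another mirror vdev; ZFS starts striping across all vdevs
      zpool add tank mirror /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2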

    If you’re not worried about redundancy, you could bypass ZFS and just set up a RAID-0 through mdadm, or add the disks to an LVM VG, to use all the capacity. But remember that you might lose the whole volume if any one disk dies, and that includes accidentally unplugging an external disk.
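
    For the LVM route, a minimal sketch (VG/LV names and device paths are placeholders):

      # pool every disk into one volume group, then span it with one LV
      pvcreate /dev/sda /dev/sdb /dev/sdc /dev/sdd
      vgcreate data /dev/sda /dev/sdb /dev/sdc /dev/sdd
      lvcreate -l 100%FREE -n bulk data
      mkfs.ext4 /dev/data/bulk

    or the mdadm RAID-0 equivalent:

      mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sd[abcd]
      mkfs.ext4 /dev/md0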



  • Since you’ve probably been using the SMB protocol to access the NAS, there are a few things to understand about the NFS protocol, which functions differently. An NFS mount acts like a mapping for the entire system rather than for a specific user, so if there are differences between the two systems you may get access errors. For example, the default user on a Synology has a uid of 1024, but on most client systems the first user gets 1000. This means your user may not have access to the share or its files even once you have it mounted on the client.
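
    A quick way to see the mismatch from the client side (hostname and share path are placeholders):

      # what uid does my client user have? (often 1000)
      id -u youruser

      # mount the export and list ownership numerically:
      # files owned by uid 1024 won't match a uid-1000 user
      sudo mount -t nfs synology:/volume1/share /mnt/share
      ls -ln /mnt/share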

    One thing to check is what your shared folder’s NFS squash setting is. This is found in Control Panel > Shared Folder, under the NFS Permissions tab. If it’s set to “No mapping” then uids must match. The easiest setup is “Map all users to admin”, but you may encounter issues with that later if you switch back to SMB, since new files will be owned by admin.
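
    Those squash options correspond to standard NFS export flags. On a plain Linux NFS server the equivalents would look roughly like this (the path, network, and admin uid/gid are assumptions based on typical Synology defaults):

      # "No mapping": client uids pass through unchanged
      /volume1/share 192.168.1.0/24(rw,no_subtree_check)

      # "Map all users to admin": everyone is squashed to one local account
      /volume1/share 192.168.1.0/24(rw,no_subtree_check,all_squash,anonuid=1024,anongid=100)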