I want to build a proper server with room for 40+ HDDs to move my media server to, with everything in RAID 1. I know a lot about PCs and software, but when it comes to server hardware I have no clue what I'm doing. How would I go about building a server that has access to 40+ RAID 1'd HDDs?

  • computergeek125@lemmy.world

    Others have already mentioned the various tech that can help you out with this - Ceph, ZFS, RAID 50/60, RAID is not a backup, etc.
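
    Since you specifically asked about RAID 1 across 40+ drives, it's worth putting rough numbers on mirrors vs. parity before you buy anything. Here's a quick Python sketch - the 20TB drive size and the 10-wide parity groups are just assumptions I picked for illustration, not anything from your post:

    ```python
    # Rough usable-capacity comparison for a hypothetical 40-drive build.
    # The 20TB drive size and 10-wide parity groups are illustrative assumptions.
    DRIVES   = 40
    DRIVE_TB = 20

    def mirrored(drives, drive_tb):
        # RAID 1/10 (or ZFS mirror vdevs): every drive has a twin, so half the raw space is usable.
        return (drives // 2) * drive_tb

    def double_parity(drives, drive_tb, group_width=10):
        # RAID 60 / RAID-Z2: two drives' worth of parity per group.
        groups = drives // group_width
        return groups * (group_width - 2) * drive_tb

    raw = DRIVES * DRIVE_TB
    print(f"raw:               {raw} TB")
    print(f"mirrors (RAID 10): {mirrored(DRIVES, DRIVE_TB)} TB usable")
    print(f"RAID 60/Z2 (4x10): {double_parity(DRIVES, DRIVE_TB)} TB usable")
    # 800 TB raw -> 400 TB mirrored vs 640 TB with double parity,
    # and each 10-wide parity group still survives any two drive failures.
    ```

    Mirrors rebuild faster and are simpler to grow, but that capacity gap is usually why people storing bulk media end up on RAID 6/60 or RAID-Z2 instead of straight RAID 1.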

    40 drives is a massive amount. On my system, I have ~58TB (before filesystem overhead) made up of:

    - a 48TB NAS (5x12TB @ RAID-5)
    - 42TB of USB backup disks for said NAS (RAID is not a backup)
    - a 3-node all-flash vSAN array with 12TB of capacity (3x500GB cache, 6x2TB capacity) at RF2, so ~6TB usable, since each VM is effectively on its own independent RAID-1
    - a standalone host with ~4TB @ RAID-5 (16 disks spread across 2 RAID-5 arrays; I don't have the exact numbers on hand)
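
    If you want to sanity-check where that ~58TB comes from, the usable-space math is simple enough to script. A quick sketch using only the drive counts and sizes listed above (RAID-5 loses one drive to parity, RF2 stores everything twice):

    ```python
    # Usable capacity for the arrays above, in TB, before filesystem overhead.

    def raid5_usable(n_drives, drive_tb):
        # RAID-5 spends one drive's worth of space on parity.
        return (n_drives - 1) * drive_tb

    def rf2_usable(raw_tb):
        # vSAN RF2 keeps two copies of every object, so usable is half of raw.
        return raw_tb / 2

    nas        = raid5_usable(5, 12)   # 48 TB
    vsan       = rf2_usable(6 * 2)     # 6 TB (cache drives don't add capacity)
    standalone = 4                     # quoted as ~4 TB across 2 RAID-5 arrays

    print(nas + vsan + standalone)     # ~58 TB (the backup disks aren't counted here)
    ```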

    That's 5+9+16=30 drives (not counting the USB backups), and the whole rack draws ~950W including the switches, which IIRC account for ~250-300W (I need to upgrade those to non-PoE versions to save some juice). Each server on its own draws 112-185W, as measured at iDRAC. The rack used to pull ~1100W until I swapped some of my older servers for newer, more power-efficient ones - power efficiency has become a design principle of my build-out.
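
    To put that in running-cost terms, a rough sketch - the electricity rate here is a placeholder assumption, plug in your own:

    ```python
    # What ~950W of continuous rack draw costs to run 24/7.
    # The $/kWh rate is a placeholder assumption - use your local rate.
    RACK_WATTS   = 950
    RATE_PER_KWH = 0.15

    kwh_per_year = RACK_WATTS / 1000 * 24 * 365
    print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * RATE_PER_KWH:.0f}/yr")
    # ~8300 kWh/yr, roughly $1250/yr at $0.15/kWh - the power bill adds up fast.
    ```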

    While you can just throw 40-60 drives in a 4U chassis (both Dell and 45Drives/Storinator offer this as a DAS or a server), that thing will be GIGA heavy fully loaded. Make sure you have enough power (my rack has a dedicated circuit) and that you place the rack on a stable floor surface that can handle hundreds of pounds on four wheels (I think I estimated my rack to be in the 300-500lb class).
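
    To get a feel for the numbers, here's a back-of-the-envelope sketch - every per-drive figure and the chassis weight below are rough assumptions for a typical 3.5" HDD and 45-bay top-loader, not specs for any particular product, so check real datasheets before you commit:

    ```python
    # Back-of-the-envelope weight and power for a fully loaded 45-bay 4U top-loader.
    # All figures below are rough assumptions, not datasheet values.
    DRIVES     = 45
    DRIVE_LB   = 1.5    # ~0.7 kg per 3.5" HDD
    CHASSIS_LB = 80     # empty chassis + rails + backplanes (guess)
    SPINUP_W   = 25     # peak draw per drive during spin-up
    IDLE_W     = 7      # per drive once spinning and idle

    print(f"loaded weight: ~{CHASSIS_LB + DRIVES * DRIVE_LB:.0f} lb")
    print(f"spin-up peak : ~{DRIVES * SPINUP_W} W (why staggered spin-up exists)")
    print(f"idle         : ~{DRIVES * IDLE_W} W just keeping the platters spinning")
    # ~150 lb loaded, and over 1 kW if all 45 drives spin up at once -
    # hence the dedicated circuit and the sturdy floor.
    ```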

    You mentioned wanting to watch videos for knowledge - if you want somewhere to start, watch the series Linus Tech Tips did on the many iterations of their Petabyte Project as a case study in what can go wrong when you have that many drives. Then look into the tech you can use to avoid making the same mistakes Linus did. Many very, very good options are discussed in the other comments here, and I've already rambled on far too long.

    Other than that, I wish you the best of luck on your NAS journey, friend. Running this stuff can be fun and challenging, but in the end you have a sweet system if you pull it off. :) There are just a few hurdles to cross, since at the ~140TB scale you're basically in enterprise storage land, with all the fun problems that come with scaling up/out. May your storage be plentiful and your drives stay spinning.