I don’t think you should be using it anymore if it’s getting hot enough to cook a pizza…
cooked perfectly!
Avoid hardware RAID (have a look at this). Use Linux MD or BTRFS or ZFS.
It’s a 2004 server, you can’t do anything else but HW RAID on this. Also, it’s using Ultra SCSI (and you should not use that in 2024 either ahah)
SCSI was the crème de la crème ages ago! Isn’t it just a matter of going into its BIOS, configuring the hardware RAID (go for mirror only!?), enduring the noise it probably makes, and installing? :)
Indeed! I have a lot of SCSI disks, PCI cards and a few cables too! (also, SCSI is fun to pronounce… SKEUZY) But on this server, the RAID card doesn’t have any option to create a RAID array in its BIOS. From what I can tell it needs special software, and I can’t find good tutorials or documentation out there :(
You can find the 7.12.x support CD for that controller at https://www.ibm.com/support/pages/ibm-serveraid-software-matrix. I’m pretty sure that server model did not support USB booting so you’ll need to burn that to a disc. This will be the disc to boot off of to create your array(s).
I forget if the support CD had the application you would install in Windows to manage things after installation or not, or if that’s only on the application CD. Either way you’ll find several downloads for various OS drivers and the applications from that matrix.
Thanks for the link! I’ll definitely need to try this… I have a few CDs laying around, I’ll burn one!
I did not know that
Serving pizza and files. What a time to be alive.
mv pizza.01 /srv/mouth/
In Linux everything is a file!
I have a (crappy) PowerEdge and know for a fact that that’s the wrong end of any rack server to put the pizza on.
The only heat there would be from the drive backplane; all the boiling-hot CPUs, RAM, and expansion cards are further back.
Who said it was to keep it warm? Maybe it’s to cool it off before eating it :)
Also, drives can get pretty hot
nice pizza
pizza for scale :)
Does it cook pizza?
I think I would get rid of that optical drive and install a converter for another drive, like a 2.5″ SATA. That way you could get an SSD for the OS and leave the bays for RAID.
Other than that, the recommendations depend on what you want to put on this beast and whether you want to utilize the hardware RAID.
For example, if you are thinking of a file server with ZFS, you need to bypass the hardware RAID completely by getting it to expose the disks directly to the operating system. Most would investigate whether the RAID controller can be flashed to IT mode for this. If not, some controllers support a simple JBOD mode, which would still be better than running ZFS on top of a RAID volume. ZFS likes to manage the disks directly. You can generally tell it’s set up correctly if you can see all your disk serial numbers during setup.
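A quick way to sanity-check that the OS really sees raw disks, once it’s installed (just a sketch; output obviously depends on your hardware):

```shell
# If the controller is truly passing disks through, each drive shows up
# with its own model and serial number here:
lsblk -d -o NAME,MODEL,SERIAL,SIZE

# /dev/disk/by-id also gives stable, serial-based names, which are the
# safest paths to hand to zpool later:
ls -l /dev/disk/by-id/
```

If every disk shows the same generic RAID-volume model and no serial, the controller is still in the way.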
Now, if you do want to utilize the RAID controller and are interested in something like Proxmox or just a simple Debian system, I have had great performance with XFS on hardware RAID. You lose out on some advanced copy-on-write features, but if disk I/O is your focus, consider it worth playing with.
My personal recommendation is to get rid of the optical drive and replace it with a 2.5″ converter for more installation options. I would also recommend maxing out the RAM and possibly upgrading the network card to a 10 Gb NIC if possible. It wouldn’t hurt to investigate the power supply either; the original may be a bit dated, and you may find a more modern supply that is more energy efficient.
General OS recommendation would be Proxmox installed in ZFS mode with an ashift of 12.
(It’s important to get this number right for performance, because it can’t be changed after pool creation: 12 for hard disks and most SSDs, 13 for some more modern SSDs.)
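To make that concrete: ashift is just the base-2 log of the sector size ZFS will assume, and it’s set per pool at creation time (pool and device names below are placeholders, not anything from this thread):

```shell
# ashift = log2(sector size): 2^9 = 512 B (legacy), 2^12 = 4 KiB, 2^13 = 8 KiB
echo $((1 << 9)) $((1 << 12)) $((1 << 13))

# Set it when the pool is created -- it cannot be changed afterwards:
#   zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
# Verify what a pool actually got:
#   zpool get ashift tank
```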
Only do zfs if you can bypass all the raid functions.
I would install the rpool as a basic ZFS mirror on a couple of SSDs. When the system boots, I would log into the web GUI and create another ZFS pool out of the spinners, again with ashift 12. If this is mostly a pool for media storage, I would make it a RAID-Z2; if it is going to have VMs on it, I would make it RAID 10 style (striped mirrors). Disk I/O is significantly better for VMs in a striped-mirror ZFS pool.
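From the command line, those two layouts would look roughly like this (a sketch only; `tank` and `DISKn` are placeholder names standing in for /dev/disk/by-id paths, and the Proxmox GUI runs equivalent commands for you):

```shell
# Bulk media storage: one RAID-Z2 vdev (survives any two disk failures)
zpool create -o ashift=12 tank raidz2 DISK1 DISK2 DISK3 DISK4 DISK5 DISK6

# ...or, for VM storage: striped mirrors ("RAID 10 style"),
# which gives much better random I/O at the cost of capacity
zpool create -o ashift=12 tank mirror DISK1 DISK2 mirror DISK3 DISK4
```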
From here, for a bit of easy ZFS management, I would install Cockpit on top of the hypervisor with the ZFS plugin. That should make it really easy to create, manage, and share ZFS datasets.
If you’ve read this far and are considering a setup like this, one last warning: use the Proxmox web UI for all the tasks you can, and do not use the Cockpit web UI for much more than ZFS management.
Have fun creating LXCs and VMs for all the services you could want.
Hey, it’s a 2005 server: it can’t do IT mode, it only has 70 GB Ultra SCSI drives, a 10 Gb NIC would be useless (it’s only PCI, not PCIe), and it has DDR2 RAM and single-core processors too!
I’ll probably install Debian; I had fun trying Windows Server 2003. It has a floppy drive too, and I’ll definitely keep the DVD and floppy drives in there! (The CD drive is IDE, btw.) And you can only configure the RAID array via a CD provided by IBM (no, you cannot boot this CD from a USB key, as the software on the CD looks for the DVD drive and not a USB key).
Most of everything you said would be accurate for recent servers tho, but not here, not at all ahah!