Buy new or second hand Enterprise?
For a long time it has been normal to use older Enterprise hardware when running home servers. Is that still the correct choice in 2025? Let's explore.
2 examples most people probably know about:
Need more SATA Ports? Get an LSI SAS2008 or similar LSI controller!
For years the staple HDD controller to use was an older LSI based PCIe card. These were rock solid and provided great passthrough SATA ports when running with IT firmware.
One caveat though: make sure to point a fan at them, since they were designed to run in Enterprise servers with lots of airflow. Sure, no problem!
Need 10Gbit? Get an older Intel x520 card for instance
If you need 10Gbit, you can easily find Intel X520 or X540 cards to run fiber, DAC or 10GBASE-T copper to your server. These are great NICs that perform well and are available cheap!
Again though, make sure there is enough airflow in your box, otherwise they will overheat!
Heat? There is a reason…
But the heat that both the HDD controller and the network controller mentioned above spew out has an underlying reason: these cards use a lot of power, and that power ends up as heat.
This might not have been an issue when these cards were first released (or in their original target environment), but now, many years later, I'm not so sure they should still be the de facto choice if power usage is also a concern.
From my own measurements, the power usage of the system goes up by about 8W to 14W for an LSI SAS2008-based SAS controller. The same goes for the Intel X540 NIC: adding one to a system instantly raises power draw by about 12W.
Calculating the cost of that heat!
So take my old server as an example: with 2x LSI SAS2008 cards in there and an Intel X540-T2 dual port 10GbE NIC, just those add-in cards alone are burning about 12W + 12W + 12W = 36W!
Doing some quick math, 36W of average usage is 36Wh per hour, or 0.86kWh per day. 0.86kWh x 365 = 315kWh per year. And since a kWh costs around $0.25 over here, these cards are costing me roughly $80 per year! That's certainly not nothing.
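If you want to plug in your own numbers, here's a minimal sketch of that same back-of-the-envelope math in Python (the 12W per card and the $0.25 per kWh tariff are my values, swap in your own):

```python
# Back-of-the-envelope yearly cost of always-on add-in cards.
# The wattages and the $0.25/kWh price are my own numbers; adjust as needed.
HOURS_PER_YEAR = 24 * 365

def yearly_cost(watts: float, price_per_kwh: float = 0.25) -> float:
    """Yearly energy cost for a constant load running 24/7."""
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    return kwh_per_year * price_per_kwh

cards_watts = 12 + 12 + 12  # 2x LSI SAS2008 + 1x Intel X540-T2
print(f"{cards_watts}W -> {cards_watts * HOURS_PER_YEAR / 1000:.0f}kWh/year, "
      f"~${yearly_cost(cards_watts):.0f}/year")
# Prints: 36W -> 315kWh/year, ~$79/year
```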
More issues
Both the LSI SAS2008 and the Intel X540 are PCIe Gen2 x8 devices. Modern motherboards come with PCIe Gen3, or in the case of our selected modern hardware platform even Gen4 and Gen5! Sticking a Gen2 card into one of those slots works fine, but it's going to eat up a lot of lanes that aren't being utilized anywhere near their potential.
And there are “rumors”, let’s call them (I have not personally verified these claims), that this older type of hardware doesn’t have updated drivers and PCIe interfaces and thus doesn’t play nicely with modern CPUs and their power-saving features, blocking things like ASPM and the deeper package C-states and causing the system to use even more power overall.
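I haven't verified this either, but you can at least inspect what your cards report. A rough sketch (assuming a Linux host with pciutils installed, run as root so `lspci -vv` actually shows the link capabilities) that prints whether each PCIe device advertises ASPM and whether it is enabled:

```python
# Rough sketch: parse `lspci -vv` (Linux, run as root) and report per device
# whether ASPM link power management is supported and whether it is enabled.
import re
import subprocess

output = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

device = None
for line in output.splitlines():
    if line and not line[0].isspace():          # unindented line = new device header
        device = line.split(" (")[0]
    elif "LnkCap:" in line and "ASPM" in line:  # what the device claims to support
        m = re.search(r"ASPM ([^,]+),", line)
        print(f"{device}\n  ASPM supported: {m.group(1) if m else '?'}")
    elif "LnkCtl:" in line and "ASPM" in line:  # what is actually configured
        m = re.search(r"ASPM ([^;]+);", line)
        print(f"  ASPM setting:   {m.group(1) if m else '?'}")
```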
The solution? Use modern versions
Now these cards serve a purpose, of course: all my HDDs connect to the SAS/SATA controllers, and the NIC is what I use to talk to my server. That's not up for dispute, we need those devices. But… what then?
Using much more modern HDD controllers and 10GbE network cards can save a lot of power!
Selected HDD SATA Controller
In our new build we are going to be using ASM1166-based SATA controllers in an M.2 form factor. This is a 6-port SATA600 controller with support for all the modern protocols. Its back end is a PCIe Gen3 x2 connection, giving you an effective bandwidth of about 1800MB/sec for those 6 ports.
Given we are going to use 2 of these modules per node, that gives a theoretical max of 3600MB/sec with the disks spread over the 2 cards, more than enough for HDDs since the Seagate Exos X16 14TB drives we are using peak at about 250MB/sec sustained. These numbers are also observed in practice: a ZFS scrub, for instance, easily reaches 2000MB/sec total with 8 disks. I've done hundreds of TB of testing, including the same tests with LSI SAS2008-based cards, and the ASM1166-based controllers didn't throw a single error and were every bit as fast as the LSI SAS2008 controller.
They also work in “IT” or passthrough mode, and I have even used the Seagate Windows tool SeaChest to update the firmware of all disks through these cards without a single issue. Reading SMART data also works perfectly in both Windows and Linux.
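If you want to confirm SMART passthrough works on your own setup, a small loop over the drives with smartctl (from smartmontools) does the trick. A minimal sketch, assuming Linux, root privileges and example device names:

```python
# Minimal SMART health check across drives hanging off the ASM1166 controllers.
# Assumes Linux with smartmontools installed, run as root; device names are examples.
import subprocess

DRIVES = [f"/dev/sd{letter}" for letter in "abcdefgh"]  # adjust to your system

for dev in DRIVES:
    result = subprocess.run(["smartctl", "-H", dev], capture_output=True, text=True)
    # smartctl prints e.g. "SMART overall-health self-assessment test result: PASSED"
    health = [l for l in result.stdout.splitlines() if "overall-health" in l]
    print(dev, "->", health[0].split(":")[-1].strip() if health else "no SMART data?")
```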
I've also done extensive testing running up to 3 modules at the same time in a single system, giving you up to 18x SATA600 ports if so desired. Most likely 4x of these 6-port M.2 controllers in a single system, for 24x SATA600 ports total, would also work perfectly fine. That would give about 7200MB/sec of total available bandwidth to the disks, which still works out to about 300MB/sec per disk. Plenty for HDDs, less great for SSDs, but we're only going to be using NVMe-based SSDs anyway.
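To make the bandwidth-per-disk reasoning explicit, here's the same arithmetic as a tiny sketch, using the rough 1800MB/sec-per-card figure from above:

```python
# Per-disk bandwidth when spreading disks evenly over ASM1166 M.2 controllers,
# using ~1800MB/sec of usable PCIe Gen3 x2 bandwidth per card as a rough figure.
MB_PER_CARD = 1800

def per_disk_bandwidth(cards: int, disks: int) -> float:
    return cards * MB_PER_CARD / disks

print(per_disk_bandwidth(cards=2, disks=8))    # this build: 450 MB/sec per HDD
print(per_disk_bandwidth(cards=4, disks=24))   # fully loaded: 300 MB/sec per disk
# Both comfortably above the ~250MB/sec an Exos X16 peaks at.
```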
And last, power usage: from my own observations (not really accurate measurements), the power usage of the system goes up by about 2W per M.2 card I add.
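Plugging that back into the earlier cost math, and looking only at the HDD controllers for a fair comparison (the 10GbE NIC swap is a separate story): two LSI SAS2008 cards at roughly 12W each versus two ASM1166 modules at roughly 2W each, using my numbers again:

```python
# Rough yearly cost comparison of the SATA controllers only, same assumptions
# as before: 24/7 uptime and $0.25 per kWh (my numbers, adjust for yours).
def yearly_cost(watts: float, price_per_kwh: float = 0.25) -> float:
    return watts * 24 * 365 / 1000 * price_per_kwh

old = 2 * 12   # 2x LSI SAS2008, ~12W each
new = 2 * 2    # 2x ASM1166 M.2 module, ~2W each
print(f"old: ~${yearly_cost(old):.0f}/yr, new: ~${yearly_cost(new):.0f}/yr, "
      f"saving ~${yearly_cost(old - new):.0f}/yr")
# Prints: old: ~$53/yr, new: ~$9/yr, saving ~$44/yr
```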