This post continues my migration into my new rack.
After migrating the X10SRH, I really thought I was done. Everything was now in the rack and working like it should.
My Dell PowerEdge R420 acts as my primary NAS. It’s a dual E5-2620 v2 box with 256GB of RAM, an SFF-8088 HBA, and a 10Gb NIC. It runs Debian and stores many terabytes of data on top of ZFS.
The problem is that this box holds back both my current setup and many of my future plans. With room for only one PCIe card, I’m forced to use SAS uplinks to external shelves to connect all the drives, which creates a serious performance bottleneck. I also want to add 40Gb networking down the road, and the lack of PCIe slots limits me there too.
The Server Store has quickly become my new go-to for buying hardware, and this time was no exception. For $349 on one of their specials, I could get a dual-CPU Supermicro X9DRI board, a Supermicro 846 (24-bay) case, and a pair of low-TDP E5 v2 Xeons. Considering I could reuse my pile of RAM AND it included a SAS2 HBA, it didn’t take long to hit the button on that one.
Let’s do it
This was a pretty simple build. 90% of the work was done for me, as the system was already assembled. All I had to do was pull some RAM:
Well, now that was a lie. The ACTUAL hard part was migrating drives.
So. Many. Drives.
and more:
I also had a few SSDs to mount up. Note that this is an HP drive, in a Dell 2.5"-to-3.5" adapter, mounted in a Supermicro drive tray:
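On the software side, there wasn’t much drama: since everything lives on a ZFS pool, moving it between machines is basically an export on the old host and an import on the new one. Here’s a rough sketch of what that looks like; the pool name "tank" is a placeholder, not necessarily what I ran.

```
# On the old server: export the pool so it's cleanly released.
# "tank" is a placeholder pool name, swap in your own.
zpool export tank

# Physically move the drives, then on the new server:
zpool import          # lists importable pools found on the attached disks
zpool import tank     # imports the pool (add -f only if it complains about the old hostname)

# Sanity check that every disk showed up and the pool is healthy.
zpool status tank
```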
Conclusion
The only issue I had was that the boot drive wouldn’t boot because its EFI boot entry was missing from NVRAM. I covered how to fix that HERE.
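If you haven’t read that post, the short version is that the UEFI boot entry lives in the old motherboard’s NVRAM, so it doesn’t come along when the drive moves to a new board. Here’s a rough sketch of recreating the entry with efibootmgr; the disk, partition, and loader path below are example values, not necessarily what my install uses.

```
# List the current boot entries to confirm the Debian entry is missing.
efibootmgr -v

# Recreate the entry pointing at the EFI system partition. /dev/sda,
# partition 1, and the grubx64.efi path are examples; adjust for your layout.
efibootmgr -c -d /dev/sda -p 1 -L "debian" -l '\EFI\debian\grubx64.efi'
```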
Welp, that’s it. I’ve migrated all my servers to the rack. If you’ve been following along so far, I hope you’ve enjoyed the ride!