Home Lab Update - The Quest for a Quieter Server

22. November 2019 16:25 by Cameron

In an effort to make my server quieter, I downgraded to the September 2010 BIOS, which spins the fans at lower RPMs most of the time. To help with the heat produced by file transfers on traditional hardware RAID, I also began rebuilding my home server as an unRAID server over the past week. Moving everything over has taken several days, but ultimately it will be worth the work. The advantage of unRAID is that only the disks in use spin up, instead of the entire array.

After downgrading the BIOS on my HP ProLiant DL380 G7, the fans are much quieter at idle. Once unRAID is completely configured, the heat produced by the hard drives will be minimal as well: only three drives will spin up at any given time instead of ten, which means much less heat. One drawback of software RAID is slower file copies, but that is remedied with an SSD cache. I configured a 512GB NVMe SSD as the cache drive, and its contents are scheduled to be moved to the array nightly.
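The nightly transfer is just unRAID's Mover running on a schedule. For what it's worth, the same move can also be kicked off by hand from a shell; as far as I can tell, the stock script lives at /usr/local/sbin/mover:

# /usr/local/sbin/mover

That's handy if you want to flush the cache to the array before a large copy rather than waiting for the schedule.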

To begin the server conversion, I had to back up all of my media files (18TB!) to external hard drives. Then I swapped out the SAS RAID controller for an HBA SAS controller. The HBA card works well with the SAS expander I was already using in my RAID setup. I bought an LSI 9205-8i and flashed it with the P20 IT firmware from a DOS prompt. After a successful flash, I was able to boot from the unRAID flash drive I had created and begin configuring the server.
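For anyone attempting the same conversion, the DOS side boils down to just a few commands. Treat this as a rough outline rather than a full guide: it assumes the DOS sas2flsh.exe utility from LSI's P20 package, and the firmware file name below is a placeholder for whatever IT image ships for your particular card:

sas2flsh.exe -listall                      (confirm the controller is detected)
sas2flsh.exe -o -e 6                       (erase the existing IR firmware)
sas2flsh.exe -o -f <your_IT_firmware.bin>  (write the P20 IT firmware image)
sas2flsh.exe -o -sasadd 5000XXXXXXXXXXXX   (restore the SAS address printed on the card's sticker)

Write down the SAS address before erasing anything, since the erase step wipes it along with the old firmware.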

I spent considerably longer on the initial setup of the unRAID array than I would have liked. There are a lot of forum posts and YouTube videos about pre-clearing your hard drives before using them with unRAID. The preclear process takes a very long time because it makes multiple passes over every sector of each disk. I eventually discovered that you no longer have to explicitly preclear your drives with unRAID 6.6 and above; unRAID handles any clearing it needs on its own.

The final setup that worked for me was an array with two parity drives, eight data drives, and one cache drive (I plan to add a second cache drive later for redundancy). Upon starting the array, the drives needed to be formatted. One thing I kept messing up at this step was leaving the parity calculation running at the same time, which made the formats take an insane amount of time. Make sure to cancel any parity sync before you format your disks. Unfortunately, once the format operation begins, unRAID doesn't report any formatting progress in the UI. However, if you run the following command from an SSH session, you can at least get a sense of how far along it is, since the FSTYPE column fills in as each disk finishes:

# lsblk --output NAME,FSTYPE,LABEL,UUID,MODE
NAME        FSTYPE   LABEL  UUID                                
loop0       squashfs                                             
loop1       squashfs                                             
loop2       btrfs                                                   
loop3       btrfs                                                   
sda                                                                  
└─sda1      vfat     UNRAID                                
sdb                                                                  
└─sdb1                                                             
sdc                                                                   
└─sdc1                                                             
sdd                                                                   
└─sdd1                                                             
sde                                                                   
└─sde1                                                             
sdf                                                                   
└─sdf1                                                             
sdg                                                                 
└─sdg1                                                            
sdh                                                                 
└─sdh1                                                           
sdi                                                                  
└─sdi1                                                            
sdj                                                              
└─sdj1                                                           
sdk                                                            
└─sdk1                                                           
md1                                                              
md2                                                              
md3                                                            
md4                                                              
md5                                                              
md6                                                              
md7                                                             
md8                                                              
nvme0n1                                                          
└─nvme0n1p1 btrfs

In the end, you should see a partition with an XFS filesystem under each of your sdX data devices. Once all the drives showed XFS partitions, I started the parity sync again. With my particular setup of 5TB hard drives, it should take about 10-12 hours to complete, which works out to an average of roughly 115-140MB/s across the parity disk. Upon completion, I will be able to use the shares freely without worrying about parity syncs slowing things down.
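If you would rather watch the parity sync from that same SSH session instead of the web UI, unRAID's mdcmd helper dumps the raw array counters. Something along these lines should work, though the exact variable names may vary between releases:

# mdcmd status | grep -E "mdState|mdResync"

Comparing mdResyncPos against mdResyncSize gives a rough idea of the percentage complete.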

If my server doesn't sound like a jet engine all of the time, I may opt for re-installing the x16 PCIe riser so I can add a graphics card for Plex transcoding. However, that is a nice-to-have, not a must-have; I am already populating three PCIe slots with the HBA card, the SAS expander, and the NVMe SSD. Once I finish the initial build, I will be setting up Docker containers for Plex, a VPN server, file sharing, and many more. Here's hoping for a quieter server!
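To give an idea of where this is headed, here is roughly what the Plex container will look like. This is only a sketch using the linuxserver/plex image and unRAID's usual /mnt/user paths; the share names are placeholders, and the actual setup will likely be done through unRAID's Docker templates rather than the command line:

docker run -d --name=plex --net=host \
  -e PUID=99 -e PGID=100 -e VERSION=docker \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Media:/data \
  linuxserver/plex

Host networking keeps Plex discovery simple, and PUID/PGID of 99/100 match unRAID's default nobody/users ownership on the shares.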

Home Lab Update

25. July 2019 12:48 by Cameron in Plex, server

A couple of weeks ago, I expanded my server's RAID 6 array to 40TB, but in the process the RAID controller's write cache failed. I didn't know that this was the problem right away, though. I had just told the controller to expand the volume to 40TB when the application froze, and it seemed like the operation wasn't going to complete. I became impatient and forced the server to shut down before the expansion finished. When I went to restart the server, it would not recognize the write cache and failed to boot. I then tried removing the external card and moving everything back to the onboard P410i.

The original reason for getting an expansion RAID card was to have RAID 6 support without the need for a license. The P410i requires a license key for RAID 6, and those are prohibitively expensive. The P812 expansion cards are $12 apiece, which made them the right choice for my needs. After determining that the RAID card had failed along with its cache, I ordered another P812 and cache module. In the meantime, I was able to boot the server with the onboard P410i and back up the content from the array, albeit without a write cache. It took a couple of days, but now everything is backed up to two 8TB hard drives.

When my second P812 arrived, I installed the new card and got back up and running. I recreated the RAID 6 array at 40TB and have begun copying my files back over the network. The process is slow: I'm limited to 1Gbit networking at the moment, and for some reason my USB 3.0 hard drives are only operating at 480Mbit, i.e. USB 2.0 speeds. It should be much faster in the future with 10Gbit. I'll need to install an NVMe drive in my desktop to take full advantage of 10Gbit throughput, since my server limits SATA drives to SATA II at 3Gbit and my desktop is capped at SATA III's 6Gbit. I've already installed an NVMe SSD in the server and can get about 1600MB/s read/write, or roughly 12.8Gbit.

Next, my plan is to get an NVMe SSD for my desktop, two 10GbE cards, and an SFP+ cable to connect them. I will also need to write some PowerShell to copy files from my HDD to the SSD and then over the network to my server. On the server side, something will then watch the SSD for new files and copy them to the RAID array. The end goal is that files are only ever copied between SSDs on both ends of the 10Gbit link; a rough sketch of the desktop side is below.
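The script doesn't exist yet, so this is just a sketch of the desktop half of the idea. Every name in it is a placeholder: the staging folders, the share path, and the assumption that the server exposes its SSD as a normal SMB share:

# Stage files from the slow HDD onto the local NVMe SSD, then push them to the
# server's SSD-backed share so both ends of the 10Gbit link are reading and
# writing flash. All paths below are placeholders.
$source      = 'D:\Media\ToUpload'      # folder on the HDD (hypothetical)
$localCache  = 'E:\Staging'             # local NVMe SSD (hypothetical)
$serverShare = '\\server\ssd-inbox'     # SSD-backed share on the server (hypothetical)

# HDD -> local SSD
Get-ChildItem -Path $source -File | ForEach-Object {
    Copy-Item -Path $_.FullName -Destination $localCache
}

# Local SSD -> server SSD share, then remove the staged copy
Get-ChildItem -Path $localCache -File | ForEach-Object {
    Copy-Item -Path $_.FullName -Destination $serverShare
    Remove-Item -Path $_.FullName
}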
