Home Lab Update

25. July 2019 12:48 by Cameron in Plex, server

A couple of weeks ago, I expanded my server's RAID 6 array to 40TB, but in the process the RAID controller's write cache failed. I didn't know right away that this was the problem. I had just told the array to expand the volume to 40TB when the application froze, and it seemed like the operation wasn't going to complete. I became impatient and force-shut down the server before it finished. When I went to restart the server, it would not recognize the write cache and failed to boot. I then tried removing the external card and moving everything back to the onboard P410i.

The original reason for getting an expansion RAID card was to have RAID 6 support without needing a license. The P410i requires a license for RAID 6, and those licenses are prohibitively expensive. The P812 expansion cards are $12 apiece, which made them the right choice for my needs. After determining that the RAID card had failed along with the cache, I ordered another P812 and cache module. In the meantime, I was able to boot the server with the onboard P410i and back up the contents of the array, albeit without a write cache. It took a couple of days, but everything is now backed up to two 8TB hard drives.

When my second P812 arrived, I installed the new card and got back up and running. I recreated the RAID 6 array at 40TB and have begun copying my files back over the network. The process is slow: I'm on 1Gbit/s Ethernet at the moment, and for some reason my USB 3.0 hard drives are only operating at 480Mbit/s, which is the USB 2.0 rate, so they have presumably negotiated down. It should be faster in the future with 10Gbit/s networking, though I'll need to install an NVMe drive in my desktop to take full advantage of that throughput. My server limits SATA drives to SATA II at 3Gbit/s, and my desktop is capped at 6Gbit/s with SATA III. I've installed an NVMe SSD in my server and can get about 1600MB/s read/write, or roughly 12.8Gbit/s.
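For a rough sense of scale, dividing each line rate by 8 gives the best-case byte throughput (SATA divides by 10 because of its 8b/10b encoding, and real transfers land a bit lower still after protocol overhead):

USB 2.0: 480Mbit/s ≈ 60MB/s
Gigabit Ethernet: 1Gbit/s ≈ 125MB/s
SATA II: 3Gbit/s ≈ 300MB/s
SATA III: 6Gbit/s ≈ 600MB/s
10Gbit Ethernet: 10Gbit/s ≈ 1,250MB/s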

Next, my plan is to get an NVMe SSD for my desktop, two 10GbE cards, and an SFP+ cable to connect them. I will also need to write some PowerShell to copy files from my HDD to the SSD and then over the network to my server. On the server side, another script will watch the SSD for new files and copy them to the RAID array. The end goal is for the network hop to run SSD-to-SSD on both ends; a rough sketch of the desktop side is below.
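Something like this could handle the desktop side. It's a minimal sketch with made-up paths and share name, not the final script:

```powershell
# Stage files from the slow HDD onto the local NVMe SSD, then push
# them over the network to the SSD cache share on the server.
# All paths and the share name below are hypothetical placeholders.
$hddSource   = 'D:\Rips'          # slow local HDD
$ssdStaging  = 'C:\Staging'       # local NVMe SSD
$serverCache = '\\server\cache'   # NVMe cache share on the server

Get-ChildItem -Path $hddSource -Recurse -File | ForEach-Object {
    # Stage onto the fast local SSD first.
    $staged = Join-Path $ssdStaging $_.Name
    Copy-Item -Path $_.FullName -Destination $staged

    # Then push the file SSD-to-SSD across the network.
    Copy-Item -Path $staged -Destination (Join-Path $serverCache $_.Name)

    # Clean up the staging copy once the network copy completes.
    Remove-Item -Path $staged
}
```

Run sequentially like this, the HDD read is still the pacing step; overlapping the staging and network copies (background jobs, for example) would be the next refinement.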

Home Lab Update

2. July 2019 16:11 by Cameron in Plex

Initially, I had thought I would transcode 4K to 1080p using my lab server, but I couldn't get great results with the Quadro P400. I fear it is a limitation of the processors and available bandwidth as well as the P400's GPU pipeline. Maybe I would get better results with a P2000, but instead, I will be hosting 1080p rips of all my 4K movies for viewing on the go.

My long-term goal is to have my ripping PC connected to my server over a 10GbE link, copying to an SSD cache on the server, but this will require patience as each 10GbE NIC is ~$50 and the SFP+ cables aren't cheap either. I will likely get an SFP+ switch at some point too so I can enable 10GbE on other devices as needed. To better saturate 10GbE, I have since removed the GPU and replaced it with a PCIe x16 NVMe adapter for a network cache drive. SATA, when paired with this server and its SAS backplane, is limited to SATA II speeds.

I had a spare, older 256GB NVMe SSD from another computer, but it maxed out at 300MB/s writes, which isn't enough to keep up with 10GbE. Therefore, I got an Inland 512GB NVMe SSD that maxes out at about 850-900MB/s, or ~7Gbit/s, in the HP ProLiant DL380 G7's PCIe gen 2 x16 slot. That's slightly better than SATA III speeds on a 10-year-old server, which I'm happy with. I am going to work on some sort of file watcher to move files from the cached location to my long-term Plex library. The idea is to mirror the folder structure of my main Plex library, copy each file over, and remove the cached version when complete; a sketch of what I have in mind follows.
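A FileSystemWatcher-based script is one way to do it. This is a rough sketch with placeholder paths and a naive "has the file stopped growing?" check, not a finished tool:

```powershell
# Watch the NVMe cache drive for new files and move them into the
# long-term Plex library, mirroring the folder structure.
# Both paths are hypothetical placeholders.
$cache   = 'E:\Cache'
$library = 'D:\Plex'

$watcher = New-Object System.IO.FileSystemWatcher $cache
$watcher.IncludeSubdirectories = $true
$watcher.EnableRaisingEvents   = $true

Register-ObjectEvent $watcher Created -MessageData @{ Cache = $cache; Library = $library } -Action {
    $src = $Event.SourceEventArgs.FullPath

    # Crude settle check: wait until the file stops growing so we
    # don't grab something that is still being copied in.
    do {
        $size = (Get-Item $src).Length
        Start-Sleep -Seconds 30
    } until ((Get-Item $src).Length -eq $size)

    # Mirror the cache-relative path under the library root, then
    # copy the file over and remove the cached version.
    $dest = $src.Replace($Event.MessageData.Cache, $Event.MessageData.Library)
    New-Item -ItemType Directory -Path (Split-Path $dest) -Force | Out-Null
    Copy-Item -Path $src -Destination $dest
    Remove-Item -Path $src
}
```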

To support more expansion and redundancy, I bought a secondary RAID controller, the HP P812. This card is interesting because it supports RAID 6, whereas the onboard controller only supports up to RAID 5 without a license, and those licenses are hard to find and likely expensive. The new RAID card was about $16. Transferring to the new controller was easy since the RAID metadata is stored on the disks themselves. The key is that these are all HP OEM RAID controllers, so it was as simple as swapping each backplane's cables from the motherboard to the new RAID card.

When installing the new RAID card, I was given the option to change the RAID configuration to RAID 6. Note that this takes a while on a large array, as the parity has to be redistributed. After a grueling week and a half of migrating from RAID 5 to RAID 6, I now have about 30TB of usable space in RAID 6 on my lab server. Funnily enough, I am already in the process of expanding the array by another 10TB, for a total of 40TB. This should be worth the time investment, as it will take a long time to fill up (hopefully).
