Why Plex?

2. August 2019 14:45 by Cameron in Plex

In the modern age of streaming movies and TV, we face a growing problem of media fragmentation. No longer can you watch all of your desired shows under one subscription, as all of the big studios are fighting for a piece of the pie when it comes to licensing movies and TV across streaming services. Worse, at any given moment a publishing company can revoke these licenses from one service and move to another, and if the new service is one you don't subscribe to, you have to decide whether the cost of a new subscription is worth keeping access to your show.

All of these subscription services add up. One or two might not seem like much, but if you subscribe to several, you are likely paying close to, if not more than, a comparable cable subscription. The only difference is that everyone makes more money on their content by bypassing cable licensing fees. Streaming has changed the way we consume content by cutting out the middleman and allowing us to watch on our own time. However, don't be fooled into thinking it is always cheaper than cable.

If you enjoy certain TV series but don't want to pay a monthly subscription to watch them, you can opt for Plex and purchase box sets of your TV series instead. There is more of an upfront cost in buying hardware compatible with Plex, but the good thing is that you can start small and plan for expansion as your budget allows. The box sets tend to be pricey up front too, but a single series box set (~$200) will have paid for itself in about a year compared to keeping a subscription running.

The digital codes distributed with new movies used to be redeemed through Ultraviolet but have since shifted to Movies Anywhere. The recent closure of Ultraviolet reminded me why I don't like DRM. I'm a huge fan of buying physical movies and ripping them to my Plex server. While I like the concept of Movies Anywhere, how do I know it won't ultimately meet the same fate as Ultraviolet? With my own library, I keep the original discs boxed up and have access to my hundreds of movies at the click of a button. I don't have to worry about a service going defunct.

I'm not boycotting streaming services altogether, but I don't like having content I enjoy watching tied up in DRM platforms, so I plan to keep my streaming subscriptions to a minimum: Hulu, Netflix, Amazon Prime (which I mostly use for shipping), and probably Disney's streaming platform. As long as I can still buy physical media, that will be my preferred way of building a library for many years to come.

Home Lab Update

25. July 2019 12:48 by Cameron in Plex, server

A couple of weeks ago, I expanded my server's RAID 6 array to 40TB, but in the process the RAID controller's write cache failed. I didn't know right away that this was the problem. I had just told the array to expand the volume to 40TB when the application froze, and it seemed like the operation wasn't going to complete. I became impatient and force-shut down the server before it finished. When I went to restart the server, it would not recognize the write cache and failed to boot. I then tried removing the external card and moving everything back to the onboard P410i.

The original reason for getting an expansion RAID card was to have RAID 6 support without the need for a license: the P410i requires one, and they're prohibitively expensive, while P812 expansion cards are $12 apiece, which made them the right choice for my needs. After determining that the RAID card had failed along with the cache, I ordered another P812 and cache module. In the meantime, I was able to boot my server with the onboard P410i and back up the contents of the array, albeit without a write cache. It took a couple of days, but everything is now backed up to two 8TB hard drives.

When my second P812 arrived, I installed the new card and got back up and running. I recreated the RAID 6 array at 40TB and have begun copying my files back over the network. The process is slow: the network is 1Gb/s at the moment, and for some reason my USB 3.0 hard drives are only operating at 480Mb/s (USB 2.0 speeds). It should be faster in the future with 10Gb/s networking, though I'll need to install an NVMe drive in my desktop to take full advantage of 10Gbit throughput, since my server limits SATA drives to SATA II at 3Gb/s and my desktop is capped at 6Gb/s with SATA III. I've already installed an NVMe SSD in the server and can get about 1600MB/s read/write, or roughly 12.8Gb/s.

Next, my plan is to get an NVMe SSD for my desktop, two 10Gbe cards, and an SFP+ cable to connect them. I will also need to write some PowerShell to copy files from my HDD to the SSD and then over the network to my server; the server will then watch its SSD for new files and copy them to the RAID array. The end goal is that files are only ever copied between SSDs on both ends of the link.
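
A first pass at that script might look something like this; the drive letters and share name below are placeholders for my actual setup.

    # Stage a new rip from the HDD to the local NVMe SSD, then push it
    # to the server's SSD cache share over the 10Gbe link.
    $source      = 'D:\Rips'         # HDD where new rips land (placeholder)
    $localCache  = 'E:\Cache'        # desktop NVMe SSD (placeholder)
    $serverCache = '\\server\cache'  # SSD cache share on the DL380 (placeholder)

    Get-ChildItem -Path $source -File | ForEach-Object {
        # HDD -> SSD first, so the network transfer reads from flash
        $staged = Copy-Item -Path $_.FullName -Destination $localCache -PassThru
        # SSD -> server SSD over the 10Gbe connection
        Copy-Item -Path $staged.FullName -Destination $serverCache
        # Clean up the local staged copy once it is on the server
        Remove-Item -Path $staged.FullName
    }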

Home Lab Update

2. July 2019 16:11 by Cameron in Plex

Initially, I had thought I would transcode 4K to 1080p using my lab server, but I couldn't get great results with the Quadro P400. I suspect it is a limitation of the processors and available bandwidth, as well as the P400's GPU pipeline. Maybe I would get better results with a P2000, but instead I will be hosting 1080p rips of all my 4K movies for viewing on the go.

My long-term goal is to connect my ripping PC to my server over a 10Gbe connection and copy to an SSD cache on the server, but this will require patience: each 10Gbe NIC is ~$50 and the SFP+ cables aren't cheap either. I will likely get an SFP+ switch at some point too, so I can enable 10Gbe on other devices as needed. To better saturate 10Gbe, I have since removed the GPU and replaced it with a PCIe x16 NVMe adapter for a network cache drive, since SATA, when paired with this server and its SAS backplane, is limited to SATA II speeds.

I had a spare, older 256GB NVMe SSD from another computer, but it maxed out at 300MB/s write speeds, which isn't enough for 10Gbe. I therefore got an Inland 512GB NVMe SSD that reaches about 850-900MB/s, or ~7Gb/s, in the HP Proliant DL380 G7's PCIe gen 2 x16 slot. That gets me slightly better than SATA III speeds on a 10-year-old server, which I'm happy with. I am going to work on some sort of file watcher to move files from the cached location to my long-term Plex library. The idea is to mirror the folder structure of my general Plex library, copy the files over, and remove the cached version when complete.
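
Something along these lines is what I have in mind; the paths are placeholders, and a real version would need to wait for each file to finish writing before moving it.

    # Watch the NVMe cache for new files and move them into the Plex
    # library, preserving the mirrored folder structure.
    $watcher = New-Object System.IO.FileSystemWatcher 'D:\Cache' -Property @{
        IncludeSubdirectories = $true
        EnableRaisingEvents   = $true
    }

    Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
        $src = $Event.SourceEventArgs.FullPath
        # Rebuild the same relative path under the library root
        $dest = $src.Replace('D:\Cache', 'E:\PlexLibrary')
        New-Item -ItemType Directory -Path (Split-Path $dest) -Force | Out-Null
        Copy-Item -Path $src -Destination $dest   # copy into the library mirror
        Remove-Item -Path $src                    # then remove the cached version
    }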

To support more expansion and redundancy, I bought a secondary RAID controller, the HP P812. This card is interesting because of its RAID 6 support, since the onboard RAID controller only supports up to RAID 5 without a license, and licenses are hard to find and likely expensive; the new RAID card was about $16. Transferring to the new controller was easy enough since the RAID metadata is stored on the disks themselves. The key is that these are all HP OEM RAID controllers, so it was as simple as swapping each backplane's cable from the motherboard to the new RAID card.

When installing the new RAID card, I was given the option to migrate the array to RAID 6. Note that this takes a while on a large array, as the parity has to be redistributed. After a grueling week and a half of transitioning my RAID 5 to RAID 6, I now have about 30TB of usable space in RAID 6 on my lab server. Funnily enough, I am already in the process of expanding the array by another 10TB, for a total of 40TB. It will be worth the time investment, as it will (hopefully) take a long time to fill up.

Home Network Update

18. June 2019 23:22 by Cameron in Home Network

After moving into our house, I decided to have our Internet routed through the basement. This means no wires through the main floor, so I had to think of a way to get coverage to my home theater PC in the living room. In my basement, I have a 24-port 1Gbe switch connecting my HP Proliant DL380 G7, my office setup, and my Ethernet-over-power network, which gives me wired Ethernet on the main floor. The speed coming into the house is 400-600Mb/s, and Ethernet over power provides around 130Mb/s, which is ample for surfing and streaming.

I plan to move my home theater PC to my basement and give it a direct 10Gbe connection to my server, mainly for faster file transfers when I rip a new 4K HDR Blu-Ray. For this, I will copy files to a cache SSD first and then to my RAID 6 array. RAID 6 provides two-drive fault tolerance, but it suffers a little in speed, and since I doubt the array alone can saturate 10Gb, the SSD cache will be a temporary storage location until the files are on the server. I will then have a job that moves the files from the cached location to the RAID array, as sketched below.
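
A minimal version of that job could be a scheduled robocopy that drains the cache on a timer; the paths here are placeholders, and robocopy's /MOV switch deletes each file only after a successful copy.

    # Nightly job: move everything from the SSD cache into the RAID 6 array
    Register-ScheduledJob -Name 'DrainPlexCache' -ScriptBlock {
        robocopy 'D:\Cache' 'E:\PlexLibrary' /E /MOV /R:2 /W:5
    } -Trigger (New-JobTrigger -Daily -At '3:00 AM')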

I still need a good rack-mountable power delivery system and a decent rack-mountable UPS. Hopefully I can invest in some of these soon.

Plex Setup Update

3. June 2019 22:33 by Cameron

I now have all of my media in one place on my HP Proliant DL380 G7. It took a few days, as the library consists of over 14TB of files.

When I tried to install the Quadro P400 in the server, the system would crash during driver installation and force the server to reboot, leaving Windows effectively bricked. I fought with this three or four times before giving up on getting it to work. Plan B is to use a single-slot GTX 1050 in its place. I have a GTX 750 installed in my colo server, so this should work too. The GTX 1050 has more shader units than the Quadro P400, it is HP branded (so hopefully the fan control will be quieter), and its memory bus is 128-bit versus the P400's 64-bit. I will need to use the NVENC driver unlock to get its full potential, but according to the attached matrix, I should be able to get 14 simultaneous transcodes. I haven't benchmarked this myself yet, but the numbers look promising. I will post an update shortly after I've had a chance to test these findings.

nVidia NVENC NVDEC Matrix.pdf (427.36 kb)

ODBC vs RESTful API

31. August 2012 00:55 by Cameron in C++, Qt

In the process of writing the IGA desktop application, I've been faced with several design decisions. One of the most challenging was how to most effectively interact with the database backend. To help with this decision, I weighed the pros and cons of using ODBC versus a RESTful web API. Each of these methods is very good for certain purposes.

ODBC

Pros

  • Cross platform support through C/C++ libraries
  • Secure using username and password (connection encrypted)

Cons

  • Some ISPs/schools block port 1433 (used by SQL Server) or other database ports (MySQL, PostgreSQL, etc.)
  • Slow response time in some instances (running multiple queries can take a fair amount of time)

RESTful API

Pros

  • Fast response time
  • Abstracts data backend - i.e. allows for an easy switch of database servers or switch of web server languages
  • Easily allows for multiple desktop and mobile frontends by adhering to the web API interface (ODBC isn't usually standard on mobile platforms)

Cons

  • Requires tighter security
  • All requests must be encrypted using SSL, or plain text is sent to the server
  • Requires some sort of authentication, either by API key or another method, to prevent arbitrary access to the server
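
To make the contrast concrete, here is a rough sketch of the two access patterns in PowerShell (hypothetical host names, credentials, and endpoint; not the actual IGA backend, which is consumed from C++/Qt):

    # ODBC: the client talks directly to the database, typically over
    # port 1433 for SQL Server, which many campus networks block.
    $connStr = 'Driver={SQL Server};Server=db.example.com;Database=iga;Uid=appUser;Pwd=secret;'
    $conn = New-Object System.Data.Odbc.OdbcConnection $connStr
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = 'SELECT name FROM games'
    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) { $reader['name'] }
    $conn.Close()

    # RESTful API: a single HTTPS request on port 443, authenticated
    # with an API key, with the database hidden behind the web server.
    Invoke-RestMethod -Uri 'https://example.com/api/games' `
        -Headers @{ 'X-Api-Key' = 'my-api-key' }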

Ultimately, I decided to go with a RESTful web API to keep the database architecture separate from the IGA desktop application. This allows me to change the database backend without breaking the application, as long as I keep the API interface the same. Another huge factor in choosing a RESTful web API is that my school blocks port 1433 on its campus wireless networks; I want college students to be able to use the IGA desktop application while on campus, so this was a necessary choice. Overall, both provide advantages and disadvantages, and neither one is "better" than the other. I hope this helps people with the decision between ODBC and a RESTful API.
