Server Storage

Every home network can benefit from shared storage to hold files and provide centralized backups for the computers on the network. I’ve put together what I think is a nice, simple design that accommodates my storage needs.

The vast majority of my storage consists of Movies and TV Shows used by my Plex server. Over the past year or so I’ve seen storage growth of around 500-600 GB per month, making storage management an ongoing task. At the time of this writing, I have approximately 34 TB of storage dedicated to media files, with another 34 TB needed to back up all of this data.


The goals of my storage system are as follows:

  1. Provide centrally managed storage for media files, backups, and file sharing
  2. Easily accommodate incremental increases in storage using any size/type of drive
  3. Minimize complexity and maximize flexibility
  4. Allow any drive to be moved to another computer while preserving all data on the drive
  5. Allow file sharing via NFS and Samba

All of the above goals are intended to maximize flexibility and minimize the time I spend managing storage.

Storage Design

I currently have just under 100 TB of usable storage between the two servers I have running at home. About half of that storage is used to back up my large Plex media library. There are a total of 22 drives: 18 spinning disks of various sizes for primary storage, plus 4 SSDs that I use to run virtual machines.

I considered various RAID options that would provide some redundancy but ultimately decided against them. A number of years ago I dabbled with RAID but found it somewhat inflexible, and I wanted to avoid the common design constraints that RAID arrays impose. Furthermore, the majority of my data consists of static media files (Movies, TV Shows, and Music) that I can live without for a few hours if I need to restore from backups.

With RAID out of consideration, I decided to use MergerFS to pool individual disk drives and present them as a single logical drive. I chose MergerFS because it’s easy to set up, allows me to expand the storage pool using drives of arbitrary size, and doesn’t require any special drive formatting. Individual drives are formatted using the standard Linux ext4 file system, so I can always move a drive to another machine if the need arises. To provide backup/redundancy I use rsync to copy the data to a second server every few hours. This makes it relatively simple to recover data when a drive fails – I just replace the drive and then resync the data.

Hardware Setup

I have two SuperMicro servers, each with twelve 3.5-inch drive bays. Both servers are configured similarly and use NFS and Samba shares to provide data to other servers and devices on the network. Each server has 4 separate sets of drives:

  1. OS Drive – Operating system files
  2. Media Pool – Media files including Movies, TV Shows, and Music
  3. Backup Drive – Scheduled server backups are stored here
  4. Virtual Machine drives – SSDs where VMs run
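The shares themselves are only a few lines of standard NFS and Samba configuration. As a sketch (share names, paths, and the subnet are placeholders, written to a demo file so nothing on the system is touched):

```shell
# Hypothetical /etc/exports entries for the pooled media and backups:
cat > /tmp/exports.demo <<'EOF'
/mnt/pool    192.168.1.0/24(ro,no_subtree_check)
/mnt/backup  192.168.1.0/24(rw,no_subtree_check)
EOF
# A matching Samba share in smb.conf would look something like:
#   [media]
#   path = /mnt/pool
#   read only = yes
cat /tmp/exports.demo
```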

The image below illustrates my storage design. Note that both servers are essentially the same, with the primary differences being the size and number of drives used. For my MediaPool I always have two copies of the data.

When I need to add a new drive to the pool, I add the drive to the server, partition and format it, and add an entry to /etc/fstab. The new storage is available immediately, and I don’t even need to reboot. The same process is used to replace a failed drive, with the additional step of restoring data from the other copy.
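Sketched as commands, the add-a-drive procedure is short. The device name and label are placeholders; the destructive steps are shown commented out, and the fstab append targets a demo file here rather than the real /etc/fstab:

```shell
DEV=/dev/sdX                      # placeholder for the new drive
# parted -s "$DEV" mklabel gpt mkpart primary ext4 0% 100%   # partition
# mkfs.ext4 -L disk4 "${DEV}1"                               # format
# mkdir -p /mnt/disk4                                        # mount point
FSTAB=/tmp/fstab.demo             # real target would be /etc/fstab
echo 'LABEL=disk4 /mnt/disk4 ext4 defaults 0 2' >> "$FSTAB"
# mount /mnt/disk4                # the pool picks it up, no reboot needed
cat "$FSTAB"
```

Mounting by label rather than by device name means the entry keeps working even if the drive enumerates differently after a hardware change.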

Local server backups are stored on the backup drive. Scripts run daily to back up all server files and take snapshots of the virtual machines.
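A minimal sketch of what such a daily job can look like (the paths are stand-ins, and the VM snapshot step is omitted):

```shell
# Copy a source tree into a date-stamped folder on the backup drive.
SRC=/tmp/bk_demo/src              # stand-in for the real server files
DEST_ROOT=/tmp/bk_demo/backup     # stand-in for the backup drive
mkdir -p "$SRC" && echo "data" > "$SRC/app.conf"
STAMP=$(date +%F)                 # e.g. 2024-05-01
mkdir -p "$DEST_ROOT/$STAMP"
cp -a "$SRC/." "$DEST_ROOT/$STAMP/"
ls "$DEST_ROOT/$STAMP"
```

A real script would typically be driven by a cron entry and prune old date-stamped folders to keep the backup drive from filling up.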

Storage Network

The other part of my storage design is network related. Restoring terabytes of data over a gigabit network takes time. On my network, it takes approximately 2.5 hours to transfer a terabyte of data, so mass data transfers can easily saturate a 1 Gb/s network for many hours.
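That figure checks out with quick arithmetic: gigabit Ethernet sustains roughly 110 MB/s in practice, so a terabyte takes about two and a half hours:

```shell
TB=1000000000000                  # bytes in 1 TB
RATE=110000000                    # ~110 MB/s of real-world GbE throughput
SECS=$((TB / RATE))
echo "$((SECS / 3600)) h $(((SECS % 3600) / 60)) m"   # -> 2 h 31 m
```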

Since my servers have multiple Ethernet connections, I’ve configured a second network port to run on a separate subnet. This keeps traffic from mass data transfers off of my main network and gives me a full 1 Gb/s connection between servers, ensuring that these transfers won’t negatively impact the rest of my network.
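The second port only needs an address on its own subnet. As a sketch (the interface name and addresses are placeholders, not my actual values):

```shell
# One-off setup with iproute2 (a persistent version would go in netplan
# or /etc/network/interfaces instead):
#   ip addr add 10.0.10.2/24 dev eno2
#   ip link set eno2 up
#
# Bulk transfers then target the peer's storage-subnet address, keeping
# them off the main LAN:
#   rsync -aH --delete /mnt/pool/ 10.0.10.3:/mnt/pool/
```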


Storage for a large media library doesn’t have to be complex. I’ve tried to design a storage system that is flexible and easy to manage, and I think I’ve succeeded. I’ve had a couple of drive failures, and aside from the time it takes to restore the data, replacing a drive takes around 10-15 minutes.
