
Performance Testing: NVMe Hardware RAID vs. Software RAID

By Josh Moss
Solutions Architect

October 17, 2025

When it comes to maximizing performance and reliability in your data center, choosing the right RAID solution for your Dell PowerEdge servers is critical. NVMe technology has redefined speed and efficiency, but not all RAID configurations unlock the full potential equally. Whether you’re seeking to optimize workloads, drive faster results, or ensure robust data protection, understanding the real-world differences between hardware RAID and software RAID empowers educated decision making. In this blog post, we dive into hands-on testing, allowing you to see exactly how each approach compares and enabling you to maximize your investment in Dell PowerEdge NVMe storage.

Testing Methodology

Throughout the performance testing of NVMe hardware RAID versus NVMe software RAID, great care was taken to ensure a 1:1 comparison. Each server was configured with identical hardware, apart from the hardware RAID controller in one server. We configured a RAID 10 across the eight NVMe drives in each server and ran I/O against the raw device. Running against raw devices removes the filesystem, and any overhead it introduces, so we could squeeze every I/O and GB from the hardware; after all, the goal was to compare raw performance between the hardware and software RAID configurations. It is important to note that performance may vary once a filesystem is introduced into the mix.
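For reference, a software RAID 10 spanning eight NVMe drives can be assembled with mdadm along these lines. This is a minimal sketch rather than the exact commands from our testing: the device names are illustrative, and the 256 KiB chunk size is an assumption chosen to mirror the PERC default stripe size.

```python
import subprocess

# Illustrative device names; substitute the NVMe namespaces present on your system.
nvme_devices = [f"/dev/nvme{i}n1" for i in range(8)]

# Create an MD RAID 10 array across the eight drives.
# --chunk is the stripe element size in KiB; 256 is assumed here to mirror the PERC default.
subprocess.run(
    ["mdadm", "--create", "/dev/md0",
     "--level=10", "--raid-devices=8", "--chunk=256",
     *nvme_devices],
    check=True,
)
```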

Here are the hardware attributes of the servers used in this test:

[Table: Dell PowerEdge R7515 server hardware specifications]

In testing each of the servers, we installed Ubuntu Server 24.04 and leveraged FIO to generate I/O. We used an array of FIO parameters to mimic some of the most common I/O profiles that we see in customer environments.

  • 4k random 70/30 r/w ratio
  • 8k random 70/30 r/w ratio
  • 64k random 70/30 r/w ratio
  • 1M sequential read
  • 1M sequential write

Each FIO execution was also configured with direct=1 to bypass the operating system's page cache and issue I/O directly to the device.
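As an illustration, the 4k random 70/30 profile can be reproduced with an invocation along these lines. This is a sketch, not our exact job file: the target device, queue depth, job count, and runtime shown here are assumptions.

```python
import subprocess

# Illustrative fio run for the 4k random 70/30 profile against the raw RAID device.
# The target device, queue depth, job count, and runtime are assumed values.
subprocess.run(
    ["fio",
     "--name=4k-randrw-7030",
     "--filename=/dev/md0",        # raw block device, no filesystem
     "--rw=randrw", "--rwmixread=70",
     "--bs=4k",
     "--ioengine=libaio", "--iodepth=32", "--numjobs=8",
     "--direct=1",                 # bypass the page cache
     "--time_based", "--runtime=300",
     "--group_reporting"],
    check=True,
)
```

The other profiles only change the block size and access pattern, for example --rw=read or --rw=write with --bs=1M for the sequential tests.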

One thing you will see in the data, and in our analysis of it, is that out-of-the-box configurations may not always be the best for your workloads! It's important to tweak, tune, and tailor hardware configurations to the workload you will run on them. In the data tables, you will see a third column in each test. This third run took place on the hardware RAID server after changing the PERC (PowerEdge RAID Controller) caching policy from its defaults of read ahead and write back (both of which leverage the controller cache) to no read ahead and write through (bypassing the controller cache), because, after all, NVMe drives already provide exceptional performance and low latency!
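On the hardware RAID server, that cache policy change can typically be made from the OS with Dell's perccli utility or through iDRAC. The sketch below is a hedged example only: the controller index, virtual disk number, binary name, and property syntax are assumptions to verify against your own tooling, not the exact commands from our testing.

```python
import subprocess

# Hedged sketch: switch the PERC virtual disk from write back / read ahead to
# write through / no read ahead. The controller index (/c0), virtual disk (/v0),
# and the perccli64 binary name are assumptions; verify against your tooling.
for args in (["/c0/v0", "set", "wrcache=wt"],
             ["/c0/v0", "set", "rdcache=nora"]):
    subprocess.run(["perccli64", *args], check=True)
```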

FIO Testing and Observation

I/O profiles vary greatly for every workload. On one end of the spectrum, you have small block, random I/O, which is typical of databases. On the other end, you have large block, sequential I/O, which is typical of backups or media streaming. Each results in varying amounts of IOPS and throughput. For starters, we began by testing a 4k block size with random I/O at a 70/30 read-to-write ratio.

4k randrw 70/30 Comparison

  Metric          MDRAID    HWRAID    HWRAID (modified cache policies)
  Read IOPS       398.0K    88.4K     303.0K
  Write IOPS      171.0K    37.9K     130.0K
  Total IOPS      569.0K    126.3K    433.0K
  Read (GB/s)     1.6       0.36      1.2
  Write (GB/s)    0.69      0.15      0.53

NVMe software RAID offers a substantial performance advantage over out-of-the-box hardware RAID configurations, with IOPS and throughput exceeding hardware by more than four times. However, after modifying the PERC controller cache policy, you can see the gap narrows, with the NVMe software RAID still outperforming the hardware RAID.

8k randrw 70/30 Comparison

  Metric          MDRAID    HWRAID    HWRAID (modified cache policies)
  Read IOPS       347.0K    89.1K     302.0K
  Write IOPS      149.0K    38.2K     129.0K
  Total IOPS      496.0K    127.3K    431.0K
  Read (GB/s)     2.8       0.725     2.4
  Write (GB/s)    1.2       0.31      1.06

Next, we modified the block size to 8k while leaving the read-to-write ratio at 70/30. One thing you'll notice in the transition to 8k, here and in the later tests, is that as we move to larger block sizes, IOPS (I/O operations per second) trend downward. This occurs because each I/O operation takes longer to complete, reducing the total number of operations per second. At the same time, throughput (GB/s) rises, because each operation moves more data.
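The relationship is simple arithmetic: throughput is roughly IOPS multiplied by block size. A quick sanity check against the MDRAID read numbers in the tables above:

```python
# Throughput is roughly IOPS multiplied by block size.
def gb_per_s(iops: float, block_bytes: int) -> float:
    return iops * block_bytes / 1e9

print(gb_per_s(398_000, 4096))   # 4k MDRAID reads: ~1.63 GB/s (table shows 1.6)
print(gb_per_s(347_000, 8192))   # 8k MDRAID reads: ~2.84 GB/s (table shows 2.8)
```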

In the 8k random 70/30 test, you can see the software RAID configuration still performs significantly better than the base hardware RAID configuration. But, much like the 4k test, modifying the controller cache policy improved hardware RAID performance tremendously. Comparing the cache-enabled (default) hardware RAID configuration across the 4k and 8k runs, read and write IOPS remain nearly identical, likely because the controller cache normalizes the rate at which data is ingested.

64k randrw 70/30 Comparison

  Metric          MDRAID    HWRAID    HWRAID (modified cache policies)
  Read IOPS       92.6K     80.4K     96.2K
  Write IOPS      39.7K     34.5K     41.3K
  Total IOPS      132.3K    114.9K    137.5K
  Read (GB/s)     6.2       5.2       6.3
  Write (GB/s)    2.6       2.2       2.7

Next up, we tested a 64k block size at a 70/30 read-to-write ratio. This I/O profile is consistent with virtualized or HCI (hyper-converged infrastructure) environments, Microsoft SQL Server, or general file servers. As we have seen in the trends, software RAID edges out our base hardware RAID test. However, after modifying the controller cache policy on the hardware RAID server, we see something interesting: the hardware RAID configuration actually outperforms the software RAID configuration with this particular I/O profile! Since the controller cache has been disabled, I suspect the controller is still aligning the I/O requests and striping them more efficiently than the software RAID configuration. This raises an important point: stripe size. When creating a PERC virtual disk, you can set the stripe size, which defaults to 256K but can be adjusted up or down. Ideally, the stripe size should align as closely as possible with the workload's block size.
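To see why alignment matters, here is a small illustrative calculation of how many stripe elements (and therefore drives) a single I/O touches. The helper is hypothetical, not part of our test harness, and it assumes the I/O begins on a stripe-element boundary:

```python
import math

# How many stripe elements (and therefore drives) a single aligned I/O touches.
def elements_touched(io_bytes: int, stripe_element_bytes: int) -> int:
    return math.ceil(io_bytes / stripe_element_bytes)

print(elements_touched(64 * 1024, 256 * 1024))    # 64k I/O, 256K stripe -> 1 drive
print(elements_touched(64 * 1024, 32 * 1024))     # 64k I/O, 32K stripe  -> 2 drives
print(elements_touched(1024 * 1024, 256 * 1024))  # 1M I/O, 256K stripe  -> 4 drives
```

A stripe size at or above the workload's block size keeps each I/O on a single drive, while an undersized stripe splits individual I/Os across multiple drives.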


1M Sequential Read and Write Comparison

  Metric                      MDRAID    HWRAID    HWRAID (modified cache policies)
  Sequential Read IOPS        44.9K     14.4K     14.4K
  Sequential Write IOPS       6.6K      9.06K     6.1K
  Sequential Read (GB/s)      47.04     15.1      15.1
  Sequential Write (GB/s)     6.9       9.5       6.4

Finally, we tested large block, sequential I/O to see how much throughput we could push through each RAID 10. In the 1M sequential read test, the software RAID consistently outperforms both hardware RAID configurations, regardless of their cache settings. What this tells me is that the I/O path through the PERC H755n is causing a bottleneck. Next, when we look at the 1M sequential write test, we see something interesting: the hardware RAID configuration with controller caching enabled outperforms the software RAID. What we're seeing here is that the controller cache really makes a difference. Although the NVMe drives are inherently high-performing, the controller enhances efficiency by rapidly ingesting data, sending acknowledgements to the host, flushing the cache to disk, and repeating the cycle. The cache's contribution becomes even more apparent when it is disabled: with write through set, the hardware RAID's sequential write performance falls back below the software RAID's.

Closing Thoughts

When choosing between PERC-managed hardware RAID and software RAID, it's important to balance performance outcomes with management complexity and compatibility. PERC hardware RAID offers a consolidated management experience through the Dell iDRAC and the PERC BIOS utilities, which can simplify provisioning and monitoring, especially in environments standardized on Dell PowerEdge infrastructure. This approach also centralizes firmware and configuration updates, potentially reducing administrative overhead for teams familiar with Dell's ecosystem. However, software RAID can provide greater performance in certain scenarios for customers who are looking to squeeze every IOP and GB from their storage.

In terms of operating system compatibility, hardware RAID solutions like PERC are supported by all major OSes. Software RAID, by comparison, depends on the operating system in use. Some operating systems do not support software RAID at all; VMware, for example, supports it only through vSAN. All major Linux distributions support MD RAID, and Windows Server supports Storage Spaces as well as the PERC S160 software RAID controller.

Perhaps most importantly, the hardware solution you choose will need to align with your workload requirements. We recommend utilizing a Dell tool called Live Optics. Live Optics is a performance-gathering tool that examines your environment's available resources and reports how much is being consumed. When it comes to storage specifically, it can provide some really detailed and useful statistics:

  • Read and write I/O sizes
  • Read and write IOPS
  • Average storage latency
  • Disk throughput
  • Average daily writes in TBs or GBs

All of this information about your environment helps xByte recommend storage solutions that fit your workloads! If you are considering a storage solution or are looking to get your Sysadmin Report card, please reach out to a member of the xByte team!