Linux vs FreeBSD Disk I/O: Why Is FreeBSD Faster?

I manage Linux for a living, I do Linux side jobs, and maintain a home lab to test the sorts of things I might want to propose at work, or do for clients.

Recently, I’ve begun doing testing on FreeBSD in relation to the Mailguard anti-spam system I’ve been maintaining.

Linux and FreeBSD are comparable in network performance, both very good, but I was blown away at what a beast FreeBSD is at disk I/O. Every test I’ve run has FreeBSD with 50% higher throughput no matter what sysctl parameters or scheduler I use. I’ve tried all the tuned profiles, and there are small differences, but nowhere near enough to catch up to FreeBSD. On the other hand, Linux seems to have better latency.

Any hints on maximizing Linux disk throughput?

6 Likes

Great topic! Hm. It’s likely because FreeBSD often runs with a leaner default stack, so I/O passes through fewer layers.

Linux, on the other hand, offers more flexibility and features, which adds overhead: the I/O scheduler, blk-mq, cgroups, device-mapper layers, LVM, md, and filesystem journaling all sit in the path, so you can end up with extra queues and locking a lot more easily.

It’s still very fast! But yes, Linux can be slower than FreeBSD out‑of‑the‑box in some storage benchmarks and configurations.

As far as closing the gap, it’s a bit outdated now and only loosely related, but some of the tips may help:

In addition, I would say make sure to set the CPU governor to a max performance mode. If the CPU is sitting at 800 MHz because the governor downclocked it, storage I/O operations may be slower. It really depends on the exact hardware, but low‑frequency/powersave modes can absolutely hurt I/O latency and throughput compared to FreeBSD.
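On most Linux boxes you can check and pin the governor from sysfs, or with cpupower if it’s installed. A rough sketch (paths and available governors depend on the cpufreq driver):

```bash
# Show the current governor on every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Pin everything to "performance" for the duration of the benchmark
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance | sudo tee "$g" > /dev/null
done

# Equivalent, if the cpupower utility is installed
sudo cpupower frequency-set -g performance
```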

FreeBSD will tend to sit at or near a high non‑turbo frequency unless you explicitly enable powerd (or powerdxx) or tune power settings.

For NVMe SSDs, the none I/O scheduler is usually the best choice, and it is the default on most distributions.
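You can confirm what’s actually active per device; a quick sketch (nvme0n1 is just a placeholder device name):

```bash
# The scheduler shown in [brackets] is the active one for each device
grep . /sys/block/*/queue/scheduler

# Switch a specific NVMe drive to "none" for the current boot
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler
```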

Also check whether your tests are passing through dm-crypt or any extra layer that might be running single-threaded. FreeBSD handles a lot of this work in parallel, so Linux needs a bit of manual tuning to keep up.
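A quick way to see what’s stacked between the filesystem and the disk, using stock tools:

```bash
# Show how block devices are layered (LVM, md, and dm-crypt show up as extra rows)
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT

# List any device-mapper targets (crypt, linear, etc.) in tree form
sudo dmsetup ls --tree
```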

You could say FreeBSD out of the box is like a sports car already in sport mode. It gives you raw power with nothing extra stepping in.

Linux out of the box is the same sports car, but it starts in comfort mode with more systems active. More flexibility, options/features, but you also feel a bit less of that straight power until you tune it.

6 Likes

:nerd_face: :backhand_index_pointing_up: actually
FreeBSD would eat Linux alive in a performance comparison,
but Linux has a bigger community and is more mainstream.
I won’t say it’s the ULE scheduler or ZFS; it’s the kernel optimization overall.

2 Likes

It doesn’t appear to be ZFS, as FreeBSD and Linux now use the same OpenZFS codebase.

It could be the scheduler for sure, and no doubt kernel optimization plays a part.

But in my testing I don’t see any real difference between Linux and FreeBSD in terms of network performance.

So it has to be something to do specifically with I/O scheduling/algorithms.

4 Likes

Now I understand why TrueNAS was built with FreeBSD. I wonder why they chose to move over to Debian? Thank you all for this nice tidbit.

3 Likes

One distro that keeps coming up in conversations like this is CachyOS. It kind of makes sense why it has blown up over the last year: their whole goal is squeezing out more speed everywhere they can, and that includes disk I/O.

They ship with the right scheduler defaults for NVMe, they have IO scheduler rules that automatically pick the best option for each device type, and they even have their own scheduler called ADIOS that’s tuned for modern multi-queue setups. So they’re clearly thinking about performance at the block layer, not just desktop responsiveness.

I haven’t tested CachyOS for raw disk IO myself yet. I’ve only clicked around the desktop to get a feel for it.

It does look like a solid pick for anyone who’s trying to get better throughput or wants a distro that focuses heavily on tuning and tweaking for faster IO.

2 Likes

To me (and I could be waaaay off), it’s because Unix handles things better than Linux. I have both FreeBSD & Linux on virtual machines; let me know how to test disk I/O between the two and I’ll go from there.

Depends on what you mean by “Unix”; Linux blew the doors off SCO and HP-UX when I compared them back in the day.

At any rate, I’ve been doing disk testing with dbench. I run a series of tests starting with 1 client, then 2, then 4… and so on, and if there’s enough disk space, all the way to 4096. On small VMs, there’s often too little disk space to do more than 512 clients.
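In shell terms it’s basically a loop like this (the mount point and the 60-second run time below are just placeholders):

```bash
# Double the client count each pass and keep dbench's throughput summary lines
for clients in 1 2 4 8 16 32 64 128 256 512 1024 2048 4096; do
    dbench -t 60 -D /mnt/testfs "$clients" | tail -n 3
done
```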

1 Like

So I’m referring to the original 3 Unixes: System V (SCO), BSD (Berkeley Software Distribution), and the AT&T release (now Oracle). All 3 had to be released as freeware due to copyright laws.
Oracle has found a way around the copyright laws, but that’s neither here nor there. Have you tested with KVM/QEMU or VirtualBox? That could make a difference, just my 2 cents.

1 Like

I’ve tested VMs on KVM/Qemu and on FreeBSD’s bhyve hypervisor.

1 Like

Here’s a test I did on 2 physical machines, one running MX 25 (based on Debian 13), the other running FreeBSD 15.

The general shape of the curves matches what I’ve been seeing.

3 Likes

The trend I see is that Linux peaks at a lower number of clients and then slows down, while FreeBSD continues at a pretty even rate as the number of clients increases.
But the absolute values are what I’m curious about at this point. The fastest peak results I’ve seen are from a Linux Proxmox (Debian 13) host (attached).

So, since I don’t have 2 identical machines, my plan is to build a dual-boot machine and add a common ZFS disk that can be accessed by either OS.

We’ll run the disk benchmarks on each OS, and then I’ll have more reliable data on the absolute values, and an apples-to-apples comparison on the exact same hardware.
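For anyone who wants to try the same thing, the rough recipe would be to create the pool with a compatibility setting both OpenZFS ports understand, then export/import it when switching OSes. A sketch only: the pool name, device, and exact compatibility file below are placeholders, so check /usr/share/zfs/compatibility.d for what your versions actually ship.

```bash
# Create the shared pool with a restricted feature set (OpenZFS 2.1+)
zpool create -o compatibility=openzfs-2.1-freebsd benchpool /dev/sdb

# Before rebooting into the other OS
zpool export benchpool

# After booting the other OS
zpool import benchpool
```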

4 Likes

Just for my information, are you running ZFS on Debian currently? :nerd_face:

I’ve only begun running ZFS on test instances, after learning that it’s now using the same codebase as FreeBSD. I may start using it more widely going forward.

1 Like

Hmm, so Debian with the standard filesystem (ext4) is slower than BSD with ZFS. Maybe ZFS is the reason for the different speeds. :face_with_monocle:

1 Like

I intend to clarify this very issue with my next test setup.

4 Likes

Thanks for sharing these test results. Looking forward to seeing your next findings.

1 Like

It’ll take a bit. I’ve got to remove a machine from a 4-node Proxmox cluster and use it for the FreeBSD & Linux comparison. But first I need to upgrade the 2 TB NVMe drives to 4 TB, to compensate for the loss of the one Ceph node.

Once that’s done, though, I can compare Linux and FreeBSD on the same machine and the same filesystem.

3 Likes

While still waiting on hardware, I tried a quick and dirty test, using an external SSD. I plugged the drive into my FreeBSD 15 server and created a ZFS pool on it. Then I ran dbench tests, exported the drive, imported it on a Proxmox 9 server, and ran the same dbench tests.
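Concretely, the workflow was along these lines (device names, pool name, and the client count shown are illustrative):

```bash
# FreeBSD 15: the external SSD shows up as e.g. da1
zpool create extbench /dev/da1
dbench -t 60 -D /extbench 1024
zpool export extbench

# Proxmox 9: same drive, now seen as e.g. /dev/sdb
zpool import extbench
dbench -t 60 -D /extbench 1024
zpool export extbench
```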

Linux peaks at 1024 clients; FreeBSD peaks at 8192. FreeBSD scales better, at least with stock settings. The drive and filesystem are identical, so it comes down to the kernel and the I/O scheduler.

Any tuning hints?

3 Likes

A bit outside my area of experience. Hmm, maybe try increasing zfs_dirty_data_max and zfs_dirty_data_max_max to allow deeper write queues.
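On Linux those are ZFS kernel module parameters. A sketch of how you might poke them (the 4 GiB / 8 GiB values are only examples, not recommendations):

```bash
# Current values, in bytes
cat /sys/module/zfs/parameters/zfs_dirty_data_max
cat /sys/module/zfs/parameters/zfs_dirty_data_max_max

# zfs_dirty_data_max can be raised at runtime
echo $((4 * 1024 * 1024 * 1024)) | sudo tee /sys/module/zfs/parameters/zfs_dirty_data_max

# zfs_dirty_data_max_max is best set as a module option so it applies at load time
echo "options zfs zfs_dirty_data_max_max=8589934592" | sudo tee /etc/modprobe.d/zfs.conf
```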

I’m on ext4, where the related settings are at their defaults:

Found via some good reading here:

A great discussion also here:

3 Likes