Rule of thumb / TL;DR: If your system only uses swap occasionally and keeping swap demand within ~20–30% of your physical RAM as zram is enough, ZRAM is the simpler and more effective option. But if swap use regularly pushes far beyond that, is unpredictable, or if your system has fast storage (NVMe), Zswap is the better choice.
This article hopefully serves to balance the recommendation between ZRAM and Zswap, or at the very least to remove some of my previous bias.
The community discussion this was based on can be followed here:
As well as the related forum topics below the comments.
Quick reference for the settings and outcomes when using Zram and Zswap:
| Metric (System with 16 GB of RAM) | 1.5:1 | 2:1 | 3:1 |
|---|---|---|---|
| Likely algorithm(s) at this ratio | LZ4 | LZO | Zstd |
| ZRAM — 4 GB | | | |
| Max zram disksize (GB) | 4.00 | 4.00 | 4.00 |
| Max uncompressed stored in zram (GB) | 4.00 | 4.00 | 4.00 |
| Approx. RAM used by zram when full (GB) | 2.67 | 2.00 | 1.33 |
| Resident RAM left for everything else (GB) | 13.33 | 14.00 | 14.67 |
| Max uncompressed working set before any disk swap (GB) | 17.33 | 18.00 | 18.67 |
| ZSWAP — pool cap 20% of RAM = 3.20 GB | | | |
| zswap pool cap (GB) @ 20% | 3.20 | 3.20 | 3.20 |
| Max uncompressed cached in zswap (GB) | 4.80 | 6.40 | 9.60 |
| Approx. RAM used by zswap at cap (GB) | 3.20 | 3.20 | 3.20 |
| Resident RAM left for everything else at cap (GB) | 12.80 | 12.80 | 12.80 |
| Max uncompressed working set before any disk swap (GB) | 17.60 | 19.20 | 22.40 |
Notes
• ZRAM has a fixed uncompressed cap equal to its disksize. Actual RAM used by zram is allocated dynamically as pages are compressed and swapped out, not all at once when the device is initialized.
• ZSWAP also grows on demand up to its pool cap. When full, it starts evicting to the backing swap device.
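The capacity math behind the table is simple to reproduce. A minimal sketch (the `pool_capacity` helper is mine, not from the article): effective uncompressed capacity is pool size times compression ratio, and zram's RAM cost when full is disksize divided by the ratio.

```shell
#!/bin/sh
# Effective uncompressed data a compressed pool can hold:
#   capacity = pool_size_gb * compression_ratio
pool_capacity() {
  awk -v pool="$1" -v ratio="$2" 'BEGIN { printf "%.2f\n", pool * ratio }'
}

# zswap: 20% of 16 GB = 3.20 GB pool, zstd at ~3:1
pool_capacity 3.20 3                     # prints 9.60

# zram: RAM actually consumed by a full 4 GB disksize at 3:1
awk 'BEGIN { printf "%.2f\n", 4 / 3 }'   # prints 1.33
```

This is why the zswap column pulls ahead at higher ratios: its RAM cost is capped while its uncompressed capacity scales with the ratio, whereas zram's uncompressed capacity is pinned at its disksize.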
Nice to see this empirical comparison. I haven’t dug into swapping alternatives much because it’s been an extremely rare event that any of my systems swap, especially the most current ones. It may be because I seldom run enough software concurrently to exceed the actual amount of real memory present, so the most that happens is that context switching alters which processes are active, but all in memory. On my favorite system I do allocate an ordinary swap file; I’ve never seen it active; still, these statistics are useful and undoubtedly valid for any system that’s running a lot of software, whether a laptop, desktop, or server.
Thanks for the details and for your honest assessment in the comparison of swapping techniques and technologies. I’ll definitely keep this in mind, especially if any inefficient use of resources is detected.
- for the suspend issues, have you tried adding a swap device with a low priority in addition to zram?
- have you tried overprovisioning zram? I usually set between 150% and 200%; Chromebooks and the Steam Deck, as well as Bazzite and Pop!_OS, are somewhere in this range.
Speaking of bursty workloads: on an older mini PC set up as a Proxmox server (so also using ZFS), right after Debian adopted tmpfs for /tmp, running a handful of VMs, Docker containers, and LXCs, RAM was often showing 75% usage, so I decided to tune it. Pretty sure I started from Gary Explains’ config:
and afterwards I tweaked it to:
apt install -y zram-tools && \
sed -i -e 's/#PERCENT=50/PERCENT=200/' -e 's/#ALGO=lz4/ALGO=lz4/' \
-e 's/#PRIORITY=100/PRIORITY=32000/' /etc/default/zramswap && \
systemctl restart zramswap && \
cat <<'EOF' > /etc/sysctl.d/99-vm-zram-parameters.conf
# Pop!_OS guides: 200-400 frees caches fast for zram; 300 balances ZFS reads.
vm.vfs_cache_pressure=300
# SteamOS/Bazzite: 150-180 treats zram as a "RAM extension"; snappy on low loads.
vm.swappiness=200
# Gaming tunings: 1-5 prevents micro-lags from bursts; 2 suits light VMs.
vm.dirty_background_ratio=2
# Balanced gamer mid-range; allows dirty buildup without app stalls.
vm.dirty_ratio=40
# Disables overhead (all zram sources agree).
vm.watermark_boost_factor=0
# Proactive headroom; gamer/Proxmox default for no surprises.
vm.watermark_scale_factor=125
# Reads one page at a time (no swap readahead); essential for zram speed.
vm.page-cluster=0
EOF
sysctl --load=/etc/sysctl.d/99-vm-zram-parameters.conf && reboot
root@skylake2 ~# stress-ng --vm 2 --vm-bytes 8G -t 60s
stress-ng: info: [57653] setting to a 1 min run per stressor
stress-ng: info: [57653] dispatching hogs: 2 vm
stress-ng: info: [57678] vm: using 4G per stressor instance (total 8G of 3.18G available memory)
stress-ng: info: [57653] skipped: 0
stress-ng: info: [57653] passed: 2: vm (2)
stress-ng: info: [57653] failed: 0
stress-ng: info: [57653] metrics untrustworthy: 0
stress-ng: info: [57653] successful run completed in 1 min, 4.34 secs
root@skylake2 ~# swapon --show
NAME TYPE SIZE USED PRIO
/dev/zram0 partition 23.2G 3.7G 150
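Worth noting when reading that `swapon --show` output: the USED column counts uncompressed pages, so the actual RAM cost is lower. Assuming util-linux’s `zramctl` is available, it shows both sides:

```shell
# DATA  = uncompressed pages currently stored in zram
# COMPR = their compressed size
# TOTAL = actual memory consumed by zram, including metadata
zramctl --output NAME,DISKSIZE,DATA,COMPR,TOTAL /dev/zram0
```

So 3.7G "used" above would occupy roughly 1.2–2.5 GB of real RAM depending on the compression ratio achieved.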
this was before I upped PERCENT to 200, swappiness to 200, and priority to 32000
and with stress-ng --vm 4 --vm-bytes 12G -t 60s I hovered around 6 GB in zram, and my ARC stayed pristine:
root@skylake2 ~# arc_summary | grep -E 'hit|total'
ARC total accesses: 14.7M
Total hits: 99.0 % 14.6M
Total I/O hits: 0.1 % 9.5k
Demand data hits: 99.3 % 12.4M
Demand data I/O hits: < 0.1 % 940
Demand metadata hits: 99.4 % 2.2M
Demand metadata I/O hits: < 0.1 % 894
Prefetch data hits: 21.6 % 9.0k
Prefetch data I/O hits: 0.4 % 177
Prefetch metadata hits: 60.5 % 19.4k
Prefetch metadata I/O hits: 23.4 % 7.5k
Demand hits after predictive: 37.7 % 27.9k
Demand I/O hits after predictive: 1.7 % 1.3k
Demand hits after prescient: 75.0 % 69
Demand I/O hits after prescient: 25.0 % 23
ARC states hits of all accesses:
Stream hits: 52.9 % 6.2M
zfs_read_history_hits 0
This Proxmox node, a 6700 with 16 GB DDR4, was running OPNsense (my router) at the time, and there was zero noticeable impact DURING the stress test. Honestly, I was pretty afraid to even throw 12 GB extra at it because it was already reporting 12 GB in use during normal operation. Still torn on lz4 vs zstd.
I dunno, I’m not trying to claim zram is better. I came across this blog post while demoing a new cheap VPS that won’t allow any kernel configs, so I’m looking into a swapfile. Thanks for the writeup.
Welcome to the forums @wommy
What you added is a genuinely useful reference for anyone tuning zram more aggressively, especially in mixed workloads with ZFS, VMs, or bursty memory pressure.
To answer your questions directly:
- I did not test a low-priority disk swap alongside zram.
- I also did not experiment with heavy zram over-provisioning in the 150–200% range.
At the time, zswap ended up being the more straightforward option for my needs.
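For reference, the runtime side of that zswap setup is only a few sysfs writes. A minimal sketch, assuming a kernel built with zswap and the zstd compressor available; the 20% cap mirrors the pool cap used in the table above (for a persistent setup, the same values go on the kernel command line, e.g. `zswap.enabled=1`):

```shell
# Enable zswap at runtime with zstd and a 20% RAM pool cap (not persistent).
echo zstd > /sys/module/zswap/parameters/compressor
echo 20   > /sys/module/zswap/parameters/max_pool_percent
echo 1    > /sys/module/zswap/parameters/enabled

# Verify the current parameters:
grep -r . /sys/module/zswap/parameters/
```

Note that zswap still needs a conventional swap device or swapfile behind it to evict into, which is exactly the property the TL;DR trades on.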
That said, the configuration you shared here is precisely the kind of practical, real-world tuning that readers benefit from.
The ARC stats holding steady under stress are particularly interesting, and your notes around swappiness, page-cluster, and watermark tuning add important context that goes beyond most zram discussions.
Thanks again for contributing this.
Hey All,
I just saw this on cnx-software and I wonder if this will swing it the other way!
Linux 7 will write back zram pages in compressed form, which may give us the best of both. Worth thinking about? It doesn’t seem to be widely deployed yet, as it’s off by default…
- zram implements compressed data writeback. Previously, the kernel had to decompress pages before writing them to the physical device (uncompressed data writeback), unnecessarily wasting CPU cycles and battery; now page writeback can write zram-compressed data directly. See Linux commit d38fab605c66 for details.
cnx-software > linux-7-0-release-main-changes-arm-risc-v-and-mips-architectures
Welcome to the forums @andiohn and that is a really good find.
Compressed zram writeback is a nice improvement, but from what I can tell it mainly matters when zram is configured with a backing device, which is not the usual setup on most desktops.
The nice part is that the kernel no longer has to decompress pages before writing them out. It can write the compressed data directly instead, so there is less wasted CPU work and a bit less overhead overall. That’s pretty cool.
Definitely worth keeping an eye on though, especially for low-memory systems, embedded setups, or anyone doing more custom zram tuning. I am just not sure it swings the overall zswap vs zram discussion back the other way for typical desktop use. At least not yet.
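For context, the writeback path in question only exists when zram is configured with a backing device, which looks roughly like the sketch below. This assumes a kernel built with CONFIG_ZRAM_WRITEBACK, and the partition path is just an example; note that `backing_dev` must be set before `disksize` initializes the device.

```shell
# Sketch: zram with a backing device for writeback (paths are examples).
modprobe zram
echo /dev/nvme0n1p3 > /sys/block/zram0/backing_dev   # must come first
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0 && swapon -p 100 /dev/zram0

# Later: mark resident pages idle, then push idle pages to the backing device.
echo all  > /sys/block/zram0/idle
echo idle > /sys/block/zram0/writeback
```

With the new kernel work, that final writeback step can move the compressed form directly instead of decompressing first, which is where the CPU savings come from.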
Have you tested zram with a backing device yourself? Or just stumbled across the news? Thanks for sharing!
Ya, I just stumbled upon the improvement. I really liked your article about zswap vs zram, and I have done deep testing of both of them. I use Home Assistant as digital signage and mistakenly bought a Pi Zero 2 W with 1 GB, and it was a JOB to get that to fit. It still crashes when it runs out of memory, but it is at least working the majority of the day, down for at most 3 minutes, with a watchdog to reboot it. I figure that if I had had your article, I could have gotten it to stay up perhaps a couple more percent, but it’s working well enough not to be a problem. Next time it should be better.
Basically I used a Debian 12 minimal kiosk setup booting directly into Chromium, and I even disabled cupsd to get more available RAM. I think I’m actually running DietPi on it, but it’s been working without maintenance for 2+ years, so I can’t actually recall. I even compressed the images a bit to try to help.
Anyways, thanks for your article. I LOVE squeezing things into small spaces, so zswap and zram are a fun challenge.
It’s amazing sometimes how we learn some of the coolest stuff.
Smart. Debian or Debian-based are some of the best minimal distros you can find.
Maybe some of these might interest you for future projects:
Something like:
Look for (links below are direct downloads):
- alpine-rpi-3.23.4-armhf.img.gz 67M
- alpine-rpi-3.23.4-aarch64.img.gz 87M
~ 150 MB uncompressed.
Also checkout:
Guys like @hydn really know their hardware!
Back when I was at Digital I was familiar with many different lines of hardware, and I did learn a lot about hardware specs in order to meet the business requirements of our customers.
I was always a software engineer, but I supported a lot of sales and marketing activities. In order for those guys to market and sell their stuff, I needed to know what their customers wanted to DO. Then I made sure that the combinations of hardware and software they wanted to sell would actually do the right job: the team I worked on would “characterize” the components to make sure that the combinations being marketed would actually do the jobs they were intended to do.
We’d do a lot of “estimating”, but the estimations were based on our real tests, and/or we’d get actual engineering information from the hardware and software development teams, so we’d know what was possible. We’d try to get as much personal experience with the systems as possible so that our recommendations would serve the needs and requirements. That was good work!