I was wrong! zswap IS better than zram

Read the full article: I was wrong! zswap IS better than zram

Rule of thumb / TL;DR: If your system only uses swap occasionally and dedicating ~20–30% of your physical RAM to zram covers that demand, ZRAM is the simpler and more effective option. But if swap use regularly pushes far beyond that, is unpredictable, or your system has fast storage (NVMe), Zswap is the… continue reading.
1 Like

This article hopefully serves to balance the recommendation between ZRAM and Zswap, or at the very least removes some of my previous bias. :grimacing:

The community discussion this was based on can be followed here:

You can also follow the related forum topics linked below the comments.


Quick reference for the settings and outcomes when using Zram and Zswap:

Metric (system with 16 GB of RAM)                        1.5:1    2:1      3:1
Likely algorithm(s) at this ratio                        LZ4      LZO      Zstd

ZRAM — 4 GB
Max zram disksize (GB)                                   4.00     4.00     4.00
Max uncompressed stored in zram (GB)                     4.00     4.00     4.00
Approx. RAM used by zram when full (GB)                  2.67     2.00     1.33
Resident RAM left for everything else (GB)               13.33    14.00    14.67
Max uncompressed working set before any disk swap (GB)   17.33    18.00    18.67

ZSWAP — pool cap 20% of RAM = 3.20 GB
zswap pool cap (GB) @ 20%                                3.20     3.20     3.20
Max uncompressed cached in zswap (GB)                    4.80     6.40     9.60
Approx. RAM used by zswap at cap (GB)                    3.20     3.20     3.20
Resident RAM left for everything else at cap (GB)        12.80    12.80    12.80
Max uncompressed working set before any disk swap (GB)   17.60    19.20    22.40

Notes
• ZRAM has a fixed uncompressed cap equal to its disksize. Actual RAM used by zram is allocated dynamically as pages are compressed and swapped out, not all at once when the device is initialized.
• ZSWAP also grows on demand up to its pool cap. When full, it starts evicting to the backing swap device.
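
If you want to sanity-check these figures on your own system, the relevant knobs can be read directly from sysfs and /proc. A quick sketch only: the zram device name may differ on your machine, and the Zswap/Zswapped lines in /proc/meminfo appear only on reasonably recent kernels.

zramctl /dev/zram0                                  # disksize, algorithm, RAM actually used
cat /sys/module/zswap/parameters/enabled            # Y/N: is zswap active at all
cat /sys/module/zswap/parameters/max_pool_percent   # pool cap as a percentage of RAM (20 by default)
cat /sys/module/zswap/parameters/compressor         # active compression algorithm
grep -E '^(Zswap|Zswapped)' /proc/meminfo           # pool RAM used vs. uncompressed data held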

1 Like

Nice to see this empirical comparison. I haven’t dug into swapping alternatives much because it’s been an extremely rare event that any of my systems swap, especially the most current ones. It may be because I seldom run enough software concurrently to exceed the actual amount of real memory present, so the most that happens is that context switching alters which processes are active, but all in memory. On my favorite system i do allocate an ordinary swap file; I’ve never seen it active; still, these statistics are useful and undoubtedly valid for any system that’s running a lot of software, whether a laptop, desktop, or server.

Thanks for the details and for your honest assessment in the comparison of swapping techniques and technologies. I’ll definitely keep this in mind, especially if any inefficient use of resources is detected.

2 Likes
  1. For the suspend issues, have you tried adding a swap device with a low priority in addition to zram?
  2. Have you tried overprovisioning zram? I usually set it between 150–200%; Chromebooks and the Steam Deck, as well as Bazzite and Pop!_OS, are somewhere in this range. (A rough sketch of both ideas follows below.)
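
To make the question concrete, something along these lines is what I mean; the device name, ratio, and priorities are placeholders, not a tested recipe:

RAM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)

modprobe zram                                     # creates /dev/zram0 by default
echo zstd > /sys/block/zram0/comp_algorithm       # or lz4 for lower CPU cost
echo $(( RAM_KB * 1024 * 3 / 2 )) > /sys/block/zram0/disksize   # ~150% of RAM
mkswap /dev/zram0
swapon -p 100 /dev/zram0                          # high priority: used first

swapon -p 10 /dev/sdXn                            # low priority: disk swap only catches overflow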

Speaking of bursty workloads: on an older mini PC set up as a Proxmox server (so also using ZFS, and right after Debian adopted tmpfs for /tmp), running a handful of VMs, Docker containers, and LXCs, RAM was often showing 75% usage, so I decided to tune it. Pretty sure I started from Gary Explains' config:

and afterwards I tweaked it to:

apt install -y zram-tools && \
sed -i -e 's/#PERCENT=50/PERCENT=200/' -e 's/#ALGO=lz4/ALGO=lz4/' \
    -e 's/#PRIORITY=100/PRIORITY=32000/' /etc/default/zramswap && \
systemctl restart zramswap && \
cat <<'EOF' > /etc/sysctl.d/99-vm-zram-parameters.conf
vm.vfs_cache_pressure=300      # Pop!_OS guides: 200-400 frees caches fast for zram, but 300 balances ZFS reads.
vm.swappiness=200              # SteamOS/Bazzite: 150-180 treats zram as "RAM extension"—snappy on low loads.
vm.dirty_background_ratio=2    # Gaming tunings: 1-5 prevents micro-lags from bursts; 2 suits light VMs.
vm.dirty_ratio=40              # Balanced gamer mid-range—allows dirty buildup without app stalls.
vm.watermark_boost_factor=0    # Disables overhead (all zram sources agree).
vm.watermark_scale_factor=125  # Proactive headroom—gamer/Proxmox default for no surprises.
vm.page-cluster=0              # Essential for zram speed (zero latency).
EOF
sysctl --load=/etc/sysctl.d/99-vm-zram-parameters.conf && reboot
root@skylake2 ~# stress-ng --vm 2 --vm-bytes 8G -t 60s
stress-ng: info:  [57653] setting to a 1 min run per stressor
stress-ng: info:  [57653] dispatching hogs: 2 vm
stress-ng: info:  [57678] vm: using 4G per stressor instance (total 8G of 3.18G available memory)
stress-ng: info:  [57653] skipped: 0
stress-ng: info:  [57653] passed: 2: vm (2)
stress-ng: info:  [57653] failed: 0
stress-ng: info:  [57653] metrics untrustworthy: 0
stress-ng: info:  [57653] successful run completed in 1 min, 4.34 secs
root@skylake2 ~# swapon --show
NAME       TYPE       SIZE USED PRIO
/dev/zram0 partition 23.2G 3.7G  150

This was before I upped PERCENT to 200%, swappiness to 200, and priority to 32000. With stress-ng --vm 4 --vm-bytes 12G -t 60s I hovered around 6 GB in zram, and my ARC stayed pristine:

root@skylake2 ~# arc_summary | grep -E 'hit|total'
ARC total accesses:                                                14.7M
        Total hits:                                    99.0 %      14.6M
        Total I/O hits:                                 0.1 %       9.5k
        Demand data hits:                              99.3 %      12.4M
        Demand data I/O hits:                         < 0.1 %        940
        Demand metadata hits:                          99.4 %       2.2M
        Demand metadata I/O hits:                     < 0.1 %        894
        Prefetch data hits:                            21.6 %       9.0k
        Prefetch data I/O hits:                         0.4 %        177
        Prefetch metadata hits:                        60.5 %      19.4k
        Prefetch metadata I/O hits:                    23.4 %       7.5k
        Demand hits after predictive:                  37.7 %      27.9k
        Demand I/O hits after predictive:               1.7 %       1.3k
        Demand hits after prescient:                   75.0 %         69
        Demand I/O hits after prescient:               25.0 %         23
ARC states hits of all accesses:
        Stream hits:                                   52.9 %       6.2M
        zfs_read_history_hits                                          0

This Proxmox node (a 6700 with 16 GB of DDR4) was also running OPNsense, my router, at the time, with zero noticeable impact even during the stress test. Honestly, I was pretty afraid to throw an extra 12 GB at it because it was already reporting 12 GB in use during normal operation. Still torn on lz4 vs zstd.

I dunno, I'm not trying to claim zram is better. I came across this blog post while demoing a new cheap VPS that won't allow any kernel configs, so I'm looking into a plain swap file instead. Thanks for the write-up.
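
For reference, about the simplest version of that looks like this; the size and path are just examples, and dd is the fallback if the filesystem doesn't support fallocate for swap files:

fallocate -l 2G /swapfile || dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap defaults 0 0' >> /etc/fstab   # persist across reboots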

1 Like

Welcome to the forums, @wommy

What you added is a genuinely useful reference for anyone tuning zram more aggressively, especially in mixed workloads with ZFS, VMs, or bursty memory pressure.

To answer your questions directly:

  1. I did not test a low-priority disk swap alongside zram.
  2. I also did not experiment with heavy zram over-provisioning in the 150–200% range.

At the time, zswap ended up being the more straightforward option for my needs.

That said, the configuration you shared here is precisely the kind of practical, real-world tuning that readers benefit from.

The ARC stats holding steady under stress are particularly interesting, and your notes around swappiness, page-cluster, and watermark tuning add important context that goes beyond most zram discussions.

Thanks again for contributing this.