I was wrong! zswap IS better than zram

  1. for the suspend issues, have you tried adding a swap device with a low priority in addition to zram?
  2. have you tried overprovisioning zram? i usually set it between 150% and 200% - chromebooks and the steam deck, as well as bazzite and pop_os, are somewhere in this range
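
on point 2, here's a quick sanity check of what a PERCENT value actually allocates (zram-tools sizes the device as PERCENT/100 of MemTotal), plus a sketch of the fallback device from point 1 - the path, size, and priorities below are placeholders, not anything from the original setup:

```shell
# PERCENT=200 means the zram device advertises twice MemTotal -
# check the math against /proc/meminfo
percent=200
memtotal_kib=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
zram_kib=$(( memtotal_kib * percent / 100 ))
echo "zram device would be ~$(( zram_kib / 1024 / 1024 )) GiB"

# and for point 1: a disk swapfile at a lower priority than zram, so the
# kernel fills zram first and the disk only catches overflow
# (needs root, hence commented out; 8G and /swapfile are placeholders)
# fallocate -l 8G /swapfile && chmod 600 /swapfile && mkswap /swapfile
# swapon --priority 10 /swapfile    # keep this below zram's PRIORITY
```

the priority trick works because swapon fills higher-PRIO devices first and only spills to lower ones when they're full.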

speaking of bursty workloads: on an older minipc set up as a proxmox server - so also using zfs, and this was right after debian adopted tmpfs for /tmp - running a handful of vms, docker containers, and lxcs, ram was often showing 75% usage, so i decided to tune it. pretty sure i was using gary explains' config:

and afterwards i tweaked it to:

apt install -y zram-tools && \
sed -i -e 's/#PERCENT=50/PERCENT=200/' -e 's/#ALGO=lz4/ALGO=lz4/' \
    -e 's/#PRIORITY=100/PRIORITY=32000/' /etc/default/zramswap && \
systemctl restart zramswap && \
cat <<'EOF' > /etc/sysctl.d/99-vm-zram-parameters.conf
# note: sysctl.conf doesn't support trailing comments after a value
# (sysctl tries to parse them as part of the value), so they have
# to live on their own lines
# Pop!_OS guides: 200-400 frees caches fast for zram; 300 balances ZFS reads
vm.vfs_cache_pressure=300
# SteamOS/Bazzite: 150-180 treats zram as a "RAM extension" - snappy on low loads
vm.swappiness=200
# Gaming tunings: 1-5 prevents micro-lags from bursts; 2 suits light VMs
vm.dirty_background_ratio=2
# Balanced gamer mid-range - allows dirty buildup without app stalls
vm.dirty_ratio=40
# Disables boosting overhead (all zram sources agree)
vm.watermark_boost_factor=0
# Proactive headroom - gamer/Proxmox default for no surprises
vm.watermark_scale_factor=125
# Essential for zram: disables swap readahead, single-page reads
vm.page-cluster=0
EOF
sysctl --load=/etc/sysctl.d/99-vm-zram-parameters.conf && reboot
root@skylake2 ~# stress-ng --vm 2 --vm-bytes 8G -t 60s
stress-ng: info:  [57653] setting to a 1 min run per stressor
stress-ng: info:  [57653] dispatching hogs: 2 vm
stress-ng: info:  [57678] vm: using 4G per stressor instance (total 8G of 3.18G available memory)
stress-ng: info:  [57653] skipped: 0
stress-ng: info:  [57653] passed: 2: vm (2)
stress-ng: info:  [57653] failed: 0
stress-ng: info:  [57653] metrics untrustworthy: 0
stress-ng: info:  [57653] successful run completed in 1 min, 4.34 secs
root@skylake2 ~# swapon --show
NAME       TYPE       SIZE USED PRIO
/dev/zram0 partition 23.2G 3.7G  150
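
one caveat on reading that output: swapon's USED column is the uncompressed size of what went into zram; zramctl shows what those pages actually cost in ram. a sketch - only the ~3.7G DATA figure corresponds to the run above, the COMPR number in the math below is a made-up placeholder for illustration:

```shell
# zramctl prints DATA (uncompressed bytes stored) and COMPR (compressed
# bytes actually held in ram) per device - needs a zram device present
command -v zramctl >/dev/null && zramctl

# rough ratio math; 3788 MiB ~ the 3.7G above, 1229 MiB is a placeholder
data_mib=3788 compr_mib=1229
awk -v d="$data_mib" -v c="$compr_mib" 'BEGIN { printf "ratio ~%.1f:1\n", d/c }'
```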

this was before i upped percent to 200%, swappiness to 200, and priority to 32000.
with stress-ng --vm 4 --vm-bytes 12G -t 60s
i hovered around 6g in zram,
and my arc stayed pristine:

root@skylake2 ~# arc_summary | grep -E 'hit|total'
ARC total accesses:                                                14.7M
        Total hits:                                    99.0 %      14.6M
        Total I/O hits:                                 0.1 %       9.5k
        Demand data hits:                              99.3 %      12.4M
        Demand data I/O hits:                         < 0.1 %        940
        Demand metadata hits:                          99.4 %       2.2M
        Demand metadata I/O hits:                     < 0.1 %        894
        Prefetch data hits:                            21.6 %       9.0k
        Prefetch data I/O hits:                         0.4 %        177
        Prefetch metadata hits:                        60.5 %      19.4k
        Prefetch metadata I/O hits:                    23.4 %       7.5k
        Demand hits after predictive:                  37.7 %      27.9k
        Demand I/O hits after predictive:               1.7 %       1.3k
        Demand hits after prescient:                   75.0 %         69
        Demand I/O hits after prescient:               25.0 %         23
ARC states hits of all accesses:
        Stream hits:                                   52.9 %       6.2M
        zfs_read_history_hits                                          0

this proxmox node - a 6700 with 16g ddr4 - was running opnsense, my router at the time, with 0 noticeable impact DURING the stress test. honestly i was pretty afraid to even throw 12g extra at it because it was already reporting 12g used during normal operation. still torn on lz4 vs zstd
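
one way to settle lz4 vs zstd for a specific workload is to stand up a throwaway zram device with each algorithm and compare the COMPR column under the same load. a sketch, assuming root and the zram module loaded (the 1G size and priority are placeholders):

```shell
# zramctl --find grabs the next free /dev/zramN and prints its name
dev=$(zramctl --find --size 1G --algorithm zstd)   # swap in: --algorithm lz4
mkswap "$dev" && swapon --priority 50 "$dev"

# ...run the same stress-ng load against each algorithm, then compare
# DATA vs COMPR - zstd typically compresses tighter, lz4 is faster
zramctl "$dev"

# tear down before switching algorithms
swapoff "$dev" && zramctl --reset "$dev"
```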

i dunno, i'm not trying to claim zram is better - i came across this blog post while demoing a new cheap vps that won't allow any kernel config changes, so i'm looking into a swapfile. thanks for the writeup
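
fwiw, for that vps case a plain swapfile needs no kernel config at all, just root - though some container-based vps hosts block swapon entirely, so it's worth a test. the 2G size is a placeholder:

```shell
# fallocate can misbehave on some filesystems, hence the dd fallback
fallocate -l 2G /swapfile || dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# persist across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
swapon --show
```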
