ZIP vs 7Z vs ZSTD: The definitive comparison with real-world data

Last update: 25/09/2025
Author: Isaac
  • ZIP, 7Z (LZMA2) and ZSTD perform differently depending on the objective: compatibility, ratio or decompression speed.
  • The data shows ZSTD as a leader in decompression and very competitive in ratio at medium levels.
  • For extreme archival ratios, use zpaq; for desktop use, 7Z; for universal compatibility, ZIP or ZIP+ZSTD (method 93).


Choosing between ZIP, 7Z and ZSTD seems like a minor issue until you need to deploy applications, ship backups daily, or cut storage and bandwidth costs (see how to compress and decompress files). In practice, the decision changes delivery times, CPU/RAM usage, and even compatibility with third-party tools and systems.

After reviewing multiple benchmarks with real data and very different contexts (a 5.4 GB .NET publish output, a 1 GB Wikipedia text corpus, container images and a 28.7 GB binary), the complete picture shows when it makes sense to prioritize ratio, when decompression speed matters most, and what the compatibility implications are (classic ZIP, ZIP with ZSTD, 7Z with LZMA2 or with alternative codecs, etc.).

What we compare and why it matters

We have analyzed the most used families: ZIP (Deflate), 7Z (LZMA/LZMA2) and ZSTD. Brotli, XZ, bzip2, zpaq and LZ4 also enter the conversation, because in real life it is almost never a pure head-to-head. In .NET deployments, for example, the balance between compatibility, ratio and speed weighs differently than in mass backups or in distribution packaging.

Context rules: if you want minimum deployment time and omnipresent support, you don't choose the same as when you try to minimize the storage of 15 backup copies (and split compressed files) and optimize egress cost. The good news is that there are now comparable figures and clear patterns to help you make informed decisions.

How each format works (and what it means in practice)

  • ZIP (Deflate) combines LZ77 and Huffman coding; it is the veteran format, ubiquitous and universally compatible. It doesn't usually win in compression ratio or speed against modern options, but it opens anywhere and its implementation is stable and well understood. Traditional ZIP (method 8, Deflate) had historical limitations, although modern compressors work around many of them.
  • 7Z relies mainly on LZMA/LZMA2, with a very good ratio and mature tooling. 7-Zip parallelizes well and its sensible defaults make it feel fast. It is a de facto standard in desktop and dev environments, with broad support on Windows, Linux and macOS (sometimes via external utilities).
  • Zstandard (ZSTD), created at Facebook (2015), stands out for extremely fast decompression and a very wide range of levels (1–22). Its goal is to match or beat Deflate and come close to LZMA in ratio, while generally running faster. It is integrated into the Linux and BSD kernels, used for package compression (Arch Linux, for example, uses zstd level 20, decompressing about 14x faster than XZ at only +0.8% size) and is increasingly present in CI/CD pipelines. A quick command-line sketch of the level trade-off follows this list.
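
For a quick feel of that level trade-off, here is a minimal shell sketch, assuming only the standard zstd CLI and a placeholder input file (data.tar); the level choices are illustrative, not a recommendation from the benchmarks below:

    # Compare a fast level and a high level on the same input with the zstd CLI.
    for level in 3 19; do
        /usr/bin/time -f "level $level: %e s wall" zstd -$level -T0 -k -f data.tar -o data.$level.tar.zst
    done
    ls -l data.*.tar.zst      # compare resulting sizes
    zstd -t data.19.tar.zst   # integrity check; decompression stays fast at any level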

Other actors worth knowing

  • Brotli, designed for the Web, compresses textual content better than gzip and benefits from browser content-encoding support. Its reference compressor is single-threaded, which skews wall-clock comparisons against multi-threaded codecs, although parallel implementations exist. In server environments, total CPU consumption (user+sys) matters more than the wall clock.
  • XZ improves on LZMA and offers a great ratio at the expense of long compression times. For large binaries it can be competitive, but its decompression speed usually falls behind ZSTD and 7Z. Multithreading and the -e (extreme) flag improve the picture somewhat, although they do not work miracles.
  • bzip2/pbzip2 balance ratio and CPU somewhere in the middle of the board; pbzip2 adds parallelism. For many modern cases, however, ZSTD and 7Z offer better overall trade-offs.
  • zpaq is the all-terrain option for maximum ratio, with journaling-style incremental compression; its focus is on squeezing bytes, happily sacrificing decompression speed. For cold backups that can be a good idea; for general use, it isn't.

Benchmarks with data: text, large binaries, and deployments


1) Textual dataset (Wikipedia 1 GB)

In a comparison of ZIP (via 7zip), 7zip, XZ, Brotli, Zstandard and zpaq over 1 GB of Wikipedia text, time vs size was plotted at each level. The author warns that the test is unscientific and that the single-threaded Brotli reference penalizes its "real" wall-clock times. When computing user+sys, the picture improves for Brotli; ZSTD appears very competitive, decompressing very quickly and with ratios in the XZ/7zip orbit at high effort levels.

Key findings: ZIP is left behind in both compression ratio and time; 7zip advances thanks to more sophisticated tooling and multithreading; XZ improves the ratio but decompresses slower than ZSTD; ZSTD balances remarkably well as effort increases; and zpaq achieves top ratios at a high cost in time, especially on decompression.

2) PeaZip "maximum compression" test (Windows, i7-8565U)

With PeaZip/WinRAR, a 303.0 MB input file and five repetitions per test, these were the average values (sizes in MB; times in seconds):

Format              Size (MB)  Ratio    Compression (s)  Extraction (s)
RAR best (WinRAR)   78.1       25.78%   28.5             1.8
7Z ultra (LZMA2)    71.2       23.50%   137.0            3.4
7Z ultra Brotli     75.1       24.79%   208.0            0.8
7Z ultra Zstd       75.3       24.85%   300.0            1.2
7Z ultra BZip2      80.6       26.60%   81.0             7.1
ZPAQ ultra          57.6       19.01%   359.0            358.0

Conclusions: ZPAQ wins in pure ratio but is very slow, even when extracting. 7Z (LZMA2) offers a good ratio with reasonable extraction times. Using Brotli/ZSTD inside 7Z, extraction flies (Brotli even more so) in exchange for longer compression than LZMA2 at ultra settings. RAR prioritizes compression speed while sacrificing some ratio, and if you need security you can see how to put passwords on compressed files.

3) Huge 28.7 GB binary (Linux, Ryzen 5 5600G)

On a file of 28.65–28.7 GB, the objective was to compress as much as possible and compare ratios and times across xz, pbzip2, 7z and zstd:

  • xz -9e -T12: 12.6 GiB in ~15m49s (≈ 44.0%)
  • pbzip2 -9: 13.07 GiB in ~4m29s (≈ 44.55%)
  • 7z -mx=9: 12.9 GiB in ~16m43s (≈ 43.98%)
  • zstd --ultra -22 -T12: 12.48 GiB in ~19m48s (≈ 43.57%)

In "maximum compression", ZSTD narrowly won in size, but it took longer. pbzip2 surprised in speed with a close ratio. This case illustrates that, on very large binaries, absolute differences in GB weigh as much as CPU minutes, and it is advisable quantify both.

4) Game ZIPs (EU4) and converting from tar+brotli to ZIP with ZSTD

A real-life case: EU4 saves arrive as lightly compressed ZIP files. Extracting and recompressing them (rezip) saves about 17%. Changing the codec inside the ZIP, the numbers per level were:

Method       Reduction  Time (ms)
zstd (3)     40%        463
zstd (5)     45%        755
zstd (7)     50%        1256
brotli (4)   32%        1481
brotli (9)   54%        4210

In addition, the Wasm payloads for transcoding differ: ZSTD ~215 kB (136 kB with the encoder only) versus Brotli ~683 kB. When parsing, counting fetch time from cache (the browser's Brotli content-encoding is not free), ZSTD and Brotli were very evenly matched: Brotli parses somewhat faster because it does not decompress in user space; ZSTD benefits from not touching content-encoding and reads less data from disk.


With a 30 Mb/s upload, integrating transcoding + transfer for a typical 7.7 MB ZIP: original 2.05 s; ZSTD-3 1.70 s; ZSTD-5 1.88 s; ZSTD-7 2.28 s. The practical recommendation leaned towards ZSTD level 7 for storage savings (when you pay for the bucket), although level 3 is tempting if you prioritize latency.
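
To try this trade-off on your own saves, here is a rough sketch of the extract-and-recompress experiment, assuming only unzip, tar and the zstd CLI (file names are placeholders; the original test transcoded the streams inside the ZIP itself rather than re-bundling with tar):

    # Re-compress the payload of a lightly compressed ZIP at several zstd levels.
    unzip -o save.eu4 -d save_contents             # EU4 saves are ZIP containers
    tar -cf save_contents.tar -C save_contents .   # bundle the extracted entries
    for level in 3 5 7; do
        /usr/bin/time -f "zstd -$level: %e s" zstd -$level -k -f save_contents.tar -o save.$level.zst
    done
    ls -l save.eu4 save.*.zst                      # compare against the original size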

5) .NET deployments (5.4 GB publish)

In a .NET application deployment scenario with 5.4 GB of publish output (a mix of self-contained and framework-dependent), the practical guideline is clear: choose by target. If you aim for universal compatibility, go with classic ZIP; if the bottleneck is decompression speed, ZSTD comes in strong; if you want a high ratio with consolidated CLI and tooling, 7Z with LZMA2 is still a winning horse. The sketch below illustrates the three routes.
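
As a hedged illustration of those three routes, a sketch for packaging the same publish folder (the path is a placeholder; zip, 7z and GNU tar with zstd are assumed to be installed):

    # Three ways to package the same publish output; pick by target audience.
    src=bin/Release/net8.0/publish                       # placeholder publish path
    zip -r -q app.zip "$src"                             # classic ZIP: opens anywhere
    7z a -mx=9 app.7z "$src"                             # 7Z (LZMA2): best ratio of the three
    tar -c -I 'zstd -T0 -19' -f app.tar.zst -C "$src" .  # tar+zstd: fastest extraction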

Decompression: the most noticeable factor in the experience

Several independent tests point to ZSTD as a class apart in decompression. In the textual bench, looking at user+sys makes it clear why ZSTD is a natural fit for file-system compression (see how to disable automatic compression) and package compression: reads are fast and the CPU cost is low. 7zip offers a very good experience, and the difference with XZ is explained more by tooling and multithreading than by codec magic.

ZIP surprised with its user+sys decompression numbers, doing better than expected; if your audience has modest machines or workflows where fast extraction is the priority, don't discard it without measuring. ZPAQ, for its part, confirms its profile: incredible ratio, but very high extraction times, suitable for cold copies.

Use cases: How to decide with data (RAM, time, ratio, and cost)

In a container-platform scenario with daily backups of 30–100 containers, the operating budget is the deciding factor. A pragmatic scoring system was proposed:

  • RAM: a budget of 200 MB for compression and 100 MB for decompression scores 5 points; every additional 50 MB subtracts 1 point.
  • Time: a 3-hour daily window to compress everything, i.e. 1.8–6 minutes per container (for 100 or 30 containers respectively). On the scale, >120 s scores 0, every 20 s less adds 1 point, and <10 s on decompression scores the full 5 points.
  • Ratio: with ~1.6 GB objects (e.g. MariaDB+PHP+WordPress) and ~$6/TB storage on Backblaze B2 / e2-type backends, aiming for ≥4:1 keeps storage and egress costs down (careful with AWS at $0.09/GB outgoing). A back-of-the-envelope cost sketch follows this list.
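
To see how the ratio target translates into money, here is a rough sketch under the figures above (the container count, retention and pricing are assumptions for illustration only):

    # Rough monthly storage bill: 100 containers x 1.6 GB, 30 retained copies, $6/TB-month.
    containers=100; size_gb=1.6; copies=30; price_per_tb=6
    for ratio in 1 3.5 4.3; do
        tb=$(echo "$containers * $size_gb * $copies / $ratio / 1000" | bc -l)
        cost=$(echo "$tb * $price_per_tb" | bc -l)
        printf 'ratio %s:1 -> %.2f TB stored, ~$%.2f/month\n' "$ratio" "$tb" "$cost"
    done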

Testing levels, ZSTD level 3 achieved ~3.5:1 in ~3 s with 207 MB of RAM; Brotli level 6 reached ~4.3:1 in ~30 s with a good RAM profile. Hand-tuning ZSTD's advanced parameters (strategy, searchLog, targetLength, minMatch) did not improve things enough without penalizing time or memory; in that particular analysis, Brotli level 6 came out as the favorite. Curiously, Brotli level 4 used more memory than levels 3 and 5 but matched the level 5 ratio in half the time, a very attractive option if 13 MB of extra storage is affordable.

If the objective changes (for example, rapid deployments or read-intensive workloads), the weights shift and ZSTD tends to be the most balanced choice thanks to its very fast decompression and decent ratios at medium levels.

Compatibility, support, and the role of ZIP with ZSTD

A key point: the ZIP standard incorporated ZSTD (method 93) in specification 6.3.8 (2020). This lets the familiar ZIP container carry a modern codec. How well does the ecosystem support it? Today, Windows Explorer neither creates nor extracts ZIPs with ZSTD; mainline 7-Zip is in the process of integrating it, and there are already working forks. On Linux, there are forks of p7zip. The situation is improving, but there is still friction.


If you use Total Commander, you can enable reading 7z with ZSTD by replacing TCLZMA64.DLL with a compatible version (e.g., the TotalCmd.7z package from 7-Zip ZS). To create 7z files with ZSTD from within TC, older plugins do not always respect the chosen codec and fall back to LZMA; it is better to use the full 7-Zip ZS or the codecs plugin on top of an existing 7-Zip installation. With 7-Zip ZS you get compression/decompression of Brotli, LZ4, Lizard and ZSTD inside the 7z container, plus ZIP+ZSTD handling. Check with 7z i which codecs are active.
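
A hedged example of checking and using the extra codecs with a 7-Zip ZS build; the -m0=zstd switch follows that fork's commonly documented syntax, so verify it against the output of 7z i on your installation:

    7z i | grep -iE 'zstd|brotli|lz4|lizard'   # list the codecs your 7-Zip build exposes
    # Create a 7z archive using the ZSTD codec instead of LZMA2 (7-Zip ZS):
    7z a -m0=zstd -mx=17 backup.7z data/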

In browsers and web pipelines, Brotli content-encoding looks like a free lunch, but there are nuances: middleware, proxies and frameworks (e.g. discrepancies between debug and production mode in Next.js) can redo compression or add latency. Serving pre-compressed content isn't always straightforward either (Cloudflare Pages doesn't support it out of the box). In several real-world cases, replacing the streams inside the ZIP with ZSTD and decompressing in user space meant fewer environment dependencies and equivalent or better results.

Useful parameters and recommended levels

In ZSTD, levels 3, 5 and 7 offer good operating points. A study crossing transcoding and transfer at 30 Mb/s showed level 3 as the fastest end-to-end, but level 7 saved the most storage (in EU4, 12.5% smaller than level 5 and 25% smaller than level 3). AWS Athena's documentation points to levels 6–9 when the default of 3 is not used, which fits that finding.

The temptation to enable long-distance matching is real, but be careful: whoever decompresses needs the same window memory as whoever compressed (see the sketch below). In deployments and client apps, that requirement may be unfeasible. Better to keep windows and tables within reasonable budgets.
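
A minimal sketch of that symmetry with the standard zstd CLI (file names are placeholders):

    # A window log above 27 must be accepted explicitly by the decompressor.
    zstd --long=30 -19 -T0 big.tar -o big.tar.zst
    zstd -d big.tar.zst -o big.out               # fails: window exceeds the default 128 MB limit
    zstd -d --long=30 big.tar.zst -o big.out     # works: same window budget on both sides
    # Alternatively: zstd -d --memory=1024MB big.tar.zst -o big.out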

In Brotli, the key setting is usually the level. Changing lgwin can increase memory without huge gains if you are already running high levels. The observation that level 4 matched the level 5 ratio in half the time (at the cost of slightly more RAM) is gold when you want lower latency without penalizing size too much.

For XZ, the -e flag applies slower variants of each preset in search of a bit more ratio. The manual's tables remind you that compressor memory increases while decompressor memory stays the same. Even so, compared to ZSTD, XZ usually loses in decompression when you want to open packages at high speed.

If you depend on classic ZIP but want maximum Deflate speed, libraries like libdeflate greatly improve throughput compared to zlib. In the Rust ecosystem, zip-rs does not yet make it easy to swap backends when creating ZIPs, so that route may not be available without additional work.
