github.com/klauspost/compress
This package provides various compression algorithms.
It also provides a drop-in replacement for github.com/golang/snappy, offering better compression and concurrent streams.

Changelog:
Feb 5th, 2024 - v1.17.6
Jan 26th, 2024 - v1.17.5
Dec 1st, 2023 - v1.17.4
Nov 15th, 2023 - v1.17.3
Oct 22nd, 2023 - v1.17.2
Oct 14th, 2023 - v1.17.1
Sept 19th, 2023 - v1.17.0
July 1st, 2023 - v1.16.7
June 13, 2023 - v1.16.6
Apr 16, 2023 - v1.16.5
Apr 5, 2023 - v1.16.4
Mar 13, 2023 - v1.16.1
Feb 26, 2023 - v1.16.0
Jan 21st, 2023 (v1.15.15)
Jan 3rd, 2023 (v1.15.14)
Dec 11, 2022 (v1.15.13)
Oct 26, 2022 (v1.15.12)
HeaderNoCompression https://github.com/klauspost/compress/pull/683
Sept 26, 2022 (v1.15.11)
Sept 16, 2022 (v1.15.10)
July 21, 2022 (v1.15.9)
July 13, 2022 (v1.15.8)
June 29, 2022 (v1.15.7)
June 3, 2022 (v1.15.6)
May 25, 2022 (v1.15.5)
May 11, 2022 (v1.15.4)
May 5, 2022 (v1.15.3)
Apr 26, 2022 (v1.15.2)
Mar 11, 2022 (v1.15.1)
Mar 3, 2022 (v1.15.0)
Both compression and decompression now support "synchronous" stream operations. This means that whenever "concurrency" is set to 1, they will operate without spawning goroutines.
Stream decompression is now faster in asynchronous mode, since the goroutine allocation splits the workload much more effectively. Typical streams will fully use 2 cores for decompression. When a stream has finished decoding, no goroutines are left over, so decoders can now safely be pooled and still be garbage collected.
While the release has been extensively tested, it is recommended to do testing when upgrading.
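For reference, here is a minimal sketch (not part of the original release notes) of how the synchronous mode can be requested by pinning concurrency to 1 with the zstd sub-package's WithEncoderConcurrency and WithDecoderConcurrency options:

```go
package example

import (
	"io"

	"github.com/klauspost/compress/zstd"
)

// compressSync compresses src into dst without the encoder spawning goroutines.
func compressSync(dst io.Writer, src io.Reader) error {
	enc, err := zstd.NewWriter(dst, zstd.WithEncoderConcurrency(1))
	if err != nil {
		return err
	}
	if _, err := io.Copy(enc, src); err != nil {
		enc.Close()
		return err
	}
	return enc.Close()
}

// decompressSync decompresses src into dst with a single-goroutine decoder.
func decompressSync(dst io.Writer, src io.Reader) error {
	dec, err := zstd.NewReader(src, zstd.WithDecoderConcurrency(1))
	if err != nil {
		return err
	}
	defer dec.Close()
	_, err = io.Copy(dst, dec)
	return err
}
```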
Feb 22, 2022 (v1.14.4)
Feb 17, 2022 (v1.14.3)
Jan 25, 2022 (v1.14.2)
Jan 11, 2022 (v1.14.1)
Aug 30, 2021 (v1.13.5)
Aug 12, 2021 (v1.13.4)
Aug 3, 2021 (v1.13.3)
Jun 14, 2021 (v1.13.1)
Jun 3, 2021 (v1.13.0)
May 25, 2021 (v1.12.3)
Apr 27, 2021 (v1.12.2)
Apr 14, 2021 (v1.12.1)
Mar 26, 2021 (v1.11.13)
Mar 5, 2021 (v1.11.12)
s2sx: binary that creates self-extracting archives.
Mar 1, 2021 (v1.11.9)
Feb 25, 2021 (v1.11.8)
Jan 14, 2021 (v1.11.7)
Jan 7, 2021 (v1.11.6)
Dec 20, 2020 (v1.11.4)
Nov 15, 2020 (v1.11.3)
Oct 11, 2020 (v1.11.2)
Oct 1, 2020 (v1.11.1)
Sept 8, 2020 (v1.11.0)
… nil for previous behaviour. #216
Added -rm (remove source files) and -q (no output except errors) to the s2c and s2d commandline tools.

The packages are drop-in replacements for standard libraries. Simply replace the import path to use them:
old import | new import | Documentation |
---|---|---|
compress/gzip | github.com/klauspost/compress/gzip | gzip |
compress/zlib | github.com/klauspost/compress/zlib | zlib |
archive/zip | github.com/klauspost/compress/zip | zip |
compress/flate | github.com/klauspost/compress/flate | flate |
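For instance, only the import path changes when switching from the standard library; a minimal sketch:

```go
package example

import (
	"bytes"
	"io"

	// Drop-in replacement: only the import path differs from "compress/gzip".
	"github.com/klauspost/compress/gzip"
)

// gzipRoundTrip compresses and then decompresses data using the drop-in gzip package.
func gzipRoundTrip(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write(data); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	zr, err := gzip.NewReader(&buf)
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```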
You may also be interested in pgzip, which is a drop-in replacement for gzip that supports multithreaded compression on big files, and the optimized crc32 package used by these packages.
The packages contain the same functionality as the standard library, so you can use its godoc for reference: gzip, zip, zlib, flate.
Currently there is only a minor speedup on decompression (mostly from the CRC32 calculation).
Memory usage is typically 1MB for a Writer; the stdlib is in the same range. If you expect to have a lot of concurrently allocated Writers, consider using the stateless compress described below.
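As an aside (not from the original README), when writers can be reused rather than allocated per request, another common way to bound per-Writer memory is pooling with Reset; a minimal sketch:

```go
package example

import (
	"io"
	"sync"

	"github.com/klauspost/compress/gzip"
)

// writerPool reuses gzip.Writers so the ~1MB per-Writer state is allocated
// only for concurrently active writers, not per call.
var writerPool = sync.Pool{
	New: func() interface{} { return gzip.NewWriter(io.Discard) },
}

// compressTo writes gzip-compressed data to dst using a pooled writer.
func compressTo(dst io.Writer, data []byte) error {
	zw := writerPool.Get().(*gzip.Writer)
	zw.Reset(dst) // point the reused writer at the new destination
	defer writerPool.Put(zw)
	if _, err := zw.Write(data); err != nil {
		return err
	}
	return zw.Close()
}
```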
For compression performance, see: this spreadsheet.
To disable all assembly, add -tags=noasm. This works across all packages.
This package offers stateless compression as a special option for gzip/deflate. It will do compression but without maintaining any state between Write calls.
This means there will be no memory kept between Write calls, but compression and speed will be suboptimal.
This is only relevant in cases where you expect to run many thousands of compressors concurrently, but with very little activity. This is not intended for regular web servers serving individual requests.
Because of this, the size of actual Write calls will affect output size.
In gzip, specify level -3 / gzip.StatelessCompression to enable.
For direct deflate use, NewStatelessWriter and StatelessDeflate are available. See the documentation.
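A minimal sketch of the direct deflate route, assuming flate.NewStatelessWriter(dst io.Writer) io.WriteCloser as described in the package documentation:

```go
package example

import (
	"bytes"

	"github.com/klauspost/compress/flate"
)

// statelessDeflate compresses data without keeping state between Write calls.
// Larger writes compress better, since no history is carried over.
func statelessDeflate(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w := flate.NewStatelessWriter(&buf)
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```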
A bufio.Writer can of course be used to control write sizes. For example, to use a 4KB buffer:
```go
// replace 'ioutil.Discard' with your output.
gzw, err := gzip.NewWriterLevel(ioutil.Discard, gzip.StatelessCompression)
if err != nil {
	return err
}
defer gzw.Close()

w := bufio.NewWriterSize(gzw, 4096)
defer w.Flush()

// Write to 'w'
```
This will only use up to 4KB in memory when the writer is idle.
Compression is almost always worse than the fastest compression level and each write will allocate (a little) memory.
It has been a while since we last looked at the speed of this package compared to the standard library, so I thought I would re-do my tests and give some overall recommendations based on the current state. All benchmarks have been performed with Go 1.10 on my desktop Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz. Since I last ran the tests, I have gotten more RAM, which means tests with big files are no longer limited by my SSD.
The raw results are in my updated spreadsheet. Due to cgo changes and upstream updates I could not get the cgo version of gzip to compile. Instead I included the zstd cgo implementation. If I get cgo gzip to work again, I might replace the results in the sheet.
The columns to take note of are: MB/s - the throughput. Reduction - the data size reduction in percent of the original. Rel Speed - the relative speed compared to the standard library at the same level. Smaller - how many percent smaller the compressed output is compared to stdlib (negative means the output was bigger). Loss - the loss (or gain) in compression as a percentage difference of the input.
The gzstd (standard library gzip) and gzkp (this package's gzip) each use only one CPU core. pgzip and bgzf use all 4 cores. zstd uses one core, and is a beast (but not Go, yet).
There appears to be a roughly 5-10% speed advantage over the standard library when comparing at similar compression levels.
The biggest difference you will see is the result of re-balancing the compression levels. I wanted my library to give a smoother transition between the compression levels than the standard library.
This package attempts to provide a smoother transition, where "1" takes a lot of shortcuts, "5" is the reasonable trade-off and "9" is "give me the best compression", and the values in between give something reasonable in between. The standard library has big differences between levels 1-4, but levels 5-9 show no significant gains - often spending a lot more time than can be justified by the achieved compression.
There are links to all the test data in the spreadsheet in the top left field on each tab.
This test set aims to emulate typical use in a web server. The test-set is 4GB data in 53k files, and is a mixture of (mostly) HTML, JS, CSS.
Since levels 1 and 9 are close to being the same code, they perform quite similarly. But looking at the levels in between, the differences are quite big.
Looking at level 6, this package is 88% faster, but will output about 6% more data. For a web server, this means you can serve 88% more data, but have to pay for 6% more bandwidth. You can draw your own conclusions on what would be the most expensive for your case.
This test is for typical data files stored on a server. In this case it is a collection of Go precompiled objects. They are very compressible.
The picture is similar to the web content, but with small differences since this is very compressible. Levels 2-3 offer good speed, but sacrifice quite a bit of compression.
The standard library seems suboptimal on levels 3 and 4 - offering both worse compression and speed than levels 6 and 7 of this package, respectively.
This is a JSON file with very high redundancy. The reduction starts at 95% on level 1, so in real life terms we are dealing with something like a highly redundant stream of data, etc.
It is definitely visible that we are dealing with specialized content here, so the results are very scattered. This package does not do very well at levels 1-4, but picks up significantly at level 5, with levels 7 and 8 offering great speed for the achieved compression.
So if you know your content is extremely compressible you might want to go slightly higher than the defaults. The standard library has a huge gap between levels 3 and 4 in terms of speed (2.75x slowdown), so it offers little "middle ground".
This is a pretty common test corpus: enwik9. It contains the first 10^9 bytes of the English Wikipedia dump on Mar. 3, 2006. This is a very good test of typical text based compression and more data heavy streams.
We see a similar picture here as in "Web Content". On equal levels some compression is sacrificed for more speed. Level 5 seems to be the best trade-off between speed and size, beating stdlib level 3 in both.
I will combine two test sets, one 10GB file set and a VM disk image (~8GB). Both contain different data types and represent a typical backup scenario.
The most notable thing is how quickly the standard library drops to very low compression speeds around level 5-6 without any big gains in compression. Since this type of data is fairly common, this does not seem like good behavior.
This is mainly a test of how good the algorithms are at detecting un-compressible input. The standard library only offers this feature with very conservative settings at level 1. Obviously there is no reason for the algorithms to try to compress input that cannot be compressed. The only downside is that it might skip some compressible data on false detections.
This compression library adds a special compression level, named HuffmanOnly, which allows near-linear-time compression. This is done by completely disabling matching of previous data, and only reducing the number of bits used to represent each character.
This means that often-used characters, like 'e' and ' ' (space) in text, use the fewest bits to represent, and rare characters like '¤' take more bits to represent. For more information see wikipedia or this nice video.
Since this type of compression has much less variance, the compression speed is mostly unaffected by the input data, and is usually more than 180MB/s for a single core.
The downside is that the compression ratio is usually considerably worse than even the fastest conventional compression. The compression ratio can never be better than 8:1 (12.5%).
The linear time compression can be used as a "better than nothing" mode, where you cannot risk the encoder slowing down on some content. For comparison, the size of the "Twain" text is 233460 bytes (+29% vs. level 1) and encode speed is 144MB/s (4.5x level 1). So in this case you trade a roughly 30% size increase for a 4x speedup.
For more information see my blog post on Fast Linear Time Compression.
This is implemented in Go 1.7 as "Huffman Only" mode, though not exposed for gzip.
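A minimal sketch of enabling the Huffman-only level via this package's flate (the HuffmanOnly constant comes from the package; adapt for gzip as needed):

```go
package example

import (
	"bytes"

	"github.com/klauspost/compress/flate"
)

// huffmanOnly compresses data using only Huffman coding (no matching of
// previous data), trading compression ratio for near-linear-time encoding.
func huffmanOnly(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w, err := flate.NewWriter(&buf, flate.HuffmanOnly)
	if err != nil {
		return nil, err
	}
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}
```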
Here are other packages of good quality and pure Go (no cgo wrappers or autoconverted code):
This code is licensed under the same conditions as the original Go code. See LICENSE file.