gopkg.in/thejerf/gomempool.v1

Package gomempool implements a simple memory pool for byte slices.

The Go garbage collector runs more often when more bytes are allocated. (For full details, see the runtime package documentation on the GOGC variable.) Avoiding allocations can help the stop-the-world GC run much less often.

To determine whether you should use this, first deploy your code into as realistic an environment as possible. Extract a runtime.MemStats structure from your running code. Examine the various Pause fields in that structure to determine whether you have a GC problem, preferably in conjunction with some other monitoring of real performance. (Remember that the numbers are in nanoseconds.) If the numbers you have are OK for your use case, STOP HERE. Do not proceed.

If you are generating a lot of garbage collection pauses, the next question is why. Profile the heap. If the answer is anything other than []byte slices, STOP HERE. Do not proceed.

Finally, if you are indeed seeing a lot of allocations of []bytes, you may wish to proceed with using this library. gomempool is a power tool; it can save your program, but it can blow your feet off, too. (I've done both.)

That said, despite its narrow use case, this library can have a real effect in certain situations, one of which I happened to encounter: a network application that frequently allocates large messages, which caused an otherwise relatively memory-svelte program to allocate dozens of megabytes per second of []byte buffers just to process messages. I suspect that sort of program is the biggest use case.

To use the pool, there are three basic steps:

1. Create a pool.
2. Allocate byte slices from the pool via Allocators.
3. Optionally, return the byte slices to the pool.

The following prose documentation covers each step at length; for concise usage you can copy and paste, see the Example in the package's godoc.

First, create a pool. A *Pool is obtained by calling gomempool.New. All methods on the *Pool are threadsafe. The pool can be configured via the New call.

A nil pointer of type *gomempool.Pool is also a valid pool. It uses a normal make() to create byte slices and simply discards the slice when asked to .Return() it. This is convenient for testing whether you have a memory error in your code: you can swap in a nil Pool pointer without changing any other code, and if an error goes away when you do that, you have a memory error (probably a .Return() called too soon).

[]bytes in the pool come with an "Allocator" that is responsible for returning them correctly. To obtain an allocator, call GetNewAllocator() on the pool. This returns an allocator that is not yet used. You may then call .Allocate(uint64) on it to assign a []byte from the pool. That []byte is associated with the Allocator until you call .Return() on the allocator, after which the []byte goes back into the pool.

If you ask for more bytes than the pool is configured to store, the Allocator will create a transient []byte that it will not manage. You can check whether you are invoking this case by calling .MaxSize on the pool.

You MUST NOT call .Return() until you are entirely done with the []byte. This includes shared slices you may have created; that is by far the easiest way to get in trouble with a []byte pool, because it is easy to accidentally introduce sharing without realizing it.

You must also make sure not to do anything with your []byte that might cause another []byte to be created instead; for instance, using your pooled []byte in an "append" call is dangerous, because the runtime may hand you back a []byte backed by an entirely different array. In that case your []byte and your Allocator cease to be related. If the Allocator is correctly managed, your code will not fail, but you won't be getting any benefit, either.

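Putting the basic cycle together, the following is a minimal sketch of allocate/use/return built only from the method names described above (.Bytes(), covered just below, retrieves the slice bound to an allocator). It uses a nil *Pool so it does not have to assume anything about New's configuration parameters; with a real pool created by gomempool.New, only the pool line changes. The 4096-byte size is arbitrary.

```go
package main

import (
	"fmt"

	"gopkg.in/thejerf/gomempool.v1"
)

func main() {
	// A nil *Pool is documented to be valid: it hands out ordinary make()'d
	// slices, but its Allocators enforce the same rules as a real pool's.
	var pool *gomempool.Pool

	alloc := pool.GetNewAllocator() // an Allocator with nothing allocated yet
	alloc.Allocate(4096)            // bind a 4096-byte []byte to this Allocator
	buf := alloc.Bytes()            // retrieve the bound []byte

	copy(buf, "hello") // use the buffer; do not let shared copies outlive Return()
	fmt.Println(string(buf[:5]))

	alloc.Return() // hand the buffer back; touching buf after this point is an error
}
```
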
You may retrieve the []byte slice corresponding to an Allocator at any time by calling .Bytes(). This means that if you need to pass the []byte and the Allocator around, it suffices to pass just the Allocator. (Be aware, though, that the Allocator's []byte will be of the original size you asked for; this cannot be changed, since the original slice itself cannot be changed.)

Allocators can be reused freely, as long as they are used correctly. However, an individual Allocator is not threadsafe: its interaction with the Pool is, but its internal values are not, so do not use the same Allocator from more than one goroutine at a time.

Once allocated, an allocator will PANIC with ErrBytesAlreadyAllocated if you try to allocate again. Once .Return() has been called, an allocator will PANIC if you try to .Return() the []byte again. If no []byte is currently allocated, .Bytes() will PANIC if called. This is fully on purpose: all of these situations represent profound errors in your code. Such errors are just as dangerous in Go as in any other language; Go may not segfault, but memory management issues can still dish out the pain, so it is better to find out earlier rather than later.

You can combine obtaining an Allocator and a []byte of a particular size by calling .Allocate() on the pool.

The Allocators returned by the nil *Pool use make() to create new slices every time and simply discard the []byte when done, but they enforce exactly the same rules as the "real" Allocators and panic in all the same places. This is so there is as little difference as possible between the two kinds of pool.

The third step, returning the byte slices, is optional. If a []byte is not returned to the pool, you do not get any further benefit from the pool for that []byte, but the garbage collector will still clean it up normally. This means using a pool is still feasible even if some of your code paths need to retain a []byte for a long or complicated period of time.

If a byte slice gets passed through several goroutines, I recommend creating a structure that binds all the relevant data about the object together with the allocator (see the sketch at the end of this section). That makes it easy to keep the two together and pass them around, with only the last user finally performing the deallocation. This library does not provide such a structure, since all it could offer is basically that struct with an interface{} in it.

You can query the pool for its cache statistics by calling Stats(), which returns a structure describing how the individual buckets are performing.

Quality: at the moment I would call this alpha code. It is go lint clean, go vet clean, and has 100% test coverage. You and I both know that doesn't prove it is bug-free, but at least it shows I care.
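
The wrapper struct recommended above might look something like this sketch. The type, field names, and Done method are illustrative only, not part of gomempool; the only thing assumed from the library is its exported Allocator type.

```go
package message

import "gopkg.in/thejerf/gomempool.v1"

// PooledMessage is a hypothetical wrapper that keeps a pooled buffer and the
// Allocator that owns it bound together while the message is passed between
// goroutines. gomempool itself does not provide this type.
type PooledMessage struct {
	Body  []byte              // the pooled []byte holding the message payload
	Alloc gomempool.Allocator // owns Body; .Return() must be called exactly once
}

// Done hands the buffer back to the pool. Only the final consumer of the
// message should call it, and Body must not be used afterwards.
func (m *PooledMessage) Done() {
	m.Alloc.Return()
}
```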


gomempool


A []byte pool implementation for Go.

Go network programs can get into a bit of garbage collection trouble if they are constantly allocating buffers of []byte to process network requests. This library gives you an interface you can program to that makes it easy to re-use []bytes without much additional work. It also provides statistics so you can query how well the Pool is working for you.

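Before reaching for a pool, the package documentation advises confirming that GC pauses are actually a problem. A minimal sketch of that diagnostic step, using only the standard library (the index formula for the most recent pause comes from the runtime.MemStats docs):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)

	// All pause figures are in nanoseconds; PauseNs is a circular buffer of
	// recent stop-the-world pause durations, most recent at (NumGC+255)%256.
	fmt.Printf("GC runs: %d, total pause: %v\n",
		m.NumGC, time.Duration(m.PauseTotalNs))
	if m.NumGC > 0 {
		fmt.Printf("most recent pause: %v\n",
			time.Duration(m.PauseNs[(m.NumGC+255)%256]))
	}
}
```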

This is fully covered with godoc, including examples, motivation, and everything else you might otherwise expect from a README.md on GitHub. (DRY.)

This is currently at version 1.0.0, and can be imported via gopkg.in/thejerf/gomempool.v1 if you like. Semantic versioning will be used for version numbers.

Code Signing

Starting with commit ff6f742, I will be signing this repository with the "jerf" keybase account.

sync.Pool

"What about sync.Pool?" sync.Pool turns out to solve a different problem. sync.Pool focuses on having a pool of otherwise indistinguishable objects. gomempool specifically focuses on []bytes of potentially different sizes. After some analysis, I don't see a reason to even use sync.Pool, because there's not much it could improve in gomempool. gomempool and sync.Pool turn out not to overlap at all.

(Also, I've at least picked up rumors that the core devs have been underwhelmed by sync.Pool. You probably don't need gomempool, but the problems it solves, while obscure, are real.)

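For contrast, here is a minimal sketch of the problem sync.Pool does solve: a pool of interchangeable, same-shaped objects, with no way to ask for a buffer of a particular size. (This is standard library usage, not gomempool; the 4096-byte buffer is arbitrary.)

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool hands back some previously returned buffer, or a fresh one from New.
// Every object is the same shape; there is no notion of "at least N bytes".
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 4096) },
}

func main() {
	buf := bufPool.Get().([]byte) // take a buffer from the pool
	defer bufPool.Put(buf)        // give it back when done

	copy(buf, "hello")
	fmt.Println(string(buf[:5]))
}
```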
