Package boutique provides immutable state storage with subscriptions to changes in the store. It is intended to be a single store for individual client data or a single state store for an application not requiring high QPS. Features: Drawbacks: When we say immutable, we mean that everything gets copied, as Go does not have immutable objects or types other than strings. This means every update to a pointer or reference type (such as a map or slice) must copy the data before changing it rather than mutating it in place. On modern processors this copying is quite fast. Boutique provides storage that is best designed in a modular method: The files are best organized by using them as follows: Please see github.com/johnsiilver/boutique for a complete guide to using this package. It's complicated enough to warrant some documentation to guide you through. If you're very impatient, there is an example directory with examples of varying complexity.
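To make the copy-before-write rule concrete, here is a minimal sketch in plain Go (not boutique's actual API): an update to a map held in an immutable store copies first, then changes the copy.

	// updateCounter returns a new map with one key changed, leaving the
	// original untouched; a store would then swap in the new map as the
	// next immutable state.
	func updateCounter(orig map[string]int, key string, v int) map[string]int {
		cp := make(map[string]int, len(orig)+1)
		for k, val := range orig {
			cp[k] = val
		}
		cp[key] = v
		return cp
	}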
Package tagset supports creation and manipulation of sets of tags. It does so in a safe and efficient fashion, supporting: - consistent hashing of tagsets to recognize commonalities - flexible combination of tagsets from multiple sources - immutability to allow re-use of tagsets The package otherwise presents a fairly abstract API that allows performance optimizations without changing semantics. HashlessTagsAccumulator and HashingTagsAccumulator both allow building tagsets bit-by-bit, by appending new tags. The HashedTags type represents an _immutable_ set of tags and associated hashes. It is the primary data structure used to represent a set of tags.
Package frozen provides immutable data structures.
Tlogdb is a trivial transparent log client and server. It is meant more as a starting point to be customized than as a tool to be used directly. A transparent log is a tamper-proof, append-only, immutable log of data records. That is, if the server were to violate the “append-only, immutable” properties, that tampering would be detected by the client. For more about transparent logs, see https://research.swtch.com/tlog. To create a new log (new server state): The newlog command creates a new database in file (default tlog.db) containing an empty log and a newly generated public/private key pair for the server using the given name. The newlog command prints the newly generated public key. To see it again: To add a record named name to the log: To serve the authenticated log data: The default server address is localhost:6655. The client maintains a cache database, both for performance (avoiding duplicate downloads) and for storing the server's public key and the most recently seen log head. To create a new client cache: The newcache command creates a new database in file (default tlogclient.db) and stores the given public key for later use. The key should be the output of the tlogdb server commands newlog or publickey, described above. To look up a record in the log: The default server address is again localhost:6655. The protocol between client and server is the same as that used by the Go module checksum database, documented at https://golang.org/design/25530-sumdb#checksum-database. There are three endpoints: /latest serves a signed tree head; /lookup/NAME looks up the given name; and /tile/* serves log tiles. Putting the various commands together in a Unix shell:
Package iavl implements a versioned, snapshottable (immutable) AVL+ tree for persisting key-value pairs. Basic usage of MutableTree: Proof of existence: Proof of absence: Now we delete an old version: Can't create a proof of absence for a version we no longer have:
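A minimal sketch of basic MutableTree usage, assuming the cosmos/iavl API (NewMutableTree, Set, SaveVersion) and an in-memory tm-db backend; exact signatures vary between iavl versions:

	package main

	import (
		"fmt"

		"github.com/cosmos/iavl"
		dbm "github.com/tendermint/tm-db"
	)

	func main() {
		// 128 is the node cache size; any tm-db backend works.
		tree, err := iavl.NewMutableTree(dbm.NewMemDB(), 128)
		if err != nil {
			panic(err)
		}
		tree.Set([]byte("alice"), []byte("abc"))
		// SaveVersion persists the current state as a new immutable version.
		hash, version, err := tree.SaveVersion()
		if err != nil {
			panic(err)
		}
		fmt.Printf("saved version %d with root hash %X\n", version, hash)
	}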
Package cbor is a modern CBOR codec (RFC 8949 & RFC 7049) with CBOR tags, Go struct tags (toarray/keyasint/omitempty), Core Deterministic Encoding, CTAP2, Canonical CBOR, float64->32->16, and duplicate map key detection. Encoding options allow "preferred serialization" by encoding integers and floats to their smallest forms (e.g. float16) when values fit. Struct tags like "keyasint", "toarray" and "omitempty" make CBOR data smaller and easier to use with structs. For example, the "toarray" tag makes struct fields encode to CBOR array elements. And "keyasint" makes a field encode to an element of a CBOR map with a specified int key. Latest docs can be viewed at https://github.com/fxamacker/cbor#cbor-library-in-go The Quick Start guide is at https://github.com/fxamacker/cbor#quick-start Function signatures identical to encoding/json include: Standard interfaces include: Custom encoding and decoding is possible by implementing standard interfaces for user-defined Go types. Codec functions are available at package level (using default options) or by creating modes from options at runtime. "Mode" in this API means a definite way of encoding (EncMode) or decoding (DecMode). EncMode and DecMode interfaces are created from EncOptions or DecOptions structs. Modes use immutable options to avoid side effects and simplify concurrency. Behavior of modes won't accidentally change at runtime after they're created. Modes are intended to be reused and are safe for concurrent use. EncMode and DecMode Interfaces Using Default Encoding Mode Using Default Decoding Mode Creating and Using Encoding Modes Predefined Encoding Options: https://github.com/fxamacker/cbor#predefined-encoding-options Encoding Options: https://github.com/fxamacker/cbor#encoding-options Decoding Options: https://github.com/fxamacker/cbor#decoding-options Struct tags like `cbor:"name,omitempty"` and `json:"name,omitempty"` work as expected. If both struct tags are specified then `cbor` is used. Struct tags like "keyasint", "toarray", and "omitempty" make it easy to use very compact formats like COSE and CWT (CBOR Web Tokens) with structs. For example, "toarray" makes struct fields encode to array elements. And "keyasint" makes struct fields encode to elements of a CBOR map with int keys. https://raw.githubusercontent.com/fxamacker/images/master/cbor/v2.0.0/cbor_easy_api.png Struct tags are listed at https://github.com/fxamacker/cbor#struct-tags-1 Over 375 tests are included in this package. Cover-guided fuzzing is handled by fxamacker/cbor-fuzz.
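For example, the "keyasint" and "toarray" tags described above can be used like this (a short sketch against the github.com/fxamacker/cbor/v2 import path):

	package main

	import (
		"fmt"

		"github.com/fxamacker/cbor/v2"
	)

	// Claims encodes to a CBOR map with int keys (CWT-style).
	type Claims struct {
		Iss string `cbor:"1,keyasint"`
		Exp int64  `cbor:"4,keyasint"`
	}

	// Point encodes to a CBOR array instead of a map.
	type Point struct {
		_ struct{} `cbor:",toarray"`
		X int
		Y int
	}

	func main() {
		b, err := cbor.Marshal(Claims{Iss: "issuer", Exp: 1234567890})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%x\n", b)

		var p Point
		// 0x82 0x01 0x02 is the CBOR array [1, 2].
		if err := cbor.Unmarshal([]byte{0x82, 0x01, 0x02}, &p); err != nil {
			panic(err)
		}
		fmt.Println(p.X, p.Y) // 1 2
	}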
Package disgord provides Go bindings for the documented Discord API, and allows for a stateful Client using the Session interface, with the option of a configurable caching system, or bypassing the built-in caching logic altogether. Create a Disgord session to get access to the REST API and socket functionality. In the following example, we listen for new messages and write a "hello" message when our handler function gets fired (see the sketch after this section). Session interface: https://godoc.org/github.com/andersfylling/disgord/#Session Disgord also provides the option to listen for events using a channel. The setup is exactly the same as registering a function. Simply define your channel, add buffering if you need it, and register it as a handler in the `.On` method. Never close a channel without removing the handler from Disgord. You can't directly call Remove; instead you inject a controller to dictate the handler's lifetime. Since you are the owner of the channel, Disgord will never close it for you. Here is what it would look like to use the channel for handling events. Please run this in a goroutine unless you know what you are doing. Disgord handles sharding for you automatically: when starting the bot, when Discord demands that you scale up your shards (during runtime), etc. It also gives you control over the shard setup in case you want to run multiple instances of Disgord (in these cases you must handle scaling yourself, as Disgord cannot). Sharding is done behind the scenes, so you do not need to worry about any settings. Disgord will simply ask Discord for the recommended number of shards for your bot on startup. However, to set a specific number of shards you can use `disgord.ShardConfig` to specify a range of valid shard IDs (starting from 0). Starting a bot with exactly 5 shards Running multiple instances, each with 1 shard (note each instance must use unique shard IDs) Handle scaling options yourself > Note: if you create a CacheConfig you don't have to set every field. > Note: Only LFU is supported. > Note: Lifetime options do not currently work/do anything (yet). A part of Disgord is the control you have; while this can be a good detail for advanced users, we recommend that beginners use the default configuration (by simply not editing it). Example of configuring the Cache: If you just want to change a specific field, you can do so. The fields are always default values. > Note: Disabling caching for some types while activating it for others (e.g. disabling Channels but activating guild caching) can cause items extracted from the Cache to not reflect the true Discord state. Example, with guild caching activated but channel caching disabled: the guild is stored in the Cache, but its Channels are discarded. Guild Channels are dismantled from the guild object and stored in the channel Cache instead, to improve performance and reduce memory use. So when you extract the cached guild object, all of the channels will only hold their channel ID, and nothing more. To keep it safe and reliable, you cannot directly affect the contents of the Cache. Unlike discordgo, where everything is mutable, the caching in Disgord is immutable. This does reduce performance, as a copy must be made (only on new Cache entries), but as a performance freak, I can tell you right now that a simple struct copy is not that expensive. This also means that, as long as Discord sends their events properly, the caching will always reflect the true state of Discord.
If there is a bug in the Cache and you keep getting incorrect data, please file an issue at github.com/andersfylling/disgord so it can quickly be resolved(!) Whenever you call a REST method from the Session interface, the Cache is always checked first. Upon a Cache hit, no REST request is executed and you get the data from the Cache in return. However, if this is problematic for you, or there exists a bug which gives you bad/outdated data, you can bypass it by using Disgord flags. In addition to disgord.IgnoreCache, as shown above, you can pass in other flags such as disgord.SortByID, disgord.OrderAscending, etc. You can find these flags in the flag.go file. `disgord_diagnosews` will store all the incoming and outgoing JSON data as files in the directory "diagnose-report/packets". The file format is as follows: unix_clientType_direction_shardID_operationCode_sequenceNumber[_eventName].json `json_std` switches out jsoniter with the json package from the standard library. `disgord_removeDiscordMutex` replaces mutexes in Discord structures with an empty mutex; this removes locking behaviour and any mutex code when compiled. `disgord_parallelism` activates built-in locking in Discord structure methods. E.g. Guild.AddChannel(*Channel) does not lock by default, but if you find yourself using these Discord data structures in a parallel environment, you can activate the internal locking to reduce race conditions. Note that activating `disgord_parallelism` and `disgord_removeDiscordMutex` at the same time will leave you with no locking, as `disgord_removeDiscordMutex` affects the same mutexes. `disgord_legacy` adds wrapper methods with the original Discord naming. E.g. for REST requests Disgord consistently uses update/create/get/delete/set, while Discord uses edit/update/modify/close/delete/remove/etc. So if you struggle to find a REST method, you can enable this build tag to gain access to the mentioned wrappers. `disgordperf` does some low-level tweaking that can help boost JSON unmarshalling and drops JSON validation of Discord responses/events. Other optimizations might take place as well. `disgord_websocket_gorilla` replaces the nhooyr/websocket dependency with gorilla/websocket for gateway communication. In addition to the typical REST endpoints for deleting data, you can also use Client/Session.DeleteFromDiscord(...) for basic deletions. If you need to delete a specific range of messages, or anything as complex as that, you can't use .DeleteFromDiscord(...). Not every struct has implemented the interface that allows you to call DeleteFromDiscord. Do not fret: if you try to pass a type that doesn't qualify, you get a compile error.
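Registering a handler via the `.On` method, as introduced above, looks roughly like this. Treat it as a sketch: the Config fields, event constants, and New/Reply signatures vary between disgord versions.

	package main

	import (
		"context"
		"os"

		"github.com/andersfylling/disgord"
	)

	func main() {
		client := disgord.New(disgord.Config{BotToken: os.Getenv("DISCORD_TOKEN")})

		// Fires for every new message; replies "hello" to "ping".
		client.On(disgord.EvtMessageCreate, func(s disgord.Session, evt *disgord.MessageCreate) {
			if evt.Message.Content == "ping" {
				evt.Message.Reply(context.Background(), s, "hello")
			}
		})

		// Blocks until the process is interrupted.
		client.StayConnectedUntilInterrupted(context.Background())
	}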
Package quaternary implements a smaller, immutable map which cannot be iterated.
Package color colorizes your terminal strings. Default Brushes are available in the sub-package brush for your convenience. You can invoke them directly: ...or you can create new ones! Create a Style, which has convenience methods: Style.WithForeground or WithBackground returns a new Style with the applied Paint. Styles are immutable, so the original one is left unchanged: Style.Brush gives you a Brush that you can invoke directly to colorize strings: You can use it with all sorts of things: That's it!
This package is the root package of the govmomi library. The library is structured as follows: The minimal usable functionality is available through the vim25 package. It contains subpackages that contain generated types, managed objects, and all available methods. The vim25 package is entirely independent of the other packages in the govmomi tree -- it has no dependencies on its peers. The vim25 package itself contains a client structure that is passed around throughout the entire library. It abstracts a session and its immutable state. See the vim25 package for more information. The session package contains an abstraction for the session manager that allows a user to log in and log out. It also provides access to the current session (i.e. to determine if the user is in fact logged in). The object package contains wrappers for a selection of managed objects. The constructors of these objects all take a *vim25.Client, which they pass along to derived objects, if applicable. The govc package contains the govc CLI. The code in this tree is not intended to be used as a library. Any functionality that govc contains that _could_ be used as a library function but isn't, _should_ live in a root level package. Other packages, such as "event", "guest", or "license", provide wrappers for the respective subsystems. They are typically not needed in normal workflows, so they are kept outside the object package.
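Connecting through the root package's client, which wraps a *vim25.Client plus a session, can be sketched as follows (error handling kept minimal; the URL is a placeholder):

	package main

	import (
		"context"
		"fmt"
		"net/url"

		"github.com/vmware/govmomi"
	)

	func main() {
		ctx := context.Background()
		u, err := url.Parse("https://user:pass@vcenter.example.com/sdk")
		if err != nil {
			panic(err)
		}
		// NewClient logs in via the session manager and returns a client
		// embedding the underlying *vim25.Client.
		c, err := govmomi.NewClient(ctx, u, true /* insecure */)
		if err != nil {
			panic(err)
		}
		fmt.Println(c.ServiceContent.About.FullName)
	}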
Package dot implements data synchronization of user-defined types using operational transformation (OT). Please see https://github.com/dotchain/dot for a tutorial on how to use DOT. The core functionality is spread out between dot/changes, dot/streams, dot/refs and dot/ops, but this package exposes simple client and server implementations for common use cases: Server example Client example DOT uses immutable values. Every Value must implement the changes.Value interface, which is a single Apply method that returns the result of applying a mutation (while leaving the original value effectively unchanged). If the underlying type behaves like a collection (such as with slices), the type must also implement some collection-specific methods specified in the changes.Collection interface. Most actual types are likely to be structs or slices with boilerplate implementations of the interfaces. The x/dotc package has a code generator which can emit such boilerplate implementations, simplifying this task. The changes package implements a set of simple changes (Replace, Splice and Move). Richer changes are expected to be built up by composition via changes.ChangeSet (which is a sequence of changes) and changes.PathChange (which modifies a value at a path). Changes are immutable too, and generally are meant to not maintain any reference to the value they apply to. While custom changes are possible (they have to implement the changes.Custom interface), they are expected to be rare, as the default set of change types covers a vast variety of scenarios. The core logic of DOT is in the Merge methods of changes: they guarantee that if two independent changes are made to a value, the deviation in the values can be converged. The basic property of any two changes (on the same value) is that merging them yields transformed changes which, applied in either order, converge to the same value. Care must be taken with custom changes to ensure that this property is preserved. Streams represent the sequence of changes associated with a single value. Stream instances behave like they are immutable: when a change happens, a new stream instance captures the change. Streams also support multiple writers: it is possible to make two independent changes to the same stream instance. In this case, the newly created stream instances only capture their respective changes, but both have a "Next" value that converges to the same value. That is, the two separate streams implicitly have the changes from each other (but transformed through the Merge method). This allows streams to perform quite nicely as convergent data structures without much syntax overhead: The streams package provides a generic Stream implementation (via the New function) which implements the idea of a sequence of convergent changes. But much of the power of streams is in having strongly typed streams, where the stream is associated with a strongly typed value. The streams package provides simple text streams (S8 and S16) as well as Bool and Counter types. Richer types like structs and slices can be converted to their stream equivalents rather mechanically, and this is done by the x/dotc package -- using code generation. Substreams are streams that refer to a particular field of a parent stream. For example, if the parent value is a struct with a "Done" field, it is possible to treat the "Done stream" as the changes scoped to this field. This allows code to be written much more cleanly. See the https://github.com/dotchain/dot#toggling-complete section of the documentation for an example. Streams support branching (a la Git) and folding. See the examples!
Streams also support references. A typical use case is maintaining the user cursor within a region of text. When remote changes happen to the text, the cursor needs to be updated. In fact, when one takes a substream of an element of an array, the array index needs to be automatically managed (i.e. insertions into the array before the index should automatically update the index, etc.). This is managed within streams using references. A particular value can be reconstituted from the sequence of changes to that value. In DOT, only these changes are stored, and only in an append-only log. This makes the backend rather simple and largely agnostic of application types. See https://github.com/dotchain/dot#server for example code.
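The convergence property described above can be sketched with the changes package, assuming dotchain's documented Merge contract (c1.Merge(c2) returns the transformed pair c2x, c1x) and the types.S8 string value:

	package main

	import (
		"fmt"

		"github.com/dotchain/dot/changes"
		"github.com/dotchain/dot/changes/types"
	)

	func main() {
		initial := types.S8("hello")

		// Two independent changes to the same value.
		c1 := changes.Splice{Offset: 5, Before: types.S8(""), After: types.S8(" world")}
		c2 := changes.Splice{Offset: 0, Before: types.S8("h"), After: types.S8("H")}

		// Merge yields transformed changes so both orders converge.
		c2x, c1x := c1.Merge(c2)

		v1 := initial.Apply(nil, c1).Apply(nil, c2x)
		v2 := initial.Apply(nil, c2).Apply(nil, c1x)
		fmt.Println(v1 == v2, v1) // true Hello world
	}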
Package pglogrepl implements PostgreSQL logical replication client functionality. pglogrepl uses package github.com/jackc/pgconn as its underlying PostgreSQL connection. Use pgconn to establish a connection to PostgreSQL and then use the pglogrepl functions on that connection. Proper use of this package requires understanding the underlying PostgreSQL concepts. See https://www.postgresql.org/docs/current/protocol-replication.html.
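A short sketch of opening a replication connection and identifying the system, assuming pglogrepl's IdentifySystem helper; the ?replication=database parameter puts the pgconn connection into logical replication mode:

	package main

	import (
		"context"
		"fmt"

		"github.com/jackc/pgconn"
		"github.com/jackc/pglogrepl"
	)

	func main() {
		ctx := context.Background()
		conn, err := pgconn.Connect(ctx,
			"postgres://user:pass@localhost/mydb?replication=database")
		if err != nil {
			panic(err)
		}
		defer conn.Close(ctx)

		sysident, err := pglogrepl.IdentifySystem(ctx, conn)
		if err != nil {
			panic(err)
		}
		fmt.Println("system:", sysident.SystemID, "LSN:", sysident.XLogPos)
	}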
Package iavl implements a versioned, snapshottable (immutable) AVL+ tree for persisting key-value pairs. The tree is not safe for concurrent use, and must be guarded by a Mutex or RWLock as appropriate - the exception is immutable trees returned by MutableTree.GetImmutable() which are safe for concurrent use as long as the version is not deleted via DeleteVersion(). Basic usage of MutableTree: Proof of existence: Proof of absence: Now we delete an old version: Can't create a proof of absence for a version we no longer have:
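Taking a read-only snapshot for concurrent readers, per the note above, might look like this (a fragment continuing from a tree with a saved version; assuming the cosmos/iavl API, where Get's signature varies across versions):

	// version is the int64 returned by a previous tree.SaveVersion().
	snapshot, err := tree.GetImmutable(version)
	if err != nil {
		panic(err)
	}
	go func() {
		// Safe for concurrent reads, as long as this version is not
		// removed via tree.DeleteVersion(version) while still in use.
		_, _ = snapshot.Get([]byte("alice"))
	}()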
Package vogen provides a code generator for Value Objects in Go. Value Objects are immutable objects that represent a value.
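For illustration, a hand-written Value Object of the kind vogen generates might look like this (not vogen's actual output):

	// Money is an immutable Value Object: its field is unexported and it
	// has no setters, so a value cannot change after construction.
	type Money struct {
		amount int64 // minor units, e.g. cents
	}

	func NewMoney(amount int64) Money { return Money{amount: amount} }

	func (m Money) Amount() int64 { return m.amount }

	// Add returns a new value instead of mutating the receiver.
	func (m Money) Add(other Money) Money {
		return Money{amount: m.amount + other.amount}
	}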
Package decimal implements immutable decimal floating-point numbers. It is specifically designed for transactional financial systems and adheres to the principles set by ANSI X3.274-1996. Decimal is a struct with three fields: The numerical value of a decimal is calculated as follows: This approach allows the same numeric value to have multiple representations, for example, 1, 1.0, and 1.00, which represent the same value but have different scales and coefficients. The range of a decimal is determined by its scale. Here are the ranges for frequently used scales: Subnormal numbers are not supported to ensure peak performance. Consequently, decimals between -0.00000000000000000005 and 0.00000000000000000005 inclusive are rounded to 0. Special values such as NaN, Infinity, or negative zeros are not supported. This ensures that arithmetic operations always produce either valid decimals or errors. Each arithmetic operation occurs in two steps: The operation is initially performed using uint64 arithmetic. If no overflow occurs, the exact result is immediately returned. If overflow occurs, the operation proceeds to step 2. The operation is repeated with at least double precision using big.Int arithmetic. The result is then rounded to 19 digits. If no significant digits are lost during rounding, the inexact result is returned. If any significant digit is lost, an overflow error is returned. Step 1 improves performance by avoiding the cost associated with big.Int arithmetic. It is expected that, in transactional financial systems, most arithmetic operations will compute an exact result during step 1. The following rules determine the significance of digits during step 2: All transcendental functions are always computed with at least double precision using big.Int arithmetic. The result is then rounded to 19 digits. If no significant digits are lost during rounding, the inexact result is returned. If any significant digit is lost, an overflow error is returned. The following rules determine the significance of digits: Unlike many other decimal libraries, this package does not provide an explicit mathematical context. Instead, the context is implicit and can be approximately equated to the following settings: The equality of Etiny and Emin implies that this package does not support subnormal numbers. For all operations, the result is the one that would be obtained by computing the exact mathematical result with infinite precision and then rounding it to 19 digits using half-to-even rounding. This method ensures that rounding errors are evenly distributed between rounding up and down. In addition to implicit rounding, the package provides several methods for explicit rounding: See the documentation for each method for more details. All methods are panic-free and pure. Errors are returned in the following cases: Division by Zero: Unlike Go's standard library, Decimal.Quo, Decimal.QuoRem, Decimal.Inv, Decimal.AddQuo, and Decimal.SubQuo do not panic when dividing by 0. Instead, they return an error. Invalid Operation: Decimal.PowInt returns an error if 0 is raised to a negative power. Decimal.Sqrt returns an error if the square root of a negative decimal is requested. Decimal.Log returns an error when calculating the natural logarithm of a non-positive decimal. Overflow: Unlike standard integers, decimals do not "wrap around" when exceeding their maximum value. For out-of-range values, methods return an error. Errors are not returned in the following cases:
A. JSON: The package integrates seamlessly with standard encoding/json through the implementation of the encoding.TextMarshaler and encoding.TextUnmarshaler interfaces. Below is an example structure: This package marshals decimals as quoted strings, ensuring the preservation of the exact numerical value. Below is an example OpenAPI schema: B. XML: The package integrates with standard encoding/xml via the implementation of the encoding.TextMarshaler and encoding.TextUnmarshaler interfaces. Below is an example structure: The "xs:decimal" type can represent decimals in XML schema. It is possible to impose restrictions on the length of the decimals using the following type: C. Protocol Buffers: Protocol Buffers provide two formats to represent decimals. The first format represents decimals as numerical strings. The main advantage of this format is that it preserves trailing zeros. To convert between this format and decimals, use Parse and Decimal.String. Below is an example of a proto definition: The second format represents decimals as a pair of integers: one for the integer part and another for the fractional part. This format does not preserve trailing zeros and rounds decimals with more than nine digits in the fractional part. For conversion between this format and decimals, use NewFromInt64 and Decimal.Int64 with a scale argument of "9". Below is an example of a proto definition: D. SQL: The package integrates with the standard database/sql via the implementation of the sql.Scanner and driver.Valuer interfaces. To ensure accurate preservation of decimal scales, it is essential to choose appropriate column types: Below are the reasons for these preferences: PostgreSQL: Always use DECIMAL without precision or scale specifications, that is, avoid DECIMAL(p) or DECIMAL(p, s). DECIMAL accurately preserves the scale of decimals. SQLite: Prefer TEXT, since DECIMAL is just an alias for binary floating-point numbers. TEXT accurately preserves the scale of decimals. MySQL: Use DECIMAL(19, d), as DECIMAL is merely an alias for DECIMAL(10, 0). The downside of this format is that MySQL automatically rescales all decimals: it rounds values with more than d digits in the fractional part (using half away from zero) and pads with trailing zeros those with fewer than d digits in the fractional part. To prevent automatic rescaling, consider using VARCHAR(22), which accurately preserves the scale of decimals. This example demonstrates the advantage of decimals for financial calculations. It computes the sum 0.1 + 0.2 using both decimal and float64 arithmetic. In decimal arithmetic, the result is exactly 0.3, as expected. In float64 arithmetic, the result is 0.30000000000000004 due to floating-point inaccuracy (see the sketch below). This example calculates an approximate value of π using the Leibniz formula. The Leibniz formula is an infinite series that converges to π/4, given by the equation: 1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ... = π/4. This example computes the series up to the 500,000th term using decimal arithmetic and returns the approximate value of π. This example implements a simple calculator that evaluates mathematical expressions written in postfix notation. The calculator can handle basic arithmetic operations such as addition, subtraction, multiplication, and division.
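The first example, in code (assuming the govalues/decimal import path, with MustParse and Add):

	package main

	import (
		"fmt"

		"github.com/govalues/decimal"
	)

	func main() {
		d, e := decimal.MustParse("0.1"), decimal.MustParse("0.2")
		sum, err := d.Add(e)
		if err != nil {
			panic(err)
		}
		fmt.Println(sum) // 0.3 exactly

		a, b := 0.1, 0.2
		fmt.Println(a + b) // 0.30000000000000004 with float64
	}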
Package freeze enables the "freezing" of data, similar to JavaScript's Object.freeze(). A frozen object cannot be modified; attempting to do so will result in an unrecoverable panic. Freezing is useful for providing soft guarantees of immutability. That is: the compiler can't prevent you from mutating a frozen object, but the runtime can. One of the unfortunate aspects of Go is its limited support for constants: structs, slices, and even arrays cannot be declared as consts. This becomes a problem when you want to pass a slice around to many consumers without worrying about them modifying it. With freeze, you can guard against these unwanted or unintended behaviors. To accomplish this, the mprotect syscall is used. Sadly, this necessitates allocating new memory via mmap and copying the data into it. This performance penalty should not be prohibitive, but it's something to be aware of. In case it wasn't clear from the previous paragraph, this package is not intended to be used in production. A well-designed API is a much saner solution than freezing your data structures. I would even caution against using freeze in your automated testing, due to its platform-specific nature. freeze is best used for "one-off" debugging. Something like this: 1. Observe bug 2. Suspect that shared mutable data is the culprit 3. Call freeze.Object on the data after it is created 4. Run program again; it crashes 5. Inspect stack trace to identify where the data was modified 6. Fix bug 7. Remove call to freeze.Object Again: do not use freeze in production. It's a cool proof-of-concept, and it can be useful for debugging, but that's about it. Let me put it another way: freeze imports four packages: reflect, runtime, unsafe, and syscall (actually golang.org/x/sys/unix). Does that sound like a package you want to depend on? Okay, back to the real documentation: Functions are provided for freezing the three "pointer types": Pointer, Slice, and Map. Each function returns a copy of its input that is backed by protected memory. In addition, Object is provided for freezing recursively. Given a slice of pointers, Object will prevent modifications to both the pointer data and the slice data, while Slice merely does the latter. To freeze an object: Note that since foo does not contain any pointers, calling Pointer(f) would have the same effect here. It is recommended that, where convenient, you reassign the return value to its original variable, as with append. Otherwise, you will retain both the mutable original and the frozen copy. Likewise, to freeze a slice: Interfaces can also be frozen, since internally they are just pointers to objects. The effect of this is that the interface's pure methods can still be called, but impure methods cannot. Unfortunately, the impurity of a given method is defined by the implementation, not the interface. Even a String method could conceivably modify some internal state. Furthermore, the caveat about unexported struct fields (see below) applies here, so many exported objects cannot be completely frozen. This package depends heavily on the internal representations of the slice and map types. These objects are not likely to change, but if they do, this package will break. In general, you can't call Object on the same object twice. This is because Object will attempt to rewrite the object's internal pointers -- which is a memory modification. Calling Pointer or Slice twice should be fine. Object cannot descend into unexported struct fields.
It can still freeze the field itself, but if the field contains a pointer, the data it points to will not be frozen. Appending to a frozen slice will trigger a panic iff len(slice) < cap(slice). This is because appending to a full slice will allocate new memory. Unix is the only supported platform. Windows support is not planned, because it doesn't support a syscall analogous to mprotect.
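The debugging workflow above, in miniature (a sketch assuming the github.com/lukechampine/freeze import path; Unix only, and not for production, as stressed above):

	package main

	import (
		"fmt"

		"github.com/lukechampine/freeze"
	)

	func main() {
		xs := []int{1, 2, 3}
		// Reassign the result, as recommended above, so the mutable
		// original is not retained alongside the frozen copy.
		xs = freeze.Slice(xs).([]int)
		fmt.Println(xs[0]) // reads are fine
		xs[0] = 99         // panics: write to mprotect-ed memory
	}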
Package eris implements the Encoding for Robust Immutable Storage (ERIS), version 1.0.0, as described in the spec: ERIS is an encoding of arbitrary content into a set of uniformly sized, encrypted and content-addressed blocks, as well as a short identifier that can be encoded as a URN. The content can be reassembled from the blocks only with this identifier. The encoding is defined independently of any storage and transport layer or any specific application. This package does not implement any storage layer, but only concerns itself with the encoding and decoding of content. Users of this package are expected to implement their own storage layer, which can be as simple as files stored on disk. Examples of how to use this package are provided in the 'examples' directory. This package intentionally has no dependencies other than Go's x/crypto library for cryptographic primitives.
Package cbor is a modern CBOR codec (RFC 8949 & RFC 7049) with CBOR tags, Go struct tags (toarray/keyasint/omitempty), Core Deterministic Encoding, CTAP2, Canonical CBOR, float64->32->16, and duplicate map key detection. Encoding options allow "preferred serialization" by encoding integers and floats to their smallest forms (e.g. float16) when values fit. Struct tags like "keyasint", "toarray" and "omitempty" make CBOR data smaller and easier to use with structs. For example, the "toarray" tag makes struct fields encode to CBOR array elements. And "keyasint" makes a field encode to an element of a CBOR map with a specified int key. Latest docs can be viewed at https://github.com/fxamacker/cbor#cbor-library-in-go The Quick Start guide is at https://github.com/fxamacker/cbor#quick-start Function signatures identical to encoding/json include: Standard interfaces include: Custom encoding and decoding is possible by implementing standard interfaces for user-defined Go types. Codec functions are available at package level (using default options) or by creating modes from options at runtime. "Mode" in this API means a definite way of encoding (EncMode) or decoding (DecMode). EncMode and DecMode interfaces are created from EncOptions or DecOptions structs. Modes use immutable options to avoid side effects and simplify concurrency. Behavior of modes won't accidentally change at runtime after they're created. Modes are intended to be reused and are safe for concurrent use. EncMode and DecMode Interfaces Using Default Encoding Mode Using Default Decoding Mode Creating and Using Encoding Modes Predefined Encoding Options: https://github.com/fxamacker/cbor#predefined-encoding-options Encoding Options: https://github.com/fxamacker/cbor#encoding-options Decoding Options: https://github.com/fxamacker/cbor#decoding-options Struct tags like `cbor:"name,omitempty"` and `json:"name,omitempty"` work as expected. If both struct tags are specified then `cbor` is used. Struct tags like "keyasint", "toarray", and "omitempty" make it easy to use very compact formats like COSE and CWT (CBOR Web Tokens) with structs. For example, "toarray" makes struct fields encode to array elements. And "keyasint" makes struct fields encode to elements of a CBOR map with int keys. https://raw.githubusercontent.com/fxamacker/images/master/cbor/v2.0.0/cbor_easy_api.png Struct tags are listed at https://github.com/fxamacker/cbor#struct-tags-1 Over 375 tests are included in this package. Cover-guided fuzzing is handled by a private fuzzer that replaced fxamacker/cbor-fuzz years ago.
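Creating and reusing a decoding mode, as described above, might look like this (DupMapKey and DupMapKeyEnforcedAPF are documented DecOptions in fxamacker/cbor/v2; treat the sketch as illustrative):

	package main

	import (
		"fmt"

		"github.com/fxamacker/cbor/v2"
	)

	func main() {
		// Build an immutable DecMode once; it is safe for concurrent reuse.
		dm, err := cbor.DecOptions{
			DupMapKey: cbor.DupMapKeyEnforcedAPF, // reject duplicate map keys
		}.DecMode()
		if err != nil {
			panic(err)
		}

		var m map[string]int
		// 0xa1 0x61 0x61 0x01 is the CBOR map {"a": 1}.
		if err := dm.Unmarshal([]byte{0xa1, 0x61, 0x61, 0x01}, &m); err != nil {
			panic(err)
		}
		fmt.Println(m["a"]) // 1
	}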
Package pcre2 provides access to version 2 of the Perl Compatible Regular Expression library, PCRE. It implements two main types, Regexp and Matcher. Regexp objects store a compiled regular expression. They consist of two immutable parts: pcre and pcre_extra. Compile()/MustCompile() initialize pcre. Calling Study() on a compiled Regexp initializes pcre_extra. Compilation of regular expressions using Compile or MustCompile is slightly expensive, so these objects should be kept and reused, instead of being compiled from scratch for each matching attempt. CompileJIT and MustCompileJIT are way more expensive, because they run Study() after compiling a Regexp, but they tend to give much better performance: http://sljit.sourceforge.net/regex_perf.html Matcher objects keep the results of a match against a []byte or string subject. The Group and GroupString functions provide access to capture groups; both versions work regardless of whether the subject was a []byte or string, but the version with the matching type is slightly more efficient. Matcher objects contain some temporary space and refer to the original subject. They are mutable and can be reused (using Match, MatchString, Reset or ResetString). For details on the regular expression language implemented by this package and the flags defined below, see the PCRE documentation. http://www.pcre.org/pcre2.txt
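Typical usage might look like the following sketch, which assumes a Matcher-constructing method and flag arguments in the style of the classic Go PCRE bindings (the exact names and signatures here are assumptions, not confirmed API):

	// Compile once and reuse; call Study() (or use CompileJIT) for hot paths.
	re := pcre2.MustCompile(`h(a+)!`, 0)

	m := re.MatcherString("haaa!", 0)
	if m.Matches() {
		fmt.Println(m.GroupString(1)) // "aaa"
	}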