Package gedcom contains functionality for encoding, decoding, traversing, manipulating and comparing of GEDCOM data.

You can download the latest binaries for macOS, Windows and Linux on the Releases page: https://github.com/elliotchance/gedcom/releases This will not require you to install Go or any other dependencies. If you wish to build it from source you must install the dependencies with:

Decoding a GEDCOM stream: If you are reading from a file you can use NewDocumentFromGEDCOMFile:

On top of the raw document is a powerful API that takes care of the complex traversing of the Document. Here is a simple example: Some of the nodes in a GEDCOM file have been replaced with more function-rich types, such as names, dates, families and more.

Copying

All nodes (since they implement the Node interface) also implement the NodeCopier interface, which provides the ShallowCopy() function. A shallow copy returns a new node with all the same properties, but no children. There is also a DeepCopy function, which returns a new node with all recursive children copied as well. This ensures that the returned node can be manipulated without affecting the original node or any of its children.

Dates

Dates in GEDCOM files can be very complex as they can cater for many scenarios:

1. Incomplete, like "Dec 1943"
2. Anchored, like "Aft. 3 Sep 2003" or "Before 1923"
3. Ranges, like "Bet. 4 Apr 1823 and 8 Apr 1823"
4. Phrases, like "(Foo Bar)"

This package provides a very rich API for dealing with all kinds of dates in a meaningful and sensible way. Some notable features include:

1. All dates, even those that specify a specific day, have a minimum and maximum value that are their true bounds. This is especially important for larger date ranges like the whole month of "Jun 1945".
2. Upper and lower bounds of dates can be converted to the native Go time.Time object.
3. There is a Years function that provides a convenient way to normalise a date range into a number for easier distance and comparison measurements.
4. Algorithms for calculating the similarity of dates on a configurable parabola.

Encoding a Document

If you need the GEDCOM data as a string you can simply use the fmt.Stringer implementation:

Filtering

The Filter function recursively removes or manipulates nodes with a FilterFunction. Some examples of Filter functions include BlacklistTagFilter, OfficialTagFilter, SimpleNameFilter and WhitelistTagFilter.

Merging

There are several functions available that handle different kinds of merging:

- MergeNodes(left, right Node) Node: returns a new node that merges children from both nodes.
- MergeNodeSlices(left, right Nodes, mergeFn MergeFunction) Nodes: merges two slices based on the mergeFn. This allows more advanced merging when dealing with slices of nodes.
- MergeDocuments(left, right *Document, mergeFn MergeFunction) *Document: creates a new document with their respective nodes merged. You can use IndividualBySurroundingSimilarityMergeFunction with this to merge individuals, rather than just appending them all.

The MergeFunction is a type that can be received in some of the merging functions. The closure determines whether two nodes should be merged and what the result would be. Alternatively, it can also describe when two nodes should not be merged. You may certainly create your own MergeFunction, but there are some that are already included:

- IndividualBySurroundingSimilarityMergeFunction creates a MergeFunction that will merge individuals if their surrounding similarity is at least minimumSimilarity.
- EqualityMergeFunction is a MergeFunction that will return a merged node if the nodes are considered equal (with Equals).

Equality

Node.Equals performs a shallow comparison between two nodes. The implementation is different depending on the types of nodes being compared. You should see the specific documentation for the Node. Equality is not to be confused with the Is function seen on some of the nodes, such as Date.Is. The Is function is used to compare exact raw values in nodes. DeepEqual tests if left and right are recursively equal.

Diffing

CompareNodes recursively compares two nodes. For example: Produces a *NodeDiff that can be rendered with the String method:
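As a minimal sketch tying the above together: NewDocumentFromGEDCOMFile is named in this documentation, while the Individuals and Name accessors used below are assumptions about the package API rather than confirmed signatures.

    package main

    import (
        "fmt"
        "log"

        "github.com/elliotchance/gedcom"
    )

    func main() {
        // NewDocumentFromGEDCOMFile is named above; the (*Document, error)
        // return shape is assumed.
        doc, err := gedcom.NewDocumentFromGEDCOMFile("family.ged")
        if err != nil {
            log.Fatal(err)
        }

        // Individuals and Name are assumed accessors on the rich node types.
        for _, individual := range doc.Individuals() {
            fmt.Println(individual.Name().String())
        }
    }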
Package xlsxtra provides extra utilities for the xlsx package (https://github.com/tealeg/xlsx) to manipulate Excel files:

- Sort(), SortByHeaders(): multi-column (reverse) sort of selected rows. (Note that columns are one-based, not zero-based, to make reverse sort possible.)
- AddBool(), AddInt(), AddFloat(), ...: shortcuts to add a cell to a row with the right type.
- NewStyle(): create a style and set the ApplyFill, ApplyFont, ApplyBorder and ApplyAlignment automatically.
- NewStyles(): create a slice of styles based on a color palette.
- Sheets: access sheets by name instead of by index.
- Col: access cell values of a row by column header title.
- SetRowStyle: set the style of all cells in a row.
- ToString: convert an xlsx.Row to a slice of strings.

See the Col(umn) and Sort examples for a quick introduction.
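A hedged sketch of the typed-cell helpers. The import path and the AddInt/AddBool/ToString signatures here are assumptions based on the list above, not verified against the package; only the tealeg/xlsx calls are standard.

    package main

    import (
        "fmt"

        "github.com/stanim/xlsxtra" // import path assumed
        "github.com/tealeg/xlsx"
    )

    func main() {
        file := xlsx.NewFile()
        sheet, err := file.AddSheet("data")
        if err != nil {
            panic(err)
        }
        row := sheet.AddRow()
        xlsxtra.AddInt(row, 42)            // assumed signature: AddInt(*xlsx.Row, int)
        xlsxtra.AddBool(row, true)         // assumed signature: AddBool(*xlsx.Row, bool)
        fmt.Println(xlsxtra.ToString(row)) // assumed: ToString(*xlsx.Row) []string
    }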
Package linq provides methods for querying and manipulating slices, arrays, maps, strings, channels and collections. Authors: Alexander Kalankhodzhaev (kalan), Ahmet Alp Balkan
Package database provides a block and metadata storage database.

This package provides a database layer to store and retrieve block data and arbitrary metadata in a simple and efficient manner. The default backend, ffldb, has a strong focus on speed, efficiency, and robustness. It makes use of leveldb for the metadata, flat files for block storage, and strict checksums in key areas to ensure data integrity.

A quick overview of the features database provides is as follows:

The main entry point is the DB interface. It exposes functionality for transaction-based access and storage of metadata and block data. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver.

The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below.

The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root metadata bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged.

A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead.

The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket.

As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. The root metadata bucket is the upper-most bucket in which data is stored and is created at the same time as the database. Use the Metadata function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use.

This example demonstrates creating a new database and using a managed read-write transaction to store and retrieve metadata.
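A minimal sketch of that metadata round-trip, assuming the btcsuite import paths and the ffldb driver arguments (a database path plus a network) used by btcd; treat the paths and driver arguments as assumptions.

    package main

    import (
        "fmt"

        "github.com/btcsuite/btcd/database"
        _ "github.com/btcsuite/btcd/database/ffldb" // registers the ffldb driver
        "github.com/btcsuite/btcd/wire"
    )

    func main() {
        // Create takes the driver name plus driver-specific arguments.
        db, err := database.Create("ffldb", "exampledb", wire.MainNet)
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // Managed read-write transaction: commit/rollback is automatic.
        err = db.Update(func(tx database.Tx) error {
            return tx.Metadata().Put([]byte("mykey"), []byte("myvalue"))
        })
        if err != nil {
            panic(err)
        }

        // Managed read-only transaction.
        err = db.View(func(tx database.Tx) error {
            fmt.Printf("mykey: %s\n", tx.Metadata().Get([]byte("mykey")))
            return nil
        })
        if err != nil {
            panic(err)
        }
    }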
This example demonstrates creating a new database, using a managed read-write transaction to store a block, and using a managed read-only transaction to fetch the block.
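And a sketch of the block round-trip, assuming Tx exposes btcd-style StoreBlock/FetchBlock methods and that btcutil provides the Block wrapper; these names are assumptions from the btcsuite ecosystem rather than guaranteed by this text.

    package main

    import (
        "fmt"

        "github.com/btcsuite/btcd/btcutil" // path assumed; older releases use btcsuite/btcutil
        "github.com/btcsuite/btcd/chaincfg"
        "github.com/btcsuite/btcd/database"
        _ "github.com/btcsuite/btcd/database/ffldb"
        "github.com/btcsuite/btcd/wire"
    )

    func main() {
        db, err := database.Create("ffldb", "blockdb", wire.MainNet)
        if err != nil {
            panic(err)
        }
        defer db.Close()

        block := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)

        // Store the block in a managed read-write transaction.
        err = db.Update(func(tx database.Tx) error {
            return tx.StoreBlock(block)
        })
        if err != nil {
            panic(err)
        }

        // Fetch the raw block bytes back in a managed read-only transaction.
        err = db.View(func(tx database.Tx) error {
            raw, err := tx.FetchBlock(block.Hash())
            if err != nil {
                return err
            }
            fmt.Printf("fetched %d bytes\n", len(raw))
            return nil
        })
        if err != nil {
            panic(err)
        }
    }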
Package jin Copyright (c) 2020 eco. License that can be found in the LICENSE file. "your wish is my command"

Jin is a comprehensive JSON manipulation tool bundle. All functions are tested with random data with the help of Node.js. All test-path and test-value creation is automated with Node.js. Jin provides parse, interpret, build and format tools for JSON. Third-party packages are only used for the benchmarks; no dependencies are needed for the core functions. We benchmarked Jin against other similar packages. In the results, Jin is the fastest (op/ns) and more memory-friendly than the others (B/op). For more information please take a look at the BENCHMARK section below.

WHAT IS NEW?

7 new functions tested and added to the package. 06.04.2020

- GetMap() get objects as a map[string]string structure with key/value pairs
- GetAll() get only specific keys' values
- GetAllMap() get only specific keys with a map[string]string structure
- GetKeys() get object keys as a string array
- GetValues() get object values as a string array
- GetKeysValues() get object keys and values as separate string arrays
- Length() get the length of a JSON array

INSTALLATION

And you are good to go. Import and start using.

The major difference between parsing and interpreting is that a parser has to read all the data before answering your needs, while an interpreter only reads up to the point where it finds the data you need. With a parser, once the parse is complete you can access any data in no time, but there is a time cost to parsing all the data, and this cost increases as the data content grows. If you need to access all the keys of a JSON document, we simply recommend the Parser. But if you only need to access some of the keys, we strongly recommend the Interpreter; it will be much faster and much more memory-friendly than the parser.

The Interpreter is the core element of this package; there is no need to create an Interpreter type, just call whichever function you want. First let's look at the general function parameters. We are going to use the Get() function to access the value that the path points to. In this case, 'jin'. Path values can consist of hard-coded values. The Get() function's return type is []byte, but all other variations of return types are implemented as separate functions. For example, if you need "value" as a string, use GetString().

The Parser is another alternative for JSON manipulation. We recommend this structure when you need to access all or most of the keys in the JSON. The Parser constructor needs only one parameter. We can parse it with the Parse() function. Let's look at Parser.Get(). About path values, see above. All the return type variations of the Parser.Get() function exist, just like for the Interpreter. To return a string, use Parser.GetString() like this,

Other useful functions of Jin: Add(), AddKeyValue(), Set(), SetKey(), Delete(), Insert(), IterateArray(), IterateKeyValue() and Tree(). Let's look at the IterateArray() function.

There are two formatting functions, Flatten() and Indent(). Indent() adds indentation to JSON for nicer visualization, and Flatten() removes this indentation.

Control functions are a simple and easy way to check the value types of any path. For example, IsArray(). Or you can use GetType().

There are lots of JSON build functions in this package and all of them have their own examples. We just want to mention a couple of them. Scheme is a simple and powerful tool for creating JSON schemes.

Testing is very important for this type of package and it shows how reliable the package is. For that reason we use Node.js for unit testing. Let's look at the folder arrangement and the working principle.
- test/ folder: test-json.json is a temporary file for testing. All other test-cases are copied here under this name so they can be processed by test-case-creator.js. test-case-creator.js is the core path & value creation mechanism. When it is executed with the executeNode() function, it reads the test-json.json file and generates the paths and values from this file's content. With command line arguments it can generate different paths and values. As a result, two files are created by this process: the first is test-json-paths.json and the second is test-json-values.json. test-json-paths.json has all the path values; test-json-values.json has all the values corresponding to those path values.
- tests/ folder: All files in this folder are test-cases. But that doesn't mean you can't change anything; on the contrary, all test-cases are created automatically based on this folder's content. You can add or remove any .json file that you want. All Go-side test-case automation functions are in the core_test.go file.

This package was developed with Node.js v13.7.0. Please make sure that your machine has a valid version of Node.js before testing. All functions and methods are tested with complicated, randomly generated .json files, like this:

Most JSON packages do not even run properly with this kind of JSON stream. We did not see such packages as competitors, and that's why we did not even bother to benchmark against them.

Benchmark results.

- The Benchmark prefix is removed from function names to make room for the results.
- The benchmark between 'buger/jsonparser' and 'ecoshub/jin' uses the same payload (JSON test-cases) that the 'buger/jsonparser' package uses for its own benchmarks.

We are currently working on:

- Marshal() and Unmarshal() functions.
- An http.Request parser/interpreter.
- Builder functions for http.ResponseWriter.

If you want to contribute to this work, feel free to fork it. We want to fill this section with contributors.
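A small hedged sketch of the Interpreter style described above, assuming Get has the signature Get(json []byte, path ...string) ([]byte, error); the signature is an assumption from the description, not a verified one.

    package main

    import (
        "fmt"

        "github.com/ecoshub/jin"
    )

    func main() {
        json := []byte(`{"user":{"name":"eco","age":28}}`)

        // Get returns the raw value at the given path as []byte; the
        // interpreter stops reading as soon as the value is found.
        value, err := jin.Get(json, "user", "name")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(string(value)) // eco
    }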
Package mangle is a sanitization / data masking library for Go (golang). This library provides functionality to sanitize text and HTML data. This can be integrated into tools that export or manipulate databases/files so that confidential data is not exposed to staging, development or testing systems.

Install mangle:

Try the command line tool:

For basic usage of mangle as a Go library for strings, see the Example below. For usage using io.Reader/io.Writer for text and html, see the manglefile source code.

- Secure - every non-punctuation, non-tag word is replaced, with no reuse of source words.
- Fast - around 350,000 words/sec ("War and Peace" mangled in < 2s).
- Deterministic - words are selected from a corpus deterministically; subsequent runs will result in the same word replacements as long as the user secret is not changed. This allows automated tests to run against the sanitized data, as well as efficient rsyncs of mangled versions of large, slowly changing data sets.
- Accurate - maintains a high level of resemblance to the source text, to allow for realistic testing of search indexes, page layout etc. Replacement words are all natural words from a user-defined corpus. Source text word length, punctuation, title case and all caps, HTML tags and attributes are maintained.

Reads a corpus from a file, initializes a Mangle, runs a string through it and prints the output.
Package morestrings implements additional functions to manipulate UTF-8 encoded strings, beyond what is provided in the standard "strings" package.
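The doc names no functions, so as an illustration here is the canonical morestrings example from the Go "How to Write Go Code" tutorial; both the import path and the ReverseRunes function are assumptions carried over from that tutorial.

    package main

    import (
        "fmt"

        "example.com/user/morestrings" // import path hypothetical
    )

    func main() {
        // ReverseRunes reverses a string rune by rune, preserving
        // multi-byte UTF-8 characters (assumed from the tutorial package).
        fmt.Println(morestrings.ReverseRunes("!oG ,olleH"))
    }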
Package luar provides a convenient interface between Lua and Go. It uses Alessandro Arzilli's golua (https://github.com/aarzilli/golua).

Most Go values can be passed to Lua: basic types, strings, complex numbers, user-defined types, pointers, composite types, functions, channels, etc. Conversely, most Lua values can be converted to Go values. Composite types are processed recursively.

Methods can be called on user-defined types. These methods will be callable using _dot-notation_ rather than colon notation. Arrays, slices, maps and structs can be copied as tables, or alternatively passed over as Lua proxy objects which can be naturally indexed. In the case of structs and string maps, fields have priority over methods. Use 'luar.method(<value>, <method>)(<params>...)' to call shadowed methods. Unexported struct fields are ignored. The "lua" tag is used to match fields in struct conversion.

You may pass a Lua table to an imported Go function; if the table is 'array-like' then it is converted to a Go slice; if it is 'map-like' then it is converted to a Go map. Pointer values encode as the value pointed to when unproxified.

Usual operators (arithmetic, string concatenation, pairs/ipairs, etc.) work on proxies too. The type of the result depends on the type of the operands. The rules are as follows:

- If the operands are of the same type, use this type.
- If one type is a Lua number, use the other, user-defined type.
- If the types are different and not Lua numbers, convert to a complex proxy, a Lua number, or a Lua string according to the result kind.

Channel proxies can be manipulated with the following methods:

- close(): Close the channel.
- recv() value: Fetch and return a value from the channel.
- send(x value): Send a value in the channel.

Complex proxies can be manipulated with the following attributes:

- real: The real part.
- imag: The imaginary part.

Slice proxies can be manipulated with the following methods/attributes:

- append(x ...value) sliceProxy: Append the elements and return the new slice. The elements must be convertible to the slice element type.
- cap: The capacity of the slice.
- slice(i, j integer) sliceProxy: Return the sub-slice that ranges from 'i' to 'j' excluded, starting from 1.

String proxies can be browsed rune by rune with the pairs/ipairs functions. These runes are encoded as strings in Lua. Indexing a string proxy (starting from 1) will return the corresponding byte as a Lua string. String proxies can be manipulated with the following method:

- slice(i, j integer) sliceProxy: Return the sub-string that ranges from 'i' to 'j' excluded, starting from 1.

Pointers to structs and structs within pointers are automatically dereferenced. Slices must be looped over with 'ipairs'.
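A minimal sketch of passing Go values to Lua, assuming the stevedonovan/luar import path and the Init/Register/Map API that package exposes; the exact DoString error handling is kept simple.

    package main

    import (
        "fmt"
        "log"

        "github.com/stevedonovan/luar" // import path assumed
    )

    func main() {
        // Init returns a *lua.State with the luar machinery installed.
        L := luar.Init()
        defer L.Close()

        // Register Go values into the global Lua namespace ("" = globals).
        luar.Register(L, "", luar.Map{
            "Print": fmt.Println,
            "msg":   "hello from Go",
        })

        if err := L.DoString(`Print(msg)`); err != nil {
            log.Fatal(err)
        }
    }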
Package pathlist manipulates lists of filepaths joined by the OS-specific ListSeparator (pathlists), usually found in PATH or GOPATH environment variables. See also package env ( https://godoc.org/gopkg.in/pathlist.v0/env ) for helper functions and examples for using this package.

The package complements the functionality of the standard path/filepath with:

Pathlist handles the quoting/unquoting of filepaths as required on Windows. The package uses two separate types to make the context of string parameters (single/list, quoted/unquoted) explicit:

This also helps prevent some common mistakes with pathlist manipulation that can lead to unexpected behavior and security issues. Specifically, a naive implementation could have the following problems:

In addition to using distinct types, filepath arguments are validated. On Unix, filepaths containing the separator (':') cannot be used as part of a List; these trigger an Error with Cause returning ErrSep. On Windows, raw (unquoted) filepaths must be used; an Error with Cause returning ErrQuote is issued otherwise. Calls returning (List, error) can be wrapped using Must when the validation is known to succeed in advance. Using this package, the two examples above can be written as:

Note that even though the error handling API is common across platforms, the behavior is somewhat asymmetric. On Unix, ':' is generally a valid (although uncommon) character in filepaths, so an error typically indicates input error and the user should be notified. In contrast, on Windows all filepaths can be included in a list; an error generally means that the caller is mixing quoted and unquoted paths, which likely indicates a bug and that the implementation should be fixed.
Package structs implements a generic interface for manipulating Go structs. The related API is powered by and inspired from the Go reflection package.

It was initially developed to provide generic field getters and setters for any struct. It has grown into a generic abstraction layer over structs powered by the reflect package. While not particularly performant compared to other structs packages, it provides a "natural feel" API, convenience, flexibility and a simpler alternative to using the reflect package directly.

Throughout the documentation, the t variable is assumed to be of type struct T. T has two fields: the first is a string field called "Property" and the second is a nested struct via a pointer called "Nested". While "Property" is set to the value "Test", "Number" inside "Nested" is set to 123456. NOTE: the same applies to all other variables subsequently declared.

The following inputs are supported throughout the package: NOTE: See the IndirectStruct method for details on the above scenarios.

There are two main ways to use this package to manipulate structs: either by using the objects and methods, or by using helper functions. Objects and methods: this approach requires calling the New method first, which gives access to the API of the StructValue, StructField and StructFields objects. Helper functions: a set of self-contained functions that hide internal implementations powered by the StructValue object, all declared in the file called helpers.go. The following table summarizes the above statements:

All objects in this package are linked to the main StructValue object. The relationships between each one of them are as follows: NOTE: For an exhaustive illustration of package capabilities, please refer to the following file: https://github.com/roninzo/structs/example_test.go.

The StructValue object is the starting point for manipulating structs using objects and methods. To initialize the StructValue object, make use of the New method, followed by handling any potential error encountered in this process. Example: From there, several methods provide information about the struct. Example: NOTE: When possible, all object method names were inspired by the reflect package, trying to reduce the learning curve. Example:

The StructValue object is also the gateway to the other two objects declared in this package: StructField and StructFields. Examples:

The StructField object represents one field in the struct, and provides getter and setter methods. Before reading data out of struct fields generically, it is recommended to get extra information about the struct field. This is useful if the type of the field is not known at runtime. Example: However, if nested structs are present inside t, sub-fields are not made available directly. This means that nested structs must be loaded explicitly with the Struct method. Example:

The StructFields object represents all the fields in the struct. Its main purpose at the moment is to loop through StructField objects in a "for range" loop. Example: Its other purpose is to return all struct field names. Example:

The StructRows object represents a slice of structs and mostly follows the database/sql API. Its main purpose is to loop through the elements of a submitted slice of structs. Each of those elements can then be manipulated in the same manner as with StructValue. Example:

The helper methods in helpers.go provide advanced functions for transforming structs or wrap StructValue object functionalities. Examples:
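A hedged sketch of the entry point: New and its error return come from the description above, while the Name accessor is an assumption inspired by the stated reflect-like naming.

    package main

    import (
        "fmt"

        "github.com/roninzo/structs"
    )

    type T struct {
        Property string
    }

    func main() {
        t := T{Property: "Test"}

        // New is the documented entry point; the (*StructValue, error)
        // return shape is assumed from "handling any potential error".
        v, err := structs.New(&t)
        if err != nil {
            panic(err)
        }

        // Name() is an assumed reflect-style accessor for the type name.
        fmt.Println(v.Name()) // "T"
    }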
Objx - Go package for dealing with maps, slices, JSON and other data.

Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes a powerful `Get` method (among others) that allows you to easily and quickly get access to data within the map, without having to worry too much about type assertions, missing data, default values etc.

Objx uses a predictable pattern to make accessing data from within a `map[string]interface{}` easy. Call one of the `objx.` functions to create your `objx.Map` to get going:

NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong; the rest will be optimistic and try to figure things out without panicking.

Use `Get` to access the value you're interested in. You can use dot and array notation too: Once you have sought the `Value` you're interested in, you can use the `Is*` methods to determine its type. Or you can just assume the type, and use one of the strong type methods to extract the real value: If there's no value there (or if it's the wrong type) then a default value will be returned, or you can be explicit about the default value.

If you're dealing with a slice of data as a value, Objx provides many useful methods for iterating, manipulating and selecting that data. You can find out more by exploring the index below.

A simple example of how to use Objx: Since `objx.Map` is a `map[string]interface{}` you can treat it as such. For example, to `range` the data, do what you would expect:
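A minimal sketch of the `Get` pattern described above, using the documented FromJSON constructor and the default-value form of `Str`:

    package main

    import (
        "fmt"

        "github.com/stretchr/objx"
    )

    func main() {
        m, err := objx.FromJSON(`{"name":{"first":"Tyler","last":"Bunnell"}}`)
        if err != nil {
            panic(err)
        }

        // Dot notation digs into nested maps; Str returns "" (or a
        // supplied default) when the value is missing or the wrong type.
        fmt.Println(m.Get("name.first").Str())
        fmt.Println(m.Get("name.middle").Str("n/a"))
    }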
Package linq provides methods for querying and manipulating slices, arrays, maps, strings, channels and collections. Authors: Alexander Kalankhodzhaev (kalan), Ahmet Alp Balkan, Cleiton Marques Souza.
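A short hedged sketch of a query pipeline, assuming the v3-style go-linq API (From/Where/Select/ToSlice); the import path is an assumption since this doc predates the v3 module layout.

    package main

    import (
        "fmt"

        "github.com/ahmetb/go-linq/v3" // import path assumed; package name is linq
    )

    func main() {
        var squares []int

        // Keep the even numbers, square them, and materialize the result.
        linq.From([]int{1, 2, 3, 4, 5}).
            Where(func(i interface{}) bool { return i.(int)%2 == 0 }).
            Select(func(i interface{}) interface{} { return i.(int) * i.(int) }).
            ToSlice(&squares)

        fmt.Println(squares) // [4 16]
    }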
Package luar provides a convenient interface between Lua and Go. It uses Alessandro Arzilli's golua (https://github.com/aarzilli/golua).

Most Go values can be passed to Lua: basic types, strings, complex numbers, user-defined types, pointers, composite types, functions, channels, etc. Conversely, most Lua values can be converted to Go values. Composite types are processed recursively.

Methods can be called on user-defined types. These methods will be callable using _dot-notation_ rather than colon notation. Arrays, slices, maps and structs can be copied as tables, or alternatively passed over as Lua proxy objects which can be naturally indexed. In the case of structs and string maps, fields have priority over methods. Use 'luar.method(<value>, <method>)(<params>...)' to call shadowed methods. Unexported struct fields are ignored. The "lua" tag is used to match fields in struct conversion.

You may pass a Lua table to an imported Go function; if the table is 'array-like' then it is converted to a Go slice; if it is 'map-like' then it is converted to a Go map. Pointer values encode as the value pointed to when unproxified.

Usual operators (arithmetic, string concatenation, pairs/ipairs, etc.) work on proxies too. The type of the result depends on the type of the operands. The rules are as follows:

- If the operands are of the same type, use this type.
- If one type is a Lua number, use the other, user-defined type.
- If the types are different and not Lua numbers, convert to a complex proxy, a Lua number, or a Lua string according to the result kind.

Channel proxies can be manipulated with the following methods:

- close(): Close the channel.
- recv() value: Fetch and return a value from the channel.
- send(x value): Send a value in the channel.

Complex proxies can be manipulated with the following attributes:

- real: The real part.
- imag: The imaginary part.

Slice proxies can be manipulated with the following methods/attributes:

- append(x ...value) sliceProxy: Append the elements and return the new slice. The elements must be convertible to the slice element type.
- cap: The capacity of the slice.
- sub(i, j integer) sliceProxy: Return the sub-slice that ranges from 'i' to 'j' included. This matches Lua's behaviour, but not Go's.

String proxies can be browsed rune by rune with the pairs/ipairs functions. These runes are encoded as strings in Lua. String proxies can be manipulated with the following method:

- sub(i, j integer) sliceProxy: Return the sub-string that ranges from 'i' to 'j' included. This matches Lua's behaviour, but not Go's.

Slices must be looped over with 'ipairs'.
Package seshcookie enables you to associate session-state with HTTP requests while keeping your server stateless. Because session-state is transferred as part of the HTTP request (in a cookie), state can be seamlessly maintained between server restarts or across load balancing.

It's inspired by Beaker (http://pypi.python.org/pypi/Beaker), which provides a similar service for Python webapps. The cookies are authenticated and encrypted (using AES-GCM) with a key derived from a string provided to the NewHandler function. This makes seshcookie reliable and secure: session contents are opaque to users and cannot be manipulated or forged by third parties.

Storing session-state in a cookie makes building some apps trivial, like this example that tells a user how many times they have visited the site:
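A hedged sketch of that visit counter: NewHandler is named above, but its exact signature, the nil Config argument, and the GetSession accessor are all assumptions about the package API.

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/bpowers/seshcookie" // import path assumed
    )

    func main() {
        visited := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // GetSession is assumed to return the request's decrypted
            // session as a map-like value.
            session := seshcookie.GetSession(r.Context())
            count, _ := session["count"].(int)
            count++
            session["count"] = count
            fmt.Fprintf(w, "you have visited %d time(s)", count)
        })

        // NewHandler wraps a handler, deriving the AES-GCM key from the
        // supplied secret string; the Config argument is assumed optional.
        http.Handle("/", seshcookie.NewHandler(visited, "secret key", nil))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }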
Package dec is a Go wrapper around the decNumber library (arbitrary precision decimal arithmetic). For more information about the decNumber library, and detailed documentation, see http://speleotrove.com/decimal/dec.html

It implements the General Decimal Arithmetic Specification in ANSI C. This specification defines a decimal arithmetic which meets the requirements of commercial, financial, and human-oriented applications. It also matches the decimal arithmetic in the IEEE 754 Standard for Floating Point Arithmetic. The library fully implements the specification, and hence supports integer, fixed-point, and floating-point decimal numbers directly, including infinite, NaN (Not a Number), and subnormal values. Both arbitrary-precision and fixed-size representations are supported.

General usage notes:

The Number format is optimized for efficient processing of relatively short numbers; it does, however, support arbitrary precision (up to 999,999,999 digits) and arbitrary exponent range (Emax in the range 0 through 999,999,999 and Emin in the range -999,999,999 through 0). Mathematical functions (for example Exp()) as identified below are restricted more tightly: digits, emax, and -emin in the context must be <= MaxMath (999999), and their operand(s) must be within these bounds.

Logical functions are further restricted; their operands must be finite, positive, have an exponent of zero, and all digits must be either 0 or 1. The result will only contain digits which are 0 or 1 (and will have exponent=0 and a sign of 0).

Operands to operator functions are never modified unless they are also specified to be the result number (which is always permitted). Other than that case, operands must not overlap.

Go implementation details:

The decimal32, decimal64 and decimal128 types are merged into Single, Double and Quad, respectively.

Contexts are created with an immutable precision (i.e. number of digits). If one needs to change precision on the fly, discard the existing context and create a new one with the required precision.

From a programming standpoint, any initialized Number is a valid operand in arithmetic operations, regardless of the settings or existence of its creator Context (not to be confused with having a valid value in a given arithmetic operation).

Arithmetic functions are Number methods. The value of the receiver of the method will be set to the result of the operation. For example: Arithmetic methods always return the receiver in order to allow chained calls: Using the same Number as operand and result, as in n.Multiply(n, n, ctx), is legal and will not produce unexpected results.

A few functions, like the context status manipulation functions, have been moved to their own type (Status). The C call decContextTestStatus(ctx, mask) is therefore replaced by ctx.Status().Set(mask) in the Go implementation. The same goes for decNumberClassToString(number), which is replaced by number.Class().String() in Go.

Error handling: although most arithmetic functions can cause errors, the standard Go error handling is not used in its idiomatic form. That is, arithmetic functions do not return errors. Instead, the type of the error is ORed into the status flags in the current context (Context type). It is the responsibility of the caller to clear the status flags as required. The result of any routine which returns a number will always be a valid number (which may be a special value, such as an Infinity or NaN). This permits the use of far fewer error checks; a single check for a whole computation is often enough. To check for errors, get the Context's status with the Status() function (see the Status type), or use the Context's ErrorStatus() function.

The package provides facilities for managing free-lists of Numbers in order to relieve pressure on the garbage collector in computation-intensive applications. NumberPool is in fact a simple wrapper around a *Context and a sync.Pool (or the lighter util.Pool provided in the util subpackage); NumberPool will automatically cast the return value of Get() to the desired type. For example: Note the use of pool.Context in the last statement. If a particular implementation needs always-initialized numbers (like any new Go variable being initialized to the 0 value of its type), the pool's New function can be set, for example, to:

If an application needs to change its arithmetic precision on the fly, any NumberPool built on top of the affected Contexts will need to be discarded and recreated along with the Context. This will not affect existing numbers, which can still be used as valid operands in arithmetic functions.

Go re-implementation of decNumber's example1.c - simple addition. Convert the first two argument words to decNumber, add them together, and display the result.

Extended re-implementation of decNumber's example2.c - compound interest, with added error handling from example3.c.

Go re-implementation of decNumber's example5.c - compressed formats.

Go re-implementation of decNumber's example6.c - Packed Decimal numbers. This example reworks Example 2, starting and ending with Packed Decimal numbers.

Go re-implementation of decNumber's example7.c - using decQuad to add two numbers together.

Go re-implementation of decNumber's example8.c - using Quad with Number.
Sprig: Template functions for Go.

This package contains a number of utility functions for working with data inside of Go `html/template` and `text/template` files. To add these functions, use the `template.Funcs()` method: Note that you should add the function map before you parse any template files.

The function groups include: Date Functions, String Functions, String Slice Functions, Integer Slice Functions, Conversions, Defaults, OS, File Paths, Encoding and Reflection.

Reflection:

- typeOf: Takes an interface and returns a string representation of the type. For pointers, this will return a type prefixed with an asterisk (`*`). So a pointer to type `Foo` will be `*Foo`.
- typeIs: Compares an interface with a string name, and returns true if they match. Note that a pointer will not match a reference. For example `*Foo` will not match `Foo`.
- typeIsLike: Compares an interface with a string name and returns true if the interface is that `name` or that `*name`. In other words, if the given value matches the given type or is a pointer to the given type, this returns true.
- kindOf: Takes an interface and returns a string representation of its kind.
- kindIs: Returns true if the given string matches the kind of the given interface.

Note: None of these can test whether or not something implements a given interface, since doing so would require compiling the interface in ahead of time.

Data Structures:

- List Functions: These are used to manipulate lists: '{{ list 1 2 3 | reverse | first }}'
- Dict Functions: These are used to manipulate dicts.

Math Functions: Integer functions will convert integers of any width to `int64`. If a string is passed in, functions will attempt to convert with `strconv.ParseInt(s, 10, 64)`. If this fails, the value will be treated as 0.

Crypto Functions:

SemVer Functions: These functions provide version parsing and comparisons for SemVer 2 version strings.
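A minimal sketch of wiring the function map into text/template using the documented list pipeline; the /v3 import path is an assumption about the module version.

    package main

    import (
        "os"
        "text/template"

        "github.com/Masterminds/sprig/v3" // /v3 path assumed
    )

    func main() {
        // TxtFuncMap returns the sprig functions for text/template;
        // Funcs must be called before Parse.
        tpl := template.Must(
            template.New("demo").Funcs(sprig.TxtFuncMap()).Parse(
                `{{ list 1 2 3 | reverse | first }}`,
            ),
        )
        if err := tpl.Execute(os.Stdout, nil); err != nil {
            panic(err)
        }
        // Output: 3
    }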
Package goutils provides utility functions to manipulate strings in various ways. The code snippets below show examples of how to use goutils. Some functions return errors while others do not, so usage would vary as a result. Example:
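A hedged sketch contrasting the two usage styles mentioned above (with and without an error return); Initials and Abbreviate are assumed here from the Apache Commons-style helpers this package ports.

    package main

    import (
        "fmt"

        "github.com/Masterminds/goutils"
    )

    func main() {
        // Initials returns a string directly ...
        fmt.Println(goutils.Initials("John Fitzgerald Kennedy")) // JFK

        // ... while Abbreviate also returns an error (e.g. when the
        // requested width is too small).
        s, err := goutils.Abbreviate("The quick brown fox", 10)
        if err != nil {
            panic(err)
        }
        fmt.Println(s) // "The qui..."
    }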
objx - Go package for dealing with maps, slices, JSON and other data.

Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes a powerful `Get` method (among others) that allows you to easily and quickly get access to data within the map, without having to worry too much about type assertions, missing data, default values etc.

Objx uses a predictable pattern to make accessing data from within a `map[string]interface{}` easy. Call one of the `objx.` functions to create your `objx.Map` to get going:

NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong; the rest will be optimistic and try to figure things out without panicking.

Use `Get` to access the value you're interested in. You can use dot and array notation too: Once you have sought the `Value` you're interested in, you can use the `Is*` methods to determine its type. Or you can just assume the type, and use one of the strong type methods to extract the real value: If there's no value there (or if it's the wrong type) then a default value will be returned, or you can be explicit about the default value.

If you're dealing with a slice of data as a value, Objx provides many useful methods for iterating, manipulating and selecting that data. You can find out more by exploring the index below.

A simple example of how to use Objx: Since `objx.Map` is a `map[string]interface{}` you can treat it as such. For example, to `range` the data, do what you would expect:
Package walletdb provides a namespaced database interface for btcwallet.

A wallet essentially consists of a multitude of stored data such as private and public keys, key derivation bits, pay-to-script-hash scripts, and various metadata. One of the issues with many wallets is that they are tightly integrated. Designing a wallet with loosely coupled components that provide specific functionality is ideal, however it presents a challenge in regards to data storage since each component needs to store its own data without knowing the internals of other components or breaking atomicity.

This package solves this issue by providing a pluggable driver, namespaced database interface that is intended to be used by the main wallet daemon. This allows the potential for any backend database type with a suitable driver. Each component, which will typically be a package, can then implement various functionality such as address management, voting pools, and colored coin metadata in their own namespace without having to worry about conflicts with other packages even though they are sharing the same database that is managed by the wallet.

A quick overview of the features walletdb provides is as follows:

The main entry point is the DB interface. It exposes functionality for creating, retrieving, and removing namespaces. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver.

The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below.

The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged.

A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead.

The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket.

As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. The root bucket is the upper-most bucket in a namespace under which data is stored and is created at the same time as the namespace. Use the RootBucket function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use.

This example demonstrates creating a new database, getting a namespace from it, and using a managed read-write transaction against the namespace to store and retrieve data.
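A hedged sketch of that flow using only the names documented above (Create, Namespace, Update, RootBucket, Put); the "bdb" driver name, its path argument, and the import paths are assumptions.

    package main

    import (
        "github.com/btcsuite/btcwallet/walletdb"
        _ "github.com/btcsuite/btcwallet/walletdb/bdb" // driver registration; path assumed
    )

    func main() {
        // Create takes a registered driver name plus driver-specific
        // arguments; "bdb" with a single file path is assumed here.
        db, err := walletdb.Create("bdb", "wallet.db")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        ns, err := db.Namespace([]byte("example"))
        if err != nil {
            panic(err)
        }

        // Managed read-write transaction against the namespace.
        err = ns.Update(func(tx walletdb.Tx) error {
            return tx.RootBucket().Put([]byte("key"), []byte("value"))
        })
        if err != nil {
            panic(err)
        }
    }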
Package strmangle is a collection of string manipulation functions. It is primarily used by boil and its templates for code generation. Because it is focused on pipelining inside templates, you will see some odd parameter ordering.
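For illustration, a few of the conversion helpers (the import path and exact function set below are assumptions based on the sqlboiler ecosystem and may differ between versions):

```go
package main

import (
	"fmt"

	"github.com/volatiletech/strmangle"
)

func main() {
	// Case conversions commonly used when generating Go identifiers.
	fmt.Println(strmangle.TitleCase("hello_world")) // HelloWorld
	fmt.Println(strmangle.CamelCase("hello_world")) // helloWorld

	// Pluralisation, useful for table-to-type naming.
	fmt.Println(strmangle.Plural("fox")) // foxes
}
```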
Package luar provides a convenient interface between Lua and Go. It uses Alessandro Arzilli's golua (https://github.com/nwidger/golua). Most Go values can be passed to Lua: basic types, strings, complex numbers, user-defined types, pointers, composite types, functions, channels, etc. Conversely, most Lua values can be converted to Go values. Composite types are processed recursively.

Methods can be called on user-defined types. These methods will be callable using _dot-notation_ rather than colon notation.

Arrays, slices, maps and structs can be copied as tables, or alternatively passed over as Lua proxy objects which can be naturally indexed. In the case of structs and string maps, fields have priority over methods. Use 'luar.method(<value>, <method>)(<params>...)' to call shadowed methods. Unexported struct fields are ignored. The "lua" tag is used to match fields in struct conversion.

You may pass a Lua table to an imported Go function; if the table is 'array-like' then it is converted to a Go slice; if it is 'map-like' then it is converted to a Go map. Pointer values encode as the value pointed to when unproxified.

Usual operators (arithmetic, string concatenation, pairs/ipairs, etc.) work on proxies too. The type of the result depends on the type of the operands. The rules are as follows:

- If the operands are of the same type, use this type.
- If one type is a Lua number, use the other, user-defined type.
- If the types are different and not Lua numbers, convert to a complex proxy, a Lua number, or a Lua string according to the result kind.

Channel proxies can be manipulated with the following methods:

- close(): Close the channel.
- recv() value: Fetch and return a value from the channel.
- send(x value): Send a value in the channel.

Complex proxies can be manipulated with the following attributes:

- real: The real part.
- imag: The imaginary part.

Slice proxies can be manipulated with the following methods/attributes:

- append(x ...value) sliceProxy: Append the elements and return the new slice. The elements must be convertible to the slice element type.
- cap: The capacity of the slice.
- slice(i, j integer) sliceProxy: Return the sub-slice that ranges from 'i' to 'j' excluded, starting from 1.

String proxies can be browsed rune by rune with the pairs/ipairs functions. These runes are encoded as strings in Lua. Indexing a string proxy (starting from 1) will return the corresponding byte as a Lua string. String proxies can be manipulated with the following method:

- slice(i, j integer) stringProxy: Return the sub-string that ranges from 'i' to 'j' excluded, starting from 1.

Pointers to structs and structs within pointers are automatically dereferenced. Slices must be looped over with 'ipairs'.
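A small sketch of registering a Go function for Lua, assuming the stevedonovan/luar import path (Init, Register, and Map are luar's documented entry points; the import path is an assumption):

```go
package main

import (
	"fmt"

	"github.com/stevedonovan/luar"
)

// Double is an ordinary Go function we expose to Lua.
func Double(n int) int { return n * 2 }

func main() {
	L := luar.Init()
	defer L.Close()

	// Register Go values and functions in the global Lua namespace.
	luar.Register(L, "", luar.Map{
		"double": Double,
		"print":  fmt.Println,
	})

	// Lua can now call the Go function directly.
	if err := L.DoString(`print(double(21))`); err != nil {
		fmt.Println(err)
	}
}
```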
Package linq provides methods for querying and manipulating slices, arrays, maps, strings, channels and collections. Authors: Alexander Kalankhodzhaev (kalan), Ahmet Alp Balkan, Cleiton Marques Souza.
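For illustration, a short query sketch (the v3 import path is an assumption; From, Where, Select, and ToSlice are the query methods involved):

```go
package main

import (
	"fmt"

	"github.com/ahmetb/go-linq/v3"
)

func main() {
	var squares []int

	// Keep the even numbers and square them.
	linq.From([]int{1, 2, 3, 4, 5}).
		Where(func(i interface{}) bool { return i.(int)%2 == 0 }).
		Select(func(i interface{}) interface{} { return i.(int) * i.(int) }).
		ToSlice(&squares)

	fmt.Println(squares) // [4 16]
}
```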
Package bfk provides methods and types to parse, interpret, and compile Brainfuck code. A Brainfuck program can be parsed from a string with the Parse function. Assuming no parse error occurs, the resulting program may be optionally configured and subsequently executed against the given input and output streams.

Brainfuck is an esoteric programming language (esolang) which closely mimics the semantics of a Turing machine, and is in fact one of the simplest Turing-complete languages. Like a Turing machine, Brainfuck uses a (theoretically) infinite tape of cells as the memory model, and defines the movement of a pointer over the tape. As the pointer moves over the tape, it can make changes to the cell it points to, perform input and output, and run loops.

The language consists of just eight operators, which manipulate a pointer to a tape of "cells". The operators are simply "+", "-", ">", "<", ".", ",", "[", and "]"; all other characters are ignored. As the only semantically meaningful characters are punctuation, comments are just any text that occurs in the program source, provided that no punctuation characters are used. Due to the simplicity of the language, it is trivial to implement an interpreter; for example, the operators can be directly converted to C, assuming that ptr is a char*.

+ operator: Increment the value of the current cell by one. See the Cell Size section below.

- operator: Decrement the value of the current cell by one.

> operator: Increment the pointer (move to the right) by one cell. See the Tape Size section below.

< operator: Decrement the pointer (move to the left) by one cell.

. operator: Output the value of the current cell.

, operator: Read the next byte of the input stream and write the value to the current cell. See the End of File section below.

[ operator: If the current cell is non-zero, do nothing and execute the next operator; otherwise, if the current cell is zero, skip all the following operators until the matching "]" operator is passed, and execute the operator after the matching "]".

] operator: Unconditionally jump back to the matching "[" operator. Note that the rules of the "[" operator will apply, so if the current cell is zero, the next operator to be run will be the one immediately after the "]" operator that was just executed.

Because IO is done one byte per cell, it may appear that the cells are unsigned bytes, and this is the behaviour of the canonical compiler; however, cells are not explicitly limited to these values. Whether negative values are allowed, and if so in what range, is also neither specified nor widely agreed upon. At the very minimum, values 0-255 can be assumed to be valid, as this is the range of the input and output operators. As such, any program that performs IO should only do so on cells whose values are in this range, to avoid the undefined behaviour of trying to output a value greater than 255 as a byte of output. For maximum portability, cell values should not exceed 255, to avoid undefined overflow behaviour. If the cells are known to be exactly 8 bits wide, i.e. unsigned bytes, then most implementations will allow 256 == 0 style wrapping. Be sure to check explicitly, as some implementations say that they use bytes but mean signed bytes, i.e. -128 through +127, where overflowing 128 == -128. Incidentally, these implementations are usually also the same offenders that set the current cell to -1 on EOF; avoid these implementations if possible.
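To ground the "trivial to implement" claim, here is a compact standalone interpreter sketch of the semantics described above (an illustration only, not the bfk package's API); it uses the conventional 30,000-cell tape and leaves the current cell unchanged at EOF:

```go
package main

import "fmt"

// run interprets Brainfuck source against a 30,000-cell tape.
func run(src string, input []byte) (output []byte) {
	tape := make([]byte, 30000)
	ptr, in := 0, 0
	for pc := 0; pc < len(src); pc++ {
		switch src[pc] {
		case '+':
			tape[ptr]++
		case '-':
			tape[ptr]--
		case '>':
			ptr++
		case '<':
			ptr--
		case '.':
			output = append(output, tape[ptr])
		case ',':
			if in < len(input) { // leave the cell unchanged at EOF
				tape[ptr] = input[in]
				in++
			}
		case '[':
			if tape[ptr] == 0 { // skip forward past the matching ]
				for depth := 1; depth > 0; {
					pc++
					if src[pc] == '[' {
						depth++
					} else if src[pc] == ']' {
						depth--
					}
				}
			}
		case ']':
			if tape[ptr] != 0 { // jump back to the matching [
				for depth := 1; depth > 0; {
					pc--
					if src[pc] == ']' {
						depth++
					} else if src[pc] == '[' {
						depth--
					}
				}
			}
		}
	}
	return output
}

func main() {
	// Prints "A": 8*8 = 64, plus one = 65 = 'A'.
	fmt.Printf("%s\n", run("++++++++[>++++++++<-]>+.", nil))
}
```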
As with an abstract Turing machine, the memory tape is theoretically infinite; in practice, however, it is not. Implementations and specifications vary, but the original, canonical implementation used an arbitrary 30,000-cell tape. This is considered by convention to be the minimum tape size, and any code intended to be portable should attempt to manage its own memory to within this range. Regardless of the implementation's tape limit, the memory is always initialized to zeros, and the pointer begins at the left-most cell. Many implementations allow for an "infinite" tape, extending to the right as far as the RAM of the computer being used will allow. Note that it is rare to see an implementation allow any cells to the left of the original, i.e. negative cell addresses, and those that do not support negative addresses may handle this case in different ways. The behaviour is left unspecified, and portable programs should be designed in such a way as to avoid moving the pointer to the left of the initial cell.

The "," input operator reads one byte from the input stream and sets the cell at the pointer to the value of that byte. This is straightforward: the cell can be set to any value from 0 to 255, assuming that the cell fits the value. However, when the end of the input stream is reached, which value, if any, is written to the current cell is not defined by the language. There are three main schools of thought regarding which value to use at EOF: the first is to simply leave the value unchanged, the second is to zero the current cell, and the third is to set it to -1, which is the value of the constant EOF in C. The last of these is common among C implementations, and is even used by one of the example implementations provided by the language's creator; however, it is not the behaviour of the canonical compiler. The original compiler instead left the value at the pointer unaltered. This is the default behaviour that bfk uses, and it is the behaviour that is easiest to account for when writing Brainfuck code.
Package structs contains various utilities to work with Go (Golang) structs. It was initially used by me to convert a struct into a `map[string]any`. With time I've added other utilities for structs. It's basically a high level package based on primitives from the reflect package. Feel free to add new functions or improve the existing code.

[![Build & Test Action Status](https://github.com/devnw/structs/actions/workflows/build.yml/badge.svg)](https://github.com/devnw/structs/actions) [![Go Report Card](https://goreportcard.com/badge/go.devnw.com/structs)](https://goreportcard.com/report/go.devnw.com/structs) [![codecov](https://codecov.io/gh/devnw/structs/branch/main/graph/badge.svg)](https://codecov.io/gh/devnw/structs) [![Go Reference](https://pkg.go.dev/badge/go.devnw.com/structs.svg)](https://pkg.go.dev/go.devnw.com/structs) [![License: MIT 2.0](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/license/mit-0/) [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](http://makeapullrequest.com)

## Install

`go get go.devnw.com/structs`

## Usage and Examples

Just like the standard lib `strings`, `bytes` and co packages, `structs` has many global functions to manipulate or organize your struct data. Let's define and declare a struct:

```go
type Server struct {
	Name    string `json:"name,omitempty"`
	ID      int32
	Enabled bool
}

server := &Server{
	Name:    "gopher",
	ID:      123456,
	Enabled: true,
}
```

```go
// Convert a struct to a map[string]any
// => {"Name":"gopher", "ID":123456, "Enabled":true}
m := structs.Map(server)

// Convert the values of a struct to a []any
// => ["gopher", 123456, true]
v := structs.Values(server)

// Convert the names of a struct to a []string
// (see "Names methods" for more info about fields)
n := structs.Names(server)

// Convert the values of a struct to a []*Field
// (see "Field methods" for more info about fields)
f := structs.Fields(server)

// Return the struct name => "Server"
n := structs.Name(server)

// Check if any field of a struct is initialized or not.
h := structs.HasZero(server)

// Check if all fields of a struct are initialized or not.
z := structs.IsZero(server)

// Check if server is a struct or a pointer to struct
i := structs.IsStruct(server)
```

### Struct methods

The structs functions can also be used as independent methods by creating a new `*structs.Struct`. This is handy if you want to have more control over the structs (such as retrieving a single Field).

```go
// Create a new struct type:
s := structs.New(server)

m := s.Map()             // Get a map[string]any
v := s.Values()          // Get a []any
f := s.Fields()          // Get a []*Field
n := s.Names()           // Get a []string
f := s.Field(name)       // Get a *Field based on the given field name
f, ok := s.FieldOk(name) // Get a *Field based on the given field name
n := s.Name()            // Get the struct name
h := s.HasZero()         // Check if any field is uninitialized
z := s.IsZero()          // Check if all fields are uninitialized
```

### Field methods

We can easily examine a single Field for more detail.
Below you can see how we get and interact with various field methods:

```go
s := structs.New(server)

// Get the Field struct for the "Name" field
name := s.Field("Name")

// Get the underlying value, value => "gopher"
value := name.Value().(string)

// Set the field's value
name.Set("another gopher")

// Get the field's kind, kind => "string"
name.Kind()

// Check if the field is exported or not
if name.IsExported() {
	fmt.Println("Name field is exported")
}

// Check if the value is a zero value, such as "" for string, 0 for int
if !name.IsZero() {
	fmt.Println("Name is initialized")
}

// Check if the field is an anonymous (embedded) field
if !name.IsEmbedded() {
	fmt.Println("Name is not an embedded field")
}

// Get the Field's tag value for tag name "json", tag value => "name,omitempty"
tagValue := name.Tag("json")
```

Nested structs are supported too:

```go
addrField := s.Field("Server").Field("Addr")

// Get the value for addr
a := addrField.Value().(string)

// Or get all fields
httpServer := s.Field("Server").Fields()
```

We can also get a slice of Fields from the Struct type to iterate over all fields. This is handy if you wish to examine all fields:

```go
s := structs.New(server)

for _, f := range s.Fields() {
	fmt.Printf("field name: %+v\n", f.Name())

	if f.IsExported() {
		fmt.Printf("value: %+v\n", f.Value())
	}
}
```

## Credits

## License

The MIT License (MIT) - see LICENSE.md for more details
Package btcutil provides bitcoin-specific convenience functions and types. A Block defines a bitcoin block that provides easier and more efficient manipulation of raw wire protocol blocks. It also memoizes hashes for the block and its transactions on their first access so subsequent accesses don't have to repeat the relatively expensive hashing operations. A Tx defines a bitcoin transaction that provides more efficient manipulation of raw wire protocol transactions. It memoizes the hash for the transaction on its first access so subsequent accesses don't have to repeat the relatively expensive hashing operations. To decode a base58 string: Similarly, to encode the same data:
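A short sketch of both directions using the base58 sub-package (the sample string is illustrative):

```go
package main

import (
	"fmt"

	"github.com/btcsuite/btcutil/base58"
)

func main() {
	// Decode a base58 string back to raw bytes.
	decoded := base58.Decode("25JnwSn7XKfNQ")
	fmt.Printf("%s\n", decoded)

	// Encode the same data back to a base58 string.
	encoded := base58.Encode(decoded)
	fmt.Println(encoded) // 25JnwSn7XKfNQ
}
```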
The motivation behind this package is that the StructTag implementation shipped with Go's standard library is very limited in detecting a malformed StructTag, and each time StructTag.Get(key) gets called, it results in the StructTag being parsed again. Another problem is that the StructTag cannot be easily manipulated because it is basically a string. This package provides a way to parse the StructTag into a Tag map, which allows for fast lookups and easy manipulation of key value pairs within the Tag. Deprecated: This package has been moved to github.com/quartercastle/structtag
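To illustrate the standard-library behaviour that motivates this package, using only reflect:

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	// The standard library re-parses the whole tag string on every Get call.
	tag := reflect.StructTag(`json:"name,omitempty" xml:"name"`)
	fmt.Println(tag.Get("json")) // name,omitempty
	fmt.Println(tag.Get("xml"))  // name

	// A malformed tag is not reported as an error; lookups just return "".
	bad := reflect.StructTag(`json:"name`) // unterminated value
	fmt.Println(bad.Get("json") == "")     // true
}
```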
OSC52 is a terminal escape sequence that allows copying text to the clipboard. The sequence consists of the following: Pc is the clipboard choice: Pd is the data to copy to the clipboard. This string should be encoded in base64 (RFC-4648). If Pd is "?", the terminal replies to the host with the current contents of the clipboard. If Pd is neither a base64 string nor "?", the terminal clears the clipboard. See https://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h3-Operating-System-Commands where Ps = 52 => Manipulate Selection Data. Examples:
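For example, a program could emit the sequence from Go like this (ESC ] 52 ; Pc ; Pd BEL, here assuming "c" as the Pc clipboard choice):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Base64-encode the payload per RFC 4648, then wrap it in the
	// OSC52 sequence and write it to the terminal.
	data := base64.StdEncoding.EncodeToString([]byte("hello"))
	fmt.Printf("\x1b]52;c;%s\x07", data)
}
```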
Package nlp provides implementations of selected machine learning algorithms for natural language processing of text corpora. The primary focus is the statistical semantics of plain-text documents, supporting semantic analysis and retrieval of semantically similar documents. The package makes use of the Gonum (http://www.gonum.org/) library for linear algebra and scientific computing, with some inspiration taken from Python's scikit-learn (http://scikit-learn.org/stable/) and Gensim (https://radimrehurek.com/gensim/). The primary intended use case is to support document input as text strings encoded as a matrix of numerical feature vectors called a `term document matrix`. Each column in the matrix corresponds to a document in the corpus and each row corresponds to a unique term occurring in the corpus. The individual elements within the matrix contain the frequency with which each term occurs within each document (referred to as `term frequency`). Whilst textual data from document corpora are the primary intended use case, the algorithms can be used with other types of data from other sources once encoded (vectorised) into a suitable matrix, e.g. image data, sound data, users/products, etc. These matrices can be processed and manipulated through the application of additional transformations for weighting features, identifying relationships or optimising the data for analysis, information retrieval and/or predictions. Typically the algorithms in this package implement one of three primary interfaces: One of the implementations of Vectoriser is Pipeline, which can be used to wire together pipelines composed of a Vectoriser and one or more Transformers arranged in serial so that the output from each stage forms the input of the next. This can be used to construct a classic LSI (Latent Semantic Indexing) pipeline (vectoriser -> TF.IDF weighting -> Truncated SVD): Whilst they take different inputs, both Vectorisers and Transformers have 3 primary methods:
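For illustration, the LSI pipeline mentioned above might be wired together roughly as follows; the constructor names are taken from the project's README-era API and their exact signatures may differ between versions of the library:

```go
package main

import (
	"fmt"

	"github.com/james-bowman/nlp"
)

func main() {
	corpus := []string{
		"The quick brown fox jumped over the lazy dog",
		"the fox barked at the dog",
	}

	// Classic LSI pipeline: vectorise -> TF.IDF weighting -> truncated SVD.
	pipeline := nlp.NewPipeline(
		nlp.NewCountVectoriser(),
		nlp.NewTfidfTransformer(),
		nlp.NewTruncatedSVD(2),
	)

	// Fit to the corpus and transform it into the reduced semantic space.
	lsi, err := pipeline.FitTransform(corpus...)
	if err != nil {
		fmt.Println(err)
		return
	}
	rows, cols := lsi.Dims()
	fmt.Printf("%d x %d matrix (topics x documents)\n", rows, cols)
}
```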
Package mutable_string provides a utility for manipulating strings in a flexible and efficient manner. The package allows users to apply multiple string modifications (such as insertions, deletions, and replacements) to an initial string while deferring the actual application of these modifications. MutableString is the core data structure of this package, representing a string that can undergo multiple transformations. The transformations are represented as overlays, where each overlay specifies a range of characters to be replaced and the replacement text. The MutableString struct keeps track of the initial text and a list of overlays. The text modifications are not immediately applied to the initial text; instead, they are stored in the overlays until the Commit method is called. During the Commit operation, all accumulated overlays are applied to the initial text in the order they were added, resulting in the final transformed string. The package provides a set of methods for creating and manipulating MutableString instances, including: ReplaceRange: Adds an overlay that specifies a range of characters in the initial text to be replaced with the provided replacement text. The range is defined by a starting position (inclusive) and an ending position (exclusive). If the replacement text is empty, this operation effectively performs a deletion. Insert: Adds an overlay that inserts the provided text at a specified character position. This operation extends the length of the resultant text. Commit: Applies all accumulated overlays to the initial text. After the Commit operation, the resultant text is updated to the final transformed string, and the list of overlays is cleared. The Commit method ensures that no overlapping overlays are applied; if it detects any overlaps, it returns an error. MutableString is designed to handle batch string transformations efficiently. By deferring the actual application of modifications, the package reduces the number of intermediate string allocations and copies that would be needed if each modification were applied immediately. This is particularly useful when dealing with large strings and a sequence of complex transformations. Usage: The package is intended for use cases where batch string manipulation is needed, such as text editors, document processing systems, and code generation tools. It provides a convenient and memory-efficient way to perform complex string transformations.
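A usage sketch under stated assumptions: the import path and New constructor below are hypothetical, as only ReplaceRange, Insert, and Commit are named above:

```go
package main

import (
	"fmt"
	"log"

	"example.com/mutablestring" // hypothetical import path
)

func main() {
	// New is a hypothetical constructor wrapping the initial text.
	ms := mutablestring.New("Hello, world")

	// Queue modifications; the initial text is untouched until Commit.
	ms.ReplaceRange(7, 12, "Go") // replace characters [7, 12) ("world")
	ms.Insert(5, "!!!")          // insert at character position 5

	// Apply all overlays in order; overlapping overlays yield an error.
	if err := ms.Commit(); err != nil {
		log.Fatal(err)
	}
	fmt.Println(ms) // Hello!!!, Go (assuming a String method)
}
```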
Package godom provides a comprehensive and fluent API for creating and manipulating DOM structures in Go. The GoDOM library is inspired by the gomponents library, but it offers a unique take on DOM manipulation in Go. Instead of merely creating strings of HTML, GoDOM focuses on maintaining a live DOM-like structure, making it easier to manipulate, extend, and integrate with other Go packages.

Core Concepts:

1. **Elements**: At the heart of the library are Elements, which represent individual HTML elements. Elements can be nested, allowing for the creation of complex DOM structures.
2. **Attributes**: Attributes allow for the modification of Elements, adding things like classes, IDs, and other HTML attributes.
3. **Helpers**: The library provides helper functions for every HTML5 tag and attribute, making it easy to build any HTML structure.
4. **Utilities**: GoDOM provides utilities that allow for conditional rendering of elements and attributes, delayed rendering, and more.

Usage: To create and render a simple HTML structure:

```go
doc := Div(Class("container"))()

var buf bytes.Buffer
doc.Render(&buf)
fmt.Println(buf.String())
```

The GoDOM library can be extended with the template and util packages for more functionality. For more complex examples and use cases, refer to the README and other package documentation.
Package dom provides GopherJS and Go bindings for the JavaScript DOM APIs. This package is an in-progress effort to provide idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with them.

One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies.

The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/).

The usual entry point of using the dom package is the GetWindow() function, which will return a Window, from which you can get things such as the current Document.

The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example:

Several functions in the JavaScript DOM return "live" collections of elements, that is, collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example:

The above example will print `true`.

Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList, to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices.

This package has a relatively stable API. However, there will be backwards-incompatible changes from time to time.
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
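As a sketch of the entry point and the type-assertion pattern described above (the element ID below is made up for illustration):

```go
package main

import "honnef.co/go/js/dom"

func main() {
	// GetWindow is the usual entry point; from the Window we get the Document.
	d := dom.GetWindow().Document()

	// Generic lookups return the Element interface; type-assert to the
	// concrete type when the element-specific API is needed.
	el := d.GetElementByID("my-paragraph")
	if p, ok := el.(*dom.HTMLParagraphElement); ok {
		p.SetTextContent("hello from Go")
	}
}
```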
Package database provides a block and metadata storage database. As of Feb 2016, there are over 400,000 blocks in the Bitcoin block chain and over 112 million transactions (which turns out to be over 60GB of data). This package provides a database layer to store and retrieve this data in a simple and efficient manner. The default backend, ffldb, has a strong focus on speed, efficiency, and robustness. It makes use of leveldb for the metadata, flat files for block storage, and strict checksums in key areas to ensure data integrity.

A quick overview of the features database provides is as follows:

The main entry point is the DB interface. It exposes functionality for transactional-based access and storage of metadata and block data. It is obtained via the Create and Open functions, which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver.

The interface provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below.

The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root metadata bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged.

A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on a managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead.

The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface.

The root metadata bucket is the upper-most bucket in which data is stored and is created at the same time as the database. Use the Metadata function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use.

This example demonstrates creating a new database and using a managed read-write transaction to store and retrieve metadata.
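A sketch of that example, closely following the interfaces described above (the ffldb driver registration path is an assumption):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/btcsuite/btcd/database"
	_ "github.com/btcsuite/btcd/database/ffldb" // register the ffldb driver
	"github.com/btcsuite/btcd/wire"
)

func main() {
	// Create a database with the ffldb backend.
	dbPath := filepath.Join(os.TempDir(), "exampledb")
	db, err := database.Create("ffldb", dbPath, wire.MainNet)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.RemoveAll(dbPath)
	defer db.Close()

	// Store and retrieve a key under the root metadata bucket within a
	// managed read-write transaction.
	err = db.Update(func(tx database.Tx) error {
		meta := tx.Metadata()
		if err := meta.Put([]byte("mykey"), []byte("myvalue")); err != nil {
			return err
		}
		fmt.Printf("mykey: %s\n", meta.Get([]byte("mykey")))
		return nil
	})
	if err != nil {
		fmt.Println(err)
	}
}
```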
This example demonstrates creating a new database, using a managed read-write transaction to store a block, and using a managed read-only transaction to fetch the block.
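Continuing with the db handle from the previous sketch, storing and fetching a block might look roughly like this (StoreBlock and FetchBlock are the Tx methods involved; the btcsuite genesis-block helpers are used illustratively):

```go
// Wrap the mainnet genesis block for storage.
block := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)

// Store the block within a managed read-write transaction.
err = db.Update(func(tx database.Tx) error {
	return tx.StoreBlock(block)
})
if err != nil {
	fmt.Println(err)
	return
}

// Fetch the raw serialized bytes back in a managed read-only transaction.
err = db.View(func(tx database.Tx) error {
	raw, err := tx.FetchBlock(chaincfg.MainNetParams.GenesisHash)
	if err != nil {
		return err
	}
	fmt.Printf("fetched %d bytes\n", len(raw))
	return nil
})
if err != nil {
	fmt.Println(err)
}
```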