Package squalor provides SQL utility routines, such as validation of models against table schemas, marshalling and unmarshalling of model structs into table rows, and programmatic construction of SQL statements. While squalor uses the database/sql package, the SQL it produces is MySQL-specific (e.g. REPLACE, INSERT ON DUPLICATE KEY UPDATE, etc). Given a simple table definition: And a model for a row of the table: The BindModel method will validate that the model and the table definition are compatible. For example, it would be an error for the User.ID field to have the type string. The table definition is loaded from the database, allowing custom checks such as verifying the existence of indexes. While fields are often primitive types such as int64 or string, it is sometimes desirable to use a custom type. This can be accomplished by having the type implement the sql.Scanner and driver.Valuer interfaces. For example, you might create a GzippedText type which automatically compresses the value when it is written to the database and decompresses it when it is read: The Insert method is used to insert one or more rows into a table. You can pass either a struct or a pointer to a struct. Multiple rows can be inserted at once. Doing so offers convenience and performance. When multiple rows are batch inserted, the returned error corresponds to the first SQL error encountered, which may make it impossible to determine which row caused the error. If the rows correspond to different tables, the order of insertion into the tables is undefined. If you care about the order of insertion into multiple tables or about determining which row caused an insertion error, structure your calls to Insert appropriately (i.e. insert into a single table or insert a single row). After a successful insert on a table with an auto-increment primary key, the auto-increment value will be written back to the corresponding field of the object. For example, if the user table was defined as: Then we could create a new user by doing: The Replace method replaces a row in a table, either inserting the row if it doesn't exist, or deleting and then inserting the row. See the MySQL docs for the difference between REPLACE, UPDATE and INSERT ON DUPLICATE KEY UPDATE (Upsert). The Update method updates a row in a table, returning the number of rows modified. The Upsert method inserts or updates a row. The Get method retrieves a single row by primary key and binds the result columns to a struct. The Delete method deletes rows by primary key. See the documentation for DB.Delete for performance limitations when batch deleting multiple rows from a table with a primary key composed of multiple columns. To support optimistic locking with a column storing the version number, one field in a model object can be marked to serve as the lock. Modifying the example above: Now, the Update method will ensure that the object has not been concurrently modified when writing, by constraining the update by the version number. If the update is successful, the version number will be incremented both on the model (in-memory) and in the database. Programmatic construction of SQL queries prevents SQL injection attacks while keeping query construction both readable and similar to SQL itself. Support is provided for most of the SQL DML (data manipulation language), though the constructed queries are targeted to MySQL. DB provides wrappers for the sql.Query and sql.QueryRow interfaces.
The Rows struct returned from Query has an additional StructScan method for binding the columns for a row result to a struct. QueryRow can be used to easily query and scan a single value: In addition to the convenience of inserting, deleting and updating table rows using a struct, attention has been paid to performance. Since squalor is carefully constructing the SQL queries for insertion, deletion and update, it eschews the use of placeholders in favor of properly escaping values in the query. With the Go MySQL drivers, this saves a roundtrip to the database for each query because queries with placeholders must be prepared before being executed. Marshalling and unmarshalling of data from structs utilizes reflection, but care is taken to minimize the use of reflection in order to improve performance. For example, reflection data for models is cached when the model is bound to a table. Batch operations are utilized when possible. The Delete, Insert, Replace, Update and Upsert operations will perform multiple operations per SQL statement. This can provide an order of magnitude speed improvement over performing the mutations one row at a time due to minimizing the network overhead.
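A hedged sketch of the flow described above (the User struct, table name and column tags are illustrative assumptions, as is the github.com/square/squalor import path; BindModel, Insert and Get are the methods described above):

    import (
        "database/sql"

        "github.com/square/squalor"
    )

    type User struct {
        ID   int64  `db:"id"`   // auto-increment primary key
        Name string `db:"name"`
    }

    func insertUser(conn *sql.DB) (*User, error) {
        db := squalor.NewDB(conn)
        // BindModel validates that the model and table definition are compatible.
        if _, err := db.BindModel("users", User{}); err != nil {
            return nil, err
        }
        u := &User{Name: "gopher"}
        // Values are escaped directly into the generated SQL, so no prepared
        // statement round trip is needed.
        if err := db.Insert(u); err != nil {
            return nil, err
        }
        // After a successful insert, u.ID holds the auto-increment value.
        return u, nil
    }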
Package goutils provides utility functions to manipulate strings in various ways. The code snippets below show examples of how to use goutils. Some functions return errors while others do not, so usage would vary as a result. Example:
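A hedged illustration of the two styles (the import path and the exact signatures of Abbreviate and Initials are assumptions for this sketch):

    import (
        "fmt"

        "github.com/Masterminds/goutils"
    )

    func main() {
        // Initials does not return an error.
        fmt.Println(goutils.Initials("John Elway Doe")) // JED

        // Abbreviate can fail (e.g. when maxWidth is too small), so it
        // returns an error alongside the result.
        abbr, err := goutils.Abbreviate("abcdefghijklmno", 10)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(abbr) // abcdefg...
    }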
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in-progress effort to provide idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already usable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with them. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The usual entry point of using the dom package is the GetWindow() function, which will return a Window, from which you can get things such as the current Document. The DOM has a large number of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. These interface values will hold concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual for values of type Element to also implement HTMLElement. In all cases, type assertions can be used. Example: Several functions in the JavaScript DOM return "live" collections of elements, that is, collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time. This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed.
Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
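A brief hedged sketch of the entry point and type-assertion pattern described above (the element ID is illustrative, and the honnef.co/go/js/dom import path is an assumption):

    import "honnef.co/go/js/dom"

    func main() {
        doc := dom.GetWindow().Document()
        // GetElementByID returns the generic Element interface; assert to
        // HTMLElement (or a concrete type such as *dom.HTMLParagraphElement)
        // to reach more specific functionality.
        el := doc.GetElementByID("greeting").(dom.HTMLElement)
        el.SetInnerHTML("Hello from Go")
    }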
Package walletdb provides a namespaced database interface for btcwallet. A wallet essentially consists of a multitude of stored data such as private and public keys, key derivation bits, pay-to-script-hash scripts, and various metadata. One of the issues with many wallets is that they are tightly integrated. Designing a wallet with loosely coupled components that provide specific functionality is ideal, however it presents a challenge with regard to data storage since each component needs to store its own data without knowing the internals of other components or breaking atomicity. This package solves this issue by providing a namespaced database interface with pluggable drivers that is intended to be used by the main wallet daemon. This allows the potential for any backend database type with a suitable driver. Each component, which will typically be a package, can then implement various functionality such as address management, voting pools, and colored coin metadata in its own namespace without having to worry about conflicts with other packages even though they are sharing the same database that is managed by the wallet. A quick overview of the features walletdb provides is as follows: The main entry point is the DB interface. It exposes functionality for creating, retrieving, and removing namespaces. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver. The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below. The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged. A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead. The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface.
The root bucket is the upper-most bucket in a namespace under which data is stored and is created at the same time as the namespace. Use the RootBucket function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use. This example demonstrates creating a new database, getting a namespace from it, and using a managed read-write transaction against the namespace to store and retrieve data.
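A hedged sketch of that flow (the "bdb" driver name, file path and namespace key are assumptions; Create, Namespace, Update, RootBucket, Put and Get are the interfaces described above):

    db, err := walletdb.Create("bdb", "wallet.db")
    if err != nil {
        return err
    }
    defer db.Close()

    ns, err := db.Namespace([]byte("mynamespace"))
    if err != nil {
        return err
    }

    // Managed read-write transaction: commits on a nil return and rolls
    // back if the closure returns an error.
    err = ns.Update(func(tx walletdb.Tx) error {
        root := tx.RootBucket()
        if err := root.Put([]byte("mykey"), []byte("myvalue")); err != nil {
            return err
        }
        fmt.Printf("value: %s\n", root.Get([]byte("mykey")))
        return nil
    })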
Package structs implements a generic interface for manipulating Go structs. The related API is powered by and inspired by the Go reflect package. It was initially developed to provide generic field getters and setters for any struct. It has grown into a generic abstraction layer over structs powered by the reflect package. While not particularly performant compared to other structs packages, it provides a "natural feel" API, convenience, flexibility and a simpler alternative to using the reflect package directly. Throughout the documentation, the t variable is assumed to be of type struct T. T has two fields: the first is a string field called "Property" and the second is a nested struct via a pointer called "Nested". While "Property" is set to the value "Test", "Number" inside "Nested" is set to 123456. NOTE: the same applies to all other variables subsequently declared. The following inputs are supported throughout the package: NOTE: See the IndirectStruct method for details on the above scenarios. There are two main ways to use this package to manipulate structs: either by using the objects and methods, or by using helper functions. Objects and methods: this approach requires calling the New method first, which gives access to the API of the StructValue, StructField and StructFields objects. Helper functions: a set of self-contained functions that hide the internal implementation powered by the StructValue object, all declared in the file called helpers.go. The following table summarizes the above statements: All objects in this package are linked to the main StructValue object. The relationships between each one of them are as follows: NOTE: For an exhaustive illustration of package capabilities, please refer to the following file: https://github.com/roninzo/structs/example_test.go. The StructValue object is the starting point for manipulating structs using objects and methods. To initialize the StructValue object, make use of the New method, followed by handling any potential error encountered in the process. Example: From there, several methods provide information about the struct. Example: NOTE: Where possible, the object method names were inspired by the reflect package, to reduce the learning curve. Example: The StructValue object is also the gateway to the other two objects declared in this package: StructField and StructFields. Examples: The StructField object represents one field in the struct, and provides getter and setter methods. Before reading data out of struct fields generically, it is recommended to get extra information about the struct field. This is useful if the type of the field is not known at runtime. Example: However, if nested structs are present inside t, sub-fields are not made available directly. This means that nested structs must be loaded explicitly with the Struct method. Example: The StructFields object represents all the fields in the struct. Its main purpose, at the moment, is to loop through StructField objects in a "for range" loop. Example: Its other purpose is to return all struct field names. Example: The StructRows object represents a slice of structs and mostly follows the database/sql API. Its main purpose is to loop through the elements of a submitted slice of structs. Each of those elements can then be manipulated in the same manner as with StructValue. Example: The helper methods in helpers.go provide advanced functions for transforming structs or wrap StructValue object functionality. Examples:
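Putting the above together in one hedged sketch (New and Struct are named above, but the Field, Set, String and Int accessor names are assumptions based on the description of getters and setters, not verified signatures):

    s, err := structs.New(&t)
    if err != nil {
        log.Fatal(err)
    }

    // StructField getter and setter (assumed accessor names).
    fmt.Println(s.Field("Property").String()) // Test
    if err := s.Field("Property").Set("Changed"); err != nil {
        log.Fatal(err)
    }

    // Nested structs must be loaded explicitly with the Struct method.
    nested := s.Field("Nested").Struct()
    fmt.Println(nested.Field("Number").Int()) // 123456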
Package nlp provides implementations of selected machine learning algorithms for natural language processing of text corpora. The primary focus is the statistical semantics of plain-text documents, supporting semantic analysis and retrieval of semantically similar documents. The package makes use of the Gonum (http://www.gonum.org/) library for linear algebra and scientific computing, with some inspiration taken from Python's scikit-learn (http://scikit-learn.org/stable/) and Gensim (https://radimrehurek.com/gensim/). The primary intended use case is to support document input as text strings encoded as a matrix of numerical feature vectors called a `term document matrix`. Each column in the matrix corresponds to a document in the corpus and each row corresponds to a unique term occurring in the corpus. The individual elements within the matrix contain the frequency with which each term occurs within each document (referred to as `term frequency`). Whilst textual data from document corpora are the primary intended use case, the algorithms can be used with other types of data from other sources once encoded (vectorised) into a suitable matrix, e.g. image data, sound data, users/products, etc. These matrices can be processed and manipulated through the application of additional transformations for weighting features, identifying relationships or optimising the data for analysis, information retrieval and/or predictions. Typically the algorithms in this package implement one of three primary interfaces: One of the implementations of Vectoriser is Pipeline, which can be used to wire together pipelines composed of a Vectoriser and one or more Transformers arranged in serial, so that the output from each stage forms the input of the next. This can be used to construct a classic LSI (Latent Semantic Indexing) pipeline (vectoriser -> TF.IDF weighting -> Truncated SVD): Whilst they take different inputs, both Vectorisers and Transformers have three primary methods:
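A hedged sketch of such a pipeline (the constructor names follow the package's Vectoriser/Transformer pattern, but the exact signatures are assumptions):

    vectoriser := nlp.NewCountVectoriser()
    transformer := nlp.NewTfidfTransformer()
    reducer := nlp.NewTruncatedSVD(100) // project onto 100 dimensions

    // Wire the stages in serial: vectoriser -> TF.IDF -> truncated SVD.
    pipeline := nlp.NewPipeline(vectoriser, transformer, reducer)

    corpus := []string{
        "The quick brown fox jumped over the lazy dog",
        "The cow jumped over the moon",
    }
    // FitTransform trains the pipeline on the corpus and returns the
    // transformed term document matrix (one column per document).
    lsi, err := pipeline.FitTransform(corpus...)
    if err != nil {
        log.Fatal(err)
    }
    _ = lsi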
Package database provides a block and metadata storage database. This package provides a database layer to store and retrieve block data and arbitrary metadata in a simple and efficient manner. The default backend, ffldb, has a strong focus on speed, efficiency, and robustness. It makes use of leveldb for the metadata, flat files for block storage, and strict checksums in key areas to ensure data integrity. A quick overview of the features database provides is as follows: The main entry point is the DB interface. It exposes functionality for transactional-based access and storage of metadata and block data. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver. The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below. The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root metadata bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged. A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead. The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. The root metadata bucket is the upper-most bucket in which data is stored and is created at the same time as the database. Use the Metadata function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use. This example demonstrates creating a new database and using a managed read-write transaction to store and retrieve metadata.
This example demonstrates creating a new database, using a managed read-write transaction to store a block, and using a managed read-only transaction to fetch the block.
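A hedged sketch of the first (metadata) example above (the ffldb arguments and bucket/key names are assumptions, as is the wire.MainNet network argument; Create, Update, Metadata, CreateBucketIfNotExists and Put are the interfaces described above):

    db, err := database.Create("ffldb", "/path/to/db", wire.MainNet)
    if err != nil {
        return err
    }
    defer db.Close()

    // Managed read-write transaction: commits on nil, rolls back on error.
    err = db.Update(func(tx database.Tx) error {
        bucket, err := tx.Metadata().CreateBucketIfNotExists([]byte("mybucket"))
        if err != nil {
            return err
        }
        return bucket.Put([]byte("mykey"), []byte("myvalue"))
    })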
Package envconf parses environment variables into structs. It supports multiple types; however, the core type is always a string. Translators are available which manipulate the string value before setting, e.g. `!base64:SGVsbG8gV29ybGQ=` will cast to a string as "Hello World". There is no specific handling for bytes when using this method; the value is handled entirely as a string. If you are expecting actual bytes, use a type like Hex or Base64, which will directly translate the env var string to bytes. Standard conversions from string to int, bool, etc. work, as well as custom types which satisfy `SetterFromEnv` (implemented on a pointer, like JSON). Combining translators and custom types is perfectly fine. The string translations will happen first, then the output will be passed into FromEnvString.
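For instance, a custom type satisfying SetterFromEnv might look like the following hedged sketch (only FromEnvString is named above; the LogLevel type and its mapping are illustrative):

    import "fmt"

    // LogLevel is a custom type that can be set directly from an env var.
    type LogLevel int

    // FromEnvString is called with the translated string value; defining it
    // on the pointer receiver satisfies SetterFromEnv (like JSON unmarshalling).
    func (l *LogLevel) FromEnvString(s string) error {
        switch s {
        case "debug":
            *l = 0
        case "info":
            *l = 1
        default:
            return fmt.Errorf("unknown log level %q", s)
        }
        return nil
    }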
Objx - Go package for dealing with maps, slices, JSON and other data. Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes a powerful `Get` method (among others) that allows you to easily and quickly get access to data within the map, without having to worry too much about type assertions, missing data, default values etc. Objx uses a predictable pattern to make accessing data from within a `map[string]interface{}` easy. Call one of the `objx.` functions to create your `objx.Map` to get going: NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong, the rest will be optimistic and try to figure things out without panicking. Use `Get` to access the value you're interested in. You can use dot and array notation too: Once you have sought the `Value` you're interested in, you can use the `Is*` methods to determine its type. Or you can just assume the type, and use one of the strong type methods to extract the real value: If there's no value there (or if it's the wrong type) then a default value will be returned, or you can be explicit about the default value. If you're dealing with a slice of data as a value, Objx provides many useful methods for iterating, manipulating and selecting that data. You can find out more by exploring the index below. A simple example of how to use Objx: Since `objx.Map` is a `map[string]interface{}` you can treat it as such. For example, to `range` the data, do what you would expect:
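For instance, a small sketch using the pieces described above (assuming the github.com/stretchr/objx import path):

    // MustFromJSON panics on invalid JSON; FromJSON returns an error instead.
    m := objx.MustFromJSON(`{"name": "Mat", "address": {"city": "Boulder"}}`)

    // Dot notation, strong-type access, and an explicit default.
    name := m.Get("name").Str()
    city := m.Get("address.city").Str()
    nick := m.Get("nickname").Str("n/a") // missing, so the default is used

    fmt.Println(name, city, nick) // Mat Boulder n/a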
Package sqlr is designed to reduce the effort required to work with SQL databases. It is intended for programmers who are comfortable with writing SQL, but would like assistance with the sometimes tedious process of preparing SQL queries for tables that have a large number of columns, or have a variable number of input parameters. This GoDoc summary provides an overview of how to use this package. For more detailed documentation, see https://jjeffery.github.io/sqlr. Preparing SQL queries with many placeholder arguments is tedious and error-prone. The following insert query has a dozen placeholders, and it is difficult to match up the columns with the placeholders. It is not uncommon to have tables with many more columns than this example, and the level of difficulty increases with the number of columns in the table. This package uses reflection to simplify the construction of SQL queries. Supplementary information about each database column is stored in the structure tag of the associated field. The calling program creates a schema, which describes rules for generating SQL statements. These rules include specifying the SQL dialect (eg MySQL, Postgres, SQLite) and the naming convention used to convert Go struct field names into column names (eg "GivenName" => "given_name"). The schema is usually created during program initialization. Once created, a schema is immutable and can be called concurrently from multiple goroutines. A session is created using a context, a database connection (eg *sql.DB, *sql.Tx, *sql.Conn), and a schema. A session is inexpensive to create, and is intended to last no longer than a single request (which might be an HTTP request, in the case of an HTTP server). A session is bounded by the lifetime of its context. Once a session has been created, it is possible to create simple row insert/update statements with minimal effort. The Exec method parses the SQL query and replaces occurrences of "{}" with the column names or placeholders that make sense for the SQL clause in which they occur. In the example above, the insert and update statements would look like: If the schema is created with a different dialect then the generated SQL will be different. For example if the Postgres dialect was used the insert and update queries would look more like: Inserting and updating a single row are common enough operations that the session has methods that make it very simple: Select queries can be performed using the session's Select method: The SQL queries prepared in the above example would look like the following: The examples are using a MySQL dialect. If the schema had been set up for, say, a Postgres dialect, a generated query would look more like: It is an important point to note that this feature is not about writing the SQL for the programmer. Rather it is about "filling in the blanks": allowing the programmer to specify as much of the SQL query as they want without having to write the tiresome bits. When inserting rows, if a column is defined as an autoincrement column, then the generated value will be retrieved from the database server, and the corresponding field in the row structure will be updated. Autoincrement column values work for all supported databases (PostgreSQL, MySQL, Microsoft SQL Server and SQLite). Most SQL database tables have columns that are nullable, and it can be tiresome to always map to pointer types or special nullable types such as sql.NullString.
In many cases it is acceptable to map the zero value for the field to a database NULL in the corresponding database column. Where it is acceptable to map a zero value to a NULL database column, the Go struct field can be marked with the "null" keyword in the field's struct tag. In the above example the `manager_id` column can be null, but if all valid IDs are non-zero, it is unambiguous to map the zero value to a database NULL. Similarly, if the `phone` column is an empty string it will be stored as a NULL in the database. Care should be taken, because there are cases where a zero value and a database NULL do not represent the same thing. There are many cases, however, where this feature can be applied, and the result is simpler code that is easier to read. It is not uncommon to serialize complex objects as JSON text for storage in an SQL database. Native support for JSON is available in some database servers: in particular Postgres has excellent support for JSON. It is straightforward to use this package to serialize a structure field to JSON: In the example above the `Cmplx` field will be marshaled as JSON text when writing to the database, and unmarshaled into the struct when reading from the database. While most SQL queries accept a fixed number of parameters, if the SQL query contains a `WHERE IN` clause, it requires additional string manipulation to match the number of placeholders in the query with the args. This package simplifies queries with a variable number of arguments. When processing an SQL query, it detects if any of the arguments are slices: In the above example, the number of placeholders ("?") in the query will be increased to match the number of values in the `ids` slice. The expansion logic can handle any mix of slice and scalar arguments. A session can create type-safe query functions. This is a very powerful feature and makes it very easy to create type-safe data access. See the Session.MakeQuery function for examples.
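A hedged sketch tying these pieces together (the NewSchema, WithDialect and NewSession names and the exact Exec/Select signatures follow the description above but are not verified):

    schema := sqlr.NewSchema(
        sqlr.WithDialect(sqlr.MySQL), // the dialect drives placeholder style
    )
    sess := sqlr.NewSession(ctx, db, schema)

    // "{}" expands to the column list or placeholders that suit the clause.
    _, err := sess.Exec(row, `insert into users({}) values({})`)
    if err != nil {
        return err
    }

    // A slice argument expands the "?" to match len(ids).
    var users []*User
    _, err = sess.Select(&users, `select {} from users where id in (?)`, ids)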
Package nilsimsa is a Go implementation of Nilsimsa, ported from code.google.com/p/py-nilsimsa but following the conventions established by the md5 package in the standard library. The Java implementation at https://github.com/weblyzard/nilsimsa/blob/master/src/main/java/com/weblyzard/lib/string/nilsimsa/Nilsimsa.java was also consulted. There is a discussion about using hashes to score string similarity at http://stackoverflow.com/questions/4323977/string-similarity-score-hash Copyright 2015 Sheng-Te Tsao. All rights reserved. Use of this source code is governed by the same BSD-style license that is used by the Go standard library. From http://en.wikipedia.org/wiki/Nilsimsa_Hash Nilsimsa is an anti-spam focused locality-sensitive hashing algorithm originally proposed by the cmeclax remailer operator in 2001[1] and then reviewed by Damiani et al. in their 2004 paper titled "An Open Digest-based Technique for Spam Detection".[2] The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. In comparison with cryptographic hash functions such as SHA-1 or MD5, making a small modification to a document does not substantially change the resulting hash of the document. The paper suggests that Nilsimsa satisfies three requirements: Subsequent testing[3] on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash. Nilsimsa similarity matching was taken into consideration by Jesse Kornblum when developing the fuzzy hashing in 2006,[4] which used the algorithms of spamsum by Andrew Tridgell (2002).[5] References: [1] http://web.archive.org/web/20050707005338/ [2] http://spdp.di.unimi.it/papers/pdcs04.pdf [3] https://www.academia.edu/7833902/TLSH_-A_Locality_Sensitive_Hash [4] http://jessekornblum.livejournal.com/242493.html [5] http://dfrws.org/2006/proceedings/12-Kornblum.pdf From http://blog.semanticlab.net/tag/nilsimsa/ Damiani, E. et al., 2004. An Open Digest-based Technique for Spam Detection. In Proceedings of the 2004 International Workshop on Security in Parallel and Distributed Systems. pp. 15-17. This paper discusses the Nilsimsa open digest hash algorithm which is frequently used for spam detection. The authors describe the computation of the 32-byte code, discuss different attack scenarios and measures to counter them. Digest computation: Slide a five-character window through the input text and compute all eight possible tri-grams for each window (e.g. "igram" yields "igr", "gra", "ram", "ira", "iam", "grm", ...). Hash these tri-grams using a hash function h() which maps every tri-gram to one of 256 accumulators and increment the corresponding accumulator. Nilsimsa uses the Trans53 hash function for hashing. At the end of the process described above, compute the expected value of the accumulators and set the bit which corresponds to each accumulator to (1) if it exceeds this threshold or (0) otherwise. The Nilsimsa similarity is computed based on the bitwise difference between two Nilsimsa hashes. Documents are considered similar if they exceed a pre-defined similarity value.
>24 similar bits - conflict probability: 1.35E-4 (as suggested by Nilsimsa's original designer); >54 similar bits - conflict probability: 7.39E-12 (as suggested by the article's authors). Spammers can apply multiple techniques to prevent Nilsimsa from detecting duplicates: aimed attacks manipulate Nilsimsa's accumulators by adding words which introduce new tri-grams that specifically alter the hash value. Although these attacks are relatively effective, they can be easily countered by computing the Nilsimsa hash twice with different hash functions.
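Because the package follows the md5 package's conventions, computing digests might look like the hedged sketch below (New/Write/Sum follow the standard hash interface; the similarBits helper is self-contained rather than part of the package, and the inputs are illustrative):

    func main() {
        h1 := nilsimsa.New()
        h1.Write([]byte("The quick brown fox jumps over the lazy dog"))
        d1 := h1.Sum(nil) // 32-byte (256-bit) locality-sensitive digest

        h2 := nilsimsa.New()
        h2.Write([]byte("The quick brown fox jumps over the lazy cat"))
        d2 := h2.Sum(nil)

        // Count of matching bit positions between the digests (max 256).
        fmt.Println(similarBits(d1, d2))
    }

    // similarBits counts the bit positions where the two digests agree.
    func similarBits(a, b []byte) int {
        diff := 0
        for i := range a {
            for x := a[i] ^ b[i]; x != 0; x &= x - 1 { // clear lowest set bit
                diff++
            }
        }
        return len(a)*8 - diff
    }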
Package nlp provides implementations of selected machine learning algorithms for natural language processing of text corpora. The initial primary focus is on the implementation of algorithms supporting LSA (Latent Semantic Analysis), often referred to as Latent Semantic Indexing in the context of information retrieval. The algorithms in the package typically support document input as text strings, which are then encoded as a matrix of numerical feature vectors called a `term document matrix`. Columns in this matrix represent the documents in the corpus and the rows represent terms occurring in the documents. The individual elements within the matrix contain counts of the number of occurrences of each term in the associated document. This matrix can be manipulated through the application of additional transformations for weighting features, identifying relationships or optimising the data for analysis, information retrieval and/or predictions. A common transformation is for the purpose of weighting features to remove natural biases which would skew results, e.g. commonly occurring words like `the`, `of`, `and`, etc. which should carry lower weight than unusual words. Term document matrices typically have a very large number of dimensions and so transformations are often applied to reduce the dimensionality using techniques such as Locality Sensitive Hashing or Latent Semantic Analysis (typically performed using matrix SVD - `Singular Value Decomposition`) which approximates the original term document matrix with a new matrix of much lower rank (typically around 100 rather than 1000s). Truncated SVD is a fundamental part of LSA (Latent Semantic Analysis aka Latent Semantic Indexing) and serves a number of purposes: 1. The reduced dimensionality of the data theoretically requires less memory. 2. As less significant dimensions are removed, there is less `noise` in the data which could have artificially skewed results. 3. Perhaps most importantly, the SVD effectively encodes the co-occurrence of terms within the documents to capture semantic meaning rather than simply the presence (or lack of presence) of words. This combats the problem of synonymy (a common challenge in NLP) where different words in the English language can be used to mean the same thing (synonyms). In LSA, documents can have a high degree of semantic similarity with very few words in common. The post-SVD matrix (with each column being a feature vector representing a document within the corpus) can be compared for similarity with each other (for clustering) or with a query (also represented as a feature vector projected into the same dimensional space). Similarity is measured by the angle between the two feature vectors being considered.
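Since similarity is measured by the angle between feature vectors, the usual metric is the cosine of that angle; a self-contained helper illustrating the computation (not part of this package's API):

    import "math"

    // cosineSimilarity returns dot(a, b) / (|a| * |b|) for two equal-length
    // feature vectors; 1 means identical direction, 0 means orthogonal.
    func cosineSimilarity(a, b []float64) float64 {
        var dot, normA, normB float64
        for i := range a {
            dot += a[i] * b[i]
            normA += a[i] * a[i]
            normB += b[i] * b[i]
        }
        if normA == 0 || normB == 0 {
            return 0
        }
        return dot / (math.Sqrt(normA) * math.Sqrt(normB))
    }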
Package pathlist manipulates lists of filepaths joined by the OS-specific ListSeparator (pathlists), usually found in PATH or GOPATH environment variables. See also package env ( https://godoc.org/gopkg.in/pathlist.v0/env ) for helper functions and examples for using this package. The package complements the functionality of the standard path/filepath with: Pathlist handles the quoting/unquoting of filepaths as required on Windows. The package uses two separate types to make the context of string parameters (single/list, quoted/unquoted) explicit: This also helps prevent some common mistakes with pathlist manipulation that can lead to unexpected behavior and security issues. Specifically, a naive implementation could have the following problems: In addition to using distinct types, filepath arguments are validated. On Unix, filepaths containing the separator (':') cannot be used as part of a List; these trigger an Error with Cause returning ErrSep. On Windows, raw (unquoted) filepaths must be used; an Error with Cause returning ErrQuote is issued otherwise. Calls returning (List, error) can be wrapped using Must when the validation is known to succeed in advance. Using this package, the two examples above can be written as: Note that even though the error handling API is common across platforms, the behavior is somewhat asymmetric. On Unix, ':' is generally a valid (although uncommon) character in filepaths, so an error typically indicates input error and the user should be notified. In contrast, on Windows all filepaths can be included in a list; an error generally means that the caller is mixing quoted and unquoted paths, which likely indicates a bug and that the implementation should be fixed.
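A hedged sketch of validated list construction (New is an assumed constructor name for illustration, and the string conversion of List is also an assumption; Must, ErrSep and the error behaviour are described above):

    // On Unix, a filepath containing ':' yields an Error whose Cause is
    // ErrSep; this typically indicates input error, so notify the user.
    list, err := pathlist.New("/usr/local/bin", "/usr/bin")
    if err != nil {
        log.Fatal(err)
    }

    // When validation is known to succeed in advance, wrap with Must.
    list = pathlist.Must(pathlist.New("/opt/tools/bin"))
    os.Setenv("PATH", string(list))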
Package luar provides a convenient interface between Lua and Go. It uses Alessandro Arzilli's golua (https://github.com/glycerine/golua). Most Go values can be passed to Lua: basic types, strings, complex numbers, user-defined types, pointers, composite types, functions, channels, etc. Conversely, most Lua values can be converted to Go values. Composite types are processed recursively. Methods can be called on user-defined types. These methods will be callable using _dot-notation_ rather than colon notation. Arrays, slices, maps and structs can be copied as tables, or alternatively passed over as Lua proxy objects which can be naturally indexed. In the case of structs and string maps, fields have priority over methods. Use 'luar.method(<value>, <method>)(<params>...)' to call shadowed methods. Unexported struct fields are ignored. The "lua" tag is used to match fields in struct conversion. You may pass a Lua table to an imported Go function; if the table is 'array-like' then it is converted to a Go slice; if it is 'map-like' then it is converted to a Go map. Pointer values encode as the value pointed to when unproxified. Usual operators (arithmetic, string concatenation, pairs/ipairs, etc.) work on proxies too. The type of the result depends on the type of the operands. The rules are as follows: - If the operands are of the same type, use this type. - If one type is a Lua number, use the other, user-defined type. - If the types are different and not Lua numbers, convert to a complex proxy, a Lua number, or a Lua string according to the result kind. Channel proxies can be manipulated with the following methods: - close(): Close the channel. - recv() value: Fetch and return a value from the channel. - send(x value): Send a value in the channel. Complex proxies can be manipulated with the following attributes: - real: The real part. - imag: The imaginary part. Slice proxies can be manipulated with the following methods/attributes: - append(x ...value) sliceProxy: Append the elements and return the new slice. The elements must be convertible to the slice element type. - cap: The capacity of the slice. - slice(i, j integer) sliceProxy: Return the sub-slice that ranges from 'i' to 'j' excluded, starting from 1. String proxies can be browsed rune by rune with the pairs/ipairs functions. These runes are encoded as strings in Lua. Indexing a string proxy (starting from 1) will return the corresponding byte as a Lua string. String proxies can be manipulated with the following method: - slice(i, j integer) sliceProxy: Return the sub-string that ranges from 'i' to 'j' excluded, starting from 1. Pointers to structs and structs within pointers are automatically dereferenced. Slices must be looped over with 'ipairs'.
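A hedged sketch of passing a Go function to Lua (Init, Register and Map are assumed entry-point names from the luar API, not confirmed by the text above; the doubling function is illustrative):

    L := luar.Init()
    defer L.Close()

    // Expose a Go function in the Lua global table; conversion of
    // arguments and results is automatic.
    luar.Register(L, "", luar.Map{
        "double": func(x int) int { return 2 * x },
    })

    if err := L.DoString(`print(double(21))`); err != nil {
        log.Fatal(err)
    }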
Package linq provides methods for querying and manipulating slices, arrays, maps, strings, channels and collections. Authors: Alexander Kalankhodzhaev (kalan), Ahmet Alp Balkan, Cleiton Marques Souza.
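A hedged sketch in the style of the original API, where elements are handled as linq.T (interface{}) and results are returned with an error; the exact signatures here are assumptions:

    evens, err := linq.From([]linq.T{1, 2, 3, 4, 5}).
        Where(func(x linq.T) (bool, error) { return x.(int)%2 == 0, nil }).
        Results()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(evens) // [2 4]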
Package stringfy is a simple string manipulation package for Go. Example of usage:
Sprig: Template functions for Go. This package contains a number of utility functions for working with data inside of Go `html/template` and `text/template` files. To add these functions, use the `template.Funcs()` method: Note that you should add the function map before you parse any template files. Date Functions String Functions String Slice Functions: Integer Slice Functions: Conversions: Defaults: OS: File Paths: Encoding: Reflection: typeOf: Takes an interface and returns a string representation of the type. For pointers, this will return a type prefixed with an asterisk (`*`). So a pointer to type `Foo` will be `*Foo`. typeIs: Compares an interface with a string name, and returns true if they match. Note that a pointer will not match a reference. For example `*Foo` will not match `Foo`. typeIsLike: Compares an interface with a string name and returns true if the interface is that `name` or that `*name`. In other words, if the given value matches the given type or is a pointer to the given type, this returns true. kindOf: Takes an interface and returns a string representation of its kind. kindIs: Returns true if the given string matches the kind of the given interface. Note: None of these can test whether or not something implements a given interface, since doing so would require compiling the interface in ahead of time. Data Structures: List Functions: These are used to manipulate lists: '{{ list 1 2 3 | reverse | first }}' Dict Functions: These are used to manipulate dicts. Math Functions: Integer functions will convert integers of any width to `int64`. If a string is passed in, functions will attempt to convert it with `strconv.ParseInt(s, 10, 64)`. If this fails, the value will be treated as 0. Crypto Functions: SemVer Functions: These functions provide version parsing and comparisons for SemVer 2 version strings.
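For example, registering the function map with text/template (a sketch assuming the github.com/Masterminds/sprig import path and its TxtFuncMap helper; the template itself is illustrative):

    import (
        "os"
        "text/template"

        "github.com/Masterminds/sprig"
    )

    func main() {
        tpl := template.Must(
            template.New("greet").
                Funcs(sprig.TxtFuncMap()).
                Parse(`{{ "hello!" | upper | repeat 3 }}`),
        )
        // Prints: HELLO!HELLO!HELLO!
        if err := tpl.Execute(os.Stdout, nil); err != nil {
            panic(err)
        }
    }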
Package luar provides a convenient interface between Lua and Go. It uses Alessandro Arzilli's golua (https://github.com/aarzilli/golua). Most Go values can be passed to Lua: basic types, strings, complex numbers, user-defined types, pointers, composite types, functions, channels, etc. Conversely, most Lua values can be converted to Go values. Composite types are processed recursively. Methods can be called on user-defined types. These methods will be callable using _dot-notation_ rather than colon notation. Arrays, slices, maps and structs can be copied as tables, or alternatively passed over as Lua proxy objects which can be naturally indexed. In the case of structs and string maps, fields have priority over methods. Use 'luar.method(<value>, <method>)(<params>...)' to call shadowed methods. Unexported struct fields are ignored. The "lua" tag is used to match fields in struct conversion. You may pass a Lua table to an imported Go function; if the table is 'array-like' then it is converted to a Go slice; if it is 'map-like' then it is converted to a Go map. Pointer values encode as the value pointed to when unproxified. Usual operators (arithmetic, string concatenation, pairs/ipairs, etc.) work on proxies too. The type of the result depends on the type of the operands. The rules are as follows: - If the operands are of the same type, use this type. - If one type is a Lua number, use the other, user-defined type. - If the types are different and not Lua numbers, convert to a complex proxy, a Lua number, or a Lua string according to the result kind. Channel proxies can be manipulated with the following methods: - close(): Close the channel. - recv() value: Fetch and return a value from the channel. - send(x value): Send a value in the channel. Complex proxies can be manipulated with the following attributes: - real: The real part. - imag: The imaginary part. Slice proxies can be manipulated with the following methods/attributes: - append(x ...value) sliceProxy: Append the elements and return the new slice. The elements must be convertible to the slice element type. - cap: The capacity of the slice. - slice(i, j integer) sliceProxy: Return the sub-slice that ranges from 'i' to 'j' excluded, starting from 1. String proxies can be browsed rune by rune with the pairs/ipairs functions. These runes are encoded as strings in Lua. Indexing a string proxy (starting from 1) will return the corresponding byte as a Lua string. String proxies can be manipulated with the following method: - slice(i, j integer) sliceProxy: Return the sub-string that ranges from 'i' to 'j' excluded, starting from 1. Pointers to structs and structs within pointers are automatically dereferenced. Slices must be looped over with 'ipairs'.
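A small sketch of passing Go values into Lua; the Init/Register/DoString names follow the package README (import path assumed):

    package main

    import (
        "fmt"

        "github.com/stevedonovan/luar" // import path assumed
    )

    func main() {
        L := luar.Init()
        defer L.Close()

        // Register a Go function and a Go map as Lua globals.
        luar.Register(L, "", luar.Map{
            "double": func(n int) int { return n * 2 },
            "config": map[string]string{"host": "localhost"},
        })

        // The map is indexable from Lua with dot-notation.
        if err := L.DoString(`print(double(21), config.host)`); err != nil {
            fmt.Println(err)
        }
    }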
Package dec is a Go wrapper around the decNumber library (arbitrary precision decimal arithmetic). For more information about the decNumber library, and detailed documentation, see http://speleotrove.com/decimal/dec.html It implements the General Decimal Arithmetic Specification in ANSI C. This specification defines a decimal arithmetic which meets the requirements of commercial, financial, and human-oriented applications. It also matches the decimal arithmetic in the IEEE 754 Standard for Floating Point Arithmetic. The library fully implements the specification, and hence supports integer, fixed-point, and floating-point decimal numbers directly, including infinite, NaN (Not a Number), and subnormal values. Both arbitrary-precision and fixed-size representations are supported. General usage notes: The Number format is optimized for efficient processing of relatively short numbers; it does, however, support arbitrary precision (up to 999,999,999 digits) and arbitrary exponent range (Emax in the range 0 through 999,999,999 and Emin in the range -999,999,999 through 0). Mathematical functions (for example Exp()) as identified below are restricted more tightly: digits, emax, and -emin in the context must be <= MaxMath (999999), and their operand(s) must be within these bounds. Logical functions are further restricted; their operands must be finite, positive, have an exponent of zero, and all digits must be either 0 or 1. The result will only contain digits which are 0 or 1 (and will have exponent=0 and a sign of 0). Operands to operator functions are never modified unless they are also specified to be the result number (which is always permitted). Other than that case, operands must not overlap. Go implementation details: The decimal32, decimal64 and decimal128 types are merged into Single, Double and Quad, respectively. Contexts are created with an immutable precision (i.e. number of digits). If one needs to change precision on the fly, discard the existing context and create a new one with the required precision. From a programming standpoint, any initialized Number is a valid operand in arithmetic operations, regardless of the settings or existence of its creator Context (not to be confused with having a valid value in a given arithmetic operation). Arithmetic functions are Number methods. The value of the receiver of the method will be set to the result of the operation. For example: Arithmetic methods always return the receiver in order to allow chain calling: Using the same Number as operand and result, like in n.Multiply(n, n, ctx), is legal and will not produce unexpected results. A few functions like the context status manipulation functions have been moved to their own type (Status). The C call decContextTestStatus(ctx, mask) is therefore replaced by ctx.Status().Set(mask) in the Go implementation. The same goes for decNumberClassToString(number), which is replaced by number.Class().String() in Go. Error handling: although most arithmetic functions can cause errors, the standard Go error handling is not used in its idiomatic form. That is, arithmetic functions do not return errors. Instead, the type of the error is ORed into the status flags in the current context (Context type). It is the responsibility of the caller to clear the status flags as required. The result of any routine which returns a number will always be a valid number (which may be a special value, such as an Infinity or NaN).
This permits the use of much fewer error checks; a single check for a whole computation is often enough. To check for errors, get the Context's status with the Status() function (see the Status type), or use the Context's ErrorStatus() function. The package provides facilities for managing free-lists of Numbers in order to relieve pressure on the garbage collector in computation-intensive applications. NumberPool is in fact a simple wrapper around a *Context and a sync.Pool (or the lighter util.Pool provided in the util subpackage); NumberPool will automatically cast the return value of Get() to the desired type. For example: Note the use of pool.Context in the last statement. If a particular implementation needs always-initialized numbers (like any new Go variable being initialized to the zero value of its type), the pool's New function can be set, for example, to: If an application needs to change its arithmetic precision on the fly, any NumberPool built on top of the affected Context will need to be discarded and recreated along with the Context. This will not affect existing numbers, which can still be used as valid operands in arithmetic functions. Go re-implementation of decNumber's example1.c - simple addition. Convert the first two argument words to decNumber, add them together, and display the result. Extended re-implementation of decNumber's example2.c - compound interest, with added error handling from example3.c. Go re-implementation of decNumber's example5.c - compressed formats. Go re-implementation of decNumber's example6.c - Packed Decimal numbers. This example reworks Example 2, starting and ending with Packed Decimal numbers. Go re-implementation of decNumber's example7.c - using decQuad to add two numbers together. Go re-implementation of decNumber's example8.c - using Quad with Number.
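Two sketches of the API described above. First, receiver-as-result arithmetic with chained calls; the constructor names (NewContext, NewNumber, FromString) and the import path are assumptions based on this description, so verify them against the package documentation:

    package main

    import (
        "fmt"

        dec "github.com/wildservices/go-decnumber" // import path assumed
    )

    func main() {
        // Context with an immutable precision (constructor names assumed).
        ctx := dec.NewContext(dec.InitDecimal64, 0)

        a := dec.NewNumber(ctx.Digits()).FromString("1.50", ctx)
        b := dec.NewNumber(ctx.Digits()).FromString("2.25", ctx)

        // The receiver is set to the result and returned, so calls chain:
        // n = (a + b) * a
        n := dec.NewNumber(ctx.Digits())
        n.Add(a, b, ctx).Multiply(n, a, ctx)

        fmt.Println(n) // 5.625, assuming no error flags were raised
    }

And continuing that sketch, a NumberPool free-list along the lines described above; the struct fields (Pool, Context) and method set here are likewise assumptions, not the package's confirmed API:

    // Assumed shape: a NumberPool wrapping a sync.Pool plus the Context
    // used to size the numbers it hands out (ctx as in the sketch above).
    pool := &dec.NumberPool{
        Pool: &sync.Pool{
            New: func() interface{} { return dec.NewNumber(ctx.Digits()) },
        },
        Context: ctx,
    }

    x := pool.Get() // already a *dec.Number, no manual type assertion
    defer pool.Put(x)

    // Note the use of pool.Context when operating on pooled numbers.
    x.FromString("0.1", pool.Context)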
Package linq provides methods for querying and manipulating slices, arrays, maps, strings, channels and collections. Authors: Alexander Kalankhodzhaev (kalan), Ahmet Alp Balkan, Cleiton Marques Souza.
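For example, a filter-then-project query over a slice, using the ahmetb/go-linq v3 API (import path assumed; the package name is linq):

    package main

    import (
        "fmt"

        linq "github.com/ahmetb/go-linq/v3" // import path assumed
    )

    func main() {
        var squares []int

        // Keep the even numbers, square them, and collect the results.
        linq.From([]int{1, 2, 3, 4, 5}).
            Where(func(i interface{}) bool { return i.(int)%2 == 0 }).
            Select(func(i interface{}) interface{} { return i.(int) * i.(int) }).
            ToSlice(&squares)

        fmt.Println(squares) // [4 16]
    }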
Package database provides a block and metadata storage database. This package provides a database layer to store and retrieve block data and arbitrary metadata in a simple and efficient manner. The default backend, ffldb, has a strong focus on speed, efficiency, and robustness. It makes use of leveldb for the metadata, flat files for block storage, and strict checksums in key areas to ensure data integrity. A quick overview of the features database provides is as follows: The main entry point is the DB interface. It exposes functionality for transactional-based access and storage of metadata and block data. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver. The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below. The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root metadata bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged. A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead. The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. The root metadata bucket is the upper-most bucket in which data is stored and is created at the same time as the database. Use the Metadata function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use. This example demonstrates creating a new database and using a managed read-write transaction to store and retrieve metadata.
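A sketch of that metadata example, assuming the btcsuite package layout (driver registration via a blank import; the ffldb backend takes a path and a network argument):

    package main

    import (
        "fmt"
        "os"

        "github.com/btcsuite/btcd/database"
        _ "github.com/btcsuite/btcd/database/ffldb" // registers the ffldb driver
        "github.com/btcsuite/btcd/wire"
    )

    func main() {
        // Create a database; the ffldb backend takes a path and a network.
        db, err := database.Create("ffldb", "exampledb", wire.MainNet)
        if err != nil {
            panic(err)
        }
        defer os.RemoveAll("exampledb")
        defer db.Close()

        // Managed read-write transaction: commit/rollback is automatic.
        if err := db.Update(func(tx database.Tx) error {
            return tx.Metadata().Put([]byte("mykey"), []byte("myvalue"))
        }); err != nil {
            panic(err)
        }

        // Managed read-only transaction.
        _ = db.View(func(tx database.Tx) error {
            fmt.Printf("%s\n", tx.Metadata().Get([]byte("mykey"))) // myvalue
            return nil
        })
    }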
This example demonstrates creating a new database, using a managed read-write transaction to store a block, and using a managed read-only transaction to fetch the block.
Package goutils provides utility functions to manipulate strings in various ways. The code snippets below show examples of how to use goutils. Some functions return errors while others do not, so usage would vary as a result. Example:
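For instance (function names per the goutils README; Abbreviate returns an error while Initials does not, import path assumed):

    package main

    import (
        "fmt"

        "github.com/Masterminds/goutils" // import path assumed
    )

    func main() {
        // Abbreviate can fail (e.g. the width is too small), so it returns an error.
        abbr, err := goutils.Abbreviate("This is a long sentence", 10)
        if err != nil {
            panic(err)
        }
        fmt.Println(abbr) // "This is..."

        // Initials cannot fail, so it returns only the string.
        fmt.Println(goutils.Initials("John Doe")) // "JD"
    }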
The motivation behind this package is that the StructTag implementation shipped with Go's standard library is very limited in detecting a malformed StructTag, and each time StructTag.Get(key) is called, the StructTag is parsed again. Another problem is that the StructTag cannot be easily manipulated because it is basically a string. This package provides a way to parse the StructTag into a Tag map, which allows for fast lookups and easy manipulation of key-value pairs within the Tag.
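A sketch with the fatih/structtag API: parse once, then do cheap lookups against the resulting Tags value.

    package main

    import (
        "fmt"
        "reflect"

        "github.com/fatih/structtag"
    )

    func main() {
        type User struct {
            Name string `json:"name,omitempty" xml:"name"`
        }

        field, _ := reflect.TypeOf(User{}).FieldByName("Name")

        // Parse the StructTag once; malformed tags surface as errors here.
        tags, err := structtag.Parse(string(field.Tag))
        if err != nil {
            panic(err)
        }

        // Subsequent lookups hit the parsed map, not the raw string.
        jsonTag, _ := tags.Get("json")
        fmt.Println(jsonTag.Name, jsonTag.Options) // name [omitempty]
    }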
Package jin Copyright (c) 2020 eco. License that can be found in the LICENSE file. "your wish is my command" Jin is a comprehensive JSON manipulation tool bundle. All functions are tested with random data with the help of Node.js. All test-path and test-value creation is automated with Node.js. Jin provides parse, interpret, build and format tools for JSON. Third-party packages are only used for the benchmarks; no dependencies are needed for the core functions. We benchmarked Jin against other packages like it. In the results, Jin is the fastest (op/ns) and more memory-friendly than the others (B/op). For more information please take a look at the BENCHMARK section below. WHAT IS NEW? 7 new functions tested and added to the package. - GetMap() gets objects as a map[string]string structure with key-value pairs - GetAll() gets only specific keys' values - GetAllMap() gets only specific keys with a map[string]string structure - GetKeys() gets an object's keys as a string array - GetValues() gets an object's values as a string array - GetKeysValues() gets an object's keys and values as separate string arrays - Length() gets the length of a JSON array. 06.04.2020 INSTALLATION And you are good to go. Import and start using. The major difference between parsing and interpreting is that a parser has to read all the data before answering your needs, whereas an interpreter reads only up to the data you need. With a parser, once the parse is complete you can access any data in no time, but there is a time cost to parsing all the data, and this cost grows with the data content. If you need to access all keys of a JSON document, we simply recommend the Parser. But if you need to access only some keys of a JSON document, we strongly recommend the Interpreter; it will be much faster and much more memory-friendly than the parser. The Interpreter is the core element of this package; there is no need to create an Interpreter type, just call whichever function you want. First let's look at the general function parameters. We are going to use the Get() function to access the value the path points to, in this case 'jin'. The path value can consist of hard-coded values. The Get() function's return type is []byte, but all other return-type variations are implemented as different functions. For example, if you need "value" as a string, use GetString(). The Parser is another alternative for JSON manipulation. We recommend this structure when you need to access all or most of the keys in the JSON. The Parser constructor needs only one parameter. We can parse it with the Parse() function. Let's look at Parser.Get(). About path values, see above. All the return-type variations of Parser.Get() exist, as with the Interpreter. To return a string use Parser.GetString(), like this: Other useful functions of Jin: Add(), AddKeyValue(), Set(), SetKey(), Delete(), Insert(), IterateArray(), IterateKeyValue() and Tree(). Let's look at the IterateArray() function. There are two formatting functions, Flatten() and Indent(). Indent() adds indentation to JSON for nicer visualization and Flatten() removes this indentation. Control functions are a simple and easy way to check the value type at any path, for example IsArray(). Or you can use GetType(). There are lots of JSON build functions in this package and all of them have their own examples. We just want to mention a couple of them. Scheme is a simple and powerful tool for creating JSON schemes. Testing is a very important thing for this kind of package, and it shows how reliable the package is. For that reason we use Node.js for unit testing. Let's look at the folder arrangement and working principle.
- test/ folder: test-json.json, this is a temporary file for testing. All other test-cases are copied here under this name so they can be processed by test-case-creator.js. test-case-creator.js is the core path & value creation mechanism. When it is executed with the executeNode() function, it reads the test-json.json file and generates the paths and values from this file's content. With command-line arguments it can generate different paths and values. As a result, two files are created by this process: the first is test-json-paths.json and the second is test-json-values.json. test-json-paths.json has all the path values. test-json-values.json has all the values corresponding to those path values. - tests/ folder: All files in this folder are test-cases. But that doesn't mean you can't change anything; on the contrary, all test-cases are created automatically based on this folder's content. You can add or remove any .json file that you want. All Go-side test-case automation functions are in the core_test.go file. This package was developed with Node.js v13.7.0; please make sure that your machine has a valid version of Node.js before testing. All functions and methods are tested with complicated, randomly generated .json files, like this one. Most JSON packages do not even run properly with this kind of JSON stream. We did not see such packages as competitors, and that's why we didn't bother to benchmark against them. Benchmark results. - The Benchmark prefix is removed from function names to make room for the results. - The benchmark between 'buger/jsonparser' and 'ecoshub/jin' uses the same payload (JSON test-cases) that the 'buger/jsonparser' package uses for its own benchmarks. We are currently working on: - Marshal() and Unmarshal() functions. - an http.Request parser/interpreter - Builder functions for http.ResponseWriter If you want to contribute to this work feel free to fork it. We want to fill this section with contributors.
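Putting the Interpreter usage described above into code, a minimal sketch (GetString with variadic path elements; addressing array elements by number strings is our assumption, so check the package docs):

    package main

    import (
        "fmt"

        "github.com/ecoshub/jin"
    )

    func main() {
        data := []byte(`{"user":{"name":"jin","langs":["go","js"]}}`)

        // No Parse step: the interpreter reads just far enough to answer.
        name, err := jin.GetString(data, "user", "name")
        if err != nil {
            panic(err)
        }
        fmt.Println(name) // jin

        // Array elements are addressed by index.
        lang, _ := jin.GetString(data, "user", "langs", "0")
        fmt.Println(lang) // go
    }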
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in progress effort of providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already useable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The usual entry point of using the dom package is by using the GetWindow() function which will return a Window, from which you can get things such as the current Document. The DOM has a big amount of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time. This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. 
Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
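A short sketch of the entry point and type assertions described above, assuming the honnef.co/go/js/dom import path:

    package main

    import "honnef.co/go/js/dom" // import path assumed

    func main() {
        // GetWindow is the usual entry point; from it, get the Document.
        d := dom.GetWindow().Document()

        // Generic lookups return interface values; assert to the concrete
        // element type when you need its specific behaviour.
        if h, ok := d.GetElementByID("title").(*dom.HTMLHeadingElement); ok {
            h.SetTextContent("Hello from Go")
        }

        d.GetElementByID("run").AddEventListener("click", false, func(e dom.Event) {
            println("clicked")
        })
    }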
Copying: All nodes (since they implement the Node interface) also implement the NodeCopier interface, which provides the ShallowCopy() function. A shallow copy returns a new node with all the same properties, but no children. On the other hand, there is a DeepCopy function which returns a new node with all recursive children also copied. This ensures that the new returned node can be manipulated without affecting the original node or any of its children. Dates in GEDCOM files can be very complex as they can cater for many scenarios: 1. Incomplete, like "Dec 1943" 2. Anchored, like "Aft. 3 Sep 2003" or "Before 1923" 3. Ranges, like "Bet. 4 Apr 1823 and 8 Apr 1823" 4. Phrases, like "(Foo Bar)" This package provides a very rich API for dealing with all kinds of dates in a meaningful and sensible way. Some notable features include: 1. All dates, even those that specify a specific day, have a minimum and maximum value that are their true bounds. This is especially important for larger date ranges like the whole month of "Jun 1945". 2. Upper and lower bounds of dates can be converted to the native Go time.Time object. 3. There is a Years function that provides a convenient way to normalise a date range into a number for easier distance and comparison measurements. 4. Algorithms for calculating the similarity of dates on a configurable parabola. Decoding a GEDCOM stream: If you are reading from a file you can use NewDocumentFromGEDCOMFile: Package gedcom contains functionality for encoding, decoding, traversing, manipulating and comparing GEDCOM data. You can download the latest binaries for macOS, Windows and Linux on the Releases page: https://github.com/elliotchance/gedcom/releases This will not require you to install Go or any other dependencies. If you wish to build it from source you must install the dependencies with: On top of the raw document is a powerful API that takes care of the complex traversing of the Document. Here is a simple example: Some of the nodes in a GEDCOM file have been replaced with more function-rich types, such as names, dates, families and more. Encoding a Document: If you need the GEDCOM data as a string you can simply use fmt.Stringer: The Filter function recursively removes or manipulates nodes with a FilterFunction: Some examples of Filter functions include BlacklistTagFilter, OfficialTagFilter, SimpleNameFilter and WhitelistTagFilter. There are several functions available that handle different kinds of merging: - MergeNodes(left, right Node) Node: returns a new node that merges children from both nodes. - MergeNodeSlices(left, right Nodes, mergeFn MergeFunction) Nodes: merges two slices based on the mergeFn. This allows more advanced merging when dealing with slices of nodes. - MergeDocuments(left, right *Document, mergeFn MergeFunction) *Document: creates a new document with their respective nodes merged. You can use IndividualBySurroundingSimilarityMergeFunction with this to merge individuals, rather than just appending them all. The MergeFunction is a type that can be received in some of the merging functions. The closure determines if two nodes should be merged and what the result would be. Alternatively it can also describe when two nodes should not be merged. You may certainly create your own MergeFunction, but there are some that are already included: - IndividualBySurroundingSimilarityMergeFunction creates a MergeFunction that will merge individuals if their surrounding similarity is at least minimumSimilarity.
- EqualityMergeFunction is a MergeFunction that will return a merged node if the nodes are considered equal (with Equals). Node.Equals performs a shallow comparison between two nodes. The implementation is different depending on the types of nodes being compared. You should see the specific documentation for the Node. Equality is not to be confused with the Is function seen on some of the nodes, such as Date.Is. The Is function is used to compare exact raw values in nodes. DeepEqual tests if left and right are recursively equal. CompareNodes recursively compares two nodes. For example: Produces a *NodeDiff that can be rendered with the String method:
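As a concrete starting point, a decoding-and-traversal sketch using the names mentioned above (NewDocumentFromGEDCOMFile and the higher-level Individuals/Name API):

    package main

    import (
        "fmt"

        "github.com/elliotchance/gedcom"
    )

    func main() {
        doc, err := gedcom.NewDocumentFromGEDCOMFile("family.ged")
        if err != nil {
            panic(err)
        }

        // The rich API hides the raw node traversal.
        for _, individual := range doc.Individuals() {
            fmt.Println(individual.Name().String())
        }
    }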
Package jsonfs provides a JSON marshaller/unmarshaller that treats JSON objects as directories and JSON values (bools, numbers or strings) as files. This is an alternative to the various other structures for dealing with JSON that use either maps or structs to represent data. It is particularly great for doing discovery on JSON data or manipulating JSON. Each file is read-only. You can switch out files in a directory in order to make updates. A File represents a JSON basic value of bool, integer, float, null or string. A Directory represents a JSON object or array. Because a directory can be an object or an array, a directory that represents an array has _.array._ appended to its name in some filesystems. If creating an array using filesystem tools, use ArrayDirName("name_of_your_array"), which will append the correct suffix. You always open the file with simply the name. This also means that naming an object (not an array) _.array._ will cause unexpected results. This is a thought experiment. However, it is quite performant, though it does have the unattractive quality of being more verbose. Also, you may find problems if your JSON dict keys have characters that are invalid for the filesystem you are running on AND you use the diskfs filesystem. For the memfs, this is mostly not a problem, except for /. You cannot use / in your keys. We are slower and allocate more memory. I'm going to have to spend some time optimising the memory use. I had some previous benchmarks that showed this was faster, but I had a mistake that became obvious with a large file (unless I was 700,000x faster on unmarshal, and I'm not that good). Example of unmarshalling a JSON file: Example of creating a JSON object via the library: Example of marshaling a Directory: Example of getting a JSON value by field name: Same example, but we are okay with zero values if the field doesn't exist: Get the type a File holds: Get a value the easiest way when you aren't sure what it is: Get a value from a Directory when you care about all the details (uck): Get a value from a Directory when zero values will do if it is set to null or doesn't exist: Put the value in an fs.FS and walk the JSON:
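A walking sketch; the UnmarshalJSON constructor and the import path here are hypothetical, but a Directory is meant to satisfy fs.FS, so the standard io/fs tooling applies:

    package main

    import (
        "fmt"
        "io/fs"
        "strings"

        "example.com/jsonfs" // hypothetical import path
    )

    func main() {
        j := `{"name":"gopher","tags":["a","b"]}`

        // Hypothetical constructor: JSON in, Directory out.
        dir, err := jsonfs.UnmarshalJSON(strings.NewReader(j))
        if err != nil {
            panic(err)
        }

        // Objects walk as directories, basic values as files.
        fs.WalkDir(dir, ".", func(path string, d fs.DirEntry, err error) error {
            fmt.Println(path)
            return err
        })
    }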
Package luar provides a convenient interface between Lua and Go. It uses Alessandro Arzilli's golua (https://github.com/aarzilli/golua). Most Go values can be passed to Lua: basic types, strings, complex numbers, user-defined types, pointers, composite types, functions, channels, etc. Conversely, most Lua values can be converted to Go values. Composite types are processed recursively. Methods can be called on user-defined types. These methods will be callable using _dot-notation_ rather than colon notation. Arrays, slices, maps and structs can be copied as tables, or alternatively passed over as Lua proxy objects which can be naturally indexed. In the case of structs and string maps, fields have priority over methods. Use 'luar.method(<value>, <method>)(<params>...)' to call shadowed methods. Unexported struct fields are ignored. The "lua" tag is used to match fields in struct conversion. You may pass a Lua table to an imported Go function; if the table is 'array-like' then it is converted to a Go slice; if it is 'map-like' then it is converted to a Go map. Pointer values encode as the value pointed to when unproxified. Usual operators (arithmetic, string concatenation, pairs/ipairs, etc.) work on proxies too. The type of the result depends on the type of the operands. The rules are as follows: - If the operands are of the same type, use this type. - If one type is a Lua number, use the other, user-defined type. - If the types are different and not Lua numbers, convert to a complex proxy, a Lua number, or a Lua string according to the result kind. Channel proxies can be manipulated with the following methods: - close(): Close the channel. - recv() value: Fetch and return a value from the channel. - send(x value): Send a value in the channel. Complex proxies can be manipulated with the following attributes: - real: The real part. - imag: The imaginary part. Slice proxies can be manipulated with the following methods/attributes: - append(x ...value) sliceProxy: Append the elements and return the new slice. The elements must be convertible to the slice element type. - cap: The capacity of the slice. - sub(i, j integer) sliceProxy: Return the sub-slice that ranges from 'i' to 'j' included. This matches Lua's behaviour, but not Go's. String proxies can be browsed rune by rune with the pairs/ipairs functions. These runes are encoded as strings in Lua. String proxies can be manipulated with the following method: - sub(i, j integer) sliceProxy: Return the sub-string that ranges from 'i' to 'j' included. This matches Lua's behaviour, but not Go's. Slices must be looped with 'ipairs'.
Package morestrings implements additional functions to manipulate UTF-8 encoded strings, beyond what is provided in the standard "strings" package.
Package maptrans provides a generic map to manipulate dictionaries. There are many cases where we have some JSON object and we want to convert it to another JSON object. Since Go is strongly typed, a useful approach is to first translate the JSON object into a map from string to interface. Such a map is called a Maptrans. We can then define a translation of one Maptrans into another Maptrans. Translations are defined by specially constructed Maptrans values. The simplest case is when we take a field from one object and present it in the result under a different name. In this case we just write the source and destination strings. Example of name conversions: Translating a field to another field and changing the value: we can provide a function which will translate a field value to another value. The function can also perform some verification of the input. For this we describe the conversion using a MapElement object, which is defined as: There are several predefined MapFunc translators: - IDMap translates any value to itself. It can be used to map common objects to themselves. - StringMap translates a string to a string, trimming leading and trailing spaces. - StringToLowerMap translates a string to a lower-case string (and trims spaces). - IdentifierMap does a string translation but rejects invalid identifiers. An identifier should start with a letter or underscore and have only letters, digits and underscores in it. - IPAddrMap does a string translation of IP addresses, which must be valid. - CIDRMap does a string translation of IP addresses in slash notation, e.g. 1.2.3.4/24. - BoolMap converts a boolean or string to a boolean. - UUIDMap converts a string to a string, verifying that the source string is a valid UUID. - StringArrayMap converts an array of strings into another array of strings. When the Mandatory field is specified, the field must be present in the source object. To translate one map into another, the Type should be specified as ObjectTranslation. The SubTranslation is the translation specification for the internal object. Translating an array of objects: to translate an array of objects into another array of objects, the Type should be specified as ObjectArrayTranslation. The SubTranslation defines the translation for each element of the array. Using values to modify the original objects. Example JSON object: If we want to present this as a "flat" object we need an ObjectArrayTranslation method. Example:
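A sketch of a translation specification; the import path, the MapElement field names, and the Translate function here are hypothetical reconstructions from the description above, not the package's confirmed API:

    package main

    import (
        "fmt"

        "example.com/maptrans" // hypothetical import path
    )

    func main() {
        src := maptrans.Maptrans{
            "user_name": "alice",
            "addr":      " 10.0.0.1 ",
        }

        spec := maptrans.Maptrans{
            // Simplest case: rename a field.
            "user_name": "name",
            // Translate and validate with a MapElement (field names hypothetical).
            "addr": maptrans.MapElement{
                Name:      "ip",
                MapFun:    maptrans.IPAddrMap,
                Mandatory: true,
            },
        }

        out, err := maptrans.Translate(src, spec) // function name hypothetical
        if err != nil {
            panic(err)
        }
        fmt.Println(out) // map[ip:10.0.0.1 name:alice]
    }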
Package dom provides GopherJS bindings for the JavaScript DOM APIs. This package is an in progress effort of providing idiomatic Go bindings for the DOM, wrapping the JavaScript DOM APIs. The API is neither complete nor frozen yet, but a great amount of the DOM is already useable. While the package tries to be idiomatic Go, it also tries to stick closely to the JavaScript APIs, so that one does not need to learn a new set of APIs if one is already familiar with it. One decision that hasn't been made yet is what parts exactly should be part of this package. It is, for example, possible that the canvas APIs will live in a separate package. On the other hand, types such as StorageEvent (the event that gets fired when the HTML5 storage area changes) will be part of this package, simply due to how the DOM is structured – even if the actual storage APIs might live in a separate package. This might require special care to avoid circular dependencies. The documentation for some of the identifiers is based on the MDN Web Docs by Mozilla Contributors (https://developer.mozilla.org/en-US/docs/Web/API), licensed under CC-BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5/). The usual entry point of using the dom package is by using the GetWindow() function which will return a Window, from which you can get things such as the current Document. The DOM has a big amount of different element and event types, but they all follow three interfaces. All functions that work on or return generic elements/events will return one of the three interfaces Element, HTMLElement or Event. In these interface values there will be concrete implementations, such as HTMLParagraphElement or FocusEvent. It's also not unusual that values of type Element also implement HTMLElement. In all cases, type assertions can be used. Example: Several functions in the JavaScript DOM return "live" collections of elements, that is collections that will be automatically updated when elements get removed or added to the DOM. Our bindings, however, return static slices of elements that, once created, will not automatically reflect updates to the DOM. This is primarily done so that slices can actually be used, as opposed to a form of iterator, but also because we think that magically changing data isn't Go's nature and that snapshots of state are a lot easier to reason about. This does not, however, mean that all objects are snapshots. Elements, events and generally objects that aren't slices or maps are simple wrappers around JavaScript objects, and as such attributes as well as method calls will always return the most current data. To reflect this behaviour, these bindings use pointers to make the semantics clear. Consider the following example: The above example will print `true`. Some objects in the JS API have two versions of attributes, one that returns a string and one that returns a DOMTokenList to ease manipulation of string-delimited lists. Some other objects only provide DOMTokenList, sometimes DOMSettableTokenList. To simplify these bindings, only the DOMTokenList variant will be made available, by the type TokenList. In cases where the string attribute was the only way to completely replace the value, our TokenList will provide Set([]string) and SetString(string) methods, which will be able to accomplish the same. Additionally, our TokenList will provide methods to convert it to strings and slices. This package has a relatively stable API. However, there will be backwards incompatible changes from time to time. 
This is because the package isn't complete yet, as well as because the DOM is a moving target, and APIs do change sometimes. While an attempt is made to reduce changing function signatures to a minimum, it can't always be guaranteed. Sometimes mistakes in the bindings are found that require changing arguments or return values. Interfaces defined in this package may also change on a semi-regular basis, as new methods are added to them. This happens because the bindings aren't complete and can never really be, as new features are added to the DOM. If you depend on none of the APIs changing unexpectedly, you're advised to vendor this package.
Objx - Go package for dealing with maps, slices, JSON and other data. Objx provides the `objx.Map` type, which is a `map[string]interface{}` that exposes a powerful `Get` method (among others) that allows you to easily and quickly get access to data within the map, without having to worry too much about type assertions, missing data, default values etc. Objx uses a predictable pattern to make accessing data from within a `map[string]interface{}` easy. Call one of the `objx.` functions to create your `objx.Map` to get going: NOTE: Any methods or functions with the `Must` prefix will panic if something goes wrong; the rest will be optimistic and try to figure things out without panicking. Use `Get` to access the value you're interested in. You can use dot and array notation too: Once you have the `Value` you're interested in, you can use the `Is*` methods to determine its type. Or you can just assume the type and use one of the strong-type methods to extract the real value: If there's no value there (or if it's the wrong type) then a default value will be returned, or you can be explicit about the default value. If you're dealing with a slice of data as a value, Objx provides many useful methods for iterating, manipulating and selecting that data. You can find out more by exploring the index below. A simple example of how to use Objx: Since `objx.Map` is a `map[string]interface{}` you can treat it as such. For example, to `range` the data, do what you would expect:
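For example, with stretchr/objx:

    package main

    import (
        "fmt"

        "github.com/stretchr/objx"
    )

    func main() {
        m, err := objx.FromJSON(`{"name":"Mat","address":{"city":"Boulder"},"tags":["go","json"]}`)
        if err != nil {
            panic(err)
        }

        // Dot and array notation, strong-type accessors, and defaults.
        fmt.Println(m.Get("address.city").Str()) // Boulder
        fmt.Println(m.Get("tags[0]").Str())      // go
        fmt.Println(m.Get("missing").Str("n/a")) // n/a
    }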
Useful functions to work with strings. Package morestrings implements additional functions to manipulate UTF-8 encoded strings, beyond what is provided in the standard "strings" package.
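For example (ReverseRunes is the function name used in the canonical tutorial version of this package; treat the name and module path as assumptions):

    package main

    import (
        "fmt"

        "example.com/user/morestrings" // module path varies
    )

    func main() {
        // Reverses by rune, so multi-byte UTF-8 characters stay intact.
        fmt.Println(morestrings.ReverseRunes("Hello, 世界")) // 界世 ,olleH
    }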
Package dec is a Go wrapper around the decNumber library (arbitrary precision decimal arithmetic). For more information about the decNumber library, and detailed documentation, see http://speleotrove.com/decimal/dec.html

The decNumber library implements the General Decimal Arithmetic Specification in ANSI C. This specification defines a decimal arithmetic which meets the requirements of commercial, financial, and human-oriented applications. It also matches the decimal arithmetic in the IEEE 754 Standard for Floating Point Arithmetic. The library fully implements the specification, and hence supports integer, fixed-point, and floating-point decimal numbers directly, including infinite, NaN (Not a Number), and subnormal values. Both arbitrary-precision and fixed-size representations are supported.

General usage notes: The Number format is optimized for efficient processing of relatively short numbers; it does, however, support arbitrary precision (up to 999,999,999 digits) and an arbitrary exponent range (Emax in the range 0 through 999,999,999 and Emin in the range -999,999,999 through 0). Mathematical functions (for example Exp()) as identified below are restricted more tightly: digits, emax, and -emin in the context must be <= MaxMath (999999), and their operand(s) must be within these bounds. Logical functions are further restricted; their operands must be finite, positive, have an exponent of zero, and all digits must be either 0 or 1. The result will only contain digits which are 0 or 1 (and will have exponent=0 and a sign of 0). Operands to operator functions are never modified unless they are also specified to be the result number (which is always permitted). Other than that case, operands must not overlap.

Go implementation details: The decimal32, decimal64 and decimal128 types are merged into Single, Double and Quad, respectively. Contexts are created with an immutable precision (i.e. number of digits). If one needs to change precision on the fly, discard the existing context and create a new one with the required precision. From a programming standpoint, any initialized Number is a valid operand in arithmetic operations, regardless of the settings or existence of its creator Context (not to be confused with having a valid value in a given arithmetic operation).

Arithmetic functions are Number methods. The value of the receiver of the method is set to the result of the operation, and arithmetic methods always return the receiver in order to allow chained calls, as in the sketch following this description. Using the same Number as operand and result, as in n.Multiply(n, n, ctx), is legal and will not produce unexpected results.

A few functions, like the context status manipulation functions, have been moved to their own type (Status). The C call decContextTestStatus(ctx, mask) is therefore replaced by ctx.Status().Set(mask) in the Go implementation. The same goes for decNumberClassToString(number), which is replaced by number.Class().String() in Go.

Error handling: although most arithmetic functions can cause errors, the standard Go error handling is not used in its idiomatic form. That is, arithmetic functions do not return errors. Instead, the type of the error is ORed into the status flags in the current context (Context type). It is the responsibility of the caller to clear the status flags as required. The result of any routine which returns a number will always be a valid number (which may be a special value, such as an Infinity or NaN).
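A hedged sketch of the receiver-as-result convention; the constructor and helper names (NewContext, InitDecimal128, NewNumber, Digits, FromString) are assumptions, while the Add/Multiply signatures, chaining, and ErrorStatus come from the text of this documentation:

	ctx := dec.NewContext(dec.InitDecimal128, 0)             // assumed constructor
	a := dec.NewNumber(ctx.Digits()).FromString("2.5", ctx)  // assumed helpers
	b := dec.NewNumber(ctx.Digits()).FromString("4", ctx)
	n := dec.NewNumber(ctx.Digits())

	// The receiver is set to the result and returned, so calls chain:
	n.Add(a, b, ctx).Multiply(n, n, ctx) // n = (a+b) * (a+b)

	// A single error check at the end of the computation often suffices:
	if err := ctx.ErrorStatus(); err != nil {
		// handle the accumulated arithmetic errors
	}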
This error model permits far fewer error checks; a single check for a whole computation is often enough. To check for errors, get the Context's status with the Status() function (see the Status type), or use the Context's ErrorStatus() function.

The package provides facilities for managing free lists of Numbers in order to relieve pressure on the garbage collector in computation-intensive applications. NumberPool is in fact a simple wrapper around a *Context and a sync.Pool (or the lighter util.Pool provided in the util subpackage); NumberPool will automatically cast the return value of Get() to the desired type. See the sketch at the end of this description, and note the use of pool.Context in its last statement. If a particular implementation needs always-initialized numbers (like any new Go variable being initialized to the zero value of its type), the pool's New function can be set to return such a number, as the sketch also shows. If an application needs to change its arithmetic precision on the fly, any NumberPool built on top of the affected Context will need to be discarded and recreated along with the Context. This will not affect existing numbers, which can still be used as valid operands in arithmetic functions.

The package examples are Go re-implementations of the decNumber C examples:

Example 1 (example1.c): simple addition. Convert the first two argument words to decNumber, add them together, and display the result.

Example 2 (example2.c): compound interest, extended with error handling from example3.c.

Example 5 (example5.c): compressed formats.

Example 6 (example6.c): Packed Decimal numbers. This example reworks Example 2, starting and ending with Packed Decimal numbers.

Example 7 (example7.c): using decQuad to add two numbers together.

Example 8 (example8.c): using Quad with Number.
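A sketch of the NumberPool pattern described above; NumberPool, Get, Put and pool.Context come from the text, while NewNumber, Digits, FromString and Zero are assumed names:

	// Build a pool of Numbers on top of an existing *dec.Context.
	pool := &dec.NumberPool{
		&sync.Pool{
			New: func() interface{} { return dec.NewNumber(ctx.Digits()) }, // assumed constructor
		},
		ctx,
	}

	// Get returns a *dec.Number, already cast to the right type.
	x := pool.Get()
	defer pool.Put(x)

	// Note the use of pool.Context in the last statement.
	x.FromString("1234.5678", pool.Context)

If always-initialized numbers are needed, the pool's New function could instead return a number reset to zero, for example (assuming a Zero method exists):

	New: func() interface{} { return dec.NewNumber(ctx.Digits()).Zero() }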