Package jsonapi provides a serializer and deserializer for jsonapi.org spec payloads. You can keep your model structs as is and use struct field tags to indicate to jsonapi how you want your response built or your request deserialized. What about my relationships? jsonapi supports relationships out of the box and will even side load them in your response into an "included" array that contains the associated objects. jsonapi uses StructField tags to annotate the struct fields that you already have and use in your app, and then reads and writes jsonapi.org output based on the instructions you give the library in your jsonapi tags. Example structs use a Blog > Post > Comment structure (see the sketch after this overview). jsonapi Tag Reference: Value "primary,<type field output>" indicates that this is the primary key field for this struct type. Tag value arguments are comma separated. The first argument must be "primary", and the second must be the name that should appear in the "type" field for all data objects that represent this type of model. Value "attr,<key name in attributes hash>[,<extra arguments>]" marks fields whose values should end up in the "attributes" hash for a record. The first argument must be "attr", and the second should be the name of the key to display in the "attributes" hash for that record. The following extra arguments are also supported: "omitempty" excludes the field's value from the "attributes" hash; "iso8601" uses the ISO 8601 timestamp format when serialising or deserialising the time.Time value. Value "relation,<key name in relationships hash>" marks struct fields that represent a one-to-one or one-to-many relationship to other structs. jsonapi will traverse the graph of relationships and marshal or unmarshal records. The first argument must be "relation", and the second should be the name of the relationship, used as the key in the "relationships" hash for the record. Use the methods below to Marshal and Unmarshal jsonapi.org json payloads. Visit the readme at https://github.com/google/jsonapi
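As a concrete illustration of the tag reference above, here is a minimal sketch of the Blog > Post > Comment structs; the field names and ID types are illustrative assumptions, not a prescribed model.

    // Hypothetical models annotated with jsonapi struct tags.
    type Blog struct {
        ID    int     `jsonapi:"primary,blogs"`  // "type" will be "blogs"
        Title string  `jsonapi:"attr,title"`     // appears under "attributes"
        Posts []*Post `jsonapi:"relation,posts"` // one-to-many, side loaded into "included"
    }

    type Post struct {
        ID       int        `jsonapi:"primary,posts"`
        Body     string     `jsonapi:"attr,body,omitempty"` // dropped from "attributes" when empty
        Comments []*Comment `jsonapi:"relation,comments"`
    }

    type Comment struct {
        ID   int    `jsonapi:"primary,comments"`
        Body string `jsonapi:"attr,body"`
    }

A payload can then be written with jsonapi.MarshalPayload(w, blog) and read back with jsonapi.UnmarshalPayload(r, blog).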
Package joseki is a pure Go library for working with RDF, a powerful framework for representing information as graphs. For more information about RDF itself, please see https://www.w3.org/TR/rdf11-concepts Joseki provides the following features to work with RDF: * Structures to represent and manipulate the RDF model (URIs, Literals, Blank Nodes, Triples, etc.). * RDF Graphs to store data, with several implementations provided. * A low level API to query data stored in graphs. * A high level API to query data using the SPARQL 1.1 query language. * Query processing using modern techniques such as join ordering and optimized query execution plans. * Loading of RDF data stored in files in various formats (N-Triples, Turtle, etc.) into any graph. * Serialization of an RDF graph into various formats. This package aims to work with RDF graphs, which are composed of RDF triples {Subject, Predicate, Object}. Using joseki, you can represent an RDF triple, store your RDF triples in an RDF graph (using various types of graphs; below, a Tree Graph is used to store a triple), and query any triple from an RDF graph using the low level API or a SPARQL query, as sketched below. For more information about specific features, see the documentation of each subpackage. Author: Thomas Minier
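To give shape to the triple/graph/query workflow described above, here is a minimal sketch based on the project README; the import path, constructor names, and the Filter signature are assumptions here and should be checked against the subpackage documentation.

    package main

    import (
        "fmt"

        "github.com/Callidon/joseki/graph" // assumed import path
        "github.com/Callidon/joseki/rdf"
    )

    func main() {
        // Represent an RDF triple {Subject, Predicate, Object}.
        subject := rdf.NewURI("http://example.org/book/book1")
        predicate := rdf.NewURI("http://purl.org/dc/terms/title")
        object := rdf.NewLiteral("Harry Potter and the Order of the Phoenix")
        triple := rdf.NewTriple(subject, predicate, object)

        // Store the triple in a Tree Graph.
        g := graph.NewTreeGraph()
        g.Add(triple)

        // Query matching triples back with the low level API.
        for t := range g.Filter(subject, predicate, object) {
            fmt.Println(t)
        }
    }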
Package jsonapi provides a serializer and deserializer for jsonapi.org spec payloads. You can keep your model structs as is and use struct field tags to indicate to jsonapi how you want your response built or your request deserialized. What about my relationships? jsonapi supports relationships out of the box and will even side load them in your response into an "included" array that contains the associated objects. jsonapi uses StructField tags to annotate the struct fields that you already have and use in your app, and then reads and writes jsonapi.org output based on the instructions you give the library in your jsonapi tags. Example structs use a Blog > Post > Comment structure. jsonapi Tag Reference: Value "primary,<type field output>" indicates that this is the primary key field for this struct type. Tag value arguments are comma separated. The first argument must be "primary", and the second must be the name that should appear in the "type" field for all data objects that represent this type of model. Value "attr,<key name in attributes hash>[,<extra arguments>]" marks fields whose values should end up in the "attributes" hash for a record. The first argument must be "attr", and the second should be the name of the key to display in the "attributes" hash for that record. The following extra arguments are also supported: "omitempty" excludes the field's value from the "attributes" hash; "iso8601" uses the ISO 8601 timestamp format when serialising or deserialising the time.Time value. Value "relation,<key name in relationships hash>" marks struct fields that represent a one-to-one or one-to-many relationship to other structs. jsonapi will traverse the graph of relationships and marshal or unmarshal records. The first argument must be "relation", and the second should be the name of the relationship, used as the key in the "relationships" hash for the record. Use the methods below to Marshal and Unmarshal jsonapi.org json payloads. Visit the readme at https://github.com/companyinfo/jsonapi
Package disjoint implements a disjoint-set data structure. A disjoint-set—also called union-find—data structure keeps track of nonoverlapping partitions of a collection of data elements. Initially, each data element belongs to its own, singleton, set. The following operations can then be performed on these sets: • Union merges two sets into a single set containing the union of their elements. • Find returns an arbitrary element from a set. The critical feature is that Find returns the same element when given any element in a set. The implication is that two elements A and B belong to the same set if and only if A.Find() == B.Find(). Both Union and Find take as arguments elements of sets, not the sets themselves. Because sets are mutually disjoint, an element uniquely identifies a set. Ergo, there is no need to pass sets to those functions. Disjoint sets are more limited in functionality than conventional sets. They support only set union, not set intersection, set difference, or any other set operation. They don't allow an element to reside in more than one set. They don't even provide a way to enumerate the elements in a given set. What makes them useful, though, is that they're extremely fast, especially for large sets; both Union and Find run in amortized near-constant time. See http://en.wikipedia.org/wiki/Disjoint-set_data_structure for more information. Disjoint sets are often used in graph algorithms, for example to find a minimal spanning tree for a graph or to determine if adding a given edge to a graph would create a cycle. Draw a maze. This is my favorite use of disjoint-set forests. The algorithm works by repeatedly knocking down walls between two sets of rooms that are not part of the same union and merging those rooms into a single union. A union represents connected components—rooms in the maze that are mutually reachable. A single union implies that every room is reachable from every other room. This is a fairly long example, but note that only half of it relates to generating the maze. The other half renders the maze using Unicode box-drawing characters.
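To make the Union/Find contract concrete, here is a small self-contained union-find sketch with union by rank and path compression. It illustrates the underlying forest structure described above, not this package's exact API.

    package main

    import "fmt"

    // Element is one member of a disjoint set.
    type Element struct {
        parent *Element // parent in the forest; points to itself for a root
        rank   int      // upper bound on subtree height, used by Union
    }

    // NewElement returns a new element in its own singleton set.
    func NewElement() *Element {
        e := &Element{}
        e.parent = e
        return e
    }

    // Find returns the canonical element of e's set, compressing paths as it goes.
    func (e *Element) Find() *Element {
        for e.parent != e {
            e.parent = e.parent.parent // path halving
            e = e.parent
        }
        return e
    }

    // Union merges the sets containing a and b, attaching the shorter tree to the taller.
    func Union(a, b *Element) {
        ra, rb := a.Find(), b.Find()
        if ra == rb {
            return // already in the same set
        }
        if ra.rank < rb.rank {
            ra, rb = rb, ra
        }
        rb.parent = ra
        if ra.rank == rb.rank {
            ra.rank++
        }
    }

    func main() {
        x, y, z := NewElement(), NewElement(), NewElement()
        Union(x, y)
        fmt.Println(x.Find() == y.Find()) // true: same set
        fmt.Println(x.Find() == z.Find()) // false: z is still a singleton
    }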
Package pdf implements reading of PDF files. PDF is Adobe's Portable Document Format, ubiquitous on the internet. A PDF document is a complex data format built on a fairly simple structure. This package exposes the simple structure along with some wrappers to extract basic information. If more complex information is needed, it is possible to extract that information by interpreting the structure exposed by this package. Specifically, a PDF is a data structure built from Values, each of which has one of the following Kinds: The accessors on Value—Int64, Float64, Bool, Name, and so on—return a view of the data as the given type. When there is no appropriate view, the accessor returns a zero result. For example, the Name accessor returns the empty string if called on a Value v for which v.Kind() != Name. Returning zero values this way, especially from the Dict and Array accessors, which themselves return Values, makes it possible to traverse a PDF quickly without writing any error checking. On the other hand, it means that mistakes can go unreported. The basic structure of the PDF file is exposed as the graph of Values. Most richer data structures in a PDF file are dictionaries with specific interpretations of the name-value pairs. The Font and Page wrappers make the interpretation of a specific Value as the corresponding type easier. They are only helpers, though: they are implemented only in terms of the Value API and could be moved outside the package. Equally important, traversal of other PDF data structures can be implemented in other packages as needed.
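As a small illustration of traversing the graph of Values, the following sketch reads a file's trailer and first page. It assumes the rsc.io/pdf import path and the Open, Trailer, NumPage, and Page entry points, which are not spelled out in this overview.

    package main

    import (
        "fmt"
        "log"

        "rsc.io/pdf" // assumed import path for this package
    )

    func main() {
        r, err := pdf.Open("example.pdf")
        if err != nil {
            log.Fatal(err)
        }

        // The trailer is a Dict; Key on a non-Dict or a missing key simply
        // returns the zero Value, so the chain below needs no error checks.
        root := r.Trailer().Key("Root")
        fmt.Println("catalog type:", root.Key("Type").Name()) // "" if absent

        // Page wraps a Value with a page-specific interpretation.
        fmt.Println("pages:", r.NumPage())
        if r.NumPage() > 0 {
            p := r.Page(1) // pages are 1-indexed in this API
            fmt.Println("media box kind:", p.V.Key("MediaBox").Kind())
        }
    }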
Package amt provides a reference implementation of the IPLD AMT (Array Mapped Trie) used in the Filecoin blockchain. The AMT algorithm is similar to a HAMT https://en.wikipedia.org/wiki/Hash_array_mapped_trie but instead presents an array-like interface where the indexes themselves form the mapping to nodes in the trie structure. An AMT is suitable for storing sparse array data, as a minimum number of intermediate nodes are required to address a small number of entries even when their indexes span a large distance. An AMT is also a suitable means of storing non-sparse array data as required, with a small amount of storage and algorithmic overhead required to handle a mapping that assumes that some elements within any range of data may not be present. The AMT algorithm produces a tree-like graph, with a single root node addressing a collection of child nodes which connect downward toward leaf nodes which store the actual entries. No terminal entries are stored in intermediate elements of the tree, unlike in a HAMT. We can divide up the AMT tree structure into "levels" or "heights", where a height of zero contains the terminal elements, and the maximum height of the tree contains the single root node. Intermediate nodes are used to span across the range of indexes. Any AMT instance uses a fixed "width" that is consistent across the tree's nodes. An AMT's "bitWidth" dictates the width, or maximum branching factor (arity), of the AMT's nodes by determining how many bits of the original index are used to determine the index at any given level. A bitWidth of 3 (the default for this implementation) can generate indexes in the range of 0 to (2^3)-1=7, i.e. a "width" of 8. In practice, this means that an AMT with a bitWidth of 3 has a branching factor of _between 1 and 8_ for any node in the structure. Considering the minimal case: a minimal AMT contains a single node which serves as both the root and the leaf node and can hold zero or more elements (an empty AMT is possible, although a special case, and consists of a zero-length root). This minimal AMT can store array indexes from 0 to width-1 (0 to 7 for the default bitWidth of 3) without requiring the addition of additional nodes. Attempts to add indexes beyond width-1 will result in additional nodes being added to form a tree structure that can address the new elements. The minimal AMT node is said to have a height of 0. Every node in an AMT has a height that indicates its distance from the leaf nodes. All leaf nodes have a height of 0. The height of the root node dictates the overall height of the entire AMT. In the case of the minimal AMT, this is 0. Elements are stored in a compacted form within nodes; they are "position-mapped" by a bitmap field that is stored with the node. The bitmap is a simple byte array, where each bit represents an element of the data that can be stored in the node. With a width of 8, the bitmap is a single byte and up to 8 elements can be stored in the node. The data array of a node _only stores elements that are present in that node_, so the array is commonly shorter than the maximum width. An empty AMT is a special case where the single node can have zero elements, therefore a zero-length data array and a bitmap of `0x00`. In all other cases, the data array must have between 1 and width elements. Determining the position of an index within the data array requires counting the number of set bits within the bitmap up to the element we are concerned with.
If the bitmap has bits 2, 4 and 6 set, we can see that only 3 of the bits are set, so our data array should hold 3 elements. To address index 4, we know that the first element will be index 2 and therefore the second will hold index 4. This format allows us to store only the elements that are set in the node. Overflowing beyond the single-node AMT by adding an index beyond width-1 requires an increase in height in order to address all elements. If an element in the range of width to (width*2)-1 is added, a single additional height is required, which will result in a new root node which is used to address two consecutive leaf nodes. Because we have an arity of up to width at any node, the addition of indexes in the range of 0 to (width^2)-1 will still require only the addition of a single additional height above the leaf nodes, i.e. height 1. From the width of an AMT we can derive the maximum range of indexes that can be contained by an AMT at any given `height` with the formula width^(height+1)-1, e.g. an AMT with a width of 8 and a height of 2 can address indexes 0 to 8^(2+1)-1=511. Incrementing the height multiplies the range of indexes that can be contained within that structure by the width. Nodes above height 0 (non-leaf nodes) do not contain terminal elements; instead, their data array contains links to child nodes. The index compaction using the bitmap is the same as for leaf nodes, so each non-leaf node only stores as many links as it has child nodes. Because additional height is required to address larger indexes, even a single-element AMT will require more than one node where the index is greater than the width of the AMT. For a width of 8, indexes 8 to 63 require a height of 1, indexes 64 to 511 require a height of 2, indexes 512 to 4095 require a height of 3, etc. Retrieving elements from the AMT requires extracting only the portion of the requested index that is required at each height to determine the position in the data array to navigate into. When traversing through the tree, we only need to select from indexes 0 to width-1. To do this, we take log2(width) bits from the index to form a number that is between 0 and width-1, e.g. for a width of 8, we only need 3 bits to form a number between 0 and 7, so we only consume 3 bits per level of the AMT as we traverse. A simple method to calculate this at any height in the AMT (assuming a bitWidth of 3, i.e. a width of 8) is: 1. Calculate the maximum number of nodes (not entries) that may be present in a sub-tree rooted at the current height; width^height provides this number, e.g. at height 0, only 1 node can be present, but at height 3, we may have a tree of up to 512 nodes (storing up to 8^(3+1)=4096 entries). 2. Divide the index by this number to find the index for this height, e.g. an index of 3 at height 0 will be 3/1=3, and an index of 20 at height 1 will be 20/8=2. 3. If we are at height 0, the element we want is at the data index, position-mapped via the bitmap. 4. If we are above height 0, we need to navigate to the child element at the index we calculated, position-mapped via the bitmap. When traversing to the child, we discard the upper portion of the index that we no longer need. This can be achieved by a mod operation against the number-of-nodes value, e.g. an index of 20 at height 1 requires navigation to the element at position 2; when moving to that element (which is at height 0), we truncate the index with 20%8=4, and at height 0 this index will be the index in our data array (position-mapped via the bitmap). A sketch of this arithmetic in code follows.
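The per-height index extraction and bitmap position-mapping described above can be sketched as follows, assuming a bitWidth of 3 (width 8) and a single-byte bitmap; the helper names are illustrative, not this implementation's API.

    package main

    import (
        "fmt"
        "math/bits"
    )

    const bitWidth = 3          // default bitWidth for this implementation
    const width = 1 << bitWidth // 8

    // slotAtHeight returns the 0..width-1 slot for index at the given height,
    // and the remainder of the index to carry into the child node.
    func slotAtHeight(index uint64, height uint) (slot, rest uint64) {
        nodes := uint64(1)
        for i := uint(0); i < height; i++ {
            nodes *= width // width^height: max nodes in a sub-tree at this height
        }
        return index / nodes, index % nodes
    }

    // dataPos position-maps a slot into the compacted data array by counting
    // the set bits below it in the node's bitmap. ok is false if the slot is empty.
    func dataPos(bitmap uint8, slot uint64) (pos int, ok bool) {
        if bitmap&(1<<slot) == 0 {
            return 0, false
        }
        mask := uint8(1<<slot) - 1
        return bits.OnesCount8(bitmap & mask), true
    }

    func main() {
        // Index 20 in a height-1 AMT: slot 2 at the root (20/8), then index 4 in the leaf (20%8).
        slot, rest := slotAtHeight(20, 1)
        fmt.Println(slot, rest) // 2 4

        // A bitmap with bits 2, 4 and 6 set holds 3 elements; index 4 maps to data position 1.
        fmt.Println(dataPos(0b01010100, 4)) // 1 true
    }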
In this way, each sub-tree root consumes a small slice, log2(width) bits long, of the original index. Adding new elements to an AMT may require up to 3 steps: 1. Increasing the height to accommodate a new index if the current height is not sufficient to address the new index. Increasing the height requires turning the current root node into an intermediate node and adding a new root which links to the old (repeated until the required height is reached). 2. Adding any missing intermediate and leaf nodes that are required to address the new index. Depending on the density of existing indexes, this may require the addition of up to height-1 new nodes to connect the root to the required leaf. Sparse indexes will mean large gaps in the tree that will need filling to address new, equally sparse, indexes. 3. Setting the element at the leaf node in the appropriate position in the data array and setting the appropriate bit in the bitmap. Removing elements requires a reversal of this process. Any empty node (other than the case of a completely empty AMT) must be removed, and its parent should have its child link removed. This removal may recurse up the tree to remove many unnecessary intermediate nodes. The root node may also be removed if the current height is no longer necessary to contain the range of indexes still in the AMT. This can be easily determined: if _only_ the first bit of the root's bitmap is set, only the left-most child is present, and that child becomes the new root node (repeated until the new root has more than the first bit set or a height of 0, the single-node case). See https://github.com/ipld/specs/blob/master/data-structures/hashmap.md for a description of a HAMT algorithm, and https://github.com/ipld/specs/blob/master/data-structures/vector.md for a description of an algorithm similar to an AMT that doesn't support internal node compression and therefore doesn't support sparse arrays. Unlike a HAMT, the AMT algorithm doesn't benefit from the randomness introduced by a hash algorithm. Therefore, when an AMT is used in cases where user input can influence indexes, larger-than-necessary tree structures may present risks, in addition to the challenge imposed by having a strict upper limit on the indexes addressable by the AMT. A width of 8, using 64-bit integers for indexing, allows for a tree height of up to 64/log2(8)=21 (i.e. a width of 8 has a bitWidth of 3, dividing the 64 bits of the uint into 21 separate per-height indexes). Careful placement of indexes could create extremely sub-optimal forms with large heights connecting leaf nodes that are sparsely packed. The overhead of the large number of intermediate nodes required to connect leaf nodes in AMTs that contain high indexes can be abused to create perverse forms that contain large numbers of nodes to store a minimal number of elements. Minimal nodes will be created where indexes are all in the lower range. The optimal case for an AMT is contiguous index values starting from zero. As larger indexes are introduced that span beyond the current maximum, more nodes are required to address the new entries _and_ the existing lower-index entries. Consider a case where a width=8 AMT is only addressing indexes less than 8 and therefore requires only a single node. The introduction of a single index within 8 of the maximum 64-bit unsigned integer value will require the new root to have a height of 21 and enough connecting nodes between it and both the existing elements and the new upper index.
This pattern of behavior may be acceptable if there is a significant density of entries under a particular maximum index. There is a direct relationship between the sparseness of index values and the number of nodes required to address the entries. This should be the key consideration when determining whether an AMT is a suitable data structure for a given application.
Package wardleyToGo provides primitives to build an in-memory map (a plan). In the context of the package, "a map" represents a landscape. The landscape is made of "Components". Each component knows its own location on a map. Components can collaborate, meaning that they may be linked together; therefore a map is also a graph. The entry point of this API is the 'Map' structure.
Package cadence and its subdirectories contain the Cadence client-side framework. The Cadence service is a task orchestrator for your application’s tasks. Applications using Cadence can execute a logical flow of tasks, especially long-running business logic, asynchronously or synchronously. They can also scale at runtime on distributed systems. A quick example illustrates its use case. Consider Uber Eats, where Cadence manages the entire business flow from placing an order, accepting it, handling shopping cart processes (adding, updating, and calculating cart items), entering the order in a pipeline (for preparing food and coordinating delivery), to scheduling delivery as well as handling payments. Cadence consists of a programming framework (or client library) and a managed service (or backend). The framework enables developers to author and coordinate tasks in Go code. The root cadence package contains common data structures, and the subpackages provide the specific APIs. The Cadence hosted service brokers and persists events generated during workflow execution. Worker nodes owned and operated by customers execute the coordination and task logic. To facilitate the implementation of worker nodes, Cadence provides a client-side library for the Go language. In Cadence, you can code the logical flow of events separately as a workflow and code business logic as activities. The workflow identifies the activities and sequences them, while an activity executes the logic. Dynamic workflow execution graphs - Determine the workflow execution graphs at runtime based on the data you are processing. Cadence does not pre-compute the execution graphs at compile time or at workflow start time. Therefore, you have the ability to write workflows that can dynamically adjust to the amount of data they are processing. If you need to trigger 10 instances of an activity to efficiently process all the data in one run, but only 3 for a subsequent run, you can do that. Child Workflows - Orchestrate the execution of a workflow from within another workflow. Cadence will return the results of the child workflow execution to the parent workflow upon completion of the child workflow. No polling is required in the parent workflow to monitor the status of the child workflow, making the process efficient and fault tolerant. Durable Timers - Implement delayed execution of tasks in your workflows that is robust to worker failures. Cadence provides two easy-to-use APIs, **workflow.Sleep** and **workflow.Timer**, for implementing time-based events in your workflows. Cadence ensures that the timer settings are persisted and the events are generated even if workers executing the workflow crash. Signals - Modify/influence the execution path of a running workflow by pushing additional data directly to the workflow using a signal. Via the Signal facility, Cadence provides a mechanism to consume external events directly in workflow code. Task routing - Efficiently process large amounts of data using a Cadence workflow, by caching the data locally on a worker and executing all activities meant to process that data on that same worker. Cadence enables you to choose the worker you want to execute a certain activity by scheduling that activity execution in the worker's specific task list. Unique workflow ID enforcement - Use business entity IDs for your workflows and let Cadence ensure that only one workflow is running for a particular entity at a time.
Cadence implements an atomic "uniqueness check" and ensures that no race conditions are possible that would result in multiple workflow executions for the same workflow ID. Therefore, you can implement your code to attempt to start a workflow without checking whether the ID is already in use, even in cases where only one active execution per workflow ID is desired. Perpetual/ContinueAsNew workflows - Run periodic tasks as a single perpetually running workflow. With the "ContinueAsNew" facility, Cadence allows you to leverage the "unique workflow ID enforcement" feature for periodic workflows. Cadence will complete the current execution and start the new execution atomically, ensuring you get to keep your workflow ID. By starting a new execution, Cadence also ensures that workflow execution history does not grow indefinitely for perpetual workflows. At-most-once activity execution - Execute non-idempotent activities as part of your workflows. Cadence will not automatically retry activities on failure. For every activity execution, Cadence will return a success result, a failure result, or a timeout to the workflow code and let the workflow code determine how each of those result types should be handled. Async Activity Completion - Incorporate human input or third-party service asynchronous callbacks into your workflows. Cadence allows a workflow to pause execution on an activity and wait for an external actor to resume it with a callback. During this pause the activity does not have any actively executing code, such as a polling loop, and is merely an entry in the Cadence datastore. Therefore, the workflow is unaffected by any worker failures happening over the duration of the pause. Activity Heartbeating - Detect unexpected failures/crashes and track progress in long-running activities early. By configuring your activity to report progress periodically to the Cadence server, you can detect a crash that occurs 10 minutes into an hour-long activity execution much sooner, instead of waiting for the 60-minute execution timeout. The recorded progress before the crash gives you sufficient information to determine whether to restart the activity from the beginning or resume it from the point of failure. Timeouts for activities and workflow executions - Protect against stuck and unresponsive activities and workflows with appropriate timeout values. Cadence requires that timeout values are provided for every activity or workflow invocation. There is no upper bound on the timeout values, so you can set timeouts that span days, weeks, or even months. Visibility - Get a list of all your active and/or completed workflows. Explore the execution history of a particular workflow execution. Cadence provides a set of visibility APIs that allow you, the workflow owner, to monitor past and current workflow executions. Debuggability - Replay any workflow execution history locally under a debugger. The Cadence client library provides an API that allows you to capture a stack trace from any failed workflow execution history. A sketch of a simple workflow and activity follows.
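To make the workflow/activity split and durable timers concrete, here is a minimal sketch using the client library; OrderWorkflow and ChargePayment are hypothetical names, and in a real deployment a worker must register and run them.

    package order

    import (
        "context"
        "time"

        "go.uber.org/cadence/activity"
        "go.uber.org/cadence/workflow"
    )

    // OrderWorkflow is a hypothetical workflow function: it codes the logical
    // flow of events, sequencing an activity and then a durable timer that
    // survives worker crashes.
    func OrderWorkflow(ctx workflow.Context, orderID string) error {
        ao := workflow.ActivityOptions{
            ScheduleToStartTimeout: time.Minute,
            StartToCloseTimeout:    time.Minute, // timeouts are required for every activity
        }
        ctx = workflow.WithActivityOptions(ctx, ao)

        var receipt string
        if err := workflow.ExecuteActivity(ctx, ChargePayment, orderID).Get(ctx, &receipt); err != nil {
            return err
        }
        // Durable timer: persisted by the Cadence service, fires even if workers crash.
        return workflow.Sleep(ctx, 24*time.Hour)
    }

    // ChargePayment is a hypothetical activity holding the business logic.
    func ChargePayment(ctx context.Context, orderID string) (string, error) {
        activity.GetLogger(ctx).Info("charging " + orderID)
        return "receipt-" + orderID, nil
    }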
Package pdf implements reading of PDF files. PDF is Adobe's Portable Document Format, ubiquitous on the internet. A PDF document is a complex data format built on a fairly simple structure. This package exposes the simple structure along with some wrappers to extract basic information. If more complex information is needed, it is possible to extract that information by interpreting the structure exposed by this package. Specifically, a PDF is a data structure built from Values, each of which has one of the following Kinds: The accessors on Value—Int64, Float64, Bool, Name, and so on—return a view of the data as the given type. When there is no appropriate view, the accessor returns a zero result. For example, the Name accessor returns the empty string if called on a Value v for which v.Kind() != Name. Returning zero values this way, especially from the Dict and Array accessors, which themselves return Values, makes it possible to traverse a PDF quickly without writing any error checking. On the other hand, it means that mistakes can go unreported. The basic structure of the PDF file is exposed as the graph of Values. Most richer data structures in a PDF file are dictionaries with specific interpretations of the name-value pairs. The Font and Page wrappers make the interpretation of a specific Value as the corresponding type easier. They are only helpers, though: they are implemented only in terms of the Value API and could be moved outside the package. Equally important, traversal of other PDF data structures can be implemented in other packages as needed.
Package textrank is an implementation of the TextRank algorithm in Go with extendable features (automatic summarization, phrase extraction). It supports multithreading via goroutines. The package is under The MIT Licence. If there were a program that could continuously rank the words, phrases and sentences of book-sized texts on multiple threads, that was open to modification through objects, that was written in a simple, secure, static language, and that was very well documented... Now, here it is. - Find the most important phrases. - Find the most important words. - Find the most important N sentences. - Importance by phrase weights. - Importance by word occurrence. - Find the first N sentences, starting from the Xth sentence. - Find sentences by phrase chains ordered by position in the text. - Access to the whole ranked data. - Support for more languages. - The weighting algorithm can be modified by an interface implementation. - The parser can be modified by an interface implementation. - Multi-thread support. Find the most important phrases: this is the most basic and simplest usage of textrank (see the sketch below). All possible pre-defined finder queries: after ranking, the graph contains a lot of valuable data, and the textrank package contains functions to retrieve that data from the graph. The GetRank function allows access to the graph, and every piece of data can be retrieved from this structure. Adding text continuously: it is possible to add more text after other texts have already been added. The Ranking function can merge these multiple texts and recalculate the weights and all related data. Using a different algorithm to rank text: two algorithms are implemented, and it is possible to write a custom algorithm against the Algorithm interface and use it instead of the defaults. Using multiple graphs: graph IDs exist because it is possible to run multiple independent text-ranking processes. Using different non-English languages: English is used by default, but it is possible to add any language. To use other languages, a stop word list is required, such as those found at https://github.com/stopwords-iso Asynchronous usage with goroutines: the package is thread safe. Independent graphs can receive texts at the same time and can each be extended with more text, also at the same time.
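The most basic usage, finding the most important phrases, might look like the following sketch; the import path, constructor names, and the FindPhrases finder follow the project README and are assumptions here.

    package main

    import (
        "fmt"

        "github.com/DavidBelicza/TextRank" // assumed import path; package name textrank
    )

    func main() {
        tr := textrank.NewTextRank()

        // Default parser rule, English stop words and weighting algorithm.
        rule := textrank.NewDefaultRule()
        language := textrank.NewDefaultLanguage()
        algorithm := textrank.NewDefaultAlgorithm()

        // Load the text and rank it; Populate can be called again later to add
        // more text, and Ranking will merge it and recalculate the weights.
        tr.Populate("Your book sized text goes here. It has sentences.", language, rule)
        tr.Ranking(algorithm)

        // Retrieve the most important phrases from the ranked graph.
        for _, phrase := range textrank.FindPhrases(tr) {
            fmt.Println(phrase)
        }
    }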
Package fastjsonapi provides a fasthttp-based serializer and deserializer for jsonapi.org spec payloads. You can keep your model structs as is and use struct field tags to indicate to jsonapi how you want your response built or your request deserialized. What about my relationships? jsonapi supports relationships out of the box and will even side load them in your response into an "included" array that contains the associated objects. jsonapi uses StructField tags to annotate the struct fields that you already have and use in your app, and then reads and writes jsonapi.org output based on the instructions you give the library in your jsonapi tags. Example structs use a Blog > Post > Comment structure. fastjsonapi Tag Reference: Value "primary,<type field output>" indicates that this is the primary key field for this struct type. Tag value arguments are comma separated. The first argument must be "primary", and the second must be the name that should appear in the "type" field for all data objects that represent this type of model. Value "attr,<key name in attributes hash>[,<extra arguments>]" marks fields whose values should end up in the "attributes" hash for a record. The first argument must be "attr", and the second should be the name of the key to display in the "attributes" hash for that record. The following extra arguments are also supported: "omitempty" excludes the field's value from the "attributes" hash; "iso8601" uses the ISO 8601 timestamp format when serialising or deserialising the time.Time value. Value "relation,<key name in relationships hash>" marks struct fields that represent a one-to-one or one-to-many relationship to other structs. jsonapi will traverse the graph of relationships and marshal or unmarshal records. The first argument must be "relation", and the second should be the name of the relationship, used as the key in the "relationships" hash for the record. Use the methods below to Marshal and Unmarshal jsonapi.org json payloads. Visit the readme at https://github.com/google/jsonapi