Package ebpf is a toolkit for working with eBPF programs. eBPF programs are small snippets of code which are executed directly in a VM in the Linux kernel, which makes them very fast and flexible. Many Linux subsystems now accept eBPF programs. This makes it possible to implement highly application-specific logic inside the kernel, without having to modify the actual kernel itself. Since eBPF is a relatively young concept, documentation and user space support are still lacking. Most of the available tools are written in C and reside in the kernel's source tree. The more mature external projects like libbcc focus on using eBPF for instrumentation and debugging. This leads to certain trade-offs which are not acceptable when writing production services. This package is instead designed for long-running processes which want to use eBPF to implement part of their application logic. It has no run-time dependencies outside of the library and the Linux kernel itself. eBPF code should be compiled ahead of time using clang, and shipped with your application like any other resource. The two main parts are an ELF loader, which reads object files emitted by clang, and facilities to modify and load eBPF programs into the kernel. This package doesn't include the code required to attach eBPF to Linux subsystems, since this varies per subsystem. See the examples for possible solutions. ExampleExtractDistance shows how to attach an eBPF socket filter to extract the network distance of an IP host. ExampleSocketELF demonstrates how to load an eBPF program from an ELF, pin it into the filesystem and attach it to a raw socket.
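A minimal sketch of that workflow, assuming an object file compiled ahead of time with clang; the LoadCollectionSpec and NewCollection names follow this package's loader API but should be checked against the current release:

    // Parse the ELF emitted by clang into a spec, then load its maps
    // and programs into the kernel.
    spec, err := ebpf.LoadCollectionSpec("bpf/filter.o")
    if err != nil {
        log.Fatal(err)
    }
    coll, err := ebpf.NewCollection(spec)
    if err != nil {
        log.Fatal(err)
    }
    defer coll.Close()

    // Attaching a program (e.g. coll.Programs["filter"]) to a socket,
    // XDP hook, etc. is subsystem-specific, as noted above.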
Package headerfile allows you to read a file which begins with a small header of "key:value" lines. Files of this kind are common when processing simple blogs and similar content.
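For illustration, such a header can be read with the standard library alone; this sketch shows the file format rather than this package's own API:

    // Read "key: value" lines until the first blank line; the body follows.
    f, err := os.Open("post.txt")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    headers := map[string]string{}
    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        line := scanner.Text()
        if line == "" {
            break // end of header
        }
        if k, v, ok := strings.Cut(line, ":"); ok {
            headers[strings.TrimSpace(k)] = strings.TrimSpace(v)
        }
    }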
Package zap provides fast, structured, leveled logging. For applications that log in the hot path, reflection-based serialization and string formatting are prohibitively expensive - they're CPU-intensive and make many small allocations. Put differently, using json.Marshal and fmt.Fprintf to log tons of interface{} makes your application slow. Zap takes a different approach. It includes a reflection-free, zero-allocation JSON encoder, and the base Logger strives to avoid serialization overhead and allocations wherever possible. By building the high-level SugaredLogger on that foundation, zap lets users choose when they need to count every allocation and when they'd prefer a more familiar, loosely typed API. In contexts where performance is nice, but not critical, use the SugaredLogger. It's 4-10x faster than other structured logging packages and supports both structured and printf-style logging. Like log15 and go-kit, the SugaredLogger's structured logging APIs are loosely typed and accept a variadic number of key-value pairs. (For more advanced use cases, they also accept strongly typed fields - see the SugaredLogger.With documentation for details.) By default, loggers are unbuffered. However, since zap's low-level APIs allow buffering, calling Sync before letting your process exit is a good habit. In the rare contexts where every microsecond and every allocation matter, use the Logger. It's even faster than the SugaredLogger and allocates far less, but it only supports strongly-typed, structured logging. Choosing between the Logger and SugaredLogger doesn't need to be an application-wide decision: converting between the two is simple and inexpensive. The simplest way to build a Logger is to use zap's opinionated presets: NewExample, NewProduction, and NewDevelopment. These presets build a logger with a single function call: Presets are fine for small projects, but larger projects and organizations naturally require a bit more customization. For most users, zap's Config struct strikes the right balance between flexibility and convenience. See the package-level BasicConfiguration example for sample code. More unusual configurations (splitting output between files, sending logs to a message queue, etc.) are possible, but require direct use of go.uber.org/zap/zapcore. See the package-level AdvancedConfiguration example for sample code. The zap package itself is a relatively thin wrapper around the interfaces in go.uber.org/zap/zapcore. Extending zap to support a new encoding (e.g., BSON), a new log sink (e.g., Kafka), or something more exotic (perhaps an exception aggregation service, like Sentry or Rollbar) typically requires implementing the zapcore.Encoder, zapcore.WriteSyncer, or zapcore.Core interfaces. See the zapcore documentation for details. Similarly, package authors can use the high-performance Encoder and Core implementations in the zapcore package to build their own loggers. An FAQ covering everything from installation errors to design decisions is available at https://github.com/uber-go/zap/blob/master/FAQ.md.
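A short sketch of the two APIs side by side, using the presets and conversions described above:

    logger, _ := zap.NewProduction()
    defer logger.Sync() // flush any buffered entries on exit

    // SugaredLogger: loosely typed key-value pairs.
    sugar := logger.Sugar()
    sugar.Infow("fetched url", "url", "https://example.com", "attempt", 3)

    // Logger: strongly typed fields, minimal allocations.
    logger.Info("fetched url",
        zap.String("url", "https://example.com"),
        zap.Int("attempt", 3),
    )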
Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/ Check out the Getting Started Guide and API Reference Docs, which detail the SDK's components and each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference for the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/ The SDK is composed of two main components: the SDK core and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK. aws - SDK core; provides common shared types such as Config, Logger, and utilities to make working with API parameters easier. awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare the returned error to specific error codes. See the package's documentation for additional values that can be extracted, such as RequestId. credentials - Provides the types and built-in credential providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are additional credential providers such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 instance roles. endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information, such as which services are in a region and which regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2". session - Provides initial default configuration, and loads configuration from external sources such as the environment and shared credentials file. request - Provides API request sending and retry logic for the SDK. This package also includes utilities for defining your own request retryer and configuring how the SDK processes requests. service - Clients for AWS services. All services supported by the SDK are available under this folder. The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports.
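A minimal sketch of these two components working together - a Session from the session package, then an S3 client from the service folder (the region and names are illustrative):

    // Create a Session with a Region from the endpoints constants.
    sess := session.Must(session.NewSession(&aws.Config{
        Region: aws.String(endpoints.UsWest2RegionID),
    }))

    // Create an Amazon S3 service client from the shared Session.
    svc := s3.New(sess)
    _ = svc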
All service clients follow a common pattern of creation and usage. When creating a client for an AWS service you'll first need a Session value constructed. The Session provides configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the nifcloud.Config type to override configuration provided by the Session and create service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently. In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/#Config When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of supplying these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain and how to configure it. The common items in the credential chain are the following: Environment Credentials - Set of environment variables that are useful when sub-processes are created for specific roles. Shared Credentials file (~/.nifcloud/credentials) - This file stores your credentials based on a profile name and is useful for local development. EC2 Instance Role Credentials - Use an EC2 Instance Role to assign credentials to applications running on an EC2 instance. This removes the need to manage credential files in production. Credentials can also be configured in code by setting the Config's Credentials value to a custom provider, or by using one of the providers included with the SDK to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. This example creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests (a sketch follows below). See the credentials package documentation for more information on credential providers included with the SDK, and how to customize the SDK's usage of credentials. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/credentials The SDK has support for the shared configuration file (~/.nifcloud/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the Option's SharedConfigState parameter. In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created.
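A sketch of the role-assumption example just mentioned, written with the upstream aws-sdk-go package names (session, stscreds, s3), which this fork mirrors under its own module path:

    sess := session.Must(session.NewSession())

    // Credentials that assume the IAM role "myRoleARN" via STS.
    creds := stscreds.NewCredentials(sess, "myRoleARN")

    // An S3 client that signs its requests with the assumed role.
    svc := s3.New(sess, &aws.Config{Credentials: creds})
    _ = svc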
The last value specified in code wins if the region is specified multiple ways. To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code to select the region. The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. These values are helpful because they reduce the need to type the region string manually. To set the region on a Session, set the aws package's Config struct Region field to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients and have all of the clients make API requests to the same region. See the endpoints package for the AWS Regions and Endpoints metadata. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/endpoints/ In addition to setting the region when creating a Session, you can also set the region on a per-service-client basis. This overrides the region of a Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information and additional options, such as setting the Endpoint and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/#Config Once the client is created you can make an API request to the service. Each API method takes an input parameter and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example for the different ways of making API requests. ListObjects - Base API operation that will make the API request to the service. ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly. ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response. ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument, for modifying how the request will be made or extracting information from the raw HTTP response. ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext, this method also takes the request package's Option functional options as the variadic argument. In addition to the API operations the SDK also includes several higher-level methods that abstract checking for, and waiting for, an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters. WaitUntilBucketExists - Method to make an API request to query an AWS service for a resource's state. Will return successfully when that state is reached. WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern.
In addition, these methods take the request package's WaiterOptions to configure the waiter and how the underlying request will be made by the SDK. The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned. Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round-trip API page requests. Pagination methods take a callback function that will be called for each page of the API's response. Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the desired state, an error occurs, or the waiter times out. If a resource times out the error code returned will be request.WaiterResourceNotReadyErrorCode. This example shows a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that will cancel the request if it takes too long. This example highlights how to use sessions, create a service client, make a request, handle the error, and process the response.
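Two sketches of the helpers described above, assuming an S3 client svc like the one created earlier. First, a paginated listing with a "Pages" method; the callback returns true to continue to the next page:

    err := svc.ListObjectsPages(&s3.ListObjectsInput{
        Bucket: aws.String("mybucket"),
    }, func(page *s3.ListObjectsOutput, lastPage bool) bool {
        for _, obj := range page.Contents {
            fmt.Println(*obj.Key)
        }
        return true // keep paging
    })
    if err != nil {
        // handle the pagination failure
    }

And a context-aware waiter; if the bucket never reaches the desired state, the returned error carries request.WaiterResourceNotReadyErrorCode:

    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    err = svc.WaitUntilBucketExistsWithContext(ctx, &s3.HeadBucketInput{
        Bucket: aws.String("mybucket"),
    })
    if aerr, ok := err.(awserr.Error); ok &&
        aerr.Code() == request.WaiterResourceNotReadyErrorCode {
        fmt.Println("bucket never became available")
    }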
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples Doing things this way is actually the way the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, and nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary; most of the time it isn't), then type-cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom validation functions can be added; see the sketch following this overview. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-struct validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad validator definitions are not handled by the library. Example: Baked-in Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter, i.e. excludesall=|, you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built-in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered, and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as the structonly tag except that any struct level validations will not run. Allows conditional validation; for example, if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow.
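A sketch of that flow with the v9 API - validate a struct, inspect the returned ValidationErrors, and register a custom validation function (the "is-awesome" tag is a made-up example):

    type User struct {
        Email string `validate:"required,email"`
        Name  string `validate:"required,is-awesome"`
    }

    validate := validator.New()

    // A custom validation function receives a FieldLevel and reports validity.
    _ = validate.RegisterValidation("is-awesome", func(fl validator.FieldLevel) bool {
        return fl.Field().String() == "awesome"
    })

    if err := validate.Struct(User{Email: "not-an-email"}); err != nil {
        // Aside from InvalidValidationError for bad input, the error is
        // always a ValidationErrors value.
        for _, fe := range err.(validator.ValidationErrors) {
            fmt.Println(fe.Namespace(), fe.Tag())
        }
    }

The dive tag's details continue below.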
Multidimensional nesting is also supported; each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys'; please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers, ensures value is not zero. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings, ensures value is not "". For slices, maps, pointers, interfaces, channels and functions, ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is greater than time.Now.UTC().
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time, ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value, either within a struct or a passed-in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value, either within a struct or a passed-in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value, either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
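A sketch of the same-struct cross-field tags described above, using a date range and a password confirmation:

    type DateRange struct {
        Start time.Time `validate:"required"`
        // gtfield: End must be after the Start field on the same struct.
        End time.Time `validate:"required,gtfield=Start"`
    }

    type Registration struct {
        Password string `validate:"required,min=8"`
        // eqfield: must equal the Password field on the same struct.
        ConfirmPassword string `validate:"required,eqfield=Password"`
    }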
This validates that a string value contains ASCII alpha characters only. This validates that a string value contains ASCII alphanumeric characters only. This validates that a string value contains unicode alpha characters only. This validates that a string value contains unicode alphanumeric characters only. This validates that a string value contains a basic numeric value; basic excludes exponents etc. For integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color, including the hashtag (#). This validates that a string value contains a valid rgb color. This validates that a string value contains a valid rgba color. This validates that a string value contains a valid hsl color. This validates that a string value contains a valid hsla color. This validates that a string value contains a valid email. This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url. This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp://. This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts. This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC 4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the formats P2PKH or P2SH, and checksum validation is performed. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28 This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value. This validates that a string value ends with the supplied string value. This validates that a string value contains a valid isbn10 or isbn13 value.
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64. This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag, including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format. This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function.
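A sketch combining several of the string validators above with the dive and keys/endkeys tags from earlier (tag names per the v9 docs; verify against your version):

    type Profile struct {
        Email    string `validate:"required,email"`
        Homepage string `validate:"omitempty,url"`
        ID       string `validate:"required,uuid4"`
        Host     string `validate:"required,hostname"`
        // Keys must be one of the listed networks; values must be URLs.
        Social map[string]string `validate:"dive,keys,oneof=twitter github,endkeys,url"`
    }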
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built-in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked-in validators. A non-standard validator must be registered manually, as you would with your own custom validation functions. Example of registration and use: Here is a list of the current non-standard validators: This package panics when bad input is provided; this is by design: bad code like that should not make it to production.
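A sketch of registering an alias tag; "uszip" is a made-up alias name, while numeric and len are built-in tags:

    validate := validator.New()
    validate.RegisterAlias("uszip", "numeric,len=5")

    type Address struct {
        Zip string `validate:"required,uszip"`
    }

    err := validate.Struct(Address{Zip: "abcde"})
    // On failure, FieldError.Tag() reports the alias ("uszip") and
    // ActualTag() the tag within the alias that failed ("numeric").
    _ = err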
Package osmpbf decodes OpenStreetMap (OSM) PBF files. Use this package by creating a NewDecoder and passing it a PBF file. Use Start to start the decoding process. Use Decode to retrieve Node, Way and Relation structs.
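A sketch of that loop; Decode returns io.EOF once the file is exhausted, and each decoded value is one of the three struct types:

    f, err := os.Open("region.osm.pbf")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    d := osmpbf.NewDecoder(f)
    if err := d.Start(runtime.GOMAXPROCS(-1)); err != nil {
        log.Fatal(err)
    }
    for {
        v, err := d.Decode()
        if err == io.EOF {
            break
        } else if err != nil {
            log.Fatal(err)
        }
        switch v := v.(type) {
        case *osmpbf.Node:
            _ = v // process a node
        case *osmpbf.Way:
            _ = v // process a way
        case *osmpbf.Relation:
            _ = v // process a relation
        }
    }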
Package daemon aims to make it easier to write simple network daemons for process-supervised deployment, such as under "systemd". https://www.freedesktop.org/software/systemd/man/daemon.html Specifically it supports the following: Package gone/daemon provides a master server manager for one or more services so they can be taken down and restarted while keeping any network connections or other file descriptors open. Several different reload/restart schemes are possible - see the examples. At the lower level, the "gone/daemon/srv" package provides a "Server" interface and the functions Serve() and Shutdown(). You can use this interface directly, implementing your own reload policy, or use the higher-level Run() function of the "gone/daemon" package. At the higher level, package "gone/daemon" provides the Run() function, which takes a set of RunOptions, one of which must be a function to instantiate a slice of srv.Server objects (and possibly define a cleanup function), and serves these servers until Exit() is called, while obeying Reload().
Package lumberjack provides a rolling logger. Note that this is v2.0 of lumberjack, and should be imported using gopkg.in thusly: The package name remains simply lumberjack, and the code resides at https://github.com/natefinch/lumberjack under the v2.0 branch. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
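A sketch of exactly that hookup; the Logger fields shown (MaxSize in megabytes, MaxAge in days) are the package's rotation knobs:

    log.SetOutput(&lumberjack.Logger{
        Filename:   "/var/log/myapp/app.log",
        MaxSize:    100, // megabytes per file before rotation
        MaxBackups: 3,   // rotated files to retain
        MaxAge:     28,  // days to retain rotated files
    })
    log.Println("now logging to a rolling file")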
Package lumberjack provides a rolling logger. Note that this is v3.0 of lumberjack. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. Letting outside processes write to or manipulate the file that lumberjack writes to will also result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
Package pbc provides structures for building pairing-based cryptosystems. It is a wrapper around the Pairing-Based Cryptography (PBC) Library authored by Ben Lynn (https://crypto.stanford.edu/pbc/). This wrapper provides access to all PBC functions. It supports generation of various types of elliptic curves and pairings, element initialization, I/O, and arithmetic. These features can be used to quickly build pairing-based or conventional cryptosystems. The PBC library is designed to be extremely fast. Internally, it uses GMP for arbitrary-precision arithmetic. It also includes a wide variety of optimizations that make pairing-based cryptography highly efficient. To improve performance, PBC does not perform type checking to ensure that operations actually make sense. The Go wrapper provides the ability to add compatibility checks to most operations, or to use unchecked elements to maximize performance. Since this library provides low-level access to pairing primitives, it is very easy to accidentally construct insecure systems. This library is intended to be used by cryptographers or to implement well-analyzed cryptosystems. Cryptographic pairings are defined over three mathematical groups: G1, G2, and GT, where each group is typically of the same order r. Additionally, a bilinear map e maps a pair of elements — one from G1 and another from G2 — to an element in GT. This map e has the following additional property: If G1 == G2, then a pairing is said to be symmetric. Otherwise, it is asymmetric. Pairings can be used to construct a variety of efficient cryptosystems. The PBC library currently supports 5 different types of pairings, each with configurable parameters. These types are designated alphabetically, roughly in chronological order of introduction. Type A, D, E, F, and G pairings are implemented in the library. Each type has different time and space requirements. For more information about the types, see the documentation for the corresponding generator calls, or the PBC manual page at https://crypto.stanford.edu/pbc/manual/ch05s01.html. This package must be compiled using cgo. It also requires the installation of GMP and PBC. During the build process, this package will attempt to include <gmp.h> and <pbc/pbc.h>, and then dynamically link to GMP and PBC. Most systems include a package for GMP. To install GMP in Debian / Ubuntu: For an RPM installation with YUM: For installation with Fink (http://www.finkproject.org/) on Mac OS X: For more information or to compile from source, visit https://gmplib.org/ To install the PBC library, download the appropriate files for your system from https://crypto.stanford.edu/pbc/download.html. PBC has three dependencies: the gcc compiler, flex (http://flex.sourceforge.net/), and bison (https://www.gnu.org/software/bison/). See the respective sites for installation instructions. Most distributions include packages for these libraries. For example, in Debian / Ubuntu: The PBC source can be compiled and installed using the usual GNU Build System: After installing, you may need to rebuild the search path for libraries: It is possible to install the package on Windows through the use of MinGW and MSYS. MSYS is required for installing PBC, while GMP can be installed through a package. Based on your MinGW installation, you may need to add "-I/usr/local/include" to CPPFLAGS and "-L/usr/local/lib" to LDFLAGS when building PBC. Likewise, you may need to add these options to CGO_CPPFLAGS and CGO_LDFLAGS when installing this package. 
This package is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional details, see the COPYING and COPYING.LESSER files. This example generates a pairing and some random group elements, then applies the pairing operation. This example computes and verifies a Boneh-Lynn-Shacham signature in a simulated conversation between Alice and Bob.
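A sketch of the first example described above, assuming the wrapper's generator and element APIs (GenerateA, NewPairing, NewG1/NewG2/NewGT, Rand, Pair); verify the exact names against the package:

    // Generate a symmetric (type A) pairing and two random elements.
    params := pbc.GenerateA(160, 512)
    pairing := params.NewPairing()

    g := pairing.NewG1().Rand()
    h := pairing.NewG2().Rand()

    // Apply the bilinear map: gt = e(g, h) in GT.
    gt := pairing.NewGT().Pair(g, h)
    _ = gt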
Package siris is a fully-featured HTTP/2 backend web framework written entirely in Google's Go Language. Source code and other details for the project are available at GitHub: The only requirement is the Go Programming Language, at least version 1.8. Example code: All HTTP methods are supported, and developers can also register handlers for the same paths for different methods. The first parameter is the HTTP Method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more context.Handler values, executed in the order they were registered, when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, Siris provides functions for all HTTP Methods. The first parameter is the request path of the route, and the second variadic parameter should contain one or more context.Handler values, executed in the order they were registered, when a user requests that specific resource path from the server. Example code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: Siris developers are able to register their own handlers for HTTP statuses like 404 not found, 500 internal server error, and so on. Example code: With the help of Siris's expressionist router you can build any form of API you desire, with safety. Example code: In the previous example, we've seen static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter, and custom http errors; now it's time to see wildcard parameters and macros. Siris, like the net/http std package, registers a route's handlers by a Handler; the Siris type of handler is just a func(ctx context.Context), where context comes from github.com/go-siris/siris/context. Until Go 1.9 you will have to import that package too; after Go 1.9 this will not be necessary. Siris has the easiest and the most powerful routing process you have ever met. At the same time, Siris has its own interpreter (yes, like a programming language) for a route's path syntax and the parsing and evaluation of its dynamic path parameters; I call them "macros" for short. How? It calculates its needs and, if no special regexp is needed, it just registers the route with the low-level path syntax; otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that Siris provides the basic types and some default "macro funcs", you are able to register your own too! Register a named path parameter function: at the func(argument ...) you can have any standard type; it will be validated before the server starts, so don't worry about performance here - the only thing that runs at serve time is the returned func(paramValue string) bool. Example code: A path parameter name should contain only alphabetical letters; symbols (including '_') and numbers are NOT allowed. If a route fails to be registered, the app will panic without any warnings if you didn't catch the second return value (error) on .Handle/.Get.... Last, do not confuse ctx.Values() with ctx.Params().
Path parameters' values go to ctx.Params(), while the context's local storage, which can be used to communicate between handlers and middleware(s), goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/go-siris/siris/tree/master/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: Siris is able to wrap and convert any external, third-party Handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Siris supports 5 template engines out-of-the-box; developers can still use any external golang template engine, as `context.ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/jteeuwen/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/go-siris/siris/tree/master/_examples/intermediate/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode, as they have no need to restart their app on every template edit. Example code: Each one of these template engines has different options, located here: https://github.com/go-siris/siris/tree/master/view . This example will show how to store and access data from a session. You don't need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, they will first have to visit /login to get a valid session cookie, which logs them in. Additionally, they can visit /logout to revoke their access to our secret message. Example code: Running the example: You should have a basic idea of the framework by now; we've just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the below links: Examples: Built-in Middleware: Community Middleware: Home Page:
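A minimal sketch of a Siris application, following the iris-style API the text above describes (New, method registration, the {param:type} macro syntax, and Run with an address); exact method names should be checked against the current release:

    app := siris.New()

    app.Get("/hello", func(ctx context.Context) {
        // context here is github.com/go-siris/siris/context, as noted above.
        ctx.HTML("<h1>Hello from Siris</h1>")
    })

    // {id:int} uses a standard macro type; the handler only runs when
    // the parameter parses as an int.
    app.Get("/users/{id:int}", func(ctx context.Context) {
        ctx.Writef("user #%s", ctx.Params().Get("id"))
    })

    app.Run(siris.Addr(":8080"))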
Package golangsdk provides a multi-vendor interface to OpenStack-compatible clouds. The library has a three-level hierarchy: providers, services, and resources. Provider structs represent the cloud providers that offer and manage a collection of services. You will generally want to create one Provider client per OpenStack cloud. Use your OpenStack credentials to create a Provider client. The IdentityEndpoint is typically referred to as "auth_url" or "OS_AUTH_URL" in information provided by the cloud operator. Additionally, the cloud may refer to TenantID or TenantName as project_id and project_name. Credentials are specified like so: You may also use the openstack.AuthOptionsFromEnv() helper function. This function reads in standard environment variables frequently found in an OpenStack `openrc` file. Again note that Gophercloud currently uses "tenant" instead of "project". Service structs are specific to a provider and handle all of the logic and operations for a particular OpenStack service. Examples of services include: Compute, Object Storage, Block Storage. In order to define one, you need to pass in the parent provider, like so: Resource structs are the domain models that services make use of in order to work with and represent the state of API resources: Intermediate Result structs are returned for API operations, which allow generic access to the HTTP headers, response body, and any errors associated with the network transaction. To turn a result into a usable resource struct, you must call the Extract method which is chained to the response, or an Extract function from an applicable extension: All requests that enumerate a collection return a Pager struct that is used to iterate through the results one page at a time. Use the EachPage method on that Pager to handle each successive Page in a closure, then use the appropriate extraction method from that request's package to interpret that Page as a slice of results: If you want to obtain the entire collection of pages without doing any intermediary processing on each page, you can use the AllPages method: This top-level package contains utility functions and data types that are used throughout the provider and service packages. Of particular note for end users are the AuthOptions and EndpointOpts structs.
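As an illustration of the provider/service hierarchy, here is a hedged sketch; the import path and exact field names follow the gophercloud lineage this package derives from and may differ slightly in your copy of the library.

    package main

    import (
        "fmt"

        "github.com/huaweicloud/golangsdk" // import path assumed
        "github.com/huaweicloud/golangsdk/openstack"
    )

    func main() {
        // Credentials, as described above.
        opts := golangsdk.AuthOptions{
            IdentityEndpoint: "https://identity.example.com/v3",
            Username:         "demo",
            Password:         "secret",
            TenantID:         "project-id",
            DomainName:       "Default",
        }

        // Provider client: one per OpenStack cloud.
        provider, err := openstack.AuthenticatedClient(opts)
        if err != nil {
            panic(err)
        }

        // Service client: one per service, created from the parent provider.
        compute, err := openstack.NewComputeV2(provider, golangsdk.EndpointOpts{
            Region: "RegionOne",
        })
        if err != nil {
            panic(err)
        }
        fmt.Println(compute.Endpoint)
    }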
Package mafsa implements Minimal Acyclic Finite State Automata (MA-FSA) in a space-optimized way as described by Daciuk, Mihov, Watson, and Watson in their paper, "Incremental Construction of Minimal Acyclic Finite-State Automata" (2000). It also implements Minimal Perfect Hashing (MPH) as described by Lucchesi and Kowaltowski in their paper, "Applications of Finite Automata Representing Large Vocabularies" (1992). Unscientifically speaking, this package lets you store large amounts of strings (with Unicode) in memory so that membership queries, prefix lookups, and fuzzy searches are fast. And because minimal perfect hashing is included, you can associate each entry in the tree with more data used by your application. See the README or the end of this documentation for a brief tutorial. MA-FSA structures are a specific type of Deterministic Acyclic Finite State Automaton (DAFSA) which fold equivalent state transitions into each other starting from the suffix of each entry. Typical construction algorithms involve building out the entire tree first, then minimizing the completed tree. However, the method described in the paper above allows the tree to be minimized after every word insertion, provided the insertions are performed in lexicographical order, which drastically reduces memory usage compared to regular prefix trees ("tries"). The goal of this package is to provide a simple, useful, and correct implementation of MA-FSA. Though more complex algorithms exist for removal of items and unordered insertion, these features may be outside the scope of this package. Membership queries are on the order of O(n), where n is the length of the input string, so basically O(1). It is advisable to keep n small since long entries without much in common, especially in the beginning or end of the string, will quickly overrun the optimizations that are available. In those cases, n-gram implementations might be preferable, though these will use more CPU. This package provides two kinds of MA-FSA implementations. One, the BuildTree, facilitates the construction of an optimized tree and allows ordered insertions. The other, MinTree, is effectively read-only but uses significantly less memory and is ideal for production environments where only reads will be occurring. Usually your build process will be separate from your production application, which will make heavy use of reading the structure. To use this package, create a BuildTree and insert your items in lexicographical order: The tree is now compressed to a minimum number of nodes and is ready to be saved. In your production application, then, you can read the file into a MinTree directly: The mt variable is a *MinTree which has the same data as the original BuildTree, but without all the extra "scaffolding" that was required for adding new elements. The package provides some basic read mechanisms.
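A minimal sketch of the build/read split described above, assuming the New/Insert/Finish/Save/Load method names as commonly published for this package; verify them against your version before relying on this.

    package main

    import (
        "fmt"

        "github.com/smartystreets/mafsa" // import path assumed; formerly mholt/mafsa
    )

    func main() {
        // Build phase: insertions must be in lexicographical order.
        bt := mafsa.New()
        for _, w := range []string{"cities", "city", "pities", "pity"} {
            if err := bt.Insert(w); err != nil {
                panic(err)
            }
        }
        bt.Finish() // minimize the tree

        if err := bt.Save("words.mafsa"); err != nil {
            panic(err)
        }

        // Production phase: load the compact, read-only MinTree.
        mt, err := mafsa.Load("words.mafsa")
        if err != nil {
            panic(err)
        }
        fmt.Println(mt.Contains("city")) // true
    }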
Package lumberjack provides a rolling logger. Note that this is v2.0 of lumberjack, and should be imported using gopkg.in thusly: The package name remains simply lumberjack, and the code resides at https://github.com/natefinch/lumberjack under the v2.0 branch. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
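For example, wiring lumberjack into the standard library's log package looks like this (the field values are illustrative):

    package main

    import (
        "log"

        "gopkg.in/natefinch/lumberjack.v2"
    )

    func main() {
        log.SetOutput(&lumberjack.Logger{
            Filename:   "/var/log/myapp/app.log",
            MaxSize:    100, // megabytes before rotation
            MaxBackups: 3,   // number of old files to keep
            MaxAge:     28,  // days to retain old files
        })
        log.Println("logging through lumberjack")
    }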
Package baker provides types and functions to build a pipeline for the processing of structured data. Structured data is represented by the Record interface. LogLine implements that interface and represents a CSV record. Using the functions in the package one can build and run a Topology, reading its configuration from a TOML file. The package doesn't include any components; they can be found in their respective packages (baker/input, baker/filter, baker/output and baker/upload). The README file in the project repository provides additional information and examples: https://github.com/AdRoll/baker/blob/main/README.md
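A minimal sketch of that flow, assuming the NewConfigFromToml and Main helpers the project README describes; the TOML content and the component registration are elided here, and the component names are illustrative.

    package main

    import (
        "log"
        "strings"

        "github.com/AdRoll/baker"
    )

    const conf = `
    [input]
    name = "List"

    [output]
    name = "FileWriter"
    `

    func main() {
        // Register the components your topology uses; the core package
        // ships none (they live in baker/input, baker/filter, etc.).
        comp := baker.Components{
            // Inputs: ..., Filters: ..., Outputs: ..., Uploads: ...
        }

        cfg, err := baker.NewConfigFromToml(strings.NewReader(conf), comp)
        if err != nil {
            log.Fatal(err)
        }
        if err := baker.Main(cfg); err != nil {
            log.Fatal(err)
        }
    }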
Fresh is a command-line tool that builds, starts and restarts your Go application, including web apps, every time you save a Go or template file, or any other files you specify via the configuration file. Fresh can be used even without a configuration file, using default values. Fresh watches for file events, and every time a file is created, deleted or modified it will build and restart the application. If `go build` returns an error, it will create a log file in the tmp folder and keep watching, attempting to rebuild, provided the initial compilation was successful. It will also attempt to kill previously created processes. This is a fork of the original fresh (https://github.com/pilu/fresh), which has been announced as unmaintained.
Package webpackager implements the control flow of Web Packager. The code below illustrates the usage of this package: Config allows you to change some behaviors of the Packager. packager.Run(url) retrieves an HTTP response using FetchClient, processes it using Processor, and turns it into a signed exchange using ExchangeFactory. Processor inspects the HTTP response to check its eligibility for signed exchanges and manipulates it to optimize the page loading. The generated signed exchanges are stored in ResourceCache to prevent duplicates. The code above sets just two parameters, ExchangeFactory and ResourceCache, and uses the defaults for the other parameters. With this setup, the packager retrieves the content just through HTTP, applies the recommended set of optimizations, generates signed exchanges compliant with version b3, and saves them in files named like "index.html.sxg" under "/tmp/sxg". Config has a few more parameters; see its definition for the details. You can also pass your own implementations to Config to inject custom logic into the packaging flow. You could write, for example, a custom FetchClient to retrieve the content from a database table instead of web servers, a custom Processor or HTMLTask to apply your own optimization techniques, a ResourceCache to store the produced signed exchanges in another database table in addition to a local drive, and so on. The cmd/webpackager package provides a command line interface to execute the packaging flow without writing the driver code on your own.
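The driver code could look roughly like this. The constructor name NewPackager and the exact Run signature are assumptions inferred from the description above, and the two Config fields the text discusses are left unset here; consult the package's own examples for concrete values.

    package main

    import (
        "log"
        "net/url"

        "github.com/google/webpackager"
    )

    func main() {
        // Hypothetical sketch: in real use, set ExchangeFactory and
        // ResourceCache as described above.
        pkgr := webpackager.NewPackager(webpackager.Config{
            // ExchangeFactory: ...,
            // ResourceCache:   ...,
        })

        u, err := url.Parse("https://example.com/index.html")
        if err != nil {
            log.Fatal(err)
        }
        if err := pkgr.Run(u); err != nil {
            log.Fatal(err)
        }
    }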
rm2pdf MIT Licensed RCL January 2020 This programme attempts to create annotated A4 PDF files from reMarkable tablet file groups (RM bundles), including .rm files recording marks. Normally these files will be in a local directory, such as an xochitl directory synchronised to a tablet over sshfs. The programme takes as input either: * The path to the PDF file which has had annotations made to it * The path to the RM bundle with uuid, such as <path>/<uuid> with no filename extension, together with a PDF template to use for the background (a blank A4 template is provided in templates/A4.pdf). The resulting PDF is layered, with the background and each .rm file's marks in separate PDF layers. The .rm file marks are stroked using the fpdf PDF library, although .rm tilt and pressure characteristics are not represented in the PDF output. PDF files from sources such as Microsoft Word do not always work well. It can help to rewrite them using the pdftk tool, e.g. by doing Custom colours for some pens can be specified using the -c or --colours switch, which overrides the default pen selection. A second -c switch sets the colours on the second layer, and so on. Example of processing an rm bundle without a pdf: Example of processing an rm bundle with a pdf, and per-layer colours: General options: Warning: the OutputFile will be overwritten if it exists. The parser is a Go port of a reMarkable tablet "lines" (".rm") file parser, with binary decoding hints drawn from rm2svg https://github.com/reHackable/maxio/blob/master/tools/rM2svg which in turn refers to https://github.com/lschwetlick/maxio/tree/master/tools. Python struct format codes referred to in the parser, such as "<{}sI", are from rm2svg. RMParser provides a python-like iterator based on bufio.Scan, which iterates over the referenced reMarkable .rm file, returning a data structure consisting of each path with its associated layer and path segments. Usage example: Pen selections are hard-coded in stroke.go with widths, opacities and colours. The StrokeSetting interface "Width" is used to scale strokes based on nothing more than what seems to be about right. Resolving the page sizes and reMarkable output resolution was based on the reMarkable png templates and viewing the reMarkable app's output x and y widths. These dimensions are noted in pdf.go in PDF_WIDTH_IN_MM and PDF_HEIGHT_IN_MM. Conversion from mm to points (MM_TO_RMPOINTS) and from points to the resolution of the reMarkable tablet (PTS_2_RMPTS) is also set in pdf.go. The theoretical conversion factor is slightly altered based on the output from various tests, including those in the testfiles directory. To view the testfiles after processing, use or alter the paths used in the tests.
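Since the usage example is elided above, here is an illustrative sketch of the iterator pattern the text describes. The rmparse import path and the RMParse/Parse/Path identifiers are hypothetical placeholders, not verified against the actual API.

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/rorycl/rm2pdf/rmparse" // parser subpackage; path assumed
    )

    func main() {
        f, err := os.Open("page.rm")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // bufio.Scan-style iteration, as described above; names here are
        // illustrative only.
        rm, err := rmparse.RMParse(f)
        if err != nil {
            log.Fatal(err)
        }
        for rm.Parse() {
            p := rm.Path
            fmt.Printf("layer %d: path with %d segments\n", p.Layer, len(p.Segments))
        }
    }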
Package jin Copyright (c) 2020 eco. License that can be found in the LICENSE file. "your wish is my command" Jin is a comprehensive JSON manipulation tool bundle. All functions are tested with random data with the help of Node.js. All test-path and test-value creation is automated with Node.js. Jin provides parse, interpret, build and format tools for JSON. Third-party packages are only used for the benchmarks; no dependencies are needed for the core functions. We benchmarked Jin against other, similar packages. In the results, Jin is the fastest (op/ns) and more memory-friendly than the others (B/op). For more information please take a look at the BENCHMARK section below. WHAT IS NEW? 7 new functions tested and added to the package. - GetMap() gets objects as a map[string]string structure with key-value pairs - GetAll() gets only specific keys' values - GetAllMap() gets only specific keys with a map[string]string structure - GetKeys() gets an object's keys as a string array - GetValues() gets an object's values as a string array - GetKeysValues() gets an object's keys and values as separate string arrays - Length() gets the length of a JSON array. 06.04.2020 INSTALLATION And you are good to go. Import and start using. The major difference between parsing and interpreting is that a parser has to read all the data before answering your needs; an interpreter, on the other hand, reads only until it finds the data you need. With a parser, once the parse is complete you can access any data in no time, but there is a time cost to parsing all the data, and this cost increases as the data content grows. If you need to access all keys of a JSON then we simply recommend the Parser. But if you need to access only some keys of a JSON then we strongly recommend the Interpreter; it will be much faster and much more memory-friendly than the parser. The Interpreter is the core element of this package; there is no need to create an Interpreter type, just call whichever function you want. First let's look at the general function parameters. We are going to use the Get() function to access the value the path points to, in this case 'jin'. A path can consist of hard-coded values. The Get() function's return type is []byte, but all other variations of return types are implemented as different functions. For example, if you need "value" as a string, use GetString(). The Parser is another alternative for JSON manipulation. We recommend this structure when you need to access all or most of the keys in the JSON. The Parser constructor needs only one parameter. We can parse it with the Parse() function. Let's look at Parser.Get(). About the path value, look above. There are return type variations of the Parser.Get() function just like the Interpreter's. To return a string use Parser.GetString(), like this (a short usage sketch follows at the end of this section). Other useful functions of Jin: Add(), AddKeyValue(), Set(), SetKey(), Delete(), Insert(), IterateArray(), IterateKeyValue(), Tree(). Let's look at the IterateArray() function. There are two formatting functions, Flatten() and Indent(). Indent() adds indentation to JSON for nicer visualization and Flatten() removes this indentation. Control functions are a simple and easy way to check the value types of any path, for example IsArray(); or you can use GetType(). There are lots of JSON build functions in this package and all of them have their own examples. We just want to mention a couple of them. Scheme is a simple and powerful tool for creating JSON schemes. Testing is very important for this type of package and it shows how reliable it is. For that reason we use Node.js for unit testing. Let's look at the folder arrangement and working principle.
- test/ folder: test-json.json is a temporary file for testing; all other test-cases are copied here under this name so they can be processed by test-case-creator.js. test-case-creator.js is the core path & value creation mechanism. It is executed with the executeNode() function. It reads the test-json.json file and generates the paths and values from this file's content. With command-line arguments it can generate different paths and values. As a result, two files are created by this process: the first is test-json-paths.json and the second is test-json-values.json. test-json-paths.json has all the path values; test-json-values.json has all the values corresponding to those path values. - tests/ folder: All files in this folder are test-cases. But that doesn't mean you can't change anything; on the contrary, all test-cases are created automatically based on this folder's content. You can add or remove any .json file you want. All Go-side test-case automation functions are in the core_test.go file. This package was developed with Node.js v13.7.0; please make sure that your machine has a valid version of Node.js before testing. All functions and methods are tested with complicated, randomly generated .json files like this one. Most JSON packages don't even run properly with this kind of JSON stream. We didn't see such packages as competitors, and that's why we didn't bother to benchmark against them. Benchmark results: - The Benchmark prefix is removed from function names to make room for the results. - The benchmark between 'buger/jsonparser' and 'ecoshub/jin' uses the same payload (JSON test-cases) that the 'buger/jsonparser' package uses for its own benchmarks. We are currently working on: - Marshal() and Unmarshal() functions - an http.Request parser/interpreter - builder functions for http.ResponseWriter. If you want to contribute to this work, feel free to fork it. We want to fill this section with contributors.
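The usage sketch promised above, showing both the interpreter and parser styles. jin.GetString and jin.Parse follow the descriptions in this section; addressing array elements by a numeric-string index ("0") is an assumption.

    package main

    import (
        "fmt"

        "github.com/ecoshub/jin"
    )

    func main() {
        data := []byte(`{"user":{"name":"eco","langs":["go","js"]}}`)

        // Interpreter style: call package-level functions directly,
        // no Interpreter value to construct.
        name, err := jin.GetString(data, "user", "name")
        if err != nil {
            panic(err)
        }
        fmt.Println(name) // eco

        // Parser style: parse once, then query repeatedly.
        prs, err := jin.Parse(data)
        if err != nil {
            panic(err)
        }
        lang, err := prs.GetString("user", "langs", "0")
        if err != nil {
            panic(err)
        }
        fmt.Println(lang) // go
    }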
This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places.
This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places. This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places. This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places. This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places. This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places. This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places. This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places. This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places. This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places. This file is automatically created by Recurly's OpenAPI generation process and thus any edits you make by hand will be lost. If you wish to make a change to this file, please create a Github issue explaining the changes you need and we will usher them to the appropriate places.
Command goat provides an implementation of a BitTorrent tracker, written in Go. goat can be built using Go 1.1+. It can be downloaded, built, and installed, simply by running: In addition, goat depends on a MySQL server for data storage. After creating a database and user for goat, its database schema may be imported from the SQL files located in 'res/'. goat will not run unless MySQL is installed, and a database and user are properly configured for its use. Optionally, goat can be built to use ql (https://github.com/cznic/ql) as its storage backend. This is done by supplying the 'ql' tag in the go get command: A blank ql database file is located under 'res/ql/goat.db', and will be copied to '~/.config/goat/goat.db' on UNIX systems. goat is then able to use ql as its storage backend, for those who do not wish to use an external MySQL backend. goat is capable of listening for torrent traffic in three modes: HTTP, HTTPS, and UDP. HTTP/HTTPS are the recommended methods, and are required in order for goat to serve its API and to allow use of private tracker passkeys. HTTP is considered the standard mode of operation for goat. HTTP allows gathering a great number of metrics, use of passkeys, use of a client whitelist, and access to goat's RESTful API, when configured. For most trackers, this will be the only listener necessary for goat to function properly. The HTTPS listener provides a method to encrypt traffic to the tracker, but must be used with caution. Unless the SSL certificate in use is signed by a proper certificate authority, most clients will distrust it, and may outright refuse to announce to it. If you are in possession of a certificate signed by a certificate authority, this mode may be preferable, as it provides added security for your clients. The UDP listener is the most unusual method of the three, and should only be used for public trackers. The BitTorrent UDP tracker protocol specifies a very specific packet format, meaning that additional information or parameters cannot be packed into a UDP datagram in a standard way. The UDP tracker may be the fastest and least bandwidth-intensive, but as stated, should only be used for public trackers. A more recent feature added to goat, in order to allow better interoperability with many languages, is a RESTful API, which is served using the HTTP or HTTPS listeners. This API enables easy retrieval of tracker statistics, while allowing goat to run as a completely independent process. It should be noted that the API is only enabled when configured, and when an HTTP or HTTPS listener is enabled. Without a transport mechanism, the API will be inaccessible. The API features several modes of authentication, including HTTP Basic for login and HMAC-SHA1 for all other calls. Upon logging into the API using HTTP Basic with a username and password pair, an API public key and secret will be generated. The public key is used as the username for HTTP Basic authentication, and the secret key is used to calculate an HMAC-SHA1 signature for the password. As part of API signature generation, a random nonce value must be generated and added to the request. It is added to the password portion of the HTTP Basic request, and also to the string which is used to create the signature. Nonce values must be changed on every request, or the request will fail.
The current pseudocode format of the HMAC-SHA1 signature is as follows: The proper format for an HTTP Basic request is as follows: When the public key, nonce, and API signature are sent via HTTP Basic, the server will verify the signature. Successful authentication will allow access to the API. This list contains all API calls currently recognized by goat. Each call must be authenticated using the aforementioned methods. Request an API public key and secret key for this user. The public key, user ID, and secret key are used to authenticate further API calls. The expire time indicates when this key is set to expire. Further API calls will extend the expiration time. Retrieve a list of all files tracked by goat. Some extended attributes are not included, to reduce strain on the database and to provide a more general overview. Retrieve extended attributes about a specific file with matching ID. This provides counts for the number of completions, seeders, and leechers, and a list of fileUser relationships associated with a given file. Retrieve a variety of metrics about the current status of goat, including its PID, hostname, memory usage, number of HTTP/UDP hits, etc. Create a user with the specified username, password, and torrent limit. Retrieve a list of all users registered to goat, including their ID, torrent limit, and username. Retrieve information about a single user with matching ID, including their ID, torrent limit, and username. goat is configured using a JSON file, which will be created under '~/.config/goat/config.json' on UNIX systems. Here is an example configuration, describing the settings available to the user.
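As a rough illustration of that authentication flow, here is a minimal Go sketch of computing a signature and placing the nonce in the password portion of an HTTP Basic request. The signing-string concatenation, the nonce/signature separator, and the endpoint path shown here are hypothetical placeholders; goat's actual pseudocode format defines the real layout.

    package main

    import (
        "crypto/hmac"
        "crypto/rand"
        "crypto/sha1"
        "encoding/hex"
        "fmt"
        "log"
        "net/http"
    )

    // sign computes an HMAC-SHA1 signature of msg using the API secret key.
    func sign(secret, msg string) string {
        mac := hmac.New(sha1.New, []byte(secret))
        mac.Write([]byte(msg))
        return hex.EncodeToString(mac.Sum(nil))
    }

    func main() {
        publicKey := "examplePublicKey" // returned by the HTTP Basic login call
        secretKey := "exampleSecretKey" // returned by the HTTP Basic login call

        // A fresh nonce is required on every request.
        buf := make([]byte, 16)
        if _, err := rand.Read(buf); err != nil {
            log.Fatal(err)
        }
        nonce := hex.EncodeToString(buf)

        // Hypothetical signing string; goat defines the real concatenation.
        signature := sign(secretKey, nonce+"GET/api/status")

        req, err := http.NewRequest("GET", "http://localhost:8080/api/status", nil)
        if err != nil {
            log.Fatal(err)
        }
        // Public key as the username; nonce plus signature in the password.
        req.SetBasicAuth(publicKey, nonce+"/"+signature)
        fmt.Println(req.Header.Get("Authorization"))
    }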
Package goBolt implements drivers for the Neo4J Bolt Protocol versions 1-4. There are some limitations to the types of collections the driver supports. Specifically, maps should always be of type map[string]interface{} and lists should always be of type []interface{}. The Bolt protocol does not appear to support uint64 either, so the biggest number it can currently send is the int64 max. The URL format is: `bolt://(user):(password)@(host):(port)` Schema must be `bolt`. User and password are only necessary if you are authenticating. TLS is supported by using query parameters on the connection string, like so: `bolt://host:port?tls=true&tls_no_verify=false` The supported query params are: * timeout - the number of seconds to set the connection timeout to. Defaults to 60 seconds. * tls - Set to 'true' or '1' if you want to use TLS encryption * tls_no_verify - Set to 'true' or '1' if you want to accept any server certificate (for testing, not secure) * tls_ca_cert_file - path to a custom CA cert for a self-signed TLS cert * tls_cert_file - path to a cert file for this client (need to verify this is processed by Neo4j) * tls_key_file - path to a key file for this client (need to verify this is processed by Neo4j) Errors returned from the API support wrapping, so if you receive an error from the library, it might be wrapping other errors. You can get the innermost error by using the `InnerMost` method. Failure messages from Neo4J are reported, along with their metadata, as an error. To get the failure message metadata from a wrapped error, call `err.(*errors.Error).InnerMost().(messages.FailureMessage).Metadata` If there is an error with the database connection, you should get a sql/driver ErrBadConn, as per the best-practice recommendations for Go SQL drivers. However, this error may be wrapped, so you might have to call `InnerMost` to get it, as specified above.
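For instance, the error-unwrapping pattern above might look like the following sketch. The runQuery helper is a hypothetical placeholder, but the `errors.Error`, `InnerMost`, and `messages.FailureMessage` names are the ones this package's docs reference (import them from the library's errors and messages subpackages).

    // runQuery is a hypothetical helper that talks to Neo4J over Bolt.
    err := runQuery("bolt://user:secret@localhost:7687?timeout=30&tls=true")
    if err != nil {
        // Unwrap to the innermost error to inspect Neo4J failure metadata.
        if e, ok := err.(*errors.Error); ok {
            if failure, ok := e.InnerMost().(messages.FailureMessage); ok {
                fmt.Println("failure metadata:", failure.Metadata)
            }
        }
    }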
go-sqlx Generates Go code using a package as a generic template that implements sqlx. Given the name of a struct type T, go-sqlx will create a new self-contained Go source file and rewrite the "db" tags of T's struct fields. The file is created in the same package and directory as the package that defines T. It has helpful defaults designed for use with go generate. For example, given this snippet, running this command in the same directory will create the file pill_sqlx.go, in package painkiller, containing a definition of a helper for sqlx. Typically this process would be run using go generate, like this: With no arguments, it processes the package in the current directory. Otherwise, the arguments must name a single directory holding a Go package or a set of Go source files that represent a single Go package. The -type flag accepts a comma-separated list of types so a single run can generate methods for multiple types. The default output file is t_sqlx.go, where t is the lower-cased name of the first type listed. It can be overridden with the -output flag.
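A hypothetical snippet of the kind the example describes (the struct and its fields are placeholders, not part of the tool):

    package painkiller

    //go:generate go-sqlx -type=Pill

    // Pill's "db" tags are rewritten by go-sqlx, and the generated sqlx
    // helper lands in pill_sqlx.go alongside this file.
    type Pill struct {
        ID   int    `db:"id"`
        Name string `db:"name"`
    }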
go-atomicvalue Generates Go code using a package as a generic template for atomic.Value. Given the name of an atomic.Value type T, and the name of a type Value, go-atomicvalue will create a new self-contained Go source file implementing typed Store and Load accessors. The file is created in the same package and directory as the package that defines T and Value. It has helpful defaults designed for use with go generate. For example, given this snippet, running this command in the same directory will create the file pill_atomicvalue.go, in package painkiller, containing a definition of those accessors. Typically this process would be run using go generate, like this: With no arguments, it processes the package in the current directory. Otherwise, the arguments must name a single directory holding a Go package or a set of Go source files that represent a single Go package. The -type flag accepts a comma-separated list of types so a single run can generate methods for multiple types. The default output file is t_atomicvalue.go, where t is the lower-cased name of the first type listed. It can be overridden with the -output flag.
go-syncpool Generates Go code using a package as a generic template for sync.Pool. Given the name of a sync.Pool type T, and the name of a type Value, go-syncpool will create a new self-contained Go source file implementing typed Get and Put methods. The file is created in the same package and directory as the package that defines T and Value. It has helpful defaults designed for use with go generate. For example, given this snippet, running this command in the same directory will create the file pill_syncpool.go, in package painkiller, containing a definition of those methods. Typically this process would be run using go generate, like this: With no arguments, it processes the package in the current directory. Otherwise, the arguments must name a single directory holding a Go package or a set of Go source files that represent a single Go package. The -type flag accepts a comma-separated list of types so a single run can generate methods for multiple types. The default output file is t_syncpool.go, where t is the lower-cased name of the first type listed. It can be overridden with the -output flag.
go-syncmap Generates Go code using a package as a generic template for sync.Map. Given the name of a sync.Map type T, and the names of its Key and Value types, go-syncmap will create a new self-contained Go source file implementing the typed sync.Map methods available from Go version 1.9 onward (Load, Store, LoadOrStore, Delete, Range) plus those added from Go version 1.15 onward (LoadAndDelete). The file is created in the same package and directory as the package that defines T, Key, and Value. It has helpful defaults designed for use with go generate. For example, given this snippet, running this command in the same directory will create the file pill_syncmap.go, in package painkiller, containing a definition of those methods. Typically this process would be run using go generate, like this: With no arguments, it processes the package in the current directory. Otherwise, the arguments must name a single directory holding a Go package or a set of Go source files that represent a single Go package. The -type flag accepts a comma-separated list of types so a single run can generate methods for multiple types. The default output file is t_syncmap.go, where t is the lower-cased name of the first type listed. It can be overridden with the -output flag.
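A hypothetical snippet of the kind the example describes; the angle-bracket -type syntax follows the project's examples but should be verified against the installed version, and the Pills/Pill names are placeholders:

    package painkiller

    import "sync"

    //go:generate go-syncmap -type "Pills<string, *Pill>"

    // Pills wraps sync.Map; the generated file adds typed Load, Store,
    // LoadOrStore, Delete, Range (and, on Go 1.15+, LoadAndDelete)
    // methods so callers avoid interface{} assertions.
    type Pills sync.Map

    // Pill is a hypothetical value type.
    type Pill struct {
        Name string
    }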
go-validator Generates Go code using a package as a generic template that implements validator. Given the name of a struct type T, go-validator will create a new self-contained Go source file and rewrite the "db" tags of T's struct fields. The file is created in the same package and directory as the package that defines T. It has helpful defaults designed for use with go generate. For example, given this snippet, running this command in the same directory will create the file pill_validator.go, in package painkiller, containing a definition of a helper for validator. Typically this process would be run using go generate, like this: With no arguments, it processes the package in the current directory. Otherwise, the arguments must name a single directory holding a Go package or a set of Go source files that represent a single Go package. The -type flag accepts a comma-separated list of types so a single run can generate methods for multiple types. The default output file is t_validator.go, where t is the lower-cased name of the first type listed. It can be overridden with the -output flag.
go-nulljson Generates Go code using a package as a generic template that implements database/sql.Scanner and database/sql/driver.Valuer. Given the name of a NullJson type T, and the name of a type Value, go-nulljson will create a new self-contained Go source file implementing those two interfaces. The file is created in the same package and directory as the package that defines T and Value. It has helpful defaults designed for use with go generate. For example, given this snippet, running this command in the same directory will create the file pill_nulljson.go, in package painkiller, containing a definition of those methods. Typically this process would be run using go generate, like this: With no arguments, it processes the package in the current directory. Otherwise, the arguments must name a single directory holding a Go package or a set of Go source files that represent a single Go package. The -type flag accepts a comma-separated list of types so a single run can generate methods for multiple types. The default output file is t_nulljson.go, where t is the lower-cased name of the first type listed. It can be overridden with the -output flag.
Package gopacket provides packet decoding for the Go language. gopacket contains many sub-packages with additional functionality you may find useful, including: Also, if you're looking to dive right into code, see the examples subdirectory for numerous simple binaries built using gopacket libraries. gopacket takes in packet data as a []byte and decodes it into a packet with a non-zero number of "layers". Each layer corresponds to a protocol within the bytes. Once a packet has been decoded, the layers of the packet can be requested from the packet. Packets can be decoded from a number of starting points. Many of our base types implement Decoder, which allows us to decode packets for which we don't have full data. Most of the time, you won't just have a []byte of packet data lying around. Instead, you'll want to read packets in from somewhere (file, interface, etc.) and process them. To do that, you'll want to build a PacketSource. First, you'll need to construct an object that implements the PacketDataSource interface. There are implementations of this interface bundled with gopacket in the gopacket/pcap and gopacket/pfring subpackages... see their documentation for more information on their usage. Once you have a PacketDataSource, you can pass it into NewPacketSource, along with a Decoder of your choice, to create a PacketSource. Once you have a PacketSource, you can read packets from it in multiple ways. See the docs for PacketSource for more details. The easiest method is the Packets function, which returns a channel, then asynchronously writes new packets into that channel, closing the channel if the packetSource hits an end-of-file. You can change the decoding options of the packetSource by setting fields in packetSource.DecodeOptions... see the following sections for more details. gopacket optionally decodes packet data lazily, meaning it only decodes a packet layer when it needs to handle a function call. Lazily-decoded packets are not concurrency-safe. Since layers have not all been decoded, each call to Layer() or Layers() has the potential to mutate the packet in order to decode the next layer. If a packet is used in multiple goroutines concurrently, don't use gopacket.Lazy; gopacket will then decode the packet fully, and all future function calls won't mutate the object. By default, gopacket will copy the slice passed to NewPacket and store the copy within the packet, so future mutations to the bytes underlying the slice don't affect the packet and its layers. If you can guarantee that the underlying slice bytes won't be changed, you can use NoCopy to tell gopacket.NewPacket, and it'll use the passed-in slice itself. The fastest method of decoding is to use both Lazy and NoCopy, but note from the many caveats above that for some implementations either or both may be dangerous. During decoding, certain layers are stored in the packet as well-known layer types. For example, IPv4 and IPv6 are both considered NetworkLayer layers, while TCP and UDP are both TransportLayer layers. We support 4 layers, corresponding to the 4 layers of the TCP/IP layering scheme (roughly analogous to layers 2, 3, 4, and 7 of the OSI model). To access these, you can use the packet.LinkLayer, packet.NetworkLayer, packet.TransportLayer, and packet.ApplicationLayer functions. Each of these functions returns a corresponding interface (gopacket.{Link,Network,Transport,Application}Layer).
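Putting the PacketSource pieces above together, a minimal reader over a capture file might look like the following sketch (the file name is a placeholder, and gopacket/pcap requires libpcap):

    package main

    import (
        "fmt"
        "log"

        "github.com/google/gopacket"
        "github.com/google/gopacket/pcap"
    )

    func main() {
        // pcap.Handle implements the PacketDataSource interface.
        handle, err := pcap.OpenOffline("capture.pcap")
        if err != nil {
            log.Fatal(err)
        }
        defer handle.Close()

        source := gopacket.NewPacketSource(handle, handle.LinkType())
        source.DecodeOptions = gopacket.DecodeOptions{Lazy: true, NoCopy: true}

        // Packets returns a channel that is closed at end-of-file.
        for packet := range source.Packets() {
            fmt.Println(packet)
        }
    }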
The first three provide methods for getting src/dst addresses for that particular layer, while the final layer provides a Payload function to get payload data. This is helpful, for example, to get payloads for all packets regardless of their underlying data type: A particularly useful layer is ErrorLayer, which is set whenever there's an error parsing part of the packet. Note that we don't return an error from NewPacket because we may have decoded a number of layers successfully before running into our erroneous layer. You may still be able to get your Ethernet and IPv4 layers correctly, even if your TCP layer is malformed. gopacket has two useful objects, Flow and Endpoint, for communicating in a protocol-independent manner the fact that a packet is coming from A and going to B. The general layer types LinkLayer, NetworkLayer, and TransportLayer all provide methods for extracting their flow information, without worrying about the type of the underlying Layer. A Flow is a simple object made up of a set of two Endpoints, one source and one destination. It details the sender and receiver of the Layer of the Packet. An Endpoint is a hashable representation of a source or destination. For example, for LayerTypeIPv4, an Endpoint contains the IP address bytes for a v4 IP packet. A Flow can be broken into Endpoints, and Endpoints can be combined into Flows: Both Endpoint and Flow objects can be used as map keys, and the equality operator can compare them, so you can easily group together all packets based on endpoint criteria: For load-balancing purposes, both Flow and Endpoint have FastHash() functions, which provide quick, non-cryptographic hashes of their contents. Of particular importance is the fact that Flow FastHash() is symmetric: A->B will have the same hash as B->A. An example usage could be: This allows us to split up a packet stream while still making sure that each stream sees all packets for a flow (and its bidirectional opposite). If your network has some strange encapsulation, you can implement your own decoder. In this example, we handle Ethernet packets which are encapsulated in a 4-byte header. See the docs for Decoder and PacketBuilder for more details on how to write decoders, or look at RegisterLayerType and RegisterEndpointType to see how to add layer/endpoint types to gopacket. TLDR: DecodingLayerParser takes about 10% of the time that NewPacket takes to decode packet data, but only for known packet stacks. Basic decoding using gopacket.NewPacket or PacketSource.Packets is somewhat slow due to its need to allocate a new packet and every respective layer. It's very versatile and can handle all known layer types, but sometimes you really only care about a specific set of layers regardless, so that versatility is wasted. DecodingLayerParser avoids memory allocation altogether by decoding packet layers directly into preallocated objects, which you can then reference to get the packet's information. A quick example: The important thing to note here is that the parser is modifying the passed-in layers (eth, ip4, ip6, tcp) instead of allocating new ones, thus greatly speeding up the decoding process. It's even branching based on layer type... it'll handle an (eth, ip4, tcp) or (eth, ip6, tcp) stack. However, it won't handle any other type... since no other decoders were passed in, an (eth, ip4, udp) stack will stop decoding after ip4, and only pass back [LayerTypeEthernet, LayerTypeIPv4] through the 'decoded' slice (along with an error saying it can't decode a UDP packet).
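A fleshed-out version of that quick example, with the input bytes left as a placeholder:

    package main

    import (
        "fmt"

        "github.com/google/gopacket"
        "github.com/google/gopacket/layers"
    )

    // decodeFast decodes eth/ip4|ip6/tcp stacks into preallocated layers,
    // avoiding per-packet allocations.
    func decodeFast(data []byte) {
        var (
            eth layers.Ethernet
            ip4 layers.IPv4
            ip6 layers.IPv6
            tcp layers.TCP
        )
        parser := gopacket.NewDecodingLayerParser(
            layers.LayerTypeEthernet, &eth, &ip4, &ip6, &tcp)
        decoded := []gopacket.LayerType{}
        if err := parser.DecodeLayers(data, &decoded); err != nil {
            // Unknown layers stop decoding; already-decoded layers stay valid.
            fmt.Println("partial decode:", err)
        }
        for _, typ := range decoded {
            if typ == layers.LayerTypeTCP {
                fmt.Println("TCP:", tcp.SrcPort, "->", tcp.DstPort)
            }
        }
    }

    func main() {
        decodeFast(nil) // substitute real packet bytes here
    }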
Unfortunately, not all layers can be used by DecodingLayerParser... only those implementing the DecodingLayer interface are usable. Also, it's possible to create DecodingLayers that are not themselves Layers... see layers.IPv6ExtensionSkipper for an example of this. As well as offering the ability to decode packet data, gopacket also allows you to create packets from scratch. A number of gopacket layers implement the SerializableLayer interface; these layers can be serialized to a []byte in the following manner: SerializeTo PREPENDS the given layer onto the SerializeBuffer, treating the current buffer's Bytes() slice as the payload of the serializing layer. Therefore, you can serialize an entire packet by serializing a set of layers in reverse order (Payload, then TCP, then IP, then Ethernet, for example). The gopacket.SerializeLayers helper function does exactly that. To generate an (empty and useless, because no fields are set) Ethernet(IPv4(TCP(Payload))) packet, for example, you can run: If you use gopacket, you'll almost definitely want to make sure gopacket/layers is imported, since when imported it sets all the LayerType variables and fills in a lot of interesting variables/maps (DecodersByLayerName, etc). Therefore, it's recommended that even if you don't use any layers functions directly, you still import with:
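The elided serialization example is conventionally written like this (all layer fields left at their zero values, hence "empty and useless"):

    package main

    import (
        "fmt"

        "github.com/google/gopacket"
        "github.com/google/gopacket/layers"
    )

    func main() {
        buf := gopacket.NewSerializeBuffer()
        opts := gopacket.SerializeOptions{} // FixLengths / ComputeChecksums also available
        if err := gopacket.SerializeLayers(buf, opts,
            &layers.Ethernet{},
            &layers.IPv4{},
            &layers.TCP{},
            gopacket.Payload([]byte{1, 2, 3, 4})); err != nil {
            fmt.Println("serialize error:", err)
        }
        packetData := buf.Bytes() // Ethernet(IPv4(TCP(Payload)))
        fmt.Println(len(packetData), "bytes")
    }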
Package ogdl is used to process OGDL, the Ordered Graph Data Language. OGDL is a textual format to write trees or graphs of text, where indentation and spaces define the structure. Here is an example: The language is simple, both in its textual representation and in its number of productions (the specification rules), allowing for compact implementations. OGDL character streams are normally formed by Unicode characters, and encoded as UTF-8 strings, but any encoding that is ASCII-transparent is compatible with the specification. See the full spec at http://ogdl.org. To install this package just do: If we have a text file 'config.ogdl' containing: then the following will print If the timeout parameter is not present, the default value (60) will be assigned to 'to'. The default value is optional, but be aware that Int64() will return 0 if the parameter doesn't exist. The configuration file can be written in a more concise way: The package includes a template processor. It takes an arbitrary input stream with some variables in it, and produces an output stream with the variables resolved out of a Graph object which acts as context. For example (given the previous config file): string(b) is then: Some rules are followed:
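As a sketch of the elided configuration example: FromFile, Get, and Int64 are the helpers this package's docs describe, while the import path and the 'eth0.timeout' key are assumptions for illustration.

    package main

    import (
        "fmt"

        "github.com/rveen/ogdl" // import path assumed
    )

    func main() {
        // Parse the file into a Graph, navigate by dotted path, and read
        // an int64 with a default of 60 for when the key is missing.
        g := ogdl.FromFile("config.ogdl")
        to := g.Get("eth0.timeout").Int64(60)
        fmt.Println("timeout:", to)
    }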
Package lumberjack provides a rolling logger. Note that this is v2.0 of lumberjack, and should be imported using gopkg.in thusly: The package name remains simply lumberjack, and the code resides at https://github.com/natefinch/lumberjack under the v2.0 branch. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
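For example, hooking lumberjack into the standard library looks like this (the file path and limits are illustrative):

    package main

    import (
        "log"

        "gopkg.in/natefinch/lumberjack.v2"
    )

    func main() {
        log.SetOutput(&lumberjack.Logger{
            Filename:   "/var/log/myapp/foo.log",
            MaxSize:    500, // megabytes before rotation
            MaxBackups: 3,   // rotated files to keep
            MaxAge:     28,  // days to retain old logs
        })
        log.Println("logging through lumberjack now")
    }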
Package muxfys is a pure Go library that lets you in-process temporarily fuse-mount remote file systems or object stores as a "filey" system. Currently, only S3-like systems are supported. It has high performance, and is easy to use with nothing else to install, and no root permissions needed (except to initially install/configure fuse: on older Linux you may need to install fuse-utils, and for macOS you'll need to install osxfuse; for both you must ensure that 'user_allow_other' is set in /etc/fuse.conf or equivalent). It allows "multiplexing": you can mount multiple different buckets (or sub-directories of the same bucket) on the same local directory. This makes commands you want to run against the files in your buckets much simpler, e.g. instead of mounting s3://publicbucket, s3://myinputbucket and s3://myoutputbucket to separate mount points and running: You could multiplex the 3 buckets (at the desired paths) onto the directory you will work from and just run: When using muxfys, you 1) mount, 2) do something that needs the files in your S3 bucket(s), 3) unmount. Then repeat 1-3 for other things that need data in your S3 buckets. To add support for a new kind of remote file system or object store, simply implement the RemoteAccessor interface and supply an instance of that to RemoteConfig.
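A minimal mount/use/unmount sketch, assuming the S3 accessor constructors from the package's examples; the exact constructor names, config fields, and bucket path here should be checked against the muxfys godoc and are placeholders.

    package main

    import (
        "log"

        "github.com/VertebrateResequencing/muxfys"
    )

    func main() {
        fs, err := muxfys.New(&muxfys.Config{
            Mount:     "/tmp/muxfys/mount",
            CacheBase: "/tmp",
            Retries:   3,
        })
        if err != nil {
            log.Fatal(err)
        }

        // Build an S3 accessor from environment-style configuration.
        acfg, err := muxfys.S3ConfigFromEnvironment("", "mybucket/subdir")
        if err != nil {
            log.Fatal(err)
        }
        accessor, err := muxfys.NewS3Accessor(acfg)
        if err != nil {
            log.Fatal(err)
        }

        // 1) mount, 2) use the files, 3) unmount.
        remote := &muxfys.RemoteConfig{Accessor: accessor, CacheData: true}
        if err := fs.Mount(remote); err != nil {
            log.Fatal(err)
        }
        defer fs.Unmount()

        // ... work with files under /tmp/muxfys/mount ...
    }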