Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/

Check out the Getting Started Guide and API Reference Docs, which detail the SDK's components and each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference for the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/

The SDK is composed of two main components: the SDK core and the service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK.

aws - SDK core; provides common shared types such as Config and Logger, and utilities to make working with API parameters easier.

awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare the returned error to specific error codes (see the sketch following this overview). See the package's documentation for additional values that can be extracted such as RequestId.

credentials - Provides the types and built-in credentials providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are also additional credentials providers such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 Instance roles.

endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information such as which services are in a region, and which regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2".

session - Provides the initial default configuration, and loads configuration from external sources such as the environment and the shared credentials file.

request - Provides the API request sending and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and for configuring how the SDK processes the request.

service - Clients for AWS services. All services supported by the SDK are available under this folder.

The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports.
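A hedged sketch of the awserr handling described above. It assumes the upstream aws-sdk-go v1 import paths; this fork may expose the same packages under a different module path:

```
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/service/s3"
)

// handleSDKError inspects an error returned by an SDK operation.
func handleSDKError(err error) {
	if aerr, ok := err.(awserr.Error); ok {
		// Compare the returned code against the service's ErrCode constants.
		switch aerr.Code() {
		case s3.ErrCodeNoSuchBucket:
			fmt.Println("bucket does not exist:", aerr.Message())
		default:
			fmt.Println(aerr.Code(), aerr.Message())
		}
		// Service response errors also carry the request ID.
		if reqErr, ok := err.(awserr.RequestFailure); ok {
			fmt.Println("request ID:", reqErr.RequestID())
		}
		return
	}
	fmt.Println("non-SDK error:", err)
}
```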
All service clients follow a common pattern of creation and usage. When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides shared configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the nifcloud.Config type to override configuration provided by the Session, creating service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently.

In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html

See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/#Config

When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of providing these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain, and how to configure it. The common items in the credential chain are the following:

Environment Credentials - Set of environment variables that are useful when sub-processes are created for specific roles.

Shared Credentials file (~/.nifcloud/credentials) - This file stores your credentials based on a profile name and is useful for local development.

EC2 Instance Role Credentials - Use an EC2 Instance Role to assign credentials to an application running on an EC2 instance. This removes the need to manage credential files in production.

Credentials can also be configured in code by setting the Config's Credentials value to a custom provider, or by using one of the providers included with the SDK, to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. The example below creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests. See the credentials package documentation for more information on credential providers included with the SDK, and how to customize the SDK's usage of credentials. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/credentials

The SDK has support for the shared configuration file (~/.nifcloud/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the Option's SharedConfigState parameter.

In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests to. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created.
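A sketch of creating a Session, overriding configuration, and assuming an IAM role for a single client, using the upstream aws-sdk-go v1 package layout (this fork may use different module paths):

```
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	// Shared configuration; the default credential chain and region are
	// resolved from the environment, config files, or the values below.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-west-2"),
	}))

	// Client using the Session's configuration as-is.
	svc := s3.New(sess)

	// Client that bypasses the default chain and assumes an IAM role.
	creds := stscreds.NewCredentials(sess, "myRoleARN")
	svcWithRole := s3.New(sess, &aws.Config{Credentials: creds})

	_, _ = svc, svcWithRole
}
```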
The last value specified in code wins if the region is specified multiple ways. To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code in the application to select the region. The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. These values are helpful because they reduce the need to type the region string manually.

To set the region on a Session, set the aws package's Config struct Region field to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients, and all of the clients make API requests to the same region. See the endpoints package for the AWS Regions and Endpoints metadata. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/endpoints/

In addition to setting the region when creating a Session you can also set the region on a per-service-client basis. This overrides the region of a Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information and additional options such as setting the Endpoint, and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/nifcloud/#Config

Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In the list below we'll use the S3 ListObjects API as an example of the different ways of making API requests (a combined sketch follows this list).

ListObjects - Base API operation that will make the API request to the service.

ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly.

ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response.

ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument, for modifying how the request will be made or extracting information from the raw HTTP response.

ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext, this method also takes the request package's Option functional options as the variadic argument.

In addition to the API operations the SDK also includes several higher-level methods that abstract checking for, and waiting for, an AWS resource to be in a desired state. In the list below we'll use WaitUntilBucketExists to demonstrate the different forms of waiters.

WaitUntilBucketExists - Method to make an API request to query an AWS service for a resource's state. Will return successfully when that state is accomplished.

WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern.
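A combined sketch of the ListObjects call forms and a waiter, again using the upstream aws-sdk-go v1 API shapes (adjust the import paths for this fork; the bucket name is a placeholder):

```
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func listInAllForms(sess *session.Session) error {
	svc := s3.New(sess)
	input := &s3.ListObjectsInput{Bucket: aws.String("my-bucket")}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Base operation.
	if _, err := svc.ListObjects(input); err != nil {
		return err
	}

	// Context-aware variant; the request is canceled when ctx expires.
	if _, err := svc.ListObjectsWithContext(ctx, input); err != nil {
		return err
	}

	// Request form: build but don't send, e.g. to produce a presigned URL.
	req, _ := svc.ListObjectsRequest(input)
	url, err := req.Presign(15 * time.Minute)
	if err != nil {
		return err
	}
	fmt.Println("presigned URL:", url)

	// Paginated form: the callback runs once per page; return true to continue.
	err = svc.ListObjectsPages(input, func(page *s3.ListObjectsOutput, lastPage bool) bool {
		fmt.Println(len(page.Contents), "objects on this page")
		return true
	})
	if err != nil {
		return err
	}

	// Waiter form: block until the bucket exists, honoring the context.
	return svc.WaitUntilBucketExistsWithContext(ctx, &s3.HeadBucketInput{
		Bucket: aws.String("my-bucket"),
	})
}
```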
In addition, these methods take the request package's WaiterOptions to configure the waiter, and how the underlying request will be made by the SDK.

The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned.

Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round trip API page requests. Pagination methods take a callback function that will be called for each page of the API's response.

Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the state that is desired, an error occurs, or the waiter times out. If a resource times out the error code returned will be request.WaiterResourceNotReadyErrorCode.

The example below shows a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that will cancel the request if it takes too long. This example highlights how to use sessions, create a service client, make a request, handle the error, and process the response.
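A hedged version of that complete example: upload a file to S3 with a context timeout that cancels the request if it takes too long (upstream aws-sdk-go v1 names; the bucket, key and file are placeholders):

```
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	// Session with shared configuration and an explicit region.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-west-2"),
	}))

	// Cancel the upload if it takes longer than two minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	f, err := os.Open("upload.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to open file:", err)
		os.Exit(1)
	}
	defer f.Close()

	uploader := s3manager.NewUploader(sess)
	out, err := uploader.UploadWithContext(ctx, &s3manager.UploadInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("upload.txt"),
		Body:   f,
	})
	if err != nil {
		// request.CanceledErrorCode is returned when the context times out.
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
			fmt.Fprintln(os.Stderr, "upload canceled due to timeout:", aerr.Message())
		} else {
			fmt.Fprintln(os.Stderr, "failed to upload:", err)
		}
		os.Exit(1)
	}

	fmt.Println("uploaded to", out.Location)
}
```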
Package gosaas contains helper functions, middlewares, user management and billing functionality commonly used in a typical Software as a Service web application. The primary goal of this library is to handle the repetitive components, letting you focus on the core part of your project. You use the NewServer function to get a working server MUX. You need to pass the top-level routes to the NewServer function to get the initial routing working. For instance, if your web application handles the following routes: you only pass the "task" and "ping" routes to the server. Anything after the top level will be handled by your code. You will be interested in the ShiftPath, Respond, ParseBody and ServePage functions to get started. The most important aspect of a route is the Handler field, which corresponds to the code to execute. The Handler is a standard http.Handler, meaning that your code will need to implement the ServeHTTP function. The remaining fields of a route control whether specific middlewares are part of the request life-cycle or not. For instance, the Logger flag will output request information to stdout when enabled.
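A hedged sketch of that wiring, based only on the description above; the exact Route fields and the ShiftPath/Respond/NewServer signatures are assumptions and may differ from the released API:

```
package main

import (
	"net/http"

	"github.com/dstpierre/gosaas"
)

// taskRoutes handles everything under the top-level "task" route.
type taskRoutes struct{}

func (t *taskRoutes) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// ShiftPath is assumed to split the next path segment from the rest.
	var head string
	head, r.URL.Path = gosaas.ShiftPath(r.URL.Path)
	switch head {
	case "list":
		// Respond is assumed to write a JSON response with the given status.
		gosaas.Respond(w, r, http.StatusOK, []string{"task 1", "task 2"})
	default:
		http.NotFound(w, r)
	}
}

func main() {
	// Only the top-level routes are passed to NewServer; the Logger flag
	// enables the request-logging middleware for this route.
	routes := map[string]*gosaas.Route{
		"task": {Handler: &taskRoutes{}, Logger: true},
	}
	mux := gosaas.NewServer(routes)
	http.ListenAndServe(":8080", mux)
}
```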
Package ovirtclient provides a human-friendly Go client for the oVirt Engine. It provides an abstraction layer for the oVirt API, as well as a mocking facility for testing purposes. This documentation contains two parts. This introduction explains setting up the client with the credentials. The API doc contains the individual API calls. When reading the API doc, start with the Client interface: it contains all components of the API. The individual APIs, their documentation and examples are located in subinterfaces, such as DiskClient. There are several ways to create a client instance. The most basic way is to use the New() function as follows: The mock client simulates the oVirt engine behavior in-memory without needing an actual running engine. This is a good way to provide a testing facility. It can be created using the NewMock method: That's it! However, to make it really useful, you will need the test helper which can set up test fixtures. The test helper can work in two ways: either it sets up test fixtures in the mock client, or it sets up a live connection and identifies a usable storage domain, cluster, etc. for testing purposes. The ovirtclient.NewMockTestHelper() function can be used to create a test helper with a mock client in the backend: The easiest way to set up the test helper for a live connection is by using environment variables. To do that, you can use the ovirtclient.NewLiveTestHelperFromEnv() function: This function will inspect environment variables to determine if a connection to a live oVirt engine can be established. The following environment variables are supported: the URL of the oVirt engine API (mandatory); the username for the oVirt engine (mandatory); the password for the oVirt engine (mandatory); a file containing the CA certificate in PEM format; the CA certificate in PEM format provided directly; disabling certificate verification if set (not recommended); the cluster to use for testing (automatically chosen if not provided); the ID of the blank template (automatically chosen if not provided); the storage domain to use for testing (automatically chosen if not provided); and the VNIC profile to use for testing (automatically chosen if not provided). You can also create the test helper manually: This library provides extensive logging. Each API interaction is logged on the debug level, and other messages are added on other levels. In order to provide logging this library uses the go-ovirt-client-log (https://github.com/oVirt/go-ovirt-client-log) interface definition. As long as your logger implements this interface, you will be able to receive log messages. The logging library also provides a few built-in loggers. For example, you can log via the default Go log interface: Or, you can also log in tests: You can also disable logging: Finally, we also provide an adapter library for klog here: https://github.com/oVirt/go-ovirt-client-log-klog Modern-day oVirt engines run secured with TLS. This means that the client needs a way to verify the certificate the server is presenting. This is controlled by the tls parameter of the New() function. You can implement your own source by implementing the TLSProvider interface, but the package also includes a ready-to-use provider. Create the provider using the TLS() function: This provider has several functions. The easiest to set up is using the system trust root for certificates. However, this won't work on Windows: Now you need to add your oVirt engine certificate to your system trust root.
If you don't want to, or can't, add the certificate to the system trust root, you can also provide it directly to the client. Finally, you can also disable certificate verification. Do we need to say that this is a very, very bad idea? The configured tls variable can then be passed to the New() function to create an oVirt client. This library attempts to retry API calls that can be retried if possible. Each function has a sensible retry policy. However, you may want to customize the retries by passing one or more retry flags. The following retry flags are supported: This strategy will stop retries when the context parameter is canceled. This strategy adds a wait time after each attempt, which is increased by the given factor on each try. The default is a backoff with a factor of 2. This strategy will cancel retries if the error in question is a permanent error. This is enabled by default. This strategy will abort retries if a maximum number of tries is reached. On complex calls the retries are counted per underlying API call. This strategy will abort retries if a certain time has elapsed for the higher-level call. This strategy will abort retries if a certain underlying API call takes longer than the specified duration.
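A hedged sketch of wiring a TLS provider into a client. The module path and the provider method names (CACertsFromSystem, CACertsFromFile, Insecure) are assumptions not confirmed by this documentation; check the TLSProvider and New() docs for your version:

```
package example

import (
	ovirtclient "github.com/ovirt/go-ovirt-client" // assumed module path
)

// buildTLS returns a TLS provider for the New() function.
func buildTLS(useSystemRoots bool) ovirtclient.TLSProvider {
	tls := ovirtclient.TLS()
	if useSystemRoots {
		// Use the operating system trust store (may not work on Windows).
		tls.CACertsFromSystem()
	} else {
		// Provide the engine CA certificate directly from a PEM file.
		tls.CACertsFromFile("/etc/pki/ovirt-engine/ca.pem")
	}
	// tls.Insecure() would disable verification entirely; avoid it outside labs.
	// The resulting provider is passed as the tls argument of New().
	return tls
}
```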
Package lumberjack provides a rolling logger. Note that this is v2.0 of lumberjack, and should be imported using gopkg.in thusly: The package name remains simply lumberjack, and the code resides at https://github.com/natefinch/lumberjack under the v2.0 branch. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
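For example, the standard wiring looks like this (Logger field names per the v2 API):

```
package main

import (
	"log"

	"gopkg.in/natefinch/lumberjack.v2"
)

func main() {
	log.SetOutput(&lumberjack.Logger{
		Filename:   "/var/log/myapp/foo.log",
		MaxSize:    100, // megabytes before the file is rotated
		MaxBackups: 3,   // number of old files to keep
		MaxAge:     28,  // days to retain old files
		Compress:   true,
	})

	log.Println("application started")
}
```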
Package lumberjack provides a rolling logger. Note that this is v3.0 of lumberjack. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. Letting outside processes write to or manipulate the file that lumberjack writes to will also result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
Package telemetry holds observability facades for our services and libraries. The provided interface here allows for instrumenting libraries and packages without any dependencies on Logging and Metric instrumentation implementations. This allows a consistent way of authoring Log lines and Metrics for the producers of these libraries and packages while providing consumers the ability to plug in the implementations of their choice. The following requirements helped shape the form of the interfaces. The interfaces must be simple to use, or developers will resort to using `fmt.Printf()`. Error: something happened that we can't gracefully recover from. This is a log line that should be actionable by an operator and be alerted on. Info: something happened that might be of interest but does not impact the application stability. E.g. someone gave the wrong credentials and was therefore denied access, a parsing error on external input, etc. Debug: anything that can help to understand application state during development. More levels get tricky to reason about when writing log lines or establishing the right level of verbosity at runtime. By the above explanations, fatal folds into error, warning folds into info, and trace folds into debug. We trust more in partitioning loggers per domain, component, etc. and allowing them to be individually addressed to required log levels than controlling a single logger with more levels. We also believe that most logs should be metrics. Anything above Debug level should be able to emit a metric which can be used for dashboards, alerting, etc. We want the ability to roll up / aggregate over the same message while allowing for contextual data to be added. A logging implementation can choose how to present the provided log data. This can be 100% structured, a single log line, or a combination. Allow the Go Context object to be passed and have a registry for values of interest we want to pull from context. A good example of an item we want to automatically include in log lines is the `x-request-id` so we can tie log lines produced in the request path together. This allows us to control, per component, which levels of log lines we want to output at runtime. The interface design allows for this to be implemented without having an opinion on it. By providing at each library or component entry point the ability to provide a Logger implementation, this can be easily achieved. Look at that lovely very empty go.mod and non-existent go.sum file.
Package css implements a CSS3 compliant scanner and parser. This is meant to be a low-level library for extracting a CSS3 abstract syntax tree from raw CSS text. This package can be used for building tools to validate, optimize and format CSS text. CSS parsing occurs in two steps. First the scanner breaks up a stream of code points (runes) into tokens. These tokens represent the most basic units of the CSS syntax tree such as identifiers, whitespace, and strings. The second step is to feed these tokens into the parser which creates the abstract syntax tree (AST) based on the context of the tokens. Unlike many language parsers, the abstract syntax tree for CSS saves many of the original tokens in the stream so they can be reparsed at different levels. For example, parsing a @media query will save off the raw tokens found in the {-block so they can be reparsed as a full style sheet. This package doesn't understand the specifics of how to parse different types of at-rules (such as @media queries) so it defers that to the user to handle parsing. The CSS3 syntax defines a syntax tree of several types. At the top level there is a StyleSheet. The style sheet is simply a collection of Rules. A Rule can be either an AtRule or a QualifiedRule. An AtRule is defined as a rule starting with an "@" symbol and an identifier, followed by zero or more component values, and finally ending with either a {-block or a semicolon. The block is parsed simply as a collection of tokens and it is up to the user to define the exact grammar. A QualifiedRule is defined as a rule starting with one or more component values and ending with a {-block. Inside the {-blocks is a list of declarations. Despite the name, a list of declarations can be either an AtRule or a Declaration. A Declaration is an identifier followed by a colon followed by one or more component values. The declaration can also have its Important flag set if the last two non-whitespace tokens are a case-insensitive "!important". ComponentValues are the basic unit inside rules and declarations. A ComponentValue can be either a SimpleBlock, a Function, or a Token. A simple block starts with either a {, [, or (, has zero or more component values, and then ends with the mirror of the starting token (}, ], or )). A Function is an identifier immediately followed by a left parenthesis, then zero or more component values, and then ending with a right parenthesis.
Package fyner provides a declarative wrapper around the Fyne UI library. Fyner's approach to structuring the UI and state management of an app is very different from Fyne's. Fyner provides its own set of widgets that wrap the basic Fyne widgets, providing similar functionality but in a far more declarative way. To differentiate, Fyner refers to its own widgets as "components". Fyner components are instantiated manually as struct pointer literals. They export a number of fields which may be set, some of which are of a type from the state package. The fields of a component should not be changed after it is created, though if any of the fields are mutable state types, they may be set via the state API. For example, to create a center-layout container that contains a single label: To interact with the typical Fyne UI system, a Content function is provided that turns a Fyner component into a Fyne CanvasObject. This is typically only used at the top level in order to set the content of a window, hence the name, but it can actually be used anywhere that a client might want to insert a component into a regular Fyne UI layout. To illustrate the whole system, here's a complete example:
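The original example is omitted here; a loosely hedged sketch follows. The component and helper names (fyner.Content, fyner.Center, fyner.Label, state.Static) and the module paths are assumptions based on the description above and may not match the real API:

```
package main

import (
	"fyne.io/fyne/v2/app"

	"deedles.dev/fyner"       // assumed module path
	"deedles.dev/fyner/state" // assumed module path
)

func main() {
	a := app.New()
	w := a.NewWindow("Example")

	// A center-layout container holding a single label, declared as
	// struct pointer literals; Content converts it to a Fyne CanvasObject.
	w.SetContent(fyner.Content(&fyner.Center{
		Child: &fyner.Label{
			Text: state.Static("Hello, Fyner!"),
		},
	}))

	w.ShowAndRun()
}
```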
Package iris provides a beautifully expressive and easy to use foundation for your next website, API, or distributed app. Source code and other details for the project are available at GitHub: 8.5.9 Final The only requirement is the Go Programming Language, at least version 1.8, but 1.9 is highly recommended. Iris takes advantage of the vendor directory feature wisely: https://docs.google.com/document/d/1Bz5-UB7g2uPBdOx-rw5t9MxJwkfpx90cqG9AFL0JAYo. You get truly reproducible builds, as this method guards against upstream renames and deletes. A simple copy-paste and `go get ./...` to resolve two dependencies: https://github.com/kataras/golog and https://github.com/iris-contrib/httpexpect will work forever, even for older versions; the newest version can be retrieved by `go get`, but this file contains documentation for an older version of Iris. Follow the instructions below: 1. install the Go Programming Language: https://golang.org/dl 2. clear your previous `$GOPATH/src/github.com/kataras/iris` folder or create a new one 3. download the Iris v8.5.9 (final): https://github.com/kataras/iris/archive/v8.zip 4. extract the contents of the `iris-v8` folder that's inside the downloaded zip file to your `$GOPATH/src/github.com/kataras/iris` 5. navigate to your `$GOPATH/src/github.com/kataras/iris` folder if you're not already there and open a terminal/command prompt, execute the command: `go get ./...` and you're ready to GO:) Example code: You can start the server(s) listening to any type of `net.Listener` or even `http.Server` instance. The method for initialization of the server should be passed at the end, via the `Run` function. Below you'll see some useful examples: UNIX and BSD hosts can take advantage of the reuse port feature. Example code: That's all with listening; you have full control when you need it. Let's continue by learning how to catch CONTROL+C/COMMAND+C or the unix kill command and shut down the server gracefully. In order to manually manage what to do when the app is interrupted, we have to disable the default behavior with the option `WithoutInterruptHandler` and register a new interrupt handler (globally, across all possible hosts). Example code: Access to all hosts that serve your application can be provided by the `Application#Hosts` field, after the `Run` method. But the most common scenario is that you may need access to the host before the `Run` method; there are two ways to gain access to the host supervisor, read below. The first way is to use `app.NewHost` to create a new host and use one of its `Serve` or `Listen` functions to start the application via the `iris#Raw` Runner. Note that this way needs an extra import of the `net/http` package. Example Code: The second, and probably easier, way is to use the `host.Configurator`. Note that this method requires an extra import statement of "github.com/kataras/iris/core/host" when using go < 1.9; if you're targeting go1.9 then you can use `iris#Supervisor` and omit the extra host import. All common `Runners` we saw earlier (`iris#Addr, iris#Listener, iris#Server, iris#TLS, iris#AutoTLS`) accept a variadic argument of `host.Configurator`; they are just `func(*host.Supervisor)`. Therefore the `Application` gives you the ability to modify the auto-created host supervisor through these. Example Code: Read more about listening and graceful shutdown by navigating to: All HTTP methods are supported, and developers can also register handlers for the same paths for different methods.
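A minimal sketch of this creation-and-routing pattern (iris v8 API shapes; the handler bodies are placeholders):

```
package main

import "github.com/kataras/iris"

func main() {
	app := iris.New()

	// Method: "GET", request path: "/contact", then one or more handlers.
	app.Handle("GET", "/contact", func(ctx iris.Context) {
		ctx.HTML("<h1>Contact</h1>")
	})

	// Per-method helper.
	app.Get("/", func(ctx iris.Context) {
		ctx.WriteString("index")
	})

	// The Runner (iris.Addr here) is passed at the end, via Run.
	app.Run(iris.Addr(":8080"))
}
```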
The first parameter is the HTTP Method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more iris.Handler, executed in the registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, iris provides functions for all HTTP Methods. The first parameter is the request path of the route, and the second variadic parameter should contain one or more iris.Handler, executed in the registered order when a user requests that specific resource path from the server. Example code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: iris developers are able to register their own handlers for http statuses like 404 not found, 500 internal server error and so on. Example code: With the help of iris's expressionist router you can build any form of API you desire, with safety. Example code: Iris has first-class support for the MVC pattern; you'll not find this stuff anywhere else in the Go world. Example Code: The Iris web framework supports Request data, Models, Persistence Data and Binding with the fastest possible execution. Characteristics: All HTTP Methods are supported; for example, if you want to serve `GET` then the controller should have a function named `Get()`, and you can define more than one method function to serve in the same Controller struct. Persistence data inside your Controller struct (share data between requests) via the `iris:"persistence"` tag right on the field, or Bind using `app.Controller("/", new(myController), theBindValue)`. Models inside your Controller struct (set at the Method function and rendered by the View) via the `iris:"model"` tag right on the field, i.e User UserModel `iris:"model" name:"user"`; the view will recognise it as `{{.user}}`. If the `name` tag is missing then it takes the field's name, in this case "User". Access to the request path and its parameters via the `Path and Params` fields. Access to the template file that should be rendered via the `Tmpl` field. Access to the template data that should be rendered inside the template file via the `Data` field. Access to the template layout via the `Layout` field. Access to the low-level `iris.Context` via the `Ctx` field. Get the relative request path by using the controller's name via `RelPath()`. Get the relative template path directory by using the controller's name via `RelTmpl()`. Flow as you're used to: `Controllers` can be registered to any `Party`, including Subdomains, and the Party's begin and done handlers work as expected. Optional `BeginRequest(ctx)` function to perform any initialization before the method execution, useful to call middlewares or when many methods use the same collection of data. Optional `EndRequest(ctx)` function to perform any finalization after any method executed. Inheritance, recursively; see for example our `mvc.SessionController/iris.SessionController`, which has the `mvc.Controller/iris.Controller` as an embedded field and adds its logic to its `BeginRequest`. Source file: https://github.com/kataras/iris/blob/v8/mvc/session_controller.go. Read access to the current route via the `Route` field. Support for more than one input argument (mapped to dynamic request path parameters).
Register one or more relative paths and be able to get path parameters, i.e Response via output arguments, optionally, i.e Where `any` means everything, from custom structs to the standard language's types. `Result` is an interface which contains only that function: Dispatch(ctx iris.Context), and Get where HTTP Method function (Post, Put, Delete...). Iris has very powerful and blazing fast MVC support; you can return any value of any type from a method function and it will be sent to the client as expected. * if `string` then it's the body. * if `string` is the second output argument then it's the content type. * if `int` then it's the status code. * if `bool` is false then it throws a 404 not found http error by skipping everything else. * if `error` and not nil then the (any type) response will be omitted and the error's text with a 400 bad request will be rendered instead. * if `(int, error)` and the error is not nil then the response result will be the error's text with the status code as `int`. * if `custom struct` or `interface{}` or `slice` or `map` then it will be rendered as json, unless a `string` content type follows. * if `mvc.Result` then it executes its `Dispatch` function, so good design patterns can be used to split the model's logic where needed. The example below is not intended to be used in production, but it's a good showcase of some of the return types we saw before; another good example with a typical folder structure, that many developers are used to working with, can be found at: https://github.com/kataras/iris/tree/v8/_examples/mvc/overview. By creating components that are independent of one another, developers are able to reuse components quickly and easily in other applications. The same (or similar) view for one application can be refactored for another application with different data because the view is simply handling how the data is being displayed to the user. If you're new to back-end web development, read about the MVC architectural pattern first; a good start is the wikipedia article: https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller. Follow the examples at: https://github.com/kataras/iris/tree/v8/_examples/#mvc In the previous example, we've seen static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter, and custom http errors; now it's time to see wildcard parameters and macros. iris, like the net/http std package, registers a route's handlers by a Handler; the iris type of handler is just a func(ctx iris.Context) where context comes from github.com/kataras/iris/context. Iris has the easiest and the most powerful routing process you have ever met. At the same time, iris has its own interpreter (yes, like a programming language) for the route's path syntax and its dynamic path parameters parsing and evaluation. We call them "macros" for short. How? It calculates its needs and, if no special regexp is needed, it just registers the route with the low-level path syntax; otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that iris provides the basic types and some default "macro funcs", you are able to register your own too! Register a named path parameter function: at the func(argument ...)
you can have any standard type; it will be validated before the server starts, so don't worry about performance here; the only thing it runs at serve time is the returning func(paramValue string) bool. Example Code: A path parameter name should contain only alphabetical letters; symbols, '_' and numbers are NOT allowed. If a route fails to be registered, the app will panic without any warnings if you didn't catch the second return value (error) on .Handle/.Get.... Last, do not confuse ctx.Values() with ctx.Params(). Path parameters' values go to ctx.Params(), and the context's local storage that can be used to communicate between handlers and middleware(s) goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/kataras/iris/tree/v8/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: iris is able to wrap and convert any external, third-party Handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Iris supports 5 template engines out-of-the-box, and developers can still use any external golang template engine, as `context/context#ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/jteeuwen/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/kataras/iris/tree/v8/_examples/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode as they don't need to restart their app on every template edit. Example code: Note: In case you're wondering, the code behind the view engines derives from the "github.com/kataras/iris/view" package; access to the engines' variables can be granted by the "github.com/kataras/iris" package too. Each one of these template engines has different options, located here: https://github.com/kataras/iris/tree/v8/view . This example will show how to store and access data from a session. You don't need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, the user will first have to visit /login to get a valid session cookie, which logs them in. Additionally, they can visit /logout to revoke their access to our secret message. Example code: Running the example: Sessions persistence can be achieved using one (or more) `sessiondb`. Example Code: More examples: In this example we will create a small chat over websockets via the browser. Example Server Code: Example Client (javascript) Code: Running the example: But you should have a basic idea of the framework by now; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the below links: Examples: Middleware: Home Page:
Package iris provides a beautifully expressive and easy to use foundation for your next website, API, or distributed app. Source code and other details for the project are available at GitHub: 11.1.1 The only requirement is the Go Programming Language, at least version 1.8, but 1.11.1 and above is highly recommended. Example code: You can start the server(s) listening to any type of `net.Listener` or even `http.Server` instance. The method for initialization of the server should be passed at the end, via the `Run` function. Below you'll see some useful examples: UNIX and BSD hosts can take advantage of the reuse port feature. Example code: That's all with listening; you have full control when you need it. Let's continue by learning how to catch CONTROL+C/COMMAND+C or the unix kill command and shut down the server gracefully. In order to manually manage what to do when the app is interrupted, we have to disable the default behavior with the option `WithoutInterruptHandler` and register a new interrupt handler (globally, across all possible hosts). Example code: Access to all hosts that serve your application can be provided by the `Application#Hosts` field, after the `Run` method. But the most common scenario is that you may need access to the host before the `Run` method; there are two ways to gain access to the host supervisor, read below. The first way is to use `app.NewHost` to create a new host and use one of its `Serve` or `Listen` functions to start the application via the `iris#Raw` Runner. Note that this way needs an extra import of the `net/http` package. Example Code: The second, and probably easier, way is to use the `host.Configurator`. Note that this method requires an extra import statement of "github.com/kataras/iris/core/host" when using go < 1.9; if you're targeting go1.9 then you can use `iris#Supervisor` and omit the extra host import. All common `Runners` we saw earlier (`iris#Addr, iris#Listener, iris#Server, iris#TLS, iris#AutoTLS`) accept a variadic argument of `host.Configurator`; they are just `func(*host.Supervisor)`. Therefore the `Application` gives you the ability to modify the auto-created host supervisor through these. Example Code: Read more about listening and graceful shutdown by navigating to: All HTTP methods are supported, and developers can also register handlers for the same paths for different methods. The first parameter is the HTTP Method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more iris.Handler, executed in the registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, iris provides functions for all HTTP Methods. The first parameter is the request path of the route, and the second variadic parameter should contain one or more iris.Handler, executed in the registered order when a user requests that specific resource path from the server. Example code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: iris developers are able to register their own handlers for http statuses like 404 not found, 500 internal server error and so on. Example code: With the help of iris's expressionist router you can build any form of API you desire, with safety.
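A sketch of a custom HTTP-status handler plus a parameterized route registered with the expressionist router (iris v11 API shapes; paths and bodies are placeholders):

```
package main

import "github.com/kataras/iris"

func main() {
	app := iris.New()

	// Custom handler for 404 responses.
	app.OnErrorCode(iris.StatusNotFound, func(ctx iris.Context) {
		ctx.WriteString("custom not found page")
	})

	// Dynamic path parameter using a typed macro.
	app.Get("/users/{id:int}", func(ctx iris.Context) {
		id, _ := ctx.Params().GetInt("id")
		ctx.JSON(iris.Map{"id": id})
	})

	app.Run(iris.Addr(":8080"))
}
```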
In the previous example, we've seen static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter, and custom http errors; now it's time to see wildcard parameters and macros. iris, like the net/http std package, registers a route's handlers by a Handler; the iris type of handler is just a func(ctx iris.Context) where context comes from github.com/kataras/iris/context. Iris has the easiest and the most powerful routing process you have ever met. At the same time, iris has its own interpreter (yes, like a programming language) for the route's path syntax and its dynamic path parameters parsing and evaluation. We call them "macros" for short. How? It calculates its needs and, if no special regexp is needed, it just registers the route with the low-level path syntax; otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that iris provides the basic types and some default "macro funcs", you are able to register your own too! Register a named path parameter function: at the func(argument ...) you can have any standard type; it will be validated before the server starts, so don't worry about performance here; the only thing it runs at serve time is the returning func(paramValue string) bool. Example Code: Last, do not confuse ctx.Values() with ctx.Params(). Path parameters' values go to ctx.Params(), and the context's local storage that can be used to communicate between handlers and middleware(s) goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/kataras/iris/tree/master/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: iris is able to wrap and convert any external, third-party Handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Iris supports 5 template engines out-of-the-box, and developers can still use any external golang template engine, as `context/context#ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/shuLhan/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/kataras/iris/tree/master/_examples/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode as they don't need to restart their app on every template edit. Example code: Note: In case you're wondering, the code behind the view engines derives from the "github.com/kataras/iris/view" package; access to the engines' variables can be granted by the "github.com/kataras/iris" package too.
Each one of these template engines has different options, located here: https://github.com/kataras/iris/tree/master/view . This example will show how to store and access data from a session. You don't need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, the user will first have to visit /login to get a valid session cookie, which logs them in. Additionally, they can visit /logout to revoke their access to our secret message. Example code: Running the example: Sessions persistence can be achieved using one (or more) `sessiondb`. Example Code: More examples: In this example we will create a small chat over websockets via the browser. Example Server Code: Example Client (javascript) Code: Running the example: Iris has first-class support for the MVC pattern; you'll not find this stuff anywhere else in the Go world. Example Code:

	// GetUserBy serves
	// Method:   GET
	// Resource: http://localhost:8080/user/{username:string}
	// By is a reserved "keyword" to tell the framework that you're going to
	// bind path parameters in the function's input arguments, and it also
	// helps to have "Get" and "GetBy" in the same controller.
	//
	// func (c *ExampleController) GetUserBy(username string) mvc.Result {
	//     return mvc.View{
	//         Name: "user/username.html",
	//         Data: username,
	//     }
	// }

You can use more than one; the factory will make sure that the correct http methods are registered for each route of this controller. Uncomment these if you want: The Iris web framework supports Request data, Models, Persistence Data and Binding with the fastest possible execution. Characteristics: All HTTP Methods are supported; for example, if you want to serve `GET` then the controller should have a function named `Get()`, and you can define more than one method function to serve in the same Controller. Register a custom controller struct's methods as handlers with custom paths (even with a regex parameterized path) via the `BeforeActivation` custom event callback, per-controller. Example: Persistence data inside your Controller struct (share data between requests) by defining services in the Dependencies or by having a `Singleton` controller scope. Share the dependencies between controllers or register them on a parent MVC Application, with the ability to modify dependencies per-controller on the `BeforeActivation` optional event callback inside a Controller, i.e Access to the `Context` as a controller's field (no manual binding is needed), i.e `Ctx iris.Context`, or via a method's input argument, i.e Models inside your Controller struct (set at the Method function and rendered by the View). You can return models from a controller's method or set a field in the request lifecycle and return that field to another method, in the same request lifecycle. Flow as you're used to: the mvc application has its own `Router`, which is a type of `iris/router.Party`, the standard iris api. `Controllers` can be registered to any `Party`, including Subdomains, and the Party's begin and done handlers work as expected. Optional `BeginRequest(ctx)` function to perform any initialization before the method execution, useful to call middlewares or when many methods use the same collection of data. Optional `EndRequest(ctx)` function to perform any finalization after any method executed. Session dynamic dependency via the manager's `Start` to the MVC Application, i.e Inheritance, recursively.
Access to the dynamic path parameters via the controller's methods' input arguments; no binding is needed. When you use the Iris default syntax to parse handlers from a controller, you need to suffix the methods with the `By` word; an uppercase letter starts a new sub path. Example: Register one or more relative paths and be able to get path parameters, i.e Response via output arguments, optionally, i.e Where `any` means everything, from custom structs to the standard language's types. `Result` is an interface which contains only that function: Dispatch(ctx iris.Context), and Get where HTTP Method function (Post, Put, Delete...). Iris has very powerful and blazing fast MVC support; you can return any value of any type from a method function and it will be sent to the client as expected. * if `string` then it's the body. * if `string` is the second output argument then it's the content type. * if `int` then it's the status code. * if `bool` is false then it throws a 404 not found http error by skipping everything else. * if `error` and not nil then the (any type) response will be omitted and the error's text with a 400 bad request will be rendered instead. * if `(int, error)` and the error is not nil then the response result will be the error's text with the status code as `int`. * if `custom struct` or `interface{}` or `slice` or `map` then it will be rendered as json, unless a `string` content type follows. * if `mvc.Result` then it executes its `Dispatch` function, so good design patterns can be used to split the model's logic where needed. Examples with good patterns to follow, though of course not intended to be used in production, can be found at: https://github.com/kataras/iris/tree/master/_examples/#mvc (a minimal controller sketch is also shown at the end of this section). By creating components that are independent of one another, developers are able to reuse components quickly and easily in other applications. The same (or similar) view for one application can be refactored for another application with different data because the view is simply handling how the data is being displayed to the user. If you're new to back-end web development, read about the MVC architectural pattern first; a good start is the wikipedia article: https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller. But you should have a basic idea of the framework by now; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the below links: Examples: Middleware: Home Page: Book (in-progress):
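A hedged sketch of the controller conventions described above, using the iris v11 mvc package (paths and controller names are placeholders):

```
package main

import (
	"github.com/kataras/iris"
	"github.com/kataras/iris/mvc"
)

// HelloController's exported methods map to HTTP verbs and relative paths.
type HelloController struct{}

// Get handles GET /hello.
func (c *HelloController) Get() string {
	return "Hello from MVC"
}

// GetBy handles GET /hello/{name}; the path parameter binds to the input argument.
func (c *HelloController) GetBy(name string) string {
	return "Hello, " + name
}

func main() {
	app := iris.New()
	mvc.New(app.Party("/hello")).Handle(new(HelloController))
	app.Run(iris.Addr(":8080"))
}
```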
Package runestone is an open-source implementation of the Bitcoin Rune protocol in Go. The runestone project aims to provide a robust, efficient, and easy-to-use library for developers to interact with the Rune protocol. It covers various aspects of the protocol, such as transaction handling, block validation, and network communication. Features: Getting started: To start using the runestone package, add it to your Go project: ``` go get github.com/studyzy/runestone ``` You can then use the various components provided by the package to interact with the Rune protocol, such as creating transactions, validating blocks, and communicating with other nodes on the network. For more information and examples, please refer to the package documentation and the project's GitHub repository. Contributions and feedback are welcome! If you encounter any issues or have suggestions for improvements, please open an issue on the project's GitHub repository or submit a pull request.
Extensible Go library for creating fast, SSR-first frontends while avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. The library tries to simplify this process. Let's go straight into a simple example. Then, we will dig into the details, step by step, of how it works. Kyoto provides simple net/http handlers and function wrappers to handle page rendering and serving. See the functions inside of the nethttp.go file for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto each component is a context receiver, which returns its state. Each component becomes a part of the page or top-level component, which executes the component asynchronously and gets a state future object. In that way your components are executed in a non-blocking way. Pages are just top-level components, where you can configure rendering and page related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from a top-level page/component. Example: Kyoto provides a context, which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions to provide a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple. The client calls an action (sends a request to the server). The action is executed on the server side and the server sends the updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client into the page (JS functions for communication) and register an actions handler for the needed component. Let's start by including the client. Then, let's register an actions handler for the needed component. That's all! Now we're ready to use actions to provide a dynamic UI. Example: In this example you can see the modifications made to the quick start example. First, we've added a state and name into our components' markup. In this way we are saving our components' state between actions and finding a component root. Unfortunately, we have to manually provide a component name for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the function Action provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside of our component. This handler will be executed when a client calls an action with a corresponding name. It's highly recommended to keep components' state as small as possible; it will be transmitted on each action call. Kyoto has multiple ways to trigger actions. Now we will check them one by one. This is the simplest way to trigger an action. It's just a function call with a referer (usually 'this', e.g. a button) as the first argument (used to determine the root), the action name as the second argument, and arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of a parent component, use the $ prefix in the action name.
If you want to call an action of a component by id, use <id:action> as the action name. This is a specific action which is triggered when a form is submitted. It is usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. This is a special HTML attribute which will trigger an action on page load. This may be useful for lazy loading of components. With these special HTML attributes you can trigger an action on an interval. This is useful for components that must be updated over time (e.g. charts, stats, etc.). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. This attribute allows you to trigger an action when an element becomes visible on the screen. It may be useful for lazy loading. Kyoto provides a way to control action flow. For now, it's possible to control the display style on an action call and to push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes on an action call. You can use this HTML attribute to control display during an action call. At the end of an action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call. Just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. There are at least two rendering options after an action call: morph (default) and replace. Morph will try to morph the received markup into the current one with the morphdom library. In case of an error, or in explicit "replace" mode, the markup will be replaced with x.outerHTML = '...'.
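To make the flattened examples above more tangible, here is a rough sketch of a component, a page, and handler registration. It is reconstructed from memory of the library's API (kyoto.Use, kyoto.Template, kyoto.HandlePage, kyoto.Serve and the kyoto.ComponentF future type), so names and signatures may differ between versions; treat it as an illustrative sketch, not authoritative usage.

```go
package main

import "github.com/kyoto-framework/kyoto"

// ComponentUUIDState is the state returned by the ComponentUUID component.
type ComponentUUIDState struct {
	UUID string
}

// ComponentUUID is a context receiver: it does its (possibly slow) work
// and returns its state. Parents execute it asynchronously.
func ComponentUUID(ctx *kyoto.Context) (state ComponentUUIDState) {
	state.UUID = "00000000-0000-0000-0000-000000000000" // e.g. fetched from an API
	return
}

// PageIndexState holds futures for the page's child components.
type PageIndexState struct {
	UUID *kyoto.ComponentF[ComponentUUIDState]
}

// PageIndex is a top-level component: it configures the template and
// attaches child components, receiving state futures for each of them.
func PageIndex(ctx *kyoto.Context) (state PageIndexState) {
	kyoto.Template(ctx, "page.index.html")
	state.UUID = kyoto.Use(ctx, ComponentUUID)
	return
}

func main() {
	kyoto.HandlePage("/", PageIndex) // net/http wrappers from nethttp.go
	kyoto.Serve(":8080")
}
```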
Package oteleventually provides extension components for eventually library to enable OpenTelemetry instrumentation.
Package excelize provides a set of functions that allow you to write to and read from XLSX / XLSM / XLTM files. It supports reading and writing spreadsheet documents generated by Microsoft Excel™ 2007 and later, supports complex components with high compatibility, and provides a streaming API for generating or reading data from a worksheet with huge amounts of data. This library needs Go version 1.10 or later. See https://xuri.me/excelize for more information about this package.
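For instance, a minimal program that creates a workbook, writes a couple of cells and saves it looks roughly like this (the import path shown is for the current v2 module; older releases used a different repository path):

```go
package main

import (
	"fmt"

	"github.com/xuri/excelize/v2"
)

func main() {
	f := excelize.NewFile()
	// Write a string and a number into the default worksheet.
	f.SetCellValue("Sheet1", "A1", "Hello")
	f.SetCellValue("Sheet1", "B1", 42)
	if err := f.SaveAs("Book1.xlsx"); err != nil {
		fmt.Println(err)
	}
}
```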
Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called an Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts, are also available. Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values. enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/jhillyerd/enmime/v2
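A short example of the Envelope layer, reading a message from disk and pulling out common fields (for enmime v2 the import path gains a /v2 suffix):

```go
package main

import (
	"fmt"
	"os"

	"github.com/jhillyerd/enmime"
)

func main() {
	f, err := os.Open("message.eml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// ReadEnvelope builds the Part tree behind the scenes and sorts it
	// into the Envelope's fields.
	env, err := enmime.ReadEnvelope(f)
	if err != nil {
		panic(err) // hard error: parsing was not possible at all
	}

	fmt.Println("Subject:", env.GetHeader("Subject")) // RFC 2047 decoded
	fmt.Println("Text:", env.Text)                    // plain text body (possibly down-converted)
	fmt.Println("Attachments:", len(env.Attachments))

	// Non-fatal problems are accumulated instead of aborting the parse.
	for _, e := range env.Errors {
		fmt.Println("parse issue:", e)
	}
}
```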
Package bongo is an elegant static website generator. It is designed to be simple, minimal, and easy to use. Bongo comes in two flavors: the command-line application and the library. The command-line application can be found in the cmd/bongo directory and you can install it via go get like this. Or just download the latest binary here https://github.com/gernest/bongo/releases/latest To build your project foo. * You can specify the path to foo * You can run at the root of foo To serve your project locally. This will run a local server at http://localhost:8000. The project will be rebuilt if any markdown file changes. * You can specify the path to foo * You can run at the root of foo The generated website will be in the directory _site at the root of your foo project. There is no restriction on how you arrange your project. If you have a project foo, it will be somewhere in a directory named foo. You can see the example in the testdata/sample directory. Bongo only processes markdown files found in your project root. Supported file extensions for the markdown files are This means you can put your markdown files in any nested directories inside your project and bongo will process them without any problem. Bongo supports GitHub Flavored Markdown. Optionally, you can add a sitewide configuration file `_bongo.yml` at the root of your project. The configuration is in YAML format, and there are a few settings you can change. There are loose restrictions on how to create your own theme. What matters is that you have the following templates. These templates can be used in a project by setting the view value of the frontmatter. For instance, if I set view to post, then post.html will be used on that particular file. IMPORTANT: All static contents should be placed in a directory named static at the root of the theme. They will be copied to the output directory unchanged. All custom themes should live under the _theme directory at the project root. Please see testdata/sample/_themes for an example. Bongo supports frontmatter, and it is recommended that every post (your markdown file) has a frontmatter. For convenience, only YAML frontmatter is supported by default, and you can add it at the beginning of your file like this. Important frontmatter settings: Bongo is modular, and uses interfaces to define its components. The most important interface is the Generator interface. So, you can implement your own Generator interface and pass it to the bongo library to have your own static website generator with your own rules. I challenge you to try implementing different Generators, or implement different components of the generator interface. I have default implementations shipped with bongo.
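As an illustration of the frontmatter described above, a post might start like the snippet below. The `view` key selects the theme template to render the file with; the `title` key is an assumed, typical field and should be checked against bongo's documentation for the actually supported settings:

```yaml
---
title: Hello World   # assumed field, common in static site generators
view: post           # renders this file with post.html from the active theme
---
```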
Package lunk provides a set of tools for structured logging in the style of Google's Dapper or Twitter's Zipkin. When we consider a complex event in a distributed system, we're actually considering a partially-ordered tree of events from various services, libraries, and modules. Consider a user-initiated web request. Their browser sends an HTTP request to an edge server, which extracts the credentials (e.g., OAuth token) and authenticates the request by communicating with an internal authentication service, which returns a signed set of internal credentials (e.g., signed user ID). The edge web server then proxies the request to a cluster of web servers, each running a PHP application. The PHP application loads some data from several databases, places the user in a number of treatment groups for running A/B experiments, writes some data to a Dynamo-style distributed database, and returns an HTML response. The edge server receives this response and proxies it to the user's browser. In this scenario we have a number of infrastructure-specific events: This scenario also involves a number of events which have little to do with the infrastructure, but are still critical information for the business the system supports: There are a number of different teams all trying to monitor and improve aspects of this system. Operational staff need to know if a particular host or service is experiencing a latency spike or drop in throughput. Development staff need to know if their application's response times have gone down as a result of a recent deploy. Customer support staff need to know if the system is operating nominally as a whole, and for customers in particular. Product designers and managers need to know the effect of an A/B test on user behavior. But the fact that these teams will be consuming the data in different ways for different purposes does not mean that they need to be working with different systems. In order to instrument the various components of the system, we need a common data model. We adopt Dapper's notion of a tree to mean a partially-ordered tree of events from a distributed system. A tree in Lunk is identified by its root ID, which is the unique ID of its root event. All events in a common tree share a root ID. In our photo example, we would assign a unique root ID as soon as the edge server received the request. Events inside a tree are causally ordered: each event has a unique ID, and an optional parent ID. By passing the IDs across systems, we establish causal ordering between events. In our photo example, the two database queries from the app would share the same parent ID--the ID of the event corresponding to the app handling the request which caused those queries. Each event has a schema of properties, which allow us to record specific pieces of information about each event. For HTTP requests, we can record the method, the request URI, the elapsed time to handle the request, etc. Lunk is agnostic in terms of aggregation technologies, but two use cases seem clear: real-time process monitoring and offline causational analysis. For real-time process monitoring, events can be streamed to an aggregation service like Riemann (http://riemann.io) or Storm (http://storm.incubator.apache.org), which can calculate process statistics (e.g., the 95th percentile latency for the edge server responses) in real-time. 
This allows for adaptive monitoring of all services, with the option of including example root IDs in the alerts (e.g., 95th percentile latency is over 300ms, mostly as a result of requests like those in tree XXXXX). For offline causational analysis, events can be written in batches to batch processing systems like Hadoop or OLAP databases like Vertica. These aggregates can be queried to answer questions traditionally reserved for A/B testing systems. "Did users who were shown the new navbar view more photos?" "Did the new image optimization algorithm we enabled for 1% of views run faster? Did it produce smaller images? Did it have any effect on user engagement?" "Did any services have increased exception rates after any recent deploys?" Etc. By capturing the root ID of a particular web request, we can assemble a partially-ordered tree of events which were involved in the handling of that request. All events with a common root ID are in a common tree, which allows for O(M) retrieval for a tree of M events. To send a request with a root ID and a parent ID, use the Event-ID HTTP header: The header value is simply the root ID and event ID, hex-encoded and separated with a slash. If the event has a parent ID, that may be included as an optional third parameter. A server that receives a request with this header can use this to properly parent its own events. Each event has a set of named properties, the keys and values of which are strings. This allows aggregation layers to take advantage of simplifying assumptions and either store events in normalized form (with event data separate from property data) or in denormalized form (essentially pre-materializing an outer join of the normalized relations). Durations are always recorded as fractional milliseconds. Lunk currently provides two formats for log entries: text and JSON. Text-based logs encode each entry as a single line of text, using key="value" formatting for all properties. Event property keys are scoped to avoid collisions. JSON logs encode each entry as a single JSON object.
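For example, a client propagating its current event to a downstream service only needs to set that header on the outgoing request; the IDs below are placeholder hex values:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "https://photos.internal.example.com/resize", nil)
	if err != nil {
		panic(err)
	}
	// Root ID and event ID, hex-encoded and separated with a slash.
	// A parent ID may be appended as an optional third segment.
	req.Header.Set("Event-ID", "0f3a9c5b2e7d4181/6c1d8e2f9ab04735")
	fmt.Println(req.Header.Get("Event-ID"))
}
```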
Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients. The middleware consists of three components. A Policy can be implemented in two ways; as a first-class function for a stateless Policy, or as a method on a type for a stateful Policy. Note that HTTP requests made via the same pipeline share the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to avoid race conditions. A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers, and/or query parameters, inject a failure, etc. Once the Policy has successfully completed its request work, it must call the Next() method on the *policy.Request instance in order to pass the request to the next Policy in the chain. When an HTTP response comes back, the Policy then gets a chance to process the response/error. The Policy instance can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response body, etc. Once the Policy has successfully completed its response work, it must return the *http.Response and error instances to its caller. Template for implementing a stateless Policy: Template for implementing a stateful Policy: The Transporter interface is responsible for sending the HTTP request and returning the corresponding HTTP response or error. The Transporter is invoked by the last Policy in the chain. The default Transporter implementation uses a shared http.Client from the standard library. The same stateful/stateless rules for Policy implementations apply to Transporter implementations. To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function. The specified Policy instances form a chain and are invoked in the order provided to NewPipeline followed by the Transporter. Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method. The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter instances. The response/error is then sent through the same chain of Policy instances in reverse order. For example, assuming there are Policy types PolicyA, PolicyB, and PolicyC along with TransportA. The flow of Request and Response looks like the following: The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request. It also contains some internal state and provides various convenience methods. You create a Request instance by calling the runtime.NewRequest function: If the Request should contain a body, call the SetBody method. A seekable stream is required so that upon retry, the retry Policy instance can seek the stream back to the beginning before retrying the network request and re-uploading the body. Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted. This requirement conflicts with the SDK's default marshalling that specifies "omitempty" as a means to resolve the ambiguity between a field to be excluded and its zero-value. In the above example, Name and Count are defined as pointer-to-type to disambiguate between a missing value (nil) and a zero-value (0) which might have semantic differences. In a PATCH operation, any fields left as nil are to have their values preserved. When updating a Widget's count, one simply specifies the new value for Count, leaving Name nil. 
To fulfill the requirement for sending a JSON null, the NullValue() function can be used. This sends an explicit "null" for Count, indicating that any current value for Count should be deleted. When the HTTP response is received, the *http.Response is returned directly. Each Policy instance can inspect/mutate the *http.Response. To enable logging, set environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program. By default the logger writes to stderr. This can be customized by calling log.SetListener, providing a callback that writes to the desired location. Any custom logging implementation MUST provide its own synchronization to handle concurrent invocations. See the docs for the log package for further details. Pageable operations return potentially large data sets spread over multiple GET requests. The result of each GET is a "page" of data consisting of a slice of items. Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T]. The call to WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages and determining if there are more pages to fetch. No IO calls are made until the NextPage() method is invoked. Long-running operations (LROs) are operations consisting of an initial request to start the operation followed by polling to determine when the operation has reached a terminal state. An LRO's terminal state is one of the following values. LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T]. When a call to WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started. It does _not_ mean that the widget has been created or updated (or failed to be created/updated). The *runtime.Poller[T] provides APIs for determining the state of the LRO. To wait for the LRO to complete, call the PollUntilDone() method. The call to PollUntilDone() will block the current goroutine until the LRO has reached a terminal state or the context is canceled/timed out. Note that LROs can take anywhere from several seconds to several minutes. The duration is operation-dependent. Due to this variant behavior, pollers do _not_ have a preconfigured time-out. Use a context with the appropriate cancellation mechanism as required. Pollers provide the ability to serialize their state into a "resume token" which can be used by another process to recreate the poller. This is achieved via the runtime.Poller[T].ResumeToken() method. Note that a token can only be obtained for a poller that's in a non-terminal state. Also note that any subsequent calls to poller.Poll() might change the poller's state. In this case, a new token should be created. After the token has been obtained, it can be used to recreate an instance of the originating poller. When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken. Resume tokens are unique per service client and operation. Attempting to resume a poller for LRO BeginB() with a token from LRO BeginA() will result in an error. The fake package contains types used for constructing in-memory fake servers used in unit tests. This allows writing tests to cover various success/error conditions without the need for connecting to a live service. Please see https://github.com/gracewilcox/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
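The stateless-Policy template mentioned above boils down to adapting a plain function to the policy.Policy interface. Below is a sketch of a logging policy; the adapter type is local to your own code, while policy.Request and its Raw and Next methods come from azcore. Treat the wiring details (e.g. adding it via PipelineOptions.PerCall when calling runtime.NewPipeline) as an assumption to verify against the current azcore docs.

```go
package example

import (
	"log"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
)

// policyFunc adapts a first-class function to the policy.Policy interface.
type policyFunc func(*policy.Request) (*http.Response, error)

// Do implements policy.Policy on policyFunc.
func (pf policyFunc) Do(req *policy.Request) (*http.Response, error) { return pf(req) }

// NewLogPolicy returns a stateless Policy that logs each request and response.
func NewLogPolicy() policy.Policy {
	return policyFunc(func(req *policy.Request) (*http.Response, error) {
		log.Printf("--> %s %s", req.Raw().Method, req.Raw().URL)
		resp, err := req.Next() // hand the request to the next Policy (or the Transporter)
		if err == nil {
			log.Printf("<-- %d", resp.StatusCode)
		}
		return resp, err
	})
}
```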
Package lumberjack provides a rolling logger. Note that this is v2.0 of lumberjack, and should be imported using gopkg.in thusly: The package name remains simply lumberjack, and the code resides at https://github.com/natefinch/lumberjack under the v2.0 branch. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
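For example, routing the standard logger through a rotating file:

```go
package main

import (
	"log"

	"gopkg.in/natefinch/lumberjack.v2"
)

func main() {
	log.SetOutput(&lumberjack.Logger{
		Filename:   "/var/log/myapp/foo.log",
		MaxSize:    100, // megabytes before a file is rotated
		MaxBackups: 3,   // number of old files to keep
		MaxAge:     28,  // days to retain old files
		Compress:   true,
	})
	log.Println("application started")
}
```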
Package excelize provides a set of functions that allow you to write to and read from XLAM / XLSM / XLSX / XLTM / XLTX files. It supports reading and writing spreadsheet documents generated by Microsoft Excel™ 2007 and later, supports complex components with high compatibility, and provides a streaming API for generating or reading data from a worksheet with huge amounts of data. This library needs Go version 1.18 or later. See https://xuri.me/excelize for more information about this package.
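The streaming API mentioned above keeps memory usage flat when writing very large worksheets. A sketch using the stream writer:

```go
package main

import (
	"fmt"
	"math/rand"

	"github.com/xuri/excelize/v2"
)

func main() {
	f := excelize.NewFile()
	sw, err := f.NewStreamWriter("Sheet1")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Stream 100,000 rows without materializing the whole sheet in memory.
	for rowID := 1; rowID <= 100000; rowID++ {
		row := make([]interface{}, 10)
		for colID := 0; colID < 10; colID++ {
			row[colID] = rand.Intn(640000)
		}
		cell, _ := excelize.CoordinatesToCellName(1, rowID)
		if err := sw.SetRow(cell, row); err != nil {
			fmt.Println(err)
			return
		}
	}
	if err := sw.Flush(); err != nil { // finalize the streamed worksheet
		fmt.Println(err)
		return
	}
	if err := f.SaveAs("Book1.xlsx"); err != nil {
		fmt.Println(err)
	}
}
```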
Package kyoto was made for creating fast, server-side frontends while avoiding vanilla templating downsides. It tries to address complexities in the frontend domain like responsibility separation, component structure, asynchronous loading and hassle-free dynamic layout updates. These issues are common for frontends written with Go. The library provides you with primitives for page and component creation, state and rendering management, dynamic layout updates (with external package integration), utility functions, and asynchronous components out of the box. Still, it ships with minimal dependencies and tries to utilize built-ins as much as possible. You would probably want to opt out of this library in a few cases: if you're not ready for drastic API changes between major versions, if you want to develop a SPA/PWA and/or complex client-side logic, or if you're just feeling OK with your current setup. Please don't compare kyoto with popular JS libraries like React, Vue or Svelte. I know you will have such a desire, but most likely you will be wrong. The use cases and underlying principles are just too different. If you want to get an idea of what a typical static component would look like, here's some sample code. It's very ascetic and simplistic, as we don't want to overload you with implementation details. Markup is also not included here (it's just the well-known `html/template`). For details, please check the project's website at https://kyoto.codes. Also, you may check the library index to explore available sub-packages and https://pkg.go.dev for Go'ish documentation style. We don't want you to deal with boilerplate code on your own, so you can proceed with our simple starter project. Feel free to use it as an example for your own setup. Components are a common approach for modern libraries to manage frontend parts. Kyoto's components try to be mostly independent (but configurable) parts of the project. To create a component, it is enough to implement component.Component. It's a function, a context receiver, which returns a component state. State is an implementation of component.State, which is easy to implement by nesting one of the state implementations (options will be described later). Each component becomes a part of the page or of a top-level component, which executes the component function asynchronously and gets a state future object. In that way your components are executed in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Stateful components are pretty similar to stateless ones, but they actually implement the marshal/unmarshal interface instead of mocking it. You have multiple state options to choose from: universal or server. Universal state is a state that can be marshalled and unmarshalled both on the server and on the client. It's a common state option without functionality limitations. On the other hand, the whole state must be sent and received, which puts some limitations on the state size. Server state can be marshalled and unmarshalled only on the server. It's a good option for components that are not supposed to be updated on the client side (e.g. no inputs). Also, it's a good option for components with lots of state data. Sometimes you may want to pass some arguments to the component. It's easy to do by wrapping the component with an additional function. You have access to the context inside the component. It includes request and response objects, as well as some other useful stuff like the store. 
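Since the sample code referenced above is not reproduced here, the sketch below shows roughly what such a static component might look like. The import paths and the exact name of the nestable universal-state implementation are assumptions based on the description above, so verify them against the project's website before use:

```go
package main

import (
	"github.com/kyoto-framework/kyoto/v2/component" // assumed module path
	"github.com/kyoto-framework/kyoto/v2/state"     // assumed module path
)

// UUIDState is the component state. Embedding a universal state
// implementation provides the marshal/unmarshal behavior described above.
type UUIDState struct {
	state.Universal
	UUID string
}

// UUID is the component: a context receiver returning its state.
// A parent page executes it asynchronously and receives a state future.
func UUID(ctx *component.Context) component.State {
	s := &UUIDState{}
	s.UUID = "00000000-0000-0000-0000-000000000000" // e.g. fetched from an API
	return s
}
```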
This library doesn't provide routing out of the box. You can use any router you want; the built-in one is not a bad option for basic needs. Rendering might be tricky, but we're trying to make it as simple as possible. By default, we're using `html/template` as the rendering engine. It's a well-known built-in package, so you don't have to learn anything new. Out of the box we're parsing all templates in the root directory with the `*.html` glob. You can change this behavior with the `TEMPLATE_GLOB` global variable. Don't rely on file names while working with template names; use a `define` entry for each of your components. To provide your components with the ability to be rendered, you have to do a few basic steps. First, you have to nest one of the rendering implementations into your component state (e.g. `rendering.Template`). You can customize rendering by providing values to the rendering implementation. If you need to modify these values for the entire project, we recommend looking at the global settings or creating a builder function for the rendering object. By default, the render handler will use the component name as the template name. So, you have to define a template with the same name as your component (not the filename, but the "define" entry). That's enough to be rendered by `rendering.Handler`. For rendering a nested component, use the built-in `template` function and provide a resolved future object as the template argument. Nested components are not obligated to have a rendering implementation if you're using them in this way. As an alternative, you can nest a rendering implementation (e.g. `rendering.Template`) into your nested component. In this way you can use the `render` function to simplify your code. Please don't use this approach heavily for now, as it affects rendering performance. HTMX is a frontend library that allows you to update your page layout dynamically. It perfectly fits into kyoto, which focuses on components and server-side rendering. Thanks to the component structure, there is no need to define separate rendering logic specially for HTMX. Please check https://htmx.org/docs/#installing for installation instructions. In addition to this, you must register HTMX handlers for your dynamic components. This is a basic example of HTMX usage. Please check https://htmx.org/docs/ for more details. In this example we're defining a form component that updates itself on submit. And this is how you can define a component that will handle this request. Sometimes it might be useful to have a component state which will persist between requests and will be stored without any actual usage in the client-side presentation. This function injects a hidden input field with a serialized state. Let's check how it works on the server side. As a result, we have a component with a persistent state between requests.
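Because the render handler looks templates up by component name, it is the `define` entry, not the file name, that matters. A minimal template for the illustrative UUID component sketched above could look like this:

```html
{{ define "UUID" }}
  <div>UUID: {{ .UUID }}</div>
{{ end }}
```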
Extensible Go library for creating fast, SSR-first frontends while avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. This library tries to simplify the overall setup and process. Let's go straight into a simple example; then we will dig into the details of how it works, step by step. Kyoto provides a set of simple net/http handlers, handler builders and function wrappers to provide serving, page rendering, component actions, etc. Anyway, this is not an all-encompassing solution for every case. If you ever need to wrap or extend existing functionality, the library encourages this. See the functions inside nethttp.go for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto, each component is a context receiver which returns its state. Each component becomes a part of the page or of a top-level component, which executes the component asynchronously and gets a state future object. In that way your components are executed in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from the top-level page/component. Example: Kyoto provides a context, which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions to make the template building process comfortable. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple: the client calls an action (sends a request to the server), the action executes on the server side, and the server sends updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client into the page (JS functions for communication) and register an actions handler for the needed component. Let's start with including the client. Then, let's register an actions handler for the needed component. That's all! Now we're ready to use actions to provide a dynamic UI. Example: In this example you can see the modifications made to the quick start example. First, we've added a state and a name into our component's markup. In this way we save the component's state between actions and can find the component root. Unfortunately, we have to provide a component name manually for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the Action function provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside our component. This handler will be executed when a client calls an action with the corresponding name. It's highly recommended to keep components' state as small as possible, because it is transmitted on each action call. Kyoto has multiple ways to trigger actions. Now we will check them one by one. This is the simplest way to trigger an action: just a function call with a referer (usually 'this', e.g. a 
button) as the first argument (used to determine the root), the action name as the second argument, and action arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of a parent component, use the $ prefix in the action name. If you want to call an action of a component by id, use <id:action> as the action name. This is a specific action which is triggered when a form is submitted. It is usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. This is a special HTML attribute which will trigger an action on page load. This may be useful for lazy loading of components. With these special HTML attributes you can trigger an action on an interval. This is useful for components that must be updated over time (e.g. charts, stats, etc.). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. This attribute allows you to trigger an action when an element becomes visible on the screen. It may be useful for lazy loading. Kyoto provides a way to control action flow. For now, it's possible to control the display style on an action call and to push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes on an action call. You can use this HTML attribute to control display during an action call. At the end of an action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call. Just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. There are at least two rendering options after an action call: morph (default) and replace. Morph will try to morph the received markup into the current one with the morphdom library. In case of an error, or in explicit "replace" mode, the markup will be replaced with x.outerHTML = '...'.
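To sketch how an action handler and its trigger fit together: on the server side, the component registers a handler for the action name inside its context receiver, and in the markup the client's Action function (or the ssa:* attributes) triggers it. The kyoto.Action helper shown here is recalled from the library's documentation and may differ between versions, so treat this as an illustrative assumption rather than the definitive API:

```go
package example

import "github.com/kyoto-framework/kyoto"

// ComponentUUIDState is persisted between action calls, so keep it small.
type ComponentUUIDState struct {
	UUID string
}

func ComponentUUID(ctx *kyoto.Context) (state ComponentUUIDState) {
	// Handle the "Reload" action triggered from the client, e.g. via
	// <button onclick="Action(this, 'Reload')">Reload</button>.
	handled := kyoto.Action(ctx, "Reload", func(args ...any) {
		state.UUID = newUUID()
	})
	if handled {
		return // action call: only the updated state/markup is sent back
	}
	// Regular page render.
	state.UUID = newUUID()
	return
}

// newUUID is a stand-in for whatever produces the value (API call, etc.).
func newUUID() string { return "00000000-0000-0000-0000-000000000000" }
```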