Package lumberjack provides a rolling logger. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
Package telemetry holds observability facades for our services and libraries. The provided interface here allows for instrumenting libraries and packages without any dependencies on Logging and Metric instrumentation implementations. This allows a consistent way of authoring Log lines and Metrics for the producers of these libraries and packages while providing consumers the ability to plug in the implementations of their choice. The following requirements helped shape the form of the interfaces.

The interfaces must be simple to use, or developers will resort to using `fmt.Printf()` instead. Only three log levels are provided. Error: something happened that we can't gracefully recover from. This is a log line that should be actionable by an operator and be alerted on. Info: something happened that might be of interest but does not impact the application stability. E.g. someone gave the wrong credentials and was therefore denied access, parsing error on external input, etc. Debug: anything that can help to understand application state during development. More levels get tricky to reason about when writing log lines or establishing the right level of verbosity at runtime. By the above explanations fatal folds into error, warning folds into info, and trace folds into debug. We trust more in partitioning loggers per domain, component, etc. and allowing them to be individually set to the required log levels than in controlling a single logger with more levels.

We also believe that most logs should be metrics. Anything above Debug level should be able to emit a metric which can be used for dashboards, alerting, etc. We want the ability to roll up / aggregate over the same message while allowing for contextual data to be added. A logging implementation can choose how to present the provided log data. This can be 100% structured, a single log line, or a combination. Allow the Go Context object to be passed and have a registry for values of interest we want to pull from context. A good example of an item we want to automatically include in log lines is the `x-request-id` so we can tie log lines produced in the request path together.

This allows us to control per component which levels of log lines we want to output at runtime. The interface design allows for this to be implemented without having an opinion on it. By providing at each library or component entry point the ability to provide a Logger implementation, this can be easily achieved. Look at that lovely very empty go.mod and non-existent go.sum file.
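Since the actual interface definitions are not shown here, the following is a hypothetical sketch of the facade shape implied by the description above (three levels, contextual key-value pairs, context binding); the real package's method names and signatures may differ:

	package telemetry

	import "context"

	// Logger is an illustrative leveled, structured logging facade.
	type Logger interface {
		// Debug: development-time state; never alerted on.
		Debug(msg string, keyValuePairs ...interface{})
		// Info: noteworthy but non-impacting events; may also emit a metric.
		Info(msg string, keyValuePairs ...interface{})
		// Error: actionable, alertable failures; may also emit a metric.
		Error(msg string, err error, keyValuePairs ...interface{})
		// With returns a copy of the Logger carrying extra contextual
		// key-value pairs, enabling rollup over the same message.
		With(keyValuePairs ...interface{}) Logger
		// Context binds the Go context so registered values of interest
		// (e.g. x-request-id) are automatically included in each line.
		Context(ctx context.Context) Logger
	}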
Package ho provides automated hyperparameter optimization using Bayesian optimization with Gaussian Processes. It offers efficient, thread-safe optimization capabilities for tuning system parameters with minimal manual intervention.

The package includes the following key features:

To install the package, use:

The library provides four acquisition functions for different optimization strategies:

1. Upper Confidence Bound (UCB): Balances exploration and exploitation. Controlled by the Beta parameter (higher = more exploration). Default choice, works well in most cases.

	config := DefaultConfig()    // Uses UCB by default
	config.AcqParams.Beta = 2.0  // Adjust exploration-exploitation trade-off

2. Probability of Improvement (PI): Conservative exploration strategy. Focuses on small, reliable improvements. Good for noise-sensitive applications.

	config := DefaultConfig()
	config.AcquisitionFunc = ProbabilityOfImprovement
	config.AcqParams.Xi = 0.01  // Minimum improvement threshold

3. Expected Improvement (EI): Balances improvement probability and magnitude. Most commonly used in practice. Good for general optimization tasks.

	config := DefaultConfig()
	config.AcquisitionFunc = ExpectedImprovement
	config.AcqParams.Xi = 0.01  // Minimum improvement threshold

4. Thompson Sampling: Simple but effective random sampling approach. Great for parallel optimization. No parameter tuning required.

	config := DefaultConfig()
	config.AcquisitionFunc = ThompsonSampling
	config.AcqParams.RandomState = rand.New(rand.NewSource(time.Now().UnixNano()))

The OptimizationConfig struct allows customization of the optimization process:

Recommended settings:

All components are designed to be thread-safe:

To contribute to the project:
Package iris provides a beautifully expressive and easy to use foundation for your next website, API, or distributed app. Source code and other details for the project are available at GitHub. Current version: 8.5.9 Final. The only requirement is the Go Programming Language, at least version 1.8, but 1.9 is highly recommended. Iris takes advantage of the vendor directory feature wisely: https://docs.google.com/document/d/1Bz5-UB7g2uPBdOx-rw5t9MxJwkfpx90cqG9AFL0JAYo. You get truly reproducible builds, as this method guards against upstream renames and deletes. A simple copy-paste and `go get ./...` to resolve the two dependencies, https://github.com/kataras/golog and https://github.com/iris-contrib/httpexpect, will work forever, even for older versions; the newest version can be retrieved by `go get`, but this file contains documentation for an older version of Iris. Follow the instructions below:

1. install the Go Programming Language: https://golang.org/dl
2. clear your previous `$GOPATH/src/github.com/kataras/iris` folder or create a new one
3. download Iris v8.5.9 (final): https://github.com/kataras/iris/archive/v8.zip
4. extract the contents of the `iris-v8` folder that's inside the downloaded zip file to your `$GOPATH/src/github.com/kataras/iris`
5. navigate to your `$GOPATH/src/github.com/kataras/iris` folder if you're not already there, open a terminal/command prompt and execute the command `go get ./...`, and you're ready to GO :)

Example code: You can start the server(s) listening to any type of `net.Listener` or even `http.Server` instance. The method for initialization of the server should be passed at the end, via the `Run` function. Below you'll see some useful examples: UNIX and BSD hosts can take advantage of the reuse port feature. Example code: That's all about listening; you have the full control when you need it. Let's continue by learning how to catch CONTROL+C/COMMAND+C or the unix kill command and shut down the server gracefully. In order to manually manage what to do when the app is interrupted, we have to disable the default behavior with the option `WithoutInterruptHandler` and register a new interrupt handler (globally, across all possible hosts). Example code: Access to all hosts that serve your application can be provided by the `Application#Hosts` field, after the `Run` method. But the most common scenario is that you may need access to the host before the `Run` method; there are two ways to gain access to the host supervisor, read below. The first way is to use `app.NewHost` to create a new host and use one of its `Serve` or `Listen` functions to start the application via the `iris#Raw` Runner. Note that this way needs an extra import of the `net/http` package. Example Code: The second, and probably easier, way is to use the `host.Configurator`. Note that this method requires an extra import statement of "github.com/kataras/iris/core/host" when using go < 1.9; if you're targeting go1.9 then you can use the `iris#Supervisor` and omit the extra host import. All common `Runners` we saw earlier (`iris#Addr, iris#Listener, iris#Server, iris#TLS, iris#AutoTLS`) accept a variadic argument of `host.Configurator`; these are just `func(*host.Supervisor)`. Therefore the `Application` gives you the ability to modify the auto-created host supervisor through these. Example Code: Read more about listening and graceful shutdown by navigating to: All HTTP methods are supported; developers can also register handlers for the same paths for different methods.
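Before moving on to the routing details, here is a minimal, illustrative sketch of the pieces described above: creating the application, registering a handler with `Handle`, and starting the server with the `Addr` Runner (the route and port are made up for the example):

	package main

	import "github.com/kataras/iris"

	func main() {
		app := iris.New()

		// Register a handler for GET /; the handler receives the iris.Context.
		app.Handle("GET", "/", func(ctx iris.Context) {
			ctx.HTML("<h1>Hello from Iris</h1>")
		})

		// Start the server using the Addr Runner; other Runners
		// (Listener, Server, TLS, AutoTLS) can be passed to Run instead.
		app.Run(iris.Addr(":8080"))
	}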
The first parameter is the HTTP Method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more iris.Handler, executed in the registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, iris provides functions for all HTTP Methods. The first parameter is the request path of the route, the second variadic parameter should contain one or more iris.Handler, executed in the registered order when a user requests that specific resource path from the server. Example code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: iris developers are able to register their own handlers for http statuses like 404 not found, 500 internal server error and so on. Example code: With the help of iris's expressionist router you can build any form of API you desire, with safety. Example code: Iris has first-class support for the MVC pattern; you won't find this anywhere else in the Go world. Example Code: The Iris web framework supports Request data, Models, Persistence Data and Binding with the fastest possible execution. Characteristics: All HTTP Methods are supported; for example, if you want to serve `GET` then the controller should have a function named `Get()`, and you can define more than one method function to serve in the same Controller struct. Persistence data inside your Controller struct (share data between requests) via the `iris:"persistence"` tag right on the field, or Bind using `app.Controller("/", new(myController), theBindValue)`. Models inside your Controller struct (set at the Method function and rendered by the View) via the `iris:"model"` tag right on the field, i.e. User UserModel `iris:"model" name:"user"`; the view will recognise it as `{{.user}}`. If the `name` tag is missing then it takes the field's name, in this case "User". Access to the request path and its parameters via the `Path and Params` fields. Access to the template file that should be rendered via the `Tmpl` field. Access to the template data that should be rendered inside the template file via the `Data` field. Access to the template layout via the `Layout` field. Access to the low-level `iris.Context` via the `Ctx` field. Get the relative request path by using the controller's name via `RelPath()`. Get the relative template path directory by using the controller's name via `RelTmpl()`. Flow as you are used to: `Controllers` can be registered to any `Party`, including Subdomains; the Party's begin and done handlers work as expected. Optional `BeginRequest(ctx)` function to perform any initialization before the method execution, useful to call middlewares or when many methods use the same collection of data. Optional `EndRequest(ctx)` function to perform any finalization after any method executed. Inheritance, recursively; see for example our `mvc.SessionController/iris.SessionController`, which has the `mvc.Controller/iris.Controller` as an embedded field and adds its logic to its `BeginRequest`. Source file: https://github.com/kataras/iris/blob/v8/mvc/session_controller.go. Read access to the current route via the `Route` field. Support for more than one input argument (mapped to dynamic request path parameters).
Register one or more relative paths and be able to get path parameters, i.e. Response via output arguments, optionally, i.e. Where `any` means everything, from custom structs to the standard language's types. `Result` is an interface which contains only that function: Dispatch(ctx iris.Context), and Get, where an HTTP Method function (Post, Put, Delete...). Iris has a very powerful and blazing fast MVC support; you can return any value of any type from a method function and it will be sent to the client as expected.

* if `string` then it's the body.
* if `string` is the second output argument then it's the content type.
* if `int` then it's the status code.
* if `bool` is false then it throws a 404 not found http error by skipping everything else.
* if `error` and not nil then (any type) the response will be omitted and the error's text with a 400 bad request will be rendered instead.
* if `(int, error)` and error is not nil then the response result will be the error's text with the status code as `int`.
* if `custom struct` or `interface{}` or `slice` or `map` then it will be rendered as json, unless a `string` content type follows.
* if `mvc.Result` then it executes its `Dispatch` function, so good design patterns can be used to split the model's logic where needed.

The example below is not intended to be used in production, but it's a good showcase of some of the return types we saw before; another good example with a typical folder structure, which many developers are used to working with, can be found at: https://github.com/kataras/iris/tree/v8/_examples/mvc/overview. By creating components that are independent of one another, developers are able to reuse components quickly and easily in other applications. The same (or similar) view for one application can be refactored for another application with different data because the view is simply handling how the data is being displayed to the user. If you're new to back-end web development, read about the MVC architectural pattern first; a good start is this wikipedia article: https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller. Follow the examples at: https://github.com/kataras/iris/tree/v8/_examples/#mvc In the previous example, we've seen static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter and custom http errors; now it's time to see wildcard parameters and macros. iris, like the net/http std package, registers a route's handlers by a Handler; the iris type of handler is just a func(ctx iris.Context) where context comes from github.com/kataras/iris/context. Iris has the easiest and the most powerful routing process you have ever met. At the same time, iris has its own interpreter (yes, like a programming language) for routes' path syntax and their dynamic path parameters parsing and evaluation. We call them "macros" for short. How? It calculates its needs and if no special regexp is needed then it just registers the route with the low-level path syntax, otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that iris provides the basic types and some default "macro funcs", you are able to register your own too! Register a named path parameter function: at the func(argument ...)
you can have any standard type; it will be validated before the server starts, so don't worry about performance here, the only thing that runs at serve time is the returned func(paramValue string) bool. Example Code: A path parameter name should contain only alphabetical letters; symbols, '_' and numbers are NOT allowed. If a route fails to be registered, the app will panic without any warnings if you didn't catch the second return value (error) on .Handle/.Get.... Last, do not confuse ctx.Values() with ctx.Params(). Path parameters' values go to ctx.Params(), while the context's local storage, which can be used to communicate between handlers and middleware(s), goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/kataras/iris/tree/v8/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: iris is able to wrap and convert any external, third-party Handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Iris supports 5 template engines out-of-the-box; developers can still use any external golang template engine, as `context/context#ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/jteeuwen/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/kataras/iris/tree/v8/_examples/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode, as they don't need to restart their app on every template edit. Example code: Note: In case you're wondering, the code behind the view engines derives from the "github.com/kataras/iris/view" package; access to the engines' variables can be granted by the "github.com/kataras/iris" package too. Each one of these template engines has different options located here: https://github.com/kataras/iris/tree/v8/view . This example will show how to store and access data from a session. You don't need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, they will first have to visit /login to get a valid session cookie, which logs them in. Additionally, they can visit /logout to revoke their access to our secret message. Example code: Running the example: Sessions persistence can be achieved using one (or more) `sessiondb`. Example Code: More examples: In this example we will create a small chat between web sockets via browser. Example Server Code: Example Client (javascript) Code: Running the example: But you should have a basic idea of the framework by now; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the below links: Examples: Middleware: Home Page:
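As a recap of the routing pieces above, here is a small illustrative sketch of a macro path parameter together with the ctx.Params()/ctx.Values() distinction (route, key names and port are made up for the example):

	package main

	import "github.com/kataras/iris"

	func main() {
		app := iris.New()

		// Middleware: store something in the context's local storage.
		app.Use(func(ctx iris.Context) {
			ctx.Values().Set("framework", "iris")
			ctx.Next()
		})

		// {name} is a macro parameter, equivalent to {name:string}.
		app.Get("/hello/{name}", func(ctx iris.Context) {
			name := ctx.Params().Get("name")                 // path parameter
			framework := ctx.Values().GetString("framework") // local storage
			ctx.Writef("Hello %s, served by %s", name, framework)
		})

		app.Run(iris.Addr(":8080"))
	}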
Package iris provides a beautifully expressive and easy to use foundation for your next website, API, or distributed app. Source code and other details for the project are available at GitHub. Current version: 11.1.1. The only requirement is the Go Programming Language, at least version 1.8, but 1.11.1 and above is highly recommended. Example code: You can start the server(s) listening to any type of `net.Listener` or even `http.Server` instance. The method for initialization of the server should be passed at the end, via the `Run` function. Below you'll see some useful examples: UNIX and BSD hosts can take advantage of the reuse port feature. Example code: That's all about listening; you have the full control when you need it. Let's continue by learning how to catch CONTROL+C/COMMAND+C or the unix kill command and shut down the server gracefully. In order to manually manage what to do when the app is interrupted, we have to disable the default behavior with the option `WithoutInterruptHandler` and register a new interrupt handler (globally, across all possible hosts). Example code: Access to all hosts that serve your application can be provided by the `Application#Hosts` field, after the `Run` method. But the most common scenario is that you may need access to the host before the `Run` method; there are two ways to gain access to the host supervisor, read below. The first way is to use `app.NewHost` to create a new host and use one of its `Serve` or `Listen` functions to start the application via the `iris#Raw` Runner. Note that this way needs an extra import of the `net/http` package. Example Code: The second, and probably easier, way is to use the `host.Configurator`. Note that this method requires an extra import statement of "github.com/kataras/iris/core/host" when using go < 1.9; if you're targeting go1.9 then you can use the `iris#Supervisor` and omit the extra host import. All common `Runners` we saw earlier (`iris#Addr, iris#Listener, iris#Server, iris#TLS, iris#AutoTLS`) accept a variadic argument of `host.Configurator`; these are just `func(*host.Supervisor)`. Therefore the `Application` gives you the ability to modify the auto-created host supervisor through these. Example Code: Read more about listening and graceful shutdown by navigating to: All HTTP methods are supported; developers can also register handlers for the same paths for different methods. The first parameter is the HTTP Method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more iris.Handler, executed in the registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, iris provides functions for all HTTP Methods. The first parameter is the request path of the route, the second variadic parameter should contain one or more iris.Handler, executed in the registered order when a user requests that specific resource path from the server. Example code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: iris developers are able to register their own handlers for http statuses like 404 not found, 500 internal server error and so on. Example code: With the help of iris's expressionist router you can build any form of API you desire, with safety.
Example code: In the previous example, we've seen static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter and custom http errors; now it's time to see wildcard parameters and macros. iris, like the net/http std package, registers a route's handlers by a Handler; the iris type of handler is just a func(ctx iris.Context) where context comes from github.com/kataras/iris/context. Iris has the easiest and the most powerful routing process you have ever met. At the same time, iris has its own interpreter (yes, like a programming language) for routes' path syntax and their dynamic path parameters parsing and evaluation. We call them "macros" for short. How? It calculates its needs and if no special regexp is needed then it just registers the route with the low-level path syntax, otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that iris provides the basic types and some default "macro funcs", you are able to register your own too! Register a named path parameter function: at the func(argument ...) you can have any standard type; it will be validated before the server starts, so don't worry about performance here, the only thing that runs at serve time is the returned func(paramValue string) bool. Example Code: Last, do not confuse ctx.Values() with ctx.Params(). Path parameters' values go to ctx.Params(), while the context's local storage, which can be used to communicate between handlers and middleware(s), goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/kataras/iris/tree/master/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: iris is able to wrap and convert any external, third-party Handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Iris supports 5 template engines out-of-the-box; developers can still use any external golang template engine, as `context/context#ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/shuLhan/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/kataras/iris/tree/master/_examples/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode, as they don't need to restart their app on every template edit. Example code: Note: In case you're wondering, the code behind the view engines derives from the "github.com/kataras/iris/view" package; access to the engines' variables can be granted by the "github.com/kataras/iris" package too.
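As a rough sketch of the view-engine registration described above (directory, extension, template name and port are illustrative):

	package main

	import "github.com/kataras/iris"

	func main() {
		app := iris.New()

		// Register the standard html/template engine for ./views/*.html,
		// with auto-reloading enabled for development.
		app.RegisterView(iris.HTML("./views", ".html").Reload(true))

		app.Get("/", func(ctx iris.Context) {
			ctx.ViewData("Name", "iris") // available in the template as {{.Name}}
			ctx.View("index.html")
		})

		app.Run(iris.Addr(":8080"))
	}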
Each one of these template engines has different options located here: https://github.com/kataras/iris/tree/master/view . This example will show how to store and access data from a session. You don't need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, they will first have to visit /login to get a valid session cookie, which logs them in. Additionally, they can visit /logout to revoke their access to our secret message. Example code: Running the example: Sessions persistence can be achieved using one (or more) `sessiondb`. Example Code: More examples: In this example we will create a small chat between web sockets via browser. Example Server Code: Example Client (javascript) Code: Running the example: Iris has first-class support for the MVC pattern; you won't find this anywhere else in the Go world. Example Code:

	// GetUserBy serves
	// Method: GET
	// Resource: http://localhost:8080/user/{username:string}
	// By is a reserved "keyword" to tell the framework that you're going to
	// bind path parameters in the function's input arguments, and it also
	// helps to have "Get" and "GetBy" in the same controller.
	//
	// func (c *ExampleController) GetUserBy(username string) mvc.Result {
	// 	return mvc.View{
	// 		Name: "user/username.html",
	// 		Data: username,
	// 	}
	// }

You can use more than one; the factory will make sure that the correct http methods are registered for each route of this controller. Uncomment these if you want: The Iris web framework supports Request data, Models, Persistence Data and Binding with the fastest possible execution. Characteristics: All HTTP Methods are supported; for example, if you want to serve `GET` then the controller should have a function named `Get()`, and you can define more than one method function to serve in the same Controller. Register custom controller struct methods as handlers with custom paths (even with regex parameterized paths) via the `BeforeActivation` custom event callback, per-controller. Example: Persistence data inside your Controller struct (share data between requests) by defining services in the Dependencies or by having a `Singleton` controller scope. Share the dependencies between controllers or register them on a parent MVC Application, with the ability to modify dependencies per-controller via the `BeforeActivation` optional event callback inside a Controller, i.e. Access to the `Context` as a controller field (no manual binding is needed), i.e. `Ctx iris.Context`, or via a method's input argument, i.e. Models inside your Controller struct (set at the Method function and rendered by the View). You can return models from a controller's method or set a field in the request lifecycle and return that field to another method in the same request lifecycle. Flow as you are used to: the mvc application has its own `Router`, which is a type of `iris/router.Party`, the standard iris api. `Controllers` can be registered to any `Party`, including Subdomains; the Party's begin and done handlers work as expected. Optional `BeginRequest(ctx)` function to perform any initialization before the method execution, useful to call middlewares or when many methods use the same collection of data. Optional `EndRequest(ctx)` function to perform any finalization after any method executed. Session dynamic dependency via the manager's `Start` to the MVC Application, i.e. Inheritance, recursively.
Access to the dynamic path parameters via the controller's methods' input arguments, no binding is needed. When you use the Iris default syntax to parse handlers from a controller, you need to suffix the methods with the `By` word; an uppercase letter starts a new sub path. Example: Register one or more relative paths and be able to get path parameters, i.e. Response via output arguments, optionally, i.e. Where `any` means everything, from custom structs to the standard language's types. `Result` is an interface which contains only that function: Dispatch(ctx iris.Context), and Get, where an HTTP Method function (Post, Put, Delete...). Iris has a very powerful and blazing fast MVC support; you can return any value of any type from a method function and it will be sent to the client as expected.

* if `string` then it's the body.
* if `string` is the second output argument then it's the content type.
* if `int` then it's the status code.
* if `bool` is false then it throws a 404 not found http error by skipping everything else.
* if `error` and not nil then (any type) the response will be omitted and the error's text with a 400 bad request will be rendered instead.
* if `(int, error)` and error is not nil then the response result will be the error's text with the status code as `int`.
* if `custom struct` or `interface{}` or `slice` or `map` then it will be rendered as json, unless a `string` content type follows.
* if `mvc.Result` then it executes its `Dispatch` function, so good design patterns can be used to split the model's logic where needed.

Examples with good patterns to follow, but of course not intended to be used in production, can be found at: https://github.com/kataras/iris/tree/master/_examples/#mvc. By creating components that are independent of one another, developers are able to reuse components quickly and easily in other applications. The same (or similar) view for one application can be refactored for another application with different data because the view is simply handling how the data is being displayed to the user. If you're new to back-end web development, read about the MVC architectural pattern first; a good start is this wikipedia article: https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller. But you should have a basic idea of the framework by now; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the below links: Examples: Middleware: Home Page: Book (in-progress):
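To make the MVC description above concrete, here is a minimal, illustrative sketch (controller name, paths and port are made up; the exact wiring in your app may differ):

	package main

	import (
		"github.com/kataras/iris"
		"github.com/kataras/iris/mvc"
	)

	type HelloController struct{}

	// Get handles GET /hello; the returned string becomes the response body.
	func (c *HelloController) Get() string {
		return "Hello from an MVC controller"
	}

	// GetBy handles GET /hello/{name}; the path parameter is mapped to the
	// method's input argument, no manual binding needed.
	func (c *HelloController) GetBy(name string) string {
		return "Hello, " + name
	}

	func main() {
		app := iris.New()
		mvc.New(app.Party("/hello")).Handle(new(HelloController))
		app.Run(iris.Addr(":8080"))
	}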
Package lumberjack provides a rolling logger. Note that this is v2.0 of lumberjack, and should be imported using gopkg.in thusly: The package name remains simply lumberjack, and the code resides at https://github.com/natefinch/lumberjack under the v2.0 branch. Lumberjack is intended to be one part of a logging infrastructure. It is not an all-in-one solution, but instead is a pluggable component at the bottom of the logging stack that simply controls the files to which logs are written. Lumberjack plays well with any logging package that can write to an io.Writer, including the standard library's log package. Lumberjack assumes that only one process is writing to the output files. Using the same lumberjack configuration from multiple processes on the same machine will result in improper behavior. To use lumberjack with the standard library's log package, just pass it into the SetOutput function when your application starts.
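A minimal sketch of the standard-library wiring described above (the file path and rotation settings are illustrative):

	package main

	import (
		"log"

		"gopkg.in/natefinch/lumberjack.v2"
	)

	func main() {
		// Send the standard logger's output to a self-rotating file.
		log.SetOutput(&lumberjack.Logger{
			Filename:   "/var/log/myapp/app.log",
			MaxSize:    100, // megabytes before the file is rotated
			MaxBackups: 3,   // number of old files to keep
			MaxAge:     28,  // days to retain old files
		})

		log.Println("application started")
	}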
Package excelize provides a set of functions that allow you to write to and read from XLAM / XLSM / XLSX / XLTM / XLTX files. It supports reading and writing spreadsheet documents generated by Microsoft Excel™ 2007 and later. It supports complex components with high compatibility, and provides a streaming API for generating or reading data from a worksheet with huge amounts of data. This library needs Go version 1.18 or later. See https://xuri.me/excelize for more information about this package.
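A minimal sketch of basic usage (sheet, cell, and file names are illustrative); the Go 1.18+ releases live under the github.com/xuri/excelize/v2 module path:

	package main

	import (
		"fmt"

		"github.com/xuri/excelize/v2"
	)

	func main() {
		f := excelize.NewFile()
		defer f.Close()

		// Write a value and read it back from the default sheet.
		if err := f.SetCellValue("Sheet1", "A1", "Hello, excelize"); err != nil {
			fmt.Println(err)
			return
		}
		value, err := f.GetCellValue("Sheet1", "A1")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(value)

		if err := f.SaveAs("Book1.xlsx"); err != nil {
			fmt.Println(err)
		}
	}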
Package runestone is an open-source implementation of the Bitcoin Rune protocol in Go. The runestone project aims to provide a robust, efficient, and easy-to-use library for developers to interact with the Rune protocol. It covers various aspects of the protocol, such as transaction handling, block validation, and network communication. Features: Getting started: To start using the runestone package, simply import it into your Go project:

	go get github.com/studyzy/runestone

You can then use the various components provided by the package to interact with the Rune protocol, such as creating transactions, validating blocks, and communicating with other nodes on the network. For more information and examples, please refer to the package documentation and the project's GitHub repository. Contributions and feedback are welcome! If you encounter any issues or have suggestions for improvements, please open an issue on the project's GitHub repository or submit a pull request.
Package aztables can access an Azure Storage or CosmosDB account. The aztables package is capable of: The Azure Data Tables library allows you to interact with two types of resources: * the tables in your account * the entities within those tables. Interaction with these resources starts with an instance of a client. To create a client object, you will need the account's table service endpoint URL and a credential that allows you to access the account. The clients support different forms of authentication. The aztables library supports any of the `azcore.TokenCredential` interfaces, authorization via a Connection String, or authorization with a Shared Access Signature token. To use an account shared key (aka account key or access key), provide the key as a string. This can be found in your storage account in the Azure Portal under the "Access Keys" section. Use the key as the credential parameter to authenticate the client: Using a Connection String: Depending on your use case and authorization method, you may prefer to initialize a client instance with a connection string instead of providing the account URL and credential separately. To do this, pass the connection string to the NewServiceClientFromConnectionString function. The connection string can be found in your storage account in the Azure Portal under the "Access Keys" section or with the following Azure CLI command: Using a Shared Access Signature: To use a shared access signature (SAS) token, provide the token at the end of your service URL. You can generate a SAS token from the Azure Portal under Shared Access Signature or use the ServiceClient.GetAccountSASToken or Client.GetTableSASToken() functions. Common uses of the Table service include: * Storing TBs of structured data capable of serving web scale applications * Storing datasets that do not require complex joins, foreign keys, or stored procedures and can be de-normalized for fast access * Quickly querying data using a clustered index * Accessing data using the OData protocol and LINQ filter expressions. The following components make up the Azure Data Tables Service: * The account * A table within the account, which contains a set of entities * An entity within a table, as a dictionary. The Azure Data Tables client library for Go allows you to interact with each of these components through the use of a dedicated client object. Two different clients are provided to interact with the various components of the Table Service: 1. `ServiceClient` - 2. `Client` - Entities are similar to rows. An entity has a PartitionKey, a RowKey, and a set of properties. A property is a name-value pair, similar to a column. Every entity in a table does not need to have the same properties. Entities are returned as JSON, allowing developers to use JSON marshalling and unmarshalling techniques. Additionally, you can use the aztables.EDMEntity to ensure proper round-trip serialization of all properties. The following sections provide several code snippets covering some of the most common Table tasks, including: * Creating a table * Creating entities * Querying entities. Create a table in your account and get a `Client` to perform operations on the newly created table: Creating Entities Querying entities
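A minimal sketch of the shared-key flow described above (the account name, key source, and table name are illustrative):

	package main

	import (
		"context"
		"fmt"
		"os"

		"github.com/Azure/azure-sdk-for-go/sdk/data/aztables"
	)

	func main() {
		accountName := os.Getenv("AZURE_STORAGE_ACCOUNT")
		accountKey := os.Getenv("AZURE_STORAGE_KEY")

		cred, err := aztables.NewSharedKeyCredential(accountName, accountKey)
		if err != nil {
			panic(err)
		}
		serviceURL := fmt.Sprintf("https://%s.table.core.windows.net/", accountName)
		service, err := aztables.NewServiceClientWithSharedKey(serviceURL, cred, nil)
		if err != nil {
			panic(err)
		}

		// Create a table and get a Client scoped to it.
		if _, err := service.CreateTable(context.TODO(), "Example", nil); err != nil {
			panic(err)
		}
		client := service.NewClient("Example")
		_ = client // use the table client to add, query, and delete entities
	}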
Package kyoto was made for creating fast, server-side frontends while avoiding vanilla templating downsides. It tries to address complexities in the frontend domain like responsibility separation, component structure, asynchronous load and hassle-free dynamic layout updates. These issues are common for frontends written in Go. The library provides you with primitives for page and component creation, state and rendering management, dynamic layout updates (with external packages integration), utility functions and asynchronous components out of the box. Still, it bundles with minimal dependencies and tries to utilize built-ins as much as possible. You would probably want to opt out of this library in a few cases: if you're not ready for drastic API changes between major versions, if you want to develop an SPA/PWA and/or complex client-side logic, or if you're just feeling OK with your current setup. Please don't compare kyoto with popular JS libraries like React, Vue or Svelte. I know you will have such a desire, but most likely you will be wrong. Use cases and underlying principles are just too different. If you want to get an idea of what a typical static component would look like, here's some sample code. It's very ascetic and simplistic, as we don't want to overload you with implementation details. Markup is also not included here (it's just a well-known `html/template`). For details, please check the project's website at https://kyoto.codes. Also, you may check the library index to explore available sub-packages and https://pkg.go.dev for Go'ish documentation style. We don't want you to deal with boilerplate code on your own, so you can proceed with our simple starter project. Feel free to use it as an example for your own setup. Components are a common approach for modern libraries to manage frontend parts. Kyoto's components are trying to be a mostly independent (but configurable) part of the project. To create a component, it is enough to implement component.Component. It's a function, a context receiver, which returns a component state. State is an implementation of component.State, which is easy to implement by nesting one of the provided state implementations (options will be described later). Each component becomes a part of the page or a top-level component, which executes the component function asynchronously and gets a state future object. In that way your components are executed in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Stateful components are pretty similar to stateless ones, but they actually implement the marshal/unmarshal interface instead of mocking it. You have multiple state options to choose from: universal or server. Universal state is a state that can be marshalled and unmarshalled both on the server and the client. It's a common state option without functionality limitations. On the other hand, the whole state must be sent and received, which applies some limitations on the state size. Server state can be marshalled and unmarshalled only on the server. It's a good option for components that are not supposed to be updated on the client side (e.g. no inputs). Also, it's a good option for components with lots of state data. Sometimes you may want to pass some arguments to the component. It's easy to do by wrapping the component with an additional function. You have access to the context inside the component. It includes request and response objects, as well as some other useful stuff like the store.
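Since the sample code mentioned above isn't shown here, the following is a purely conceptual sketch of the pattern just described: a component as a context-receiving function returning state, scheduled asynchronously as a future. It does not use kyoto's actual types or signatures, which may differ:

	package main

	import "fmt"

	// State is an illustrative stand-in for a component's state.
	type State map[string]any

	// Component is a context receiver that produces a state.
	type Component func(ctx *Context) State

	// Context is an illustrative stand-in for the request/response/store context.
	type Context struct{}

	// Use executes a component asynchronously and returns a future for its state.
	func (c *Context) Use(component Component) func() State {
		ch := make(chan State, 1)
		go func() { ch <- component(c) }()
		return func() State { return <-ch }
	}

	// UUID is a toy component; imagine fetching from an API here, running
	// concurrently with other components on the page.
	func UUID(ctx *Context) State {
		return State{"UUID": "0000-0000"}
	}

	func main() {
		ctx := &Context{}
		uuid := ctx.Use(UUID) // non-blocking
		// ...schedule more components, then resolve futures during rendering.
		fmt.Println(uuid()["UUID"])
	}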
This library doesn't provide you with routing out of the box. You can use any router you want; the built-in one is not a bad option for basic needs. Rendering might be tricky, but we're trying to make it as simple as possible. By default, we're using `html/template` as a rendering engine. It's a well-known built-in package, so you don't have to learn anything new. Out of the box we're parsing all templates in the root directory with the `*.html` glob. You can change this behavior with the `TEMPLATE_GLOB` global variable. Don't rely on file names while working with template names; use a `define` entry for each of your components. To provide your components with the ability to be rendered, you have to do some basic steps. First, you have to nest one of the rendering implementations into your component state (e.g. `rendering.Template`). You can customize rendering by providing values to the rendering implementation. If you need to modify these values for the entire project, we recommend looking at the global settings or creating a builder function for the rendering object. By default, the render handler will use a component name as a template name. So, you have to define a template with the same name as your component (not the filename, but the "define" entry). That's enough to be rendered by `rendering.Handler`. For rendering a nested component, use the built-in `template` function. Provide a resolved future object as a template argument in this way. Nested components are not obligated to have a rendering implementation if you're using them in this way. As an alternative, you can nest a rendering implementation (e.g. `rendering.Template`) into your nested component. In this way you can use the `render` function to simplify your code. Please don't use this approach heavily now, as it affects rendering performance. HTMX is a frontend library that allows you to update your page layout dynamically. It perfectly fits into kyoto, which focuses on components and server-side rendering. Thanks to the component structure, there is no need to define separate rendering logic specifically for HTMX. Please check https://htmx.org/docs/#installing for installation instructions. In addition to this, you must register HTMX handlers for your dynamic components. This is a basic example of HTMX usage. Please check https://htmx.org/docs/ for more details. In this example we're defining a form component that updates itself on submit. And this is how you can define a component that will handle this request. Sometimes it might be useful to have a component state which will persist between requests and will be stored without any actual usage in the client-side presentation. This function injects a hidden input field with a serialized state. Let's check how it works on the server side. As a result, we have a component with a persistent state between requests.
Extensible Go library for creating fast, SSR-first frontends while avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. This library tries to simplify the process. Let's go straight into a simple example. Then, we will dig into the details, step by step, of how it works. Kyoto provides simple net/http handlers and function wrappers to handle page rendering and serving. See the functions inside of the nethttp.go file for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto each component is a context receiver, which returns its state. Each component becomes a part of the page or a top-level component, which executes the component asynchronously and gets a state future object. In that way your components are executed in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from a top-level page/component. Example: Kyoto provides a context, which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions to provide a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple. The client calls an action (sends a request to the server). The action is executed on the server side and the server sends updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client into the page (JS functions for communication) and register an actions handler for the needed component. Let's start with including a client. Then, let's register an actions handler for the needed component. That's all! Now we're ready to use actions to provide a dynamic UI. Example: In this example you can see the provided modifications to the quick start example. First, we've added a state and name into our component's markup. In this way we save our component's state between actions and find the component root. Unfortunately, we have to manually provide a component name for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the function Action provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside of our component. This handler will be executed when a client calls an action with a corresponding name. It's highly recommended to keep components' state as small as possible; it will be transmitted on each action call. Kyoto has multiple ways to trigger actions. Now we will check them one by one. This is the simplest way to trigger an action. It's just a function call with a referer (usually 'this', e.g. a button) as the first argument (used to determine the root), the action name as the second argument and arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of a parent component, use the $ prefix in the action name.
If you want to call an action of a component by id, use <id:action> as the action name. This is a specific action which is triggered when a form is submitted. It is usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. This is a special HTML attribute which will trigger an action on page load. This may be useful for lazy loading of components. With these special HTML attributes you can trigger an action on an interval. Useful for components that must be updated over time (e.g. charts, stats, etc). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. This attribute allows you to trigger an action when an element is visible on the screen. May be useful for lazy loading. Kyoto provides a way to control the action flow. For now, it's possible to control the display style on a component call and push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes on an action call. You can use this HTML attribute to control display during an action call. At the end of an action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call. Just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. There are at least 2 rendering options after an action call: morph (default) and replace. Morph will try to morph the received markup into the current one with the morphdom library. In case of an error, or in explicit "replace" mode, the markup will be replaced with x.outerHTML = '...'.
Package cfd1 provides a client and database/sql driver for interacting with Cloudflare's D1 database service. D1 is a serverless SQL database from Cloudflare that implements the SQLite query engine. This package offers a lightweight wrapper around the D1 API as well as a database/sql compatible driver. To use the direct API implementation, create a new client using NewClient with your Cloudflare account ID and API token: The returned client can be used to create, manage, and query D1 databases and provides a 1:1 wrapper around the D1 API. To perform operations on a single database, a Handle can be obtained from the client using a database name or UUID and subsequently queried: The D1 API supports multiple semicolon-separated statements in a single Handle.Query operation, which are executed as a batch. A query can be up to 100KB and contain up to 100 placeholders. To use the database/sql driver, import this library with the blank identifier. Its init function registers the driver as "cfd1": You can then open a connection to a D1 database using a DSN string in URI format: All three components of the DSN are required. Note that this driver does not support transactions through db.Begin(), as connections to D1 over the REST API are not persistent -- every query creates a new HTTP round-trip to the API and a new connection. Multiple semicolon-separated statements in a single query are supported, however, and can include transactions. This is an unofficial implementation of the Cloudflare D1 API, and its author is not affiliated with Cloudflare. For the official Cloudflare API Go client, see: For more information about Cloudflare D1, see the Cloudflare D1 documentation.
Package oteleventually provides extension components for eventually library to enable OpenTelemetry instrumentation.
Package excelize provides a set of functions that allow you to write to and read from XLSX / XLSM / XLTM files. It supports reading and writing spreadsheet documents generated by Microsoft Excel™ 2007 and later. It supports complex components with high compatibility, and provides a streaming API for generating or reading data from a worksheet with huge amounts of data. This library needs Go version 1.10 or later. See https://xuri.me/excelize for more information about this package.
Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called an Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts, are also available. Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values. enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/jhillyerd/enmime/v2
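A minimal sketch of the Envelope layer described above (the file name is illustrative; for the v2 major version the import path is the /v2 module path mentioned above):

	package main

	import (
		"fmt"
		"os"

		"github.com/jhillyerd/enmime"
	)

	func main() {
		f, err := os.Open("message.eml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		env, err := enmime.ReadEnvelope(f)
		if err != nil {
			panic(err)
		}

		fmt.Println("Subject:", env.GetHeader("Subject"))
		fmt.Println("Text body:", env.Text)
		for _, att := range env.Attachments {
			fmt.Println("Attachment:", att.FileName)
		}
		// Non-fatal parsing problems are collected rather than failing hard.
		for _, e := range env.Errors {
			fmt.Println("Parse issue:", e)
		}
	}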
Package bongo is an elegant static website generator. It is designed to be simple, minimal and easy to use. Bongo comes in two flavors: the command-line application and the library. The command-line application can be found in the cmd/bongo directory and you can install it via go get like this Or just download the latest binary here https://github.com/gernest/bongo/releases/latest To build your project foo: * You can specify the path to foo * You can run at the root of foo To serve your project locally: This will run a local server at http://localhost:8000. The project will be rebuilt if any markdown file changes. * You can specify the path to foo * You can run at the root of foo The generated website will be in the directory _site at the root of your foo project. There is no restriction on how you arrange your project. If you have a project foo, it will be somewhere in a directory named foo. You can see the example in the testdata/sample directory. Bongo only processes markdown files found in your project root. Supported file extensions for the markdown files are This means you can put your markdown files in any nested directories inside your project and bongo will process them without any problem. Bongo supports GitHub Flavored Markdown. Optionally, you can add a sitewide configuration file `_bongo.yml` at the root of your project. The configuration is in yaml format, and there are a few settings you can change. There are loose restrictions on how to create your own theme. What matters is that you have the following templates. These templates can be used in a project by setting the view value of the frontmatter. For instance, if I set view to post, then post.html will be used for that particular file. IMPORTANT: All static contents should be placed in a directory named static at the root of the theme. They will be copied to the output directory unchanged. All custom themes should live under the _theme directory at the project root. Please see testdata/sample/_themes for an example. Bongo supports frontmatter, and it is recommended that every post (your markdown file) have a frontmatter. For convenience, only YAML frontmatter is supported by default, and you can add it at the beginning of your file like this. Important frontmatter settings: Bongo is modular, and uses interfaces to define its components. The most important interface is the Generator interface. So, you can implement your own Generator interface, and pass it to the bongo library to have your own static website generator with your own rules. I challenge you to try implementing different Generators. Or, implement different components of the generator interface. I have default implementations shipped with bongo.
Package lunk provides a set of tools for structured logging in the style of Google's Dapper or Twitter's Zipkin. When we consider a complex event in a distributed system, we're actually considering a partially-ordered tree of events from various services, libraries, and modules. Consider a user-initiated web request. Their browser sends an HTTP request to an edge server, which extracts the credentials (e.g., OAuth token) and authenticates the request by communicating with an internal authentication service, which returns a signed set of internal credentials (e.g., signed user ID). The edge web server then proxies the request to a cluster of web servers, each running a PHP application. The PHP application loads some data from several databases, places the user in a number of treatment groups for running A/B experiments, writes some data to a Dynamo-style distributed database, and returns an HTML response. The edge server receives this response and proxies it to the user's browser. In this scenario we have a number of infrastructure-specific events: This scenario also involves a number of events which have little to do with the infrastructure, but are still critical information for the business the system supports: There are a number of different teams all trying to monitor and improve aspects of this system. Operational staff need to know if a particular host or service is experiencing a latency spike or drop in throughput. Development staff need to know if their application's response times have gone down as a result of a recent deploy. Customer support staff need to know if the system is operating nominally as a whole, and for customers in particular. Product designers and managers need to know the effect of an A/B test on user behavior. But the fact that these teams will be consuming the data in different ways for different purposes does not mean that they must work with entirely different systems. In order to instrument the various components of the system, we need a common data model. We adopt Dapper's notion of a tree to mean a partially-ordered tree of events from a distributed system. A tree in Lunk is identified by its root ID, which is the unique ID of its root event. All events in a common tree share a root ID. In our example, we would assign a unique root ID as soon as the edge server received the request. Events inside a tree are causally ordered: each event has a unique ID, and an optional parent ID. By passing the IDs across systems, we establish causal ordering between events. In our example, the two database queries from the app would share the same parent ID--the ID of the event corresponding to the app handling the request which caused those queries. Each event has a schema of properties, which allow us to record specific pieces of information about each event. For HTTP requests, we can record the method, the request URI, the elapsed time to handle the request, etc. Lunk is agnostic in terms of aggregation technologies, but two use cases seem clear: real-time process monitoring and offline causational analysis. For real-time process monitoring, events can be streamed to an aggregation service like Riemann (http://riemann.io) or Storm (http://storm.incubator.apache.org), which can calculate process statistics (e.g., the 95th percentile latency for the edge server responses) in real-time.
This allows for adaptive monitoring of all services, with the option of including example root IDs in the alerts (e.g., 95th percentile latency is over 300ms, mostly as a result of requests like those in tree XXXXX). For offline causational analysis, events can be written in batches to batch processing systems like Hadoop or OLAP databases like Vertica. These aggregates can be queried to answer questions traditionally reserved for A/B testing systems. "Did users who were shown the new navbar view more photos?" "Did the new image optimization algorithm we enabled for 1% of views run faster? Did it produce smaller images? Did it have any effect on user engagement?" "Did any services have increased exception rates after any recent deploys?" And so on. By capturing the root ID of a particular web request, we can assemble a partially-ordered tree of events which were involved in the handling of that request. All events with a common root ID are in a common tree, which allows for O(M) retrieval for a tree of M events. To send a request with a root ID and a parent ID, use the Event-ID HTTP header: The header value is simply the root ID and event ID, hex-encoded and separated with a slash. If the event has a parent ID, that may be included as an optional third parameter. A server that receives a request with this header can use this to properly parent its own events. Each event has a set of named properties, the keys and values of which are strings. This allows aggregation layers to take advantage of simplifying assumptions and either store events in normalized form (with event data separate from property data) or in denormalized form (essentially pre-materializing an outer join of the normalized relations). Durations are always recorded as fractional milliseconds. Lunk currently provides two formats for log entries: text and JSON. Text-based logs encode each entry as a single line of text, using key="value" formatting for all properties. Event property keys are scoped to avoid collisions. JSON logs encode each entry as a single JSON object.
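Since the Event-ID header format is spelled out above (hex-encoded root ID and event ID separated by a slash, with an optional parent ID as a third element), here is a small self-contained sketch of building it with the standard library; the uint64 ID type and zero-padded formatting are assumptions for illustration, not lunk's own API.

    package main

    import (
        "fmt"
        "net/http"
    )

    // eventIDHeader assembles "root/event" or "root/event/parent", hex-encoded.
    func eventIDHeader(root, event, parent uint64) string {
        v := fmt.Sprintf("%016x/%016x", root, event)
        if parent != 0 {
            v += fmt.Sprintf("/%016x", parent)
        }
        return v
    }

    func main() {
        req, err := http.NewRequest("GET", "http://example.com/", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Event-ID", eventIDHeader(0xdeadbeef, 0xcafebabe, 0))
        fmt.Println(req.Header.Get("Event-ID")) // 00000000deadbeef/00000000cafebabe
    }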
Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients. The middleware consists of three components. A Policy can be implemented in two ways: as a first-class function for a stateless Policy, or as a method on a type for a stateful Policy. Note that HTTP requests made via the same pipeline share the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to avoid race conditions. A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers, and/or query parameters, inject a failure, etc. Once the Policy has successfully completed its request work, it must call the Next() method on the *policy.Request instance in order to pass the request to the next Policy in the chain. When an HTTP response comes back, the Policy then gets a chance to process the response/error. The Policy instance can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response body, etc. Once the Policy has successfully completed its response work, it must return the *http.Response and error instances to its caller. Template for implementing a stateless Policy: Template for implementing a stateful Policy: The Transporter interface is responsible for sending the HTTP request and returning the corresponding HTTP response or error. The Transporter is invoked by the last Policy in the chain. The default Transporter implementation uses a shared http.Client from the standard library. The same stateful/stateless rules for Policy implementations apply to Transporter implementations. To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function. The specified Policy instances form a chain and are invoked in the order provided to NewPipeline followed by the Transporter. Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method. The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter instances. The response/error is then sent through the same chain of Policy instances in reverse order. For example, assuming there are Policy types PolicyA, PolicyB, and PolicyC along with TransportA, the flow of Request and Response looks like the following: The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request. It also contains some internal state and provides various convenience methods. You create a Request instance by calling the runtime.NewRequest function: If the Request should contain a body, call the SetBody method. A seekable stream is required so that upon retry, the retry Policy instance can seek the stream back to the beginning before retrying the network request and re-uploading the body. Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted. This requirement conflicts with the SDK's default marshalling that specifies "omitempty" as a means to resolve the ambiguity between a field to be excluded and its zero-value. In the above example, Name and Count are defined as pointer-to-type to disambiguate between a missing value (nil) and a zero-value (0) which might have semantic differences. In a PATCH operation, any fields left as nil are to have their values preserved. When updating a Widget's count, one simply specifies the new value for Count, leaving Name nil.
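In the spirit of the Policy templates mentioned above, here is a hedged sketch of a small stateful Policy, assuming the upstream github.com/Azure/azure-sdk-for-go/sdk/azcore/policy module path; it is not the SDK's verbatim template, just an illustration of the Do/Next flow.

    package mypolicy

    import (
        "net/http"

        "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
    )

    // headerPolicy adds a static header to every outgoing request. Its fields are
    // shared across requests on the same pipeline and are never mutated, so no
    // extra synchronization is needed here.
    type headerPolicy struct {
        key, value string
    }

    // Do performs the request work, forwards the request down the chain, and then
    // returns the response/error back up the chain.
    func (p *headerPolicy) Do(req *policy.Request) (*http.Response, error) {
        req.Raw().Header.Set(p.key, p.value) // mutate the outgoing request
        resp, err := req.Next()              // invoke the next Policy (or the Transporter)
        // Response work (logging, retries, unmarshalling) could happen here.
        return resp, err
    }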
To fulfill the requirement for sending a JSON null, the NullValue() function can be used. This sends an explicit "null" for Count, indicating that any current value for Count should be deleted. When the HTTP response is received, the *http.Response is returned directly. Each Policy instance can inspect/mutate the *http.Response. To enable logging, set environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program. By default the logger writes to stderr. This can be customized by calling log.SetListener, providing a callback that writes to the desired location. Any custom logging implementation MUST provide its own synchronization to handle concurrent invocations. See the docs for the log package for further details. Pageable operations return potentially large data sets spread over multiple GET requests. The result of each GET is a "page" of data consisting of a slice of items. Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T]. The call to WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages and determining if there are more pages to fetch. No IO calls are made until the NextPage() method is invoked. Long-running operations (LROs) are operations consisting of an initial request to start the operation followed by polling to determine when the operation has reached a terminal state. An LRO's terminal state is one of the following values. LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T]. When a call to WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started. It does _not_ mean that the widget has been created or updated (or failed to be created/updated). The *runtime.Poller[T] provides APIs for determining the state of the LRO. To wait for the LRO to complete, call the PollUntilDone() method. The call to PollUntilDone() will block the current goroutine until the LRO has reached a terminal state or the context is canceled/timed out. Note that LROs can take anywhere from several seconds to several minutes. The duration is operation-dependent. Due to this variant behavior, pollers do _not_ have a preconfigured time-out. Use a context with the appropriate cancellation mechanism as required. Pollers provide the ability to serialize their state into a "resume token" which can be used by another process to recreate the poller. This is achieved via the runtime.Poller[T].ResumeToken() method. Note that a token can only be obtained for a poller that's in a non-terminal state. Also note that any subsequent calls to poller.Poll() might change the poller's state. In this case, a new token should be created. After the token has been obtained, it can be used to recreate an instance of the originating poller. When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken. Resume tokens are unique per service client and operation. Attempting to resume a poller for LRO BeginB() with a token from LRO BeginA() will result in an error. The fake package contains types used for constructing in-memory fake servers used in unit tests. This allows writing tests to cover various success/error conditions without the need for connecting to a live service. Please see https://github.com/gracewilcox/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
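As a hedged illustration of the paging flow described above, the sketch below drains a runtime.Pager; the WidgetsPage shape is a hypothetical stand-in, while More and NextPage are the Pager methods named in this documentation.

    package widgets

    import (
        "context"

        "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
    )

    // WidgetsPage is a hypothetical page shape returned by a List operation.
    type WidgetsPage struct {
        Widgets []string
    }

    // listAll fetches every page; no IO happens until NextPage is called.
    func listAll(ctx context.Context, pager *runtime.Pager[WidgetsPage]) ([]string, error) {
        var all []string
        for pager.More() {
            page, err := pager.NextPage(ctx)
            if err != nil {
                return nil, err
            }
            all = append(all, page.Widgets...)
        }
        return all, nil
    }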
Package excelize provides a set of functions that allow you to write to and read from XLAM / XLSM / XLSX / XLTM / XLTX files. It supports reading and writing spreadsheet documents generated by Microsoft Excel™ 2007 and later, supports complex components with high compatibility, and provides a streaming API for generating or reading data from a worksheet with huge amounts of data. This library needs Go version 1.18 or later. See https://xuri.me/excelize for more information about this package.
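A short, self-contained example of the read/write surface described above; the sheet name, cell, and output filename are illustrative.

    package main

    import (
        "fmt"

        "github.com/xuri/excelize/v2"
    )

    func main() {
        f := excelize.NewFile()
        defer f.Close()

        // Write a value, read it back, then save the workbook.
        if err := f.SetCellValue("Sheet1", "A1", "Hello, excelize"); err != nil {
            fmt.Println(err)
            return
        }
        v, err := f.GetCellValue("Sheet1", "A1")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(v)

        if err := f.SaveAs("Book1.xlsx"); err != nil {
            fmt.Println(err)
        }
    }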
Extensible Go library for creating fast, SSR-first frontends while avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. This library tries to simplify that process. Let's go straight into a simple example; then we will dig into the details, step by step, of how it works. Kyoto provides simple net/http handlers and function wrappers to handle page rendering and serving. See the functions inside of nethttp.go for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto each component is a context receiver which returns its state. Each component becomes a part of the page or top-level component, which executes the component asynchronously and gets a state future object. In that way your components execute in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from the top-level page/component. Example: Kyoto provides a context which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions for a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple. The client calls an action (sends a request to the server). The action executes on the server side and the server sends updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client into the page (JS functions for communication) and register an actions handler for the needed component. Let's start by including the client. Then, let's register an actions handler for the needed component. That's all! Now we are ready to use actions to provide a dynamic UI. Example: In this example you can see the modifications made to the quick start example. First, we've added a state and name into our component's markup. In this way we save our component's state between actions and can find the component root. Unfortunately, we have to provide a component name manually for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the Action function provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside of our component. This handler will be executed when a client calls an action with the corresponding name. It's highly recommended to keep a component's state as small as possible, because it is transmitted on each action call. Kyoto has multiple ways to trigger actions. We will check them one by one. The simplest way to trigger an action is a function call with a referer (usually 'this', e.g. a button) as the first argument (used to determine the root), the action name as the second argument, and arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of the parent component, use the $ prefix in the action name. If you want to call an action of a component by id, use <id:action> as the action name. There is also a specific action which is triggered when a form is submitted, usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. There is a special HTML attribute which will trigger an action on page load. This may be useful for lazy loading of components. With these special HTML attributes you can trigger an action with an interval. This is useful for components that must be updated over time (e.g. charts, stats, etc). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. Another attribute allows you to trigger an action when an element becomes visible on the screen. This may be useful for lazy loading. Kyoto provides a way to control action flow. For now, it's possible to control the display style on a component call and to push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes on an action call. You can use this HTML attribute to control display during the action call. At the end of the action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call; just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. There are currently at least two rendering options after an action call: morph (default) and replace. Morph will try to morph the received markup into the current one with the morphdom library. In case of an error, or in the explicit "replace" mode, the markup will be replaced with x.outerHTML = '...'.
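The asynchronous, state-future execution model described above can be pictured with plain Go; this is an illustration of the idea only (a goroutine plus a channel), not kyoto's actual API.

    package main

    import "fmt"

    type state struct{ UUID string }

    // component simulates a kyoto-style component: it starts its work immediately
    // and hands back a future that resolves to its state.
    func component() <-chan state {
        future := make(chan state, 1)
        go func() {
            // e.g. fetch data from an upstream API here
            future <- state{UUID: "0000-demo"}
        }()
        return future
    }

    func main() {
        // The page kicks off all components first (non-blocking)...
        f1, f2 := component(), component()
        // ...and resolves their states only when rendering needs them.
        fmt.Println(<-f1, <-f2)
    }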
Package kyoto was made for creating fast, server-side frontends while avoiding vanilla templating downsides. It tries to address complexities in the frontend domain like responsibility separation, component structure, asynchronous loading, and hassle-free dynamic layout updates. These issues are common for frontends written with Go. The library provides you with primitives for page and component creation, state and rendering management, dynamic layout updates (with external package integration), utility functions, and asynchronous components out of the box. Still, it ships with minimal dependencies and tries to utilize built-ins as much as possible. You would probably want to opt out from this library in a few cases: if you're not ready for drastic API changes between major versions, if you want to develop an SPA/PWA and/or complex client-side logic, or if you're just feeling OK with your current setup. Please don't compare kyoto with popular JS libraries like React, Vue or Svelte. I know you will have such a desire, but most likely you will be wrong. The use cases and underlying principles are just too different. If you want to get an idea of what a typical static component would look like, here's some sample code. It's very ascetic and simplistic, as we don't want to overload you with implementation details. Markup is also not included here (it's just the well-known `html/template`). For details, please check the project's website at https://kyoto.codes. Also, you may check the library index to explore available sub-packages and https://pkg.go.dev for Go'ish documentation style. We don't want you to deal with boilerplate code on your own, so you can proceed with our simple starter project. Feel free to use it as an example for your own setup. Components are a common approach for modern libraries to manage frontend parts. Kyoto's components try to be a mostly independent (but configurable) part of the project. To create a component, it is enough to implement component.Component. It's a function, a context receiver, which returns a component state. State is an implementation of component.State, which is easy to implement by nesting one of the provided state implementations (the options will be described later). Each component becomes a part of the page or top-level component, which executes the component function asynchronously and gets a state future object. In that way your components execute in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Stateful components are pretty similar to stateless ones, but they actually implement the marshal/unmarshal interface instead of mocking it. You have multiple state options to choose from: universal or server. Universal state is state that can be marshalled and unmarshalled both on the server and the client. It's a common state option without functionality limitations. On the other hand, the whole state must be sent and received, which imposes some limitations on the state size. Server state can be marshalled and unmarshalled only on the server. It's a good option for components that are not supposed to be updated on the client side (e.g. no inputs). It's also a good option for components with lots of state data. Sometimes you may want to pass some arguments to the component. It's easy to do by wrapping the component with an additional function, as sketched below. You have access to the context inside the component. It includes the request and response objects, as well as some other useful stuff like the store.
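The argument-passing pattern mentioned above (wrapping a component with an additional function) is essentially a closure; the sketch below shows it in plain Go with simplified stand-in types, since the real component.Component and component.State types are only referenced by name here.

    package main

    import "fmt"

    // ctxT and stateT are simplified stand-ins for the component context and state.
    type ctxT struct{}
    type stateT struct{ Title string }

    // componentFunc mirrors the "context receiver returning a state" shape.
    type componentFunc func(ctx *ctxT) stateT

    // newTitled wraps a component so a page can configure it with arguments.
    func newTitled(title string) componentFunc {
        return func(ctx *ctxT) stateT {
            return stateT{Title: title}
        }
    }

    func main() {
        c := newTitled("Hello")
        fmt.Println(c(&ctxT{}).Title)
    }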
This library doesn't provide routing out of the box. You can use any router you want; the built-in one is not a bad option for basic needs. Rendering might be tricky, but we're trying to make it as simple as possible. By default, we're using `html/template` as the rendering engine. It's a well-known built-in package, so you don't have to learn anything new. Out of the box we parse all templates in the root directory with the `*.html` glob. You can change this behavior with the `TEMPLATE_GLOB` global variable. Don't rely on file names when working with template names; use a `define` entry for each of your components. To give your components the ability to be rendered, you have to do some basic steps. First, you have to nest one of the rendering implementations into your component state (e.g. `rendering.Template`). You can customize rendering by providing values to the rendering implementation. If you need to modify these values for the entire project, we recommend looking at the global settings or creating a builder function for the rendering object. By default, the render handler will use the component name as the template name. So, you have to define a template with the same name as your component (not the filename, but the "define" entry). That's enough to be rendered by `rendering.Handler`. For rendering a nested component, use the built-in `template` function and provide a resolved future object as the template argument. Nested components are not obligated to have a rendering implementation if you're using them in this way. As an alternative, you can nest a rendering implementation (e.g. `rendering.Template`) into your nested component; in this way you can use the `render` function to simplify your code. Please don't use this approach heavily for now, as it affects rendering performance. HTMX is a frontend library that allows you to update your page layout dynamically. It perfectly fits into kyoto, which focuses on components and server-side rendering. Thanks to the component structure, there is no need to define separate rendering logic specifically for HTMX. Please check https://htmx.org/docs/#installing for installation instructions. In addition to this, you must register HTMX handlers for your dynamic components. This is a basic example of HTMX usage; please check https://htmx.org/docs/ for more details. In this example we define a form component that updates itself on submit. And this is how you can define a component that will handle this request. Sometimes it might be useful to have a component state which persists between requests and is stored without any actual usage in the client-side presentation. This function injects a hidden input field with a serialized state. Let's check how it works on the server side. As a result, we have a component with a persistent state between requests.
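Returning to the rendering conventions above, here is a plain html/template sketch of the one-define-per-component approach; the PIndex and CUUID names are illustrative.

    package main

    import (
        "html/template"
        "os"
    )

    const src = `
    {{ define "PIndex" }}<html><body>{{ template "CUUID" .UUID }}</body></html>{{ end }}
    {{ define "CUUID" }}<p>UUID: {{ . }}</p>{{ end }}
    `

    func main() {
        tmpl := template.Must(template.New("app").Parse(src))
        data := map[string]any{"UUID": "0000-demo"}
        // Render by the component's defined name, not by a file name.
        if err := tmpl.ExecuteTemplate(os.Stdout, "PIndex", data); err != nil {
            panic(err)
        }
    }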
Extensible Go library for creating fast, SSR-first frontends while avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. This library tries to simplify the overall setup and process. Let's go straight into a simple example; then we will dig into the details, step by step, of how it works. Kyoto provides a set of simple net/http handlers, handler builders, and function wrappers to provide serving, page rendering, component actions, etc. Still, this is not an ultimate solution for every case: if you ever need to wrap or extend existing functionality, the library encourages this. See the functions inside of nethttp.go for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto each component is a context receiver which returns its state. Each component becomes a part of the page or top-level component, which executes the component asynchronously and gets a state future object. In that way your components execute in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from the top-level page/component. Example: Kyoto provides a context which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions for a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple. The client calls an action (sends a request to the server). The action executes on the server side and the server sends updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client into the page (JS functions for communication) and register an actions handler for the needed component. Let's start by including the client. Then, let's register an actions handler for the needed component. That's all! Now we are ready to use actions to provide a dynamic UI. Example: In this example you can see the modifications made to the quick start example. First, we've added a state and name into our component's markup. In this way we save our component's state between actions and can find the component root. Unfortunately, we have to provide a component name manually for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the Action function provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside of our component. This handler will be executed when a client calls an action with the corresponding name. It's highly recommended to keep a component's state as small as possible, because it is transmitted on each action call. Kyoto has multiple ways to trigger actions. We will check them one by one. The simplest way to trigger an action is a function call with a referer (usually 'this', e.g. a button) as the first argument (used to determine the root), the action name as the second argument, and arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of the parent component, use the $ prefix in the action name. If you want to call an action of a component by id, use <id:action> as the action name. There is also a specific action which is triggered when a form is submitted, usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. There is a special HTML attribute which will trigger an action on page load. This may be useful for lazy loading of components. With these special HTML attributes you can trigger an action with an interval. This is useful for components that must be updated over time (e.g. charts, stats, etc). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. Another attribute allows you to trigger an action when an element becomes visible on the screen. This may be useful for lazy loading. Kyoto provides a way to control action flow. For now, it's possible to control the display style on a component call and to push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes on an action call. You can use this HTML attribute to control display during the action call. At the end of the action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call; just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. There are currently at least two rendering options after an action call: morph (default) and replace. Morph will try to morph the received markup into the current one with the morphdom library. In case of an error, or in the explicit "replace" mode, the markup will be replaced with x.outerHTML = '...'.
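Conceptually, pushing multiple UI updates during one action means writing component markup to the response more than once before the handler returns; kyoto wraps this behind the kyoto.ActionFlush(ctx, state) call mentioned above. The plain net/http handler below illustrates only that underlying idea, not kyoto's implementation.

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func action(w http.ResponseWriter, r *http.Request) {
        flusher, ok := w.(http.Flusher)
        if !ok {
            http.Error(w, "streaming unsupported", http.StatusInternalServerError)
            return
        }
        // First update: a loading placeholder.
        fmt.Fprint(w, `<div id="demo">Loading...</div>`)
        flusher.Flush()

        time.Sleep(500 * time.Millisecond) // pretend to do slow work

        // Second update: the final markup for the component.
        fmt.Fprint(w, `<div id="demo">Done</div>`)
        flusher.Flush()
    }

    func main() {
        http.HandleFunc("/action/demo", action)
        log.Fatal(http.ListenAndServe(":8000", nil))
    }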