This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions.

The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then uses the identityClient to list availability domains and prints them to stdout. More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example

Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string.

The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example:

The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, so any changes made to the request will be properly signed and submitted to the service.

The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm

Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63

When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go

The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses.
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The following are possible values for the "OCI_GO_SDK_DEBUG" variable:

1. "info" or "i" enables all info logging messages
2. "debug" or "d" enables all debug and info logging messages
3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages
4. "null" turns all logging messages off

If the value of the environment variable does not match any of the above, the default logging level is "info". If the environment variable is not present then no logging messages are emitted.

The default destination for logging is stderr; if you want to output logs to a file, set the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The following are possible values:

1. "file" or "f" sends all logging output to a file
2. "combine" or "c" sends all logging output to both stderr and a file

You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there is a network issue, etc. This can be accomplished by using the RequestMetadata.RetryPolicy. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go

If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface.

The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways:

1. Configuring an environment variable as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment
2. Modifying the underlying Transport struct for a service client

In order to modify the underlying Transport struct in HttpClient, you can do something similar to the following (sample code for the audit service client):

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field.
To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value.

When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable:

Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk

Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt

To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom

Please refer to this link: https://github.com/oracle/oci-go-sdk#help
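For reference, here is a minimal sketch of the getting-started flow described at the top of this package overview: it builds an identity client from the default configuration provider, lists availability domains, and prints them. The compartment OCID is a placeholder you must replace, and error handling is abbreviated.

    package main

    import (
        "context"
        "fmt"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            panic(err)
        }

        // common.String returns a *string, as described in the section on
        // pointer helpers for optional input fields.
        request := identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String("ocid1.tenancy.oc1..example"),
        }

        response, err := client.ListAvailabilityDomains(context.Background(), request)
        if err != nil {
            panic(err)
        }
        for _, ad := range response.Items {
            fmt.Println(*ad.Name)
        }
    }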
Package smpp implements SMPP protocol v3.4. It allows easier creation of SMPP clients and servers by providing utilities for PDU and session handling. In order to do any kind of interaction you first need to create an SMPP Session (https://godoc.org/github.com/ajankovic/smpp#Session). The Session is the main carrier of the protocol and enforcer of the specification rules. A naked session can be created with: But it's much more convenient to use helpers that do the binding with the remote SMSC and return a session prepared for sending: Once you have the session it can be used for sending PDUs to the bound peer. A session that is no longer used must be closed: If you want to handle incoming requests to the session, specify an SMPPHandler in the session configuration when creating a new session, similarly to http.Handler from the net/http package: Detailed examples for SMPP client and server can be found in the examples dir.
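A rough sketch of the helper-based bind flow described above. The helper and struct names here (BindTRx, SessionConf, BindConf) are recalled from the package's examples and may differ between versions, so treat them as assumptions; the examples dir is authoritative.

    package main

    import (
        "log"

        "github.com/ajankovic/smpp"
    )

    func main() {
        // Bind as a transceiver to the remote SMSC (assumed helper name).
        sess, err := smpp.BindTRx(smpp.SessionConf{}, smpp.BindConf{
            Addr:     "smsc.example.com:2775",
            SystemID: "client",
            Password: "secret",
        })
        if err != nil {
            log.Fatal(err)
        }
        // A session that is no longer used must be closed.
        defer sess.Close()

        // ... use sess to send PDUs to the bound peer ...
    }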
Package metrics provides utilities for running and testing a Prometheus scrape server. Server provides an HTTP server with a Prometheus metrics endpoint configured. Running it is as simple as: This will start an HTTP server on port 8080, with a /metrics endpoint for Prometheus scraping. To add a metrics endpoint to an existing http.Server, use the GetRouter() function instead: APIClientMetrics provides a standard way of measuring client metrics of API calls. Currently, latency and errors are supported: MetricName, MetricLabel and MetricValue provide an easy way to validate metrics during unit testing:
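The Server and GetRouter helpers above belong to this package; as a rough stand-in, the sketch below wires up the same kind of /metrics endpoint with the standard Prometheus client library, which is approximately what such a server does under the hood.

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    func main() {
        // Expose the default Prometheus registry on /metrics, the way the
        // package's Server does on port 8080.
        http.Handle("/metrics", promhttp.Handler())
        http.ListenAndServe(":8080", nil)
    }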
Package radir implements a large subset of "The OID Directory" -- an EXPERIMENTAL Internet-Draft (I-D) series by Jesse Coretta. The Internet-Drafts (henceforth referred to as "the I-D series") are made up of the following individual drafts: These drafts can be viewed on the IETF Data Tracker site, or via the official OID Directory site and GitHub repositories. At present, the current revisions are set to expire on February 23, 2025. The I-D series, and by necessity this package, is thoroughly EXPERIMENTAL. It is not yet approved by the IETF, and thus should NEVER be used in any capacity beyond proof-of-concept or work-in-progress efforts.

This package is an abstract, general-use framework supplement. It will aid in the marshaling and unmarshaling of OID registration and registrant (contact) constructs, whether using a proper go-ldap/v3 Entry instance, or through manual assembly. The package can aid in the bidirectional conversion of certain values, such as "dotNotation" and "dn" values, and offers many other useful features in service to the I-D series mentioned above. Implementations which use this package may be of a server-side or client-side nature, or neither. There is no singular use-case for this package. TLDR; it's a nifty toolbox; what you build is what the package serves.

As the terms are defined throughout the OID Directory I-D series, this package is absolutely not a complete DUA, DIT or DSA. While it can serve as a valuable component in such constructs, its current state does not allow drop-in functionality of that nature, nor was this intended. For instance, those designing a compliant RA DUA, per the RADUA I-D, are expected to install and utilize the go-ldap/v3 package on their own terms and in service to their particular environment or infrastructure. This is done to maximize compatibility across the many potential use-cases and directory products, as well as to limit potential security vulnerabilities relating to this package itself. This approach also has the secondary effect of making potential integration efforts much simpler and far less disruptive. TLDR; this package works with go-ldap/v3, but it does NOT import it directly. Do it yourself.

Thanks to the GetOrSetFunc closure type, this package is supremely extensible. Virtually all Registration and Registrant methods -- such as Registration.SetDN or FirstAuthority.POBoxGetFunc -- allow for closure-based behavioral overrides. This allows limitless control over how values manifest during presentation, as well as how they are written to instances of the aforementioned types. For additional information, see the GetOrSetFunc type documentation, as well as the package examples for all methods which allow input of instances of this type. See also the next section regarding storage space considerations with regards to especially -- and unnecessarily -- large values. TLDR; Control value I/O using a closure signature of "func(...any) (any, error)" (GetOrSetFunc) for any "Set<X>" or "<X>GetFunc" methods.

Per Section 2.2.3.4 of the RADUA I-D, this package provides a thread-safe, memory-based Cache facility for use by a client. The primary purpose of this facility is to cache, or store temporarily, all *Registration and *Registrant instances that have either been crafted manually, or marshaled by way of a go-ldap/v3 entry instance. While crude, it can help provide considerable I/O savings in terms of LDAP search requests, which may or may not be transmitted over-the-wire.
Lifespans of cached entries are directed by manual specification (e.g.: by the end user), or by way of a literal or collectively-inherited TTL obtained within the RA DIT or via the appropriate *DITProfile instance as a global fallback. See the aforementioned RA DUA section for details regarding TTL precedence and other mechanics. Use of this facility is not required to comply with the aforementioned specification. Adopters may freely supplant the package-provided Cache with a caching system of their own choosing or design. TLDR; Caching eligible instances reduces network (LDAP) I/O at the expense of memory. You can use the Cache type, or a third-party one, or abstain from caching entirely.

The I-D series offers two (2) directory models in terms of Registration structure and layout, each of which is implemented in this package. The two dimensional model is discussed in Section 3.1.2 of the RADIT I-D. The three dimensional model is discussed in Section 3.1.3 of the RADIT I-D. In most scenarios, use of the three dimensional model is the preferred strategy. TLDR; Use the ThreeDimensional directory model.

The I-D series offers two (2) registrant entry policies, each of which is implemented in this package. Dedicated registrants are covered in Section 3.2.1.1.1 of the RADIT I-D. Combined registrants are briefly covered in Section 3.2.1.1.2 of the RADIT I-D. In most scenarios, use of dedicated registrants is the preferred strategy. TLDR; Use *Registrant instances instead of "combining" registrant content with *Registration instances (in-line).

As stated in Section 3.2.1.1.1 of the RADIT I-D, it is possible to forego use of the draft-based authority types, such as "sponsorCommonName", in favor of the traditional "cn" type. This logic may extend to either "Combined" or "Dedicated" Registrant Policies. See the DITProfile.UseAltAuthorityTypes method for a means of enabling this behavior. Note there are caveats with either standpoint, and thus the reader is advised to review the aforementioned section of the draft to ensure they understand the ramifications of their decision. Please also note it is inadvisable to change this value without a good reason, and inappropriate alteration will result in degraded client behavior and likely a deviation from the established content policies enforced within the directory. You have been warned. See the FirstAuthorityAltAttributeTypes, CurrentAuthorityAltAttributeTypes and SponsorAltAttributeTypes map variables for a complete list of the types that are -- and are not -- subject to the influence of the aforementioned method. TLDR; You may use, for example, "cn" instead of "sponsorCommonName" ... but there are caveats.

This package makes conversion (in either direction) between go-ldap/v3 Entry and *Registration or *Registrant instances a breeze! When unmarshaling FROM an instance of go-ldap/v3 Entry TO an instance of *Registration, rather than using the go-ldap/v3 Entry.Unmarshal method directly, simply feed the method to *Registration.Marshal to achieve the same effect: This is necessary because the go-ldap/v3 Entry.Unmarshal method only supports a limited number of struct field value types. To get around this issue, radir simply performs independent marshaling upon any individual struct components within the destination instance (*Registration). In other words, if there are four fields that contain struct values, each of these fields is marshaled independently.
This ensures that all of the needed attribute values are collected from the source go-ldap/v3 Entry instance. When unmarshaling FROM an instance of *Registration (or *Registrant) TO an instance of go-ldap/v3 Entry, simply use the Registration.Unmarshal (or Registrant.Unmarshal) method. Feed the output to the go-ldap/v3 NewEntry function: TLDR; Excellent marshal and unmarshal features. And while go-ldap/v3 Entry.Unmarshal is very limited, we have a most suitable workaround: don't "use" it, let us handle it for you. OIDs are virtually infinite in size. Large pools of sibling registrations can be exceedingly difficult to navigate manually; the sequence of number forms may not be contiguous, and there is no guarantee the entries which bear these values will be ordered correctly in directory search results. To that end, the "spatialContext" AUXILIARY class defined within the I-D series is implemented within this package as the *Spatial struct type. Use of this type can help mitigate some of this tedium by allowing any given registration entry to bear direct DN-based references to other spatially-relevant registrations. Specifically, this produces an abstraction of directional movement in the following contexts: Non-collective *Spatial attribute types may be set manually, or they may be present within entries marshaled into Registration instances as literal or collective values. Collective values are not meant for manual assignment, thus no related "set" methods exist in that regard. Like virtually all other methods in this package, the relevant *Spatial methods allow for GetOrSetFunc closure use, thereby letting the user enhance the behavior of instances of this type in a variety of ways: TLDR; RA DIT navigation with a "🕹️" duct-taped on to it.
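A heavily hedged sketch of the round trip described above. Only the go-ldap/v3 calls (Entry.Unmarshal, ldap.NewEntry) are known to exist as written; the radir method shapes (Registration.Marshal accepting the entry's Unmarshal method, Registration.Unmarshal producing attribute values for NewEntry, and a DN accessor) are assumptions drawn from this description.

    // entry is a *ldap.Entry from a search result; reg is a *radir.Registration.

    // FROM Entry TO *Registration: feed Entry.Unmarshal to Registration.Marshal
    // (assumed signature) instead of calling Entry.Unmarshal directly.
    if err := reg.Marshal(entry.Unmarshal); err != nil {
        // handle error
    }

    // FROM *Registration TO Entry: feed Registration.Unmarshal's output to
    // go-ldap/v3's NewEntry (assumed DN accessor and attribute output).
    newEntry := ldap.NewEntry(reg.DN(), reg.Unmarshal())
    _ = newEntry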
Package kyoto was made for creating fast, server-side frontends while avoiding vanilla templating downsides. It tries to address complexities in the frontend domain like responsibility separation, components structure, asynchronous loading and hassle-free dynamic layout updates. These issues are common for frontends written with Go.

The library provides you with primitives for page and component creation, state and rendering management, dynamic layout updates (with external packages integration), utility functions and asynchronous components out of the box. Still, it bundles minimal dependencies and tries to utilize built-ins as much as possible.

You would probably want to opt out from this library in a few cases: if you're not ready for drastic API changes between major versions, you want to develop an SPA/PWA and/or complex client-side logic, or you're just feeling OK with your current setup. Please don't compare kyoto with popular JS libraries like React, Vue or Svelte. I know you will have such a desire, but most likely you will be wrong. Use cases and underlying principles are just too different.

If you want to get an idea of what a typical static component would look like, here's some sample code. It's very ascetic and simplistic, as we don't want to overload you with implementation details. Markup is also not included here (it's just the well-known `html/template`). For details, please check the project's website at https://kyoto.codes. Also, you may check the library index to explore available sub-packages and https://pkg.go.dev for Go'ish documentation style. We don't want you to deal with boilerplate code on your own, so you can proceed with our simple starter project. Feel free to use it as an example for your own setup.

Components are a common approach for modern libraries to manage frontend parts. Kyoto's components try to be mostly independent (but configurable) parts of the project. To create a component, it is enough to implement component.Component. It's a function, a context receiver, which returns a component state. State is an implementation of component.State, which is easy to implement by nesting one of the provided state implementations (options are described later). Each component becomes a part of the page or a top-level component, which executes the component function asynchronously and gets a state future object. In that way your components execute in a non-blocking way. Pages are just top-level components, where you can configure rendering and page related stuff.

Stateful components are pretty similar to stateless ones, but they actually implement the marshal/unmarshal interface instead of mocking it. You have multiple state options to choose from: universal or server. Universal state is a state that can be marshalled and unmarshalled both on server and client. It's a common state option without functionality limitations. On the other hand, the whole state must be sent and received, which imposes some limitations on the state size. Server state can be marshalled and unmarshalled only on the server. It's a good option for components that are not supposed to be updated on the client side (e.g. no inputs). Also, it's a good option for components with lots of state data.

Sometimes you may want to pass some arguments to the component. It's easy to do by wrapping the component with an additional function. You have access to the context inside the component. It includes request and response objects, as well as some other useful stuff like the store.
This library doesn't provide you with routing out of the box. You can use any router you want; the built-in one is not a bad option for basic needs.

Rendering might be tricky, but we're trying to make it as simple as possible. By default, we're using `html/template` as a rendering engine. It's a well-known built-in package, so you don't have to learn anything new. Out of the box we're parsing all templates in the root directory with the `*.html` glob. You can change this behavior with the `TEMPLATE_GLOB` global variable. Don't rely on file names while working with template names; use a `define` entry for each of your components. To provide your components with the ability to be rendered, you have to do some basic steps. First, you have to nest one of the rendering implementations into your component state (e.g. `rendering.Template`). You can customize rendering by providing values to the rendering implementation. If you need to modify these values for the entire project, we recommend looking at the global settings or creating a builder function for the rendering object. By default, the render handler will use the component name as the template name. So, you have to define a template with the same name as your component (not the filename, but the "define" entry). That's enough to be rendered by `rendering.Handler`. For rendering a nested component, use the built-in `template` function and provide a resolved future object as the template argument. Nested components are not obligated to have a rendering implementation if you're using them in this way. As an alternative, you can nest a rendering implementation (e.g. `rendering.Template`) into your nested component. In this way you can use the `render` function to simplify your code. Please don't use this approach heavily for now, as it affects rendering performance.

HTMX is a frontend library that allows you to update your page layout dynamically. It perfectly fits into kyoto, which focuses on components and server side rendering. Thanks to the component structure, there is no need to define separate rendering logic specifically for HTMX. Please check https://htmx.org/docs/#installing for installation instructions. In addition to this, you must register HTMX handlers for your dynamic components. This is a basic example of HTMX usage. Please check https://htmx.org/docs/ for more details. In this example we're defining a form component that updates itself on submit. And this is how you can define a component that will handle this request.

Sometimes it might be useful to have a component state which will persist between requests and will be stored without any actual usage in the client side presentation. This function injects a hidden input field with a serialized state. Let's check how it works on the server side. As a result, we have a component with a persistent state between requests.
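For reference, this is roughly what the minimal static component mentioned earlier looks like: a context receiver that returns a state struct, which nests a rendering implementation. The names used here (component.Context, component.State, rendering.Template) follow this description and the library's sub-package layout, but treat them as assumptions; https://kyoto.codes is authoritative.

    // UUIDState nests a rendering implementation so the component can be
    // rendered by rendering.Handler; stateful variants would also nest a
    // state implementation (universal or server).
    type UUIDState struct {
        rendering.Template

        UUID string
    }

    // UUID is a component: a context receiver that returns a component state.
    // It is executed asynchronously, so work done here does not block other
    // components on the page.
    func UUID(ctx *component.Context) component.State {
        state := &UUIDState{}
        state.UUID = "0000-demo" // fetch or compute the actual value here
        return state
    }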
Package api is the root of the packages used to access Google Cloud Services. See https://godoc.org/google.golang.org/api for a full list of sub-packages. Within api there exist numerous clients which connect to Google APIs, and various utility packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option. All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in https://godoc.org/google.golang.org/api/transport for more details.

Due to the auto-generated nature of this collection of libraries, complete APIs or specific versions can appear or go away without notice. As a result, you should always locally vendor any API(s) that your code relies upon. Google APIs follow semver as specified by https://cloud.google.com/apis/design/versioning. The code generator and the code it produces - the libraries in the google.golang.org/api/... subpackages - are beta. Note that versioning and stability are strictly not communicated through Go modules. Go modules are used only for dependency management.

Many parameters are specified using ints. However, underlying APIs might operate on a finer granularity, expecting int64, int32, uint64, or uint32, all of which have different maximum values. Consequently, specifying an int parameter in one of these clients may result in an error from the API because the value is too large. To see the exact type of int that the API expects, you can inspect the API's discovery doc. A global catalogue pointing to the discovery doc of APIs can be found at https://www.googleapis.com/discovery/v1/apis.

The ForceSendFields field can be found on all Request/Response structs in the generated clients. All of these types have the JSON `omitempty` field tag present on their fields. This means if a type is set to its default value it will not be marshalled. Sometimes you may actually want to send a default value, for instance sending an int of `0`. In this case you can override the `omitempty` feature by adding the field name to the `ForceSendFields` slice. See docs on any struct for more details. This may be used to include empty fields in Patch requests.

The NullFields field can be found on all Request/Response structs in the generated clients. It can be used to send JSON null values for the listed fields. By default, fields with empty values are omitted from API requests because of the presence of the `omitempty` field tag on all fields. However, any field with an empty value appearing in NullFields will be sent to the server as null. It is an error if a field in this list has a non-empty value. This may be used to include null fields in Patch requests.

An error returned by a client's Do method may be cast to a *googleapi.Error or unwrapped to an *apierror.APIError. The https://pkg.go.dev/google.golang.org/api/googleapi#Error type is useful for getting the HTTP status code: The https://pkg.go.dev/github.com/googleapis/gax-go/v2/apierror#APIError type is useful for inspecting structured details of the underlying API response, such as the reason for the error and the error domain, which is typically the registered service name of the tool or product that generated the error:

If an API call returns an Operation, that means it could take some time to complete the work initiated by the API call.
Applications that are interested in the end result of the operation they initiated should wait until the Operation.Done field indicates it is finished. To do this, use the service's Operation client, and a loop, like so:
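As a concrete instance of that loop, the sketch below polls a zonal Compute Engine operation. Compute reports completion via a Status field rather than a Done bool, but the shape is the same for services whose Operation exposes Done.

    package example

    import (
        "context"
        "time"

        compute "google.golang.org/api/compute/v1"
    )

    // waitForZoneOperation re-fetches the operation until the API reports it
    // has finished, then returns the final operation or a polling error.
    func waitForZoneOperation(ctx context.Context, svc *compute.Service, project, zone string, op *compute.Operation) (*compute.Operation, error) {
        for op.Status != "DONE" {
            time.Sleep(2 * time.Second)
            var err error
            op, err = svc.ZoneOperations.Get(project, zone, op.Name).Context(ctx).Do()
            if err != nil {
                return nil, err
            }
        }
        return op, nil
    }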
Package micro is a pluggable framework for microservices
Package smpp implements SMPP protocol v3.4. It allows easier creation of SMPP clients and servers by providing utilities for PDU and session handling. In order to do any kind of interaction you first need to create an SMPP Session (https://godoc.org/github.com/majiddarvishan/smpp#Session). The Session is the main carrier of the protocol and enforcer of the specification rules. A naked session can be created with: But it's much more convenient to use helpers that do the binding with the remote SMSC and return a session prepared for sending: Once you have the session it can be used for sending PDUs to the bound peer. A session that is no longer used must be closed: If you want to handle incoming requests to the session, specify an SMPPHandler in the session configuration when creating a new session, similarly to http.Handler from the net/http package: Detailed examples for SMPP client and server can be found in the examples dir.
Package nbd implements the NBD network protocol. You can find a full description of the protocol at https://sourceforge.net/p/nbd/code/ci/master/tree/doc/proto.md

This package implements both the client and the server side of the protocol, as well as (on Linux) utilities to hook up the kernel NBD client to use a server as a block device. The protocol is split into two phases: the handshake phase, which allows the client and server to negotiate their respective capabilities and what export to use, and the transmission phase, for actually reading/writing to the block device.

The client side of the handshake is done with the Client type. Its methods can be used to list the exports a server provides and their respective capabilities. Its Go method enters the transmission phase. The returned Export can then be passed to Configure (Linux only) to hook it up to an NBD device (/dev/nbdX). The server side combines both handshake and transmission phase into the Serve or ListenAndServe functions. The user is expected to implement the Device interface to serve actual reads/writes. Under Linux, the Loopback function serves as a convenient way to use a given Device as a block device.
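As a sketch of the server side described above, the block below serves a toy in-memory disk. The Device interface is assumed here to be satisfied by ReadAt/WriteAt-style methods, and the ListenAndServe arguments are likewise assumptions; check the package's actual definitions before relying on this shape.

    // memDevice is a toy in-memory block device.
    type memDevice struct {
        data []byte
    }

    func (d *memDevice) ReadAt(p []byte, off int64) (int, error)  { return copy(p, d.data[off:]), nil }
    func (d *memDevice) WriteAt(p []byte, off int64) (int, error) { return copy(d.data[off:], p), nil }

    // Serve the device to NBD clients (assumed entry point and arguments):
    //
    //     err := nbd.ListenAndServe("tcp", ":10809", &memDevice{data: make([]byte, 1<<20)})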
Package diderot provides a set of utilities to implement an xDS control plane in Go. Namely, it provides two core elements: The ADSServer is an implementation of the xDS protocol's various features. It implements both the Delta and state-of-the-world variants, but abstracts this away completely by only exposing a single entry point: the ResourceLocator. When the server receives a request (be it Delta or SotW), it will first check if the requested type is supported, whether it is an ACK (or a NACK), then invoke, if necessary, the corresponding subscription methods on the ResourceLocator. The locator is simply in charge of invoking Notify on the handler whenever the resource changes, and the server will relay that resource update to the client using the corresponding response type. This makes it very easy to implement an xDS control plane without needing to worry about the finer details of the xDS protocol. Most ResourceLocator implementations will likely be a series of Cache instances for the corresponding supported types, which implements the semantics of Subscribe and Resubscribe out of the box. However, as long as the semantics are respected, implementations may do as they please. For example, a common pattern is listed in the xDS spec: instead of subscribing to a backing Cache with the wildcard subscription, the said "business logic" can be implemented in the ResourceLocator and wildcard subscriptions can be transformed into an explicit set of resources.

This type is the core building block provided by this package. It is effectively a map from resource name to ads.Resource definitions. It provides a way to subscribe to them in order to be notified whenever they change. For example, the ads.Endpoint type (aka "envoy.config.endpoint.v3.ClusterLoadAssignment") contains the set of IPs that back a specific ads.Cluster ("envoy.config.cluster.v3.Cluster") and is the final step in the standard LDS -> RDS -> CDS -> EDS Envoy flow. The Cache will store the Endpoint instances that back each cluster, and Envoy will be able to subscribe to the ads.Endpoint resource by providing the correct name when subscribing. See diderot.Cache.Subscribe for additional details on the subscription model. It is safe for concurrent use as its concurrency model is per-resource. This means different goroutines can modify different resources concurrently, and goroutines attempting to modify the same resource will be synchronized.

The cache supports a notion of "priority". Concretely, this feature is intended to be used when a resource definition can come from multiple sources. For example, if resource definitions are being migrated from one source to another, it would be sane to always use the new source if it is present, otherwise fall back to the old source. This would be as opposed to simply picking whichever source defined the resource most recently, as it would mean the resource definition cannot be relied upon to be stable. NewPrioritizedCache returns a slice of instances of their respective types. The instances all point to the same underlying cache, but at different priorities, where instances that appear earlier in the slice have a higher priority than those that appear later. If a resource is defined at priorities p1 and p2 where p1 is a higher priority than p2, subscribers will see the version that was defined at p1. If the resource is cleared at p1, the cache will fall back to the definition at p2.
This means that a resource is only ever considered fully deleted if it is cleared at all priority levels. The reason a slice of instances is returned rather than adding a priority parameter to each function on Cache is to avoid complicated configuration or simple bugs where a resource is being set at an unintended or invalid priority. Instead, the code path where a source is populating the cache simply receives a reference to the cache and starts writing to it. If the priority of a source changes in subsequent versions, it can be handled at initialization/startup instead of requiring any actual code changes to the source itself. The notion of glob collections defined in the TP1 proposal is supported natively in the Cache. This means that if resource names are xdstp:// URNs, they will be automatically added to the corresponding glob collection, if applicable. These resources are still available for subscription by their full URN, but will also be available for subscription by subscribing to the parent glob collection. More details available at diderot.Cache.Subscribe, ads.ParseGlobCollectionURL and ads.ExtractGlobCollectionURLFromResourceURN.
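To make the priority fallback concrete, here is a tiny self-contained illustration in plain Go (deliberately not diderot's API): slot 0 is the highest priority, a set value shadows lower slots, and clearing it falls back instead of deleting the resource.

    package main

    import "fmt"

    // lookup returns the highest-priority definition still set, or "" if the
    // resource is cleared at every priority (i.e. fully deleted).
    func lookup(slots []*string) string {
        for _, v := range slots {
            if v != nil {
                return *v
            }
        }
        return ""
    }

    func main() {
        oldSrc, newSrc := "old-definition", "new-definition"
        slots := []*string{nil, &oldSrc} // only the old source defines it
        fmt.Println(lookup(slots))       // old-definition

        slots[0] = &newSrc // the new source takes over
        fmt.Println(lookup(slots)) // new-definition

        slots[0] = nil // clearing the new source falls back, it does not delete
        fmt.Println(lookup(slots)) // old-definition
    }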
Package helper: the common project provides commonly used helper utility functions, custom utility types, and third party package wrappers. The common project helps code reuse and faster composition of logic without having to delve into commonly recurring code logic and related testing.

common project source directories and brief descriptions:

+ /ascii = helper types and/or functions related to ascii manipulations.
+ /crypto = helper types and/or functions related to encryption, decryption, hashing, such as rsa, aes, sha, tls etc.
+ /csv = helper types and/or functions related to csv file manipulations.
+ /rest = helper types and/or functions related to http rest api GET, POST, PUT, DELETE actions invoked from client side.
+ /tcp = helper types providing wrapped tcp client and tcp server logic.
- /wrapper = wrappers provide a simpler usage path to third party packages, as well as adding additional enhancements.

/helper-conv.go = helpers for data conversion operations.
/helper-db.go = helpers for database data type operations.
/helper-emv.go = helpers for emv chip card related operations.
/helper-io.go = helpers for io related operations.
/helper-net.go = helpers for network related operations.
/helper-num.go = helpers for numeric related operations.
/helper-other.go = helpers for misc. uncategorized operations.
/helper-reflect.go = helpers for reflection based operations.
/helper-regex.go = helpers for regular expression related operations.
/helper-str.go = helpers for string operations.
/helper-struct.go = helpers for struct related operations.
/helper-time.go = helpers for time related operations.
/helper-uuid.go = helpers for generating globally unique ids.
Package sshego is a Go library that does secure port forwarding over ssh. Also `gosshtun` is a command line utility included here that demonstrates use of the library, and may be useful standalone. The intent of having a Go library is so that it can be used to secure (via SSH tunnel) any other traffic that your Go application would normally have to do over cleartext TCP. While you could always run a tunnel as a separate process, by running the tunnel in process with your application, you know the tunnel is running when the process is running. It's just simpler to administer; only one thing to start instead of two. Also this is much simpler, and much faster, than using a virtual private network (VPN). For a speed comparison, consider [1] where SSH is seen to be at least 2x faster than OpenVPN. [1] http://serverfault.com/questions/653211/ssh-tunneling-is-faster-than-openvpn-could-it-be The sshego library typically acts as an ssh client, but also provides options to support running an embedded sshd server daemon. Port forwarding is the most typical use of the client, and this is the equivalent of using the standalone `ssh` client program and giving the `-L` and/or `-R` flags. If you only trust the user running your application and not your entire host, you can further restrict access by using either DialConfig.Dial() for a direct-tcpip connection, or by using the unix-domain-socket support. For example, is equivalent to with the addendum that `gosshtun` requires the use of a passwordless private `-key` file, and will never prompt you for a password at the keyboard. This makes it ideal for embedding inside your application to secure your (e.g. mysql, postgres, other cleartext) traffic. As many connections as you need will be multiplexed over the same ssh tunnel. We check the sshd server's host key. We prevent MITM attacks by only allowing new servers if `-new` is given. You should give `-new` only once at setup time. Then the lack of `-new` can protect you on subsequent runs, because the server's host key must match what we were given the first time. means the following two network hops will happen, when a local browser connects to localhost:8888 where (a) takes place inside the previously established ssh tunnel. Connection (b) takes place over basic, un-adorned, un-encrypted TCP/IP. Of course you could always run `gosshtun` again on the remote host to secure the additional hop as well, but typically -remote is aimed at 127.0.0.1, which will be internal to the remote host itself and so needs no encryption.
Package socksy5 provides a SOCKS5 middle layer and utils for simple request handling. MidLayer implements the middle layer, which accepts client connections in the form of net.Conn (see MidLayer.ServeClient), then wraps client handshakes and requests as structs, letting external code decide whether to accept or reject them, which kind of subnegotiation to use, etc. This provides advantages when you need multi-homed BND or UDP ASSOCIATION processing, custom subnegotiation and encryption, or attaching special connections to CONNECT requests. Besides that, socksy5 also provides Connect, Binder and Associator as simple handlers for CONNECT, BND and UDP ASSOC requests. Listen is also provided as a simple listening util which passes net.Conn to MidLayer automatically. They exist for ease of use if you want to set up a SOCKS5 server fast, thus they only have basic features. You can handle handshakes and requests yourself if they don't meet your requirements. First pass a net.Conn to a MidLayer instance, then MidLayer will begin communicating with the client. When the client begins handshakes or sends requests, MidLayer will emit Handshake, ConnectRequest, BindRequest and AssocRequest via channels. Call methods on them to decide which kind of authentication to use, whether to accept or reject, and so on. Logs are emitted via channels too. See MidLayer.LogChan, MidLayer.HandshakeChan, MidLayer.RequestChan. Users of this package should read Request, as it contains general info about different types of requests. socksy5 provides limited implementations of authentication methods, and will for quite a long time. MidLayer does relay TCP traffic, but it doesn't dial outbound or relay UDP traffic.
Package gpoll manages long-polling. Long-polling is a means of near-realtime data transfer that utilizes keep-alive requests to allow clients to receive data as it becomes available. Notable uses for long-polling include push notification and chat clients. This package has the aim of making message distribution as quick as possible. The way this is accomplished in gpoll is by utilizing coroutines (goroutines) and tracking them so that coroutines are automatically created and terminated as necessary based on the number of clients connected and your configuration. To use gpoll, you first need to set up a Broadcaster object. The Broadcaster object has several fields you can tweak to tune the behavior of the Broadcaster. For instance, the ClientBufferSize dictates how many messages you will allow each client buffer to store. However, because client removal is automated, the package will sense when a buffer is full, then remove that client since the program assumes this means the client is no longer listening. If you wish to just drop in the package and use it right away, you can just have the server use the DefaultBroadcaster. This is already available and by calling the ListenAndBroadcast() function in the global context, your server will default to this object and you're ready to go. An example of such a setup is: It's as easy as that. Total reset. Okay, this is probably something that should never be done unless absolutely necessary, but if you need to do a full reset and kick all your users, just call the CleanUp() function. Since the CleanUp function takes an argument in the form of an int slice made up of the INDICES (not IDs) of the goroutines present within a Broadcaster, just pass in a slice containing all indices of the Broadcaster.Routines slice. This would look like this: I'll add in support for individual client removal, but you can even implement your own client removal since the number before the '$' in a client ID is the goroutine ID. Just search the Routine object that contains that ID for your client within the Clients slice and remove it.
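Reconstructing that full reset from the description above (the Broadcaster, Routines and CleanUp names come from this text; their exact shapes are assumptions): build a slice holding every index of the Routines slice and hand it to CleanUp.

    // b is the Broadcaster you started with ListenAndBroadcast (or the
    // package's DefaultBroadcaster). Collect every goroutine index...
    indices := make([]int, len(b.Routines))
    for i := range indices {
        indices[i] = i
    }
    // ...and pass them all to CleanUp for a full reset, kicking every client.
    b.CleanUp(indices)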
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best-practices.

The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers: The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime.

A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/master/FAQ.md.

Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, and setting up common dependencies (pkg/runtime/inject), like shared caches and clients, as well as managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals).

Controllers (pkg/controller) use events (pkg/event) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases.

Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Response or an error indicating whether to requeue for a second round of processing.

Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific). Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager. Logging (pkg/log) in controller-runtime is done via structured logs, using a log set of interfaces called logr (https://pkg.go.dev/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime. Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these by an HTTP endpoint, and additional metrics may be registered to this Registry as normal. You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup. This example creates a simple application Controller that is configured for ReplicaSets and Pods. * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support. This example creates a simple application Controller that is configured for ReplicaSets and Pods. This application controller will be running leader election with the provided configuration in the manager options. If leader election configuration is not provided, controller runs leader election with default values. Default values taken from: https://github.com/kubernetes/component-base/blob/master/config/v1alpha1/defaults.go * defaultLeaseDuration = 15 * time.Second * defaultRenewDeadline = 10 * time.Second * defaultRetryPeriod = 2 * time.Second * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support.
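A condensed sketch of the ReplicaSet/Pod example described above, using the builder-style API; exact signatures vary slightly between controller-runtime versions.

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // ReplicaSetReconciler reconciles ReplicaSets and the Pods they own.
    type ReplicaSetReconciler struct {
        client.Client
    }

    func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        rs := &appsv1.ReplicaSet{}
        if err := r.Get(ctx, req.NamespacedName, rs); err != nil {
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }
        // ... reconcile the ReplicaSet against the Pods it owns here ...
        return ctrl.Result{}, nil
    }

    func main() {
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            panic(err)
        }
        err = ctrl.NewControllerManagedBy(mgr).
            For(&appsv1.ReplicaSet{}).
            Owns(&corev1.Pod{}).
            Complete(&ReplicaSetReconciler{Client: mgr.GetClient()})
        if err != nil {
            panic(err)
        }
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            panic(err)
        }
    }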
Package rss-processor provides RSS feed processing for clients. It has the search, fetchrss and serverss packages, which let it fetch and store feeds and provide clients with the ability to search stored feeds and view the results via an HTTP interface. The application uses RSS sources (i.e. links) already stored in the sources table in the database; to process more RSS feeds one only needs to add the source and the application will pick it up on the next wake-up (at the moment every 5 minutes). A MySQL database was chosen as the data backend to leverage its FULL TEXT search capabilities, enabling clients to query for various topics and get results. N.B. the RSS sources added in the DB are not exhaustive and others from the publishers can be added to the table to be processed on the next run. The application has three main packages. The packages are called and composed in the main function to start up the application. First the application fetches feeds for the RSS sources defined in the database and stores them locally. Thereafter, every 5 minutes, the fetchrss package fetches the RSS feeds and updates the local database with newer feeds. The main program also starts an HTTP server (on port 9000) on a separate goroutine to listen to and process client requests to search the currently stored feeds. The server package uses the search package to search and return matching results from the feeds in the database. The search depends on FULL TEXT search for MySQL databases. The documentation can be read using the godoc tool by running the command and thereafter visiting To enable quick running, a docker-compose file is present in the directory and the app can be started with. The docker file creates two containers, one for the MySQL db and one for the RSS processor app (this app). Then you can query to get feeds for a given query The schema is under the sql/ directory and is automatically used by the docker containers to bootstrap the DB.
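As a rough illustration of the FULL TEXT search mentioned above, a query in the search package could look something like the sketch below. The feeds table and its column names here are hypothetical stand-ins for illustration only, not the actual schema from the sql/ directory, and the query assumes a FULLTEXT index over the matched columns.

    package search

    import (
        "context"
        "database/sql"
    )

    // Feed is an illustrative result row; the real schema lives under sql/.
    type Feed struct {
        Title, Link, Description string
    }

    // Search issues a MySQL FULL TEXT query using MATCH ... AGAINST.
    func Search(ctx context.Context, db *sql.DB, query string) ([]Feed, error) {
        rows, err := db.QueryContext(ctx,
            `SELECT title, link, description
               FROM feeds
              WHERE MATCH(title, description) AGAINST (? IN NATURAL LANGUAGE MODE)`,
            query)
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var out []Feed
        for rows.Next() {
            var f Feed
            if err := rows.Scan(&f.Title, &f.Link, &f.Description); err != nil {
                return nil, err
            }
            out = append(out, f)
        }
        return out, rows.Err()
    }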
The socketio package is a simple abstraction layer for different web browser-supported transport mechanisms. It is fully compatible with the Socket.IO client-side JavaScript socket API library by LearnBoost Labs (http://socket.io/), but through custom codecs it might fit other client implementations too. It (together with LearnBoost's client-side libraries) provides an easy way for developers to access the most popular browser transport mechanisms today: multipart- and long-polling XMLHttpRequests, HTML5 WebSockets and forever-frames. The socketio package works hand-in-hand with the standard http package by plugging itself into a configurable ServeMux. It has a callback-style API for handling connection events. The callbacks are:

    - SocketIO.OnConnect
    - SocketIO.OnDisconnect
    - SocketIO.OnMessage

Other utility methods include:

    - SocketIO.ServeMux
    - SocketIO.Broadcast
    - SocketIO.BroadcastExcept
    - SocketIO.GetConn
    - Conn.Send

Each new connection will be automatically assigned a unique session id, and using those the clients can reconnect without losing messages: the server persists clients' pending messages (until some configurable point) if they can't be immediately delivered. All writes through Conn.Send are by design asynchronous. Finally, the actual format on the wire is described by a separate Codec. The default codecs (SIOCodec and SIOStreamingCodec) are compatible with LearnBoost's Socket.IO client. For example, here is a simple chat server:
Package radish is a stateless asynchronous task queue and handler framework. Radish is designed to maximize the resources of a single node by being able to flexibly increase and decrease the number of worker goroutines that handle tasks. A radish server allows users to scale the number of workers that handle generic tasks and to add tasks to the queue, and it reports metrics to Prometheus for easy tracking and management. Radish also provides a CLI program for interacting with servers that are running the radish service. Radish is intended to be used as a framework to create asynchronous task handling services that do not rely on an intermediate message broker like RabbitMQ or Redis. The statelessness of Radish makes it much simpler to use, but it also does not guarantee fault tolerance in task handling. It is up to the application using Radish to determine how to handle task scheduling and timeouts as well as success and failure callbacks. Applications do this by defining task handlers that implement the Task interface and registering them with the radish server. Tasks can then be queued using the Delay method or by submitting a Queue request to the API server. On success or failure, the worker will call one of the handler's callback methods and then move on to the next task. A task handler is implemented by defining a struct that implements the Task interface and registering it with the Radish task queue. Custom tasks must specify a Name method that uniquely identifies the type of task it is (which is also used when queueing tasks) as well as a Handle method. The Handle method must accept a uuid, which describes the future being handled (in case the application wants to implement statefulness), as well as generic parameters as a byte slice. We have chosen []byte for parameters so that applications can define any serialization format they choose, e.g. json or protobuf. Task handlers may also implement two callbacks: Success and Failure. Both of these callbacks take parameters that are specific to those methods and must be provided with the task being queued. The Failure method will additionally be passed the error that caused the task to fail. Once we have defined our custom task handlers, we can register them and begin delaying tasks for asynchronous processing. If we have two task handlers, SendEmail and DailyReport, whose names are "sendEmail" and "dailyReport" respectively, then the simplest way we can get started is as follows: When the task queue is created, it immediately launches workers (1 per CPU on the machine) to start handling tasks. You can then delay tasks, which will return the unique id of the future of the task (which you can use for bookkeeping in success or failure). In this example, the tasks are submitted with an email and an address, but no parameters for success or failure handling. More detailed configuration and registration is possible with radish. In the quick start example we submitted a nil configuration as the first argument to New - this allowed us to set reasonable defaults for the radish queue. We can configure it more specifically using the Config object: The config is validated when it is created and any invalid configurations will return an error when the queue is created. We can also manually register tasks with the queue (and register tasks at runtime) as follows: This allows the queue to be dynamic and handle different tasks at different times.
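To make the described flow concrete, the self-contained sketch below mirrors that design: a Task interface with Name and Handle, one worker goroutine per CPU, and a Delay call that returns a future id. It deliberately does not use radish's actual types or signatures (which are not reproduced here); it only illustrates the shape described above.

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    // Task mirrors the handler shape described above: a unique Name used when
    // queueing, and a Handle method receiving the future's id and raw parameters.
    // This is a stand-in for illustration, not radish's actual interface.
    type Task interface {
        Name() string
        Handle(futureID string, params []byte) error
    }

    // SendEmail is a toy handler whose name is "sendEmail".
    type SendEmail struct{}

    func (SendEmail) Name() string { return "sendEmail" }
    func (SendEmail) Handle(futureID string, params []byte) error {
        fmt.Printf("future %s: sending email with params %s\n", futureID, params)
        return nil
    }

    type job struct {
        name, futureID string
        params         []byte
    }

    // queue is a miniature stand-in for a task queue: it registers handlers by
    // name and feeds delayed tasks to one worker goroutine per CPU.
    type queue struct {
        handlers map[string]Task
        jobs     chan job
        nextID   int
    }

    func newQueue() *queue {
        q := &queue{handlers: map[string]Task{}, jobs: make(chan job, 64)}
        for i := 0; i < runtime.NumCPU(); i++ {
            go func() {
                for j := range q.jobs {
                    if t, ok := q.handlers[j.name]; ok {
                        _ = t.Handle(j.futureID, j.params)
                    }
                }
            }()
        }
        return q
    }

    func (q *queue) Register(t Task) { q.handlers[t.Name()] = t }

    func (q *queue) Delay(name string, params []byte) string {
        q.nextID++
        id := fmt.Sprintf("future-%d", q.nextID) // toy id; radish returns a uuid
        q.jobs <- job{name: name, futureID: id, params: params}
        return id
    }

    func main() {
        q := newQueue()
        q.Register(SendEmail{})
        id := q.Delay("sendEmail", []byte(`{"email":"hi","address":"a@example.com"}`))
        fmt.Println("queued:", id)
        time.Sleep(100 * time.Millisecond) // give the worker time to run in this toy example
    }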
It is also possible to scale the number of workers at runtime: The queue can also be scaled and tasks delayed using the Radish service. Radish implements a gRPC API so that remote clients can connect and get the queue status, delay tasks, and scale the number of workers. The simplest way to run this service is as follows: This will serve on the address and port specified in the configuration and block until an interrupt signal is received from the OS, which will shut down the queue. Applications can also manually call: To gracefully shut down the queue, completing any tasks that are in flight and not accepting new tasks, if they run the listener in its own goroutine. Applications that need to specify their own services using gRPC or http servers can manually run the service as follows: The radish CLI command can then be used to access the service and submit tasks. Radish also serves a metrics endpoint that can be polled by Prometheus. Radish keeps track of the following metrics associated with the task queue: Coming soon: If you have your own Prometheus endpoint, you will be able to register Radish metrics manually without serving them in Radish. The radish CLI utility is found in `cmd/radish` and can be installed as follows: This utility allows you to interact with any radish server and can be used to manage your task queue services out of the box. You can view the commands and options using the --help flag. In order to connect to a radish server you need to specify options as follows: This connects radish to a server on port 5356 on the local host without TLS (the -U stands for "unsecure"). Note that you can also use the $RADISH_ENDPOINT and $RADISH_UNSECURE environment variables. The misspelling of "unsecure" is a joke; radish is not insecure, it's just not connecting with encryption. After the connection options are specified you can use a command to interact with the server. For example, to set the number of workers you can use the scale command: To get the status of the server and the currently registered tasks you can use the status command: Finally, once you know the names of the tasks that the radish server is handling, you can queue tasks as follows: The CLI interface is meant to help you get started quickly with Radish task queues without having to write your own interfaces or servers.
Package kvcert is a simple utility that utilizes the azure-sdk-for-go to fetch a Certificate from Azure Key Vault. The certificate can then be used in your Go web server to support TLS communication. A trivial example is below. This example uses the following environment variables:

    KEY_VAULT_NAME:      name of your Azure Key Vault
    KEY_VAULT_CERT_NAME: name of your certificate in Azure Key Vault
    AZURE_TENANT_ID:     azure tenant id (not visible in example, but required by azure-sdk-for-go)
    AZURE_CLIENT_ID:     azure client id (not visible in example, but required by azure-sdk-for-go)
    AZURE_CLIENT_SECRET: azure client secret (not visible in example, but required by azure-sdk-for-go)
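As a hedged illustration of how a fetched certificate is typically used, the sketch below reads the environment variables listed above and wires a PEM certificate and key into a TLS-enabled http.Server. The fetchCertificate helper is a hypothetical placeholder standing in for the kvcert/azure-sdk-for-go call; only the standard-library TLS plumbing is shown concretely.

    package main

    import (
        "context"
        "crypto/tls"
        "log"
        "net/http"
        "os"
    )

    // fetchCertificate is a hypothetical stand-in for the Key Vault lookup; it
    // would use KEY_VAULT_NAME and KEY_VAULT_CERT_NAME (plus the AZURE_*
    // credentials) to return PEM-encoded certificate and key bytes.
    func fetchCertificate(ctx context.Context, vault, name string) (certPEM, keyPEM []byte, err error) {
        // ... call into kvcert / azure-sdk-for-go here ...
        return nil, nil, nil
    }

    func main() {
        vault := os.Getenv("KEY_VAULT_NAME")
        name := os.Getenv("KEY_VAULT_CERT_NAME")

        certPEM, keyPEM, err := fetchCertificate(context.Background(), vault, name)
        if err != nil {
            log.Fatal(err)
        }

        // Standard library plumbing: turn the PEM pair into a tls.Certificate
        // and serve HTTPS with it.
        cert, err := tls.X509KeyPair(certPEM, keyPEM)
        if err != nil {
            log.Fatal(err)
        }

        srv := &http.Server{
            Addr:      ":8443",
            TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                w.Write([]byte("hello over TLS"))
            }),
        }
        // Empty cert/key file paths are allowed because TLSConfig already
        // carries the certificate.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }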
Package sshego is a golang library that does secure port forwarding over ssh. Also `gosshtun` is a command line utility included here that demonstrates use of the library; and may be useful standalone. The intent of having a Go library is so that it can be used to secure (via SSH tunnel) any other traffic that your Go application would normally have to do over cleartext TCP. While you could always run a tunnel as a separate process, by running the tunnel in process with your application, you know the tunnel is running when the process is running. It's just simpler to administer; only one thing to start instead of two. Also this is much simpler, and much faster, than using a virtual private network (VPN). For a speed comparison, consider [1] where SSH is seen to be at least 2x faster than OpenVPN. [1] http://serverfault.com/questions/653211/ssh-tunneling-is-faster-than-openvpn-could-it-be The sshego library typically acts as an ssh client, but also provides options to support running an embedded sshd server daemon. Port forwarding is the most typical use of the client, and this is the equivalent of using the standalone `ssh` client program and giving the `-L` and/or `-R` flags. If you only trust the user running your application and not your entire host, you can further restrict access by using either DialConfig.Dial() for a direct-tcpip connection, or by using the unix-domain-socket support. For example, is equivalent to with the addendum that `gosshtun` requires the use of a passwordless private `-key` file, and will never prompt you for a password at the keyboard. This makes it ideal for embedding inside your application to secure your (e.g. mysql, postgres, other cleartext) traffic. As many connections as you need will be multiplexed over the same ssh tunnel. We check the sshd server's host key. We prevent MITM attacks by only allowing new servers if `-new` is given. You should give `-new` only once at setup time. Then the lack of `-new` can protect you on subsequent runs, because the server's host key must match what we were given the first time. means the following two network hops will happen, when a local browser connects to localhost:8888 where (a) takes place inside the previously established ssh tunnel. Connection (b) takes place over basic, un-adorned, un-encrypted TCP/IP. Of course you could always run `gosshtun` again on the remote host to secure the additional hop as well, but typically -remote is aimed at 127.0.0.1, which will be internal to the remote host itself and so needs no encryption.
Package twilio provides server-side authentication utilities for applications to integrate with the Twilio service. Access Tokens are short-lived tokens that can be used to authenticate Twilio Client SDKs like Video and IP Messaging. See details at https://www.twilio.com/docs/api/rest/access-tokens. Capability Tokens are a secure way of setting up your device to access various features of Twilio, allowing you to add Twilio capabilities to web and mobile applications without exposing your AuthToken in any client-side environment. See details at https://www.twilio.com/docs/api/client/capability-tokens.
Maestro is a SQL-centric tool for orchestrating BigQuery jobs. Maestro also supports data transfers from and to Google Cloud Storage (GCS) and relational databases (presently PostgreSQL and MySQL). Maestro is a "catalog" of SQL statements. A key feature of Maestro is the ability to infer dependencies by examining the SQL, without any additional configuration. Maestro can execute all tasks in the correct order without a manually specified order (i.e. a "DAG"). Execution can be associated with a frequency (cadence) without requiring any cron or cron-like configuration. Maestro is an ever-running service (daemon). It uses PostgreSQL to store the SQL and all other configuration, state and history. The daemon takes great care to maintain all of its state in PostgreSQL so that it can be stopped or restarted without interrupting any in-progress jobs (in most cases). Maestro records all BigQuery job and other history and has a notion of users and groups, which is useful for attributing costs and resource utilization to users and groups. Maestro has a basic web-based user interface implemented in React, though its API can also be used directly. Maestro can notify arbitrary applications of job completion via a simple HTTP request. Maestro also provides a Python client library for a more native Python experience. Maestro integrates with Google OAuth for authentication, Google Sheets for simple exports, GitHub (for SQL revision control) and Slack (for alerts and notifications). Maestro was designed with simplicity as one of its primary goals. It trades the flexibility usually afforded by configurability in various languages for the transparency and clarity achievable by leveraging the declarative nature of SQL. Maestro works best for environments where BigQuery is the primary store of all data for analytical purposes. E.g. the data may be periodically imported into BigQuery from various databases. Once imported, data may be subsequently summarized or transformed via a sequence of BigQuery jobs. The summarized data can then be exported to external databases/applications for additional processing (e.g. SciPy) and possibly be imported back into BigQuery, and so on. Every step of this process can be orchestrated by Maestro without relying on any external scheduling facility such as cron. Below is a listing of all the key concepts with explanations. A table is the central object in Maestro. It always corresponds to a table in BigQuery. Maestro code and documentation use the verb "run" with respect to tables. To "run a table" means to perform whatever action is called for in its configuration and store the result in a BigQuery table. A table is (in most cases) defined by a BigQuery SQL statement. There are three kinds of tables in Maestro. A summary table is produced by executing a BigQuery SQL statement (a Query job). An import table is produced by executing SQL on an external database and importing the result into BigQuery. The SQL statement in this case is intentionally restricted to a primitive which supports only SELECT, FROM, WHERE and LIMIT. This is so as to discourage users from running complex and taxing queries on the database server. The main reason for this SQL statement is to filter out or transform columns; any other processing is best done subsequently in BigQuery. The third kind is a table whose data comes from GCS. The import is triggered via the Maestro API. Such tables are generally used when BigQuery data needs to be processed by an external tool, e.g. SciPy, etc. A job is a BigQuery job.
BigQuery has three types of jobs: query, extract and load. All three types are used in Maestro. These details are internal but should be familiar to developers. A BigQuery query job is executed as part of running a table. A BigQuery extract job is executed as part of running a table, after the query job is complete. It results in one or more extract files in GCS. Maestro provides signed URLs to the GCS files so that external tools require no authentication to access the data. This is also facilitated via the Maestro pythonlib. A BigQuery load job is executed as part of running an import table. It is the last step of the import, after the external database table data has been copied to GCS. A run is a complex process which happens periodically, according to a frequency. For example, if a daily frequency is defined, then Maestro will construct a run once per day, selecting all tables (including import tables) assigned to this frequency, computing the dependency graph and creating jobs for each table. The jobs are then executed in the correct order based on their position in the graph and the number of workers available. Maestro will also assign priority based on the number of child dependencies a table has, thus running the most "important" tables first. PostgreSQL 9.6 or later is required to run Maestro. Building a "production" binary, i.e. with all assets included in the binary itself, requires Webpack. Webpack is not necessary for "development" mode, which uses Babel for transpilation. Download and compile Maestro with "go get github.com/voxmedia/maestro". (Note that this will create a $GOPATH/bin/maestro binary, which is not very useful; you can delete it). From here cd $GOPATH/src/github.com/voxmedia/maestro and go build. You should now have a "maestro" binary in this directory. You can also create a "production" binary by running "make build". This will combine all the javascript code into a single file and pack it and all other assets into the maestro binary itself, so that to deploy you only need the binary and no other files. Create a PostgreSQL database named "maestro". If you name it something other than that, you will need to provide that name to Maestro via the -db-connect flag, which defaults to "host=/var/run/postgresql dbname=maestro sslmode=disable" and should work on most Linux distros. On MacOS the Postgres socket is likely to be in "/private/tmp" and one way to address this is to run "ln -s /private/tmp /var/run/postgresql". Maestro connects to many services and needs credentials for all of them. These credentials are stored in the database, all encrypted using the same shared secret, which must be specified on the command line via the -secret argument. The -secret argument is meant mostly for development; in production it is much more secure to use the -secretpath option pointing to the location of a file containing the secret. From the Google Cloud perspective, it is best to create a project entirely dedicated to Maestro, with the BigQuery and GCS APIs enabled, then create a Service Account (in IAM) dedicated to Maestro, as well as OAuth credentials. The service account will need the BigQuery Editor, Job User and Storage Object Admin roles. Run Maestro like so: ./maestro -secret=whatever where "whatever" is the shared secret you invent and need to remember.
You should now be able to visit the Maestro UI, by default at http://localhost:3000. When you click on the log-in link, since at this point Maestro has no OAuth configuration, you will be presented with a form asking for the relevant info, which you will need to provide. You should then be redirected to the Google OAuth login page. From here on the configuration is stored in the database in encrypted form. As the first user of this Maestro instance, you are automatically marked as "admin", which means you can perform any action. As an admin, you should see the "Admin" menu in the upper right. Click on it and select the "Credentials" option. You now need to populate the credentials. The BigQuery credentials, default dataset and GCS bucket are required, while Git and Slack are optional, but highly recommended. Note that the BigQuery dataset and the GCS bucket are not created by Maestro; you need to create those manually. The GCS bucket is used for data exports, and it is generally a good idea to set the data in it to expire after several days or whatever works for you. If you need to import data from external databases, you can add those credentials under the Admin / Databases menu. You may want to create a frequency (also under the Admin menu). A frequency is how periodic jobs are scheduled in Maestro. It is defined by a period and an offset. The period is passed to the time.Truncate() function, and if the result is 0, this is when a run is triggered. The offset is an offset into the period. E.g. to define a frequency that starts a run at 4am UTC, you need to specify a period of 86400 and an offset of 14400. Note that Maestro needs to be restarted after these configuration changes (this will be fixed later). At this point you should be able to create a summary table with some simple SQL, e.g. "SELECT 'hello' AS world", save it and run it. If it executes correctly, you should be able to see this new table in the BigQuery UI.
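To make the period/offset rule concrete, here is a small stand-alone illustration of the scheduling arithmetic described above (period 86400, offset 14400 firing at 4am UTC). It only demonstrates the documented rule with the standard library; it is not Maestro's own scheduling code, which also runs on its own tick granularity.

    package main

    import (
        "fmt"
        "time"
    )

    // shouldRun reports whether a run should trigger at t for a frequency
    // defined by period and offset, following the rule described above:
    // shift t back by the offset, truncate to the period, and trigger when
    // the remainder is zero.
    func shouldRun(t time.Time, period, offset time.Duration) bool {
        shifted := t.Add(-offset)
        return shifted.Equal(shifted.Truncate(period))
    }

    func main() {
        period := 86400 * time.Second // one day
        offset := 14400 * time.Second // 4 hours into the day

        at4am := time.Date(2024, 1, 2, 4, 0, 0, 0, time.UTC)
        fmt.Println(shouldRun(at4am, period, offset))                // true: 04:00 UTC
        fmt.Println(shouldRun(at4am.Add(time.Hour), period, offset)) // false: 05:00 UTC
    }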
Package smpp implements SMPP protocol v3.4. It allows easier creation of SMPP clients and servers by providing utilities for PDU and session handling. In order to do any kind of interaction you first need to create an SMPP Session (https://godoc.org/github.com/daominah/smpp#Session). The Session is the main carrier of the protocol and enforcer of the specification rules. A naked session can be created with: But it's much more convenient to use helpers that do the binding with the remote SMSC and return a session prepared for sending: Once you have the session, it can be used for sending PDUs to the bound peer. A session that is no longer used must be closed: If you want to handle incoming requests to the session, specify an SMPPHandler in the session configuration when creating a new session, similarly to HTTPHandler from the _net/http_ package: Detailed examples for SMPP clients and servers can be found in the examples dir.
Package main is a stub for wr's command line interface, with the actual implementation in the cmd package. wr is a workflow runner. You use it to run the commands in your workflow easily, automatically, reliably, with repeatability, and while making optimal use of your available computing resources. wr is implemented as a polling-free in-memory job queue with an on-disk acid transactional embedded database, written in go. Its main benefits over other software workflow management systems are its very low latency and overhead, its high performance at scale, its real-time status updates with a view on all your workflows on one screen, its permanent searchable history of all the commands you have ever run, and its "live" dependencies enabling easy automation of on-going projects. Start up the manager daemon, which gives you a url you can view the web interface on: In addition to the "local" scheduler, which will run your commands on all available cores of the local machine, you can also have it run your commands on your LSF cluster or in your OpenStack environment (where it will scale the number of servers needed up and down automatically). Now, stick the commands you want to run in a text file and: Arbitrarily complex workflows can be formed by specifying command dependencies. Use the --help option of `wr add` for details. wr's core is implemented in the queue package. This is the in-memory job queue that holds commands that still need to be run. Its multiple sub-queues enable certain guarantees: a given command will only get run by a single client at any one time; if a client dies, the command will get run by another client instead; if a command cannot be run, it is buried until the user takes action; if a command has a dependency, it won't run until its dependencies are complete. The jobqueue package provides client+server code for interacting with the in-memory queue from the queue package, and by storing all new commands in an on-disk database, provides an additional guarantee: that (dynamic) workflows won't break because a job that was added got "lost" before it got run. It also retains all completed jobs, enabling searching through of past workflows and allowing for "live" dependencies, triggering the rerunning of previously completed commands if their dependencies change. The jobqueue package is also what actually does the main "work" of the system: the server component knows how many commands need to be run and what their resource requirements (memory, time, cpus etc.) are, and submits the appropriate number of jobqueue runner clients to the job scheduler. The jobqueue/scheduler package has the scheduler-specific code that ensures that these runner clients get run on the configured system in the most efficient way possible. Eg. for LSF, if we have 10 commands that need 2GB of memory to run, we will submit a job array of size 10 with 2GB of memory reservation to LSF. The most limited (and therefore potentially least contended) queue capable of running the commands will be chosen. For OpenStack, the cheapest server (in terms of cores and memory) that can run the commands will be spawned, and once there is no more work to do on those servers, they get terminated to free up resources. The cloud package implements methods for interacting with cloud environments such as OpenStack. The corresponding jobqueue/scheduler package uses these methods to do their work. The static subdirectory contains the html, css and javascript needed for the web interface. 
See jobqueue/serverWebI.go for how the web interface backend is implemented. The internal package contains general utility functions, and most notably config.go holds the code for how the command line interface deals with config options.
This is the official Go SDK for Oracle Cloud Infrastructure Refer to https://github.com/erikcai/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/erikcai/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example belows creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout More examples can be found in the SDK Github repo: https://github.com/erikcai/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to signing custom requests. Related code can be found here: https://github.com/erikcai/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control on the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a white list of headers that it expects to be signed. Therefore, adding an arbitrary header can result in authentications errors. To see a runnable example, see https://github.com/erikcai/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic json objects. The SDK models such objects as interfaces. Further the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic json request handling can be found here: https://github.com/erikcai/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/erikcai/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling request and responses. 
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The below are possible values for the "OCI_GO_SDK_DEBUG" variable 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above then default logging level is "info". If the environment variable is not present then no logging messages are emitted. The default destination for logging is Stderr and if you want to output log to a file you can set via environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The below are possible values 1. "file" or "f" enables all logging output saved to file 2. "combine" or "c" enables all logging output to both stderr and file You can also customize the log file location and name via "OCI_GO_SDK_LOG_FILE" environment variable, the value should be the path to a specific file If this environment variable is not present, the default location will be the project root path Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there's network issue etc... This can be accomplished by using the RequestMetadata.RetryPolicy. You can find the examples here: https://github.com/erikcai/oci-go-sdk/blob/master/example/example_retry_test.go The GO SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways: 1. Configuring environment variable as described here https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client In order to modify the underlying Transport struct in HttpClient, you can do something similar to (sample code for audit service client): Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is a modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic json response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic json response. Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub https://github.com/erikcai/oci-go-sdk Licensing information available at: https://github.com/erikcai/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/erikcai/oci-go-sdk/releases.atom Please refer to this link: https://github.com/erikcai/oci-go-sdk#help
Package codes defines the canonical error codes used by gRPC. It is consistent across various languages. Package keepalive defines configurable parameters for point-to-point health checks. Package metadata defines the structure of the metadata supported by the gRPC library. Please refer to https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md for more information about custom-metadata. Package peer defines various peer information associated with RPCs and corresponding utils. Package stats is for collecting and reporting various network and RPC stats. This package is for monitoring purposes only. All fields are read-only. All APIs are experimental. Package status implements errors returned by gRPC. These errors are serialized and transmitted on the wire between server and client, and allow for additional data to be transmitted via the Details field in the status proto. gRPC service handlers should return an error created by this package, and gRPC clients should expect a corresponding error to be returned from the RPC call. This package upholds the invariants that a non-nil error may not contain an OK code, and an OK code must result in a nil error. Package http2 defines and implements a message-oriented communication channel to complete various transactions (e.g., an RPC).
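For instance, a handler built on the status and codes packages typically looks like the sketch below; the server struct and the request and response types here are placeholders rather than generated protobuf types.

    package main

    import (
        "context"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // Placeholder request/response types standing in for generated protobuf messages.
    type GetUserRequest struct{ ID string }
    type User struct{ ID string }

    type userServer struct{}

    // GetUser returns errors created by the status package, as recommended above.
    func (s *userServer) GetUser(ctx context.Context, req *GetUserRequest) (*User, error) {
        if req.ID == "" {
            return nil, status.Error(codes.InvalidArgument, "id must not be empty")
        }
        return &User{ID: req.ID}, nil
    }

    // isNotFound shows the client side: the code is recovered from the returned error.
    func isNotFound(err error) bool {
        s, ok := status.FromError(err)
        return ok && s.Code() == codes.NotFound
    }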
lf is a terminal file manager. Source code can be found in the repository at https://github.com/gokcehan/lf. This documentation can either be read from terminal using 'lf -doc' or online at https://godoc.org/github.com/gokcehan/lf. You can also use the 'doc' command (default '<f-1>') inside lf to view the documentation in a pager. You can run 'lf -help' to see descriptions of command line options. The following commands are provided by lf: The following command line commands are provided by lf: The following options can be used to customize the behavior of lf: The following environment variables are exported for shell commands: The following commands/keybindings are provided by default: The following additional keybindings are provided by default: Configuration files should be located at: Marks file should be located at: History file should be located at: You can configure the default values of the following variables to change these locations: A sample configuration file can be found at https://github.com/gokcehan/lf/blob/master/etc/lfrc.example. This section shows information about builtin commands. Modal commands do not take any arguments, but instead change the operation mode to read their input conveniently, and so they are meant to be assigned to keybindings. Quit lf and return to the shell. Move the current file selection upwards/downwards by one/half a page/full page. Change the current working directory to the parent directory. If the current file is a directory, then change the current directory to it, otherwise, execute the 'open' command. A default 'open' command is provided to call the default system opener asynchronously with the current file as the argument. A custom 'open' command can be defined to override this default. (See also 'OPENER' variable and 'Opening Files' section) Move the current file selection to the top/bottom of the directory. Toggle the selection of the current file or files given as arguments. Reverse the selection of all files in the current directory (i.e. 'toggle' all files). Selections in other directories are not affected by this command. You can define a new command to select all files in the directory by combining 'invert' with 'unselect' (i.e. `cmd select-all :unselect; invert`), though this will also remove selections in other directories. Remove the selection of all files in all directories. Select files that match the given glob. Unselect files that match the given glob. If there are no selections, save the path of the current file to the copy buffer, otherwise, copy the paths of selected files. If there are no selections, save the path of the current file to the cut buffer, otherwise, copy the paths of selected files. Copy/Move files in copy/cut buffer to the current working directory. Clear file paths in copy/cut buffer. Synchronize copied/cut files with the server. This command is automatically called when required. Draw the screen. This command is automatically called when required. Synchronize the terminal and redraw the screen. Load modified files and directories. This command is automatically called when required. Flush the cache and reload all files and directories. Print given arguments to the message line at the bottom. Print given arguments to the message line at the bottom and also to the log file. Print given arguments to the message line at the bottom in red color and also to the log file. Change the working directory to the given argument. Change the current file selection to the given argument. Remove the current file or selected file(s).
Rename the current file using the builtin method. A custom 'rename' command can be defined to override this default. Read the configuration file given in the argument. Simulate key pushes given in the argument. Read a command to evaluate. Read a shell command to execute. (See also 'Prefixes' and 'Shell Commands' sections) Read a shell command to execute piping its standard I/O to the bottom statline. (See also 'Prefixes' and 'Piping Shell Commands' sections) Read a shell command to execute and wait for a key press in the end. (See also 'Prefixes' and 'Waiting Shell Commands' sections) Read a shell command to execute synchronously without standard I/O. Read key(s) to find the appropriate file name match in the forward/backward direction and jump to the next/previous match. (See also 'anchorfind', 'findlen', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Read a pattern to search for a file name match in the forward/backward direction and jump to the next/previous match. (See also 'globsearch', 'incsearch', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Save the current directory as a bookmark assigned to the given key. Change the current directory to the bookmark assigned to the given key. A special bookmark "'" holds the previous directory after a 'mark-load', 'cd', or 'select' command. Remove a bookmark assigned to the given key. This section shows information about command line commands. These should be mostly compatible with readline keybindings. A character refers to a unicode code point, a word consists of letters and digits, and a unix word consists of any non-blank characters. Quit command line mode and return to normal mode. Autocomplete the current word. Autocomplete the current word, then you can press the bound key(s) again to cycle completion options. Autocomplete the current word, then you can press the bound key(s) again to cycle completion options backwards. Execute the current line. Interrupt the current shell-pipe command and return to the normal mode. Go to the next/previous item in the history. Move the cursor to the left/right. Move the cursor to the beginning/end of line. Delete the next character in forward/backward direction. Delete everything up to the beginning/end of line. Delete the previous unix word. Paste the buffer content containing the last deleted item. Transpose the positions of the last two characters/words. Move the cursor by one word in forward/backward direction. Delete the next word in forward direction. Capitalize/uppercase/lowercase the current word and jump to the next word. This section shows information about options to customize the behavior. Character ':' is used as the separator for list options '[]int' and '[]string'. When this option is enabled, find command starts matching patterns from the beginning of file names, otherwise, it can match at an arbitrary position. When this option is enabled, directory sizes show the number of items inside instead of the size of the directory file. The former needs to be calculated by reading the directory and counting the items inside. The latter is directly provided by the operating system and it does not require any calculation, though it is non-intuitive and it can often be misleading. This option is disabled by default for performance reasons. This option only has an effect when 'info' has a 'size' field and the pane is wide enough to show the information.
A thousand items are counted per directory at most, and bigger directories are shown as '999+'. Show directories first above regular files. Draw boxes around panes with box drawing characters. Format string of error messages shown in the bottom message line. File separator used in environment variables 'fs' and 'fx'. Number of characters prompted for the find command. When this value is set to 0, find command prompts until there is only a single match left. When this option is enabled, search command patterns are considered as globs, otherwise they are literals. With globbing, '*' matches any sequence, '?' matches any character, and '[...]' or '[^...]' matches character sets or ranges. Otherwise, these characters are interpreted as they are. Show hidden files. On unix systems, hidden files are determined by the value of 'hiddenfiles'. On windows, only files with hidden attributes are considered hidden files. List of hidden file glob patterns. Patterns can be given as relative or absolute paths. Globbing supports the usual special characters, '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. In addition, if a pattern starts with '!', then its matches are excluded from hidden files. Show icons before each item in the list. By default, only two icons, 🗀 (U+1F5C0) and 🗎 (U+1F5CE), are used for directories and files respectively, as they are supported in the unicode standard. Icons can be configured with an environment variable named 'LF_ICONS'. The syntax of this variable is similar to 'LS_COLORS'. See the wiki page for an example icon configuration. Sets the 'IFS' variable in shell commands. It works by adding the assignment to the beginning of the command string as 'IFS='...'; ...'. The reason is that the 'IFS' variable is not inherited by the shell for security reasons. This method assumes a POSIX shell syntax and so it can fail for non-POSIX shells. This option has no effect when the value is left empty. This option does not have any effect on windows. Ignore case in sorting and search patterns. Ignore diacritics in sorting and search patterns. Jump to the first match after each keystroke during searching. List of information shown for directory items at the right side of pane. Currently supported information types are 'size', 'time', 'atime', and 'ctime'. Information is only shown when the pane width is more than twice the width of information. Send mouse events as input. Show the position number for directory items at the left side of pane. When 'relativenumber' is enabled, only the current line shows the absolute position and relative positions are shown for the rest. Set the interval in seconds for periodic checks of directory updates. This works by periodically calling the 'load' command. Note that directories are already updated automatically in many cases. This option can be useful when there is an external process changing the displayed directory and you are not doing anything in lf. Periodic checks are disabled when the value of this option is set to zero. Show previews of files and directories at the rightmost pane. If the file has more lines than the preview pane, the rest of the lines are not read. Files containing the null character (U+0000) in the read portion are considered binary files and displayed as 'binary'. Set the path of a previewer file to filter the content of regular files for previewing. The file should be executable.
Five arguments are passed to the file, first is the current file name; the second, third, fourth, and fifth are width, height, horizontal position, and vertical position of preview pane respectively. SIGPIPE signal is sent when enough lines are read. If the previewer returns a non-zero exit code, then the preview cache for the given file is disabled. This means that if the file is selected in the future, the previewer is called once again. Preview filtering is disabled and files are displayed as they are when the value of this option is left empty. Set the path of a cleaner file. This file will be called if previewing is enabled, the previewer is set, and the previously selected file had its preview cache disabled. The file should be executable. One argument is passed to the file; the path to the file whose preview should be cleaned. Preview clearing is disabled when the value of this option is left empty. Format string of the prompt shown in the top line. Special expansions are provided, '%u' as the user name, '%h' as the host name, '%w' as the working directory, '%d' as the working directory with a trailing path separator, and '%f' as the file name. Home folder is shown as '~' in the working directory expansion. Directory names are automatically shortened to a single character starting from the left most parent when the prompt does not fit to the screen. List of ratios of pane widths. Number of items in the list determines the number of panes in the ui. When 'preview' option is enabled, the right most number is used for the width of preview pane. Show the position number relative to the current line. When 'number' is enabled, current line shows the absolute position, otherwise nothing is shown. Reverse the direction of sort. Minimum number of offset lines shown at all times in the top and the bottom of the screen when scrolling. The current line is kept in the middle when this option is set to a large value that is bigger than the half of number of lines. A smaller offset can be used when the current file is close to the beginning or end of the list to show the maximum number of items. Shell executable to use for shell commands. On unix, a POSIX compatible shell is required. Shell commands are executed as 'shell shellopts -c command -- arguments'. On windows, '/c' is used instead of '-c' which should work in 'cmd' and 'powershell'. List of shell options to pass to the shell executable. Override 'ignorecase' option when the pattern contains an uppercase character. This option has no effect when 'ignorecase' is disabled. Override 'ignoredia' option when the pattern contains a character with diacritic. This option has no effect when 'ignoredia' is disabled. Sort type for directories. Currently supported sort types are 'natural', 'name', 'size', 'time', 'ctime', 'atime', and 'ext'. Number of space characters to show for horizontal tabulation (U+0009) character. Format string of the file modification time shown in the bottom line. Truncate character shown at the end when the file name does not fit to the pane. Searching can wrap around the file list. Scrolling can wrap around the file list. The following variables are exported for shell commands: These are referred with a '$' prefix on POSIX shells (e.g. '$f'), between '%' characters on Windows cmd (e.g. '%f%'), and with a '$env:' prefix on Windows powershell (e.g. '$env:f'). Current file selection as a full path. Selected file(s) separated with the value of 'filesep' option as full path(s). Selected file(s) (i.e. 
'fs') if there are any selected files, otherwise current file selection (i.e. 'f'). Id of the running client. The value of this variable is set to the current nesting level when you run lf from a shell spawned inside lf. You can add the value of this variable to your shell prompt to make it clear that your shell runs inside lf. For example, with POSIX shells, you can use '[ -n "$LF_LEVEL" ] && PS1="$PS1""(lf level: $LF_LEVEL) "' in your shell configuration file (e.g. '~/.bashrc'). If this variable is set in the environment, use the same value, otherwise set the value to 'start' in Windows, 'open' in MacOS, 'xdg-open' in others. If this variable is set in the environment, use the same value, otherwise set the value to 'vi' on unix, 'notepad' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'less' on unix, 'more' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'sh' on unix, 'cmd' in Windows. The following command prefixes are used by lf: The same evaluator is used for the command line and the configuration file for read and shell commands. The difference is that prefixes are not necessary in the command line. Instead, different modes are provided to read corresponding commands. These modes are mapped to the prefix keys above by default. Characters from '#' to newline are comments and ignored: There are three special commands ('set', 'map', and 'cmd') and their variants for configuration. Command 'set' is used to set an option which can be boolean, integer, or string: Command 'map' is used to bind a key to a command which can be builtin command, custom command, or shell command: Command 'cmap' is used to bind a key to a command line command which can only be one of the builtin commands: You can delete an existing binding by leaving the expression empty: Command 'cmd' is used to define a custom command: You can delete an existing command by leaving the expression empty: If there is no prefix then ':' is assumed: An explicit ':' can be provided to group statements until a newline which is especially useful for 'map' and 'cmd' commands: If you need multiline you can wrap statements in '{{' and '}}' after the proper prefix. Regular keys are assigned to a command with the usual syntax: Keys combined with the shift key simply use the uppercase letter: Special keys are written in between '<' and '>' characters and always use lowercase letters: Angle brackets can be assigned with their special names: Function keys are prefixed with 'f' character: Keys combined with the control key are prefixed with 'c' character: Keys combined with the alt key are assigned in two different ways depending on the behavior of your terminal. Older terminals (e.g. xterm) may set the 8th bit of a character when the alt key is pressed. On these terminals, you can use the corresponding byte for the mapping: Newer terminals (e.g. gnome-terminal) may prefix the key with an escape key when the alt key is pressed. lf uses the escape delaying mechanism to recognize alt keys in these terminals (delay is 100ms). On these terminals, keys combined with the alt key are prefixed with 'a' character: Please note that, some key combinations are not possible due to the way terminals work (e.g. control and h combination sends a backspace key instead). The easiest way to find the name of a key combination is to press the key while lf is running and read the name of the key from the unknown mapping error. 
Mouse buttons are prefixed with 'm' character: Mouse wheel events are also prefixed with 'm' character: The usual way to map a key sequence is to assign it to a named or unnamed command. While this provides a clean way to remap builtin keys as well as other commands, it can be limiting at times. For this reason 'push' command is provided by lf. This command is used to simulate key pushes given as its arguments. You can 'map' a key to a 'push' command with an argument to create various keybindings. This is mainly useful for two purposes. First, it can be used to map a command with a command count: Second, it can be used to avoid typing the name when a command takes arguments: One thing to be careful is that since 'push' command works with keys instead of commands it is possible to accidentally create recursive bindings: These types of bindings create a deadlock when executed. Regular shell commands are the most basic command type that is useful for many purposes. For example, we can write a shell command to move selected file(s) to trash. A first attempt to write such a command may look like this: We check '$fs' to see if there are any selected files. Otherwise we just delete the current file. Since this is such a common pattern, a separate '$fx' variable is provided. We can use this variable to get rid of the conditional: The trash directory is checked each time the command is executed. We can move it outside of the command so it would only run once at startup: Since these are one liners, we can drop '{{' and '}}': Finally note that we set 'IFS' variable manually in these commands. Instead we could use the 'ifs' option to set it for all shell commands (i.e. 'set ifs "\n"'). This can be especially useful for interactive use (e.g. '$rm $f' or '$rm $fs' would simply work). This option is not set by default as it can behave unexpectedly for new users. However, use of this option is highly recommended and it is assumed in the rest of the documentation. Regular shell commands have some limitations in some cases. When an output or error message is given and the command exits afterwards, the ui is immediately resumed and there is no way to see the message without dropping to shell again. Also, even when there is no output or error, the ui still needs to be paused while the command is running. This can cause flickering on the screen for short commands and similar distractions for longer commands. Instead of pausing the ui, piping shell commands connects stdin, stdout, and stderr of the command to the statline in the bottom of the ui. This can be useful for programs following the unix philosophy to give no output in the success case, and brief error messages or prompts in other cases. For example, following rename command prompts for overwrite in the statline if there is an existing file with the given name: You can also output error messages in the command and it will show up in the statline. For example, an alternative rename command may look like this: Note that input is line buffered and output and error are byte buffered. Waiting shell commands are similar to regular shell commands except that they wait for a key press when the command is finished. These can be useful to see the output of a program before the ui is resumed. Waiting shell commands are more appropriate than piping shell commands when the command is verbose and the output is best displayed as multiline. 
Asynchronous shell commands are used to start a command in the background and then resume operation without waiting for the command to finish. Stdin, stdout, and stderr of the command is neither connected to the terminal nor to the ui. One of the more advanced features in lf is remote commands. All clients connect to a server on startup. It is possible to send commands to all or any of the connected clients over the common server. This is used internally to notify file selection changes to other clients. To use this feature, you need to use a client which supports communicating with a UNIX-domain socket. OpenBSD implementation of netcat (nc) is one such example. You can use it to send a command to the socket file: Since such a client may not be available everywhere, lf comes bundled with a command line flag to be used as such. When using lf, you do not need to specify the address of the socket file. This is the recommended way of using remote commands since it is shorter and immune to socket file address changes: In this command 'send' is used to send the rest of the string as a command to all connected clients. You can optionally give it an id number to send a command to a single client: All clients have a unique id number but you may not be aware of the id number when you are writing a command. For this purpose, an '$id' variable is exported to the environment for shell commands. You can use it to send a remote command from a client to the server which in return sends a command back to itself. So now you can display a message in the current client by calling the following in a shell command: Since lf does not have control flow syntax, remote commands are used for such needs. For example, you can configure the number of columns in the ui with respect to the terminal width as follows: Besides 'send' command, there are also two commands to get or set the current file selection. Two possible modes 'copy' and 'move' specify whether selected files are to be copied or moved. File names are separated by newline character. Setting the file selection is done with 'save' command: Getting the file selection is similarly done with 'load' command: There is a 'quit' command to close client connections and quit the server: Lastly, there is a 'conn' command to connect the server as a client. This should not be needed for users. lf uses its own builtin copy and move operations by default. These are implemented as asynchronous operations and progress is shown in the bottom ruler. These commands do not overwrite existing files or directories with the same name. Instead, a suffix that is compatible with '--backup=numbered' option in GNU cp is added to the new files or directories. Only file modes are preserved and all other attributes are ignored including ownership, timestamps, context, links, and xattr. Special files such as character and block devices, named pipes, and sockets are skipped and links are followed. Moving is performed using the rename operation of the underlying OS. For cross-device moving, lf falls back to copying and then deletes the original files if there are no errors. Operation errors are shown in the message line as well as the log file and they do not preemptively finish the corresponding file operation. File operations can be performed on the current selected file or alternatively on multiple files by selecting them first. When you 'copy' a file, lf doesn't actually copy the file on the disk, but only records its name to memory. The actual file copying takes place when you 'paste'. 
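Before moving on to file operations, here is a concrete sketch of these remote commands; the socket path in the first line is an assumption and may differ between versions and platforms:

    # send a command to all clients through the server socket (OpenBSD netcat)
    echo 'send echo hello world' | nc -U /tmp/lf.${USER}.sock

    # the recommended way, using lf itself as the client
    lf -remote 'send echo hello world'

    # inside a shell command, '$id' targets only the current client
    lf -remote "send $id echo hello world"

    # close client connections and quit the server
    lf -remote 'quit'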
Similarly 'paste' after a 'cut' operation moves the file. You can customize copy and move operations by defining a 'paste' command. This is a special command that is called when it is defined instead of the builtin implementation. You can use the following example as a starting point: Some useful things to be considered are to use the backup ('--backup') and/or preserve attributes ('-a') options with 'cp' and 'mv' commands if they support it (i.e. GNU implementation), change the command type to asynchronous, or use 'rsync' command with progress bar option for copying and feed the progress to the client periodically with remote 'echo' calls. By default, lf does not assign 'delete' command to a key to protect new users. You can customize file deletion by defining a 'delete' command. You can also assign a key to this command if you like. Example commands to move selected files to a trash folder and to remove files completely after a prompt are provided in the example configuration file. There are two mechanisms implemented in lf to search a file in the current directory. Searching is the traditional method to move the selection to a file matching a given pattern. Finding is an alternative way to search for a pattern possibly using fewer keystrokes. The searching mechanism is implemented with commands 'search' (default '/'), 'search-back' (default '?'), 'search-next' (default 'n'), and 'search-prev' (default 'N'). You can enable the 'globsearch' option to match with a glob pattern. Globbing supports '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. You can enable the 'incsearch' option to jump to the current match at each keystroke while typing. In this mode, you can either use 'cmd-enter' to accept the search or use 'cmd-escape' to cancel the search. Alternatively, you can also map some other commands with 'cmap' to accept the search and execute the command immediately afterwards. Possible candidates are 'up', 'down' and their variants, 'top', 'bottom', 'updir', and 'open' commands. For example, you can use arrow keys to finish the search with the following mappings: The finding mechanism is implemented with commands 'find' (default 'f'), 'find-back' (default 'F'), 'find-next' (default ';'), and 'find-prev' (default ','). You can disable the 'anchorfind' option to match a pattern at an arbitrary position in the filename instead of the beginning. You can set the number of keys to match using the 'findlen' option. If you set this value to zero, then the keys are read until there is only a single match. Default values of these two options are set to jump to the first file with the given initial. Some options affect both searching and finding. You can disable the 'wrapscan' option to prevent searches from wrapping around at the end of the file list. You can disable the 'ignorecase' option to match cases in the pattern and the filename. This option is automatically overridden if the pattern contains uppercase characters; you can disable the 'smartcase' option to disable this behavior. Two similar options, 'ignoredia' and 'smartdia', are provided to control matching diacritics in latin letters. You can define an 'open' command (default 'l' and '<right>') to configure file opening. This command is only called when the current file is not a directory, otherwise the directory is entered instead.
You can define it just as you would define any other command: It is possible to use different command types: You may want to use either file extensions or mime types from the 'file' command: You may want to use 'setsid' before your opener command to have persistent processes that continue to run after lf quits. The following command is provided by default: You may also use any other existing file openers as you like. Possible options are 'libfile-mimeinfo-perl' (executable name is 'mimeopen'), 'rifle' (ranger's default file opener), or 'mimeo' to name a few. lf previews files on the preview pane by printing the file until the end or the preview pane is filled. This output can be enhanced by providing a custom preview script for filtering. This can be used to highlight source code, list contents of archive files, or view pdf or image files as text, to name a few. For coloring, lf recognizes ansi escape codes. In order to use this feature you need to set the value of the 'previewer' option to the path of an executable file. lf passes the current file name as the first argument and the height of the preview pane as the second argument when running this file. Output of the execution is printed in the preview pane. You may want to use the same script in your pager mapping as well, if any: For the 'less' pager, you may instead utilize the 'LESSOPEN' mechanism so that useful information about the file, such as its full path, can be displayed in the statusline below: Since this script is called for each file selection change, it needs to be as efficient as possible, and this responsibility is left to the user. You may use file extensions to determine the type of file more efficiently compared to obtaining mime types from the 'file' command. Extensions can then be used to match cleanly within a conditional: Another important consideration for efficiency is the use of programs with short startup times for preview. For this reason, 'highlight' is recommended over 'pygmentize' for syntax highlighting. Besides, it is also important that the application processes the file on the fly rather than first reading it into memory and then doing the processing afterwards. This is especially relevant for big files. lf automatically closes the previewer script output pipe with a SIGPIPE when enough lines are read. When everything else fails, you can make use of the height argument to only feed the first portion of the file to a program for preview. Note that some programs may not respond well to SIGPIPE and may exit with a non-zero return code, which disables caching. You may trap SIGPIPE in your preview script to avoid such error propagation: You may also use an existing preview filter as you like. Your system may already come with a preview filter named 'lesspipe'. These filters may have a mechanism to add user customizations as well. See the related documentation for more information. lf changes the working directory of the process to the current directory so that shell commands always work in the displayed directory. After quitting, it returns to the original directory where it was first launched, like all shell programs. If you want to stay in the current directory after quitting, you can use one of the example wrapper shell scripts provided in the repository. There is a special command 'on-cd' that runs a shell command when it is defined and the directory is changed. You can define it just as you would define any other command: If you want to print escape sequences, you may redirect 'printf' output to '/dev/tty'.
The following xterm specific escape sequence sets the terminal title to the working directory: This command runs whenever you change directory but not on startup. You can add an extra call to make it run on startup as well: Note that all shell commands are possible but `%` and `&` are usually more appropriate, as `$` and `!` cause flickers and pauses respectively. lf tries to automatically adapt its colors to the environment. It starts with a default colorscheme and updates colors using values of existing environment variables, possibly by overwriting its previous values. Colors are set in the following order: Please refer to the corresponding man pages for more information about 'LSCOLORS' and 'LS_COLORS'. 'LF_COLORS' is provided with the same syntax as 'LS_COLORS' in case you want to configure colors only for lf but not ls. This can be useful since there are some differences between ls and lf, though one should expect the same behavior for common cases. You can configure lf colors in two different ways. First, you can only configure 8 basic colors used by your terminal and lf should pick up those colors automatically. Depending on your terminal, you should be able to select your colors from a 24-bit palette. This is the recommended approach as colors used by other programs will also match each other. Second, you can set the values of the environment variables mentioned above for fine grained customization. Note that 'LS_COLORS/LF_COLORS' are more powerful than 'LSCOLORS' and they can be used even when GNU programs are not installed on the system. You can combine this second method with the first method for best results. Lastly, you may also want to configure the colors of the prompt line to match the rest of the colors. Colors of the prompt line can be configured using the 'promptfmt' option which can include hardcoded colors as ansi escapes. See the default value of this option to get an idea about how to color this line. It is worth noting that lf uses as many colors as are advertised by your terminal's entry in your system's terminfo or infocmp database; if this is not present, lf will default to an internal database. For terminals supporting 24-bit (or "true") color that do not have a database entry (or one that does not advertise all capabilities), support can be enabled by either setting the '$COLORTERM' variable to "truecolor" or ensuring '$TERM' is set to a value that ends with "-truecolor". Default lf colors are mostly taken from GNU dircolors defaults. These defaults use 8 basic colors and the bold attribute. Default dircolors entries with background colors are simplified to avoid confusion with the current file selection in lf. Similarly, there are only file type matchings, and extension matchings are left out for simplicity. Default values are as follows, given with their matching order in lf: Note that lf first tries matching file names and then falls back to file types. The full order of matchings from most specific to least is as follows: For example, given a regular text file '/path/to/README.txt', the following entries are checked in the configuration and the first one to match is used: Given a regular directory '/path/to/example.d', the following entries are checked in the configuration and the first one to match is used: Note that glob-like patterns do not actually perform glob matching for performance reasons. For example, you can set a variable as follows: Having all entries on a single line can make it hard to read.
You may instead divide it into multiple lines in between double quotes by escaping newlines with backslashes as follows: Having such a long variable definition in a shell configuration file might be undesirable. You may instead put this definition in a separate file and source it in your shell configuration file as follows: See the wiki page for ansi escape codes: https://en.wikipedia.org/wiki/ANSI_escape_code. Icons are configured using the 'LF_ICONS' environment variable. This variable uses the same syntax as 'LS_COLORS/LF_COLORS'. Instead of colors, you should put a single character as the value of each entry. Do not forget to enable the 'icons' option to see the icons. Default values are as follows, given with their matching order in lf: See the wiki page for an example icon configuration: https://github.com/gokcehan/lf/wiki/Icons.
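For instance, such a definition split across lines in a sourced shell file might look like the following; the entries themselves are arbitrary placeholders using the dircolors-style syntax described above:

    # e.g. ~/.config/lf/icons.sh, sourced from your shell configuration
    export LF_ICONS="\
    di=📁:\
    ln=🔗:\
    *.tar=📦:\
    *.txt=📄:\
    "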
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best-practices. The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers: The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime. A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/master/FAQ.md. Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, and setting up common dependencies (pkg/runtime/inject), like shared caches and clients, as well as managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals). Controllers (pkg/controller) use events (pkg/events) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases. Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Response or an error indicating whether to requeue for a second round of processing. Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
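As a rough sketch of how these pieces fit together, the following wires a manager, a builder-constructed controller, and a reconciler. The Reconcile signature shown here is the one used by recent controller-runtime releases (it takes a context and returns a Result); older releases use a different signature, and ReplicaSetReconciler is just an illustrative name:

    package main

    import (
        "context"
        "os"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // ReplicaSetReconciler reconciles the ReplicaSet named by each request.
    type ReplicaSetReconciler struct {
        client.Client
    }

    func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        var rs appsv1.ReplicaSet
        if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {
            // The object may have been deleted; nothing to do in that case.
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }
        // ... reconcile the ReplicaSet against its Pods here ...
        return ctrl.Result{}, nil
    }

    func main() {
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            os.Exit(1)
        }
        // The builder wires ReplicaSet events (and events for Pods they own)
        // to the reconciler.
        if err := ctrl.NewControllerManagedBy(mgr).
            For(&appsv1.ReplicaSet{}).
            Owns(&corev1.Pod{}).
            Complete(&ReplicaSetReconciler{Client: mgr.GetClient()}); err != nil {
            os.Exit(1)
        }
        // Start blocks until the signal handler's context is cancelled.
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            os.Exit(1)
        }
    }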
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific). Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager. Logging (pkg/log) in controller-runtime is done via structured logs, using a set of interfaces called logr (https://godoc.org/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime. Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these via an HTTP endpoint, and additional metrics may be registered to this Registry as normal. You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup. This example creates a simple application Controller that is configured for ReplicaSets and Pods. * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support This example creates a simple application Controller that is configured for ReplicaSets and Pods. This application controller will be running leader election with the provided configuration in the manager options. If leader election configuration is not provided, the controller runs leader election with default values. Default values taken from: https://github.com/kubernetes/apiserver/blob/master/pkg/apis/config/v1alpha1/defaults.go * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support
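For example, wiring the zap-backed logr implementation as the base logger might look like the following sketch; the option helper UseDevMode comes from recent releases of pkg/log/zap and may differ in older ones:

    package main

    import (
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/log/zap"
    )

    func main() {
        // All controller-runtime log output now flows through this logr.Logger.
        ctrl.SetLogger(zap.New(zap.UseDevMode(true)))
        // ... construct the manager and controllers as usual ...
    }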
Package encrypt will encrypt and decrypt data securely using the same password for both operations. It was developed specifically for safe file encryption. WARNING: These functions are not suitable for client-server communication protocols. See details below. The author used VeraCrypt and TrueCrypt as inspirations for the implementation. Unlike these two products, we don't need to encrypt whole dynamic filesystems or hidden volumes, so many steps are greatly simplified. Support was also added for more advanced password hashing, namely adding Argon2 password hashing on top of the PBKDF2 used by VeraCrypt. Just like VeraCrypt and BitLocker (Microsoft), we rely on AES-256 in XTS mode symmetric-key encryption. It's a modern block cipher developed for disk encryption that is a bit less malleable than the more traditional CBC mode. While AES provides fast content encryption, it's not a complete solution. AES keys are fixed-length 256 bits and, unlike user passwords, they must have excellent entropy. To create fixed-length keys with excellent entropy, we rely on password hash functions. These are built to spread the entropy to the full length of the key, which gives ample protection against password brute force attacks. Rainbow table attacks (precalculated hashes) are mitigated with a 512-bit random password salt. The salt can be public, as long as the password stays private. For password hashing, we join a battle-tested algorithm, PBKDF2, with a next-generation password hash: Argon2id. Argon2 helps protect against GPU-based attacks, but it is a very recent algorithm. If flaws are ever discovered in it, we have a fallback algorithm. Settings for both password hash functions are secure and stronger than the usually recommended settings as of 2018. This does mean that our password hashing function is very expensive (benchmarked around 1s on my desktop computer), but this is not usually an issue for tasks such as file encryption or decryption, and the added protection is significant. AES with XTS mode doesn't prevent an attacker from maliciously modifying the encrypted content. To ensure that we catch these cases, we calculate a SHA-512 digest on the plain content and we encrypt it too. Once we decrypt that content, if the header (described below) matches, it's likely (although not 100% certain) that the password is correct. If the header matches but the SHA-512 digest doesn't match, it's likely that the data has been tampered with and we reject it. Finally, decrypting with the AES cipher will always seem to work, whether the password is correct or not. The only difference is that the output will be valid content or garbage. To make the distinction between a bad password and tampered data in a user-friendly way, we include a small header in the plain content ('GOODPW'). (1) These encryption utilities are not suitable as a secure client-server communication protocol, which must deal with additional security constraints. For example, depending on how a server would use it, it could be vulnerable to padding oracle attacks. (2) We store and cache passwords and AES keys in memory, which can then also be swapped to disk by the OS. Encrypter and Decrypter will erase the password and AES key when they are closed explicitly, but this is only a weak defense in depth, so there is an assumption that the attacker doesn't have memory read access. Data Format: We store the salt along with the data. This is because these utilities are geared toward file encryption and it's impractical to store it separately.
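The layered key derivation described above can be sketched with the golang.org/x/crypto packages as follows; the iteration count and Argon2 parameters here are placeholders, not the package's actual settings:

    package main

    import (
        "crypto/rand"
        "crypto/sha512"
        "fmt"

        "golang.org/x/crypto/argon2"
        "golang.org/x/crypto/pbkdf2"
    )

    // deriveKey stretches a user password into a 256-bit AES key using PBKDF2
    // followed by Argon2id, with a 512-bit random salt stored alongside the data.
    func deriveKey(password []byte) (key, salt []byte, err error) {
        salt = make([]byte, 64)
        if _, err = rand.Read(salt); err != nil {
            return nil, nil, err
        }
        // First pass: battle-tested PBKDF2 with SHA-512.
        intermediate := pbkdf2.Key(password, salt, 100000, 32, sha512.New)
        // Second pass: Argon2id on top, for GPU attack resistance.
        key = argon2.IDKey(intermediate, salt, 4, 256*1024, 4, 32)
        return key, salt, nil
    }

    func main() {
        key, salt, err := deriveKey([]byte("correct horse battery staple"))
        if err != nil {
            panic(err)
        }
        fmt.Printf("salt: %x\nkey:  %x\n", salt, key)
    }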
AES: https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
PBKDF2: https://en.wikipedia.org/wiki/PBKDF2
Argon2: https://en.wikipedia.org/wiki/Argon2
VeraCrypt: https://veracrypt.fr
TrueCrypt implementations: http://blog.bjrn.se/2008/01/truecrypt-explained.html
Oracle attack: https://en.wikipedia.org/wiki/Oracle_attack
NIST Digital Security Guidelines: https://pages.nist.gov/800-63-3/sp800-63b.html
The socketio package is a simple abstraction layer for different web browser-supported transport mechanisms. It is fully compatible with the Socket.IO client-side JavaScript socket API library by LearnBoost Labs (http://socket.io/), but through custom codecs it might fit other client implementations too. It (together with LearnBoost's client-side libraries) provides an easy way for developers to access the most popular browser transport mechanisms today: multipart- and long-polling XMLHttpRequests, HTML5 WebSockets, and forever-frames. The socketio package works hand-in-hand with the standard http package by plugging itself into a configurable ServeMux. It has a callback-style API for handling connection events. The callbacks are:

    - SocketIO.OnConnect
    - SocketIO.OnDisconnect
    - SocketIO.OnMessage

Other utility methods include:

    - SocketIO.ServeMux
    - SocketIO.Broadcast
    - SocketIO.BroadcastExcept
    - SocketIO.GetConn
    - Conn.Send

Each new connection is automatically assigned a unique session id, and using these ids clients can reconnect without losing messages: the server persists clients' pending messages (until some configurable point) if they can't be immediately delivered. All writes through Conn.Send are asynchronous by design. Finally, the actual format on the wire is described by a separate Codec. The default codecs (SIOCodec and SIOStreamingCodec) are compatible with LearnBoost's Socket.IO client. For example, here is a simple chat server:
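A very rough sketch of such a chat server follows. Only the method names listed above are taken from this documentation; the import path, the constructor (NewSocketIO), the handler signatures, and the Message accessor used here are assumptions and will likely need adjusting against the real API:

    package main

    import (
        "log"
        "net/http"

        socketio "example.com/socketio" // placeholder import path for the sketch
    )

    func main() {
        sio := socketio.NewSocketIO(nil) // assumed constructor; nil for a default configuration

        sio.OnConnect(func(c *socketio.Conn) {
            sio.Broadcast("someone connected")
        })
        sio.OnDisconnect(func(c *socketio.Conn) {
            sio.BroadcastExcept(c, "someone disconnected")
        })
        sio.OnMessage(func(c *socketio.Conn, msg socketio.Message) {
            // Relay every chat message to all other connected clients.
            sio.BroadcastExcept(c, msg.Data()) // Data() is an assumed accessor
        })

        if err := http.ListenAndServe(":8080", sio.ServeMux()); err != nil {
            log.Fatal("ListenAndServe: ", err)
        }
    }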
lf is a terminal file manager. Source code can be found in the repository at https://github.com/gokcehan/lf. This documentation can either be read from the terminal using 'lf -doc' or online at https://godoc.org/github.com/gokcehan/lf. You can also use the 'doc' command (default '<f-1>') inside lf to view the documentation in a pager. You can run 'lf -help' to see descriptions of command line options. The following commands are provided by lf: The following command line commands are provided by lf: The following options can be used to customize the behavior of lf: The following environment variables are exported for shell commands: The following commands/keybindings are provided by default: The following additional keybindings are provided by default: Configuration files should be located at: Selection file should be located at: Marks file should be located at: History file should be located at: You can configure the default values of the following variables to change these locations: A sample configuration file can be found at https://github.com/gokcehan/lf/blob/master/etc/lfrc.example. This section shows information about builtin commands. Modal commands do not take any arguments, but instead change the operation mode to read their input conveniently, and so they are meant to be assigned to keybindings. Quit lf and return to the shell. Move the current file selection upwards/downwards by one/half a page/full page. Change the current working directory to the parent directory. If the current file is a directory, then change the current directory to it, otherwise, execute the 'open' command. A default 'open' command is provided to call the default system opener asynchronously with the current file as the argument. A custom 'open' command can be defined to override this default. (See also 'OPENER' variable and 'Opening Files' section) Move the current file selection to the top/bottom of the directory. Toggle the selection of the current file or files given as arguments. Reverse the selection of all files in the current directory (i.e. 'toggle' all files). Selections in other directories are not affected by this command. You can define a new command to select all files in the directory by combining 'invert' with 'unselect' (i.e. `cmd select-all :unselect; invert`), though this will also remove selections in other directories. Remove the selection of all files in all directories. Select files that match the given glob. Unselect files that match the given glob. If there are no selections, save the path of the current file to the copy buffer, otherwise, copy the paths of selected files. If there are no selections, save the path of the current file to the cut buffer, otherwise, copy the paths of selected files. Copy/Move files in copy/cut buffer to the current working directory. Clear file paths in copy/cut buffer. Synchronize copied/cut files with server. This command is automatically called when required. Draw the screen. This command is automatically called when required. Synchronize the terminal and redraw the screen. Load modified files and directories. This command is automatically called when required. Flush the cache and reload all files and directories. Print given arguments to the message line at the bottom. Print given arguments to the message line at the bottom and also to the log file. Print given arguments to the message line at the bottom in red color and also to the log file. Change the working directory to the given argument. Change the current file selection to the given argument.
Remove the current file or selected file(s). Rename the current file using the builtin method. A custom 'rename' command can be defined to override this default. Read the configuration file given in the argument. Simulate key pushes given in the argument. Read a command to evaluate. Read a shell command to execute. (See also 'Prefixes' and 'Shell Commands' sections) Read a shell command to execute piping its standard I/O to the bottom statline. (See also 'Prefixes' and 'Piping Shell Commands' sections) Read a shell command to execute and wait for a key press at the end. (See also 'Prefixes' and 'Waiting Shell Commands' sections) Read a shell command to execute asynchronously without standard I/O. Read key(s) to find the appropriate file name match in the forward/backward direction and jump to the next/previous match. (See also 'anchorfind', 'findlen', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Read a pattern to search for a file name match in the forward/backward direction and jump to the next/previous match. (See also 'globsearch', 'incsearch', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Save the current directory as a bookmark assigned to the given key. Change the current directory to the bookmark assigned to the given key. A special bookmark "'" holds the previous directory after a 'mark-load', 'cd', or 'select' command. Remove a bookmark assigned to the given key. This section shows information about command line commands. These should be mostly compatible with readline keybindings. A character refers to a unicode code point, a word consists of letters and digits, and a unix word consists of any non-blank characters. Quit command line mode and return to normal mode. Autocomplete the current word. Autocomplete the current word, then press the bound key(s) again to cycle through completion options. Autocomplete the current word, then press the bound key(s) again to cycle through completion options backwards. Execute the current line. Interrupt the current shell-pipe command and return to the normal mode. Go to the next/previous item in the history. Move the cursor to the left/right. Move the cursor to the beginning/end of line. Delete the next character in forward/backward direction. Delete everything up to the beginning/end of line. Delete the previous unix word. Paste the buffer content containing the last deleted item. Transpose the positions of the last two characters/words. Move the cursor by one word in the forward/backward direction. Delete the next word in the forward direction. Capitalize/uppercase/lowercase the current word and jump to the next word. This section shows information about options to customize the behavior. The character ':' is used as the separator for list options '[]int' and '[]string'. When this option is enabled, the find command starts matching patterns from the beginning of file names, otherwise it can match at an arbitrary position. Automatically quit the server when there are no clients left connected. When this option is enabled, directory sizes show the number of items inside instead of the size of the directory file. The former needs to be calculated by reading the directory and counting the items inside. The latter is directly provided by the operating system and does not require any calculation, though it is non-intuitive and can often be misleading. This option is disabled by default for performance reasons.
This option only has an effect when 'info' has a 'size' field and the pane is wide enough to show the information. A thousand items are counted per directory at most, and bigger directories are shown as '999+'. Show directories first above regular files. Draw boxes around panes with box drawing characters. Format string of error messages shown in the bottom message line. File separator used in environment variables 'fs' and 'fx'. Number of characters prompted for the find command. When this value is set to 0, find command prompts until there is only a single match left. When this option is enabled, search command patterns are considered as globs, otherwise they are literals. With globbing, '*' matches any sequence, '?' matches any character, and '[...]' or '[^...] matches character sets or ranges. Otherwise, these characters are interpreted as they are. Show hidden files. On unix systems, hidden files are determined by the value of 'hiddenfiles'. On windows, only files with hidden attributes are considered hidden files. List of hidden file glob patterns. Patterns can be given as relative or absolute paths. Globbing supports the usual special characters, '*' to match any sequence, '?' to match any character, and '[...]' or '[^...] to match character sets or ranges. In addition, if a pattern starts with '!', then its matches are excluded from hidden files. Show icons before each item in the list. By default, only two icons, 🗀 (U+1F5C0) and 🗎 (U+1F5CE), are used for directories and files respectively, as they are supported in the unicode standard. Icons can be configured with an environment variable named 'LF_ICONS'. The syntax of this variable is similar to 'LS_COLORS'. See the wiki page for an example icon configuration. Sets 'IFS' variable in shell commands. It works by adding the assignment to the beginning of the command string as 'IFS='...'; ...'. The reason is that 'IFS' variable is not inherited by the shell for security reasons. This method assumes a POSIX shell syntax and so it can fail for non-POSIX shells. This option has no effect when the value is left empty. This option does not have any effect on windows. Ignore case in sorting and search patterns. Ignore diacritics in sorting and search patterns. Jump to the first match after each keystroke during searching. List of information shown for directory items at the right side of pane. Currently supported information types are 'size', 'time', 'atime', and 'ctime'. Information is only shown when the pane width is more than twice the width of information. Send mouse events as input. Show the position number for directory items at the left side of pane. When 'relativenumber' is enabled, only the current line shows the absolute position and relative positions are shown for the rest. Set the interval in seconds for periodic checks of directory updates. This works by periodically calling the 'load' command. Note that directories are already updated automatically in many cases. This option can be useful when there is an external process changing the displayed directory and you are not doing anything in lf. Periodic checks are disabled when the value of this option is set to zero. Show previews of files and directories at the right most pane. If the file has more lines than the preview pane, rest of the lines are not read. Files containing the null character (U+0000) in the read portion are considered binary files and displayed as 'binary'. Set the path of a previewer file to filter the content of regular files for previewing. 
The file should be executable. Five arguments are passed to the file, first is the current file name; the second, third, fourth, and fifth are width, height, horizontal position, and vertical position of preview pane respectively. SIGPIPE signal is sent when enough lines are read. If the previewer returns a non-zero exit code, then the preview cache for the given file is disabled. This means that if the file is selected in the future, the previewer is called once again. Preview filtering is disabled and files are displayed as they are when the value of this option is left empty. Set the path of a cleaner file. This file will be called if previewing is enabled, the previewer is set, and the previously selected file had its preview cache disabled. The file should be executable. One argument is passed to the file; the path to the file whose preview should be cleaned. Preview clearing is disabled when the value of this option is left empty. Format string of the prompt shown in the top line. Special expansions are provided, '%u' as the user name, '%h' as the host name, '%w' as the working directory, '%d' as the working directory with a trailing path separator, and '%f' as the file name. Home folder is shown as '~' in the working directory expansion. Directory names are automatically shortened to a single character starting from the left most parent when the prompt does not fit to the screen. List of ratios of pane widths. Number of items in the list determines the number of panes in the ui. When 'preview' option is enabled, the right most number is used for the width of preview pane. Show the position number relative to the current line. When 'number' is enabled, current line shows the absolute position, otherwise nothing is shown. Reverse the direction of sort. Minimum number of offset lines shown at all times in the top and the bottom of the screen when scrolling. The current line is kept in the middle when this option is set to a large value that is bigger than the half of number of lines. A smaller offset can be used when the current file is close to the beginning or end of the list to show the maximum number of items. Shell executable to use for shell commands. Shell commands are executed as 'shell shellopts shellflag command -- arguments'. Command line flag used to pass shell commands. List of shell options to pass to the shell executable. Override 'ignorecase' option when the pattern contains an uppercase character. This option has no effect when 'ignorecase' is disabled. Override 'ignoredia' option when the pattern contains a character with diacritic. This option has no effect when 'ignoredia' is disabled. Sort type for directories. Currently supported sort types are 'natural', 'name', 'size', 'time', 'ctime', 'atime', and 'ext'. Number of space characters to show for horizontal tabulation (U+0009) character. Format string of the file modification time shown in the bottom line. Truncate character shown at the end when the file name does not fit to the pane. String shown after commands of shell-wait type. Searching can wrap around the file list. Scrolling can wrap around the file list. The following variables are exported for shell commands: These are referred with a '$' prefix on POSIX shells (e.g. '$f'), between '%' characters on Windows cmd (e.g. '%f%'), and with a '$env:' prefix on Windows powershell (e.g. '$env:f'). Current file selection as a full path. Selected file(s) separated with the value of 'filesep' option as full path(s). Selected file(s) (i.e. 
'fs') if there are any selected files, otherwise current file selection (i.e. 'f'). Id of the running client. Present working directory. Initial working directory. The value of this variable is set to the current nesting level when you run lf from a shell spawned inside lf. You can add the value of this variable to your shell prompt to make it clear that your shell runs inside lf. For example, with POSIX shells, you can use '[ -n "$LF_LEVEL" ] && PS1="$PS1""(lf level: $LF_LEVEL) "' in your shell configuration file (e.g. '~/.bashrc'). If this variable is set in the environment, use the same value, otherwise set the value to 'start' in Windows, 'open' in MacOS, 'xdg-open' in others. If this variable is set in the environment, use the same value, otherwise set the value to 'vi' on unix, 'notepad' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'less' on unix, 'more' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'sh' on unix, 'cmd' in Windows. The following command prefixes are used by lf: The same evaluator is used for the command line and the configuration file for read and shell commands. The difference is that prefixes are not necessary in the command line. Instead, different modes are provided to read corresponding commands. These modes are mapped to the prefix keys above by default. Characters from '#' to newline are comments and ignored: There are three special commands ('set', 'map', and 'cmd') and their variants for configuration. Command 'set' is used to set an option which can be boolean, integer, or string: Command 'map' is used to bind a key to a command which can be builtin command, custom command, or shell command: Command 'cmap' is used to bind a key to a command line command which can only be one of the builtin commands: You can delete an existing binding by leaving the expression empty: Command 'cmd' is used to define a custom command: You can delete an existing command by leaving the expression empty: If there is no prefix then ':' is assumed: An explicit ':' can be provided to group statements until a newline which is especially useful for 'map' and 'cmd' commands: If you need multiline you can wrap statements in '{{' and '}}' after the proper prefix. Regular keys are assigned to a command with the usual syntax: Keys combined with the shift key simply use the uppercase letter: Special keys are written in between '<' and '>' characters and always use lowercase letters: Angle brackets can be assigned with their special names: Function keys are prefixed with 'f' character: Keys combined with the control key are prefixed with 'c' character: Keys combined with the alt key are assigned in two different ways depending on the behavior of your terminal. Older terminals (e.g. xterm) may set the 8th bit of a character when the alt key is pressed. On these terminals, you can use the corresponding byte for the mapping: Newer terminals (e.g. gnome-terminal) may prefix the key with an escape key when the alt key is pressed. lf uses the escape delaying mechanism to recognize alt keys in these terminals (delay is 100ms). On these terminals, keys combined with the alt key are prefixed with 'a' character: Please note that, some key combinations are not possible due to the way terminals work (e.g. control and h combination sends a backspace key instead). 
The easiest way to find the name of a key combination is to press the key while lf is running and read the name of the key from the unknown mapping error. Mouse buttons are prefixed with 'm' character: Mouse wheel events are also prefixed with 'm' character: The usual way to map a key sequence is to assign it to a named or unnamed command. While this provides a clean way to remap builtin keys as well as other commands, it can be limiting at times. For this reason 'push' command is provided by lf. This command is used to simulate key pushes given as its arguments. You can 'map' a key to a 'push' command with an argument to create various keybindings. This is mainly useful for two purposes. First, it can be used to map a command with a command count: Second, it can be used to avoid typing the name when a command takes arguments: One thing to be careful is that since 'push' command works with keys instead of commands it is possible to accidentally create recursive bindings: These types of bindings create a deadlock when executed. Regular shell commands are the most basic command type that is useful for many purposes. For example, we can write a shell command to move selected file(s) to trash. A first attempt to write such a command may look like this: We check '$fs' to see if there are any selected files. Otherwise we just delete the current file. Since this is such a common pattern, a separate '$fx' variable is provided. We can use this variable to get rid of the conditional: The trash directory is checked each time the command is executed. We can move it outside of the command so it would only run once at startup: Since these are one liners, we can drop '{{' and '}}': Finally note that we set 'IFS' variable manually in these commands. Instead we could use the 'ifs' option to set it for all shell commands (i.e. 'set ifs "\n"'). This can be especially useful for interactive use (e.g. '$rm $f' or '$rm $fs' would simply work). This option is not set by default as it can behave unexpectedly for new users. However, use of this option is highly recommended and it is assumed in the rest of the documentation. Regular shell commands have some limitations in some cases. When an output or error message is given and the command exits afterwards, the ui is immediately resumed and there is no way to see the message without dropping to shell again. Also, even when there is no output or error, the ui still needs to be paused while the command is running. This can cause flickering on the screen for short commands and similar distractions for longer commands. Instead of pausing the ui, piping shell commands connects stdin, stdout, and stderr of the command to the statline in the bottom of the ui. This can be useful for programs following the unix philosophy to give no output in the success case, and brief error messages or prompts in other cases. For example, following rename command prompts for overwrite in the statline if there is an existing file with the given name: You can also output error messages in the command and it will show up in the statline. For example, an alternative rename command may look like this: Note that input is line buffered and output and error are byte buffered. Waiting shell commands are similar to regular shell commands except that they wait for a key press when the command is finished. These can be useful to see the output of a program before the ui is resumed. 
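For instance, an archive listing is verbose, so a waiting command is a good fit for it (the key choice here is arbitrary):

    # '!' prefix: show the listing, then wait for a key press before resuming the ui
    map T !tar tvf $f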
Waiting shell commands are more appropriate than piping shell commands when the command is verbose and the output is best displayed as multiline. Asynchronous shell commands are used to start a command in the background and then resume operation without waiting for the command to finish. Stdin, stdout, and stderr of the command is neither connected to the terminal nor to the ui. One of the more advanced features in lf is remote commands. All clients connect to a server on startup. It is possible to send commands to all or any of the connected clients over the common server. This is used internally to notify file selection changes to other clients. To use this feature, you need to use a client which supports communicating with a UNIX-domain socket. OpenBSD implementation of netcat (nc) is one such example. You can use it to send a command to the socket file: Since such a client may not be available everywhere, lf comes bundled with a command line flag to be used as such. When using lf, you do not need to specify the address of the socket file. This is the recommended way of using remote commands since it is shorter and immune to socket file address changes: In this command 'send' is used to send the rest of the string as a command to all connected clients. You can optionally give it an id number to send a command to a single client: All clients have a unique id number but you may not be aware of the id number when you are writing a command. For this purpose, an '$id' variable is exported to the environment for shell commands. The value of this variable is set to the process id of the client. You can use it to send a remote command from a client to the server which in return sends a command back to itself. So now you can display a message in the current client by calling the following in a shell command: Since lf does not have control flow syntax, remote commands are used for such needs. For example, you can configure the number of columns in the ui with respect to the terminal width as follows: Besides 'send' command, there is a 'quit' command to quit the server when there are no connected clients left, and a 'quit!' command to force quit the server by closing client connections first: Lastly, there is a 'conn' command to connect the server as a client. This should not be needed for users. lf uses its own builtin copy and move operations by default. These are implemented as asynchronous operations and progress is shown in the bottom ruler. These commands do not overwrite existing files or directories with the same name. Instead, a suffix that is compatible with '--backup=numbered' option in GNU cp is added to the new files or directories. Only file modes are preserved and all other attributes are ignored including ownership, timestamps, context, and xattr. Special files such as character and block devices, named pipes, and sockets are skipped and links are not followed. Moving is performed using the rename operation of the underlying OS. For cross-device moving, lf falls back to copying and then deletes the original files if there are no errors. Operation errors are shown in the message line as well as the log file and they do not preemptively finish the corresponding file operation. File operations can be performed on the current selected file or alternatively on multiple files by selecting them first. When you 'copy' a file, lf doesn't actually copy the file on the disk, but only records its name to a file. The actual file copying takes place when you 'paste'. 
Similarly, 'paste' after a 'cut' operation moves the file. You can customize copy and move operations by defining a 'paste' command. This is a special command that, when defined, is called instead of the builtin implementation. Some useful things to consider when writing such a command are to use the backup ('--backup') and/or preserve-attributes ('-a') options with the 'cp' and 'mv' commands if they support them (i.e. the GNU implementations), to change the command type to asynchronous, or to use the 'rsync' command with its progress bar option for copying and to feed the progress to the client periodically with remote 'echo' calls.

By default, lf does not assign the 'delete' command to a key, in order to protect new users. You can customize file deletion by defining a 'delete' command, and you can also assign a key to this command if you like. Example commands to move selected files to a trash folder and to remove files completely after a prompt are provided in the example configuration file.

There are two mechanisms implemented in lf to search a file in the current directory. Searching is the traditional method to move the selection to a file matching a given pattern. Finding is an alternative way to search for a pattern, possibly using fewer keystrokes.

The searching mechanism is implemented with the commands 'search' (default '/'), 'search-back' (default '?'), 'search-next' (default 'n'), and 'search-prev' (default 'N'). You can enable the 'globsearch' option to match with a glob pattern. Globbing supports '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. You can enable the 'incsearch' option to jump to the current match at each keystroke while typing. In this mode, you can either use 'cmd-enter' to accept the search or use 'cmd-escape' to cancel the search. Alternatively, you can also map some other commands with 'cmap' to accept the search and execute the command immediately afterwards. Possible candidates are 'up', 'down' and their variants, 'top', 'bottom', 'updir', and 'open'; for example, you can map the arrow keys this way to finish the search.

The finding mechanism is implemented with the commands 'find' (default 'f'), 'find-back' (default 'F'), 'find-next' (default ';'), and 'find-prev' (default ','). You can disable the 'anchorfind' option to match a pattern at an arbitrary position in the filename instead of the beginning. You can set the number of keys to match using the 'findlen' option; if you set this value to zero, then the keys are read until there is only a single match. Default values of these two options are set to jump to the first file with the given initial.

Some options affect both searching and finding. You can disable the 'wrapscan' option to prevent searches from wrapping around at the end of the file list. You can disable the 'ignorecase' option to match case in the pattern and the filename. This option is automatically overridden if the pattern contains upper case characters; you can disable the 'smartcase' option to disable this behavior. Two similar options, 'ignoredia' and 'smartdia', are provided to control matching diacritics in latin letters.

You can define an 'open' command (default 'l' and '<right>') to configure file opening. This command is only called when the current file is not a directory; otherwise the directory is entered instead.
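A minimal sketch of such a definition might simply hand the current file or selection to an editor (the choice of 'vi' here is only an illustration, not an lf default):

    # '$' defines a regular shell command; $fx expands to the selection or the current file
    cmd open $vi $fx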
You can define it just as you would define any other command, and it is possible to use different command types. You may want to use either file extensions or mime types from the 'file' command to decide how to open a file. You may want to use 'setsid' before your opener command to have persistent processes that continue to run after lf quits; a command along these lines is provided by default. You may also use any other existing file openers as you like. Possible options are 'libfile-mimeinfo-perl' (executable name is 'mimeopen'), 'rifle' (ranger's default file opener), or 'mimeo' to name a few.

lf previews files on the preview pane by printing the file until the end or until the preview pane is filled. This output can be enhanced by providing a custom preview script for filtering. This can be used to highlight source code, list contents of archive files, or view pdf or image files as text, to name a few. For coloring, lf recognizes ansi escape codes. In order to use this feature you need to set the value of the 'previewer' option to the path of an executable file. lf passes the current file name as the first argument and the height of the preview pane as the second argument when running this file. Output of the execution is printed in the preview pane. You may want to use the same script in your pager mapping as well, if you have one. For the 'less' pager, you may instead utilize the 'LESSOPEN' mechanism so that useful information about the file, such as its full path, can be displayed in the statusline below.

Since this script is called for each file selection change, it needs to be as efficient as possible, and this responsibility is left to the user. You may use file extensions to determine the type of file more efficiently compared to obtaining mime types from the 'file' command; extensions can then be used to match cleanly within a conditional. Another important consideration for efficiency is the use of programs with short startup times for preview. For this reason, 'highlight' is recommended over 'pygmentize' for syntax highlighting. It is also important that the application processes the file on the fly rather than first reading it into memory and then doing the processing afterwards. This is especially relevant for big files. lf automatically closes the previewer script output pipe with a SIGPIPE when enough lines are read. When everything else fails, you can make use of the height argument to only feed the first portion of the file to a program for preview. Note that some programs may not respond well to SIGPIPE and may exit with a non-zero return code, thereby avoiding caching; you may add a trailing '|| true' to the command to avoid such errors. You may also use an existing preview filter as you like. Your system may already come with a preview filter named 'lesspipe'. These filters may have a mechanism to add user customizations as well. See the related documentation for more information.

lf changes the working directory of the process to the current directory so that shell commands always work in the displayed directory. After quitting, it returns to the original directory where it was first launched, like all shell programs. If you want to stay in the current directory after quitting, you can use one of the example wrapper shell scripts provided in the repository.

There is a special command 'on-cd' that runs a shell command when it is defined and the directory is changed. You can define it just as you would define any other command. If you want to print escape sequences, you may redirect 'printf' output to '/dev/tty'.
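As a rough sketch of such a definition (the escape sequence below is the common xterm title sequence and may need adjusting for your terminal), you might set the terminal title to the working directory and call the command once so that it also runs on startup:

    cmd on-cd &{{
        # set the terminal title to the working directory (xterm-style escape sequence)
        printf "\033]0; $PWD\007" > /dev/tty
    }}

    # call it once explicitly, since 'on-cd' does not run on startup by itself
    on-cd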
The xterm-specific escape sequence used above sets the terminal title to the working directory. This command runs whenever you change directory but not on startup; you can add an extra call, as shown above, to make it run on startup as well. Note that all shell command types are possible, but '%' and '&' are usually more appropriate, as '$' and '!' cause flickers and pauses respectively.

lf tries to automatically adapt its colors to the environment. It starts with a default colorscheme and updates colors using the values of existing environment variables, possibly overwriting its previous values. Colors are set in a fixed order, starting from the default colorscheme and then applying 'LSCOLORS', 'LS_COLORS', and 'LF_COLORS' in turn, each possibly overriding earlier values. Please refer to the corresponding man pages for more information about 'LSCOLORS' and 'LS_COLORS'. 'LF_COLORS' is provided with the same syntax as 'LS_COLORS' in case you want to configure colors only for lf but not ls. This can be useful since there are some differences between ls and lf, though one should expect the same behavior for common cases.

You can configure lf colors in two different ways. First, you can only configure the 8 basic colors used by your terminal and lf should pick up those colors automatically. Depending on your terminal, you should be able to select your colors from a 24-bit palette. This is the recommended approach as colors used by other programs will also match each other. Second, you can set the values of the environment variables mentioned above for fine-grained customization. Note that 'LS_COLORS/LF_COLORS' are more powerful than 'LSCOLORS' and they can be used even when GNU programs are not installed on the system. You can combine this second method with the first method for best results. Lastly, you may also want to configure the colors of the prompt line to match the rest of the colors. Colors of the prompt line can be configured using the 'promptfmt' option, which can include hardcoded colors as ansi escapes. See the default value of this option to get an idea about how to color this line.

It is worth noting that lf uses as many colors as are advertised by your terminal's entry in your system's terminfo or infocmp database; if this is not present, lf will default to an internal database. For terminals supporting 24-bit (or "true") color that do not have a database entry (or one that does not advertise all capabilities), support can be enabled by either setting the '$COLORTERM' variable to "truecolor" or ensuring '$TERM' is set to a value that ends with "-truecolor".

Default lf colors are mostly taken from GNU dircolors defaults. These defaults use the 8 basic colors and the bold attribute. Default dircolors entries with background colors are simplified to avoid confusion with the current file selection in lf. Similarly, only file type matchings are included and extension matchings are left out for simplicity. Default values are given with their matching order in lf. Note that lf first tries matching file names and then falls back to file types; the full order of matchings goes from the most specific to the least specific. For example, given a regular text file '/path/to/README.txt', a series of entries from the most specific (the full path) down to the file type is checked in the configuration and the first one to match is used; the same applies to a regular directory such as '/path/to/example.d'. Note that glob-like patterns do not actually perform glob matching for performance reasons. For example, you can set such a variable in your shell configuration as follows.
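A hypothetical single-line setting might look like this (the entries and color codes are arbitrary examples, not lf defaults):

    # in your shell configuration; entries use the same 'key=value' syntax as LS_COLORS
    export LF_COLORS="~/Documents=01;31:*.txt=34:*.md=34:ln=01;36:di=01;34:ex=01;32"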
Having all entries on a single line can make it hard to read. You may instead divide the value into multiple lines in between double quotes by escaping newlines with backslashes. Having such a long variable definition in a shell configuration file might be undesirable; you may instead put this definition in a separate file and source it from your shell configuration file. See the wiki page for ansi escape codes: https://en.wikipedia.org/wiki/ANSI_escape_code.

Icons are configured using the 'LF_ICONS' environment variable. This variable uses the same syntax as 'LS_COLORS/LF_COLORS'. Instead of colors, you should put a single character as the value of each entry. Do not forget to enable the 'icons' option to see the icons. Default values are given with their matching order in lf. See the wiki page for an example icons configuration: https://github.com/gokcehan/lf/wiki/Icons.
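As a minimal sketch (the entries and characters below are arbitrary illustrations, not the defaults), icons can be set in your shell configuration and then enabled in lf:

    # in your shell configuration; same 'key=value' syntax as LS_COLORS/LF_COLORS
    export LF_ICONS="di=📁:ln=🔗:ex=🚀:*.txt=📄"

    # in your lfrc; the 'icons' option must be enabled for the icons to be drawn
    set icons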