This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then uses the identityClient to list availability domains and prints them to stdout. More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value, such as common.String (see the getting-started sketch at the end of this section). The SDK exposes functionality that allows the user to customize any HTTP request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. The Interceptor closure gets called before the signing process, so any changes made to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The signer also allows more granular control over the headers used for signing. You can combine a custom signer with the exposed clients in the SDK, which allows you to add custom signed headers to the request. Bear in mind that some services have a whitelist of headers that they expect to be signed; adding an arbitrary header can therefore result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the SDK struct that satisfies that interface. In the case of a polymorphic response, you can type assert the interface to the expected type. An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data, the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw HTTP requests, responses, and potential errors when (un)marshalling requests and responses.
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG". Possible values for the "OCI_GO_SDK_DEBUG" variable are:

1. "info" or "i" enables all info logging messages

2. "debug" or "d" enables all debug and info logging messages

3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages

4. "null" turns all logging messages off

If the value of the environment variable does not match any of the above, the default logging level of "info" is used. If the environment variable is not present, no logging messages are emitted. The default destination for logging is stderr; if you want to output logs to a file, you can set the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". Possible values are:

1. "file" or "f" writes all logging output to a file

2. "combine" or "c" writes all logging output to both stderr and a file

You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location is the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there is a transient failure such as a network issue. This can be accomplished by using RequestMetadata.RetryPolicy. You can find examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, you can set this up in the following ways:

1. Configure the environment variables described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment

2. Modify the underlying Transport struct for a service client

To modify the underlying Transport struct of a client's HTTPClient, you can do something similar to the sketch shown below (sample code for the audit service client). The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, the response field will be set to this value.
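Returning to the proxy configuration described above, here is a hedged sketch; the audit client usage and the proxy address are illustrative, and the type assertion on client.HTTPClient is an assumption about the default dispatcher:

	package main

	import (
		"net/http"
		"net/url"

		"github.com/oracle/oci-go-sdk/audit"
		"github.com/oracle/oci-go-sdk/common"
	)

	func main() {
		client, err := audit.NewAuditClientWithConfigurationProvider(common.DefaultConfigProvider())
		if err != nil {
			panic(err)
		}

		// By default the generated clients dispatch requests through an
		// *http.Client; assert the concrete type and swap in a proxying
		// Transport.
		if httpClient, ok := client.HTTPClient.(*http.Client); ok {
			proxyURL, _ := url.Parse("http://proxy.example.com:3128") // hypothetical proxy
			httpClient.Transport = &http.Transport{
				Proxy: http.ProxyURL(proxyURL),
			}
		}
	}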
When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response. If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com. oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can handle it automatically. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go. Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host. If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable. Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to this link: https://github.com/oracle/oci-go-sdk#help
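To tie the getting-started steps above together, here is a minimal sketch that builds an identity client from the default configuration and lists availability domains; common.String illustrates the pointer-helper pattern for optional fields:

	package main

	import (
		"context"
		"fmt"

		"github.com/oracle/oci-go-sdk/common"
		"github.com/oracle/oci-go-sdk/identity"
	)

	func main() {
		client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
		if err != nil {
			panic(err)
		}

		tenancyID, err := common.DefaultConfigProvider().TenancyOCID()
		if err != nil {
			panic(err)
		}

		// Optional fields are pointers; common.String returns a *string.
		req := identity.ListAvailabilityDomainsRequest{
			CompartmentId: common.String(tenancyID),
		}

		resp, err := client.ListAvailabilityDomains(context.Background(), req)
		if err != nil {
			panic(err)
		}
		for _, ad := range resp.Items {
			fmt.Println(*ad.Name)
		}
	}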
Package gamelift provides the client and types for making API requests to Amazon GameLift. Amazon GameLift is a managed service for developers who need a scalable, dedicated server solution for their multiplayer games. Use Amazon GameLift for these tasks: (1) set up computing resources and deploy your game servers, (2) run game sessions and get players into games, (3) automatically scale your resources to meet player demand and manage costs, and (4) track in-depth metrics on game server performance and player usage. The Amazon GameLift service API includes two important function sets: Manage game sessions and player access -- Retrieve information on available game sessions; create new game sessions; send player requests to join a game session. Configure and manage game server resources -- Manage builds, fleets, queues, and aliases; set auto-scaling policies; retrieve logs and metrics. This reference guide describes the low-level service API for Amazon GameLift. You can use the API functionality with these tools: The Amazon Web Services software development kit (AWS SDK (http://aws.amazon.com/tools/#sdk)) is available in multiple languages (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-supported.html#gamelift-supported-clients) including C++ and C#. Use the SDK to access the API programmatically from an application, such as a game client. The AWS command-line interface (http://aws.amazon.com/cli/) (CLI) tool is primarily useful for handling administrative actions, such as setting up and managing Amazon GameLift settings and resources. You can use the AWS CLI to manage all of your AWS services. The AWS Management Console (https://console.aws.amazon.com/gamelift/home) for Amazon GameLift provides a web interface to manage your Amazon GameLift settings and resources. The console includes a dashboard for tracking key resources, including builds and fleets, and displays usage and performance metrics for your games as customizable graphs. Amazon GameLift Local is a tool for testing your game's integration with Amazon GameLift before deploying it on the service. This tool supports a subset of key API actions, which can be called from either the AWS CLI or programmatically. See Testing an Integration (http://docs.aws.amazon.com/gamelift/latest/developerguide/integration-testing-local.html). Learn more: Developer Guide (http://docs.aws.amazon.com/gamelift/latest/developerguide/) -- Read about Amazon GameLift features and how to use them. Tutorials (https://gamedev.amazon.com/forums/tutorials) -- Get started fast with walkthroughs and sample projects. GameDev Blog (http://aws.amazon.com/blogs/gamedev/) -- Stay up to date with new features and techniques. GameDev Forums (https://gamedev.amazon.com/forums/spaces/123/gamelift-discussion.html) -- Connect with the GameDev community. Release notes (http://aws.amazon.com/releasenotes/Amazon-GameLift/) and document history (http://docs.aws.amazon.com/gamelift/latest/developerguide/doc-history.html) -- Stay current with updates to the Amazon GameLift service, SDKs, and documentation. This list offers a functional overview of the Amazon GameLift service API. Use these actions to start new game sessions, find existing game sessions, track game session status and other information, and enable player access to game sessions.
SearchGameSessions -- Retrieve all available game sessions or search for game sessions that match a set of criteria.

Start new games with queues to find the best available hosting resources:

StartGameSessionPlacement -- Request a new game session placement and add one or more players to it.

DescribeGameSessionPlacement -- Get details on a placement request, including status.

StopGameSessionPlacement -- Cancel a placement request.

CreateGameSession -- Start a new game session on a specific fleet. Available in Amazon GameLift Local.

Matchmaking:

StartMatchmaking -- Request matchmaking for one player or a group who want to play together.

StartMatchBackfill -- Request additional player matches to fill empty slots in an existing game session.

DescribeMatchmaking -- Get details on a matchmaking request, including status.

AcceptMatch -- Register that a player accepts a proposed match, for matches that require player acceptance.

StopMatchmaking -- Cancel a matchmaking request.

Manage game session data:

DescribeGameSessions -- Retrieve metadata for one or more game sessions, including length of time active and current player count.

DescribeGameSessionDetails -- Retrieve metadata and the game session protection setting for one or more game sessions.

UpdateGameSession -- Change game session settings, such as maximum player count and join policy.

GetGameSessionLogUrl -- Get the location of saved logs for a game session.

Manage player sessions:

CreatePlayerSession -- Send a request for a player to join a game session. Available in Amazon GameLift Local.

CreatePlayerSessions -- Send a request for multiple players to join a game session. Available in Amazon GameLift Local.

DescribePlayerSessions -- Get details on player activity, including status, playing time, and player data.

When setting up Amazon GameLift resources for your game, you first create a game build (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html) and upload it to Amazon GameLift. You can then use these actions to configure and manage a fleet of resources to run your game servers, scale capacity to meet player demand, access performance and utilization metrics, and more.

Manage game builds:

CreateBuild -- Create a new build using files stored in an Amazon S3 bucket.

ListBuilds -- Get a list of all builds uploaded to an Amazon GameLift region.

DescribeBuild -- Retrieve information associated with a build.

UpdateBuild -- Change build metadata, including build name and version.

DeleteBuild -- Remove a build from Amazon GameLift.

Manage fleets:

CreateFleet -- Configure and activate a new fleet to run a build's game servers.

ListFleets -- Get a list of all fleet IDs in an Amazon GameLift region (all statuses).

DeleteFleet -- Terminate a fleet that is no longer running game servers or hosting players.

View / update fleet configurations:

DescribeFleetAttributes / UpdateFleetAttributes -- View or change a fleet's metadata and settings for game session protection and resource creation limits.

DescribeFleetPortSettings / UpdateFleetPortSettings -- View or change the inbound permissions (IP address and port setting ranges) allowed for a fleet.

DescribeRuntimeConfiguration / UpdateRuntimeConfiguration -- View or change what server processes (and how many) to run on each instance in a fleet.

Manage fleet capacity:

DescribeEC2InstanceLimits -- Retrieve the maximum number of instances allowed for the current AWS account and the current usage level.

DescribeFleetCapacity / UpdateFleetCapacity -- Retrieve the capacity settings and the current number of instances in a fleet; adjust fleet capacity settings to scale up or down.

Autoscale -- Manage auto-scaling rules and apply them to a fleet:

PutScalingPolicy -- Create a new auto-scaling policy, or update an existing one.

DescribeScalingPolicies -- Retrieve an existing auto-scaling policy.

DeleteScalingPolicy -- Delete an auto-scaling policy and stop it from affecting a fleet's capacity.

StartFleetActions -- Restart a fleet's auto-scaling policies.

StopFleetActions -- Suspend a fleet's auto-scaling policies.

Manage VPC peering connections for fleets:

CreateVpcPeeringAuthorization -- Authorize a peering connection to one of your VPCs.

DescribeVpcPeeringAuthorizations -- Retrieve valid peering connection authorizations.

DeleteVpcPeeringAuthorization -- Delete a peering connection authorization.
CreateVpcPeeringConnection -- Establish a peering connection between the VPC for an Amazon GameLift fleet and one of your VPCs.

DescribeVpcPeeringConnections -- Retrieve information on active or pending VPC peering connections with an Amazon GameLift fleet.

DeleteVpcPeeringConnection -- Delete a VPC peering connection with an Amazon GameLift fleet.

Monitor fleet activity:

DescribeFleetUtilization -- Get current data on the number of server processes, game sessions, and players currently active on a fleet.

DescribeFleetEvents -- Get a fleet's logged events for a specified time span.

DescribeGameSessions -- Retrieve metadata associated with one or more game sessions, including length of time active and current player count.

DescribeInstances -- Get information on each instance in a fleet, including instance ID, IP address, and status.

GetInstanceAccess -- Request access credentials needed to remotely connect to a specified instance in a fleet.

Manage fleet aliases:

CreateAlias -- Define a new alias and optionally assign it to a fleet.

ListAliases -- Get all fleet aliases defined in an Amazon GameLift region.

DescribeAlias -- Retrieve information on an existing alias.

UpdateAlias -- Change settings for an alias, such as redirecting it from one fleet to another.

DeleteAlias -- Remove an alias from the region.

ResolveAlias -- Get the fleet ID that a specified alias points to.

Manage game session queues:

CreateGameSessionQueue -- Create a queue for processing requests for new game sessions.

DescribeGameSessionQueues -- Retrieve game session queues defined in an Amazon GameLift region.

UpdateGameSessionQueue -- Change the configuration of a game session queue.

DeleteGameSessionQueue -- Remove a game session queue from the region.

Manage FlexMatch resources:

CreateMatchmakingConfiguration -- Create a matchmaking configuration with instructions for building a player group and placing it in a new game session.

DescribeMatchmakingConfigurations -- Retrieve matchmaking configurations defined in an Amazon GameLift region.

UpdateMatchmakingConfiguration -- Change settings for a matchmaking configuration.

DeleteMatchmakingConfiguration -- Remove a matchmaking configuration from the region.

CreateMatchmakingRuleSet -- Create a set of rules to use when searching for player matches.

DescribeMatchmakingRuleSets -- Retrieve matchmaking rule sets defined in an Amazon GameLift region.

ValidateMatchmakingRuleSet -- Verify syntax for a set of matchmaking rules.

See https://docs.aws.amazon.com/goto/WebAPI/gamelift-2015-10-01 for more information on this service. See the gamelift package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/ To contact Amazon GameLift with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon GameLift client GameLift for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/#New
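As a hedged getting-started sketch (region and error handling are illustrative), creating a client with New and listing fleet IDs might look like this:

	package main

	import (
		"fmt"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/gamelift"
	)

	func main() {
		sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
		svc := gamelift.New(sess)

		// ListFleets returns the IDs of all fleets in the region.
		out, err := svc.ListFleets(&gamelift.ListFleetsInput{})
		if err != nil {
			panic(err)
		}
		for _, id := range out.FleetIds {
			fmt.Println(aws.StringValue(id))
		}
	}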
Package dynamodb provides the client and types for making API requests to Amazon DynamoDB. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an AWS region, providing built-in high availability and data durability. See https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10 for more information on this service. See the dynamodb package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/ To contact Amazon DynamoDB with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon DynamoDB client DynamoDB for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/#New Utility helpers to marshal and unmarshal AttributeValue to and from Go types can be found in the dynamodbattribute sub-package. This package provides specialized functions for the common ways of working with AttributeValues, such as map[string]*AttributeValue, []*AttributeValue, and *AttributeValue directly. This is helpful for marshaling Go types for API operations such as PutItem, and for unmarshaling Query and Scan API responses. See the dynamodbattribute package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/dynamodbattribute/ The expression package provides utility types and functions to build DynamoDB expressions, for type-safe construction of API ExpressionAttributeNames and ExpressionAttributeValues. The package represents the various DynamoDB Expressions as structs named accordingly. For example, ConditionBuilder represents a DynamoDB Condition Expression, an UpdateBuilder represents a DynamoDB Update Expression, and so on. See the expression package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/expression/
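As a hedged sketch of the dynamodbattribute helpers described above (the region, table, and item are illustrative), marshaling a Go struct into a PutItem request might look like this:

	package main

	import (
		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/dynamodb"
		"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
	)

	type Record struct {
		ID    string
		Score int
	}

	func main() {
		sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
		svc := dynamodb.New(sess)

		// MarshalMap converts a Go value into map[string]*dynamodb.AttributeValue.
		item, err := dynamodbattribute.MarshalMap(Record{ID: "abc123", Score: 42})
		if err != nil {
			panic(err)
		}

		_, err = svc.PutItem(&dynamodb.PutItemInput{
			TableName: aws.String("Scores"), // hypothetical table name
			Item:      item,
		})
		if err != nil {
			panic(err)
		}
	}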
The socketio package is a simple abstraction layer for different web browser-supported transport mechanisms. It is fully compatible with the Socket.IO client-side JavaScript socket API library by LearnBoost Labs (http://socket.io/), but through custom codecs it might fit other client implementations too. It (together with the LearnBoost's client-side libraries) provides an easy way for developers to access the most popular browser transport mechanisms today: multipart- and long-polling XMLHttpRequests, HTML5 WebSockets and forever-frames. The socketio package works hand-in-hand with the standard http package by plugging itself into a configurable ServeMux. It has a callback-style API for handling connection events. The callbacks are: - SocketIO.OnConnect - SocketIO.OnDisconnect - SocketIO.OnMessage Other utility methods include: - SocketIO.ServeMux - SocketIO.Broadcast - SocketIO.BroadcastExcept - SocketIO.GetConn - Conn.Send Each new connection is automatically assigned a unique session id, and using those the clients can reconnect without losing messages: the server persists clients' pending messages (until some configurable point) if they can't be immediately delivered. All writes through Conn.Send are by design asynchronous. Finally, the actual format on the wire is described by a separate Codec. The default codecs (SIOCodec and SIOStreamingCodec) are compatible with the LearnBoost's Socket.IO client. For example, here is a simple chat server:
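(A sketch reconstructed from the callbacks listed above; NewSocketIO, Conn.String, Message.Data, and the import path are assumptions based on the package's historical example, and this pre-Go 1 API will not compile on modern Go without adjustment.)

	package main

	import (
		"log"
		"net/http"

		"socketio" // import path as historically distributed; adjust as needed
	)

	func main() {
		sio := socketio.NewSocketIO(nil) // nil config uses the defaults

		sio.OnConnect(func(c *socketio.Conn) {
			sio.Broadcast(struct{ Announcement string }{"connected: " + c.String()})
		})
		sio.OnDisconnect(func(c *socketio.Conn) {
			sio.BroadcastExcept(c, struct{ Announcement string }{"disconnected: " + c.String()})
		})
		sio.OnMessage(func(c *socketio.Conn, msg socketio.Message) {
			sio.BroadcastExcept(c, struct{ Message []string }{[]string{c.String(), msg.Data()}})
		})

		// Plug the socketio handlers into a ServeMux alongside static files.
		mux := sio.ServeMux()
		mux.Handle("/", http.FileServer(http.Dir("www/")))

		if err := http.ListenAndServe(":8080", mux); err != nil {
			log.Fatal("ListenAndServe: ", err)
		}
	}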
Package smpp implements SMPP protocol v3.4. It allows easier creation of SMPP clients and servers by providing utilities for PDU and session handling. In order to do any kind of interaction you first need to create an SMPP Session (https://godoc.org/github.com/Derek-meng/smpp#Session). Session is the main carrier of the protocol and enforcer of the specification rules. A naked session can be created directly from a connection, but it's much more convenient to use helpers that do the binding with the remote SMSC and return a session prepared for sending (see the sketch below). Once you have the session, it can be used for sending PDUs to the bound peer. A session that is no longer used must be closed. If you want to handle incoming requests to the session, specify an SMPPHandler in the session configuration when creating a new session, similarly to HTTPHandler from the net/http package. Detailed examples for SMPP client and server can be found in the examples dir.
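A heavily hedged sketch of the bind-and-send flow described above; the helper and field names (BindTRx, BindConf, SessionConf, Session.Send, pdu.SubmitSm) are assumptions drawn from this package's docs and may not match your version exactly:

	package main

	import (
		"context"
		"log"

		"github.com/Derek-meng/smpp"
		"github.com/Derek-meng/smpp/pdu"
	)

	func main() {
		// Bind as a transceiver; address and credentials are illustrative.
		sess, err := smpp.BindTRx(smpp.SessionConf{}, smpp.BindConf{
			Addr:     "127.0.0.1:2775",
			SystemID: "client",
			Password: "password",
		})
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close() // sessions that are no longer used must be closed

		// Send a short message to the bound peer.
		sm := &pdu.SubmitSm{
			SourceAddr:      "11111111",
			DestinationAddr: "22222222",
			ShortMessage:    "Hello!",
		}
		if _, err := sess.Send(context.Background(), sm); err != nil {
			log.Fatal(err)
		}
	}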
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best-practices. The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers: The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime. A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/master/FAQ.md. Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, and setting up common dependencies (pkg/runtime/inject), like shared caches and clients, as well as managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals). Controllers (pkg/controller) use events (pkg/event) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases. Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Response or an error indicating whether to requeue for a second round of processing. Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific). Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager. Logging (pkg/log) in controller-runtime is done via structured logs, using a set of interfaces called logr (https://godoc.org/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime. Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these via an HTTP endpoint, and additional metrics may be registered to this Registry as normal. You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup. This example creates a simple application Controller that is configured for ReplicaSets and Pods. * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support. This example creates a simple application Controller that is configured for ReplicaSets and Pods. This application controller will be running leader election with the provided configuration in the manager options. If leader election configuration is not provided, the controller runs leader election with default values. Default values taken from: https://github.com/kubernetes/component-base/blob/master/config/v1alpha1/defaults.go * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support.
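As a hedged sketch of the pieces described above (manager, builder, reconciler) on a recent controller-runtime version, wiring a controller for ReplicaSets and the Pods they own might look like this:

	package main

	import (
		"context"

		appsv1 "k8s.io/api/apps/v1"
		corev1 "k8s.io/api/core/v1"
		ctrl "sigs.k8s.io/controller-runtime"
		"sigs.k8s.io/controller-runtime/pkg/client"
	)

	// ReplicaSetReconciler reconciles ReplicaSets and the Pods they own.
	type ReplicaSetReconciler struct {
		client.Client
	}

	func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
		rs := &appsv1.ReplicaSet{}
		if err := r.Get(ctx, req.NamespacedName, rs); err != nil {
			// Ignore not-found errors: the object may have been deleted.
			return ctrl.Result{}, client.IgnoreNotFound(err)
		}
		// ... reconcile the ReplicaSet here ...
		return ctrl.Result{}, nil
	}

	func main() {
		mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
		if err != nil {
			panic(err)
		}
		err = ctrl.NewControllerManagedBy(mgr).
			For(&appsv1.ReplicaSet{}). // watch ReplicaSets
			Owns(&corev1.Pod{}).       // and the Pods they own
			Complete(&ReplicaSetReconciler{Client: mgr.GetClient()})
		if err != nil {
			panic(err)
		}
		if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
			panic(err)
		}
	}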
NanoMux is a package of HTTP request routers for the Go language. The package has three types that can be used as routers. The first one is the Resource, which represents a path segment resource. The second one is the Host, which represents the host segment of the URL but also takes on the role of the root resource when HTTP method handlers are set. The third one is the Router, which supports registering multiple hosts and resources. It passes the request to the matching host and, when there is no matching host, to the root resource. In NanoMux terms, hosts and resources are called responders. Responders are organized into a tree. The request's URL segments are matched against the host and corresponding resources' templates in the tree. The request passes through each matching responder in its URL until it reaches the last segment's responder. To pass the request to the next segment's responder, the request passers of the Router, Host, and Resource are called. When the request reaches the last responder, that responder's request handler is called. The request handler is responsible for calling the responder's HTTP method handler. The request passer, request handler, and HTTP method handlers can all be wrapped with middleware. The NanoMux types provide many methods, but most of them are for convenience. Sections below discuss the main features of the package. Based on the segments they comprise, there are three types of templates: static, pattern, and wildcard. Static templates have no regex or wildcard segments. Pattern templates have one or more regex segments and/or one wildcard segment alongside static segments. A regex segment must be in curly braces and consists of a value name and a regex pattern separated by a colon: "{valueName:regexPattern}". A wildcard segment has only a value name: "{valueName}". There can be only one wildcard segment in a template. Wildcard templates have only one wildcard segment and no static or regex segments. Host segment templates must always follow the scheme with the colon ":" and the two slashes "//" of the authority component. Path segment templates may be preceded by a slash "/" or by a scheme, a colon ":", and three slashes "///" (two authority component slashes and the third separator slash), like in "https:///blog". The preceding slash is just a separator. It doesn't denote the root resource, except when it is used alone. The template "/" or "https:///" denotes the root resource. Both host and path segment templates can have a trailing slash. When its template starts with "https", the host or resource will not handle a request made over HTTP; unless configured to redirect, it will respond with a "404 Not Found" status code. When its template has a trailing slash, unless configured to be lenient or strict, the resource will redirect a request made to a URL without a trailing slash to the one with a trailing slash, and vice versa. The trailing slash has no effect on the host. But if the host is a subtree handler and should respond to the request, its configurations related to the trailing slash will be applied to the last path segment. As a side note, every parent resource should have a trailing slash in its template. For example, in a resource tree "/parent/child/grandchild", the two non-leaf resources should be referenced with templates "parent/" and "child/". NanoMux doesn't force this, but it's good practice to follow. It helps clients avoid forming a broken URL when adding a relative URL to the base URL.
By default, if the resource has a trailing slash, NanoMux redirects requests made to the URL without a trailing slash to the URL with a trailing slash, so clients end up with the correct URL. Templates can have a name. The name segment comes at the beginning of the host or path segment template, but after the slashes. The name segment begins with a "$" sign and is separated from the template's contents by a colon ":". The template's name comes between the "$" sign and the colon ":". The name given in the template can be used to retrieve the host or resource from its parent (the router is considered the host's parent). For convenience, a regex segment's or wildcard segment's value name is used as the name of the template when there is no template name and no other regex segments. For example, the templates "{color:red|green|blue}", `day {day:(?:3[01]|[12]\d|0?[1-9])}`, and "{model}" have the names color, day, and model, respectively. If the template's static part needs a "$" sign at the beginning or curly braces anywhere, they can be escaped with a backslash `\`, like in the template `\$tatic\{template\}`. If, for some reason, the template name or value name needs a colon ":", it can also be escaped with a backslash: `$smileys\:):{smiley\:)}`. When retrieving the host, resource, or the value of a regex or wildcard segment, names are used unescaped, without a backslash `\`. Constructors of the Host and Resource types and some methods take URL templates or path templates. In URL and path templates, the scheme and trailing slash belong to the last segment. In a URL template, after the host segment, if the path component contains only a slash "/", it's considered a trailing slash of the host template. The host template's trailing slash is used only for the last path segment when the host is a subtree handler and should respond to the request. It has no effect on the host itself. In templates, disallowed characters must be used without percent-encoding, except for a slash "/". Because the slash "/" is a separator, when needed in a template it must be replaced with %2f or %2F. Hosts and resources may have child resources with all three types of templates. In that case, the request's path segments are first matched against child resources with a static template. If no resource with a static template matches, then child resources with a pattern template are matched in the order of their registration. Lastly, if there is no child resource with a pattern template that matches the request's path segment, a child resource with a wildcard template accepts the request. Hosts and resources can have only one direct child resource with a wildcard template. The Host and Resource types implement the http.Handler interface. They can be constructed, have HTTP method handlers set with the SetHandlerFor method, and be registered with the RegisterResource or RegisterResourceUnder methods to form a resource tree. There is one restriction: the root resource cannot be registered under a host. It can be used without a host or be registered with the router. When the Host type is used, the root resource is implied. Handlers must return true if they respond to the request. Sometimes middleware and the responder itself need to know whether the request was handled or not.
For example, when a middleware responds to the request instead of calling the request passer, the responder may assume that none of its child resources in the subtree responded, and so it may respond with "404 Not Found" to a request that has already been answered. To prevent this, the middleware's handler must return true if it responds to the request, or it must return the value returned from the argument handler. Sometimes resources have to handle HTTP methods they don't support. NanoMux provides a default not-allowed-HTTP-method handler that responds with a "405 Method Not Allowed" status code, listing all HTTP methods the resource supports in the "Allow" header. But when the host or resource needs a custom implementation, its SetHandlerFor method can be used to replace the default handler. To denote the not-allowed-HTTP-method handler, the exclamation mark "!" must be used instead of an HTTP method. In addition to the not-allowed-HTTP-method handler, if the host or resource has at least one HTTP method handler, NanoMux also provides a default OPTIONS HTTP method handler. The host and resource allow setting a handler for a child resource in their subtree. If the subtree resource doesn't exist, it will be created. It's possible to retrieve a subtree resource with the method Resource. If the subtree resource doesn't exist, the method Resource creates it. If an existing resource must be retrieved, the method RegisteredResource can be used. If there is a need for shared data, it can be set with the SetSharedData method of the Host and Resource. The shared data can then be retrieved in the handlers by calling the ResponderSharedData method of the passed *Args argument. Hosts and resources can have their own shared data. Handlers retrieve the shared data of their responder. The responder's SetSharedData is useful for sharing data between its handlers. But when the data needs to be shared between responders in the resource tree, the *Args argument can be used. The Set method of the *Args argument takes a key and a value of any type and sets the value for the key. The rules for defining a key are the same as for "context.Context". Any responder can set as many values as needed when it's passing or handling a request. The *Args argument carries the values between middlewares and handlers as the request passes through the matching responders in the resource tree. Hosts and resources can be implemented as a type with methods. Each method that has a name beginning with "Handle" and has the signature of a nanomux.Handler is used as an HTTP method handler. The remaining part of the method's name is considered an HTTP method. It is possible to set the implementation later with the SetImplementation method of the Host and Resource. The implementation may also be set for a child resource in the subtree with the method SetImplementationAt. Hosts and resources configured to be subtree handlers respond to the request when there is no matching resource in their subtree. In the resource tree, if resource-12 is a subtree handler, it handles a request to the path "/resource-1/resource-12/non-existent-resource". Subtree handlers can get the remaining part of the path with the RemainingPath method of the *Args argument. The remaining path starts with a slash "/" if the subtree handler has no trailing slash in its template; otherwise it starts without a trailing slash. A subtree handler can also be used as a file server.
Let's say we have a resource tree where the host "http://example.com" has a child resource "resource-1", which in turn has a child resource "resource-12". When a request is made to the URL "http://example.com/resource-1/resource-12", the host's request passer is called. It passes the request to resource-1. Then resource-1's request passer is called, and it passes the request to resource-12. Resource-12 is the last resource in the URL, so its request handler is called. The request handler is responsible for calling the resource's HTTP method handler. If there is no handler for the request's method, it calls the not-allowed-HTTP-method handler. All of these handlers and the request passer can be wrapped in middleware. For example, the WrapRequestPasserAt method can wrap the request passer of an "admin" resource with a CheckCredentials middleware. CheckCredentials calls the "admin" resource's request passer if the credentials are valid; if not, no further segments will be matched, the request will be dropped, and the client will receive a "404 Not Found" status code. In this case, "admin" is a dormant resource. But as its name states, the request passer's purpose is to pass the request to the next resource. It's called even when the resource is dormant. Unlike the request passer, the request handler is called only when the host or resource is the one that must respond to the request. The request handler can be wrapped when the middleware must be called before any HTTP method handler. The WrapRequestPasser, WrapRequestHandler, and WrapHandlerOf methods of the Host and Resource types wrap their request passer, request handler, and HTTP method handlers, respectively. The WrapRequestPasserAt, WrapRequestHandlerAt, and WrapPathHandlerOf methods wrap the request passer and the respective handlers of the child resource at the path. The WrapSubtreeRequestPassers, WrapSubtreeRequestHandlers, and WrapSubtreeHandlersOf methods wrap the request passer and the respective handlers of all the resources in the host's or resource's subtree. When calling the WrapHandlerOf, WrapPathHandlerOf, and WrapSubtreeHandlersOf methods, "*" may be used to denote all HTTP methods for which handlers exist. When "*" is used instead of an HTTP method, all the existing HTTP method handlers of the responder are wrapped. The Router type is more suitable when multiple hosts with different root domains or subdomains are needed. It is possible to use an http.Handler and an http.HandlerFunc with NanoMux. For that, NanoMux provides four converters: Hr, HrWithArgs, FnHr, and FnHrWithArgs. The Hr and HrWithArgs converters convert an http.Handler, while the FnHr and FnHrWithArgs converters convert a function with the signature of an http.HandlerFunc to a nanomux.Handler. Most of the time, when there is a need to use an http.Handler or http.HandlerFunc, it's to utilize handlers written outside the context of NanoMux, and those handlers don't use the *Args argument. The Hr and FnHr converters return a handler that ignores the *Args argument instead of inserting it into the request's context, which is a slower operation. These converters should be used when the http.Handler and http.HandlerFunc handlers don't need the *Args argument. If they are written to use the *Args argument, then it's better to change their signatures as well. One situation where http.Handler and http.HandlerFunc might be considered when writing handlers is to use a middleware with the signature func(http.Handler) http.Handler. But for middleware with that signature, NanoMux provides the Mw converter.
The Mw converter converts the middleware with the signature of func(http.Handler) http.Handler to the middleware with the signature of func(nanomux.Handler) nanomux.Handler, so it can be used to wrap the NanoMux handlers.
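As a heavily hedged sketch of the concepts above: the constructors (NewRouter, NewHost, NewResource), the import path, and the exact nanomux.Handler signature below are assumptions, while the method names (RegisterHost, SetHandlerFor, RegisterResource) and the returned bool come from the text itself:

	package main

	import (
		"net/http"

		nanomux "example.com/nanomux" // hypothetical import path
	)

	func main() {
		router := nanomux.NewRouter()

		host := nanomux.NewHost("http://example.com")
		router.RegisterHost(host)

		child := nanomux.NewResource("/parent/child")
		child.SetHandlerFor("GET", func(
			w http.ResponseWriter, r *http.Request, args *nanomux.Args,
		) bool {
			w.Write([]byte("hello"))
			return true // handlers report whether they responded
		})
		host.RegisterResource(child)

		// Assuming Router, like Host and Resource, implements http.Handler.
		http.ListenAndServe(":8080", router)
	}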
An HTTP client for interacting with the Kubecost Allocation API. See the Go standard library net/http documentation and the Kubecost Allocation API documentation for background.

Package main is a generated GoMock package.

Application configuration, built on Viper (see the Viper documentation).

An HTTP server for exposing cost allocation metrics retrieved from Kubecost. Metrics are exposed via an HTTP metrics endpoint. Applications that provide a Prometheus OpenMetrics integration can gather cost allocation metrics from this endpoint to store and visualize the data.

Generation of Prometheus metrics from configuration, using the Go client library for Prometheus.

Utility functions.
Package ociregistry provides an abstraction that represents the capabilities provided by an OCI registry. See the OCI distribution specification for more information on OCI registries. Packages within this module provide the capability to translate to and from the HTTP protocol documented in that specification: - cuelabs.dev/go/oci/ociregistry/ociclient provides an Interface value that acts as an HTTP client. - cuelabs.dev/go/oci/ociregistry/ociserver provides an HTTP server that serves the distribution protocol by making calls to an arbitrary Interface value. When used together in a stack, the above two packages can be used to provide a simple proxy server. The cuelabs.dev/go/oci/ociregistry/ocimem package provides a trivial in-memory implementation of the interface. Other packages provide some utilities that manipulate Interface values: - cuelabs.dev/go/oci/ociregistry/ocifilter provides functionality for exposing modified or restricted views onto a registry. - cuelabs.dev/go/oci/ociregistry/ociunify can combine two registries into one unified view across both. In general, the caller cannot assume that the implementation of a given Interface value is present on the network. For example, cuelabs.dev/go/oci/ociregistry/ocimem doesn't know about the network at all. But there are times when an implementation might want to provide information about the location of blobs or manifests so that a client can go direct if it wishes. That is, a proxy might not wish to ship all the traffic for all blobs through itself, but instead redirect clients to talk to some other location on the internet. When an Interface implementation wishes to provide that information, it can do so by setting the `URLs` field on the descriptor that it returns for a given blob or manifest. Although it is not mandatory for a caller to use this, some callers (specifically the ociserver package) can use this information to redirect clients appropriately.
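As a hedged sketch of the stack described above (the exact ocimem.New and ociserver.New signatures are assumptions), serving the distribution protocol over the in-memory implementation might look like this:

	package main

	import (
		"log"
		"net/http"

		"cuelabs.dev/go/oci/ociregistry/ocimem"
		"cuelabs.dev/go/oci/ociregistry/ociserver"
	)

	func main() {
		// ocimem provides a trivial in-memory ociregistry.Interface;
		// ociserver exposes any Interface over HTTP.
		r := ocimem.New()
		log.Fatal(http.ListenAndServe(":5000", ociserver.New(r, nil)))
	}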
Package freeipa provides a client for the FreeIPA API. It provides access to almost all methods available through the API. Every API method has generated Go structs for request parameters and output. This code is generated from a schema which was queried from a FreeIPA server using its "schema" method. This client performs basic response validation. Since the FreeIPA server does not always conform to its own schema, it can happen that this library fails to unmarshal a response from FreeIPA. If you run into that, please open an issue for this client library. With that said, this is still the most extensive golang FreeIPA client and it's probably easier to fix those issues here than to write a new client from scratch. Since FreeIPA cares about the presence or absence of fields in requests, all optional fields are defined as pointers. There are utility functions like freeipa.String to make filling these less painful. The client uses FreeIPA's JSON-RPC interface with username/password authentication. There is no support for connecting to FreeIPA with Kerberos authentication. There is currently no support for batched requests. See https://github.com/tehwalris/go-freeipa/blob/master/developing.md for information on how this library is generated.
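A hedged sketch of logging in with username/password and using the pointer helpers; freeipa.Connect, the *http.Transport argument, and the UserShow argument structs are assumptions drawn from this library's generated API style, and the host and credentials are illustrative:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"

		"github.com/tehwalris/go-freeipa/freeipa"
	)

	func main() {
		tspt := &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		}
		c, err := freeipa.Connect("ipa.example.local", tspt, "admin", "password")
		if err != nil {
			panic(err)
		}

		// Optional fields are pointers; freeipa.String fills them.
		res, err := c.UserShow(&freeipa.UserShowArgs{}, &freeipa.UserShowOptionalArgs{
			UID: freeipa.String("admin"),
		})
		if err != nil {
			panic(err)
		}
		fmt.Println(res)
	}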
Package ociregistry provides an abstraction that represents the capabilities provided by an OCI registry. See the OCI distribution specification for more information on OCI registries. Packages within this module provide the capability to translate to and from the HTTP protocol documented in that specification: - github.com/rogpeppe/ociregistry/ociclient provides an Interface value that acts as an HTTP client. - github.com/rogpeppe/ociregistry/ociserver provides an HTTP server that serves the distribution protocol by making calls to an arbitrary Interface value. When used together in a stack, the above two packages can be used to provide a simple proxy server. The github.com/rogpeppe/ociregistry/ocimem package provides a trivial in-memory implementation of the interface. Other packages provide some utilities that manipulate Interface values: - github.com/rogpeppe/ociregistry/ocifilter provides functionality for exposing modified or restricted views onto a registry. - github.com/rogpeppe/ociregistry/ociunify can combine two registries into one unified view across both. In general, the caller cannot assume that the implementation of a given Interface value is present on the network. For example, github.com/rogpeppe/ociregistry/ocimem doesn't know about the network at all. But there are times when an implementation might want to provide information about the location of blobs or manifests so that a client can go direct if it wishes. That is, a proxy might not wish to ship all the traffic for all blobs through itself, but instead redirect clients to talk to some other location on the internet. When an Interface implementation wishes to provide that information, it can do so by setting the `URLs` field on the descriptor that it returns for a given blob or manifest. Although it is not mandatory for a caller to use this, some callers (specifically the ociserver package) can use this information to redirect clients appropriately.
Q is a small utility which acts and behaves like 'dig' from BIND. It is meant to stay lean and mean, while having a bunch of handy features, like -check, which checks if a packet is correctly signed (without checking the chain of trust). When using -check, a comment is printed:

;+ Secure signature, miek.nl. RRSIG(SOA) validates (DNSKEY miek.nl./4155/net)

which says the SOA has a valid RRSIG that validated with the DNSKEY of miek.nl, which has key id 4155 and was retrieved from the server ('net'). The other possible value is 'disk', meaning the key was read from disk.
Package dynamodb provides the API client, operations, and parameter types for Amazon DynamoDB. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.
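A minimal hedged sketch of creating the v2 client (standard aws-sdk-go-v2 config flow; region and credentials resolve from the environment) and listing tables:

	package main

	import (
		"context"
		"fmt"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	)

	func main() {
		cfg, err := config.LoadDefaultConfig(context.TODO())
		if err != nil {
			panic(err)
		}
		client := dynamodb.NewFromConfig(cfg)

		out, err := client.ListTables(context.TODO(), &dynamodb.ListTablesInput{})
		if err != nil {
			panic(err)
		}
		fmt.Println(out.TableNames)
	}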
Package bindle contains a Bindle client, types, and other utilities for interacting with a Bindle server. For more information on Bindle, see the main project page: https://github.com/deislabs/bindle. There is nothing exported at this top level, but each subpackage contains more information on its functionality.
Package bibliotek is a library for the Hemtjänst ecosystem. It comes with a few different packages, most importantly client, server and transport. The client package lets you create new devices and register them onto MQTT, so that other things can become aware of them. For example, if you wanted to take devices from an IoT gateway like the IKEA Tradfri gateway and make them available as Hemtjänst devices, this is where you'd start. The server package lets you observe all existing devices, subscribe to state changes, as well as change the state of existing devices. This is useful if you want to create a bridge to somewhere else. A HomeKit bridge could be implemented with it, as could exposing device information through another medium, like providing an HTTP API. The transport package contains some MQTT-related utilities. You'll likely never need them, aside from transport/mqtt.Flags(), which saves you from having to define all the different flags for setting up a connection to an MQTT broker for your CLI utility yourself.
go-trustless-utils is a set of utilities for working with the IPFS Trustless Gateway protocol as defined by its specification. The utilities contained here should be useful for building server and client implementations of the protocol.
Package freeipa provides a client for the FreeIPA API. It provides access to almost all methods available through the API. Every API method has generated Go structs for request parameters and output. This code is generated from a schema which was queried from a FreeIPA server using its "schema" method. This client performs basic response validation. Since the FreeIPA server does not always conform to its own schema, it can happen that this library fails to unmarshal a response from FreeIPA. If you run into that, please open an issue for this client library. With that said, this is still the most extensive golang FreeIPA client and it's probably easier to fix those issues here than to write a new client from scratch. Since FreeIPA cares about the presence or absence of fields in requests, all optional fields are defined as pointers. There are utility functions like freeipa.String to make filling these less painful. The client uses FreeIPA's JSON-RPC interface with username/password authentication. There is no support for connecting to FreeIPA with Kerberos authentication. There is currently no support for batched requests. See https://github.com/stefanabl/go-freeipa/blob/master/developing.md for information on how this library is generated.
Package dbsql implements the Go driver for Databricks SQL. Clients should use the database/sql package in conjunction with the driver. Use sql.Open() to create a database handle via a data source name string. Supported optional connection parameters can be specified as param=value pairs, as can supported optional session parameters. Use sql.OpenDB() to create a database handle via a new connector object created with dbsql.NewConnector() and its supported functional options. Cancelling a query via context cancellation or timeout is supported. Use the driverctx package under driverctx/ctx.go to add CorrelationId and ConnId to the context. CorrelationId and ConnId make it convenient to parse and create metrics in logging. **Connection Id** Internal id to track what happens under a connection. Connections can be reused, so this tracks across queries. **Query Id** Internal id to track what happens under a query. Useful because the same query can be used with multiple connections. **Correlation Id** External id, such as a request ID, to track what happens under a request. Useful to track multiple connections in the same request. Use the logger package under logger.go to set up logging (from zerolog). By default, the logging level is `warn`. If you want to disable logging, use `disabled`. You can also use Track() and Duration() to custom-log the elapsed time of anything tracked. Use the driverctx package under driverctx/ctx.go to add callbacks to the query context to receive the connection id and query id. Passing parameters to a query is supported when run against servers with version DBR 14.1 and above. For complex types, you can specify the SQL type using the dbsql.Parameter type field. If this field is set, the value field MUST be set to a string. The Go driver also supports staging operations. In order to use a staging operation, you first must update the context with a list of folders that you are allowing the driver to access. After doing so, you can execute staging operations by passing this context to ExecContext. There are three error types exposed via dbsql/errors. Each type has a corresponding sentinel value which can be used with errors.Is() to determine if one of the types is present in an error chain. See the documentation for dbsql/errors for more information. The driver supports the ability to retrieve Apache Arrow record batches. To work with record batches it is necessary to use sql.Conn.Raw() to access the underlying driver connection and retrieve a driver.Rows instance. The driver exposes two public interfaces for working with record batches in the rows sub-package. The driver.Rows instance retrieved using Conn.Raw() can be converted to a Databricks Rows instance via a type assertion; then use GetArrowBatches() to retrieve a batch iterator. If the ArrowBatchIterator is not closed, it will leak resources, such as the underlying connection. Calling code must call Release() on records returned by DBSQLArrowBatchIterator.Next().
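Stepping back, here is a minimal end-to-end sketch using database/sql; "databricks" is the name this driver registers itself under, while the host, HTTP path, and token in the DSN are illustrative placeholders:

	package main

	import (
		"context"
		"database/sql"
		"fmt"

		_ "github.com/databricks/databricks-sql-go" // registers the "databricks" driver
	)

	func main() {
		// DSN shape: token:<access-token>@<server-hostname>:<port>/<http-path>
		dsn := "token:dapiXXXX@dbc-1234.cloud.databricks.com:443/sql/1.0/warehouses/abcd"
		db, err := sql.Open("databricks", dsn)
		if err != nil {
			panic(err)
		}
		defer db.Close()

		var one int
		if err := db.QueryRowContext(context.Background(), "SELECT 1").Scan(&one); err != nil {
			panic(err)
		}
		fmt.Println(one)
	}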
Example usage:

    =======================================
    Databricks Type     --> Golang Type
    =======================================
    BOOLEAN                 --> bool
    TINYINT                 --> int8
    SMALLINT                --> int16
    INT                     --> int32
    BIGINT                  --> int64
    FLOAT                   --> float32
    DOUBLE                  --> float64
    VOID                    --> nil
    STRING                  --> string
    DATE                    --> time.Time
    TIMESTAMP               --> time.Time
    DECIMAL(p,s)            --> sql.RawBytes
    BINARY                  --> sql.RawBytes
    ARRAY<elementType>      --> sql.RawBytes
    STRUCT                  --> sql.RawBytes
    MAP<keyType, valueType> --> sql.RawBytes
    INTERVAL (year-month)   --> string
    INTERVAL (day-time)     --> string

For ARRAY, STRUCT, and MAP types, sql.Scan can cast sql.RawBytes to a JSON string, which can be unmarshalled to Golang arrays, maps, and structs, as in the sketch below.
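A hedged sketch of that unmarshalling flow, assuming a handle `db` opened as above and an illustrative query returning a single ARRAY<INT> column (imports: database/sql, encoding/json, fmt, log):

    rows, err := db.Query("SELECT array(1, 2, 3)")
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        var raw sql.RawBytes
        if err := rows.Scan(&raw); err != nil {
            log.Fatal(err)
        }
        // raw holds a JSON document such as [1,2,3]
        var nums []int
        if err := json.Unmarshal(raw, &nums); err != nil {
            log.Fatal(err)
        }
        fmt.Println(nums) // may generate a row like: [1 2 3]
    }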
Package socksy5 provides a SOCKS5 middle layer and utils for simple request handling. MidLayer implements the middle layer, which accepts client connections in the form of net.Conn (see MidLayer.ServeClient), then wraps client handshakes and requests as structs, letting external code decide whether to accept or reject, which kind of subnegotiation to use, and so on. This provides advantages when you need multi-homed BND or UDP ASSOCIATION processing, custom subnegotiation and encryption, or attaching a special connection to CONNECT requests. Besides that, socksy5 also provides Connect, Binder and Associator as simple handlers for CONNECT, BND and UDP ASSOC requests. Listen is also provided as a simple listening util which passes net.Conn to MidLayer automatically. They exist for ease of use if you want to set up a SOCKS5 server fast, and thus they only have basic features. You can handle handshakes and requests yourself if they don't meet your requirements. First pass a net.Conn to a MidLayer instance, then MidLayer will begin communicating with the client. When the client begins handshakes or sends requests, MidLayer will emit Handshake, ConnectRequest, BindRequest and AssocRequest via channels. Call their methods to decide which kind of authentication to use, whether to accept or reject, and so on. Logs are emitted via channels too. See MidLayer.LogChan, MidLayer.HandshakeChan, MidLayer.RequestChan. Users of this package should read Request, as it contains general info about different types of requests. socksy5 provides only limited implementations of authentication methods, and this is likely to remain the case for quite a long time. MidLayer does relay TCP traffic, but it doesn't dial outbound or relay UDP traffic.
lf is a terminal file manager. Source code can be found in the repository at https://github.com/gokcehan/lf This documentation can either be read from the terminal using 'lf -doc' or online at https://pkg.go.dev/github.com/gokcehan/lf You can also use the 'doc' command (default '<f-1>') inside lf to view the documentation in a pager. A man page with the same content is also available in the repository at https://github.com/gokcehan/lf/blob/master/lf.1 You can run 'lf -help' to see descriptions of command line options. The following commands are provided by lf: The following command line commands are provided by lf: The following options can be used to customize the behavior of lf: The following environment variables are exported for shell commands: The following special shell commands are used to customize the behavior of lf when defined: The following commands/keybindings are provided by default: The following additional keybindings are provided by default: If the 'mouse' option is enabled, mouse buttons have the following default effects: Configuration files should be located at: Colors file should be located at: Icons file should be located at: Selection file should be located at: Marks file should be located at: Tags file should be located at: History file should be located at: You can configure these locations with the following variables given with their order of precedence and their default values: A sample configuration file can be found at https://github.com/gokcehan/lf/blob/master/etc/lfrc.example This section shows information about builtin commands. Modal commands do not take any arguments, but instead change the operation mode to read their input conveniently, and so they are meant to be assigned to keybindings. Quit lf and return to the shell. Move/scroll the current file selection upwards/downwards by one/half a page/full page. Change the current working directory to the parent directory. If the current file is a directory, then change the current directory to it, otherwise, execute the 'open' command. A default 'open' command is provided to call the default system opener asynchronously with the current file as the argument. A custom 'open' command can be defined to override this default. Change the current working directory to the next/previous jumplist item. Move the current file selection to the top/bottom of the directory. A count can be specified to move to a specific line, for example use `3G` to move to the third line. Move the current file selection to the high/middle/low position of the screen. Toggle the selection of the current file or files given as arguments. Reverse the selection of all files in the current directory (i.e. 'toggle' all files). Selections in other directories are not affected by this command. You can define a new command to select all files in the directory by combining 'invert' with 'unselect' (i.e. 'cmd select-all :unselect; invert'), though this will also remove selections in other directories. Reverse the selection (i.e. 'toggle') of all files at or after the current file in the current directory. To select a contiguous block of files, use this command on the first file you want to select. Then, move down to the first file you do *not* want to select (the one after the end of the desired selection) and use this command again. This achieves an effect similar to the visual mode in vim. This command is experimental and may be removed once a better replacement for the visual mode is implemented in 'lf'.
If you'd like to experiment with using this command, you should bind it to a key (e.g. 'V') for a better experience. Remove the selection of all files in all directories. Select/unselect files that match the given glob. Calculate the total size for each of the selected directories. Option 'info' should include 'size' and option 'dircounts' should be disabled to show this size. If the total size of a directory is not calculated, it will be shown as '-'. Remove all keybindings associated with the `map` command. This command can be used in the config file to remove the default keybindings. For safety purposes, `:` is left mapped to the `read` command, and `cmap` keybindings are retained so that it is still possible to exit `lf` using `:quit`. If there are no selections, save the path of the current file to the copy buffer, otherwise, copy the paths of selected files. If there are no selections, save the path of the current file to the cut buffer, otherwise, copy the paths of selected files. Copy/Move files in copy/cut buffer to the current working directory. A custom 'paste' command can be defined to override this default. Clear file paths in copy/cut buffer. Synchronize copied/cut files with the server. This command is automatically called when required. Draw the screen. This command is automatically called when required. Synchronize the terminal and redraw the screen. Load modified files and directories. This command is automatically called when required. Flush the cache and reload all files and directories. Print given arguments to the message line at the bottom. Print given arguments to the message line at the bottom and also to the log file. Print given arguments to the message line at the bottom as 'errorfmt' and also to the log file. Change the working directory to the given argument. Change the current file selection to the given argument. Remove the current file or selected file(s). A custom 'delete' command can be defined to override this default. Rename the current file using the builtin method. A custom 'rename' command can be defined to override this default. Read the configuration file given in the argument. Simulate key pushes given in the argument. Read a command to evaluate. Read a shell command to execute. Read a shell command to execute piping its standard I/O to the bottom statline. Read a shell command to execute and wait for a key press in the end. Read a shell command to execute asynchronously without standard I/O. Read key(s) to find the appropriate file name match in the forward/backward direction and jump to the next/previous match. Read a pattern to search for a file name match in the forward/backward direction and jump to the next/previous match. Command 'filter' reads a pattern to filter out and only view files matching the pattern. Command 'setfilter' does the same but uses an argument to set the filter immediately. You can supply an argument to 'filter' in order to use that as the starting prompt. Save the current directory as a bookmark assigned to the given key. Change the current directory to the bookmark assigned to the given key. A special bookmark "'" holds the previous directory after a 'mark-load', 'cd', or 'select' command. Remove a bookmark assigned to the given key. Tag a file with '*' or a single-width character given in the argument. You can define a new tag clearing command by combining 'tag' with 'tag-toggle' (i.e. 'cmd tag-clear :tag; tag-toggle').
Tag a file with '*' or a single-width character given in the argument if the file is untagged, otherwise remove the tag. The prompt character specifies which of the several command-line modes you are in. For example, the 'read' command takes you to the ':' mode. When the cursor is at the first character in ':' mode, pressing one of the keys '!', '$', '%', or '&' takes you to the corresponding mode. You can go back with 'cmd-delete-back' ('<backspace>' by default). The command line commands should be mostly compatible with readline keybindings. A character refers to a Unicode code point, a word consists of letters and digits, and a unix word consists of any non-blank characters. Quit command line mode and return to normal mode. Autocomplete the current word. Autocomplete the current word with menu selection. You need to assign keys to these commands (e.g. 'cmap <tab> cmd-menu-complete; cmap <backtab> cmd-menu-complete-back'). You can then use the assigned keys to display the menu and cycle through completion options. Accept the currently selected match in menu completion and close the menu. Execute the current line. Interrupt the current shell-pipe command and return to the normal mode. Go to next/previous item in the history. Move the cursor to the left/right. Move the cursor to the beginning/end of line. Delete the next character. Delete the previous character. When at the beginning of a prompt, returns either to normal mode or to ':' mode. Delete everything up to the beginning/end of line. Delete the previous unix word. Paste the buffer content containing the last deleted item. Transpose the positions of last two characters/words. Move the cursor by one word in the forward/backward direction. Delete the next word in the forward direction. Capitalize/uppercase/lowercase the current word and jump to the next word. List all key mappings in normal mode or command-line editing mode. List all custom commands defined using the `cmd` command. List the contents of the jump list, in order of the most recently visited locations. Each location is marked with the count that can be used with the `jump-prev` and `jump-next` commands (e.g. use `3[` to move three spaces backwards in the jump list). A '>' is used to mark the current location in the jump list. This section shows information about options to customize the behavior. Character ':' is used as the separator for list options '[]int' and '[]string'. When this option is enabled, the find command starts matching patterns from the beginning of file names, otherwise, it can match at an arbitrary position. Automatically quit the server when there are no clients left connected. Format string of the box drawing characters enabled by the `drawbox` option. Set the path of a cleaner file. The file should be executable. This file is called if previewing is enabled, the previewer is set, and the previously selected file had its preview cache disabled. Five arguments are passed to the file, (1) current file name, (2) width, (3) height, (4) horizontal position, and (5) vertical position of preview pane respectively. Preview clearing is disabled when the value of this option is left empty. Format strings for highlighting the cursor. `cursoractivefmt` applies in the current directory pane, `cursorparentfmt` applies in panes that show parents of the current directory, and `cursorpreviewfmt` applies in panes that preview directories. The default is to make the active cursor and the parent directory cursor inverted. The preview cursor is underlined.
Some other possibilities to consider for the preview or parent cursors: an empty string for no cursor, "\033[7;2m" for dimmed inverted text (visibility varies by terminal), "\033[7;90m" for inverted text with grey (aka "brightblack") background. If the format string contains the characters `%s`, it is interpreted as a format string for `fmt.Sprintf`. Such a string should end with the terminal reset sequence. For example, "\033[4m%s\033[0m" has the same effect as "\033[4m". Cache directory contents. When this option is enabled, directory sizes show the number of items inside instead of the total size of the directory, which needs to be calculated for each directory using 'calcdirsize'. This information needs to be calculated by reading the directory and counting the items inside. Therefore, this option is disabled by default for performance reasons. This option only has an effect when 'info' has a 'size' field and the pane is wide enough to show the information. At most 999 items are counted per directory, and bigger directories are shown as '999+'. Show directories first above regular files. Show only directories. If enabled, directories will also be passed to the previewer script. This allows custom previews for directories. Draw boxes around panes with box drawing characters. Format string of the file name when creating duplicate files. With the default format, copying a file `abc.txt` to the same directory will result in a duplicate file called `abc.txt.~1~`. Special expansions are provided, '%f' as the file name, '%b' as the base name (file name without extension), '%e' as the extension (including the dot), and '%n' as the number of duplicates. Format string of error messages shown in the bottom message line. If the format string contains the characters `%s`, it is interpreted as a format string for `fmt.Sprintf`. Such a string should end with the terminal reset sequence. For example, "\033[4m%s\033[0m" has the same effect as "\033[4m". File separator used in environment variables 'fs' and 'fx'. Number of characters prompted for the find command. When this value is set to 0, the find command prompts until there is only a single match left. When this option is enabled, search command patterns are considered globs, otherwise they are literals. With globbing, '*' matches any sequence, '?' matches any character, and '[...]' or '[^...]' matches character sets or ranges. Otherwise, these characters are interpreted as they are. Show hidden files. On Unix systems, hidden files are determined by the value of 'hiddenfiles'. On Windows, only files with hidden attributes are considered hidden files. List of hidden file glob patterns. Patterns can be given as relative or absolute paths. Globbing supports the usual special characters, '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. In addition, if a pattern starts with '!', then its matches are excluded from hidden files. To add multiple patterns, use ':' as a separator. Example: '.*:lost+found:*.bak' Save command history. Show icons before each item in the list. Sets the 'IFS' variable in shell commands. It works by adding the assignment to the beginning of the command string as "IFS='...'; ...". The reason is that the 'IFS' variable is not inherited by the shell for security reasons. This method assumes a POSIX shell syntax and so it can fail for non-POSIX shells. This option has no effect when the value is left empty. This option does not have any effect on Windows.
Ignore case in sorting and search patterns. Ignore diacritics in sorting and search patterns. Jump to the first match after each keystroke during searching. Apply filter pattern after each keystroke during filtering. List of information shown for directory items at the right side of the pane. Currently supported information types are 'size', 'time', 'atime', and 'ctime'. Information is only shown when the pane width is more than twice the width of information. Format string of the file time shown in the info column when it matches this year. Format string of the file time shown in the info column when it doesn't match this year. Send mouse events as input. Show the position number for directory items at the left side of the pane. When the 'relativenumber' option is enabled, only the current line shows the absolute position and relative positions are shown for the rest. Format string of the position number for each line. Set the interval in seconds for periodic checks of directory updates. This works by periodically calling the 'load' command. Note that directories are already updated automatically in many cases. This option can be useful when there is an external process changing the displayed directory and you are not doing anything in lf. Periodic checks are disabled when the value of this option is set to zero. List of attributes that are preserved when copying files. Currently supported attributes are 'mode' (i.e. access mode) and 'timestamps' (i.e. modification time and access time). Note: preserving other attributes such as ownership or change/birth timestamps is desirable, but not portably supported in Go. Show previews of files and directories in the rightmost pane. If the file has more lines than the preview pane, the rest of the lines are not read. Files containing the null character (U+0000) in the read portion are considered binary files and displayed as 'binary'. Set the path of a previewer file to filter the content of regular files for previewing. The file should be executable. Five arguments are passed to the file, (1) current file name, (2) width, (3) height, (4) horizontal position, and (5) vertical position of preview pane respectively. A SIGPIPE signal is sent when enough lines are read. If the previewer returns a non-zero exit code, then the preview cache for the given file is disabled. This means that if the file is selected in the future, the previewer is called once again. Preview filtering is disabled and files are displayed as they are when the value of this option is left empty. Format string of the prompt shown in the top line. Special expansions are provided, '%u' as the user name, '%h' as the host name, '%w' as the working directory, '%d' as the working directory with a trailing path separator, '%f' as the file name, and '%F' as the current filter. '%S' may be used once and will provide a spacer so that the following parts are right aligned on the screen. The home folder is shown as '~' in the working directory expansion. Directory names are automatically shortened to a single character starting from the leftmost parent when the prompt does not fit the screen. List of ratios of pane widths. The number of items in the list determines the number of panes in the ui. When the 'preview' option is enabled, the rightmost number is used for the width of the preview pane. Show the position number relative to the current line. When 'number' is enabled, the current line shows the absolute position, otherwise nothing is shown. Reverse the direction of sort. List of information shown in the status line ruler.
Currently supported information types are 'acc', 'progress', 'selection', 'filter', 'ind', 'df' and names starting with 'lf_'. `acc` shows the pressed keys (e.g. for bindings with multiple key presses or counts given to bindings). `progress` shows the progress of file operations (e.g. copying a large directory). `selection` shows the number of files that are selected, or designated for being cut/copied. `filter` shows 'F' if a filter is currently being applied. `ind` shows the current position of the cursor as well as the number of files in the current directory. `df` shows the amount of free disk space remaining. Names starting with `lf_` show the value of environment variables exported by lf. This is useful for displaying the current settings (e.g. `lf_selmode` displays the current setting for the `selmode` option). User defined options starting with `lf_user_` are also supported, so it is possible to display information set from external sources. Selection mode for commands. When set to 'all' it will use the selected files from all directories. When set to 'dir' it will only use the selected files in the current directory. Minimum number of offset lines shown at all times in the top and the bottom of the screen when scrolling. The current line is kept in the middle when this option is set to a large value that is bigger than half the number of lines. A smaller offset can be used when the current file is close to the beginning or end of the list to show the maximum number of items. Shell executable to use for shell commands. Shell commands are executed as 'shell shellopts shellflag command -- arguments'. Command line flag used to pass shell commands. List of shell options to pass to the shell executable. Override the 'ignorecase' option when the pattern contains an uppercase character. This option has no effect when 'ignorecase' is disabled. Override the 'ignoredia' option when the pattern contains a character with a diacritic. This option has no effect when 'ignoredia' is disabled. Sort type for directories. Currently supported sort types are 'natural', 'name', 'size', 'time', 'ctime', 'atime', and 'ext'. Format string of the file info shown in the bottom left corner. Special expansions are provided, '%p' as the file permissions, '%c' as the link count, '%u' as the user, '%g' as the group, '%s' as the file size, '%t' as the last modified time, and '%l' as the link target if it exists (otherwise a blank string). '%L' is the same as '%l' but with an arrow '-> ' prepended. On Windows, the link count, user and group fields are not supported and will be replaced with a blank string if specified. The default for Windows is "\033[36m%p\033[0m %s %t %L". Number of space characters to show for the horizontal tabulation character (U+0009). Format string of the tags. If the format string contains the characters `%s`, it is interpreted as a format string for `fmt.Sprintf`. Such a string should end with the terminal reset sequence. For example, "\033[4m%s\033[0m" has the same effect as "\033[4m". Marks to be considered temporary (e.g. 'abc' refers to marks 'a', 'b', and 'c'). These marks are not synced to other clients and they are not saved in the bookmarks file. Note that the special bookmark "'" is always treated as temporary and it does not need to be specified. Format string of the file modification time shown in the bottom line. Truncate character shown at the end when the file name does not fit the pane.
When a filename is too long to be shown completely, the available space is partitioned in two pieces. truncatepct defines a fraction (in percent between 0 and 100) for the size of the first piece, which will show the beginning of the filename. The second piece will show the end of the filename and will use the rest of the available space. Both pieces are separated by the truncation character (truncatechar). A value of 100 will only show the beginning of the filename, while a value of 0 will only show the end of the filename, e.g.:

- `set truncatepct 100` -> "very-long-filename-tr~" (default)
- `set truncatepct 50` -> "very-long-f~-truncated"
- `set truncatepct 0` -> "~ng-filename-truncated"

String shown after commands of shell-wait type. Searching can wrap around the file list. Scrolling can wrap around the file list. Any option that is prefixed with 'user_' is a user defined option and can be set to any string. Inside a user defined command the value will be provided in the `lf_user_{option}` environment variable. These options are not used by lf and are not persisted. The following variables are exported for shell commands: These are referred to with a '$' prefix in POSIX shells (e.g. '$f'), between '%' characters on Windows cmd (e.g. '%f%'), and with a '$env:' prefix on Windows powershell (e.g. '$env:f'). Current file selection as a full path. Selected file(s) separated with the value of 'filesep' option as full path(s). Selected file(s) (i.e. 'fs') if there are any selected files, otherwise current file selection (i.e. 'f'). Id of the running client. Present working directory. Initial working directory. The value of this variable is set to the current nesting level when you run lf from a shell spawned inside lf. You can add the value of this variable to your shell prompt to make it clear that your shell runs inside lf. For example, with POSIX shells, you can use '[ -n "$LF_LEVEL" ] && PS1="$PS1""(lf level: $LF_LEVEL) "' in your shell configuration file (e.g. '~/.bashrc'). If this variable is set in the environment, use the same value. Otherwise, this is set to 'start' on Windows, 'open' on macOS, and 'xdg-open' on others. If VISUAL is set in the environment, use its value. Otherwise, use the value of the environment variable EDITOR. If neither variable is set, this is set to 'vi' on Unix and 'notepad' on Windows. If this variable is set in the environment, use the same value. Otherwise, this is set to 'less' on Unix and 'more' on Windows. If this variable is set in the environment, use the same value. Otherwise, this is set to 'sh' on Unix and 'cmd' on Windows. Absolute path to the currently running lf binary, if it can be found. Otherwise, this is set to the string 'lf'. Value of the {option}. Value of the user_{option}. Width/Height of the terminal. Value of the count associated with the current command. This section shows information about special shell commands. This shell command can be defined to override the default 'open' command when the current file is not a directory. This shell command can be defined to override the default 'paste' command. This shell command can be defined to override the default 'rename' command. This shell command can be defined to override the default 'delete' command. This shell command can be defined to be executed before changing a directory. This shell command can be defined to be executed after changing a directory. This shell command can be defined to be executed after the selection changes. This shell command can be defined to be executed before quitting.
The following command prefixes are used by lf: The same evaluator is used for the command line and the configuration file for read and shell commands. The difference is that prefixes are not necessary in the command line. Instead, different modes are provided to read corresponding commands. These modes are mapped to the prefix keys above by default. Characters from '#' to newline are comments and ignored: There are four special commands ('set', 'map', 'cmap', and 'cmd') for configuration. Command 'set' is used to set an option which can be boolean, integer, or string: Command 'map' is used to bind a key to a command which can be a builtin command, custom command, or shell command: Command 'cmap' is used to bind a key on the command line to a command line command or any other command: You can delete an existing binding by leaving the expression empty: Command 'cmd' is used to define a custom command: You can delete an existing command by leaving the expression empty: If there is no prefix then ':' is assumed: An explicit ':' can be provided to group statements until a newline, which is especially useful for 'map' and 'cmd' commands: If you need multiple lines, you can wrap statements in '{{' and '}}' after the proper prefix. Regular keys are assigned to a command with the usual syntax: Keys combined with the shift key simply use the uppercase letter: Special keys are written in between '<' and '>' characters and always use lowercase letters: Angle brackets can be assigned with their special names: Function keys are prefixed with 'f' character: Keys combined with the control key are prefixed with 'c' character: Keys combined with the alt key are assigned in two different ways depending on the behavior of your terminal. Older terminals (e.g. xterm) may set the 8th bit of a character when the alt key is pressed. On these terminals, you can use the corresponding byte for the mapping: Newer terminals (e.g. gnome-terminal) may prefix the key with an escape key when the alt key is pressed. lf uses the escape delaying mechanism to recognize alt keys in these terminals (delay is 100ms). On these terminals, keys combined with the alt key are prefixed with 'a' character: It is possible to combine special keys with modifiers: WARNING: Some key combinations will likely be intercepted by your OS, window manager, or terminal. Other key combinations cannot be recognized by lf due to the way terminals work (e.g. `Ctrl+h` combination sends a backspace key instead). The easiest way to find out the name of a key combination and whether it will work on your system is to press the key while lf is running and read the name from the "unknown mapping" error. Mouse buttons are prefixed with 'm' character: Mouse wheel events are also prefixed with 'm' character: The usual way to map a key sequence is to assign it to a named or unnamed command. While this provides a clean way to remap builtin keys as well as other commands, it can be limiting at times. For this reason, the 'push' command is provided by lf. This command is used to simulate key pushes given as its arguments. You can 'map' a key to a 'push' command with an argument to create various keybindings.
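Pulling the configuration commands above together, here is a brief hedged sketch of the syntax (option values and key choices are illustrative):

    # set an option (boolean, integer, or string)
    set hidden true
    set scrolloff 10

    # map a key to a builtin, custom, or shell command
    map gh cd ~
    map <c-r> reload

    # bind a key on the command line
    cmap <tab> cmd-menu-complete

    # delete an existing binding by leaving the expression empty
    map d

    # define a custom command, then delete it the same way
    cmd usage $du -h -d1 | less
    cmd usage

    # group statements with an explicit ':' and wrap multiline
    # commands in '{{' and '}}' after the prefix
    map x :set hidden!; reload
    cmd usage ${{
        du -h -d1 | less
    }}

Returning to 'push': mapping keys to 'push' commands in this way is mainly useful for two purposes.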
First, it can be used to map a command with a command count: Second, it can be used to avoid typing the name when a command takes arguments: One thing to be careful about is that since the 'push' command works with keys instead of commands it is possible to accidentally create recursive bindings: These types of bindings create a deadlock when executed. Regular shell commands are the most basic command type that is useful for many purposes. For example, we can write a shell command to move selected file(s) to trash. A first attempt to write such a command may look like this: We check '$fs' to see if there are any selected files. Otherwise we just delete the current file. Since this is such a common pattern, a separate '$fx' variable is provided. We can use this variable to get rid of the conditional: The trash directory is checked each time the command is executed. We can move it outside of the command so it would only run once at startup: Since these are one liners, we can drop '{{' and '}}': Finally note that we set the 'IFS' variable manually in these commands. Instead we could use the 'ifs' option to set it for all shell commands (i.e. 'set ifs "\n"'). This can be especially useful for interactive use (e.g. '$rm $f' or '$rm $fs' would simply work). This option is not set by default as it can behave unexpectedly for new users. However, use of this option is highly recommended and it is assumed in the rest of the documentation. Regular shell commands have some limitations in some cases. When an output or error message is given and the command exits afterwards, the ui is immediately resumed and there is no way to see the message without dropping to the shell again. Also, even when there is no output or error, the ui still needs to be paused while the command is running. This can cause flickering on the screen for short commands and similar distractions for longer commands. Instead of pausing the ui, piping shell commands connect stdin, stdout, and stderr of the command to the statline at the bottom of the ui. This can be useful for programs following the Unix philosophy to give no output in the success case, and brief error messages or prompts in other cases. For example, the following rename command prompts for overwrite in the statline if there is an existing file with the given name: You can also output error messages in the command and they will show up in the statline. For example, an alternative rename command may look like this: Note that input is line buffered and output and error are byte buffered. Waiting shell commands are similar to regular shell commands except that they wait for a key press when the command is finished. These can be useful to see the output of a program before the ui is resumed. Waiting shell commands are more appropriate than piping shell commands when the command is verbose and the output is best displayed as multiline. Asynchronous shell commands are used to start a command in the background and then resume operation without waiting for the command to finish. Stdin, stdout, and stderr of the command are neither connected to the terminal nor to the ui.
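As a hedged sketch of where the trash command developed above might end up (assuming a '~/.trash' directory; adapt the destination to your system):

    cmd trash ${{
        IFS="$(printf '\n\t')"
        mv $fx ~/.trash
    }}

    # the same command as a one-liner, dropping '{{' and '}}'
    cmd trash $IFS="$(printf '\n\t')"; mv $fx ~/.trash

One of the more advanced features in lf is remote commands. All clients connect to a server on startup. It is possible to send commands to all or any of the connected clients over the common server. This is used internally to notify file selection changes to other clients. To use this feature, you need to use a client which supports communicating with a Unix domain socket. The OpenBSD implementation of netcat (nc) is one such example.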
You can use it to send a command to the socket file: Since such a client may not be available everywhere, lf comes bundled with a command line flag to be used as such. When using lf, you do not need to specify the address of the socket file. This is the recommended way of using remote commands since it is shorter and immune to socket file address changes: In this command 'send' is used to send the rest of the string as a command to all connected clients. You can optionally give it an id number to send a command to a single client: All clients have a unique id number but you may not be aware of the id number when you are writing a command. For this purpose, an '$id' variable is exported to the environment for shell commands. The value of this variable is set to the process id of the client. You can use it to send a remote command from a client to the server which in turn sends a command back to itself. So now you can display a message in the current client by calling the following in a shell command: Since lf does not have control flow syntax, remote commands are used for such needs. For example, you can configure the number of columns in the ui with respect to the terminal width as follows: Besides the 'send' command, there is a 'quit' command to quit the server when there are no connected clients left, and a 'quit!' command to force quit the server by closing client connections first: Lastly, there is a 'conn' command to connect to the server as a client. Users should not normally need this. lf uses its own builtin copy and move operations by default. These are implemented as asynchronous operations and progress is shown in the bottom ruler. These commands do not overwrite existing files or directories with the same name. Instead, a suffix that is compatible with the '--backup=numbered' option in GNU cp is added to the new files or directories. Only file modes and (some) timestamps can be preserved (see the `preserve` option); all other attributes are ignored including ownership, context, and xattr. Special files such as character and block devices, named pipes, and sockets are skipped and links are not followed. Moving is performed using the rename operation of the underlying OS. For cross-device moving, lf falls back to copying and then deletes the original files if there are no errors. Operation errors are shown in the message line as well as the log file and they do not preemptively finish the corresponding file operation. File operations can be performed on the currently selected file or alternatively on multiple files by selecting them first. When you 'copy' a file, lf doesn't actually copy the file on the disk, but only records its name to a file. The actual file copying takes place when you 'paste'. Similarly 'paste' after a 'cut' operation moves the file. You can customize copy and move operations by defining a 'paste' command. This is a special command that is called when it is defined instead of the builtin implementation. You can use the following example as a starting point: Some useful things to be considered are to use the backup ('--backup') and/or preserve attributes ('-a') options with 'cp' and 'mv' commands if they support it (i.e. GNU implementation), change the command type to asynchronous, or use the 'rsync' command with its progress bar option for copying and feed the progress to the client periodically with remote 'echo' calls.
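Such remote invocations are plain shell commands; as hedged sketches (the ids and messages are illustrative):

    # from any shell: send a command to all connected clients
    lf -remote 'send reload'

    # send a command to the single client with id 1234
    lf -remote 'send 1234 reload'

    # inside a shell command: echo a message back to the current client
    lf -remote "send $id echo copying: 50% done"

By default, lf does not assign the 'delete' command to a key to protect new users. You can customize file deletion by defining a 'delete' command. A hedged sketch of such a command that prompts before removing files completely (commands along these lines are provided in the example configuration file):

    cmd delete ${{
        set -f
        printf "%s\n" $fx
        printf "delete? [y/N] "
        read ans
        [ "$ans" = "y" ] && rm -rf $fx
    }}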
You can also assign a key to this command if you like. Example commands to move selected files to a trash folder and to remove files completely after a prompt are provided in the example configuration file. There are two mechanisms implemented in lf to search a file in the current directory. Searching is the traditional method to move the selection to a file matching a given pattern. Finding is an alternative way to search for a pattern possibly using fewer keystrokes. The searching mechanism is implemented with commands 'search' (default '/'), 'search-back' (default '?'), 'search-next' (default 'n'), and 'search-prev' (default 'N'). You can enable the 'globsearch' option to match with a glob pattern. Globbing supports '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. You can enable the 'incsearch' option to jump to the current match at each keystroke while typing. In this mode, you can either use 'cmd-enter' to accept the search or use 'cmd-escape' to cancel the search. You can also map some other commands with 'cmap' to accept the search and execute the command immediately afterwards. For example, you can use the right arrow key to finish the search and open the selected file with the following mapping: The finding mechanism is implemented with commands 'find' (default 'f'), 'find-back' (default 'F'), 'find-next' (default ';'), and 'find-prev' (default ','). You can disable the 'anchorfind' option to match a pattern at an arbitrary position in the filename instead of the beginning. You can set the number of keys to match using the 'findlen' option. If you set this value to zero, then the keys are read until there is only a single match. Default values of these two options are set to jump to the first file with the given initial. Some options affect both searching and finding. You can disable the 'wrapscan' option to prevent searches from wrapping around at the end of the file list. You can disable the 'ignorecase' option to match cases in the pattern and the filename. This option is already automatically overridden if the pattern contains uppercase characters. You can disable the 'smartcase' option to disable this behavior. Two similar options 'ignoredia' and 'smartdia' are provided to control matching diacritics in Latin letters. You can define an 'open' command (default 'l' and '<right>') to configure file opening. This command is only called when the current file is not a directory, otherwise the directory is entered instead. You can define it just as you would define any other command: It is possible to use different command types: You may want to use either file extensions or mime types from the 'file' command: You may want to use 'setsid' before your opener command to have persistent processes that continue to run after lf quits. Regular shell commands (i.e. '$') drop to the terminal, which results in a flicker for commands that finish immediately (e.g. 'xdg-open' in the above example). If you want to use asynchronous shell commands (i.e. '&') but also want to use the terminal when necessary (e.g. 'vi' in the above example), you can use a remote command: Note that asynchronous shell commands run in their own process group by default so they do not require the manual use of 'setsid'. The following command is provided by default: You may also use any other existing file openers as you like. Possible options are 'libfile-mimeinfo-perl' (executable name is 'mimeopen'), 'rifle' (ranger's default file opener), or 'mimeo' to name a few.
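As a hedged sketch of such an 'open' command, dispatching on mime types (the specific programs are illustrative; terminal programs like the editor go through a remote command as described above so the asynchronous prefix can still be used):

    cmd open &{{
        case $(file --mime-type -Lb $f) in
            text/*) lf -remote "send $id \$$EDITOR \$f";;
            *) for f in $fx; do $OPENER "$f" > /dev/null 2>&1; done;;
        esac
    }}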
lf previews files on the preview pane by printing the file until the end or until the preview pane is filled. This output can be enhanced by providing a custom preview script for filtering. This can be used to highlight source code, list the contents of archive files, or view pdf or image files, to name a few. For coloring, lf recognizes ANSI escape codes. In order to use this feature you need to set the value of the 'previewer' option to the path of an executable file. Five arguments are passed to the file, (1) current file name, (2) width, (3) height, (4) horizontal position, and (5) vertical position of preview pane respectively. Output of the execution is printed in the preview pane. You may also want to use the same script in your pager mapping as well: For the 'less' pager, you may instead utilize the 'LESSOPEN' mechanism so that useful information about the file such as the full path of the file can still be displayed in the statusline below: Since this script is called for each file selection change it needs to be as efficient as possible and this responsibility is left to the user. You may use file extensions to determine the type of file more efficiently compared to obtaining mime types from the 'file' command. Extensions can then be used to match cleanly within a conditional: Another important consideration for efficiency is the use of programs with short startup times for preview. For this reason, 'highlight' is recommended over 'pygmentize' for syntax highlighting. Besides, it is also important that the application processes the file on the fly rather than first reading it into memory and then doing the processing afterwards. This is especially relevant for big files. lf automatically closes the previewer script output pipe with a SIGPIPE when enough lines are read. When everything else fails, you can make use of the height argument to only feed the first portion of the file to a program for preview. Note that some programs may not respond well to SIGPIPE and may exit with a non-zero return code, which disables caching for the file. You may add a trailing '|| true' command to avoid such errors: You may also use an existing preview filter as you like. Your system may already come with a preview filter named 'lesspipe'. These filters may have a mechanism to add user customizations as well. See the related documentation for more information. lf changes the working directory of the process to the current directory so that shell commands always work in the displayed directory. After quitting, it returns to the original directory where it was first launched, like all shell programs. If you want to stay in the current directory after quitting, you can use one of the example lfcd wrapper shell scripts provided in the repository at https://github.com/gokcehan/lf/tree/master/etc There is a special command 'on-cd' that runs a shell command when it is defined and the directory is changed. You can define it just as you would define any other command: If you want to print escape sequences, you may redirect 'printf' output to '/dev/tty'. The following xterm specific escape sequence sets the terminal title to the working directory: This command runs whenever you change directory but not on startup. You can add an extra call to make it run on startup as well: Note that all shell commands are possible but '%' and '&' are usually more appropriate as '$' and '!' cause flickers and pauses respectively. There is also a 'pre-cd' command that works like 'on-cd', but is run before the directory is actually changed.
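As hedged sketches of the two pieces described above (the paths and program choices are illustrative): a minimal extension-based previewer script, and an 'on-cd' command that sets the terminal title to the working directory:

    #!/bin/sh
    # illustrative previewer; enable with: set previewer ~/.config/lf/previewer.sh
    case "$1" in
        *.tar*) tar tf "$1";;
        *.zip) unzip -l "$1";;
        *.pdf) pdftotext "$1" - || true;;
        *) highlight -O ansi "$1" || true;;
    esac

    # set the terminal title to the working directory on each change
    cmd on-cd &{{
        printf "\033]0;$PWD\007" > /dev/tty
    }}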
lf tries to automatically adapt its colors to the environment. It starts with a default colorscheme and updates colors using the values of existing environment variables, possibly overwriting its previous values. Colors are set in the following order: Please refer to the corresponding man pages for more information about 'LSCOLORS' and 'LS_COLORS'. 'LF_COLORS' is provided with the same syntax as 'LS_COLORS' in case you want to configure colors only for lf but not ls. This can be useful since there are some differences between ls and lf, though one should expect the same behavior for common cases. A colors file is provided for easier configuration without environment variables. This file should consist of whitespace-separated pairs, with the '#' character starting comments that run to the end of the line. You can configure lf colors in two different ways. First, you can only configure 8 basic colors used by your terminal and lf should pick up those colors automatically. Depending on your terminal, you should be able to select your colors from a 24-bit palette. This is the recommended approach as colors used by other programs will also match each other. Second, you can set the values of the environment variables or the colors file mentioned above for fine grained customization. Note that 'LS_COLORS/LF_COLORS' are more powerful than 'LSCOLORS' and they can be used even when GNU programs are not installed on the system. You can combine this second method with the first method for best results. Lastly, you may also want to configure the colors of the prompt line to match the rest of the colors. Colors of the prompt line can be configured using the 'promptfmt' option, which can include hardcoded colors as ANSI escapes. See the default value of this option to have an idea about how to color this line. It is worth noting that lf uses as many colors as are advertised by your terminal's entry in the terminfo database on your system (as shown by infocmp). If an entry is not present, it falls back to an internal database. If your terminal supports 24-bit colors but either does not have a database entry or does not advertise all capabilities, you can enable support by setting the '$COLORTERM' variable to 'truecolor' or ensuring '$TERM' is set to a value that ends with '-truecolor'. Default lf colors are mostly taken from GNU dircolors defaults. These defaults use 8 basic colors and the bold attribute. Default dircolors entries with background colors are simplified to avoid confusion with the current file selection in lf. Similarly, there are only file type matchings and extension matchings are left out for simplicity. Default values are as follows, given with their matching order in lf: Note that lf first tries matching file names and then falls back to file types. The full order of matchings from most specific to least specific is as follows: For example, given a regular text file '/path/to/README.txt', the following entries are checked in the configuration and the first one to match is used: Given a regular directory '/path/to/example.d', the following entries are checked in the configuration and the first one to match is used: Note that glob-like patterns do not actually perform glob matching due to performance reasons. For example, you can set a variable along these lines, though having all entries on a single line can make it hard to read. You may instead divide it into multiple lines in between double quotes by escaping newlines with backslashes, as in the sketch below.
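(The entries here are illustrative.)

    export LF_COLORS="\
    ~/Documents=01;31:\
    ~/Downloads=01;31:\
    ~/.local/share=01;31:\
    .git/=01;32:\
    *.gitignore=32:\
    Makefile=32:\
    README.*=33:\
    *.txt=34:\
    di=01;34:\
    ex=01;32:\
    "

Having such a long variable definition in a shell configuration file might be undesirable.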
You may instead use the colors file for configuration. A sample colors file can be found at https://github.com/gokcehan/lf/blob/master/etc/colors.example You may also see the Wikipedia page on ANSI escape codes at https://en.wikipedia.org/wiki/ANSI_escape_code Icons are configured using the 'LF_ICONS' environment variable or an icons file. The variable uses the same syntax as 'LS_COLORS/LF_COLORS'. Instead of colors, you should put a single character as the value of each entry. The icons file should consist of whitespace-separated pairs, with the '#' character starting comments that run to the end of the line. Do not forget to enable the 'icons' option to see the icons. Default values are as follows, given with their matching order in lf: A sample icons file can be found at https://github.com/gokcehan/lf/blob/master/etc/icons.example
Package sshego is a Go library that does secure port forwarding over SSH. Also included is `gosshtun`, a command line utility that demonstrates use of the library and may be useful standalone. The intent of having a Go library is so that it can be used to secure (via SSH tunnel) any other traffic that your Go application would normally have to do over cleartext TCP. While you could always run a tunnel as a separate process, by running the tunnel in process with your application, you know the tunnel is running when the process is running. It's just simpler to administer; only one thing to start instead of two. Also this is much simpler, and much faster, than using a virtual private network (VPN). For a speed comparison, consider [1] where SSH is seen to be at least 2x faster than OpenVPN. [1] http://serverfault.com/questions/653211/ssh-tunneling-is-faster-than-openvpn-could-it-be The sshego library typically acts as an ssh client, but also provides options to support running an embedded sshd server daemon. Port forwarding is the most typical use of the client, and this is the equivalent of using the standalone `ssh` client program and giving the `-L` and/or `-R` flags. If you only trust the user running your application and not your entire host, you can further restrict access by using either DialConfig.Dial() for a direct-tcpip connection, or by using the unix-domain-socket support. A `gosshtun` invocation is equivalent to the corresponding `ssh -L` command, with the addendum that `gosshtun` requires the use of a passwordless private `-key` file, and will never prompt you for a password at the keyboard. This makes it ideal for embedding inside your application to secure your (e.g. mysql, postgres, other cleartext) traffic. As many connections as you need will be multiplexed over the same ssh tunnel. We check the sshd server's host key. We prevent MITM attacks by only allowing new servers if `-new` is given. You should give `-new` only once at setup time. Then the lack of `-new` can protect you on subsequent runs, because the server's host key must match what we were given the first time. Such a forwarding setup means the following two network hops will happen when a local browser connects to localhost:8888, where hop (a) takes place inside the previously established ssh tunnel and hop (b) takes place over basic, un-adorned, un-encrypted TCP/IP. Of course you could always run `gosshtun` again on the remote host to secure the additional hop as well, but typically -remote is aimed at 127.0.0.1, which will be internal to the remote host itself and so needs no encryption.
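As a hedged sketch of such an invocation: only the `-key`, `-new`, and `-remote` flags are named in this documentation; the other flag names below are assumptions, so check `gosshtun -help` for the real ones.

    # forward local port 8888 through jumpy.example.com to its local web server
    gosshtun -listen 127.0.0.1:8888 \
        -sshd jumpy.example.com:22 \
        -remote 127.0.0.1:80 \
        -user alice \
        -key ~/.ssh/id_rsa_nopassphrase

With a setup like this, hop (a) from localhost:8888 to port 22 on the jump host rides inside the SSH tunnel, and hop (b) from the jump host to its own 127.0.0.1:80 is plain TCP, as described above.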
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best-practices. The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers: The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime. A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/main/FAQ.md. Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, and setting up common dependencies, like shared caches and clients, as well as managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals). Controllers (pkg/controller) use events (pkg/event) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases. Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Response or an error indicating whether to requeue for a second round of processing. Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
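To make that concrete, here is a minimal hedged sketch of a Reconciler wired up through the Builder; the choice of ReplicaSet and the error handling are illustrative:

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    type ReplicaSetReconciler struct {
        client.Client
    }

    // Reconcile fetches the object named in the Request and acts on it.
    func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        var rs appsv1.ReplicaSet
        if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {
            // The object may already be deleted; don't requeue in that case.
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }
        // ... reconcile the object here ...
        return ctrl.Result{}, nil
    }

    func main() {
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            panic(err)
        }
        if err := ctrl.NewControllerManagedBy(mgr).
            For(&appsv1.ReplicaSet{}).
            Complete(&ReplicaSetReconciler{Client: mgr.GetClient()}); err != nil {
            panic(err)
        }
        // Run until a termination signal is received.
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            panic(err)
        }
    }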
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific). Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager. Logging (pkg/log) in controller-runtime is done via structured logs, using a set of interfaces called logr (https://pkg.go.dev/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime. Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these via an HTTP endpoint, and additional metrics may be registered to this Registry as normal. You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup. This example creates a simple application Controller that is configured for ReplicaSets and Pods. * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. This example creates a simple application Controller that is configured for ReplicaSets and Pods. This application controller will be running leader election with the provided configuration in the manager options. If leader election configuration is not provided, the controller runs leader election with default values. Default values taken from: https://github.com/kubernetes/component-base/blob/master/config/v1alpha1/defaults.go * defaultLeaseDuration = 15 * time.Second * defaultRenewDeadline = 10 * time.Second * defaultRetryPeriod = 2 * time.Second * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application.
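A minimal hedged sketch of enabling leader election through the manager options (the lock ID and namespace are placeholders):

    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
        LeaderElection:          true,
        LeaderElectionID:        "my-controller-lock",
        LeaderElectionNamespace: "default",
    })
    if err != nil {
        panic(err)
    }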
Package smpp implements SMPP protocol v3.4. It allows easier creation of SMPP clients and servers by providing utilities for PDU and session handling. In order to do any kind of interaction you first need to create an SMPP Session (https://godoc.org/github.com/pentolbakso/smpp-go#Session). Session is the main carrier of the protocol and enforcer of the specification rules. A naked session can be created with: But it's much more convenient to use helpers that do the binding with the remote SMSC and return a session prepared for sending: And once you have the session it can be used for sending PDUs to the bound peer. A session that is no longer used must be closed: If you want to handle incoming requests to the session, specify an SMPPHandler in the session configuration when creating a new session, similarly to a Handler from the _net/http_ package: Detailed examples for SMPP client and server can be found in the examples dir.
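As a heavily hedged sketch of that flow: the helper and configuration names below (BindTx, SessionConf, BindConf) are hypothetical stand-ins for the real API, which is shown in the examples dir; only Session itself is named in this documentation.

    // Hypothetical names throughout; see the examples dir for working code.
    sess, err := smpp.BindTx(smpp.SessionConf{}, smpp.BindConf{
        Addr:     "smsc.example.com:2775",
        SystemID: "client",
        Password: "secret",
    })
    if err != nil {
        log.Fatal(err)
    }
    defer sess.Close() // sessions that are no longer used must be closed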
Package api is the root of the packages used to access Google Cloud Services. See https://godoc.org/google.golang.org/api for a full list of sub-packages.

Within api there exist numerous clients which connect to Google APIs, and various utility packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option.

All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in https://godoc.org/google.golang.org/api/transport for more details.

Due to the auto-generated nature of this collection of libraries, complete APIs or specific versions can appear or go away without notice. As a result, you should always locally vendor any API(s) that your code relies upon. Google APIs follow semver as specified by https://cloud.google.com/apis/design/versioning. The code generator and the code it produces - the libraries in the google.golang.org/api/... subpackages - are beta. Note that versioning and stability are strictly not communicated through Go modules. Go modules are used only for dependency management.

Many parameters are specified using ints. However, underlying APIs might operate on a finer granularity, expecting int64, int32, uint64, or uint32, all of which have different maximum values. Consequently, specifying an int parameter in one of these clients may result in an error from the API because the value is too large. To see the exact type of int that the API expects, you can inspect the API's discovery doc. A global catalogue pointing to the discovery doc of APIs can be found at https://www.googleapis.com/discovery/v1/apis.

The ForceSendFields field can be found on all Request/Response structs in the generated clients. All of these types have the JSON `omitempty` field tag present on their fields. This means if a type is set to its default value it will not be marshalled. Sometimes you may actually want to send a default value, for instance sending an int of `0`. In this case you can override the `omitempty` feature by adding the field name to the `ForceSendFields` slice. See docs on any struct for more details. This may be used to include empty fields in Patch requests.

The NullFields field can be found on all Request/Response structs in the generated clients. It can be used to send JSON null values for the listed fields. By default, fields with empty values are omitted from API requests because of the presence of the `omitempty` field tag on all fields. However, any field with an empty value appearing in NullFields will be sent to the server as null. It is an error if a field in this list has a non-empty value. This may be used to include null fields in Patch requests.

An error returned by a client's Do method may be cast to a *googleapi.Error or unwrapped to an *apierror.APIError. The https://pkg.go.dev/google.golang.org/api/googleapi#Error type is useful for getting the HTTP status code. The https://pkg.go.dev/github.com/googleapis/gax-go/v2/apierror#APIError type is useful for inspecting structured details of the underlying API response, such as the reason for the error and the error domain, which is typically the registered service name of the tool or product that generated the error.

If an API call returns an Operation, that means it could take some time to complete the work initiated by the API call.
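A brief sketch of the error handling just described, where err is an error returned by any client's Do method and inspectErr is a helper written for this sketch:

	import (
		"errors"
		"fmt"

		"github.com/googleapis/gax-go/v2/apierror"
		"google.golang.org/api/googleapi"
	)

	// inspectErr reports details of an error returned by a client's Do method.
	func inspectErr(err error) {
		var gErr *googleapi.Error
		if errors.As(err, &gErr) {
			fmt.Println("HTTP status code:", gErr.Code)
		}
		var aErr *apierror.APIError
		if errors.As(err, &aErr) {
			fmt.Println("reason:", aErr.Reason())
			fmt.Println("domain:", aErr.Domain())
		}
	}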
Applications that are interested in the end result of the operation they initiated should wait until the Operation.Done field indicates it is finished. To do this, use the service's Operation client and a polling loop.
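A hedged sketch of such a loop follows; svc.Operations.Get and the Done field match the common shape of the generated clients, but the service client (svc), the operation name (opName), and the fixed five-second delay are all illustrative assumptions:

	// Poll the operation until the service reports it is done.
	// opName is the Operation.Name returned by the initiating call.
	for {
		op, err := svc.Operations.Get(opName).Do()
		if err != nil {
			return err
		}
		if op.Done {
			break
		}
		// Fixed delay for illustration; production code may prefer backoff.
		time.Sleep(5 * time.Second)
	}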