Package withings is a client module for working with the Withings (previously Nokia Health (previously Withings)) API. The current version (v2) of this module has been updated to work with the newer OAuth2 implementation of the API. As with all OAuth2 APIs, you must obtain authorization to access the user's data. To do so you must first register your application with the Withings API to obtain a ClientID and ClientSecret.

Once you have your client information you can create a new client. You will also need to provide the redirect URL you supplied during application registration. This URL is where the user will be redirected and will include the code you need to generate the access token used to access the user's data. Under normal circumstances you should have an HTTP server listening on that address and pull the code from the URL query parameters.

You can now use the client to generate the authorization URL the user needs to navigate to. This URL takes the user to a page that prompts them to allow your application to access their data. The state string returned by the method is a randomly generated base64 string created using the crypto/rand module. It is returned in the redirect as a query parameter, and the two values should be verified to match. This is not required, but it is useful for security reasons.

Using the code returned in the redirect's query parameters, you can generate a new user. This user struct can be used to immediately perform actions against the user's data. It also contains the token, which you should save somewhere for reuse. Use whatever storage you would like here. Make sure to save at least the refresh token so you can access the user's data at a later date. You may also save the access token, but it does expire, and creating a new client from saved token data only requires the refresh token. You can easily create a user from a saved token using the NewUserFromRefreshToken method. A working, configured client is required for the user generated from this method to work.

The user struct has various methods, one for each API endpoint, to perform data retrieval. The methods take a specific param struct specifying the API options to use on the request. The API is a bit "special", so the params vary a bit between methods. The client does what it can to smooth those differences out, but there is only so much that can be done.

Every method has two forms: one that accepts a context and one that does not. This allows you to provide a context on each request if you would like to. By default all methods utilize a context to time out the request to the API. The value of the timeout is stored on the Client and can be accessed/set via Client.Timeout. Setting it is _not_ thread safe and should only be done at client creation. If you need to change the timeout for different requests, use the methodCtx variant of the method.

By default the state generated by AuthCodeURL utilizes crypto/rand. If you would like to implement your own random method you can do so by assigning the function to the Rand field of the Client struct. The function must satisfy the Rand type. This is also _not_ thread safe, so only perform this action at client creation.

By default every returned response is parsed and the parsed data returned. If you need access to the raw response data you can enable it by setting the SaveRawResponse field of the client struct to true. This should be done at client creation time. With it set to true, the RawResponse field of the returned structs will include the raw response.
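A minimal sketch of the flow described above. The import path, the NewClient constructor, the NewUserFromAuthCode helper, and the exact signatures are assumptions for illustration; only AuthCodeURL and NewUserFromRefreshToken are named in this documentation, so check the package's exported API for the real names.

    package main

    import (
        "fmt"
        "log"

        withings "github.com/example/withings" // assumed import path
    )

    func main() {
        // Create a client with the credentials and redirect URL registered
        // with the Withings API. NewClient is an assumed constructor name.
        client := withings.NewClient("CLIENT_ID", "CLIENT_SECRET", "https://example.com/callback")

        // Generate the authorization URL and the random state value that the
        // redirect should echo back for verification.
        authURL, state, err := client.AuthCodeURL()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("send the user to:", authURL)
        fmt.Println("expect the redirect to return state:", state)

        // After the user authorizes, exchange the code from the redirect's
        // query parameters for a user. NewUserFromAuthCode is an assumed name.
        user, err := client.NewUserFromAuthCode("CODE_FROM_REDIRECT")
        if err != nil {
            log.Fatal(err)
        }

        // Persist at least the refresh token from the user's token for reuse.
        _ = user
    }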
Some data request methods include a parseResponse field on the params struct. If this is included, additional parsing is performed to make the data more usable. This can be seen on GetBodyMeasures, for example. You can include the path fields sent to the API by setting IncludePath to true on the client. This is primarily used for debugging but could be helpful in some situations. By default the client will request all known scopes. If you would like to pare this down you can change the scope by using the SetScope method of the client. Consts in the form of ScopeXxx are provided to aid selection.
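A sketch of reusing a saved token and requesting parsed body measures. SetScope, NewUserFromRefreshToken, and GetBodyMeasures are the names referenced above; the scope constant, the params struct name, and its fields are assumptions for illustration.

    // Limit the requested scopes (optional). ScopeUserMetrics is an assumed const name.
    client.SetScope(withings.ScopeUserMetrics)

    // Recreate a user from a previously saved refresh token.
    user, err := client.NewUserFromRefreshToken("SAVED_REFRESH_TOKEN")
    if err != nil {
        log.Fatal(err)
    }

    // Request body measures with additional response parsing enabled. The
    // params struct and field names here are assumptions.
    measures, err := user.GetBodyMeasures(&withings.BodyMeasuresQueryParams{
        ParseResponse: true,
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(measures)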
Package applicationdiscoveryservice provides the client and types for making API requests to AWS Application Discovery Service. AWS Application Discovery Service helps you plan application migration projects by automatically identifying servers, virtual machines (VMs), software, and software dependencies running in your on-premises data centers. Application Discovery Service also collects application performance data, which can help you assess the outcome of your migration. The data collected by Application Discovery Service is securely retained in an Amazon-hosted and managed database in the cloud. You can export the data as a CSV or XML file into your preferred visualization tool or cloud-migration solution to plan your migration. For more information, see the Application Discovery Service FAQ (http://aws.amazon.com/application-discovery/faqs/). Application Discovery Service offers two modes of operation. Agentless discovery mode is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about software and software dependencies. It also doesn't work in non-VMware environments. We recommend that you use agent-based discovery for non-VMware environments and if you want to collect information about software and software dependencies. You can also run agent-based and agentless discovery simultaneously. Use agentless discovery to quickly complete the initial infrastructure assessment and then install agents on select hosts to gather information about software and software dependencies. Agent-based discovery mode collects a richer set of data than agentless discovery by using Amazon software, the AWS Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of installed software applications, system and process performance, resource utilization, and network dependencies between workloads. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud. Application Discovery Service integrates with application discovery solutions from AWS Partner Network (APN) partners. Third-party application discovery tools can query Application Discovery Service and write to the Application Discovery Service database using a public API. You can then import the data into either a visualization tool or cloud-migration solution. Application Discovery Service doesn't gather sensitive information. All data is handled according to the AWS Privacy Policy (http://aws.amazon.com/privacy/). You can operate Application Discovery Service using offline mode to inspect collected data before it is shared with the service. Your AWS account must be granted access to Application Discovery Service, a process called whitelisting. This is true for AWS partners and customers alike. To request access, sign up for AWS Application Discovery Service here (http://aws.amazon.com/application-discovery/preview/). We send you information about how to get started. This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. 
Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs (http://aws.amazon.com/tools/#SDKs). This guide is intended for use with the AWS Application Discovery Service User Guide (http://docs.aws.amazon.com/application-discovery/latest/userguide/).

See https://docs.aws.amazon.com/goto/WebAPI/discovery-2015-11-01 for more information on this service.

See applicationdiscoveryservice package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/

To contact AWS Application Discovery Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently.

See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/

See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

See the AWS Application Discovery Service client ApplicationDiscoveryService for more information on creating the client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/#New
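A minimal sketch of creating the service client with the AWS SDK for Go and making a request; the region is just an example, and credentials are assumed to come from the environment or shared configuration.

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/applicationdiscoveryservice"
    )

    func main() {
        // Create a session; credentials and region are resolved from the
        // environment, shared config, or an IAM role as usual.
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"),
        }))

        // Create the Application Discovery Service client.
        svc := applicationdiscoveryservice.New(sess)

        // Example request: list discovery agents registered with the service.
        out, err := svc.DescribeAgents(&applicationdiscoveryservice.DescribeAgentsInput{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(out)
    }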
Package gamelift provides the client and types for making API requests to Amazon GameLift. Amazon GameLift is a managed service for developers who need a scalable, dedicated server solution for their multiplayer games. Amazon GameLift provides tools for the following tasks: (1) acquire computing resources and deploy game servers, (2) scale game server capacity to meet player demand, (3) host game sessions and manage player access, and (4) track in-depth metrics on player usage and server performance.

The Amazon GameLift service API includes two important function sets:

Manage game sessions and player access -- Retrieve information on available game sessions; create new game sessions; send player requests to join a game session.

Configure and manage game server resources -- Manage builds, fleets, queues, and aliases; set autoscaling policies; retrieve logs and metrics.

This reference guide describes the low-level service API for Amazon GameLift. You can use the API functionality with these tools:

The Amazon Web Services software development kit (AWS SDK (http://aws.amazon.com/tools/#sdk)) is available in multiple languages (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-supported.html#gamelift-supported-clients) including C++ and C#. Use the SDK to access the API programmatically from an application, such as a game client.

The AWS command-line interface (http://aws.amazon.com/cli/) (CLI) tool is primarily useful for handling administrative actions, such as setting up and managing Amazon GameLift settings and resources. You can use the AWS CLI to manage all of your AWS services.

The AWS Management Console (https://console.aws.amazon.com/gamelift/home) for Amazon GameLift provides a web interface to manage your Amazon GameLift settings and resources. The console includes a dashboard for tracking key resources, including builds and fleets, and displays usage and performance metrics for your games as customizable graphs.

Amazon GameLift Local is a tool for testing your game's integration with Amazon GameLift before deploying it on the service. This tool supports a subset of key API actions, which can be called from either the AWS CLI or programmatically. See Testing an Integration (http://docs.aws.amazon.com/gamelift/latest/developerguide/integration-testing-local.html).

MORE RESOURCES

Amazon GameLift Developer Guide (http://docs.aws.amazon.com/gamelift/latest/developerguide/) -- Learn more about Amazon GameLift features and how to use them.

Lumberyard and Amazon GameLift Tutorials (https://gamedev.amazon.com/forums/tutorials) -- Get started fast with walkthroughs and sample projects.

GameDev Blog (http://aws.amazon.com/blogs/gamedev/) -- Stay up to date with new features and techniques.

GameDev Forums (https://gamedev.amazon.com/forums/spaces/123/gamelift-discussion.html) -- Connect with the GameDev community.

Amazon GameLift Document History (http://docs.aws.amazon.com/gamelift/latest/developerguide/doc-history.html) -- See changes to the Amazon GameLift service, SDKs, and documentation, as well as links to release notes.

This list offers a functional overview of the Amazon GameLift service API. Use these actions to start new game sessions, find existing game sessions, track game session status and other information, and enable player access to game sessions.
SearchGameSessions -- Retrieve all available game sessions or search for

Start new games with Queues to find the best available hosting resources:

StartGameSessionPlacement -- Request a new game session placement and add

DescribeGameSessionPlacement -- Get details on a placement request, including

StopGameSessionPlacement -- Cancel a placement request.

CreateGameSession -- Start a new game session on a specific fleet. Available

StartMatchmaking -- Request matchmaking for one player or a group who want

DescribeMatchmaking -- Get details on a matchmaking request, including status.

AcceptMatch -- Register that a player accepts a proposed match, for matches

StopMatchmaking -- Cancel a matchmaking request.

DescribeGameSessions -- Retrieve metadata for one or more game sessions,

DescribeGameSessionDetails -- Retrieve metadata and the game session protection

UpdateGameSession -- Change game session settings, such as maximum player

GetGameSessionLogUrl -- Get the location of saved logs for a game session.

CreatePlayerSession -- Send a request for a player to join a game session.

CreatePlayerSessions -- Send a request for multiple players to join a game

DescribePlayerSessions -- Get details on player activity, including status,

When setting up Amazon GameLift resources for your game, you first create a game build (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html) and upload it to Amazon GameLift. You can then use these actions to configure and manage a fleet of resources to run your game servers, scale capacity to meet player demand, access performance and utilization metrics, and more.

CreateBuild -- Create a new build using files stored in an Amazon S3 bucket.

ListBuilds -- Get a list of all builds uploaded to an Amazon GameLift region.

DescribeBuild -- Retrieve information associated with a build.

UpdateBuild -- Change build metadata, including build name and version.

DeleteBuild -- Remove a build from Amazon GameLift.

CreateFleet -- Configure and activate a new fleet to run a build's game servers.

ListFleets -- Get a list of all fleet IDs in an Amazon GameLift region (all

DeleteFleet -- Terminate a fleet that is no longer running game servers or

View / update fleet configurations:

DescribeFleetAttributes / UpdateFleetAttributes -- View or change a fleet's

DescribeFleetPortSettings / UpdateFleetPortSettings -- View or change the

DescribeRuntimeConfiguration / UpdateRuntimeConfiguration -- View or change

DescribeEC2InstanceLimits -- Retrieve maximum number of instances allowed

DescribeFleetCapacity / UpdateFleetCapacity -- Retrieve the capacity settings

Autoscale -- Manage autoscaling rules and apply them to a fleet.

PutScalingPolicy -- Create a new autoscaling policy, or update an existing

DescribeScalingPolicies -- Retrieve an existing autoscaling policy.

DeleteScalingPolicy -- Delete an autoscaling policy and stop it from affecting

CreateVpcPeeringAuthorization -- Authorize a peering connection to one of

DescribeVpcPeeringAuthorizations -- Retrieve valid peering connection authorizations.

DeleteVpcPeeringAuthorization -- Delete a peering connection authorization.

CreateVpcPeeringConnection -- Establish a peering connection between the

DescribeVpcPeeringConnections -- Retrieve information on active or pending

DeleteVpcPeeringConnection -- Delete a VPC peering connection with an Amazon

DescribeFleetUtilization -- Get current data on the number of server processes,

DescribeFleetEvents -- Get a fleet's logged events for a specified time span.
DescribeGameSessions -- Retrieve metadata associated with one or more game

DescribeInstances -- Get information on each instance in a fleet, including

GetInstanceAccess -- Request access credentials needed to remotely connect

CreateAlias -- Define a new alias and optionally assign it to a fleet.

ListAliases -- Get all fleet aliases defined in an Amazon GameLift region.

DescribeAlias -- Retrieve information on an existing alias.

UpdateAlias -- Change settings for an alias, such as redirecting it from one

DeleteAlias -- Remove an alias from the region.

ResolveAlias -- Get the fleet ID that a specified alias points to.

CreateGameSessionQueue -- Create a queue for processing requests for new

DescribeGameSessionQueues -- Retrieve game session queues defined in an Amazon

UpdateGameSessionQueue -- Change the configuration of a game session queue.

DeleteGameSessionQueue -- Remove a game session queue from the region.

CreateMatchmakingConfiguration -- Create a matchmaking configuration with

DescribeMatchmakingConfigurations -- Retrieve matchmaking configurations

UpdateMatchmakingConfiguration -- Change settings for a matchmaking configuration.

DeleteMatchmakingConfiguration -- Remove a matchmaking configuration from

CreateMatchmakingRuleSet -- Create a set of rules to use when searching for

DescribeMatchmakingRuleSets -- Retrieve matchmaking rule sets defined in

ValidateMatchmakingRuleSet -- Verify syntax for a set of matchmaking rules.

See https://docs.aws.amazon.com/goto/WebAPI/gamelift-2015-10-01 for more information on this service.

See gamelift package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/

To contact Amazon GameLift with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently.

See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/

See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

See the Amazon GameLift client GameLift for more information on creating the client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/#New
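A minimal sketch of creating the GameLift client with the AWS SDK for Go; the region shown is just an example.

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/gamelift"
    )

    func main() {
        // Create a session and the GameLift service client.
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"),
        }))
        svc := gamelift.New(sess)

        // Example request: list fleet IDs in the current region.
        out, err := svc.ListFleets(&gamelift.ListFleetsInput{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(out.FleetIds)
    }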
Package dynamodb provides the client and types for making API requests to Amazon DynamoDB. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics.

DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an AWS region, providing built-in high availability and data durability.

See https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10 for more information on this service.

See dynamodb package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/

To contact Amazon DynamoDB with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently.

See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/

See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

See the Amazon DynamoDB client DynamoDB for more information on creating the client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/#New

Utility helpers to marshal and unmarshal AttributeValue to and from Go types can be found in the dynamodbattribute sub package. This package provides specialized functions for the common ways of working with AttributeValues, such as map[string]*AttributeValue, []*AttributeValue, and directly with *AttributeValue. This is helpful for marshaling Go types for API operations such as PutItem, and unmarshaling Query and Scan APIs' responses.

See the dynamodbattribute package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/dynamodbattribute/

The expression package provides utility types and functions to build DynamoDB expressions for type-safe construction of API ExpressionAttributeNames and ExpressionAttributeValues. The package represents the various DynamoDB Expressions as structs named accordingly. For example, ConditionBuilder represents a DynamoDB Condition Expression, an UpdateBuilder represents a DynamoDB Update Expression, and so on.

See the expression package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/expression/
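A minimal sketch of creating the DynamoDB client and using dynamodbattribute to marshal a Go struct for PutItem; the table name, item type, and region are made up for illustration.

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
    )

    // Item is a hypothetical record type for an example table.
    type Item struct {
        ID    string `dynamodbav:"id"`
        Name  string `dynamodbav:"name"`
        Score int    `dynamodbav:"score"`
    }

    func main() {
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"),
        }))
        svc := dynamodb.New(sess)

        // Marshal the Go value into the map[string]*AttributeValue form
        // expected by PutItem.
        av, err := dynamodbattribute.MarshalMap(Item{ID: "42", Name: "example", Score: 7})
        if err != nil {
            log.Fatal(err)
        }

        _, err = svc.PutItem(&dynamodb.PutItemInput{
            TableName: aws.String("ExampleTable"), // hypothetical table name
            Item:      av,
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("item stored")
    }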
lf is a terminal file manager.

Source code can be found in the repository at https://github.com/gokcehan/lf

This documentation can either be read from terminal using 'lf -doc' or online at https://pkg.go.dev/github.com/gokcehan/lf

You can also use 'doc' command (default '<f-1>') inside lf to view the documentation in a pager. A man page with the same content is also available in the repository at https://github.com/gokcehan/lf/blob/master/lf.1

You can run 'lf -help' to see descriptions of command line options.

The following commands are provided by lf:
The following command line commands are provided by lf:
The following options can be used to customize the behavior of lf:
The following environment variables are exported for shell commands:
The following special shell commands are used to customize the behavior of lf when defined:
The following commands/keybindings are provided by default:
The following additional keybindings are provided by default:
If the 'mouse' option is enabled, mouse buttons have the following default effects:

Configuration files should be located at:
Colors file should be located at:
Icons file should be located at:
Selection file should be located at:
Marks file should be located at:
Tags file should be located at:
History file should be located at:

You can configure these locations with the following variables given with their order of precedences and their default values:

A sample configuration file can be found at https://github.com/gokcehan/lf/blob/master/etc/lfrc.example

This section shows information about builtin commands. Modal commands do not take any arguments, but instead change the operation mode to read their input conveniently, and so they are meant to be assigned to keybindings.

Quit lf and return to the shell.

Move/scroll the current file selection upwards/downwards by one/half a page/full page.

Change the current working directory to the parent directory.

If the current file is a directory, then change the current directory to it, otherwise, execute the 'open' command. A default 'open' command is provided to call the default system opener asynchronously with the current file as the argument. A custom 'open' command can be defined to override this default.

Change the current working directory to the next/previous jumplist item.

Move the current file selection to the top/bottom of the directory. A count can be specified to move to a specific line, for example use `3G` to move to the third line.

Move the current file selection to the high/middle/low of the screen.

Toggle the selection of the current file or files given as arguments.

Reverse the selection of all files in the current directory (i.e. 'toggle' all files). Selections in other directories are not affected by this command. You can define a new command to select all files in the directory by combining 'invert' with 'unselect' (i.e. 'cmd select-all :unselect; invert'), though this will also remove selections in other directories.

Reverse the selection (i.e. 'toggle') of all files at or after the current file in the current directory. To select a contiguous block of files, use this command on the first file you want to select. Then, move down to the first file you do *not* want to select (the one after the end of the desired selection) and use this command again. This achieves an effect similar to the visual mode in vim. This command is experimental and may be removed once a better replacement for the visual mode is implemented in 'lf'.
If you'd like to experiment with using this command, you should bind it to a key (e.g. 'V') for a better experience. Remove the selection of all files in all directories. Select/unselect files that match the given glob. Calculate the total size for each of the selected directories. Option 'info' should include 'size' and option 'dircounts' should be disabled to show this size. If the total size of a directory is not calculated, it will be shown as '-'. Remove all keybindings associated with the `map` command. This command can be used in the config file to remove the default keybindings. For safety purposes, `:` is left mapped to the `read` command, and `cmap` keybindings are retained so that it is still possible to exit `lf` using `:quit`. If there are no selections, save the path of the current file to the copy buffer, otherwise, copy the paths of selected files. If there are no selections, save the path of the current file to the cut buffer, otherwise, copy the paths of selected files. Copy/Move files in copy/cut buffer to the current working directory. A custom 'paste' command can be defined to override this default. Clear file paths in copy/cut buffer. Synchronize copied/cut files with server. This command is automatically called when required. Draw the screen. This command is automatically called when required. Synchronize the terminal and redraw the screen. Load modified files and directories. This command is automatically called when required. Flush the cache and reload all files and directories. Print given arguments to the message line at the bottom. Print given arguments to the message line at the bottom and also to the log file. Print given arguments to the message line at the bottom as 'errorfmt' and also to the log file. Change the working directory to the given argument. Change the current file selection to the given argument. Remove the current file or selected file(s). A custom 'delete' command can be defined to override this default. Rename the current file using the builtin method. A custom 'rename' command can be defined to override this default. Read the configuration file given in the argument. Simulate key pushes given in the argument. Read a command to evaluate. Read a shell command to execute. Read a shell command to execute piping its standard I/O to the bottom statline. Read a shell command to execute and wait for a key press in the end. Read a shell command to execute asynchronously without standard I/O. Read key(s) to find the appropriate file name match in the forward/backward direction and jump to the next/previous match. Read a pattern to search for a file name match in the forward/backward direction and jump to the next/previous match. Command 'filter' reads a pattern to filter out and only view files matching the pattern. Command 'setfilter' does the same but uses an argument to set the filter immediately. You can supply an argument to 'filter', in order to use that as the starting prompt. Save the current directory as a bookmark assigned to the given key. Change the current directory to the bookmark assigned to the given key. A special bookmark "'" holds the previous directory after a 'mark-load', 'cd', or 'select' command. Remove a bookmark assigned to the given key. Tag a file with '*' or a single width character given in the argument. You can define a new tag clearing command by combining 'tag' with 'tag-toggle' (i.e. 'cmd tag-clear :tag; tag-toggle'). 
Tag a file with '*' or a single width character given in the argument if the file is untagged, otherwise remove the tag.

The prompt character specifies which of the several command-line modes you are in. For example, the 'read' command takes you to the ':' mode. When the cursor is at the first character in ':' mode, pressing one of the keys '!', '$', '%', or '&' takes you to the corresponding mode. You can go back with 'cmd-delete-back' ('<backspace>' by default).

The command line commands should be mostly compatible with readline keybindings. A character refers to a unicode code point, a word consists of letters and digits, and a unix word consists of any non-blank characters.

Quit command line mode and return to normal mode.

Autocomplete the current word.

Autocomplete the current word with menu selection. You need to assign keys to these commands (e.g. 'cmap <tab> cmd-menu-complete; cmap <backtab> cmd-menu-complete-back'). You can then use the assigned keys to display the menu and cycle through completion options.

Accept the currently selected match in menu completion and close the menu.

Execute the current line.

Interrupt the current shell-pipe command and return to the normal mode.

Go to next/previous item in the history.

Move the cursor to the left/right.

Move the cursor to the beginning/end of line.

Delete the next character.

Delete the previous character. When at the beginning of a prompt, returns either to normal mode or to ':' mode.

Delete everything up to the beginning/end of line.

Delete the previous unix word.

Paste the buffer content containing the last deleted item.

Transpose the positions of last two characters/words.

Move the cursor by one word in forward/backward direction.

Delete the next word in forward direction.

Capitalize/uppercase/lowercase the current word and jump to the next word.

List all key mappings in normal mode or command-line editing mode.

List all custom commands defined using the `cmd` command.

List the contents of the jump list, in order of the most recently visited locations. Each location is marked with the count that can be used with the `jump-prev` and `jump-next` commands (e.g. use `3[` to move three spaces backwards in the jump list). A '>' is used to mark the current location in the jump list.

This section shows information about options to customize the behavior. Character ':' is used as the separator for list options '[]int' and '[]string'.

When this option is enabled, find command starts matching patterns from the beginning of file names, otherwise, it can match at an arbitrary position.

Automatically quit server when there are no clients left connected.

Format string of the box drawing characters enabled by the `drawbox` option.

Set the path of a cleaner file. The file should be executable. This file is called if previewing is enabled, the previewer is set, and the previously selected file had its preview cache disabled. The following arguments are passed to the file, (1) current file name, (2) width, (3) height, (4) horizontal position, (5) vertical position of preview pane and (6) next file name to be previewed respectively. Preview cleaning is disabled when the value of this option is left empty.

Format strings for highlighting the cursor. `cursoractivefmt` applies in the current directory pane, `cursorparentfmt` applies in panes that show parents of the current directory, and `cursorpreviewfmt` applies in panes that preview directories. The default is to make the active cursor and the parent directory cursor inverted.
The preview cursor is underlined. Some other possibilities to consider for the preview or parent cursors: an empty string for no cursor, "\033[7;2m" for dimmed inverted text (visibility varies by terminal), "\033[7;90m" for inverted text with grey (aka "brightblack") background. If the format string contains the characters `%s`, it is interpreted as a format string for `fmt.Sprintf`. Such a string should end with the terminal reset sequence. For example, "\033[4m%s\033[0m" has the same effect as "\033[4m". Cache directory contents. When this option is enabled, directory sizes show the number of items inside instead of the total size of the directory, which needs to be calculated for each directory using 'calcdirsize'. This information needs to be calculated by reading the directory and counting the items inside. Therefore, this option is disabled by default for performance reasons. This option only has an effect when 'info' has a 'size' field and the pane is wide enough to show the information. 999 items are counted per directory at most, and bigger directories are shown as '999+'. Show directories first above regular files. Show only directories. If enabled, directories will also be passed to the previewer script. This allows custom previews for directories. Draw boxes around panes with box drawing characters. Format string of file name when creating duplicate files. With the default format, copying a file `abc.txt` to the same directory will result in a duplicate file called `abc.txt.~1~`. Special expansions are provided, '%f' as the file name, '%b' for basename (file name without extension), '%e' as the extension (including the dot) and '%n' as the number of duplicates. Format string of error messages shown in the bottom message line. If the format string contains the characters `%s`, it is interpreted as a format string for `fmt.Sprintf`. Such a string should end with the terminal reset sequence. For example, "\033[4m%s\033[0m" has the same effect as "\033[4m". File separator used in environment variables 'fs' and 'fx'. Number of characters prompted for the find command. When this value is set to 0, find command prompts until there is only a single match left. When this option is enabled, search command patterns are considered as globs, otherwise they are literals. With globbing, '*' matches any sequence, '?' matches any character, and '[...]' or '[^...]' matches character sets or ranges. Otherwise, these characters are interpreted as they are. Show hidden files. On Unix systems, hidden files are determined by the value of 'hiddenfiles'. On Windows, only files with hidden attributes are considered hidden files. List of hidden file glob patterns. Patterns can be given as relative or absolute paths. Globbing supports the usual special characters, '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. In addition, if a pattern starts with '!', then its matches are excluded from hidden files. To add multiple patterns, use ':' as a separator. Example: '.*:lost+found:*.bak' Save command history. Show icons before each item in the list. Sets 'IFS' variable in shell commands. It works by adding the assignment to the beginning of the command string as "IFS='...'; ...". The reason is that 'IFS' variable is not inherited by the shell for security reasons. This method assumes a POSIX shell syntax and so it can fail for non-POSIX shells. This option has no effect when the value is left empty. This option does not have any effect on Windows. 
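For illustration, here is how a few of the options described in this section might be set in an lfrc file; the values shown are arbitrary examples, not the defaults.

    set hidden true
    set drawbox true
    set dircounts true
    set info size:time
    set ifs "\n"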
Ignore case in sorting and search patterns.

Ignore diacritics in sorting and search patterns.

Jump to the first match after each keystroke during searching.

Apply filter pattern after each keystroke during filtering.

List of information shown for directory items at the right side of pane. Currently supported information types are 'size', 'time', 'atime', and 'ctime'. Information is only shown when the pane width is more than twice the width of information.

Format string of the file time shown in the info column when it matches this year.

Format string of the file time shown in the info column when it doesn't match this year.

Send mouse events as input.

Show the position number for directory items at the left side of pane. When 'relativenumber' option is enabled, only the current line shows the absolute position and relative positions are shown for the rest.

Format string of the position number for each line.

Set the interval in seconds for periodic checks of directory updates. This works by periodically calling the 'load' command. Note that directories are already updated automatically in many cases. This option can be useful when there is an external process changing the displayed directory and you are not doing anything in lf. Periodic checks are disabled when the value of this option is set to zero.

List of attributes that are preserved when copying files. Currently supported attributes are 'mode' (i.e. access mode) and 'timestamps' (i.e. modification time and access time). Note: preserving other attributes like ownership or change/birth timestamps is desirable, but not portably supported in Go.

Show previews of files and directories at the rightmost pane. If the file has more lines than the preview pane, the rest of the lines are not read. Files containing the null character (U+0000) in the read portion are considered binary files and displayed as 'binary'.

Set the path of a previewer file to filter the content of regular files for previewing. The file should be executable. The following arguments are passed to the file, (1) current file name, (2) width, (3) height, (4) horizontal position, and (5) vertical position of preview pane respectively. SIGPIPE signal is sent when enough lines are read. If the previewer returns a non-zero exit code, then the preview cache for the given file is disabled. This means that if the file is selected in the future, the previewer is called once again. Preview filtering is disabled and files are displayed as they are when the value of this option is left empty.

Format string of the prompt shown in the top line. Special expansions are provided, '%u' as the user name, '%h' as the host name, '%w' as the working directory, '%d' as the working directory with a trailing path separator, '%f' as the file name, and '%F' as the current filter. '%S' may be used once and will provide a spacer so that the following parts are right aligned on the screen. Home folder is shown as '~' in the working directory expansion. Directory names are automatically shortened to a single character starting from the leftmost parent when the prompt does not fit to the screen.

List of ratios of pane widths. Number of items in the list determines the number of panes in the ui. When 'preview' option is enabled, the rightmost number is used for the width of preview pane.

Show the position number relative to the current line. When 'number' is enabled, current line shows the absolute position, otherwise nothing is shown.

Reverse the direction of sort.
List of information shown in status line ruler. Currently supported information types are 'acc', 'progress', 'selection', 'filter', 'ind', 'df' and names starting with 'lf_'. `acc` shows the pressed keys (e.g. for bindings with multiple key presses or counts given to bindings). `progress` shows the progress of file operations (e.g. copying a large directory). `selection` shows the number of files that are selected, or designated for being cut/copied. `filter` shows 'F' if a filter is currently being applied. `ind` shows the current position of the cursor as well as the number of files in the current directory. `df` shows the amount of free disk space remaining. Names starting with `lf_` show the value of environment variables exported by lf. This is useful for displaying the current settings (e.g. `lf_selmode` displays the current setting for the `selmode` option). User defined options starting with `lf_user_` are also supported, so it is possible to display information set from external sources. Selection mode for commands. When set to 'all' it will use the selected files from all directories. When set to 'dir' it will only use the selected files in the current directory. Minimum number of offset lines shown at all times in the top and the bottom of the screen when scrolling. The current line is kept in the middle when this option is set to a large value that is bigger than the half of number of lines. A smaller offset can be used when the current file is close to the beginning or end of the list to show the maximum number of items. Shell executable to use for shell commands. Shell commands are executed as 'shell shellopts shellflag command -- arguments'. Command line flag used to pass shell commands. List of shell options to pass to the shell executable. Override 'ignorecase' option when the pattern contains an uppercase character. This option has no effect when 'ignorecase' is disabled. Override 'ignoredia' option when the pattern contains a character with diacritic. This option has no effect when 'ignoredia' is disabled. Sort type for directories. Currently supported sort types are 'natural', 'name', 'size', 'time', 'ctime', 'atime', and 'ext'. Format string of the file info shown in the bottom left corner. Special expansions are provided, '%p' as the file permissions, '%c' as the link count, '%u' as the user, '%g' as the group, '%s' as the file size, '%t' as the last modified time, and '%l' as the link target. The `|` character splits the format string into sections. Any section containing a failed expansion (result is a blank string) is discarded and not shown. Number of space characters to show for horizontal tabulation (U+0009) character. Format string of the tags. If the format string contains the characters `%s`, it is interpreted as a format string for `fmt.Sprintf`. Such a string should end with the terminal reset sequence. For example, "\033[4m%s\033[0m" has the same effect as "\033[4m". Marks to be considered temporary (e.g. 'abc' refers to marks 'a', 'b', and 'c'). These marks are not synced to other clients and they are not saved in the bookmarks file. Note that the special bookmark "'" is always treated as temporary and it does not need to be specified. Format string of the file modification time shown in the bottom line. Truncate character shown at the end when the file name does not fit to the pane. When a filename is too long to be shown completely, the available space is partitioned in two pieces. 
truncatepct defines a fraction (in percent between 0 and 100) for the size of the first piece, which will show the beginning of the filename. The second piece will show the end of the filename and will use the rest of the available space. Both pieces are separated by the truncation character (truncatechar). A value of 100 will only show the beginning of the filename, while a value of 0 will only show the end of the filename, e.g.: - `set truncatepct 100` -> "very-long-filename-tr~" (default) - `set truncatepct 50` -> "very-long-f~-truncated" - `set truncatepct 0` -> "~ng-filename-truncated" String shown after commands of shell-wait type. Searching can wrap around the file list. Scrolling can wrap around the file list. Any option that is prefixed with 'user_' is a user defined option and can be set to any string. Inside a user defined command the value will be provided in the `lf_user_{option}` environment variable. These options are not used by lf and are not persisted. The following variables are exported for shell commands: These are referred with a '$' prefix on POSIX shells (e.g. '$f'), between '%' characters on Windows cmd (e.g. '%f%'), and with a '$env:' prefix on Windows powershell (e.g. '$env:f'). Current file selection as a full path. Selected file(s) separated with the value of 'filesep' option as full path(s). Selected file(s) (i.e. 'fs') if there are any selected files, otherwise current file selection (i.e. 'f'). Id of the running client. Present working directory. Initial working directory. The value of this variable is set to the current nesting level when you run lf from a shell spawned inside lf. You can add the value of this variable to your shell prompt to make it clear that your shell runs inside lf. For example, with POSIX shells, you can use '[ -n "$LF_LEVEL" ] && PS1="$PS1""(lf level: $LF_LEVEL) "' in your shell configuration file (e.g. '~/.bashrc'). If this variable is set in the environment, use the same value. Otherwise, this is set to 'start' in Windows, 'open' in MacOS, 'xdg-open' in others. If VISUAL is set in the environment, use its value. Otherwise, use the value of the environment variable EDITOR. If neither variable is set, this is set to 'vi' on Unix, 'notepad' in Windows. If this variable is set in the environment, use the same value. Otherwise, this is set to 'less' on Unix, 'more' in Windows. If this variable is set in the environment, use the same value. Otherwise, this is set to 'sh' on Unix, 'cmd' in Windows. Absolute path to the currently running lf binary, if it can be found. Otherwise, this is set to the string 'lf'. Value of the {option}. Value of the user_{option}. Width/Height of the terminal. Value of the count associated with the current command. This section shows information about special shell commands. This shell command can be defined to override the default 'open' command when the current file is not a directory. This shell command can be defined to override the default 'paste' command. This shell command can be defined to override the default 'rename' command. This shell command can be defined to override the default 'delete' command. This shell command can be defined to be executed before changing a directory. This shell command can be defined to be executed after changing a directory. This shell command can be defined to be executed after the selection changes. This shell command can be defined to be executed before quit. 
The following command prefixes are used by lf: The same evaluator is used for the command line and the configuration file for read and shell commands. The difference is that prefixes are not necessary in the command line. Instead, different modes are provided to read corresponding commands. These modes are mapped to the prefix keys above by default. Characters from '#' to newline are comments and ignored: There are four special commands ('set', 'map', 'cmap', and 'cmd') for configuration. Command 'set' is used to set an option which can be boolean, integer, or string: Command 'map' is used to bind a key to a command which can be builtin command, custom command, or shell command: Command 'cmap' is used to bind a key on the command line to a command line command or any other command: You can delete an existing binding by leaving the expression empty: Command 'cmd' is used to define a custom command: You can delete an existing command by leaving the expression empty: If there is no prefix then ':' is assumed: An explicit ':' can be provided to group statements until a newline which is especially useful for 'map' and 'cmd' commands: If you need multiline you can wrap statements in '{{' and '}}' after the proper prefix. Regular keys are assigned to a command with the usual syntax: Keys combined with the shift key simply use the uppercase letter: Special keys are written in between '<' and '>' characters and always use lowercase letters: Angle brackets can be assigned with their special names: Function keys are prefixed with 'f' character: Keys combined with the control key are prefixed with 'c' character: Keys combined with the alt key are assigned in two different ways depending on the behavior of your terminal. Older terminals (e.g. xterm) may set the 8th bit of a character when the alt key is pressed. On these terminals, you can use the corresponding byte for the mapping: Newer terminals (e.g. gnome-terminal) may prefix the key with an escape key when the alt key is pressed. lf uses the escape delaying mechanism to recognize alt keys in these terminals (delay is 100ms). On these terminals, keys combined with the alt key are prefixed with 'a' character: It is possible to combine special keys with modifiers: WARNING: Some key combinations will likely be intercepted by your OS, window manager, or terminal. Other key combinations cannot be recognized by lf due to the way terminals work (e.g. `Ctrl+h` combination sends a backspace key instead). The easiest way to find out the name of a key combination and whether it will work on your system is to press the key while lf is running and read the name from the "unknown mapping" error. Mouse buttons are prefixed with 'm' character: Mouse wheel events are also prefixed with 'm' character: The usual way to map a key sequence is to assign it to a named or unnamed command. While this provides a clean way to remap builtin keys as well as other commands, it can be limiting at times. For this reason 'push' command is provided by lf. This command is used to simulate key pushes given as its arguments. You can 'map' a key to a 'push' command with an argument to create various keybindings. This is mainly useful for two purposes. 
First, it can be used to map a command with a command count:

Second, it can be used to avoid typing the name when a command takes arguments:

One thing to be careful about is that since the 'push' command works with keys instead of commands, it is possible to accidentally create recursive bindings:

These types of bindings create a deadlock when executed.

Regular shell commands are the most basic command type and are useful for many purposes. For example, we can write a shell command to move selected file(s) to trash. A first attempt to write such a command may look like this:

We check '$fs' to see if there are any selected files. Otherwise we just delete the current file. Since this is such a common pattern, a separate '$fx' variable is provided. We can use this variable to get rid of the conditional:

The trash directory is checked each time the command is executed. We can move it outside of the command so it would only run once at startup:

Since these are one liners, we can drop '{{' and '}}':

Finally note that we set the 'IFS' variable manually in these commands. Instead we could use the 'ifs' option to set it for all shell commands (i.e. 'set ifs "\n"'). This can be especially useful for interactive use (e.g. '$rm $f' or '$rm $fs' would simply work). This option is not set by default as it can behave unexpectedly for new users. However, use of this option is highly recommended and it is assumed in the rest of the documentation.

Regular shell commands have some limitations in some cases. When an output or error message is given and the command exits afterwards, the ui is immediately resumed and there is no way to see the message without dropping to shell again. Also, even when there is no output or error, the ui still needs to be paused while the command is running. This can cause flickering on the screen for short commands and similar distractions for longer commands.

Instead of pausing the ui, piping shell commands connect stdin, stdout, and stderr of the command to the statline at the bottom of the ui. This can be useful for programs following the Unix philosophy to give no output in the success case, and brief error messages or prompts in other cases. For example, the following rename command prompts for overwrite in the statline if there is an existing file with the given name:

You can also output error messages in the command and they will show up in the statline. For example, an alternative rename command may look like this:

Note that input is line buffered and output and error are byte buffered.

Waiting shell commands are similar to regular shell commands except that they wait for a key press when the command is finished. These can be useful to see the output of a program before the ui is resumed. Waiting shell commands are more appropriate than piping shell commands when the command is verbose and the output is best displayed as multiline.

Asynchronous shell commands are used to start a command in the background and then resume operation without waiting for the command to finish. Stdin, stdout, and stderr of the command are neither connected to the terminal nor to the ui.

One of the more advanced features in lf is remote commands. All clients connect to a server on startup. It is possible to send commands to all or any of the connected clients over the common server. This is used internally to notify file selection changes to other clients.

To use this feature, you need to use a client which supports communicating with a Unix domain socket. The OpenBSD implementation of netcat (nc) is one such example.
You can use it to send a command to the socket file: Since such a client may not be available everywhere, lf comes bundled with a command line flag to be used as such. When using lf, you do not need to specify the address of the socket file. This is the recommended way of using remote commands since it is shorter and immune to socket file address changes: In this command 'send' is used to send the rest of the string as a command to all connected clients. You can optionally give it an id number to send a command to a single client: All clients have a unique id number but you may not be aware of the id number when you are writing a command. For this purpose, an '$id' variable is exported to the environment for shell commands. The value of this variable is set to the process id of the client. You can use it to send a remote command from a client to the server which in return sends a command back to itself. So now you can display a message in the current client by calling the following in a shell command: Since lf does not have control flow syntax, remote commands are used for such needs. For example, you can configure the number of columns in the ui with respect to the terminal width as follows: Besides 'send' command, there is a 'quit' command to quit the server when there are no connected clients left, and a 'quit!' command to force quit the server by closing client connections first: Lastly, there is a 'conn' command to connect the server as a client. This should not be needed for users. lf uses its own builtin copy and move operations by default. These are implemented as asynchronous operations and progress is shown in the bottom ruler. These commands do not overwrite existing files or directories with the same name. Instead, a suffix that is compatible with '--backup=numbered' option in GNU cp is added to the new files or directories. Only file modes and (some) timestamps can be preserved (see `preserve` option), all other attributes are ignored including ownership, context, and xattr. Special files such as character and block devices, named pipes, and sockets are skipped and links are not followed. Moving is performed using the rename operation of the underlying OS. For cross-device moving, lf falls back to copying and then deletes the original files if there are no errors. Operation errors are shown in the message line as well as the log file and they do not preemptively finish the corresponding file operation. File operations can be performed on the current selected file or alternatively on multiple files by selecting them first. When you 'copy' a file, lf doesn't actually copy the file on the disk, but only records its name to a file. The actual file copying takes place when you 'paste'. Similarly 'paste' after a 'cut' operation moves the file. You can customize copy and move operations by defining a 'paste' command. This is a special command that is called when it is defined instead of the builtin implementation. You can use the following example as a starting point: Some useful things to be considered are to use the backup ('--backup') and/or preserve attributes ('-a') options with 'cp' and 'mv' commands if they support it (i.e. GNU implementation), change the command type to asynchronous, or use 'rsync' command with progress bar option for copying and feed the progress to the client periodically with remote 'echo' calls. By default, lf does not assign 'delete' command to a key to protect new users. You can customize file deletion by defining a 'delete' command. 
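A sketch of such a 'delete' override, using the piping ('%') command type and assuming a ~/.trash directory already exists:

    # overrides the builtin delete: move files to a trash folder instead
    # ('set -f' disables globbing; $fx holds the selected or current files)
    cmd delete %set -f; mv $fx ~/.trash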
You can also assign a key to this command if you like. An example command to move selected files to a trash folder and a command to remove files completely after a prompt are provided in the example configuration file.

There are two mechanisms implemented in lf to search a file in the current directory. Searching is the traditional method to move the selection to a file matching a given pattern. Finding is an alternative way to search for a pattern possibly using fewer keystrokes.

The searching mechanism is implemented with commands 'search' (default '/'), 'search-back' (default '?'), 'search-next' (default 'n'), and 'search-prev' (default 'N'). You can enable 'globsearch' option to match with a glob pattern. Globbing supports '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. You can enable 'incsearch' option to jump to the current match at each keystroke while typing. In this mode, you can either use 'cmd-enter' to accept the search or use 'cmd-escape' to cancel the search. You can also map some other commands with 'cmap' to accept the search and execute the command immediately afterwards. For example, you can use the right arrow key to finish the search and open the selected file with the following mapping:

The finding mechanism is implemented with commands 'find' (default 'f'), 'find-back' (default 'F'), 'find-next' (default ';'), and 'find-prev' (default ','). You can disable 'anchorfind' option to match a pattern at an arbitrary position in the filename instead of the beginning. You can set the number of keys to match using 'findlen' option. If you set this value to zero, then the keys are read until there is only a single match. Default values of these two options are set to jump to the first file with the given initial.

Some options affect both searching and finding. You can disable 'wrapscan' option to prevent searches from wrapping around at the end of the file list. You can disable 'ignorecase' option to match cases in the pattern and the filename. This option is already automatically overridden if the pattern contains upper case characters. You can disable 'smartcase' option to disable this behavior. Two similar options 'ignoredia' and 'smartdia' are provided to control matching diacritics in latin letters.

You can define an 'open' command (default 'l' and '<right>') to configure file opening. This command is only called when the current file is not a directory, otherwise the directory is entered instead. You can define it just as you would define any other command:

It is possible to use different command types:

You may want to use either file extensions or mime types from the 'file' command:

You may want to use 'setsid' before your opener command to have persistent processes that continue to run after lf quits.

Regular shell commands (i.e. '$') drop to terminal which results in a flicker for commands that finish immediately (e.g. 'xdg-open' in the above example). If you want to use asynchronous shell commands (i.e. '&') but also want to use the terminal when necessary (e.g. 'vi' in the above example), you can use a remote command:

Note, asynchronous shell commands run in their own process group by default so they do not require the manual use of 'setsid'.

The following command is provided by default:

You may also use any other existing file openers as you like. Possible options are 'libfile-mimeinfo-perl' (executable name is 'mimeopen'), 'rifle' (ranger's default file opener), or 'mimeo' to name a few.
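A sketch of an 'open' command along the lines described above, dispatching on the mime type reported by 'file'; the specific applications are just examples.

    cmd open ${{
        case $(file --mime-type -Lb "$f") in
            text/*) vi $fx;;
            *) for f in $fx; do setsid -f xdg-open "$f" > /dev/null 2> /dev/null; done;;
        esac
    }}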
lf previews files on the preview pane by printing the file until the end or the preview pane is filled. This output can be enhanced by providing a custom preview script for filtering. This can be used to highlight source codes, list contents of archive files or view pdf or image files to name a few. For coloring lf recognizes ansi escape codes. In order to use this feature you need to set the value of 'previewer' option to the path of an executable file. Five arguments are passed to the file, (1) current file name, (2) width, (3) height, (4) horizontal position, and (5) vertical position of preview pane respectively. Output of the execution is printed in the preview pane. You may also want to use the same script in your pager mapping as well: For 'less' pager, you may instead utilize 'LESSOPEN' mechanism so that useful information about the file such as the full path of the file can still be displayed in the statusline below: Since this script is called for each file selection change it needs to be as efficient as possible and this responsibility is left to the user. You may use file extensions to determine the type of file more efficiently compared to obtaining mime types from 'file' command. Extensions can then be used to match cleanly within a conditional: Another important consideration for efficiency is the use of programs with short startup times for preview. For this reason, 'highlight' is recommended over 'pygmentize' for syntax highlighting. Besides, it is also important that the application is processing the file on the fly rather than first reading it to the memory and then do the processing afterwards. This is especially relevant for big files. lf automatically closes the previewer script output pipe with a SIGPIPE when enough lines are read. When everything else fails, you can make use of the height argument to only feed the first portion of the file to a program for preview. Note that some programs may not respond well to SIGPIPE to exit with a non-zero return code and avoid caching. You may add a trailing '|| true' command to avoid such errors: You may also use an existing preview filter as you like. Your system may already come with a preview filter named 'lesspipe'. These filters may have a mechanism to add user customizations as well. See the related documentations for more information. lf changes the working directory of the process to the current directory so that shell commands always work in the displayed directory. After quitting, it returns to the original directory where it is first launched like all shell programs. If you want to stay in the current directory after quitting, you can use one of the example lfcd wrapper shell scripts provided in the repository at https://github.com/gokcehan/lf/tree/master/etc There is a special command 'on-cd' that runs a shell command when it is defined and the directory is changed. You can define it just as you would define any other command: If you want to print escape sequences, you may redirect 'printf' output to '/dev/tty'. The following xterm specific escape sequence sets the terminal title to the working directory: This command runs whenever you change directory but not on startup. You can add an extra call to make it run on startup as well: Note that all shell commands are possible but '%' and '&' are usually more appropriate as '$' and '!' causes flickers and pauses respectively. There is also a 'pre-cd' command, that works like 'on-cd', but is run before the directory is actually changed. 
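A small sketch of the 'on-cd' command described above, using the xterm title escape sequence redirected to '/dev/tty', with an extra call so it also runs at startup:

    cmd on-cd &{{
        # '&' runs this asynchronously; show the working directory in the terminal title
        printf "\033]0; $PWD\007" > /dev/tty
    }}

    # on-cd is not run on startup, so call it once explicitly at the end of the config
    on-cd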
lf tries to automatically adapt its colors to the environment. It starts with a default colorscheme and updates colors using values of existing environment variables possibly by overwriting its previous values. Colors are set in the following order: Please refer to the corresponding man pages for more information about 'LSCOLORS' and 'LS_COLORS'. 'LF_COLORS' is provided with the same syntax as 'LS_COLORS' in case you want to configure colors only for lf but not ls. This can be useful since there are some differences between ls and lf, though one should expect the same behavior for common cases. Colors file is provided for easier configuration without environment variables. This file should consist of whitespace separated pairs with '#' character to start comments until the end of line. You can configure lf colors in two different ways. First, you can only configure 8 basic colors used by your terminal and lf should pick up those colors automatically. Depending on your terminal, you should be able to select your colors from a 24-bit palette. This is the recommended approach as colors used by other programs will also match each other. Second, you can set the values of environment variables or colors file mentioned above for fine grained customization. Note that 'LS_COLORS/LF_COLORS' are more powerful than 'LSCOLORS' and they can be used even when GNU programs are not installed on the system. You can combine this second method with the first method for best results. Lastly, you may also want to configure the colors of the prompt line to match the rest of the colors. Colors of the prompt line can be configured using the 'promptfmt' option which can include hardcoded colors as ansi escapes. See the default value of this option to have an idea about how to color this line. It is worth noting that lf uses as many colors advertised by your terminal's entry in terminfo or infocmp databases on your system. If an entry is not present, it falls back to an internal database. If your terminal supports 24-bit colors but either does not have a database entry or does not advertise all capabilities, you can enable support by setting the '$COLORTERM' variable to 'truecolor' or ensuring '$TERM' is set to a value that ends with '-truecolor'. Default lf colors are mostly taken from GNU dircolors defaults. These defaults use 8 basic colors and bold attribute. Default dircolors entries with background colors are simplified to avoid confusion with current file selection in lf. Similarly, there are only file type matchings and extension matchings are left out for simplicity. Default values are as follows given with their matching order in lf: Note that lf first tries matching file names and then falls back to file types. The full order of matchings from most specific to least are as follows: For example, given a regular text file '/path/to/README.txt', the following entries are checked in the configuration and the first one to match is used: Given a regular directory '/path/to/example.d', the following entries are checked in the configuration and the first one to match is used: Note that glob-like patterns do not actually perform glob matching due to performance reasons. For example, you can set a variable as follows: Having all entries on a single line can make it hard to read. You may instead divide it to multiple lines in between double quotes by escaping newlines with backslashes as follows: Having such a long variable definition in a shell configuration file might be undesirable. 
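To make the description above concrete, a multi-line 'LF_COLORS' definition with backslash-escaped newlines might look like the following sketch in a shell configuration file (the entries are illustrative examples, not lf's defaults):

    export LF_COLORS="\
    di=01;34:\
    ln=01;36:\
    ex=01;32:\
    *.tar=00;31:\
    *.log=00;90:\
    "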
You may instead use the colors file for configuration. A sample colors file can be found at https://github.com/gokcehan/lf/blob/master/etc/colors.example You may also see the wiki page for ansi escape codes https://en.wikipedia.org/wiki/ANSI_escape_code

Icons are configured using 'LF_ICONS' environment variable or an icons file. The variable uses the same syntax as 'LS_COLORS/LF_COLORS'. Instead of colors, you should put a single character as the value of each entry. Icons file should consist of whitespace separated pairs with '#' character to start comments until the end of line. Do not forget to enable 'icons' option to see the icons. Default values are as follows given with their matching order in lf: A sample icons file can be found at https://github.com/gokcehan/lf/blob/master/etc/icons.example
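For a rough idea of the whitespace-separated format shared by the colors and icons files, here is a small sketch (the entries and glyphs are illustrative, not the shipped defaults):

    # colors file: pattern followed by an ansi color value
    di    01;34    # directories
    ln    01;36    # symbolic links
    *.go  33       # files with a .go extension

    # icons file: pattern followed by a single character
    di    📁
    fi    📄
    ln    🔗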
Package twilio provides server side authentication utilities for applications to integrate with Twilio service. Access Tokens are short-lived tokens that can be used to authenticate Twilio Client SDKs like Video and IP Messaging. See detail in https://www.twilio.com/docs/api/rest/access-tokens. Capability Tokens are a secure way of setting up your device to access various features of Twilio, allowing you to add Twilio capabilities to web and mobile applications without exposing your AuthToken in any client-side environment. See detail in https://www.twilio.com/docs/api/client/capability-tokens.
Package nokiahealth is a client module for working with the Nokia Health (previously Withings) API. The current version (v2) of this module has been updated to work with the newer Oauth2 implementation of the API. As with all Oauth2 APIs, you must obtain authorization to access the users data. To do so you must first register your application with the nokia health api to obtain a ClientID and ClientSecret. Once you have your client information you can now create a new client. You will need to also provide the redirectURL you provided during application registration. This URL is where the user will be redirected to and will include the code you need to generate the accessToken needed to access the users data. Under normal situations you should have an http server listening on that address and pull the code from the URL query parameters. You can now use the client to generate the authorization URL the user needs to navigate to. This URL will take the user to a page that will prompt them to allow your application to access their data. The state string returned by the method is a randomly generated BASE64 string using the crypto/rand module. It will be returned in the redirect as a query parameter and the two values verified they match. It's not required, but useful for security reasons. Using the code returned by the redirect in the query parameters, you can generate a new user. This user struct can be used to immediately perform actions against the users data. It also contains the token you should save somewhere for reuse. Obviously use whatever context you would like here. Make sure the save at least the refreshToken for accessing the user data at a later date. You may also save the accessToken, but it does expire and creating a new client from saved token data only requires the refreshToken. You can easily create a user from a saved token using the NewUserFromRefreshToken method. A working configured client is required for the user generated from this method to work. The user struct has various methods associated with each API endpoint to perform data retrieval. The methods take a specific param struct specifying the api options to use on the request. The API is a bit "special" so the params vary a bit between each method. The client does what it can to smooth those out but there is only so much that can be done. Every method has two forms, one that accepts a context and one that does now. This allows you to provide a context on each request if you would like to. By default all methods utilize a context to timeout the request to the API. The value of the timeout is stored on the Client and can be access as/set on Client.Timeout. Setting is _not_ thread safe and should only be set on client creation. If you need to change the timeout for different requests use the methodCtx variant of the method. By default the state generated by the AuthCodeURL utilized crypto/rand. If you would like to implement your own random method you can do so by assigning the function to Rand field of the Client struct. The function should support the Rand type. Also this is _not_ thread safe so only perform this action on client creation. By default every returned response will be parsed and the parsed data returned. If you need access to the raw request data you can enable it by setting the SaveRawResponse field of the client struct to true. This should be done at client creation time. With it set to true the RawResponse field of the returned structs will include the raw response. 
Some data request methods include a parseResponse field on the params struct. If this is included additional parsing is performed to make the data more usable. This can be seen on GetBodyMeasures for example. You can include the path fields sent to the API by setting IncludePath to true on the client. This is primarily used for debugging but could be helpful in some situations. By default the client will request all known scopes. If you would like to pair down this you can change the scope by using the SetScope method of the client. Consts in the form of ScopeXxx are provided to aid selection.
Parallel Sync - parallel recursive copying of directories. psync is a tool which copies a directory recursively to another directory. Unlike "cp -r", which walks through the files and subdirectories in sequential order, psync copies several files concurrently by using threads. psync was written to speed up copying of large trees to or from a network file system like GlusterFS, Ceph, WebDAV or NFS. Operations on such file systems are usually latency bound, especially with small files. Parallel execution can help to utilize the bandwidth better and keeps the latencies from adding up, as they do in sequential operations. Currently, psync only copies directory trees, similar to "cp -r". A "sync" mode, similar to "rsync -rl", is planned. See [GOALS.md](GOALS.md) for how psync may eventually look.

psync is invoked as follows: Copy all files and subdirectories from /data/src into /data/dest. /data/src and /data/dest must exist and must be directories. A recursive copy of a directory can be a throughput bound or latency bound operation, depending on the size of files and characteristics of the source and/or destination file system. When copying between standard file systems on two local hard disks, the operation is typically throughput bound, and copying in parallel has no performance advantage over copying sequentially. In this case, you have a bunch of options, including "cp -r" or "rsync". However, when copying from or to network file systems (NFS, CIFS), WAN storage (WebDAV, external cloud services), distributed file systems (GlusterFS, CephFS) or file systems that live on a DRBD device, the latency for each file access is often the limiting performance factor. With sequential copies, the operation can consume lots of time, even though the bandwidth is not saturated. In this case, copying the files in parallel can provide a significant performance boost. Parallel copying is typically not so useful when copying between local or very fast hard disks. psync can be used there, and with a moderate concurrency level like 2..5, it can be slightly faster than a sequential copy. Parallel copying should never be used when duplicating directories on the same physical hard disk. Even sequential copies suffer from the frequent hard disk head movements which are needed to read from and write to the same disk. Parallel copying increases the amount of head movements even further.

psync uses goroutines for copying files in parallel. By default, 16 copy workers are spawned as goroutines, but the number can be adjusted with the -threads switch. Each worker waits for a directory to be submitted. It then handles all the directory entries sequentially. Files are copied one by one to the destination directory. When subdirectories are discovered, they are created on the destination side. Traversal of the subdirectory is then submitted to other workers and thus done in parallel to the current workload.

Here are some performance values comparing psync to cp and rsync when copying a large directory structure with many small files from a local file system to an NFS share. The NFS server has an AMD E-350 CPU, 8 GB of RAM, a 2TB hard drive (WD Green series) running Debian GNU/Linux 10 (Linux kernel 4.19). The NFS export is a logical volume on the HDD with ext4 file system. The NFS export options are: rw,no_root_squash,async,no_subtree_check. The client is a workstation with AMD Ryzen 7 1700 CPU, 64 GB of RAM, running Ubuntu 18.04 LTS with HWE stack (Linux kernel 5.3).
The data to copy is located on a 1TB SATA SSD with XFS, and buffered in memory. The NFS mount options are: fstype=nfs,vers=3,soft,intr,async. The hosts are connected over ethernet with 1 Gbit/s, ping latency is 160µs. The data is an extracted Linux kernel 4.15.2 source tarball, containing 62273 files and 32 symbolic links in 4377 directories, summing up to 892 MB (as seen by "du -s"). It is copied from the workstation to the server over NFS. The options for the three commands are selected comparably. They copy the files and links recursively and preserve permissions, but no ownership or time stamps. psync currently can only handle directories, regular files, and symbolic links. Other filesystem entries like devices, sockets or named pipes are silently ignored. psync preserves the Unix permissions (rwx) of the copied files and directories, but currently has no support for preserving other permission bits (suid, sticky). When the corresponding options are used, psync tries to preserve the ownership (user/group) and/or the access and modification time stamps. Preserving ownership only works when psync is running under the root user account. Preserving the time stamps only works for regular files and directories, not for symbolic links. psync currently implements a simple recursive copy, like "cp -r", not a versatile sync algorithm like rsync. There is no check whether a file already exists in the destination, nor of its content and timestamps. Existing files on the destination side are not deleted when they don't exist on the source side. psync is being developed under Linux (Debian, Ubuntu, CentOS). It should work on other distributions, but this has not been tested. It currently does not compile for Windows, Darwin (MacOS), NetBSD and FreeBSD (but this should be easy to fix). psync is released under the terms of the GNU General Public License, version 3.
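For reference, the invocation described earlier, with the default of 16 copy workers or an adjusted count via the -threads switch, looks roughly like this sketch:

    # copy /data/src into /data/dest with the default 16 copy workers
    psync /data/src /data/dest

    # use 8 copy workers instead
    psync -threads 8 /data/src /data/dest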
Package extenderr is an error utility aimed at application servers. Its purpose is to supply the outermost caller (http handler, middleware, etc.) with useful info about the error, and to communicate accurate and helpful status to clients and human end users. It gives errors additional annotations and retrieval of: All of the interfaces are private, but considered stable, such that if your use case deviates from this package, one should be able to implement the interfaces in a similar way that this package implements "Error", "Format", "Cause", and "Unwrap". This package is safe to use on any error (and nil); it will return "zero" values for any unused fields, or any unimplemented interfaces, when retrieving annotations.
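As a sketch of how a compatible error type outside this package might satisfy the same conventions (the type and field names below are hypothetical and are not part of extenderr's API; only the standard Error/Unwrap behavior and a pkg/errors-style Cause method are shown):

    package main

    import (
        "errors"
        "fmt"
    )

    // statusError is a hypothetical wrapper that annotates an error with an HTTP status.
    type statusError struct {
        status int
        err    error
    }

    func (e *statusError) Error() string { return fmt.Sprintf("status %d: %v", e.status, e.err) }
    func (e *statusError) Unwrap() error { return e.err } // used by errors.Is / errors.As
    func (e *statusError) Cause() error  { return e.err } // pkg/errors-style cause

    func main() {
        base := errors.New("record not found")
        err := &statusError{status: 404, err: base}
        fmt.Println(errors.Is(err, base)) // true: the wrapped error is still reachable
    }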
fm is a terminal file manager. The source code can be found in the repository at https://github.com/pchchv/fm The documentation can be read from the terminal with the command 'fm -doc' or online at https://pkg.go.dev/github.com/pchchv/fm You can also use the 'doc' command (default '<f-1>') inside fm to view the documentation in the pager. You can run the command 'fm -help' to see a description of the command line options. The following commands are provided by fm: The following command line commands are provided by fm: The following options can be used to customize the behavior of fm: The following environment variables are exported for shell commands: The following special shell commands are used to customize the behavior of fm when defined: The following commands/keybindings are provided by default: The following additional keybindings are provided by default: If the 'mouse' option is enabled, mouse buttons have the following default effects:

# Configuration

Configuration files should be located at: Colors file should be located at: Icons file should be located at: Selection file should be located at: Marks file should be located at: Tags file should be located at: History file should be located at: You can configure the default values of following variables to change these locations:

# Commands

This section shows information about builtin commands. Modal commands do not take any arguments, but instead change the operation mode to read their input conveniently, and so they are meant to be assigned to keybindings. Quit fm and return to the shell. Move/scroll the current file selection upwards/downwards by one/half a page/full page. Change the current working directory to the parent directory. If the current file is a directory, then change the current directory to it, otherwise, execute the 'open' command. A default 'open' command is provided to call the default system opener asynchronously with the current file as the argument. A custom 'open' command can be defined to override this default. Change the current working directory to the next/previous jumplist item. Move the current file selection to the top/bottom of the directory. Move the current file selection to the high/middle/low of the screen. Toggle the selection of the current file or files given as arguments. Reverse the selection of all files in the current directory (i.e. 'toggle' all files). Selections in other directories are not affected by this command. You can define a new command to select all files in the directory by combining 'invert' with 'unselect' (i.e. 'cmd select-all :unselect; invert'), though this will also remove selections in other directories. Remove the selection of all files in all directories. Select/unselect files that match the given glob. Calculate the total size for each of the selected directories. Option 'info' should include 'size' and option 'dircounts' should be disabled to show this size. If the total size of a directory is not calculated, it will be shown as '-'. If there are no selections, save the path of the current file to the copy buffer, otherwise, copy the paths of selected files. If there are no selections, save the path of the current file to the cut buffer, otherwise, copy the paths of selected files. Copy/Move files in copy/cut buffer to the current working directory. A custom 'paste' command can be defined to override this default. Clear file paths in copy/cut buffer. Synchronize copied/cut files with server. This command is automatically called when required.
Draw the screen. This command is automatically called when required. Synchronize the terminal and redraw the screen. Load modified files and directories. This command is automatically called when required. Flush the cache and reload all files and directories. Print given arguments to the message line at the bottom. Print given arguments to the message line at the bottom and also to the log file. Print given arguments to the message line at the bottom as 'errorfmt' and also to the log file. Change the working directory to the given argument. Change the current file selection to the given argument. Remove the current file or selected file(s). A custom 'delete' command can be defined to override this default. Rename the current file using the builtin method. A custom 'rename' command can be defined to override this default. Read the configuration file given in the argument. Simulate key pushes given in the argument. Read a command to evaluate. Read a shell command to execute. Read a shell command to execute piping its standard I/O to the bottom statline. Read a shell command to execute and wait for a key press in the end. Read a shell command to execute asynchronously without standard I/O. Read key(s) to find the appropriate file name match in the forward/backward direction and jump to the next/previous match. Read a pattern to search for a file name match in the forward/backward direction and jump to the next/previous match. Command 'filter' reads a pattern to filter out and only view files matching the pattern. Command 'setfilter' does the same but uses an argument to set the filter immediately. You can supply an argument to 'filter', in order to use that as the starting prompt. Save the current directory as a bookmark assigned to the given key. Change the current directory to the bookmark assigned to the given key. A special bookmark "'" holds the previous directory after a 'mark-load', 'cd', or 'select' command. Remove a bookmark assigned to the given key. Tag a file with '*' or a single width character given in the argument. You can define a new tag clearing command by combining 'tag' with 'tag-toggle' (i.e. 'cmd tag-clear :tag; tag-toggle'). Tag a file with '*' or a single width character given in the argument if the file is untagged, otherwise remove the tag.

# Command Line Commands

The prompt character specifies which of the several command-line modes you are in. For example, the 'read' command takes you to the ':' mode. When the cursor is at the first character in ':' mode, pressing one of the keys '!', '$', '%', or '&' takes you to the corresponding mode. You can go back with 'cmd-delete-back' ('<backspace>' by default). The command line commands should be mostly compatible with readline keybindings. A character refers to a unicode code point, a word consists of letters and digits, and a unix word consists of any non-blank characters. Quit command line mode and return to normal mode. Autocomplete the current word. Autocomplete the current word with menu selection. You need to assign keys to these commands (e.g. 'cmap <tab> cmd-menu-complete; cmap <backtab> cmd-menu-complete-back'). You can then use the assigned keys to display the menu and cycle through completion options. Accept the currently selected match in menu completion and close the menu. Execute the current line. Interrupt the current shell-pipe command and return to the normal mode. Go to next/previous item in the history. Move the cursor to the left/right. Move the cursor to the beginning/end of line.
Delete the next character. Delete the previous character. When at the beginning of a prompt, returns either to normal mode or to ':' mode. Delete everything up to the beginning/end of line. Delete the previous unix word. Paste the buffer content containing the last deleted item. Transpose the positions of last two characters/words. Move the cursor by one word in forward/backward direction. Delete the next word in forward direction. Capitalize/uppercase/lowercase the current word and jump to the next word. # Options This section shows information about options to customize the behavior. Character ':' is used as the separator for list options '[]int' and '[]string'. When this option is enabled, find command starts matching patterns from the beginning of file names, otherwise, it can match at an arbitrary position. Automatically quit server when there are no clients left connected. Set the path of a cleaner file. The file should be executable. This file is called if previewing is enabled, the previewer is set, and the previously selected file had its preview cache disabled. Five arguments are passed to the file, (1) current file name, (2) width, (3) height, (4) horizontal position, and (5) vertical position of preview pane respectively. Preview clearing is disabled when the value of this option is left empty. Format strings for highlighting the cursor. 'cursorpreviewfmt' applies in panes that preview directories, and 'cursorfmt' applies in all other panes. The default is to make the normal cursor inverted and the preview cursor underlined. Some other possibilities to consider for the preview cursor: an empty string for no cursor, "\033[7;2m" for dimmed inverted text (visibility varies by terminal), "\033[7;90m" for inverted text with grey (aka "brightblack") background. If the format string contains the characters '%s', it is interpreted as a format string for 'fmt.Sprintf'. Such a string should end with the terminal reset sequence. For example, "\033[4m%s\033[0m" has the same effect as "\033[4m". Cache directory contents. When this option is enabled, directory sizes show the number of items inside instead of the total size of the directory, which needs to be calculated for each directory using 'calcdirsize'. This information needs to be calculated by reading the directory and counting the items inside. Therefore, this option is disabled by default for performance reasons. This option only has an effect when 'info' has a 'size' field and the pane is wide enough to show the information. 999 items are counted per directory at most, and bigger directories are shown as '999+'. Show directories first above regular files. Show only directories. If enabled, directories will also be passed to the previewer script. This allows custom previews for directories. Draw boxes around panes with box drawing characters. Format string of error messages shown in the bottom message line. If the format string contains the characters '%s', it is interpreted as a format string for 'fmt.Sprintf'. Such a string should end with the terminal reset sequence. For example, "\033[4m%s\033[0m" has the same effect as "\033[4m". File separator used in environment variables 'fs' and 'fx'. Number of characters prompted for the find command. When this value is set to 0, find command prompts until there is only a single match left. When this option is enabled, search command patterns are considered as globs, otherwise they are literals. With globbing, '*' matches any sequence, '?' 
matches any character, and '[...]' or '[^...]' matches character sets or ranges. Otherwise, these characters are interpreted as they are. Show hidden files. On Unix systems, hidden files are determined by the value of 'hiddenfiles'. On Windows, only files with hidden attributes are considered hidden files. List of hidden file glob patterns. Patterns can be given as relative or absolute paths. Globbing supports the usual special characters, '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. In addition, if a pattern starts with '!', then its matches are excluded from hidden files. To add multiple patterns, use ':' as a separator. Example: '.*:lost+found:*.bak' Save command history. Show icons before each item in the list. Sets 'IFS' variable in shell commands. It works by adding the assignment to the beginning of the command string as "IFS='...'; ...". The reason is that 'IFS' variable is not inherited by the shell for security reasons. This method assumes a POSIX shell syntax and so it can fail for non-POSIX shells. This option has no effect when the value is left empty. This option does not have any effect on Windows. Ignore case in sorting and search patterns. Ignore diacritics in sorting and search patterns. Jump to the first match after each keystroke during searching. Apply filter pattern after each keystroke during filtering. List of information shown for directory items at the right side of pane. Currently supported information types are 'size', 'time', 'atime', and 'ctime'. Information is only shown when the pane width is more than twice the width of information. Format string of the file time shown in the info column when it matches this year. Format string of the file time shown in the info column when it doesn't match this year. Send mouse events as input. Show the position number for directory items at the left side of pane. When 'relativenumber' option is enabled, only the current line shows the absolute position and relative positions are shown for the rest. Set the interval in seconds for periodic checks of directory updates. This works by periodically calling the 'load' command. Note that directories are already updated automatically in many cases. This option can be useful when there is an external process changing the displayed directory and you are not doing anything in fm. Periodic checks are disabled when the value of this option is set to zero. Show previews of files and directories at the right most pane. If the file has more lines than the preview pane, rest of the lines are not read. Files containing the null character (U+0000) in the read portion are considered binary files and displayed as 'binary'. Set the path of a previewer file to filter the content of regular files for previewing. The file should be executable. Five arguments are passed to the file, (1) current file name, (2) width, (3) height, (4) horizontal position, and (5) vertical position of preview pane respectively. SIGPIPE signal is sent when enough lines are read. If the previewer returns a non-zero exit code, then the preview cache for the given file is disabled. This means that if the file is selected in the future, the previewer is called once again. Preview filtering is disabled and files are displayed as they are when the value of this option is left empty. Format string of the prompt shown in the top line. 
Special expansions are provided, '%u' as the user name, '%h' as the host name, '%w' as the working directory, '%d' as the working directory with a trailing path separator, '%f' as the file name, and '%F' as the current filter. '%S' may be used once and will provide a spacer so that the following parts are right aligned on the screen. Home folder is shown as '~' in the working directory expansion. Directory names are automatically shortened to a single character starting from the left most parent when the prompt does not fit to the screen. List of ratios of pane widths. Number of items in the list determines the number of panes in the ui. When 'preview' option is enabled, the right most number is used for the width of preview pane. Show the position number relative to the current line. When 'number' is enabled, current line shows the absolute position, otherwise nothing is shown. Reverse the direction of sort. Selection mode for commands. When set to 'all' it will use the selected files from all directories. When set to 'dir' it will only use the selected files in the current directory. Minimum number of offset lines shown at all times in the top and the bottom of the screen when scrolling. The current line is kept in the middle when this option is set to a large value that is bigger than the half of number of lines. A smaller offset can be used when the current file is close to the beginning or end of the list to show the maximum number of items. Shell executable to use for shell commands. Shell commands are executed as 'shell shellopts shellflag command -- arguments'. Command line flag used to pass shell commands. List of shell options to pass to the shell executable. Override 'ignorecase' option when the pattern contains an uppercase character. This option has no effect when 'ignorecase' is disabled. Override 'ignoredia' option when the pattern contains a character with diacritic. This option has no effect when 'ignoredia' is disabled. Sort type for directories. Currently supported sort types are 'natural', 'name', 'size', 'time', 'ctime', 'atime', and 'ext'. Number of space characters to show for horizontal tabulation (U+0009) character. Format string of the tags. If the format string contains the characters '%s', it is interpreted as a format string for 'fmt.Sprintf'. Such a string should end with the terminal reset sequence. For example, "\033[4m%s\033[0m" has the same effect as "\033[4m". Marks to be considered temporary (e.g. 'abc' refers to marks 'a', 'b', and 'c'). These marks are not synced to other clients and they are not saved in the bookmarks file. Note that the special bookmark "'" is always treated as temporary and it does not need to be specified. Format string of the file modification time shown in the bottom line. Truncate character shown at the end when the file name does not fit to the pane. String shown after commands of shell-wait type. Searching can wrap around the file list. Scrolling can wrap around the file list. Any option that is prefixed with 'user_' is a user defined option and can be set to any string. Inside a user defined command the value will be provided in the 'fm_user_{option}' environment variable. These options are not used by fm and are not persisted. # Environment Variables The following variables are exported for shell commands: These are referred with a '$' prefix on POSIX shells (e.g. '$f'), between '%' characters on Windows cmd (e.g. '%f%'), and with a '$env:' prefix on Windows powershell (e.g. '$env:f'). Current file selection as a full path. 
Selected file(s) separated with the value of 'filesep' option as full path(s). Selected file(s) (i.e. 'fs') if there are any selected files, otherwise current file selection (i.e. 'f'). Id of the running client. Present working directory. Initial working directory. The value of this variable is set to the current nesting level when you run fm from a shell spawned inside fm. You can add the value of this variable to your shell prompt to make it clear that your shell runs inside fm. For example, with POSIX shells, you can use '[ -n "$FM_LEVEL" ] && PS1="$PS1""(fm level: $FM_LEVEL) "' in your shell configuration file (e.g. '~/.bashrc'). If this variable is set in the environment, use the same value, otherwise set the value to 'start' in Windows, 'open' in MacOS, 'xdg-open' in others. If this variable is set in the environment, use the same value, otherwise set the value to 'vi' on Unix, 'notepad' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'less' on Unix, 'more' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'sh' on Unix, 'cmd' in Windows. Value of the {option}. Value of the user_{option}. Width/Height of the terminal. # Special Commands This section shows information about special shell commands. This shell command can be defined to override the default 'open' command when the current file is not a directory. This shell command can be defined to override the default 'paste' command. This shell command can be defined to override the default 'rename' command. This shell command can be defined to override the default 'delete' command. This shell command can be defined to be executed before changing a directory. This shell command can be defined to be executed after changing a directory. This shell command can be defined to be executed after the selection changes. This shell command can be defined to be executed before quit. # Prefixes The following command prefixes are used by fm: The same evaluator is used for the command line and the configuration file for read and shell commands. The difference is that prefixes are not necessary in the command line. Instead, different modes are provided to read corresponding commands. These modes are mapped to the prefix keys above by default. # Syntax Characters from '#' to newline are comments and ignored: There are four special commands ('set', 'map', 'cmap', and 'cmd') for configuration. Command 'set' is used to set an option which can be boolean, integer, or string: Command 'map' is used to bind a key to a command which can be builtin command, custom command, or shell command: Command 'cmap' is used to bind a key on the command line to a command line command or any other command: You can delete an existing binding by leaving the expression empty: Command 'cmd' is used to define a custom command: You can delete an existing command by leaving the expression empty: If there is no prefix then ':' is assumed: An explicit ':' can be provided to group statements until a newline which is especially useful for 'map' and 'cmd' commands: If you need multiline you can wrap statements in '{{' and '}}' after the proper prefix. 
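A short sketch of this configuration syntax (the particular options, bindings, and commands below are illustrative, not fm defaults):

    # set an option ('set' takes boolean, integer, or string values)
    set hidden true

    # bind a key to a command
    map gh cd ~

    # delete an existing binding by leaving the expression empty
    map gh

    # define a custom command; '$' is the shell command prefix
    cmd usage $du -h -d1 | less

    # the same command with a multiline body wrapped in '{{' and '}}'
    cmd usage ${{
        du -h -d1 | less
    }}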
# Key Mappings Regular keys are assigned to a command with the usual syntax: Keys combined with the shift key simply use the uppercase letter: Special keys are written in between '<' and '>' characters and always use lowercase letters: Angle brackets can be assigned with their special names: Function keys are prefixed with 'f' character: Keys combined with the control key are prefixed with 'c' character: Keys combined with the alt key are assigned in two different ways depending on the behavior of your terminal. Older terminals (e.g. xterm) may set the 8th bit of a character when the alt key is pressed. On these terminals, you can use the corresponding byte for the mapping: Newer terminals (e.g. gnome-terminal) may prefix the key with an escape key when the alt key is pressed. fm uses the escape delaying mechanism to recognize alt keys in these terminals (delay is 100ms). On these terminals, keys combined with the alt key are prefixed with 'a' character: Please note that, some key combinations are not possible due to the way terminals work (e.g. control and h combination sends a backspace key instead). The easiest way to find the name of a key combination is to press the key while fm is running and read the name of the key from the unknown mapping error. Mouse buttons are prefixed with 'm' character: Mouse wheel events are also prefixed with 'm' character: # Push Mappings The usual way to map a key sequence is to assign it to a named or unnamed command. While this provides a clean way to remap builtin keys as well as other commands, it can be limiting at times. For this reason 'push' command is provided by fm. This command is used to simulate key pushes given as its arguments. You can 'map' a key to a 'push' command with an argument to create various keybindings. This is mainly useful for two purposes. First, it can be used to map a command with a command count: Second, it can be used to avoid typing the name when a command takes arguments: One thing to be careful is that since 'push' command works with keys instead of commands it is possible to accidentally create recursive bindings: These types of bindings create a deadlock when executed. # Shell Commands Regular shell commands are the most basic command type that is useful for many purposes. For example, we can write a shell command to move selected file(s) to trash. A first attempt to write such a command may look like this: We check '$fs' to see if there are any selected files. Otherwise we just delete the current file. Since this is such a common pattern, a separate '$fx' variable is provided. We can use this variable to get rid of the conditional: The trash directory is checked each time the command is executed. We can move it outside of the command so it would only run once at startup: Since these are one liners, we can drop '{{' and '}}': Finally note that we set 'IFS' variable manually in these commands. Instead we could use the 'ifs' option to set it for all shell commands (i.e. 'set ifs "\n"'). This can be especially useful for interactive use (e.g. '$rm $f' or '$rm $fs' would simply work). This option is not set by default as it can behave unexpectedly for new users. However, use of this option is highly recommended and it is assumed in the rest of the documentation. # Piping Shell Commands Regular shell commands have some limitations in some cases. When an output or error message is given and the command exits afterwards, the ui is immediately resumed and there is no way to see the message without dropping to shell again. 
Also, even when there is no output or error, the ui still needs to be paused while the command is running. This can cause flickering on the screen for short commands and similar distractions for longer commands. Instead of pausing the ui, piping shell commands connects stdin, stdout, and stderr of the command to the statline in the bottom of the ui. This can be useful for programs following the Unix philosophy to give no output in the success case, and brief error messages or prompts in other cases. For example, following rename command prompts for overwrite in the statline if there is an existing file with the given name: You can also output error messages in the command and it will show up in the statline. For example, an alternative rename command may look like this: Note that input is line buffered and output and error are byte buffered. # Waiting Shell Commands Waiting shell commands are similar to regular shell commands except that they wait for a key press when the command is finished. These can be useful to see the output of a program before the ui is resumed. Waiting shell commands are more appropriate than piping shell commands when the command is verbose and the output is best displayed as multiline. # Asynchronous Shell Commands Asynchronous shell commands are used to start a command in the background and then resume operation without waiting for the command to finish. Stdin, stdout, and stderr of the command is neither connected to the terminal nor to the ui. # Remote Commands One of the more advanced features in fm is remote commands. All clients connect to a server on startup. It is possible to send commands to all or any of the connected clients over the common server. This is used internally to notify file selection changes to other clients. To use this feature, you need to use a client which supports communicating with a Unix domain socket. OpenBSD implementation of netcat (nc) is one such example. You can use it to send a command to the socket file: Since such a client may not be available everywhere, fm comes bundled with a command line flag to be used as such. When using fm, you do not need to specify the address of the socket file. This is the recommended way of using remote commands since it is shorter and immune to socket file address changes: In this command 'send' is used to send the rest of the string as a command to all connected clients. You can optionally give it an id number to send a command to a single client: All clients have a unique id number but you may not be aware of the id number when you are writing a command. For this purpose, an '$id' variable is exported to the environment for shell commands. The value of this variable is set to the process id of the client. You can use it to send a remote command from a client to the server which in return sends a command back to itself. So now you can display a message in the current client by calling the following in a shell command: Since fm does not have control flow syntax, remote commands are used for such needs. For example, you can configure the number of columns in the ui with respect to the terminal width as follows: Besides 'send' command, there is a 'quit' command to quit the server when there are no connected clients left, and a 'quit!' command to force quit the server by closing client connections first: Lastly, there is a 'conn' command to connect the server as a client. This should not be needed for users. # File Operations fm uses its own builtin copy and move operations by default. 
These are implemented as asynchronous operations and progress is shown in the bottom ruler. These commands do not overwrite existing files or directories with the same name. Instead, a suffix that is compatible with '--backup=numbered' option in GNU cp is added to the new files or directories. Only file modes are preserved and all other attributes are ignored including ownership, timestamps, context, and xattr. Special files such as character and block devices, named pipes, and sockets are skipped and links are not followed. Moving is performed using the rename operation of the underlying OS. For cross-device moving, fm falls back to copying and then deletes the original files if there are no errors. Operation errors are shown in the message line as well as the log file and they do not preemptively finish the corresponding file operation. File operations can be performed on the current selected file or alternatively on multiple files by selecting them first. When you 'copy' a file, fm doesn't actually copy the file on the disk, but only records its name to a file. The actual file copying takes place when you 'paste'. Similarly 'paste' after a 'cut' operation moves the file. You can customize copy and move operations by defining a 'paste' command. This is a special command that is called when it is defined instead of the builtin implementation. You can use the following example as a starting point: Some useful things to be considered are to use the backup ('--backup') and/or preserve attributes ('-a') options with 'cp' and 'mv' commands if they support it (i.e. GNU implementation), change the command type to asynchronous, or use 'rsync' command with progress bar option for copying and feed the progress to the client periodically with remote 'echo' calls. By default, fm does not assign 'delete' command to a key to protect new users. You can customize file deletion by defining a 'delete' command. You can also assign a key to this command if you like. An example command to move selected files to a trash folder and remove files completely after a prompt are provided in the example configuration file. # Searching Files There are two mechanisms implemented in fm to search a file in the current directory. Searching is the traditional method to move the selection to a file matching a given pattern. Finding is an alternative way to search for a pattern possibly using fewer keystrokes. Searching mechanism is implemented with commands 'search' (default '/'), 'search-back' (default '?'), 'search-next' (default 'n'), and 'search-prev' (default 'N'). You can enable 'globsearch' option to match with a glob pattern. Globbing supports '*' to match any sequence, '?' to match any character, and '[...]' or '[^...] to match character sets or ranges. You can enable 'incsearch' option to jump to the current match at each keystroke while typing. In this mode, you can either use 'cmd-enter' to accept the search or use 'cmd-escape' to cancel the search. You can also map some other commands with 'cmap' to accept the search and execute the command immediately afterwards. For example, you can use the right arrow key to finish the search and open the selected file with the following mapping: Finding mechanism is implemented with commands 'find' (default 'f'), 'find-back' (default 'F'), 'find-next' (default ';'), 'find-prev' (default ','). You can disable 'anchorfind' option to match a pattern at an arbitrary position in the filename instead of the beginning. 
You can set the number of keys to match using 'findlen' option. If you set this value to zero, then the keys are read until there is only a single match. Default values of these two options are set to jump to the first file with the given initial. Some options affect both searching and finding. You can disable 'wrapscan' option to prevent searches from wrapping around at the end of the file list. You can disable 'ignorecase' option to match cases in the pattern and the filename. This option is already automatically overridden if the pattern contains upper case characters. You can disable 'smartcase' option to disable this behavior. Two similar options 'ignoredia' and 'smartdia' are provided to control matching diacritics in latin letters.

# Opening Files

You can define an 'open' command (default 'l' and '<right>') to configure file opening. This command is only called when the current file is not a directory, otherwise the directory is entered instead. You can define it just as you would define any other command: It is possible to use different command types: You may want to use either file extensions or mime types from 'file' command: You may want to use 'setsid' before your opener command to have persistent processes that continue to run after fm quits. Regular shell commands (i.e. '$') drop to terminal which results in a flicker for commands that finish immediately (e.g. 'xdg-open' in the above example). If you want to use asynchronous shell commands (i.e. '&') but also want to use the terminal when necessary (e.g. 'vi' in the above example), you can use a remote command: Note, asynchronous shell commands run in their own process group by default so they do not require the manual use of 'setsid'. Following command is provided by default: You may also use any other existing file openers as you like. Possible options are 'libfile-mimeinfo-perl' (executable name is 'mimeopen'), 'rifle' (ranger's default file opener), or 'mimeo' to name a few.

# Previewing Files

fm previews files on the preview pane by printing the file until the end or the preview pane is filled. This output can be enhanced by providing a custom preview script for filtering. This can be used to highlight source codes, list contents of archive files or view pdf or image files to name a few. For coloring fm recognizes ansi escape codes. In order to use this feature you need to set the value of 'previewer' option to the path of an executable file. Five arguments are passed to the file, (1) current file name, (2) width, (3) height, (4) horizontal position, and (5) vertical position of preview pane respectively. Output of the execution is printed in the preview pane. You may also want to use the same script in your pager mapping as well: For 'less' pager, you may instead utilize 'LESSOPEN' mechanism so that useful information about the file such as the full path of the file can still be displayed in the statusline below: Since this script is called for each file selection change it needs to be as efficient as possible and this responsibility is left to the user. You may use file extensions to determine the type of file more efficiently compared to obtaining mime types from 'file' command. Extensions can then be used to match cleanly within a conditional: Another important consideration for efficiency is the use of programs with short startup times for preview. For this reason, 'highlight' is recommended over 'pygmentize' for syntax highlighting.
Besides, it is also important that the application is processing the file on the fly rather than first reading it to the memory and then do the processing afterwards. This is especially relevant for big files. fm automatically closes the previewer script output pipe with a SIGPIPE when enough lines are read. When everything else fails, you can make use of the height argument to only feed the first portion of the file to a program for preview. Note that some programs may not respond well to SIGPIPE to exit with a non-zero return code and avoid caching. You may add a trailing '|| true' command to avoid such errors: You may also use an existing preview filter as you like. Your system may already come with a preview filter named 'lesspipe'. These filters may have a mechanism to add user customizations as well. See the related documentations for more information. # Changing Directory fm changes the working directory of the process to the current directory so that shell commands always work in the displayed directory. After quitting, it returns to the original directory where it is first launched like all shell programs. There is a special command 'on-cd' that runs a shell command when it is defined and the directory is changed. You can define it just as you would define any other command: If you want to print escape sequences, you may redirect 'printf' output to '/dev/tty'. The following xterm specific escape sequence sets the terminal title to the working directory: This command runs whenever you change directory but not on startup. You can add an extra call to make it run on startup as well: Note that all shell commands are possible but '%' and '&' are usually more appropriate as '$' and '!' causes flickers and pauses respectively. There is also a 'pre-cd' command, that works like 'on-cd', but is run before the directory is actually changed. # Colors fm tries to automatically adapt its colors to the environment. It starts with a default colorscheme and updates colors using values of existing environment variables possibly by overwriting its previous values. Colors are set in the following order: Please refer to the corresponding man pages for more information about 'LSCOLORS' and 'LS_COLORS'. 'FM_COLORS' is provided with the same syntax as 'LS_COLORS' in case you want to configure colors only for fm but not ls. This can be useful since there are some differences between ls and fm, though one should expect the same behavior for common cases. Colors file is provided for easier configuration without environment variables. This file should consist of whitespace separated pairs with '#' character to start comments until the end of line. You can configure fm colors in two different ways. First, you can only configure 8 basic colors used by your terminal and fm should pick up those colors automatically. Depending on your terminal, you should be able to select your colors from a 24-bit palette. This is the recommended approach as colors used by other programs will also match each other. Second, you can set the values of environment variables or colors file mentioned above for fine grained customization. Note that 'LS_COLORS/FM_COLORS' are more powerful than 'LSCOLORS' and they can be used even when GNU programs are not installed on the system. You can combine this second method with the first method for best results. Lastly, you may also want to configure the colors of the prompt line to match the rest of the colors. 
Colors of the prompt line can be configured using the 'promptfmt' option, which can include hardcoded colors as ansi escapes. See the default value of this option to have an idea about how to color this line. It is worth noting that fm uses as many colors as are advertised by your terminal's entry in the terminfo or infocmp databases on your system. If an entry is not present, it falls back to an internal database. If your terminal supports 24-bit colors but either does not have a database entry or does not advertise all capabilities, you can enable support by setting the '$COLORTERM' variable to 'truecolor' or ensuring '$TERM' is set to a value that ends with '-truecolor'. Default fm colors are mostly taken from GNU dircolors defaults. These defaults use 8 basic colors and the bold attribute. Default dircolors entries with background colors are simplified to avoid confusion with the current file selection in fm. Similarly, there are only file type matchings; extension matchings are left out for simplicity. Default values are as follows, given with their matching order in fm: Note that fm first tries matching file names and then falls back to file types. The full order of matchings from most specific to least is as follows: For example, given a regular text file '/path/to/README.txt', the following entries are checked in the configuration and the first one to match is used: Given a regular directory '/path/to/example.d', the following entries are checked in the configuration and the first one to match is used: Note that glob-like patterns do not actually perform glob matching for performance reasons. For example, you can set a variable as follows: Having all entries on a single line can make it hard to read. You may instead divide it into multiple lines between double quotes by escaping newlines with backslashes as follows: Having such a long variable definition in a shell configuration file might be undesirable. You may instead use the colors file for configuration. You may also see the wiki page for ansi escape codes: https://en.wikipedia.org/wiki/ANSI_escape_code

# Icons

Icons are configured using the 'FM_ICONS' environment variable or an icons file. The variable uses the same syntax as 'LS_COLORS/FM_COLORS'. Instead of colors, you should put a single character as the value of each entry. The icons file should consist of whitespace-separated pairs, with the '#' character starting comments until the end of the line. Do not forget to enable the 'icons' option to see the icons. Default values are as follows, given with their matching order in fm:
Package sshego is a Go library that does secure port forwarding over ssh. Also `gosshtun` is a command line utility included here that demonstrates use of the library, and may be useful standalone. The intent of having a Go library is so that it can be used to secure (via SSH tunnel) any other traffic that your Go application would normally have to do over cleartext TCP. While you could always run a tunnel as a separate process, by running the tunnel in process with your application, you know the tunnel is running when the process is running. It's just simpler to administer; only one thing to start instead of two. Also this is much simpler, and much faster, than using a virtual private network (VPN). For a speed comparison, consider [1] where SSH is seen to be at least 2x faster than OpenVPN. [1] http://serverfault.com/questions/653211/ssh-tunneling-is-faster-than-openvpn-could-it-be The sshego library typically acts as an ssh client, but also provides options to support running an embedded sshd server daemon. Port forwarding is the most typical use of the client, and this is the equivalent of using the standalone `ssh` client program and giving the `-L` and/or `-R` flags. If you only trust the user running your application and not your entire host, you can further restrict access by using either DialConfig.Dial() for a direct-tcpip connection, or by using the unix-domain-socket support. For example, is equivalent to with the addendum that `gosshtun` requires the use of a passwordless private `-key` file, and will never prompt you for a password at the keyboard. This makes it ideal for embedding inside your application to secure your (e.g. mysql, postgres, other cleartext) traffic. As many connections as you need will be multiplexed over the same ssh tunnel. We check the sshd server's host key. We prevent MITM attacks by only allowing new servers if `-new` is given. You should give `-new` only once at setup time. Then the lack of `-new` can protect you on subsequent runs, because the server's host key must match what we were given the first time. means the following two network hops will happen, when a local browser connects to localhost:8888 where (a) takes place inside the previously established ssh tunnel. Connection (b) takes place over basic, un-adorned, un-encrypted TCP/IP. Of course you could always run `gosshtun` again on the remote host to secure the additional hop as well, but typically -remote is aimed at the 127.0.0.1, which will be internal to the remote host itself and so needs no encryption.
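To make the in-process tunneling idea concrete, here is a minimal sketch of a local port forward written directly against golang.org/x/crypto/ssh. It is not sshego's API (which additionally pins host keys and handles the gosshtun flags described above); the host names, ports, and password auth below are placeholders.

    package main

    import (
        "io"
        "log"
        "net"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "user",
            Auth:            []ssh.AuthMethod{ssh.Password("secret")},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sshego pins the host key instead
        }
        // One ssh connection; every forwarded connection is multiplexed over it.
        client, err := ssh.Dial("tcp", "jumpbox.example.com:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", "127.0.0.1:8888") // local end of the tunnel
        if err != nil {
            log.Fatal(err)
        }
        for {
            local, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            remote, err := client.Dial("tcp", "127.0.0.1:80") // remote end, inside the tunnel
            if err != nil {
                log.Fatal(err)
            }
            go io.Copy(remote, local)
            go io.Copy(local, remote)
        }
    }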
Package twilio provides server-side authentication utilities for applications to integrate with the Twilio service. Access Tokens are short-lived tokens that can be used to authenticate Twilio Client SDKs like Video and IP Messaging. See details at https://www.twilio.com/docs/api/rest/access-tokens. Capability Tokens are a secure way of setting up your device to access various features of Twilio, allowing you to add Twilio capabilities to web and mobile applications without exposing your AuthToken in any client-side environment. See details at https://www.twilio.com/docs/api/client/capability-tokens.
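For orientation, an Access Token is ultimately a short-lived JWT signed with an API key secret. The sketch below uses github.com/golang-jwt/jwt/v5 only to show that general shape; the claim layout and grant structure are assumptions for illustration and are not this package's API.

    package main

    import (
        "fmt"
        "time"

        "github.com/golang-jwt/jwt/v5"
    )

    func main() {
        accountSid, apiKeySid, apiKeySecret := "ACxxxx", "SKxxxx", "secret"

        // Assumed claim layout: issuer is the API key, subject is the account,
        // and the grants say which products (e.g. Video) the token unlocks.
        claims := jwt.MapClaims{
            "jti": apiKeySid + "-1",
            "iss": apiKeySid,
            "sub": accountSid,
            "exp": time.Now().Add(time.Hour).Unix(),
            "grants": map[string]interface{}{
                "identity": "alice",
                "video":    map[string]string{"room": "demo"},
            },
        }

        tok := jwt.NewWithClaims(jwt.SigningMethodHS256, claims)
        signed, err := tok.SignedString([]byte(apiKeySecret))
        if err != nil {
            panic(err)
        }
        fmt.Println(signed) // hand this to the client-side SDK
    }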
Package ntp provides an implementation of a Simple NTP (SNTP) client capable of querying the current time from a remote NTP server. See RFC5905 (https://tools.ietf.org/html/rfc5905) for more details. This approach grew out of a go-nuts post by Michael Hofmann: https://groups.google.com/forum/?fromgroups#!topic/golang-nuts/FlcdMU5fkLQ
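A minimal usage sketch follows. It assumes the package exposes the common SNTP-client shape (a Time helper plus a Query call that reports the clock offset), as found in github.com/beevik/ntp; adjust the import path and names to the actual API.

    package main

    import (
        "fmt"
        "log"
        "time"

        "github.com/beevik/ntp"
    )

    func main() {
        // One-shot: ask a public pool server for the current time.
        t, err := ntp.Time("0.pool.ntp.org")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("network time:", t)

        // Query returns more detail, e.g. the clock offset to apply locally.
        resp, err := ntp.Query("0.pool.ntp.org")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("local clock off by:", resp.ClockOffset)
        fmt.Println("adjusted local time:", time.Now().Add(resp.ClockOffset))
    }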
Summary: rmq passes msgpack2 messages over websockets between Golang and the R statistical language. It is an R package.

## Or: How to utilize Go libraries from R.

The much anticipated Go 1.5 release brought strong support for building C-style shared libraries (.so files) from Go source code and libraries. *This is huge*. It opens up many exciting new possibilities. In this project (rmq), we explore using this new capability to extend R with Go libraries. Package rmq provides messaging based on msgpack and websockets. It demonstrates calling from R into Golang (Go) libraries to extend R with functionality available in Go.

## why msgpack

Msgpack is binary and self-describing. It can be extremely fast to parse. Moreover, I don't have to worry about where to get the schema .proto file. The thorny problem of how to *create* new types of objects when I'm inside R just goes away. The data is self-describing, and new structures can be created at run-time. Msgpack supports a similar forward evolution/backwards compatibility strategy as protobufs. Hence it allows incremental rolling upgrades of large compute clusters using it as a protocol. That was the whole raison d'etre of protobufs. Old code ignores new data fields. New code uses defaults for missing fields when given old data. Icing on the cake: msgpack (like websocket) is usable from javascript in the browser (unlike most everything else). Because it is very simple, msgpack has massive cross-language support (55 bindings are listed at http://msgpack.org). Overall, msgpack is flexible while being fast, simple and widely supported, making it a great fit for data exchange between interpreted and compiled environments.

## implementation

We use the Go codec library https://github.com/ugorji/go for msgpack encoding and decoding. This is a high performance implementation. We use it in a mode where it only supports the updated msgpack 2 (current) spec. This is critical for interoperability with other compiled languages that distinguish between utf8 strings and binary blobs (otherwise embedded '\0' zeros in blobs cause problems). For websockets, we use the terrific https://github.com/gorilla/websocket library. As time permits in the future, we may add more features aimed at message queuing as well. The gorilla library supports securing your communication with TLS certificates.

## Status

Excellent. Tested on OSX and Linux. Documentation has been written and is available. The package is functionally complete for RPC over websockets and msgpack-based serialization. After interactive usage, I added SIGINT handling so that the web-server can be stopped during development with a simple Ctrl-c at the R console. The client side will be blocked during calls (it does not poll back to R while waiting on the network) but has a configurable timeout (default 5 seconds) that allows easy client-side error handling.

## structure of this repo

This repository is mainly structured as an R package. It is designed to be built and installed into an R (statistical environment) installation, using the standard tools for R. This package doesn't directly create a reusable Go library. Instead we target a c-shared library (rmq.so) that will install into R using 'R CMD INSTALL rmq'. See: 'make install' or 'make build' followed by doing `install.packages('./rmq_1.0.1.tar.gz', repos=NULL)` from inside R (assuming the package is in your current directory; if not then adjust the ./ part of the package path).
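To make the msgpack mode described under 'implementation' concrete, here is a small Go-side sketch that encodes and decodes a value with the ugorji codec using the updated (msgpack 2) spec, so utf8 strings and binary blobs stay distinct. The field names and payload are illustrative only.

    package main

    import (
        "fmt"

        "github.com/ugorji/go/codec"
    )

    func main() {
        var mh codec.MsgpackHandle
        mh.WriteExt = true // use the updated spec: separate str and bin types

        payload := map[string]interface{}{
            "series": []float64{1.5, 2.5, 3.5},
            "label":  "from-go",
            "blob":   []byte{0x00, 0x01, 0x02}, // embedded zeros survive as bin
        }

        var buf []byte
        if err := codec.NewEncoderBytes(&buf, &mh).Encode(payload); err != nil {
            panic(err)
        }

        var back map[string]interface{}
        if err := codec.NewDecoderBytes(buf, &mh).Decode(&back); err != nil {
            panic(err)
        }
        fmt.Printf("%d msgpack bytes, round-trip: %#v\n", len(buf), back)
    }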
The code also serves as an example of how to use golang inside R. ## embedding R in Golang While RMQ is mainly designed to embed Go under R, it defines functions, in particular SexpToIface(), that make embedding R in Go quite easy too. See the comments and example in main() of the central rmq.go file (https://github.com/glycerine/rmq/blob/master/src/rmq/rmq.go) for a demonstration.
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best-practices. The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers: The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime. A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/master/FAQ.md. Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, and setting up common dependencies (pkg/runtime/inject), like shared caches and clients, as well as managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals). Controllers (pkg/controller) use events (pkg/events) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases. Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Response or an error indicating whether to requeue for a second round of processing. Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
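A minimal Reconciler sketch for the flow just described: the Request carries only a name and namespace, the manager-provided client (backed by the shared cache) fetches the object, and the returned Result says whether to requeue. The signature shown is the one used by recent controller-runtime versions.

    package main

    import (
        "context"

        appsv1 "k8s.io/api/apps/v1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // ReplicaSetReconciler reconciles one ReplicaSet per Request.
    type ReplicaSetReconciler struct {
        client.Client
    }

    func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        var rs appsv1.ReplicaSet
        if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {
            // The object may already be gone; don't requeue in that case.
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }
        // ... compare desired vs. observed state and update rs or its Pods here ...
        return ctrl.Result{}, nil
    }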
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific). Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager. Logging (pkg/log) in controller-runtime is done via structured logs, using a set of interfaces called logr (https://godoc.org/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime. Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these on an HTTP endpoint, and additional metrics may be registered to this Registry as normal. You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup. This example creates a simple application Controller that is configured for ReplicaSets and Pods. * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support This example creates a simple application Controller that is configured for ReplicaSets and Pods. This application controller will be running leader election with the provided configuration in the manager options. If leader election configuration is not provided, the controller runs leader election with default values. Default values taken from: https://github.com/kubernetes/apiserver/blob/master/pkg/apis/config/v1alpha1/defaults.go * Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler. * Start the application. TODO(pwittrock): Update this example when we have better dependency injection support
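A sketch of the wiring such an example describes, using recent controller-runtime APIs: create a Manager, use the Builder to watch ReplicaSets and the Pods they own, and start everything under the signal handler. It assumes the ReplicaSetReconciler type sketched earlier is defined in the same package.

    package main

    import (
        "os"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/log/zap"
    )

    func main() {
        ctrl.SetLogger(zap.New())

        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            os.Exit(1)
        }

        // Reconcile ReplicaSets, and also enqueue the owning ReplicaSet when
        // one of its Pods changes.
        err = ctrl.NewControllerManagedBy(mgr).
            For(&appsv1.ReplicaSet{}).
            Owns(&corev1.Pod{}).
            Complete(&ReplicaSetReconciler{Client: mgr.GetClient()})
        if err != nil {
            os.Exit(1)
        }

        // Blocks until the signal handler's context is cancelled.
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            os.Exit(1)
        }
    }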
Simple websocket client with a CLI shell. It can send text messages to a websocket server. Simple websocket server that echoes requests back to the client.
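A minimal echo server of the kind described might look as follows; github.com/gorilla/websocket is an assumption here, since the text does not name the websocket library used.

    package main

    import (
        "log"
        "net/http"

        "github.com/gorilla/websocket"
    )

    var upgrader = websocket.Upgrader{}

    func echo(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            log.Println(err)
            return
        }
        defer conn.Close()
        for {
            mt, msg, err := conn.ReadMessage() // read a message from the client
            if err != nil {
                return
            }
            if err := conn.WriteMessage(mt, msg); err != nil { // echo it back
                return
            }
        }
    }

    func main() {
        http.HandleFunc("/ws", echo)
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
    }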
Package freeipa provides a client for the FreeIPA API. It provides access to almost all methods available through the API. Every API method has generated Go structs for request parameters and output. This code is generated from a schema which was queried from a FreeIPA server using its "schema" method. This client performs basic response validation. Since the FreeIPA server does not always conform to its own schema, it can happen that this library fails to unmarshal a response from FreeIPA. If you run into that, please open an issue for this client library. With that said, this is still the most extensive golang FreeIPA client and it's probably easier to fix those issues here than to write a new client from scratch. Since FreeIPA cares about the presence or absence of fields in requests, all optional fields are defined as pointers. There are utility functions like freeipa.String to make filling these less painful. The client uses FreeIPA's JSON-RPC interface with username/password authentication. There is no support for connecting to FreeIPA with Kerberos authentication. There is currently no support for batched requests. See https://github.com/stefanabl/go-freeipa/blob/master/developing.md for information on how this library is generated.
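A hypothetical usage sketch follows. The constructor and method names (NewClient, UserShow and its Args/OptionalArgs structs) are assumptions meant only to illustrate the generated-struct style and the freeipa.String helper mentioned above; consult the package documentation for the exact API.

    package main

    import (
        "fmt"
        "log"

        "github.com/stefanabl/go-freeipa/freeipa"
    )

    func main() {
        // Assumed constructor: connect over JSON-RPC with username/password.
        client, err := freeipa.NewClient("ipa.example.test", "admin", "password")
        if err != nil {
            log.Fatal(err)
        }

        // Optional fields are pointers, so the freeipa.String helper avoids
        // declaring throwaway variables.
        res, err := client.UserShow(
            &freeipa.UserShowArgs{},
            &freeipa.UserShowOptionalArgs{UID: freeipa.String("jdoe")},
        )
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(res)
    }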
Package main is a stub for wr's command line interface, with the actual implementation in the cmd package. wr is a workflow runner. You use it to run the commands in your workflow easily, automatically, reliably, with repeatability, and while making optimal use of your available computing resources. wr is implemented as a polling-free in-memory job queue with an on-disk ACID transactional embedded database, written in Go. Its main benefits over other software workflow management systems are its very low latency and overhead, its high performance at scale, its real-time status updates with a view on all your workflows on one screen, its permanent searchable history of all the commands you have ever run, and its "live" dependencies enabling easy automation of on-going projects. Start up the manager daemon, which gives you a url you can view the web interface on: In addition to the "local" scheduler, which will run your commands on all available cores of the local machine, you can also have it run your commands on your LSF cluster or in your OpenStack environment (where it will scale the number of servers needed up and down automatically). Now, stick the commands you want to run in a text file and: Arbitrarily complex workflows can be formed by specifying command dependencies. Use the --help option of `wr add` for details. wr's core is implemented in the queue package. This is the in-memory job queue that holds commands that still need to be run. Its multiple sub-queues enable certain guarantees: a given command will only get run by a single client at any one time; if a client dies, the command will get run by another client instead; if a command cannot be run, it is buried until the user takes action; if a command has a dependency, it won't run until its dependencies are complete. The jobqueue package provides client+server code for interacting with the in-memory queue from the queue package, and by storing all new commands in an on-disk database, provides an additional guarantee: that (dynamic) workflows won't break because a job that was added got "lost" before it got run. It also retains all completed jobs, enabling searching through past workflows and allowing for "live" dependencies, triggering the rerunning of previously completed commands if their dependencies change. The jobqueue package is also what actually does the main "work" of the system: the server component knows how many commands need to be run and what their resource requirements (memory, time, CPUs etc.) are, and submits the appropriate number of jobqueue runner clients to the job scheduler. The jobqueue/scheduler package has the scheduler-specific code that ensures that these runner clients get run on the configured system in the most efficient way possible. E.g. for LSF, if we have 10 commands that need 2GB of memory to run, we will submit a job array of size 10 with 2GB of memory reservation to LSF. The most limited (and therefore potentially least contended) queue capable of running the commands will be chosen. For OpenStack, the cheapest server (in terms of cores and memory) that can run the commands will be spawned, and once there is no more work to do on those servers, they get terminated to free up resources. The cloud package implements methods for interacting with cloud environments such as OpenStack. The corresponding jobqueue/scheduler package uses these methods to do their work. The static subdirectory contains the html, css and javascript needed for the web interface.
See jobqueue/serverWebI.go for how the web interface backend is implemented. The internal package contains general utility functions, and most notably config.go holds the code for how the command line interface deals with config options.
lf is a terminal file manager. Source code can be found in the repository at https://github.com/gokcehan/lf. This documentation can either be read from the terminal using 'lf -doc' or online at https://godoc.org/github.com/gokcehan/lf. You can also use the 'doc' command (default '<f-1>') inside lf to view the documentation in a pager. You can run 'lf -help' to see descriptions of command line options. The following commands are provided by lf: The following command line commands are provided by lf: The following options can be used to customize the behavior of lf: The following environment variables are exported for shell commands: The following commands/keybindings are provided by default: The following additional keybindings are provided by default: Configuration files should be located at: Marks file should be located at: History file should be located at: You can configure the default values of the following variables to change these locations: A sample configuration file can be found at https://github.com/gokcehan/lf/blob/master/etc/lfrc.example. This section shows information about builtin commands. Modal commands do not take any arguments, but instead change the operation mode to read their input conveniently, and so they are meant to be assigned to keybindings. Quit lf and return to the shell. Move the current file selection upwards/downwards by one/half a page/full page. Change the current working directory to the parent directory. If the current file is a directory, then change the current directory to it, otherwise, execute the 'open' command. A default 'open' command is provided to call the default system opener asynchronously with the current file as the argument. A custom 'open' command can be defined to override this default. (See also 'OPENER' variable and 'Opening Files' section) Move the current file selection to the top/bottom of the directory. Toggle the selection of the current file or files given as arguments. Reverse the selection of all files in the current directory (i.e. 'toggle' all files). Selections in other directories are not affected by this command. You can define a new command to select all files in the directory by combining 'invert' with 'unselect' (i.e. `cmd select-all :unselect; invert`), though this will also remove selections in other directories. Remove the selection of all files in all directories. Select files that match the given glob. Unselect files that match the given glob. If there are no selections, save the path of the current file to the copy buffer, otherwise, copy the paths of selected files. If there are no selections, save the path of the current file to the cut buffer, otherwise, copy the paths of selected files. Copy/Move files in copy/cut buffer to the current working directory. Clear file paths in copy/cut buffer. Synchronize copied/cut files with server. This command is automatically called when required. Draw the screen. This command is automatically called when required. Synchronize the terminal and redraw the screen. Load modified files and directories. This command is automatically called when required. Flush the cache and reload all files and directories. Print given arguments to the message line at the bottom. Print given arguments to the message line at the bottom and also to the log file. Print given arguments to the message line at the bottom in red color and also to the log file. Change the working directory to the given argument. Change the current file selection to the given argument. Remove the current file or selected file(s).
Rename the current file using the builtin method. A custom 'rename' command can be defined to override this default. Read the configuration file given in the argument. Simulate key pushes given in the argument. Read a command to evaluate. Read a shell command to execute. (See also 'Prefixes' and 'Shell Commands' sections) Read a shell command to execute piping its standard I/O to the bottom statline. (See also 'Prefixes' and 'Piping Shell Commands' sections) Read a shell command to execute and wait for a key press in the end. (See also 'Prefixes' and 'Waiting Shell Commands' sections) Read a shell command to execute synchronously without standard I/O. Read key(s) to find the appropriate file name match in the forward/backward direction and jump to the next/previous match. (See also 'anchorfind', 'findlen', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Read a pattern to search for a file name match in the forward/backward direction and jump to the next/previous match. (See also 'globsearch', 'incsearch', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Save the current directory as a bookmark assigned to the given key. Change the current directory to the bookmark assigned to the given key. A special bookmark "'" holds the previous directory after a 'mark-load', 'cd', or 'select' command. Remove a bookmark assigned to the given key. This section shows information about command line commands. These should be mostly compatible with readline keybindings. A character refers to a unicode code point, a word consists of letters and digits, and a unix word consists of any non-blank characters. Quit command line mode and return to normal mode. Autocomplete the current word. Autocomplete the current word, then you can press the bound key(s) again to cycle through completion options. Autocomplete the current word, then you can press the bound key(s) again to cycle through completion options backwards. Execute the current line. Interrupt the current shell-pipe command and return to the normal mode. Go to the next/previous item in the history. Move the cursor to the left/right. Move the cursor to the beginning/end of line. Delete the next character in the forward/backward direction. Delete everything up to the beginning/end of line. Delete the previous unix word. Paste the buffer content containing the last deleted item. Transpose the positions of the last two characters/words. Move the cursor by one word in the forward/backward direction. Delete the next word in the forward direction. Capitalize/uppercase/lowercase the current word and jump to the next word. This section shows information about options to customize the behavior. Character ':' is used as the separator for list options '[]int' and '[]string'. When this option is enabled, the find command starts matching patterns from the beginning of file names, otherwise, it can match at an arbitrary position. When this option is enabled, directory sizes show the number of items inside instead of the size of the directory file. The former needs to be calculated by reading the directory and counting the items inside. The latter is directly provided by the operating system and it does not require any calculation, though it is non-intuitive and it can often be misleading. This option is disabled by default for performance reasons. This option only has an effect when 'info' has a 'size' field and the pane is wide enough to show the information.
A thousand items are counted per directory at most, and bigger directories are shown as '999+'. Show directories first above regular files. Draw boxes around panes with box drawing characters. Format string of error messages shown in the bottom message line. File separator used in environment variables 'fs' and 'fx'. Number of characters prompted for the find command. When this value is set to 0, find command prompts until there is only a single match left. When this option is enabled, search command patterns are considered as globs, otherwise they are literals. With globbing, '*' matches any sequence, '?' matches any character, and '[...]' or '[^...] matches character sets or ranges. Otherwise, these characters are interpreted as they are. Show hidden files. On unix systems, hidden files are determined by the value of 'hiddenfiles'. On windows, only files with hidden attributes are considered hidden files. List of hidden file glob patterns. Patterns can be given as relative or absolute paths. Globbing supports the usual special characters, '*' to match any sequence, '?' to match any character, and '[...]' or '[^...] to match character sets or ranges. In addition, if a pattern starts with '!', then its matches are excluded from hidden files. Show icons before each item in the list. By default, only two icons, 🗀 (U+1F5C0) and 🗎 (U+1F5CE), are used for directories and files respectively, as they are supported in the unicode standard. Icons can be configured with an environment variable named 'LF_ICONS'. The syntax of this variable is similar to 'LS_COLORS'. See the wiki page for an example icon configuration. Sets 'IFS' variable in shell commands. It works by adding the assignment to the beginning of the command string as 'IFS='...'; ...'. The reason is that 'IFS' variable is not inherited by the shell for security reasons. This method assumes a POSIX shell syntax and so it can fail for non-POSIX shells. This option has no effect when the value is left empty. This option does not have any effect on windows. Ignore case in sorting and search patterns. Ignore diacritics in sorting and search patterns. Jump to the first match after each keystroke during searching. List of information shown for directory items at the right side of pane. Currently supported information types are 'size', 'time', 'atime', and 'ctime'. Information is only shown when the pane width is more than twice the width of information. Send mouse events as input. Show the position number for directory items at the left side of pane. When 'relativenumber' is enabled, only the current line shows the absolute position and relative positions are shown for the rest. Set the interval in seconds for periodic checks of directory updates. This works by periodically calling the 'load' command. Note that directories are already updated automatically in many cases. This option can be useful when there is an external process changing the displayed directory and you are not doing anything in lf. Periodic checks are disabled when the value of this option is set to zero. Show previews of files and directories at the right most pane. If the file has more lines than the preview pane, rest of the lines are not read. Files containing the null character (U+0000) in the read portion are considered binary files and displayed as 'binary'. Set the path of a previewer file to filter the content of regular files for previewing. The file should be executable. 
Five arguments are passed to the file, first is the current file name; the second, third, fourth, and fifth are width, height, horizontal position, and vertical position of preview pane respectively. SIGPIPE signal is sent when enough lines are read. If the previewer returns a non-zero exit code, then the preview cache for the given file is disabled. This means that if the file is selected in the future, the previewer is called once again. Preview filtering is disabled and files are displayed as they are when the value of this option is left empty. Set the path of a cleaner file. This file will be called if previewing is enabled, the previewer is set, and the previously selected file had its preview cache disabled. The file should be executable. One argument is passed to the file; the path to the file whose preview should be cleaned. Preview clearing is disabled when the value of this option is left empty. Format string of the prompt shown in the top line. Special expansions are provided, '%u' as the user name, '%h' as the host name, '%w' as the working directory, '%d' as the working directory with a trailing path separator, and '%f' as the file name. Home folder is shown as '~' in the working directory expansion. Directory names are automatically shortened to a single character starting from the left most parent when the prompt does not fit to the screen. List of ratios of pane widths. Number of items in the list determines the number of panes in the ui. When 'preview' option is enabled, the right most number is used for the width of preview pane. Show the position number relative to the current line. When 'number' is enabled, current line shows the absolute position, otherwise nothing is shown. Reverse the direction of sort. Minimum number of offset lines shown at all times in the top and the bottom of the screen when scrolling. The current line is kept in the middle when this option is set to a large value that is bigger than the half of number of lines. A smaller offset can be used when the current file is close to the beginning or end of the list to show the maximum number of items. Shell executable to use for shell commands. On unix, a POSIX compatible shell is required. Shell commands are executed as 'shell shellopts -c command -- arguments'. On windows, '/c' is used instead of '-c' which should work in 'cmd' and 'powershell'. List of shell options to pass to the shell executable. Override 'ignorecase' option when the pattern contains an uppercase character. This option has no effect when 'ignorecase' is disabled. Override 'ignoredia' option when the pattern contains a character with diacritic. This option has no effect when 'ignoredia' is disabled. Sort type for directories. Currently supported sort types are 'natural', 'name', 'size', 'time', 'ctime', 'atime', and 'ext'. Number of space characters to show for horizontal tabulation (U+0009) character. Format string of the file modification time shown in the bottom line. Truncate character shown at the end when the file name does not fit to the pane. Searching can wrap around the file list. Scrolling can wrap around the file list. The following variables are exported for shell commands: These are referred with a '$' prefix on POSIX shells (e.g. '$f'), between '%' characters on Windows cmd (e.g. '%f%'), and with a '$env:' prefix on Windows powershell (e.g. '$env:f'). Current file selection as a full path. Selected file(s) separated with the value of 'filesep' option as full path(s). Selected file(s) (i.e. 
'fs') if there are any selected files, otherwise current file selection (i.e. 'f'). Id of the running client. The value of this variable is set to the current nesting level when you run lf from a shell spawned inside lf. You can add the value of this variable to your shell prompt to make it clear that your shell runs inside lf. For example, with POSIX shells, you can use '[ -n "$LF_LEVEL" ] && PS1="$PS1""(lf level: $LF_LEVEL) "' in your shell configuration file (e.g. '~/.bashrc'). If this variable is set in the environment, use the same value, otherwise set the value to 'start' in Windows, 'open' in MacOS, 'xdg-open' in others. If this variable is set in the environment, use the same value, otherwise set the value to 'vi' on unix, 'notepad' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'less' on unix, 'more' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'sh' on unix, 'cmd' in Windows. The following command prefixes are used by lf: The same evaluator is used for the command line and the configuration file for read and shell commands. The difference is that prefixes are not necessary in the command line. Instead, different modes are provided to read corresponding commands. These modes are mapped to the prefix keys above by default. Characters from '#' to newline are comments and ignored: There are three special commands ('set', 'map', and 'cmd') and their variants for configuration. Command 'set' is used to set an option which can be boolean, integer, or string: Command 'map' is used to bind a key to a command which can be builtin command, custom command, or shell command: Command 'cmap' is used to bind a key to a command line command which can only be one of the builtin commands: You can delete an existing binding by leaving the expression empty: Command 'cmd' is used to define a custom command: You can delete an existing command by leaving the expression empty: If there is no prefix then ':' is assumed: An explicit ':' can be provided to group statements until a newline which is especially useful for 'map' and 'cmd' commands: If you need multiline you can wrap statements in '{{' and '}}' after the proper prefix. Regular keys are assigned to a command with the usual syntax: Keys combined with the shift key simply use the uppercase letter: Special keys are written in between '<' and '>' characters and always use lowercase letters: Angle brackets can be assigned with their special names: Function keys are prefixed with 'f' character: Keys combined with the control key are prefixed with 'c' character: Keys combined with the alt key are assigned in two different ways depending on the behavior of your terminal. Older terminals (e.g. xterm) may set the 8th bit of a character when the alt key is pressed. On these terminals, you can use the corresponding byte for the mapping: Newer terminals (e.g. gnome-terminal) may prefix the key with an escape key when the alt key is pressed. lf uses the escape delaying mechanism to recognize alt keys in these terminals (delay is 100ms). On these terminals, keys combined with the alt key are prefixed with 'a' character: Please note that, some key combinations are not possible due to the way terminals work (e.g. control and h combination sends a backspace key instead). The easiest way to find the name of a key combination is to press the key while lf is running and read the name of the key from the unknown mapping error. 
Mouse buttons are prefixed with 'm' character: Mouse wheel events are also prefixed with 'm' character: The usual way to map a key sequence is to assign it to a named or unnamed command. While this provides a clean way to remap builtin keys as well as other commands, it can be limiting at times. For this reason 'push' command is provided by lf. This command is used to simulate key pushes given as its arguments. You can 'map' a key to a 'push' command with an argument to create various keybindings. This is mainly useful for two purposes. First, it can be used to map a command with a command count: Second, it can be used to avoid typing the name when a command takes arguments: One thing to be careful is that since 'push' command works with keys instead of commands it is possible to accidentally create recursive bindings: These types of bindings create a deadlock when executed. Regular shell commands are the most basic command type that is useful for many purposes. For example, we can write a shell command to move selected file(s) to trash. A first attempt to write such a command may look like this: We check '$fs' to see if there are any selected files. Otherwise we just delete the current file. Since this is such a common pattern, a separate '$fx' variable is provided. We can use this variable to get rid of the conditional: The trash directory is checked each time the command is executed. We can move it outside of the command so it would only run once at startup: Since these are one liners, we can drop '{{' and '}}': Finally note that we set 'IFS' variable manually in these commands. Instead we could use the 'ifs' option to set it for all shell commands (i.e. 'set ifs "\n"'). This can be especially useful for interactive use (e.g. '$rm $f' or '$rm $fs' would simply work). This option is not set by default as it can behave unexpectedly for new users. However, use of this option is highly recommended and it is assumed in the rest of the documentation. Regular shell commands have some limitations in some cases. When an output or error message is given and the command exits afterwards, the ui is immediately resumed and there is no way to see the message without dropping to shell again. Also, even when there is no output or error, the ui still needs to be paused while the command is running. This can cause flickering on the screen for short commands and similar distractions for longer commands. Instead of pausing the ui, piping shell commands connects stdin, stdout, and stderr of the command to the statline in the bottom of the ui. This can be useful for programs following the unix philosophy to give no output in the success case, and brief error messages or prompts in other cases. For example, following rename command prompts for overwrite in the statline if there is an existing file with the given name: You can also output error messages in the command and it will show up in the statline. For example, an alternative rename command may look like this: Note that input is line buffered and output and error are byte buffered. Waiting shell commands are similar to regular shell commands except that they wait for a key press when the command is finished. These can be useful to see the output of a program before the ui is resumed. Waiting shell commands are more appropriate than piping shell commands when the command is verbose and the output is best displayed as multiline. 
Asynchronous shell commands are used to start a command in the background and then resume operation without waiting for the command to finish. Stdin, stdout, and stderr of the command is neither connected to the terminal nor to the ui. One of the more advanced features in lf is remote commands. All clients connect to a server on startup. It is possible to send commands to all or any of the connected clients over the common server. This is used internally to notify file selection changes to other clients. To use this feature, you need to use a client which supports communicating with a UNIX-domain socket. OpenBSD implementation of netcat (nc) is one such example. You can use it to send a command to the socket file: Since such a client may not be available everywhere, lf comes bundled with a command line flag to be used as such. When using lf, you do not need to specify the address of the socket file. This is the recommended way of using remote commands since it is shorter and immune to socket file address changes: In this command 'send' is used to send the rest of the string as a command to all connected clients. You can optionally give it an id number to send a command to a single client: All clients have a unique id number but you may not be aware of the id number when you are writing a command. For this purpose, an '$id' variable is exported to the environment for shell commands. You can use it to send a remote command from a client to the server which in return sends a command back to itself. So now you can display a message in the current client by calling the following in a shell command: Since lf does not have control flow syntax, remote commands are used for such needs. For example, you can configure the number of columns in the ui with respect to the terminal width as follows: Besides 'send' command, there are also two commands to get or set the current file selection. Two possible modes 'copy' and 'move' specify whether selected files are to be copied or moved. File names are separated by newline character. Setting the file selection is done with 'save' command: Getting the file selection is similarly done with 'load' command: There is a 'quit' command to close client connections and quit the server: Lastly, there is a 'conn' command to connect the server as a client. This should not be needed for users. lf uses its own builtin copy and move operations by default. These are implemented as asynchronous operations and progress is shown in the bottom ruler. These commands do not overwrite existing files or directories with the same name. Instead, a suffix that is compatible with '--backup=numbered' option in GNU cp is added to the new files or directories. Only file modes are preserved and all other attributes are ignored including ownership, timestamps, context, and xattr. Special files such as character and block devices, named pipes, and sockets are skipped and links are not followed. Moving is performed using the rename operation of the underlying OS. For cross-device moving, lf falls back to copying and then deletes the original files if there are no errors. Operation errors are shown in the message line as well as the log file and they do not preemptively finish the corresponding file operation. File operations can be performed on the current selected file or alternatively on multiple files by selecting them first. When you 'copy' a file, lf doesn't actually copy the file on the disk, but only records its name to memory. The actual file copying takes place when you 'paste'. 
Similarly, 'paste' after a 'cut' operation moves the file. You can customize copy and move operations by defining a 'paste' command. This is a special command that is called when it is defined instead of the builtin implementation. You can use the following example as a starting point: Some useful things to be considered are to use the backup ('--backup') and/or preserve attributes ('-a') options with 'cp' and 'mv' commands if they support it (i.e. GNU implementation), change the command type to asynchronous, or use 'rsync' command with progress bar option for copying and feed the progress to the client periodically with remote 'echo' calls. By default, lf does not assign 'delete' command to a key to protect new users. You can customize file deletion by defining a 'delete' command. You can also assign a key to this command if you like. Example commands to move selected files to a trash folder and to remove files completely after a prompt are provided in the example configuration file. There are two mechanisms implemented in lf to search a file in the current directory. Searching is the traditional method to move the selection to a file matching a given pattern. Finding is an alternative way to search for a pattern possibly using fewer keystrokes. The searching mechanism is implemented with commands 'search' (default '/'), 'search-back' (default '?'), 'search-next' (default 'n'), and 'search-prev' (default 'N'). You can enable the 'globsearch' option to match with a glob pattern. Globbing supports '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. You can enable the 'incsearch' option to jump to the current match at each keystroke while typing. In this mode, you can either use 'cmd-enter' to accept the search or use 'cmd-escape' to cancel the search. Alternatively, you can also map some other commands with 'cmap' to accept the search and execute the command immediately afterwards. Possible candidates are 'up', 'down' and their variants, 'top', 'bottom', 'updir', and 'open' commands. For example, you can use arrow keys to finish the search with the following mappings: The finding mechanism is implemented with commands 'find' (default 'f'), 'find-back' (default 'F'), 'find-next' (default ';'), 'find-prev' (default ','). You can disable the 'anchorfind' option to match a pattern at an arbitrary position in the filename instead of the beginning. You can set the number of keys to match using the 'findlen' option. If you set this value to zero, then the keys are read until there is only a single match. Default values of these two options are set to jump to the first file with the given initial. Some options affect both searching and finding. You can disable the 'wrapscan' option to prevent searches from wrapping around at the end of the file list. You can disable the 'ignorecase' option to match cases in the pattern and the filename. This option is automatically overridden if the pattern contains uppercase characters. You can disable the 'smartcase' option to disable this behavior. Two similar options, 'ignoredia' and 'smartdia', are provided to control matching diacritics in latin letters. You can define an 'open' command (default 'l' and '<right>') to configure file opening. This command is only called when the current file is not a directory, otherwise the directory is entered instead.
You can define it just as you would define any other command: It is possible to use different command types: You may want to use either file extensions or mime types from the 'file' command: You may want to use 'setsid' before your opener command to have persistent processes that continue to run after lf quits. The following command is provided by default: You may also use any other existing file openers as you like. Possible options are 'libfile-mimeinfo-perl' (executable name is 'mimeopen'), 'rifle' (ranger's default file opener), or 'mimeo' to name a few. lf previews files on the preview pane by printing the file until the end or until the preview pane is filled. This output can be enhanced by providing a custom preview script for filtering. This can be used to highlight source code, list the contents of archive files, or view pdf or image files as text, to name a few. For coloring, lf recognizes ansi escape codes. In order to use this feature you need to set the value of the 'previewer' option to the path of an executable file. lf passes the current file name as the first argument and the height of the preview pane as the second argument when running this file. Output of the execution is printed in the preview pane. You may want to use the same script in your pager mapping as well, if any: For the 'less' pager, you may instead utilize the 'LESSOPEN' mechanism so that useful information about the file, such as its full path, can be displayed in the statusline below: Since this script is called for each file selection change, it needs to be as efficient as possible and this responsibility is left to the user. You may use file extensions to determine the type of file more efficiently compared to obtaining mime types from the 'file' command. Extensions can then be used to match cleanly within a conditional: Another important consideration for efficiency is the use of programs with short startup times for preview. For this reason, 'highlight' is recommended over 'pygmentize' for syntax highlighting. It is also important that the application processes the file on the fly rather than first reading it into memory and then doing the processing afterwards. This is especially relevant for big files. lf automatically closes the previewer script output pipe with a SIGPIPE when enough lines are read. When everything else fails, you can make use of the height argument to only feed the first portion of the file to a program for preview. Note that some programs may not respond well to SIGPIPE and may exit with a non-zero return code, which disables caching. You may add a trailing '|| true' command to avoid such errors: You may also use an existing preview filter as you like. Your system may already come with a preview filter named 'lesspipe'. These filters may have a mechanism to add user customizations as well. See the related documentation for more information. lf changes the working directory of the process to the current directory so that shell commands always work in the displayed directory. After quitting, it returns to the original directory where it was first launched, like all shell programs. If you want to stay in the current directory after quitting, you can use one of the example wrapper shell scripts provided in the repository. There is a special command 'on-cd' that runs a shell command when it is defined and the directory is changed. You can define it just as you would define any other command: If you want to print escape sequences, you may redirect 'printf' output to '/dev/tty'.
The following xterm-specific escape sequence sets the terminal title to the working directory: This command runs whenever you change directory but not on startup. You can add an extra call to make it run on startup as well: Note that all shell commands are possible, but `%` and `&` are usually more appropriate, as `$` and `!` cause flickers and pauses respectively. lf tries to automatically adapt its colors to the environment. It starts with a default colorscheme and updates colors using the values of existing environment variables, possibly overwriting its previous values. Colors are set in the following order: Please refer to the corresponding man pages for more information about 'LSCOLORS' and 'LS_COLORS'. 'LF_COLORS' is provided with the same syntax as 'LS_COLORS' in case you want to configure colors only for lf but not ls. This can be useful since there are some differences between ls and lf, though one should expect the same behavior for common cases. You can configure lf colors in two different ways. First, you can only configure the 8 basic colors used by your terminal and lf should pick up those colors automatically. Depending on your terminal, you should be able to select your colors from a 24-bit palette. This is the recommended approach as colors used by other programs will also match each other. Second, you can set the values of the environment variables mentioned above for fine-grained customization. Note that 'LS_COLORS/LF_COLORS' are more powerful than 'LSCOLORS' and they can be used even when GNU programs are not installed on the system. You can combine this second method with the first method for best results. Lastly, you may also want to configure the colors of the prompt line to match the rest of the colors. Colors of the prompt line can be configured using the 'promptfmt' option, which can include hardcoded colors as ansi escapes. See the default value of this option to have an idea about how to color this line. It is worth noting that lf uses as many colors as are advertised by your terminal's entry in your system's terminfo or infocmp database; if this is not present, lf falls back to an internal database. For terminals supporting 24-bit (or "true") color that do not have a database entry (or one that does not advertise all capabilities), support can be enabled by either setting the '$COLORTERM' variable to "truecolor" or ensuring '$TERM' is set to a value that ends with "-truecolor". Default lf colors are mostly taken from GNU dircolors defaults. These defaults use 8 basic colors and the bold attribute. Default dircolors entries with background colors are simplified to avoid confusion with the current file selection in lf. Similarly, there are only file type matchings; extension matchings are left out for simplicity. Default values are as follows, given with their matching order in lf: Note that lf first tries matching file names and then falls back to file types. The full order of matchings from most specific to least is as follows: For example, given a regular text file '/path/to/README.txt', the following entries are checked in the configuration and the first one to match is used: Given a regular directory '/path/to/example.d', the following entries are checked in the configuration and the first one to match is used: Note that glob-like patterns do not actually perform glob matching for performance reasons. For example, you can set a variable as follows: Having all entries on a single line can make it hard to read.
You may instead divide it into multiple lines in between double quotes by escaping newlines with backslashes as follows: Having such a long variable definition in a shell configuration file might be undesirable. You may instead put this definition in a separate file and source it in your shell configuration file as follows: See the wiki page for ansi escape codes https://en.wikipedia.org/wiki/ANSI_escape_code.

Icons are configured using the 'LF_ICONS' environment variable. This variable uses the same syntax as 'LS_COLORS/LF_COLORS'. Instead of colors, you should put a single character as the value of each entry. Do not forget to enable the 'icons' option to see the icons. Default values are as follows, given with their matching order in lf: See the wiki page for an example icons configuration https://github.com/gokcehan/lf/wiki/Icons.
Package encrypt will encrypt and decrypt data securely using the same password for both operations. It was developed specifically for safe file encryption. WARNING: These functions are not suitable for client-server communication protocols. See details below.

The author used VeraCrypt and TrueCrypt as inspirations for the implementation. Unlike these two products, we don't need to encrypt whole dynamic filesystems or hidden volumes, so many steps are greatly simplified. Support was also added for more advanced password hashing, such as adding Argon2 password hashing on top of the PBKDF2 used by VeraCrypt. Just like VeraCrypt and BitLocker (Microsoft), we rely on AES-256 in XTS mode symmetric-key encryption. It's a modern block cipher developed for disk encryption that is a bit less malleable than the more traditional CBC mode.

While AES provides fast content encryption, it's not a complete solution. AES keys are fixed-length (256 bits) and, unlike user passwords, they must have excellent entropy. To create fixed-length keys with excellent entropy, we rely on password hash functions. These are built to spread the entropy to the full length of the key, and they give ample protection against password brute force attacks. Rainbow table attacks (precalculated hashes) are mitigated with a 512-bit random password salt. The salt can be public, as long as the password stays private. For password hashing, we join a battle-tested algorithm, PBKDF2, with a next gen password hash: Argon2id. Argon2 helps protect against GPU-based attacks, but is a very recent algorithm. If flaws are ever discovered in it, we have a fallback algorithm. Settings for both password hash functions are secure and stronger than the usually recommended settings as of 2018. This does mean that our password hashing function is very expensive (benchmarked around 1s on my desktop computer), but this is not usually an issue for tasks such as file encryption or decryption, and the added protection is significant.

AES with XTS mode doesn't prevent an attacker from maliciously modifying the encrypted content. To ensure that we catch these cases, we calculate a SHA-512 digest on the plain content and we encrypt it too. Once we decrypt that content, if the header matches, it's likely (although not 100% certain) that the password is correct. If the header matches, but the SHA-512 digest doesn't match, it's likely that the data has been tampered with and we reject it. Finally, decrypting with the AES cipher will always seem to work, whether the password is correct or not. The only difference is that the output will be valid content or garbage. To make the distinction between a bad password and tampered data in a user-friendly way, we include a small header in the plain content ('GOODPW').

(1) These encryption utilities are not suitable as a secure client-server communication protocol, which must deal with additional security constraints. For example, depending on how a server would use it, it could be vulnerable to padding oracle attacks. (2) We store and cache passwords and AES keys in memory, which can then also be swapped to disk by the OS. Encrypter and Decrypter will erase the password and AES keys when they are closed explicitly, but this is only a weak defense in depth, so there is an assumption that the attacker doesn't have memory read access.

Data Format

We store the salt along with the data. This is because these utilities are geared toward file encryption and it's impractical to store it separately.
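As an illustration of the password-hashing layering described above, the sketch below derives key material by feeding a PBKDF2-derived intermediate key into Argon2id using golang.org/x/crypto. The iteration count, memory settings, and layering order here are illustrative assumptions, not this package's actual parameters or API.

	package main

	import (
		"crypto/rand"
		"crypto/sha512"
		"fmt"

		"golang.org/x/crypto/argon2"
		"golang.org/x/crypto/pbkdf2"
	)

	// deriveKey sketches the two-stage password hash described above:
	// PBKDF2 as the battle-tested base, with Argon2id layered on top.
	// The parameters below are illustrative, not the package's defaults.
	func deriveKey(password, salt []byte) []byte {
		// Stage 1: PBKDF2 with SHA-512.
		intermediate := pbkdf2.Key(password, salt, 500000, 64, sha512.New)
		// Stage 2: Argon2id for memory-hardness against GPU attacks.
		// 64 bytes is enough material for XTS-AES-256, which uses two 256-bit keys.
		return argon2.IDKey(intermediate, salt, 4, 256*1024, 4, 64)
	}

	func main() {
		salt := make([]byte, 64) // 512-bit random salt, as described above
		if _, err := rand.Read(salt); err != nil {
			panic(err)
		}
		key := deriveKey([]byte("correct horse battery staple"), salt)
		fmt.Printf("derived %d bytes of key material\n", len(key))
	}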
AES: https://en.wikipedia.org/wiki/Advanced_Encryption_Standard
PBKDF2: https://en.wikipedia.org/wiki/PBKDF2
Argon2: https://en.wikipedia.org/wiki/Argon2
VeraCrypt: https://veracrypt.fr
TrueCrypt implementations: http://blog.bjrn.se/2008/01/truecrypt-explained.html
Oracle attack: https://en.wikipedia.org/wiki/Oracle_attack
NIST Digital Security Guidelines: https://pages.nist.gov/800-63-3/sp800-63b.html
The socketio package is a simple abstraction layer for different web browser-supported transport mechanisms. It is fully compatible with the Socket.IO client side JavaScript socket API library by LearnBoost Labs (http://socket.io/), but through custom codecs it might fit other client implementations too. It (together with LearnBoost's client-side libraries) provides an easy way for developers to access the most popular browser transport mechanisms today: multipart- and long-polling XMLHttpRequests, HTML5 WebSockets and forever-frames.

The socketio package works hand-in-hand with the standard http package by plugging itself into a configurable ServeMux. It has a callback-style API for handling connection events. The callbacks are:

- SocketIO.OnConnect
- SocketIO.OnDisconnect
- SocketIO.OnMessage

Other utility-methods include:

- SocketIO.ServeMux
- SocketIO.Broadcast
- SocketIO.BroadcastExcept
- SocketIO.GetConn
- Conn.Send

Each new connection will be automatically assigned a unique session id, and using those the clients can reconnect without losing messages: the server persists clients' pending messages (until some configurable point) if they can't be immediately delivered. All writes through Conn.Send are by design asynchronous. Finally, the actual format on the wire is described by a separate Codec. The default codecs (SIOCodec and SIOStreamingCodec) are compatible with LearnBoost's Socket.IO client. For example, here is a simple chat server:
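The original chat-server example is not reproduced here; a rough sketch of its shape follows. The constructor name (NewSocketIO), the configuration argument, the callback signatures, the Message accessor, and the import path are assumptions inferred from the callbacks listed above, not verified API.

	package main

	import (
		"log"
		"net/http"

		"socketio" // import path is illustrative
	)

	func main() {
		// NewSocketIO is an assumed constructor name; nil stands for a default config.
		sio := socketio.NewSocketIO(nil)

		sio.OnConnect(func(c *socketio.Conn) {
			sio.Broadcast(struct{ Announcement string }{c.String() + " connected"})
		})
		sio.OnDisconnect(func(c *socketio.Conn) {
			sio.BroadcastExcept(c, struct{ Announcement string }{c.String() + " disconnected"})
		})
		sio.OnMessage(func(c *socketio.Conn, msg socketio.Message) {
			// Data() is an assumed accessor for the message payload.
			sio.BroadcastExcept(c, struct{ Message []string }{[]string{c.String(), msg.Data()}})
		})

		// Plug the handlers into a standard ServeMux, as described above.
		mux := sio.ServeMux()
		mux.Handle("/", http.FileServer(http.Dir("www/")))
		log.Fatal(http.ListenAndServe(":8080", mux))
	}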
NanoMux is a package of HTTP request routers for the Go language. The package has three types that can be used as routers. The first one is the Resource, which represents the path segment resource. The second one is the Host. It represents the host segment of the URL but also takes on the role of the root resource when HTTP method handlers are set. The third one is the Router, which supports registering multiple hosts and resources. It passes the request to the matching host and, when there is no matching host, to the root resource.

In NanoMux terms, hosts and resources are called responders. Responders are organized into a tree. The request's URL segments are matched against the host and corresponding resources' templates in the tree. The request passes through each matching responder in its URL until it reaches the last segment's responder. To pass the request to the next segment's responder, the request passers of the Router, Host, and Resource are called. When the request reaches the last responder, that responder's request handler is called. The request handler is responsible for calling the responder's HTTP method handler. The request passer, request handler, and HTTP method handlers can all be wrapped with middleware. The NanoMux types provide many methods, but most of them are for convenience. Sections below discuss the main features of the package.

Based on the segments they comprise, there are three types of templates: static, pattern, and wildcard. Static templates have no regex or wildcard segments. Pattern templates have one or more regex segments and/or one wildcard segment, and static segments. A regex segment must be in curly braces and consists of a value name and a regex pattern separated by a colon: "{valueName:regexPattern}". The wildcard segment only has a value name: "{valueName}". There can be only one wildcard segment in a template. Wildcard templates have only one wildcard segment and no static or regex segments.

The host segment templates must always follow the scheme with the colon ":" and the two slashes "//" of the authority component. The path segment templates may be preceded by a slash "/" or by a scheme, a colon ":", and three slashes "///" (two authority component slashes and the third separator slash), like in "https:///blog". The preceding slash is just a separator. It doesn't denote the root resource, except when it is used alone. The template "/" or "https:///" denotes the root resource. Both the host and path segment templates can have a trailing slash.

When its template starts with "https", the host or resource will not handle a request made over HTTP and will respond with a "404 Not Found" status code, unless configured to redirect. When its template has a trailing slash, the resource will redirect a request made to a URL without a trailing slash to the one with a trailing slash and vice versa, unless configured to be lenient or strict. The trailing slash has no effect on the host. But if the host is a subtree handler and should respond to the request, its configurations related to the trailing slash will be used on the last path segment.

As a side note, every parent resource should have a trailing slash in its template. For example, in a resource tree "/parent/child/grandchild", the two non-leaf resources should be referenced with templates "parent/" and "child/". NanoMux doesn't force this, but it's good practice to follow. It helps the clients avoid forming a broken URL when adding a relative URL to the base URL.
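To make the template forms above concrete, the snippet below lists one example of each kind as plain strings; the value names and patterns are made up for illustration and are not part of the package.

	package example

	// Illustrative template strings only; the value names and patterns are made up.
	const (
		hostTemplate     = "https://example.com"    // host templates follow the "scheme://" prefix
		staticTemplate   = "static-resource"        // no regex or wildcard segments
		patternTemplate  = "{color:red|green|blue}" // regex segment: "{valueName:regexPattern}"
		wildcardTemplate = "{model}"                // wildcard segment: "{valueName}"
		rootTemplate     = "/"                      // a lone slash (or "https:///") denotes the root resource
		parentTemplate   = "parent/"                // non-leaf resources should carry a trailing slash
	)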
By default, if the resource has a trailing slash, NanoMux redirects the requests made to the URL without a trailing slash to the URL with a trailing slash. So the clients will have the correct URL.

Templates can have a name. The name segment comes at the beginning of the host or path segment template, but after the slashes. The name segment begins with a "$" sign and is separated from the template's contents by a colon ":". The template's name comes between the "$" sign and the colon ":". The name given in the template can be used to retrieve the host or resource from its parent (the router is considered the host's parent). For convenience, a regex segment's or wildcard segment's value name is used as the name of the template when there is no template name and no other regex segments. For example, the templates "{color:red|green|blue}", `day {day:(?:3[01]|[12]\d|0?[1-9])}`, and "{model}" have the names color, day, and model, respectively.

If the template's static part needs a "$" sign at the beginning or curly braces anywhere, they can be escaped with a backslash `\`, like in the template `\$tatic\{template\}`. If, for some reason, the template name or value name needs a colon ":", it can also be escaped with a backslash: `$smileys\:):{smiley\:)}`. When retrieving the host, resource, or value of the regex or wildcard segment, names are used unescaped, without a backslash `\`.

Constructors of the Host and Resource types and some methods take URL templates or path templates. In the URL and path templates, the scheme and trailing slash belong to the last segment. In the URL template, after the host segment, if the path component contains only a slash "/", it's considered a trailing slash of the host template. The host template's trailing slash is used only for the last path segment when the host is a subtree handler and should respond to the request. It has no effect on the host itself.

In templates, disallowed characters must be used without percent-encoding, except for a slash "/". Because the slash "/" is a separator, when it is needed in a template it must be replaced with %2f or %2F.

Hosts and resources may have child resources with all three types of templates. In that case, the request's path segments are first matched against child resources with a static template. If no resource with a static template matches, then child resources with a pattern template are matched in the order of their registration. Lastly, if there is no child resource with a pattern template that matches the request's path segment, a child resource with a wildcard template accepts the request. Hosts and resources can have only one direct child resource with a wildcard template.

The Host and Resource types implement the http.Handler interface. They can be constructed, have HTTP method handlers set with the SetHandlerFor method, and can be registered with the RegisterResource or RegisterResourceUnder methods to form a resource tree. There is one restriction: the root resource cannot be registered under a host. It can be used without a host or be registered with the router. When the Host type is used, the root resource is implied.

Handlers must return true if they respond to the request. Sometimes middleware and the responder itself need to know whether the request was handled or not.
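A rough sketch of wiring a small resource tree with SetHandlerFor, based on the method names mentioned above. The constructor names (NewHost, NewResource), the import path, and the exact nanomux.Handler signature are assumptions for illustration.

	package main

	import (
		"net/http"

		nanomux "example.com/nanomux" // illustrative import path
	)

	func main() {
		// NewHost and NewResource are assumed constructor names.
		host := nanomux.NewHost("https://example.com")
		child := nanomux.NewResource("parent/")

		// Assumed handler signature: it receives the *Args argument and reports
		// whether it responded to the request, as described above.
		child.SetHandlerFor("GET", func(w http.ResponseWriter, r *http.Request, args *nanomux.Args) bool {
			w.Write([]byte("hello"))
			return true
		})

		// Register the resource under the host to form the tree.
		host.RegisterResource(child)

		// Host implements http.Handler, so it can be served directly.
		http.ListenAndServe(":8080", host)
	}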
For example, when the middleware responds to the request instead of calling the request passer, the responder may assume that none of the child resources in its subtree responded to the request, so it would respond with a "404 Not Found" status code to a request that has already been answered. To prevent this, the middleware's handler must return true if it responds to the request, or it must return the value returned from the argument handler.

Sometimes resources have to handle HTTP methods they don't support. NanoMux provides a default not allowed HTTP method handler that responds with a "405 Method Not Allowed" status code, listing all the HTTP methods the resource supports in the "Allow" header. But when the host or resource needs a custom implementation, its SetHandlerFor method can be used to replace the default handler. To denote the not allowed HTTP method handler, the exclamation mark "!" must be used instead of an HTTP method. In addition to the not allowed HTTP method handler, if the host or resource has at least one HTTP method handler, NanoMux also provides a default OPTIONS HTTP method handler.

The host and resource allow setting a handler for a child resource in their subtree. If the subtree resource doesn't exist, it will be created. It's possible to retrieve a subtree resource with the method Resource. If the subtree resource doesn't exist, the method Resource creates it. If an existing resource must be retrieved, the method RegisteredResource can be used.

If there is a need for shared data, it can be set with the SetSharedData method. The shared data can be retrieved in the handlers using the ResponderSharedData method of the passed *Args argument. Hosts and resources can have their own shared data. Handlers retrieve the shared data of their responder.

Hosts and resources can be implemented as a type with methods. Each method that has a name beginning with "Handle" and has the signature of a nanomux.Handler is used as an HTTP method handler. The remaining part of the method's name is considered an HTTP method. It is possible to set the implementation later with the SetImplementation method of the Host and Resource. The implementation may also be set for a child resource in the subtree with the method SetImplementationAt.

Hosts and resources configured to be a subtree handler respond to the request when there is no matching resource in their subtree. In the resource tree, if resource-12 is a subtree handler, it handles a request to the path "/resource-1/resource-12/non-existent-resource". Subtree handlers can get the remaining part of the path with the RemainingPath method of the *Args argument. The remaining path starts with a slash "/" if the subtree handler has no trailing slash in its template, otherwise it starts without a trailing slash. A subtree handler can also be used as a file server.

Let's say we have the following resource tree: When a request is made to the URL "http://example.com/resource-1/resource-12", the host's request passer is called. It passes the request to resource-1. Then resource-1's request passer is called and it passes the request to resource-12. resource-12 is the last resource in the URL, so its request handler is called. The request handler is responsible for calling the resource's HTTP method handler. If there is no handler for the request's method, it calls the not allowed HTTP method handler. All of these handlers and the request passer can be wrapped in middleware.
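The snippet referenced in the next paragraph is not reproduced in this text; a rough reconstruction of what it might look like follows. The middleware signature, the WrapRequestPasserAt signature, and the credentialsAreValid helper are assumptions.

	package example

	import (
		"net/http"

		nanomux "example.com/nanomux" // illustrative import path
	)

	// credentialsAreValid is a hypothetical helper standing in for a real check.
	func credentialsAreValid(r *http.Request) bool {
		return r.Header.Get("Authorization") != ""
	}

	// CheckCredentials wraps a request passer. The nanomux.Handler signature
	// used here is an assumption.
	func CheckCredentials(next nanomux.Handler) nanomux.Handler {
		return func(w http.ResponseWriter, r *http.Request, args *nanomux.Args) bool {
			if !credentialsAreValid(r) {
				// Don't call the next passer: no further segments are matched,
				// the request is dropped, and the client gets "404 Not Found".
				return false
			}
			return next(w, r, args)
		}
	}

	func setup(root *nanomux.Resource) {
		// Wrap the request passer of the "admin" resource in the subtree.
		root.WrapRequestPasserAt("/admin/", CheckCredentials)
	}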
In the above snippet, the WrapRequestPasserAt method wraps the request passer of the "admin" resource with the CheckCredentials middleware. CheckCredentials calls the "admin" resource's request passer if the credentials are valid; if not, no further segments will be matched, the request will be dropped, and the client will receive a "404 Not Found" status code. In the above case, "admin" is a dormant resource. But as its name states, the request passer's purpose is to pass the request to the next resource. It's called even when the resource is dormant. Unlike the request passer, the request handler is called only when the host or resource is the one that must respond to the request. The request handler can be wrapped when the middleware must be called before any HTTP method handler.

The WrapRequestPasser, WrapRequestHandler, and WrapHandlerOf methods of the Host and Resource types wrap their request passer, request handler, and HTTP method handlers, respectively. The WrapRequestPasserAt, WrapRequestHandlerAt, and WrapPathHandlerOf methods wrap the request passer and the respective handlers of the child resource at the path. The WrapSubtreeRequestPassers, WrapSubtreeRequestHandlers, and WrapSubtreeHandlersOf methods wrap the request passers and the respective handlers of all the resources in the host's or resource's subtree. When calling the WrapHandlerOf, WrapPathHandlerOf, and WrapSubtreeHandlersOf methods, "*" may be used to denote all HTTP methods for which handlers exist. When "*" is used instead of an HTTP method, all the existing HTTP method handlers of the responder are wrapped.

The Router type is more suitable when multiple hosts with different root domains or subdomains are needed.

http.Handler and http.HandlerFunc

It is possible to use an http.Handler and an http.HandlerFunc with NanoMux. For that, NanoMux provides four converters: Hr, HrWithArgs, FnHr, and FnHrWithArgs. The Hr and HrWithArgs converters convert an http.Handler, while the FnHr and FnHrWithArgs converters convert a function with the signature of an http.HandlerFunc to a nanomux.Handler. Most of the time, when there is a need to use an http.Handler or http.HandlerFunc, it's to reuse handlers written outside the context of NanoMux, and those handlers don't use the *Args argument. The Hr and FnHr converters return a handler that ignores the *Args argument instead of inserting it into the request's context, which is a slower operation. These converters must be used when the http.Handler and http.HandlerFunc handlers don't need the *Args argument. If they are written to use the *Args argument, then it's better to change their signatures as well.

One situation where http.Handler and http.HandlerFunc might be considered when writing handlers is to use middleware with the signature func(http.Handler) http.Handler. But for middleware with that signature, NanoMux provides the Mw converter. The Mw converter converts middleware with the signature func(http.Handler) http.Handler to middleware with the signature func(nanomux.Handler) nanomux.Handler, so it can be used to wrap the NanoMux handlers.
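A sketch of using the converters named above to adapt an existing http.HandlerFunc and a func(http.Handler) http.Handler middleware. The constructor name, the import path, and the exact SetHandlerFor and WrapHandlerOf signatures are assumptions; only the converter names come from this documentation.

	package main

	import (
		"net/http"

		nanomux "example.com/nanomux" // illustrative import path
	)

	// A plain net/http handler written outside of NanoMux.
	func legacyHandler(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("legacy"))
	}

	// Middleware with the classic func(http.Handler) http.Handler signature.
	func logging(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// ... log the request ...
			next.ServeHTTP(w, r)
		})
	}

	func main() {
		r := nanomux.NewResource("legacy") // assumed constructor name

		// FnHr converts an http.HandlerFunc-shaped function to a nanomux.Handler,
		// ignoring the *Args argument as described above.
		r.SetHandlerFor("GET", nanomux.FnHr(legacyHandler))

		// Mw converts func(http.Handler) http.Handler middleware so it can wrap
		// NanoMux handlers; "*" wraps all existing HTTP method handlers.
		r.WrapHandlerOf("*", nanomux.Mw(logging))

		http.ListenAndServe(":8080", r)
	}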
Package nokiahealth is a client module for working with the Withings (previously Nokia Health (previously Withings)) API. The current version (v2) of this module has been updated to work with the newer Oauth2 implementation of the API. As with all Oauth2 APIs, you must obtain authorization to access the users data. To do so you must first register your application with the Withings api to obtain a ClientID and ClientSecret. Once you have your client information you can now create a new client. You will need to also provide the redirectURL you provided during application registration. This URL is where the user will be redirected to and will include the code you need to generate the accessToken needed to access the users data. Under normal situations you should have an http server listening on that address and pull the code from the URL query parameters. You can now use the client to generate the authorization URL the user needs to navigate to. This URL will take the user to a page that will prompt them to allow your application to access their data. The state string returned by the method is a randomly generated BASE64 string using the crypto/rand module. It will be returned in the redirect as a query parameter and the two values verified they match. It's not required, but useful for security reasons. Using the code returned by the redirect in the query parameters, you can generate a new user. This user struct can be used to immediately perform actions against the users data. It also contains the token you should save somewhere for reuse. Obviously use whatever context you would like here. Make sure the save at least the refreshToken for accessing the user data at a later date. You may also save the accessToken, but it does expire and creating a new client from saved token data only requires the refreshToken. You can easily create a user from a saved token using the NewUserFromRefreshToken method. A working configured client is required for the user generated from this method to work. The user struct has various methods associated with each API endpoint to perform data retrieval. The methods take a specific param struct specifying the api options to use on the request. The API is a bit "special" so the params vary a bit between each method. The client does what it can to smooth those out but there is only so much that can be done. Every method has two forms, one that accepts a context and one that does now. This allows you to provide a context on each request if you would like to. By default all methods utilize a context to timeout the request to the API. The value of the timeout is stored on the Client and can be access as/set on Client.Timeout. Setting is _not_ thread safe and should only be set on client creation. If you need to change the timeout for different requests use the methodCtx variant of the method. By default the state generated by the AuthCodeURL utilized crypto/rand. If you would like to implement your own random method you can do so by assigning the function to Rand field of the Client struct. The function should support the Rand type. Also this is _not_ thread safe so only perform this action on client creation. By default every returned response will be parsed and the parsed data returned. If you need access to the raw request data you can enable it by setting the SaveRawResponse field of the client struct to true. This should be done at client creation time. With it set to true the RawResponse field of the returned structs will include the raw response. 
Some data request methods include a parseResponse field on the params struct. If this is included additional parsing is performed to make the data more usable. This can be seen on GetBodyMeasures for example. You can include the path fields sent to the API by setting IncludePath to true on the client. This is primarily used for debugging but could be helpful in some situations. By default the client will request all known scopes. If you would like to pair down this you can change the scope by using the SetScope method of the client. Consts in the form of ScopeXxx are provided to aid selection.
Package smpp implements SMPP protocol v3.4. It allows easier creation of SMPP clients and servers by providing utilities for PDU and session handling. In order to do any kind of interaction you first need to create an SMPP Session (https://godoc.org/github.com/sam-ish/smpp#Session). The Session is the main carrier of the protocol and the enforcer of the specification rules. A naked session can be created with: But it's much more convenient to use the helpers that do the binding with the remote SMSC and return a session prepared for sending: Once you have the session, it can be used for sending PDUs to the bound peer. A session that is no longer used must be closed: If you want to handle incoming requests to the session, specify an SMPPHandler in the session configuration when creating a new session, similarly to an HTTP handler from the _net/http_ package: Detailed examples for SMPP client and server can be found in the examples dir.
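A very rough sketch of the flow described above: bind, send a PDU, close. The helper name (BindTRx), the configuration fields, the PDU type (which may live in a pdu subpackage), and the Send signature are all assumptions; consult the package's godoc and the examples dir for the real API. Only the module path is taken from the godoc link above.

	package main

	import (
		"context"
		"log"

		smpp "github.com/sam-ish/smpp"
	)

	func main() {
		// Assumed helper: binds to the remote SMSC as a transceiver and returns
		// a session prepared for sending.
		sess, err := smpp.BindTRx(smpp.SessionConf{}, smpp.BindConf{
			Addr:     "localhost:2775",
			SystemID: "client",
			Password: "secret",
		})
		if err != nil {
			log.Fatal(err)
		}
		// A session that is no longer used must be closed.
		defer sess.Close()

		// Assumed PDU type and Send signature.
		resp, err := sess.Send(context.Background(), &smpp.SubmitSm{
			SourceAddr:      "111",
			DestinationAddr: "222",
			ShortMessage:    "hello",
		})
		if err != nil {
			log.Fatal(err)
		}
		log.Println("response:", resp)
	}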
This package is a single binary which combines all of the binaries shipped with the daemon system into one simple package, and you can run the servers using subcommand arguments. You want to install it as root: you will run spawn as root, so for security it is best to make the binary owned by root; otherwise the user that owns the binary could edit it and run any code as root. The easiest way to do this and allow for updating daemon is to just run go install as root. If you run spawn with no arguments it will create the example config and run portal plus the dashboard. If you want to just print the example config run: For more info read the README! Expand it above on the go docs site.

For servers written in go, you can use the portal client library ask.systems/daemon/portal/gate to register with portal, automatically select a port to listen on that won't conflict, and even automatically use a newly generated TLS certificate to encrypt local traffic (this time it's easy!). To do this you will call ask.systems/daemon/portal/gate.StartTLSRegistration, set up any application handlers with net/http.Handle, then call ask.systems/daemon/tools.RunHTTPServerTLS. The easiest way to configure access to portal registration RPCs is via the environment variables PORTAL_ADDR and PORTAL_TOKEN. You can find the portal token printed in the portal logs on startup. If you set the portal flags on spawn it will propagate them to child processes with these env vars, and you can set them up in your shell dotfiles. You can also import _ ask.systems/daemon/portal/flags if you'd like to configure the portal address and token with flags instead of the environment variables.

Make sure to take a look at the other utility functions in ask.systems/daemon/tools too! There's a second flags package which provides the version stamp flag and the syslog support via the log package: ask.systems/daemon/tools/flags. Take a look at the package example for the client library ask.systems/daemon/portal/gate for a simple go client of portal with encrypted internal traffic. It uses the standard net/http.Handle system. The source code of ask.systems/daemon/host is a good basic server example too. You can then sudo go install your own binary (or copy your binary to /root/) and add an entry to your config.pbtxt with the binary name and arguments. By default spawn checks the working dir for binaries named in the config, and you can set the spawn -path argument to change it.

The way it works is that each of the individual binaries in daemon has all of its code packed into the <bin>/embed<bin> packages. Each of them has a standard Run function that accepts commandline arguments. Then the packages you actually install, such as ask.systems/daemon/assimilate or ask.systems/daemon, have a simple main function that just calls the Run function from the appropriate embed package. If you would like to, you can use the public interfaces in the embed packages for your applications as well, for example to embed a copy of ask.systems/daemon/host instead of calling the helper functions in ask.systems/daemon/tools (which cover pretty much all of host's functionality). Also, if you rename the ask.systems/daemon binary to one of the subcommands, it will act as if it were just that individual binary. Spawn actually uses this when copying the megabinary to chroots so it will show in your process list and syslog as the correct name.
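A rough sketch of the registration flow described above. The StartTLSRegistration and RunHTTPServerTLS signatures, the RegisterRequest fields, and the lease's Port field are assumptions here; see the package example for ask.systems/daemon/portal/gate for the real API.

	package main

	import (
		"net/http"

		"ask.systems/daemon/portal/gate"
		"ask.systems/daemon/tools"
	)

	func main() {
		quit := make(chan struct{})

		// Register with portal and receive a port assignment plus a TLS config
		// for encrypted local traffic. Argument and return types are assumptions.
		lease, tlsConfig := gate.StartTLSRegistration(&gate.RegisterRequest{
			Pattern: "/myapp/", // hypothetical URL pattern to claim on portal
		}, quit)

		// Standard net/http handlers work as usual.
		http.Handle("/myapp/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello from myapp"))
		}))

		// Serve on the port portal assigned, using the generated certificate.
		tools.RunHTTPServerTLS(lease.Port, tlsConfig, quit)
	}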
Package smpp implements SMPP protocol v3.4. It allows easier creation of SMPP clients and servers by providing utilities for PDU and session handling. In order to do any kind of interaction you first need to create an SMPP Session (https://godoc.org/github.com/dolnikov/smpp#Session). The Session is the main carrier of the protocol and the enforcer of the specification rules. A naked session can be created with: But it's much more convenient to use the helpers that do the binding with the remote SMSC and return a session prepared for sending: Once you have the session, it can be used for sending PDUs to the bound peer. A session that is no longer used must be closed: If you want to handle incoming requests to the session, specify an SMPPHandler in the session configuration when creating a new session, similarly to an HTTP handler from the _net/http_ package: Detailed examples for SMPP client and server can be found in the examples dir.
Package wayland is the root of the wayland sample repository.

Simple shm demo: draws into shared memory, does not rely on the window package. Smoke demo: reacts to mouse input, uses the window package. ImageViewer demo: displays an image file; does not use the window package and draws its own decorations. It draws fonts in the titlebar, for which it needs the Deja Vu font (fonts-dejavu). Editor demo: currently does not work.

Provides basic OS functions like the creation of an anonymous temporary file (CreateAnonymousFile), Mmap, Munmap, and socket communication. Utility functions found in the wayland-client, provided for convenience. Wrapper around the C library libxkbcommon, used inside the window package; needs libxkbcommon-dev for compilation and recommends libx11-data for run time operation. Implements a window model on top of wayland; aims to be a lot like the original window.c code; uses wl. Loads X cursors; requires a cursor theme to be installed, for instance dmz-cursor-theme. Like cairo but not cairo; does not depend on anything. External contains error-checking code and the swizzle function for multiple architectures; no dependencies. Unstable wayland protocols; depends on wl. The wayland library itself; does not require any external deps (except for a wayland server during runtime). Stable xdg protocol; depends on wl.
This is the official Go SDK for Oracle Cloud Infrastructure.

Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions.

The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout. More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example

Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example:

The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service.

The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm

Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63

When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go
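A minimal sketch of the pagination loop described above, also showing the pointer helpers (common.String) used to fill optional fields. The identity list operation is used for illustration and the compartment OCID is a placeholder.

	package main

	import (
		"context"
		"fmt"

		"github.com/oracle/oci-go-sdk/common"
		"github.com/oracle/oci-go-sdk/identity"
	)

	func main() {
		client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
		if err != nil {
			panic(err)
		}

		// The compartment OCID below is a placeholder.
		req := identity.ListGroupsRequest{
			CompartmentId: common.String("ocid1.compartment.oc1..example"),
		}

		for {
			resp, err := client.ListGroups(context.Background(), req)
			if err != nil {
				panic(err)
			}
			for _, group := range resp.Items {
				fmt.Println(*group.Name)
			}
			// OpcNextPage is nil when there is no more data.
			if resp.OpcNextPage == nil {
				break
			}
			req.Page = resp.OpcNextPage
		}
	}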
The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses. Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The below are possible values for the "OCI_GO_SDK_DEBUG" variable:

1. "info" or "i" enables all info logging messages
2. "debug" or "d" enables all debug and info logging messages
3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages
4. "null" turns all logging messages off

If the value of the environment variable does not match any of the above then the default logging level is "info". If the environment variable is not present then no logging messages are emitted. The default destination for logging is Stderr; if you want to output logs to a file you can set it via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The below are possible values:

1. "file" or "f" enables all logging output saved to file
2. "combine" or "c" enables all logging output to both stderr and file

You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there's a network issue, etc. This can be accomplished by using the RequestMetadata.RetryPolicy (request-level configuration); alternatively, global (all services) or client-level RetryPolicy configuration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go

If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface.

The retry behavior precedence (highest to lowest) is defined as below:

The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). The default retry policy is defined by:

Default Retry-able Errors

Below is the list of default retry-able errors for which retry attempts should be made. The following errors should be retried (with backoff).

HTTP Code Customer-facing Error Code

Apart from the above errors, retries should also be attempted in the following client side errors:

1. HTTP Connection timeout
2. Request Connection Errors
3. Request Exceptions
4. Other timeouts (like Read Timeout)

The above errors can be avoided through retrying and hence are classified as the default retry-able errors. Additionally, retries should also be made for Circuit Breaker exceptions (exceptions raised by the Circuit Breaker in an open state).

Default Termination Strategy

The termination strategy defines when SDKs should stop attempting to retry. In other words, it's the deadline for retries. The OCI SDKs should stop retrying the operation after 7 retry attempts. This means roughly 98 seconds (~1.5 minutes) will have elapsed due to total delays. SDKs will make a total of 8 attempts
(1 initial request + 7 retries).

Default Delay Strategy

The delay strategy defines the amount of time to wait between each of the retry attempts. The default delay strategy chosen for the SDK is exponential backoff with jitter, using:

1. A base time of 1 second to use in retry calculations
2. An exponent of 2; when calculating the next retry time, the SDK will raise this to the power of the number of attempts
3. A maximum wait time between calls of 30 seconds (capped)
4. An added jitter value between 0-1000 milliseconds to spread out the requests

Configure and use default retry policy

You can set this retry policy for a single request: or for all requests made by a client: or for all requests made by all clients: or set the default retry via an environment variable, which is a global switch for all services: Some services enable retry for operations by default; this can be overridden using any of the alternatives mentioned above. To know which service operations have retries enabled by default, look at the operation's description in the SDK - it will say whether it has retries enabled by default.

Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail. If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. Example:

This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will:

1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made
2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer
3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm)

If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.
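For reference, attaching the default retry policy at the request level, as mentioned above under "Configure and use default retry policy", looks roughly like the sketch below. The list request and the compartment OCID are placeholders.

	package main

	import (
		"github.com/oracle/oci-go-sdk/common"
		"github.com/oracle/oci-go-sdk/identity"
	)

	func main() {
		// Attach the SDK's default retry policy to a single request.
		policy := common.DefaultRetryPolicy()

		req := identity.ListGroupsRequest{
			CompartmentId: common.String("ocid1.compartment.oc1..example"), // placeholder OCID
			RequestMetadata: common.RequestMetadata{
				RetryPolicy: &policy,
			},
		}
		_ = req // send with an identity client as usual
	}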
If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option: The NewRetryPolicy function also creates a retry policy without eventual consistency.

A Circuit Breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles. It also enables an application to detect whether the fault has been resolved. If the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping calls in a circuit breaker object which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips and all further calls to the circuit breaker return with an error; this also saves the service from being overwhelmed with network calls in case of an outage.

Circuit Breaker Configuration Definitions

1. Failure Rate Threshold - The state of the CircuitBreaker changes from CLOSED to OPEN when the failure rate is equal to or greater than a configurable threshold. For example, when more than 50% of the recorded calls have failed.
2. Reset Timeout - The timeout after which an open circuit breaker will attempt a request if a request is made
3. Failure Exceptions - The list of exceptions that will be regarded as failures for the circuit.
4. Minimum number of calls / Volume threshold - Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate.

The default circuit breaker configuration is:

1. Failure Rate Threshold - 80% - This means when 80% of the requests calculated for a time window of 120 seconds have failed, the circuit will transition from closed to open.
2. Minimum number of calls / Volume threshold - A value of 10, for the above defined time window of 120 seconds.
3. Reset Timeout - 30 seconds to wait before setting the breaker to the halfOpen state and trying the action again.
4. Failure Exceptions - The failures for the circuit will only be recorded for the retryable/transient exceptions. This means only the following exceptions will be regarded as failures for the circuit.

HTTP Code Customer-facing Error Code

Apart from the above, the following client side exceptions will also be treated as a failure for the circuit:

1. HTTP Connection timeout
2. Request Connection Errors
3. Request Exceptions
4. Other timeouts (like Read Timeout)

The Go SDK enables the circuit breaker with a default configuration for most of the service clients; if you don't want to enable the solution, you can disable the functionality before your application runs. The Go SDK also supports customizing the Circuit Breaker with specified configurations. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go To know which service clients have circuit breakers enabled, look at the service client's description in the SDK - it will say whether it has circuit breakers enabled by default.

The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, then you can set this up in the following ways:
1. Configuring the environment variable as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment
2. Modifying the underlying Transport struct for a service client

In order to modify the underlying Transport struct in HttpClient, you can do something similar to (sample code for the audit service client):

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable:
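A minimal sketch of overwriting the target host, as mentioned above. The identity client is used for illustration and the endpoint shown is a made-up placeholder; supply the appropriate endpoint for your region and service.

	package main

	import (
		"github.com/oracle/oci-go-sdk/common"
		"github.com/oracle/oci-go-sdk/identity"
	)

	func main() {
		client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
		if err != nil {
			panic(err)
		}

		// Overwrite the target host to reach a region this SDK version doesn't
		// know about yet. The endpoint below is a placeholder.
		client.Host = "https://identity.us-newregion-1.oraclecloud.example.com"
	}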
Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk

Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt

To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom

For help, please refer to this link: https://github.com/oracle/oci-go-sdk#help
This is the official Go SDK for Oracle Cloud Infrastructure Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example belows creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to signing custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control on the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a white list of headers that it expects to be signed. Therefore, adding an arbitrary header can result in authentications errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling request and responses. 
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The below are possible values for the "OCI_GO_SDK_DEBUG" variable 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above then default logging level is "info". If the environment variable is not present then no logging messages are emitted. The default destination for logging is Stderr and if you want to output log to a file you can set via environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The below are possible values 1. "file" or "f" enables all logging output saved to file 2. "combine" or "c" enables all logging output to both stderr and file You can also customize the log file location and name via "OCI_GO_SDK_LOG_FILE" environment variable, the value should be the path to a specific file If this environment variable is not present, the default location will be the project root path Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there's network issue etc... This can be accomplished by using the RequestMetadata.RetryPolicy(request level configuration), alternatively, global(all services) or client level RetryPolicy configration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with binary request body, please make sure the binary request body is resettable, which means the request body should inherit Seeker interface. The Retry behavior Precedence (Highest to lowest) is defined as below:- The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). The default retry policy is defined by : Default Retry-able Errors Below is the list of default retry-able errors for which retry attempts should be made. The following errors should be retried (with backoff). HTTP Code Customer-facing Error Code Apart from the above errors, retries should also be attempted in the following Client Side errors : 1. HTTP Connection timeout 2. Request Connection Errors 3. Request Exceptions 4. Other timeouts (like Read Timeout) The above errors can be avoided through retrying and hence, are classified as the default retry-able errors. Additionally, retries should also be made for Circuit Breaker exceptions (Exceptions raised by Circuit Breaker in an open state) Default Termination Strategy The termination strategy defines when SDKs should stop attempting to retry. In other words, it's the deadline for retries. The OCI SDKs should stop retrying the operation after 7 retry attempts. This means the SDKs will have retried for ~98 seconds or ~1.5 minutes have elapsed due to total delays. SDKs will make a total of 8 attempts. 
(1 initial request + 7 retries) Default Delay Strategy Default Delay Strategy - The delay strategy defines the amount of time to wait between each of the retry attempts. The default delay strategy chosen for the SDK – Exponential backoff with jitter, using: 1. The base time to use in retry calculations will be 1 second 2. An exponent of 2. When calculating the next retry time, the SDK will raise this to the power of the number of attempts 3. A maximum wait time between calls of 30 seconds (Capped) 4. Added jitter value between 0-1000 milliseconds to spread out the requests Configure and use default retry policy You can set this retry policy for a single request: or for all requests made by a client: or for all requests made by all clients: or setting default retry via environment varaible, which is a global switch for all services: Some services enable retry for operations by default, this can be overridden using any alternatives mentioned above. To know which service operations have retries enabled by default, look at the operation's description in the SDK - it will say whether that it has retries enabled by default Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail. If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. Example: This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will: 1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made 2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer 3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm) If there were no eventually consistent actions within the recent past, then this special retry strategy is not used. 
If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option: The NewRetryPolicy function also creates a retry policy without eventual consistency. A Circuit Breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles; it also enables an application to detect whether the fault has been resolved, and if the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping calls in a circuit breaker object that monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error; this also saves the service from being overwhelmed with network calls in case of an outage. Circuit Breaker Configuration Definitions: 1. Failure Rate Threshold - The state of the CircuitBreaker changes from CLOSED to OPEN when the failure rate is equal to or greater than a configurable threshold, for example when more than 50% of the recorded calls have failed. 2. Reset Timeout - The timeout after which an open circuit breaker will attempt a request if a request is made 3. Failure Exceptions - The list of Exceptions that will be regarded as failures for the circuit 4. Minimum number of calls / Volume threshold - Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. The default configuration is: 1. Failure Rate Threshold - 80% - This means when 80% of the requests calculated for a time window of 120 seconds have failed, the circuit will transition from closed to open. 2. Minimum number of calls / Volume threshold - A value of 10, for the above defined time window of 120 seconds. 3. Reset Timeout - 30 seconds to wait before setting the breaker to the halfOpen state and trying the action again. 4. Failure Exceptions - The failures for the circuit will only be recorded for the retryable/transient exceptions. This means only the following exceptions will be regarded as failures for the circuit: HTTP Code / Customer-facing Error Code. Apart from the above, the following client-side exceptions will also be treated as a failure for the circuit: 1. HTTP Connection timeout 2. Request Connection Errors 3. Request Exceptions 4. Other timeouts (like Read Timeout). The Go SDK enables the circuit breaker with a default configuration for most of the service clients; if you don't want to enable this behavior, you can disable the functionality before your application starts running. The Go SDK also supports customizing the Circuit Breaker with specific configurations. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go To know which service clients have circuit breakers enabled, look at the service client's description in the SDK - it will say whether circuit breakers are enabled by default. The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways: 1.
Configuring the environment variables as described at https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. In order to modify the underlying Transport struct in HttpClient, you can do something similar to (sample code for an audit service client): The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response. If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com. oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go. Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable: Got a fix for a bug, or a new feature you'd like to contribute?
The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to this link: https://github.com/oracle/oci-go-sdk#help
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout (a hedged sketch appears at the end of this section). More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. The following is an example: Bear in mind that some services have a white list of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses.
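The following is a minimal, hedged sketch of the getting-started flow described above: it builds an identity client from the default configuration provider and lists availability domains. The import paths follow the master branch layout and may differ for versioned module releases, so treat this as illustrative rather than canonical.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        // Build an identity client from the default configuration provider (~/.oci/config).
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        // Availability domains are listed per compartment; the tenancy OCID is the root compartment.
        tenancyID, err := common.DefaultConfigProvider().TenancyOCID()
        if err != nil {
            log.Fatal(err)
        }

        // common.String is one of the pointer helpers mentioned above.
        req := identity.ListAvailabilityDomainsRequest{CompartmentId: common.String(tenancyID)}

        resp, err := client.ListAvailabilityDomains(context.Background(), req)
        if err != nil {
            log.Fatal(err)
        }
        for _, ad := range resp.Items {
            fmt.Println(*ad.Name)
        }
    }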
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The following are possible values for the "OCI_GO_SDK_DEBUG" variable: 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above, the default logging level is "info". If the environment variable is not present, no logging messages are emitted. The default destination for logging is Stderr; if you want to output logs to a file, you can set the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are: 1. "file" or "f" writes all logging output to a file 2. "combine" or "c" writes all logging output to both stderr and a file. You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location is the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there is a network issue. This can be accomplished by using the RequestMetadata.RetryPolicy. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways: 1. Configuring the environment variables as described at https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. In order to modify the underlying Transport struct in HttpClient, you can do something similar to the sample for an audit service client (a hedged sketch appears at the end of this section). The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value.
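A minimal, hedged sketch of the Transport modification mentioned above, using the audit service client as in the original description. It assumes the service client exposes the embedded BaseClient's HTTPClient field and simply swaps in an http.Client whose Transport routes through a proxy; verify the field names and import paths against your SDK version.

    import (
        "net/http"
        "net/url"

        "github.com/oracle/oci-go-sdk/audit"
        "github.com/oracle/oci-go-sdk/common"
    )

    func newProxiedAuditClient(proxy string) (audit.AuditClient, error) {
        client, err := audit.NewAuditClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            return client, err
        }

        // proxy is a placeholder such as "http://proxy.example.com:3128".
        proxyURL, err := url.Parse(proxy)
        if err != nil {
            return client, err
        }

        // Replace the default dispatcher with an http.Client whose Transport uses the proxy.
        client.HTTPClient = &http.Client{
            Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
        }
        return client, nil
    }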
When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response. If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com. oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go. Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host (a hedged sketch appears at the end of this section). If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable. Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to this link: https://github.com/oracle/oci-go-sdk#help
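A minimal, hedged sketch of the client.Host override mentioned above. The endpoint shown is a placeholder, not a real region endpoint; substitute the endpoint published for your realm and region, and verify the Host field against your SDK version.

    import (
        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func newClientForNewRegion() (identity.IdentityClient, error) {
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            return client, err
        }
        // Overwrite the target host for a region this SDK version does not yet know about.
        // "https://identity.<region>.<realm-domain>" is a placeholder pattern, not a real endpoint.
        client.Host = "https://identity.<region>.<realm-domain>"
        return client, nil
    }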
Package dynamodb provides the client and types for making API requests to Amazon DynamoDB. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an AWS region, providing built-in high availability and data durability. See https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10 for more information on this service. See dynamodb package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/ To use Amazon DynamoDB with the SDK, use the New function to create a new service client (a hedged sketch appears at the end of this description). With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon DynamoDB client DynamoDB for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/#New Utility helpers to marshal and unmarshal AttributeValue to and from Go types can be found in the dynamodbattribute sub package. This package provides specialized functions for the common ways of working with AttributeValues, such as map[string]*AttributeValue, []*AttributeValue, and directly with *AttributeValue. This is helpful for marshaling Go types for API operations such as PutItem, and unmarshaling Query and Scan APIs' responses. See the dynamodbattribute package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/dynamodbattribute/ The expression package provides utility types and functions to build DynamoDB expressions for type-safe construction of API ExpressionAttributeNames and ExpressionAttributeValues. The package represents the various DynamoDB Expressions as structs named accordingly. For example, ConditionBuilder represents a DynamoDB Condition Expression, an UpdateBuilder represents a DynamoDB Update Expression, and so on. See the expression package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/expression/
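A minimal, hedged sketch of the flow described above: create a client with New, marshal a Go value with the dynamodbattribute helpers, and call PutItem. The table name and item type are invented for illustration only.

    import (
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
    )

    // record is a hypothetical item shape used only for this example.
    type record struct {
        ID   string `json:"id"`
        Name string `json:"name"`
    }

    func putExample() error {
        // New creates a DynamoDB client from a session; clients are safe for concurrent use.
        svc := dynamodb.New(session.Must(session.NewSession()))

        // Marshal a Go struct into the map[string]*AttributeValue form PutItem expects.
        item, err := dynamodbattribute.MarshalMap(record{ID: "42", Name: "example"})
        if err != nil {
            return err
        }

        _, err = svc.PutItem(&dynamodb.PutItemInput{
            TableName: aws.String("ExampleTable"), // hypothetical table name
            Item:      item,
        })
        if err != nil {
            log.Printf("PutItem failed: %v", err)
        }
        return err
    }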
Package twilio provides server side authentication utilities for applications to integrate with the Twilio service. Access Tokens are short-lived tokens that can be used to authenticate Twilio Client SDKs like Video and IP Messaging. See details at https://www.twilio.com/docs/api/rest/access-tokens. Capability Tokens are a secure way of setting up your device to access various features of Twilio, allowing you to add Twilio capabilities to web and mobile applications without exposing your AuthToken in any client-side environment. See details at https://www.twilio.com/docs/api/client/capability-tokens.
The socketio package is a simple abstraction layer for different web browser-supported transport mechanisms. It is fully compatible with the Socket.IO client side JavaScript socket API library by LearnBoost Labs (http://socket.io/), but through custom codecs it might fit other client implementations too. It (together with the LearnBoost's client-side libraries) provides an easy way for developers to access the most popular browser transport mechanisms today: multipart- and long-polling XMLHttpRequests, HTML5 WebSockets and forever-frames. The socketio package works hand-in-hand with the standard http package by plugging itself into a configurable ServeMux. It has a callback-style API for handling connection events. The callbacks are: - SocketIO.OnConnect - SocketIO.OnDisconnect - SocketIO.OnMessage Other utility-methods include: - SocketIO.ServeMux - SocketIO.Broadcast - SocketIO.BroadcastExcept - SocketIO.GetConn - Conn.Send Each new connection will be automatically assigned a unique session id, and using those the clients can reconnect without losing messages: the server persists clients' pending messages (until some configurable point) if they can't be immediately delivered. All writes through Conn.Send are by design asynchronous. Finally, the actual format on the wire is described by a separate Codec. The default codecs (SIOCodec and SIOStreamingCodec) are compatible with the LearnBoost's Socket.IO client. For example, here is a simple chat server:
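A minimal, hedged sketch of such a chat server, using only the callbacks and utility methods named above (OnConnect, OnDisconnect, OnMessage, BroadcastExcept, Broadcast, ServeMux, Conn.Send). The import path, the NewSocketIO constructor, and the callback and Message signatures are assumptions for illustration only; consult the package's actual API.

    package main

    import (
        "log"
        "net/http"

        socketio "example.com/socketio" // placeholder import path for this package
    )

    func main() {
        // NewSocketIO and its argument are assumed; the real constructor may take a config.
        sio := socketio.NewSocketIO(nil)

        sio.OnConnect(func(c *socketio.Conn) {
            c.Send("welcome")                        // Conn.Send is listed among the utility methods
            sio.BroadcastExcept(c, "someone joined") // notify everyone else
        })
        sio.OnDisconnect(func(c *socketio.Conn) {
            sio.BroadcastExcept(c, "someone left")
        })
        sio.OnMessage(func(c *socketio.Conn, msg socketio.Message) {
            sio.Broadcast(msg.Data()) // relay chat lines to all clients; Message.Data is assumed
        })

        // Plug the socketio handlers into a standard ServeMux and serve as usual.
        mux := sio.ServeMux()
        mux.Handle("/", http.FileServer(http.Dir("www/")))
        log.Fatal(http.ListenAndServe(":8080", mux))
    }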
Package upload is an http service that stores uploaded files, and a client to this service. The service can resume the upload of a file if the client is the uploader. It has a directory in its root storage for each user. There may be no user if the client is just curl or another non-specialized client. The service consists of these packages: ./cmd/uploader - the client; it can resume the upload of a file at any later time. ./cmd/uploadserver - the server side of the upload service. ./cmd/decodejournal - a utility used to dump the content of a journal file. The journal is used to allow an upload to be resumed. You build all these packages with: make all
Package kvcert is a simple utility that utilizes the azure-sdk-for-go to fetch a Certificate from Azure Key Vault. The certificate can then be used in your Go web server to support TLS communication. A trivial example is below. This example uses the following environment variables: KEY_VAULT_NAME: name of your Azure Key Vault KEY_VAULT_CERT_NAME: name of your certificate in Azure Key Vault AZURE_TENANT_ID: azure tenant id (not visible in example, but required by azure-sdk-for-go) AZURE_CLIENT_ID: azure client id (not visible in example, but required by azure-sdk-for-go) AZURE_CLIENT_SECRET: azure client secret (not visible in example, but required by azure-sdk-for-go)
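A minimal, hedged sketch of the flow described above. The kvcert identifiers (NewCertFetcher, GetCertificate, and the returned PEM values) and the import path are hypothetical stand-ins, since the package's exact API is not shown here; the tls and net/http plumbing around them is standard library.

    package main

    import (
        "context"
        "crypto/tls"
        "log"
        "net/http"
        "os"

        "example.com/kvcert" // placeholder import path for this package
    )

    func main() {
        // KEY_VAULT_NAME and KEY_VAULT_CERT_NAME are the variables listed above;
        // the Azure credentials are read by azure-sdk-for-go from its own variables.
        vault := os.Getenv("KEY_VAULT_NAME")
        certName := os.Getenv("KEY_VAULT_CERT_NAME")

        // NewCertFetcher and GetCertificate are hypothetical names used for illustration.
        fetcher, err := kvcert.NewCertFetcher(vault)
        if err != nil {
            log.Fatal(err)
        }
        certPEM, keyPEM, err := fetcher.GetCertificate(context.Background(), certName)
        if err != nil {
            log.Fatal(err)
        }

        cert, err := tls.X509KeyPair(certPEM, keyPEM)
        if err != nil {
            log.Fatal(err)
        }

        srv := &http.Server{
            Addr:      ":8443",
            TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
        }
        // Certificate material comes from TLSConfig, so the file arguments are empty.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }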
Package gamelift provides the client and types for making API requests to Amazon GameLift. Amazon GameLift is a managed service for developers who need a scalable, dedicated server solution for their multiplayer games. Use Amazon GameLift for these tasks: (1) set up computing resources and deploy your game servers, (2) run game sessions and get players into games, (3) automatically scale your resources to meet player demand and manage costs, and (4) track in-depth metrics on game server performance and player usage. The Amazon GameLift service API includes two important function sets: Manage game sessions and player access -- Retrieve information on available game sessions; create new game sessions; send player requests to join a game session. Configure and manage game server resources -- Manage builds, fleets, queues, and aliases; set auto-scaling policies; retrieve logs and metrics. This reference guide describes the low-level service API for Amazon GameLift. You can use the API functionality with these tools: The Amazon Web Services software development kit (AWS SDK (http://aws.amazon.com/tools/#sdk)) is available in multiple languages (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-supported.html#gamelift-supported-clients) including C++ and C#. Use the SDK to access the API programmatically from an application, such as a game client. The AWS command-line interface (http://aws.amazon.com/cli/) (CLI) tool is primarily useful for handling administrative actions, such as setting up and managing Amazon GameLift settings and resources. You can use the AWS CLI to manage all of your AWS services. The AWS Management Console (https://console.aws.amazon.com/gamelift/home) for Amazon GameLift provides a web interface to manage your Amazon GameLift settings and resources. The console includes a dashboard for tracking key resources, including builds and fleets, and displays usage and performance metrics for your games as customizable graphs. Amazon GameLift Local is a tool for testing your game's integration with Amazon GameLift before deploying it on the service. This tool supports a subset of key API actions, which can be called from either the AWS CLI or programmatically. See Testing an Integration (http://docs.aws.amazon.com/gamelift/latest/developerguide/integration-testing-local.html). Learn more: Developer Guide (http://docs.aws.amazon.com/gamelift/latest/developerguide/) -- Read about Amazon GameLift features and how to use them. Tutorials (https://gamedev.amazon.com/forums/tutorials) -- Get started fast with walkthroughs and sample projects. GameDev Blog (http://aws.amazon.com/blogs/gamedev/) -- Stay up to date with new features and techniques. GameDev Forums (https://gamedev.amazon.com/forums/spaces/123/gamelift-discussion.html) -- Connect with the GameDev community. Release notes (http://aws.amazon.com/releasenotes/Amazon-GameLift/) and document history (http://docs.aws.amazon.com/gamelift/latest/developerguide/doc-history.html) -- Stay current with updates to the Amazon GameLift service, SDKs, and documentation. This list offers a functional overview of the Amazon GameLift service API. Use these actions to start new game sessions, find existing game sessions, track game session status and other information, and enable player access to game sessions.
SearchGameSessions -- Retrieve all available game sessions or search for Start new games with Queues to find the best available hosting resources StartGameSessionPlacement -- Request a new game session placement and add DescribeGameSessionPlacement -- Get details on a placement request, including StopGameSessionPlacement -- Cancel a placement request. CreateGameSession -- Start a new game session on a specific fleet. Available StartMatchmaking -- Request matchmaking for one player or a group who want StartMatchBackfill - Request additional player matches to fill empty slots DescribeMatchmaking -- Get details on a matchmaking request, including status. AcceptMatch -- Register that a player accepts a proposed match, for matches StopMatchmaking -- Cancel a matchmaking request. DescribeGameSessions -- Retrieve metadata for one or more game sessions, DescribeGameSessionDetails -- Retrieve metadata and the game session protection UpdateGameSession -- Change game session settings, such as maximum player GetGameSessionLogUrl -- Get the location of saved logs for a game session. CreatePlayerSession -- Send a request for a player to join a game session. CreatePlayerSessions -- Send a request for multiple players to join a game DescribePlayerSessions -- Get details on player activity, including status, When setting up Amazon GameLift resources for your game, you first create a game build (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html) and upload it to Amazon GameLift. You can then use these actions to configure and manage a fleet of resources to run your game servers, scale capacity to meet player demand, access performance and utilization metrics, and more. CreateBuild -- Create a new build using files stored in an Amazon S3 bucket. ListBuilds -- Get a list of all builds uploaded to an Amazon GameLift region. DescribeBuild -- Retrieve information associated with a build. UpdateBuild -- Change build metadata, including build name and version. DeleteBuild -- Remove a build from Amazon GameLift. CreateFleet -- Configure and activate a new fleet to run a build's game servers. ListFleets -- Get a list of all fleet IDs in an Amazon GameLift region (all DeleteFleet -- Terminate a fleet that is no longer running game servers or View / update fleet configurations. DescribeFleetAttributes / UpdateFleetAttributes -- View or change a fleet's DescribeFleetPortSettings / UpdateFleetPortSettings -- View or change the DescribeRuntimeConfiguration / UpdateRuntimeConfiguration -- View or change DescribeEC2InstanceLimits -- Retrieve maximum number of instances allowed DescribeFleetCapacity / UpdateFleetCapacity -- Retrieve the capacity settings Autoscale -- Manage auto-scaling rules and apply them to a fleet. PutScalingPolicy -- Create a new auto-scaling policy, or update an existing DescribeScalingPolicies -- Retrieve an existing auto-scaling policy. DeleteScalingPolicy -- Delete an auto-scaling policy and stop it from affecting StartFleetActions -- Restart a fleet's auto-scaling policies. StopFleetActions -- Suspend a fleet's auto-scaling policies. CreateVpcPeeringAuthorization -- Authorize a peering connection to one of DescribeVpcPeeringAuthorizations -- Retrieve valid peering connection authorizations. DeleteVpcPeeringAuthorization -- Delete a peering connection authorization.
CreateVpcPeeringConnection -- Establish a peering connection between the DescribeVpcPeeringConnections -- Retrieve information on active or pending DeleteVpcPeeringConnection -- Delete a VPC peering connection with an Amazon DescribeFleetUtilization -- Get current data on the number of server processes, DescribeFleetEvents -- Get a fleet's logged events for a specified time span. DescribeGameSessions -- Retrieve metadata associated with one or more game DescribeInstances -- Get information on each instance in a fleet, including GetInstanceAccess -- Request access credentials needed to remotely connect CreateAlias -- Define a new alias and optionally assign it to a fleet. ListAliases -- Get all fleet aliases defined in an Amazon GameLift region. DescribeAlias -- Retrieve information on an existing alias. UpdateAlias -- Change settings for an alias, such as redirecting it from one DeleteAlias -- Remove an alias from the region. ResolveAlias -- Get the fleet ID that a specified alias points to. CreateGameSessionQueue -- Create a queue for processing requests for new DescribeGameSessionQueues -- Retrieve game session queues defined in an Amazon UpdateGameSessionQueue -- Change the configuration of a game session queue. DeleteGameSessionQueue -- Remove a game session queue from the region. CreateMatchmakingConfiguration -- Create a matchmaking configuration with DescribeMatchmakingConfigurations -- Retrieve matchmaking configurations UpdateMatchmakingConfiguration -- Change settings for matchmaking configuration. DeleteMatchmakingConfiguration -- Remove a matchmaking configuration from CreateMatchmakingRuleSet -- Create a set of rules to use when searching for DescribeMatchmakingRuleSets -- Retrieve matchmaking rule sets defined in ValidateMatchmakingRuleSet -- Verify syntax for a set of matchmaking rules. See https://docs.aws.amazon.com/goto/WebAPI/gamelift-2015-10-01 for more information on this service. See gamelift package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/ To use Amazon GameLift with the SDK, use the New function to create a new service client (a hedged sketch appears after this description). With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon GameLift client GameLift for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/#New
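A minimal, hedged sketch of creating a GameLift client with New and calling one of the build management actions listed above (ListBuilds). The session setup follows the standard aws-sdk-go pattern; the region is a placeholder.

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/gamelift"
    )

    func listBuilds() error {
        // New creates an Amazon GameLift client from a session; clients are safe for concurrent use.
        svc := gamelift.New(session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-west-2"), // placeholder region
        })))

        // ListBuilds is one of the build management actions described above.
        out, err := svc.ListBuilds(&gamelift.ListBuildsInput{})
        if err != nil {
            return err
        }
        for _, b := range out.Builds {
            fmt.Println(aws.StringValue(b.BuildId), aws.StringValue(b.Name))
        }
        return nil
    }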
Package nutsak (Network Utility Swiss-Army Knife) provides a means of tunneling traffic between two endpoints. Here is sample code to create a Pair (a hedged sketch appears at the end of this description): This will create a TCP listener on port 4444 that forks each new connection. Any received data will be written to STDOUT. The Network Utilities (NUts) are created from seeds. Below are the supported SEED TYPES along with their documentation: FILE:addr[,mode=(append|read|write)] This seed takes an address that is an absolute or relative filename. This seed is used to read or write a file on disk. The default mode is read. STDIO: Aliases: -, STDIN, STDOUT This seed takes no address or options. It can be used to read from stdin or write to stdout. TCP:addr This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to make an outgoing TCP connection. TCP-LISTEN:addr[,fork] Aliases: TCP-L This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to listen on the provided TCP address. The fork option causes the TCP listener to accept multiple connections in parallel. TLS:addr[,ca=PATH,cert=PATH,key=PATH,verify] This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to make an outgoing TLS connection. The ca, cert, and key options take a filepath (DER or PEM formatted). The verify option determines if the server-side CA should be verified. The cert and key options must be used together. If verify is specified, a ca must also be specified. TLS-LISTEN:addr[,ca=PATH,cert=PATH,fork,key=PATH,verify] Aliases: TLS-L This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to listen on the provided TCP address. The ca, cert, and key options take a filepath (DER or PEM formatted). The fork option causes the TLS listener to accept multiple connections in parallel. The verify option determines if the client-side certificate should be verified. The cert and key options are mandatory. If verify is specified, a ca must also be specified. UDP:addr This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to make an outgoing UDP connection. UDP-LISTEN:addr Aliases: UDP-L This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to listen on the provided UDP address.
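A minimal, hedged sketch of creating the Pair described above (a forking TCP listener on port 4444 paired with STDOUT). The import path and the NewNUt and Pair identifiers are hypothetical names used only for illustration; the seed strings themselves come from the seed documentation above.

    package main

    import (
        "log"

        "example.com/nutsak" // placeholder import path for this package
    )

    func main() {
        // NewNUt and Pair are hypothetical; the real package exposes its own
        // constructors for NUts built from seeds and for pairing them.

        // A forking TCP listener on port 4444 (see the TCP-LISTEN seed above).
        left, err := nutsak.NewNUt("TCP-LISTEN:4444,fork")
        if err != nil {
            log.Fatal(err)
        }

        // STDIO seed: anything received on the listener is written to stdout.
        right, err := nutsak.NewNUt("STDOUT")
        if err != nil {
            log.Fatal(err)
        }

        // Pair tunnels traffic between the two endpoints until one side closes.
        if err := nutsak.Pair(left, right); err != nil {
            log.Fatal(err)
        }
    }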
This is the official Go SDK for Oracle Cloud Infrastructure Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example belows creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to signing custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control on the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a white list of headers that it expects to be signed. Therefore, adding an arbitrary header can result in authentications errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling request and responses. 
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The below are possible values for the "OCI_GO_SDK_DEBUG" variable 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above then default logging level is "info". If the environment variable is not present then no logging messages are emitted. The default destination for logging is Stderr and if you want to output log to a file you can set via environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The below are possible values 1. "file" or "f" enables all logging output saved to file 2. "combine" or "c" enables all logging output to both stderr and file You can also customize the log file location and name via "OCI_GO_SDK_LOG_FILE" environment variable, the value should be the path to a specific file If this environment variable is not present, the default location will be the project root path Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there's network issue etc... This can be accomplished by using the RequestMetadata.RetryPolicy. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with binary request body, please make sure the binary request body is resettable, which means the request body should inherit Seeker interface. The GO SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways: 1. Configuring environment variable as described here https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client In order to modify the underlying Transport struct in HttpClient, you can do something similar to (sample code for audit service client): The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. 
The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload, simplifying interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-typed response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, the response field will be set to this value. When individual services return a polymorphic JSON response that is not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm.

A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can handle it automatically. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host by setting client.Host. If you are authenticating via instance principals, you can also set the authentication endpoint in an environment variable.

Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk

Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt

To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom

If you need help with the SDK, please refer to: https://github.com/oracle/oci-go-sdk#help
The socketio package is a simple abstraction layer for different web browser-supported transport mechanisms. It is fully compatible with the Socket.IO client-side JavaScript socket API library by LearnBoost Labs (http://socket.io/), but through custom codecs it might fit other client implementations too. Together with LearnBoost's client-side libraries, it provides an easy way for developers to access the most popular browser transport mechanisms today: multipart- and long-polling XMLHttpRequests, HTML5 WebSockets and forever-frames.

The socketio package works hand-in-hand with the standard http package by plugging itself into a configurable ServeMux. It has a callback-style API for handling connection events. The callbacks are:

 - SocketIO.OnConnect
 - SocketIO.OnDisconnect
 - SocketIO.OnMessage

Other utility methods include:

 - SocketIO.ServeMux
 - SocketIO.Broadcast
 - SocketIO.BroadcastExcept
 - SocketIO.GetConn
 - Conn.Send

Each new connection is automatically assigned a unique session id, and using those ids the clients can reconnect without losing messages: the server persists clients' pending messages (up to some configurable point) if they can't be delivered immediately. All writes through Conn.Send are asynchronous by design.

Finally, the actual format on the wire is described by a separate Codec. The default codecs (SIOCodec and SIOStreamingCodec) are compatible with LearnBoost's Socket.IO client.

For example, here is a simple chat server:
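The original example code is not reproduced here; the following is only a rough sketch of such a chat server, built from the callbacks and broadcast methods listed above. The NewSocketIO constructor, the handler signatures, the Message.Data method, and the import path are assumptions for illustration and may differ from the actual package API:

    package main

    import (
        "log"
        "net/http"

        "socketio" // assumed import path; use the package's real path
    )

    func main() {
        // NewSocketIO is assumed; the callbacks below are those documented above.
        sio := socketio.NewSocketIO(nil)

        // Announce joins and leaves to everyone else.
        sio.OnConnect(func(c *socketio.Conn) {
            sio.BroadcastExcept(c, "a user joined the chat")
        })
        sio.OnDisconnect(func(c *socketio.Conn) {
            sio.BroadcastExcept(c, "a user left the chat")
        })

        // Relay each chat line to every other connected client.
        sio.OnMessage(func(c *socketio.Conn, msg socketio.Message) {
            sio.BroadcastExcept(c, msg.Data()) // Message.Data is assumed to return the payload
        })

        // Plug the socketio handlers into a ServeMux and serve it with net/http.
        mux := sio.ServeMux()
        log.Fatal(http.ListenAndServe(":8080", mux))
    }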