Package sshego is a Go library that does secure port forwarding over ssh. It also includes `gosshtun`, a command line utility that demonstrates use of the library and may be useful standalone.

The point of providing a Go library is that it can be used to secure (via SSH tunnel) any traffic that your Go application would otherwise have to send over cleartext TCP. While you could always run a tunnel as a separate process, by running the tunnel in process with your application you know the tunnel is running whenever the process is running. It is simply easier to administer; there is only one thing to start instead of two. It is also much simpler, and much faster, than using a virtual private network (VPN). For a speed comparison, consider [1], where SSH is seen to be at least 2x faster than OpenVPN.

[1] http://serverfault.com/questions/653211/ssh-tunneling-is-faster-than-openvpn-could-it-be

The sshego library typically acts as an ssh client, but it also provides options to support running an embedded sshd server daemon. Port forwarding is the most typical use of the client, and it is the equivalent of using the standalone `ssh` client program with the `-L` and/or `-R` flags. If you only trust the user running your application and not your entire host, you can further restrict access by using either DialConfig.Dial() for a direct-tcpip connection, or by using the unix-domain-socket support.

A typical `gosshtun` invocation is therefore equivalent to an `ssh -L` forwarding, with the addendum that `gosshtun` requires the use of a passwordless private `-key` file and will never prompt you for a password at the keyboard. This makes it ideal for embedding inside your application to secure your (e.g. mysql, postgres, other cleartext) traffic. As many connections as you need will be multiplexed over the same ssh tunnel.

We check the sshd server's host key. We prevent MITM attacks by only allowing new servers if `-new` is given. You should give `-new` only once, at setup time. Then the lack of `-new` can protect you on subsequent runs, because the server's host key must match what we were given the first time.

When a local browser connects to the forwarded listen address (for example localhost:8888), two network hops happen: (a) from the local listener through the established ssh tunnel to the remote sshd, and (b) from the remote sshd to the `-remote` target address. Hop (a) takes place inside the previously established ssh tunnel, while connection (b) takes place over basic, un-adorned, un-encrypted TCP/IP. Of course you could always run `gosshtun` again on the remote host to secure the additional hop as well, but typically -remote is aimed at 127.0.0.1, which is internal to the remote host itself and so needs no encryption.
Package twilio provides server side authentication utilities for applications to integrate with the Twilio service.

Access Tokens are short-lived tokens that can be used to authenticate Twilio Client SDKs like Video and IP Messaging. See details at https://www.twilio.com/docs/api/rest/access-tokens.

Capability Tokens are a secure way of setting up your device to access various features of Twilio, allowing you to add Twilio capabilities to web and mobile applications without exposing your AuthToken in any client-side environment. See details at https://www.twilio.com/docs/api/client/capability-tokens.
The socketio package is a simple abstraction layer for different web browser-supported transport mechanisms. It is fully compatible with the Socket.IO client-side JavaScript socket API library by LearnBoost Labs (http://socket.io/), but through custom codecs it might fit other client implementations too. Together with LearnBoost's client-side libraries, it provides an easy way for developers to access the most popular browser transport mechanisms today: multipart- and long-polling XMLHttpRequests, HTML5 WebSockets and forever-frames.

The socketio package works hand-in-hand with the standard http package by plugging itself into a configurable ServeMux. It has a callback-style API for handling connection events. The callbacks are:

	- SocketIO.OnConnect
	- SocketIO.OnDisconnect
	- SocketIO.OnMessage

Other utility methods include:

	- SocketIO.ServeMux
	- SocketIO.Broadcast
	- SocketIO.BroadcastExcept
	- SocketIO.GetConn
	- Conn.Send

Each new connection is automatically assigned a unique session id, and using these the clients can reconnect without losing messages: the server persists clients' pending messages (until some configurable point) if they cannot be immediately delivered. All writes through Conn.Send are by design asynchronous.

Finally, the actual format on the wire is described by a separate Codec. The default codecs (SIOCodec and SIOStreamingCodec) are compatible with LearnBoost's Socket.IO client.

For example, here is a simple chat server:
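The sketch below illustrates such a server. The constructor and handler signatures are assumptions based on the callback names listed above, as is the import path, so consult the package API documentation for the exact types.

	package main

	import (
		"log"
		"net/http"

		"socketio" // illustrative import path
	)

	func main() {
		// NewSocketIO and the handler signatures below are assumed for
		// illustration; nil selects a default configuration here.
		sio := socketio.NewSocketIO(nil)

		sio.OnConnect(func(c *socketio.Conn) {
			sio.Broadcast(struct{ Announcement string }{c.String() + " connected"})
		})
		sio.OnDisconnect(func(c *socketio.Conn) {
			sio.BroadcastExcept(c, struct{ Announcement string }{c.String() + " disconnected"})
		})
		sio.OnMessage(func(c *socketio.Conn, msg socketio.Message) {
			// Relay every chat message to all other connected clients.
			sio.BroadcastExcept(c, struct{ Message []string }{[]string{c.String(), msg.Data()}})
		})

		// ServeMux returns a mux with the socket.io endpoints registered.
		if err := http.ListenAndServe(":8080", sio.ServeMux()); err != nil {
			log.Fatal("ListenAndServe:", err)
		}
	}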
lf is a terminal file manager. Source code can be found in the repository at https://github.com/gokcehan/lf. This documentation can either be read from terminal using 'lf -doc' or online at https://godoc.org/github.com/gokcehan/lf. You can also use the 'doc' command (default '<f-1>') inside lf to view the documentation in a pager. You can run 'lf -help' to see descriptions of command line options.

The following commands are provided by lf: The following command line commands are provided by lf: The following options can be used to customize the behavior of lf: The following environment variables are exported for shell commands: The following commands/keybindings are provided by default: The following additional keybindings are provided by default: Configuration files should be located at: Marks file should be located at: History file should be located at: You can configure the default values of the following variables to change these locations: A sample configuration file can be found at https://github.com/gokcehan/lf/blob/master/etc/lfrc.example.

This section shows information about builtin commands. Modal commands do not take any arguments, but instead change the operation mode to read their input conveniently, and so they are meant to be assigned to keybindings. Quit lf and return to the shell. Move the current file selection upwards/downwards by one/half a page/full page. Change the current working directory to the parent directory. If the current file is a directory, then change the current directory to it, otherwise, execute the 'open' command. A default 'open' command is provided to call the default system opener asynchronously with the current file as the argument. A custom 'open' command can be defined to override this default. (See also 'OPENER' variable and 'Opening Files' section) Move the current file selection to the top/bottom of the directory. Toggle the selection of the current file or files given as arguments. Reverse the selection of all files in the current directory (i.e. 'toggle' all files). Selections in other directories are not affected by this command. You can define a new command to select all files in the directory by combining 'invert' with 'unselect' (i.e. `cmd select-all :unselect; invert`), though this will also remove selections in other directories. Remove the selection of all files in all directories. Select files that match the given glob. Unselect files that match the given glob. If there are no selections, save the path of the current file to the copy buffer, otherwise, copy the paths of selected files. If there are no selections, save the path of the current file to the cut buffer, otherwise, copy the paths of selected files. Copy/Move files in copy/cut buffer to the current working directory. Clear file paths in copy/cut buffer. Synchronize copied/cut files with the server. This command is automatically called when required. Draw the screen. This command is automatically called when required. Synchronize the terminal and redraw the screen. Load modified files and directories. This command is automatically called when required. Flush the cache and reload all files and directories. Print given arguments to the message line at the bottom. Print given arguments to the message line at the bottom and also to the log file. Print given arguments to the message line at the bottom in red color and also to the log file. Change the working directory to the given argument. Change the current file selection to the given argument. Remove the current file or selected file(s).
Rename the current file using the builtin method. A custom 'rename' command can be defined to override this default. Read the configuration file given in the argument. Simulate key pushes given in the argument. Read a command to evaluate. Read a shell command to execute. (See also 'Prefixes' and 'Shell Commands' sections) Read a shell command to execute piping its standard I/O to the bottom statline. (See also 'Prefixes' and 'Piping Shell Commands' sections) Read a shell command to execute and wait for a key press at the end. (See also 'Prefixes' and 'Waiting Shell Commands' sections) Read a shell command to execute synchronously without standard I/O. Read key(s) to find the appropriate file name match in the forward/backward direction and jump to the next/previous match. (See also 'anchorfind', 'findlen', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Read a pattern to search for a file name match in the forward/backward direction and jump to the next/previous match. (See also 'globsearch', 'incsearch', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Save the current directory as a bookmark assigned to the given key. Change the current directory to the bookmark assigned to the given key. A special bookmark "'" holds the previous directory after a 'mark-load', 'cd', or 'select' command. Remove a bookmark assigned to the given key.

This section shows information about command line commands. These should be mostly compatible with readline keybindings. A character refers to a unicode code point, a word consists of letters and digits, and a unix word consists of any non-blank characters. Quit command line mode and return to normal mode. Autocomplete the current word. Autocomplete the current word, then press the bound key(s) again to cycle through completion options. Autocomplete the current word, then press the bound key(s) again to cycle through completion options backwards. Execute the current line. Interrupt the current shell-pipe command and return to the normal mode. Go to the next/previous item in the history. Move the cursor to the left/right. Move the cursor to the beginning/end of line. Delete the next character in forward/backward direction. Delete everything up to the beginning/end of line. Delete the previous unix word. Paste the buffer content containing the last deleted item. Transpose the positions of the last two characters/words. Move the cursor by one word in forward/backward direction. Delete the next word in forward direction. Capitalize/uppercase/lowercase the current word and jump to the next word.

This section shows information about options to customize the behavior. The character ':' is used as the separator for list options '[]int' and '[]string'. When this option is enabled, the find command starts matching patterns from the beginning of file names, otherwise, it can match at an arbitrary position. When this option is enabled, directory sizes show the number of items inside instead of the size of the directory file. The former needs to be calculated by reading the directory and counting the items inside. The latter is directly provided by the operating system and it does not require any calculation, though it is non-intuitive and it can often be misleading. This option is disabled by default for performance reasons. This option only has an effect when 'info' has a 'size' field and the pane is wide enough to show the information.
A thousand items are counted per directory at most, and bigger directories are shown as '999+'. Show directories first above regular files. Draw boxes around panes with box drawing characters. Format string of error messages shown in the bottom message line. File separator used in environment variables 'fs' and 'fx'. Number of characters prompted for the find command. When this value is set to 0, the find command prompts until there is only a single match left. When this option is enabled, search command patterns are considered as globs, otherwise they are literals. With globbing, '*' matches any sequence, '?' matches any character, and '[...]' or '[^...]' matches character sets or ranges. Otherwise, these characters are interpreted as they are. Show hidden files. On unix systems, hidden files are determined by the value of 'hiddenfiles'. On windows, only files with hidden attributes are considered hidden files. List of hidden file glob patterns. Patterns can be given as relative or absolute paths. Globbing supports the usual special characters, '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. In addition, if a pattern starts with '!', then its matches are excluded from hidden files. Show icons before each item in the list. By default, only two icons, 🗀 (U+1F5C0) and 🗎 (U+1F5CE), are used for directories and files respectively, as they are supported in the unicode standard. Icons can be configured with an environment variable named 'LF_ICONS'. The syntax of this variable is similar to 'LS_COLORS'. See the wiki page for an example icon configuration. Sets the 'IFS' variable in shell commands. It works by adding the assignment to the beginning of the command string as 'IFS='...'; ...'. The reason is that the 'IFS' variable is not inherited by the shell for security reasons. This method assumes a POSIX shell syntax and so it can fail for non-POSIX shells. This option has no effect when the value is left empty. This option does not have any effect on windows. Ignore case in sorting and search patterns. Ignore diacritics in sorting and search patterns. Jump to the first match after each keystroke during searching. List of information shown for directory items at the right side of the pane. Currently supported information types are 'size', 'time', 'atime', and 'ctime'. Information is only shown when the pane width is more than twice the width of information. Send mouse events as input. Show the position number for directory items at the left side of the pane. When 'relativenumber' is enabled, only the current line shows the absolute position and relative positions are shown for the rest. Set the interval in seconds for periodic checks of directory updates. This works by periodically calling the 'load' command. Note that directories are already updated automatically in many cases. This option can be useful when there is an external process changing the displayed directory and you are not doing anything in lf. Periodic checks are disabled when the value of this option is set to zero. Show previews of files and directories at the right most pane. If the file has more lines than the preview pane, the rest of the lines are not read. Files containing the null character (U+0000) in the read portion are considered binary files and displayed as 'binary'. Set the path of a previewer file to filter the content of regular files for previewing. The file should be executable.
Five arguments are passed to the file: the first is the current file name; the second, third, fourth, and fifth are the width, height, horizontal position, and vertical position of the preview pane respectively. A SIGPIPE signal is sent when enough lines are read. If the previewer returns a non-zero exit code, then the preview cache for the given file is disabled. This means that if the file is selected in the future, the previewer is called once again. Preview filtering is disabled and files are displayed as they are when the value of this option is left empty. Set the path of a cleaner file. This file will be called if previewing is enabled, the previewer is set, and the previously selected file had its preview cache disabled. The file should be executable. One argument is passed to the file; the path to the file whose preview should be cleaned. Preview clearing is disabled when the value of this option is left empty. Format string of the prompt shown in the top line. Special expansions are provided, '%u' as the user name, '%h' as the host name, '%w' as the working directory, '%d' as the working directory with a trailing path separator, and '%f' as the file name. The home folder is shown as '~' in the working directory expansion. Directory names are automatically shortened to a single character starting from the left most parent when the prompt does not fit to the screen. List of ratios of pane widths. The number of items in the list determines the number of panes in the ui. When 'preview' option is enabled, the right most number is used for the width of the preview pane. Show the position number relative to the current line. When 'number' is enabled, the current line shows the absolute position, otherwise nothing is shown. Reverse the direction of sort. Minimum number of offset lines shown at all times in the top and the bottom of the screen when scrolling. The current line is kept in the middle when this option is set to a large value that is bigger than half of the number of lines. A smaller offset can be used when the current file is close to the beginning or end of the list to show the maximum number of items. Shell executable to use for shell commands. On unix, a POSIX compatible shell is required. Shell commands are executed as 'shell shellopts -c command -- arguments'. On windows, '/c' is used instead of '-c' which should work in 'cmd' and 'powershell'. List of shell options to pass to the shell executable. Override 'ignorecase' option when the pattern contains an uppercase character. This option has no effect when 'ignorecase' is disabled. Override 'ignoredia' option when the pattern contains a character with a diacritic. This option has no effect when 'ignoredia' is disabled. Sort type for directories. Currently supported sort types are 'natural', 'name', 'size', 'time', 'ctime', 'atime', and 'ext'. Number of space characters to show for horizontal tabulation (U+0009) character. Format string of the file modification time shown in the bottom line. Truncate character shown at the end when the file name does not fit to the pane. Searching can wrap around the file list. Scrolling can wrap around the file list.

The following variables are exported for shell commands: These are referred to with a '$' prefix on POSIX shells (e.g. '$f'), between '%' characters on Windows cmd (e.g. '%f%'), and with a '$env:' prefix on Windows powershell (e.g. '$env:f'). Current file selection as a full path. Selected file(s) separated with the value of 'filesep' option as full path(s). Selected file(s) (i.e. 'fs') if there are any selected files, otherwise current file selection (i.e. 'f').
Id of the running client. The value of this variable is set to the current nesting level when you run lf from a shell spawned inside lf. You can add the value of this variable to your shell prompt to make it clear that your shell runs inside lf. For example, with POSIX shells, you can use '[ -n "$LF_LEVEL" ] && PS1="$PS1""(lf level: $LF_LEVEL) "' in your shell configuration file (e.g. '~/.bashrc'). If this variable is set in the environment, use the same value, otherwise set the value to 'start' in Windows, 'open' in MacOS, 'xdg-open' in others. If this variable is set in the environment, use the same value, otherwise set the value to 'vi' on unix, 'notepad' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'less' on unix, 'more' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'sh' on unix, 'cmd' in Windows.

The following command prefixes are used by lf: The same evaluator is used for the command line and the configuration file for read and shell commands. The difference is that prefixes are not necessary in the command line. Instead, different modes are provided to read corresponding commands. These modes are mapped to the prefix keys above by default. Characters from '#' to newline are comments and ignored: There are three special commands ('set', 'map', and 'cmd') and their variants for configuration. Command 'set' is used to set an option which can be boolean, integer, or string: Command 'map' is used to bind a key to a command which can be a builtin command, a custom command, or a shell command: Command 'cmap' is used to bind a key to a command line command which can only be one of the builtin commands: You can delete an existing binding by leaving the expression empty: Command 'cmd' is used to define a custom command: You can delete an existing command by leaving the expression empty: If there is no prefix then ':' is assumed: An explicit ':' can be provided to group statements until a newline which is especially useful for 'map' and 'cmd' commands: If you need multiple lines, you can wrap statements in '{{' and '}}' after the proper prefix. Regular keys are assigned to a command with the usual syntax: Keys combined with the shift key simply use the uppercase letter: Special keys are written in between '<' and '>' characters and always use lowercase letters: Angle brackets can be assigned with their special names: Function keys are prefixed with the 'f' character: Keys combined with the control key are prefixed with the 'c' character: Keys combined with the alt key are assigned in two different ways depending on the behavior of your terminal. Older terminals (e.g. xterm) may set the 8th bit of a character when the alt key is pressed. On these terminals, you can use the corresponding byte for the mapping: Newer terminals (e.g. gnome-terminal) may prefix the key with an escape key when the alt key is pressed. lf uses the escape delaying mechanism to recognize alt keys in these terminals (delay is 100ms). On these terminals, keys combined with the alt key are prefixed with the 'a' character: Please note that some key combinations are not possible due to the way terminals work (e.g. the control and h combination sends a backspace key instead). The easiest way to find the name of a key combination is to press the key while lf is running and read the name of the key from the unknown mapping error.
Mouse buttons are prefixed with the 'm' character: Mouse wheel events are also prefixed with the 'm' character: The usual way to map a key sequence is to assign it to a named or unnamed command. While this provides a clean way to remap builtin keys as well as other commands, it can be limiting at times. For this reason, a 'push' command is provided by lf. This command is used to simulate key pushes given as its arguments. You can 'map' a key to a 'push' command with an argument to create various keybindings. This is mainly useful for two purposes. First, it can be used to map a command with a command count: Second, it can be used to avoid typing the name when a command takes arguments: One thing to be careful about is that since the 'push' command works with keys instead of commands, it is possible to accidentally create recursive bindings: These types of bindings create a deadlock when executed. Regular shell commands are the most basic command type that is useful for many purposes. For example, we can write a shell command to move selected file(s) to trash. A first attempt to write such a command may look like this: We check '$fs' to see if there are any selected files. Otherwise we just delete the current file. Since this is such a common pattern, a separate '$fx' variable is provided. We can use this variable to get rid of the conditional: The trash directory is checked each time the command is executed. We can move it outside of the command so that it only runs once at startup: Since these are one-liners, we can drop '{{' and '}}': Finally note that we set the 'IFS' variable manually in these commands. Instead we could use the 'ifs' option to set it for all shell commands (i.e. 'set ifs "\n"'). This can be especially useful for interactive use (e.g. '$rm $f' or '$rm $fs' would simply work). This option is not set by default as it can behave unexpectedly for new users. However, use of this option is highly recommended and it is assumed in the rest of the documentation. Regular shell commands have some limitations in some cases. When an output or error message is given and the command exits afterwards, the ui is immediately resumed and there is no way to see the message without dropping to the shell again. Also, even when there is no output or error, the ui still needs to be paused while the command is running. This can cause flickering on the screen for short commands and similar distractions for longer commands. Instead of pausing the ui, piping shell commands connects stdin, stdout, and stderr of the command to the statline at the bottom of the ui. This can be useful for programs following the unix philosophy of giving no output in the success case, and brief error messages or prompts in other cases. For example, the following rename command prompts for an overwrite in the statline if there is an existing file with the given name: You can also output error messages in the command and they will show up in the statline. For example, an alternative rename command may look like this: Note that input is line buffered and output and error are byte buffered. Waiting shell commands are similar to regular shell commands except that they wait for a key press when the command is finished. These can be useful to see the output of a program before the ui is resumed. Waiting shell commands are more appropriate than piping shell commands when the command is verbose and the output is best displayed as multiple lines.
Asynchronous shell commands are used to start a command in the background and then resume operation without waiting for the command to finish. Stdin, stdout, and stderr of the command are neither connected to the terminal nor to the ui. One of the more advanced features in lf is remote commands. All clients connect to a server on startup. It is possible to send commands to all or any of the connected clients over the common server. This is used internally to notify file selection changes to other clients. To use this feature, you need to use a client which supports communicating with a UNIX-domain socket. The OpenBSD implementation of netcat (nc) is one such example. You can use it to send a command to the socket file: Since such a client may not be available everywhere, lf comes bundled with a command line flag to be used for this purpose. When using lf, you do not need to specify the address of the socket file. This is the recommended way of using remote commands since it is shorter and immune to socket file address changes: In this command 'send' is used to send the rest of the string as a command to all connected clients. You can optionally give it an id number to send a command to a single client: All clients have a unique id number but you may not be aware of the id number when you are writing a command. For this purpose, an '$id' variable is exported to the environment for shell commands. You can use it to send a remote command from a client to the server which in turn sends a command back to itself. So now you can display a message in the current client by calling the following in a shell command: Since lf does not have control flow syntax, remote commands are used for such needs. For example, you can configure the number of columns in the ui with respect to the terminal width as follows: Besides the 'send' command, there are also two commands to get or set the current file selection. Two possible modes 'copy' and 'move' specify whether selected files are to be copied or moved. File names are separated by the newline character. Setting the file selection is done with the 'save' command: Getting the file selection is similarly done with the 'load' command: There is a 'quit' command to close client connections and quit the server: Lastly, there is a 'conn' command to connect to the server as a client. This should not be needed for users. lf uses its own builtin copy and move operations by default. These are implemented as asynchronous operations and progress is shown in the bottom ruler. These commands do not overwrite existing files or directories with the same name. Instead, a suffix that is compatible with the '--backup=numbered' option in GNU cp is added to the new files or directories. Only file modes are preserved and all other attributes are ignored, including ownership, timestamps, context, and xattr. Special files such as character and block devices, named pipes, and sockets are skipped and links are not followed. Moving is performed using the rename operation of the underlying OS. For cross-device moving, lf falls back to copying and then deletes the original files if there are no errors. Operation errors are shown in the message line as well as the log file and they do not preemptively finish the corresponding file operation. File operations can be performed on the currently selected file or alternatively on multiple files by selecting them first. When you 'copy' a file, lf doesn't actually copy the file on the disk, but only records its name in memory. The actual file copying takes place when you 'paste'.
Similarly, 'paste' after a 'cut' operation moves the file. You can customize copy and move operations by defining a 'paste' command. This is a special command that, when defined, is called instead of the builtin implementation. You can use the following example as a starting point: Some useful things to be considered are to use the backup ('--backup') and/or preserve attributes ('-a') options with 'cp' and 'mv' commands if they support them (i.e. the GNU implementations), change the command type to asynchronous, or use the 'rsync' command with its progress bar option for copying and feed the progress to the client periodically with remote 'echo' calls. By default, lf does not assign the 'delete' command to a key to protect new users. You can customize file deletion by defining a 'delete' command. You can also assign a key to this command if you like. Example commands to move selected files to a trash folder and to remove files completely after a prompt are provided in the example configuration file. There are two mechanisms implemented in lf to search a file in the current directory. Searching is the traditional method to move the selection to a file matching a given pattern. Finding is an alternative way to search for a pattern possibly using fewer keystrokes. The searching mechanism is implemented with commands 'search' (default '/'), 'search-back' (default '?'), 'search-next' (default 'n'), and 'search-prev' (default 'N'). You can enable the 'globsearch' option to match with a glob pattern. Globbing supports '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. You can enable the 'incsearch' option to jump to the current match at each keystroke while typing. In this mode, you can either use 'cmd-enter' to accept the search or use 'cmd-escape' to cancel the search. Alternatively, you can also map some other commands with 'cmap' to accept the search and execute the command immediately afterwards. Possible candidates are 'up', 'down' and their variants, 'top', 'bottom', 'updir', and 'open' commands. For example, you can use arrow keys to finish the search with the following mappings: The finding mechanism is implemented with commands 'find' (default 'f'), 'find-back' (default 'F'), 'find-next' (default ';'), 'find-prev' (default ','). You can disable the 'anchorfind' option to match a pattern at an arbitrary position in the filename instead of the beginning. You can set the number of keys to match using the 'findlen' option. If you set this value to zero, then the keys are read until there is only a single match. Default values of these two options are set to jump to the first file with the given initial. Some options affect both searching and finding. You can disable the 'wrapscan' option to prevent searches from wrapping around at the end of the file list. You can disable the 'ignorecase' option to match cases in the pattern and the filename. This option is already automatically overridden if the pattern contains upper case characters. You can disable the 'smartcase' option to disable this behavior. Two similar options 'ignoredia' and 'smartdia' are provided to control matching diacritics in latin letters. You can define an 'open' command (default 'l' and '<right>') to configure file opening. This command is only called when the current file is not a directory, otherwise the directory is entered instead.
You can define it just as you would define any other command: It is possible to use different command types: You may want to use either file extensions or mime types from the 'file' command: You may want to use 'setsid' before your opener command to have persistent processes that continue to run after lf quits. The following command is provided by default: You may also use any other existing file openers as you like. Possible options are 'libfile-mimeinfo-perl' (executable name is 'mimeopen'), 'rifle' (ranger's default file opener), or 'mimeo' to name a few. lf previews files on the preview pane by printing the file until the end or the preview pane is filled. This output can be enhanced by providing a custom preview script for filtering. This can be used to highlight source code, list the contents of archive files or view pdf or image files as text, to name a few. For coloring, lf recognizes ansi escape codes. In order to use this feature you need to set the value of the 'previewer' option to the path of an executable file. lf passes the current file name as the first argument and the height of the preview pane as the second argument when running this file. The output of the execution is printed in the preview pane. You may want to use the same script in your pager mapping as well if any: For the 'less' pager, you may instead utilize the 'LESSOPEN' mechanism so that useful information about the file such as the full path of the file can be displayed in the statusline below: Since this script is called for each file selection change, it needs to be as efficient as possible and this responsibility is left to the user. You may use file extensions to determine the type of file more efficiently compared to obtaining mime types from the 'file' command. Extensions can then be used to match cleanly within a conditional: Another important consideration for efficiency is the use of programs with short startup times for preview. For this reason, 'highlight' is recommended over 'pygmentize' for syntax highlighting. Besides, it is also important that the application processes the file on the fly rather than first reading it into memory and then doing the processing afterwards. This is especially relevant for big files. lf automatically closes the previewer script output pipe with a SIGPIPE when enough lines are read. When everything else fails, you can make use of the height argument to only feed the first portion of the file to a program for preview. Note that some programs may not respond well to SIGPIPE and may exit with a non-zero return code, which disables caching. You may add a trailing '|| true' command to avoid such errors: You may also use an existing preview filter as you like. Your system may already come with a preview filter named 'lesspipe'. These filters may have a mechanism to add user customizations as well. See the related documentation for more information. lf changes the working directory of the process to the current directory so that shell commands always work in the displayed directory. After quitting, it returns to the original directory where it was first launched, like all shell programs. If you want to stay in the current directory after quitting, you can use one of the example wrapper shell scripts provided in the repository. There is a special command 'on-cd' that runs a shell command when it is defined and the directory is changed. You can define it just as you would define any other command: If you want to print escape sequences, you may redirect 'printf' output to '/dev/tty'.
The following xterm specific escape sequence sets the terminal title to the working directory: This command runs whenever you change directory but not on startup. You can add an extra call to make it run on startup as well: Note that all shell commands are possible but `%` and `&` are usually more appropriate as `$` and `!` cause flickers and pauses respectively. lf tries to automatically adapt its colors to the environment. It starts with a default colorscheme and updates colors using the values of existing environment variables, possibly overwriting its previous values. Colors are set in the following order: Please refer to the corresponding man pages for more information about 'LSCOLORS' and 'LS_COLORS'. 'LF_COLORS' is provided with the same syntax as 'LS_COLORS' in case you want to configure colors only for lf but not ls. This can be useful since there are some differences between ls and lf, though one should expect the same behavior for common cases. You can configure lf colors in two different ways. First, you can only configure the 8 basic colors used by your terminal and lf should pick up those colors automatically. Depending on your terminal, you should be able to select your colors from a 24-bit palette. This is the recommended approach as colors used by other programs will also match each other. Second, you can set the values of the environment variables mentioned above for fine-grained customization. Note that 'LS_COLORS/LF_COLORS' are more powerful than 'LSCOLORS' and they can be used even when GNU programs are not installed on the system. You can combine this second method with the first method for best results. Lastly, you may also want to configure the colors of the prompt line to match the rest of the colors. Colors of the prompt line can be configured using the 'promptfmt' option, which can include hardcoded colors as ansi escapes. See the default value of this option to have an idea about how to color this line. It is worth noting that lf uses as many colors as are advertised by your terminal's entry in your system's terminfo database (as reported by infocmp); if no entry is present, lf will default to an internal database. For terminals supporting 24-bit (or "true") color that do not have a database entry (or one that does not advertise all capabilities), support can be enabled by either setting the '$COLORTERM' variable to "truecolor" or ensuring '$TERM' is set to a value that ends with "-truecolor". Default lf colors are mostly taken from GNU dircolors defaults. These defaults use the 8 basic colors and the bold attribute. Default dircolors entries with background colors are simplified to avoid confusion with the current file selection in lf. Similarly, there are only file type matchings and extension matchings are left out for simplicity. Default values are as follows, given with their matching order in lf: Note that lf first tries matching file names and then falls back to file types. The full order of matchings from most specific to least is as follows: For example, given a regular text file '/path/to/README.txt', the following entries are checked in the configuration and the first one to match is used: Given a regular directory '/path/to/example.d', the following entries are checked in the configuration and the first one to match is used: Note that glob-like patterns do not actually perform glob matching for performance reasons. For example, you can set a variable as follows: Having all entries on a single line can make it hard to read.
You may instead divide it into multiple lines in between double quotes by escaping newlines with backslashes as follows: Having such a long variable definition in a shell configuration file might be undesirable. You may instead put this definition in a separate file and source it in your shell configuration file as follows: See the Wikipedia page for ANSI escape codes: https://en.wikipedia.org/wiki/ANSI_escape_code. Icons are configured using the 'LF_ICONS' environment variable. This variable uses the same syntax as 'LS_COLORS/LF_COLORS'. Instead of colors, you should put a single character as the value of each entry. Do not forget to enable the 'icons' option to see the icons. Default values are as follows, given with their matching order in lf: See the wiki page for an example icon configuration: https://github.com/gokcehan/lf/wiki/Icons.
Package kvcert is a simple utility that utilizes the azure-sdk-for-go to fetch a certificate from Azure Key Vault. The certificate can then be used in your Go web server to support TLS communication. A trivial example is below.

This example uses the following environment variables:

	KEY_VAULT_NAME:      name of your Azure Key Vault
	KEY_VAULT_CERT_NAME: name of your certificate in Azure Key Vault
	AZURE_TENANT_ID:     azure tenant id (not visible in example, but required by azure-sdk-for-go)
	AZURE_CLIENT_ID:     azure client id (not visible in example, but required by azure-sdk-for-go)
	AZURE_CLIENT_SECRET: azure client secret (not visible in example, but required by azure-sdk-for-go)
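The sketch below outlines the shape of such an example. The kvcert import path, constructor and method names are assumptions for illustration only; the standard library wiring around them is the relevant part.

	package main

	import (
		"context"
		"crypto/tls"
		"log"
		"net/http"
		"os"

		"github.com/example/kvcert" // illustrative import path
	)

	func main() {
		// KEY_VAULT_NAME and KEY_VAULT_CERT_NAME select the certificate;
		// AZURE_TENANT_ID, AZURE_CLIENT_ID and AZURE_CLIENT_SECRET are read
		// by azure-sdk-for-go to authenticate.
		vault := os.Getenv("KEY_VAULT_NAME")
		certName := os.Getenv("KEY_VAULT_CERT_NAME")

		// Hypothetical constructor and method names; the real API may differ.
		kv, err := kvcert.NewService(vault)
		if err != nil {
			log.Fatal(err)
		}
		cert, err := kv.GetCertificate(context.Background(), certName) // assumed to return a tls.Certificate
		if err != nil {
			log.Fatal(err)
		}

		srv := &http.Server{
			Addr: ":8443",
			Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				w.Write([]byte("hello over TLS"))
			}),
			TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
		}
		// Empty cert/key file paths: the certificate comes from TLSConfig above.
		log.Fatal(srv.ListenAndServeTLS("", ""))
	}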
Package nokiahealth is a client module for working with the Withings (previously Nokia Health (previously Withings)) API. The current version (v2) of this module has been updated to work with the newer Oauth2 implementation of the API.

As with all Oauth2 APIs, you must obtain authorization to access the user's data. To do so you must first register your application with the Withings API to obtain a ClientID and ClientSecret. Once you have your client information, you can create a new client. You will need to also provide the redirectURL you provided during application registration. The user will be redirected to this URL, and the redirect will include the code you need to generate the accessToken needed to access the user's data. Under normal circumstances you should have an http server listening on that address and pull the code from the URL query parameters.

You can now use the client to generate the authorization URL the user needs to navigate to. This URL will take the user to a page that will prompt them to allow your application to access their data. The state string returned by the method is a randomly generated BASE64 string using the crypto/rand module. It will be returned in the redirect as a query parameter, and the two values should be verified to match. It's not required, but useful for security reasons.

Using the code returned by the redirect in the query parameters, you can generate a new user. This user struct can be used to immediately perform actions against the user's data. It also contains the token you should save somewhere for reuse. Obviously use whatever context you would like here. Make sure to save at least the refreshToken for accessing the user's data at a later date. You may also save the accessToken, but it does expire and creating a new client from saved token data only requires the refreshToken. You can easily create a user from a saved token using the NewUserFromRefreshToken method. A working configured client is required for the user generated from this method to work.

The user struct has various methods associated with each API endpoint to perform data retrieval. The methods take a specific param struct specifying the api options to use on the request. The API is a bit "special" so the params vary a bit between each method. The client does what it can to smooth those out but there is only so much that can be done.

Every method has two forms, one that accepts a context and one that does not. This allows you to provide a context on each request if you would like to. By default all methods utilize a context to timeout the request to the API. The value of the timeout is stored on the Client and can be accessed/set via Client.Timeout. Setting it is _not_ thread safe and should only be done at client creation. If you need to change the timeout for different requests, use the methodCtx variant of the method.

By default the state generated by AuthCodeURL utilizes crypto/rand. If you would like to implement your own random method you can do so by assigning the function to the Rand field of the Client struct. The function should support the Rand type. Also, this is _not_ thread safe so only perform this action on client creation.

By default every returned response will be parsed and the parsed data returned. If you need access to the raw response data you can enable it by setting the SaveRawResponse field of the client struct to true. This should be done at client creation time. With it set to true the RawResponse field of the returned structs will include the raw response.
Some data request methods include a parseResponse field on the params struct. If this is set, additional parsing is performed to make the data more usable. This can be seen on GetBodyMeasures for example. You can include the path fields sent to the API by setting IncludePath to true on the client. This is primarily used for debugging but could be helpful in some situations. By default the client will request all known scopes. If you would like to pare this down, you can change the scope by using the SetScope method of the client. Consts in the form of ScopeXxx are provided to aid selection.
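Putting the pieces above together, a typical flow looks roughly like the following. The exchange-code method name, parameter shapes and import path are assumptions based on the descriptions above; only the method names explicitly mentioned in this documentation (AuthCodeURL, NewUserFromRefreshToken, GetBodyMeasures) are taken from it.

	package main

	import (
		"context"
		"fmt"
		"log"

		"nokiahealth" // illustrative import path
	)

	func main() {
		// Values obtained when registering the application with the Withings API.
		client := nokiahealth.NewClient("<client-id>", "<client-secret>", "https://example.com/callback")

		// Generate the URL the user must visit to authorize access; the returned
		// state should match the state query parameter included in the redirect.
		authURL, state, err := client.AuthCodeURL()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("send the user to:", authURL, "and expect state:", state)

		// Exchange the code from the redirect for a user (method name assumed).
		// The returned user carries the token; persist at least the refresh token.
		user, err := client.NewUserFromAuthCode(context.Background(), "<code-from-redirect>")
		if err != nil {
			log.Fatal(err)
		}

		// Later, recreate the user from the saved refresh token.
		user, err = client.NewUserFromRefreshToken(context.Background(), "<saved-refresh-token>")
		if err != nil {
			log.Fatal(err)
		}

		// Retrieve data; each endpoint takes its own params struct (omitted here).
		measures, err := user.GetBodyMeasures(nil)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%+v\n", measures)
	}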
Parallel Sync - parallel recursive copying of directories

psync is a tool which copies a directory recursively to another directory. Unlike "cp -r", which walks through the files and subdirectories in sequential order, psync copies several files concurrently by using threads. psync was written to speed up copying of large trees to or from a network file system like GlusterFS, Ceph, WebDAV or NFS. Operations on such file systems are usually latency bound, especially with small files. Parallel execution can help to utilize the bandwidth better and avoid the latencies adding up, as they do in sequential operations.

Currently, psync only copies directory trees, similar to "cp -r". A "sync" mode, similar to "rsync -rl", is planned. See [GOALS.md](GOALS.md) for how psync may eventually look.

psync is invoked with a source and a destination directory; for example, to copy all files and subdirectories from /data/src into /data/dest. /data/src and /data/dest must exist and must be directories.

A recursive copy of a directory can be a throughput bound or latency bound operation, depending on the size of files and characteristics of the source and/or destination file system. When copying between standard file systems on two local hard disks, the operation is typically throughput bound, and copying in parallel has no performance advantage over copying sequentially. In this case, you have a bunch of options, including "cp -r" or "rsync". However, when copying from or to network file systems (NFS, CIFS), WAN storage (WebDAV, external cloud services), distributed file systems (GlusterFS, CephFS) or file systems that live on a DRBD device, the latency for each file access is often the limiting performance factor. With sequential copies, the operation can consume lots of time, although the bandwidth is not saturated. In this case, copying the files in parallel can give a significant performance boost.

Parallel copying is typically not so useful when copying between local or very fast hard disks. psync can be used there, and with a moderate concurrency level like 2..5, it can be slightly faster than a sequential copy. Parallel copying should never be used when duplicating directories on the same physical hard disk. Even sequential copies suffer from the frequent hard disk head movements which are needed to read and write concurrently from/to the same disk. Parallel copying increases the amount of head movements even further.

psync uses goroutines for copying files in parallel. By default, 16 copy workers are spawned as goroutines, but the number can be adjusted with the -threads switch. Each worker waits for a directory to be submitted. It then handles all the directory entries sequentially. Files are copied one by one to the destination directory. When subdirectories are discovered, they are created on the destination side. Traversal of the subdirectory is then submitted to other workers and thus done in parallel with the current workload.

Here are some performance values comparing psync to cp and rsync when copying a large directory structure with many small files from a local file system to an NFS share. The NFS server has an AMD E-350 CPU, 8 GB of RAM, a 2TB hard drive (WD Green series) running Debian GNU/Linux 10 (Linux kernel 4.19). The NFS export is a logical volume on the HDD with ext4 file system. The NFS export options are: rw,no_root_squash,async,no_subtree_check. The client is a workstation with AMD Ryzen 7 1700 CPU, 64 GB of RAM, running Ubuntu 18.04 LTS with HWE stack (Linux kernel 5.3).
The data to copy is located on a 1TB SATA SSD with XFS, and buffered in memory. The NFS mount options are: fstype=nfs,vers=3,soft,intr,async. The hosts are connected over ethernet with 1 Gbit/s, ping latency is 160µs. The data is an extracted linux kernel source code 4.15.2 tarball, containing 62273 files and 32 symbolic links in 4377 directories, summing up to 892 MB (as seen by "du -s"). It is copied from the workstation to the server over NFS. The options for the three commands are selected comparably. They copy the files and links recursively and preserve permissions, but no ownership or time stamps.

psync currently can only handle directories, regular files, and symbolic links. Other filesystem entries like devices, sockets or named pipes are silently ignored. psync preserves the Unix permissions (rwx) of the copied files and directories, but currently has no support for preserving other permission bits (suid, sticky). When using the corresponding options, psync tries to preserve the ownership (user/group) and/or the access and modification time stamps. Preserving ownership only works when psync is running under the root user account. Preserving the time stamps only works for regular files and directories, not for symbolic links.

psync currently implements a simple recursive copy, like "cp -r", not a versatile sync algorithm like rsync. There is no check whether a file already exists in the destination, nor of its content and timestamps. Existing files on the destination side are not deleted when they don't exist on the source side.

psync is being developed under Linux (Debian, Ubuntu, CentOS). It should work on other distributions, but this has not been tested. It currently does not compile for Windows, Darwin (macOS), NetBSD and FreeBSD (but this should be easy to fix). psync is released under the terms of the GNU General Public License, version 3.
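The worker scheme described above can be sketched in Go roughly as follows. This is a simplified illustration of the approach, not psync's actual implementation; error handling, symbolic links, and ownership/timestamp preservation are omitted.

	package main

	import (
		"io"
		"os"
		"path/filepath"
		"sync"
	)

	// copyDir copies one directory level: regular files are copied sequentially,
	// while discovered subdirectories are handed to submit so that other workers
	// can traverse them in parallel.
	func copyDir(src, dst string, submit func(src, dst string)) {
		entries, err := os.ReadDir(src)
		if err != nil {
			return
		}
		for _, e := range entries {
			s, d := filepath.Join(src, e.Name()), filepath.Join(dst, e.Name())
			if e.IsDir() {
				os.Mkdir(d, 0o755)
				submit(s, d) // traversed by another worker
				continue
			}
			in, err := os.Open(s)
			if err != nil {
				continue
			}
			if out, err := os.Create(d); err == nil {
				io.Copy(out, in)
				out.Close()
			}
			in.Close()
		}
	}

	func main() {
		const workers = 16 // psync's default, adjustable with -threads

		sem := make(chan struct{}, workers) // limits concurrently active copies
		var wg sync.WaitGroup

		// submit schedules one directory for traversal in its own goroutine,
		// but at most 'workers' of them do real work at the same time.
		var submit func(src, dst string)
		submit = func(src, dst string) {
			wg.Add(1)
			go func() {
				defer wg.Done()
				sem <- struct{}{}
				defer func() { <-sem }()
				copyDir(src, dst, submit)
			}()
		}

		submit("/data/src", "/data/dest")
		wg.Wait()
	}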
Package main is a stub for wr's command line interface, with the actual implementation in the cmd package.

wr is a workflow runner. You use it to run the commands in your workflow easily, automatically, reliably, with repeatability, and while making optimal use of your available computing resources. wr is implemented as a polling-free in-memory job queue with an on-disk ACID transactional embedded database, written in Go. Its main benefits over other software workflow management systems are its very low latency and overhead, its high performance at scale, its real-time status updates with a view on all your workflows on one screen, its permanent searchable history of all the commands you have ever run, and its "live" dependencies enabling easy automation of on-going projects.

Start up the manager daemon, which gives you a url you can view the web interface on: In addition to the "local" scheduler, which will run your commands on all available cores of the local machine, you can also have it run your commands on your LSF cluster or in your OpenStack environment (where it will scale the number of servers needed up and down automatically). Now, stick the commands you want to run in a text file and: Arbitrarily complex workflows can be formed by specifying command dependencies. Use the --help option of `wr add` for details.

wr's core is implemented in the queue package. This is the in-memory job queue that holds commands that still need to be run. Its multiple sub-queues enable certain guarantees: a given command will only get run by a single client at any one time; if a client dies, the command will get run by another client instead; if a command cannot be run, it is buried until the user takes action; if a command has a dependency, it won't run until its dependencies are complete.

The jobqueue package provides client+server code for interacting with the in-memory queue from the queue package, and by storing all new commands in an on-disk database, provides an additional guarantee: that (dynamic) workflows won't break because a job that was added got "lost" before it got run. It also retains all completed jobs, enabling searching through past workflows and allowing for "live" dependencies, triggering the rerunning of previously completed commands if their dependencies change.

The jobqueue package is also what actually does the main "work" of the system: the server component knows how many commands need to be run and what their resource requirements (memory, time, cpus etc.) are, and submits the appropriate number of jobqueue runner clients to the job scheduler. The jobqueue/scheduler package has the scheduler-specific code that ensures that these runner clients get run on the configured system in the most efficient way possible. E.g. for LSF, if we have 10 commands that need 2GB of memory to run, we will submit a job array of size 10 with 2GB of memory reservation to LSF. The most limited (and therefore potentially least contended) queue capable of running the commands will be chosen. For OpenStack, the cheapest server (in terms of cores and memory) that can run the commands will be spawned, and once there is no more work to do on those servers, they get terminated to free up resources.

The cloud package implements methods for interacting with cloud environments such as OpenStack. The corresponding jobqueue/scheduler package uses these methods to do its work. The static subdirectory contains the html, css and javascript needed for the web interface.
See jobqueue/serverWebI.go for how the web interface backend is implemented. The internal package contains general utility functions, and most notably config.go holds the code for how the command line interface deals with config options.
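The queue guarantees described above can be pictured with a toy model like the following. This is only an illustration of the sub-queue behaviour (ready, running, buried, dependent), not the API of wr's queue package.

	package main

	import "fmt"

	// Job is a toy stand-in for a command with dependencies.
	type Job struct {
		Cmd       string
		DependsOn []string
	}

	// Queue models the guarantees: ready jobs wait to be reserved, a reserved
	// job belongs to exactly one client, failed jobs are buried until the user
	// acts, and dependent jobs only become ready once their dependencies complete.
	type Queue struct {
		ready     []*Job
		dependent []*Job
		running   map[string]*Job
		buried    []*Job
		completed map[string]bool
	}

	func NewQueue() *Queue {
		return &Queue{running: map[string]*Job{}, completed: map[string]bool{}}
	}

	func (q *Queue) depsMet(j *Job) bool {
		for _, d := range j.DependsOn {
			if !q.completed[d] {
				return false
			}
		}
		return true
	}

	func (q *Queue) Add(j *Job) {
		if q.depsMet(j) {
			q.ready = append(q.ready, j)
			return
		}
		q.dependent = append(q.dependent, j)
	}

	// Reserve hands the next ready job to a client; no other client can get it.
	func (q *Queue) Reserve() *Job {
		if len(q.ready) == 0 {
			return nil
		}
		j := q.ready[0]
		q.ready = q.ready[1:]
		q.running[j.Cmd] = j
		return j
	}

	// Bury parks a job that cannot be run until the user takes action.
	func (q *Queue) Bury(j *Job) {
		delete(q.running, j.Cmd)
		q.buried = append(q.buried, j)
	}

	// Done marks a job complete and promotes dependents whose deps are now met.
	func (q *Queue) Done(j *Job) {
		delete(q.running, j.Cmd)
		q.completed[j.Cmd] = true
		still := q.dependent[:0:0]
		for _, d := range q.dependent {
			if q.depsMet(d) {
				q.ready = append(q.ready, d)
			} else {
				still = append(still, d)
			}
		}
		q.dependent = still
	}

	func main() {
		q := NewQueue()
		q.Add(&Job{Cmd: "align.sh"})
		q.Add(&Job{Cmd: "summarise.sh", DependsOn: []string{"align.sh"}})

		j := q.Reserve() // a client reserves "align.sh"
		fmt.Println("running:", j.Cmd)
		q.Done(j) // completing it makes "summarise.sh" ready
		fmt.Println("now ready:", q.Reserve().Cmd)
	}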
Maestro is a SQL-centric tool for orchestrating BigQuery jobs. Maestro also supports data transfers from and to Google Cloud Storage (GCS) and relational databases (presently PostgreSQL and MySQL).

Maestro is a "catalog" of SQL statements. A key feature of Maestro is the ability to infer dependencies by examining the SQL, without any additional configuration. Maestro can execute all tasks in the correct order without a manually specified ordering (i.e. a "DAG"). Execution can be associated with a frequency (cadence) without requiring any cron or cron-like configuration.

Maestro is an ever-running service (daemon). It uses PostgreSQL to store the SQL and all other configuration, state and history. The daemon takes great care to maintain all of its state in PostgreSQL so that it can be stopped or restarted without interrupting any in-progress jobs (in most cases). Maestro records all BigQuery job and other history and has a notion of users and groups, which is useful for attributing costs and resource utilization to users and groups.

Maestro has a basic web-based user interface implemented in React, though its API can also be used directly. Maestro can notify arbitrary applications of job completion via a simple HTTP request. Maestro also provides a Python client library for a more native Python experience. Maestro integrates with Google OAuth for authentication, Google Sheets for simple exports, Github (for SQL revision control) and Slack (for alerts and notifications).

Maestro was designed with simplicity as one of its primary goals. It trades the flexibility usually afforded by configurability in various languages for the transparency and clarity achievable by leveraging the declarative nature of SQL. Maestro works best for environments where BigQuery is the primary store of all data for analytical purposes. E.g. the data may be periodically imported into BigQuery from various databases. Once imported, data may be subsequently summarized or transformed via a sequence of BigQuery jobs. The summarized data can then be exported to external databases/applications for additional processing (e.g. SciPy) and possibly be imported back into BigQuery, and so on. Every step of this process can be orchestrated by Maestro without relying on any external scheduling facility such as cron.

Below is the listing of all the key concepts with explanations. A table is the central object in Maestro. It always corresponds to a table in BigQuery. Maestro code and documentation use the verb "run" with respect to tables. To "run a table" means to perform whatever action is called for in its configuration and store the result in a BigQuery table. A table is (in most cases) defined by a BigQuery SQL statement. There are three kinds of tables in Maestro. A summary table is produced by executing a BigQuery SQL statement (a Query job). An import table is produced by executing SQL on an external database and importing the result into BigQuery. The SQL statement in this case is intentionally restricted to a primitive which supports only SELECT, FROM, WHERE and LIMIT. This is to discourage users from running complex and taxing queries on the database server. The main reason for this SQL statement is to filter out or transform columns; any other processing is best done subsequently in BigQuery. The third kind is a table whose data comes from GCS. The import is triggered via the Maestro API. Such tables are generally used when BigQuery data needs to be processed by an external tool, e.g. SciPy, etc.

A job is a BigQuery job.
BigQuery has three types of jobs: query, extract and load. All three types are used in Maestro. These details are internal but should be familiar to developers. A BigQuery query job is executed as part of running a table. A BigQuery extract job is executed as part of running a table, after the query job is complete. It results in one or more extract files in GCS. Maestro provides signed URLs to the GCS files so that external tools require no authentication to access the data. This is also facilitated via the Maestro pythonlib. A BigQuery load job is executed as part of running an import table. It is the last step of the import, after the external database table data has been copied to GCS. A run is a complex process which happens periodically, according to a frequency. For example, if a daily frequency is defined, then Maestro will construct a run once per day, selecting all tables (including import tables) assigned to this frequency, computing the dependency graph and creating jobs for each table. The jobs are then executed in the correct order based on their position in the graph and the number of workers available. Maestro will also assign priority based on the number of child dependencies a table has, thus running the most "important" tables first. PostgreSQL 9.6 or later is required to run Maestro. Building a "production" binary, i.e. with all assets included in the binary itself, requires Webpack. Webpack is not necessary for "development" mode, which uses Babel for transpilation. Download and compile Maestro with "go get github.com/voxmedia/maestro". (Note that this will create a $GOPATH/bin/maestro binary, which is not very useful; you can delete it.) From here, cd to $GOPATH/src/github.com/voxmedia/maestro and run go build. You should now have a "maestro" binary in this directory. You can also create a "production" binary by running "make build". This will combine all the JavaScript code into a single file and pack it and all other assets into the maestro binary itself, so that to deploy you only need the binary and no other files. Create a PostgreSQL database named "maestro". If you name it something other than that, you will need to provide that name to Maestro via the -db-connect flag, which defaults to "host=/var/run/postgresql dbname=maestro sslmode=disable" and should work on most Linux distros. On MacOS the Postgres socket is likely to be in "/private/tmp" and one way to address this is to run "ln -s /private/tmp /var/run/postgresql". Maestro connects to many services and needs credentials for all of them. These credentials are stored in the database, all encrypted using the same shared secret which must be specified on the command line via the -secret argument. The -secret argument is meant mostly for development; in production it is much more secure to use the -secretpath option pointing to the location of a file containing the secret. From the Google Cloud perspective, it is best to create a project entirely dedicated to Maestro, with the BigQuery and GCS APIs enabled, then create a Service Account (in IAM) dedicated to Maestro, as well as OAuth credentials. The service account will need the BigQuery Editor, Job User and Storage Object Admin roles. Run Maestro like so: ./maestro -secret=whatever where "whatever" is the shared secret you invent and need to remember.
You should now be able to visit the Maestro UI; by default it is at http://localhost:3000. When you click on the log-in link, since at this point Maestro has no OAuth configuration, you will be presented with a form asking for the relevant info, which you will need to provide. You should then be redirected to the Google OAuth login page. From here on the configuration is stored in the database in encrypted form. As the first user of this Maestro instance, you are automatically marked as "admin", which means you can perform any action. As an admin, you should see the "Admin" menu in the upper right. Click on it and select the "Credentials" option. You now need to populate the credentials. The BigQuery, default dataset and GCS bucket are required, while Git and Slack are optional, but highly recommended. Note that the BigQuery dataset and the GCS bucket are not created by Maestro; you need to create those manually. The GCS bucket is used for data exports, and it is generally a good idea to set the data in it to expire after several days or whatever works for you. If you need to import data from external databases, you can add those credentials under the Admin / Databases menu. You may want to create a frequency (also under the Admin menu). A frequency is how periodic jobs are scheduled in Maestro. It is defined by a period and an offset. The period is passed to the time.Truncate() function, and if the result is 0, this is when a run is triggered. The offset is an offset into the period. E.g. to define a frequency that starts a run at 4am UTC, you need to specify a period of 86400 and an offset of 14400 (see the sketch below). Note that Maestro needs to be restarted after these configuration changes (this will be fixed later). At this point you should be able to create a summary table with some simple SQL, e.g. "SELECT 'hello' AS world", save it and run it. If it executes correctly, you should be able to see this new table in the BigQuery UI.
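To make the period and offset mechanics concrete, here is a minimal sketch (not Maestro's actual code; the trigger check is an assumption based on the description above) of evaluating a daily 4am UTC frequency with time.Truncate:

    package main

    import (
        "fmt"
        "time"
    )

    // shouldRun reports whether a run is due at t for a frequency defined by
    // period and offset: the offset-shifted time must fall exactly on a
    // period boundary, i.e. time.Truncate leaves it unchanged.
    func shouldRun(t time.Time, period, offset time.Duration) bool {
        shifted := t.Add(-offset)
        return shifted.Equal(shifted.Truncate(period))
    }

    func main() {
        period := 86400 * time.Second // one day
        offset := 14400 * time.Second // 4 hours into the period
        fmt.Println(shouldRun(time.Date(2021, 6, 1, 4, 0, 0, 0, time.UTC), period, offset)) // true
        fmt.Println(shouldRun(time.Date(2021, 6, 1, 5, 0, 0, 0, time.UTC), period, offset)) // false
    }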
lf is a terminal file manager. Source code can be found in the repository at https://github.com/gokcehan/lf. This documentation can either be read from a terminal using 'lf -doc' or online at https://godoc.org/github.com/gokcehan/lf. You can also use the 'doc' command (default '<f-1>') inside lf to view the documentation in a pager. You can run 'lf -help' to see descriptions of command line options. The following commands are provided by lf: The following command line commands are provided by lf: The following options can be used to customize the behavior of lf: The following environment variables are exported for shell commands: The following commands/keybindings are provided by default: The following additional keybindings are provided by default: Configuration files should be located at: Selection file should be located at: Marks file should be located at: History file should be located at: You can configure the default values of the following variables to change these locations: A sample configuration file can be found at https://github.com/gokcehan/lf/blob/master/etc/lfrc.example. This section shows information about builtin commands. Modal commands do not take any arguments, but instead change the operation mode to read their input conveniently, and so they are meant to be assigned to keybindings. Quit lf and return to the shell. Move the current file selection upwards/downwards by one/half a page/full page. Change the current working directory to the parent directory. If the current file is a directory, then change the current directory to it, otherwise, execute the 'open' command. A default 'open' command is provided to call the default system opener asynchronously with the current file as the argument. A custom 'open' command can be defined to override this default. (See also 'OPENER' variable and 'Opening Files' section) Move the current file selection to the top/bottom of the directory. Toggle the selection of the current file or files given as arguments. Reverse the selection of all files in the current directory (i.e. 'toggle' all files). Selections in other directories are not affected by this command. You can define a new command to select all files in the directory by combining 'invert' with 'unselect' (i.e. `cmd select-all :unselect; invert`), though this will also remove selections in other directories. Remove the selection of all files in all directories. Select files that match the given glob. Unselect files that match the given glob. If there are no selections, save the path of the current file to the copy buffer, otherwise, copy the paths of selected files. If there are no selections, save the path of the current file to the cut buffer, otherwise, copy the paths of selected files. Copy/Move files in copy/cut buffer to the current working directory. Clear file paths in copy/cut buffer. Synchronize copied/cut files with server. This command is automatically called when required. Draw the screen. This command is automatically called when required. Synchronize the terminal and redraw the screen. Load modified files and directories. This command is automatically called when required. Flush the cache and reload all files and directories. Print given arguments to the message line at the bottom. Print given arguments to the message line at the bottom and also to the log file. Print given arguments to the message line at the bottom in red color and also to the log file. Change the working directory to the given argument. Change the current file selection to the given argument.
Remove the current file or selected file(s). Rename the current file using the builtin method. A custom 'rename' command can be defined to override this default. Read the configuration file given in the argument. Simulate key pushes given in the argument. Read a command to evaluate. Read a shell command to execute. (See also 'Prefixes' and 'Shell Commands' sections) Read a shell command to execute piping its standard I/O to the bottom statline. (See also 'Prefixes' and 'Piping Shell Commands' sections) Read a shell command to execute and wait for a key press in the end. (See also 'Prefixes' and 'Waiting Shell Commands' sections) Read a shell command to execute synchronously without standard I/O. Read key(s) to find the appropriate file name match in the forward/backward direction and jump to the next/previous match. (See also 'anchorfind', 'findlen', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Read a pattern to search for a file name match in the forward/backward direction and jump to the next/previous match. (See also 'globsearch', 'incsearch', 'wrapscan', 'ignorecase', 'smartcase', 'ignoredia', and 'smartdia' options and 'Searching Files' section) Save the current directory as a bookmark assigned to the given key. Change the current directory to the bookmark assigned to the given key. A special bookmark "'" holds the previous directory after a 'mark-load', 'cd', or 'select' command. Remove a bookmark assigned to the given key. This section shows information about command line commands. These should be mostly compatible with readline keybindings. A character refers to a unicode code point, a word consists of letters and digits, and a unix word consists of any non-blank characters. Quit command line mode and return to normal mode. Autocomplete the current word. Autocomplete the current word, then you can press the bound key(s) again to cycle completion options. Autocomplete the current word, then you can press the bound key(s) again to cycle completion options backwards. Execute the current line. Interrupt the current shell-pipe command and return to the normal mode. Go to next/previous item in the history. Move the cursor to the left/right. Move the cursor to the beginning/end of line. Delete the next character in forward/backward direction. Delete everything up to the beginning/end of line. Delete the previous unix word. Paste the buffer content containing the last deleted item. Transpose the positions of last two characters/words. Move the cursor by one word in forward/backward direction. Delete the next word in forward direction. Capitalize/uppercase/lowercase the current word and jump to the next word. This section shows information about options to customize the behavior. Character ':' is used as the separator for list options '[]int' and '[]string'. When this option is enabled, the find command starts matching patterns from the beginning of file names, otherwise, it can match at an arbitrary position. Automatically quit the server when there are no clients left connected. When this option is enabled, directory sizes show the number of items inside instead of the size of the directory file. The former needs to be calculated by reading the directory and counting the items inside. The latter is directly provided by the operating system and it does not require any calculation, though it is non-intuitive and it can often be misleading. This option is disabled by default for performance reasons.
This option only has an effect when 'info' has a 'size' field and the pane is wide enough to show the information. A thousand items are counted per directory at most, and bigger directories are shown as '999+'. Show directories first above regular files. Draw boxes around panes with box drawing characters. Format string of error messages shown in the bottom message line. File separator used in environment variables 'fs' and 'fx'. Number of characters prompted for the find command. When this value is set to 0, the find command prompts until there is only a single match left. When this option is enabled, search command patterns are considered as globs, otherwise they are literals. With globbing, '*' matches any sequence, '?' matches any character, and '[...]' or '[^...]' matches character sets or ranges. Otherwise, these characters are interpreted as they are. Show hidden files. On unix systems, hidden files are determined by the value of 'hiddenfiles'. On windows, only files with hidden attributes are considered hidden files. List of hidden file glob patterns. Patterns can be given as relative or absolute paths. Globbing supports the usual special characters, '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. In addition, if a pattern starts with '!', then its matches are excluded from hidden files. Show icons before each item in the list. By default, only two icons, 🗀 (U+1F5C0) and 🗎 (U+1F5CE), are used for directories and files respectively, as they are supported in the unicode standard. Icons can be configured with an environment variable named 'LF_ICONS'. The syntax of this variable is similar to 'LS_COLORS'. See the wiki page for an example icon configuration. Sets the 'IFS' variable in shell commands. It works by adding the assignment to the beginning of the command string as 'IFS='...'; ...'. The reason is that the 'IFS' variable is not inherited by the shell for security reasons. This method assumes a POSIX shell syntax and so it can fail for non-POSIX shells. This option has no effect when the value is left empty. This option does not have any effect on windows. Ignore case in sorting and search patterns. Ignore diacritics in sorting and search patterns. Jump to the first match after each keystroke during searching. List of information shown for directory items at the right side of the pane. Currently supported information types are 'size', 'time', 'atime', and 'ctime'. Information is only shown when the pane width is more than twice the width of information. Send mouse events as input. Show the position number for directory items at the left side of the pane. When 'relativenumber' is enabled, only the current line shows the absolute position and relative positions are shown for the rest. Set the interval in seconds for periodic checks of directory updates. This works by periodically calling the 'load' command. Note that directories are already updated automatically in many cases. This option can be useful when there is an external process changing the displayed directory and you are not doing anything in lf. Periodic checks are disabled when the value of this option is set to zero. Show previews of files and directories at the rightmost pane. If the file has more lines than the preview pane, the rest of the lines are not read. Files containing the null character (U+0000) in the read portion are considered binary files and displayed as 'binary'. Set the path of a previewer file to filter the content of regular files for previewing.
The file should be executable. Five arguments are passed to the file: the first is the current file name; the second, third, fourth, and fifth are the width, height, horizontal position, and vertical position of the preview pane, respectively. A SIGPIPE signal is sent when enough lines are read. If the previewer returns a non-zero exit code, then the preview cache for the given file is disabled. This means that if the file is selected in the future, the previewer is called once again. Preview filtering is disabled and files are displayed as they are when the value of this option is left empty. Set the path of a cleaner file. This file will be called if previewing is enabled, the previewer is set, and the previously selected file had its preview cache disabled. The file should be executable. One argument is passed to the file; the path to the file whose preview should be cleaned. Preview clearing is disabled when the value of this option is left empty. Format string of the prompt shown in the top line. Special expansions are provided, '%u' as the user name, '%h' as the host name, '%w' as the working directory, '%d' as the working directory with a trailing path separator, and '%f' as the file name. The home folder is shown as '~' in the working directory expansion. Directory names are automatically shortened to a single character starting from the leftmost parent when the prompt does not fit the screen. List of ratios of pane widths. The number of items in the list determines the number of panes in the ui. When the 'preview' option is enabled, the rightmost number is used for the width of the preview pane. Show the position number relative to the current line. When 'number' is enabled, the current line shows the absolute position, otherwise nothing is shown. Reverse the direction of sort. Minimum number of offset lines shown at all times in the top and the bottom of the screen when scrolling. The current line is kept in the middle when this option is set to a large value that is bigger than half the number of lines. A smaller offset can be used when the current file is close to the beginning or end of the list to show the maximum number of items. Shell executable to use for shell commands. Shell commands are executed as 'shell shellopts shellflag command -- arguments'. Command line flag used to pass shell commands. List of shell options to pass to the shell executable. Override the 'ignorecase' option when the pattern contains an uppercase character. This option has no effect when 'ignorecase' is disabled. Override the 'ignoredia' option when the pattern contains a character with a diacritic. This option has no effect when 'ignoredia' is disabled. Sort type for directories. Currently supported sort types are 'natural', 'name', 'size', 'time', 'ctime', 'atime', and 'ext'. Number of space characters to show for a horizontal tabulation (U+0009) character. Format string of the file modification time shown in the bottom line. Truncate character shown at the end when the file name does not fit the pane. String shown after commands of shell-wait type. Searching can wrap around the file list. Scrolling can wrap around the file list. The following variables are exported for shell commands: These are referred to with a '$' prefix on POSIX shells (e.g. '$f'), between '%' characters on Windows cmd (e.g. '%f%'), and with a '$env:' prefix on Windows PowerShell (e.g. '$env:f'). Current file selection as a full path. Selected file(s) separated with the value of the 'filesep' option as full path(s). Selected file(s) (i.e.
'fs') if there are any selected files, otherwise the current file selection (i.e. 'f'). Id of the running client. Present working directory. Initial working directory. The value of this variable is set to the current nesting level when you run lf from a shell spawned inside lf. You can add the value of this variable to your shell prompt to make it clear that your shell runs inside lf. For example, with POSIX shells, you can use '[ -n "$LF_LEVEL" ] && PS1="$PS1""(lf level: $LF_LEVEL) "' in your shell configuration file (e.g. '~/.bashrc'). If this variable is set in the environment, use the same value, otherwise set the value to 'start' in Windows, 'open' in MacOS, 'xdg-open' in others. If this variable is set in the environment, use the same value, otherwise set the value to 'vi' on unix, 'notepad' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'less' on unix, 'more' in Windows. If this variable is set in the environment, use the same value, otherwise set the value to 'sh' on unix, 'cmd' in Windows. The following command prefixes are used by lf: The same evaluator is used for the command line and the configuration file for read and shell commands. The difference is that prefixes are not necessary in the command line. Instead, different modes are provided to read corresponding commands. These modes are mapped to the prefix keys above by default. Characters from '#' to newline are comments and ignored: There are three special commands ('set', 'map', and 'cmd') and their variants for configuration. Command 'set' is used to set an option which can be boolean, integer, or string: Command 'map' is used to bind a key to a command which can be a builtin command, a custom command, or a shell command: Command 'cmap' is used to bind a key to a command line command which can only be one of the builtin commands: You can delete an existing binding by leaving the expression empty: Command 'cmd' is used to define a custom command: You can delete an existing command by leaving the expression empty: If there is no prefix then ':' is assumed: An explicit ':' can be provided to group statements until a newline, which is especially useful for 'map' and 'cmd' commands: If you need multiple lines, you can wrap statements in '{{' and '}}' after the proper prefix. Regular keys are assigned to a command with the usual syntax: Keys combined with the shift key simply use the uppercase letter: Special keys are written in between '<' and '>' characters and always use lowercase letters: Angle brackets can be assigned with their special names: Function keys are prefixed with the 'f' character: Keys combined with the control key are prefixed with the 'c' character: Keys combined with the alt key are assigned in two different ways depending on the behavior of your terminal. Older terminals (e.g. xterm) may set the 8th bit of a character when the alt key is pressed. On these terminals, you can use the corresponding byte for the mapping: Newer terminals (e.g. gnome-terminal) may prefix the key with an escape key when the alt key is pressed. lf uses the escape delaying mechanism to recognize alt keys in these terminals (delay is 100ms). On these terminals, keys combined with the alt key are prefixed with the 'a' character: Please note that some key combinations are not possible due to the way terminals work (e.g. control and h combination sends a backspace key instead).
The easiest way to find the name of a key combination is to press the key while lf is running and read the name of the key from the unknown mapping error. Mouse buttons are prefixed with the 'm' character: Mouse wheel events are also prefixed with the 'm' character: The usual way to map a key sequence is to assign it to a named or unnamed command. While this provides a clean way to remap builtin keys as well as other commands, it can be limiting at times. For this reason, the 'push' command is provided by lf. This command is used to simulate key pushes given as its arguments. You can 'map' a key to a 'push' command with an argument to create various keybindings. This is mainly useful for two purposes. First, it can be used to map a command with a command count: Second, it can be used to avoid typing the name when a command takes arguments: One thing to be careful about is that since the 'push' command works with keys instead of commands, it is possible to accidentally create recursive bindings: These types of bindings create a deadlock when executed. Regular shell commands are the most basic command type that is useful for many purposes. For example, we can write a shell command to move selected file(s) to trash. A first attempt to write such a command may look like this: We check '$fs' to see if there are any selected files. Otherwise we just delete the current file. Since this is such a common pattern, a separate '$fx' variable is provided. We can use this variable to get rid of the conditional: The trash directory is checked each time the command is executed. We can move it outside of the command so it would only run once at startup: Since these are one liners, we can drop '{{' and '}}': Finally, note that we set the 'IFS' variable manually in these commands. Instead we could use the 'ifs' option to set it for all shell commands (i.e. 'set ifs "\n"'). This can be especially useful for interactive use (e.g. '$rm $f' or '$rm $fs' would simply work). This option is not set by default as it can behave unexpectedly for new users. However, use of this option is highly recommended and it is assumed in the rest of the documentation. Regular shell commands have some limitations in some cases. When an output or error message is given and the command exits afterwards, the ui is immediately resumed and there is no way to see the message without dropping to the shell again. Also, even when there is no output or error, the ui still needs to be paused while the command is running. This can cause flickering on the screen for short commands and similar distractions for longer commands. Instead of pausing the ui, piping shell commands connect stdin, stdout, and stderr of the command to the statline at the bottom of the ui. This can be useful for programs following the unix philosophy to give no output in the success case, and brief error messages or prompts in other cases. For example, the following rename command prompts for an overwrite in the statline if there is an existing file with the given name: You can also output error messages in the command and they will show up in the statline. For example, an alternative rename command may look like this: Note that input is line buffered and output and error are byte buffered. Waiting shell commands are similar to regular shell commands except that they wait for a key press when the command is finished. These can be useful to see the output of a program before the ui is resumed.
Waiting shell commands are more appropriate than piping shell commands when the command is verbose and the output is best displayed as multiline output. Asynchronous shell commands are used to start a command in the background and then resume operation without waiting for the command to finish. Stdin, stdout, and stderr of the command are neither connected to the terminal nor to the ui. One of the more advanced features in lf is remote commands. All clients connect to a server on startup. It is possible to send commands to all or any of the connected clients over the common server. This is used internally to notify file selection changes to other clients. To use this feature, you need to use a client which supports communicating with a UNIX-domain socket. The OpenBSD implementation of netcat (nc) is one such example. You can use it to send a command to the socket file: Since such a client may not be available everywhere, lf comes bundled with a command line flag to be used as such. When using lf, you do not need to specify the address of the socket file. This is the recommended way of using remote commands since it is shorter and immune to socket file address changes: In this command, 'send' is used to send the rest of the string as a command to all connected clients. You can optionally give it an id number to send a command to a single client: All clients have a unique id number but you may not be aware of the id number when you are writing a command. For this purpose, an '$id' variable is exported to the environment for shell commands. The value of this variable is set to the process id of the client. You can use it to send a remote command from a client to the server which in return sends a command back to itself. So now you can display a message in the current client by calling the following in a shell command: Since lf does not have control flow syntax, remote commands are used for such needs. For example, you can configure the number of columns in the ui with respect to the terminal width as follows: Besides the 'send' command, there is a 'quit' command to quit the server when there are no connected clients left, and a 'quit!' command to force quit the server by closing client connections first: Lastly, there is a 'conn' command to connect to the server as a client. This should not be needed for users. lf uses its own builtin copy and move operations by default. These are implemented as asynchronous operations and progress is shown in the bottom ruler. These commands do not overwrite existing files or directories with the same name. Instead, a suffix that is compatible with the '--backup=numbered' option in GNU cp is added to the new files or directories. Only file modes are preserved and all other attributes are ignored including ownership, timestamps, context, and xattr. Special files such as character and block devices, named pipes, and sockets are skipped and links are not followed. Moving is performed using the rename operation of the underlying OS. For cross-device moving, lf falls back to copying and then deletes the original files if there are no errors. Operation errors are shown in the message line as well as the log file and they do not preemptively finish the corresponding file operation. File operations can be performed on the currently selected file or alternatively on multiple files by selecting them first. When you 'copy' a file, lf doesn't actually copy the file on the disk, but only records its name to a file. The actual file copying takes place when you 'paste'.
Similarly, 'paste' after a 'cut' operation moves the file. You can customize copy and move operations by defining a 'paste' command. This is a special command that is called when it is defined instead of the builtin implementation. You can use the following example as a starting point: Some useful things to consider are using the backup ('--backup') and/or preserve attributes ('-a') options with the 'cp' and 'mv' commands if they support them (i.e. the GNU implementation), changing the command type to asynchronous, or using the 'rsync' command with its progress bar option for copying and feeding the progress to the client periodically with remote 'echo' calls. By default, lf does not assign the 'delete' command to a key to protect new users. You can customize file deletion by defining a 'delete' command. You can also assign a key to this command if you like. Example commands to move selected files to a trash folder and to remove files completely after a prompt are provided in the example configuration file. There are two mechanisms implemented in lf to search a file in the current directory. Searching is the traditional method to move the selection to a file matching a given pattern. Finding is an alternative way to search for a pattern possibly using fewer keystrokes. The searching mechanism is implemented with the commands 'search' (default '/'), 'search-back' (default '?'), 'search-next' (default 'n'), and 'search-prev' (default 'N'). You can enable the 'globsearch' option to match with a glob pattern. Globbing supports '*' to match any sequence, '?' to match any character, and '[...]' or '[^...]' to match character sets or ranges. You can enable the 'incsearch' option to jump to the current match at each keystroke while typing. In this mode, you can either use 'cmd-enter' to accept the search or use 'cmd-escape' to cancel the search. Alternatively, you can also map some other commands with 'cmap' to accept the search and execute the command immediately afterwards. Possible candidates are 'up', 'down' and their variants, 'top', 'bottom', 'updir', and 'open' commands. For example, you can use arrow keys to finish the search with the following mappings: The finding mechanism is implemented with the commands 'find' (default 'f'), 'find-back' (default 'F'), 'find-next' (default ';'), 'find-prev' (default ','). You can disable the 'anchorfind' option to match a pattern at an arbitrary position in the filename instead of the beginning. You can set the number of keys to match using the 'findlen' option. If you set this value to zero, then the keys are read until there is only a single match. Default values of these two options are set to jump to the first file with the given initial. Some options affect both searching and finding. You can disable the 'wrapscan' option to prevent searches from wrapping around at the end of the file list. You can disable the 'ignorecase' option to match cases in the pattern and the filename. This option is already automatically overridden if the pattern contains upper case characters. You can disable the 'smartcase' option to disable this behavior. Two similar options 'ignoredia' and 'smartdia' are provided to control matching diacritics in latin letters. You can define an 'open' command (default 'l' and '<right>') to configure file opening. This command is only called when the current file is not a directory, otherwise the directory is entered instead.
You can define it just as you would define any other command: It is possible to use different command types: You may want to use either file extensions or mime types from the 'file' command: You may want to use 'setsid' before your opener command to have persistent processes that continue to run after lf quits. The following command is provided by default: You may also use any other existing file openers as you like. Possible options are 'libfile-mimeinfo-perl' (executable name is 'mimeopen'), 'rifle' (ranger's default file opener), or 'mimeo' to name a few. lf previews files on the preview pane by printing the file until the end or the preview pane is filled. This output can be enhanced by providing a custom preview script for filtering. This can be used to highlight source code, list the contents of archive files, or view pdf or image files as text, to name a few. For coloring, lf recognizes ansi escape codes. In order to use this feature, you need to set the value of the 'previewer' option to the path of an executable file. lf passes the current file name as the first argument and the height of the preview pane as the second argument when running this file. Output of the execution is printed in the preview pane. You may want to use the same script in your pager mapping as well if any: For the 'less' pager, you may instead utilize the 'LESSOPEN' mechanism so that useful information about the file, such as its full path, can be displayed in the statusline below: Since this script is called for each file selection change, it needs to be as efficient as possible, and this responsibility is left to the user. You may use file extensions to determine the type of file more efficiently compared to obtaining mime types from the 'file' command. Extensions can then be used to match cleanly within a conditional: Another important consideration for efficiency is the use of programs with short startup times for preview. For this reason, 'highlight' is recommended over 'pygmentize' for syntax highlighting. Besides, it is also important that the application processes the file on the fly rather than first reading it into memory and then doing the processing afterwards. This is especially relevant for big files. lf automatically closes the previewer script output pipe with a SIGPIPE when enough lines are read. When everything else fails, you can make use of the height argument to only feed the first portion of the file to a program for preview. Note that some programs may not respond well to SIGPIPE and may exit with a non-zero return code, which disables caching. You may add a trailing '|| true' command to avoid such errors: You may also use an existing preview filter as you like. Your system may already come with a preview filter named 'lesspipe'. These filters may have a mechanism to add user customizations as well. See the related documentation for more information. lf changes the working directory of the process to the current directory so that shell commands always work in the displayed directory. After quitting, it returns to the original directory where it was first launched, like all shell programs. If you want to stay in the current directory after quitting, you can use one of the example wrapper shell scripts provided in the repository. There is a special command 'on-cd' that runs a shell command when it is defined and the directory is changed. You can define it just as you would define any other command: If you want to print escape sequences, you may redirect 'printf' output to '/dev/tty'.
The following xterm-specific escape sequence sets the terminal title to the working directory: This command runs whenever you change directory, but not on startup. You can add an extra call to make it run on startup as well: Note that all shell commands are possible but `%` and `&` are usually more appropriate as `$` and `!` cause flickers and pauses respectively. lf tries to automatically adapt its colors to the environment. It starts with a default colorscheme and updates colors using values of existing environment variables, possibly overwriting its previous values. Colors are set in the following order: Please refer to the corresponding man pages for more information about 'LSCOLORS' and 'LS_COLORS'. 'LF_COLORS' is provided with the same syntax as 'LS_COLORS' in case you want to configure colors only for lf but not ls. This can be useful since there are some differences between ls and lf, though one should expect the same behavior for common cases. You can configure lf colors in two different ways. First, you can only configure 8 basic colors used by your terminal and lf should pick up those colors automatically. Depending on your terminal, you should be able to select your colors from a 24-bit palette. This is the recommended approach as colors used by other programs will also match each other. Second, you can set the values of the environment variables mentioned above for fine-grained customization. Note that 'LS_COLORS/LF_COLORS' are more powerful than 'LSCOLORS' and they can be used even when GNU programs are not installed on the system. You can combine this second method with the first method for best results. Lastly, you may also want to configure the colors of the prompt line to match the rest of the colors. Colors of the prompt line can be configured using the 'promptfmt' option, which can include hardcoded colors as ansi escapes. See the default value of this option to have an idea about how to color this line. It is worth noting that lf uses as many colors as are advertised by your terminal's entry in your system's terminfo or infocmp database; if this is not present, lf will default to an internal database. For terminals supporting 24-bit (or "true") color that do not have a database entry (or one that does not advertise all capabilities), support can be enabled by either setting the '$COLORTERM' variable to "truecolor" or ensuring '$TERM' is set to a value that ends with "-truecolor". Default lf colors are mostly taken from GNU dircolors defaults. These defaults use 8 basic colors and the bold attribute. Default dircolors entries with background colors are simplified to avoid confusion with the current file selection in lf. Similarly, there are only file type matchings and extension matchings are left out for simplicity. Default values are as follows given with their matching order in lf: Note that lf first tries matching file names and then falls back to file types. The full order of matchings from most specific to least is as follows: For example, given a regular text file '/path/to/README.txt', the following entries are checked in the configuration and the first one to match is used: Given a regular directory '/path/to/example.d', the following entries are checked in the configuration and the first one to match is used: Note that glob-like patterns do not actually perform glob matching for performance reasons. For example, you can set a variable as follows: Having all entries on a single line can make it hard to read.
You may instead divide it into multiple lines in between double quotes by escaping newlines with backslashes as follows: Having such a long variable definition in a shell configuration file might be undesirable. You may instead put this definition in a separate file and source it in your shell configuration file as follows: See the Wikipedia page for ansi escape codes: https://en.wikipedia.org/wiki/ANSI_escape_code. Icons are configured using the 'LF_ICONS' environment variable. This variable uses the same syntax as 'LS_COLORS/LF_COLORS'. Instead of colors, you should put a single character as the value of each entry. Do not forget to enable the 'icons' option to see the icons. Default values are as follows given with their matching order in lf: See the wiki page for an example icons configuration: https://github.com/gokcehan/lf/wiki/Icons.
NanoMux is a package of HTTP request routers for the Go language. The package has three types that can be used as routers. The first one is the Resource, which represents the path segment resource. The second one is the Host. It represents the host segment of the URL but also takes on the role of the root resource when HTTP method handlers are set. The third one is the Router, which supports registering multiple hosts and resources. It passes the request to the matching host and, when there is no matching host, to the root resource. In NanoMux terms, hosts and resources are called responders. Responders are organized into a tree. The request's URL segments are matched against the host and corresponding resources' templates in the tree. The request passes through each matching responder in its URL until it reaches the last segment's responder. To pass the request to the next segment's responder, the request passers of the Router, Host, and Resource are called. When the request reaches the last responder, that responder's request handler is called. The request handler is responsible for calling the responder's HTTP method handler. The request passer, request handler, and HTTP method handlers can all be wrapped with middleware. The NanoMux types provide many methods, but most of them are for convenience. Sections below discuss the main features of the package. Based on the segments they comprise, there are three types of templates: static, pattern, and wildcard. Static templates have no regex or wildcard segments. Pattern templates have one or more regex segments and/or one wildcard segment and static segments. A regex segment must be in curly braces and consists of a value name and a regex pattern separated by a colon: "{valueName:regexPattern}". The wildcard segment only has a value name: "{valueName}". There can be only one wildcard segment in a template. Wildcard templates have only one wildcard segment and no static or regex segments. The host segment templates must always follow the scheme with the colon ":" and the two slashes "//" of the authority component. The path segment templates may be preceded by a slash "/" or a scheme, a colon ":", and three slashes "///" (two authority component slashes and the third separator slash). Like in "https:///blog". The preceding slash is just a separator. It doesn't denote the root resource, except when it is used alone. The template "/" or "https:///" denotes the root resource. Both the host and path segment templates can have a trailing slash. When its template starts with "https", unless configured to redirect, the host or resource will not handle a request made over HTTP and will respond with a "404 Not Found" status code. When its template has a trailing slash, unless configured to be lenient or strict, the resource will redirect a request made to a URL without a trailing slash to the one with a trailing slash, and vice versa. The trailing slash has no effect on the host. But if the host is a subtree handler and should respond to the request, its configurations related to the trailing slash will be used on the last path segment. As a side note, every parent resource should have a trailing slash in its template. For example, in a resource tree "/parent/child/grandchild", the two non-leaf resources should be referenced with templates "parent/" and "child/". NanoMux doesn't force this, but it's good practice to follow. It helps the clients avoid forming a broken URL when adding a relative URL to the base URL.
By default, if the resource has a trailing slash, NanoMux redirects the requests made to the URL without a trailing slash to the URL with a trailing slash. So the clients will have the correct URL. Templates can have a name. The name segment comes at the beginning of the host or path segment templates, but after the slashes. The name segment begins with a "$" sign and is separated from the template's contents by a colon ":". The template's name comes between the "$" sign and the colon ":". The name given in the template can be used to retrieve the host or resource from its parent (the router is considered the host's parent). For convenience, a regex segment's or wildcard segment's value name is used as the name of the template when there is no template name and no other regex segments. For example, templates "{color:red|green|blue}", `day {day:(?:3[01]|[12]\d|0?[1-9])}`, "{model}" have names color, day, and model, respectively. If the template's static part needs a "$" sign at the beginning or curly braces anywhere, they can be escaped with a backslash `\`. Like in the template `\$tatic\{template\}`. If, for some reason, the template name or value name needs a colon ":", it can also be escaped with a backslash: `$smileys\:):{smiley\:)}`. When retrieving the host, resource, or value of the regex or wildcard segment, names are used unescaped without a backslash `\`. Constructors of the Host and Resource types and some methods take URL templates or path templates. In the URL and path templates, the scheme and trailing slash belong to the last segment. In the URL template, after the host segment, if the path component contains only a slash "/", it's considered a trailing slash of the host template. The host template's trailing slash is used only for the last path segment when the host is a subtree handler and should respond to the request. It has no effect on the host itself. In templates, disallowed characters must be used without percent-encoding, except for the slash "/". Because the slash "/" is a separator, when it is needed within a template it must be replaced with %2f or %2F. Hosts and resources may have child resources with all three types of templates. In that case, the request's path segments are first matched against child resources with a static template. If no resource with a static template matches, then child resources with a pattern template are matched in the order of their registration. Lastly, if there is no child resource with a pattern template that matches the request's path segment, a child resource with a wildcard template accepts the request. Hosts and resources can have only one direct child resource with a wildcard template. The Host and Resource types implement the http.Handler interface. They can be constructed, have HTTP method handlers set with the SetHandlerFor method, and can be registered with the RegisterResource or RegisterResourceUnder methods to form a resource tree. There is one restriction: the root resource cannot be registered under a host. It can be used without a host or be registered with the router. When the Host type is used, the root resource is implied. Handlers must return true if they respond to the request. Sometimes middlewares and the responder itself need to know whether the request was handled or not.
For example, when the middleware responds to the request instead of calling the request passer, the responder may assume that none of its child resources responded to the request in its subtree, so it responds with "404 Not Found" to the request that was already responded to. To prevent this, the middleware's handler must return true if it responds to the request, or it must return the value returned from the argument handler. Sometimes resources have to handle HTTP methods they don't support. NanoMux provides a default not allowed HTTP method handler that responds with a "405 Method Not Allowed" status code, listing all HTTP methods the resource supports in the "Allow" header. But when the host or resource needs a custom implementation, its SetHandlerFor method can be used to replace the default handler. To denote the not allowed HTTP method handler, the exclamation mark "!" must be used instead of an HTTP method. In addition to the not allowed HTTP method handler, if the host or resource has at least one HTTP method handler, NanoMux also provides a default OPTIONS HTTP method handler. The host and resource allow setting a handler for a child resource in their subtree. If the subtree resource doesn't exist, it will be created. It's possible to retrieve a subtree resource with the method Resource. If the subtree resource doesn't exist, the method Resource creates it. If an existing resource must be retrieved, the method RegisteredResource can be used. If there is a need for shared data, it can be set with the SetSharedData method. The shared data can be retrieved in the handlers using the ResponderSharedData method of the passed *Args argument. Hosts and resources can have their own shared data. Handlers retrieve the shared data of their responder. Hosts and resources can be implemented as a type with methods. Each method that has a name beginning with "Handle" and has the signature of a nanomux.Handler is used as an HTTP method handler. The remaining part of the method's name is considered an HTTP method. It is possible to set the implementation later with the SetImplementation method of the Host and Resource. The implementation may also be set for a child resource in the subtree with the method SetImplementationAt. Hosts and resources configured to be a subtree handler respond to the request when there is no matching resource in their subtree. In the resource tree, if resource-12 is a subtree handler, it handles the request to a path "/resource-1/resource-12/non-existent-resource". Subtree handlers can get the remaining part of the path with the RemainingPath method of the *Args argument. The remaining path starts with a slash "/" if the subtree handler has no trailing slash in its template, otherwise it starts without a trailing slash. A subtree handler can also be used as a file server. Let's say we have the following resource tree: When the request is made to a URL "http://example.com/resource-1/resource-12", the host's request passer is called. It passes the request to the resource-1. Then the resource-1's request passer is called and it passes the request to the resource-12. The resource-12 is the last resource in the URL, so its request handler is called. The request handler is responsible for calling the resource's HTTP method handler. If there is no handler for the request's method, it calls the not allowed HTTP method handler. All of these handlers and the request passer can be wrapped in middleware. 
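The snippet that the next paragraph refers to is not reproduced in this text; the following is a minimal sketch of what it could look like. The host variable, the "/admin/" path, the nanomux.Handler signature, and validCredentials are assumptions for illustration; only the WrapRequestPasserAt method name and the CheckCredentials idea come from the surrounding description.

    // CheckCredentials is a hypothetical middleware. It calls the wrapped
    // request passer only when the request carries valid credentials;
    // otherwise it reports false so that no further segments are matched.
    func CheckCredentials(next nanomux.Handler) nanomux.Handler {
        return func(w http.ResponseWriter, r *http.Request, a *nanomux.Args) bool {
            if !validCredentials(r) { // validCredentials is assumed
                return false
            }
            return next(w, r, a)
        }
    }

    // Wrap the request passer of the "admin" resource in the host's subtree
    // (host is assumed to be a *nanomux.Host).
    host.WrapRequestPasserAt("/admin/", CheckCredentials)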
In the above snippet, the WrapRequestPasserAt method wraps the request passer of the "admin" resource with the CheckCredentials middleware. The CheckCredentials calls the "admin" resource's request passer if the credentials are valid; if not, no further segments will be matched, the request will be dropped, and the client will receive a "404 Not Found" status code. In the above case, "admin" is a dormant resource. But as its name states, the request passer's purpose is to pass the request to the next resource. It's called even when the resource is dormant. Unlike the request passer, the request handler is called only when the host or resource is the one that must respond to the request. The request handler can be wrapped when the middleware must be called before any HTTP method handler. The WrapRequestPasser, WrapRequestHandler, and WrapHandlerOf methods of the Host and Resource types wrap their request passer, request handler, and the HTTP method handlers, respectively. The WrapRequestPasserAt, WrapRequestHandlerAt, and WrapPathHandlerOf methods wrap the request passer and the respective handlers of the child resource at the path. The WrapSubtreeRequestPassers, WrapSubtreeRequestHandlers, and WrapSubtreeHandlersOf methods wrap the request passer and the respective handlers of all the resources in the host's or resource's subtree. When calling the WrapHandlerOf, WrapPathHandlerOf, and WrapSubtreeHandlersOf methods, "*" may be used to denote all HTTP methods for which handlers exist. When "*" is used instead of an HTTP method, all the existing HTTP method handlers of the responder are wrapped. The Router type is more suitable when multiple hosts with different root domains or subdomains are needed. It is possible to use an http.Handler and an http.HandlerFunc with NanoMux. For that, NanoMux provides four converters: Hr, HrWithArgs, FnHr, and FnHrWithArgs. The Hr and HrWithArgs convert the http.Handler, while the FnHr and FnHrWithArgs convert the function with the signature of the http.HandlerFunc to the nanomux.Handler. Most of the time, when there is a need to use an http.Handler and http.HandlerFunc, it's to utilize handlers written outside the context of NanoMux, and those handlers don't use the *Args argument. The Hr and FnHr converters return a handler that ignores the *Args argument instead of inserting it into the request's context, which is a slower operation. These converters must be used when the http.Handler and http.HandlerFunc handlers don't need the *Args argument. If they are written to use the *Args argument, then it's better to change their signatures as well. One situation where http.Handler and http.HandlerFunc might be considered when writing handlers is to use middleware with the signature of func(http.Handler) http.Handler. But for middleware with that signature, NanoMux provides an Mw converter. The Mw converter converts the middleware with the signature of func(http.Handler) http.Handler to the middleware with the signature of func(nanomux.Handler) nanomux.Handler, so it can be used to wrap the NanoMux handlers.
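As a rough illustration of these converters (res below is assumed to be an existing Host or Resource; only the Mw, FnHr, WrapHandlerOf, and SetHandlerFor names and the "*" wildcard come from the description above, so exact signatures are assumptions):

    // logRequests is ordinary net/http middleware with the
    // func(http.Handler) http.Handler signature.
    logRequests := func(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            log.Printf("%s %s", r.Method, r.URL.Path)
            next.ServeHTTP(w, r)
        })
    }

    // Convert it with Mw and wrap every existing HTTP method handler of the
    // responder ("*" denotes all methods with handlers).
    res.WrapHandlerOf("*", nanomux.Mw(logRequests))

    // Reuse a handler written as an http.HandlerFunc that doesn't need *Args.
    res.SetHandlerFor("GET", nanomux.FnHr(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "ok")
    }))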
Package dynamodb provides the client and types for making API requests to Amazon DynamoDB. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the AWS Management Console to monitor resource utilization and performance metrics. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an AWS region, providing built-in high availability and data durability. See https://docs.aws.amazon.com/goto/WebAPI/dynamodb-2012-08-10 for more information on this service. See the dynamodb package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/ To use Amazon DynamoDB with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon DynamoDB client DynamoDB for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/#New Utility helpers to marshal and unmarshal AttributeValue to and from Go types can be found in the dynamodbattribute sub package. This package provides specialized functions for the common ways of working with AttributeValues, such as map[string]*AttributeValue, []*AttributeValue, and directly with *AttributeValue. This is helpful for marshaling Go types for API operations such as PutItem, and unmarshaling Query and Scan APIs' responses. See the dynamodbattribute package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/dynamodbattribute/ The expression package provides utility types and functions to build DynamoDB expressions for type-safe construction of API ExpressionAttributeNames and ExpressionAttributeValues. The package represents the various DynamoDB Expressions as structs named accordingly. For example, ConditionBuilder represents a DynamoDB Condition Expression, an UpdateBuilder represents a DynamoDB Update Expression, and so on. See the expression package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/dynamodb/expression/
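As a brief example (the region and table name below are placeholders), a client can be created from a session and a Go struct marshaled with dynamodbattribute into the attribute-value map that PutItem expects:

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/dynamodb"
        "github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
    )

    type Record struct {
        ID    string `dynamodbav:"id"`
        Score int    `dynamodbav:"score"`
    }

    func main() {
        // The session picks up credentials from the environment or shared config.
        sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
        svc := dynamodb.New(sess)

        // Marshal a Go struct into map[string]*dynamodb.AttributeValue.
        item, err := dynamodbattribute.MarshalMap(Record{ID: "abc", Score: 42})
        if err != nil {
            log.Fatal(err)
        }

        _, err = svc.PutItem(&dynamodb.PutItemInput{
            TableName: aws.String("example-table"), // placeholder table name
            Item:      item,
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("item stored")
    }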
Package withings is a client module for working with the Withings (previously Nokia Health (previously Withings)) API. The current version (v2) of this module has been updated to work with the newer OAuth2 implementation of the API. As with all OAuth2 APIs, you must obtain authorization to access the user's data. To do so you must first register your application with the Withings API to obtain a ClientID and ClientSecret.

Once you have your client information, you can create a new client. You will also need to provide the redirect URL you supplied during application registration. This URL is where the user will be redirected, and it will include the code you need to generate the access token needed to access the user's data. Under normal circumstances you should have an http server listening on that address and pull the code from the URL query parameters.

You can now use the client to generate the authorization URL the user needs to navigate to. This URL will take the user to a page that will prompt them to allow your application to access their data. The state string returned by the method is a randomly generated BASE64 string created with the crypto/rand module. It will be returned in the redirect as a query parameter, and the two values should be verified to match. This isn't required, but it is useful for security reasons.

Using the code returned in the redirect's query parameters, you can generate a new user. This user struct can be used to immediately perform actions against the user's data. It also contains the token you should save somewhere for reuse. Obviously use whatever context you would like here. Make sure to save at least the refreshToken for accessing the user's data at a later date. You may also save the accessToken, but it does expire, and creating a new client from saved token data only requires the refreshToken. You can easily create a user from a saved token using the NewUserFromRefreshToken method. A working, configured client is required for the user generated from this method to work.

The user struct has various methods associated with each API endpoint to perform data retrieval. The methods take a specific param struct specifying the API options to use on the request. The API is a bit "special", so the params vary a bit between methods. The client does what it can to smooth those out, but there is only so much that can be done.

Every method has two forms: one that accepts a context and one that does not. This allows you to provide a context on each request if you would like to. By default, all methods utilize a context to time out the request to the API. The value of the timeout is stored on the Client and can be accessed and set via Client.Timeout. Setting it is _not_ thread safe and should only be done at client creation. If you need to change the timeout for different requests, use the methodCtx variant of the method.

By default, the state generated by AuthCodeURL utilizes crypto/rand. If you would like to implement your own random method, you can do so by assigning the function to the Rand field of the Client struct. The function should satisfy the Rand type. Again, this is _not_ thread safe, so only do this at client creation.

By default, every returned response will be parsed and the parsed data returned. If you need access to the raw response data, you can enable it by setting the SaveRawResponse field of the client struct to true. This should be done at client creation time. With it set to true, the RawResponse field of the returned structs will include the raw response.
Some data request methods include a parseResponse field on the params struct. If this is set, additional parsing is performed to make the data more usable; this can be seen on GetBodyMeasures, for example. You can include the path fields sent to the API by setting IncludePath to true on the client. This is primarily used for debugging but could be helpful in some situations. By default, the client will request all known scopes. If you would like to pare this down, you can change the scope by using the SetScope method of the client. Consts in the form of ScopeXxx are provided to aid selection.
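As a small, self-contained illustration of the redirect-handling step described above, here is a plain net/http handler that pulls the code and state query parameters out of the redirect URL. It uses only the standard library; the callback path, port, and expectedState value are arbitrary example choices and must match your registered redirect URL and the state returned by AuthCodeURL.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // expectedState stands in for the random state string returned by AuthCodeURL.
        expectedState := "state-from-AuthCodeURL"

        // The path and port are arbitrary; they must match the redirect URL
        // registered with the Withings API.
        http.HandleFunc("/callback", func(w http.ResponseWriter, r *http.Request) {
            q := r.URL.Query()
            if q.Get("state") != expectedState {
                http.Error(w, "state mismatch", http.StatusBadRequest)
                return
            }
            code := q.Get("code")
            // Hand the code off to the withings client to create the user / fetch tokens.
            fmt.Fprintln(w, "received authorization code")
            log.Printf("authorization code: %s", code)
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }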
Package kvcert is a simple utility that utilizes the azure-sdk-for-go to fetch a certificate from Azure Key Vault. The certificate can then be used in your Go web server to support TLS communication. A trivial example is below. This example uses the following environment variables:

    KEY_VAULT_NAME: name of your Azure Key Vault
    KEY_VAULT_CERT_NAME: name of your certificate in Azure Key Vault
    AZURE_TENANT_ID: Azure tenant id (not visible in the example, but required by azure-sdk-for-go)
    AZURE_CLIENT_ID: Azure client id (not visible in the example, but required by azure-sdk-for-go)
    AZURE_CLIENT_SECRET: Azure client secret (not visible in the example, but required by azure-sdk-for-go)
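Since kvcert's own function names aren't spelled out in this text, the sketch below hedges that part: fetchCertAndKey is a hypothetical stand-in for whatever kvcert returns (assumed here to be PEM-encoded certificate and key bytes), while the TLS wiring around it is standard library.

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "os"
    )

    // fetchCertAndKey is a hypothetical placeholder for the kvcert call that
    // retrieves the certificate and private key from Azure Key Vault using
    // KEY_VAULT_NAME and KEY_VAULT_CERT_NAME.
    func fetchCertAndKey(vaultName, certName string) (certPEM, keyPEM []byte, err error) {
        // ... call kvcert / azure-sdk-for-go here ...
        return nil, nil, nil
    }

    func main() {
        certPEM, keyPEM, err := fetchCertAndKey(os.Getenv("KEY_VAULT_NAME"), os.Getenv("KEY_VAULT_CERT_NAME"))
        if err != nil {
            log.Fatal(err)
        }

        // Build a tls.Certificate from the PEM data and serve HTTPS with it.
        cert, err := tls.X509KeyPair(certPEM, keyPEM)
        if err != nil {
            log.Fatal(err)
        }

        srv := &http.Server{
            Addr:      ":8443",
            Handler:   http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte("ok")) }),
            TLSConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
        }
        // Empty cert/key file paths are fine because TLSConfig already holds the certificate.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }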
Package upload is an http service that stores uploaded files, and a client for this service. The service can resume the upload of a file if the client is the uploader. It has a directory in its root storage for each user. There may be no user if the client is just curl or another non-specialized client. The service consists of these packages:

- ./cmd/uploader - the client. It can resume an upload of a file at any later time.
- ./cmd/uploadserver - the server side of the upload service.
- ./cmd/decodejournal - a utility used to dump the content of a journal file. The journal is used to allow an upload to be resumed.

You build all these packages with:

- make all
Package nutsak (Network Utility Swiss-Army Knife) provides a means of tunneling traffic between two endpoints. Here is sample code to create a Pair: This will create a TCP listener on port 4444 that forks each new connection. Any received data will be written to STDOUT. The Network Utilities (NUts) are created from seeds. Below are the supported SEED TYPES along with their documentation:

FILE:addr[,mode=(append|read|write)]

This seed takes an address that is an absolute or relative filename. It is used to read or write a file on disk. The default mode is read.

STDIO:
Aliases: -, STDIN, STDOUT

This seed takes no address or options. It can be used to read from stdin or write to stdout.

TCP:addr

This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to make an outgoing TCP connection.

TCP-LISTEN:addr[,fork]
Aliases: TCP-L

This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to listen on the provided TCP address. The fork option causes the TCP listener to accept multiple connections in parallel.

TLS:addr[,ca=PATH,cert=PATH,key=PATH,verify]

This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to make an outgoing TLS connection. The ca, cert, and key options take a filepath (DER or PEM formatted). The verify option determines if the server-side CA should be verified. The cert and key options must be used together. If verify is specified, a ca must also be specified.

TLS-LISTEN:addr[,ca=PATH,cert=PATH,fork,key=PATH,verify]
Aliases: TLS-L

This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to listen on the provided TCP address. The ca, cert, and key options take a filepath (DER or PEM formatted). The fork option causes the TLS listener to accept multiple connections in parallel. The verify option determines if the client-side certificate should be verified. The cert and key options are mandatory. If verify is specified, a ca must also be specified.

UDP:addr

This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to make an outgoing UDP connection.

UDP-LISTEN:addr
Aliases: UDP-L

This seed takes an address of the form [IP:]PORT. The IP is optional and essentially defaults to localhost, or more specifically, 0.0.0.0 or [::]. This seed is used to listen on the provided UDP address.
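The nutsak sample code itself is omitted above, and its API isn't described in this text, so rather than guess at it, the sketch below shows, in plain standard-library Go, the behavior a TCP-LISTEN:4444,fork seed paired with STDIO would produce: a forking listener on port 4444 whose received data is copied to stdout.

    package main

    import (
        "io"
        "log"
        "net"
        "os"
    )

    func main() {
        // Listen on TCP port 4444 on all interfaces (what [IP:]PORT defaults to).
        ln, err := net.Listen("tcp", ":4444")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            // "fork": handle each connection in parallel, copying received data to STDOUT.
            go func(c net.Conn) {
                defer c.Close()
                if _, err := io.Copy(os.Stdout, c); err != nil {
                    log.Print(err)
                }
            }(conn)
        }
    }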
Package smpp implements SMPP protocol v3.4. It allows easier creation of SMPP clients and servers by providing utilities for PDU and session handling. In order to do any kind of interaction, you first need to create an SMPP Session (https://godoc.org/github.com/Derek-meng/smpp#Session). The Session is the main carrier of the protocol and enforcer of the specification rules. A naked session can be created with: But it's much more convenient to use helpers that do the binding with the remote SMSC and return you a session prepared for sending: And once you have the session, it can be used for sending PDUs to the bound peer. A session that is no longer used must be closed: If you want to handle incoming requests to the session, specify an SMPPHandler in the session configuration when creating a new session, similarly to HTTPHandler from the _net/http_ package: Detailed examples for SMPP client and server can be found in the examples dir.
Package gamelift provides the client and types for making API requests to Amazon GameLift. Amazon GameLift is a managed service for developers who need a scalable, dedicated server solution for their multiplayer games. Use Amazon GameLift for these tasks: (1) set up computing resources and deploy your game servers, (2) run game sessions and get players into games, (3) automatically scale your resources to meet player demand and manage costs, and (4) track in-depth metrics on game server performance and player usage.

The Amazon GameLift service API includes two important function sets:

Manage game sessions and player access -- Retrieve information on available game sessions; create new game sessions; send player requests to join a game session.

Configure and manage game server resources -- Manage builds, fleets, queues, and aliases; set auto-scaling policies; retrieve logs and metrics.

This reference guide describes the low-level service API for Amazon GameLift. You can use the API functionality with these tools:

The Amazon Web Services software development kit (AWS SDK (http://aws.amazon.com/tools/#sdk)) is available in multiple languages (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-supported.html#gamelift-supported-clients), including C++ and C#. Use the SDK to access the API programmatically from an application, such as a game client.

The AWS command-line interface (http://aws.amazon.com/cli/) (CLI) tool is primarily useful for handling administrative actions, such as setting up and managing Amazon GameLift settings and resources. You can use the AWS CLI to manage all of your AWS services.

The AWS Management Console (https://console.aws.amazon.com/gamelift/home) for Amazon GameLift provides a web interface to manage your Amazon GameLift settings and resources. The console includes a dashboard for tracking key resources, including builds and fleets, and displays usage and performance metrics for your games as customizable graphs.

Amazon GameLift Local is a tool for testing your game's integration with Amazon GameLift before deploying it on the service. This tool supports a subset of key API actions, which can be called from either the AWS CLI or programmatically. See Testing an Integration (http://docs.aws.amazon.com/gamelift/latest/developerguide/integration-testing-local.html).

Learn more:

Developer Guide (http://docs.aws.amazon.com/gamelift/latest/developerguide/) -- Read about Amazon GameLift features and how to use them.

Tutorials (https://gamedev.amazon.com/forums/tutorials) -- Get started fast with walkthroughs and sample projects.

GameDev Blog (http://aws.amazon.com/blogs/gamedev/) -- Stay up to date with new features and techniques.

GameDev Forums (https://gamedev.amazon.com/forums/spaces/123/gamelift-discussion.html) -- Connect with the GameDev community.

Release notes (http://aws.amazon.com/releasenotes/Amazon-GameLift/) and document history (http://docs.aws.amazon.com/gamelift/latest/developerguide/doc-history.html) -- Stay current with updates to the Amazon GameLift service, SDKs, and documentation.

The list below offers a functional overview of the Amazon GameLift service API. Use these actions to start new game sessions, find existing game sessions, track game session status and other information, and enable player access to game sessions.
SearchGameSessions -- Retrieve all available game sessions or search for

Start new games with Queues to find the best available hosting resources:

StartGameSessionPlacement -- Request a new game session placement and add
DescribeGameSessionPlacement -- Get details on a placement request, including
StopGameSessionPlacement -- Cancel a placement request.
CreateGameSession -- Start a new game session on a specific fleet. Available
StartMatchmaking -- Request matchmaking for one player or a group who want
StartMatchBackfill -- Request additional player matches to fill empty slots
DescribeMatchmaking -- Get details on a matchmaking request, including status.
AcceptMatch -- Register that a player accepts a proposed match, for matches
StopMatchmaking -- Cancel a matchmaking request.
DescribeGameSessions -- Retrieve metadata for one or more game sessions,
DescribeGameSessionDetails -- Retrieve metadata and the game session protection
UpdateGameSession -- Change game session settings, such as maximum player
GetGameSessionLogUrl -- Get the location of saved logs for a game session.
CreatePlayerSession -- Send a request for a player to join a game session.
CreatePlayerSessions -- Send a request for multiple players to join a game
DescribePlayerSessions -- Get details on player activity, including status,

When setting up Amazon GameLift resources for your game, you first create a game build (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html) and upload it to Amazon GameLift. You can then use these actions to configure and manage a fleet of resources to run your game servers, scale capacity to meet player demand, access performance and utilization metrics, and more.

CreateBuild -- Create a new build using files stored in an Amazon S3 bucket.
ListBuilds -- Get a list of all builds uploaded to an Amazon GameLift region.
DescribeBuild -- Retrieve information associated with a build.
UpdateBuild -- Change build metadata, including build name and version.
DeleteBuild -- Remove a build from Amazon GameLift.
CreateFleet -- Configure and activate a new fleet to run a build's game servers.
ListFleets -- Get a list of all fleet IDs in an Amazon GameLift region (all
DeleteFleet -- Terminate a fleet that is no longer running game servers or

View / update fleet configurations:

DescribeFleetAttributes / UpdateFleetAttributes -- View or change a fleet's
DescribeFleetPortSettings / UpdateFleetPortSettings -- View or change the
DescribeRuntimeConfiguration / UpdateRuntimeConfiguration -- View or change
DescribeEC2InstanceLimits -- Retrieve maximum number of instances allowed
DescribeFleetCapacity / UpdateFleetCapacity -- Retrieve the capacity settings

Autoscale -- Manage auto-scaling rules and apply them to a fleet:

PutScalingPolicy -- Create a new auto-scaling policy, or update an existing
DescribeScalingPolicies -- Retrieve an existing auto-scaling policy.
DeleteScalingPolicy -- Delete an auto-scaling policy and stop it from affecting
StartFleetActions -- Restart a fleet's auto-scaling policies.
StopFleetActions -- Suspend a fleet's auto-scaling policies.
CreateVpcPeeringAuthorization -- Authorize a peering connection to one of
DescribeVpcPeeringAuthorizations -- Retrieve valid peering connection authorizations.
DeleteVpcPeeringAuthorization -- Delete a peering connection authorization.
CreateVpcPeeringConnection -- Establish a peering connection between the
DescribeVpcPeeringConnections -- Retrieve information on active or pending
DeleteVpcPeeringConnection -- Delete a VPC peering connection with an Amazon
DescribeFleetUtilization -- Get current data on the number of server processes,
DescribeFleetEvents -- Get a fleet's logged events for a specified time span.
DescribeGameSessions -- Retrieve metadata associated with one or more game
DescribeInstances -- Get information on each instance in a fleet, including
GetInstanceAccess -- Request access credentials needed to remotely connect
CreateAlias -- Define a new alias and optionally assign it to a fleet.
ListAliases -- Get all fleet aliases defined in an Amazon GameLift region.
DescribeAlias -- Retrieve information on an existing alias.
UpdateAlias -- Change settings for an alias, such as redirecting it from one
DeleteAlias -- Remove an alias from the region.
ResolveAlias -- Get the fleet ID that a specified alias points to.
CreateGameSessionQueue -- Create a queue for processing requests for new
DescribeGameSessionQueues -- Retrieve game session queues defined in an Amazon
UpdateGameSessionQueue -- Change the configuration of a game session queue.
DeleteGameSessionQueue -- Remove a game session queue from the region.
CreateMatchmakingConfiguration -- Create a matchmaking configuration with
DescribeMatchmakingConfigurations -- Retrieve matchmaking configurations
UpdateMatchmakingConfiguration -- Change settings for a matchmaking configuration.
DeleteMatchmakingConfiguration -- Remove a matchmaking configuration from
CreateMatchmakingRuleSet -- Create a set of rules to use when searching for
DescribeMatchmakingRuleSets -- Retrieve matchmaking rule sets defined in
ValidateMatchmakingRuleSet -- Verify syntax for a set of matchmaking rules.

See https://docs.aws.amazon.com/goto/WebAPI/gamelift-2015-10-01 for more information on this service. See the gamelift package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/

To use Amazon GameLift with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon GameLift client GameLift for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/#New
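To show the client-construction step the last paragraph refers to, here is a small sketch using the v1 aws-sdk-go API; the region is an arbitrary example value.

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/gamelift"
    )

    func main() {
        // Create a session and a GameLift service client; clients are safe for concurrent use.
        sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
        svc := gamelift.New(sess)

        // List fleet IDs in the region as a simple first API call.
        out, err := svc.ListFleets(&gamelift.ListFleetsInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, id := range out.FleetIds {
            fmt.Println(aws.StringValue(id))
        }
    }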
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions.

The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them to stdout. More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example

Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string.

The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example:

The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service.

The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control of the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm, refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm

Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63

When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data, the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go

The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses.
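The getting-started example mentioned above is not reproduced in this text, so here is a hedged reconstruction of that pattern: an identity client built from the default configuration provider, listing availability domains. The compartment OCID is a placeholder you must supply.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        // Build an identity client from the default configuration (~/.oci/config).
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        // "ocid1.tenancy.oc1..example" is a placeholder; use your tenancy/compartment OCID.
        req := identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String("ocid1.tenancy.oc1..example"),
        }

        resp, err := client.ListAvailabilityDomains(context.Background(), req)
        if err != nil {
            log.Fatal(err)
        }
        for _, ad := range resp.Items {
            fmt.Println(*ad.Name)
        }
    }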
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The possible values for the "OCI_GO_SDK_DEBUG" variable are:

1. "info" or "i" enables all info logging messages
2. "debug" or "d" enables all debug and info logging messages
3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages
4. "null" turns all logging messages off

If the value of the environment variable does not match any of the above, then the default logging level is "info". If the environment variable is not present, then no logging messages are emitted. The default destination for logging is stderr; if you want to output the log to a file, you can set this via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are:

1. "file" or "f" saves all logging output to a file
2. "combine" or "c" sends all logging output to both stderr and a file

You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there's a network issue, etc. This can be accomplished by using RequestMetadata.RetryPolicy. You can find examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go

If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface.

The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, then you can set this up in the following ways:

1. Configuring the environment variables as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment
2. Modifying the underlying Transport struct for a service client

In order to modify the underlying Transport struct in HttpClient, you can do something similar to the following (sample code for the audit service client; a hedged sketch is given below):

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field.
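The Transport-modification sample referred to above is not included in this text, so the following is a hedged reconstruction. It assumes the audit client's HTTPClient field accepts an *http.Client, and the proxy URL is a placeholder.

    package main

    import (
        "log"
        "net/http"
        "net/url"

        "github.com/oracle/oci-go-sdk/audit"
        "github.com/oracle/oci-go-sdk/common"
    )

    func main() {
        client, err := audit.NewAuditClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        // Route all outgoing requests for this client through a proxy.
        // "http://proxy.example.com:80" is a placeholder address.
        proxyURL, err := url.Parse("http://proxy.example.com:80")
        if err != nil {
            log.Fatal(err)
        }
        client.HTTPClient = &http.Client{
            Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
        }

        _ = client // use the client to make audit service calls as usual
    }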
To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value.

When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable:

Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to this link: https://github.com/oracle/oci-go-sdk#help
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best practices.

The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers. The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime.

A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/master/FAQ.md.

Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, setting up common dependencies (pkg/runtime/inject), like shared caches and clients, and managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals).

Controllers (pkg/controller) use events (pkg/event) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases.

Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Response or an error indicating whether to requeue for a second round of processing.

Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific).

Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager.

Logging (pkg/log) in controller-runtime is done via structured logs, using a set of interfaces called logr (https://godoc.org/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime.

Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these over an HTTP endpoint, and additional metrics may be registered to this Registry as normal.

You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup.

This example creates a simple application Controller that is configured for ReplicaSets and Pods; a sketch of the setup follows below.

* Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler.

* Start the application.

TODO(pwittrock): Update this example when we have better dependency injection support.

This example creates a simple application Controller that is configured for ReplicaSets and Pods. This application controller will be running leader election with the provided configuration in the manager options. If leader election configuration is not provided, the controller runs leader election with default values. Default values are taken from: https://github.com/kubernetes/component-base/blob/master/config/v1alpha1/defaults.go

* Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler.

* Start the application.

TODO(pwittrock): Update this example when we have better dependency injection support.
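The following is a minimal sketch of the ReplicaSet/Pod controller setup described above, written against the current controller-runtime API. The Reconcile signature and some option fields differ between controller-runtime versions, so treat this as illustrative rather than definitive.

    package main

    import (
        "context"
        "os"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // ReplicaSetReconciler reconciles ReplicaSets and the Pods they own.
    type ReplicaSetReconciler struct {
        client.Client
    }

    func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
        // Fetch the ReplicaSet named in the request; it may have been deleted.
        var rs appsv1.ReplicaSet
        if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {
            return ctrl.Result{}, client.IgnoreNotFound(err)
        }
        // ... reconcile the object here ...
        return ctrl.Result{}, nil
    }

    func main() {
        // The manager sets up shared caches, clients, and leader election.
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
        if err != nil {
            os.Exit(1)
        }
        // The builder wires ReplicaSet events, and events for Pods owned by a
        // ReplicaSet, to the reconciler.
        if err := ctrl.NewControllerManagedBy(mgr).
            For(&appsv1.ReplicaSet{}).
            Owns(&corev1.Pod{}).
            Complete(&ReplicaSetReconciler{Client: mgr.GetClient()}); err != nil {
            os.Exit(1)
        }
        // Block until a termination signal is received.
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            os.Exit(1)
        }
    }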
This is the official Go SDK for Oracle Cloud Infrastructure.

Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions.

The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then uses the identityClient to list availability domains and prints them to stdout (a short sketch appears below, after this overview). More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example

Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string.

The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example:

The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service.

The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a white list of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm

Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63

When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go
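Here is a minimal sketch of the getting-started example described above: it builds an identity client from the default configuration, lists availability domains, and prints them. The compartment OCID is a placeholder you must replace with your own value.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        // The default configuration provider reads ~/.oci/config (or the
        // standard environment overrides).
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }
        // common.String returns a *string, matching the pointer-typed fields
        // on request structs. The OCID below is a placeholder.
        req := identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String("ocid1.tenancy.oc1..example"),
        }
        resp, err := client.ListAvailabilityDomains(context.Background(), req)
        if err != nil {
            log.Fatal(err)
        }
        for _, ad := range resp.Items {
            fmt.Println(*ad.Name)
        }
    }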
The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses, and potential errors when (un)marshalling requests and responses.

Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The possible values for the "OCI_GO_SDK_DEBUG" variable are:

1. "info" or "i" enables all info logging messages

2. "debug" or "d" enables all debug and info logging messages

3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages

4. "null" turns all logging messages off

If the value of the environment variable does not match any of the above, the default logging level is "info". If the environment variable is not present, no logging messages are emitted.

The default destination for logging is stderr. If you want to output the log to a file, you can set it via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are:

1. "file" or "f" enables all logging output to be saved to a file

2. "combine" or "c" enables all logging output to both stderr and a file

You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there's a network issue, etc. This can be accomplished by using the RequestMetadata.RetryPolicy. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go

If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the Seeker interface.

The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, you can set this up in the following ways:

1. Configuring an environment variable as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment

2. Modifying the underlying Transport struct for a service client

In order to modify the underlying Transport struct in HttpClient, you can do something similar to the following (sample code for the audit service client):
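The sample code referred to above was not included here; the following is a minimal sketch under the assumption that the audit client's HTTPClient field accepts any *http.Client. The proxy URL is a placeholder.

    package main

    import (
        "log"
        "net/http"
        "net/url"

        "github.com/oracle/oci-go-sdk/audit"
        "github.com/oracle/oci-go-sdk/common"
    )

    func main() {
        client, err := audit.NewAuditClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }
        // Route all outgoing requests from this service client through a proxy.
        // The proxy address below is a placeholder.
        proxyURL, _ := url.Parse("http://proxy.example.com:3128")
        client.HTTPClient = &http.Client{
            Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
        }
        // ... use client as usual ...
    }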
The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-typed response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, the response field will be set to this value.

When individual services return a polymorphic JSON response that is not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host (a short sketch appears at the end of this entry). If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable.

Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk

Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt

To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom

For questions or feedback, please refer to this link: https://github.com/oracle/oci-go-sdk#help
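Referring back to the region workaround described above, here is a minimal sketch of overwriting the target host with client.Host. The endpoint below is a placeholder; supply the appropriate endpoint for your region and realm.

    package main

    import (
        "log"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }
        // Point the client at a region the SDK does not yet know about.
        // The endpoint below is a placeholder.
        client.Host = "https://identity.new-region-1.example.com"
        // ... use client as usual ...
    }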
Package gamelift provides the client and types for making API requests to Amazon GameLift.

Amazon GameLift is a managed service for developers who need a scalable, dedicated server solution for their multiplayer games. Use Amazon GameLift for these tasks: (1) set up computing resources and deploy your game servers, (2) run game sessions and get players into games, (3) automatically scale your resources to meet player demand and manage costs, and (4) track in-depth metrics on game server performance and player usage.

The Amazon GameLift service API includes two important function sets:

Manage game sessions and player access -- Retrieve information on available game sessions; create new game sessions; send player requests to join a game session.

Configure and manage game server resources -- Manage builds, fleets, queues, and aliases; set auto-scaling policies; retrieve logs and metrics.

This reference guide describes the low-level service API for Amazon GameLift. You can use the API functionality with these tools:

The Amazon Web Services software development kit (AWS SDK (http://aws.amazon.com/tools/#sdk)) is available in multiple languages (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-supported.html#gamelift-supported-clients) including C++ and C#. Use the SDK to access the API programmatically from an application, such as a game client.

The AWS command-line interface (http://aws.amazon.com/cli/) (CLI) tool is primarily useful for handling administrative actions, such as setting up and managing Amazon GameLift settings and resources. You can use the AWS CLI to manage all of your AWS services.

The AWS Management Console (https://console.aws.amazon.com/gamelift/home) for Amazon GameLift provides a web interface to manage your Amazon GameLift settings and resources. The console includes a dashboard for tracking key resources, including builds and fleets, and displays usage and performance metrics for your games as customizable graphs.

Amazon GameLift Local is a tool for testing your game's integration with Amazon GameLift before deploying it on the service. This tool supports a subset of key API actions, which can be called from either the AWS CLI or programmatically. See Testing an Integration (http://docs.aws.amazon.com/gamelift/latest/developerguide/integration-testing-local.html).

Learn more:

Developer Guide (http://docs.aws.amazon.com/gamelift/latest/developerguide/) -- Read about Amazon GameLift features and how to use them.

Tutorials (https://gamedev.amazon.com/forums/tutorials) -- Get started fast with walkthroughs and sample projects.

GameDev Blog (http://aws.amazon.com/blogs/gamedev/) -- Stay up to date with new features and techniques.

GameDev Forums (https://gamedev.amazon.com/forums/spaces/123/gamelift-discussion.html) -- Connect with the GameDev community.

Release notes (http://aws.amazon.com/releasenotes/Amazon-GameLift/) and document history (http://docs.aws.amazon.com/gamelift/latest/developerguide/doc-history.html) -- Stay current with updates to the Amazon GameLift service, SDKs, and documentation.

This list offers a functional overview of the Amazon GameLift service API. Use these actions to start new game sessions, find existing game sessions, track game session status and other information, and enable player access to game sessions.
SearchGameSessions -- Retrieve all available game sessions or search for

Start new games with Queues to find the best available hosting resources

StartGameSessionPlacement -- Request a new game session placement and add

DescribeGameSessionPlacement -- Get details on a placement request, including

StopGameSessionPlacement -- Cancel a placement request.

CreateGameSession -- Start a new game session on a specific fleet. Available

StartMatchmaking -- Request matchmaking for one player or a group who want

StartMatchBackfill -- Request additional player matches to fill empty slots

DescribeMatchmaking -- Get details on a matchmaking request, including status.

AcceptMatch -- Register that a player accepts a proposed match, for matches

StopMatchmaking -- Cancel a matchmaking request.

DescribeGameSessions -- Retrieve metadata for one or more game sessions,

DescribeGameSessionDetails -- Retrieve metadata and the game session protection

UpdateGameSession -- Change game session settings, such as maximum player

GetGameSessionLogUrl -- Get the location of saved logs for a game session.

CreatePlayerSession -- Send a request for a player to join a game session.

CreatePlayerSessions -- Send a request for multiple players to join a game

DescribePlayerSessions -- Get details on player activity, including status,

When setting up Amazon GameLift resources for your game, you first create a game build (http://docs.aws.amazon.com/gamelift/latest/developerguide/gamelift-build-intro.html) and upload it to Amazon GameLift. You can then use these actions to configure and manage a fleet of resources to run your game servers, scale capacity to meet player demand, access performance and utilization metrics, and more.

CreateBuild -- Create a new build using files stored in an Amazon S3 bucket.

ListBuilds -- Get a list of all builds uploaded to an Amazon GameLift region.

DescribeBuild -- Retrieve information associated with a build.

UpdateBuild -- Change build metadata, including build name and version.

DeleteBuild -- Remove a build from Amazon GameLift.

CreateFleet -- Configure and activate a new fleet to run a build's game servers.

ListFleets -- Get a list of all fleet IDs in an Amazon GameLift region (all

DeleteFleet -- Terminate a fleet that is no longer running game servers or

View / update fleet configurations.

DescribeFleetAttributes / UpdateFleetAttributes -- View or change a fleet's

DescribeFleetPortSettings / UpdateFleetPortSettings -- View or change the

DescribeRuntimeConfiguration / UpdateRuntimeConfiguration -- View or change

DescribeEC2InstanceLimits -- Retrieve maximum number of instances allowed

DescribeFleetCapacity / UpdateFleetCapacity -- Retrieve the capacity settings

Autoscale -- Manage auto-scaling rules and apply them to a fleet.

PutScalingPolicy -- Create a new auto-scaling policy, or update an existing

DescribeScalingPolicies -- Retrieve an existing auto-scaling policy.

DeleteScalingPolicy -- Delete an auto-scaling policy and stop it from affecting

StartFleetActions -- Restart a fleet's auto-scaling policies.

StopFleetActions -- Suspend a fleet's auto-scaling policies.

CreateVpcPeeringAuthorization -- Authorize a peering connection to one of

DescribeVpcPeeringAuthorizations -- Retrieve valid peering connection authorizations.

DeleteVpcPeeringAuthorization -- Delete a peering connection authorization.
CreateVpcPeeringConnection -- Establish a peering connection between the

DescribeVpcPeeringConnections -- Retrieve information on active or pending

DeleteVpcPeeringConnection -- Delete a VPC peering connection with an Amazon

DescribeFleetUtilization -- Get current data on the number of server processes,

DescribeFleetEvents -- Get a fleet's logged events for a specified time span.

DescribeGameSessions -- Retrieve metadata associated with one or more game

DescribeInstances -- Get information on each instance in a fleet, including

GetInstanceAccess -- Request access credentials needed to remotely connect

CreateAlias -- Define a new alias and optionally assign it to a fleet.

ListAliases -- Get all fleet aliases defined in an Amazon GameLift region.

DescribeAlias -- Retrieve information on an existing alias.

UpdateAlias -- Change settings for an alias, such as redirecting it from one

DeleteAlias -- Remove an alias from the region.

ResolveAlias -- Get the fleet ID that a specified alias points to.

CreateGameSessionQueue -- Create a queue for processing requests for new

DescribeGameSessionQueues -- Retrieve game session queues defined in an Amazon

UpdateGameSessionQueue -- Change the configuration of a game session queue.

DeleteGameSessionQueue -- Remove a game session queue from the region.

CreateMatchmakingConfiguration -- Create a matchmaking configuration with

DescribeMatchmakingConfigurations -- Retrieve matchmaking configurations

UpdateMatchmakingConfiguration -- Change settings for a matchmaking configuration.

DeleteMatchmakingConfiguration -- Remove a matchmaking configuration from

CreateMatchmakingRuleSet -- Create a set of rules to use when searching for

DescribeMatchmakingRuleSets -- Retrieve matchmaking rule sets defined in

ValidateMatchmakingRuleSet -- Verify syntax for a set of matchmaking rules.

See https://docs.aws.amazon.com/goto/WebAPI/gamelift-2015-10-01 for more information on this service.

See the gamelift package documentation for more information: https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/

To use Amazon GameLift with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently (a minimal sketch follows at the end of this entry).

See the SDK's documentation for more information on how to use the SDK: https://docs.aws.amazon.com/sdk-for-go/api/

See aws.Config documentation for more information on configuring SDK clients: https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

See the Amazon GameLift client GameLift for more information on creating a client for this service: https://docs.aws.amazon.com/sdk-for-go/api/service/gamelift/#New
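The following is a minimal sketch of creating a GameLift service client with the AWS SDK for Go and making a simple API call. It assumes credentials and region come from the usual AWS configuration sources (environment variables or shared config files).

    package main

    import (
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/gamelift"
    )

    func main() {
        // session.Must panics if the shared configuration cannot be loaded.
        sess := session.Must(session.NewSession())
        svc := gamelift.New(sess)

        // List the fleet IDs in the configured region.
        out, err := svc.ListFleets(&gamelift.ListFleetsInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, id := range out.FleetIds {
            fmt.Println(*id)
        }
    }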
Package git is a low-level and highly extensible git client library for reading repositories from git servers. It is written in Go from scratch, without any C dependencies. We have been following the open/closed principle in its design to facilitate extensions. A small example that extracts the commits from a repository follows:
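This is a minimal sketch using the go-git v4-style API (gopkg.in/src-d/go-git.v4); earlier releases of the library exposed a different API, so adjust the import path and calls to match the version you use. The repository URL and clone path are placeholders.

    package main

    import (
        "fmt"
        "log"

        git "gopkg.in/src-d/go-git.v4"
        "gopkg.in/src-d/go-git.v4/plumbing/object"
    )

    func main() {
        // Clone the repository into a local directory (placeholder path/URL).
        repo, err := git.PlainClone("/tmp/example", false, &git.CloneOptions{
            URL: "https://github.com/src-d/go-git",
        })
        if err != nil {
            log.Fatal(err)
        }
        // Walk the commit history starting from HEAD.
        ref, err := repo.Head()
        if err != nil {
            log.Fatal(err)
        }
        iter, err := repo.Log(&git.LogOptions{From: ref.Hash()})
        if err != nil {
            log.Fatal(err)
        }
        err = iter.ForEach(func(c *object.Commit) error {
            fmt.Println(c.Hash, c.Message)
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
    }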
NanoMux is a package of HTTP request routers for the Go language. The package has three types that can be used as routers. The first one is the Resource, which represents a path segment resource. The second one is the Host; it represents the host segment of the URL but also takes on the role of the root resource when HTTP method handlers are set. The third one is the Router, which supports registering multiple hosts and resources. It passes the request to the matching host and, when there is no matching host, to the root resource.

In NanoMux terms, hosts and resources are called responders. Responders are organized into a tree. The request's URL segments are matched against the host's and the corresponding resources' templates in the tree. The request passes through each matching responder in its URL until it reaches the last segment's responder. To pass the request to the next segment's responder, the request passers of the Router, Host, and Resource are called. When the request reaches the last responder, that responder's request handler is called. The request handler is responsible for calling the responder's HTTP method handler. The request passer, request handler, and HTTP method handlers can all be wrapped with middleware. The NanoMux types provide many methods, but most of them are for convenience. The sections below discuss the main features of the package.

Based on the segments they comprise, there are three types of templates: static, pattern, and wildcard. Static templates have no regex or wildcard segments. Pattern templates have one or more regex segments and/or one wildcard segment in addition to static segments. A regex segment must be in curly braces and consists of a value name and a regex pattern separated by a colon: "{valueName:regexPattern}". The wildcard segment has only a value name: "{valueName}". There can be only one wildcard segment in a template. Wildcard templates have only one wildcard segment and no static or regex segments.

The host segment templates must always follow the scheme with the colon ":" and the two slashes "//" of the authority component. The path segment templates may be preceded by a slash "/" or by a scheme, a colon ":", and three slashes "///" (the two authority component slashes and the third separator slash), as in "https:///blog". The preceding slash is just a separator. It doesn't denote the root resource, except when it is used alone. The template "/" or "https:///" denotes the root resource. Both the host and path segment templates can have a trailing slash.

When its template starts with "https", unless configured to redirect, the host or resource will not handle a request made over plain HTTP and will respond with a "404 Not Found" status code. When its template has a trailing slash, unless configured to be lenient or strict, the resource will redirect a request made to the URL without a trailing slash to the URL with a trailing slash, and vice versa. The trailing slash has no effect on the host. But if the host is a subtree handler and should respond to the request, its configurations related to the trailing slash will be applied to the last path segment.

As a side note, every parent resource should have a trailing slash in its template. For example, in a resource tree "/parent/child/grandchild", the two non-leaf resources should be referenced with the templates "parent/" and "child/". NanoMux doesn't force this, but it's good practice to follow. It helps clients avoid forming a broken URL when adding a relative URL to the base URL.
By default, if the resource has a trailing slash, NanoMux redirects requests made to the URL without a trailing slash to the URL with a trailing slash, so clients will end up with the correct URL.

Templates can have a name. The name segment comes at the beginning of the host or path segment template, but after the slashes. The name segment begins with a "$" sign and is separated from the template's contents by a colon ":". The template's name comes between the "$" sign and the colon ":". The name given in the template can be used to retrieve the host or resource from its parent (the router is considered the host's parent). For convenience, a regex segment's or wildcard segment's value name is used as the name of the template when there is no template name and no other regex segments. For example, the templates "{color:red|green|blue}", `day {day:(?:3[01]|[12]\d|0?[1-9])}`, and "{model}" have the names color, day, and model, respectively.

If the template's static part needs a "$" sign at the beginning or curly braces anywhere, they can be escaped with a backslash `\`, as in the template `\$tatic\{template\}`. If, for some reason, the template name or a value name needs a colon ":", it can also be escaped with a backslash: `$smileys\:):{smiley\:)}`. When retrieving the host, resource, or the value of a regex or wildcard segment, names are used unescaped, without a backslash `\`.

Constructors of the Host and Resource types and some methods take URL templates or path templates. In the URL and path templates, the scheme and trailing slash belong to the last segment. In a URL template, if the path component after the host segment contains only a slash "/", it's considered a trailing slash of the host template. The host template's trailing slash is used only for the last path segment when the host is a subtree handler and should respond to the request. It has no effect on the host itself.

In templates, disallowed characters must be used without percent-encoding, except for a slash "/". Because the slash "/" is a separator, when it is needed in a template it must be replaced with %2f or %2F.

Hosts and resources may have child resources with all three types of templates. In that case, the request's path segments are first matched against child resources with a static template. If no resource with a static template matches, then child resources with a pattern template are matched in the order of their registration. Lastly, if there is no child resource with a pattern template that matches the request's path segment, a child resource with a wildcard template accepts the request. Hosts and resources can have only one direct child resource with a wildcard template.

The Host and Resource types implement the http.Handler interface. They can be constructed, have HTTP method handlers set with the SetHandlerFor method, and be registered with the RegisterResource or RegisterResourceUnder methods to form a resource tree. There is one restriction: the root resource cannot be registered under a host. It can be used without a host or be registered with the router. When the Host type is used, the root resource is implied.

Handlers must return true if they respond to the request. Sometimes middleware and the responder itself need to know whether the request was handled or not.
For example, when the middleware responds to the request instead of calling the request passer, the responder may assume that none of the child resources in its subtree responded, and so it would respond with "404 Not Found" to a request that has already been answered. To prevent this, the middleware's handler must return true if it responds to the request, or it must return the value returned from the argument handler.

Sometimes resources have to handle HTTP methods they don't support. NanoMux provides a default not allowed HTTP method handler that responds with a "405 Method Not Allowed" status code, listing all the HTTP methods the resource supports in the "Allow" header. But when the host or resource needs a custom implementation, its SetHandlerFor method can be used to replace the default handler. To denote the not allowed HTTP method handler, the exclamation mark "!" must be used instead of an HTTP method. In addition to the not allowed HTTP method handler, if the host or resource has at least one HTTP method handler, NanoMux also provides a default OPTIONS HTTP method handler.

The host and resource allow setting a handler for a child resource in their subtree. If the subtree resource doesn't exist, it will be created. It's possible to retrieve a subtree resource with the method Resource. If the subtree resource doesn't exist, the method Resource creates it. If an existing resource must be retrieved, the method RegisteredResource can be used.

If there is a need for shared data, it can be set with the SetSharedData method of the Host and Resource. The shared data can then be retrieved in the handlers by calling the ResponderSharedData method of the passed *Args argument. Hosts and resources can have their own shared data. Handlers retrieve the shared data of their responder. The responder's SetSharedData is useful for sharing data between its handlers. But when the data needs to be shared between responders in the resource tree, the *Args argument can be used. The Set method of the *Args argument takes a key and a value of any type and sets the value for the key. The rules for defining a key are the same as for "context.Context". Any responder can set as many values as needed when it's passing or handling a request. The *Args argument carries the values between middlewares and handlers as the request passes through the matching responders in the resource tree.

Hosts and resources can be implemented as a type with methods. Each method that has a name beginning with "Handle" and has the signature of a nanomux.Handler is used as an HTTP method handler. The remaining part of the method's name is considered an HTTP method. It is possible to set the implementation later with the SetImplementation method of the Host and Resource. The implementation may also be set for a child resource in the subtree with the method SetImplementationAt.

Hosts and resources configured to be a subtree handler respond to the request when there is no matching resource in their subtree. In the resource tree, if resource-12 is a subtree handler, it handles requests to a path like "/resource-1/resource-12/non-existent-resource". Subtree handlers can get the remaining part of the path with the RemainingPath method of the *Args argument. The remaining path starts with a slash "/" if the subtree handler has no trailing slash in its template; otherwise it starts without a trailing slash. A subtree handler can also be used as a file server.
Let's say the host "example.com" has a child resource "resource-1", which in turn has a child resource "resource-12". When a request is made to the URL "http://example.com/resource-1/resource-12", the host's request passer is called. It passes the request to resource-1. Then resource-1's request passer is called, and it passes the request to resource-12. Resource-12 is the last resource in the URL, so its request handler is called. The request handler is responsible for calling the resource's HTTP method handler. If there is no handler for the request's method, it calls the not allowed HTTP method handler. All of these handlers and the request passer can be wrapped in middleware.

As an example of wrapping, suppose the WrapRequestPasserAt method is used to wrap the request passer of the "admin" resource with a CheckCredentials middleware. CheckCredentials calls the "admin" resource's request passer if the credentials are valid; if not, no further segments will be matched, the request will be dropped, and the client will receive a "404 Not Found" status code. In this case, "admin" is a dormant resource. But, as its name states, the request passer's purpose is to pass the request to the next resource. It's called even when the resource is dormant. Unlike the request passer, the request handler is called only when the host or resource is the one that must respond to the request. The request handler can be wrapped when the middleware must be called before any HTTP method handler.

The WrapRequestPasser, WrapRequestHandler, and WrapHandlerOf methods of the Host and Resource types wrap their request passer, request handler, and HTTP method handlers, respectively. The WrapRequestPasserAt, WrapRequestHandlerAt, and WrapPathHandlerOf methods wrap the request passer and the respective handlers of the child resource at the path. The WrapSubtreeRequestPassers, WrapSubtreeRequestHandlers, and WrapSubtreeHandlersOf methods wrap the request passers and the respective handlers of all the resources in the host's or resource's subtree.

When calling the WrapHandlerOf, WrapPathHandlerOf, and WrapSubtreeHandlersOf methods, "*" may be used to denote all the HTTP methods for which handlers exist. When "*" is used instead of an HTTP method, all the existing HTTP method handlers of the responder are wrapped.

The Router type is more suitable when multiple hosts with different root domains or subdomains are needed.

It is possible to use an http.Handler and an http.HandlerFunc with NanoMux. For that, NanoMux provides four converters: Hr, HrWithArgs, FnHr, and FnHrWithArgs. The Hr and HrWithArgs converters convert an http.Handler, while the FnHr and FnHrWithArgs converters convert a function with the signature of an http.HandlerFunc to a nanomux.Handler. Most of the time, when there is a need to use an http.Handler or an http.HandlerFunc, it's to utilize handlers written outside the context of NanoMux, and those handlers don't use the *Args argument. The Hr and FnHr converters return a handler that ignores the *Args argument instead of inserting it into the request's context, which is a slower operation. These converters must be used when the http.Handler and http.HandlerFunc handlers don't need the *Args argument. If they are written to use the *Args argument, then it's better to change their signatures as well. One situation where http.Handler and http.HandlerFunc might be considered when writing handlers is to use a middleware with the signature of func(http.Handler) http.Handler. But for middleware with that signature, NanoMux provides an Mw converter.
The Mw converter converts the middleware with the signature of func(http.Handler) http.Handler to the middleware with the signature of func(nanomux.Handler) nanomux.Handler, so it can be used to wrap the NanoMux handlers.
An HTTP client for interacting with the Kubecost Allocation API. For documentation on the Go standard library net/http package, see the following: For documentation on the Kubecost Allocation API, see the following:

Package main is a generated GoMock package.

Application configuration. For documentation on Viper, see the following:

An HTTP server for exposing cost allocation metrics retrieved from Kubecost. Metrics are exposed via an HTTP metrics endpoint. Applications that provide a Prometheus OpenMetrics integration can gather cost allocation metrics from this endpoint to store and visualize the data.

Generate Prometheus metrics from configuration. For documentation on the Go client library for Prometheus, see the following:

Utility functions.
Package ociregistry provides an abstraction that represents the capabilities provided by an OCI registry. See the OCI distribution specification for more information on OCI registries.

Packages within this module provide the capability to translate to and from the HTTP protocol documented in that specification:

- cuelabs.dev/go/oci/ociregistry/ociclient provides an Interface value that acts as an HTTP client.

- cuelabs.dev/go/oci/ociregistry/ociserver provides an HTTP server that serves the distribution protocol by making calls to an arbitrary Interface value.

When used together in a stack, the above two packages can be used to provide a simple proxy server. The cuelabs.dev/go/oci/ociregistry/ocimem package provides a trivial in-memory implementation of the interface.

Other packages provide some utilities that manipulate Interface values:

- cuelabs.dev/go/oci/ociregistry/ocifilter provides functionality for exposing modified or restricted views onto a registry.

- cuelabs.dev/go/oci/ociregistry/ociunify can combine two registries into one unified view across both.

In general, the caller cannot assume that the implementation of a given Interface value is present on the network. For example, cuelabs.dev/go/oci/ociregistry/ocimem doesn't know about the network at all. But there are times when an implementation might want to provide information about the location of blobs or manifests so that a client can go direct if it wishes. That is, a proxy might not wish to ship all the traffic for all blobs through itself, but instead redirect clients to talk to some other location on the internet. When an Interface implementation wishes to provide that information, it can do so by setting the `URLs` field on the descriptor that it returns for a given blob or manifest. Although it is not mandatory for a caller to use this, some callers (specifically the ociserver package) can use this information to redirect clients appropriately.
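The following is a minimal sketch of the proxy stack mentioned above, combining ociclient and ociserver. It assumes ociclient.New takes a host and an options pointer and returns an Interface value plus an error, and that ociserver.New returns an http.Handler; consult the package documentation for the exact signatures. The upstream registry host and listen address are placeholders.

    package main

    import (
        "log"
        "net/http"

        "cuelabs.dev/go/oci/ociregistry/ociclient"
        "cuelabs.dev/go/oci/ociregistry/ociserver"
    )

    func main() {
        // Client side of the stack: an Interface value that talks HTTP to an
        // upstream registry (placeholder host).
        upstream, err := ociclient.New("registry.example.com", nil)
        if err != nil {
            log.Fatal(err)
        }
        // Server side of the stack: serves the distribution protocol by
        // delegating every call to the upstream Interface value.
        proxy := ociserver.New(upstream, nil)
        log.Fatal(http.ListenAndServe(":5000", proxy))
    }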