Package appconfig provides the API client, operations, and parameter types for Amazon AppConfig.

AppConfig feature flags and dynamic configurations help software builders quickly and securely adjust application behavior in production environments without full code deployments. AppConfig speeds up software release frequency, improves application resiliency, and helps you address emergent issues more quickly. With feature flags, you can gradually release new capabilities to users and measure the impact of those changes before fully deploying the new capabilities to all users. With operational flags and dynamic configurations, you can update block lists, allow lists, throttling limits, logging verbosity, and perform other operational tuning to quickly respond to issues in production environments. AppConfig is a capability of Amazon Web Services Systems Manager.

Although application configuration content can vary greatly from application to application, AppConfig supports the following use cases, which cover a broad spectrum of customer needs:

 - Feature flags and toggles - Safely release new capabilities to your customers in a controlled environment. Instantly roll back changes if you experience a problem.

 - Application tuning - Carefully introduce application changes while testing the impact of those changes with users in production environments.

 - Allow list or block list - Control access to premium features or instantly block specific users without deploying new code.

 - Centralized configuration storage - Keep your configuration data organized and consistent across all of your workloads. You can use AppConfig to deploy configuration data stored in the AppConfig hosted configuration store, Secrets Manager, Systems Manager Parameter Store, or Amazon S3.

This section provides a high-level description of how AppConfig works and how you get started.

1. Identify configuration values in code you want to manage in the cloud

Before you start creating AppConfig artifacts, we recommend you identify configuration data in your code that you want to dynamically manage using AppConfig. Good examples include feature flags or toggles, allow and block lists, logging verbosity, service limits, and throttling rules, to name a few. If your configuration data already exists in the cloud, you can take advantage of AppConfig validation, deployment, and extension features to further streamline configuration data management.

2. Create an application namespace

To create a namespace, you create an AppConfig artifact called an application. An application is simply an organizational construct like a folder.

3. Create environments

For each AppConfig application, you define one or more environments. An environment is a logical grouping of targets, such as applications in a Beta or Production environment, Lambda functions, or containers. You can also define environments for application subcomponents, such as Web, Mobile, and Back-end. You can configure Amazon CloudWatch alarms for each environment. The system monitors alarms during a configuration deployment. If an alarm is triggered, the system rolls back the configuration.

4. Create a configuration profile

A configuration profile includes, among other things, a URI that enables AppConfig to locate your configuration data in its stored location and a profile type. AppConfig supports two configuration profile types: feature flags and freeform configurations.
Feature flag configuration profiles store their data in the AppConfig hosted configuration store and the URI is simply hosted. For freeform configuration profiles, you can store your data in the AppConfig hosted configuration store or any Amazon Web Services service that integrates with AppConfig, as described in Creating a free form configuration profile in the AppConfig User Guide. A configuration profile can also include optional validators to ensure your configuration data is syntactically and semantically correct. AppConfig performs a check using the validators when you start a deployment. If any errors are detected, the deployment rolls back to the previous configuration data.

5. Deploy configuration data

When you create a new deployment, you specify the following:

 - An application ID
 - A configuration profile ID
 - A configuration version
 - An environment ID where you want to deploy the configuration data
 - A deployment strategy ID that defines how fast you want the changes to take effect

When you call the StartDeployment API action, AppConfig performs the following tasks:

 - Retrieves the configuration data from the underlying data store by using the location URI in the configuration profile.
 - Verifies the configuration data is syntactically and semantically correct by using the validators you specified when you created your configuration profile.
 - Caches a copy of the data so it is ready to be retrieved by your application. This cached copy is called the deployed data.

6. Retrieve the configuration

You can configure AppConfig Agent as a local host and have the agent poll AppConfig for configuration updates. The agent calls the StartConfigurationSession and GetLatestConfiguration API actions and caches your configuration data locally. To retrieve the data, your application makes an HTTP call to the localhost server. AppConfig Agent supports several use cases, as described in Simplified retrieval methods in the AppConfig User Guide. If AppConfig Agent isn't supported for your use case, you can configure your application to poll AppConfig for configuration updates by directly calling the StartConfigurationSession and GetLatestConfiguration API actions.

This reference is intended to be used with the AppConfig User Guide.
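To make the deployment step concrete, here is a minimal sketch of calling StartDeployment with this package's client; the five IDs are placeholders, and error handling is abbreviated.

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/appconfig"
    )

    func main() {
        // Load credentials and region from the default sources.
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := appconfig.NewFromConfig(cfg)

        // Start deploying a configuration version; all five inputs
        // described above are placeholder IDs here.
        out, err := client.StartDeployment(context.TODO(), &appconfig.StartDeploymentInput{
            ApplicationId:          aws.String("app-id"),
            ConfigurationProfileId: aws.String("profile-id"),
            ConfigurationVersion:   aws.String("1"),
            DeploymentStrategyId:   aws.String("strategy-id"),
            EnvironmentId:          aws.String("env-id"),
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("started deployment %d", out.DeploymentNumber)
    }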
Package peer provides a common base for creating and managing Decred network peers.

This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc.

A quick overview of the major features peer provides is as follows:

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details.

A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed.

In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified, and setting the Listeners field of the Config struct specified when creating a peer to it. For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which, in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked.

The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters, and it optionally provides a flag to cause it to block until the message is actually sent.

A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version.

This package provides extensive logging capabilities through the UseLogger function which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C.

This package supports all improvement proposals supported by the wire package (https://godoc.org/github.com/decred/dcrd/wire#hdr-Bitcoin_Improvement_Proposals).

This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
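The outbound-peer example referred to above follows this general shape. This is a condensed sketch against the dcrd peer and wire packages; import paths, field names, and listener signatures may differ between module versions.

    package main

    import (
        "fmt"
        "net"

        "github.com/decred/dcrd/peer"
        "github.com/decred/dcrd/wire"
    )

    func main() {
        // Configure the peer, including a version listener supplied via
        // the MessageListeners struct.
        peerCfg := &peer.Config{
            UserAgentName:    "peersample",
            UserAgentVersion: "1.0.0",
            Net:              wire.MainNet,
            Listeners: peer.MessageListeners{
                OnVersion: func(p *peer.Peer, msg *wire.MsgVersion) {
                    fmt.Println("outbound: received version")
                },
            },
        }
        p, err := peer.NewOutboundPeer(peerCfg, "127.0.0.1:9108")
        if err != nil {
            fmt.Printf("NewOutboundPeer: error %v\n", err)
            return
        }

        // The caller establishes the connection; Connect starts the
        // async I/O goroutines and protocol negotiation.
        conn, err := net.Dial("tcp", p.Addr())
        if err != nil {
            fmt.Printf("net.Dial: error %v\n", err)
            return
        }
        p.Connect(conn)

        // Clean up and wait for teardown to finish.
        p.Disconnect()
        p.WaitForDisconnect()
    }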
Package peer provides a common base for creating and managing Decred network peers.

This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc.

A quick overview of the major features peer provides is as follows:

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details.

A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed.

In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified, and setting the Listeners field of the Config struct specified when creating a peer to it. For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which, in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked.

The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, and PushGetHeadersMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Finally, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests.

A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version.

This package provides extensive logging capabilities through the UseLogger function which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C.

This package supports all improvement proposals supported by the wire package.

This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
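That outbound example is essentially the sketch shown earlier for this package family. As an additional illustration, the done-channel pattern described above looks like this in practice; a minimal fragment, assuming p is an already-connected *peer.Peer and the wire package is imported:

    // Queue a getaddr message and block until it has actually been
    // written to the remote peer.
    done := make(chan struct{})
    p.QueueMessage(wire.NewMsgGetAddr(), done)
    <-done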
Package applicationdiscoveryservice provides the API client, operations, and parameter types for AWS Application Discovery Service.

Amazon Web Services Application Discovery Service (Application Discovery Service) helps you plan application migration projects. It automatically identifies servers, virtual machines (VMs), and network dependencies in your on-premises data centers. For more information, see the Amazon Web Services Application Discovery Service FAQ.

Application Discovery Service offers three ways of performing discovery and collecting data about your on-premises servers:

 - Agentless discovery using Amazon Web Services Application Discovery Service Agentless Collector (Agentless Collector), which doesn't require you to install an agent on each host. Agentless Collector gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless Collector doesn't collect information about network dependencies; only agent-based discovery collects that information.

 - Agent-based discovery using the Amazon Web Services Application Discovery Agent (Application Discovery Agent), which you install on one or more hosts in your data center, collects a richer set of data than agentless discovery. The agent captures infrastructure and application information, including an inventory of running processes, system performance information, resource utilization, and network dependencies. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the Amazon Web Services cloud. For more information, see Amazon Web Services Application Discovery Agent.

 - Amazon Web Services Partner Network (APN) solutions integrate with Application Discovery Service, enabling you to import details of your on-premises environment directly into Amazon Web Services Migration Hub (Migration Hub) without using Agentless Collector or Application Discovery Agent. Third-party application discovery tools can query Amazon Web Services Application Discovery Service, and they can write to the Application Discovery Service database using the public API. In this way, you can import data into Migration Hub and view it, so that you can associate applications with servers and track migrations.

This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see Amazon Web Services SDKs.

Remember that you must set your Migration Hub home Region before you call any of these APIs. You must make API calls for write actions (create, notify, associate, disassociate, import, or put) while in your home Region, or a HomeRegionNotSetException error is returned. API calls for read actions (list, describe, stop, and delete) are permitted outside of your home Region. Although it is unlikely, the Migration Hub home Region could change. If you call APIs outside the home Region, an InvalidInputException is returned. You must call GetHomeRegion to obtain the latest Migration Hub home Region.

This guide is intended for use with the Amazon Web Services Application Discovery Service User Guide. All data is handled according to the Amazon Web Services Privacy Policy.
You can operate Application Discovery Service offline to inspect collected data before it is shared with the service.
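As a brief illustration, a read action such as DescribeAgents (permitted outside the home Region) can be called like this with the Go SDK; a sketch, with output handling kept minimal:

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/applicationdiscoveryservice"
    )

    func main() {
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := applicationdiscoveryservice.NewFromConfig(cfg)

        // List the discovery agents and collectors known to the service.
        out, err := client.DescribeAgents(context.TODO(),
            &applicationdiscoveryservice.DescribeAgentsInput{})
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("found %d agents", len(out.AgentsInfo))
    }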
Package middleware provides a net/http middleware for APIs and web servers. It works in much the same way as `node-autoscaling` does: it wraps a struct that implements `http.Handler` by:

1. Minting a requestID
2. Running, and timing, the wrapped handler
3. Adding the request ID to the response
4. _responding_ with this data
5. Logging out, to STDOUT, request metadata

A simple implementation would look like the sketch that follows this description. A request to `localhost:8008` would then log out the request metadata, with the response:

    * Trying ::1...
    * TCP_NODELAY set
    * Connected to localhost (::1) port 8008 (#0)
    > HEAD / HTTP/1.1
    > Host: localhost:8008
    > User-Agent: curl/7.51.0
    > Accept: text/plain
    >
    < HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    < Content-Type: text/plain; charset=utf-8
    Content-Type: text/plain; charset=utf-8
    < X-Request-Id: 72c2b7aa-3bcb-478f-8724-66f38cd3abc0
    X-Request-Id: 72c2b7aa-3bcb-478f-8724-66f38cd3abc0
    < Content-Length: 10
    Content-Length: 10
    <
    * Curl_http_done: called premature == 0
    * Connection #0 to host localhost left intact
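Since the original code sample was elided, here is a plausible reconstruction of the simple implementation mentioned above; the import path and the NewMiddleware constructor name are assumptions made for illustration.

    package main

    import (
        "log"
        "net/http"

        "github.com/example/middleware" // hypothetical import path
    )

    // handler is the wrapped http.Handler whose response the middleware
    // times, tags with X-Request-Id, and logs.
    type handler struct{}

    func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "text/plain; charset=utf-8")
        w.Write([]byte("helloworld")) // 10 bytes, matching Content-Length: 10
    }

    func main() {
        // NewMiddleware is an assumed constructor wrapping the handler.
        log.Fatal(http.ListenAndServe(":8008", middleware.NewMiddleware(handler{})))
    }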
Package peer provides a common base for creating and managing Bitcoin network peers.

This package builds upon the wire package, which provides the fundamental primitives necessary to speak the bitcoin wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc.

A quick overview of the major features peer provides is as follows:

 - Provides a basic concurrent safe bitcoin peer for handling bitcoin communications via the peer-to-peer protocol
 - Full duplex reading and writing of bitcoin protocol messages
 - Automatic handling of the initial handshake process including protocol version negotiation
 - Asynchronous message queuing of outbound messages with optional channel for notification when the message is actually sent
 - Flexible peer configuration
   - Caller is responsible for creating outgoing connections and listening for incoming connections so they have flexibility to establish connections as they see fit (proxies, etc)
   - User agent name and version
   - Bitcoin network
   - Service support signalling (full nodes, bloom filters, etc)
   - Maximum supported protocol version
   - Ability to register callbacks for handling bitcoin protocol messages
 - Inventory message batching and send trickling with known inventory detection and avoidance
 - Automatic periodic keep-alive pinging and pong responses
 - Random nonce generation and self connection detection
 - Proper handling of bloom filter related commands when the caller does not specify the related flag to signal support
   - Disconnects the peer when the protocol version is high enough
   - Does not invoke the related callbacks for older protocol versions
 - Snapshottable peer statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version
 - Helper functions pushing addresses, getblocks, getheaders, and reject messages
   - These could all be sent manually via the standard message output function, but the helpers provide additional nice functionality such as duplicate filtering and address randomization
 - Ability to wait for shutdown/disconnect
 - Comprehensive test coverage

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the bitcoin network to use, which services it supports, and callbacks to invoke when bitcoin messages are received. See the documentation for each field of the Config struct for more details.

A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed.

In order to do anything useful with a peer, it is necessary to react to bitcoin messages.
This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified, and setting the Listeners field of the Config struct specified when creating a peer to it. For convenience, a callback hook for all of the currently supported bitcoin messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which, in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked.

The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm.

In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience. While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters, and it optionally provides a flag to cause it to block until the message is actually sent.

A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version.

This package provides extensive logging capabilities through the UseLogger function which allows a btclog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C.

This package supports all BIPs supported by the wire package (https://godoc.org/github.com/p9c/pod/wire#hdr-Bitcoin_Improvement_Proposals).

This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
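The elided example mentioned above covers the outbound case, which closely mirrors the sketch shown earlier for the Decred variant. For the inbound side described in the overview, a sketch might look like this; the import paths and listener signatures are assumptions that may differ between btcd-derived codebases.

    package main

    import (
        "fmt"
        "net"

        "github.com/p9c/pod/chaincfg" // assumed path for this fork
        "github.com/p9c/pod/peer"     // assumed path for this fork
        "github.com/p9c/pod/wire"     // assumed path for this fork
    )

    func main() {
        peerCfg := &peer.Config{
            UserAgentName:    "peersample",
            UserAgentVersion: "1.0.0",
            ChainParams:      &chaincfg.MainNetParams,
            Listeners: peer.MessageListeners{
                OnVersion: func(p *peer.Peer, msg *wire.MsgVersion) {
                    fmt.Println("inbound: received version")
                },
            },
        }

        // Accept a single connection and hand it to a new inbound peer;
        // Connect starts protocol negotiation over it.
        listener, err := net.Listen("tcp", "127.0.0.1:8333")
        if err != nil {
            fmt.Printf("net.Listen: error %v\n", err)
            return
        }
        conn, err := listener.Accept()
        if err != nil {
            fmt.Printf("Accept: error %v\n", err)
            return
        }
        p := peer.NewInboundPeer(peerCfg)
        p.Connect(conn)

        p.WaitForDisconnect()
    }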
Package Authaus is an authentication and authorization system.

Authaus brings together the following pluggable components:

 1. Authenticator
 2. Session database
 3. Permit database
 4. Role Groups database
 5. User Store

Any of these five components can be swapped out, and in fact the fourth and fifth ones (Role Groups and User Store) are entirely optional. A typical setup is to use LDAP as an Authenticator, and Postgres as a Session, Permit, and Role Groups database.

Your session database does not need to be particularly performant, since Authaus maintains an in-process cache of session keys and their associated tokens.

Authaus was NOT designed to be a "Facebook Scale" system. The target audience is a system of perhaps 100,000 users. There is nothing fundamentally limiting about the API of Authaus, but the internals certainly have not been built with millions of users in mind.

The intended usage model is this: Authaus is intended to be embedded inside your security system, and run as a standalone HTTP service (aka a REST service). This HTTP service CAN be open to the wide world, but it's also completely OK to let it listen only to servers inside your DMZ. Authaus only gives you the skeleton and some examples of HTTP responders. It's up to you to flesh out the details of your authentication HTTP interface, and whether you'd like that to face the world, or whether it should only be accessible via other services that you control.

At startup, your services open an HTTP connection to the Authaus service. This connection will typically live for the duration of the service. For every incoming request, you peel off whatever authentication information is associated with that request. This is either a session key, or a username/password combination. Let's call it the authorization information. You then ask Authaus to tell you WHO this authorization information belongs to, as well as WHAT this authorization information allows the requester to do (ie Authentication and Authorization). Authaus responds either with a 401 (Unauthorized), 403 (Forbidden), or a 200 (OK) and a JSON object that tells you the identity of the agent submitting this request, as well as the permissions that this agent possesses. It's up to your individual services to decide what to do with that information.

It should be very easy to expose Authaus over a protocol other than HTTP, since Authaus is intended to be easy to embed. The HTTP API is merely an illustrative example.

A `Session Key` is the long random number that is typically stored as a cookie.

A `Permit` is a set of roles that has been granted to a user. Authaus knows nothing about the contents of a permit. It simply treats it as a binary blob, and when writing it to an SQL database, encodes it as base64. The interpretation of the permit is application dependent. Typically, a Permit will hold information such as "Allowed to view billing information", or "Allowed to paint your bathroom yellow". Authaus does have a built-in module called RoleGroupDB, which has its own interpretation of what a Permit is, but you do not need to use this.

A `Token` is the result of a successful authentication. It stores the identity of a user, an expiry date, and a Permit. A token will usually be retrieved by a session key. However, you can also perform a once-off authentication, which also yields you a token, which you will typically throw away when you are finished with it.

All public methods of the `Central` object are callable from multiple threads. Reader-Writer locks are used in all of the caching systems.
The number of concurrent connections is limited only by the limits of the Go runtime, and the performance limits that are inherent to the simple reader-writer locks used to protect shared state.

Authaus must be deployed as a single process (which implies running on a single logical machine). The sole reason why it must run on only one process and not more, is because of the state that lives inside the various Authaus caches. Were it not for these caches, then there would be nothing preventing you from running Authaus on as many machines as necessary. The cached state stored inside the Authaus server is:

If you wanted to make Authaus runnable across multiple processes, then you would need to implement a cache invalidation system for these caches.

Authaus makes no attempt to mitigate DOS attacks. The most sane approach in this domain seems to be this (http://security.stackexchange.com/questions/12101/prevent-denial-of-service-attacks-against-slow-hashing-functions).

The password database (created via NewAuthenticationDB_SQL) stores password hashes using the scrypt key derivation system (http://www.tarsnap.com/scrypt.html). Internally, we store our hash in a format that can later be extended, should we wish to double-hash the passwords, etc. The hash is 65 bytes and looks like this:

The first byte of the hash is a version number of the hash. The remaining 64 bytes are the salt and the hash itself. At present, only one version is supported, which is version 1. It consists of 32 bytes of salt, and 32 bytes of scrypt'ed hash, with scrypt parameters N=256 r=8 p=1. Note that the parameter N=256 is quite low, meaning that it is possible to compute this in approximately 1 millisecond (1,000,000 nanoseconds) on a 2009-era Intel Core i7. This is a deliberate tradeoff. On the same CPU, a SHA256 hash takes about 500 nanoseconds to compute, so we are still making it 2000 times harder to brute force the passwords than an equivalent system storing only a SHA256 salted hash. This discussion is only of relevance in the event that the password table is compromised.

No cookie signing mechanism is implemented. Cookies are not presently transmitted with Secure:true. This must change.

The LDAP Authenticator is extremely simple, and provides only one function: Authenticate a user against an LDAP system (often this means Active Directory, AKA a Windows Domain). It calls the LDAP "Bind" method, and if that succeeds for the given identity/password, then the user is considered authenticated. We take care not to allow an "anonymous bind", which many LDAP servers allow when the password is blank.

The Session Database runs on Postgres. It stores a table of sessions, where each row contains the following information:

When a permit is altered with Authaus, then all existing sessions have their permits altered transparently. For example, imagine User X is logged in, and his administrator grants him a new permission. User X does not need to log out and log back in again in order for his new permissions to be reflected. His new permissions will be available immediately. Similarly, if a password is changed with Authaus, then all sessions are invalidated. Do take note though, that if a password is changed through an external mechanism (such as with LDAP), then Authaus will have no way of knowing this, and will continue to serve up sessions that were authenticated with the old password. This is a problem that needs addressing.
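The hash layout just described (a version byte, 32 bytes of salt, then 32 bytes of scrypt output with N=256 r=8 p=1) is easy to reproduce with golang.org/x/crypto/scrypt. This is an illustrative sketch, not Authaus's actual code.

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "log"

        "golang.org/x/crypto/scrypt"
    )

    // hashPassword builds a 65-byte record: 1 version byte, a 32-byte
    // salt, and a 32-byte scrypt hash, using the parameters described
    // above (N=256, r=8, p=1).
    func hashPassword(password string) ([]byte, error) {
        salt := make([]byte, 32)
        if _, err := rand.Read(salt); err != nil {
            return nil, err
        }
        h, err := scrypt.Key([]byte(password), salt, 256, 8, 1, 32)
        if err != nil {
            return nil, err
        }
        rec := make([]byte, 0, 65)
        rec = append(rec, 1) // hash format version 1
        rec = append(rec, salt...)
        rec = append(rec, h...)
        return rec, nil
    }

    func main() {
        rec, err := hashPassword("hunter2")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(hex.EncodeToString(rec))
    }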
You can limit the number of concurrent sessions per user to 1, by setting ConfigSessionDB.MaxActiveSessions to 1. This setting may only be zero or one. Zero, which is the default, means an unlimited number of concurrent sessions per user.

Authaus will always place your Session Database behind its own Session Cache. This session cache is a very simple single-process in-memory cache of recent sessions. The limit on the number of entries in this cache is hard-coded, and that should probably change.

The Permit database runs on Postgres. It stores a table of permits, which is simply a 1:1 mapping from Identity -> Permit. The Permit is just an array of bytes, which we store base64 encoded, inside a text field. This part of the system doesn't care how you interpret that blob.

The Role Group Database is an entirely optional component of Authaus. The other components of Authaus (Authenticator, PermitDB, SessionDB) do not understand your Permits. To them, a Permit is simply an arbitrary array of bytes. The Role Group Database is a component that adds a specific meaning to a permit blob. Let's see what that specific meaning looks like...

The built-in Role Group Database interprets a permit blob as a string of 32-bit integer IDs:

These 32-bit integer IDs refer to "role groups" inside a database table. The "role groups" table might look like this:

The Role Group IDs use 32-bit indices, because we assume that you are not going to create more than 2^32 different role groups. The worst case we assume here is that of an automated system that creates 100,000 roles per day. Such a system would run for more than 100 years, given a 32-bit ID. These constraints are extraordinary, suggesting that we do not even need 32 bits, but could even get away with just a 16-bit group ID. However, we expect the number of groups to be relatively small. Our aim here, arbitrary though it may be, is to fit the permit and identity into a single ethernet packet, which one can reasonably peg at 1500 bytes. 1500 / 4 = 375. We assume that no sane human administrator will assign 375 security groups to any individual. We expect the number of groups assigned to any individual to be in the range of 1 to 20. This makes 375 a gigantic buffer.

OAuth support in Authaus is limited to a very simple scenario:

 - You wish to allow your users to login using an OAuth service, thereby outsourcing the Authentication to that external service, and using it to populate the email address of your users.

The OAuth support was developed to work with Microsoft Azure Active Directory; however, it should be fairly easy to extend the code to handle other OAuth providers.

Inside the database are two tables related to OAuth:

oauthchallenge: The challenge table holds OAuth sessions which have been started, and which are expected to either succeed or fail within the next few minutes. The default timeout for a challenge is 5 minutes. A challenge record is usually created the moment the user clicks on the "Sign in with Microsoft" button on your site, and it tracks that authentication attempt.

oauthsession: The session table holds OAuth sessions which have successfully authenticated, and also the token that was retrieved by a successful authorization. If a token has expired, then it is refreshed and updated in-place, inside the oauthsession table.

An OAuth login follows this sequence of events:

1. User clicks on a "Signin with X" button on your login page.

2. A record is created in the oauthchallenge table, with a unique ID.
This ID is a secret known only to the authaus server and the OAuth server. It is used as the `state` parameter in the OAuth login mechanism.

3. The HTTP call which prompts #2 returns a redirect URL (eg via an HTTP 302 response), which redirects the user's browser to the OAuth website, so that the user can either grant or refuse access. If the user refuses, or fails to login, then the login sequence ends here.

4. Upon successful authorization with the OAuth system, the OAuth website redirects the user back to your website, to a URL such as example.com/auth/oauth/finish, and you'll typically want Authaus to handle this request directly (via HttpHandlerOAuthFinish). Authaus will extract the secrets from the URL, perform any validations necessary, and then move the record from the oauthchallenge table into the oauthsession table. While 'moving' the record over, it will also add any additional information that was provided by the successful authentication, such as the token provided by the OAuth provider.

5. Authaus makes an API call to the OAuth system, to retrieve the email address and name of the person that just logged in, using the token just received.

6. If that email address does not exist inside authuserstore, then create a new user record for this identity.

7. Log the user into Authaus, by creating a record inside authsession, for the relevant identity. Inside the authsession table, store a link to the oauthsession record, so that there is a 1:1 link from the authsession table to the oauthsession table (ie Authaus Session to OAuth Token).

8. Return an Authaus session cookie to the browser, thereby completing the login.

Although we only use our OAuth token a single time, during login, to retrieve the user's email address and name, we retain the OAuth token, and so we maintain the ability to make other API calls on behalf of that user. This hasn't proven necessary yet, but it seems like a reasonable bit of future-proofing.

See the guidelines at the top of all_test.go for testing instructions.
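Tying the earlier pieces together, the built-in Role Group Database's permit format (a concatenation of 32-bit group IDs, base64-encoded for SQL storage) can be sketched as follows; the byte order within each ID is an assumption made for illustration.

    package main

    import (
        "encoding/base64"
        "encoding/binary"
        "fmt"
    )

    // encodePermit packs role group IDs into a permit blob and encodes
    // it as base64, mirroring how Authaus stores permits in SQL.
    func encodePermit(groupIDs []uint32) string {
        blob := make([]byte, 4*len(groupIDs))
        for i, id := range groupIDs {
            binary.BigEndian.PutUint32(blob[4*i:], id) // byte order assumed
        }
        return base64.StdEncoding.EncodeToString(blob)
    }

    func main() {
        // Three hypothetical role group IDs.
        fmt.Println(encodePermit([]uint32{1, 7, 42}))
    }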
Package negotiator is a library that handles content negotiation in web applications written in Go.

Content negotiation is specified by RFC 7231 (http://tools.ietf.org/html/rfc7231) and, less formally, by Ajax (https://en.wikipedia.org/wiki/XMLHttpRequest).

A Negotiator contains a list of ResponseProcessors. For each call to Negotiate, one or more offers of possibly-matching data are compared with the headers in the request. The best-matching response processor is chosen and given the task of sending the response.

For more information visit http://github.com/rickb777/negotiator

Accept - from https://tools.ietf.org/html/rfc7231#section-5.3.2:

The "Accept" header field can be used by user agents to specify response media types that are acceptable. Accept header fields can be used to indicate that the request is specifically limited to a small set of desired types, as in the case of a request for an in-line image. A request without any Accept header field implies that the user agent will accept any media type in response. If the header field is present in a request and none of the available representations for the response have a media type that is listed as acceptable, the origin server can either honor the header field by sending a 406 (Not Acceptable) response, or disregard the header field by treating the response as if it is not subject to content negotiation.

Accept-Language - from https://tools.ietf.org/html/rfc7231#section-5.3.5:

The "Accept-Language" header field can be used by user agents to indicate the set of natural languages that are preferred in the response. A request without any Accept-Language header field implies that the user agent will accept any language in response. If the header field is present in a request and none of the available representations for the response have a matching language tag, the origin server can either disregard the header field by treating the response as if it is not subject to content negotiation or honor the header field by sending a 406 (Not Acceptable) response. However, the latter is not encouraged, as doing so can prevent users from accessing content that they might be able to use (with translation software, for example).
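In use, that looks roughly like the handler below. This is a hypothetical sketch: the New constructor and the JSONProcessor/XMLProcessor names are assumptions for illustration rather than the package's confirmed API, so check the project link above for the real names.

    package main

    import (
        "log"
        "net/http"

        "github.com/rickb777/negotiator" // see the project link above
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        offer := struct {
            Message string `json:"message"`
        }{Message: "hello"}

        // Build a Negotiator from ResponseProcessors (constructor and
        // processor names assumed) and let it pick the best match for
        // the request's Accept headers.
        n := negotiator.New(negotiator.JSONProcessor(), negotiator.XMLProcessor())
        if err := n.Negotiate(w, r, offer); err != nil {
            http.Error(w, err.Error(), http.StatusNotAcceptable)
        }
    }

    func main() {
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }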
Package peer provides a common base for creating and managing Decred network peers.

This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc.

A quick overview of the major features peer provides is as follows:

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details.

A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed.

In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified, and setting the Listeners field of the Config struct specified when creating a peer to it. For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which, in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked.

The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters, and it optionally provides a flag to cause it to block until the message is actually sent.

A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version.

This package provides extensive logging capabilities through the UseLogger function which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C.

This package supports all improvement proposals supported by the wire package (https://godoc.org/github.com/Decred-Next/dcrnd/wire#hdr-Bitcoin_Improvement_Proposals).

This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
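The announced outbound example matches the sketch shown earlier in this document. As a further illustration of the closure pattern described above, the OnRead and OnWrite hooks can feed shared counters for server-wide byte accounting. A fragment, assuming sync/atomic plus this fork's peer and wire packages; the listener signatures follow the dcrd-derived peer packages and are an assumption here:

    // newCountingConfig returns a peer.Config whose OnRead/OnWrite
    // listeners accumulate server-wide byte counts via the closed-over
    // counters.
    func newCountingConfig(bytesRead, bytesWritten *uint64) *peer.Config {
        return &peer.Config{
            Listeners: peer.MessageListeners{
                OnRead: func(p *peer.Peer, n int, msg wire.Message, err error) {
                    atomic.AddUint64(bytesRead, uint64(n))
                },
                OnWrite: func(p *peer.Peer, n int, msg wire.Message, err error) {
                    atomic.AddUint64(bytesWritten, uint64(n))
                },
            },
        }
    }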
sshagentca is an ssh server acting as a forwarded-agent certificate authority: a server that adds ssh user certificates to forwarded ssh agents.

An example client session uses a key pair whose public key is registered in the server settings.yaml (see https://github.com/rorycl/sshagentca-docker for a docker image to test this out). The login username that the client provides when connecting to `sshagentca` is ignored - it does not have to match the `name:` in `settings.yaml`. Certificates from sshagentca can be conveniently used with pam-ussh (see https://github.com/uber/pam-ussh) to control sudo privileges on suitably configured servers.

Please refer to the specification at PROTOCOL.certkeys at https://www.openssh.com/specs.html and the related go documentation at https://godoc.org/golang.org/x/crypto/ssh.

version 0.0.7-beta : 20 September 2021

The server requires an ssh private key and an ssh certificate authority (CA) private key, with a password required for the CA key at least. The server will prompt for passwords on startup, or the environment variables `SSHAGENTCA_PVT_KEY` and `SSHAGENTCA_CA_KEY` can be set. Configuration is done in the settings.yaml file and includes certificate settings such as the validity period and organisation name, and the prompt received by the client. Users are configured in the `user_principals` section, where each user is required to have a name, an ssh public key and a list of principals. The server will run on the specified IP address and port, by default 0.0.0.0:2222.

If the server runs successfully, it will respond to ssh connections that have a public key listed in the `user_principals` section and which have a forwarded agent. The response is to insert into the forwarded agent an ssh user certificate signed by `caprivatekey`, with the parameters set out in `settings.yaml` and the restrictions noted below.

sshagentca generates a new key and corresponding certificate to insert into the client's ssh-agent, signed using ed25519 keys. The CA key you provide to sign the certificate may be a different key type. Clients can authenticate to sshagentca using any key type supported by go's `x/crypto/ssh` package, including ed25519 keys introduced in go 1.13. Key type support includes the ecdsa-sk key type used with U2F security keys, introduced in OpenSSH 8.2. As a result, you can use a physical U2F token with an OpenSSH 8.2 client to authenticate to sshagentca, whilst the keys and certificates it issues can be used to login to older versions of sshd.

## Certificate Restrictions

The project currently has no support for host certificates, although these could be easily added. With reference to https://cvsweb.openbsd.org/src/usr.bin/ssh/PROTOCOL.certkeys?annotate=HEAD there is presently no support for customising *critical options*, and only the standard *extensions*, such as `permit-agent-forwarding`, `permit-port-forwarding` and `permit-pty`, are permitted. Each certificate's principals are taken from the principals set out for the specific connecting client public key in the `user_principals` settings. The `valid after` timestamp in generated certificates is set according to the `validity` settings parameter, specified in minutes. A `validity` duration of 24 hours or more is not permitted.

## Key generation

To generate new server keys, refer to man ssh-keygen, and specify a password when prompted; an example invocation is sketched below. The id_server file is the private key.
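The original command examples are not reproduced in this text. The following sketch shows plausible invocations for the server key and for the CA key discussed in the next paragraph, together with an sshd_config line that trusts the resulting CA public key; the ed25519 key type, file names and comment are illustrative choices, not requirements:

```
# server key (id_server is the private key); specify a password when prompted
ssh-keygen -t ed25519 -f id_server

# certificate authority key, with a comment to aid CA key management
ssh-keygen -t ed25519 -f ca -C "sshagentca certificate authority"

# on servers that should accept certificates signed by this CA,
# add to /etc/ssh/sshd_config:
TrustedUserCAKeys /etc/ssh/ca.pub
```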
Certificate authority keys are generated in the same way, although adding a comment is often considered sensible for CA key management, as in the sketch above; choose a password when prompted. The ca file is the private key. The ca.pub key in this example should be used in the sshd_config file on any server for which you wish to grant certificate-authenticated access, for example via the TrustedUserCAKeys directive shown above.

The use of principals to provide "zone" based access to servers is set out at https://engineering.fb.com/security/scalable-and-secure-access-with-ssh/

Thanks to Peter Moody for his pam-ussh announcement at https://medium.com/uber-security-privacy/introducing-the-uber-ssh-certificate-authority-4f840839c5cc which was the inspiration for this project, and for the comments and help from him and others on the ssh mailing list.

This project is licensed under the MIT Licence.

Rory Campbell-Lange 25 September 2021
Package applicationdiscoveryservice provides the client and types for making API requests to AWS Application Discovery Service. AWS Application Discovery Service helps you plan application migration projects by automatically identifying servers, virtual machines (VMs), software, and software dependencies running in your on-premises data centers. Application Discovery Service also collects application performance data, which can help you assess the outcome of your migration. The data collected by Application Discovery Service is securely retained in an Amazon-hosted and managed database in the cloud. You can export the data as a CSV or XML file into your preferred visualization tool or cloud-migration solution to plan your migration. For more information, see the Application Discovery Service FAQ (http://aws.amazon.com/application-discovery/faqs/). Application Discovery Service offers two modes of operation. Agentless discovery mode is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about software and software dependencies. It also doesn't work in non-VMware environments. We recommend that you use agent-based discovery for non-VMware environments and if you want to collect information about software and software dependencies. You can also run agent-based and agentless discovery simultaneously. Use agentless discovery to quickly complete the initial infrastructure assessment and then install agents on select hosts to gather information about software and software dependencies. Agent-based discovery mode collects a richer set of data than agentless discovery by using Amazon software, the AWS Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of installed software applications, system and process performance, resource utilization, and network dependencies between workloads. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud. Application Discovery Service integrates with application discovery solutions from AWS Partner Network (APN) partners. Third-party application discovery tools can query Application Discovery Service and write to the Application Discovery Service database using a public API. You can then import the data into either a visualization tool or cloud-migration solution. Application Discovery Service doesn't gather sensitive information. All data is handled according to the AWS Privacy Policy (http://aws.amazon.com/privacy/). You can operate Application Discovery Service using offline mode to inspect collected data before it is shared with the service. Your AWS account must be granted access to Application Discovery Service, a process called whitelisting. This is true for AWS partners and customers alike. To request access, sign up for AWS Application Discovery Service here (http://aws.amazon.com/application-discovery/preview/). We send you information about how to get started. This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. 
Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs (http://aws.amazon.com/tools/#SDKs). This guide is intended for use with the AWS Application Discovery Service User Guide (http://docs.aws.amazon.com/application-discovery/latest/userguide/). See https://docs.aws.amazon.com/goto/WebAPI/discovery-2015-11-01 for more information on this service. See the applicationdiscoveryservice package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/

To contact AWS Application Discovery Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Application Discovery Service client ApplicationDiscoveryService for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/#New
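A minimal sketch of the pattern just described, assuming aws-sdk-go v1; the region is an illustrative placeholder:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/applicationdiscoveryservice"
)

func main() {
	// Create a session and a service client; clients are safe for
	// concurrent use.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-west-2"), // placeholder region
	}))
	svc := applicationdiscoveryservice.New(sess)

	// Example request: describe the discovery agents registered in
	// this account.
	out, err := svc.DescribeAgents(&applicationdiscoveryservice.DescribeAgentsInput{})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(out)
}
```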
Package middleware provides a net/http middleware for APIs and web servers. It largely works in much the same way as `node-autoscaling` does: it wraps a struct that implements `http.Handler` by:

1. Minting a requestID
2. Running, and timing, the wrapped handler
3. Adding the request ID to the response
4. _responding_ with this data
5. Logging out, to STDOUT, request metadata

A simple implementation would look like the sketch below. A request to `localhost:8008` would then log the request metadata to STDOUT, with a response such as:

    Trying ::1...
    TCP_NODELAY set
    Connected to localhost (::1) port 8008 (#0)
    > HEAD / HTTP/1.1
    > Host: localhost:8008
    > User-Agent: curl/7.51.0
    > Accept: text/plain
    >
    < HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    < Content-Type: text/plain; charset=utf-8
    Content-Type: text/plain; charset=utf-8
    < X-Request-Id: 72c2b7aa-3bcb-478f-8724-66f38cd3abc0
    X-Request-Id: 72c2b7aa-3bcb-478f-8724-66f38cd3abc0
    < Content-Length: 10
    Content-Length: 10
    <
    Curl_http_done: called premature == 0
    Connection #0 to host localhost left intact
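The original usage snippet is not reproduced in this text; the following minimal sketch shows the shape such an implementation might take. The import path and the constructor name (NewMiddleware here) are assumptions, not this package's confirmed API:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	middleware "github.com/example/middleware" // assumed import path
)

func main() {
	// The handler to be wrapped; the middleware adds the request ID
	// header, timing, and STDOUT logging around it.
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "something\n") // a 10-byte body, as in the transcript above
	})

	// NewMiddleware is an assumed constructor name.
	log.Fatal(http.ListenAndServe(":8008", middleware.NewMiddleware(h)))
}
```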
Package Authaus is an authentication and authorization system. Authaus brings together the following pluggable components: an Authenticator, a Session database, a Permit database, a Role Groups database, and a User Store. Any of these five components can be swapped out, and in fact the fourth and fifth ones (Role Groups and User Store) are entirely optional. A typical setup is to use LDAP as an Authenticator, and Postgres as a Session, Permit, and Role Groups database. Your session database does not need to be particularly performant, since Authaus maintains an in-process cache of session keys and their associated tokens.

Authaus was NOT designed to be a "Facebook Scale" system. The target audience is a system of perhaps 100,000 users. There is nothing fundamentally limiting about the API of Authaus, but the internals certainly have not been built with millions of users in mind.

The intended usage model is this: Authaus is intended to be embedded inside your security system, and run as a standalone HTTP service (aka a REST service). This HTTP service CAN be open to the wide world, but it's also completely OK to let it listen only to servers inside your DMZ. Authaus only gives you the skeleton and some examples of HTTP responders. It's up to you to flesh out the details of your authentication HTTP interface, and whether you'd like that to face the world, or whether it should only be accessible via other services that you control.

At startup, your services open an HTTP connection to the Authaus service. This connection will typically live for the duration of the service. For every incoming request, you peel off whatever authentication information is associated with that request. This is either a session key, or a username/password combination. Let's call it the authorization information. You then ask Authaus to tell you WHO this authorization information belongs to, as well as WHAT this authorization information allows the requester to do (i.e. authentication and authorization). Authaus responds either with a 401 (Unauthorized), a 403 (Forbidden), or a 200 (OK) and a JSON object that tells you the identity of the agent submitting this request, as well as the permissions that this agent possesses. It's up to your individual services to decide what to do with that information.

It should be very easy to expose Authaus over a protocol other than HTTP, since Authaus is intended to be easy to embed. The HTTP API is merely an illustrative example.

A `Session Key` is the long random number that is typically stored as a cookie.

A `Permit` is a set of roles that has been granted to a user. Authaus knows nothing about the contents of a permit. It simply treats it as a binary blob, and when writing it to an SQL database, encodes it as base64. The interpretation of the permit is application dependent. Typically, a Permit will hold information such as "Allowed to view billing information", or "Allowed to paint your bathroom yellow". Authaus does have a built-in module called RoleGroupDB, which has its own interpretation of what a Permit is, but you do not need to use this.

A `Token` is the result of a successful authentication. It stores the identity of a user, an expiry date, and a Permit. A token will usually be retrieved by a session key. However, you can also perform a once-off authentication, which also yields you a token, which you will typically throw away when you are finished with it.

All public methods of the `Central` object are callable from multiple threads. Reader-Writer locks are used in all of the caching systems.
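As an illustration of the per-request flow just described, here is a minimal sketch of a service checking a session key against an Authaus HTTP endpoint. The endpoint URL, the cookie name, and the JSON field names are hypothetical; only the status-code handling follows the description above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// CheckResponse mirrors the JSON object described above; the field
// names here are hypothetical.
type CheckResponse struct {
	Identity string   `json:"identity"`
	Roles    []string `json:"roles"`
}

// authorize peels the session key off the request and asks Authaus
// WHO it belongs to and WHAT the requester may do.
func authorize(w http.ResponseWriter, r *http.Request) (*CheckResponse, bool) {
	cookie, err := r.Cookie("session")
	if err != nil {
		http.Error(w, "no session", http.StatusUnauthorized)
		return nil, false
	}

	resp, err := http.Get("http://authaus.internal/check?sessionkey=" + cookie.Value)
	if err != nil {
		http.Error(w, "auth service unavailable", http.StatusBadGateway)
		return nil, false
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusOK:
		var cr CheckResponse
		if err := json.NewDecoder(resp.Body).Decode(&cr); err != nil {
			http.Error(w, "bad auth response", http.StatusInternalServerError)
			return nil, false
		}
		return &cr, true
	case http.StatusForbidden:
		http.Error(w, "forbidden", http.StatusForbidden)
	default:
		http.Error(w, "unauthorized", http.StatusUnauthorized)
	}
	return nil, false
}

func main() {
	http.HandleFunc("/billing", func(w http.ResponseWriter, r *http.Request) {
		cr, ok := authorize(w, r)
		if !ok {
			return
		}
		fmt.Fprintf(w, "hello %s, roles: %v\n", cr.Identity, cr.Roles)
	})
	http.ListenAndServe(":8080", nil)
}
```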
The number of concurrent connections is limited only by the limits of the Go runtime, and the performance limits that are inherent to the simple reader-writer locks used to protect shared state.

Authaus must be deployed as a single process (which implies running on a single logical machine). The sole reason why it must run in only one process and not more is the state that lives inside the various Authaus caches. Were it not for these caches, there would be nothing preventing you from running Authaus on as many machines as necessary. The cached state stored inside the Authaus server includes, among other things, the session cache described below. If you wanted to make Authaus runnable across multiple processes, then you would need to implement a cache invalidation system for these caches.

Authaus makes no attempt to mitigate DOS attacks. The most sane approach in this domain seems to be the one described at http://security.stackexchange.com/questions/12101/prevent-denial-of-service-attacks-against-slow-hashing-functions.

The password database (created via NewAuthenticationDB_SQL) stores password hashes using the scrypt key derivation system (http://www.tarsnap.com/scrypt.html). Internally, we store our hash in a format that can later be extended, should we wish to double-hash the passwords, etc. The hash is 65 bytes long. The first byte of the hash is a version number; the remaining 64 bytes are the salt and the hash itself. At present, only one version is supported, which is version 1. It consists of 32 bytes of salt and 32 bytes of scrypt'ed hash, with scrypt parameters N=256, r=8, p=1. Note that the parameter N=256 is quite low, meaning that it is possible to compute this in approximately 1 millisecond (1,000,000 nanoseconds) on a 2009-era Intel Core i7. This is a deliberate tradeoff. On the same CPU, a SHA256 hash takes about 500 nanoseconds to compute, so we are still making it 2000 times harder to brute force the passwords than an equivalent system storing only a salted SHA256 hash. This discussion is only of relevance in the event that the password table is compromised.

No cookie signing mechanism is implemented. Cookies are not presently transmitted with Secure:true. This must change.

The LDAP Authenticator is extremely simple, and provides only one function: authenticate a user against an LDAP system (often this means Active Directory, AKA a Windows Domain). It calls the LDAP "Bind" method, and if that succeeds for the given identity/password, then the user is considered authenticated. We take care not to allow an "anonymous bind", which many LDAP servers allow when the password is blank.

The Session Database runs on Postgres. It stores a table of sessions, where each row contains the information associated with one session key. When a permit is altered with Authaus, all existing sessions have their permits altered transparently. For example, imagine User X is logged in, and his administrator grants him a new permission. User X does not need to log out and log back in again in order for his new permissions to be reflected; his new permissions will be available immediately. Similarly, if a password is changed with Authaus, then all sessions are invalidated. Do take note though, that if a password is changed through an external mechanism (such as with LDAP), then Authaus will have no way of knowing this, and will continue to serve up sessions that were authenticated with the old password. This is a problem that needs addressing.
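A minimal sketch of producing a version-1 hash in the 65-byte layout described above (one version byte, 32 bytes of salt, 32 bytes of scrypt output with N=256, r=8, p=1), using golang.org/x/crypto/scrypt. This illustrates the stated format; it is not Authaus's actual code:

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/scrypt"
)

const hashVersion = 1

// hashPassword returns a 65-byte hash: version || salt(32) || scrypt(32).
func hashPassword(password string) ([]byte, error) {
	salt := make([]byte, 32)
	if _, err := rand.Read(salt); err != nil {
		return nil, err
	}
	// N=256, r=8, p=1: deliberately cheap, per the tradeoff described above.
	dk, err := scrypt.Key([]byte(password), salt, 256, 8, 1, 32)
	if err != nil {
		return nil, err
	}
	out := make([]byte, 0, 65)
	out = append(out, hashVersion)
	out = append(out, salt...)
	out = append(out, dk...)
	return out, nil
}

func main() {
	h, err := hashPassword("hunter2")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hash length: %d bytes, version: %d\n", len(h), h[0])
}
```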
You can limit the number of concurrent sessions per user to 1 by setting the MaxActiveSessions field of ConfigSessionDB to 1. This setting may only be zero or one. Zero, which is the default, means an unlimited number of concurrent sessions per user. Authaus will always place your Session Database behind its own Session Cache. This session cache is a very simple single-process in-memory cache of recent sessions. The limit on the number of entries in this cache is hard-coded, and that should probably change.

The Permit database runs on Postgres. It stores a table of permits, which is simply a 1:1 mapping from Identity -> Permit. The Permit is just an array of bytes, which we store base64 encoded, inside a text field. This part of the system doesn't care how you interpret that blob.

The Role Group Database is an entirely optional component of Authaus. The other components of Authaus (Authenticator, PermitDB, SessionDB) do not understand your Permits. To them, a Permit is simply an arbitrary array of bytes. The Role Group Database is a component that adds a specific meaning to a permit blob. Let's see what that specific meaning looks like. The built-in Role Group Database interprets a permit blob as a string of 32-bit integer IDs (see the sketch after this section). These 32-bit integer IDs refer to "role groups" inside a database table, which maps each group ID to a named group and its associated roles.

The Role Group IDs use 32-bit indices, because we assume that you are not going to create more than 2^32 different role groups. The worst case we assume here is that of an automated system that creates 100,000 roles per day. Such a system would run for more than 100 years before exhausting a 32-bit ID. These constraints are so loose that we do not even need 32 bits, and could get away with just a 16-bit group ID. However, we expect the number of groups to be relatively small. Our aim here, arbitrary though it may be, is to fit the permit and identity into a single ethernet packet, which one can reasonably peg at 1500 bytes. 1500 / 4 = 375. We assume that no sane human administrator will assign 375 security groups to any individual. We expect the number of groups assigned to any individual to be in the range of 1 to 20. This makes 375 a gigantic buffer.

See the guidelines at the top of all_test.go for testing instructions.
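To make the permit-blob interpretation concrete, here is a small sketch that decodes a permit into its 32-bit group IDs and re-encodes the blob as base64 for SQL storage, as described above. The big-endian byte order is an assumption; Authaus's actual encoding may differ:

```go
package main

import (
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

// decodePermit interprets a permit blob as a sequence of 32-bit role
// group IDs, per the built-in Role Group Database's convention.
func decodePermit(blob []byte) ([]uint32, error) {
	if len(blob)%4 != 0 {
		return nil, fmt.Errorf("permit length %d is not a multiple of 4", len(blob))
	}
	ids := make([]uint32, 0, len(blob)/4)
	for i := 0; i < len(blob); i += 4 {
		ids = append(ids, binary.BigEndian.Uint32(blob[i:i+4])) // byte order assumed
	}
	return ids, nil
}

func main() {
	// A permit granting three role groups.
	blob := []byte{0, 0, 0, 1, 0, 0, 0, 7, 0, 0, 0, 42}

	ids, err := decodePermit(blob)
	if err != nil {
		panic(err)
	}
	fmt.Println("role group IDs:", ids)

	// Permits are stored base64 encoded inside a text field.
	fmt.Println("stored as:", base64.StdEncoding.EncodeToString(blob))
}
```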
Package peer provides a common base for creating and managing Decred network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc.

A quick overview of the major features peer provides is as follows:

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details.

A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed.

In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified, and setting it as the Listeners field of the Config struct used when creating the peer. For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so that even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which, in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked.

The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters, and it optionally provides a flag to cause it to block until the message is actually sent.

A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version.

This package provides extensive logging capabilities through the UseLogger function, which allows a btclog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C.

This package supports all improvement proposals supported by the wire package. (https://godoc.org/github.com/decred/dcrd/wire#hdr-Bitcoin_Improvement_Proposals)

This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
Package negronimodified implements checking and setting of last-modified headers handler middleware for Negroni.

HTTP servers or content generators in the background can suppress sending the HTTP body content along with the response, based on some of the headers set in the client request, most notably "Last-Modified" on the server side and "If-Modified-Since" on the user-agent side. There are some other headers that impact this functionality, such as "Cache-Control" (more on this later), but these two are the main ones we deal with most of the time. When the server (or whatever is responsible for setting the header content) decides that the content of interest has a certain generated time attached to it, it can inform the client about it in the response it sends back alongside the actual content. The client can then leverage this data for its caching purposes: with the next identical request it sends this same time to the server, where it is checked against the requested content's time, and if the times match, none of the content needs to be sent to the client. This way we save on processing resources, bandwidth and, consequently, time.

Usage

Initializing the middleware with default settings (see the sketch below) includes the setting that adds a "Cache-Control" header with a value of "public, must-revalidate", which means that everything should be cached and revalidated. You can set your own value (or remove the header altogether) by initializing the middleware with a custom value. To avoid having to study how to manipulate the headers correctly, you can use some of the included convenience functions. One sets the modification time; if you worry about setting the value in a correct date and time format, a variant whose second argument is a time.Time value has also been included. Because the header checkup happens further down the line, a lot of content processing and generation can go on, only for it to be discarded at the end because the client already has the latest version. We can prevent all this unnecessary data processing by calling, and checking the returned value of, another included convenience function: if the times differ, it returns true and we can proceed as usual; otherwise we halt any further data generation. Naturally, a time.Time variant of this function also exists.

For additional information, please check the following link for the official definition: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.3.5
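The original usage snippets are not reproduced in this text. The following sketch shows the general shape under Negroni; the constructor and helper names used here (New, SetLastModified) are assumptions about this package's API, not confirmed signatures:

```go
package main

import (
	"net/http"
	"time"

	negronimodified "github.com/example/negronimodified" // assumed import path
	"github.com/urfave/negroni"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Hypothetical helper: record the content's modification time
		// so the middleware can compare it with If-Modified-Since.
		negronimodified.SetLastModified(w, time.Now().Truncate(time.Second))
		w.Write([]byte("hello"))
	})

	n := negroni.Classic()
	n.Use(negronimodified.New()) // assumed constructor with default settings
	n.UseHandler(mux)
	n.Run(":3000")
}
```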
Package peer provides a common base for creating and managing MultiCash network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the MultiCash wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc.

A quick overview of the major features peer provides is as follows:

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the multicash network to use, which services it supports, and callbacks to invoke when multicash messages are received. See the documentation for each field of the Config struct for more details.

A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed.

In order to do anything useful with a peer, it is necessary to react to multicash messages. This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified, and setting it as the Listeners field of the Config struct used when creating the peer. For convenience, a callback hook for all of the currently supported multicash messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so that even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which, in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked.

The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, and PushGetHeadersMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Finally, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests.

A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version.

This package provides extensive logging capabilities through the UseLogger function, which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C.

This package supports all improvement proposals supported by the wire package.

This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
Package acceptable is a library that handles headers for content negotiation and conditional requests in web applications written in Go. Content negotiation is specified by RFC 7231 (http://tools.ietf.org/html/rfc7231) and, less formally, by Ajax (https://en.wikipedia.org/wiki/XMLHttpRequest). The library comprises several sub-packages:

* contenttype, headername - bundles of useful constants
* data - for holding response data & metadata prior to rendering the response, also allowing lazy evaluation
* header - for parsing and representing certain HTTP headers
* offer - for enumerating offers to be matched against requests
* templates - for rendering Go templates

Server-based content negotiation is essentially simple: the user agent sends a request including some preferences (accept headers), then the server selects one of several possible ways of sending the response. Finding the best match depends on you listing your available response representations. This is all rolled up into a simple-to-use function, `acceptable.RenderBestMatch`. What this does is described in detail in [RFC-7231](https://tools.ietf.org/html/rfc7231#section-5.3), but it's easy to use in practice; see the sketch below for an example.

The RenderBestMatch function searches for the offer that best matches the request headers. If none match, the response will be 406 Not Acceptable. If you need to have a catch-all case, include offer.Of(p, contenttype.TextAny) or offer.Of(p, contenttype.Any) last in the list. Note that contenttype.TextAny is "text/*" and will typically return "text/plain"; contenttype.Any is "*/*" and will likewise return "application/octet-stream". Each offer will (usually) have a suitable offer.Processor, which is a rendering function. Several are provided (for JSON, XML etc), but you can also provide your own. Also, the templates sub-package provides Go template support.

Offers are restricted both by content-type matching and by language matching. The `With` method provides data and specifies its content language. Use it as many times as you need to. The language(s) is matched against the Accept-Language header using the basic prefix algorithm. This means, for example, that if you specify "en" it will match "en", "en-GB" and everything else beginning with "en-", but if you specify "en-GB", it only matches "en-GB" and "en-GB-*", and won't match "en-US" or even "en". (This implements the basic filtering language matching algorithm defined in https://tools.ietf.org/html/rfc4647.) If your data doesn't need to specify a language, the With method should simply use the "*" wildcard instead. For example, myOffer.With(data, "*") attaches data to myOffer and doesn't restrict the offer to any particular language.

The language wildcard could also be used as a catch-all case if it comes after one or more With calls with a specified language. However, the standard (RFC-7231) advises that a response should be returned even when language matching has failed; RenderBestMatch will do this by picking the first language listed as a fallback, so the catch-all case is only necessary if its data is different to that of the first case.

The response data (such as the en and fr values in the sketch below) can be structs, slices, maps, or other values that the rendering processors accept. They will be wrapped as data.Data values, which you can provide explicitly. These allow for lazy evaluation of the content and also support conditional requests. This comes into its own when there are several offers, each with its own data model - if these were all to be read from the database before selection of the best match, all but one would be wasted.
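The original example is not reproduced here. This sketch shows the general shape of calling RenderBestMatch with an offer carrying per-language data. offer.Of and With are described above; the import paths, the processor placeholder, and the exact RenderBestMatch signature (assumed variadic here) are assumptions about this library's API:

```go
package main

import (
	"net/http"

	"github.com/rickb777/acceptable"       // assumed import path
	"github.com/rickb777/acceptable/offer" // assumed import path
)

// jsonProcessor stands in for one of the provided rendering
// processors (JSON, XML, etc); obtain it from the library.
var jsonProcessor offer.Processor

func handler(w http.ResponseWriter, r *http.Request) {
	en := map[string]string{"greeting": "Hello"}
	fr := map[string]string{"greeting": "Bonjour"}

	// An offer pairs a rendering processor with a content type; With
	// attaches data restricted to a content language.
	o := offer.Of(jsonProcessor, "application/json").With(en, "en").With(fr, "fr")

	// Select the best match for the request's Accept headers and
	// render it; 406 Not Acceptable is sent if nothing matches.
	_ = acceptable.RenderBestMatch(w, r, o)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```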
Lazy evaluation of the selected data easily overcomes this problem. Besides the data and error returned values, some metadata can optionally be returned. This is the basis for easy support for conditional requests (see [RFC-7232](https://tools.ietf.org/html/rfc7232)). If the metadata is nil, it is simply ignored. However, if it contains a hash of the data (e.g. via MD5), known as the entity tag or etag, then the response will have an ETag header. User agents that recognise this will later repeat the request along with an If-None-Match header. If present, If-None-Match is recognised before rendering starts, and a successful match will avoid the need for any rendering. Thanks to the lazy content fetching, this can reduce unnecessary database traffic etc.

The metadata can also carry the last-modified timestamp of the data, if this is known. When present, this becomes the Last-Modified header and is checked on subsequent requests using the If-Modified-Since header. The template and language parameters are used for templated/web content data; otherwise they are ignored.

Sequences of data can also be produced. This is done with data.Sequence(), which takes the same supplier function as used by data.Lazy(). The difference is that, in a sequence, the supplier function will be called repeatedly until its result value is nil. All the values will be streamed in the response (how this is done depends on the rendering processor).

Most responses will be UTF-8, sometimes UTF-16. All other character sets (e.g. Windows-1252) are now strongly deprecated; however, legacy support for other character sets is provided. Transcoding is implemented by Match.ApplyHeaders so that Accept-Charset content negotiation can be implemented. This depends on finding an encoder in golang.org/x/text/encoding/htmlindex (this has an extensive list; no other encoders are supported). Whenever possible, responses will be UTF-8. Not only is this strongly recommended, it also avoids any transcoding processing overhead. It means, for example, that "Accept-Charset: iso-8859-1, utf-8" will ignore the iso-8859-1 preference because UTF-8 can be used instead. Conversely, "Accept-Charset: iso-8859-1" will always have to transcode into ISO-8859-1 because there is no UTF-8 option.
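To illustrate the conditional-request mechanics this library automates, here is a plain net/http sketch of ETag handling: the server hashes the representation, sets an ETag, and answers 304 Not Modified when If-None-Match matches. This shows standard HTTP behaviour, not this library's API:

```go
package main

import (
	"crypto/md5"
	"fmt"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	body := []byte(`{"greeting":"Hello"}`)

	// The entity tag is a hash of the representation (MD5 here).
	etag := fmt.Sprintf(`"%x"`, md5.Sum(body))

	// If the client already holds this version, skip rendering entirely.
	if r.Header.Get("If-None-Match") == etag {
		w.WriteHeader(http.StatusNotModified)
		return
	}

	w.Header().Set("ETag", etag)
	w.Header().Set("Content-Type", "application/json")
	w.Write(body)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```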
Package rfsb provides the primitives to build configuration management systems. As a starting point, consider a very basic usage example: a complete rfsb config management system that defines a resource to be created, and then creates it (materializes it).

The atomic units of computation in the RFSB model are types implementing the Resource interface. This is a pretty simple interface, basically being 'a thing that can be run (and has a name, and a logger)'. RFSB comes with most of the resources a standard configuration management system would need, but if other resources are required, the interface is simple to understand and implement.

Resources can be composed together via a ResourceGraph, which can then be composed further, building up higher level abstractions. Resources (and ResourceGraphs) can also have dependencies between them. The `When` and `Do` methods set up a Dependency between, for example, a filesystem provisioning resource and an admin users provisioning resource. This ensures that admin users will not be provisioned before the filesystem has been set up. When all of the Resources have been defined and composed together into a single ResourceGraph, we can Materialize the graph. And there we have it: everything you need to know about rfsb.

Puppet/Chef/Ansible (henceforth referred to collectively as CAP) all suck. For small deployments, the centralised server of CP is annoying, and for larger deployments, A is limiting, P devolves into a mess, and C, well, I guess it can work if you devote an entire team to it. RFSB aims to grow gracefully from small to large, and give you the power to run it in any way you like. Let's take a look at some features that RFSB doesn't force on you.

A popular feature of many systems is periodic agent runs. RFSB doesn't have any built-in support for running periodically, so we'll need to build it ourselves, using the radical method of running our Materialize function in a loop. What you don't see is all the complexity you just avoided. For example, Chef supports both 'interactive' and 'daemon' mode. You can trigger an interactive Chef run via "chefctl -i". I think this communicates with the "chef-client" process, which then actually performs the Chef run, and reports the results back to chefctl. I think. In RFSB, this complexity goes away. If I want to understand how your RFSB job runs, I open its main.go and read. If you don't ever use periodic runs, you don't ever need to think about them.

Another common feature is some kind of fact store. This allows you to see in one place all of the information about all of your servers. Or at least, all of the information that Puppet or Chef figured might be useful. In RFSB, we take the radical approach of letting you use a fact store if you want, or not if you don't want. For example, if you have a facts store with some Go bindings, you might use it to customize how you provision things, such as by branching on a server's role. Now, this might be the point where you cry out and say 'but I can't have a big if else statement for all of my roles!'. The reality is that you probably can, and if you can't you should cut down the number of unique configurations. But anyway, assuming you have a good reason, RFSB has ways to handle this. Mostly, it relies on an interesting property, known as 'being just Go'. For example, you could use an interesting abstraction known as the 'map', sketched below. Now you might say "but that's basically just a switch statement in map form". And you'd be correct.
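For illustration, the role map discussed above might look like the following minimal sketch. rfsb's real API is only loosely named in this documentation, so the ResourceGraph stand-in type, its Materialize signature, and the role source are illustrative assumptions:

```go
package main

import "log"

// ResourceGraph stands in for rfsb's graph type; the name is taken
// from the text above, not from the package's actual API.
type ResourceGraph struct{ name string }

// Materialize stands in for rfsb's materialization entry point.
func (g *ResourceGraph) Materialize() error {
	log.Printf("materializing %s", g.name)
	return nil
}

// Per-role configuration, expressed with the 'map' abstraction.
var roleGraphs = map[string]*ResourceGraph{
	"web":      {name: "web servers"},
	"database": {name: "database servers"},
}

func main() {
	role := "web" // in reality, read from your fact store

	graph, ok := roleGraphs[role]
	if !ok {
		log.Fatalf("no configuration for role %q", role)
	}

	// For periodic agent runs, call Materialize in a loop instead.
	if err := graph.Materialize(); err != nil {
		log.Fatal(err)
	}
}
```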
But you get the point. You have all the power of Go at your disposal. Go build the abstractions that make sense to you. I'm not going to tell you how to build your software; the person with the context needed to solve your problems is you.
Package negronicompress implements a Negroni middleware handler for various HTTP content compression methods.

A lot of the content that gets sent out from an HTTP server is in text format. Such output can be large in size and has very good compression potential. In HTTP we can take advantage of this potential with the use of various compression encoding schemes, most notably the "deflate" and "gzip" methods of content encoding (support for others is still in the works). If the user agent supports any such mechanism, it specifies its support in the "Accept-Encoding" HTTP header value. The middleware can pick this up and respond accordingly by encoding any output content in one of the user agent's supported compression formats. By doing this, any content sent back to the client is greatly reduced in size, saving bandwidth and, consequently, loading time.

Usage

Initializing the middleware with default settings (see the sketch below) compresses only the content types most widely used in rendering a web page, at a level that is a good compromise between the speed of compression/decompression and the compression ratio. You can define your own level of compression by initializing the middleware with a custom level, where a higher value means better compression but also more processing time and power, while a lower value outputs encoded content faster but yields a worse compression ratio. Keep in mind that the value cannot go below 1 nor above 9.

You can specify additional content types to check for compression, for example compressing all .pdf files and images whenever the middleware comes across them in the "Content-Type" HTTP header usually set by other backend services. If you have multiple instances of this middleware and all share the same custom list of content types allowed to compress, you can alter the global list with a helper function that works in the same way as the middleware method. All new middleware instances instantiated after this function call will then have this new content list set as their default list. The list belonging to the middleware itself can then be further altered with the method call without affecting any other lists. To clear the list, call the same function without specifying any types. An empty list means match any type and thus compress it. After the function call you can then freely create your own custom list of types.
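The original usage snippets are not reproduced in this text. The following sketch shows the shape such usage might take; the constructor name (New) and the content-type method name (AddContentTypes) are assumptions about this package's API, not confirmed signatures:

```go
package main

import (
	"fmt"
	"net/http"

	negronicompress "github.com/example/negronicompress" // assumed import path
	"github.com/urfave/negroni"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/html; charset=utf-8")
		fmt.Fprint(w, "<html><body>hello</body></html>")
	})

	n := negroni.Classic()

	// Default settings: common web content types, balanced level.
	c := negronicompress.New() // assumed constructor

	// Hypothetical method extending the types eligible for compression.
	c.AddContentTypes("application/pdf", "image/svg+xml")

	n.Use(c)
	n.UseHandler(mux)
	n.Run(":3000")
}
```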
Package peer provides a common base for creating and managing Decred network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc.

A quick overview of the major features peer provides is as follows:

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details.

A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed.

In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked specified, and setting it as the Listeners field of the Config struct used when creating the peer. For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided so that even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which, in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked.

The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, and PushGetHeadersMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Finally, the PushGetBlocksMsg and PushGetHeadersMsg functions construct proper messages using a block locator and ignore back-to-back duplicate requests.

A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version.

This package provides extensive logging capabilities through the UseLogger function, which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C.

This package supports all improvement proposals supported by the wire package.

This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
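Since the outbound-peer flow is sketched under the previous entry, here instead is a minimal sketch of the closure-based listener state and optional done channel described above. The OnWrite and QueueMessage signatures shown are assumptions based on the btcd/dcrd lineage and should be verified against the release in use:

    package main

    import (
        "fmt"
        "sync/atomic"

        "github.com/decred/dcrd/peer"
        "github.com/decred/dcrd/wire"
    )

    // newConfig returns a peer.Config whose OnWrite listener is a closure
    // over the provided counter, demonstrating how callback handlers can
    // encapsulate state such as server-wide byte counts.
    func newConfig(totalWritten *uint64) *peer.Config {
        return &peer.Config{
            Listeners: peer.MessageListeners{
                OnWrite: func(p *peer.Peer, bytesWritten int, msg wire.Message, err error) {
                    if err == nil {
                        atomic.AddUint64(totalWritten, uint64(bytesWritten))
                    }
                },
            },
        }
    }

    // send queues a message and blocks until it has actually been sent,
    // using the optional done channel accepted by QueueMessage.
    func send(p *peer.Peer, msg wire.Message) {
        done := make(chan struct{})
        p.QueueMessage(msg, done) // non-blocking queue
        <-done                    // notified once the message is sent
        fmt.Println("message sent")
    }

    func main() {
        var total uint64
        cfg := newConfig(&total)
        _ = cfg // pass to NewOutboundPeer/NewInboundPeer as usual
    }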
Package peer provides a common base for creating and managing Bitcoin network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the bitcoin wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc.

A quick overview of the major features peer provides is as follows:

All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the bitcoin network to use, which services it supports, and callbacks to invoke when bitcoin messages are received. See the documentation for each field of the Config struct for more details.

A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed.

In order to do anything useful with a peer, it is necessary to react to bitcoin messages. This is accomplished by creating an instance of the MessageListeners struct with the desired callbacks specified and assigning it to the Listeners field of the Config struct used when creating the peer. For convenience, a callback hook for all of the currently supported bitcoin messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided, so even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked.

The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently-used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions construct proper messages using a block locator and ignore back-to-back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters, and it optionally provides a flag to cause it to block until the message is actually sent.

A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version.

This package provides extensive logging capabilities through the UseLogger function, which allows a btclog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C.

This package supports all BIPs supported by the wire package (https://pkg.go.dev/github.com/John-Tonny/vclsuite_vcld/wire#hdr-Bitcoin_Improvement_Proposals).

This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
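The outbound-peer flow is sketched under an earlier entry; the following sketch instead illustrates the push convenience functions this entry additionally documents. The signatures shown (notably PushRejectMsg's trailing wait flag and PushAddrMsg's returned address list) are assumptions modeled on the btcd lineage:

    package main

    import (
        "fmt"

        "github.com/John-Tonny/vclsuite_vcld/peer"
        "github.com/John-Tonny/vclsuite_vcld/wire"
    )

    // advertiseAndReject demonstrates the push helpers on a connected peer.
    func advertiseAndReject(p *peer.Peer, known []*wire.NetAddress) {
        // PushAddrMsg trims and randomizes the list when it exceeds the
        // per-message maximum, so the full known-address slice can be passed.
        if _, err := p.PushAddrMsg(known); err != nil {
            fmt.Println(err)
        }

        // PushRejectMsg builds and sends a reject message; the final
        // parameter asks it to block until the message is actually sent.
        p.PushRejectMsg("tx", wire.RejectInvalid, "invalid payload", nil, true)
    }

    func main() {
        // Wire advertiseAndReject into a live *peer.Peer as needed.
    }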
Gobuild deterministically compiles programs written in Go that are available through the Go module proxy, and returns the binary.

The Go module proxy at https://proxy.golang.org ensures source code stays available, and you are highly likely to get the same code each time you fetch it. Gobuild aims to do the same for binaries.

You can compose URLs to a specific module, build or result: The first URL fetches the requested Go module to find the commands (main packages). In case of a single command, it redirects to a URL of the second form. In case of multiple commands, it lists them, linking to URLs of the second form. Links are to the latest module and Go versions, and with goos/goarch guessed based on user-agent. The second URL first resolves "latest" for the module and Go version with a redirect. For URLs with explicit versions, it starts a build for the requested parameters if no build is available yet. After a successful build, it redirects to a URL of the third kind. The third URL represents a successful build. The URL includes the sum: the versioned raw-base64-url-encoded 20-byte prefix of the sha256 sum. The page links to the binary, the build output log file, and to builds of the same command with different module versions, Go versions, and goos/goarch. You need not and cannot refresh a successful build: refreshing would give the same result.

Gobuild maintains a transparency log containing the hashes of all successful builds, similar to the Go module checksum database at https://sum.golang.org/. Gobuild's "get" subcommand looks up a content hash through the transparency log, locally keeping track of the last known tree state. This ensures the list of successful builds and their hashes is append-only, and modifications or removals by the server will be detected when you run "gobuild get". Examples:

Only "go build" is run, for pure Go code. None of "go test", "go generate", build tags, cgo, custom compile/link flags, makefiles, etc. This means gobuild cannot build all Go applications. Gobuild looks up module versions through the Go module proxy. That's why shorthand versions like "@v1" don't resolve.

Gobuild automatically downloads a Go toolchain (SDK) from https://go.dev/dl/ when it is first referenced. It also periodically queries that page for the latest supported releases, for redirecting to the latest supported toolchains.

Gobuild can be configured to verify builds with other gobuild instances, requiring all to return the same hash for a build to be considered successful. It's easy to run a local instance, or an instance internal to your organization.

To build, gobuild executes: For the stripped variant, -ldflags="-buildid= -s" is used.

Get binaries for any module without having a Go toolchain installed: useful when working on a machine that's not yours, or for your colleagues or friends who don't have a Go compiler installed.

Simplify your software release process: you no longer need to cross compile for many architectures and upload binaries to a release page. You never forget a GOOS/GOARCH target. Just link to the build URL for your module and binaries will be created on demand.

Binaries for the most recent Go toolchain: Go binaries include the runtime and standard library of the Go toolchain used to compile, including bugs. Gobuild links or can redirect to binaries built with the latest Go toolchain, so no need to publish new binaries after an updated Go toolchain is released.

Verify reproducibility: configure gobuild to check against other gobuild instances with different configuration to build trust that your binaries are indeed reproducible.
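As an illustration of the sum included in result URLs earlier in this entry, the following sketch derives one from a binary: the raw-URL-base64 encoding of the 20-byte prefix of the SHA-256 sum, with a leading version indicator (assumed to be "0" here):

    package main

    import (
        "crypto/sha256"
        "encoding/base64"
        "fmt"
        "os"
    )

    func main() {
        // Read a built binary and derive its gobuild-style sum.
        data, err := os.ReadFile("binary") // path to the built binary
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        sum := sha256.Sum256(data)

        // 20-byte prefix of the SHA-256 sum, raw-base64-url-encoded, with
        // a version indicator prepended ("0" is assumed here).
        fmt.Println("0" + base64.RawURLEncoding.EncodeToString(sum[:20]))
    }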
A central service like gobuilds.org that provides binaries is an attractive target for attackers. By only building code available through the Go module proxy, and only building with official Go toolchains, the options for attack are limited. Further security measures are the isolation of the gobuild process and of the build commands (minimal file system view, mostly read-only; limited network; disallowing escalation of privileges).

The transparency log is only used when downloading binaries using the "gobuild get" command, which uses and updates the user's local cache of the signed append-only transparency log with hashes of built binaries. If users only download binaries through the convenient web interface, no verification of the transparency log takes place. The transparency log gives the option of verification, and that alone may give users confidence the binaries are not tampered with. A nice way of continuously verifying that a gobuild instance, such as gobuilds.org, is behaving correctly is to set up your own gobuild instance that uses gobuilds.org as the URL to verify builds against.

Gobuild will build binaries with different (typically newer) Go toolchains than an author has tested their software with, so those binaries are essentially untested. This may cause bugs. However, point releases typically contain only stability/security fixes that don't normally cause issues and are desired. The Go 1 compatibility promise means code will typically work as intended with new Go toolchain versions. But an author can always link to a build with a specific Go toolchain version. A user simply has the additional option to download a build by a newer Go toolchain version.

Gobuild should work on all platforms for which you can download a Go toolchain from https://go.dev/dl/, including Linux, macOS, BSDs, and Windows.

Start gobuild by running: You can optionally pass a configuration file. Create an example config file with: Test it with: By default, build results and sumdb files are stored in ./data, $HOME is set to ./home during builds and Go toolchains are installed in ./sdk.

You can configure your own signer key for your transparency log. Create new keys with: Now configure the signer key in the config file. And run "gobuild get" with the -verifierkey flag.

Keep security in mind when offering public access to your gobuild instance. Run gobuild in a locked down environment, with restricted system access (files, network, processes, kernel features), possibly through systemd or with containers. You could make all outgoing network traffic go through an HTTPS proxy by setting an environment variable HTTPS_PROXY=... (refuse all other outgoing connections). The proxy should allow the following addresses:
The Plan 9 File Protocol, 9P, is used for messages between clients and servers. A client transmits requests (T-messages) to a server, which subsequently returns replies (R-messages) to the client. The combined acts of transmitting (receiving) a request of a particular type, and receiving (transmitting) its reply is called a transaction of that type.

Each message consists of a sequence of bytes. Two-, four-, and eight-byte fields hold unsigned integers represented in little-endian order (least significant byte first). Data items of larger or variable lengths are represented by a two-byte field specifying a count, n, followed by n bytes of data. Text strings are represented this way, with the text itself stored as a UTF-8 encoded sequence of Unicode characters (see utf(6)). Text strings in 9P messages are not NUL-terminated: n counts the bytes of UTF-8 data, which include no final zero byte. The NUL character is illegal in all text strings in 9P, and is therefore excluded from file names, user names, and so on.

Each 9P message begins with a four-byte size field specifying the length in bytes of the complete message including the four bytes of the size field itself. The next byte is the message type, one of the constants in the enumeration in the include file <fcall.h>. The next two bytes are an identifying tag, described below. The remaining bytes are parameters of different sizes. In the message descriptions, the number of bytes in a field is given in brackets after the field name. The notation parameter[n], where n is not a constant, represents a variable-length parameter: n[2] followed by n bytes of data forming the parameter. The notation string[s] (using a literal s character) is shorthand for s[2] followed by s bytes of UTF-8 text. (Systems may choose to reduce the set of legal characters to reduce syntactic problems, for example to remove slashes from name components, but the protocol has no such restriction. Plan 9 names may contain any printable character (that is, any character outside hexadecimal 00-1F and 80-9F) except slash.) Messages are transported in byte form to allow for machine independence; fcall(2) describes routines that convert to and from this form into a machine-dependent C structure.

Each T-message has a tag field, chosen and used by the client to identify the message. The reply to the message will have the same tag. Clients must arrange that no two outstanding messages on the same connection have the same tag. An exception is the tag NOTAG, defined as (ushort)~0 in <fcall.h>: the client can use it, when establishing a connection, to override tag matching in version messages.

The type of an R-message will either be one greater than the type of the corresponding T-message or Rerror, indicating that the request failed. In the latter case, the ename field contains a string describing the reason for failure.

The version message identifies the version of the protocol and indicates the maximum message size the system is prepared to handle. It also initializes the connection and aborts all outstanding I/O on the connection. The set of messages between version requests is called a session. See: version(5)

## Version string format

A version must always begin with "9P". If the server does not understand a client's version string, it should respond with an Rversion message (not Rerror) with the string "unknown". If the client string contains one or more period characters, the initial substring up to but not including any single period in the version string defines a version of the protocol. After stripping any such period-separated suffix, the server is allowed to respond with a string of the form 9Pnnnn, where nnnn is less than or equal to the digits sent by the client. The client and server will use the protocol version defined by the server's response for all subsequent communication on the connection. A successful version request initializes the connection. All outstanding I/O on the connection is aborted; all active fids are freed ("clunked") automatically. The set of messages between the version requests is called a session. See: 9pclient(3)
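As a concrete illustration of the framing and version rules above, here is a minimal sketch in Go; the Tversion type code 100 and the NOTAG value come from <fcall.h>, while the msize value and the single supported version are assumptions of the sketch:

    package main

    import (
        "encoding/binary"
        "fmt"
        "strings"
    )

    // putString appends a 9P string[s]: s[2] followed by s bytes of UTF-8.
    func putString(buf []byte, s string) []byte {
        buf = binary.LittleEndian.AppendUint16(buf, uint16(len(s)))
        return append(buf, s...)
    }

    // message frames a complete 9P message: size[4] type[1] tag[2] params,
    // where size counts the entire message including itself.
    func message(typ byte, tag uint16, params []byte) []byte {
        buf := binary.LittleEndian.AppendUint32(nil, uint32(4+1+2+len(params)))
        buf = append(buf, typ)
        buf = binary.LittleEndian.AppendUint16(buf, tag)
        return append(buf, params...)
    }

    // negotiate applies the version rules above: strip any period-separated
    // suffix, then answer with a version the server understands, or "unknown".
    func negotiate(client string) string {
        if i := strings.Index(client, "."); i >= 0 {
            client = client[:i]
        }
        if client != "9P2000" { // this sketch understands only 9P2000
            return "unknown" // sent in an Rversion, not an Rerror
        }
        return client
    }

    func main() {
        const Tversion = 100 // message type constant from <fcall.h>
        const NOTAG = 0xFFFF // (ushort)~0: overrides tag matching

        // Tversion carries msize[4] followed by version[s].
        params := binary.LittleEndian.AppendUint32(nil, 8192)
        params = putString(params, "9P2000")
        fmt.Printf("Tversion: % x\n", message(Tversion, NOTAG, params))

        fmt.Println(negotiate("9P2000.u")) // 9P2000
        fmt.Println(negotiate("9P1776"))   // unknown
    }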
Most T-messages contain a fid, a 32-bit unsigned integer that the client uses to identify a "current file" on the server. Fids are somewhat like file descriptors in a user process, but they are not restricted to files open for I/O: directories being examined, files being accessed by stat(2) calls, and so on - all files being manipulated by the operating system - are identified by fids. Fids are chosen by the client. All requests on a connection share the same fid space; when several clients share a connection, the agent managing the sharing must arrange that no two clients choose the same fid.

The fid supplied in an attach message will be taken by the server to refer to the root of the served file tree. The attach identifies the user to the server and may specify a particular file tree served by the server (for those that supply more than one). Permission to attach to the service is proven by providing a special fid, called afid, in the attach message. This afid is established by exchanging auth messages and subsequently manipulated using read and write messages to exchange authentication information not defined explicitly by 9P. Once the authentication protocol is complete, the afid is presented in the attach to permit the user to access the service.

A walk message causes the server to change the current file associated with a fid to be a file in the directory that is the old current file, or one of its subdirectories. Walk returns a new fid that refers to the resulting file. Usually, a client maintains a fid for the root, and navigates by walks from the root fid.

A client can send multiple T-messages without waiting for the corresponding R-messages, but all outstanding T-messages must specify different tags. The server may delay the response to a request and respond to later ones; this is sometimes necessary, for example when the client reads from a file that the server synthesizes from external events such as keyboard characters.

Replies (R-messages) to auth, attach, walk, open, and create requests convey a qid field back to the client. The qid represents the server's unique identification for the file being accessed: two files on the same server hierarchy are the same if and only if their qids are the same. (The client may have multiple fids pointing to a single file on a server and hence having a single qid.) The thirteen-byte qid fields hold a one-byte type, specifying whether the file is a directory, append-only file, etc., and two unsigned integers: first the four-byte qid version, then the eight-byte qid path. The path is an integer unique among all files in the hierarchy. If a file is deleted and recreated with the same name in the same directory, the old and new path components of the qids should be different. The version is a version number for a file; typically, it is incremented every time the file is modified.
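The thirteen-byte qid described above can be packed and unpacked mechanically. The following sketch assumes the wire order type, version, path with the little-endian integer convention given earlier:

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // Qid mirrors the thirteen-byte qid: a one-byte type, a four-byte
    // version, and an eight-byte path unique among all files in the hierarchy.
    type Qid struct {
        Type    uint8
        Version uint32
        Path    uint64
    }

    // encode packs the qid into its thirteen-byte wire form.
    func (q Qid) encode() []byte {
        buf := []byte{q.Type}
        buf = binary.LittleEndian.AppendUint32(buf, q.Version)
        return binary.LittleEndian.AppendUint64(buf, q.Path)
    }

    // decode unpacks a thirteen-byte wire qid.
    func decode(b []byte) Qid {
        return Qid{
            Type:    b[0],
            Version: binary.LittleEndian.Uint32(b[1:5]),
            Path:    binary.LittleEndian.Uint64(b[5:13]),
        }
    }

    func main() {
        q := Qid{Type: 0x80, Version: 1, Path: 42} // 0x80: directory type bit
        fmt.Printf("% x\n", q.encode())
        fmt.Println(decode(q.encode()) == q) // true
    }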
An existing file can be opened, or a new file may be created in the current (directory) file. I/O of a given number of bytes at a given offset on an open file is done by read and write. A client should clunk any fid that is no longer needed. The remove transaction deletes files.

The stat transaction retrieves information about the file. The stat field in the reply includes the file's name, access permissions (read, write and execute for owner, group and public), access and modification times, and owner and group identifications (see stat(2)). The owner and group identifications are textual names. The wstat transaction allows some of a file's properties to be changed.

A request can be aborted with a flush request. When a server receives a Tflush, it should not reply to the message with tag oldtag (unless it has already replied), and it should immediately send an Rflush. The client must wait until it gets the Rflush (even if the reply to the original message arrives in the interim), at which point oldtag may be reused.

Because the message size is negotiable and some elements of the protocol are variable length, it is possible (although unlikely) to have a situation where a valid message is too large to fit within the negotiated size. For example, a very long file name may cause an Rstat of the file or Rread of its directory entry to be too large to send. In most such cases, the server should generate an error rather than modify the data to fit, such as by truncating the file name. The exception is that a long error string in an Rerror message should be truncated if necessary, since the string is only advisory and in some sense arbitrary.

Most programs do not see the 9P protocol directly; instead calls to library routines that access files are translated by the mount driver, mnt(3), into 9P messages.

Directories are created by create with DMDIR set in the permissions argument (see stat(5)). The members of a directory can be found with read(5). All directories must support walks to the directory ".." (dot-dot) meaning parent directory, although by convention directories contain no explicit entry for ".." or "." (dot). The parent of the root directory of a server's tree is itself.

Each file server maintains a set of user and group names. Each user can be a member of any number of groups. Each group has a group leader who has special privileges. See: stat(5) and users(6). Every file request has an implicit user id (copied from the original attach) and an implicit set of groups (every group of which the user is a member).
## File ownership

Each file has an associated owner and group ID and three sets of permissions: those of the owner, those of the group, and those of other users. When the owner attempts to do something to a file, the owner, group, and other permissions are consulted, and if any of them grant the requested permission, the operation is allowed. For someone who is not the owner, but is a member of the file's group, the group and other permissions are consulted. For everyone else, the other permissions are used. Each set of permissions says whether reading is allowed, whether writing is allowed, and whether executing is allowed. A walk in a directory is regarded as executing the directory, not reading it. Permissions are kept in the low-order bits of the file mode: owner read/write/execute permission represented as 1 in bits 8, 7, and 6 respectively (using 0 to number the low order). The group permissions are in bits 5, 4, and 3, and the other permissions are in bits 2, 1, and 0.

## File modes

The file mode contains some additional attributes besides the permissions. If bit 31 (DMDIR) is set, the file is a directory; if bit 30 (DMAPPEND) is set, the file is append-only (offset is ignored in writes); if bit 29 (DMEXCL) is set, the file is exclusive-use (only one client may have it open at a time); if bit 27 (DMAUTH) is set, the file is an authentication file established by auth messages; if bit 26 (DMTMP) is set, the contents of the file (or directory) are not included in nightly archives. (Bit 28 is skipped for historical reasons.) These bits are reproduced, from the top bit down, in the type byte of the Qid: the name QTFILE, defined to be zero, identifies the value of the type for a plain file.
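The ownership rules and bit layouts above translate directly into a permission check. The following sketch implements them; the constant names mirror those in the text, and the example modes are illustrative:

    package main

    import "fmt"

    // Mode bits from the text; bit 28 is skipped for historical reasons.
    const (
        DMDIR    = 1 << 31 // directory
        DMAPPEND = 1 << 30 // append-only
        DMEXCL   = 1 << 29 // exclusive-use
        DMAUTH   = 1 << 27 // authentication file
        DMTMP    = 1 << 26 // not included in nightly archives
    )

    // Permission bits for "other"; shifted left by 3 for group, 6 for owner.
    const (
        AEXEC  = 1 // execute (a walk counts as executing a directory)
        AWRITE = 2
        AREAD  = 4
    )

    // allowed reports whether the requested permission is granted: the owner
    // consults the owner, group and other sets; a group member consults the
    // group and other sets; everyone else consults only the other set.
    func allowed(mode uint32, isOwner, inGroup bool, want uint32) bool {
        perm := mode & 7 // other
        if inGroup || isOwner {
            perm |= (mode >> 3) & 7 // group
        }
        if isOwner {
            perm |= (mode >> 6) & 7 // owner
        }
        return perm&want == want
    }

    func main() {
        mode := uint32(0o755) // rwxr-xr-x, plain file
        fmt.Println(allowed(mode, false, true, AREAD|AEXEC)) // true
        fmt.Println(allowed(mode, false, false, AWRITE))     // false
        fmt.Println(mode&DMDIR != 0)                         // false: not a directory
    }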
Package dcxl implements draft-coretta-x660-ldap-08, an Internet Draft of which I am the author. The Internet Draft (henceforth referred to as "the ID") can be viewed here: https://datatracker.ietf.org/doc/html/draft-coretta-x660-ldap. It is currently EXPIRED, pending review. There may or may not be an update of this ID in the near future. A text copy of the ID is also included in the package directory.

No past or present version of the ID is meant to be used in any production environment and -- because it is not yet approved for an RFC track (and may never be) -- it should be considered PURELY EXPERIMENTAL and totally devoid of any official standing or acceptance. That said, this specification COULD very well allow some nifty functionality that would be beneficial to any individual, entity or institution that deals with or manages ASN.1 object identifiers regularly, and requires a central (non-ORS) means of OID resolution / interrogation. This package exists to provide abstract access to such capabilities through the Go language.

This package is under heavy, active development. In addition to the above advisory, adopters should expect some unannounced volatility with regards to the state of the package.

This package may include subdirectories, each containing a versioned reference implementation that corresponds to the terms and precepts of the same document version. One will exist for each new release of the document if and when one is released. Previous versions will remain intact for backwards compatibility, however pull requests will not be accepted. The package contents in the top-most package directory represent the latest release, and will be identical to the highest version-numbered folder. Pull requests are always welcome for the latest release.

The ID describes a possible means for storing the OID spectrum (in whole or in part) within an X.500 directory. It is effectively an "LDAP alternative" to an ORS implementation, which would typically be DNS-based. To that end, the ID introduces myriad attribute types and a small handful of objectClass definitions -- each facilitating the storage of information pertaining to certain abstract types defined within ITU-T X-Series X.660, et al.

This package, although designed to work well in LDAP-related operations, DOES NOT import the go-ldap/ldap package. Users interacting with an X.500 Directory System Agent (DSA, a.k.a.: an "LDAP server") within the context of the ID are expected to craft their own Directory User Agent (DUA, a.k.a.: an "LDAP client").

This package offers the following:

• Total compliance with the terms of the ID -- all eighty-eight (88) attributeTypes, four (4) objectClasses and two (2) dimensional abstracts defined in the ID are available via this package through any instances of the Registration and Registrant types

• Plentiful documentation with useful examples

• Complete interface support, negating the need for manual type assertion involving any Registration or Registrant types during composition and interrogation procedures

• Extensible design; user-authored "Get" and "Set" closure functions are supported (but not required) at virtually every point, allowing limitless control over the contents and presentation of relevant objects/values

• Seamless compatibility with *ldap.NewEntry using the map-populating Unmarshal method, which is extended through any Registration or Registrant type instance

• Portable configuration type (per s. 
2.2.4 of the ID), allowing critical information and configuration settings to be kept within individual Registration/Registrant instances for efficient and reliable operation

For those dealing with ASN.1 Notation, Dot Notation, NumberForm and NameAndNumberForm values, OR for situations requiring uint128 NumberForm support, consider my objectid package (https://pkg.go.dev/github.com/JesseCoretta/go-objectid) for inclusion in any custom Set/Get functions in use. It can offer a potent advantage in scenarios relevant to this very package, and it extends several very useful methods that greatly reduce the tedium (and error-prone nature) of this subject matter.

For those interested in the actual schema definitions relating to this ID, see https://github.com/JesseCoretta/draft-coretta-x660-ldap. The schemata are available for download in one (1) of three (3) distinct schema formats: OpenLDAP, ApacheDS and Netscape.