Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers.

The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Note: the current master includes changes to the API since version 0.2.0 that will break your existing code a little. One change turned fields that were pointers to a single optional struct into the more correct slice of structs, and pluralized the field names. For example, `IDPSSODescriptor *IDPSSODescriptor` has become `IDPSSODescriptors []IDPSSODescriptor`. This more accurately reflects the standard. The struct `Metadata` has been renamed to `EntityDescriptor`. In 0.2.0 and before, every struct derived from the standard has the same name as in the standard, *except* for `Metadata`, which should always have been called `EntityDescriptor`. In various places `url.URL` is now used where `string` was used in version 0.1.0 and earlier. In various places where keys and certificates were modeled as `string` in version 0.1.0 and earlier (what was I thinking?!) they are now modeled as `*rsa.PrivateKey`, `*x509.Certificate`, or `crypto.PrivateKey` as appropriate.

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import "net/http"
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this:

We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [testshib.org](https://www.testshib.org/), an identity provider designed for testing.

```golang
package main

import (
    "net/http"

    "github.com/crewjam/saml/samlsp"
)
```

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [testshib.org](https://www.testshib.org/), you can do something like:

Navigate to https://www.testshib.org/register.html and upload the file you fetched.

Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
1. The middleware redirects you to `https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO`
1. testshib.org prompts you for a username and password.
1. testshib.org returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled.
1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `examples/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org).

This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package supports signed and encrypted SAML assertions. It does not support signed or encrypted requests.

The *RelayState* parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, *RelayState* is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[TestShib](https://www.testshib.org/) is a testing ground for SAML service and identity providers.

Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `8EA205C01C425FF195A5E9A43FA0768F26FD2554`](https://keybase.io/crewjam)).
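To tie the steps above together, here is a minimal sketch of the protected service. The `samlsp.Options` fields shifted between releases, so treat the field names, file names, and metadata URL as illustrative rather than exact:

```golang
package main

import (
    "crypto/rsa"
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "net/http"
    "net/url"

    "github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "Hello!")
}

func main() {
    // Load the self-signed key pair generated earlier.
    keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
    if err != nil {
        panic(err)
    }
    keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
    if err != nil {
        panic(err)
    }

    idpMetadataURL, _ := url.Parse("https://www.testshib.org/metadata/testshib-providers.xml")
    rootURL, _ := url.Parse("http://localhost:8000")

    samlSP, err := samlsp.New(samlsp.Options{
        URL:            *rootURL,
        Key:            keyPair.PrivateKey.(*rsa.PrivateKey),
        Certificate:    keyPair.Leaf,
        IDPMetadataURL: idpMetadataURL,
    })
    if err != nil {
        panic(err)
    }

    // RequireAccount redirects unauthenticated users through the IDP;
    // the handler mounted at /saml/ serves metadata and the assertion
    // consumer service.
    http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
    http.Handle("/saml/", samlSP)
    http.ListenAndServe(":8000", nil)
}
```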
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers.

The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session), and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/watercraft/saml/issues/231) for details.

The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented.

The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide a custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be in the future. In particular we have deprecated the following fields in _samlsp.Options_:

- `Logger` - This was used to emit errors while validating, which is an anti-pattern.
- `IDPMetadataURL` - Instead use `FetchMetadata()`.
- `HTTPClient` - Instead pass an httpClient to `FetchMetadata()`.
- `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider.
- `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider.
- `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider.

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import (
    "net/http"
)
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this:

We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.

```golang
package main

import (
    "net/http"

    "github.com/watercraft/saml/samlsp"
)
```

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like:

Navigate to https://samltest.id/upload.php and upload the file you fetched.

Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO`
1. samltest.id prompts you for a username and password.
1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled.
1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org).

This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests.

The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers.

Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/watercraft)).
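To show the recommended replacement for _IDPMetadataURL_, here is a minimal sketch that fetches the IDP metadata at startup via `FetchMetadata()` and passes it in _samlsp.Options.IDPMetadata_. The key pair file names, attribute name, and metadata URL are illustrative:

```golang
package main

import (
    "context"
    "crypto/rsa"
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "net/http"
    "net/url"

    "github.com/watercraft/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "displayName"))
}

func main() {
    keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
    if err != nil {
        panic(err)
    }
    keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
    if err != nil {
        panic(err)
    }

    idpMetadataURL, err := url.Parse("https://samltest.id/saml/idp")
    if err != nil {
        panic(err)
    }
    // Fetch the IDP metadata once at startup, as recommended above.
    idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
    if err != nil {
        panic(err)
    }

    rootURL, err := url.Parse("http://localhost:8000")
    if err != nil {
        panic(err)
    }
    samlSP, err := samlsp.New(samlsp.Options{
        URL:         *rootURL,
        Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
        Certificate: keyPair.Leaf,
        IDPMetadata: idpMetadata,
    })
    if err != nil {
        panic(err)
    }
    http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
    http.Handle("/saml/", samlSP)
    http.ListenAndServe(":8000", nil)
}
```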
Package testify is a set of packages that provide many tools for testifying that your code will behave as you intend.

testify contains the following packages:

The assert package provides a comprehensive set of assertion functions that tie in to the Go testing system.

The http package contains tools to make it easier to test http activity using the Go testing system.

The mock package provides a system by which it is possible to mock your objects and verify calls are happening as expected.

The suite package provides a basic structure for using structs as testing suites, and methods on those structs as tests. It includes setup/teardown functionality in the way of interfaces.
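For example, a minimal test using the assert package looks like this:

    package yours

    import (
        "testing"

        "github.com/stretchr/testify/assert"
    )

    func TestSomething(t *testing.T) {
        // Assert equality; the message is shown on failure.
        assert.Equal(t, 123, 123, "they should be equal")

        // Assert that a value is nil.
        var object interface{}
        assert.Nil(t, object)
    }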
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers.

The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session), and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details.

The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented.

The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide a custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be in the future. In particular we have deprecated the following fields in _samlsp.Options_:

- `Logger` - This was used to emit errors while validating, which is an anti-pattern.
- `IDPMetadataURL` - Instead use `FetchMetadata()`.
- `HTTPClient` - Instead pass an httpClient to `FetchMetadata()`.
- `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider.
- `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider.
- `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider.

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import (
    "net/http"
)
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this:

We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.

```golang
package main

import (
    "net/http"

    "github.com/crewjam/saml/samlsp"
)
```

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like:

Navigate to https://samltest.id/upload.php and upload the file you fetched.

Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO`
1. samltest.id prompts you for a username and password.
1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled.
1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org).

This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests.

The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers.

Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
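To see how the new extension points fit together, here is a sketch that replaces the default _OnError_ behavior after constructing the middleware; the helper name and logging policy shown are illustrative, not part of the package:

```golang
package main

import (
    "log"
    "net/http"

    "github.com/crewjam/saml/samlsp"
)

// newSAMLMiddleware is a hypothetical helper: it builds the middleware and
// swaps in a custom error reporter that logs details server-side while
// returning only a bare 403 to the client.
func newSAMLMiddleware(opts samlsp.Options) (*samlsp.Middleware, error) {
    m, err := samlsp.New(opts)
    if err != nil {
        return nil, err
    }
    m.OnError = func(w http.ResponseWriter, r *http.Request, err error) {
        log.Printf("saml: %s %s: %v", r.Method, r.URL.Path, err)
        http.Error(w, http.StatusText(http.StatusForbidden), http.StatusForbidden)
    }
    return m, nil
}
```

A custom _RequestTracker_ or _Session_ can be assigned to the corresponding _samlsp.Middleware_ fields in the same way.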
Package hcashrpcclient implements a websocket-enabled Hypercash JSON-RPC client.

This client provides a robust and easy to use client for interfacing with a Hypercash RPC server that uses a mostly btcd/bitcoin core style Hypercash JSON-RPC API. This client has been tested with hcashd (https://github.com/HcashOrg/hcashd) and hcashwallet (https://github.com/HcashOrg/hcashwallet).

In addition to the compatible standard HTTP POST JSON-RPC API, hcashd and hcashwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets.

By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to hcashd or hcashwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers.

In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications.

In contrast, the websocket-based JSON-RPC interface provided by hcashd and hcashwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events.

The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns.

The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency.

The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open!

All notifications provided by hcashd require registration to opt-in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function.

Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled. In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner.

By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete.

The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client.

Minor RPC Server Differences and Chain/Wallet Separation

Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by hcashd (and hcashwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation.

Also, it is important to realize that hcashd intentionally separates the wallet functionality into a separate process named hcashwallet. This means if you are connected to the hcashd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, hcashwallet provides pass through treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs.

There are 3 categories of errors that will be returned throughout this package:

The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described.

The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it.

The third category of errors, that is errors returned by the server, can be detected by type asserting the error to a *hcashjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server:

The following full-blown client examples are in the examples directory:
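Beyond the full examples, here is a minimal sketch of the synchronous (blocking) API described above; the import path, certificate location, host, and credentials are assumptions, so adjust them for your setup:

    package main

    import (
        "io/ioutil"
        "log"

        "github.com/HcashOrg/hcashrpcclient" // assumed import path
    )

    func main() {
        // Websocket mode with TLS is the default, so the server's
        // certificate must be provided. The path is illustrative.
        certs, err := ioutil.ReadFile("rpc.cert")
        if err != nil {
            log.Fatal(err)
        }
        connCfg := &hcashrpcclient.ConnConfig{
            Host:         "localhost:8334", // adjust for your hcashd
            Endpoint:     "ws",
            User:         "rpcuser",
            Pass:         "rpcpass",
            Certificates: certs,
        }
        // Passing nil for the notification handlers since none are used.
        client, err := hcashrpcclient.New(connCfg, nil)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Shutdown()

        // Synchronous (blocking) call: returns once the response arrives.
        blockCount, err := client.GetBlockCount()
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("block count: %d", blockCount)
    }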
Package zcashrpcclient implements a websocket-enabled Bitcoin JSON-RPC client.

This client provides a robust and easy to use client for interfacing with a Bitcoin RPC server that uses a btcd/bitcoin core compatible Bitcoin JSON-RPC API. This client has been tested with btcd (https://github.com/btcsuite/btcd), btcwallet (https://github.com/btcsuite/btcwallet), and bitcoin core (https://github.com/bitcoin/bitcoin).

In addition to the compatible standard HTTP POST JSON-RPC API, btcd and btcwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets.

By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to btcd or btcwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers.

In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications.

In contrast, the websocket-based JSON-RPC interface provided by btcd and btcwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events.

The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns.

The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency.

The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open!

All notifications provided by btcd require registration to opt-in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function.

Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled. In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner.

By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete.

The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client.

Minor RPC Server Differences and Chain/Wallet Separation

Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by btcd (and btcwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation.

Also, it is important to realize that btcd intentionally separates the wallet functionality into a separate process named btcwallet. This means if you are connected to the btcd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, btcwallet provides pass through treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs.

There are 3 categories of errors that will be returned throughout this package:

The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described.

The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it.

The third category of errors, that is errors returned by the server, can be detected by type asserting the error to a *btcjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server:

The following full-blown client examples are in the examples directory:
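Beyond those examples, here is a sketch of the futures-based asynchronous API described above, issuing two RPCs concurrently and then collecting both results; the import path is an assumption:

    package main

    import (
        "log"

        "github.com/zcash/zcashrpcclient" // assumed import path
    )

    // queryConcurrently issues two RPCs via the async API and collects
    // both results afterwards.
    func queryConcurrently(client *zcashrpcclient.Client) {
        // Issue both requests without waiting; each Async call returns
        // a future immediately.
        blockCountFuture := client.GetBlockCountAsync()
        difficultyFuture := client.GetDifficultyAsync()

        // Receive blocks until the corresponding response arrives, or
        // returns immediately if it is already in.
        blockCount, err := blockCountFuture.Receive()
        if err != nil {
            log.Fatal(err)
        }
        difficulty, err := difficultyFuture.Receive()
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("block count: %d, difficulty: %f", blockCount, difficulty)
    }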
Package anaconda provides structs and functions for accessing version 1.1 of the Twitter API.

Successful API queries return native Go structs that can be used immediately, with no need for type assertions.

If you already have the access token (and secret) for your user (Twitter provides this for your own account on the developer portal), creating the client is simple:

Executing queries on an authenticated TwitterApi struct is simple. Certain endpoints allow separate optional parameters; if desired, these can be passed as the final parameter.

Anaconda implements most of the endpoints defined in the Twitter API documentation: https://dev.twitter.com/docs/api/1.1. For clarity, in most cases, the function name is simply the name of the HTTP method and the endpoint (e.g., the endpoint `GET /friendships/incoming` is provided by the function `GetFriendshipsIncoming`). In a few cases, a shortened form has been chosen to make life easier (for example, retweeting is simply the function `Retweet`).

More detailed information about the behavior of each particular endpoint can be found at the official Twitter API documentation.
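As a sketch of client creation and a query with optional parameters (the credentials are placeholders):

    package main

    import (
        "fmt"
        "net/url"

        "github.com/ChimeraCoder/anaconda"
    )

    func main() {
        anaconda.SetConsumerKey("your-consumer-key")
        anaconda.SetConsumerSecret("your-consumer-secret")
        api := anaconda.NewTwitterApi("your-access-token", "your-access-token-secret")

        // Optional parameters are passed as url.Values in the final
        // argument.
        v := url.Values{}
        v.Set("count", "30")
        searchResult, err := api.GetSearch("golang", v)
        if err != nil {
            panic(err)
        }
        for _, tweet := range searchResult.Statuses {
            fmt.Println(tweet.Text)
        }
    }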
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers.

The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this:

We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [testshib.org](https://www.testshib.org/), an identity provider designed for testing.

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [testshib.org](https://www.testshib.org/), you can do something like:

Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
2. The middleware redirects you to `https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO`
3. testshib.org prompts you for a username and password.
4. testshib.org returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled.
5. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
6. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `examples/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org).

This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package supports signed and encrypted SAML assertions. It does not support signed or encrypted requests.

The *RelayState* parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, *RelayState* is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[TestShib](http://www.testshib.org/) is a testing ground for SAML service and identity providers.
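The key-pair generation mentioned above can also be done from Go's standard library rather than via openssl; a sketch, with illustrative file names and subject:

```golang
package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "os"
    "time"
)

// Generates a self-signed X.509 key pair for the service provider,
// roughly what the usual `openssl req -x509` invocation produces.
func main() {
    key, err := rsa.GenerateKey(rand.Reader, 2048)
    if err != nil {
        panic(err)
    }
    tmpl := x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "myservice.example.com"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    }
    // Template doubles as its own parent, making the cert self-signed.
    der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    if err != nil {
        panic(err)
    }
    certOut, err := os.Create("myservice.cert")
    if err != nil {
        panic(err)
    }
    pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    certOut.Close()

    keyOut, err := os.Create("myservice.key")
    if err != nil {
        panic(err)
    }
    pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    keyOut.Close()
}
```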
Package hcrpcclient implements a websocket-enabled Hcd JSON-RPC client.

This client provides a robust and easy to use client for interfacing with a Hcd RPC server that uses a mostly btcd/bitcoin core style Hcd JSON-RPC API. This client has been tested with hcd (https://github.com/HcashOrg/hcd) and hcwallet (https://github.com/HcashOrg/hcwallet).

In addition to the compatible standard HTTP POST JSON-RPC API, hcd and hcwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets.

By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to hcd or hcwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers.

In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications.

In contrast, the websocket-based JSON-RPC interface provided by hcd and hcwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events.

The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns.

The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency.

The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open!

All notifications provided by hcd require registration to opt-in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function.

Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled. In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner.

By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete.

The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client.

Minor RPC Server Differences and Chain/Wallet Separation

Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by hcd (and hcwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation.

Also, it is important to realize that hcd intentionally separates the wallet functionality into a separate process named hcwallet. This means if you are connected to the hcd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, hcwallet provides pass through treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs.

There are 3 categories of errors that will be returned throughout this package:

The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described.

The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it.

The third category of errors, that is errors returned by the server, can be detected by type asserting the error to a *hcjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server:

The following full-blown client examples are in the examples directory:
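Beyond those, here is a sketch of detecting the third error category in practice; the import paths and the ErrRPCMethodNotFound constant follow the btcd/btcjson convention and are assumptions:

    package main

    import (
        "log"

        "github.com/HcashOrg/hcd/hcjson"    // assumed import path
        "github.com/HcashOrg/hcrpcclient"   // assumed import path
    )

    // checkDebugLevel calls the hcd-specific DebugLevel extension and
    // shows how to detect that the remote server does not implement it.
    func checkDebugLevel(client *hcrpcclient.Client) {
        _, err := client.DebugLevel("show")
        if err != nil {
            if jerr, ok := err.(*hcjson.RPCError); ok && jerr.Code == hcjson.ErrRPCMethodNotFound.Code {
                log.Println("DebugLevel is not implemented by this server")
                return
            }
            log.Printf("DebugLevel failed: %v", err)
        }
    }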
Package abide is a testing utility for http response snapshots inspired by Facebook's Jest. It is designed to be used alongside Go's existing testing package and enable broader coverage of http APIs. When included in version control it can provide a historical log of API and application changes over time.

A snapshot is essentially a lockfile representing an http response. In addition to testing `http.Response`, abide provides methods for testing `io.Reader` and any object that implements `Assertable`. Snapshots are saved in a directory named __snapshots__ at the root of the package. These files are intended to be saved and included in version control.

Snapshots are automatically generated during the initial test run. For example, this will create a snapshot identified by "example" for this http.Response. In subsequent test runs the existing snapshot is compared to the new results. In the event they do not match, the test will fail, and the diff will be printed. If the change was intentional, the snapshot can be updated.
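A sketch of the flow described above (the handler under test is illustrative):

    package yours

    import (
        "net/http"
        "net/http/httptest"
        "os"
        "testing"

        "github.com/beme/abide"
    )

    func myHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        w.Write([]byte(`{"hello":"world"}`))
    }

    func TestMain(m *testing.M) {
        exit := m.Run()
        // Clean up stale snapshots after the run.
        abide.Cleanup()
        os.Exit(exit)
    }

    func TestHandler(t *testing.T) {
        req := httptest.NewRequest(http.MethodGet, "/", nil)
        w := httptest.NewRecorder()
        myHandler(w, req)

        // The first run writes __snapshots__/example.snapshot; later
        // runs diff against it and fail on changes.
        abide.AssertHTTPResponse(t, "example", w.Result())
    }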
Package rpcclient implements a websocket-enabled MultiCash JSON-RPC client.

This client provides a robust and easy to use client for interfacing with a MultiCash RPC server that uses a mostly btcd/bitcoin core style MultiCash JSON-RPC API. This client has been tested with mcxd (https://github.com/multicash/mcxd) and mcxwallet (https://github.com/multicash/mcxwallet).

In addition to the compatible standard HTTP POST JSON-RPC API, mcxd and mcxwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets.

By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to mcxd or mcxwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers.

In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications.

In contrast, the websocket-based JSON-RPC interface provided by mcxd and mcxwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events.

The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns.

The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency.

The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open!

All notifications provided by mcxd require registration to opt-in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function.

Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled. In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner.

By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete.

The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client.

This package only provides methods for mcxd RPCs. Using the websocket connection and request-response mapping provided by rpcclient with arbitrary methods or different servers is possible through the generic RawRequest and RawRequestAsync methods (each of which deals with json.RawMessage for parameters and return results).

Previous versions of this package provided methods for mcxwallet's JSON-RPC server in addition to mcxd. These were removed in major version 6 of this module. Projects depending on these calls are advised to use the multicash.org/mcxwallet/rpc/client/mcxwallet package, which is able to wrap rpcclient.Client using the aforementioned RawRequest method:

Using struct embedding, it is possible to create a single variable with the combined method sets of both rpcclient.Client and mcxwallet.Client:

This technique is valuable as mcxwallet (syncing in RPC mode) will pass through any unknown RPCs to the backing mcxd server, proxying requests and responses for the client.

There are 3 categories of errors that will be returned throughout this package:

The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described.

The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it.

The third category of errors, that is errors returned by the server, can be detected by type asserting the error to a *mcxjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server:

The following full-blown client examples are in the examples directory:
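Beyond those, here is a sketch of the RawRequest escape hatch described above; the import path and the chosen method are assumptions:

    package main

    import (
        "encoding/json"

        "github.com/multicash/mcxd/rpcclient" // assumed import path
    )

    // validateAddress calls a method the client does not wrap directly,
    // using the generic RawRequest mechanism. Each positional parameter
    // must be marshalled to a json.RawMessage.
    func validateAddress(client *rpcclient.Client, addr string) (json.RawMessage, error) {
        param, err := json.Marshal(addr)
        if err != nil {
            return nil, err
        }
        return client.RawRequest("validateaddress", []json.RawMessage{param})
    }

The same pattern is what the mcxwallet wrapper package uses to proxy wallet RPCs over an existing rpcclient.Client connection.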
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/erikcai/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/erikcai/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout. More examples can be found in the SDK Github repo: https://github.com/erikcai/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, so any changes made to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/erikcai/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/erikcai/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic json objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies that interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic json request handling can be found here: https://github.com/erikcai/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/erikcai/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses.
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The following are possible values for the "OCI_GO_SDK_DEBUG" variable: 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above, then the default logging level is "info". If the environment variable is not present, then no logging messages are emitted. The default destination for logging is stderr; if you want to output logs to a file, you can set the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are: 1. "file" or "f" saves all logging output to a file 2. "combine" or "c" sends all logging output to both stderr and a file. You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there is a network issue or other transient failure. This can be accomplished by using the RequestMetadata.RetryPolicy. You can find the examples here: https://github.com/erikcai/oci-go-sdk/blob/master/example/example_retry_test.go The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, then you can set this up in the following ways: 1. Configuring the environment variables as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. In order to modify the underlying Transport struct in HttpClient, you can do something similar to the following (sample code for the audit service client): Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic json response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic json response. Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/erikcai/oci-go-sdk Licensing information is available at: https://github.com/erikcai/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/erikcai/oci-go-sdk/releases.atom For help, please refer to this link: https://github.com/erikcai/oci-go-sdk#help
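The following is a sketch of the proxy Transport customization described above, using the audit service client as an example. It assumes the client exposes its HTTP dispatcher via the HTTPClient field as in the upstream OCI SDK; the import paths and proxy address are placeholders.

```golang
package main

import (
	"net/http"
	"net/url"

	// Import paths follow the repository referenced above; adjust as needed.
	"github.com/erikcai/oci-go-sdk/audit"
	"github.com/erikcai/oci-go-sdk/common"
)

func main() {
	client, err := audit.NewAuditClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		panic(err)
	}

	// Replace the client's HTTP dispatcher with one whose Transport routes
	// every outgoing request through a proxy. The proxy URL is hypothetical.
	client.HTTPClient = &http.Client{
		Transport: &http.Transport{
			Proxy: func(r *http.Request) (*url.URL, error) {
				return url.Parse("http://localhost:8888")
			},
		},
	}
}
```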
Package testify is a set of packages that provide many tools for testifying that your code will behave as you intend. testify contains the following packages: The assert package provides a comprehensive set of assertion functions that tie in to the Go testing system. The http package contains tools to make it easier to test http activity using the Go testing system. The mock package provides a system by which it is possible to mock your objects and verify calls are happening as expected. The suite package provides a basic structure for using structs as testing suites, and methods on those structs as tests. It includes setup/teardown functionality in the way of interfaces.
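A minimal sketch of the assert package in use; the assertion names below are from testify's assert package, and the test is illustrative:

```golang
package example_test

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestSomething(t *testing.T) {
	// Each assertion takes *testing.T first; on failure it reports a
	// descriptive error but lets the test function continue running.
	assert.Equal(t, 4, 2+2, "they should be equal")
	assert.NotNil(t, t)
	assert.Contains(t, []string{"alpha", "beta"}, "beta")
}
```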
Package gotest provides rich assertions for use within and beyond Go's `testing` package. Grab the package and import it: Code normal `testing`-style tests, but use `gotest.Assert` and assertions found in `should` like this: Assertions are just functions that accept interfaces and return a non-empty string if there's an error. Look at `gotest.should.Assertion` for the details. Look at `should/doc.go` and `should/assertion.go` for more details. Gotest adds some command-line options to your tests. You can see them via `go test . -args -help`. Note that gotest has its own verbosity flag which controls different aspects than `-test.v`. Gotest plays well with "vernacular" Go `testing` tests, providing a rich set of assertions to test HTTP Responses, JSON and JSON:API data, general equality, numeric comparison, collections, strings, panics, types, and time. Most of these rich assertions are provided by SmartyStreets' excellent assertion library (https://github.com/smartystreets/assertions), which builds off Aaron Jacobs' Oglematchers (https://github.com/jacobsa/oglematchers). In addition, any SmartyStreets-style assertion can be used as is (see https://github.com/smartystreets/goconvey/wiki/Custom-Assertions). We like Go's stdlib `testing` because it's simple, fast, familiar to most Go coders, and has good tooling, benchmark, and coverage support. In earlier versions of Go, we missed subtests for test organization and a more BDD approach. We looked at Ginkgo (`github.com/onsi/ginkgo`) and GoConvey (`github.com/smartystreets/goconvey/convey`)--both with benefits--and chose GoConvey because of the simple and consistent approach it took to writing custom assertions. See the documentation for "gotest/should" for more details on that. It also had a pretty test runner. But around the release of Go 1.7 we ran into some problems: the growth of parameterized (table-driven) tests in our code didn't play well with GoConvey, and GoConvey's approach to building test suites made it hard to focus specific subtests. We also ran into an opportunity: `testing` now supported subtests. See https://godoc.org/testing#hdr-Subtests_and_Sub_benchmarks. We were able to drop a lot of fancy suite construction code and gain easier focusing on particular tests. But we had come to appreciate GoConvey's excellent assertion pattern. We considered moving to `github.com/stretchr/testify` and used its pattern for the custom assertion wrapper (see `Assert` below). It also had a simple custom-assertion pattern, but GoConvey's seemed simpler and more useful. We took a sideways glance at GUnit (`github.com/smartystreets/gunit`) and Labix's GoCheck (`gopkg.in/check.v1`). Very cool packages, but we wanted to stay closer to `testing` with its new versatility.
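Since assertions are just functions that return a non-empty string on failure, writing a custom one is straightforward. Below is an illustrative sketch in that style; the `gotest.Assert` call shown in the comment reflects the usage described above, but its exact signature is an assumption.

```golang
package example

import "fmt"

// ShouldBeEven is a custom assertion in the style described above: it
// accepts interfaces and returns "" on success or a failure message.
func ShouldBeEven(actual interface{}, expected ...interface{}) string {
	n, ok := actual.(int)
	if !ok {
		return fmt.Sprintf("expected an int, got %T", actual)
	}
	if n%2 != 0 {
		return fmt.Sprintf("expected %d to be even", n)
	}
	return "" // an empty string means the assertion passed
}

// Hypothetical usage inside a test:
//
//	func TestNumbers(t *testing.T) {
//		gotest.Assert(t, 42, ShouldBeEven)
//	}
```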
Package anaconda provides structs and functions for accessing version 1.1 of the Twitter API. Successful API queries return native Go structs that can be used immediately, with no need for type assertions. If you already have the access token (and secret) for your user (Twitter provides this for your own account on the developer portal), creating the client is simple: Executing queries on an authenticated TwitterApi struct is simple. Certain endpoints accept optional parameters; if desired, these can be passed as the final argument. Anaconda implements most of the endpoints defined in the Twitter API documentation: https://dev.twitter.com/docs/api/1.1. For clarity, in most cases, the function name is simply the name of the HTTP method and the endpoint (e.g., the endpoint `GET /friendships/incoming` is provided by the function `GetFriendshipsIncoming`). In a few cases, a shortened form has been chosen to make life easier (for example, retweeting is simply the function `Retweet`). More detailed information about the behavior of each particular endpoint can be found at the official Twitter API documentation.
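A minimal sketch of the flow described above, assuming the canonical github.com/ChimeraCoder/anaconda import path; the credentials are placeholders:

```golang
package main

import (
	"fmt"
	"net/url"

	"github.com/ChimeraCoder/anaconda"
)

func main() {
	// Credentials are placeholders; use the values from the developer portal.
	anaconda.SetConsumerKey("your-consumer-key")
	anaconda.SetConsumerSecret("your-consumer-secret")
	api := anaconda.NewTwitterApi("your-access-token", "your-access-token-secret")

	// Optional parameters are passed as url.Values in the final argument.
	v := url.Values{}
	v.Set("count", "10")
	results, err := api.GetSearch("golang", v)
	if err != nil {
		panic(err)
	}
	for _, tweet := range results.Statuses {
		fmt.Println(tweet.Text)
	}
}
```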
Package container is a set of common data structures. container contains the following packages: The queue package provides a queue data structure.
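The container package's own API is not shown here, so as a purely illustrative sketch, a minimal FIFO queue of the kind such a queue package typically provides might look like this (the type and method names are assumptions, not the package's actual API):

```golang
package main

import "fmt"

// Queue is an illustrative FIFO queue; the container package's own
// queue type and method set may differ.
type Queue struct {
	items []interface{}
}

// Enqueue appends a value to the back of the queue.
func (q *Queue) Enqueue(v interface{}) { q.items = append(q.items, v) }

// Dequeue removes and returns the value at the front of the queue,
// reporting false when the queue is empty.
func (q *Queue) Dequeue() (interface{}, bool) {
	if len(q.items) == 0 {
		return nil, false
	}
	v := q.items[0]
	q.items = q.items[1:]
	return v, true
}

func main() {
	var q Queue
	q.Enqueue("first")
	q.Enqueue("second")
	if v, ok := q.Dequeue(); ok {
		fmt.Println(v) // first
	}
}
```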
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users. Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [testshib.org](https://www.testshib.org/), an identity provider designed for testing. Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [testshib.org](https://www.testshib.org/), you can do something like: Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 2. The middleware redirects you to `https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO` 3. testshib.org prompts you for a username and password. 4. testshib.org returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled. 5. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 6. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `examples/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package supports signed and encrypted SAML assertions. It does not support signed or encrypted requests. The *RelayState* parameter allows you to pass user state information across the authentication flow.
The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, *RelayState* is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [TestShib](http://www.testshib.org/) is a testing ground for SAML service and identity providers.
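A sketch of the middleware wiring described above. The option names follow contemporary samlsp releases, and the key pair file names, root URL, and TestShib metadata URL are assumptions; adjust them for your environment.

```golang
package main

import (
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello!")
}

func main() {
	// Load the self-signed key pair generated earlier (file names assumed).
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	idpMetadataURL, _ := url.Parse("https://www.testshib.org/metadata/testshib-providers.xml")
	rootURL, _ := url.Parse("http://localhost:8000")

	samlSP, err := samlsp.New(samlsp.Options{
		URL:            *rootURL,
		Key:            keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate:    keyPair.Leaf,
		IDPMetadataURL: idpMetadataURL,
	})
	if err != nil {
		panic(err)
	}

	// /hello requires a valid session; /saml/ serves metadata and the ACS.
	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```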
** We are working on testify v2 and would love to hear what you'd like to see in it. Have your say here: https://cutt.ly/testify ** Package testify is a set of packages that provide many tools for testifying that your code will behave as you intend. testify contains the following packages: The assert package provides a comprehensive set of assertion functions that tie in to the Go testing system. The http package contains tools to make it easier to test http activity using the Go testing system. The mock package provides a system by which it is possible to mock your objects and verify calls are happening as expected. The suite package provides a basic structure for using structs as testing suites, and methods on those structs as tests. It includes setup/teardown functionality in the way of interfaces.
Package acmrpcclient implements a websocket-enabled Bitcoin JSON-RPC client. This package provides a robust and easy-to-use client for interfacing with a Bitcoin RPC server that uses an acmd/bitcoin core compatible Bitcoin JSON-RPC API. This client has been tested with acmd (https://github.com/Actinium-project/acmd), acmwallet (https://github.com/Actinium-project/acmwallet), and bitcoin core (https://github.com/bitcoin). In addition to the compatible standard HTTP POST JSON-RPC API, acmd and acmwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets. By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to acmd or acmwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers. In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications. In contrast, the websocket-based JSON-RPC interface provided by acmd and acmwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events. The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns. The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency. The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open! All notifications provided by acmd require registration to opt in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function. Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally run in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled.
In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner. By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete. The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client. Minor RPC Server Differences and Chain/Wallet Separation Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by acmd (and acmwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation. Also, it is important to realize that acmd intentionally separates the wallet functionality into a separate process named acmwallet. This means if you are connected to the acmd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, acmwallet provides passthrough treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs. There are 3 categories of errors that will be returned throughout this package: The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described. The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it. The third category of errors, that is, errors returned by the server, can be detected by type asserting the error to a *acmjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server: The following full-blown client examples are in the examples directory:
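Separate from the examples directory, here is a sketch of detecting the third error category (server errors) via the type assertion mentioned above. The import paths and the *acmjson.RPCError field names are assumptions that mirror the btcd-style layout this package follows.

```golang
package example

import (
	"log"

	// Both import paths are assumptions for the Actinium forks named above.
	"github.com/Actinium-project/acmd/acmjson"
	"github.com/Actinium-project/acmrpcclient"
)

// checkServerError detects a server-side error by type asserting the
// returned error to *acmjson.RPCError, as described above.
func checkServerError(client *acmrpcclient.Client) {
	if _, err := client.GetBlockCount(); err != nil {
		if jerr, ok := err.(*acmjson.RPCError); ok {
			// Server-returned error; the Code and Message fields mirror
			// btcjson's RPCError and are an assumption for this fork.
			log.Printf("server error %d: %s", jerr.Code, jerr.Message)
			return
		}
		// Anything else is a client-side error (auth, disconnect, shutdown).
		log.Printf("client error: %v", err)
	}
}
```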
[![](https://godoc.org/github.com/mmmcorp/saml?status.svg)](http://godoc.org/github.com/mmmcorp/saml) [![Build Status](https://travis-ci.org/crewjam/saml.svg?branch=master)](https://travis-ci.org/crewjam/saml) Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/mmmcorp/saml/issues/231) for details. The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide a custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be in a future release. In particular we have deprecated the following fields in _samlsp.Options_: - `Logger` - This was used to emit errors while validating, which is an anti-pattern. - `IDPMetadataURL` - Instead use `FetchMetadata()` - `HTTPClient` - Instead pass httpClient to FetchMetadata - `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users. ```golang package main import ( ) ``` Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in.
We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing. ```golang package main import ( ) ``` Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: Navigate to https://samltest.id/upload.php and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO` 1. samltest.id prompts you for a username and password. 1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled. 1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests. The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
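Here is a sketch of the 0.4-style setup using `FetchMetadata()` as recommended above. The import path, key pair file names, and root URL are assumptions; adjust them for the fork and environment you use.

```golang
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "cn"))
}

func main() {
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	idpMetadataURL, _ := url.Parse("https://samltest.id/saml/idp")

	// Fetch and parse the IDP metadata once at startup, then pass it in
	// via Options.IDPMetadata instead of the deprecated IDPMetadataURL.
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, _ := url.Parse("http://localhost:8000")
	samlSP, err := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})
	if err != nil {
		panic(err)
	}

	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```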
Package btcrpcclient implements a websocket-enabled Bitcoin JSON-RPC client. This package provides a robust and easy-to-use client for interfacing with a Bitcoin RPC server that uses a btcd/bitcoin core compatible Bitcoin JSON-RPC API. This client has been tested with btcd (https://github.com/btcsuite/btcd), btcwallet (https://github.com/btcsuite/btcwallet), and bitcoin core (https://github.com/bitcoin). In addition to the compatible standard HTTP POST JSON-RPC API, btcd and btcwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets. By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to btcd or btcwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers. In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications. In contrast, the websocket-based JSON-RPC interface provided by btcd and btcwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events. The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns. The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency. The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open! All notifications provided by btcd require registration to opt in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function. Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally run in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled.
In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner. By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete. The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client. Minor RPC Server Differences and Chain/Wallet Separation Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by btcd (and btcwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation. Also, it is important to realize that btcd intentionally separates the wallet functionality into a separate process named btcwallet. This means if you are connected to the btcd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, btcwallet provides passthrough treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs. There are 3 categories of errors that will be returned throughout this package: The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described. The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it. The third category of errors, that is, errors returned by the server, can be detected by type asserting the error to a *btcjson.Error. For example, to detect if a command is unimplemented by the remote RPC server: The following full-blown client examples are in the examples directory:
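Separate from the examples directory, here is a minimal sketch of the synchronous API against a bitcoin core style server, using the HTTP POST and no-TLS fallback options described above; the host and credentials are placeholders.

```golang
package main

import (
	"log"

	"github.com/btcsuite/btcrpcclient"
)

func main() {
	connCfg := &btcrpcclient.ConnConfig{
		Host:         "localhost:8332",
		User:         "yourrpcuser",
		Pass:         "yourrpcpass",
		HTTPPostMode: true, // bitcoin core only supports HTTP POST mode
		DisableTLS:   true, // bitcoin core does not provide TLS by default
	}
	client, err := btcrpcclient.New(connCfg, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Shutdown()

	// The synchronous API blocks until the response arrives.
	blockCount, err := client.GetBlockCount()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Block count: %d", blockCount)
}
```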
Package zcashrpcclient implements a websocket-enabled Bitcoin JSON-RPC client. This package provides a robust and easy-to-use client for interfacing with a Bitcoin RPC server that uses a btcd/bitcoin core compatible Bitcoin JSON-RPC API. This client has been tested with btcd (https://github.com/btcsuite/btcd), btcwallet (https://github.com/btcsuite/btcwallet), and bitcoin core (https://github.com/bitcoin). In addition to the compatible standard HTTP POST JSON-RPC API, btcd and btcwallet provide a websocket interface that is more efficient than the standard HTTP POST method of accessing RPC. The section below discusses the differences between HTTP POST and websockets. By default, this client assumes the RPC server supports websockets and has TLS enabled. In practice, this currently means it assumes you are talking to btcd or btcwallet by default. However, configuration options are provided to fall back to HTTP POST and disable TLS to support talking with inferior bitcoin core style RPC servers. In HTTP POST-based JSON-RPC, every request creates a new HTTP connection, issues the call, waits for the response, and closes the connection. This adds quite a bit of overhead to every call and lacks flexibility for features such as notifications. In contrast, the websocket-based JSON-RPC interface provided by btcd and btcwallet only uses a single connection that remains open and allows asynchronous bi-directional communication. The websocket interface supports all of the same commands as HTTP POST, but they can be invoked without having to go through a connect/disconnect cycle for every call. In addition, the websocket interface provides other nice features such as the ability to register for asynchronous notifications of various events. The client provides both a synchronous (blocking) and asynchronous API. The synchronous (blocking) API is typically sufficient for most use cases. It works by issuing the RPC and blocking until the response is received. This allows straightforward code where you have the response as soon as the function returns. The asynchronous API works on the concept of futures. When you invoke the async version of a command, it will quickly return an instance of a type that promises to provide the result of the RPC at some future time. In the background, the RPC call is issued and the result is stored in the returned instance. Invoking the Receive method on the returned instance will either return the result immediately if it has already arrived, or block until it has. This is useful since it provides the caller with greater control over concurrency. The first important part of notifications is to realize that they will only work when connected via websockets. This should intuitively make sense because HTTP POST mode does not keep a connection open! All notifications provided by btcd require registration to opt in. For example, if you want to be notified when funds are received by a set of addresses, you register the addresses via the NotifyReceived (or NotifyReceivedAsync) function. Notifications are exposed by the client through the use of callback handlers which are set up via a NotificationHandlers instance that is specified by the caller when creating the client. It is important that these notification handlers complete quickly since they are intentionally run in the main read loop and will block further reads until they complete. This provides the caller with the flexibility to decide what to do when notifications are coming in faster than they are being handled.
In particular this means issuing a blocking RPC call from a callback handler will cause a deadlock as more server responses won't be read until the callback returns, but the callback would be waiting for a response. Thus, any additional RPCs must be issued in a completely decoupled manner. By default, when running in websockets mode, this client will automatically keep trying to reconnect to the RPC server should the connection be lost. There is a back-off in between each connection attempt until it reaches one try per minute. Once a connection is re-established, all previously registered notifications are automatically re-registered and any in-flight commands are re-issued. This means from the caller's perspective, the request simply takes longer to complete. The caller may invoke the Shutdown method on the client to force the client to cease reconnect attempts and return ErrClientShutdown for all outstanding commands. The automatic reconnection can be disabled by setting the DisableAutoReconnect flag to true in the connection config when creating the client. Minor RPC Server Differences and Chain/Wallet Separation Some of the commands are extensions specific to a particular RPC server. For example, the DebugLevel call is an extension only provided by btcd (and btcwallet passthrough). Therefore if you call one of these commands against an RPC server that doesn't provide them, you will get an unimplemented error from the server. An effort has been made to call out which commands are extensions in their documentation. Also, it is important to realize that btcd intentionally separates the wallet functionality into a separate process named btcwallet. This means if you are connected to the btcd RPC server directly, only the RPCs which are related to chain services will be available. Depending on your application, you might only need chain-related RPCs. In contrast, btcwallet provides passthrough treatment for chain-related RPCs, so it supports them in addition to wallet-related RPCs. There are 3 categories of errors that will be returned throughout this package: The first category of errors are typically one of ErrInvalidAuth, ErrInvalidEndpoint, ErrClientDisconnect, or ErrClientShutdown. NOTE: The ErrClientDisconnect will not be returned unless the DisableAutoReconnect flag is set since the client automatically handles reconnect by default as previously described. The second category of errors typically indicates a programmer error and as such the type can vary, but usually will be best handled by simply showing/logging it. The third category of errors, that is, errors returned by the server, can be detected by type asserting the error to a *btcjson.RPCError. For example, to detect if a command is unimplemented by the remote RPC server: The following full-blown client examples are in the examples directory:
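As a sketch of the futures-based asynchronous API described above (separate from the examples directory); the import path, port, and credentials are assumptions:

```golang
package main

import (
	"log"

	// Import path is an assumption; use wherever zcashrpcclient lives in
	// your project.
	"github.com/zcash/zcashrpcclient"
)

func main() {
	connCfg := &zcashrpcclient.ConnConfig{
		Host:         "localhost:8232", // assumed zcashd-style RPC port
		User:         "rpcuser",
		Pass:         "rpcpass",
		HTTPPostMode: true,
		DisableTLS:   true,
	}
	client, err := zcashrpcclient.New(connCfg, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Shutdown()

	// The async variant returns a future immediately; other work can
	// proceed while the RPC is in flight.
	future := client.GetBlockCountAsync()

	// ... do other work here ...

	// Receive blocks until the result (or an error) arrives.
	blockCount, err := future.Receive()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Block count: %d", blockCount)
}
```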
The following full-blown client examples are in the examples directory:

Package testify is a set of packages that provide many tools for testifying that your code will behave as you intend.

testify contains the following packages:

The assert package provides a comprehensive set of assertion functions that tie in to the Go testing system.

The http package contains tools to make it easier to test http activity using the Go testing system.

The mock package provides a system by which it is possible to mock your objects and verify calls are happening as expected.

The suite package provides a basic structure for using structs as testing suites, and methods on those structs as tests. It includes setup/teardown functionality in the way of interfaces.
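A minimal sketch of the assert package in use (the test name and values are invented for illustration):

```golang
package yours

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestSomething(t *testing.T) {
	// Assert equality; the message is shown when the assertion fails.
	assert.Equal(t, 123, 123, "they should be equal")

	// Assert that a value is nil.
	var err error
	assert.Nil(t, err)
}
```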
Package wrap creates a fast and flexible middleware stack for http.Handlers. Each middleware is a wrapper for another middleware and implements the Wrapper interface.

Features

Wrappers can be found at http://godoc.org/github.com/go-on/wrap-contrib/wraps. A (mountable) router that plays fine with wrappers can be found at http://godoc.org/github.com/go-on/router.

Benchmarks (Go 1.3)

Initial inspiration came from Christian Neukirchen's rack for ruby some years ago.

The core of this package is the New function that constructs a stack of middlewares that implement the Wrapper interface. If the global DEBUG flag is set before calling New, each middleware call will result in calling the Debug method of the global DEBUGGER (which defaults to a logger).

To help construct middleware there are some adapters like WrapperFunc, Handler, HandlerFunc, NextHandler and NextHandlerFunc, each of them adapting to the Wrapper interface.

To help share per-request context there is a Contexter interface that must be implemented by the ResponseWriter. That can easily be done by providing a middleware that injects a context that wraps the current ResponseWriter and implements the Contexter interface. It must at least support the extraction of the wrapped ResponseWriter. Then there are functions to validate implementations of Contexter (ValidateContextInjecter) and to validate them against wrappers that store and retrieve the context data (ValidateWrapperContexts). A complete example for shared contexts can be found in the file example_context_test.go.

Furthermore this package provides some ResponseWriter wrappers that also implement Contexter and help with the development of middleware. These are Buffer, Peek and EscapeHTML. Buffer is a simple buffer. A middleware may pass it to the next handler's ServeHTTP method as a drop-in replacement for the response writer. After the ServeHTTP method has run, the middleware may examine what has been written to the Buffer and decide what to write to the "original" ResponseWriter (which may well be another buffer passed from another middleware). The downside is that the body is written twice and that the complete body is cached in memory, which will be unacceptable for large bodies. Therefore Peek is an alternative response writer wrapper that caches only the headers and the status code but allows intercepting calls of the Write method. All middleware that does not need to read the whole response body should use Peek or provide its own ResponseWriter wrapper (then do not forget to implement the Contexter interface). Finally EscapeHTML provides a response writer wrapper that allows on-the-fly HTML escaping of the bytes written to the wrapped response writer.

It is pretty easy to write your own custom middleware. You should start with a new struct type, which allows you to add options as fields later on. Then you could use the following template to implement the Wrapper interface:
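A sketch of such a template, assuming the Wrapper interface is Wrap(next http.Handler) http.Handler as the adapters above suggest; myMiddleware is a hypothetical name:

```golang
package main

import "net/http"

// myMiddleware is a hypothetical middleware; add options as fields later on.
type myMiddleware struct{}

// Wrap implements the Wrapper interface: it receives the next handler in the
// stack and returns the handler that will serve the request in its place.
func (m *myMiddleware) Wrap(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// ... do something before the next handler runs ...
		next.ServeHTTP(w, r)
		// ... do something after the next handler has run ...
	})
}
```

A stack would then be formed with something like wrap.New(&myMiddleware{}, wrap.Handler(app)), using the Handler adapter to end the chain.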
If you need to run the next handler in order to inspect what it did, replace the response writer with a Peek (see NewPeek) or, if you need full access to the written body, with a Buffer. To form a middleware stack, simply use New() and pass it the middlewares. They get the request top-down. There are some adapters to let, for example, an http.Handler be a middleware (Wrapper) that does not call the next handler and stops the chain. To use per-request context, a custom type is needed that carries the context data, and the user is expected to create and inject a Contexter supporting this type.

For context sharing, the user has to implement the Contexter interface in a way that supports all types the used middlewares expect. Here is a template for a middleware that wants to use / share context, together with a template for an implementation of the Contexter interface:
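A sketch, assuming (per FAQ 6 below) a Contexter whose Context and SetContext methods dispatch on the pointer type; the interface is shown inline and all names are illustrative, not the package's exact definitions:

```golang
package main

import "net/http"

// Contexter is shown inline for this sketch; the real interface lives in the
// wrap package and must at least support *http.ResponseWriter.
type Contexter interface {
	Context(ctxPtr interface{}) (found bool)
	SetContext(ctxPtr interface{})
}

// UserName is a hypothetical piece of per-request context data.
type UserName string

// context wraps the original ResponseWriter and carries the context data.
type context struct {
	http.ResponseWriter
	user UserName
}

// Context extracts data via a type switch on the target pointer. It must at
// least support the extraction of the wrapped ResponseWriter.
func (c *context) Context(ctxPtr interface{}) (found bool) {
	found = true
	switch ptr := ctxPtr.(type) {
	case *http.ResponseWriter:
		*ptr = c.ResponseWriter
	case *UserName:
		*ptr = c.user
	default:
		found = false
	}
	return
}

// SetContext stores data via a type switch on the source pointer.
func (c *context) SetContext(ctxPtr interface{}) {
	switch ptr := ctxPtr.(type) {
	case *UserName:
		c.user = *ptr
	}
}

// setUser is a hypothetical middleware that stores data in the Contexter. It
// type asserts the ResponseWriter without error handling (see FAQ 3 below).
type setUser struct{}

func (setUser) Wrap(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user := UserName("alice")
		w.(Contexter).SetContext(&user)
		next.ServeHTTP(w, r)
	})
}
```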
At any time there must be only one Contexter in the whole middleware stack, and it is best to let it be the first middleware. Then you don't have to worry about whether it is there or not (the Stack function might help you). The corresponding middleware stack would look like this. If your application / handler also uses context data, it is a good idea to implement it as a ContextWrapper, as if it were a middleware, and pass it to ValidateWrapperContexts(). So if your context is wrong, you will get nice panics before your server even starts. And this is always in sync with your app / middleware. If for some reason the original ResponseWriter is needed (to type assert it to an http.Flusher, for example), it may be reclaimed with the help of ReclaimResponseWriter(). You might want to look at existing middlewares to get some ideas: http://godoc.org/github.com/go-on/wrap-contrib/wraps

1. Shouldn't the context rather be an argument to a middleware function, to make this dependency visible in documentation and tools?

Answer: A unified interface gives great freedom when organizing and ordering middleware, because middleware in between does not have to know about a certain context type if it does not care about it. It simply passes the ResponseWriter that happens to have context down the middleware stack chain. On the other hand, exposing the context type by making it a parameter has the effect that every middleware needs to pass this parameter to the next one if the next one needs it. Every middleware is then tightly coupled to the next one, and reordering or mixing middleware from different sources becomes impossible. However, with the ContextHandler interface and the ValidateWrapperContexts function we have a way to guarantee that the requirements are met. And this solution is also type safe.

2. A ResponseWriter is an interface because it may implement other interfaces from the http library, e.g. http.Flusher. If it is wrapped, that underlying implementation is not accessible anymore.

Answer: When the Contexter is validated, it is checked that the Context method supports http.ResponseWriter as well (and that it returns the underlying ResponseWriter). Since only one Contexter may be used within a stack, it is always possible to ask the Contexter for the underlying ResponseWriter. This is what helper functions like ReclaimResponseWriter(), Flush(), CloseNotify() and Hijack() do.

3. Why is the recommended way to use the Contexter interface a type assertion from the ResponseWriter to the Contexter interface without error handling?

Answer: Middleware stacks should be created before the server starts handling requests and should not change anymore afterwards. And you should run tests. Then the assertion will blow up, and that is correct, because there is no reasonable way to handle such an error. It means that either you have no context in your stack, or you inject the context too late, or the context does not handle the kind of type the middleware expects. In each case you should fix it early, and the panic forces you to do so. Use the DEBUG flag to see what's going on.

4. What happens if my context is wrapped inside another context or response writer?

Answer: You should only have one Contexter per application and inject it as the first wrapper into your middleware stack. All context-specific data belongs there. Having multiple Contexters in a stack is considered a bug. The good news is that you can now be sure to be able to access all of your context data everywhere inside your stack. A context should never wrap another one. There should also be no need for another context-wrapping ResponseWriter, because every type can be saved by a Contexter with little code. All response writer wrappers of this package implement the Contexter interface, and every response writer you use should do so too.

5. Why isn't there any default context object? Why do I have to write it on my own?

Answer: Writing your own context and context injection has several benefits:

6. Why is the context data accessed and stored via type switch and not by string keys?

Answer: Type-based context allows a very simple implementation that namespaces across packages, needs no extra memory allocation and is, well, type safe. If you need to store multiple context data of the same type, simply define an alias type for each key, as in the small sketch after this FAQ.

7. Is there an example of how to integrate with 3rd-party middleware libraries that expect context?

Answer: Yes, have a look at http://godoc.org/github.com/go-on/wrap-contrib/third-party.

8. What about spawning goroutines / how does it relate to code.google.com/p/go.net/context?

If you need your request to be handled by different goroutines and middlewares, you might use your Contexter to store and provide access to a code.google.com/p/go.net/context Context just like any other context data. The good news is that you can write your middleware as before and extract your Context wherever you need it. And if you have middleware in between that does not know about Context or other context data, you don't have to build wrappers around it that take a Context as parameter. Instead, the ResponseWriter that happens to provide you a Context will be passed down the chain by the middleware. "At Google, we require that Go programmers pass a Context parameter as the first argument to every function on the call path between incoming and outgoing requests." (Sameer Ajmani, http://blog.golang.org/context) This is not necessary anymore. And it is not necessary for any type of contextual data, because that data does not have to be in the type signature anymore.
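The alias-type trick from FAQ 6, sketched (both names are invented):

```golang
package main

// Both values are strings underneath, but the distinct named types act as
// separate keys inside the Contexter's type switch.
type RequestID string
type SessionID string

// Inside a Contexter implementation, each type gets its own case:
//
//	case *RequestID:
//		*ptr = c.requestID
//	case *SessionID:
//		*ptr = c.sessionID
```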
This is the official Go SDK for Oracle Cloud Infrastructure.

Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions.

The following example shows how to get started with the SDK. It creates an identityClient struct with the default configuration, then uses the identityClient to list availability domains and prints them to stdout:
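A sketch, assuming a default OCI config file is in place; the compartment used here is the tenancy from that config:

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/identity"
)

func main() {
	provider := common.DefaultConfigProvider()
	client, err := identity.NewIdentityClientWithConfigurationProvider(provider)
	if err != nil {
		log.Fatal(err)
	}

	// List availability domains in the tenancy (root compartment).
	tenancyID, err := provider.TenancyOCID()
	if err != nil {
		log.Fatal(err)
	}
	resp, err := client.ListAvailabilityDomains(context.Background(),
		identity.ListAvailabilityDomainsRequest{
			CompartmentId: common.String(tenancyID),
		})
	if err != nil {
		log.Fatal(err)
	}
	for _, ad := range resp.Items {
		fmt.Println(*ad.Name)
	}
}
```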
More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example

Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string.

The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example, common.String (used in the example above) returns a *string for a given string.

The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. The Interceptor closure is called before the signing process, so any changes made to the request will be properly signed and submitted to the service.

The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The signer also allows more granular control of the headers used for signing. You can combine a custom signer with the exposed clients in the SDK, which allows you to add custom signed headers to the request. Bear in mind that some services have a white list of headers that they expect to be signed; therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm

Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. In the case of a polymorphic response you can type assert the interface to the expected type. An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63

When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go

The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses. Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The following are possible values for the "OCI_GO_SDK_DEBUG" variable:

1. "info" or "i" enables all info logging messages

2. "debug" or "d" enables all debug and info logging messages

3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages

4. "null" turns all logging messages off

If the value of the environment variable does not match any of the above, then the default logging level is "info". If the environment variable is not present, then no logging messages are emitted. The default destination for logging is stderr; if you want to output logs to a file, you can set this via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The following are possible values:

1. "file" or "f" enables all logging output to be saved to a file

2. "combine" or "c" enables all logging output to go to both stderr and a file

You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there's a network issue, etc. This can be accomplished by using RequestMetadata.RetryPolicy (request-level configuration); alternatively, global (all services) or client-level RetryPolicy configuration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go

If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface.

The retry behavior precedence (highest to lowest) is defined as below:

The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). The default retry policy is defined by:

Default Retry-able Errors

Below is the list of default retry-able errors for which retry attempts should be made. The following errors should be retried (with backoff): HTTP Code / Customer-facing Error Code (table not carried over). Apart from the above errors, retries should also be attempted in the following client-side errors:

1. HTTP Connection timeout

2. Request Connection Errors

3. Request Exceptions

4. Other timeouts (like Read Timeout)

The above errors can be avoided through retrying and hence are classified as the default retry-able errors. Additionally, retries should also be made for Circuit Breaker exceptions (exceptions raised by a Circuit Breaker in an open state).

Default Termination Strategy

The termination strategy defines when SDKs should stop attempting to retry. In other words, it is the deadline for retries. The OCI SDKs stop retrying the operation after 7 retry attempts. This means the SDKs will have retried for ~98 seconds or ~1.5 minutes of total delays. SDKs will make a total of 8 attempts
(1 initial request + 7 retries).

Default Delay Strategy

The delay strategy defines the amount of time to wait between each of the retry attempts. The default delay strategy chosen for the SDK is exponential backoff with jitter, using:

1. A base time of 1 second to use in retry calculations

2. An exponent of 2; when calculating the next retry time, the SDK will raise this to the power of the number of attempts

3. A maximum wait time between calls of 30 seconds (capped)

4. An added jitter value between 0-1000 milliseconds to spread out the requests

Configure and use default retry policy

You can set this retry policy for a single request, for all requests made by a client, or for all requests made by all clients, or set the default retry via an environment variable as a global switch for all services; a sketch of the request-level form follows below. Some services enable retry for operations by default; this can be overridden using any of the alternatives mentioned above. To know which service operations have retries enabled by default, look at the operation's description in the SDK - it will say whether it has retries enabled by default.

Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail. If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will:

1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made

2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer

3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm)

If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.
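A minimal sketch of the request-level configuration mentioned above, assuming common.DefaultRetryPolicy and RequestMetadata from the SDK's common package; the identity request and the compartment OCID are placeholders (client-level and global configuration are shown in example_retry_test.go, linked above):

```golang
package main

import (
	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/identity"
)

// newRetryingRequest attaches the default retry policy to a single request's
// metadata, so only this call is retried under that policy.
func newRetryingRequest(compartmentID string) identity.ListAvailabilityDomainsRequest {
	policy := common.DefaultRetryPolicy()
	return identity.ListAvailabilityDomainsRequest{
		CompartmentId: common.String(compartmentID), // placeholder OCID
		RequestMetadata: common.RequestMetadata{
			RetryPolicy: &policy,
		},
	}
}
```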
If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option. The NewRetryPolicy function also creates a retry policy without eventual consistency.

A Circuit Breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles. Of course, it also enables an application to detect whether the fault has been resolved; if the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping calls in a circuit breaker object which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips and all further calls to the circuit breaker return an error; this also saves the service from being overwhelmed with network calls in case of an outage.

Circuit Breaker Configuration Definitions

1. Failure Rate Threshold - The state of the CircuitBreaker changes from CLOSED to OPEN when the failure rate is equal to or greater than a configurable threshold, for example when more than 50% of the recorded calls have failed.

2. Reset Timeout - The timeout after which an open circuit breaker will attempt a request if a request is made.

3. Failure Exceptions - The list of exceptions that will be regarded as failures for the circuit.

4. Minimum number of calls / Volume threshold - Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate.

The default configuration is:

1. Failure Rate Threshold - 80% - When 80% of the requests calculated for a time window of 120 seconds have failed, the circuit will transition from closed to open.

2. Minimum number of calls / Volume threshold - A value of 10, for the above defined time window of 120 seconds.

3. Reset Timeout - 30 seconds to wait before setting the breaker to the halfOpen state and trying the action again.

4. Failure Exceptions - Failures for the circuit will only be recorded for the retryable/transient exceptions: the retry-able HTTP codes and customer-facing error codes listed above, plus the following client-side exceptions: HTTP Connection timeout, Request Connection Errors, Request Exceptions, and other timeouts (like Read Timeout).

The Go SDK enables a circuit breaker with the default configuration for most of the service clients; if you don't want this, you can disable the functionality before your application runs. The Go SDK also supports customizing the circuit breaker with specified configurations; you can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go To know which service clients have circuit breakers enabled, look at the service client's description in the SDK - it will say whether it has circuit breakers enabled by default.
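Since the SDK builds on sony/gobreaker, the following sketch shows the underlying mechanics with settings mirroring the defaults above; this illustrates gobreaker itself, not the SDK's internal wiring:

```golang
package main

import (
	"errors"
	"fmt"
	"time"

	"github.com/sony/gobreaker"
)

func main() {
	cb := gobreaker.NewCircuitBreaker(gobreaker.Settings{
		Name:     "example",
		Interval: 120 * time.Second, // window for counting failures
		Timeout:  30 * time.Second,  // reset timeout before half-open
		ReadyToTrip: func(c gobreaker.Counts) bool {
			// Trip at >= 80% failures once at least 10 calls were recorded.
			return c.Requests >= 10 &&
				float64(c.TotalFailures)/float64(c.Requests) >= 0.8
		},
	})

	// Calls are wrapped in Execute; once the breaker is open, Execute
	// returns an error immediately without invoking the function.
	_, err := cb.Execute(func() (interface{}, error) {
		return nil, errors.New("simulated transient failure")
	})
	fmt.Println(err)
}
```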
The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, then you can set this up in the following ways:

1. Configuring an environment variable as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment

2. Modifying the underlying Transport struct for a service client

In order to modify the underlying Transport struct in HttpClient, you can do something similar to the following (sample code for the audit service client):
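A sketch, assuming the audit client's HTTPClient field can be replaced with a custom *http.Client; the proxy URL is a placeholder:

```golang
package main

import (
	"log"
	"net/http"
	"net/url"

	"github.com/oracle/oci-go-sdk/audit"
	"github.com/oracle/oci-go-sdk/common"
)

func main() {
	client, err := audit.NewAuditClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		log.Fatal(err)
	}

	// Route all outgoing requests through a proxy (placeholder URL).
	proxyURL, err := url.Parse("http://proxy.example.com:3128")
	if err != nil {
		log.Fatal(err)
	}
	client.HTTPClient = &http.Client{
		Transport: &http.Transport{
			Proxy: http.ProxyURL(proxyURL),
		},
	}
}
```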
The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host. If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable.

Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk

Licensing information available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt

To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom

For help, please refer to this link: https://github.com/oracle/oci-go-sdk#help