Package crypto11 enables access to cryptographic keys from PKCS#11 using the Go crypto API.

1. Either write a configuration file (see ConfigureFromFile) or define a configuration in your application (see PKCS11Config and Configure). This identifies the PKCS#11 library and token to use, and contains the password (or "PIN" in PKCS#11 terminology) to use if the token requires login.

2. Create keys with GenerateDSAKeyPair, GenerateRSAKeyPair and GenerateECDSAKeyPair. The keys you get back implement the standard Go crypto.Signer interface (and crypto.Decrypter, for RSA). They are automatically persisted under a randomly generated label and ID (use the Identify method to discover them).

3. Retrieve existing keys with FindKeyPair. The return value is a Go crypto.PrivateKey; it may be converted either to crypto.Signer or to *PKCS11PrivateKeyDSA, *PKCS11PrivateKeyECDSA or *PKCS11PrivateKeyRSA.

Note that PKCS#11 session handles must not be used concurrently from multiple threads. Consumers of the Signer interface know nothing of this and expect to be able to sign from multiple threads without constraint. We address this as follows.

1. PKCS11Object captures both the object handle and the slot ID for an object.

2. For each slot we maintain a pool of read-write sessions. The pool expands dynamically up to an (undocumented) limit.

3. Each operation transiently takes a session from the pool. It has exclusive use of the session, meeting PKCS#11's concurrency requirements.

The details are partially exposed in the API; since the target use case is PKCS#11-unaware operation, the API as it stands may not be good enough for PKCS#11-aware applications. Feedback welcome.

See also https://golang.org/pkg/crypto/

The PKCS1v15DecryptOptions SessionKeyLen field is not implemented and an error is returned if it is nonzero. The reason for this is that it is not possible for crypto11 to guarantee the constant-time behavior required by the specification. See https://github.com/thalesignite/crypto11/issues/5 for further discussion.

Symmetric crypto support via cipher.Block is very slow. You can use the BlockModeCloser API instead, but you must call its Close() method (not found in cipher.BlockMode). See https://github.com/ThalesIgnite/crypto11/issues/6 for further discussion.
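As a rough illustration of the workflow above, here is a minimal sketch using the legacy package-level API. The library path, token label, PIN and key size are assumptions, and the exact PKCS11Config fields and function signatures may differ between crypto11 releases.

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/sha256"
	"log"

	"github.com/ThalesIgnite/crypto11"
)

func main() {
	// Configure the PKCS#11 library, token and PIN (values here are assumptions).
	_, err := crypto11.Configure(&crypto11.PKCS11Config{
		Path:       "/usr/lib/softhsm/libsofthsm2.so",
		TokenLabel: "mytoken",
		Pin:        "password",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Generate a 2048-bit RSA key pair on the token.
	key, err := crypto11.GenerateRSAKeyPair(2048)
	if err != nil {
		log.Fatal(err)
	}

	// The key implements crypto.Signer, so it works with the standard library.
	digest := sha256.Sum256([]byte("hello"))
	sig, err := key.Sign(rand.Reader, digest[:], crypto.SHA256)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("signature: %x", sig)
}
```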
Package crypto11 enables access to cryptographic keys from PKCS#11 using the Go crypto API.

PKCS#11 tokens are accessed via Context objects. Each Context connects to one token. Context objects are created by calling Configure or ConfigureFromFile. In the latter case, the file should contain a JSON representation of a Config.

There is support for generating DSA, RSA and ECDSA keys. These keys can be found later using FindKeyPair. All three key types implement the crypto.Signer interface and the RSA keys also implement crypto.Decrypter. RSA keys obtained through FindKeyPair will need a type assertion to be used for decryption. Assert either crypto.Decrypter or SignerDecrypter, as you prefer.

Symmetric keys can also be generated. These are found later using FindKey. See the documentation for SecretKey for further information.

Note that PKCS#11 session handles must not be used concurrently from multiple threads. Consumers of the Signer interface know nothing of this and expect to be able to sign from multiple threads without constraint. We address this as follows.

1. When a Context is created, a session is created and the user is logged in. This session remains open until the Context is closed, to ensure all object handles remain valid and to avoid repeatedly calling C_Login.

2. The Context also maintains a pool of read-write sessions. The pool expands dynamically as needed, but never beyond the maximum number of r/w sessions supported by the token (as reported by C_GetInfo). If other applications are using the token, a lower limit should be set in the Config.

3. Each operation transiently takes a session from the pool. It has exclusive use of the session, meeting PKCS#11's concurrency requirements. Sessions are returned to the pool afterwards and may be re-used.

Behaviour of the pool can be tweaked via Config fields:

- PoolWaitTimeout controls how long an operation can block waiting on a session from the pool. A zero value means there is no limit. Timeouts occur if the pool is fully used and additional operations are requested.

- MaxSessions sets an upper bound on the number of sessions. If this value is zero, a default maximum is used (see DefaultMaxSessions). In every case the maximum supported sessions as reported by the token is obeyed.

The PKCS1v15DecryptOptions SessionKeyLen field is not implemented and an error is returned if it is nonzero. The reason for this is that it is not possible for crypto11 to guarantee the constant-time behavior required by the specification. See https://github.com/thalesignite/crypto11/issues/5 for further discussion.

Symmetric crypto support via cipher.Block is very slow. You can use the BlockModeCloser API instead, but you must call its Close() method (not found in cipher.BlockMode). See https://github.com/ThalesIgnite/crypto11/issues/6 for further discussion.
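A minimal sketch of the Context-based workflow described above. The library path, token label, PIN and key identifiers are assumptions; consult the package documentation for the exact Config fields and method signatures in your release.

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/sha256"
	"log"

	"github.com/ThalesIgnite/crypto11"
)

func main() {
	// Open a Context for one token (path, label and PIN are assumptions).
	ctx, err := crypto11.Configure(&crypto11.Config{
		Path:       "/usr/lib/softhsm/libsofthsm2.so",
		TokenLabel: "mytoken",
		Pin:        "password",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer ctx.Close()

	// Look up an existing key pair by ID and label (placeholder values).
	signer, err := ctx.FindKeyPair([]byte{1}, []byte("my-key"))
	if err != nil {
		log.Fatal(err)
	}
	if signer == nil {
		log.Fatal("key not found")
	}

	// The returned key implements crypto.Signer.
	digest := sha256.Sum256([]byte("hello"))
	sig, err := signer.Sign(rand.Reader, digest[:], crypto.SHA256)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("signature: %x", sig)
}
```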
Package oauth1 is a Go implementation of the OAuth1 spec RFC 5849. It allows end-users to authorize a client (consumer) to access protected resources on their behalf (e.g. login) and allows clients to make signed and authorized requests on behalf of a user (e.g. API calls). It takes design cues from golang.org/x/oauth2, providing an http.Client which handles request signing and authorization.

To implement "Login with X", use the https://github.com/dghubble/gologin packages, which provide login handlers for OAuth1 and OAuth2 providers.

To call the Twitter, Digits, or Tumblr OAuth1 APIs, use the higher level Go API clients.

* https://github.com/dghubble/go-twitter
* https://github.com/dghubble/go-digits
* https://github.com/benfb/go-tumblr

Perform the OAuth1 authorization flow to ask a user to grant an application access to their resources via an access token.

1. When a user performs an action (e.g. a "Login with X" button calls the "/login" route), get an OAuth1 request token (temporary credentials).

2. Obtain authorization from the user by redirecting them to the OAuth1 provider's authorization URL to grant the application access. Receive the callback from the OAuth1 provider in a handler.

3. Acquire the access token (token credentials) which can later be used to make requests on behalf of the user. (A sketch of this flow appears at the end of this overview.)

Check the examples to see this authorization flow in action from the command line, with Twitter PIN-based login and Tumblr login.

Use an access Token to make authorized requests on behalf of a user. Check the examples to see Twitter and Tumblr requests in action.
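For illustration, a hedged sketch of the three-step PIN-based ("oob") flow from the command line. The consumer credentials, Twitter endpoint URLs and API path are placeholders; method names follow this package's Config type as described above.

```go
package main

import (
	"fmt"
	"log"

	"github.com/dghubble/oauth1"
)

func main() {
	// Consumer credentials and endpoint URLs are placeholders.
	config := oauth1.NewConfig("consumerKey", "consumerSecret")
	config.CallbackURL = "oob" // out-of-band (PIN-based) flow
	config.Endpoint = oauth1.Endpoint{
		RequestTokenURL: "https://api.twitter.com/oauth/request_token",
		AuthorizeURL:    "https://api.twitter.com/oauth/authorize",
		AccessTokenURL:  "https://api.twitter.com/oauth/access_token",
	}

	// 1. Obtain a request token (temporary credentials).
	requestToken, requestSecret, err := config.RequestToken()
	if err != nil {
		log.Fatal(err)
	}

	// 2. Send the user to the provider's authorization URL and collect the verifier (PIN).
	authorizationURL, err := config.AuthorizationURL(requestToken)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Open %s and enter the PIN: ", authorizationURL.String())
	var verifier string
	fmt.Scanln(&verifier)

	// 3. Exchange the request token and verifier for an access token.
	accessToken, accessSecret, err := config.AccessToken(requestToken, requestSecret, verifier)
	if err != nil {
		log.Fatal(err)
	}

	// Use the access token to make authorized requests on behalf of the user.
	token := oauth1.NewToken(accessToken, accessSecret)
	httpClient := config.Client(oauth1.NoContext, token)
	resp, err := httpClient.Get("https://api.twitter.com/1.1/account/verify_credentials.json")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```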
This example opens https://github.com/, searches for "git", and then gets the header element which gives the description for Git; a code sketch appears at the end of these notes.

Rod uses https://golang.org/pkg/context to handle cancellations for IO blocking operations; most of the time it's a timeout. Context will be recursively passed to all sub-methods. For example, methods like Page.Context(ctx) will return a clone of the page with the ctx, and all the methods of the returned page will use the ctx if they have IO blocking operations. Page.Timeout or Page.WithCancel is just a shortcut for Page.Context. Of course, Browser and Element work the same way.

Shows how we can further customize the browser with the launcher library. Usually you use the launcher lib to set the browser's command line flags (switches). Doc for flags: https://peter.sh/experiments/chromium-command-line-switches

Shows how to change the retry/polling options that are used to query elements. This is useful when you want to customize the element query retry logic.

When Rod doesn't have a feature that you need, you can easily call the CDP API to achieve it. List of CDP APIs: https://github.com/go-rod/rod/tree/master/lib/proto

Shows how to disable headless mode and debug. Rod provides a lot of debug options; you can set them with setter methods or use environment variables. Doc for environment variables: https://pkg.go.dev/github.com/go-rod/rod/lib/defaults

We use "Must" prefixed functions to write example code. But in production you may want to use the no-prefix version of them. About why we use "Must" as the prefix, it's similar to https://golang.org/pkg/regexp/#MustCompile

Shows how to share a remote object reference between two Eval calls.

Shows how to listen for events.

Shows how to intercept requests and modify both the request and the response. The entire process of hijacking one request: the --req-> and --res-> are the parts that can be modified.

Shows how to handle multiple results of an action, such as when you log in to a page and the result can be success or wrong password.

Example_search shows how to use Search to get an element inside nested iframes or shadow DOMs. It works the same as https://developers.google.com/web/tools/chrome-devtools/dom#search

Shows how to update the state of the current page. In this example we enable the network domain.

Rod uses the mouse cursor to simulate clicks, so if a button is moving because of animation, the click may not work as expected. We usually use WaitStable to make sure the target isn't changing anymore.

When you want to wait for an ajax request to complete, this example will be useful.
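A minimal sketch of that first example. The CSS selectors for github.com's search box and result text are assumptions and may need updating as the site changes; the helper names follow Rod's "Must" style mentioned above.

```go
package main

import (
	"fmt"
	"time"

	"github.com/go-rod/rod"
	"github.com/go-rod/rod/lib/input"
)

func main() {
	// Launch a browser with default options and connect to it.
	browser := rod.New().MustConnect()
	defer browser.MustClose()

	// Open the page; the timeout bounds all IO blocking operations below.
	page := browser.MustPage("https://github.com").Timeout(time.Minute)

	// Type "git" into the search box and press Enter (selector is an assumption).
	page.MustElement("input[name=q]").MustInput("git").MustType(input.Enter)

	// Read the description text from the first search result (selector is an assumption).
	text := page.MustElement(".codesearch-results p").MustText()
	fmt.Println(text)
}
```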
Package cognitoidentity provides the client and types for making API requests to Amazon Cognito Identity.

Amazon Cognito is a web service that delivers scoped temporary credentials to mobile devices and other untrusted environments. Amazon Cognito uniquely identifies a device and supplies the user with a consistent identity over the lifetime of an application.

Using Amazon Cognito, you can enable authentication with one or more third-party identity providers (Facebook, Google, or Login with Amazon), and you can also choose to support unauthenticated access from your app. Cognito delivers a unique identifier for each user and acts as an OpenID token provider trusted by AWS Security Token Service (STS) to access temporary, limited-privilege AWS credentials.

To provide end-user credentials, first make an unsigned call to GetId. If the end user is authenticated with one of the supported identity providers, set the Logins map with the identity provider token. GetId returns a unique identifier for the user.

Next, make an unsigned call to GetCredentialsForIdentity. This call expects the same Logins map as the GetId call, as well as the IdentityID originally returned by GetId. Assuming your identity pool has been configured via the SetIdentityPoolRoles operation, GetCredentialsForIdentity will return AWS credentials for your use. If your pool has not been configured with SetIdentityPoolRoles, or if you want to follow legacy flow, make an unsigned call to GetOpenIdToken, which returns the OpenID token necessary to call STS and retrieve AWS credentials. This call expects the same Logins map as the GetId call, as well as the IdentityID originally returned by GetId. The token returned by GetOpenIdToken can be passed to the STS operation AssumeRoleWithWebIdentity (http://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithWebIdentity.html) to retrieve AWS credentials.

If you want to use Amazon Cognito in an Android, iOS, or Unity application, you will probably want to make API calls via the AWS Mobile SDK. To learn more, see the AWS Mobile SDK Developer Guide (http://docs.aws.amazon.com/mobile/index.html).

See https://docs.aws.amazon.com/goto/WebAPI/cognito-identity-2014-06-30 for more information on this service.

See cognitoidentity package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/cognitoidentity/

To use Amazon Cognito Identity with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently.

See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/

See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

See the Amazon Cognito Identity client CognitoIdentity for more information on creating the client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/cognitoidentity/#New
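A minimal sketch of the GetId / GetCredentialsForIdentity flow with the aws-sdk-go client described above. The region, identity pool ID, provider name and token are placeholders.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cognitoidentity"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-east-1"), // placeholder region
	}))
	svc := cognitoidentity.New(sess)

	// Logins maps an identity provider name to the token it issued (placeholder values).
	logins := map[string]*string{
		"accounts.google.com": aws.String("id-token-from-provider"),
	}

	// First, get a unique identifier for this end user.
	idOut, err := svc.GetId(&cognitoidentity.GetIdInput{
		IdentityPoolId: aws.String("us-east-1:00000000-0000-0000-0000-000000000000"),
		Logins:         logins,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Then exchange the identity ID (plus the same Logins map) for temporary AWS credentials.
	credsOut, err := svc.GetCredentialsForIdentity(&cognitoidentity.GetCredentialsForIdentityInput{
		IdentityId: idOut.IdentityId,
		Logins:     logins,
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("access key:", aws.StringValue(credsOut.Credentials.AccessKeyId))
}
```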
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider.

This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Note: versions between 0.2.0 and the current master include changes to the API that will break your existing code a little. This change turned some fields from pointers to a single optional struct into the more correct slice of struct, and pluralized the field names. For example, `IDPSSODescriptor *IDPSSODescriptor` has become `IDPSSODescriptors []IDPSSODescriptor`. This more accurately reflects the standard. The struct `Metadata` has been renamed to `EntityDescriptor`. In 0.2.0 and before, every struct derived from the standard has the same name as in the standard, *except* for `Metadata` which should always have been called `EntityDescriptor`. In various places `url.URL` is now used where `string` was used <= version 0.1.0. In various places where keys and certificates were modeled as `string` <= version 0.1.0 (what was I thinking?!) they are now modeled as `*rsa.PrivateKey`, `*x509.Certificate`, or `crypto.PrivateKey` as appropriate.

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import "net/http"
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this:

We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [testshib.org](https://www.testshib.org/), an identity provider designed for testing. (A hedged sketch of this wiring appears at the end of this document.)

```golang
package main

import (
)
```

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [testshib.org](https://www.testshib.org/), you can do something like:

Navigate to https://www.testshib.org/register.html and upload the file you fetched.

Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
1. The middleware redirects you to `https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO`
1. testshib.org prompts you for a username and password.
1. testshib.org returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled.
1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `examples/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org).

This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package supports signed and encrypted SAML assertions. It does not support signed or encrypted requests.

The *RelayState* parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, *RelayState* is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[TestShib](https://www.testshib.org/) is a testing ground for SAML service and identity providers.

Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `8EA205C01C425FF195A5E9A43FA0768F26FD2554`](https://keybase.io/crewjam)).
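As promised above, a hedged sketch of wiring `samlsp.Middleware` in front of a handler. The `samlsp.Options` field names (`URL`, `Key`, `Certificate`, `IDPMetadataURL`) follow an older release of this package and have changed in later versions; the key pair file names and URLs are placeholders.

```golang
package main

import (
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello, world!")
}

func main() {
	// Load the service provider's self-signed key pair (file names are placeholders).
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	idpMetadataURL, _ := url.Parse("https://www.testshib.org/metadata/testshib-providers.xml")
	rootURL, _ := url.Parse("http://localhost:8000")

	// Options fields follow an older samlsp release and may differ in current versions.
	samlSP, err := samlsp.New(samlsp.Options{
		URL:            *rootURL,
		Key:            keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate:    keyPair.Leaf,
		IDPMetadataURL: idpMetadataURL,
	})
	if err != nil {
		panic(err)
	}

	// Protect /hello behind SAML auth; serve the SAML endpoints under /saml/.
	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```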
Package mwclient provides functionality for interacting with the MediaWiki API.

go-mwclient is intended for users who are already familiar with (or are willing to learn) the MediaWiki API. It is intended to make dealing with the API more convenient, but not to hide it. go-mwclient v1 uses version 2 of the MW JSON API.

In the example below, basic usage of go-mwclient is shown. Create a new Client object with the New() constructor, and then you are ready to start making requests to the API. If you wish to make requests to multiple MediaWiki sites, you must create a Client for each of them.

go-mwclient offers a few methods for making arbitrary requests to the API: Get, GetRaw, Post, and PostRaw (see documentation for the methods for details). They all offer the same basic interface: pass a params.Values map (from the github.com/clockworksoul/go-mwclient/params package), receive a response and an error. For convenience, go-mwclient offers several methods for making common requests (login, edit, etc.), but these methods are implemented using the same interface.

params.Values

params.Values is similar to (and a fork of) the standard library's net/url.Values. The reason why params.Values is used instead is that url.Values is based on a map[string][]string rather than a map[string]string, because url.Values must support multiple keys with the same name. The literal syntax for a map[string][]string is rather cumbersome because the value is a slice rather than just a string, and the MediaWiki API actually does not use multiple keys when multiple values for the same key are required. Instead, one key is used and the values are separated by pipes (|). params.Values therefore makes it simple to write multi-value values in literals while avoiding the cumbersome []string literals for the most common case, where there is only one value. See the documentation for the params package for more information.

Because of the way type identity works in Go, it is possible for callers to pass a plain map[string]string rather than a params.Values. It is only necessary for users to use params.Values directly if they wish to use params.Values's methods. It makes no difference to go-mwclient.

If an API call fails it will return an error. Many things can go wrong during an API call: the network could be down, the API could return an unexpected response (if the API was changed), or perhaps there's an error in your API request. If the error is an API error or warning (and you used the "non-Raw" Get and Post methods), then the error/warning(s) will be parsed and returned in either an APIError or an APIWarnings object, both of which implement the error interface. The "Raw" request methods do not check for API errors or warnings. For more information about API errors and warnings, please see https://www.mediawiki.org/wiki/API:Errors_and_warnings.

If maxlag is enabled, it may be that the API has rejected the requests and the amount of retries (3 by default) have been tried unsuccessfully. In that case, the error will be the variable mwclient.ErrAPIBusy.

Other methods than the core ones (i.e., other methods than Get and Post) may return other errors.
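A minimal sketch of creating a Client and making a request with a params.Values literal, as described above. The wiki URL and user agent string are placeholders, and GetRaw is shown on the assumption that it returns the raw JSON response body without parsing API errors or warnings.

```go
package main

import (
	"fmt"
	"log"

	"github.com/clockworksoul/go-mwclient"
	"github.com/clockworksoul/go-mwclient/params"
)

func main() {
	// Create one Client per wiki; the user agent string is a placeholder.
	w, err := mwclient.New("https://en.wikipedia.org/w/api.php", "myWikibot")
	if err != nil {
		log.Fatal(err)
	}

	// Multi-value parameters use a single pipe-separated value, as the API expects.
	p := params.Values{
		"action": "query",
		"meta":   "siteinfo",
		"siprop": "general|namespaces",
	}

	// GetRaw returns the raw JSON response (assumption: no error/warning parsing).
	raw, err := w.GetRaw(p)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(raw))
}
```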
Package crypto11 enables access to cryptographic keys from PKCS#11 using Go crypto API. PKCS#11 tokens are accessed via Context objects. Each Context connects to one token. Context objects are created by calling Configure or ConfigureFromFile. In the latter case, the file should contain a JSON representation of a Config. There is support for generating DSA, RSA and ECDSA keys. These keys can be found later using FindKeyPair. All three key types implement the crypto.Signer interface and the RSA keys also implement crypto.Decrypter. RSA keys obtained through FindKeyPair will need a type assertion to be used for decryption. Assert either crypto.Decrypter or SignerDecrypter, as you prefer. Symmetric keys can also be generated. These are found later using FindKey. See the documentation for SecretKey for further information. Note that PKCS#11 session handles must not be used concurrently from multiple threads. Consumers of the Signer interface know nothing of this and expect to be able to sign from multiple threads without constraint. We address this as follows. 1. When a Context is created, a session is created and the user is logged in. This session remains open until the Context is closed, to ensure all object handles remain valid and to avoid repeatedly calling C_Login. 2. The Context also maintains a pool of read-write sessions. The pool expands dynamically as needed, but never beyond the maximum number of r/w sessions supported by the token (as reported by C_GetInfo). If other applications are using the token, a lower limit should be set in the Config. 3. Each operation transiently takes a session from the pool. They have exclusive use of the session, meeting PKCS#11's concurrency requirements. Sessions are returned to the pool afterwards and may be re-used. Behaviour of the pool can be tweaked via Config fields: - PoolWaitTimeout controls how long an operation can block waiting on a session from the pool. A zero value means there is no limit. Timeouts occur if the pool is fully used and additional operations are requested. - MaxSessions sets an upper bound on the number of sessions. If this value is zero, a default maximum is used (see DefaultMaxSessions). In every case the maximum supported sessions as reported by the token is obeyed. The PKCS1v15DecryptOptions SessionKeyLen field is not implemented and an error is returned if it is nonzero. The reason for this is that it is not possible for crypto11 to guarantee the constant-time behavior in the specification. See https://github.com/thalesignite/crypto11/issues/5 for further discussion. Symmetric crypto support via cipher.Block is very slow. You can use the BlockModeCloser API but you must call the Close() interface (not found in cipher.BlockMode). See https://github.com/ThalesIgnite/crypto11/issues/6 for further discussion.
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Note: between version 0.2.0 and the current master include changes to the API that will break your existing code a little. This change turned some fields from pointers to a single optional struct into the more correct slice of struct, and to pluralize the field name. For example, `IDPSSODescriptor *IDPSSODescriptor` has become `IDPSSODescriptors []IDPSSODescriptor`. This more accurately reflects the standard. The struct `Metadata` has been renamed to `EntityDescriptor`. In 0.2.0 and before, every struct derived from the standard has the same name as in the standard, *except* for `Metadata` which should always have been called `EntityDescriptor`. In various places `url.URL` is now used where `string` was used <= version 0.1.0. In various places where keys and certificates were modeled as `string` <= version 0.1.0 (what was I thinking?!) they are now modeled as `*rsa.PrivateKey`, `*x509.Certificate`, or `crypto.PrivateKey` as appropriate. Let us assume we have a simple web appliation to protect. We'll modify this application so it uses SAML to authenticate users. ```golang package main import "net/http" ``` Each service provider must have an self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [testshib.org](https://www.testshib.org/), an identity provider designed for testing. ```golang package main import ( ) ``` Next we'll have to register our service provider with the identiy provider to establish trust from the service provider to the IDP. For [testshib.org](https://www.testshib.org/), you can do something like: Naviate to https://www.testshib.org/register.html and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 1. The middleware redirects you to `https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO` 1. testshib.org prompts you for a username and password. 1. testshib.org returns you an HTML document which contains an HTML form setup to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled. 1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 1. 
This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `examples/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package supports signed and encrypted SAML assertions. It does not support signed or encrypted requests. The *RelayState* parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originaly requested link, rather than the root. Unfortunately, *RelayState* is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [TestShib](https://www.testshib.org/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `8EA205C01C425FF195A5E9A43FA0768F26FD2554`](https://keybase.io/crewjam)).
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Note: between version 0.2.0 and the current master include changes to the API that will break your existing code a little. This change turned some fields from pointers to a single optional struct into the more correct slice of struct, and to pluralize the field name. For example, `IDPSSODescriptor *IDPSSODescriptor` has become `IDPSSODescriptors []IDPSSODescriptor`. This more accurately reflects the standard. The struct `Metadata` has been renamed to `EntityDescriptor`. In 0.2.0 and before, every struct derived from the standard has the same name as in the standard, *except* for `Metadata` which should always have been called `EntityDescriptor`. In various places `url.URL` is now used where `string` was used <= version 0.1.0. In various places where keys and certificates were modeled as `string` <= version 0.1.0 (what was I thinking?!) they are now modeled as `*rsa.PrivateKey`, `*x509.Certificate`, or `crypto.PrivateKey` as appropriate. Let us assume we have a simple web appliation to protect. We'll modify this application so it uses SAML to authenticate users. ```golang package main import "net/http" ``` Each service provider must have an self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [testshib.org](https://www.testshib.org/), an identity provider designed for testing. ```golang package main import ( ) ``` Next we'll have to register our service provider with the identiy provider to establish trust from the service provider to the IDP. For [testshib.org](https://www.testshib.org/), you can do something like: Naviate to https://www.testshib.org/register.html and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 1. The middleware redirects you to `https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO` 1. testshib.org prompts you for a username and password. 1. testshib.org returns you an HTML document which contains an HTML form setup to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled. 1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 1. 
This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `examples/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package supports signed and encrypted SAML assertions. It does not support signed or encrypted requests. The *RelayState* parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originaly requested link, rather than the root. Unfortunately, *RelayState* is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [TestShib](https://www.testshib.org/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `8EA205C01C425FF195A5E9A43FA0768F26FD2554`](https://keybase.io/crewjam)).
Package oauth1 is a Go implementation of the OAuth1 spec RFC 5849. It allows end-users to authorize a client (consumer) to access protected resources on their behalf (e.g. login) and allows clients to make signed and authorized requests on behalf of a user (e.g. API calls). It takes design cues from golang.org/x/oauth2, providing an http.Client which handles request signing and authorization. Package oauth1 implements the OAuth1 authorization flow and provides an http.Client which can sign and authorize OAuth1 requests. To implement "Login with X", use the https://github.com/dghubble/gologin packages which provide login handlers for OAuth1 and OAuth2 providers. To call the Twitter, Digits, or Tumblr OAuth1 APIs, use the higher level Go API clients. * https://github.com/dghubble/go-twitter * https://github.com/dghubble/go-digits * https://github.com/benfb/go-tumblr Perform the OAuth 1 authorization flow to ask a user to grant an application access to his/her resources via an access token. 1. When a user performs an action (e.g. "Login with X" button calls "/login" route) get an OAuth1 request token (temporary credentials). 2. Obtain authorization from the user by redirecting them to the OAuth1 provider's authorization URL to grant the application access. Receive the callback from the OAuth1 provider in a handler. 3. Acquire the access token (token credentials) which can later be used to make requests on behalf of the user. Check the examples to see this authorization flow in action from the command line, with Twitter PIN-based login and Tumblr login. Use an access Token to make authorized requests on behalf of a user. Check the examples to see Twitter and Tumblr requests in action.
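As a concrete illustration of the three steps above, here is a hedged sketch using this package's Config and Token types. It assumes the Endpoint definition from the oauth1/twitter subpackage, placeholder consumer credentials, and the out-of-band ("oob") callback used for PIN-based login.

```golang
package main

import (
	"fmt"
	"log"

	"github.com/dghubble/oauth1"
	"github.com/dghubble/oauth1/twitter"
)

func main() {
	config := oauth1.Config{
		ConsumerKey:    "consumer-key",    // placeholder
		ConsumerSecret: "consumer-secret", // placeholder
		CallbackURL:    "oob",             // PIN-based (out-of-band) flow
		Endpoint:       twitter.AuthorizeEndpoint,
	}

	// 1. Obtain a request token (temporary credentials).
	requestToken, requestSecret, err := config.RequestToken()
	if err != nil {
		log.Fatal(err)
	}

	// 2. Send the user to the provider's authorization URL and collect the
	//    verifier (the PIN, or the oauth_verifier from the callback).
	authorizationURL, err := config.AuthorizationURL(requestToken)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Open this URL and authorize the app:", authorizationURL.String())
	var verifier string
	fmt.Print("Enter the PIN: ")
	fmt.Scanln(&verifier)

	// 3. Exchange the request token for an access token (token credentials).
	accessToken, accessSecret, err := config.AccessToken(requestToken, requestSecret, verifier)
	if err != nil {
		log.Fatal(err)
	}

	// Use the access token to make signed, authorized requests.
	token := oauth1.NewToken(accessToken, accessSecret)
	httpClient := config.Client(oauth1.NoContext, token)
	resp, err := httpClient.Get("https://api.twitter.com/1.1/account/verify_credentials.json")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("response status:", resp.Status)
}
```

In a web application, steps 1 and 3 would typically live in separate handlers, with the request token and secret stored between the redirect and the provider's callback.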
Package main (doc.go) : This is a CLI tool to execute Google Apps Script (GAS) on a terminal. Do you want to develop GAS on your local PC? Generally, when we develop GAS, we have to log in to Google using our own browser and develop it on the Script Editor. Recently, I have wanted a more convenient local environment for developing GAS. So I created this "ggsrun". The main work is to execute GAS on a local terminal and retrieve the results from Google. 1. Develops GAS using your terminal and the text editor you are accustomed to using. 2. Executes GAS by giving values to your script. 3. Executes GAS made of CoffeeScript. 4. Downloads spreadsheets, documents and presentations while executing GAS, simultaneously. 5. Downloads files from Google Drive and uploads files to Google Drive. 6. Downloads standalone scripts and bound scripts. 7. Downloads all files and folders in a specific folder. 8. Uploads script files and creates projects as standalone scripts and container-bound scripts. 9. Updates projects. 10. Retrieves revision files of Google Docs and retrieves versions of projects. 11. Rearranges scripts in a project. 12. Modifies manifests in a project. 13. Searches files in Google Drive using a search query and regex. 14. Manages permissions of files. 15. Gets Drive information. 16. Since v1.7.0, ggsrun can be used not only with OAuth2 but also with a Service Account. You can see the release page https://github.com/tanaikech/ggsrun/releases ggsrun uses the Execution API, Web Apps and the Drive API on Google. About how to install ggsrun, please check my github repository. https://github.com/tanaikech/ggsrun/ You can read the detailed information there. --------------------------------------------------------------- # How to Execute Google Apps Script Using ggsrun When you have the configuration file `ggsrun.cfg`, you can execute GAS. If you cannot find it, please download `client_secret.json` and run $ ggsrun auth In the case of using the Execution API, $ ggsrun e1 -s sample.gs If you want to execute a function other than the default `main()`, you can use an option like `-f foo`. The command `exe1` can be used to execute a function in a project. $ ggsrun e1 -f foo $ ggsrun e2 -s sample.gs At `e2`, you cannot select an executing function other than the default `main()`. `e1`, `e2` and `-s` mean using the Execution API and the GAS script file name, respectively. The sample code shown here uses the Execution API. At this time, the executing function is `main()`, which is the default, in the script. In the case of using Web Apps, $ ggsrun w -s sample.gs -p password -u [ WebApps URL ] `w` and `-p` mean using Web Apps and the password you set on the server side, respectively. Using `-u`, it supplies the Web Apps URL, like `-u https://script.google.com/macros/s/#####/exec`. --------------------------------------------------------------- Package main (ggsrun.go) : This file includes all commands and options. Package main (handler.go) : Handler for ggsrun Package main (init.go) : These methods are for reading and writing the configuration file (ggsrun.cfg). Package main (materials.go) : Materials for ggsrun. Package main (oauth.go) : Gets an access token using a refresh token, and confirms the condition of the access token. Package main (projectupdater.go) : These methods are for updating a project. Package main (scriptrearrange.go) : These methods are for rearranging scripts in a project. Package main (sender.go) : These methods are for sending GAS scripts to Google Drive.
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. **NOTE:** This package was forked from github.com/crewjam/saml and has had PRs merged into it from that fork to fix bugs that were occurring in production. We suggest using the parent repository if at all possible and do not make any guarantees of the correctness of this package. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Note: versions between 0.2.0 and the current master include changes to the API that will break your existing code a little. This change turned some fields from pointers to a single optional struct into the more correct slice of struct, and pluralized the field names. For example, `IDPSSODescriptor *IDPSSODescriptor` has become `IDPSSODescriptors []IDPSSODescriptor`. This more accurately reflects the standard. The struct `Metadata` has been renamed to `EntityDescriptor`. In 0.2.0 and before, every struct derived from the standard has the same name as in the standard, *except* for `Metadata`, which should always have been called `EntityDescriptor`. In various places `url.URL` is now used where `string` was used <= version 0.1.0. In various places where keys and certificates were modeled as `string` <= version 0.1.0 (what was I thinking?!) they are now modeled as `*rsa.PrivateKey`, `*x509.Certificate`, or `crypto.PrivateKey` as appropriate. Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users. ```golang package main import "net/http" ``` Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML-specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [testshib.org](https://www.testshib.org/), an identity provider designed for testing. ```golang package main import ( ) ``` Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [testshib.org](https://www.testshib.org/), you can do something like: Navigate to https://www.testshib.org/register.html and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 1. The middleware redirects you to `https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO` 1. testshib.org prompts you for a username and password. 1. testshib.org returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`.
The form is automatically submitted if you have JavaScript enabled. 1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `examples/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign-on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package supports signed and encrypted SAML assertions. It does not support signed or encrypted requests. The *RelayState* parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, *RelayState* is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [TestShib](https://www.testshib.org/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `8EA205C01C425FF195A5E9A43FA0768F26FD2554`](https://keybase.io/crewjam)).
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Note: versions between 0.2.0 and the current master include changes to the API that will break your existing code a little. This change turned some fields from pointers to a single optional struct into the more correct slice of struct, and pluralized the field names. For example, `IDPSSODescriptor *IDPSSODescriptor` has become `IDPSSODescriptors []IDPSSODescriptor`. This more accurately reflects the standard. The struct `Metadata` has been renamed to `EntityDescriptor`. In 0.2.0 and before, every struct derived from the standard has the same name as in the standard, *except* for `Metadata`, which should always have been called `EntityDescriptor`. In various places `url.URL` is now used where `string` was used <= version 0.1.0. In various places where keys and certificates were modeled as `string` <= version 0.1.0 (what was I thinking?!) they are now modeled as `*rsa.PrivateKey`, `*x509.Certificate`, or `crypto.PrivateKey` as appropriate. Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users. ```golang package main import "net/http" ``` Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML-specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [testshib.org](https://www.testshib.org/), an identity provider designed for testing. ```golang package main import ( ) ``` Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [testshib.org](https://www.testshib.org/), you can do something like: Navigate to https://www.testshib.org/register.html and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 1. The middleware redirects you to `https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO` 1. testshib.org prompts you for a username and password. 1. testshib.org returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled. 1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 1.
This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `examples/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign-on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package supports signed and encrypted SAML assertions. It does not support signed or encrypted requests. The *RelayState* parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, *RelayState* is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [TestShib](https://www.testshib.org/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `8EA205C01C425FF195A5E9A43FA0768F26FD2554`](https://keybase.io/icggroup)).
This example opens https://github.com/, searches for "git", and then gets the header element which gives the description for Git. Rod uses https://golang.org/pkg/context to handle cancellations for IO-blocking operations; most of the time it's a timeout. The context will be recursively passed to all sub-methods. For example, methods like Page.Context(ctx) will return a clone of the page with the ctx; all the methods of the returned page will use the ctx if they have IO-blocking operations. Page.Timeout or Page.WithCancel is just a shortcut for Page.Context. Of course, Browser or Element works the same way. Shows how we can further customize the browser with the launcher library. Usually you use the launcher lib to set the browser's command-line flags (switches). Doc for flags: https://peter.sh/experiments/chromium-command-line-switches Shows how to change the retry/polling options that are used to query elements. This is useful when you want to customize the element query retry logic. When rod doesn't have a feature that you need, you can easily call the cdp (Chrome DevTools Protocol) interface to achieve it. List of cdp API: https://github.com/TommyLeng/go-rod/tree/master/lib/proto Shows how to disable headless mode and debug. Rod provides a lot of debug options; you can set them with setter methods or use environment variables. Doc for environment variables: https://pkg.go.dev/github.com/TommyLeng/go-rod/lib/defaults We use "Must"-prefixed functions to write example code, but in production you may want to use the no-prefix versions of them. As for why we use "Must" as the prefix, it's similar to https://golang.org/pkg/regexp/#MustCompile Shows how to share a remote object reference between two Eval calls. Shows how to listen for events. Shows how to intercept requests and modify both the request and the response. The entire process of hijacking one request: the --req-> and --res-> are the parts that can be modified. Shows how to handle multiple results of an action, such as when you log in to a page and the result can be success or a wrong password. Example_search shows how to use Search to get an element inside nested iframes or shadow DOMs. It works the same as https://developers.google.com/web/tools/chrome-devtools/dom#search Shows how to update the state of the current page. In this example we enable the network domain. Rod uses the mouse cursor to simulate clicks, so if a button is moving because of an animation, the click may not work as expected. We usually use WaitStable to make sure the target isn't changing anymore. When you want to wait for an ajax request to complete, this example will be useful.
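The snippet below is a minimal sketch of the context/timeout behaviour described above, not one of the package's own examples. It assumes the upstream go-rod import path and the Must-style helpers (rod.New, MustConnect, MustPage, Timeout, MustElement, MustText); adjust the import path if you are using the fork linked in this document.

```golang
package main

import (
	"fmt"
	"time"

	"github.com/go-rod/rod"
)

func main() {
	// Launch a browser (headless by default) and connect to it.
	browser := rod.New().MustConnect()
	defer browser.MustClose()

	page := browser.MustPage("https://github.com")

	// Timeout returns a clone of the page carrying a context with a deadline;
	// every IO-blocking operation on the clone (element queries, waits, ...)
	// is cancelled once the 10-second deadline passes.
	fmt.Println(page.Timeout(10 * time.Second).MustElement("h1").MustText())
}
```

In production code you would typically prefer the error-returning, no-prefix variants over the Must helpers, as noted above.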
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh" (a runnable sketch using these values appears at the end of this section): The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is successful. requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if the HTTP response is successful. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). If you want to cache your MFA logins, use the AuthTypeUsernamePasswordMFA authenticator. To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser.
To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, such that the connection session will never expire. Care should be taken in using this option, as it opens up access for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check fail in closed mode. validateDefaultParameters: true by default. Set to false to disable checks on the existence of, and privileges for, the Database, Schema, Warehouse and Role when setting up the connection. tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic. disableQueryContextCache: disables parsing of the query context returned from the server and resending it to the server as well. Default value is false. clientConfigFile: specifies the location of the client configuration json file. In this file you can configure the Easy Logging feature. disableSamlURLCheck: disables the SAML URL check. Default value is false. All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config. # Connection Config You can also connect to your warehouse using the connection config. The database/sql documentation recommends this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string; each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. For example: If you are using this method, you don't need to pass a driver name to specify the driver type to which you want to connect. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTERATION` in your environment. This is useful if you wish to register multiple versions of the driver. Note: GOSNOWFLAKE_SKIP_REGISTERATION should not be used if sql.Open() is used as the method to connect to the server, as sql.Open will require registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}. The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards.
Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. In order to enable debug logging for the driver, users can use SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, the SFLogger.SetOutput method can do the work. A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example: A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: If you need the query ID for your query, you have to use the raw connection. For queries: ``` ``` For execs: ``` ``` The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context. ``` ``` From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in the context to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types.
The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: The SQL data type. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.

Go Data Types for Scan()
===========================================================================================================================
                     |                         ARROW                    |                      JSON
===========================================================================================================================
SQL Data Type        | Default Go Data Type     | Supported Go Data     | Default Go Data Type   | Supported Go Data
                     | for Scan() interface{}   | Types for Scan()      | for Scan() interface{} | Types for Scan()
===========================================================================================================================
BOOLEAN              | bool                                             | string                 | bool
---------------------------------------------------------------------------------------------------------------------------
VARCHAR              | string                                           | string                 |
---------------------------------------------------------------------------------------------------------------------------
DOUBLE               | float32, float64 [1], [2]                        | string                 | float32, float64
---------------------------------------------------------------------------------------------------------------------------
INTEGER that         | int, int8, int16, int32, int64                   | string                 | int, int8, int16,
fits in int64        | [1], [2]                                         |                        | int32, int64
---------------------------------------------------------------------------------------------------------------------------
INTEGER that doesn't | int, int8, int16, int32, int64, *big.Int         | string                 | error
fit in int64         | [1], [2], [3], [4]                               |                        |
---------------------------------------------------------------------------------------------------------------------------
NUMBER(P, S)         | float32, float64, *big.Float                     | string                 | float32, float64
where S > 0          | [1], [2], [3], [5]                               |                        |
---------------------------------------------------------------------------------------------------------------------------
DATE                 | time.Time                                        | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------------
TIME                 | time.Time                                        | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------------
TIMESTAMP_LTZ        | time.Time                                        | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------------
TIMESTAMP_NTZ        | time.Time                                        | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------------
TIMESTAMP_TZ         | time.Time                                        | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------------
BINARY               | []byte                                           | string                 | []byte
---------------------------------------------------------------------------------------------------------------------------
ARRAY [6]            | string / array                                   | string / array         |
---------------------------------------------------------------------------------------------------------------------------
OBJECT [6]           | string / struct                                  | string / struct        |
---------------------------------------------------------------------------------------------------------------------------
VARIANT              | string                                           | string                 |
---------------------------------------------------------------------------------------------------------------------------
MAP                  | map                                              | map                    |

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.
[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().
[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Int64()/.String()/.Uint64() methods. For an example, see below.
[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Float32()/.String()/.Float64() methods. For an example, see below.
[6] Arrays and objects can be either semistructured or structured; see more info in the section below.

Note: SQL NULL values are converted to Golang nil values, and vice-versa.

Snowflake supports two flavours of "structured data" - semistructured and structured. Semistructured types are variants, objects and arrays without a schema. When data is fetched, it's represented as strings and the client is responsible for its interpretation. Example table definition: The data does not have any corresponding schema, so values in the table may differ. Semistructured variants, objects and arrays are always represented as strings for scanning: When inserting, a marker indicating the correct type must be used, for example: Structured types differ from semistructured types by having a specific schema. In all rows of the table, values must conform to this schema. Example table definition: To retrieve structured objects, follow these steps: 1. Create a struct implementing the sql.Scanner interface, example: a) b) Automatic scan goes through all fields in a struct and reads object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built from the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added: - the first value is always the name of a field in the SQL object - additionally, an `ignore` parameter can be passed to omit this field 2. Use the WithStructuredTypesEnabled context while querying data. 3. Use it in a regular scan: See StructuredObject for all available operations including null support, embedding nested structs, etc. Retrieving an array of simple types works exactly the same as for normal values - using the Scan function. You can use the WithMapValuesNullable and WithArrayValuesNullable contexts to handle null values in, respectively, maps and arrays of simple types in the database. In that case, sql null types will be used: If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners: Retrieving structured maps is very similar to retrieving arrays: To bind structured objects use: 1. Create a type which implements the StructuredObjectWriter interface, example: a) b) 2. Use an instance as a regular bind. 3. If you need to bind a nil value, use special syntax: Binding structured arrays is like binding any other parameter.
The only difference is that if you want to insert an empty array (not nil, but empty), you have to use: The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: You can retrieve data in a columnar format similar to the format a server returns, without transposing it to rows. When working with the Arrow columnar format in the Go driver, ArrowBatch structs are used. These are structs mostly corresponding to data chunks received from the backend. They allow for access to specific arrow.Record structs. An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access them you must use the `WithArrowBatches` context, similar to the following: This returns []*ArrowBatch. ArrowBatch functions: GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size. WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are WithArrowBatchesTimestampOption, WithHigherPrecision and WithArrowBatchesUtf8Validation, described in more detail later. Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not. Limitations: How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) precision. Consequently, Snowflake uses a custom timestamp format in Arrow, which differs by timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options: How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters.
However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the context WithArrowBatchesUtf8Validation is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks. How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones". Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enables the proper handling of null parameters inside function calls, i.e.: Timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which are all mapped to the single Go time.Time type: Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. If the array contains SQL NULL values, use a slice of type []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example, For slices of type []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver. For example, Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code).
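As a rough sketch of the array binding described above (the table and column layout are hypothetical, error handling is trimmed, and the standard gosnowflake import path is assumed):

```
import (
	"database/sql"

	sf "github.com/snowflakedb/gosnowflake"
)

func bulkInsert(db *sql.DB) error {
	intArr := []int{1, 2, 3}
	fltArr := []float64{0.1, 2.34, 5.678}
	boolArr := []bool{true, false, true}
	strArr := []string{"test1", "test2", "test3"}
	// Each Array() wraps a Golang slice so that the driver binds it to the
	// corresponding "?" placeholder as a column of values, inserting one row
	// per slice element in a single batch.
	_, err := db.Exec(
		"INSERT INTO test_table VALUES (?, ?, ?, ?)",
		sf.Array(&intArr), sf.Array(&fltArr), sf.Array(&boolArr), sf.Array(&strArr))
	return err
}
```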
In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. This is required to shift workload from the Snowflake database to the clients for scale. The download is performed asynchronously by goroutines named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note this does not help reduce the memory footprint by itself. Consider the Custom JSON Decoder. Custom JSON Decoder for Parsing Result Set (Experimental) The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. Test cases running on a Travis Ubuntu box show a five times smaller memory footprint while being four times slower. Be cautious when using the option. The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or using a Config structure specifying: The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function.
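To make the Config route mentioned above concrete, here is a hedged sketch (the account and user names are placeholders, and loading and parsing of the PKCS8 key is left out):

```
import (
	"crypto/rsa"
	"database/sql"

	sf "github.com/snowflakedb/gosnowflake"
)

func openWithJWT(privateKey *rsa.PrivateKey) (*sql.DB, error) {
	cfg := &sf.Config{
		Account:       "myaccount", // placeholder account identifier
		User:          "myuser",    // placeholder user
		Authenticator: sf.AuthTypeJwt,
		PrivateKey:    privateKey, // parsed RSA private key
	}
	dsn, err := sf.DSN(cfg)
	if err != nil {
		return nil, err
	}
	return sql.Open("snowflake", dsn)
}
```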
On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can execute the following commands in the shell: Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on the disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`. The driver allows authentication using an external browser. When a connection is created, the driver will open a browser window and ask the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified: External browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with the following ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver. By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below. The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example: When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets: The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors.
For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATE statements had not functioned as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an error is returned. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Golang code can check for errors in the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries. The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.columnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().)
Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: Using PUT: The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file) use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: To download a file into an in-memory stream (rather than a file) use code similar to the code below. Note: GET statements are not supported for multi-statement queries. Specifying a temporary directory for encryption and compression: Putting and getting require compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify it yourself, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example: Using custom configuration for PUT/GET: If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple configuration parameters, including progress bars and compression.
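A hedged sketch of the PUT and GET usage described above (the stage name and local paths are placeholders):

```
import (
	"database/sql"
	"log"
)

func putAndGet(db *sql.DB) {
	// Upload a local file to the user stage, then download the compressed copy.
	rows, err := db.Query("PUT file:///tmp/data/mydata.csv @~/staged OVERWRITE = TRUE")
	if err != nil {
		log.Fatal(err)
	}
	rows.Close()

	rows, err = db.Query("GET @~/staged/mydata.csv.gz file:///tmp/downloaded/")
	if err != nil {
		log.Fatal(err)
	}
	rows.Close()
}
```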
Package powerwall implements an interface for accessing Tesla Powerwall devices via the local-network API. Note: This API is currently undocumented, and all information about it has been obtained through reverse-engineering. Some information here may therefore be incomplete or incorrect at this time, and it is also possible that the API will change without warning. Much of the information used to build this library was obtained from https://github.com/vloschiavo/powerwall2. If you have additional information about any of this, please help by contributing what you know to that project. General usage: First, you will need to create a new powerwall.Client object using the NewClient function. After that has been done, you can use any of the following functions on that object to interact with the Powerwall gateway. Functions for authentication and login: (Note: DoLogin generally does not need to be called explicitly) Functions for configuring the client object: Functions for reading power meter data: Functions for reading network interface info: Functions for getting general info about the gateway and site: Functions for getting info about the system state:
Package mwclient provides functionality for interacting with the MediaWiki API. go-mwclient is intended for users who are already familiar with (or are willing to learn) the MediaWiki API. It is intended to make dealing with the API more convenient, but not to hide it. go-mwclient v1 uses version 2 of the MW JSON API. In the example below, basic usage of go-mwclient is shown. Create a new Client object with the New() constructor, and then you are ready to start making requests to the API. If you wish to make requests to multiple MediaWiki sites, you must create a Client for each of them. go-mwclient offers a few methods for making arbitrary requests to the API: Get, GetRaw, Post, and PostRaw (see the documentation for the methods for details). They all offer the same basic interface: pass a params.Values map (from the cgt.name/pkg/go-mwclient/params package), receive a response and an error. For convenience, go-mwclient offers several methods for making common requests (login, edit, etc.), but these methods are implemented using the same interface. params.Values is similar to (and a fork of) the standard library's net/url.Values. The reason why params.Values is used instead is that url.Values is based on a map[string][]string rather than a map[string]string, because url.Values must support multiple keys with the same name. The literal syntax for a map[string][]string is rather cumbersome because the value is a slice rather than just a string, and the MediaWiki API actually does not use multiple keys when multiple values for the same key are required. Instead, one key is used and the values are separated by pipes (|). params.Values therefore makes it simple to write multi-value values in literals while avoiding the cumbersome []string literals for the most common case, where there is only one value. See the documentation for the params package for more information. Because of the way type identity works in Go, it is possible for callers to pass a plain map[string]string rather than a params.Values. It is only necessary for users to use params.Values directly if they wish to use params.Values's methods. It makes no difference to go-mwclient. If an API call fails, it will return an error. Many things can go wrong during an API call: the network could be down, the API could return an unexpected response (if the API was changed), or perhaps there's an error in your API request. If the error is an API error or warning (and you used the "non-Raw" Get and Post methods), then the error/warning(s) will be parsed and returned in either an APIError or an APIWarnings object, both of which implement the error interface. The "Raw" request methods do not check for API errors or warnings. For more information about API errors and warnings, please see https://www.mediawiki.org/wiki/API:Errors_and_warnings. If maxlag is enabled, it may be that the API has rejected the requests and the configured number of retries (3 by default) has been exhausted unsuccessfully. In that case, the error will be the variable mwclient.ErrAPIBusy. Methods other than the core ones (i.e., other than Get and Post) may return other errors.
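A minimal usage sketch following the interface described above (the wiki URL and user agent are placeholders, and the New signature is assumed to take the API endpoint and a user agent string):

```
package main

import (
	"fmt"

	"cgt.name/pkg/go-mwclient"
	"cgt.name/pkg/go-mwclient/params"
)

func main() {
	// Create one Client per wiki you want to talk to.
	w, err := mwclient.New("https://en.wikipedia.org/w/api.php", "myWikiTool/1.0")
	if err != nil {
		panic(err)
	}

	// Plain map[string]string literals also satisfy params.Values.
	resp, err := w.Get(params.Values{
		"action": "query",
		"meta":   "siteinfo",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp)
}
```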
Package rod is a high-level driver directly based on DevTools Protocol. This example opens https://github.com/, searches for "git", and then gets the header element which gives the description for Git. Rod uses https://golang.org/pkg/context to handle cancellations for IO blocking operations; most of the time it's a timeout. The context will be recursively passed to all sub-methods. For example, methods like Page.Context(ctx) will return a clone of the page with the ctx, and all the methods of the returned page will use the ctx if they have IO blocking operations. Page.Timeout or Page.WithCancel is just a shortcut for Page.Context. Of course, Browser or Element works the same way. Shows how we can further customize the browser with the launcher library. Usually you use the launcher lib to set the browser's command line flags (switches). Doc for flags: https://peter.sh/experiments/chromium-command-line-switches Shows how to change the retry/polling options that are used to query elements. This is useful when you want to customize the element query retry logic. When rod doesn't have a feature that you need, you can easily call the cdp to achieve it. List of cdp API: https://github.com/Humphryyy/rod/tree/master/lib/proto Shows how to disable headless mode and debug. Rod provides a lot of debug options; you can set them with setter methods or use environment variables. Doc for environment variables: https://pkg.go.dev/github.com/Humphryyy/rod/lib/defaults We use "Must" prefixed functions to write example code, but in production you may want to use the no-prefix version of them. About why we use "Must" as the prefix, it's similar to https://golang.org/pkg/regexp/#MustCompile Shows how to share a remote object reference between two Eval calls. Shows how to listen for events. Shows how to intercept requests and modify both the request and the response. The entire process of hijacking one request: The --req-> and --res-> are the parts that can be modified. Shows how to handle multiple results of an action, such as when you log in to a page and the result can be success or wrong password. Example_search shows how to use Search to get an element inside nested iframes or shadow DOMs. It works the same as https://developers.google.com/web/tools/chrome-devtools/dom#search Shows how to update the state of the current page. In this example we enable the network domain. Rod uses the mouse cursor to simulate clicks, so if a button is moving because of animation, the click may not work as expected. We usually use WaitStable to make sure the target isn't changing anymore. When you want to wait for an ajax request to complete, this example will be useful.
Package rod is a high-level driver directly based on DevTools Protocol. This example opens https://github.com/, searches for "git", and then gets the header element which gives the description for Git. Rod uses https://golang.org/pkg/context to handle cancellations for IO blocking operations; most of the time it's a timeout. The context will be recursively passed to all sub-methods. For example, methods like Page.Context(ctx) will return a clone of the page with the ctx, and all the methods of the returned page will use the ctx if they have IO blocking operations. Page.Timeout or Page.WithCancel is just a shortcut for Page.Context. Of course, Browser or Element works the same way. Shows how we can further customize the browser with the launcher library. Usually you use the launcher lib to set the browser's command line flags (switches). Doc for flags: https://peter.sh/experiments/chromium-command-line-switches Shows how to change the retry/polling options that are used to query elements. This is useful when you want to customize the element query retry logic. When rod doesn't have a feature that you need, you can easily call the cdp to achieve it. List of cdp API: https://github.com/Unique-AG/rod/tree/master/lib/proto Shows how to disable headless mode and debug. Rod provides a lot of debug options; you can set them with setter methods or use environment variables. Doc for environment variables: https://pkg.go.dev/github.com/Unique-AG/rod/lib/defaults We use "Must" prefixed functions to write example code, but in production you may want to use the no-prefix version of them. About why we use "Must" as the prefix, it's similar to https://golang.org/pkg/regexp/#MustCompile Shows how to share a remote object reference between two Eval calls. Shows how to listen for events. Shows how to intercept requests and modify both the request and the response. The entire process of hijacking one request: The --req-> and --res-> are the parts that can be modified. Shows how to handle multiple results of an action, such as when you log in to a page and the result can be success or wrong password. Example_search shows how to use Search to get an element inside nested iframes or shadow DOMs. It works the same as https://developers.google.com/web/tools/chrome-devtools/dom#search Shows how to update the state of the current page. In this example we enable the network domain. Rod uses the mouse cursor to simulate clicks, so if a button is moving because of animation, the click may not work as expected. We usually use WaitStable to make sure the target isn't changing anymore. When you want to wait for an ajax request to complete, this example will be useful.
Package oauth1 is a Go implementation of the OAuth1 spec RFC 5849. It allows end-users to authorize a client (consumer) to access protected resources on their behalf (e.g. login) and allows clients to make signed and authorized requests on behalf of a user (e.g. API calls). It takes design cues from golang.org/x/oauth2, providing an http.Client which handles request signing and authorization. Package oauth1 implements the OAuth1 authorization flow and provides an http.Client which can sign and authorize OAuth1 requests. To implement "Login with X", use the https://github.com/dghubble/gologin packages which provide login handlers for OAuth1 and OAuth2 providers. To call the Twitter, Digits, or Tumblr OAuth1 APIs, use the higher level Go API clients. * https://github.com/dghubble/go-twitter * https://github.com/dghubble/go-digits * https://github.com/benfb/go-tumblr Perform the OAuth 1 authorization flow to ask a user to grant an application access to his/her resources via an access token. 1. When a user performs an action (e.g. "Login with X" button calls "/login" route) get an OAuth1 request token (temporary credentials). 2. Obtain authorization from the user by redirecting them to the OAuth1 provider's authorization URL to grant the application access. Receive the callback from the OAuth1 provider in a handler. 3. Acquire the access token (token credentials) which can later be used to make requests on behalf of the user. Check the examples to see this authorization flow in action from the command line, with Twitter PIN-based login and Tumblr login. Use an access Token to make authorized requests on behalf of a user. Check the examples to see Twitter and Tumblr requests in action.
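A small sketch of the last step in use, making signed requests with stored credentials (the keys and the endpoint URL are placeholders):

```
package main

import (
	"fmt"

	"github.com/dghubble/oauth1"
)

func main() {
	config := oauth1.NewConfig("consumerKey", "consumerSecret")
	token := oauth1.NewToken("accessToken", "accessSecret")

	// httpClient signs and authorizes requests on behalf of the user.
	httpClient := config.Client(oauth1.NoContext, token)

	resp, err := httpClient.Get("https://api.twitter.com/1.1/statuses/home_timeline.json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```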
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Note: versions between 0.2.0 and the current master include changes to the API that will break your existing code a little. This change turned some fields from pointers to a single optional struct into the more correct slice of structs, and pluralized the field names. For example, `IDPSSODescriptor *IDPSSODescriptor` has become `IDPSSODescriptors []IDPSSODescriptor`. This more accurately reflects the standard. The struct `Metadata` has been renamed to `EntityDescriptor`. In 0.2.0 and before, every struct derived from the standard has the same name as in the standard, *except* for `Metadata` which should always have been called `EntityDescriptor`. In various places `url.URL` is now used where `string` was used <= version 0.1.0. In various places where keys and certificates were modeled as `string` <= version 0.1.0 (what was I thinking?!) they are now modeled as `*rsa.PrivateKey`, `*x509.Certificate`, or `crypto.PrivateKey` as appropriate. Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import "net/http"
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [testshib.org](https://www.testshib.org/), an identity provider designed for testing.

```golang
package main

import (
)
```

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [testshib.org](https://www.testshib.org/), you can do something like: Navigate to https://www.testshib.org/register.html and upload the file you fetched. Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
1. The middleware redirects you to `https://idp.testshib.org/idp/profile/SAML2/Redirect/SSO`
1. testshib.org prompts you for a username and password.
1. testshib.org returns you an HTML document which contains an HTML form setup to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have javascript enabled.
1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
1.
This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `examples/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package supports signed and encrypted SAML assertions. It does not support signed or encrypted requests. The *RelayState* parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, *RelayState* is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [TestShib](https://www.testshib.org/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `8EA205C01C425FF195A5E9A43FA0768F26FD2554`](https://keybase.io/crewjam)).
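To tie the walkthrough together, here is a hedged sketch of wrapping an endpoint with the samlsp middleware (the option names vary between versions of the package and are assumptions here; certificate files and the IDP metadata URL are placeholders):

```golang
package main

import (
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "Hello!")
}

func main() {
	// Load the self-signed key pair generated for this service provider.
	keyPair, _ := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	keyPair.Leaf, _ = x509.ParseCertificate(keyPair.Certificate[0])

	idpMetadataURL, _ := url.Parse("https://idp.testshib.org/idp/shibboleth")
	rootURL, _ := url.Parse("http://localhost:8000")

	// Option fields are assumptions; consult the samlsp package for the
	// exact set supported by the version you are using.
	samlSP, _ := samlsp.New(samlsp.Options{
		URL:            *rootURL,
		Key:            keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate:    keyPair.Leaf,
		IDPMetadataURL: idpMetadataURL,
	})

	// RequireAccount forces a SAML login before the handler is reached;
	// the /saml/ prefix serves metadata and the assertion consumer service.
	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```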
Package rod is a high-level driver directly based on DevTools Protocol. This example opens https://github.com/, searches for "git", and then gets the header element which gives the description for Git. Rod uses https://golang.org/pkg/context to handle cancellations for IO blocking operations; most of the time it's a timeout. The context will be recursively passed to all sub-methods. For example, methods like Page.Context(ctx) will return a clone of the page with the ctx, and all the methods of the returned page will use the ctx if they have IO blocking operations. Page.Timeout or Page.WithCancel is just a shortcut for Page.Context. Of course, Browser or Element works the same way. Shows how we can further customize the browser with the launcher library. Usually you use the launcher lib to set the browser's command line flags (switches). Doc for flags: https://peter.sh/experiments/chromium-command-line-switches Shows how to change the retry/polling options that are used to query elements. This is useful when you want to customize the element query retry logic. When rod doesn't have a feature that you need, you can easily call the cdp to achieve it. List of cdp API: https://github.com/go-rod/rod/tree/master/lib/proto Shows how to disable headless mode and debug. Rod provides a lot of debug options; you can set them with setter methods or use environment variables. Doc for environment variables: https://pkg.go.dev/github.com/go-rod/rod/lib/defaults We use "Must" prefixed functions to write example code, but in production you may want to use the no-prefix version of them. About why we use "Must" as the prefix, it's similar to https://golang.org/pkg/regexp/#MustCompile Shows how to share a remote object reference between two Eval calls. Shows how to listen for events. Shows how to intercept requests and modify both the request and the response. The entire process of hijacking one request: The --req-> and --res-> are the parts that can be modified. Shows how to handle multiple results of an action, such as when you log in to a page and the result can be success or wrong password. Example_search shows how to use Search to get an element inside nested iframes or shadow DOMs. It works the same as https://developers.google.com/web/tools/chrome-devtools/dom#search Shows how to update the state of the current page. In this example we enable the network domain. Rod uses the mouse cursor to simulate clicks, so if a button is moving because of animation, the click may not work as expected. We usually use WaitStable to make sure the target isn't changing anymore. When you want to wait for an ajax request to complete, this example will be useful.
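For orientation, a minimal sketch of the Must-style flow described above (the target URL and selector are placeholders):

```
package main

import (
	"fmt"

	"github.com/go-rod/rod"
)

func main() {
	// Launch (or connect to) a browser and open a page.
	browser := rod.New().MustConnect()
	defer browser.MustClose()

	page := browser.MustPage("https://example.com")

	// MustElement waits until a matching element appears,
	// then MustText returns its visible text.
	fmt.Println(page.MustElement("h1").MustText())
}
```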
Command goat provides an implementation of a BitTorrent tracker, written in Go. goat can be built using Go 1.1+. It can be downloaded, built, and installed simply by running: In addition, goat depends on a MySQL server for data storage. After creating a database and user for goat, its database schema may be imported from the SQL files located in 'res/'. goat will not run unless MySQL is installed, and a database and user are properly configured for its use. Optionally, goat can be built to use ql (https://github.com/cznic/ql) as its storage backend. This is done by supplying the 'ql' tag in the go get command: A blank ql database file is located under 'res/ql/goat.db', and will be copied to '~/.config/goat/goat.db' on UNIX systems. goat is now able to use ql as its storage backend, for those who do not wish to use an external MySQL backend. goat is capable of listening for torrent traffic in three modes: HTTP, HTTPS, and UDP. HTTP/HTTPS are the recommended methods, and are required in order for goat to serve its API and to allow the use of private tracker passkeys. HTTP is considered the standard mode of operation for goat. HTTP allows gathering a great number of metrics, use of passkeys, use of a client whitelist, and access to goat's RESTful API, when configured. For most trackers, this will be the only listener which is necessary in order for goat to function properly. The HTTPS listener provides a method to encrypt traffic to the tracker, but must be used with caution. Unless the SSL certificate in use is signed by a proper certificate authority, it will distress most clients, and they may outright refuse to announce to it. If you are in possession of a certificate signed by a certificate authority, this mode may be more ideal, as it provides added security for your clients. The UDP listener is the most unusual method of the three, and should only be used for public trackers. The BitTorrent UDP tracker protocol specifies a very specific packet format, meaning that additional information or parameters cannot be packed into a UDP datagram in a standard way. The UDP tracker may be the fastest and least bandwidth-intensive, but as stated, should only be used for public trackers. A new feature added to goat in order to allow better interoperability with many languages is a RESTful API, which is served using the HTTP or HTTPS listeners. This API enables easy retrieval of tracker statistics, while allowing goat to run as a completely independent process. It should be noted that the API is only enabled when configured, and when an HTTP or HTTPS listener is enabled. Without a transport mechanism, the API will be inaccessible. The API features several modes of authentication, including HTTP Basic for login and HMAC-SHA1 for other calls. Upon logging into the API using HTTP Basic with a username and password pair, an API public key and secret will be generated. The public key is used as the username for HTTP Basic authentication, and the secret key is used to calculate an HMAC-SHA1 signature for the password. As part of API signature generation, a random nonce value must be generated and added to the request. It is added to the password portion of the HTTP Basic request, and also to the string which is used to create the signature. Nonce values must be changed on every request, or the request will fail.
The current pseudocode format of the HMAC-SHA1 signature is as follows: The proper format for an HTTP Basic request is as follows: When the public key, nonce, and API signature are sent via HTTP Basic, the server will verify the signature. Successful authentication will allow access to the API. This list contains all API calls currently recognized by goat. Each call must be authenticated using the aforementioned methods. Request an API public key and secret key for this user. The public key, user ID, and secret key are used to authenticate further API calls. The expire time indicates when this key is set to expire. Further API calls will extend the expiration time. Retrieve a list of all files tracked by goat. Some extended attributes are not added, to reduce strain on the database and to provide a more general overview. Retrieve extended attributes about a specific file with matching ID. This provides counts for the number of completions, seeders, and leechers, and a list of fileUser relationships associated with a given file. Retrieve a variety of metrics about the current status of goat, including its PID, hostname, memory usage, number of HTTP/UDP hits, etc. Create a user with the specified username, password, and torrent limit. Retrieve a list of all users registered to goat, including their ID, torrent limit, and username. Retrieve information about a single user with matching ID, including their ID, torrent limit, and username. goat is configured using a JSON file, which will be created under '~/.config/goat/config.json' on UNIX systems. Here is an example configuration, describing the settings available to the user.
Package delay provides a way to execute code outside the scope of a user request by using the taskqueue API. To declare a function that may be executed later, call Func in a top-level assignment context, passing it an arbitrary string key and a function whose first argument is of type appengine.Context. It is also possible to use a function literal. To call a function, invoke its Call method. A function may be called any number of times. If the function has any return arguments, and the last one is of type error, the function may return a non-nil error to signal that the function should be retried. The arguments to functions may be of any type that is encodable by the gob package. If an argument is of interface type, it is the client's responsibility to register with the gob package whatever concrete type may be passed for that argument; see http://golang.org/pkg/gob/#Register for details. Any errors during initialization or execution of a function will be logged to the application logs. Error logs that occur during initialization will be associated with the request that invoked the Call method. The state of a function invocation that has not yet successfully executed is preserved by combining the file name in which it is declared with the string key that was passed to the Func function. Updating an app with pending function invocations is safe as long as the relevant functions have the (filename, key) combination preserved. The delay package uses the Task Queue API to create tasks that call the reserved application path "/_ah/queue/go/delay". This path must not be marked as "login: required" in app.yaml; it must be marked as "login: admin" or have no access restriction.
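To make the above concrete, a small sketch using the classic App Engine delay package (the key, argument, and task body are illustrative; the import paths follow the classic App Engine SDK):

```
package mypackage

import (
	"appengine"
	"appengine/delay"
)

// Declared at init time in a top-level assignment, keyed by "send-report".
// The (filename, key) combination identifies pending invocations across deploys.
var sendReport = delay.Func("send-report", func(c appengine.Context, userID string) error {
	// Do the slow work here; returning a non-nil error causes the task to be retried.
	c.Infof("generating report for %s", userID)
	return nil
})

func enqueue(c appengine.Context) {
	// Enqueue a task that will run sendReport later with the given argument.
	// Arguments must be gob-encodable.
	sendReport.Call(c, "user-123")
}
```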