Package testify is a set of packages that provide many tools for testifying that your code will behave as you intend. testify contains the following packages: The assert package provides a comprehensive set of assertion functions that tie in to the Go testing system. The http package contains tools to make it easier to test http activity using the Go testing system. The mock package provides a system by which it is possible to mock your objects and verify calls are happening as expected. The suite package provides a basic structure for using structs as testing suites, and methods on those structs as tests. It includes setup/teardown functionality in the way of interfaces.
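For example, a minimal test using the assert package might look like the following sketch (package and test names are illustrative):

```go
package yours_test

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestSomething(t *testing.T) {
	// assert functions take *testing.T as their first argument and report
	// failures through the standard Go testing system.
	assert.Equal(t, 123, 123, "they should be equal")
	assert.NotNil(t, "hello", "value should not be nil")
}
```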
Package test provides a modern generic testing assertions library.
Package bigquery provides a client for the BigQuery service. Note: This package is in beta. Some backwards-incompatible changes may occur. The following assumes a basic familiarity with BigQuery concepts. See https://cloud.google.com/bigquery/docs. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client: To query existing tables, create a Query and call its Read method: Then iterate through the resulting rows. You can store a row using anything that implements the ValueLoader interface, or with a slice or map of bigquery.Value. A slice is simplest: You can also use a struct whose exported fields match the query: You can also start the query running and get the results later. Create the query as above, but call Run instead of Read. This returns a Job, which represents an asynchronous operation. Get the job's ID, a printable string. You can save this string to retrieve the results at a later time, even in another process. To retrieve the job's results from the ID, first look up the Job: Use the Job.Read method to obtain an iterator, and loop over the rows. Query.Read is just a convenience method that combines Query.Run and Job.Read. You can refer to datasets in the client's project with the Dataset method, and in other projects with the DatasetInProject method: These methods create references to datasets, not the datasets themselves. You can have a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to create a dataset from a reference: You can refer to tables with Dataset.Table. Like bigquery.Dataset, bigquery.Table is a reference to an object in BigQuery that may or may not exist. You can create, delete and update the metadata of tables with methods on Table. For instance, you could create a temporary table with: We'll see how to create a table with a schema in the next section. There are two ways to construct schemas with this package. You can build a schema by hand, like so: Or you can infer the schema from a struct: Struct inference supports tags like those of the encoding/json package, so you can change names, ignore fields, or mark a field as nullable (non-required). Fields declared as one of the Null types (NullInt64, NullFloat64, NullString, NullBool, NullTimestamp, NullDate, NullTime and NullDateTime) are automatically inferred as nullable, so the "nullable" tag is only needed for []byte, *big.Rat and pointer-to-struct fields. Having constructed a schema, you can create a table with it like so: You can copy one or more tables to another table. Begin by constructing a Copier describing the copy. Then set any desired copy options, and finally call Run to get a Job: You can chain the call to Run if you don't want to set options: You can wait for your job to complete: Job.Wait polls with exponential backoff. You can also poll yourself, if you wish: There are two ways to populate a table with this package: load the data from a Google Cloud Storage object, or upload rows directly from your program. For loading, first create a GCSReference, configuring it if desired. Then make a Loader, optionally configure it as well, and call its Run method. To upload, first define a type that implements the ValueSaver interface, which has a single method named Save. Then create an Uploader, and call its Put method with a slice of values. You can also upload a struct that doesn't implement ValueSaver. 
Use the StructSaver type to specify the schema and insert ID by hand, or just supply the struct or struct pointer directly and the schema will be inferred: If you've been following so far, extracting data from a BigQuery table into a Google Cloud Storage object will feel familiar. First create an Extractor, then optionally configure it, and lastly call its Run method. Errors returned by this client are often of the type [`googleapi.Error`](https://godoc.org/google.golang.org/api/googleapi#Error). These errors can be introspected for more information by type asserting to the richer `googleapi.Error` type. For example:
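The sketch below ties several of these steps together: creating a client, running a query, iterating over rows as []bigquery.Value, and type asserting a returned error to googleapi.Error. The project ID and query are placeholders, and minor details may vary between package versions.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/googleapi"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()

	// Create a client for a (placeholder) project.
	client, err := bigquery.NewClient(ctx, "your-project-id")
	if err != nil {
		log.Fatal(err)
	}

	// Query an existing table and read the results.
	q := client.Query("SELECT year, name FROM `bigquery-public-data.usa_names.usa_1910_2013` LIMIT 10")
	it, err := q.Read(ctx)
	if err != nil {
		// Introspect the error for more detail when possible.
		if e, ok := err.(*googleapi.Error); ok {
			log.Fatalf("bigquery error %d: %s", e.Code, e.Message)
		}
		log.Fatal(err)
	}

	// A []bigquery.Value slice is the simplest way to receive a row.
	for {
		var row []bigquery.Value
		err := it.Next(&row)
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(row)
	}
}
```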
Package configtesting is used for asserting interface compatibility with github.com/aws/aws-sdk-go-v2/config interfaces.
Package configtesting is used for config source assertions.
Package assert provides thin wrappers around the testing package. It supports two equivalent styles: calling the package-level functions directly, or using an Assertion object. The two styles are completely interchangeable; choose whichever suits your needs.
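A minimal sketch of the two styles might look like this; the import path and exact signatures are assumptions based on the description above.

```go
package mypkg

import (
	"testing"

	"github.com/issue9/assert" // import path assumed
)

func TestExample(t *testing.T) {
	// Style 1: call the package-level functions directly.
	assert.True(t, 1+1 == 2)

	// Style 2: create an Assertion object bound to t and call its methods.
	a := assert.New(t)
	a.True(1+1 == 2)
}
```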
Package goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost. goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby. To create a worker, write a function matching the worker signature and register it with goworker.Register. A simple worker that prints its arguments is shown after this entry. To create workers that share a database pool or other resources, use a closure to share variables. goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types. For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue: will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to: After building your workers, you will have an executable that you can run which will automatically poll a Redis server and call your workers as jobs arrive. There are several flags which control the operation of the goworker client. -queues="comma,delimited,queues" — This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). Queues are processed in the order they are specified. If you have multiple queues you can assign them weights. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'. -interval=5.0 — Specifies the wait period between polling if no job was in the queue the last time one was requested. -concurrency=25 — Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources. -connections=2 — Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers who limit plans on maxclients. -uri=redis://localhost:6379/ — Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The URI may also be set by the environment variable named by $REDIS_PROVIDER, or by $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database. -namespace=resque: — Specifies the namespace from which goworker retrieves jobs and stores stats on workers. -exit-on-complete=false — Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations. -use-number=false — Uses json.Number when decoding numbers in the job payloads. This will avoid issues that occur when goworker and the json package decode large numbers as floats, which then get encoded in scientific notation, losing precision. This will default to true soon. You can also configure your own flags for use within your workers.
Be sure to set them before calling goworker.Main(). It is okay to call flags.Parse() before calling goworker.Main() if you need to do additional processing on your flags. To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling. There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished. Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant to loss of the job in the event of failure. If the process is killed with a KILL or by a system failure, there may be one job that is currently in the poller's buffer that will be lost without any representation in either the queue or the worker variable. If you are running goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop and ten seconds later sends a KILL to force the process to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database under the worker key as a JSON object with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished, if the process is killed before goworker can flush the update to Redis.
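A minimal worker built along these lines might look like the following sketch. The import path github.com/benmanns/goworker is assumed; recent releases expose goworker.Work() as the entry point, while the description above refers to goworker.Main().

```go
package main

import (
	"fmt"

	"github.com/benmanns/goworker"
)

// myFunc matches the goworker worker signature: it receives the queue name
// and the job arguments as a slice of interfaces.
func myFunc(queue string, args ...interface{}) error {
	fmt.Printf("From %s, %v\n", queue, args)
	return nil
}

func init() {
	// Register the worker under the Resque class name it should handle.
	goworker.Register("MyClass", myFunc)
}

func main() {
	// Poll Redis and dispatch jobs to registered workers.
	if err := goworker.Work(); err != nil {
		fmt.Println("Error:", err)
	}
}
```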
Package clingy is a library to create command line interfaces. The API is a design experiment that prioritizes discoverability of the code that uses it over anything else. There are two main ways that this can be seen. First, commands are defined in the same call that is used to execute them. In this way, you cannot define the tree of commands in a bunch of different files in the package, mutating state during init() functions, or otherwise. For example, consider the following execution: Because all of the commands are defined in that single spot, we know exactly which commands are available and where they are defined. Secondly, flags and arguments for commands are defined at the time they are loaded, and happen as part of a method call on the command implementation itself. This provides incentives to store the state on the command object itself. For example: This causes the command implementation to be more discoverable because you have a single location that contains all of the state the command will inspect. This also aids in unit testing, because you can construct values directly and execute them, skipping the Setup step and avoiding having to make mocks or fakes to inject the parameters. In service of this goal, some tradeoffs are made. Most notably, the return type for arguments and flags is `interface{}` and type casts are required. This was found to be acceptable because in typical usage, the argument will be directly assigned into a typed struct field, so the compiler complains if an incorrect type assertion is used. This changes a dynamic failure to a minor redundancy. Additionally, the code with the type assertions is executed right away whenever a command is run, so any testing will surface any bugs. Enjoy!
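The following is a rough sketch of the shape this takes, based on the description above; the exact option helpers, return values, and method signatures are assumptions rather than a verbatim use of the API.

```go
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/zeebo/clingy"
)

// cmdCopy keeps all of its inputs as struct fields, so Execute has a single
// obvious place to look for state and tests can construct the value directly.
type cmdCopy struct {
	note   string
	source string
	dest   string
}

// Setup declares flags and arguments; the interface{} results are type
// asserted straight into the typed fields.
func (c *cmdCopy) Setup(params clingy.Parameters) {
	c.note = params.Flag("note", "note to print after copying", "").(string)
	c.source = params.Arg("source", "path to copy from").(string)
	c.dest = params.Arg("dest", "path to copy to").(string)
}

func (c *cmdCopy) Execute(ctx context.Context) error {
	fmt.Printf("copy %q -> %q (note=%q)\n", c.source, c.dest, c.note)
	return nil
}

func main() {
	// The command tree is defined in the same call that executes it.
	ok, err := clingy.Environment{
		Name: "example",
		Args: os.Args[1:],
	}.Run(context.Background(), func(cmds clingy.Commands) {
		cmds.New("copy", "copy a file", new(cmdCopy))
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	_ = ok
}
```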
Package anaconda provides structs and functions for accessing version 1.1 of the Twitter API. Successful API queries return native Go structs that can be used immediately, with no need for type assertions. If you already have the access token (and secret) for your user (Twitter provides this for your own account on the developer portal), creating the client is simple: Executing queries on an authenticated TwitterApi struct is simple. Certain endpoints allow separate optional parameters; if desired, these can be passed as the final parameter. Anaconda implements most of the endpoints defined in the Twitter API documentation: https://dev.twitter.com/docs/api/1.1. For clarity, in most cases, the function name is simply the name of the HTTP method and the endpoint (e.g., the endpoint `GET /friendships/incoming` is provided by the function `GetFriendshipsIncoming`). In a few cases, a shortened form has been chosen to make life easier (for example, retweeting is simply the function `Retweet`). More detailed information about the behavior of each particular endpoint can be found at the official Twitter API documentation.
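For instance, creating a client and issuing a search query might look like this sketch; the credentials are placeholders, and older releases configure the consumer key and secret through package-level setters and anaconda.NewTwitterApi instead.

```go
package main

import (
	"fmt"
	"net/url"

	"github.com/ChimeraCoder/anaconda"
)

func main() {
	// Create an authenticated client with your own credentials.
	api := anaconda.NewTwitterApiWithCredentials(
		"access-token", "access-token-secret",
		"consumer-key", "consumer-secret",
	)

	// Optional parameters are passed as url.Values in the final argument.
	v := url.Values{}
	v.Set("count", "30")

	searchResult, err := api.GetSearch("golang", v)
	if err != nil {
		fmt.Println("search failed:", err)
		return
	}
	for _, tweet := range searchResult.Statuses {
		fmt.Println(tweet.Text)
	}
}
```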
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them out to stdout. More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a white list of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses, and potential errors when (un)marshalling requests and responses.
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The possible values for the "OCI_GO_SDK_DEBUG" variable are: 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above, then the default logging level is "info". If the environment variable is not present, then no logging messages are emitted. The default destination for logging is stderr; if you want to output logs to a file, you can set this via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are: 1. "file" or "f" saves all logging output to a file 2. "combine" or "c" sends all logging output to both stderr and a file. You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there is a transient network issue, for example. This can be accomplished by using RequestMetadata.RetryPolicy (request-level configuration); alternatively, a global (all services) or client-level RetryPolicy configuration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the Seeker interface. The retry behavior precedence (highest to lowest) is defined as follows: The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). This default retry policy can be created using: You can set this retry policy for a single request: or for all requests made by a client: or for all requests made by all clients: or set the default retry via an environment variable, which is a global switch for all services: Some services enable retries for operations by default; this can be overridden using any of the alternatives mentioned above. Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail.
If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. Example: This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will: 1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made 2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer 3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm). If there were no eventually consistent actions within the recent past, then this special retry strategy is not used. If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option: The NewRetryPolicy function also creates a retry policy without eventual consistency. A circuit breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles. It also enables an application to detect whether the fault has been resolved; if the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping calls in a circuit breaker object that monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips and all further calls to the circuit breaker return with an error; this also saves the service from being overwhelmed with network calls in case of an outage. The Go SDK enables the circuit breaker with a default configuration; if you don't want to enable the solution, you can disable the functionality before your application runs. The Go SDK also supports customizing the circuit breaker with specific configurations. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, then you can set this up in the following ways: 1. Configuring the environment variables as described here https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. In order to modify the underlying Transport struct in HttpClient, you can do something similar to the following (sample code for the audit service client): The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts.
The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher level multipart uploads are implemented using the UploadManager, which will: split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response. If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com. oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see (https://github.com/oracle/oci-go-sdk/blob/master/common/common.go). Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable: Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to this link: https://github.com/oracle/oci-go-sdk#help
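As a sketch of the getting-started flow described above, the following builds an identity client from the default configuration provider and lists availability domains; the tenancy OCID is used as the compartment for simplicity, and exact import paths may carry version suffixes in newer SDK releases.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/identity"
)

func main() {
	// Build an identity client from the default configuration provider
	// (which reads ~/.oci/config by default).
	provider := common.DefaultConfigProvider()
	client, err := identity.NewIdentityClientWithConfigurationProvider(provider)
	if err != nil {
		log.Fatal(err)
	}

	// Availability domains are listed per compartment; the tenancy OCID is
	// used here as the compartment.
	tenancyID, err := provider.TenancyOCID()
	if err != nil {
		log.Fatal(err)
	}

	// The SDK uses pointers for primitive fields, so common.String turns the
	// value into a *string.
	req := identity.ListAvailabilityDomainsRequest{
		CompartmentId: common.String(tenancyID),
	}

	resp, err := client.ListAvailabilityDomains(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	for _, ad := range resp.Items {
		fmt.Println(*ad.Name)
	}
}
```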
Package scope provides context objects for the sharing of scope across goroutines. This context object provides a number of utilities for coordinating concurrent work, in addition to sharing data. Contexts are nodes in a tree. A context is born either by forking from an existing context (becoming a child of that node in the tree), or a new tree is started by calling New(). A context can be terminated at any time. This is usually done by calling the Terminate() or Cancel() method. Termination is associated with an error value (which may be nil if one wants to indicate success). When a node in the tree is terminated, that termination is propagated down to all its unterminated descendants. For example, here is how one might fan out a search: Contexts can be terminated at any time. You can even fork a context with a deadline: There is a termination channel, Done(), available if you want to interrupt your work when a context is terminated: You can also spot-check for termination with a call to the Alive() method. Contexts provide a data store for key value pairs, shared across the entire scope. When a context is forked, the child context shares the same data map as its parent. This data store maps blank interfaces to blank interfaces, in the exact same manner as http://www.gorillatoolkit.org/pkg/context. This means you must use type assertions at runtime. To keep this reasonably safe, it's recommended to define and use your own unexported type for all keys maintained by your package. The shared data store is managed in a copy-on-write fashion as the tree branches. When a context is forked, the child maintains a pointer to the parent's data map. When Set() is called on the child, the original map is duplicated for the child, and the update is only applied to the child's map. Each context provides a WaitGroup() method, which returns the same pointer across the entire tree. You can use this to spin off background tasks and then wait for them before you completely shut down the scope. Contexts provide an optional feature to facilitate unit testing, called breakpoints. A breakpoint is identified by a list of hashable values. Production code can pass this list to the Check() method to synchronize and allow for an error to be injected. Test code can register a breakpoint with Breakpoint(), which returns a channel of errors. The test can receive from this channel to synchronize with the entry of the corresponding Check() call, and then write back an error to synchronize with the exit.
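A rough sketch of the fan-out pattern described above follows. Only method names mentioned in this description (New, Terminate, Done, Alive, WaitGroup) are used; their exact signatures, the import path, and the searchBackend helper are all assumptions for illustration.

```go
package main

import (
	"fmt"

	// Placeholder import path; substitute the actual location of the scope package.
	"example.com/placeholder/scope"
)

// searchBackend is a hypothetical helper standing in for real work.
func searchBackend(backend, query string) string {
	return backend + " result for " + query
}

// fanOutSearch queries several backends in parallel and terminates the scope
// once the first answer arrives.
func fanOutSearch(query string, backends []string) string {
	ctx := scope.New() // start a new context tree

	results := make(chan string, len(backends))
	for _, b := range backends {
		ctx.WaitGroup().Add(1)
		go func(backend string) {
			defer ctx.WaitGroup().Done()
			if !ctx.Alive() {
				return // the scope was already terminated
			}
			select {
			case results <- searchBackend(backend, query):
				ctx.Terminate(nil) // first answer wins; terminate the rest
			case <-ctx.Done():
			}
		}(b)
	}

	ctx.WaitGroup().Wait()
	return <-results
}

func main() {
	fmt.Println(fanOutSearch("golang", []string{"a", "b", "c"}))
}
```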
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details. The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be in a future release. In particular we have deprecated the following fields in _samlsp.Options_: - `Logger` - This was used to emit errors while validating, which is an anti-pattern. - `IDPMetadataURL` - Instead use `FetchMetadata()` - `HTTPClient` - Instead pass httpClient to FetchMetadata - `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users. Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
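To make that setup concrete, a protected "hello" service wired up against samltest.id could look roughly like the following sketch; the certificate file names are placeholders, and the URLs are the ones used in the walkthrough below.

```go
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "displayName"))
}

func main() {
	// Load the service provider's self-signed key pair.
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	// Fetch the IDP metadata from samltest.id at startup.
	idpMetadataURL, err := url.Parse("https://samltest.id/saml/idp")
	if err != nil {
		panic(err)
	}
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, err := url.Parse("http://localhost:8000")
	if err != nil {
		panic(err)
	}

	samlSP, err := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})
	if err != nil {
		panic(err)
	}

	// RequireAccount wraps the protected handler; /saml/ serves the SAML
	// endpoints (metadata, ACS).
	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```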
Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: Navigate to https://samltest.id/upload.php and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 2. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO` 3. samltest.id prompts you for a username and password. 4. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled. 5. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 6. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests. The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
Package standalone provides standalone functions useful for working with the Decred blockchain consensus rules. The primary goal of offering these functions via a separate module is to reduce the required dependencies to a minimum as compared to the blockchain module. This makes it ideal for applications such as lightweight clients that need to ensure basic security properties hold and calculate appropriate vote subsidies, as well as block explorers. For example, some things an SPV wallet needs to prove are that the block headers all connect together, that they satisfy the proof of work requirements, and that a given transaction tree is valid for a given header. The provided functions fall into the following categories: Errors returned by this package are of type standalone.RuleError. This allows the caller to differentiate between errors further up the call stack through type assertions. In addition, callers can programmatically determine the specific rule violation by examining the ErrorCode field of the type asserted standalone.RuleError.
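For instance, a caller might distinguish rule violations from other failures as in the sketch below. The import paths and the ErrorCode-based RuleError shown here follow the API described above; newer module versions may differ, and the placeholder header values exist only so the file compiles on its own.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/decred/dcrd/blockchain/standalone"
	"github.com/decred/dcrd/wire"
)

// checkHeaderPoW shows how callers can distinguish consensus rule violations
// from other failures when using this package.
func checkHeaderPoW(header *wire.BlockHeader, powLimit *big.Int) error {
	err := standalone.CheckProofOfWork(header, powLimit)
	if err == nil {
		return nil
	}

	// Rule violations are reported as standalone.RuleError; the ErrorCode
	// field identifies the specific rule that failed.
	if rerr, ok := err.(standalone.RuleError); ok {
		return fmt.Errorf("consensus rule violated (%v): %v", rerr.ErrorCode, rerr)
	}

	// Anything else is an unexpected error.
	return err
}

func main() {
	// Placeholder header and proof-of-work limit purely for illustration.
	fmt.Println(checkHeaderPoW(&wire.BlockHeader{}, new(big.Int).Lsh(big.NewInt(1), 255)))
}
```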
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts. Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories, such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops. The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove that the spender is authorized to perform the transaction. One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decreds. Errors returned by this package are of type txscript.Error. This allows the caller to programmatically determine the specific error by examining the ErrorCode field of the type asserted txscript.Error while still providing rich error messages with contextual information. A convenience function named IsErrorCode is also provided to allow callers to easily check for a specific error code. See ErrorCode in the package documentation for a full list.
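A sketch of that error-handling pattern follows; the import path may carry a version suffix in newer releases, and ErrEvalFalse is used purely as an illustrative error code.

```go
package main

import (
	"fmt"

	"github.com/decred/dcrd/txscript"
)

// describeScriptError shows how a caller might react to a script execution
// failure; execErr would typically come from running a script engine.
func describeScriptError(execErr error) string {
	if execErr == nil {
		return "script succeeded"
	}

	// IsErrorCode is the convenience helper for checking one specific code.
	if txscript.IsErrorCode(execErr, txscript.ErrEvalFalse) {
		return "script evaluated to false"
	}

	// Otherwise, type assert to txscript.Error and examine ErrorCode directly
	// while keeping the rich, contextual message in Description.
	if serr, ok := execErr.(txscript.Error); ok {
		return fmt.Sprintf("script error %v: %s", serr.ErrorCode, serr.Description)
	}

	return "unexpected error: " + execErr.Error()
}

func main() {
	fmt.Println(describeScriptError(nil))
}
```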
Package blockchain implements Decred block handling and chain selection rules. The Decred block handling and chain selection rules are an integral, and quite likely the most important, part of decred. Unfortunately, at the time of this writing, these rules are also largely undocumented and had to be ascertained from the bitcoind source code. At its core, Decred is a distributed consensus of which blocks are valid and which ones will comprise the main block chain (public ledger) that ultimately determines accepted transactions, so it is extremely important that fully validating nodes agree on all rules. At a high level, this package provides support for inserting new blocks into the block chain according to the aforementioned rules. It includes functionality such as rejecting duplicate blocks, ensuring blocks and transactions follow all rules, orphan handling, and best chain selection along with reorganization. Since this package does not deal with other Decred specifics such as network communication or wallets, it provides a notification system which gives the caller a high level of flexibility in how they want to react to certain events such as orphan blocks which need their parents requested and newly connected main chain blocks which might result in wallet updates. Before a block is allowed into the block chain, it must go through an intensive series of validation rules. The following list serves as a general outline of those rules to provide some intuition into what is going on under the hood, but is by no means exhaustive: Errors returned by this package are either the raw errors provided by underlying calls or of type blockchain.RuleError. This allows the caller to differentiate between unexpected errors, such as database errors, versus errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by examining the ErrorCode field of the type asserted blockchain.RuleError. This package includes spec changes outlined by the following BIPs:
Package assert provides some simple wrappers around the testing package.
Package resumable registers resumable versions of hash functions. Resumable varieties of hash functions are available via the standard crypto package. Support can be checked by type assertion against the resumable.Hash interface. While one can use these sub-packages directly, it makes more sense to register them using side-effect imports: This will make the resumable hashes available to the application through the standard crypto package. For example, if a new sha256 is required, one should use the following: Such a feature allows one to control the inclusion of resumable hash support in a single file. Applications that require the resumable hash implementation can type switch to detect support, while other parts of the application can be completely oblivious to the presence of the alternative hash functions. Also note that the implementations available in this package are completely untouched from their Go counterparts in the standard library. Only an extra file is added to each package to implement the extra resumable hash functions.
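A short sketch of the registration pattern described above; the import paths are assumptions about where the resumable packages live:

```golang
package main

import (
	"crypto"
	"fmt"

	"github.com/stevvooe/resumable"          // assumed import path for the Hash interface
	_ "github.com/stevvooe/resumable/sha256" // side-effect import registers the resumable sha256
)

func main() {
	// Because of the side-effect import above, the standard crypto package
	// now hands back the resumable implementation.
	h := crypto.SHA256.New()

	// Detect resumable support with a type assertion; code that does not
	// need it can keep treating h as a plain hash.Hash.
	if _, ok := h.(resumable.Hash); ok {
		fmt.Println("sha256 supports pausing and resuming")
	} else {
		fmt.Println("plain sha256 only")
	}
}
```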
Package tf is a microframework for parametrized testing of functions. I wrote this because I was tired of creating a []struct{} fixture for most of my tests. I knew there had to be an easier and more reliable way. It offers a simple and intuitive syntax for tests by wrapping the function: Assertions are performed with github.com/stretchr/testify/assert. If an assertion fails it will point to the correct line so you do not need to explicitly label tests. The above test will output (in verbose mode):
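As a rough illustration of the wrapping style described above, here is a sketch; the tf.Function helper, its Returns method, and the import path are assumptions based on the package's examples:

```golang
package example

import (
	"testing"

	"github.com/elliotchance/tf" // assumed import path
)

// Sum is the function under test.
func Sum(a, b int) int { return a + b }

func TestSum(t *testing.T) {
	// Wrap the function once, then express each case as a call plus expectation.
	Sum := tf.Function(t, Sum)
	Sum(1, 2).Returns(3)
	Sum(0, 0).Returns(0)
	Sum(-1, 1).Returns(0)
}
```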
Package is provides a lightweight extension to the standard library's testing capabilities. Comments on the assertion lines are used to add a description. The following failing test: Will output: The following code shows a range of useful ways you can use the helper methods:
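A brief hedged sketch of the comment-as-description style; the Greet function is hypothetical, and is.New, is.Equal, and is.NoErr are assumed helpers from the package:

```golang
package example

import (
	"fmt"
	"testing"

	"github.com/matryer/is" // assumed import path
)

// Greet is a hypothetical function under test.
func Greet(name string) (string, error) {
	return fmt.Sprintf("Hello, %s", name), nil
}

func TestGreet(t *testing.T) {
	is := is.New(t)

	got, err := Greet("world")
	is.NoErr(err)                 // greeting should not fail
	is.Equal(got, "Hello, world") // greeting should include the name
}
```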
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions and to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions.

The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration, then uses the identityClient to list availability domains and print them to stdout. More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example

Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example:

The SDK exposes functionality that allows the user to customize any HTTP request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, so any changes made to the request will be properly signed and submitted to the service.

The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the clients exposed by the SDK, which allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed; adding an arbitrary header can therefore result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm

Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces and provides structs that implement those interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies that interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63

When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data, the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go

The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw HTTP requests, responses, and potential errors when (un)marshalling requests and responses.
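As a hedged getting-started sketch covering the identity example and the pointer helpers mentioned above (the compartment OCID is a placeholder, and the exact request field names are assumptions based on the SDK's identity package):

```golang
package main

import (
	"context"
	"fmt"

	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/identity"
)

func main() {
	// Build an identity client from the default configuration (~/.oci/config).
	client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
	if err != nil {
		panic(err)
	}

	// Pointer helpers such as common.String make it easy to populate
	// optional, pointer-typed request fields.
	request := identity.ListAvailabilityDomainsRequest{
		CompartmentId: common.String("ocid1.tenancy.oc1..exampleuniqueID"), // placeholder OCID
	}

	response, err := client.ListAvailabilityDomains(context.Background(), request)
	if err != nil {
		panic(err)
	}
	for _, ad := range response.Items {
		fmt.Println(*ad.Name)
	}
}
```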
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The following are the possible values for the "OCI_GO_SDK_DEBUG" variable:

1. "info" or "i" enables all info logging messages
2. "debug" or "d" enables all debug and info logging messages
3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages
4. "null" turns all logging messages off

If the value of the environment variable does not match any of the above, the default logging level is "info". If the environment variable is not present, no logging messages are emitted. The default destination for logging is stderr; if you want to send log output to a file, you can set the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are:

1. "file" or "f" sends all logging output to a file
2. "combine" or "c" sends all logging output to both stderr and a file

You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location is the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there is a network issue. This can be accomplished using RequestMetadata.RetryPolicy (request-level configuration); alternatively, global (all services) or client-level RetryPolicy configuration is also possible. You can find examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the Seeker interface. The retry behavior precedence (highest to lowest) is defined as follows:

The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). The default retry policy is defined by:

Default Retry-able Errors. Below is the list of default retry-able errors for which retry attempts should be made. The following errors should be retried (with backoff):

HTTP Code / Customer-facing Error Code

Apart from the above errors, retries should also be attempted for the following client-side errors:

1. HTTP connection timeout
2. Request connection errors
3. Request exceptions
4. Other timeouts (like read timeout)

The above errors can be mitigated by retrying and hence are classified as the default retry-able errors. Additionally, retries should also be made for circuit breaker exceptions (exceptions raised by a circuit breaker in an open state).

Default Termination Strategy. The termination strategy defines when the SDK should stop attempting to retry; in other words, it is the deadline for retries. The OCI SDKs stop retrying the operation after 7 retry attempts. This means the SDKs will have retried for ~98 seconds (~1.5 minutes) of total delay. SDKs will make a total of 8 attempts
(1 initial request + 7 retries).

Default Delay Strategy. The delay strategy defines the amount of time to wait between each of the retry attempts. The default delay strategy chosen for the SDK is exponential backoff with jitter, using:

1. A base time of 1 second for retry calculations
2. An exponent of 2; when calculating the next retry time, the SDK raises this to the power of the number of attempts
3. A maximum wait time between calls of 30 seconds (capped)
4. An added jitter value of between 0 and 1000 milliseconds to spread out the requests

Configure and use the default retry policy. You can set this retry policy for a single request (see the sketch below): or for all requests made by a client: or for all requests made by all clients: or set the default retry policy via an environment variable, which is a global switch for all services: Some services enable retries for operations by default; this can be overridden using any of the alternatives mentioned above. To know which service operations have retries enabled by default, look at the operation's description in the SDK - it will say whether it has retries enabled by default.

Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail. If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. Example:

This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will:

1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made
2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer
3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm)

If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.
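A hedged sketch of the request-level configuration mentioned above, assuming the common.DefaultRetryPolicy helper and the RequestMetadata.RetryPolicy field; the request type is reused from the earlier identity example and the OCID is a placeholder:

```golang
package main

import (
	"github.com/oracle/oci-go-sdk/common"
	"github.com/oracle/oci-go-sdk/identity"
)

// newListADsRequestWithRetry builds a request that carries the SDK's default
// retry policy in its RequestMetadata (request-level configuration).
func newListADsRequestWithRetry(compartmentID string) identity.ListAvailabilityDomainsRequest {
	retryPolicy := common.DefaultRetryPolicy()
	return identity.ListAvailabilityDomainsRequest{
		CompartmentId: common.String(compartmentID), // placeholder OCID goes here
		RequestMetadata: common.RequestMetadata{
			RetryPolicy: &retryPolicy,
		},
	}
}
```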
If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option: The NewRetryPolicy function also creates a retry policy without eventual consistency.

A circuit breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles. It also enables an application to detect whether the fault has been resolved; if the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping calls in a circuit breaker object that monitors for failures. Once failures reach a certain threshold, the circuit breaker trips and all further calls to it return an error; this also protects the service from being overwhelmed with network calls in the case of an outage.

Circuit Breaker Configuration Definitions:

1. Failure Rate Threshold - The state of the CircuitBreaker changes from CLOSED to OPEN when the failure rate is equal to or greater than a configurable threshold, for example when more than 50% of the recorded calls have failed.
2. Reset Timeout - The timeout after which an open circuit breaker will attempt a request if a request is made.
3. Failure Exceptions - The list of exceptions that will be regarded as failures for the circuit.
4. Minimum number of calls / Volume threshold - Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate.

The default configuration is:

1. Failure Rate Threshold - 80%. This means that when 80% of the requests calculated for a time window of 120 seconds have failed, the circuit will transition from closed to open.
2. Minimum number of calls / Volume threshold - A value of 10, for the above defined time window of 120 seconds.
3. Reset Timeout - 30 seconds to wait before setting the breaker to the halfOpen state and trying the action again.
4. Failure Exceptions - Failures for the circuit will only be recorded for the retryable/transient exceptions. This means only the following exceptions will be regarded as failures for the circuit:

HTTP Code / Customer-facing Error Code

Apart from the above, the following client-side exceptions will also be treated as failures for the circuit:

1. HTTP connection timeout
2. Request connection errors
3. Request exceptions
4. Other timeouts (like read timeout)

The Go SDK enables circuit breakers with the default configuration for most of the service clients; if you don't want this behavior, you can disable the functionality before your application runs. The Go SDK also supports customizing circuit breakers with specified configurations. You can find examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go To know which service clients have circuit breakers enabled, look at the service client's description in the SDK - it will say whether it has circuit breakers enabled by default.

The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, you can set this up in the following ways:
1. Configure environment variables as described at https://golang.org/pkg/net/http/#ProxyFromEnvironment
2. Modify the underlying Transport struct for a service client

In order to modify the underlying Transport struct in HttpClient, you can do something similar to the following (sample code for the audit service client):

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host: If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable:

Got a fix for a bug, or a new feature you'd like to contribute?
The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to: https://github.com/oracle/oci-go-sdk#help
Package sessions provides cookie and filesystem sessions and infrastructure for custom session backends. Its key features are covered in the overview below.

Let's start with an example that shows the sessions API in a nutshell (a sketch of this flow appears at the end of this overview): first we initialize a session store by calling NewCookieStore() and passing a secret key used to authenticate the session. Inside the handler, we call store.Get() to retrieve an existing session or create a new one. Then we set some session values in session.Values, which is a map[interface{}]interface{}. And finally we call session.Save() to save the session in the response. Note that in production code, we should check for errors when calling session.Save(r, w), and either display an error message or otherwise handle it. Save must be called before writing to the response, otherwise the session cookie will not be sent to the client.

That's all you need to know for the basic usage. Let's take a look at other options, starting with flash messages.

Flash messages are session values that last until read. The term appeared with Ruby on Rails a few years back. When we request a flash message, it is removed from the session. To add a flash, call session.AddFlash(), and to get all flashes, call session.Flashes(); see the flash-message sketch below. Flash messages are useful for setting information to be read after a redirection, like after form submissions.

There may also be cases where you want to store a complex datatype within a session, such as a struct. Sessions are serialised using the encoding/gob package, so it is easy to register new datatypes for storage in sessions; see the registration sketch below. As it's not possible to pass a raw type as a parameter to a function, gob.Register() relies on us passing it a value of the desired type. In that sketch we pass it a pointer to a struct and a pointer to a custom type representing a map[string]interface{}. (We could have passed non-pointer values if we wished.) This will then allow us to serialise/deserialise values of those types to and from our sessions. Note that because session values are stored in a map[interface{}]interface{}, there's a need to type-assert data when retrieving it; the same sketch shows this using the registered Person struct.

By default, session cookies last for a month. This is probably too long for some cases, but it is easy to change this and other attributes during runtime. Sessions can be configured individually, or the store can be configured so that all sessions saved using it share that configuration. We access session.Options or store.Options to set a new configuration. The fields are basically a subset of http.Cookie fields. The configuration sketch below changes the maximum age of a session to one week.

Sometimes we may want to change authentication and/or encryption keys without breaking existing sessions. The CookieStore supports key rotation; to use it you just need to set multiple authentication and encryption keys, in pairs, to be tested in order (also shown in the configuration sketch below). New sessions will be saved using the first pair. Old sessions can still be read because, when the first pair fails, the second will be tested. This makes it easy to "rotate" secret keys and still be able to validate existing sessions. Note: for all pairs the encryption key is optional; set it to nil or omit it and encryption won't be used.

Multiple sessions can be used in the same request, even with different session backends. When this happens, calling Save() on each session individually would be cumbersome, so we have a way to save all sessions at once: sessions.Save().
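A minimal sketch of the basic flow; the handler, session name, and key literal are illustrative, and in real code the secret key should come from configuration rather than a source literal:

    package main

    import (
        "net/http"

        "github.com/gorilla/sessions"
    )

    // The secret key authenticates the session cookie. Don't hard-code it in
    // real applications; this literal is illustrative only.
    var store = sessions.NewCookieStore([]byte("something-very-secret"))

    func MyHandler(w http.ResponseWriter, r *http.Request) {
        // Get a session. Get() always returns a session, even if empty.
        session, err := store.Get(r, "session-name")
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        // Set some session values.
        session.Values["foo"] = "bar"
        session.Values[42] = 43

        // Save the session before writing to the response.
        if err := session.Save(r, w); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
    }

    func main() {
        http.HandleFunc("/", MyHandler)
        if err := http.ListenAndServe(":8080", nil); err != nil {
            panic(err)
        }
    }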
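A sketch of the flash-message flow, reusing the store and imports from the previous sketch; the handler name and flash text are illustrative:

    func FlashHandler(w http.ResponseWriter, r *http.Request) {
        session, err := store.Get(r, "session-name")
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        if flashes := session.Flashes(); len(flashes) > 0 {
            // Flashes set on a previous request arrive here and are now
            // removed from the session.
        } else {
            // Queue a flash to be read on the next request.
            session.AddFlash("Hello, flash messages world!")
        }

        if err := session.Save(r, w); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
    }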
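The registration sketch, again reusing the store from the first sketch and additionally requiring the encoding/gob import; the Person struct, the M type, and the "person" key are illustrative:

    type Person struct {
        FirstName string
        LastName  string
        Email     string
        Age       int
    }

    type M map[string]interface{}

    func init() {
        // Register the types we intend to store in sessions so encoding/gob
        // can serialise them. We register pointers here; non-pointer values
        // would work as well.
        gob.Register(&Person{})
        gob.Register(&M{})
    }

    func PersonHandler(w http.ResponseWriter, r *http.Request) {
        session, err := store.Get(r, "session-name")
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        // Store a struct value in the session.
        session.Values["person"] = &Person{FirstName: "Ada", LastName: "Lovelace", Age: 36}
        if err := session.Save(r, w); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        // Retrieving it later requires a type assertion.
        if person, ok := session.Values["person"].(*Person); ok {
            _ = person.FirstName // use the struct fields as needed
        }
    }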
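The configuration sketch, showing per-session and per-store options plus key rotation; the option values follow the text above and the key literals are placeholders:

    // Expire this particular session after one week instead of the default
    // month, and keep the cookie out of reach of client-side scripts.
    func configureSession(session *sessions.Session) {
        session.Options = &sessions.Options{
            Path:     "/",
            MaxAge:   86400 * 7,
            HttpOnly: true,
        }
    }

    // Alternatively, configure the store so every session it creates uses
    // the same options.
    func configureStore(store *sessions.CookieStore) {
        store.Options = &sessions.Options{
            Path:   "/",
            MaxAge: 86400 * 7,
        }
    }

    // Key rotation: pairs of authentication and encryption keys, newest pair
    // first. The encryption key of a pair may be nil (or omitted for the last
    // pair) to disable encryption. These literals are placeholders.
    var rotatedStore = sessions.NewCookieStore(
        []byte("new-authentication-key"),
        []byte("new-encryption-key"),
        []byte("old-authentication-key"),
        []byte("old-encryption-key"),
    )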
This is possible because when we call Get() from a session store, it adds the session to a common registry, and Save() uses that registry to save all registered sessions. Here's an example:
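(A minimal sketch, reusing the store from the earlier sketches; the handler and session names are illustrative.)

    func MultiSessionHandler(w http.ResponseWriter, r *http.Request) {
        // Two sessions for the same request; they could even come from
        // different stores or backends.
        sessionA, _ := store.Get(r, "session-one")
        sessionB, _ := store.Get(r, "session-two")

        sessionA.Values["foo"] = "bar"
        sessionB.AddFlash("That was great!")

        // Save every session registered for this request in one call.
        if err := sessions.Save(r, w); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
    }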
Package txscript implements the Decred transaction script language. This package provides data structures and functions to parse and execute decred transaction scripts.

Decred transaction scripts are written in a stack-based, FORTH-like language. The Decred script language consists of a number of opcodes which fall into several categories, such as pushing and popping data to and from the stack, performing basic and bitwise arithmetic, conditional branching, comparing hashes, and checking cryptographic signatures. Scripts are processed from left to right and intentionally do not provide loops.

The vast majority of Decred scripts at the time of this writing are of several standard forms which consist of a spender providing a public key and a signature which proves the spender owns the associated private key. This information is used to prove the spender is authorized to perform the transaction.

One benefit of using a scripting language is added flexibility in specifying what conditions must be met in order to spend decreds.

Errors returned by this package are of type txscript.Error. This allows the caller to programmatically determine the specific error by examining the ErrorCode field of the type-asserted txscript.Error while still providing rich error messages with contextual information. A convenience function named IsErrorCode is also provided to allow callers to easily check for a specific error code. See ErrorCode in the package documentation for a full list.
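A minimal sketch of both approaches, assuming the ErrorCode-based API described here; the import path and the ErrEvalFalse code are illustrative and may differ between releases, and the error value in main is constructed by hand purely to exercise the helper:

    package main

    import (
        "fmt"

        "github.com/decred/dcrd/txscript"
    )

    // report shows both ways of inspecting a script error: the IsErrorCode
    // convenience function and a type assertion to txscript.Error.
    func report(err error) {
        if err == nil {
            fmt.Println("script succeeded")
            return
        }

        // Check for one specific error code.
        if txscript.IsErrorCode(err, txscript.ErrEvalFalse) {
            fmt.Println("script evaluated without error but left a false result")
        }

        // Inspect the code and message generically.
        if serr, ok := err.(txscript.Error); ok {
            fmt.Printf("code: %v, description: %s\n", serr.ErrorCode, serr.Description)
        }
    }

    func main() {
        // In real code err would come from building or executing a script,
        // for example via NewEngine followed by Execute.
        err := txscript.Error{
            ErrorCode:   txscript.ErrEvalFalse,
            Description: "false stack entry at end of script execution",
        }
        report(err)
    }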