Package caddy implements the Caddy server manager. To use this package, start one or more server instances, then call Wait() on your instance so the process waits for all servers to quit before exiting.
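A minimal sketch of that flow, assuming the legacy github.com/mholt/caddy API (the AppName/AppVersion variables, LoadCaddyfile, and Start) rather than Caddy v2:

	package main

	import "github.com/mholt/caddy"

	func main() {
		// Identify the application embedding Caddy (required by the legacy API).
		caddy.AppName = "example"
		caddy.AppVersion = "0.0.1"

		// Load the Caddyfile for the "http" server type.
		caddyfile, err := caddy.LoadCaddyfile("http")
		if err != nil {
			panic(err)
		}

		// Start the first server instance.
		instance, err := caddy.Start(caddyfile)
		if err != nil {
			panic(err)
		}

		// Block until all servers have quit, as the package docs advise.
		instance.Wait()
	}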
Package sdk is the official AWS SDK for the Go programming language.

The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling.

The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/

Check out the Getting Started Guide and API Reference Docs, which detail the SDK's components and each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html

The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference for the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/

The SDK is composed of two main components: the SDK core and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK.

aws - SDK core, provides common shared types such as Config, Logger, and utilities to make working with API parameters easier.

awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare a returned error to specific error codes. See the package's documentation for additional values that can be extracted, such as RequestId.

credentials - Provides the types and built-in credentials providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are additional credentials providers, such as stscreds for assuming IAM roles and ec2rolecreds for EC2 instance roles.

endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information, such as which services are in a region and which regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2".

session - Provides initial default configuration, and loads configuration from external sources such as the environment and the shared credentials file.

request - Provides the API request sending and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and for configuring how the SDK processes the request.

service - Clients for AWS services. All services supported by the SDK are available under this folder.

The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports.
All service clients follow a common pattern of creation and usage. When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides shared configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the aws.Config type to override the configuration provided in the Session, creating service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently.

In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html

See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/aws/session/

See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of providing these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain, and how to configure it. The common items in the credential chain are the following:

Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles.

Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.

EC2 Instance Role Credentials - Use an EC2 Instance Role to assign credentials to an application running on an EC2 instance. This removes the need to manage credential files in production.

Credentials can also be configured in code by setting the Config's Credentials value to a custom provider, or by using one of the providers included with the SDK, to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. The example following this section creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests.

See the credentials package documentation for more information on credential providers included with the SDK, and how to customize the SDK's usage of credentials. https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials

The SDK has support for the shared configuration file (~/.aws/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the Option's SharedConfigState parameter.

In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created.
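A minimal sketch of that assume-role flow ("myRoleARN" stands in for a real role ARN):

	package main

	import (
		"fmt"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3"
	)

	func main() {
		// Create a Session using the default credential chain.
		sess := session.Must(session.NewSession())

		// Create a credential provider that assumes the IAM role "myRoleARN".
		creds := stscreds.NewCredentials(sess, "myRoleARN")

		// Configure the S3 client to sign its requests with the assumed role.
		svc := s3.New(sess, &aws.Config{Credentials: creds})

		out, err := svc.ListBuckets(&s3.ListBucketsInput{})
		if err != nil {
			fmt.Println("failed to list buckets:", err)
			return
		}
		for _, b := range out.Buckets {
			fmt.Println(aws.StringValue(b.Name))
		}
	}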
The last value specified in code wins if the region is specified multiple ways.

To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code in the application to select the region.

The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. These values are helpful because they reduce the need to type the region string manually.

To set the region on a Session, set the aws package's Config struct parameter Region to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients, and all of the clients make API requests to the same region. See the endpoints package for the AWS Regions and Endpoints metadata. https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/

In addition to setting the region when creating a Session, you can also set the region on a per service client basis. This overrides the region of a Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information, and for additional options such as setting the Endpoint and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example for the different ways of making API requests.

ListObjects - Base API operation that will make the API request to the service.

ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly.

ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response.

ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument, for modifying how the request will be made or extracting information from the raw HTTP response.

ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext, this method also takes the request package's Option functional options as the variadic argument.

In addition to the API operations, the SDK also includes several higher-level methods that abstract checking for, and waiting for, an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters.

WaitUntilBucketExists - Method to make an API request to query an AWS service for a resource's state. Will return successfully when that state is reached.

WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern.
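A short sketch of setting the region on a Session and calling two of the ListObjects variants listed above; the bucket name is hypothetical:

	package main

	import (
		"context"
		"fmt"
		"time"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/endpoints"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3"
	)

	func main() {
		// Set the region for all clients created from this Session, using
		// the endpoints package constant instead of a raw "us-west-2" string.
		sess := session.Must(session.NewSession(&aws.Config{
			Region: aws.String(endpoints.UsWest2RegionID),
		}))
		svc := s3.New(sess)

		input := &s3.ListObjectsInput{Bucket: aws.String("my-bucket")}

		// Base operation: one request, one page of results.
		if out, err := svc.ListObjects(input); err == nil {
			fmt.Println("objects in first page:", len(out.Contents))
		}

		// Context variant: cancel the in-flight request if it takes too long.
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Pages variant: the callback is invoked once per page of results.
		err := svc.ListObjectsPagesWithContext(ctx, input,
			func(page *s3.ListObjectsOutput, lastPage bool) bool {
				fmt.Println("objects in page:", len(page.Contents))
				return true // continue to the next page
			})
		if err != nil {
			fmt.Println("paging failed:", err)
		}
	}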
In addition, these methods take the request package's WaiterOptions to configure the waiter and how the underlying request will be made by the SDK.

The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned.

Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round-trip API page requests. Pagination methods take a callback function that will be called for each page of the API's response.

Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the state that is desired, an error occurs, or the waiter times out. If the waiter times out, the error code returned will be request.WaiterResourceNotReadyErrorCode.

The following example shows a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that will cancel the request if it takes too long. This example highlights how to use sessions, create a service client, make a request, handle the error, and process the response.
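A condensed sketch of that program; the bucket, key, and local file path are hypothetical:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/awserr"
		"github.com/aws/aws-sdk-go/aws/request"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3/s3manager"
	)

	func main() {
		sess := session.Must(session.NewSession())
		uploader := s3manager.NewUploader(sess)

		f, err := os.Open("testdata/file.txt")
		if err != nil {
			fmt.Println("failed to open file:", err)
			os.Exit(1)
		}
		defer f.Close()

		// Cancel the upload if it does not complete within the timeout.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		_, err = uploader.UploadWithContext(ctx, &s3manager.UploadInput{
			Bucket: aws.String("my-bucket"),
			Key:    aws.String("my-key"),
			Body:   f,
		})
		if err != nil {
			// Inspect the awserr.Error code to distinguish a canceled request.
			if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
				fmt.Println("upload canceled due to timeout:", err)
			} else {
				fmt.Println("failed to upload object:", err)
			}
			os.Exit(1)
		}
		fmt.Println("successfully uploaded file to my-bucket/my-key")
	}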
Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets. More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs.

See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

To start working with this package, create a Client. The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines. You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads. If you only wish to access public data, you can create an unauthenticated client by passing option.WithoutAuthentication() to NewClient.

To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage; you can then create and use a client as usual. Please note that there is no official emulator for Cloud Storage.

A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle. A handle is a reference to a bucket; you can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create. Note that although buckets are associated with projects, bucket names are global across all projects.

Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs.

An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data. Objects also have attributes, which you can fetch with ObjectHandle.Attrs.

Listing objects in a bucket is done with the BucketHandle.Objects method. Objects are listed lexicographically by name. To filter objects lexicographically, [Query.StartOffset] and/or [Query.EndOffset] can be used. If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process. A sketch of these basics follows this overview.

Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs). To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List. You can also set and delete ACLs.

Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations. For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it.
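A minimal sketch of those basics; the project, bucket, and object names are hypothetical:

	package main

	import (
		"context"
		"fmt"
		"io"
		"log"

		"cloud.google.com/go/storage"
		"google.golang.org/api/iterator"
	)

	func main() {
		ctx := context.Background()

		// The client uses your default application credentials; reuse it
		// rather than creating one per request.
		client, err := storage.NewClient(ctx)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// A handle is just a reference; the bucket may not exist yet.
		bkt := client.Bucket("my-bucket")
		if err := bkt.Create(ctx, "my-project-id", nil); err != nil {
			log.Fatal(err)
		}

		// The first write to an object creates it.
		w := bkt.Object("notes.txt").NewWriter(ctx)
		if _, err := io.WriteString(w, "hello world"); err != nil {
			log.Fatal(err)
		}
		if err := w.Close(); err != nil {
			log.Fatal(err)
		}

		// List objects lexicographically by name.
		it := bkt.Objects(ctx, nil)
		for {
			attrs, err := it.Next()
			if err == iterator.Done {
				break
			}
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println(attrs.Name)
		}
	}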
Here is how to express that: apply a condition built from objAttrs to the object handle before writing, as in the sketch following this section.

You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL. You can also sign a URL without creating a client. See the documentation of SignedURL for details.

A signed post policy is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised by a user. For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4.

If the GoogleAccessID and PrivateKey option fields are not provided, they will be automatically detected by BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 where possible. Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient. In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account. Generating the signature requires suitable signing credentials; see the documentation of those methods for the details.

Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the richer googleapi.Error type.

Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation. The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information. Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry); the sketch below includes an example.

You can add custom headers to any API call made by this package by using callctx.SetHeaders on the context which is passed to the method, for example to add a custom audit logging header.

This package includes support for the Cloud Storage gRPC API. The implementation uses gRPC rather than the default JSON and XML APIs to make requests to Cloud Storage. The Go Storage gRPC client is generally available. The Notifications, Service Account HMAC and GetServiceAccount RPCs are not supported through the gRPC client. To create a client which will use gRPC, use the alternate constructor, NewGRPCClient.

Using the gRPC API inside GCP with a bucket in the same region can allow for Direct Connectivity (enabling requests to skip some proxy steps and reducing response latency). A warning is emitted if gRPC is not used within GCP, to note that Direct Connectivity could not be initialized. Direct Connectivity is not required to access the gRPC API.

Dependencies for the gRPC API may slightly increase the size of binaries for applications depending on this package. If you are not using gRPC, you can use the build tag `disable_grpc_modules` to opt out of these dependencies and reduce the binary size.
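A sketch of the read-modify-write condition described above, plus a per-handle retry policy; the bucket and object names are hypothetical:

	package main

	import (
		"context"
		"log"
		"time"

		"cloud.google.com/go/storage"
		"github.com/googleapis/gax-go/v2"
	)

	func main() {
		ctx := context.Background()
		client, err := storage.NewClient(ctx)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		obj := client.Bucket("my-bucket").Object("notes.txt")

		// Read the current attributes, then write only if the object's
		// generation (i.e. its content) has not changed since the read.
		objAttrs, err := obj.Attrs(ctx)
		if err != nil {
			log.Fatal(err)
		}
		w := obj.If(storage.Conditions{GenerationMatch: objAttrs.Generation}).NewWriter(ctx)
		// ... write to w; Close reports a precondition failure if the object changed.
		if err := w.Close(); err != nil {
			log.Fatal(err)
		}

		// Configure non-default retry behavior for all calls through this handle.
		retried := obj.Retryer(
			storage.WithBackoff(gax.Backoff{Initial: 2 * time.Second}),
			storage.WithPolicy(storage.RetryAlways),
		)
		_ = retried
	}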
The gRPC client emits metrics by default and will export the gRPC telemetry discussed in gRFC/66 and gRFC/78 to Google Cloud Monitoring. The metrics are accessible through the Cloud Monitoring API and you incur no additional cost for publishing them. Google Cloud Support can use this information to more quickly diagnose problems related to GCS and gRPC. Sending this data does not incur any billing charges, and requires minimal CPU (a single RPC every minute) and memory (a few KiB to batch the telemetry). To access the metrics you can view them through the Cloud Monitoring metric explorer with the prefix `storage.googleapis.com/client`. Metrics are emitted every minute.

You can disable metrics when creating a new gRPC client by using WithDisabledClientMetrics, as in the sketch below.

The metrics exporter uses the Cloud Monitoring API, which determines the project ID and credentials as follows:

* The project ID is determined using the OTel Resource Detector for the environment; otherwise it falls back to the project provided by google.FindCredentials.

* Credentials are determined using Application Default Credentials. The principal must have the `roles/monitoring.metricWriter` role granted; if not, a warning will be logged. Subsequent warnings are silenced to prevent noisy logs.

Certain control plane and long-running operations for Cloud Storage (including Folder and Managed Folder operations) are supported via the autogenerated Storage Control client, which is available as a subpackage in this module. See the package docs at cloud.google.com/go/storage/control/apiv2 or reference the Storage Control API docs.
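A minimal sketch of constructing the gRPC-based client with client metrics disabled, using the constructor and option named above:

	package main

	import (
		"context"
		"log"

		"cloud.google.com/go/storage"
	)

	func main() {
		ctx := context.Background()

		// Alternate constructor: requests go over gRPC rather than JSON/XML.
		// WithDisabledClientMetrics turns off the default metrics export to
		// Cloud Monitoring.
		client, err := storage.NewGRPCClient(ctx, storage.WithDisabledClientMetrics())
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
	}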
Package cap provides all the Linux Capabilities userspace library API bindings in native Go.

Capabilities are a feature of the Linux kernel that allow fine grain permissions to perform privileged operations. Privileged operations are required to do irregular system level operations from code. You can read more about how Capabilities are intended to work via the documentation at https://sites.google.com/site/fullycapable/. This package supports native Go bindings for all the features described there, as well as supporting subsequent changes to the kernel for other styles of inheritable Capability. Some simple things you can do with this package are shown in the sketch after this overview.

The "cap" package operates with POSIX semantics for security state. That is, all OS threads are kept in sync at all times. The package "kernel.org/pub/linux/libs/security/libcap/psx" is used to implement POSIX semantics system calls that manipulate thread state uniformly over the whole Go (and any CGo linked) process runtime. Note, if the Go runtime syscall interface contains the Linux variant syscall.AllThreadsSyscall() API (it debuted in go1.16; see https://github.com/golang/go/issues/1435 for its history) then the "libcap/psx" package will use that to invoke Capability setting system calls in pure Go binaries. With such an enhanced Go runtime, to force this behavior, use the CGO_ENABLED=0 environment variable.

POSIX semantics are more secure than trying to manage privilege at a thread level when those threads share a common memory image as they do under Linux: it is trivial to exploit a vulnerability in one thread of a process to cause execution on any other thread. So, any imbalance in security state in such cases will readily create an opportunity for a privilege escalation vulnerability.

POSIX semantics also work well with Go, which deliberately tries to insulate the user from worrying about the number of OS threads that are actually running in their program. Indeed, Go can efficiently launch and manage tens of thousands of concurrent goroutines without bogging the program or wider system down. It does this by aggressively migrating idle threads to make progress on unblocked goroutines. So, inconsistent security state across OS threads can also lead to program misbehavior.

The only exception to this process-wide common security state is the cap.Launcher related functionality. This briefly locks an OS thread to a goroutine in order to launch another executable - the robust implementation of this kind of support is quite subtle, so please read its documentation carefully, if you find that you need it.

See https://sites.google.com/site/fullycapable/ for recent updates, some more complete walk-through examples of ways of using 'cap.Set's etc. and information on how to file bugs.

Copyright (c) 2019-21 Andrew G. Morgan <morgan@kernel.org>

The cap and psx packages are licensed with a (you choose) BSD 3-clause or GPL2. See LICENSE file for details.
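A small sketch of reading the current process capabilities with this package; the specific capability checked (CAP_SETUID) is just an illustration:

	package main

	import (
		"fmt"
		"log"

		"kernel.org/pub/linux/libs/security/libcap/cap"
	)

	func main() {
		// Snapshot the capabilities of the current process.
		c := cap.GetProc()
		fmt.Println("current caps:", c)

		// Check whether CAP_SETUID is in the Effective flag set.
		on, err := c.GetFlag(cap.Effective, cap.SETUID)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("effective CAP_SETUID:", on)
	}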
Package controllerruntime provides tools to construct Kubernetes-style controllers that manipulate both Kubernetes CRDs and aggregated/built-in Kubernetes APIs. It defines easy helpers for the common use cases when building CRDs, built on top of customizable layers of abstraction. Common cases should be easy, and uncommon cases should be possible. In general, controller-runtime tries to guide users towards Kubernetes controller best-practices.

The main entrypoint for controller-runtime is this root package, which contains all of the common types needed to get started building controllers. The examples in this package walk through a basic controller setup. The kubebuilder book (https://book.kubebuilder.io) has some more in-depth walkthroughs. controller-runtime favors structs with sane defaults over constructors, so it's fairly common to see structs being used directly in controller-runtime.

A brief-ish walkthrough of the layout of this library can be found below. Each package contains more information about how to use it. Frequently asked questions about using controller-runtime and designing controllers can be found at https://github.com/kubernetes-sigs/controller-runtime/blob/main/FAQ.md.

Every controller and webhook is ultimately run by a Manager (pkg/manager). A manager is responsible for running controllers and webhooks, and setting up common dependencies, like shared caches and clients, as well as managing leader election (pkg/leaderelection). Managers are generally configured to gracefully shut down controllers on pod termination by wiring up a signal handler (pkg/manager/signals).

Controllers (pkg/controller) use events (pkg/event) to eventually trigger reconcile requests. They may be constructed manually, but are often constructed with a Builder (pkg/builder), which eases the wiring of event sources (pkg/source), like Kubernetes API object changes, to event handlers (pkg/handler), like "enqueue a reconcile request for the object owner". Predicates (pkg/predicate) can be used to filter which events actually trigger reconciles. There are pre-written utilities for the common cases, and interfaces and helpers for advanced cases.

Controller logic is implemented in terms of Reconcilers (pkg/reconcile). A Reconciler implements a function which takes a reconcile Request containing the name and namespace of the object to reconcile, reconciles the object, and returns a Response or an error indicating whether to requeue for a second round of processing. A sketch of such a Reconciler follows this overview.

Reconcilers use Clients (pkg/client) to access API objects. The default client provided by the manager reads from a local shared cache (pkg/cache) and writes directly to the API server, but clients can be constructed that only talk to the API server, without a cache. The Cache will auto-populate with watched objects, as well as when other structured objects are requested. The default split client does not promise to invalidate the cache during writes (nor does it promise sequential create/get coherence), and code should not assume a get immediately following a create/update will return the updated resource. Caches may also have indexes, which can be created via a FieldIndexer (pkg/client) obtained from the manager. Indexes can be used to quickly and easily look up all objects with certain fields set. Reconcilers may retrieve event recorders (pkg/recorder) to emit events using the manager.
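A sketch of such a Reconciler for the ReplicaSet example used throughout these docs (the main function that wires it up appears in the sketch after the next section):

	package main

	import (
		"context"

		appsv1 "k8s.io/api/apps/v1"
		"sigs.k8s.io/controller-runtime/pkg/client"
		"sigs.k8s.io/controller-runtime/pkg/reconcile"
	)

	// ReplicaSetReconciler reconciles ReplicaSets, as in the examples below.
	type ReplicaSetReconciler struct {
		client.Client
	}

	func (r *ReplicaSetReconciler) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
		// Read the object being reconciled via the manager's cache-backed client.
		rs := &appsv1.ReplicaSet{}
		if err := r.Get(ctx, req.NamespacedName, rs); err != nil {
			// Ignore not-found errors: the object may have been deleted.
			return reconcile.Result{}, client.IgnoreNotFound(err)
		}

		// ... compare desired vs. observed state and update via r.Client ...

		// An empty Result with a nil error means "done, don't requeue".
		return reconcile.Result{}, nil
	}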
Clients, Caches, and many other things in Kubernetes use Schemes (pkg/scheme) to associate Go types to Kubernetes API Kinds (Group-Version-Kinds, to be specific).

Similarly, webhooks (pkg/webhook/admission) may be implemented directly, but are often constructed using a builder (pkg/webhook/admission/builder). They are run via a server (pkg/webhook) which is managed by a Manager.

Logging (pkg/log) in controller-runtime is done via structured logs, using a log set of interfaces called logr (https://pkg.go.dev/github.com/go-logr/logr). While controller-runtime provides easy setup for using Zap (https://go.uber.org/zap, pkg/log/zap), you can provide any implementation of logr as the base logger for controller-runtime.

Metrics (pkg/metrics) provided by controller-runtime are registered into a controller-runtime-specific Prometheus metrics registry. The manager can serve these via an HTTP endpoint, and additional metrics may be registered to this Registry as normal.

You can easily build integration and unit tests for your controllers and webhooks using the test Environment (pkg/envtest). This will automatically stand up a copy of etcd and kube-apiserver, and provide the correct options to connect to the API server. It's designed to work well with the Ginkgo testing framework, but should work with any testing setup.

The first example (sketched after this section) creates a simple application Controller that is configured for ReplicaSets and Pods:

* Create a new application for ReplicaSets that manages Pods owned by the ReplicaSet and calls into ReplicaSetReconciler.

* Start the application.

Another example creates a simple application Controller that is configured for the ExampleCRDWithConfigMapRef CRD. Any change in the configMap referenced in this Custom Resource will cause the re-reconcile of the parent ExampleCRDWithConfigMapRef, due to the implementation of the .Watches method of "sigs.k8s.io/controller-runtime/pkg/builder".Builder.

A third example creates the same simple application Controller for ReplicaSets and Pods, but runs leader election with the configuration provided in the manager options. If leader election configuration is not provided, the controller runs leader election with default values, taken from https://github.com/kubernetes/component-base/blob/master/config/v1alpha1/defaults.go:

* defaultLeaseDuration = 15 * time.Second

* defaultRenewDeadline = 10 * time.Second

* defaultRetryPeriod = 2 * time.Second
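A condensed sketch of the first example, wiring the ReplicaSetReconciler from the earlier sketch into a manager with the builder:

	package main

	import (
		"os"

		appsv1 "k8s.io/api/apps/v1"
		corev1 "k8s.io/api/core/v1"
		ctrl "sigs.k8s.io/controller-runtime"
		"sigs.k8s.io/controller-runtime/pkg/log"
		"sigs.k8s.io/controller-runtime/pkg/log/zap"
	)

	func main() {
		ctrl.SetLogger(zap.New())

		// The manager owns shared caches, clients, and graceful shutdown.
		mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
		if err != nil {
			log.Log.Error(err, "unable to start manager")
			os.Exit(1)
		}

		// Watch ReplicaSets and the Pods they own; both trigger reconciles
		// of the owning ReplicaSet.
		err = ctrl.NewControllerManagedBy(mgr).
			For(&appsv1.ReplicaSet{}).
			Owns(&corev1.Pod{}).
			Complete(&ReplicaSetReconciler{Client: mgr.GetClient()})
		if err != nil {
			log.Log.Error(err, "unable to create controller")
			os.Exit(1)
		}

		// Block until a termination signal arrives.
		if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
			log.Log.Error(err, "manager exited non-zero")
			os.Exit(1)
		}
	}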
Package kms provides the API client, operations, and parameter types for AWS Key Management Service.

Key Management Service (KMS) is an encryption and key management web service. This guide describes the KMS operations that you can call programmatically. For general information about KMS, see the Key Management Service Developer Guide.

KMS has replaced the term customer master key (CMK) with KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term.

Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, macOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to KMS and other Amazon Web Services services. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services. We recommend that you use the Amazon Web Services SDKs to make programmatic API calls to KMS; a minimal sketch follows this overview.

If you need to use FIPS 140-2 validated cryptographic modules when communicating with Amazon Web Services, use the FIPS endpoint in your preferred Amazon Web Services Region. For more information about the available FIPS endpoints, see Service endpoints in the Key Management Service topic of the Amazon Web Services General Reference.

All KMS API calls must be signed and be transmitted using Transport Layer Security (TLS). KMS recommends you always use the latest supported TLS version. Clients must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes.

Requests must be signed using an access key ID and a secret access key. We strongly recommend that you do not use your Amazon Web Services account root access key ID and secret access key for everyday work. You can use the access key ID and secret access key for an IAM user, or you can use the Security Token Service (STS) to generate temporary security credentials and use those to sign requests. All KMS requests must be signed with Signature Version 4.

KMS supports CloudTrail, a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the CloudTrail User Guide.

For more information about credentials and request signing, see the following: Amazon Web Services Security Credentials, Temporary Security Credentials, Signature Version 4 Signing Process.

Of the API operations discussed in this guide, the following will prove the most useful for most applications. You will likely perform operations other than these, such as creating keys and assigning policies, by using the console.
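A minimal sketch of creating the client and calling an operation, following the aws-sdk-go-v2 pattern this package uses (the ListKeys call is just an illustration):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/kms"
	)

	func main() {
		ctx := context.Background()

		// Load region and credentials from the default chain.
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := kms.NewFromConfig(cfg)

		// List the KMS keys in the account and region.
		out, err := client.ListKeys(ctx, &kms.ListKeysInput{})
		if err != nil {
			log.Fatal(err)
		}
		for _, k := range out.Keys {
			fmt.Println(*k.KeyId)
		}
	}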
Package kinesis provides the API client, operations, and parameter types for Amazon Kinesis. Amazon Kinesis Data Streams is a managed service that scales elastically for real-time processing of streaming big data.
Package gophercloud provides a multi-vendor interface to OpenStack-compatible clouds. The library has a three-level hierarchy: providers, services, and resources.

Provider structs represent the cloud providers that offer and manage a collection of services. You will generally want to create one Provider client per OpenStack cloud. Use your OpenStack credentials to create a Provider client. The IdentityEndpoint is typically referred to as "auth_url" or "OS_AUTH_URL" in information provided by the cloud operator. Additionally, the cloud may refer to TenantID or TenantName as project_id and project_name. Credentials are specified in an AuthOptions struct, as in the sketch after this overview; you can also authenticate with a pre-existing token. You may also use the openstack.AuthOptionsFromEnv() helper function, which reads in standard environment variables frequently found in an OpenStack `openrc` file. Again note that Gophercloud currently uses "tenant" instead of "project".

Service structs are specific to a provider and handle all of the logic and operations for a particular OpenStack service. Examples of services include: Compute, Object Storage, Block Storage. In order to define one, you pass in the parent provider.

Resource structs are the domain models that services make use of in order to work with and represent the state of API resources. Intermediate Result structs are returned for API operations, which allow generic access to the HTTP headers, response body, and any errors associated with the network transaction. To turn a result into a usable resource struct, you must call the Extract method which is chained to the response, or an Extract function from an applicable extension.

All requests that enumerate a collection return a Pager struct that is used to iterate through the results one page at a time. Use the EachPage method on that Pager to handle each successive Page in a closure, then use the appropriate extraction method from that request's package to interpret that Page as a slice of results. If you want to obtain the entire collection of pages without doing any intermediary processing on each page, you can use the AllPages method.

This top-level package contains utility functions and data types that are used throughout the provider and service packages. Of particular note for end users are the AuthOptions and EndpointOpts structs. The package documentation also includes an example retry backoff function, which respects the 429 HTTP response code and a "Retry-After" header.
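A sketch of the provider, service, and resource flow under the pre-2.x (context-free) API; the endpoint, credentials, and region are placeholders:

	package main

	import (
		"fmt"
		"log"

		"github.com/gophercloud/gophercloud"
		"github.com/gophercloud/gophercloud/openstack"
		"github.com/gophercloud/gophercloud/openstack/compute/v2/servers"
		"github.com/gophercloud/gophercloud/pagination"
	)

	func main() {
		// Provider: authenticate against the cloud's identity endpoint.
		opts := gophercloud.AuthOptions{
			IdentityEndpoint: "https://openstack.example.com:5000/v3",
			Username:         "demo",
			Password:         "secret",
			TenantName:       "demo-project",
			DomainName:       "Default",
		}
		provider, err := openstack.AuthenticatedClient(opts)
		if err != nil {
			log.Fatal(err)
		}

		// Service: a Compute client bound to this provider.
		compute, err := openstack.NewComputeV2(provider, gophercloud.EndpointOpts{
			Region: "RegionOne",
		})
		if err != nil {
			log.Fatal(err)
		}

		// Resource: page through servers, extracting each page into structs.
		err = servers.List(compute, servers.ListOpts{}).EachPage(
			func(page pagination.Page) (bool, error) {
				list, err := servers.ExtractServers(page)
				if err != nil {
					return false, err
				}
				for _, s := range list {
					fmt.Println(s.ID, s.Name)
				}
				return true, nil // continue to the next page
			})
		if err != nil {
			log.Fatal(err)
		}
	}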
Package lambda provides the API client, operations, and parameter types for AWS Lambda.

Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. With Lambda, you can run code for virtually any type of application or backend service. For more information about the Lambda service, see What is Lambda in the Lambda Developer Guide.

The Lambda API Reference provides information about each of the API methods, including details about the parameters in each API request and response. You can use Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools to access the API. For installation instructions, see Tools for Amazon Web Services. For a list of Region-specific endpoints that Lambda supports, see Lambda endpoints and quotas in the Amazon Web Services General Reference.

When making the API calls, you will need to authenticate your request by providing a signature. Lambda supports signature version 4. For more information, see Signature Version 4 signing process in the Amazon Web Services General Reference.

Because Amazon Web Services SDKs use the CA certificates from your computer, changes to the certificates on the Amazon Web Services servers can cause connection failures when you attempt to use an SDK. You can prevent these failures by keeping your computer's CA certificates and operating system up-to-date. If you encounter this issue in a corporate environment and do not manage your own computer, you might need to ask an administrator to assist with the update process. The following list shows minimum operating system and Java versions:

Microsoft Windows versions that have updates from January 2005 or later installed contain at least one of the required CAs in their trust list.

Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5 (February 2007), Mac OS X 10.5 (October 2007), and later versions contain at least one of the required CAs in their trust list.

Red Hat Enterprise Linux 5 (March 2007), 6, and 7 and CentOS 5, 6, and 7 all contain at least one of the required CAs in their default trusted CA list.

Java 1.4.2_12 (May 2006), 5 Update 2 (March 2005), and all later versions, including Java 6 (December 2006), 7, and 8, contain at least one of the required CAs in their default trusted CA list.

When accessing the Lambda management console or Lambda API endpoints, whether through browsers or programmatically, you will need to ensure your client machines support any of the following CAs:

Amazon Root CA 1

Starfield Services Root Certificate Authority - G2

Starfield Class 2 Certification Authority

Root certificates from the first two authorities are available from Amazon trust services, but keeping your computer up-to-date is the more straightforward solution. To learn more about ACM-provided certificates, see Amazon Web Services Certificate Manager FAQs.
Package timestreamwrite provides the API client, operations, and parameter types for Amazon Timestream Write. Amazon Timestream is a fast, scalable, fully managed time-series database service that makes it easy to store and analyze trillions of time-series data points per day. With Timestream, you can easily store and analyze IoT sensor data to derive insights from your IoT applications. You can analyze industrial telemetry to streamline equipment management and maintenance. You can also store and analyze log data and metrics to improve the performance and availability of your applications. Timestream is built from the ground up to effectively ingest, process, and store time-series data. It organizes data to optimize query processing. It automatically scales based on the volume of data ingested and on the query volume to ensure you receive optimal performance while inserting and querying data. As your data grows over time, Timestream’s adaptive query processing engine spans across storage tiers to provide fast analysis while reducing costs.
Package emr provides the API client, operations, and parameter types for Amazon EMR. Amazon EMR is a web service that makes it easier to process large amounts of data efficiently. Amazon EMR uses Hadoop processing combined with several Amazon Web Services services to do tasks such as web indexing, data mining, log file analysis, machine learning, scientific simulation, and data warehouse management.
Package dragonboat is a multi-group Raft implementation.

The NodeHost struct is the facade interface for all features provided by the dragonboat package. Each NodeHost instance usually runs on a separate host managing its CPU, storage and network resources. Each NodeHost can manage Raft nodes from many different Raft groups known as Raft clusters. Each Raft cluster is identified by its ClusterID, and it usually consists of multiple nodes, each identified by its NodeID value. Nodes from the same Raft cluster can be considered as replicas of the same data; they are supposed to be distributed on different NodeHost instances across the network. This brings fault tolerance against machine and network failures, as application data stored in the Raft cluster will be available as long as the majority of its managing NodeHost instances (i.e. its underlying hosts) are available.

User applications can leverage the power of the Raft protocol implemented in dragonboat by implementing the IStateMachine or IOnDiskStateMachine component, as defined in github.com/lni/dragonboat/v3/statemachine. Known as user state machines, each IStateMachine and IOnDiskStateMachine instance is in charge of updating, querying and snapshotting application data with minimum exposure to the complexity of the Raft protocol implementation.

User applications can use NodeHost's APIs to update the state of their IStateMachine or IOnDiskStateMachine instances; this is called making proposals. Once accepted by the majority nodes of a Raft cluster, the proposal is considered as committed and it will be applied on all member nodes of the Raft cluster. Applications can also make linearizable reads to query the state of the IStateMachine or IOnDiskStateMachine instances. Dragonboat employs the ReadIndex protocol invented by Diego Ongaro for fast linearizable reads. Dragonboat guarantees the linearizability of your I/O when interacting with the IStateMachine or IOnDiskStateMachine instances. In plain English: writes (via making a proposal) to your Raft cluster appear to be instantaneous; once a write is completed, all later reads (linearizable reads using the ReadIndex protocol as implemented and provided in dragonboat) should return the value of that write or a later write; and once a value is returned by a linearizable read, all later reads should return the same value or the result of a later write.

To strictly provide such a guarantee, we need to implement the at-most-once semantic required by linearizability. For a client, when it retries a proposal that failed to complete before its deadline during the previous attempt, it runs the risk of having the same proposal committed and applied twice into the user state machine. Dragonboat prevents this by implementing the client session concept described in Diego Ongaro's PhD thesis.

An arbitrary number of Raft clusters can be launched across the network simultaneously to aggregate distributed processing and storage capacities. Users can also make membership change requests to add or remove nodes from any interested Raft cluster.

NodeHost APIs for making the above mentioned requests can be loosely classified into two categories, synchronous and asynchronous APIs. Synchronous APIs will not return until the completion of the requested operation; their method names all start with Sync. The asynchronous counterparts of such synchronous APIs, on the other hand, usually return immediately.
This allows users to concurrently initiate multiple such asynchronous operations to reduce the total amount of time required to complete all of them.

Dragonboat is a feature-complete Multi-Group Raft implementation - snapshotting, membership change, leadership transfer, non-voting members and disk-based state machines are all provided. Dragonboat is also extensively optimized. The Raft protocol implementation is fully pipelined, meaning proposals can start before the completion of previous proposals. This is critical for system throughput in high-latency environments. Dragonboat is also fully batched: internal operations are batched whenever possible to maximize the overall throughput.
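A condensed sketch of launching a node and making a synchronous proposal against the v3 API; the directories, addresses, IDs, and the toy state machine are all placeholders:

	package main

	import (
		"context"
		"io"
		"log"
		"time"

		"github.com/lni/dragonboat/v3"
		"github.com/lni/dragonboat/v3/config"
		sm "github.com/lni/dragonboat/v3/statemachine"
	)

	// exampleSM is a minimal in-memory IStateMachine; real applications
	// implement their own Update/Lookup/snapshot logic.
	type exampleSM struct{ count uint64 }

	func newExampleSM(clusterID, nodeID uint64) sm.IStateMachine { return &exampleSM{} }

	func (s *exampleSM) Update(data []byte) (sm.Result, error) {
		s.count++
		return sm.Result{Value: s.count}, nil
	}
	func (s *exampleSM) Lookup(query interface{}) (interface{}, error) { return s.count, nil }
	func (s *exampleSM) SaveSnapshot(w io.Writer, fc sm.ISnapshotFileCollection, done <-chan struct{}) error {
		_, err := w.Write([]byte{0})
		return err
	}
	func (s *exampleSM) RecoverFromSnapshot(r io.Reader, files []sm.SnapshotFile, done <-chan struct{}) error {
		return nil
	}
	func (s *exampleSM) Close() error { return nil }

	func main() {
		// Per-node Raft configuration for cluster 128, node 1.
		rc := config.Config{
			NodeID:             1,
			ClusterID:          128,
			ElectionRTT:        10,
			HeartbeatRTT:       1,
			CheckQuorum:        true,
			SnapshotEntries:    1000,
			CompactionOverhead: 100,
		}
		nhc := config.NodeHostConfig{
			NodeHostDir:    "/tmp/dragonboat-example",
			RTTMillisecond: 200,
			RaftAddress:    "localhost:63001",
		}
		nh, err := dragonboat.NewNodeHost(nhc)
		if err != nil {
			log.Fatal(err)
		}

		// Initial membership: NodeID -> RaftAddress.
		members := map[uint64]string{1: "localhost:63001"}
		if err := nh.StartCluster(members, false, newExampleSM, rc); err != nil {
			log.Fatal(err)
		}

		// Synchronous proposal: returns once the entry is committed and applied.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		cs := nh.GetNoOPSession(128)
		if _, err := nh.SyncPropose(ctx, cs, []byte("example-update")); err != nil {
			log.Fatal(err)
		}
	}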
Package logr defines a general-purpose logging API and abstract interfaces to back that API. Packages in the Go ecosystem can depend on this package, while callers can implement logging with whatever backend is appropriate.

Logging is done using a Logger instance. Logger is a concrete type with methods, which defers the actual logging to a LogSink interface. The main methods of Logger are Info() and Error(). Arguments to Info() and Error() are key/value pairs rather than printf-style formatted strings, emphasizing "structured logging". The sketch following this section contrasts the standard log package's printf-style calls with their logr equivalents. Info() and Error() are very similar, but they are separate methods so that LogSink implementations can choose to do things like attach additional information (such as stack traces) on calls to Error(). Error() messages are always logged, regardless of the current verbosity. If there is no error instance available, passing nil is valid.

Often we want to log information only when the application is in "verbose mode". To write log lines that are more verbose, Logger has a V() method. The higher the V-level of a log line, the less critical it is considered. Log lines with V-levels that are not enabled (as per the LogSink) will not be written. Level V(0) is the default, and logger.V(0).Info() has the same meaning as logger.Info(). Negative V-levels have the same meaning as V(0). Error messages do not have a verbosity level and are always logged.

Logger instances can have name strings so that all messages logged through that instance have additional context. For example, you might want to add a subsystem name. The WithName() method returns a new Logger, which can be passed to constructors or other functions for further use. Repeated use of WithName() will accumulate name "segments". These name segments will be joined in some way by the LogSink implementation. It is strongly recommended that name segments contain simple identifiers (letters, digits, and hyphen), and do not contain characters that could muddle the log output or confuse the joining operation (e.g. whitespace, commas, periods, slashes, brackets, quotes, etc).

Logger instances can store any number of key/value pairs, which will be logged alongside all messages logged through that instance. For example, you might want to create a Logger instance per managed object, using WithValues().

Logger has very few hard rules, with the goal that LogSink implementations might have a lot of freedom to differentiate. There are, however, some things to consider. The log message consists of a constant message attached to the log line. This should generally be a simple description of what's occurring, and should never be a format string. Variable information can then be attached using named values. Keys are arbitrary strings, but should generally be constant values. Values may be any Go value, but how the value is formatted is determined by the LogSink implementation.

Logger instances are meant to be passed around by value. Code that receives such a value can call its methods without having to check whether the instance is ready for use. The zero logger (= Logger{}) is identical to Discard() and discards all log entries. Code that receives a Logger by value can simply call it; the methods will never crash. For cases where passing a logger is optional, a pointer to Logger should be used.
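A sketch contrasting printf-style logging with logr's structured calls; the sink choice (logr.Discard here) stands in for any real LogSink implementation:

	package main

	import (
		"errors"
		"log"

		"github.com/go-logr/logr"
	)

	func demo(logger logr.Logger) {
		ns, name := "default", "example-pod"
		err := errors.New("connection refused")

		// Standard log package: printf-style formatting.
		log.Printf("pod %s/%s deleted, requeueing", ns, name)

		// logr: a constant message plus key/value pairs.
		logger.Info("pod deleted, requeueing", "namespace", ns, "pod", name)

		// Standard log for errors vs. logr's Error, which always logs.
		log.Printf("failed to reconcile %s: %v", name, err)
		logger.Error(err, "failed to reconcile", "pod", name)

		// V-levels: written only when the LogSink enables that verbosity.
		logger.V(1).Info("got a rare event, processing anyway")

		// Names and per-instance values add context to every line.
		podLogger := logger.WithName("controller").WithValues("pod", name)
		podLogger.Info("starting reconciliation")
	}

	func main() {
		// logr.Discard drops everything; swap in a real sink (e.g. zapr, stdr).
		demo(logr.Discard())
	}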
Keys are not strictly required to conform to any specification or regex, but following the package's recommended guidelines helps ensure that log data is processed properly regardless of the log implementation. For example, log implementations will try to output JSON data or will store data for later database (e.g. SQL) queries.

While users are generally free to use key names of their choice, it's generally best to avoid keys that are frequently used by implementations themselves. Implementations are encouraged to make use of those conventional keys when necessary (for example, in a pure-JSON output form, it would be necessary to represent at least the message and timestamp as ordinary named values).

Implementations may choose to give callers access to the underlying logging implementation. The recommended pattern for this is an Underlier interface, sketched after this section; Logger grants access to the sink to enable type assertions like this. Custom `With*` functions can be implemented by copying the complete Logger struct and replacing the sink in the copy. Don't use New to construct a new Logger with a LogSink retrieved from an existing Logger; source code attribution might not work correctly and unexported fields in Logger get lost.

Beware that the same LogSink instance may be shared by different logger instances. Calling functions that modify the LogSink will affect all of those.
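A sketch of that pattern; the sink type here is hypothetical, and only Logger.GetSink is part of the logr API:

	package logdemo

	import "github.com/go-logr/logr"

	// hypotheticalSink stands in for a real LogSink implementation.
	type hypotheticalSink struct{ /* ... LogSink methods elided ... */ }

	// Underlier is the recommended pattern: an interface the sink
	// implements that exposes the underlying logging object.
	type Underlier interface {
		GetUnderlying() *hypotheticalSink
	}

	func flush(logger logr.Logger) {
		// Logger grants access to the sink to enable type assertions like this.
		if u, ok := logger.GetSink().(Underlier); ok {
			_ = u.GetUnderlying() // call implementation-specific methods here
		}
	}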
Package configservice provides the API client, operations, and parameter types for AWS Config.

Config provides a way to keep track of the configurations of all the Amazon Web Services resources associated with your Amazon Web Services account. You can use Config to get the current and historical configurations of each Amazon Web Services resource and also to get information about the relationship between the resources. An Amazon Web Services resource can be an Amazon Elastic Compute Cloud (Amazon EC2) instance, an Elastic Block Store (EBS) volume, an elastic network interface (ENI), or a security group. For a complete list of resources currently supported by Config, see Supported Amazon Web Services resources.

You can access and manage Config through the Amazon Web Services Management Console, the Amazon Web Services Command Line Interface (Amazon Web Services CLI), the Config API, or the Amazon Web Services SDKs for Config. This reference guide contains documentation for the Config API and the Amazon Web Services CLI commands that you can use to manage Config. The Config API uses the Signature Version 4 protocol for signing requests. For more information about how to sign a request with this protocol, see Signature Version 4 Signing Process. For detailed information about Config features and their associated actions or commands, as well as how to work with the Amazon Web Services Management Console, see What Is Config in the Config Developer Guide.
Package amqp091 is an AMQP 0.9.1 client with RabbitMQ extensions.

Understand the AMQP 0.9.1 messaging model by reviewing these links first. Much of the terminology in this library directly relates to AMQP concepts. Most other broker clients publish to queues, but in AMQP, clients publish to Exchanges instead. AMQP is programmable, meaning that both the producers and consumers agree on the configuration of the broker, instead of requiring an operator or system configuration that declares the logical topology in the broker. The routing between producers and consumer queues is via Bindings. These bindings form the logical topology of the broker.

In this library, a message sent from a publisher is called a "Publishing" and a message received by a consumer is called a "Delivery". The fields of Publishings and Deliveries are close but not exact mappings to the underlying wire format, to maintain stronger types. Many other libraries will combine message properties with message headers. In this library, the well-known message properties are strongly typed fields on the Publishings and Deliveries, whereas the user-defined headers are in the Headers field.

The method naming closely matches the protocol's method name, with positional parameters mapping to named protocol message fields. The motivation here is to present a comprehensive view over all possible interactions with the server. Generally, methods that map to protocol methods of the "basic" class will be elided in this interface, and "select" methods of various channel mode selectors will be elided, for example Channel.Confirm and Channel.Tx.

The library is intentionally designed to be synchronous, where responses for each protocol message are required to be received in an RPC manner. Some methods have a noWait parameter, like Channel.QueueDeclare, and some methods are asynchronous, like Channel.Publish. The error values should still be checked for these methods, as they will indicate IO failures such as when the underlying connection closes.

Clients of this library may be interested in receiving some of the protocol messages other than Deliveries, like basic.ack methods while a channel is in confirm mode. The Notify* methods with Connection and Channel receivers model the pattern of asynchronous events like closes due to exceptions, or messages that are sent out of band from an RPC call, like basic.ack or basic.flow. Any asynchronous events, including Deliveries and Publishings, must always have a receiver until the corresponding chans are closed. Without asynchronous receivers, the synchronous methods will block.

It's important as a client of an AMQP topology to ensure the state of the broker matches your expectations. For both publish and consume use cases, make sure you declare the queues, exchanges and bindings you expect to exist prior to calling Channel.PublishWithContext or Channel.Consume; a sketch follows this section.

When Dial encounters an amqps:// scheme, it will use the zero value of a tls.Config. This will only perform server certificate and host verification. Use DialTLS when you wish to provide a client certificate (recommended), include a private certificate authority's certificate in the cert chain for server validity, or run insecure by not verifying the server certificate. DialTLS will use the provided tls.Config when it encounters an amqps:// scheme and will dial a plain connection when it encounters an amqp:// scheme.
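A sketch of declaring the topology before publishing, per the advice above; the connection URL and queue name are placeholders:

	package main

	import (
		"context"
		"log"

		amqp "github.com/rabbitmq/amqp091-go"
	)

	func main() {
		conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ch, err := conn.Channel()
		if err != nil {
			log.Fatal(err)
		}
		defer ch.Close()

		// Declare the queue we expect to exist before publishing to it.
		q, err := ch.QueueDeclare("tasks", true, false, false, false, nil)
		if err != nil {
			log.Fatal(err)
		}

		// Publish to the default exchange; the routing key is the queue name.
		err = ch.PublishWithContext(context.Background(), "", q.Name, false, false,
			amqp.Publishing{
				ContentType: "text/plain",
				Body:        []byte("hello"),
			})
		if err != nil {
			log.Fatal(err)
		}
	}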
SSL/TLS in RabbitMQ is documented here: http://www.rabbitmq.com/ssl.html

In order to be notified when a connection or channel gets closed, both structures offer the possibility to register channels using the Channel.NotifyClose and Connection.NotifyClose functions. No errors will be sent in case of a graceful connection close. In case of a non-graceful closure due to e.g. a network issue, or a forced connection closure from the Management UI, the error will be notified synchronously by the library. The library sends to notification channels just once. After sending a notification to all channels, the library closes all registered notification channels. After receiving a notification, the application should create and register a new channel. To avoid deadlocks in the library, it is necessary to consume from these channels. This could be done inside a different goroutine, with a select listening on the two channels inside a for loop, as in the sketch following this section. It is strongly recommended to use buffered channels to avoid deadlocks inside the library.

Using Channel.NotifyPublish allows the caller of the library to be notified, through a go channel, when a message has been received and confirmed by the broker. It's advisable to wait for all Confirmations to arrive before calling Channel.Close or Connection.Close. It is also necessary to consume from this channel until it gets closed. The library sends synchronously to the registered channel. It is advisable to use a buffered channel, with capacity set to the maximum acceptable number of unconfirmed messages. It is important to consume from the confirmation channel at all times, in order to avoid deadlocks in the library.

The package examples include a Client object that wraps this library. It automatically reconnects when the connection fails, and blocks all pushes until the connection succeeds. It also confirms every outgoing message, so none are lost. It doesn't automatically ack each message, but leaves that to the parent process, since it is usage-dependent. Try running it in one terminal, and rabbitmq-server in another. Stop and restart RabbitMQ to see how the queue reacts.
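A sketch of that consumption loop, using buffered notification channels as recommended; conn and ch are the connection and channel from the previous sketch:

	package amqpdemo

	import (
		"log"

		amqp "github.com/rabbitmq/amqp091-go"
	)

	func watchClose(conn *amqp.Connection, ch *amqp.Channel) {
		// Buffered channels avoid deadlocks inside the library.
		connClose := conn.NotifyClose(make(chan *amqp.Error, 1))
		chanClose := ch.NotifyClose(make(chan *amqp.Error, 1))

		go func() {
			for {
				select {
				case err, ok := <-connClose:
					if !ok {
						return // library closed the notification channel
					}
					log.Println("connection closed:", err)
				case err, ok := <-chanClose:
					if !ok {
						return
					}
					log.Println("channel closed:", err)
				}
			}
		}()
	}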
Package autorest implements an HTTP request pipeline suitable for use across multiple go-routines and provides the shared routines relied on by AutoRest (see https://github.com/Azure/autorest/) generated Go code.

The package breaks sending and responding to HTTP requests into three phases: Preparing, Sending, and Responding. A typical pattern is to Prepare a request with a chain of decorators, Send it, and then Respond to the result with another chain. Each phase relies on decorators to modify and / or manage processing. Decorators may first modify and then pass the data along, pass the data first and then modify the result, or wrap themselves around passing the data (such as a logger might do). Decorators run in the order provided. For example, chaining AsGet, WithBaseURL, and WithPath decorators composes the final request URL from the base URL and the path segments, as in the sketch following this section.

Preparers and Responders may be shared and re-used (assuming the underlying decorators support sharing and re-use). Performant use is obtained by creating one or more Preparers and Responders shared among multiple go-routines, and a single Sender shared among multiple sending go-routines, all bound together by means of input / output channels.

Decorators hold their passed state within a closure (such as the path components in the example below). Be careful to share Preparers and Responders only in a context where such held state applies. For example, it may not make sense to share a Preparer that applies a query string from a fixed set of values. Similarly, sharing a Responder that reads the response body into a passed struct (e.g., ByUnmarshallingJson) is likely incorrect.

Lastly, the Swagger specification (https://swagger.io) that drives AutoRest (https://github.com/Azure/autorest/) precisely defines two date forms: date and date-time. The github.com/Azure/go-autorest/autorest/date package provides time.Time derivations to ensure correct parsing and formatting.

Errors raised by autorest objects and methods will conform to the autorest.Error interface. See the included examples for more detail. For details on the suggested use of this package by generated clients, see the Client described below.
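A sketch of that decorator chain; per the package docs, chaining WithBaseURL and WithPath like this yields https://microsoft.com/a/b/c:

	package main

	import (
		"fmt"
		"net/http"

		"github.com/Azure/go-autorest/autorest"
	)

	func main() {
		// Decorators run in the order provided: method, base URL, then each
		// path segment appended in turn.
		req, err := autorest.Prepare(&http.Request{},
			autorest.AsGet(),
			autorest.WithBaseURL("https://microsoft.com/"),
			autorest.WithPath("a"),
			autorest.WithPath("b"),
			autorest.WithPath("c"))
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(req.URL) // https://microsoft.com/a/b/c
	}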
Package guardduty provides the API client, operations, and parameter types for Amazon GuardDuty. Amazon GuardDuty is a continuous security monitoring service that analyzes and processes the following foundational data sources: VPC flow logs, Amazon Web Services CloudTrail management event logs, CloudTrail S3 data event logs, EKS audit logs, DNS logs, Amazon EBS volume data, and runtime activity belonging to container workloads, such as Amazon EKS, Amazon ECS (including Amazon Web Services Fargate), and Amazon EC2 instances. It uses threat intelligence feeds, such as lists of malicious IPs and domains, and machine learning to identify unexpected, potentially unauthorized, and malicious activity within your Amazon Web Services environment. This can include issues like escalations of privileges, uses of exposed credentials, communication with malicious IPs or domains, or the presence of malware on your Amazon EC2 instances and container workloads. For example, GuardDuty can detect compromised EC2 instances and container workloads serving malware or mining bitcoin. GuardDuty also monitors Amazon Web Services account access behavior for signs of compromise, such as unauthorized infrastructure deployments (like EC2 instances deployed in a Region that has never been used) or unusual API calls (like a password policy change to reduce password strength). GuardDuty informs you about the status of your Amazon Web Services environment by producing security findings that you can view in the GuardDuty console or through Amazon EventBridge. For more information, see the Amazon GuardDuty User Guide.
Package datapipeline provides the API client, operations, and parameter types for AWS Data Pipeline. AWS Data Pipeline configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so that your application can focus on processing the data. AWS Data Pipeline provides a JAR implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management. AWS Data Pipeline implements two main sets of functionality. Use the first set to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. Use the second set in your task runner application to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service.
Package iot provides the API client, operations, and parameter types for AWS IoT. IoT provides secure, bi-directional communication between Internet-connected devices (such as sensors, actuators, embedded devices, or smart appliances) and the Amazon Web Services cloud. You can discover your custom IoT-Data endpoint to communicate with, configure rules for data processing and integration with other services, organize resources associated with each device (Registry), configure logging, and create and manage policies and credentials to authenticate devices. The service endpoints that expose this API are listed in Amazon Web Services IoT Core Endpoints and Quotas. You must use the endpoint for the region that has the resources you want to access. The service name used by Amazon Web Services Signature Version 4 to sign the request is: execute-api. For more information about how IoT works, see the Developer Guide. For information about how to use the credentials provider for IoT, see Authorizing Direct Calls to Amazon Web Services Services.
Pact Go enables consumer-driven contract testing, providing a mock service and DSL for the consumer project, and interaction playback and verification for the service provider project. Consumer side Pact testing is an isolated test that ensures a given component is able to collaborate with another (remote) component. Pact will automatically start a Mock server in the background that will act as the collaborator's test double. This implies that any interactions expected on the Mock server will be validated, meaning a test will fail if all interactions were not completed, or if unexpected interactions were found: A typical consumer-side test would look something like the sketch below. If this test completed successfully, a Pact file should have been written to ./pacts/my_consumer-my_provider.json containing all of the interactions expected to occur between the Consumer and Provider. In addition to verbatim value matching, you have 3 useful matching functions in the `dsl` package that can increase expressiveness and reduce brittle test cases. Here is a complex example that shows how all 3 terms can be used together: This example will result in a response body from the mock server that looks like: See the examples in the dsl package and the matcher tests (https://github.com/pact-foundation/pact-go/blob/master/dsl/matcher_test.go) for more matching examples. NOTE: You will need to use valid Ruby regular expressions (http://ruby-doc.org/core-2.1.5/Regexp.html) and double escape backslashes. Read more about flexible matching (https://github.com/pact-foundation/pact-ruby/wiki/Regular-expressions-and-type-matching-with-Pact). Provider side Pact testing involves verifying that the contract - the Pact file - can be satisfied by the Provider. A typical Provider side test would look something like: The `VerifyProvider` will handle all verifications, treating them as subtests and giving you granular test reporting. If you don't like this behaviour, you may call `VerifyProviderRaw` directly and handle the errors manually. Note that `PactURLs` may be a list of local pact files or remote based urls (possibly from a Pact Broker - http://docs.pact.io/documentation/sharings_pacts.html). Pact reads the specified pact files (from remote or local sources) and replays the interactions against a running Provider. If all of the interactions are met, we can say that both sides of the contract are satisfied and the test passes. When validating a Provider, you have 3 options to provide the Pact files: 1. Use "PactURLs" to specify the exact set of pacts to be replayed: Options 2 and 3 are particularly useful when you want to validate that your Provider is able to meet the contracts of what's in Production and also the latest in development. See this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) for more on this strategy. Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. So how do you test a request that requires data to exist on the provider? Provider states are how you achieve this using Pact. Provider states also allow the consumer to make the same request with different expected responses (e.g. different response codes, or the same resource with a different subset of data). States are configured on the consumer side when you issue a dsl.Given() clause with a corresponding request/response pair.
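A hedged sketch of such a consumer-side test, assuming the pact-go v1 'dsl' package; the consumer/provider names, path, and body are placeholders:

    package consumer_test

    import (
        "fmt"
        "net/http"
        "testing"

        "github.com/pact-foundation/pact-go/dsl"
    )

    func TestUserAPIConsumer(t *testing.T) {
        pact := dsl.Pact{
            Consumer: "my_consumer",
            Provider: "my_provider",
        }
        defer pact.Teardown()

        pact.
            AddInteraction().
            Given("User foo exists").
            UponReceiving("A request for user foo").
            WithRequest(dsl.Request{
                Method: "GET",
                Path:   dsl.String("/users/foo"),
            }).
            WillRespondWith(dsl.Response{
                Status:  200,
                Headers: dsl.MapMatcher{"Content-Type": dsl.String("application/json")},
                Body:    dsl.Like(map[string]string{"name": "foo"}),
            })

        // The test body makes the real HTTP call against the mock server.
        test := func() error {
            _, err := http.Get(fmt.Sprintf("http://localhost:%d/users/foo", pact.Server.Port))
            return err
        }

        // Verify runs the test and fails if any expected interaction is
        // missing or an unexpected interaction occurred.
        if err := pact.Verify(test); err != nil {
            t.Fatal(err)
        }
    }

On success, a Pact file like ./pacts/my_consumer-my_provider.json is written, as described above.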
Configuring the provider is a little more involved, and (currently) requires running an API endpoint to configure any provider states (http://docs.pact.io/documentation/provider_states.html) during the verification process. The option you must provide to the dsl.VerifyRequest is shown in the sketch below. An example route using the standard Go http package might look like the one in the same sketch. See the examples or read more at http://docs.pact.io/documentation/provider_states.html. See the Pact Broker (http://docs.pact.io/documentation/sharings_pacts.html) documentation for more details on the Broker, and this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) on how to make it work for you. Publishing using Go code: Publishing from the CLI: Use a cURL request like the following to PUT the pact to the right location, specifying your consumer name, provider name and consumer version. The following flags are required to use basic authentication when publishing or retrieving Pact files to/from a Pact Broker: Pact Go uses a simple log utility (logutils - https://github.com/hashicorp/logutils) to filter log messages. The CLI already contains flags to manage this; should you want to control the log level in your tests, you can set it like so:
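A hedged sketch of provider verification with a provider-states endpoint, assuming the pact-go v1 'dsl' and 'types' packages (older v1 releases exposed the states hook as ProviderStatesSetupURL; newer releases use state handlers instead); ports and paths are placeholders:

    package provider_test

    import (
        "net/http"
        "testing"

        "github.com/pact-foundation/pact-go/dsl"
        "github.com/pact-foundation/pact-go/types"
    )

    func TestProvider(t *testing.T) {
        pact := dsl.Pact{Provider: "my_provider"}

        // Start the provider API, including a provider-states endpoint
        // that puts the provider into the state named in each incoming
        // request (parsing of the posted state is elided here).
        mux := http.NewServeMux()
        mux.HandleFunc("/setup", func(w http.ResponseWriter, r *http.Request) {
            // Seed or reset data for the requested state here.
            w.WriteHeader(http.StatusOK)
        })
        go http.ListenAndServe(":8000", mux)

        _, err := pact.VerifyProvider(t, types.VerifyRequest{
            ProviderBaseURL:        "http://localhost:8000",
            PactURLs:               []string{"./pacts/my_consumer-my_provider.json"},
            ProviderStatesSetupURL: "http://localhost:8000/setup",
        })
        if err != nil {
            t.Fatal(err)
        }
    }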
Package xray provides the API client, operations, and parameter types for AWS X-Ray. Amazon Web Services X-Ray provides APIs for managing debug traces and retrieving service maps and other data created by processing those traces.
Package expect provides an expect-like interface to automate control of applications. It is unlike expect in that it does not spawn or manage process lifecycle. This package only focuses on expecting output and sending input through its pseudoterminal.
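A minimal sketch, assuming this is the Netflix/go-expect package: the caller spawns and owns the process, attaching the console's pseudoterminal to it; the command used here is a placeholder:

    package main

    import (
        "log"
        "os/exec"

        expect "github.com/Netflix/go-expect"
    )

    func main() {
        c, err := expect.NewConsole()
        if err != nil {
            log.Fatal(err)
        }
        defer c.Close()

        // The package does not spawn the process; the caller does,
        // wiring the console's pseudoterminal to it.
        cmd := exec.Command("cat")
        cmd.Stdin = c.Tty()
        cmd.Stdout = c.Tty()
        cmd.Stderr = c.Tty()
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }

        c.SendLine("hello world")
        if _, err := c.ExpectString("hello world"); err != nil {
            log.Fatal(err)
        }
        c.Tty().Close()
        cmd.Wait()
    }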
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK: the example below (see the sketch after this section) creates an identityClient struct with the default configuration and then uses the identityClient to list availability domains, printing them to stdout. More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any HTTP request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, so any changes made to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw HTTP requests, responses and potential errors when (un)marshalling requests and responses.
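A hedged sketch of the getting-started example described above (listing availability domains); the compartment OCID is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        // common.String is one of the pointer helpers mentioned above:
        // it returns a *string for a primitive-typed input field.
        req := identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String("ocid1.tenancy.oc1..example"),
        }

        resp, err := client.ListAvailabilityDomains(context.Background(), req)
        if err != nil {
            log.Fatal(err)
        }
        for _, ad := range resp.Items {
            fmt.Println(*ad.Name)
        }
    }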
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The following are possible values for the "OCI_GO_SDK_DEBUG" variable: 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above, then the default logging level is "info". If the environment variable is not present, then no logging messages are emitted. The default destination for logging is stderr; if you want to write logs to a file, you can set the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are: 1. "file" or "f" saves all logging output to a file 2. "combine" or "c" sends all logging output to both stderr and a file. You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there is a network issue, etc. This can be accomplished by using the RequestMetadata.RetryPolicy. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, then you can set this up in the following ways: 1. Configuring the environment variable as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. In order to modify the underlying Transport struct in HttpClient, you can do something similar to the sample shown in the sketch below (for the audit service client). The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value.
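Returning to the proxy setup mentioned above, here is a hedged sketch of modifying the underlying Transport for a service client; it assumes the client's default HTTPClient dispatcher is a *http.Client, and the proxy URL is a placeholder:

    package main

    import (
        "log"
        "net/http"
        "net/url"

        "github.com/oracle/oci-go-sdk/audit"
        "github.com/oracle/oci-go-sdk/common"
    )

    func main() {
        client, err := audit.NewAuditClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        proxyURL, err := url.Parse("http://www.proxy-example.com:80")
        if err != nil {
            log.Fatal(err)
        }
        // The HTTPClient field is an interface; by default it is backed
        // by a *http.Client, whose Transport we can swap out.
        if httpClient, ok := client.HTTPClient.(*http.Client); ok {
            httpClient.Transport = &http.Transport{
                Proxy: http.ProxyURL(proxyURL),
            }
        }
    }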
When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response. If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com. oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go. Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host (see the sketch below). If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable. Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to: https://github.com/oracle/oci-go-sdk#help
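A hedged sketch of the client.Host override mentioned above; the endpoint shown is a placeholder, so supply the appropriate endpoint for your region and realm:

    package main

    import (
        "log"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }
        // Overwrite the target host for a region this SDK version
        // does not yet know about (placeholder endpoint).
        client.Host = "https://identity.us-newregion-1.example-realm.com"
    }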
Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API, so it is helpful to be familiar with http://www.sqlite.org/c3ref/intro.html. An SQLite connection is represented by a *sqlite.Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (using Open to create a *sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database. This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache: https://www.sqlite.org/sharedcache.html The implementation automatically handles shared cache locking; see the documentation on Stmt.Step for details. The optional SQLite3 extensions compiled in are: FTS5, RTree, JSON1, Session, and GeoPoly. This is not a database/sql driver. Statements are prepared with the Prepare and PrepareTransient methods. When using Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write the pattern shown in the sketch below. After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement. The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or, more commonly, with a large number of moderate-sized blobs concurrently.) To write a blob, first use an INSERT statement to set the size of the blob and assign a rowid. Use BindZeroBlob or SetZeroBlob to set the size of myblob. Then you can open the blob with OpenBlob. Every connection can have a done channel associated with it using the SetInterrupt method. This is typically the channel returned by a context.Context Done method. For example, a timeout can be associated with a connection session. As database connections are long-lived, the SetInterrupt method can be called multiple times to reset the associated lifetime. When using pools, the shorthand for associating a context with a connection is: SQLite transactions have to be managed manually with this package by directly calling BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK. The sqlitex package has a Savepoint function that helps automate this. Using a Pool to execute SQL in a concurrent HTTP handler. For helper functions that make some kinds of statements easier to write, see the sqlitex package.
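A hedged sketch of the pooled, cached-Prepare pattern described above; the "people" table is a placeholder, and the exact Pool.Get signature has varied between releases of this package:

    package main

    import (
        "context"
        "log"

        "crawshaw.io/sqlite/sqlitex"
    )

    func main() {
        pool, err := sqlitex.Open("./example.db", 0, 10)
        if err != nil {
            log.Fatal(err)
        }
        defer pool.Close()

        conn := pool.Get(context.Background())
        defer pool.Put(conn)

        // Prep caches the statement on the connection, keyed by the query
        // string, so on a warmed-up connection this is just a map lookup.
        stmt := conn.Prep("SELECT name FROM people WHERE id = $id;")
        stmt.SetInt64("$id", 42)
        for {
            hasRow, err := stmt.Step()
            if err != nil {
                log.Fatal(err)
            }
            if !hasRow {
                break
            }
            log.Println(stmt.GetText("name"))
        }
    }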
Package support provides the API client, operations, and parameter types for AWS Support. The Amazon Web Services Support API Reference is intended for programmers who need detailed information about the Amazon Web Services Support operations and data types. You can use the API to manage your support cases programmatically. The Amazon Web Services Support API uses HTTP methods that return results in JSON format. You must have a Business, Enterprise On-Ramp, or Enterprise Support plan to use the Amazon Web Services Support API. If you call the Amazon Web Services Support API from an account that doesn't have a Business, Enterprise On-Ramp, or Enterprise Support plan, the SubscriptionRequiredException error message appears. For information about changing your support plan, see Amazon Web Services Support. You can also use the Amazon Web Services Support API to access features for Trusted Advisor. You can return a list of checks and their descriptions, get check results, specify checks to refresh, and get the refresh status of checks. You can manage your support cases with the following Amazon Web Services Support API operations: The CreateCase, DescribeCases, DescribeAttachment, and ResolveCase operations create Amazon Web Services Support cases, retrieve information about cases, and resolve cases. The DescribeCommunications, AddCommunicationToCase, and AddAttachmentsToSet operations retrieve and add communications and attachments to Amazon Web Services Support cases. The DescribeServices and DescribeSeverityLevels operations return Amazon Web Service names, service codes, service categories, and problem severity levels. You use these values when you call the CreateCase operation. You can also use the Amazon Web Services Support API to call the Trusted Advisor operations. For more information, see Trusted Advisor in the Amazon Web Services Support User Guide. For authentication of requests, Amazon Web Services Support uses Signature Version 4 Signing Process. For more information about this service and the endpoints to use, see About the Amazon Web Services Support API in the Amazon Web Services Support User Guide.
Package transfer provides the API client, operations, and parameter types for AWS Transfer Family. Transfer Family is a fully managed service that enables the transfer of files over the File Transfer Protocol (FTP), File Transfer Protocol over SSL (FTPS), or Secure Shell (SSH) File Transfer Protocol (SFTP) directly into and out of Amazon Simple Storage Service (Amazon S3) or Amazon EFS. Additionally, you can use Applicability Statement 2 (AS2) to transfer files into and out of Amazon S3. Amazon Web Services helps you seamlessly migrate your file transfer workflows to Transfer Family by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3, you can use it with Amazon Web Services services for processing, analytics, machine learning, and archiving. Getting started with Transfer Family is easy since there is no infrastructure to buy and set up.
Package health provides the API client, operations, and parameter types for AWS Health APIs and Notifications. The Health API provides access to the Health information that appears in the Health Dashboard. You can use the API operations to get information about events that might affect your Amazon Web Services services and resources. You must have a Business, Enterprise On-Ramp, or Enterprise Support plan from Amazon Web Services Support to use the Health API. If you call the Health API from an Amazon Web Services account that doesn't have a Business, Enterprise On-Ramp, or Enterprise Support plan, you receive a SubscriptionRequiredException error. For API access, you need an access key ID and a secret access key. Use temporary credentials instead of long-term access keys when possible. Temporary credentials include an access key ID, a secret access key, and a security token that indicates when the credentials expire. For more information, see Best practices for managing Amazon Web Services access keys in the Amazon Web Services General Reference. You can use the Health endpoint health.us-east-1.amazonaws.com (HTTPS) to call the Health API operations. Health supports a multi-Region application architecture and has two regional endpoints in an active-passive configuration. You can use the high availability endpoint example to determine which Amazon Web Services Region is active, so that you can get the latest information from the API. For more information, see Accessing the Health API in the Health User Guide. For authentication of requests, Health uses the Signature Version 4 Signing Process. If your Amazon Web Services account is part of Organizations, you can use the Health organizational view feature. This feature provides a centralized view of Health events across all accounts in your organization. You can aggregate Health events in real time to identify accounts in your organization that are affected by an operational event or get notified of security vulnerabilities. Use the organizational view API operations to enable this feature and return event information. For more information, see Aggregating Health events in the Health User Guide. When you use the Health API operations to return Health events, see the following recommendations: Use the eventScopeCode parameter to specify whether to return Health events that are public or account-specific. Use pagination to view all events from the response. For example, if you call the DescribeEventsForOrganization operation to get all events in your organization, you might receive several page results. Specify the nextToken in the next request to return more results.
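A hedged sketch of the pagination recommendation above, using this package's DescribeEventsForOrganization operation and passing NextToken back into each subsequent request:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/health"
    )

    func main() {
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := health.NewFromConfig(cfg)

        var nextToken *string
        for {
            out, err := client.DescribeEventsForOrganization(context.TODO(),
                &health.DescribeEventsForOrganizationInput{NextToken: nextToken})
            if err != nil {
                log.Fatal(err)
            }
            for _, e := range out.Events {
                fmt.Println(*e.Arn)
            }
            // A nil NextToken means the last page has been reached.
            if out.NextToken == nil {
                break
            }
            nextToken = out.NextToken
        }
    }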
Package skipper provides an HTTP routing library with flexible configuration as well as runtime updates of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters; however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took its core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request and forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request, providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario is using a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the routes' definitions. Such definitions may have the following conditions: - method - path (optionally with wildcards) - path regular expressions - host regular expressions - headers - header regular expressions It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response.
They are used to modify request and response attributes, such as headers, or to execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept or require parameters that are set specifically for the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of internal redirect. A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend URL explicitly. Route definitions consist of the following: - request matching conditions (predicates) - filter chain (optional) - backend The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients: - Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ . - Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper. - etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package. - static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup and doesn't support runtime updates. Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests.
For details, see: https://godoc.org/github.com/zalando/skipper/circuit. Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired into the default executable as a command line flag as well. E.g. EtcdUrls becomes -etcd-urls as a comma-separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins; however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created that can be referenced in eskip definitions with the name 'Random': To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances while the raw route definitions are processed. Example, hellofilter.go (a sketch reconstructing it appears after this section): The above example creates a filter specification, and in the routes where it is included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener, but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scaling is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.
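A hedged sketch reconstructing the 'hellofilter.go' example named above, implementing the Spec interface of the filters package:

    package main

    import (
        "fmt"

        "github.com/zalando/skipper/filters"
    )

    type helloSpec struct{}

    type helloFilter struct{ who string }

    // Name is the filter name referenced in eskip route definitions.
    func (s *helloSpec) Name() string { return "hello" }

    // CreateFilter builds a concrete filter instance from the arguments
    // given in the route definition.
    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        if len(args) == 0 {
            return nil, filters.ErrInvalidFilterParameters
        }
        if who, ok := args[0].(string); ok {
            return &helloFilter{who: who}, nil
        }
        return nil, filters.ErrInvalidFilterParameters
    }

    // Request is a no-op here; this filter only modifies the response.
    func (f *helloFilter) Request(ctx filters.FilterContext) {}

    // Response sets the X-Hello header on every response of the route.
    func (f *helloFilter) Response(ctx filters.FilterContext) {
        ctx.Response().Header.Set("X-Hello", fmt.Sprintf("Hello, %s!", f.who))
    }

Such a spec can then be registered via the 'Run' options (for example in the CustomFilters option), so that routes may reference hello(...) in their filter chains.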
Package goparquet is an implementation of the parquet file format in Go. It provides functionality to both read and write parquet files, as well as high-level functionality to manage the data schema of parquet files, to directly write Go objects to parquet files using automatic or custom marshalling, and to read records from parquet files into Go objects using automatic or custom unmarshalling. parquet is a file format to store nested data structures in a flat columnar format. By storing in a column-oriented way, it allows for efficient reading of individual columns without having to read and decode complete rows. This allows for efficient reading and faster processing when using the file format in conjunction with distributed data processing frameworks like Apache Hadoop or distributed SQL query engines like Presto and AWS Athena. This particular implementation is divided into several packages. The top-level package that you're currently viewing is the low-level implementation of the file format. It is accompanied by the sub-packages parquetschema and floor. parquetschema provides functionality to parse textual schema definitions as well as the data types to manually or programmatically construct schema definitions by other means that are open to the user. The textual schema definition format is based on the barely documented schema definition format that is implemented in the parquet Java implementation. See the parquetschema sub-package for further documentation on how to use this package and the grammar of the schema definition format, as well as examples. floor is a high-level wrapper around the low-level package. It provides functionality to open parquet files to read from them or write to them. When reading from parquet files, floor takes care of automatically unmarshalling the low-level data into the user-provided Go object. When writing to parquet files, user-provided Go objects are first marshalled to a low-level data structure that is then written to the parquet file. These mechanisms allow you to directly read and write Go objects without having to deal with the details of the low-level parquet format. Alternatively, marshalling and unmarshalling can be implemented in a custom manner, giving the user maximum flexibility in case of disparities between the parquet schema definition and the actual Go data structure. For more information, please refer to the floor sub-package's documentation. To aid in working with parquet files, this package also provides a command-line tool named "parquet-tool" that allows you to inspect a parquet file's schema, metadata, row count and content, as well as to merge and split parquet files. When operating with parquet files, most users should be able to cover their regular use cases of reading and writing files using just the high-level floor package as well as the parquetschema package. Only users with more specialized requirements for working with parquet files should need to use this low-level package directly. To write to a parquet file, the type provided by this package is the FileWriter. Create a new *FileWriter object using the NewFileWriter function (see the sketch below). You have a number of options available with which you can influence the FileWriter's behaviour. You can use these options to e.g. set metadata, the compression algorithm to use, the schema definition to use, or whether the data should be written in the V2 format.
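A hedged sketch of creating a FileWriter with a textual schema definition, assuming the fraugster/parquet-go import paths; the schema, file name, and values are placeholders:

    package main

    import (
        "log"
        "os"

        goparquet "github.com/fraugster/parquet-go"
        "github.com/fraugster/parquet-go/parquet"
        "github.com/fraugster/parquet-go/parquetschema"
    )

    func main() {
        schema, err := parquetschema.ParseSchemaDefinition(
            `message example {
                required binary name (STRING);
                required int64 age;
            }`)
        if err != nil {
            log.Fatal(err)
        }

        f, err := os.Create("example.parquet")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        fw := goparquet.NewFileWriter(f,
            goparquet.WithSchemaDefinition(schema),
            goparquet.WithCompressionCodec(parquet.CompressionCodec_SNAPPY),
        )

        // The low-level API takes records as map[string]interface{};
        // string columns are provided as []byte.
        if err := fw.AddData(map[string]interface{}{
            "name": []byte("fraugster"),
            "age":  int64(42),
        }); err != nil {
            log.Fatal(err)
        }

        // Close flushes the remaining row group and writes the footer.
        if err := fw.Close(); err != nil {
            log.Fatal(err)
        }
    }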
If you didn't set a schema definition, you then need to manually create columns using the functions NewDataColumn, NewListColumn and NewMapColumn, and then add them to the FileWriter by using the AddColumn method. To further structure your data into groups, use AddGroup to create groups. When you add columns to groups, you need to provide the full column name using dotted notation (e.g. "groupname.fieldname") to AddColumn. Using the AddData method, you can then add records. The provided data is of type map[string]interface{}. This data can be nested: to provide data for a repeated field, the data type to use for the map value is []interface{}. When the provided data is a group, the data type for the group itself again needs to be map[string]interface{}. The data within a parquet file is divided into row groups of a certain size. You can either set the desired row group size as a FileWriterOption, or you can manually check the estimated data size of the current row group using the CurrentRowGroupSize method, and use FlushRowGroup to write the data to disk and start a new row group. Please note that CurrentRowGroupSize only estimates the _uncompressed_ data size. If you've enabled compression, it is impossible to predict the compressed data size, so the actual row groups written to disk may be a lot smaller than uncompressed, depending on how efficiently your data can be compressed. When you're done writing, always use the Close method to flush any remaining data and to write the file's footer. To read from files, create a FileReader object using the NewFileReader function (see the sketch below). You can optionally provide a list of columns to read. If these are set, only these columns are read from the file, while all other columns are ignored. If no columns are provided, then all columns are read. With the FileReader, you can then go through the row groups (using PreLoad and SkipRowGroup), and iterate through the row data in each row group (using NextRow). To find out how many rows to expect in total and per row group, use the NumRows and RowGroupNumRows methods. The number of row groups can be determined using the RowGroupCount method.
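A matching hedged sketch of reading the rows back with a FileReader, continuing the writer sketch above (file name and column names are the same placeholders):

    package main

    import (
        "fmt"
        "log"
        "os"

        goparquet "github.com/fraugster/parquet-go"
    )

    func main() {
        f, err := os.Open("example.parquet")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Passing column names restricts reading to those columns;
        // with no names, all columns are read.
        fr, err := goparquet.NewFileReader(f, "name", "age")
        if err != nil {
            log.Fatal(err)
        }

        for i := int64(0); i < fr.NumRows(); i++ {
            row, err := fr.NextRow()
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("name = %s age = %d\n", row["name"], row["age"])
        }
    }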
Package outposts provides the API client, operations, and parameter types for AWS Outposts. Amazon Web Services Outposts is a fully managed service that extends Amazon Web Services infrastructure, APIs, and tools to customer premises. By providing local access to Amazon Web Services managed infrastructure, Amazon Web Services Outposts enables customers to build and run applications on premises using the same programming interfaces as in Amazon Web Services Regions, while using local compute and storage resources for lower latency and local data processing needs.
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK: the example below creates an identityClient struct with the default configuration and then uses the identityClient to list availability domains, printing them to stdout. More examples can be found in the SDK Github repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any HTTP request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, so any changes made to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw HTTP requests, responses and potential errors when (un)marshalling requests and responses.
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The following are possible values for the "OCI_GO_SDK_DEBUG" variable: 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above, then the default logging level is "info". If the environment variable is not present, then no logging messages are emitted. The default destination for logging is stderr; if you want to write logs to a file, you can set the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are: 1. "file" or "f" saves all logging output to a file 2. "combine" or "c" sends all logging output to both stderr and a file. You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there is a network issue, etc. This can be accomplished by using RequestMetadata.RetryPolicy (request-level configuration); alternatively, a global (all services) or client-level RetryPolicy configuration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface. The retry behavior precedence (highest to lowest) is defined as follows: The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). Default Retry-able Errors: Below is the list of default retry-able errors for which retry attempts should be made. The following errors should be retried (with backoff): HTTP Code / Customer-facing Error Code. Apart from the above errors, retries should also be attempted for the following client-side errors: 1. HTTP connection timeout 2. Request connection errors 3. Request exceptions 4. Other timeouts (like read timeout). The above errors can be avoided through retrying and hence are classified as the default retry-able errors. Additionally, retries should also be made for circuit breaker exceptions (exceptions raised by a circuit breaker in an open state). Default Termination Strategy: The termination strategy defines when the SDK should stop attempting to retry; in other words, it is the deadline for retries. The OCI SDKs stop retrying the operation after 7 retry attempts. This means the SDK will have waited roughly 98 seconds (~1.5 minutes) in total delays. The SDK will make a total of 8 attempts
(1 initial request + 7 retries). Default Delay Strategy: The delay strategy defines the amount of time to wait between each of the retry attempts. The default delay strategy chosen for the SDK is exponential backoff with jitter, using: 1. A base time of 1 second for retry calculations 2. An exponent of 2; when calculating the next retry time, the SDK raises this to the power of the number of attempts 3. A maximum wait time between calls of 30 seconds (capped) 4. An added jitter value between 0 and 1000 milliseconds to spread out the requests. Configure and use the default retry policy: you can set this retry policy for a single request (see the sketch below), for all requests made by a client, or for all requests made by all clients, or set the default retry via an environment variable, which is a global switch for all services. Some services enable retry for operations by default; this can be overridden using any of the alternatives mentioned above. To know which service operations have retries enabled by default, look at the operation's description in the SDK; it will say whether retries are enabled by default. Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail. If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will: 1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made 2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer 3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm). If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.
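A hedged sketch of attaching the default retry policy to a single request via RequestMetadata, as mentioned above; the compartment OCID is a placeholder:

    package main

    import (
        "context"
        "log"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        // The default retry policy covers the retry-able errors and the
        // eventual consistency handling described above.
        policy := common.DefaultRetryPolicy()
        req := identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String("ocid1.tenancy.oc1..example"),
            RequestMetadata: common.RequestMetadata{
                RetryPolicy: &policy,
            },
        }

        if _, err := client.ListAvailabilityDomains(context.Background(), req); err != nil {
            log.Fatal(err)
        }
    }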
If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option. The NewRetryPolicy function also creates a retry policy without eventual consistency. A circuit breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified and without wasting CPU cycles. It also enables an application to detect whether the fault has been resolved; if the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping calls in a circuit breaker object which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return an error; this also saves the service from being overwhelmed with network calls in case of an outage. Circuit Breaker Configuration Definitions: 1. Failure Rate Threshold - the state of the CircuitBreaker changes from CLOSED to OPEN when the failure rate is equal to or greater than a configurable threshold, for example when more than 50% of the recorded calls have failed 2. Reset Timeout - the timeout after which an open circuit breaker will attempt a request if a request is made 3. Failure Exceptions - the list of exceptions that will be regarded as failures for the circuit 4. Minimum number of calls / Volume threshold - configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. The default configuration is: 1. Failure Rate Threshold - 80%; when 80% of the requests calculated for a time window of 120 seconds have failed, the circuit will transition from closed to open 2. Minimum number of calls / Volume threshold - a value of 10, for the above defined time window of 120 seconds 3. Reset Timeout - 30 seconds to wait before setting the breaker to the halfOpen state and trying the action again 4. Failure Exceptions - the failures for the circuit will only be recorded for the retryable/transient exceptions; this means only the following exceptions will be regarded as failures for the circuit: HTTP Code / Customer-facing Error Code. Apart from the above, the following client-side exceptions will also be treated as failures for the circuit: 1. HTTP connection timeout 2. Request connection errors 3. Request exceptions 4. Other timeouts (like read timeout). The Go SDK enables a circuit breaker with the default configuration for most of the service clients; if you don't want the solution enabled, you can disable the functionality before your application starts running. The Go SDK also supports customizing the circuit breaker with specific configurations. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go To know which service clients have circuit breakers enabled, look at the service client's description in the SDK; it will say whether circuit breakers are enabled by default. The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, then you can set this up in the following ways: 1.
Configuring an environment variable as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. To modify the underlying Transport struct in HttpClient, you can do something similar to the following (sample code for the audit service client; a sketch appears at the end of this section).

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host. If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable.

Got a fix for a bug, or a new feature you'd like to contribute?
The SDK is open source and accepts pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to: https://github.com/oracle/oci-go-sdk#help
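As referenced in the proxy discussion above, here is a minimal sketch of swapping the Transport on a service client's underlying HTTP client. The audit client and the proxy URL are illustrative, and the HTTPClient field follows the BaseClient structure found in recent SDK versions; verify it against the version you use.

    // Sketch: routing a service client's requests through a proxy by
    // replacing the underlying Transport.
    package main

    import (
        "net/http"
        "net/url"

        "github.com/oracle/oci-go-sdk/audit"
        "github.com/oracle/oci-go-sdk/common"
    )

    func main() {
        client, err := audit.NewAuditClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            panic(err)
        }

        // Hypothetical proxy address; replace with your own.
        proxyURL, err := url.Parse("http://proxy.example.com:3128")
        if err != nil {
            panic(err)
        }

        // Replace the client's HTTP client with one whose Transport
        // sends every outgoing request through the proxy.
        client.HTTPClient = &http.Client{
            Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
        }
        _ = client
    }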
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions, and to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK: the example below creates an identityClient struct with the default configuration, then uses the identityClient to list availability domains and print them to stdout. More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example

Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value.

Dedicated endpoints are the endpoint templates defined by the service for a specific realm at client level. The OCI Go SDK allows you to enable this realm-specific endpoint template feature at the application level and at the client level. The value set at the client level takes precedence over the value set at the application level. This feature is disabled by default. For reference, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go#L222-L251

The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service.

The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The signer also allows more granular control over the headers used for signing. You can combine a custom signer with the exposed clients in the SDK; this allows you to add custom signed headers to the request. Bear in mind that some services have a whitelist of headers that they expect to be signed, so adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm

Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the SDK struct that satisfies that interface. In the case of a polymorphic response, you can type assert the interface to the expected type. An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63

When calling a list operation, the operation will retrieve a page of results.
To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data, the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go

The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses, and potential errors when (un)marshalling requests and responses. Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The possible values for the "OCI_GO_SDK_DEBUG" variable are: 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug, and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above, the default logging level is "info". If the environment variable is not present, no logging messages are emitted. You can also enable logging in code, which enables debug logs programmatically rather than through the environment. The default destination for logging is stderr; if you want to output logs to a file, you can set the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are: 1. "file" or "f" enables all logging output saved to file 2. "combine" or "c" enables all logging output to both stderr and file. You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there is a network issue, etc. This can be accomplished by using RequestMetadata.RetryPolicy (request-level configuration); alternatively, global (all services) or client-level RetryPolicy configuration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface.

Retry behavior precedence (highest to lowest) is: request-level configuration, then client-level, then global configuration. The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes).

Default Retry-able Errors - Below is the list of default retry-able errors for which retry attempts should be made. The following errors should be retried (with backoff) (HTTP Code / Customer-facing Error Code table omitted). Apart from the above errors, retries should also be attempted for the following client-side errors: 1. HTTP Connection timeout 2. Request Connection Errors 3. Request Exceptions 4.
Other timeouts (like Read Timeout). The above errors can be avoided through retrying and hence are classified as the default retry-able errors. Additionally, retries should also be made for Circuit Breaker exceptions (exceptions raised by a Circuit Breaker in an open state).

Default Termination Strategy - The termination strategy defines when SDKs should stop attempting to retry; in other words, it is the deadline for retries. The OCI SDKs stop retrying the operation after 7 retry attempts; by then the SDKs will have waited about 98 seconds (~1.5 minutes) in total delays. SDKs will make a total of 8 attempts (1 initial request + 7 retries).

Default Delay Strategy - The delay strategy defines the amount of time to wait between each of the retry attempts. The default delay strategy chosen for the SDK is exponential backoff with jitter, using: 1. A base time of 1 second for retry calculations 2. An exponent of 2; when calculating the next retry time, the SDK raises this to the power of the number of attempts 3. A maximum wait time between calls of 30 seconds (capped) 4. An added jitter value between 0-1000 milliseconds to spread out the requests.

Configure and use default retry policy - You can set this retry policy for a single request, for all requests made by a client, for all requests made by all clients, or via an environment variable that acts as a global switch for all services. Some services enable retry for operations by default; this can be overridden using any of the alternatives mentioned above. To know which service operations have retries enabled by default, look at the operation's description in the SDK - it will say whether it has retries enabled by default.

Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail. If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will: 1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made 2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer 3.
retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm). If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.

If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option. The NewRetryPolicy function also creates a retry policy without eventual consistency.

A circuit breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles. It also enables an application to detect whether the fault has been resolved: if the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker library, wrapping calls in a circuit breaker object that monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips and all further calls to it return with an error; this also saves the service from being overwhelmed with network calls in case of an outage.

Circuit Breaker Configuration Definitions 1. Failure Rate Threshold - The state of the CircuitBreaker changes from CLOSED to OPEN when the failure rate is equal to or greater than a configurable threshold, for example when more than 50% of the recorded calls have failed. 2. Reset Timeout - The timeout after which an open circuit breaker will attempt a request if a request is made. 3. Failure Exceptions - The list of exceptions that will be regarded as failures for the circuit. 4. Minimum number of calls/Volume threshold - The minimum number of calls required (per sliding window period) before the CircuitBreaker can calculate the error rate.

The default configuration is: 1. Failure Rate Threshold - 80% - when 80% of the requests calculated for a time window of 120 seconds have failed, the circuit will transition from closed to open. 2. Minimum number of calls/Volume threshold - a value of 10, for the above defined time window of 120 seconds. 3. Reset Timeout - 30 seconds to wait before setting the breaker to the halfOpen state and trying the action again. 4. Failure Exceptions - failures for the circuit are recorded only for retryable/transient exceptions, so only the following exceptions are regarded as failures for the circuit (HTTP Code / Customer-facing Error Code table omitted). Apart from the above, the following client-side exceptions will also be treated as a failure for the circuit: 1. HTTP Connection timeout 2. Request Connection Errors 3. Request Exceptions 4. Other timeouts (like Read Timeout).

The Go SDK enables a circuit breaker with the default configuration for most of the service clients; if you do not want this behavior, you can disable it before your application starts. The Go SDK also supports customizing the circuit breaker with your own configuration, as the following sketch illustrates.
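To make the defaults above concrete, here is how the wrapped sony/gobreaker breaker expresses the same knobs. This is a conceptual sketch of the mechanism, not the SDK's internal wiring; the thresholds mirror the default configuration described above.

    // Sketch: a gobreaker circuit breaker configured with the defaults
    // described above (120s window, 80% failure rate, volume of 10,
    // 30s reset timeout).
    package main

    import (
        "errors"
        "time"

        "github.com/sony/gobreaker"
    )

    func main() {
        cb := gobreaker.NewCircuitBreaker(gobreaker.Settings{
            Name:     "oci-service-client",
            Interval: 120 * time.Second, // sliding window for failure-rate calculation
            Timeout:  30 * time.Second,  // reset timeout before moving to half-open
            ReadyToTrip: func(counts gobreaker.Counts) bool {
                // Volume threshold of 10 calls and failure-rate threshold of 80%.
                failureRatio := float64(counts.TotalFailures) / float64(counts.Requests)
                return counts.Requests >= 10 && failureRatio >= 0.8
            },
        })

        // Execute runs the wrapped call; once the breaker is open, it
        // returns an error immediately without invoking the function.
        _, err := cb.Execute(func() (interface{}, error) {
            // A real call to an OCI service would go here.
            return nil, errors.New("simulated transient failure")
        })
        _ = err
    }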
You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go To know which service clients have circuit breakers enabled, look at the service client's description in the SDK - it will say whether it has circuit breakers enabled by default.

Because the SDK treats responses with a non-2xx HTTP status code as an error, the SDK will produce an error on 3xx responses. This can impact operations that support conditional GETs, such as the GetObject() and HeadObject() methods, as these can return responses with an HTTP status code of 304 if passed an 'IfNoneMatch' value that corresponds to the current etag of the object or bucket. To account for this, you should check for status code 304 when an error is produced; a sketch appears at the end of this section.

The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways: 1. Configuring an environment variable as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. To modify the underlying Transport struct in HttpClient, you can do something similar to the audit service client sample shown earlier in this document.

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.
oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host. If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable.

To use a custom CA bundle, you can set the environment variable OCI_DEFAULT_CERTS_PATH to point to the path of the custom CA bundle you want the OCI Go SDK to use while making API calls to OCI services. If you additionally want to set custom leaf/client certs, you can use the environment variables OCI_DEFAULT_CLIENT_CERTS_PATH and OCI_DEFAULT_CLIENT_CERTS_PRIVATE_KEY_PATH to set the paths of the custom client/leaf cert and the private key, respectively. The default refresh interval for the custom CA bundle or client certs is 30 minutes. If you want to modify this, you can configure the refresh interval in minutes by using either the global property OciGlobalRefreshIntervalForCustomCerts defined in the common package or the environment variable OCI_DEFAULT_REFRESH_INTERVAL_FOR_CUSTOM_CERTS. Note that the property OciGlobalRefreshIntervalForCustomCerts has higher precedence than the environment variable OCI_DEFAULT_REFRESH_INTERVAL_FOR_CUSTOM_CERTS. If this value is negative, it is treated as unset; if it is set to 0, the SDK disables the custom CA bundle and client cert refresh.

Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepts pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to: https://github.com/oracle/oci-go-sdk#help
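As promised in the conditional GET discussion above, here is a sketch of checking for a 304 when an error is produced. common.IsServiceError and the ServiceError interface are the SDK's documented error helpers; the namespace, bucket, and object values are illustrative.

    // Sketch: treating a 304 Not Modified on a conditional GetObject
    // as "unchanged" rather than a failure.
    package snippets

    import (
        "context"
        "fmt"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/objectstorage"
    )

    func getIfChanged(client objectstorage.ObjectStorageClient, etag string) error {
        request := objectstorage.GetObjectRequest{
            NamespaceName: common.String("my-namespace"), // illustrative values
            BucketName:    common.String("my-bucket"),
            ObjectName:    common.String("my-object"),
            IfNoneMatch:   common.String(etag),
        }

        _, err := client.GetObject(context.Background(), request)
        if err != nil {
            if serviceErr, ok := common.IsServiceError(err); ok && serviceErr.GetHTTPStatusCode() == 304 {
                // The object has not changed since the supplied etag.
                fmt.Println("object not modified")
                return nil
            }
            return err
        }
        // Process the response when the object has changed.
        return nil
    }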
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions, and to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK: the example below creates an identityClient struct with the default configuration, then uses the identityClient to list availability domains and print them to stdout. More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example

Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value; see the sketch at the end of this section.

The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service.

The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The signer also allows more granular control over the headers used for signing. You can combine a custom signer with the exposed clients in the SDK; this allows you to add custom signed headers to the request. Bear in mind that some services have a whitelist of headers that they expect to be signed, so adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm

Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the SDK struct that satisfies that interface. In the case of a polymorphic response, you can type assert the interface to the expected type. An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63

When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data, the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go

The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses, and potential errors when (un)marshalling requests and responses.
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The possible values for the "OCI_GO_SDK_DEBUG" variable are: 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug, and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above, the default logging level is "info". If the environment variable is not present, no logging messages are emitted. The default destination for logging is stderr; if you want to output logs to a file, you can set the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are: 1. "file" or "f" enables all logging output saved to file 2. "combine" or "c" enables all logging output to both stderr and file. You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation again if there is a network issue, etc. This can be accomplished by using RequestMetadata.RetryPolicy (request-level configuration); alternatively, global (all services) or client-level RetryPolicy configuration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface.

Retry behavior precedence (highest to lowest) is: request-level configuration, then client-level, then global configuration. The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes).

Default Retry-able Errors - Below is the list of default retry-able errors for which retry attempts should be made. The following errors should be retried (with backoff) (HTTP Code / Customer-facing Error Code table omitted). Apart from the above errors, retries should also be attempted for the following client-side errors: 1. HTTP Connection timeout 2. Request Connection Errors 3. Request Exceptions 4. Other timeouts (like Read Timeout). The above errors can be avoided through retrying and hence are classified as the default retry-able errors. Additionally, retries should also be made for Circuit Breaker exceptions (exceptions raised by a Circuit Breaker in an open state).

Default Termination Strategy - The termination strategy defines when SDKs should stop attempting to retry; in other words, it is the deadline for retries. The OCI SDKs stop retrying the operation after 7 retry attempts; by then the SDKs will have waited about 98 seconds (~1.5 minutes) in total delays. SDKs will make a total of 8 attempts
(1 initial request + 7 retries).

Default Delay Strategy - The delay strategy defines the amount of time to wait between each of the retry attempts. The default delay strategy chosen for the SDK is exponential backoff with jitter, using: 1. A base time of 1 second for retry calculations 2. An exponent of 2; when calculating the next retry time, the SDK raises this to the power of the number of attempts 3. A maximum wait time between calls of 30 seconds (capped) 4. An added jitter value between 0-1000 milliseconds to spread out the requests.

Configure and use default retry policy - You can set this retry policy for a single request, for all requests made by a client, for all requests made by all clients, or via an environment variable that acts as a global switch for all services. Some services enable retry for operations by default; this can be overridden using any of the alternatives mentioned above. To know which service operations have retries enabled by default, look at the operation's description in the SDK - it will say whether it has retries enabled by default.

Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail. If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will: 1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made 2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer 3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm). If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.
If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option. The NewRetryPolicy function also creates a retry policy without eventual consistency.

A circuit breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles. It also enables an application to detect whether the fault has been resolved: if the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker library, wrapping calls in a circuit breaker object that monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips and all further calls to it return with an error; this also saves the service from being overwhelmed with network calls in case of an outage.

Circuit Breaker Configuration Definitions 1. Failure Rate Threshold - The state of the CircuitBreaker changes from CLOSED to OPEN when the failure rate is equal to or greater than a configurable threshold, for example when more than 50% of the recorded calls have failed. 2. Reset Timeout - The timeout after which an open circuit breaker will attempt a request if a request is made. 3. Failure Exceptions - The list of exceptions that will be regarded as failures for the circuit. 4. Minimum number of calls/Volume threshold - The minimum number of calls required (per sliding window period) before the CircuitBreaker can calculate the error rate.

The default configuration is: 1. Failure Rate Threshold - 80% - when 80% of the requests calculated for a time window of 120 seconds have failed, the circuit will transition from closed to open. 2. Minimum number of calls/Volume threshold - a value of 10, for the above defined time window of 120 seconds. 3. Reset Timeout - 30 seconds to wait before setting the breaker to the halfOpen state and trying the action again. 4. Failure Exceptions - failures for the circuit are recorded only for retryable/transient exceptions, so only the following exceptions are regarded as failures for the circuit (HTTP Code / Customer-facing Error Code table omitted). Apart from the above, the following client-side exceptions will also be treated as a failure for the circuit: 1. HTTP Connection timeout 2. Request Connection Errors 3. Request Exceptions 4. Other timeouts (like Read Timeout).

The Go SDK enables a circuit breaker with the default configuration for most of the service clients; if you do not want this behavior, you can disable it before your application starts. The Go SDK also supports customizing the circuit breaker with your own configuration. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go To know which service clients have circuit breakers enabled, look at the service client's description in the SDK - it will say whether it has circuit breakers enabled by default.

The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests then you can set this up in the following ways: 1.
Configuring an environment variable as described here: https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. To modify the underlying Transport struct in HttpClient, you can do something similar to the audit service client sample shown earlier in this document.

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-type response field is modeled as a type that supports any string. Thus, if a service returns a value that is not recognized by your version of the SDK, then the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host. If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable.

Got a fix for a bug, or a new feature you'd like to contribute?
The SDK is open source and accepts pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to: https://github.com/oracle/oci-go-sdk#help
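Here is the pointer-helper sketch referenced in the optional-fields discussion above. common.String, common.Int, and common.Bool are the SDK's pointer helpers; the CreateGroupDetails values are illustrative of the pattern.

    // Sketch: building an input struct whose fields are pointers, using
    // the SDK's pointer-helper functions.
    package snippets

    import (
        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func buildRequest() identity.CreateGroupRequest {
        // common.String returns a *string for a literal value; similar
        // helpers exist for other primitive types (common.Int, common.Bool).
        details := identity.CreateGroupDetails{
            CompartmentId: common.String("ocid1.tenancy.oc1..example"), // illustrative OCID
            Name:          common.String("GroupOne"),
            Description:   common.String("Group One description"),
        }
        return identity.CreateGroupRequest{CreateGroupDetails: details}
    }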
Package kinesisanalyticsv2 provides the API client, operations, and parameter types for Amazon Kinesis Analytics. Amazon Managed Service for Apache Flink was previously known as Amazon Kinesis Data Analytics for Apache Flink. Amazon Managed Service for Apache Flink is a fully managed service that you can use to process and analyze streaming data using Java, Python, SQL, or Scala. The service enables you to quickly author and run Java, SQL, or Scala code against streaming sources to perform time series analytics, feed real-time dashboards, and create real-time metrics.
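A minimal sketch of constructing the client and invoking an operation, assuming the standard aws-sdk-go-v2 configuration pattern:

    // Sketch: creating a Kinesis Analytics V2 client and listing
    // applications in the account.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/kinesisanalyticsv2"
    )

    func main() {
        // Load shared configuration (credentials, region) and build the client.
        cfg, err := config.LoadDefaultConfig(context.TODO())
        if err != nil {
            log.Fatal(err)
        }
        client := kinesisanalyticsv2.NewFromConfig(cfg)

        // List the applications in the account.
        out, err := client.ListApplications(context.TODO(), &kinesisanalyticsv2.ListApplicationsInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, summary := range out.ApplicationSummaries {
            fmt.Println(*summary.ApplicationName)
        }
    }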
Package cadence and its subdirectories contain the Cadence client side framework. The Cadence service is a task orchestrator for your application’s tasks. Applications using Cadence can execute a logical flow of tasks, especially long-running business logic, asynchronously or synchronously. They can also scale at runtime on distributed systems. A quick example illustrates its use case: consider Uber Eats, where Cadence manages the entire business flow from placing an order, accepting it, handling shopping cart processes (adding, updating, and calculating cart items), entering the order in a pipeline (for preparing food and coordinating delivery), to scheduling delivery as well as handling payments. Cadence consists of a programming framework (or client library) and a managed service (or backend). The framework enables developers to author and coordinate tasks in Go code. The root cadence package contains common data structures; subpackages such as client, worker, workflow, and activity are documented separately. The Cadence hosted service brokers and persists events generated during workflow execution. Worker nodes owned and operated by customers execute the coordination and task logic. To facilitate the implementation of worker nodes, Cadence provides a client-side library for the Go language. In Cadence, you can code the logical flow of events separately as a workflow and code business logic as activities. The workflow identifies the activities and sequences them, while an activity executes the logic. Dynamic workflow execution graphs - Determine the workflow execution graphs at runtime based on the data you are processing. Cadence does not pre-compute the execution graphs at compile time or at workflow start time. Therefore, you have the ability to write workflows that can dynamically adjust to the amount of data they are processing. If you need to trigger 10 instances of an activity to efficiently process all the data in one run, but only 3 for a subsequent run, you can do that. Child Workflows - Orchestrate the execution of a workflow from within another workflow. Cadence will return the results of the child workflow execution to the parent workflow upon completion of the child workflow. No polling is required in the parent workflow to monitor status of the child workflow, making the process efficient and fault tolerant. Durable Timers - Implement delayed execution of tasks in your workflows that are robust to worker failures. Cadence provides two easy-to-use APIs, workflow.Sleep and workflow.Timer, for implementing time-based events in your workflows. Cadence ensures that the timer settings are persisted and the events are generated even if workers executing the workflow crash. Signals - Modify/influence the execution path of a running workflow by pushing additional data directly to the workflow using a signal. Via the Signal facility, Cadence provides a mechanism to consume external events directly in workflow code. Task routing - Efficiently process large amounts of data using a Cadence workflow, by caching the data locally on a worker and executing all activities meant to process that data on that same worker. Cadence enables you to choose the worker you want to execute a certain activity by scheduling that activity execution in the worker's specific task-list. Unique workflow ID enforcement - Use business entity IDs for your workflows and let Cadence ensure that only one workflow is running for a particular entity at a time.
Cadence implements an atomic "uniqueness check" and ensures that no race conditions are possible that would result in multiple workflow executions for the same workflow ID. Therefore, you can implement your code to attempt to start a workflow without checking if the ID is already in use, even in the cases where only one active execution per workflow ID is desired. Perpetual/ContinueAsNew workflows - Run periodic tasks as a single perpetually running workflow. With the "ContinueAsNew" facility, Cadence allows you to leverage the "unique workflow ID enforcement" feature for periodic workflows. Cadence will complete the current execution and start the new execution atomically, ensuring you get to keep your workflow ID. By starting a new execution, Cadence also ensures that workflow execution history does not grow indefinitely for perpetual workflows. At-most-once activity execution - Execute non-idempotent activities as part of your workflows. Cadence will not automatically retry activities on failure. For every activity execution, Cadence will return a success result, a failure result, or a timeout to the workflow code and let the workflow code determine how each one of those result types should be handled. Async Activity Completion - Incorporate human input or third-party service asynchronous callbacks into your workflows. Cadence allows a workflow to pause execution on an activity and wait for an external actor to resume it with a callback. During this pause the activity does not have any actively executing code, such as a polling loop, and is merely an entry in the Cadence datastore. Therefore, the workflow is unaffected by any worker failures happening over the duration of the pause. Activity Heartbeating - Detect unexpected failures/crashes and track progress in long-running activities early. By configuring your activity to report progress periodically to the Cadence server, you can detect a crash that occurs 10 minutes into an hour-long activity execution much sooner, instead of waiting for the 60-minute execution timeout. The recorded progress before the crash gives you sufficient information to determine whether to restart the activity from the beginning or resume it from the point of failure. Timeouts for activities and workflow executions - Protect against stuck and unresponsive activities and workflows with appropriate timeout values. Cadence requires that timeout values are provided for every activity or workflow invocation. There is no upper bound on the timeout values, so you can set timeouts that span days, weeks, or even months. Visibility - Get a list of all your active and/or completed workflows. Explore the execution history of a particular workflow execution. Cadence provides a set of visibility APIs that allow you, the workflow owner, to monitor past and current workflow executions. Debuggability - Replay any workflow execution history locally under a debugger. The Cadence client library provides an API to allow you to capture a stack trace from any failed workflow execution history.
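A sketch of a workflow using the durable-timer and activity facilities described above. The workflow and activity APIs are from go.uber.org/cadence; the activity function, its arguments, and the timeout values are illustrative.

    // Sketch: a workflow that sleeps on a durable timer and then runs
    // one activity, collecting its result.
    package main

    import (
        "context"
        "time"

        "go.uber.org/cadence/activity"
        "go.uber.org/cadence/workflow"
    )

    // SampleActivity is an illustrative activity; activities hold the
    // business logic while the workflow sequences them.
    func SampleActivity(ctx context.Context, name string) (string, error) {
        activity.GetLogger(ctx).Info("activity running")
        return "processed " + name, nil
    }

    // SampleWorkflow sequences a durable timer and one activity execution.
    func SampleWorkflow(ctx workflow.Context, name string) (string, error) {
        // Durable timer: survives worker crashes because the timer event
        // is persisted by the Cadence service.
        if err := workflow.Sleep(ctx, time.Minute); err != nil {
            return "", err
        }

        ao := workflow.ActivityOptions{
            ScheduleToStartTimeout: time.Minute,
            StartToCloseTimeout:    5 * time.Minute,
        }
        ctx = workflow.WithActivityOptions(ctx, ao)

        // Execute the activity and wait for its result.
        var result string
        if err := workflow.ExecuteActivity(ctx, SampleActivity, name).Get(ctx, &result); err != nil {
            return "", err
        }
        return result, nil
    }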
Package peer provides a common base for creating and managing Decred network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent-safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc. A quick overview of the major features peer provides is as follows: All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details. A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed. In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the desired callbacks specified and setting the Listeners field of the Config struct used when creating the peer. For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided, so even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked. The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters, and optionally provides a flag to cause it to block until the message is actually sent. A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version. This package provides extensive logging capabilities through the UseLogger function which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C. This package supports all improvement proposals supported by the wire package. (https://godoc.org/github.com/decred/dcrd/wire#hdr-Bitcoin_Improvement_Proposals) This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
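A sketch of the outbound-peer flow that example describes: create the peer with NewOutboundPeer, dial the remote address yourself, and hand the net.Conn to Connect. The Config field names follow this package's documentation, but the OnVersion callback signature varies between versions (some return a *wire.MsgReject), and chain parameters are omitted for brevity; verify both against the version in use.

    // Sketch: initializing an outbound peer with a version-message handler.
    package main

    import (
        "fmt"
        "net"

        "github.com/decred/dcrd/peer"
        "github.com/decred/dcrd/wire"
    )

    func main() {
        // Configure the peer, including a callback for the version message
        // exchanged during protocol negotiation.
        cfg := &peer.Config{
            UserAgentName:    "peersample", // illustrative user agent
            UserAgentVersion: "1.0.0",
            Listeners: peer.MessageListeners{
                OnVersion: func(p *peer.Peer, msg *wire.MsgVersion) {
                    fmt.Println("outbound peer version:", msg.ProtocolVersion)
                },
            },
        }

        p, err := peer.NewOutboundPeer(cfg, "127.0.0.1:9108") // illustrative address
        if err != nil {
            fmt.Println(err)
            return
        }

        // The caller establishes the connection and hands it to the peer,
        // which starts the async I/O goroutines and protocol negotiation.
        conn, err := net.Dial("tcp", p.Addr())
        if err != nil {
            fmt.Println(err)
            return
        }
        p.Connect(conn)

        // Disconnect and wait for cleanup when finished with the peer.
        p.Disconnect()
        p.WaitForDisconnect()
    }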
Package peer provides a common base for creating and managing Decred network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent-safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc. A quick overview of the major features peer provides is as follows: All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details. A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed. In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the desired callbacks specified and setting the Listeners field of the Config struct used when creating the peer. For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided, so even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked. The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, and PushGetHeadersMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Finally, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version. This package provides extensive logging capabilities through the UseLogger function which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C. This package supports all improvement proposals supported by the wire package. This example demonstrates the basic process for initializing and creating an outbound peer. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
Package temporal and its subdirectories contain the Temporal client-side framework. The Temporal service is a task orchestrator for your application's tasks. Applications using Temporal can execute a logical flow of tasks, especially long-running business logic, asynchronously or synchronously. They can also scale at runtime on distributed systems. A quick example illustrates its use case. Consider Uber Eats, where Temporal manages the entire business flow from placing an order, accepting it, handling shopping cart processes (adding, updating, and calculating cart items), entering the order in a pipeline (for preparing food and coordinating delivery), to scheduling delivery as well as handling payments. Temporal consists of a programming framework (or client library) and a managed service (or backend). The framework enables developers to author and coordinate tasks in Go code. The root temporal package contains common data structures. The subpackages are:

The Temporal hosted service brokers and persists events generated during workflow execution. Worker nodes owned and operated by customers execute the coordination and task logic. To facilitate the implementation of worker nodes, Temporal provides a client-side library for the Go language. In Temporal, you can code the logical flow of events separately as a workflow and code business logic as activities. The workflow identifies the activities and sequences them, while an activity executes the logic.

Dynamic workflow execution graphs - Determine the workflow execution graphs at runtime based on the data you are processing. Temporal does not pre-compute the execution graphs at compile time or at workflow start time. Therefore, you have the ability to write workflows that can dynamically adjust to the amount of data they are processing. If you need to trigger 10 instances of an activity to efficiently process all the data in one run, but only 3 for a subsequent run, you can do that.

Child Workflows - Orchestrate the execution of a workflow from within another workflow. Temporal will return the results of the child workflow execution to the parent workflow upon completion of the child workflow. No polling is required in the parent workflow to monitor the status of the child workflow, making the process efficient and fault tolerant.

Durable Timers - Implement delayed execution of tasks in your workflows that are robust to worker failures. Temporal provides two easy-to-use APIs, **workflow.Sleep** and **workflow.Timer**, for implementing time-based events in your workflows. Temporal ensures that the timer settings are persisted and the events are generated even if workers executing the workflow crash.

Signals - Modify/influence the execution path of a running workflow by pushing additional data directly to the workflow using a signal. Via the Signal facility, Temporal provides a mechanism to consume external events directly in workflow code.

Task routing - Efficiently process large amounts of data using a Temporal workflow, by caching the data locally on a worker and executing all activities meant to process that data on that same worker. Temporal enables you to choose the worker you want to execute a certain activity by scheduling that activity execution in the worker's specific task queue.

Unique workflow ID enforcement - Use business entity IDs for your workflows and let Temporal ensure that only one workflow is running for a particular entity at a time.
Temporal implements an atomic "uniqueness check" and ensures that no race conditions are possible that would result in multiple workflow executions for the same workflow ID. Therefore, you can implement your code to attempt to start a workflow without checking whether the ID is already in use, even in cases where only one active execution per workflow ID is desired.

Perpetual/ContinueAsNew workflows - Run periodic tasks as a single perpetually running workflow. With the "ContinueAsNew" facility, Temporal allows you to leverage the "unique workflow ID enforcement" feature for periodic workflows. Temporal will complete the current execution and start the new execution atomically, ensuring you get to keep your workflow ID. By starting a new execution, Temporal also ensures that workflow execution history does not grow indefinitely for perpetual workflows.

At-most-once activity execution - Execute non-idempotent activities as part of your workflows. Temporal will not automatically retry activities on failure. For every activity execution Temporal will return a success result, a failure result, or a timeout to the workflow code and let the workflow code determine how each of those result types should be handled.

Async Activity Completion - Incorporate human input or third-party service asynchronous callbacks into your workflows. Temporal allows a workflow to pause execution on an activity and wait for an external actor to resume it with a callback. During this pause the activity does not have any actively executing code, such as a polling loop, and is merely an entry in the Temporal datastore. Therefore, the workflow is unaffected by any worker failures happening over the duration of the pause.

Activity Heartbeating - Detect unexpected failures/crashes and track progress in long-running activities early. By configuring your activity to report progress periodically to the Temporal server, you can detect a crash that occurs 10 minutes into an hour-long activity execution much sooner, instead of waiting for the 60-minute execution timeout. The recorded progress before the crash gives you sufficient information to determine whether to restart the activity from the beginning or resume it from the point of failure.

Timeouts for activities and workflow executions - Protect against stuck and unresponsive activities and workflows with appropriate timeout values. Temporal requires that timeout values are provided for every activity or workflow invocation. There is no upper bound on the timeout values, so you can set timeouts that span days, weeks, or even months.

Visibility - Get a list of all your active and/or completed workflows. Explore the execution history of a particular workflow execution. Temporal provides a set of visibility APIs that allow you, the workflow owner, to monitor past and current workflow executions.

Debuggability - Replay any workflow execution history locally under a debugger. The Temporal client library provides an API to allow you to capture a stack trace from any failed workflow execution history.
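As a rough illustration of the durable timer and signal facilities described above, here is a minimal sketch. It assumes the current go.temporal.io/sdk module path, which may differ from the version this documentation describes; the workflow, the signal name, and the 24-hour window are hypothetical.

    package sample

    import (
        "errors"
        "time"

        "go.temporal.io/sdk/workflow"
    )

    // ErrUnpaid is a hypothetical sentinel error for this example.
    var ErrUnpaid = errors.New("order was not paid within 24 hours")

    // OrderWorkflow waits up to 24 hours for a hypothetical "payment-received"
    // signal. The timer is durable: it fires even if workers crash and restart.
    func OrderWorkflow(ctx workflow.Context) error {
        paid := false

        selector := workflow.NewSelector(ctx)
        selector.AddReceive(workflow.GetSignalChannel(ctx, "payment-received"),
            func(c workflow.ReceiveChannel, more bool) {
                c.Receive(ctx, &paid)
            })
        selector.AddFuture(workflow.NewTimer(ctx, 24*time.Hour),
            func(f workflow.Future) {
                // The timer fired before the signal arrived; paid stays false.
            })
        selector.Select(ctx) // blocks until the signal or the timer fires

        if !paid {
            return ErrUnpaid
        }
        return nil
    }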
Package rollbar is a Golang Rollbar client that makes it easy to report errors to Rollbar with full stacktraces. This package is designed to be used via the functions exposed at the root of the `rollbar` package. These work by managing a single instance of the `Client` type that is configurable via the setter functions at the root of the package. If you wish for finer-grained control over the client, or you wish to have multiple independent clients, you can create and manage your own instances of the `Client` type. We provide two implementations of the `Transport` interface, `AsyncTransport` and `SyncTransport`. These manage the communication with the network layer. The Async version uses a buffered channel to communicate with the Rollbar API in a separate goroutine. The Sync version is fully synchronous. It is possible to create your own `Transport` and configure a Client to use your preferred implementation. Go does not provide a mechanism for handling all panics automatically, therefore we provide two functions, `Wrap` and `WrapAndWait`, to make working with panics easier. They both take a function with arguments and then report to Rollbar if that function panics. They use the recover mechanism to capture the panic, and therefore if you wish your process to have the normal behaviour on panic (i.e. to crash), you will need to re-panic the result of calling `Wrap`; see the sketch at the end of this section. This pattern of calling `Wrap(...)` and then `Wait(...)` can be combined via `WrapAndWait(...)`. When `WrapAndWait(...)` returns, if there was a panic it has already been sent to the Rollbar API. The error is still returned by this function if there is one. `Wrap` and `WrapAndWait` will accept functions with any number and type of arguments and return values. However, they do not return the function's return value, instead returning the error value. To add Rollbar panic handling to a function while preserving access to the function's return values, we provide the `LogPanic` helper designed to be used inside your deferred function. This offers virtually the same functionality as `Wrap` and `WrapAndWait` while preserving access to the function return values. Due to the nature of the `error` type in Go, it can be difficult to attribute errors to their origin without doing some extra work. To account for this, we provide multiple ways of configuring the client to unwrap errors and extract stack traces. The client will automatically unwrap any error type which implements the `Unwrap() error` method specified in Go 1.13. (See https://golang.org/pkg/errors/ for details.) This behavior can be extended for other types of errors by calling `SetUnwrapper`. For stack traces, we provide the `Stacker` interface, which can be implemented on custom error types: If you cannot implement the `Stacker` interface on your error type (which is common for third-party error libraries), you can provide a custom tracing function by calling `SetStackTracer`. See the documentation of `SetUnwrapper` and `SetStackTracer` for more information and examples. Finally, users of github.com/pkg/errors can use the utilities provided in the `errors` sub-package.
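Here is a minimal sketch of the Wrap-then-Wait-then-re-panic pattern described above. The token and the work function are hypothetical placeholders; only `SetToken`, `Wrap`, and `Wait` are taken from the description above.

    package main

    import "github.com/rollbar/rollbar-go"

    func main() {
        rollbar.SetToken("MY_TOKEN") // hypothetical token

        // Wrap runs the function and recovers any panic, reporting it to
        // Rollbar. The recovered value is returned.
        ret := rollbar.Wrap(func() {
            doRiskyWork()
        })

        // Block until the queued items have been sent to the Rollbar API.
        rollbar.Wait()

        // Re-panic so the process still crashes as it normally would.
        if ret != nil {
            panic(ret)
        }
    }

    // doRiskyWork is a hypothetical function that may panic.
    func doRiskyWork() { panic("boom") }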
Package peer provides a common base for creating and managing Decred network peers. This package builds upon the wire package, which provides the fundamental primitives necessary to speak the Decred wire protocol, in order to simplify the process of creating fully functional peers. In essence, it provides a common base for creating concurrent-safe fully validating nodes, Simplified Payment Verification (SPV) nodes, proxies, etc. A quick overview of the major features peer provides is as follows: All peer configuration is handled with the Config struct. This allows the caller to specify things such as the user agent name and version, the decred network to use, which services it supports, and callbacks to invoke when decred messages are received. See the documentation for each field of the Config struct for more details. A peer can either be inbound or outbound. The caller is responsible for establishing the connection to remote peers and listening for incoming peers. This provides high flexibility for things such as connecting via proxies, acting as a proxy, creating bridge peers, choosing whether to listen for inbound peers, etc. The NewOutboundPeer and NewInboundPeer functions must be followed by calling Connect with a net.Conn instance to the peer. This will start all async I/O goroutines and initiate the protocol negotiation process. Once finished with the peer, call Disconnect to disconnect from the peer and clean up all resources. WaitForDisconnect can be used to block until peer disconnection and resource cleanup has completed. In order to do anything useful with a peer, it is necessary to react to decred messages. This is accomplished by creating an instance of the MessageListeners struct with the callbacks to be invoked and setting the Listeners field of the Config struct specified when creating a peer to it. For convenience, a callback hook for all of the currently supported decred messages is exposed which receives the peer instance and the concrete message type. In addition, a hook for OnRead is provided, so even custom message types for which this package does not directly provide a hook can be used, as long as they implement the wire.Message interface. Finally, the OnWrite hook is provided, which, in conjunction with OnRead, can be used to track server-wide byte counts. It is often useful to use closures which encapsulate state when specifying the callback handlers. This provides a clean method for accessing that state when callbacks are invoked. The QueueMessage function provides the fundamental means to send messages to the remote peer. As the name implies, this employs a non-blocking queue. A done channel which will be notified when the message is actually sent can optionally be specified. There are certain message types which are better sent using other functions which provide additional functionality. Of special interest are inventory messages. Rather than manually sending MsgInv messages via QueueMessage, the inventory vectors should be queued using the QueueInventory function. It employs batching and trickling along with intelligent known remote peer inventory detection and avoidance through the use of a most-recently-used algorithm. In addition to the bare QueueMessage function previously described, the PushAddrMsg, PushGetBlocksMsg, PushGetHeadersMsg, and PushRejectMsg functions are provided as a convenience.
While it is of course possible to create and send these messages manually via QueueMessage, these helper functions provide additional useful functionality that is typically desired. For example, the PushAddrMsg function automatically limits the addresses to the maximum number allowed by the message and randomizes the chosen addresses when there are too many. This allows the caller to simply provide a slice of known addresses, such as that returned by the addrmgr package, without having to worry about the details. Next, the PushGetBlocksMsg and PushGetHeadersMsg functions will construct proper messages using a block locator and ignore back-to-back duplicate requests. Finally, the PushRejectMsg function can be used to easily create and send an appropriate reject message based on the provided parameters, and it optionally accepts a flag that causes it to block until the message is actually sent. A snapshot of the current peer statistics can be obtained with the StatsSnapshot function. This includes statistics such as the total number of bytes read and written, the remote address, user agent, and negotiated protocol version. This package provides extensive logging capabilities through the UseLogger function, which allows a slog.Logger to be specified. For example, logging at the debug level provides summaries of every message sent and received, and logging at the trace level provides full dumps of parsed messages as well as the raw message bytes using a format similar to hexdump -C. This package supports all improvement proposals supported by the wire package. This example demonstrates the basic process for initializing and creating an outbound peer; see the sketch below. Peers negotiate by exchanging version and verack messages. For demonstration, a simple handler for the version message is attached to the peer.
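The following is a hedged sketch of that flow. The import paths, the exact Config field names, and the OnVersion callback signature are assumptions that vary between versions of this package; only the NewOutboundPeer/Connect/Disconnect/WaitForDisconnect sequence is taken from the description above.

    package main

    import (
        "fmt"
        "net"

        "github.com/decred/dcrd/peer" // assumed import path
        "github.com/decred/dcrd/wire" // assumed import path
    )

    func main() {
        cfg := &peer.Config{
            UserAgentName:    "example",
            UserAgentVersion: "1.0.0",
            Net:              wire.MainNet, // assumed field and constant names
            Listeners: peer.MessageListeners{
                // The OnVersion signature may differ between versions.
                OnVersion: func(p *peer.Peer, msg *wire.MsgVersion) {
                    fmt.Println("received version message")
                },
            },
        }

        p, err := peer.NewOutboundPeer(cfg, "127.0.0.1:9108")
        if err != nil {
            fmt.Println(err)
            return
        }

        // The caller establishes the connection; Connect starts the async
        // I/O goroutines and initiates protocol negotiation.
        conn, err := net.Dial("tcp", p.Addr())
        if err != nil {
            fmt.Println(err)
            return
        }
        p.Connect(conn)

        // Disconnect and block until resource cleanup has completed.
        p.Disconnect()
        p.WaitForDisconnect()
    }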
Package ora implements an Oracle database driver.

### Golang Oracle Database Driver ###

#### TL;DR; just use it ####

Call stored procedure with OUT parameters: An Oracle database may be accessed through the database/sql (http://golang.org/pkg/database/sql) package or through the ora package directly. database/sql offers connection pooling, thread safety, a consistent API to multiple database technologies and a common set of Go types. The ora package offers additional features including pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. The ora package is written with the Oracle Call Interface (OCI) C-language libraries provided by Oracle. The OCI libraries are a standard for client application communication and driver communication with Oracle databases. The ora package has been verified to work with:

* Oracle Standard 11g (11.2.0.4.0), Linux x86_64 (RHEL6)
* Oracle Enterprise 12c (12.1.0.1.0), Windows 8.1 and AMD64

---

* [Installation](https://github.com/rana/ora#installation)
* [Data Types](https://github.com/rana/ora#data-types)
* [SQL Placeholder Syntax](https://github.com/rana/ora#sql-placeholder-syntax)
* [Working With The Sql Package](https://github.com/rana/ora#working-with-the-sql-package)
* [Working With The Oracle Package Directly](https://github.com/rana/ora#working-with-the-oracle-package-directly)
* [Logging](https://github.com/rana/ora#logging)
* [Test Database Setup](https://github.com/rana/ora#test-database-setup)
* [Limitations](https://github.com/rana/ora#limitations)
* [License](https://github.com/rana/ora#license)
* [API Reference](http://godoc.org/github.com/rana/ora#pkg-index)
* [Examples](./examples)

---

Minimum requirements are Go 1.3 with CGO enabled, a GCC C compiler, and Oracle 11g (11.2.0.4.0) or Oracle Instant Client (11.2.0.4.0). Install Oracle or Oracle Instant Client. Copy the [oci8.pc](contrib/oci8.pc) from the `contrib` folder (or the one for your system, maybe tailored to your specific locations) to a folder in `$PKG_CONFIG_PATH` or a system folder, such as The ora package has no external Go dependencies and is available on GitHub and gopkg.in: *WARNING*: If you have Oracle Instant Client 11.2, you'll need to add "-lnnz11" to the list of linked libs! Otherwise, you may encounter "undefined reference to `nzosSCSP_SetCertSelectionParams'" errors. Oracle Instant Client 12.1 does not need this. The ora package supports all built-in Oracle data types. The supported Oracle built-in data types are NUMBER, BINARY_DOUBLE, BINARY_FLOAT, FLOAT, DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, CHAR, NCHAR, VARCHAR, VARCHAR2, NVARCHAR2, LONG, CLOB, NCLOB, BLOB, LONG RAW, RAW, ROWID and BFILE. SYS_REFCURSOR is also supported. Oracle does not provide a built-in boolean type. Oracle provides a single-byte character type. A common practice is to define two single-byte characters which represent true and false. The ora package adopts this approach. The ora package associates a Go bool value with a Go rune and sends and receives the rune to a CHAR(1 BYTE) column or CHAR(1 CHAR) column. The default false rune is zero '0'. The default true rune is one '1'. The bool rune association may be configured or disabled when directly using the ora package, but not with the database/sql package.
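As a rough illustration of the bool mapping just described, here is a minimal sketch using database/sql. The DSN, table, and column are hypothetical, and the gopkg.in import path is an assumption based on the installation notes above.

    package main

    import (
        "database/sql"
        "fmt"

        _ "gopkg.in/rana/ora.v4" // assumed path; registers the "ora" driver
    )

    func main() {
        db, err := sql.Open("ora", "user/passwd@localhost:1521/orcl")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // A Go bool is sent as the rune '1' (true) or '0' (false) to a
        // CHAR(1 BYTE) or CHAR(1 CHAR) column.
        if _, err = db.Exec("INSERT INTO flags (is_active) VALUES (:1)", true); err != nil {
            panic(err)
        }

        var active bool
        err = db.QueryRow("SELECT is_active FROM flags WHERE ROWNUM <= 1").Scan(&active)
        fmt.Println(active, err)
    }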
Within a SQL string a placeholder may be specified to indicate where a Go variable is placed. The SQL placeholder is an Oracle identifier, from 1 to 30 characters, prefixed with a colon (:). For example: Placeholders within a SQL statement are bound by position. The actual name is not used by the ora package driver; e.g., the placeholder names :c1, :1, and :xyz are treated equally. The `database/sql` package provides a LastInsertId method to return the last inserted row's id. Oracle does not provide such functionality, but if you append `... RETURNING col /*LastInsertId*/` to your SQL, then it will be presented as LastInsertId. Note that you have to mark your `RETURNING` part with a `/*LastInsertId*/` comment (case insensitive) to allow ora to return the last column as `LastInsertId()`. That column must fit in `int64`, though! You may access an Oracle database through the database/sql package. The database/sql package offers a consistent API across different databases, connection pooling, thread safety and a set of common Go types. database/sql makes working with Oracle straightforward. The ora package implements interfaces in the database/sql/driver package, enabling database/sql to communicate with an Oracle database. Using database/sql ensures you never have to call the ora package directly. When using database/sql, the mapping between Go types and Oracle types may be changed slightly. The database/sql package has strict expectations on Go return types. The Go-to-Oracle type mapping for database/sql is: The "ora" driver is automatically registered for use with sql.Open, but you can call ora.SetCfg to set the configuration options used, including statement configuration and Rset configuration. When configuring the driver for use with database/sql, keep in mind that database/sql has strict Go type-to-Oracle type mapping expectations. The ora package allows programming with pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. When working with the ora package directly, the API is slightly different than database/sql. When using the ora package directly, the mapping between Go types and Oracle types may be changed. The Go-to-Oracle type mapping for the ora package is: An example of using the ora package directly: Pointers may be used to capture out-bound values from a SQL statement such as an insert or stored procedure call. For example, a numeric pointer captures an identity value: A string pointer captures an out parameter from a stored procedure: Slices may be used to insert multiple records with a single insert statement: The ora package provides nullable Go types to support DML operations such as insert and select. The nullable Go types provided by the ora package are Int64, Int32, Int16, Int8, Uint64, Uint32, Uint16, Uint8, Float64, Float32, Time, IntervalYM, IntervalDS, String, Bool, Binary and Bfile. For example, you may insert nullable Strings and select nullable Strings: The `Stmt.Prep` method is variadic, accepting zero or more `GoColumnType` values which define a Go return type for a select-list column. For example, a Prep call can be configured to return an int64 and a nullable Int64 from the same column: Go numerics of various sizes are supported in DML operations. The ora package supports int64, int32, int16, int8, uint64, uint32, uint16, uint8, float64 and float32. A sketch combining placeholders, numeric binding, and the LastInsertId convention follows.
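This is a hedged sketch, not taken from the package's own examples. The table and column names are hypothetical; the positional colon placeholder and the RETURNING /*LastInsertId*/ convention come from the description above.

    package main

    import (
        "database/sql"
        "fmt"

        _ "gopkg.in/rana/ora.v4" // assumed path; registers the "ora" driver
    )

    func main() {
        db, err := sql.Open("ora", "user/passwd@localhost:1521/orcl")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // Placeholders are bound by position; :1 could just as well be :count.
        var count uint16 = 42
        res, err := db.Exec(
            "INSERT INTO widgets (count) VALUES (:1) RETURNING id /*LastInsertId*/",
            count)
        if err != nil {
            panic(err)
        }

        // The marked RETURNING column is surfaced through LastInsertId.
        id, err := res.LastInsertId()
        fmt.Println(id, err)
    }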
For example, you may insert a uint16 and select numerics of various sizes, as in the sketch above. If a non-nullable type is defined for a nullable column returning null, the Go type's zero value is returned. GoColumnTypes defined by the ora package are: When Stmt.Prep doesn't receive a GoColumnType, or receives an incorrect GoColumnType, the default value defined in RsetCfg is used. EnvCfg, SrvCfg, SesCfg, StmtCfg and RsetCfg are the main configuration structs. EnvCfg configures aspects of an Env. SrvCfg configures aspects of a Srv. SesCfg configures aspects of a Ses. StmtCfg configures aspects of a Stmt. RsetCfg configures aspects of a Rset. StmtCfg and RsetCfg have the most options to configure. RsetCfg defines the default mapping between an Oracle select-list column and a Go type. StmtCfg may be set in an EnvCfg, SrvCfg, SesCfg and StmtCfg. RsetCfg may be set in a Stmt. EnvCfg.StmtCfg, SrvCfg.StmtCfg, and SesCfg.StmtCfg may optionally be specified to configure a statement. If StmtCfg isn't specified, default values are applied. EnvCfg.StmtCfg, SrvCfg.StmtCfg, and SesCfg.StmtCfg cascade to new descendent structs. When ora.OpenEnv() is called, a specified EnvCfg is used or a default EnvCfg is created. Creating a Srv with env.OpenSrv() will use SrvCfg.StmtCfg if it is specified; otherwise, EnvCfg.StmtCfg is copied by value to SrvCfg.StmtCfg. Creating a Ses with srv.OpenSes() will use SesCfg.StmtCfg if it is specified; otherwise, SrvCfg.StmtCfg is copied by value to SesCfg.StmtCfg. Creating a Stmt with ses.Prep() will use SesCfg.StmtCfg if it is specified; otherwise, a new StmtCfg with default values is set on the Stmt. Call Stmt.Cfg() to change a Stmt's configuration. An Env may contain multiple Srv. A Srv may contain multiple Ses. A Ses may contain multiple Stmt. A Stmt may contain multiple Rset. Setting a RsetCfg on a StmtCfg does not cascade through descendent structs. Configuration of Stmt.Cfg takes effect prior to calls to Stmt.Exe and Stmt.Qry; consequently, any updates to Stmt.Cfg after a call to Stmt.Exe or Stmt.Qry are not observed. One configuration scenario may be to set a server's select statements to return nullable Go types by default: Another scenario may be to configure the runes mapped to bool values: Oracle-specific types offered by the ora package are ora.Rset, ora.IntervalYM, ora.IntervalDS, ora.Raw, ora.Lob and ora.Bfile. ora.Rset represents an Oracle SYS_REFCURSOR. ora.IntervalYM represents an Oracle INTERVAL YEAR TO MONTH. ora.IntervalDS represents an Oracle INTERVAL DAY TO SECOND. ora.Raw represents an Oracle RAW or LONG RAW. ora.Lob may represent an Oracle BLOB or Oracle CLOB. And ora.Bfile represents an Oracle BFILE. ROWID columns are returned as strings and don't have a unique Go type.

#### LOBs

The default for SELECTing [BC]LOB columns is a safe Bin or S, which means all the contents of the LOB are slurped into memory and returned as a []byte or string. The default LOB fetch length (DefaultLOBFetchLen) means LOBs are prefetched only minimally, to minimize extra memory usage - you can override this using `stmt.SetCfg(stmt.Cfg().SetLOBFetchLen(100))`. If you want more control, you can use ora.L in Prep, Qry or `ses.SetCfg(ses.Cfg().SetBlob(ora.L))`. But keep in mind that Oracle restricts the use of LOBs: it is forbidden to do ANYTHING while reading the LOB! No other query, no exec, no close of the Rset - even *advancing* to the next record in the result set is forbidden! Failing to adhere to these rules results in "Invalid handle" and ORA-03127 errors.
You cannot start reading another LOB until you have finished reading the previous one, not even in the same row! Failing to do so results in ORA-24804! For examples, see [z_lob_test.go](z_lob_test.go).

#### Rset

Rset is used to obtain Go values from a SQL select statement. Methods Rset.Next, Rset.NextRow, and Rset.Len are available. Fields Rset.Row, Rset.Err, Rset.Index, and Rset.ColumnNames are also available. The Next method attempts to load data from an Oracle buffer into Row, returning true when successful. When no data is available, or if an error occurs, Next returns false, setting Row to nil. Any error in Next is assigned to Err. Calling Next increments Index, and method Len returns the total number of rows processed. The NextRow method is convenient for returning a single row. NextRow calls Next and returns Row. ColumnNames returns the names of columns defined by the SQL select statement. Rset has two usages. Rset may be returned from Stmt.Qry when prepared with a SQL select statement: Or, *Rset may be passed to Stmt.Exe when prepared with a stored procedure accepting an OUT SYS_REFCURSOR parameter: Stored procedures with multiple OUT SYS_REFCURSOR parameters enable a single Exe call to obtain multiple Rsets: The types of values assigned to Row may be configured in StmtCfg.Rset. For configuration to take effect, assign StmtCfg.Rset prior to calling Stmt.Qry or Stmt.Exe. Rset prefetching may be controlled by StmtCfg.PrefetchRowCount and StmtCfg.PrefetchMemorySize. PrefetchRowCount works in coordination with PrefetchMemorySize. When PrefetchRowCount is set to zero, only PrefetchMemorySize is used; otherwise, the minimum of PrefetchRowCount and PrefetchMemorySize is used. The default uses a PrefetchMemorySize of 134MB. Opening and closing Rsets is managed internally; Rset does not have an Open method or a Close method. IntervalYM may be inserted and selected: IntervalDS may be inserted and selected: Transactions on an Oracle server are supported. DML statements auto-commit unless a transaction has started: Ses.PrepAndExe, Ses.PrepAndQry, Ses.Ins, Ses.Upd, and Ses.Sel are convenient one-line methods. Ses.PrepAndExe offers a convenient one-line call to Ses.Prep and Stmt.Exe. Ses.PrepAndQry offers a convenient one-line call to Ses.Prep and Stmt.Qry. Ses.Ins composes, prepares and executes a SQL INSERT statement. Ses.Ins is useful when you have to create and maintain a simple INSERT statement with a long list of columns. As table columns are added and dropped over the lifetime of a table, Ses.Ins is easy to read and revise. Ses.Upd composes, prepares and executes a SQL UPDATE statement. Ses.Upd is useful when you have to create and maintain a simple UPDATE statement with a long list of columns. As table columns are added and dropped over the lifetime of a table, Ses.Upd is easy to read and revise. Ses.Sel composes, prepares and queries a SQL SELECT statement. Ses.Sel is useful when you have to create and maintain a simple SELECT statement with a long list of columns that have non-default GoColumnTypes. As table columns are added and dropped over the lifetime of a table, Ses.Sel is easy to read and revise. The Ses.Ping method checks whether the client's connection to an Oracle server is valid. A call to Ping requires an open Ses. Ping will return a nil error when the connection is fine: The Srv.Version method is available to obtain the Oracle server version.
A call to Version requires an open Ses: Further code examples are available in the [example file](https://github.com/rana/ora/blob/master/z_example_test.go), test files and [samples folder](https://github.com/rana/ora/tree/master/samples). The ora package provides a simple ora.Logger interface for logging. Logging is disabled by default. Specify one of three optional built-in logging packages to enable logging, or use your own logging package. ora.Cfg().Log offers various options to enable or disable logging of specific ora driver methods. For example: To use the standard Go log package: which produces a sample log of: Messages are prefixed with 'ORA I' for information or 'ORA E' for an error. The log package is configured to write to os.Stderr by default. Use the ora/lg.Std type to configure an alternative io.Writer. To use the glog package: which produces a sample log of: To use the log15 package: which produces a sample log of: See https://github.com/rana/ora/tree/master/samples/lg15/main.go for sample code which uses the log15 package. Tests are available and require some setup. Setup varies depending on whether the Oracle server is configured as a container database or non-container database. It's simpler to set up a non-container database. An example for each setup is explained. Non-container test database setup steps: Container test database setup steps: Some helpful SQL maintenance statements: Run the tests. The database/sql method Stmt.QueryRow is not supported. Go 1.6 introduced stricter cgo (calling C from Go) rules and runtime checks. This is good, as the possibility of C code corrupting Go code is almost completely eliminated, but it also means a severe increase in call overhead. [Sometimes](https://groups.google.com/forum/#!topic/golang-nuts/ccMkPG6Bi5k) this can be 22x the Go 1.5.3 call time! So if you need performance more than correctness, start your programs with the "GODEBUG=cgocheck=0" environment setting. Copyright 2017 Rana Ian, Tamás Gulácsi. All rights reserved. Use of this source code is governed by The MIT License found in the accompanying LICENSE file.
Pact Go enables consumer-driven contract testing, providing a mock service and DSL for the consumer project, and interaction playback and verification for the service provider project. Consumer-side Pact testing is an isolated test that ensures a given component is able to collaborate with another (remote) component. Pact will automatically start a Mock server in the background that will act as the collaborator's test double. This implies that any interactions expected on the Mock server will be validated, meaning a test will fail if all interactions were not completed, or if unexpected interactions were found: A typical consumer-side test would look something like this: If this test completed successfully, a Pact file should have been written to ./pacts/my_consumer-my_provider.json containing all of the interactions expected to occur between the Consumer and Provider. In addition to verbatim value matching, you have 3 useful matching functions in the `dsl` package that can increase expressiveness and reduce brittle test cases. Here is a complex example that shows how all 3 terms can be used together: This example will result in a response body from the mock server that looks like: See the examples in the dsl package and the matcher tests (https://github.com/pact-foundation/pact-go/v2/blob/master/dsl/matcher_test.go) for more matching examples. NOTE: You will need to use valid Ruby regular expressions (http://ruby-doc.org/core-2.1.5/Regexp.html) and double escape backslashes. Read more about flexible matching (https://github.com/pact-foundation/pact-ruby/wiki/Regular-expressions-and-type-matching-with-Pact). Provider-side Pact testing involves verifying that the contract - the Pact file - can be satisfied by the Provider. A typical Provider-side test would look something like: The `VerifyProvider` will handle all verifications, treating them as subtests and giving you granular test reporting. If you don't like this behaviour, you may call `VerifyProviderRaw` directly and handle the errors manually. Note that `PactURLs` may be a list of local pact files or remote based urls (possibly from a Pact Broker - http://docs.pact.io/documentation/sharings_pacts.html). Pact reads the specified pact files (from remote or local sources) and replays the interactions against a running Provider. If all of the interactions are met, we can say that both sides of the contract are satisfied and the test passes. When validating a Provider, you have 3 options to provide the Pact files: 1. Use "PactURLs" to specify the exact set of pacts to be replayed: Options 2 and 3 are particularly useful when you want to validate that your Provider is able to meet the contracts of what's in Production and also the latest in development. See this [article](http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) for more on this strategy. Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. So how do you test a request that requires data to exist on the provider? Provider states are how you achieve this using Pact. Provider states also allow the consumer to make the same request with different expected responses (e.g. different response codes, or the same resource with a different subset of data). States are configured on the consumer side when you issue a dsl.Given() clause with a corresponding request/response pair; a sketch follows.
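The following is a hedged sketch of a consumer-side test with a provider state, loosely following the dsl API described above. The field names, the dsl.String matcher helper, and the exact request/response types are assumptions that may differ between versions; the user and endpoint are hypothetical.

    package main

    import (
        "fmt"
        "net/http"
        "testing"

        "github.com/pact-foundation/pact-go/dsl"
    )

    func TestConsumer(t *testing.T) {
        pact := dsl.Pact{Consumer: "my_consumer", Provider: "my_provider"}
        defer pact.Teardown()

        pact.
            AddInteraction().
            Given("User foo exists").                // provider state
            UponReceiving("A request for user foo"). // interaction description
            WithRequest(dsl.Request{
                Method: "GET",
                Path:   dsl.String("/users/foo"),
            }).
            WillRespondWith(dsl.Response{
                Status: 200,
            })

        // Verify runs the closure against the mock server and fails if any
        // expected interaction was not exercised.
        if err := pact.Verify(func() error {
            _, err := http.Get(fmt.Sprintf("http://localhost:%d/users/foo", pact.Server.Port))
            return err
        }); err != nil {
            t.Fatal(err)
        }
    }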
Configuring the provider is a little more involved, and (currently) requires running an API endpoint to configure any [provider states](http://docs.pact.io/documentation/provider_states.html) during the verification process. The option you must provide to the dsl.VerifyRequest is: An example route using the standard Go http package might look like this: See the examples or read more at http://docs.pact.io/documentation/provider_states.html. See the Pact Broker (http://docs.pact.io/documentation/sharings_pacts.html) documentation for more details on the Broker, and this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) on how to make it work for you. Publishing using Go code: Publishing from the CLI: Use a cURL request like the following to PUT the pact to the right location, specifying your consumer name, provider name and consumer version. The following flags are required to use basic authentication when publishing or retrieving Pact files to/from a Pact Broker: Pact Go uses a simple log utility (logutils - https://github.com/hashicorp/logutils) to filter log messages. The CLI already contains flags to manage this; should you want to control the log level in your tests, you can set it like so:
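A minimal sketch of setting the log level on the Pact struct, as mentioned above. The LogLevel field and the "DEBUG" level name are assumptions based on logutils-style levels; check your version's documentation.

    package main

    import "github.com/pact-foundation/pact-go/dsl"

    func newPact() dsl.Pact {
        return dsl.Pact{
            Consumer: "my_consumer",
            Provider: "my_provider",
            LogLevel: "DEBUG", // assumed: one of DEBUG, INFO, WARN, ERROR
        }
    }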
Package fwd provides a buffered reader and writer. Each has methods that help improve the encoding/decoding performance of some binary protocols. The Writer and Reader types provide similar functionality to their counterparts in bufio, plus a few extra utility methods that simplify read-ahead and write-ahead. I wrote this package to improve serialization performance for http://github.com/tinylib/msgp, where it provided about a 2x speedup over `bufio` for certain workloads. However, care must be taken to understand the semantics of the extra methods provided by this package, as they allow the user to access and manipulate the buffer memory directly. The extra methods for Reader are Reader.Peek, Reader.Skip and Reader.Next. (*fwd.Reader).Peek, unlike (*bufio.Reader).Peek, will re-allocate the read buffer in order to accommodate arbitrarily large read-ahead. (*fwd.Reader).Skip skips the next 'n' bytes in the stream, and uses the io.Seeker interface if the underlying stream implements it. (*fwd.Reader).Next returns a slice pointing to the next 'n' bytes in the read buffer (like Reader.Peek), but also increments the read position. This allows users to process streams in arbitrary block sizes without having to manage appropriately-sized slices. Additionally, obviating the need to copy the data from the buffer to another location in memory can improve performance dramatically in CPU-bound applications. Writer has only one extra method, (*fwd.Writer).Next, which returns a slice pointing to the next 'n' bytes of the writer and increments the write position by the length of the returned slice. This allows users to write directly to the end of the buffer.
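The following is a hedged sketch of fixed-size block processing with Reader.Next and Writer.Next, per the description above. It assumes fwd.NewReader and fwd.NewWriter constructors; check the package documentation for exact signatures.

    package main

    import (
        "bytes"
        "fmt"

        "github.com/philhofer/fwd"
    )

    func main() {
        src := bytes.NewReader(bytes.Repeat([]byte{0xAB}, 16))
        r := fwd.NewReader(src)

        // Next returns a slice into the read buffer and advances the read
        // position; the slice is only valid until the next read call.
        blk, err := r.Next(8)
        fmt.Println(len(blk), err)

        var dst bytes.Buffer
        w := fwd.NewWriter(&dst)

        // Writer.Next hands out the next 8 bytes of the write buffer so we
        // can fill them in place, avoiding an intermediate copy.
        out, err := w.Next(8)
        if err == nil {
            copy(out, blk)
        }
        w.Flush()
        fmt.Println(dst.Len())
    }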
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions. The following example shows how to get started with the SDK. The example below creates an identityClient struct with the default configuration. It then utilizes the identityClient to list availability domains and prints them to stdout. More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string. The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value. For example: The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. For example: The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service. The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go. The example below shows how to create a default signer. The signer also allows more granular control over the headers used for signing. For example: You can combine a custom signer with the exposed clients in the SDK. This allows you to add custom signed headers to the request. Following is an example: Bear in mind that some services have a whitelist of headers that they expect to be signed. Therefore, adding an arbitrary header can result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. For example: In the case of a polymorphic response you can type assert the interface to the expected type. For example: An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63 When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses.
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The possible values for the "OCI_GO_SDK_DEBUG" variable are:

1. "info" or "i" enables all info logging messages
2. "debug" or "d" enables all debug and info logging messages
3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages
4. "null" turns all logging messages off

If the value of the environment variable does not match any of the above, the default logging level of "info" is used. If the environment variable is not present, no logging messages are emitted. The default destination for logging is stderr; if you want to output logs to a file, you can set it via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are:

1. "file" or "f" sends all logging output to a file
2. "combine" or "c" sends all logging output to both stderr and a file

You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path. Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there's a network issue, etc. This can be accomplished by using RequestMetadata.RetryPolicy (request-level configuration); alternatively, global (all services) or client-level RetryPolicy configuration is also possible. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the io.Seeker interface. The retry behavior precedence (highest to lowest) is defined as below: The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). The default retry policy is defined by:

Default Retry-able Errors - Below is the list of default retry-able errors for which retry attempts should be made. The following errors should be retried (with backoff):

HTTP Code Customer-facing Error Code

Apart from the above errors, retries should also be attempted for the following client-side errors:

1. HTTP connection timeout
2. Request connection errors
3. Request exceptions
4. Other timeouts (like read timeout)

The above errors can be avoided through retrying and hence are classified as the default retry-able errors. Additionally, retries should also be made for Circuit Breaker exceptions (exceptions raised by a Circuit Breaker in an open state).

Default Termination Strategy - The termination strategy defines when SDKs should stop attempting to retry. In other words, it's the deadline for retries. The OCI SDKs should stop retrying the operation after 7 retry attempts; this means the SDKs will have waited for ~98 seconds, or ~1.5 minutes, of total delay. SDKs will make a total of 8 attempts
(1 initial request + 7 retries).

Default Delay Strategy - The delay strategy defines the amount of time to wait between each of the retry attempts. The default delay strategy chosen for the SDK is exponential backoff with jitter, using:

1. A base time of 1 second for retry calculations
2. An exponent of 2; when calculating the next retry time, the SDK will raise this to the power of the number of attempts
3. A maximum wait time between calls of 30 seconds (capped)
4. An added jitter value between 0-1000 milliseconds to spread out the requests

Configure and use default retry policy - You can set this retry policy for a single request, for all requests made by a client, or for all requests made by all clients; you can also set the default retry via an environment variable, which is a global switch for all services. A sketch of the per-request variant appears at the end of this section. Some services enable retry for operations by default; this can be overridden using any of the alternatives mentioned above. To know which service operations have retries enabled by default, look at the operation's description in the SDK - it will say whether it has retries enabled by default. Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail. If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. Example: This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will:

1. Make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made
2. Retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer
3. Retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm)

If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.
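The following is a hedged sketch of attaching the default retry policy to a single request, assuming the v1 import paths; the identity request type is just one example of a generated request struct, and the compartment ID is hypothetical.

    package sample

    import (
        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func newListRequest(compartmentID string) identity.ListAvailabilityDomainsRequest {
        // DefaultRetryPolicy retries on the documented retry-able errors
        // with exponential backoff and jitter, as described above.
        policy := common.DefaultRetryPolicy()
        return identity.ListAvailabilityDomainsRequest{
            CompartmentId:   common.String(compartmentID),
            RequestMetadata: common.RequestMetadata{RetryPolicy: &policy},
        }
    }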
If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option: The NewRetryPolicy function also creates a retry policy without eventual consistency. A Circuit Breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles. It also enables an application to detect whether the fault has been resolved: if the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping calls in a circuit breaker object which monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error; this also saves the service from being overwhelmed with network calls in case of an outage.

Circuit Breaker Configuration Definitions:

1. Failure Rate Threshold - The state of the CircuitBreaker changes from CLOSED to OPEN when the failure rate is equal to or greater than a configurable threshold, for example when more than 50% of the recorded calls have failed.
2. Reset Timeout - The timeout after which an open circuit breaker will attempt a request if a request is made.
3. Failure Exceptions - The list of exceptions that will be regarded as failures for the circuit.
4. Minimum number of calls/volume threshold - Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate.

The default configuration is:

1. Failure Rate Threshold - 80%. This means that when 80% of the requests calculated for a time window of 120 seconds have failed, the circuit will transition from closed to open.
2. Minimum number of calls/volume threshold - A value of 10, for the above defined time window of 120 seconds.
3. Reset Timeout - 30 seconds to wait before setting the breaker to the halfOpen state and trying the action again.
4. Failure Exceptions - The failures for the circuit will only be recorded for the retryable/transient exceptions. This means only the following exceptions will be regarded as failures for the circuit:

HTTP Code Customer-facing Error Code

Apart from the above, the following client-side exceptions will also be treated as a failure for the circuit:

1. HTTP connection timeout
2. Request connection errors
3. Request exceptions
4. Other timeouts (like read timeout)

The Go SDK enables the circuit breaker with the default configuration for most of the service clients; if you don't want to enable the solution, you can disable the functionality before your application runs. The Go SDK also supports customizing the Circuit Breaker with specified configurations. You can find the examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go To know which service clients have circuit breakers enabled, look at the service client's description in the SDK - it will say whether it has circuit breakers enabled by default. The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, then you can set this up in the following ways:
The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, you can set this up in the following ways: 1. Configuring an environment variable as described at https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client. To modify the underlying Transport struct in HttpClient, you can do something similar to the sample for the audit service client shown in the sketch after this section.

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage. This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-typed response field is modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host. If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable.
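Returning to option 2 of the proxy configuration above, here is a minimal sketch that swaps the Transport on an audit service client; the proxy address is a placeholder.

    package main

    import (
        "log"
        "net/http"
        "net/url"

        "github.com/oracle/oci-go-sdk/audit"
        "github.com/oracle/oci-go-sdk/common"
    )

    func main() {
        client, err := audit.NewAuditClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        proxyURL, err := url.Parse("http://proxy.example.com:3128") // placeholder address
        if err != nil {
            log.Fatal(err)
        }

        // Replace the client's HTTP client with one whose Transport routes
        // every outgoing request through the proxy.
        client.HTTPClient = &http.Client{
            Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
        }
    }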
Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to: https://github.com/oracle/oci-go-sdk#help
A process model for Go. Ifrit is a small set of interfaces for composing single-purpose units of work into larger programs. Users divide their program into single-purpose units of work, each of which implements the `Runner` interface. Each `Runner` can be invoked to create a `Process`, which can be monitored and signaled to stop. The name Ifrit comes from a type of daemon in Arabic folklore. It's a play on the Unix term 'daemon', indicating a process that is managed by the init system. Ifrit ships with a standard library which contains packages for common processes - http servers, integration test helpers - alongside packages which model process supervision and orchestration. These packages can be combined to form complex servers which start and shut down cleanly. The advantage of small, single-responsibility processes is that they are simple, and thus can be made reliable. Ifrit's interfaces are designed to be free of race conditions and edge cases, allowing larger orchestrated processes to also be made reliable. The overall effect is less code and more reliability as your system grows with grace.
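To make the Runner/Process relationship concrete, here is a minimal sketch using ifrit's Invoke; the Run signature is the `Runner` interface's, while the noopRunner type itself is a hypothetical example.

    package main

    import (
        "fmt"
        "os"

        "github.com/tedsuo/ifrit"
    )

    // noopRunner is a hypothetical Runner: it reports readiness immediately,
    // then blocks until it is signaled to stop.
    type noopRunner struct{}

    func (noopRunner) Run(signals <-chan os.Signal, ready chan<- struct{}) error {
        close(ready) // tell ifrit this unit of work is ready
        sig := <-signals
        fmt.Println("shutting down on", sig)
        return nil
    }

    func main() {
        process := ifrit.Invoke(noopRunner{}) // starts the Runner and waits for readiness
        process.Signal(os.Interrupt)          // ask the process to stop
        if err := <-process.Wait(); err != nil {
            fmt.Println("exited with error:", err)
        }
    }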
This is the official Go SDK for Oracle Cloud Infrastructure. Refer to https://github.com/oracle/oci-go-sdk/blob/master/README.md#installing for installation instructions, and to https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring for configuration instructions.

The example below creates an identityClient struct with the default configuration and then uses it to list availability domains, printing them to stdout (see the sketch after this section). More examples can be found in the SDK GitHub repo: https://github.com/oracle/oci-go-sdk/tree/master/example

Optional fields are represented with the `mandatory:"false"` tag on input structs. The SDK will omit all optional fields that are nil when making requests. In the case of enum-type fields, the SDK will omit fields whose value is an empty string.

The SDK uses pointers for primitive types in many input structs. To aid in the construction of such structs, the SDK provides functions that return a pointer for a given value, for example common.String for string values.

The SDK exposes functionality that allows the user to customize any http request before it is sent to the service. You can do so by setting the `Interceptor` field in any of the `Client` structs. The Interceptor closure gets called before the signing process, thus any changes done to the request will be properly signed and submitted to the service.

The SDK exposes a stand-alone signer that can be used to sign custom requests. Related code, including how to create a default signer and how to exercise more granular control over the headers used for signing, can be found here: https://github.com/oracle/oci-go-sdk/blob/master/common/http_signer.go You can combine a custom signer with the exposed clients in the SDK, which allows you to add custom signed headers to the request. Bear in mind that some services have a whitelist of headers that they expect to be signed; adding an arbitrary header can therefore result in authentication errors. To see a runnable example, see https://github.com/oracle/oci-go-sdk/blob/master/example/example_identity_test.go For more information on the signing algorithm refer to: https://docs.cloud.oracle.com/Content/API/Concepts/signingrequests.htm

Some operations accept or return polymorphic JSON objects. The SDK models such objects as interfaces. Further, the SDK provides structs that implement such interfaces. Thus, for all operations that expect interfaces as input, pass the struct in the SDK that satisfies such an interface. In the case of a polymorphic response, you can type assert the interface to the expected type. An example of polymorphic JSON request handling can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_test.go#L63

When calling a list operation, the operation will retrieve a page of results. To retrieve more data, call the list operation again, passing in the value of the most recent response's OpcNextPage as the value of Page in the next list operation call. When there is no more data the OpcNextPage field will be nil. An example of pagination using this logic can be found here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_core_pagination_test.go

The SDK has a built-in logging mechanism used internally. The internal logging logic is used to record the raw http requests, responses and potential errors when (un)marshalling requests and responses.
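A minimal sketch reconstructing the getting-started example described above: it builds an identity client from the default configuration, attaches a hypothetical Interceptor, and lists availability domains using the common.String pointer helper. The tenancy OCID is a placeholder.

    package main

    import (
        "context"
        "fmt"
        "log"
        "net/http"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/identity"
    )

    func main() {
        client, err := identity.NewIdentityClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        // The Interceptor runs before signing, so any header added here is
        // signed along with the rest of the request.
        client.Interceptor = func(request *http.Request) error {
            request.Header.Set("X-Custom-Header", "value") // hypothetical header
            return nil
        }

        request := identity.ListAvailabilityDomainsRequest{
            CompartmentId: common.String("ocid1.tenancy.oc1..example"), // placeholder
        }
        response, err := client.ListAvailabilityDomains(context.Background(), request)
        if err != nil {
            log.Fatal(err)
        }
        for _, ad := range response.Items {
            fmt.Println(*ad.Name)
        }
    }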
Built-in logging in the SDK is controlled via the environment variable "OCI_GO_SDK_DEBUG" and its contents. The following are possible values for the "OCI_GO_SDK_DEBUG" variable: 1. "info" or "i" enables all info logging messages 2. "debug" or "d" enables all debug and info logging messages 3. "verbose" or "v" or "1" enables all verbose, debug and info logging messages 4. "null" turns all logging messages off. If the value of the environment variable does not match any of the above, the default logging level is "info". If the environment variable is not present, no logging messages are emitted.

The default destination for logging is stderr; if you want to output logs to a file, you can set this via the environment variable "OCI_GO_SDK_LOG_OUTPUT_MODE". The possible values are: 1. "file" or "f" saves all logging output to a file 2. "combine" or "c" sends all logging output to both stderr and a file. You can also customize the log file location and name via the "OCI_GO_SDK_LOG_FILE" environment variable; the value should be the path to a specific file. If this environment variable is not present, the default location will be the project root path.

Sometimes you may need to wait until an attribute of a resource, such as an instance or a VCN, reaches a certain state. An example of this would be launching an instance and then waiting for the instance to become available, or waiting until a subnet in a VCN has been terminated. You might also want to retry the same operation if there is a network issue. This can be accomplished by using RequestMetadata.RetryPolicy (request-level configuration); alternatively, global (all services) or client-level RetryPolicy configuration is also possible. You can find examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_retry_test.go

If you are trying to make a PUT/POST API call with a binary request body, please make sure the binary request body is resettable, which means the request body should implement the Seeker interface, as in the sketch below.
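For instance, an *os.File satisfies both io.ReadCloser and io.Seeker, so the SDK can rewind the body between retry attempts. A minimal sketch (the namespace, bucket, and file names are placeholders):

    package main

    import (
        "context"
        "log"
        "os"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/objectstorage"
    )

    func main() {
        client, err := objectstorage.NewObjectStorageClientWithConfigurationProvider(common.DefaultConfigProvider())
        if err != nil {
            log.Fatal(err)
        }

        // *os.File implements io.Seeker, so the body can be rewound and
        // re-sent if the request is retried.
        file, err := os.Open("data.bin")
        if err != nil {
            log.Fatal(err)
        }
        defer file.Close()

        info, err := file.Stat()
        if err != nil {
            log.Fatal(err)
        }

        request := objectstorage.PutObjectRequest{
            NamespaceName: common.String("my-namespace"), // placeholder
            BucketName:    common.String("my-bucket"),    // placeholder
            ObjectName:    common.String("data.bin"),
            ContentLength: common.Int64(info.Size()),
            PutObjectBody: file,
        }
        if _, err := client.PutObject(context.Background(), request); err != nil {
            log.Fatal(err)
        }
    }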
The retry behavior precedence (highest to lowest) runs from the most specific configuration to the most general: a policy set on an individual request overrides a client-level policy, which in turn overrides the global and environment-variable defaults.

The OCI Go SDK defines a default retry policy that retries on the errors suitable for retries (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm), for a recommended period of time (up to 7 attempts spread out over at most approximately 1.5 minutes). This default retry policy can be created using common.DefaultRetryPolicy(). You can set this retry policy for a single request, for all requests made by a client, or for all requests made by all clients; you can also set the default retry behavior via an environment variable, which is a global switch for all services (see the configuration sketch earlier in this document). Some services enable retry for operations by default; this can be overridden using any of the alternatives mentioned above.

Some resources may have to be replicated across regions and are only eventually consistent. That means the request to create, update, or delete the resource succeeded, but the resource is not available everywhere immediately. Creating, updating, or deleting any resource in the Identity service is affected by eventual consistency, and doing so may cause other operations in other services to fail until the Identity resource has been replicated. For example, the request to CreateTag in the Identity service in the home region succeeds, but immediately using that created tag in another region in a request to LaunchInstance in the Compute service may fail.

If you are creating, updating, or deleting resources in the Identity service, we recommend using an eventually consistent retry policy for any service you access. The default retry policy already deals with eventual consistency. This retry policy will use a different strategy if an eventually consistent change was made in the recent past (called the "eventually consistent window", currently defined to be 4 minutes after the eventually consistent change). This special retry policy for eventual consistency will: 1. make up to 9 attempts (including the initial attempt); if an attempt is successful, no more attempts will be made 2. retry at most until (a) approximately the end of the eventually consistent window or (b) the end of the default retry period of about 1.5 minutes, whichever is farther in the future; if an attempt is successful, no more attempts will be made, and the OCI Go SDK will not wait any longer 3. retry on the error codes 400-RelatedResourceNotAuthorizedOrNotFound, 404-NotAuthorizedOrNotFound, and 409-NotAuthorizedOrResourceAlreadyExists, for which the default retry policy does not retry, in addition to the errors the default retry policy retries on (see https://docs.oracle.com/en-us/iaas/Content/API/References/apierrors.htm) If there were no eventually consistent actions within the recent past, then this special retry strategy is not used.

If you want a retry policy that does not handle eventual consistency in a special way, for example because you retry on all error responses, you can use DefaultRetryPolicyWithoutEventualConsistency or NewRetryPolicyWithOptions with the common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()) option, as in the sketch below. The NewRetryPolicy function also creates a retry policy without eventual consistency.
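Both constructions named above, as a sketch; the function calls mirror the names in this documentation, and the return values are assumed to be common.RetryPolicy values.

    package main

    import (
        "github.com/oracle/oci-go-sdk/common"
    )

    func main() {
        // Option 1: the convenience constructor without eventual consistency.
        policy := common.DefaultRetryPolicyWithoutEventualConsistency()

        // Option 2: NewRetryPolicyWithOptions, seeded with the values of the
        // non-eventually-consistent default policy.
        policy2 := common.NewRetryPolicyWithOptions(
            common.ReplaceWithValuesFromRetryPolicy(common.DefaultRetryPolicyWithoutEventualConsistency()),
        )

        _, _ = policy, policy2 // attach to a request, client, or global setting as shown earlier
    }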
A circuit breaker can prevent an application from repeatedly trying to execute an operation that is likely to fail, allowing it to continue without waiting for the fault to be rectified or wasting CPU cycles. It also enables an application to detect whether the fault has been resolved: if the problem appears to have been rectified, the application can attempt to invoke the operation. The Go SDK integrates the sony/gobreaker solution, wrapping requests in a circuit breaker object that monitors for failures. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error; this also saves the service from being overwhelmed with network calls in case of an outage. The Go SDK enables the circuit breaker with a default configuration; if you do not want this behavior, you can disable the functionality before your application starts (see the circuit breaker sketch earlier in this document). The Go SDK also supports customizing the circuit breaker with specific configurations. You can find examples here: https://github.com/oracle/oci-go-sdk/blob/master/example/example_circuitbreaker_test.go

The Go SDK uses the net/http package to make calls to OCI services. If your environment requires you to use a proxy server for outgoing HTTP requests, you can set this up in the following ways: 1. Configuring an environment variable as described at https://golang.org/pkg/net/http/#ProxyFromEnvironment 2. Modifying the underlying Transport struct for a service client, as shown in the proxy sketch earlier in this document.

The Object Storage service supports multipart uploads to make large object uploads easier by splitting the large object into parts. The Go SDK supports raw multipart upload operations for advanced use cases, as well as a higher-level upload class that uses the multipart upload APIs. For links to the APIs used for multipart upload operations, see Managing Multipart Uploads (https://docs.cloud.oracle.com/iaas/Content/Object/Tasks/usingmultipartuploads.htm). Higher-level multipart uploads are implemented using the UploadManager, which will split a large object into parts for you, upload the parts in parallel, and then recombine and commit the parts as a single object in storage (see the sketch at the end of this document). This code sample shows how to use the UploadManager to automatically split an object into parts for upload to simplify interaction with the Object Storage service: https://github.com/oracle/oci-go-sdk/blob/master/example/example_objectstorage_test.go

Some response fields are enum-typed. In the future, individual services may return values not covered by existing enums for that field. To address this possibility, every enum-typed response field is modeled as a type that supports any string. Thus if a service returns a value that is not recognized by your version of the SDK, the response field will be set to this value. When individual services return a polymorphic JSON response not available as a concrete struct, the SDK will return an implementation that only satisfies the interface modeling the polymorphic JSON response.

If you are using a version of the SDK released prior to the announcement of a new region, you may need to use a workaround to reach it, depending on whether the region is in the oraclecloud.com realm. A region is a localized geographic area. For more information on regions and how to identify them, see Regions and Availability Domains (https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm). A realm is a set of regions that share entities. You can identify your realm by looking at the domain name at the end of the network address. For example, the realm for xyz.abc.123.oraclecloud.com is oraclecloud.com.

oraclecloud.com Realm: For regions in the oraclecloud.com realm, even if common.Region does not contain the new region, the forward compatibility of the SDK can automatically handle it. You can pass new region names just as you would pass ones that are already defined. For more information on passing region names in the configuration, see Configuring (https://github.com/oracle/oci-go-sdk/blob/master/README.md#configuring). For details on common.Region, see https://github.com/oracle/oci-go-sdk/blob/master/common/common.go.

Other Realms: For regions in realms other than oraclecloud.com, you can use the following workarounds to reach new regions with earlier versions of the SDK. NOTE: Be sure to supply the appropriate endpoints for your region. You can overwrite the target host with client.Host. If you are authenticating via instance principals, you can set the authentication endpoint in an environment variable.

Got a fix for a bug, or a new feature you'd like to contribute? The SDK is open source and accepting pull requests on GitHub: https://github.com/oracle/oci-go-sdk Licensing information is available at: https://github.com/oracle/oci-go-sdk/blob/master/LICENSE.txt To be notified when a new version of the Go SDK is released, subscribe to the following feed: https://github.com/oracle/oci-go-sdk/releases.atom For help, please refer to: https://github.com/oracle/oci-go-sdk#help
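As referenced in the multipart upload discussion above, here is a minimal sketch of UploadManager usage. It assumes the transfer subpackage used by the repository's object storage example; the namespace, bucket, object, and file path values are placeholders.

    package main

    import (
        "context"
        "log"

        "github.com/oracle/oci-go-sdk/common"
        "github.com/oracle/oci-go-sdk/objectstorage/transfer"
    )

    func main() {
        // UploadManager splits a large file into parts, uploads the parts in
        // parallel, and commits them as a single object.
        uploadManager := transfer.NewUploadManager()

        request := transfer.UploadFileRequest{
            UploadRequest: transfer.UploadRequest{
                NamespaceName: common.String("my-namespace"), // placeholder
                BucketName:    common.String("my-bucket"),    // placeholder
                ObjectName:    common.String("large-file.bin"),
            },
            FilePath: "/path/to/large-file.bin", // placeholder
        }

        response, err := uploadManager.UploadFile(context.Background(), request)
        if err != nil {
            log.Fatal(err)
        }
        log.Println("upload finished:", response)
    }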