Package firehose provides the API client, operations, and parameter types for Amazon Kinesis Firehose. Amazon Data Firehose was previously known as Amazon Kinesis Data Firehose. Amazon Data Firehose is a fully managed service that delivers real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon OpenSearch Service, Amazon Redshift, Splunk, and various other supported destinations.
Package mq provides the API client, operations, and parameter types for AmazonMQ. Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers in the cloud. A message broker allows software applications and components to communicate using various programming languages, operating systems, and formal messaging protocols.
Package codepipeline provides the API client, operations, and parameter types for AWS CodePipeline. This is the CodePipeline API Reference. This guide provides descriptions of the actions and data types for CodePipeline. Some functionality for your pipeline can only be configured through the API. For more information, see the CodePipeline User Guide. You can use the CodePipeline API to work with pipelines, stages, actions, and transitions. Pipelines are models of automated release processes. Each pipeline is uniquely named, and consists of stages, actions, and transitions. You can work with pipelines by calling: CreatePipeline DeletePipeline GetPipeline GetPipelineExecution GetPipelineState ListActionExecutions ListPipelines ListPipelineExecutions StartPipelineExecution StopPipelineExecution UpdatePipeline Pipelines include stages. Each stage contains one or more actions that must complete before the next stage begins. A stage results in success or failure. If a stage fails, the pipeline stops at that stage and remains stopped until either a new version of an artifact appears in the source location, or a user takes action to rerun the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, see CodePipeline Pipeline Structure Reference. Pipeline stages include actions that are categorized as, for example, source or build actions performed in a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState. Valid action categories are: Source Build Test Deploy Approval Invoke Compute Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete. You can work with transitions by calling: DisableStageTransition EnableStageTransition For third-party integrators or developers who want to create their own integrations with CodePipeline, the expected sequence varies from that of the standard API user. To integrate with CodePipeline, developers need to work with the following items: Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source. You can work with jobs by calling: AcknowledgeJob GetJobDetails PollForJobs PutJobFailureResult PutJobSuccessResult Third party jobs, which are instances of an action created by a partner action and integrated into CodePipeline. Partner actions are created by members of the Amazon Web Services Partner Network. You can work with third party jobs by calling: AcknowledgeThirdPartyJob GetThirdPartyJobDetails PollForThirdPartyJobs PutThirdPartyJobFailureResult PutThirdPartyJobSuccessResult
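For illustration, a minimal sketch of calling GetPipelineState with this package's client might look like the following; the pipeline name is a placeholder.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/codepipeline"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := codepipeline.NewFromConfig(cfg)

	// GetPipelineState returns the status of each stage in the named pipeline.
	out, err := client.GetPipelineState(ctx, &codepipeline.GetPipelineStateInput{
		Name: aws.String("my-release-pipeline"), // placeholder pipeline name
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, stage := range out.StageStates {
		fmt.Printf("stage %s\n", aws.ToString(stage.StageName))
	}
}
```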
Package configservice provides the API client, operations, and parameter types for AWS Config. Config provides a way to keep track of the configurations of all the Amazon Web Services resources associated with your Amazon Web Services account. You can use Config to get the current and historical configurations of each Amazon Web Services resource and also to get information about the relationship between the resources. An Amazon Web Services resource can be an Amazon Elastic Compute Cloud (Amazon EC2) instance, an Elastic Block Store (EBS) volume, an elastic network interface (ENI), or a security group. For a complete list of resources currently supported by Config, see Supported Amazon Web Services resources. You can access and manage Config through the Amazon Web Services Management Console, the Amazon Web Services Command Line Interface (Amazon Web Services CLI), the Config API, or the Amazon Web Services SDKs for Config. This reference guide contains documentation for the Config API and the Amazon Web Services CLI commands that you can use to manage Config. The Config API uses the Signature Version 4 protocol for signing requests. For more information about how to sign a request with this protocol, see Signature Version 4 Signing Process. For detailed information about Config features and their associated actions or commands, as well as how to work with the Amazon Web Services Management Console, see What Is Config in the Config Developer Guide.
Package sfn provides the API client, operations, and parameter types for AWS Step Functions. Step Functions coordinates the components of distributed applications and microservices using visual workflows. You can use Step Functions to build applications from individual components, each of which performs a discrete function, or task, allowing you to scale and change applications quickly. Step Functions provides a console that helps visualize the components of your application as a series of steps. Step Functions automatically triggers and tracks each step, and retries steps when there are errors, so your application executes predictably and in the right order every time. Step Functions logs the state of each step, so you can quickly diagnose and debug any issues. Step Functions manages operations and underlying infrastructure to ensure your application is available at any scale. You can run tasks on Amazon Web Services, your own servers, or any system that has access to Amazon Web Services. You can access and use Step Functions using the console, the Amazon Web Services SDKs, or an HTTP API. For more information about Step Functions, see the Step Functions Developer Guide. If you use the Step Functions API actions through Amazon Web Services SDK integrations, make sure the API actions are in camel case and parameter names are in Pascal case. For example, you could use the Step Functions API action startSyncExecution and specify its parameter as StateMachineArn.
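To illustrate that casing, a hedged sketch of calling StartSyncExecution with this package's client might look like the following; the state machine ARN and input are placeholders.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sfn"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sfn.NewFromConfig(cfg)

	// StartSyncExecution runs an Express state machine synchronously and
	// returns its output when the execution completes.
	out, err := client.StartSyncExecution(ctx, &sfn.StartSyncExecutionInput{
		StateMachineArn: aws.String("arn:aws:states:us-east-1:123456789012:stateMachine:HelloWorld"), // placeholder ARN
		Input:           aws.String(`{"key":"value"}`),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("status:", out.Status, "output:", aws.ToString(out.Output))
}
```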
Package appconfig provides the API client, operations, and parameter types for Amazon AppConfig. AppConfig feature flags and dynamic configurations help software builders quickly and securely adjust application behavior in production environments without full code deployments. AppConfig speeds up software release frequency, improves application resiliency, and helps you address emergent issues more quickly. With feature flags, you can gradually release new capabilities to users and measure the impact of those changes before fully deploying the new capabilities to all users. With operational flags and dynamic configurations, you can update block lists, allow lists, throttling limits, logging verbosity, and perform other operational tuning to quickly respond to issues in production environments. AppConfig is a capability of Amazon Web Services Systems Manager. Although application configuration content can vary greatly from application to application, AppConfig supports the following use cases, which cover a broad spectrum of customer needs: Feature flags and toggles - Safely release new capabilities to your customers in a controlled environment. Instantly roll back changes if you experience a problem. Application tuning - Carefully introduce application changes while testing the impact of those changes with users in production environments. Allow list or block list - Control access to premium features or instantly block specific users without deploying new code. Centralized configuration storage - Keep your configuration data organized and consistent across all of your workloads. You can use AppConfig to deploy configuration data stored in the AppConfig hosted configuration store, Secrets Manager, Systems Manager, Parameter Store, or Amazon S3. This section provides a high-level description of how AppConfig works and how you get started. 1. Identify configuration values in code you want to manage in the cloud Before you start creating AppConfig artifacts, we recommend you identify configuration data in your code that you want to dynamically manage using AppConfig. Good examples include feature flags or toggles, allow and block lists, logging verbosity, service limits, and throttling rules, to name a few. If your configuration data already exists in the cloud, you can take advantage of AppConfig validation, deployment, and extension features to further streamline configuration data management. 2. Create an application namespace To create a namespace, you create an AppConfig artifact called an application. An application is simply an organizational construct like a folder. 3. Create environments For each AppConfig application, you define one or more environments. An environment is a logical grouping of targets, such as applications in a Beta or Production environment, Lambda functions, or containers. You can also define environments for application subcomponents, such as the Web, Mobile, and Back-end components. You can configure Amazon CloudWatch alarms for each environment. The system monitors alarms during a configuration deployment. If an alarm is triggered, the system rolls back the configuration. 4. Create a configuration profile A configuration profile includes, among other things, a URI that enables AppConfig to locate your configuration data in its stored location and a profile type. AppConfig supports two configuration profile types: feature flags and freeform configurations.
Feature flag configuration profiles store their data in the AppConfig hosted configuration store and the URI is simply hosted. For freeform configuration profiles, you can store your data in the AppConfig hosted configuration store or any Amazon Web Services service that integrates with AppConfig, as described in Creating a free form configuration profile in the AppConfig User Guide. A configuration profile can also include optional validators to ensure your configuration data is syntactically and semantically correct. AppConfig performs a check using the validators when you start a deployment. If any errors are detected, the deployment rolls back to the previous configuration data. 5. Deploy configuration data When you create a new deployment, you specify the following: An application ID A configuration profile ID A configuration version An environment ID where you want to deploy the configuration data A deployment strategy ID that defines how fast you want the changes to take effect When you call the StartDeployment API action, AppConfig performs the following tasks: Retrieves the configuration data from the underlying data store by using the location URI in the configuration profile. Verifies the configuration data is syntactically and semantically correct by using the validators you specified when you created your configuration profile. Caches a copy of the data so it is ready to be retrieved by your application. This cached copy is called the deployed data. 6. Retrieve the configuration You can configure AppConfig Agent as a local host and have the agent poll AppConfig for configuration updates. The agent calls the StartConfigurationSession and GetLatestConfiguration API actions and caches your configuration data locally. To retrieve the data, your application makes an HTTP call to the localhost server. AppConfig Agent supports several use cases, as described in Simplified retrieval methods in the AppConfig User Guide. If AppConfig Agent isn't supported for your use case, you can configure your application to poll AppConfig for configuration updates by directly calling the StartConfigurationSession and GetLatestConfiguration API actions. This reference is intended to be used with the AppConfig User Guide.
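To make the deployment step concrete, a minimal sketch of calling StartDeployment with this package's client could look like the following; all identifiers are placeholders.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/appconfig"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := appconfig.NewFromConfig(cfg)

	// StartDeployment takes the five identifiers described in step 5 above.
	_, err = client.StartDeployment(ctx, &appconfig.StartDeploymentInput{
		ApplicationId:          aws.String("abc1234"), // placeholder
		EnvironmentId:          aws.String("def5678"), // placeholder
		ConfigurationProfileId: aws.String("ghi9012"), // placeholder
		ConfigurationVersion:   aws.String("1"),
		DeploymentStrategyId:   aws.String("jkl3456"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("deployment started")
}
```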
Package elasticbeanstalk provides the API client, operations, and parameter types for AWS Elastic Beanstalk. AWS Elastic Beanstalk makes it easy for you to create, deploy, and manage scalable, fault-tolerant applications running on the Amazon Web Services cloud. For more information about this product, go to the AWS Elastic Beanstalk details page. The location of the latest AWS Elastic Beanstalk WSDL is https://elasticbeanstalk.s3.amazonaws.com/doc/2010-12-01/AWSElasticBeanstalk.wsdl. To install the Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools that enable you to access the API, go to Tools for Amazon Web Services. For a list of region-specific endpoints that AWS Elastic Beanstalk supports, go to Regions and Endpoints in the Amazon Web Services Glossary.
Package codedeploy provides the API client, operations, and parameter types for AWS CodeDeploy. CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances running in your own facility, serverless Lambda functions, or applications in an Amazon ECS service. You can deploy a nearly unlimited variety of application content, such as an updated Lambda function, updated applications in an Amazon ECS service, code, web and configuration files, executables, packages, scripts, multimedia files, and so on. CodeDeploy can deploy application content stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. You do not need to make changes to your existing code before you can use CodeDeploy. CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications, without many of the risks associated with error-prone manual deployments. Use the information in this guide to help you work with the following CodeDeploy components: Application: A name that uniquely identifies the application you want to deploy. CodeDeploy uses this name, which functions as a container, to ensure the correct combination of revision, deployment configuration, and deployment group are referenced during a deployment. Deployment group: A set of individual instances, CodeDeploy Lambda deployment configuration settings, or an Amazon ECS service and network details. A Lambda deployment group specifies how to route traffic to a new version of a Lambda function. An Amazon ECS deployment group specifies the service created in Amazon ECS to deploy, a load balancer, and a listener to reroute production traffic to an updated containerized application. An Amazon EC2/On-premises deployment group contains individually tagged instances, Amazon EC2 instances in Amazon EC2 Auto Scaling groups, or both. All deployment groups can specify optional trigger, alarm, and rollback settings. Deployment configuration: A set of deployment rules and deployment success and failure conditions used by CodeDeploy during a deployment. Deployment: The process and the components used when updating a Lambda function, a containerized application in an Amazon ECS service, or when installing content on one or more instances. Application revisions: For a Lambda deployment, this is an AppSpec file that specifies the Lambda function to be updated and one or more functions to validate deployment lifecycle events. For an Amazon ECS deployment, this is an AppSpec file that specifies the Amazon ECS task definition, container, and port where production traffic is rerouted. For an EC2/On-premises deployment, this is an archive file that contains source content (source code, webpages, executable files, and deployment scripts) along with an AppSpec file. Revisions are stored in Amazon S3 buckets or GitHub repositories. For Amazon S3, a revision is uniquely identified by its Amazon S3 object key and its ETag, version, or both. For GitHub, a revision is uniquely identified by its commit ID. This guide also contains information to help you get details about the instances in your deployments, to make on-premises instances available for CodeDeploy deployments, to get details about a Lambda function deployment, and to get details about Amazon ECS service deployments. CodeDeploy User Guide CodeDeploy API Reference Guide CLI Reference for CodeDeploy CodeDeploy Developer Forum
Package eventbridge provides the API client, operations, and parameter types for Amazon EventBridge. Amazon EventBridge helps you to respond to state changes in your Amazon Web Services resources. When your resources change state, they automatically send events to an event stream. You can create rules that match selected events in the stream and route them to targets to take action. You can also use rules to take action on a predetermined schedule. For example, you can configure rules to: Automatically invoke a Lambda function to update DNS entries when an event notifies you that an Amazon EC2 instance enters the running state. Direct specific API records from CloudTrail to an Amazon Kinesis data stream for detailed analysis of potential security or availability risks. Periodically invoke a built-in target to create a snapshot of an Amazon EBS volume. For more information about the features of Amazon EventBridge, see the Amazon EventBridge User Guide.
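As a hedged sketch of the scheduled-rule case, the following could create a rule that fires every five minutes and attach a Lambda target using this package's client; the rule name, target ID, and ARN are placeholders.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/eventbridge"
	"github.com/aws/aws-sdk-go-v2/service/eventbridge/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := eventbridge.NewFromConfig(cfg)

	// Create (or update) a rule that fires on a fixed schedule.
	rule, err := client.PutRule(ctx, &eventbridge.PutRuleInput{
		Name:               aws.String("every-five-minutes"), // placeholder rule name
		ScheduleExpression: aws.String("rate(5 minutes)"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("rule ARN:", aws.ToString(rule.RuleArn))

	// Attach a target; EventBridge invokes it each time the rule matches.
	_, err = client.PutTargets(ctx, &eventbridge.PutTargetsInput{
		Rule: aws.String("every-five-minutes"),
		Targets: []types.Target{{
			Id:  aws.String("snapshot-function"),                                       // placeholder target ID
			Arn: aws.String("arn:aws:lambda:us-east-1:123456789012:function:snapshot"), // placeholder ARN
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```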
Package waf provides the API client, operations, and parameter types for AWS WAF. This is AWS WAF Classic documentation. For more information, see AWS WAF Classic in the developer guide. For the latest version of AWS WAF, use the AWS WAFV2 API and see the AWS WAF Developer Guide. With the latest version, AWS WAF has a single set of endpoints for regional and global use. This is the AWS WAF Classic API Reference for using AWS WAF Classic with Amazon CloudFront. The AWS WAF Classic actions and data types listed in the reference are available for protecting Amazon CloudFront distributions. You can use these actions and data types via the endpoint waf.amazonaws.com. This guide is for developers who need detailed information about the AWS WAF Classic API actions, data types, and errors. For detailed information about AWS WAF Classic features and an overview of how to use the AWS WAF Classic API, see AWS WAF Classic in the developer guide.
Package acmpca provides the API client, operations, and parameter types for AWS Certificate Manager Private Certificate Authority. This is the Amazon Web Services Private Certificate Authority API Reference. It provides descriptions, syntax, and usage examples for each of the actions and data types involved in creating and managing a private certificate authority (CA) for your organization. The documentation for each action shows the API request parameters and the JSON response. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that is tailored to the programming language or platform that you prefer. For more information, see Amazon Web Services SDKs. Each Amazon Web Services Private CA API operation has a quota that determines the number of times the operation can be called per second. Amazon Web Services Private CA throttles API requests at different rates depending on the operation. Throttling means that Amazon Web Services Private CA rejects an otherwise valid request because the request exceeds the operation's quota for the number of requests per second. When a request is throttled, Amazon Web Services Private CA returns a ThrottlingException error. Amazon Web Services Private CA does not guarantee a minimum request rate for APIs. To see an up-to-date list of your Amazon Web Services Private CA quotas, or to request a quota increase, log in to your Amazon Web Services account and visit the Service Quotas console.
Package databasemigrationservice provides the API client, operations, and parameter types for AWS Database Migration Service. Database Migration Service (DMS) can migrate your data to and from the most widely used commercial and open-source databases such as Oracle, PostgreSQL, Microsoft SQL Server, Amazon Redshift, MariaDB, Amazon Aurora, MySQL, and SAP Adaptive Server Enterprise (ASE). The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to MySQL or SQL Server to PostgreSQL. For more information about DMS, see What Is Database Migration Service? in the Database Migration Service User Guide.
Package lightsail provides the API client, operations, and parameter types for Amazon Lightsail. Amazon Lightsail is the easiest way to get started with Amazon Web Services (Amazon Web Services) for developers who need to build websites or web applications. It includes everything you need to launch your project quickly - instances (virtual private servers), container services, storage buckets, managed databases, SSD-based block storage, static IP addresses, load balancers, content delivery network (CDN) distributions, DNS management of registered domains, and resource snapshots (backups) - for a low, predictable monthly price. You can manage your Lightsail resources using the Lightsail console, Lightsail API, Command Line Interface (CLI), or SDKs. For more information about Lightsail concepts and tasks, see the Amazon Lightsail Developer Guide. This API Reference provides detailed information about the actions, data types, parameters, and errors of the Lightsail service. For more information about the supported Amazon Web Services Regions, endpoints, and service quotas of the Lightsail service, see Amazon Lightsail Endpoints and Quotas in the Amazon Web Services General Reference.
Package sesv2 provides the API client, operations, and parameter types for Amazon Simple Email Service. Amazon SES is an Amazon Web Services service that you can use to send email messages to your customers. If you're new to the Amazon SES API v2, you might find it helpful to review the Amazon Simple Email Service Developer Guide. The Amazon SES Developer Guide provides information and code samples that demonstrate how to use Amazon SES API v2 features programmatically.
Package costexplorer provides the API client, operations, and parameter types for AWS Cost Explorer Service. You can use the Cost Explorer API to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data. This might include the number of daily write operations for Amazon DynamoDB database tables in your production environment. The Cost Explorer API provides a single endpoint. For information about the costs that are associated with the Cost Explorer API, see Amazon Web Services Cost Management Pricing.
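A minimal sketch of a monthly cost query with this package's client might look like the following; the date range is a placeholder, and the granularity and metric values are passed as plain strings.

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/costexplorer"
	"github.com/aws/aws-sdk-go-v2/service/costexplorer/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := costexplorer.NewFromConfig(cfg)

	// Aggregate unblended cost by month for the given (placeholder) period.
	out, err := client.GetCostAndUsage(ctx, &costexplorer.GetCostAndUsageInput{
		TimePeriod: &types.DateInterval{
			Start: aws.String("2024-01-01"),
			End:   aws.String("2024-03-01"),
		},
		Granularity: types.Granularity("MONTHLY"),
		Metrics:     []string{"UnblendedCost"},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, result := range out.ResultsByTime {
		cost := result.Total["UnblendedCost"]
		fmt.Printf("%s to %s: %s %s\n",
			aws.ToString(result.TimePeriod.Start),
			aws.ToString(result.TimePeriod.End),
			aws.ToString(cost.Amount),
			aws.ToString(cost.Unit))
	}
}
```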
Package servicecatalog provides the API client, operations, and parameter types for AWS Service Catalog. Service Catalog enables organizations to create and manage catalogs of IT services that are approved for Amazon Web Services. To get the most out of this documentation, you should be familiar with the terminology discussed in Service Catalog Concepts.
Package wafv2 provides the API client, operations, and parameter types for AWS WAFV2. This is the latest version of the WAF API, released in November, 2019. The names of the entities that you use to access this API, like endpoints and namespaces, all have the versioning information added, like "V2" or "v2", to distinguish from the prior version. We recommend migrating your resources to this version, because it has a number of significant improvements. If you used WAF prior to this release, you can't use this WAFV2 API to access any WAF resources that you created before. WAF Classic support will end on September 30, 2025. For information about WAF, including how to migrate your WAF Classic resources to this version, see the WAF Developer Guide. WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to an Amazon CloudFront distribution, Amazon API Gateway REST API, Application Load Balancer, AppSync GraphQL API, Amazon Cognito user pool, App Runner service, or Amazon Web Services Verified Access instance. WAF also lets you control access to your content, to protect the Amazon Web Services resource that WAF is monitoring. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, the protected resource responds to requests with either the requested content, an HTTP 403 status code (Forbidden), or with a custom response. This API guide is for developers who need detailed information about WAF API actions, data types, and errors. For detailed information about WAF features and guidance for configuring and using WAF, see the WAF Developer Guide. You can make calls using the endpoints listed in WAF endpoints and quotas. For regional applications, you can use any of the endpoints in the list. A regional application can be an Application Load Balancer (ALB), an Amazon API Gateway REST API, an AppSync GraphQL API, an Amazon Cognito user pool, an App Runner service, or an Amazon Web Services Verified Access instance. For Amazon CloudFront applications, you must use the API endpoint listed for US East (N. Virginia): us-east-1. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that's tailored to the programming language or platform that you're using. For more information, see Amazon Web Services SDKs.
Package guardduty provides the API client, operations, and parameter types for Amazon GuardDuty. Amazon GuardDuty is a continuous security monitoring service that analyzes and processes the following foundational data sources - VPC flow logs, Amazon Web Services CloudTrail management event logs, CloudTrail S3 data event logs, EKS audit logs, DNS logs, Amazon EBS volume data, runtime activity belonging to container workloads, such as Amazon EKS, Amazon ECS (including Amazon Web Services Fargate), and Amazon EC2 instances. It uses threat intelligence feeds, such as lists of malicious IPs and domains, and machine learning to identify unexpected, potentially unauthorized, and malicious activity within your Amazon Web Services environment. This can include issues like escalations of privileges, uses of exposed credentials, or communication with malicious IPs, domains, or presence of malware on your Amazon EC2 instances and container workloads. For example, GuardDuty can detect compromised EC2 instances and container workloads serving malware, or mining bitcoin. GuardDuty also monitors Amazon Web Services account access behavior for signs of compromise, such as unauthorized infrastructure deployments like EC2 instances deployed in a Region that has never been used, or unusual API calls like a password policy change to reduce password strength. GuardDuty informs you about the status of your Amazon Web Services environment by producing security findings that you can view in the GuardDuty console or through Amazon EventBridge. For more information, see the Amazon GuardDuty User Guide.
Package sagemaker provides the API client, operations, and parameter types for Amazon SageMaker Service. Provides APIs for creating and managing SageMaker resources. Other Resources: SageMaker Developer Guide Amazon Augmented AI Runtime API Reference
Package applicationautoscaling provides the API client, operations, and parameter types for Application Auto Scaling. With Application Auto Scaling, you can configure automatic scaling for the following resources: Amazon AppStream 2.0 fleets Amazon Aurora Replicas Amazon Comprehend document classification and entity recognizer endpoints Amazon DynamoDB tables and global secondary indexes throughput capacity Amazon ECS services Amazon ElastiCache for Redis clusters (replication groups) Amazon EMR clusters Amazon Keyspaces (for Apache Cassandra) tables Lambda function provisioned concurrency Amazon Managed Streaming for Apache Kafka broker storage Amazon Neptune clusters Amazon SageMaker endpoint variants Amazon SageMaker inference components Amazon SageMaker serverless endpoint provisioned concurrency Spot Fleets (Amazon EC2) Pool of WorkSpaces Custom resources provided by your own applications or services To learn more about Application Auto Scaling, see the Application Auto Scaling User Guide. The Application Auto Scaling service API includes three key sets of actions: Register and manage scalable targets - Register Amazon Web Services or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets. Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history. Suspend and resume scaling - Temporarily suspend and later resume automatic scaling by calling the RegisterScalableTarget API action for any Application Auto Scaling scalable target. You can suspend and resume (individually or in combination) scale-out activities that are triggered by a scaling policy, scale-in activities that are triggered by a scaling policy, and scheduled scaling.
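For example, a hedged sketch of registering an ECS service as a scalable target and temporarily suspending its dynamic scaling might look like the following; the resource ID is a placeholder and the namespace and dimension are passed as their string forms rather than named enum constants.

```
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/applicationautoscaling"
	"github.com/aws/aws-sdk-go-v2/service/applicationautoscaling/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := applicationautoscaling.NewFromConfig(cfg)

	// Register the ECS service as a scalable target, set capacity limits,
	// and temporarily suspend policy-triggered scale-in and scale-out.
	_, err = client.RegisterScalableTarget(ctx, &applicationautoscaling.RegisterScalableTargetInput{
		ServiceNamespace:  types.ServiceNamespace("ecs"),
		ResourceId:        aws.String("service/my-cluster/my-service"), // placeholder
		ScalableDimension: types.ScalableDimension("ecs:service:DesiredCount"),
		MinCapacity:       aws.Int32(1),
		MaxCapacity:       aws.Int32(10),
		SuspendedState: &types.SuspendedState{
			DynamicScalingInSuspended:  aws.Bool(true),
			DynamicScalingOutSuspended: aws.Bool(true),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```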
Package route53resolver provides the API client, operations, and parameter types for Amazon Route 53 Resolver. When you create a VPC using Amazon VPC, you automatically get DNS resolution within the VPC from Route 53 Resolver. By default, Resolver answers DNS queries for VPC domain names such as domain names for EC2 instances or Elastic Load Balancing load balancers. Resolver performs recursive lookups against public name servers for all other domain names. You can also configure DNS resolution between your VPC and your network over a Direct Connect or VPN connection: DNS resolvers on your network can forward DNS queries to Resolver in a specified VPC. This allows your DNS resolvers to easily resolve domain names for Amazon Web Services resources such as EC2 instances or records in a Route 53 private hosted zone. For more information, see How DNS Resolvers on Your Network Forward DNS Queries to Route 53 Resolver in the Amazon Route 53 Developer Guide. You can configure Resolver to forward queries that it receives from EC2 instances in your VPCs to DNS resolvers on your network. To forward selected queries, you create Resolver rules that specify the domain names for the DNS queries that you want to forward (such as example.com), and the IP addresses of the DNS resolvers on your network that you want to forward the queries to. If a query matches multiple rules (example.com, acme.example.com), Resolver chooses the rule with the most specific match (acme.example.com) and forwards the query to the IP addresses that you specified in that rule. For more information, see How Route 53 Resolver Forwards DNS Queries from Your VPCs to Your Network in the Amazon Route 53 Developer Guide. Like Amazon VPC, Resolver is Regional. In each Region where you have VPCs, you can choose whether to forward queries from your VPCs to your network (outbound queries), from your network to your VPCs (inbound queries), or both.
Package ec2imds provides the API client for interacting with the Amazon EC2 Instance Metadata Service. See the EC2 IMDS user guide for more information on using the API. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
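A minimal sketch of reading one metadata path with this client might look like the following.

```
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/ec2/imds"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := imds.NewFromConfig(cfg)

	// Fetch a single metadata path; the instance ID is a common example.
	out, err := client.GetMetadata(ctx, &imds.GetMetadataInput{Path: "instance-id"})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Content.Close()

	id, err := io.ReadAll(out.Content)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("instance ID:", string(id))
}
```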
Package rolesanywhere provides the API client, operations, and parameter types for IAM Roles Anywhere. Identity and Access Management Roles Anywhere provides a secure way for your workloads such as servers, containers, and applications that run outside of Amazon Web Services to obtain temporary Amazon Web Services credentials. Your workloads can use the same IAM policies and roles you have for native Amazon Web Services applications to access Amazon Web Services resources. Using IAM Roles Anywhere eliminates the need to manage long-term credentials for workloads running outside of Amazon Web Services. To use IAM Roles Anywhere, your workloads must use X.509 certificates issued by their certificate authority (CA). You register the CA with IAM Roles Anywhere as a trust anchor to establish trust between your public key infrastructure (PKI) and IAM Roles Anywhere. If you don't manage your own PKI system, you can use Private Certificate Authority to create a CA and then use that to establish trust with IAM Roles Anywhere. This guide describes the IAM Roles Anywhere operations that you can call programmatically. For more information about IAM Roles Anywhere, see the IAM Roles Anywhere User Guide.
Package transcribe provides the API client, operations, and parameter types for Amazon Transcribe Service. Amazon Transcribe offers three main types of batch transcription: Standard, Medical, and Call Analytics. Standard transcriptions are the most common option. Refer to for details. Medical transcriptions are tailored to medical professionals and incorporate medical terms. A common use case for this service is transcribing doctor-patient dialogue into after-visit notes. Refer to for details. Call Analytics transcriptions are designed for use with call center audio on two different channels; if you're looking for insight into customer service calls, use this option. Refer to for details.
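As a hedged sketch, starting a standard batch transcription with this package's client might look like the following; the job name, media URI, and language code are placeholders.

```
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/transcribe"
	"github.com/aws/aws-sdk-go-v2/service/transcribe/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := transcribe.NewFromConfig(cfg)

	// Start a standard transcription job against an audio file stored in S3.
	_, err = client.StartTranscriptionJob(ctx, &transcribe.StartTranscriptionJobInput{
		TranscriptionJobName: aws.String("my-standard-job"), // placeholder job name
		LanguageCode:         types.LanguageCode("en-US"),
		Media: &types.Media{
			MediaFileUri: aws.String("s3://my-bucket/audio.wav"), // placeholder URI
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("transcription job started")
}
```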
Package fis provides the API client, operations, and parameter types for AWS Fault Injection Simulator. Amazon Web Services Fault Injection Service is a managed service that enables you to perform fault injection experiments on your Amazon Web Services workloads. For more information, see the Fault Injection Service User Guide.
Package datapipeline provides the API client, operations, and parameter types for AWS Data Pipeline. AWS Data Pipeline configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so that your application can focus on processing the data. AWS Data Pipeline provides a JAR implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management. AWS Data Pipeline implements two main sets of functionality. Use the first set to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. Use the second set in your task runner application to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service.
Package backup provides the API client, operations, and parameter types for AWS Backup. Backup is a unified backup service designed to protect Amazon Web Services services and their associated data. Backup simplifies the creation, migration, restoration, and deletion of backups, while also providing reporting and auditing.
Package inspector2 provides the API client, operations, and parameter types for Inspector2. Amazon Inspector is a vulnerability discovery service that automates continuous scanning for security vulnerabilities within your Amazon EC2, Amazon ECR, and Amazon Web Services Lambda environments.
Package swf provides the API client, operations, and parameter types for Amazon Simple Workflow Service. The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that use Amazon's cloud to coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your workflow. Coordinating tasks in a workflow involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application. Amazon SWF gives you full control over implementing tasks and coordinating them without worrying about underlying complexities such as tracking their progress and maintaining their state. This documentation serves as reference only. For a broader overview of the Amazon SWF programming model, see the Amazon SWF Developer Guide.
Package iot provides the API client, operations, and parameter types for AWS IoT. IoT provides secure, bi-directional communication between Internet-connected devices (such as sensors, actuators, embedded devices, or smart appliances) and the Amazon Web Services cloud. You can discover your custom IoT-Data endpoint to communicate with, configure rules for data processing and integration with other services, organize resources associated with each device (Registry), configure logging, and create and manage policies and credentials to authenticate devices. The service endpoints that expose this API are listed in Amazon Web Services IoT Core Endpoints and Quotas. You must use the endpoint for the region that has the resources you want to access. The service name used by Amazon Web Services Signature Version 4 to sign the request is: execute-api. For more information about how IoT works, see the Developer Guide. For information about how to use the credentials provider for IoT, see Authorizing Direct Calls to Amazon Web Services Services.
Package kendra provides the API client, operations, and parameter types for AWSKendraFrontendService. Amazon Kendra is a service for indexing large document sets.
Package fsx provides the API client, operations, and parameter types for Amazon FSx. Amazon FSx is a fully managed service that makes it easy for storage and application administrators to launch and use shared file storage.
Package cognitoidentity provides the API client, operations, and parameter types for Amazon Cognito Identity. Amazon Cognito Federated Identities is a web service that delivers scoped temporary credentials to mobile devices and other untrusted environments. It uniquely identifies a device and supplies the user with a consistent identity over the lifetime of an application. Using Amazon Cognito Federated Identities, you can enable authentication with one or more third-party identity providers (Facebook, Google, or Login with Amazon) or an Amazon Cognito user pool, and you can also choose to support unauthenticated access from your app. Cognito delivers a unique identifier for each user and acts as an OpenID token provider trusted by AWS Security Token Service (STS) to access temporary, limited-privilege AWS credentials. For a description of the authentication flow from the Amazon Cognito Developer Guide see Authentication Flow. For more information see Amazon Cognito Federated Identities.
Package dax provides the API client, operations, and parameter types for Amazon DynamoDB Accelerator (DAX). DAX is a managed caching service engineered for Amazon DynamoDB. DAX dramatically speeds up database reads by caching frequently-accessed data from DynamoDB, so applications can access that data with sub-millisecond latency. You can create a DAX cluster easily, using the AWS Management Console. With a few simple modifications to your code, your application can begin taking advantage of the DAX cluster and realize significant improvements in read performance.
Package comprehend provides the API client, operations, and parameter types for Amazon Comprehend. Amazon Comprehend is an Amazon Web Services service for gaining insight into the content of documents. Use these actions to determine the topics contained in your documents, the predominant sentiment expressed in them, the predominant language used, and more.
Package cloudwatchevents provides the API client, operations, and parameter types for Amazon CloudWatch Events. Amazon EventBridge helps you to respond to state changes in your Amazon Web Services resources. When your resources change state, they automatically send events to an event stream. You can create rules that match selected events in the stream and route them to targets to take action. You can also use rules to take action on a predetermined schedule. For example, you can configure rules to: Automatically invoke a Lambda function to update DNS entries when an event notifies you that an Amazon EC2 instance enters the running state. Direct specific API records from CloudTrail to an Amazon Kinesis data stream for detailed analysis of potential security or availability risks. Periodically invoke a built-in target to create a snapshot of an Amazon EBS volume. For more information about the features of Amazon EventBridge, see the Amazon EventBridge User Guide.
Package qldb provides the API client, operations, and parameter types for Amazon QLDB. The resource management API for Amazon QLDB.
Package xray provides the API client, operations, and parameter types for AWS X-Ray. Amazon Web Services X-Ray provides APIs for managing debug traces and retrieving service maps and other data created by processing those traces.
Package batch provides the API client, operations, and parameter types for AWS Batch. Using Batch, you can run batch computing workloads on the Amazon Web Services Cloud. Batch computing is a common means for developers, scientists, and engineers to access large amounts of compute resources. Batch uses the advantages of batch computing to remove the undifferentiated heavy lifting of configuring and managing the required infrastructure. At the same time, it also adopts a familiar batch computing software approach. You can use Batch to efficiently provision resources, and work toward eliminating capacity constraints, reducing your overall compute costs, and delivering results more quickly. As a fully managed service, Batch can run batch computing workloads of any scale. Batch automatically provisions compute resources and optimizes workload distribution based on the quantity and scale of your specific workloads. With Batch, there's no need to install or manage batch computing software. This means that you can focus on analyzing results and solving your specific problems instead.
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). If you want to cache your MFA logins, use AuthTypeUsernamePasswordMFA authenticator. To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. 
To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive so that the connection session never expires. Use this option with care, as it keeps the session open for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check run in fail-closed mode. validateDefaultParameters: true by default. Set to false to disable the existence and privileges checks for the Database, Schema, Warehouse, and Role when setting up the connection. tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic. disableQueryContextCache: Disables parsing of the query context returned from the server and resending it to the server. Default value is false. clientConfigFile: Specifies the location of the client configuration JSON file, in which you can configure the Easy Logging feature. disableSamlURLCheck: Disables the SAML URL check. Default value is false. All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config. # Connection Config You can also connect to your warehouse using a connection Config. The database/sql library recommends this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string; each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. For example: If you are using this method, you don't need to pass a driver name to specify the driver type to which you want to connect. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTERATION` in your environment. This is useful if you wish to register multiple versions of the driver. Note: GOSNOWFLAKE_SKIP_REGISTERATION should not be used if sql.Open() is used as the method to connect to the server, as sql.Open will require registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}. You can load the connection configuration in the .toml file format. With the two environment variables SNOWFLAKE_HOME (the connections.toml file directory) and SNOWFLAKE_DEFAULT_CONNECTION_NAME (the DSN name), the driver will search for the config file and load the connection.
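Tying the connection options above together, a minimal sketch might look like the following: it builds a DSN from a Config with the driver's DSN helper and opens a handle through database/sql. The account identifier, credentials, and object names are placeholders.

```
package main

import (
	"database/sql"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func main() {
	// Build the DSN from a Config rather than hand-escaping a connection string.
	cfg := &sf.Config{
		Account:   "my_organization-my_account", // placeholder account identifier
		User:      "jsmith",
		Password:  "mypassword",
		Database:  "mydb",
		Schema:    "testschema",
		Warehouse: "mywh",
	}
	dsn, err := sf.DSN(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Importing the driver registers it under the name "snowflake".
	db, err := sql.Open("snowflake", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```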
You can find an example of this TOML-based connection at ./cmd/tomlfileconnection or in the Snowflake documentation: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. To enable debug logging for the driver, call SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, use the SFLogger.SetOutput method. A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example: A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: If you need the query ID for your query, you have to use a raw connection (this applies to both queries and execs). The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command with Ctrl+C, add an os.Interrupt trap in the context passed to methods that take a context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format.
For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: The SQL data type. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.

Go Data Types for Scan()
===================================================================================================================
                     |                     ARROW                       |                      JSON
===================================================================================================================
SQL Data Type        | Default Go Data Type   | Supported Go Data      | Default Go Data Type   | Supported Go Data
                     | for Scan() interface{} | Types for Scan()       | for Scan() interface{} | Types for Scan()
===================================================================================================================
BOOLEAN              | bool                                            | string                 | bool
-------------------------------------------------------------------------------------------------------------------
VARCHAR              | string                                          | string
-------------------------------------------------------------------------------------------------------------------
DOUBLE               | float32, float64 [1], [2]                       | string                 | float32, float64
-------------------------------------------------------------------------------------------------------------------
INTEGER that         | int, int8, int16, int32, int64                  | string                 | int, int8, int16,
fits in int64        | [1], [2]                                        |                        | int32, int64
-------------------------------------------------------------------------------------------------------------------
INTEGER that doesn't | int, int8, int16, int32, int64, *big.Int        | string                 | error
fit in int64         | [1], [2], [3], [4]                              |
-------------------------------------------------------------------------------------------------------------------
NUMBER(P, S)         | float32, float64, *big.Float                    | string                 | float32, float64
where S > 0          | [1], [2], [3], [5]                              |
-------------------------------------------------------------------------------------------------------------------
DATE                 | time.Time                                       | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIME                 | time.Time                                       | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_LTZ        | time.Time                                       | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_NTZ        | time.Time                                       | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_TZ         | time.Time                                       | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
BINARY               | []byte                                          | string                 | []byte
-------------------------------------------------------------------------------------------------------------------
ARRAY [6]            | string / array                                  | string / array
-------------------------------------------------------------------------------------------------------------------
OBJECT [6]           | string / struct                                 | string / struct
-------------------------------------------------------------------------------------------------------------------
VARIANT              | string                                          | string
-------------------------------------------------------------------------------------------------------------------
MAP                  | map                                             | map

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in an error.
[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().
[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Int64()/.String()/.Uint64() methods. For an example, see below.
[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Float32()/.String()/.Float64() methods. For an example, see below.
[6] Arrays and objects can be either semistructured or structured; see more info in the section below.

Note: SQL NULL values are converted to Golang nil values, and vice-versa. Snowflake supports two flavors of data: semistructured and structured. Semistructured types are variants, objects and arrays without a schema. When data is fetched, it is represented as strings and the client is responsible for its interpretation. Example table definition: The data does not have any corresponding schema, so values in the table may vary. Semistructured variants, objects and arrays are always represented as strings for scanning: When inserting, a marker indicating the correct type must be used, for example: Structured types differ from semistructured types in that they have a specific schema. In all rows of the table, values must conform to this schema. Example table definition: To retrieve structured objects, follow these steps: 1. Create a struct implementing the sql.Scanner interface, for example: a) b) Automatic scan goes through all fields in the struct and reads the object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built from the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added: the first value is always the name of the field in the SQL object; additionally, the `ignore` parameter can be passed to omit this field. 2. Use the WithStructuredTypesEnabled context while querying data. 3. Use it in a regular scan: See StructuredObject for all available operations including null support, embedding nested structs, etc.; a minimal sketch of these steps is shown below. Retrieving an array of simple types works exactly the same as for normal values, using the Scan function.
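Here is a minimal sketch of the structured-object steps above, assuming a hypothetical OBJECT with fields s (VARCHAR) and i (INTEGER); it also assumes the StructuredObject interface exposes typed getters such as GetString and GetInt32, as described in the driver documentation:

```
package example

import (
	"context"
	"database/sql"
	"fmt"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

// simpleObject maps an SQL OBJECT(s VARCHAR, i INTEGER) to a Go struct.
type simpleObject struct {
	s string
	i int32
}

// Scan implements sql.Scanner; the driver passes a StructuredObject here.
func (o *simpleObject) Scan(val interface{}) error {
	st, ok := val.(sf.StructuredObject)
	if !ok {
		return fmt.Errorf("expected StructuredObject, got %T", val)
	}
	var err error
	if o.s, err = st.GetString("s"); err != nil {
		return err
	}
	if o.i, err = st.GetInt32("i"); err != nil {
		return err
	}
	return nil
}

func readStructuredObject(db *sql.DB) {
	// Structured types must be explicitly enabled on the query context.
	ctx := sf.WithStructuredTypesEnabled(context.Background())
	row := db.QueryRowContext(ctx,
		"SELECT {'s': 'abc', 'i': 1}::OBJECT(s VARCHAR, i INTEGER)")
	var obj simpleObject
	if err := row.Scan(&obj); err != nil {
		log.Fatal(err)
	}
	log.Printf("s=%s i=%d", obj.s, obj.i)
}
```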
You can use the WithMapValuesNullable and WithArrayValuesNullable contexts to handle null values in, respectively, maps and arrays of simple types in the database. In that case, sql null types will be used: If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners: Retrieving structured maps is very similar to retrieving arrays: To bind structured objects use: 1. Create a type that implements the StructuredObjectWriter interface, for example: a) b) 2. Use an instance as a regular bind. 3. If you need to bind a nil value, use special syntax: Binding structured arrays is like binding any other parameter. The only difference is that if you want to insert an empty array (not nil, but empty), you have to use: The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: You can retrieve data in a columnar format similar to the format a server returns, without transposing it to rows. When working with the Arrow columnar format in the Go driver, ArrowBatch structs are used. These are structs mostly corresponding to data chunks received from the backend. They allow access to specific arrow.Record structs. An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access them, you must use the `WithArrowBatches` context, similar to the following: This returns []*ArrowBatch. ArrowBatch functions: GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size. WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are: WithArrowBatchesTimestampOption, WithHigherPrecision and WithArrowBatchesUtf8Validation, described in more detail later. Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not. A minimal sketch of fetching Arrow batches is shown below.
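The following sketch outlines the ArrowBatch workflow described above. It assumes the driver-level rows can be asserted to SnowflakeRows exposing GetArrowBatches(), as described in the driver documentation; the query string and logging are placeholders:

```
package example

import (
	"context"
	"database/sql"
	"database/sql/driver"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func fetchArrowBatches(db *sql.DB, query string) {
	// Ask the driver to keep results as Arrow batches instead of rows.
	ctx := sf.WithArrowBatches(context.Background())

	conn, err := db.Conn(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	total := 0
	err = conn.Raw(func(x interface{}) error {
		// Query at the driver level to reach the Snowflake-specific rows type.
		rows, err := x.(driver.QueryerContext).QueryContext(ctx, query, nil)
		if err != nil {
			return err
		}
		defer rows.Close()

		batches, err := rows.(sf.SnowflakeRows).GetArrowBatches()
		if err != nil {
			return err
		}
		for _, batch := range batches {
			recs, err := batch.Fetch() // downloads this chunk on demand
			if err != nil {
				return err
			}
			for _, rec := range *recs {
				total += int(rec.NumRows())
			}
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("fetched %d rows via arrow batches", total)
}
```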
Limitations: How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 digit (up to nanosecond) precision for fractional seconds, while Arrow supports only 3 (millisecond), 6 (microsecond) and 9 (nanosecond) digit precision. Consequently, Snowflake uses a custom timestamp format in Arrow, which differs depending on the timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options: How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should contain only UTF-8 characters. To address this issue and prevent potential downstream disruptions, the WithArrowBatchesUtf8Validation context is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services, such as the Rust Arrow library, that impose strict validation checks. How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones". Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enables the proper handling of null parameters inside function calls, i.e.: Timestamp nullability has to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types that are mapped to a single Go time.Time type: Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. If the array contains SQL NULL values, use the slice []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver. A sketch of array binding appears below.
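A rough sketch of the array-binding feature described above, assuming the driver's Array() helper and the TimestampNTZType flag for time.Time slices; the table names and column layouts are hypothetical:

```
package example

import (
	"database/sql"
	"log"
	"time"

	sf "github.com/snowflakedb/gosnowflake"
)

func bulkInsert(db *sql.DB) {
	ids := []int{1, 2, 3}
	names := []string{"alpha", "beta", "gamma"}

	// Each sf.Array() bind inserts one row per slice element.
	_, err := db.Exec(
		"INSERT INTO demo_tbl (id, name) VALUES (?, ?)", // demo_tbl is hypothetical
		sf.Array(&ids),
		sf.Array(&names),
	)
	if err != nil {
		log.Fatal(err)
	}

	// For []interface{} slices of time.Time values, pass a type flag to Array().
	created := []interface{}{time.Now(), time.Now().Add(time.Hour), nil}
	_, err = db.Exec(
		"INSERT INTO demo_times (created_at) VALUES (?)", // demo_times is hypothetical
		sf.Array(&created, sf.TimestampNTZType),
	)
	if err != nil {
		log.Fatal(err)
	}
}
```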
Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. This shifts workload from the Snowflake database to the clients for scalability. The download is performed asynchronously by goroutines named "Chunk Downloader" so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note that this alone does not help reduce the memory footprint; consider the Custom JSON Decoder. Custom JSON Decoder for Parsing Result Set (Experimental): The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option can reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. Test cases running on a Travis Ubuntu box show about one-fifth the memory footprint while running about four times slower. Be cautious when using this option.
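Returning to the binding parameter flag described above, here is a minimal sketch; the table names are hypothetical, and sf is an alias for the gosnowflake package:

```
package example

import (
	"database/sql"
	"log"
	"time"

	sf "github.com/snowflakedb/gosnowflake"
)

func bindTimestampAndBinary(db *sql.DB) {
	now := time.Now()

	// The flag preceding the time.Time value tells the driver which
	// Snowflake type (here TIMESTAMP_NTZ) to bind the value as.
	_, err := db.Exec(
		"INSERT INTO ts_tbl (ntz) VALUES (?)", // ts_tbl is hypothetical
		sf.DataTypeTimestampNtz, now,
	)
	if err != nil {
		log.Fatal(err)
	}

	// BINARY data likewise requires the binding parameter flag.
	_, err = db.Exec(
		"INSERT INTO bin_tbl (b) VALUES (?)", // bin_tbl is hypothetical
		sf.DataTypeBinary, []byte{0x01, 0x02, 0x03},
	)
	if err != nil {
		log.Fatal(err)
	}
}
```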
The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying: The <your_private_key> should be a base64 URL encoded PKCS8 rsa private key string. One way to encode a byte slice in base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 standard encoded PKI public key string. One way to encode a byte slice in base64 standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can execute the following commands in the shell: Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`. The driver allows authentication using an external browser. When a connection is created, the driver opens a browser window and asks the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified: External browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with the following ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver. By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility of SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below. The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example:
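A minimal sketch of such a context, assuming the driver's WithMultiStatement helper; the statements themselves are placeholders:

```
package example

import (
	"context"
	"database/sql"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func runMultiStatement(db *sql.DB) {
	// Declare that exactly two statements will be executed.
	ctx, err := sf.WithMultiStatement(context.Background(), 2)
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical statements, separated by semicolons.
	multi := "UPDATE t1 SET c1 = 1; UPDATE t2 SET c2 = 2;"
	res, err := db.ExecContext(ctx, multi)
	if err != nil {
		log.Fatal(err)
	}
	n, _ := res.RowsAffected()
	log.Printf("total rows affected across statements: %d", n)
}
```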
When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets: The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATE statements had not functioned as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row with a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an error is returned. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Golang code can check for errors the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries.
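To illustrate the multiple-result-set processing described above, here is a minimal sketch; it assumes the driver's WithMultiStatement helper, and the statements are placeholders:

```
package example

import (
	"context"
	"database/sql"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func processMultipleResultSets(db *sql.DB) {
	// Two statements: one query and one DML statement (both hypothetical).
	ctx, err := sf.WithMultiStatement(context.Background(), 2)
	if err != nil {
		log.Fatal(err)
	}
	rows, err := db.QueryContext(ctx, "SELECT c1 FROM t1; UPDATE t2 SET c2 = 0;")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for {
		// Process the current result set. For the DML statement, the single
		// row holds the number of rows changed.
		for rows.Next() {
			var v interface{}
			if err := rows.Scan(&v); err != nil {
				log.Fatal(err)
			}
			log.Println(v)
		}
		// Advance to the result set of the next statement, if any.
		if !rows.NextResultSet() {
			break
		}
	}
}
```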
The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is one additional step: you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().) Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go. The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: Using PUT: The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file), use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: To download a file into an in-memory stream (rather than a file), use code similar to the code below. Note: GET statements are not supported for multi-statement queries. Specifying a temporary directory for encryption and compression: PUT and GET operations require compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify it yourself, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example: Using custom configuration for PUT/GET: If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple configuration parameters, including progress bars and compression.
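A minimal sketch of a PUT upload as described above; the local file path and stage name are placeholders, and the stream variant assumes the driver's WithFileStream context:

```
package example

import (
	"bytes"
	"context"
	"database/sql"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func putExamples(db *sql.DB) {
	// Upload a local file to the user stage. The path and stage are placeholders.
	if _, err := db.Exec("PUT file:///tmp/data/mydata.csv @~/staged"); err != nil {
		log.Fatal(err)
	}

	// Upload from an in-memory stream instead of a file on disk.
	buf := bytes.NewBufferString("col1,col2\n1,2\n")
	ctx := sf.WithFileStream(context.Background(), buf)
	if _, err := db.ExecContext(ctx, "PUT file:///tmp/data/streamed.csv @~/staged"); err != nil {
		log.Fatal(err)
	}
}
```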
Package glacier provides the API client, operations, and parameter types for Amazon Glacier. Glacier is an extremely low-cost storage service that provides secure, durable, and easy-to-use storage for data backup and archival. With Glacier, customers can store their data cost effectively for months, years, or decades. Glacier also enables customers to offload the administrative burdens of operating and scaling storage to AWS, so they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure and recovery, or time-consuming hardware migrations. Glacier is a great storage choice when low storage cost is paramount and your data is rarely retrieved. If your application requires fast or frequent access to your data, consider using Amazon S3. For more information, see Amazon Simple Storage Service (Amazon S3). You can store any kind of data in any format. There is no maximum limit on the total amount of data you can store in Glacier. If you are a first-time user of Glacier, we recommend that you begin by reading the following sections in the Amazon S3 Glacier Developer Guide: What is Amazon S3 Glacier Getting Started with Amazon S3 Glacier
Package ebs provides the API client, operations, and parameter types for Amazon Elastic Block Store. You can use the Amazon Elastic Block Store (Amazon EBS) direct APIs to create Amazon EBS snapshots, write data directly to your snapshots, read data on your snapshots, and identify the differences or changes between two snapshots. If you’re an independent software vendor (ISV) who offers backup services for Amazon EBS, the EBS direct APIs make it more efficient and cost-effective to track incremental changes on your Amazon EBS volumes through snapshots. This can be done without having to create new volumes from snapshots, and then use Amazon Elastic Compute Cloud (Amazon EC2) instances to compare the differences. You can create incremental snapshots directly from data on-premises into volumes and the cloud to use for quick disaster recovery. With the ability to write and read snapshots, you can write your on-premises data to a snapshot during a disaster. Then after recovery, you can restore it back to Amazon Web Services or on-premises from the snapshot. You no longer need to build and maintain complex mechanisms to copy data to and from Amazon EBS. This API reference provides detailed information about the actions, data types, parameters, and errors of the EBS direct APIs. For more information about the elements that make up the EBS direct APIs, and examples of how to use them effectively, see Accessing the Contents of an Amazon EBS Snapshot in the Amazon Elastic Compute Cloud User Guide. For more information about the supported Amazon Web Services Regions, endpoints, and service quotas for the EBS direct APIs, see Amazon Elastic Block Store Endpoints and Quotas in the Amazon Web Services General Reference.
Package cloudcontrol provides the API client, operations, and parameter types for AWS Cloud Control API. For more information about Amazon Web Services Cloud Control API, see the Amazon Web Services Cloud Control API User Guide.
Package inspector provides the API client, operations, and parameter types for Amazon Inspector. Amazon Inspector enables you to analyze the behavior of your AWS resources and to identify potential security issues. For more information, see Amazon Inspector User Guide.
Package opsworks provides the API client, operations, and parameter types for AWS OpsWorks. Welcome to the OpsWorks Stacks API Reference. This guide provides descriptions, syntax, and usage examples for OpsWorks Stacks actions and data types, including common parameters and error codes. OpsWorks Stacks is an application management service that provides an integrated experience for managing the complete application lifecycle. For information about OpsWorks, see the OpsWorks information page. Use the OpsWorks Stacks API by using the Command Line Interface (CLI) or by using one of the Amazon Web Services SDKs to implement applications in your preferred language. For more information, see: CLI SDK for Java SDK for .NET SDK for PHP SDK for Ruby Amazon Web Services SDK for Node.js SDK for Python (Boto) OpsWorks Stacks supports the following endpoints, all HTTPS. You must connect to one of the following endpoints. Stacks can only be accessed or managed within the endpoint in which they are created. opsworks.us-east-1.amazonaws.com opsworks.us-east-2.amazonaws.com opsworks.us-west-1.amazonaws.com opsworks.us-west-2.amazonaws.com opsworks.ca-central-1.amazonaws.com (API only; not available in the Amazon Web Services Management Console) opsworks.eu-west-1.amazonaws.com opsworks.eu-west-2.amazonaws.com opsworks.eu-west-3.amazonaws.com opsworks.eu-central-1.amazonaws.com opsworks.ap-northeast-1.amazonaws.com opsworks.ap-northeast-2.amazonaws.com opsworks.ap-south-1.amazonaws.com opsworks.ap-southeast-1.amazonaws.com opsworks.ap-southeast-2.amazonaws.com opsworks.sa-east-1.amazonaws.com When you call CreateStack, CloneStack, or UpdateStack we recommend you use the ConfigurationManager parameter to specify the Chef version. The recommended and default value for Linux stacks is currently 12. Windows stacks use Chef 12.2. For more information, see Chef Versions. You can specify Chef 12, 11.10, or 11.4 for your Linux stack. We recommend migrating your existing Linux stacks to Chef 12 as soon as possible.
Package ssoadmin provides the API client, operations, and parameter types for AWS Single Sign-On Admin. IAM Identity Center (successor to Single Sign-On) helps you securely create, or connect, your workforce identities and manage their access centrally across Amazon Web Services accounts and applications. IAM Identity Center is the recommended approach for workforce authentication and authorization in Amazon Web Services, for organizations of any size and type. IAM Identity Center uses the sso and identitystore API namespaces. This reference guide provides information on single sign-on operations which could be used for access management of Amazon Web Services accounts. For information about IAM Identity Center features, see the IAM Identity Center User Guide. Many operations in the IAM Identity Center APIs rely on identifiers for users and groups, known as principals. For more information about how to work with principals and principal IDs in IAM Identity Center, see the Identity Store API Reference. Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, iOS, Android, and more). The SDKs provide a convenient way to create programmatic access to IAM Identity Center and other Amazon Web Services services. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services.
Package resourcegroups provides the API client, operations, and parameter types for AWS Resource Groups. Resource Groups lets you organize Amazon Web Services resources such as Amazon Elastic Compute Cloud instances, Amazon Relational Database Service databases, and Amazon Simple Storage Service buckets into groups using criteria that you define as tags. A resource group is a collection of resources that match the resource types specified in a query, and share one or more tags or portions of tags. You can create a group of resources based on their roles in your cloud infrastructure, lifecycle stages, regions, application layers, or virtually any criteria. Resource Groups enable you to automate management tasks, such as those in Amazon Web Services Systems Manager Automation documents, on tag-related resources in Amazon Web Services Systems Manager. Groups of tagged resources also let you quickly view a custom console in Amazon Web Services Systems Manager that shows Config compliance and other monitoring data about member resources. To create a resource group, build a resource query, and specify tags that identify the criteria that members of the group have in common. Tags are key-value pairs. For more information about Resource Groups, see the Resource Groups User Guide. Resource Groups uses a REST-compliant API that you can use to perform the following types of operations. Create, Read, Update, and Delete (CRUD) operations on resource groups and resource query entities Applying, editing, and removing tags from resource groups Resolving resource group member Amazon Resource Names (ARNs) so they can be returned as search results Getting data about resources that are members of a group Searching Amazon Web Services resources based on a resource query
Package pricing provides the API client, operations, and parameter types for AWS Price List Service. The Amazon Web Services Price List API is a centralized and convenient way to programmatically query Amazon Web Services for services, products, and pricing information. The Amazon Web Services Price List uses standardized product attributes such as Location, Storage Class, and Operating System, and provides prices at the SKU level. You can use the Amazon Web Services Price List to do the following: Build cost control and scenario planning tools Reconcile billing data Forecast future spend for budgeting purposes Provide cost benefit analysis that compares your internal workloads with Amazon Web Services Use GetServices without a service code to retrieve the service codes for all Amazon Web Services services, then GetServices with a service code to retrieve the attribute names for that service. After you have the service code and attribute names, you can use GetAttributeValues to see what values are available for an attribute. With the service code and an attribute name and value, you can use GetProducts to find specific products that you're interested in, such as an AmazonEC2 instance, with a Provisioned IOPS volumeType. For more information, see Using the Amazon Web Services Price List API in the Billing User Guide.
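A rough sketch of this product-lookup workflow using the aws-sdk-go-v2 pricing client; the Region, service code, and filter values are assumptions for illustration, and error handling is minimal:

```
package example

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/pricing"
	"github.com/aws/aws-sdk-go-v2/service/pricing/types"
)

func listEC2Prices() {
	// The Price List API is served from a limited set of Regions; us-east-1 is assumed here.
	cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-east-1"))
	if err != nil {
		log.Fatal(err)
	}
	client := pricing.NewFromConfig(cfg)

	// Look up products for a service code, filtered by an attribute value.
	out, err := client.GetProducts(context.TODO(), &pricing.GetProductsInput{
		ServiceCode: aws.String("AmazonEC2"),
		Filters: []types.Filter{{
			Field: aws.String("volumeType"),
			Type:  types.FilterTypeTermMatch,
			Value: aws.String("Provisioned IOPS"),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("returned %d price list entries", len(out.PriceList))
}
```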
Package cloud9 provides the API client, operations, and parameter types for AWS Cloud9. Cloud9 is a collection of tools that you can use to code, build, run, test, debug, and release software in the cloud. For more information about Cloud9, see the Cloud9 User Guide. Cloud9 supports these operations: CreateEnvironmentEC2 : Creates a Cloud9 development environment, launches an Amazon EC2 instance, and then connects from the instance to the environment. CreateEnvironmentMembership : Adds an environment member to an environment. DeleteEnvironment : Deletes an environment. If an Amazon EC2 instance is connected to the environment, also terminates the instance. DeleteEnvironmentMembership : Deletes an environment member from an environment. DescribeEnvironmentMemberships : Gets information about environment members for an environment. DescribeEnvironments : Gets information about environments. DescribeEnvironmentStatus : Gets status information for an environment. ListEnvironments : Gets a list of environment identifiers. ListTagsForResource : Gets the tags for an environment. TagResource : Adds tags to an environment. UntagResource : Removes tags from an environment. UpdateEnvironment : Changes the settings of an existing environment. UpdateEnvironmentMembership : Changes the settings of an existing environment member for an environment.
Package budgets provides the API client, operations, and parameter types for AWS Budgets. Use the Amazon Web Services Budgets API to plan your service usage, service costs, and instance reservations. This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for the Amazon Web Services Budgets feature. Budgets provide you with a way to see the following information: How close your plan is to your budgeted amount or to the free tier limits Your usage-to-date, including how much you've used of your Reserved Instances (RIs) Your current estimated charges from Amazon Web Services, and how much your predicted usage will accrue in charges by the end of the month How much of your budget has been used Amazon Web Services updates your budget status several times a day. Budgets track your unblended costs, subscriptions, refunds, and RIs. You can create the following types of budgets: Cost budgets - Plan how much you want to spend on a service. Usage budgets - Plan how much you want to use one or more services. RI utilization budgets - Define a utilization threshold, and receive alerts when your RI usage falls below that threshold. This lets you see if your RIs are unused or under-utilized. RI coverage budgets - Define a coverage threshold, and receive alerts when the number of your instance hours that are covered by RIs falls below that threshold. This lets you see how much of your instance usage is covered by a reservation. The Amazon Web Services Budgets API provides the following endpoint: For information about costs that are associated with the Amazon Web Services Budgets API, see Amazon Web Services Cost Management Pricing.