Package stackdriver contains the OpenCensus exporters for Stackdriver Monitoring and Stackdriver Tracing. This exporter can be used to send metrics to Stackdriver Monitoring and traces to Stackdriver Trace. The package uses Application Default Credentials to authenticate by default. See: https://developers.google.com/identity/protocols/application-default-credentials Alternatively, pass the authentication options in both the MonitoringClientOptions and the TraceClientOptions fields of Options. This exporter supports exporting OpenCensus views to Stackdriver Monitoring. Each registered view becomes a metric in Stackdriver Monitoring, with the tags becoming labels. The aggregation function determines the metric kind: LastValue aggregations generate Gauge metrics and all other aggregations generate Cumulative metrics. In order to be able to push your stats to Stackdriver Monitoring, you must create a Google Cloud project and enable the Stackdriver Monitoring API for it. These steps enable the API but don't require that your app is hosted on Google Cloud Platform. This exporter also supports exporting trace spans to Stackdriver Trace, and it supports the Google "Cloud Trace" propagation format header.
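As a minimal sketch of wiring this up — the contrib.go.opencensus.io/exporter/stackdriver import path and the "my-project-id" project are assumptions/placeholders — one exporter is created and registered for both stats and traces:

    import (
        "log"
        "time"

        "contrib.go.opencensus.io/exporter/stackdriver"
        "go.opencensus.io/stats/view"
        "go.opencensus.io/trace"
    )

    func main() {
        sd, err := stackdriver.NewExporter(stackdriver.Options{
            ProjectID: "my-project-id", // placeholder project
            // MonitoringClientOptions and TraceClientOptions can be set here
            // when Application Default Credentials are not appropriate.
        })
        if err != nil {
            log.Fatalf("creating Stackdriver exporter: %v", err)
        }
        defer sd.Flush() // push any buffered metrics and spans before exiting

        view.RegisterExporter(sd)  // export view (stats) data
        trace.RegisterExporter(sd) // export trace spans
        view.SetReportingPeriod(60 * time.Second)
    }

The same exporter value serves both the stats and the trace pipelines, which is why a single Options struct carries both sets of client options.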
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include `<`, `<=`, `>`, `>=`, `==`, and `array-contains`. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it.
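A compact, hedged sketch of that flow — the State type, its field names and tags, the "States" collection, and the "my-project-id" project are all illustrative placeholders:

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/firestore"
        "google.golang.org/api/iterator"
    )

    // State mirrors the example document layout; nothing about these field
    // names or tags is required by the API.
    type State struct {
        Capital    string  `firestore:"capital"`
        Population float64 `firestore:"pop"` // in millions
    }

    func main() {
        ctx := context.Background()

        client, err := firestore.NewClient(ctx, "my-project-id")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Read a single document and extract its data into a struct.
        snap, err := client.Doc("States/NewYork").Get(ctx)
        if err != nil {
            log.Fatal(err)
        }
        var s State
        if err := snap.DataTo(&s); err != nil {
            log.Fatal(err)
        }
        fmt.Println(s)

        // Build a query on the collection and iterate over the results.
        iter := client.Collection("States").Where("pop", ">", 10).Documents(ctx)
        defer iter.Stop()
        for {
            doc, err := iter.Next()
            if err == iterator.Done {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(doc.Data())
        }
    }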
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. Note: This package is in beta. Some backwards-incompatible changes may occur. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use a TimestampBound of MaxStaleness(1 * time.Minute). See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may abort and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
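A hedged, end-to-end sketch of the read and write paths above — the database path, table name, and columns are placeholders:

    import (
        "context"
        "log"

        "cloud.google.com/go/spanner"
        "google.golang.org/api/iterator"
    )

    func main() {
        ctx := context.Background()

        // The database path is a placeholder.
        client, err := spanner.NewClient(ctx,
            "projects/my-project/instances/my-instance/databases/my-db")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Write a row with a mutation, then read it back in a single-use
        // read-only transaction.
        _, err = client.Apply(ctx, []*spanner.Mutation{
            spanner.Insert("Users",
                []string{"ID", "Name"},
                []interface{}{int64(1), "alice"}),
        })
        if err != nil {
            log.Fatal(err)
        }

        row, err := client.Single().ReadRow(ctx, "Users",
            spanner.Key{int64(1)}, []string{"Name"})
        if err != nil {
            log.Fatal(err)
        }
        var name string
        if err := row.Column(0, &name); err != nil {
            log.Fatal(err)
        }

        // Query with a SQL statement and iterate over the results.
        stmt := spanner.Statement{
            SQL:    "SELECT ID, Name FROM Users WHERE Name = @name",
            Params: map[string]interface{}{"name": "alice"},
        }
        iter := client.Single().Query(ctx, stmt)
        defer iter.Stop()
        for {
            row, err := iter.Next()
            if err == iterator.Done {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            var id int64
            var n string
            if err := row.Columns(&id, &n); err != nil {
                log.Fatal(err)
            }
            log.Println(id, n)
        }
    }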
Package bigtable is an API to Google Cloud Bigtable. See https://cloud.google.com/bigtable/docs/ for general product documentation. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Use NewClient or NewAdminClient to create a client that can be used to access the data or admin APIs respectively. Both require credentials that have permission to access the Cloud Bigtable API. If your program is run on Google App Engine or Google Compute Engine, using the Application Default Credentials (https://developers.google.com/accounts/docs/application-default-credentials) is the simplest option. Those credentials will be used by default when NewClient or NewAdminClient are called. To use alternate credentials, pass them to NewClient or NewAdminClient using option.WithTokenSource. For instance, you can use service account credentials by visiting https://cloud.google.com/console/project/MYPROJECT/apiui/credential, creating a new OAuth "Client ID", storing the JSON key somewhere accessible, and writing code along the lines of the sketch that follows this description. There, `google` means the golang.org/x/oauth2/google package and `option` means the google.golang.org/api/option package. The principal way to read from a Bigtable is to use the ReadRows method on *Table. A RowRange specifies a contiguous portion of a table. A Filter may be provided through RowFilter to limit or transform the data that is returned. To read a single row, use the ReadRow helper method. This API exposes two distinct forms of writing to a Bigtable: a Mutation and a ReadModifyWrite. The former expresses idempotent operations. The latter expresses non-idempotent operations and returns the new values of updated cells. These operations are performed by creating a Mutation or ReadModifyWrite (with NewMutation or NewReadModifyWrite), building up one or more operations on that, and then using the Apply or ApplyReadModifyWrite methods on a Table. For instance, you can set a couple of cells in a table with a Mutation, or increment an encoded value in one cell with a ReadModifyWrite; both appear in the sketch below. If a read or write operation encounters a transient error it will be retried until a successful response, an unretryable error or the context deadline is reached. Non-idempotent writes (where the timestamp is set to ServerTime) will not be retried. In the case of ReadRows, retried calls will not re-scan rows that have already been processed.
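A hedged sketch of the pieces mentioned above — service-account credentials, a single-row read, a Mutation, and a ReadModifyWrite. The project, instance, table, column family, and row key names are placeholders, as is the credentials path:

    import (
        "context"
        "io/ioutil"
        "log"

        "cloud.google.com/go/bigtable"
        "golang.org/x/oauth2/google"
        "google.golang.org/api/option"
    )

    func main() {
        ctx := context.Background()

        // Load service-account credentials; the path is a placeholder.
        jsonKey, err := ioutil.ReadFile("/path/to/key.json")
        if err != nil {
            log.Fatal(err)
        }
        config, err := google.JWTConfigFromJSON(jsonKey, bigtable.Scope)
        if err != nil {
            log.Fatal(err)
        }
        client, err := bigtable.NewClient(ctx, "my-project", "my-instance",
            option.WithTokenSource(config.TokenSource(ctx)))
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        tbl := client.Open("mytable")

        // Read a single row.
        row, err := tbl.ReadRow(ctx, "com.example/page1")
        if err != nil {
            log.Fatal(err)
        }
        _ = row

        // Idempotent write: set two cells with a Mutation.
        mut := bigtable.NewMutation()
        mut.Set("links", "example.com", bigtable.Now(), []byte("1"))
        mut.Set("links", "golang.org", bigtable.Now(), []byte("1"))
        if err := tbl.Apply(ctx, "com.example/page1", mut); err != nil {
            log.Fatal(err)
        }

        // Non-idempotent write: increment an encoded counter cell.
        rmw := bigtable.NewReadModifyWrite()
        rmw.Increment("counts", "visits", 1)
        if _, err := tbl.ApplyReadModifyWrite(ctx, "com.example/page1", rmw); err != nil {
            log.Fatal(err)
        }
    }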
Package cloud contains types and common functions related to the Google Cloud Platform APIs.
Google Play Music Web Manager presents a simple RESTful web interface for managing your Google Play Music library. It takes a single argument, the TCP address to serve on (default localhost:8080). The methods and endpoints it supports are as follows: All endpoints expect the request to carry the access_token and uploader_id cookies, which are a Google OAuth 2.0 token carrying the scope https://www.googleapis.com/auth/musicmanager and the ID of an authorized device under your Play Music account (see https://play.google.com/music/listen#/accountsettings, and inspect the id attribute of the div.device-list-item elements) respectively. For convenience, Google Play Music Web Manager provides the following endpoints for acquiring these cookies: The main endpoints redirect to the start of the auth flow if either access_token or uploader_id is missing, and use the redirect query parameter to return back once the flow is completed, so using the interface through a web browser is a relatively seamless experience. Google Play Music Web Manager can be run either as a standalone web server or on Google App Engine, though the latter comes with a few gotchas. See the source file appengine.go for details. On startup, Google Play Music Web Manager expects to find in the current working directory a file called credentials.json, which contains web application credentials for a Google Cloud project with the Google+ API enabled, and a Go html/template file static/list.tpl. The latter is included in the default distribution; the former can be acquired by registering a new project in the Google Developers Console (https://console.developers.google.com) and creating a new client ID for a web application under the "Credentials" tab. Note that Google Play Music Web Manager ignores the redirect_uris key of the credentials.json file; the OAuth2 redirect URL is instead generated dynamically by appending "/oauth2callback" to the host on which the request was received.
Package weasel provides means for serving content from a Google Cloud Storage (GCS) bucket, suitable for hosting on Google App Engine. See README.md for the design details. This package is a work in progress and makes no API stability promises.
Package gcpjwt has Google Cloud Platform (Cloud KMS, IAM API, & AppEngine App Identity API) jwt-go implementations. It should work across virtually all environments, on or off of Google's Cloud Platform. It is highly recommended that you override the default algorithm implementations in dgrijalva/jwt-go for any algorithm you want to back with a GCP service. Otherwise you will have to manually pick the verification method for your JWTs, and the signing methods will place non-standard headers in the rendered JWT (with the exception of signJwt from the IAM API, which overwrites the header with its own). You should only need to override the algorithm(s) you plan to use. It is also incorrect to override overlapping algorithms, such as both `gcpjwt.SigningMethodKMSRS256.Override()` and `gcpjwt.SigningMethodIAMJWT.Override()`. Example: As long as you override a default algorithm implementation as shown above, usage of dgrijalva/jwt-go is mostly unchanged. Token creation is done more or less the same way as in the dgrijalva/jwt-go package. The key that you need to provide is always going to be a context.Context, usually with a configuration object loaded in: Example: Finally, the steps to validate a token should be straightforward. This library provides you with helper jwt.Keyfunc implementations to do the heavy lifting around getting the public certificates for verification: Example:
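A rough sketch of one possible signing flow. The github.com/someone1/gcp-jwt-go import path is an assumption, as is the assumption that SigningMethodIAMJWT satisfies jwt-go's SigningMethod interface; the claims are placeholders, and in real code a gcpjwt configuration object would normally be loaded into the context first using the package's own config helpers:

    import (
        "context"

        jwt "github.com/dgrijalva/jwt-go"
        gcpjwt "github.com/someone1/gcp-jwt-go" // import path assumed
    )

    func signToken(ctx context.Context) (string, error) {
        // Override the matching default jwt-go algorithm so parsing and
        // verification pick up the GCP-backed implementation automatically.
        gcpjwt.SigningMethodIAMJWT.Override()

        claims := jwt.StandardClaims{Subject: "example-subject"} // placeholder claims
        token := jwt.NewWithClaims(gcpjwt.SigningMethodIAMJWT, claims)

        // Per the package docs, the "key" is a context.Context, usually with
        // a configuration object loaded into it beforehand.
        return token.SignedString(ctx)
    }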
Package vfs provides a pluggable, extensible, and opinionated set of filesystem functionality for Go across a number of filesystem types such as os, S3, and GCS. When building our platform, we initially wrote our own library. Not only was it ugly, but because the behaviors of each "filesystem" were different, we had to constantly alter the file locations and pass a bucket string (even if the fs didn't know what a bucket was). We found a handful of third-party libraries that were interesting but none of them had everything we needed/wanted. Of particular inspiration was https://github.com/spf13/afero in its composition of the super-powerful stdlib io.* interfaces. Unfortunately, it didn't support Google Cloud Storage and there was still a lot of passing around of strings and structs. Few, if any, of the vfs-like libraries provided interfaces to easily and confidently create new filesystem backends. What we needed/wanted was the following (and more): Go install: Glide installation: We provide vfssimple as a basic way of initializing filesystem backends (see each implementation's docs about authentication). vfssimple pulls in every c2fo/vfs backend. If you need to reduce the backend requirements (and app memory footprint) or add a third-party backend, you'll need to implement your own "factory". See the backend doc for more info. You can then use those file systems to initialize locations which you'll be referencing frequently, or initialize files directly. With a number of files and locations between s3 and the local file system, you can perform a number of actions without any consideration for the system's API or implementation details. Third-party backends: feel free to send a pull request if you want to add your backend to the list. Things to add: Brought to you by the Enterprise Pipeline team at C2FO: John Judd - john.judd@c2fo.com, Jason Coble - [@jasonkcoble](https://twitter.com/jasonkcoble) - jason@c2fo.com, Chris Roush - chris.roush@c2fo.com. https://github.com/c2fo/ Contributing: see the repository for guidelines. Distributed under the MIT license. See http://github.com/c2fo/vfs/License.md for more information.
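A hedged sketch of the vfssimple flow described above — the bucket names and paths are placeholders, backend authentication is assumed to be configured per the implementation docs, and the module path may carry a major-version suffix depending on the release you use. vfssimple.NewLocation works the same way for locations you reference frequently:

    import (
        "io"
        "log"

        "github.com/c2fo/vfs/vfssimple" // module path may carry a /vN suffix
    )

    func main() {
        // vfssimple picks the backend from the URI scheme.
        src, err := vfssimple.NewFile("gs://my-bucket/path/to/report.csv")
        if err != nil {
            log.Fatal(err)
        }
        dst, err := vfssimple.NewFile("s3://other-bucket/backups/report.csv")
        if err != nil {
            log.Fatal(err)
        }

        // vfs file values satisfy the stdlib io interfaces, so copying across
        // backends is just an io.Copy; the package also offers higher-level
        // copy/move helpers on files and locations.
        if _, err := io.Copy(dst, src); err != nil {
            log.Fatal(err)
        }
        if err := src.Close(); err != nil {
            log.Fatal(err)
        }
        if err := dst.Close(); err != nil {
            log.Fatal(err)
        }
    }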
Package gcscache provides storage, backed by Google Cloud Storage, for certificates managed by the golang.org/x/crypto/acme/autocert package. This package is a work in progress and makes no API stability promises.
Package nds is a Go datastore API for Google Cloud Datastore that caches datastore calls in a strongly consistent manner. This often has the effect of making your app faster as cache access is often 10x faster than datastore access. It can also make your app cheaper to run as cache calls are typically cheaper. This package goes to great lengths to ensure that stale datastore values are never returned to clients, i.e. the caching layer is strongly consistent. It does this by using a similar strategy to Python's ndb. However, this package fixes a couple of subtle edge case bugs that are found in ndb. See http://goo.gl/3ByVlA for one such bug. There are currently no known consistency issues with the caching strategy employed by this package. Package nds' Client is used exactly the same way as the cloud.google.com/go/datastore.Client for implemented calls. Ensure that you change all your datastore client Get, Put, Delete, Mutate, and RunInTransaction function calls to use the nds client and Transaction type when converting your own code. The one caveat with transactions is running queries: there is a helper function for adding the transaction to a datastore.Query. If you mix datastore and nds API calls then you are liable to get stale cache data. You can implement your own nds.Cacher and use it in place of the cache backends provided by this package. The cache backends offered by Google such as AppEngine's Memcache and Cloud Memorystore (redis) are available via this package and can be used as references when adding your own.
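A hedged sketch of the drop-in usage described above. The /v2 module path is an assumption, and constructing the client (including choosing a Cacher backend) is left to the package's own constructor; the Put and Get call shapes mirror cloud.google.com/go/datastore exactly, as the text promises:

    import (
        "context"

        "cloud.google.com/go/datastore"
        "github.com/qedus/nds/v2" // the /v2 suffix is an assumption
    )

    type Counter struct {
        Count int64
    }

    // putAndGet takes an already constructed *nds.Client and uses it exactly
    // like a datastore client; the read is served from cache when possible,
    // without ever returning stale values.
    func putAndGet(ctx context.Context, client *nds.Client) error {
        key := datastore.NameKey("Counter", "hits", nil)
        if _, err := client.Put(ctx, key, &Counter{Count: 1}); err != nil {
            return err
        }
        var got Counter
        return client.Get(ctx, key, &got)
    }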
Package spannerr (pronounced Spanner R, or Spanner-er) provides session management and a simple interface for Google Cloud Spanner's REST API. If you are not running your services on Google App Engine, you should just use the official Cloud Spanner (gRPC) client: https://godoc.org/cloud.google.com/go/spanner
Package gonm automatically derives a datastore key from a struct and caches fetched entities in local memory. Gonm wraps Google Cloud Datastore; I used https://godoc.org/cloud.google.com/go/datastore and https://godoc.org/github.com/mjibson/goon as references. Gonm generates the key from the struct's ID field, and this key is used for Get and Put. After these methods are used, the struct's ID field is filled in. Gonm also stores results in a local cache by default, so repeating the same fetch is fast. Gonm is simple to use. A key consists of an optional parent key; the parent key is derived from the struct's Parent field. ID may be an int64 or a string, and Parent is a *datastore.Key, as in the datastore API. For example, you can create a child-parent relationship this way (see the sketch below). If you want to use another property as the key ID, put an id tag on the struct field. The same applies to the parent key and the key kind: put a parent tag on the field used for the parent key, and a kind tag for the key kind. Gonm returns ErrNoIDField when the ID cannot be obtained from the received struct. Check https://godoc.org/cloud.google.com/go/datastore#hdr-Properties to learn more about datastore properties. Of course you can use the PropertyLoadSaver interface; however, be careful, because gonm wraps the datastore. Check https://godoc.org/cloud.google.com/go/datastore#hdr-The_PropertyLoadSaver_Interface to learn more about the PropertyLoadSaver interface. Gonm queries are very similar to datastore queries; gonm uses datastore.Query. Gonm supports Run and GetAll, but I recommend using GetKeysOnly. Gonm.RunInTransaction runs a function in a transaction. To install and set up the emulator and its environment variables, see the documentation at https://cloud.google.com/datastore/docs/tools/datastore-emulator.
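A small illustration of those conventions — the entity names are made up, and constructing the gonm instance itself is not shown here (see the package docs):

    import "cloud.google.com/go/datastore"

    // Group and User rely on gonm's default conventions described above: the
    // key is generated from the ID field (int64 or string), and Parent, a
    // *datastore.Key, establishes the child-parent relationship. After a Put
    // or Get, gonm fills in the ID field for you.
    type Group struct {
        ID   int64
        Name string
    }

    type User struct {
        ID     int64
        Parent *datastore.Key // key of the owning Group
        Name   string
    }

Fields that should supply the key ID, parent, or kind from a different property would instead carry the id, parent, or kind struct tags mentioned above.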
Package api is the root of the packages used to access Google Cloud Services. See https://godoc.org/google.golang.org/api for a full list of sub-packages. Within api there exist numerous clients which connect to Google APIs, and various utility packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option. All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in https://godoc.org/google.golang.org/api/transport for more details. Due to the auto-generated nature of this collection of libraries, complete APIs or specific versions can appear or go away without notice. As a result, you should always locally vendor any API(s) that your code relies upon. Google APIs follow semver as specified by https://cloud.google.com/apis/design/versioning. The code generator and the code it produces - the libraries in the google.golang.org/api/... subpackages - are beta. Note that versioning and stability is strictly not communicated through Go modules. Go modules are used only for dependency management. Many parameters are specified using ints. However, underlying APIs might operate on a finer granularity, expecting int64, int32, uint64, or uint32, all of whom have different maximum values. Subsequently, specifying an int parameter in one of these clients may result in an error from the API because the value is too large. To see the exact type of int that the API expects, you can inspect the API's discovery doc. A global catalogue pointing to the discovery doc of APIs can be found at https://www.googleapis.com/discovery/v1/apis. This field can be found on all Request/Response structs in the generated clients. All of these types have the JSON `omitempty` field tag present on their fields. This means if a type is set to its default value it will not be marshalled. Sometimes you may actually want to send a default value, for instance sending an int of `0`. In this case you can override the `omitempty` feature by adding the field name to the `ForceSendFields` slice. See docs on any struct for more details. An error returned by a client's Do method may be cast to a *googleapi.Error or unwrapped to an *apierror.APIError. The https://pkg.go.dev/google.golang.org/api/googleapi#Error type is useful for getting the HTTP status code: The https://pkg.go.dev/github.com/googleapis/gax-go/v2/apierror#APIError type is useful for inspecting structured details of the underlying API response, such as the reason for the error and the error domain, which is typically the registered service name of the tool or product that generated the error: If an API call returns an Operation, that means it could take some time to complete the work initiated by the API call. Applications that are interested in the end result of the operation they initiated should wait until the Operation.Done field indicates it is finished. To do this, use the service's Operation client, and a loop, like so:
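As a minimal sketch of the error-handling note above, here is one way to unwrap an error from a generated client's Do method into a *googleapi.Error with the standard errors package; the svc.SomeCall() invocation is a hypothetical stand-in for any generated-client call:

	// import "errors", "log" and "google.golang.org/api/googleapi"
	_, err := svc.SomeCall().Do() // hypothetical generated-client call
	var gerr *googleapi.Error
	if errors.As(err, &gerr) {
		// gerr.Code holds the HTTP status code returned by the API.
		log.Printf("API error: status=%d message=%q", gerr.Code, gerr.Message)
	}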
Package godscache is a wrapper around the official Google Cloud Datastore Go client library "cloud.google.com/go/datastore", which adds caching to datastore requests using memcached. Its name is a play on words, and it's actually called Go DS Cache. Note that it is wrapping the newer datastore library, which is intended for use with App Engine Flexible Environment, Compute Engine, or Kubernetes Engine, and is not for use with App Engine Standard. If you're looking for something similar for App Engine Standard, check out the highly recommended nds library: https://godoc.org/github.com/qedus/nds. For the official Google library documentation, go here: https://godoc.org/cloud.google.com/go/datastore Godscache follows the Google Cloud Datastore client library API very closely, to act as a drop-in replacement for the official library from Google. Things which aren't implemented in this library can still be used; they just won't be cached. For example, the Client struct holds a Parent member which is the raw Google Datastore client, so you can use that instead to make your requests if you need to use some feature that's not implemented in godscache, or if you want to bypass the cache for some reason. To use godscache, you will need a Google Cloud project with an initialized Datastore on it, and a memcached instance to connect to. You can connect to as many memcached instances as you want.
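For example, here is a minimal sketch of bypassing the cache through the Parent field described above, assuming a godscache client c has already been constructed (see the package docs for setup) and using an illustrative entity kind:

	// import "cloud.google.com/go/datastore"
	key := datastore.NameKey("Article", "some-id", nil)
	var a struct{ Title string }
	// c.Parent is the raw *datastore.Client, so this request skips the memcached layer.
	if err := c.Parent.Get(ctx, key, &a); err != nil {
		// handle err
	}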
Package vfs provides a pluggable, extensible, and opinionated set of file system functionality for Go across a number of file system types such as os, S3, and GCS. When building our platform, we initially wrote a library that was something to the effect of Not only was it ugly, but because the behaviors of each "file system" were different, we had to constantly alter the file locations and pass a bucket string (even if the fs didn't know what a bucket was). We found a handful of third-party libraries that were interesting but none of them had everything we needed/wanted. Of particular inspiration was https://github.com/spf13/afero in its composition of the super-powerful stdlib io.* interfaces. Unfortunately, it didn't support Google Cloud Storage and there was still a lot of passing around of strings and structs. Few, if any, of the vfs-like libraries provided interfaces to easily and confidently create new file system backends. What we needed/wanted was the following (and more): Pre 1.17: Post 1.17: Upgrading from v5 to v6 With v6.0.0, the sftp.Options struct changed to accept an array of Key Exchange algorithms rather than a string. To update, change the syntax of the auth commands. becomes We provide vfssimple as a basic way of initializing file system backends (see each implementation's docs about authentication). vfssimple pulls in every c2fo/vfs backend. If you need to reduce the backend requirements (and app memory footprint) or add a third-party backend, you'll need to implement your own "factory". See the backend doc for more info. You can then use those file systems to initialize locations which you'll be referencing frequently, or initialize files directly. You can perform a number of actions without any consideration for the underlying system's api or implementation details. File's io.* interfaces may be used directly: * none so far Feel free to send a pull request if you want to add your backend to the list. Things to add: Brought to you by the Enterprise Pipeline team at C2FO: * John Judd - john.judd@c2fo.com * Jason Coble - [@jasonkcoble](https://twitter.com/jasonkcoble) - jason@c2fo.com * Chris Roush – chris.roush@c2fo.com * Moe Zeid - moe.zeid@c2fo.com https://github.com/c2fo/ Contributing Distributed under the MIT license. See http://github.com/c2fo/vfs/License.md for more information. * absolute path - A path is said to be absolute if it provides the entire context needed to find a file, including the file system root. An absolute path must begin with a slash and may include . and .. directories. * file path - A file path ends with a filename and therefore may not end with a slash. It may be relative or absolute. * location path - A location/dir path must end with a slash. It may be relative or absolute. * relative path - A relative path is a way to locate a dir or file relative to another directory. A relative path may not begin with a slash but may include . and .. directories. * URI - A Uniform Resource Identifier (URI) is a string of characters that unambiguously identifies a particular resource. To guarantee uniformity, all URIs follow a predefined set of syntax rules, but also maintain extensibility through a separately defined hierarchical naming scheme (e.g. http://).
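As a rough sketch of the vfssimple usage described above (the bucket names and paths are illustrative, and each backend's credential requirements still apply):

	file, err := vfssimple.NewFile("gs://mybucket/path/to/report.txt")
	if err != nil {
		// handle err
	}
	if _, err := file.Write([]byte("hello world")); err != nil {
		// handle err
	}
	// Close commits the write to the backend.
	if err := file.Close(); err != nil {
		// handle err
	}

	loc, err := vfssimple.NewLocation("s3://mybucket/some/path/")
	if err != nil {
		// handle err
	}
	_ = loc // use loc.NewFile, loc.List, etc. from here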
Package gcpjwt has Google Cloud Platform (Cloud KMS, IAM API, & AppEngine App Identity API) jwt-go implementations. It should work across virtually all environments, on or off of Google's Cloud Platform. It is highly recommended that you override the default dgrijalva/jwt-go implementations of the algorithms you want to back with a GCP service. Otherwise you will have to manually pick the verification method for your JWTs, and they will place non-standard headers in the rendered JWT (with the exception of signJwt from the IAM API, which overwrites the header with its own). You should only need to override the algorithm(s) you plan to use. It is also incorrect to override overlapping algorithms, such as both `gcpjwt.SigningMethodKMSRS256.Override()` and `gcpjwt.SigningMethodIAMJWT.Override()`. Example: As long as you override a default algorithm implementation as shown above, using dgrijalva/jwt-go is mostly unchanged. Token creation is done more or less the same way as in the dgrijalva/jwt-go package. The key that you need to provide is always going to be a context.Context, usually with a configuration object loaded in: Example: Finally, the steps to validate a token should be straightforward. This library provides you with helper jwt.Keyfunc implementations to do the heavy lifting around getting the public certificates for verification: Example:
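As a rough sketch of the override-then-sign flow described above; loading the gcpjwt configuration into the context is elided here (see the package examples), and the claims are illustrative:

	// import "time" and "github.com/dgrijalva/jwt-go" in addition to gcpjwt
	gcpjwt.SigningMethodKMSRS256.Override()

	// ... load the gcpjwt KMS configuration into ctx here (see the package examples) ...

	token := jwt.NewWithClaims(gcpjwt.SigningMethodKMSRS256, jwt.MapClaims{
		"sub": "user@example.com",
		"exp": time.Now().Add(time.Hour).Unix(),
	})
	// For the GCP-backed methods, the "key" is the context carrying the configuration.
	signed, err := token.SignedString(ctx)
	if err != nil {
		// handle err
	}
	_ = signed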
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. Note: This package is in beta. Some backwards-incompatible changes may occur. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
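The following is a minimal sketch of the client-creation and single-read steps described above; the database path and the Accounts table with Name and Balance columns are illustrative:

	ctx := context.Background()
	client, err := spanner.NewClient(ctx, "projects/my-project/instances/my-instance/databases/my-db")
	if err != nil {
		// handle err
	}
	defer client.Close()

	// Read one row by key in a single-use read-only transaction.
	row, err := client.Single().ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"Balance"})
	if err != nil {
		// handle err
	}
	var balance int64
	if err := row.Column(0, &balance); err != nil {
		// handle err
	}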
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may abort and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
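Below is a minimal sketch of applying mutations and of buffering writes inside a read-write transaction as described above, reusing the illustrative client and Accounts table from the previous sketch:

	m := spanner.InsertOrUpdate("Accounts",
		[]string{"Name", "Balance"},
		[]interface{}{"alice", int64(10)})
	if _, err := client.Apply(ctx, []*spanner.Mutation{m}); err != nil {
		// handle err
	}

	_, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		row, err := txn.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"Balance"})
		if err != nil {
			return err
		}
		var balance int64
		if err := row.Column(0, &balance); err != nil {
			return err
		}
		// BufferWrite queues the mutation; it is applied when the transaction commits.
		return txn.BufferWrite([]*spanner.Mutation{
			spanner.Update("Accounts", []string{"Name", "Balance"}, []interface{}{"alice", balance + 5}),
		})
	})
	if err != nil {
		// handle err
	}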
Package logging contains a Stackdriver Logging client suitable for writing logs. For reading logs, and working with sinks, metrics and monitored resources, see package cloud.google.com/go/logging/logadmin. This client uses Logging API v2. See https://cloud.google.com/logging/docs/api/v2/ for an introduction to the API. Note: This package is in beta. Some backwards-incompatible changes may occur. Use a Client to interact with the Stackdriver Logging API. For most use cases, you'll want to add log entries to a buffer to be periodically flushed (automatically and asynchronously) to the Stackdriver Logging service. You should call Client.Close before your program exits to flush any buffered log entries to the Stackdriver Logging service. For critical errors, you may want to send your log entries immediately. LogSync is slow and will block until the log entry has been sent, so it is not recommended for normal use. An entry payload can be a string, as in the examples above. It can also be any value that can be marshaled to a JSON object, like a map[string]interface{} or a struct: If you have a []byte of JSON, wrap it in json.RawMessage: You may want to use a standard log.Logger in your program. An Entry may have one of a number of severity levels associated with it. You can view Stackdriver logs for projects at https://console.cloud.google.com/logs/viewer. Use the dropdown at the top left. When running from a Google Cloud Platform VM, select "GCE VM Instance". Otherwise, select "Google Project" and then the project ID. Logs for organizations, folders and billing accounts can be viewed on the command line with the "gcloud logging read" command. To group all the log entries written during a single HTTP request, create two Loggers, a "parent" and a "child," with different log IDs. Both should be in the same project, and have the same MonitoredResource type and labels. - Parent entries must have HTTPRequest.Request populated. (Strictly speaking, only the URL is necessary.) - A child entry's timestamp must be within the time interval covered by the parent request (i.e., older than parent.Timestamp, and newer than parent.Timestamp - parent.HTTPRequest.Latency, assuming the parent timestamp marks the end of the request). - The trace field must be populated in all of the entries and match exactly. You should observe the child log entries grouped under the parent on the console. The parent entry will not inherit the severity of its children; you must update the parent severity yourself.
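As a minimal sketch of the buffered-logging flow described above (the project ID and log name are illustrative):

	ctx := context.Background()
	client, err := logging.NewClient(ctx, "my-project")
	if err != nil {
		// handle err
	}
	lg := client.Logger("my-log")
	// Log buffers the entry; it is flushed asynchronously.
	lg.Log(logging.Entry{Payload: "something happened", Severity: logging.Warning})
	// Close flushes any buffered entries before the program exits.
	if err := client.Close(); err != nil {
		// handle err
	}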
Package gcscache provides storage, backed by Google Cloud Storage, for certificates managed by the golang.org/x/crypto/acme/autocert package. This package is a work in progress and makes no API stability promises.
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use SQL to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include '<', '<=', '>', '>=', '==', 'in', 'array-contains', and 'array-contains-any'. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it. This package supports the Cloud Firestore emulator, which is useful for testing and development. 
Environment variables are used to indicate that Firestore traffic should be directed to the emulator instead of the production Firestore service. To install and run the emulator, and for details of the environment variables it uses, see the documentation at https://cloud.google.com/sdk/gcloud/reference/beta/emulators/firestore/. Once the emulator is running, set FIRESTORE_EMULATOR_HOST to the API endpoint.
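As a minimal sketch of the reads and writes described above (the project ID, collection, and fields are illustrative):

	ctx := context.Background()
	client, err := firestore.NewClient(ctx, "my-project")
	if err != nil {
		// handle err
	}
	defer client.Close()

	ca := client.Collection("States").Doc("California")
	// Set replaces the document or creates it if it does not exist.
	if _, err := ca.Set(ctx, map[string]interface{}{
		"Capital":    "Sacramento",
		"Population": 39500000,
	}); err != nil {
		// handle err
	}

	snap, err := ca.Get(ctx)
	if err != nil {
		// handle err
	}
	var state struct {
		Capital    string
		Population int64
	}
	// DataTo extracts the document contents into the struct.
	if err := snap.DataTo(&state); err != nil {
		// handle err
	}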
Package aeletsencrypt manages Let's Encrypt certificates for AppEngine. Package initialization registers an HTTP handler at /.well-known/letsencrypt restricted to app admins and AppEngine cron, which calls it daily. This handler uses the AppEngine Admin API as the AppEngine default service account to list custom domains, creating certificates when missing, and to list certificates, updating them 30 days before they expire. To create and update certificates with Let's Encrypt it creates a temporary account key, resolves the http-01 challenge for domain validation, creates a certificate key and request, receives the signed certificate with its chain and uploads it to AppEngine along with the key. Nothing is saved in the app itself. In the Google Cloud Console, configure custom domains (https://console.cloud.google.com/appengine/settings/domains), enable the AppEngine Admin API (https://console.cloud.google.com/apis/api/appengine.googleapis.com/overview) and grant the AppEngine default service account the AppEngine admin role (https://console.cloud.google.com/iam-admin/iam/project). In the Google Webmaster Console, add the AppEngine default service account as a verified owner for the domains (https://www.google.com/webmasters/verification/details). Import this package anywhere in your app for its side effect: during initialization it registers its handlers. Add the following handlers to your app.yaml: Handler order matters, so insert them above more generic handlers (e.g. /.*). The "secure: optional" setting is to avoid the https redirect, which might not work yet. Add the following cron jobs to your cron.yaml, creating it if necessary: At this point you are done; certificates for all custom domains will be created the next time the cron job runs. To create certificates immediately, run the cron job now by visiting http://<any custom domain>/.well-known/letsencrypt. If you have several domains, be mindful of Let's Encrypt rate limits (https://letsencrypt.org/docs/rate-limits/), in particular 20 domains per week. If you hit this limit, just wait a week and let the cron job resume certificate creation for the remaining domains. If you add new custom domains later, the cron job will automatically create certificates next time it runs.
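Since the package works purely through its init-time side effects, wiring it in is a single blank import; the import path below is illustrative, so substitute the package's actual path:

	import (
		_ "example.com/yourapp/aeletsencrypt" // registers the /.well-known/letsencrypt handler at init time
	)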
Package cloud contains Google Cloud Platform APIs related types and common functions.
Package cloud is the root of the packages used to access Google Cloud Services. See https://godoc.org/cloud.google.com/go for a full list of sub-packages. All clients in sub-packages are configurable via client options. These options are described here: https://godoc.org/google.golang.org/api/option. All the clients in sub-packages support authentication via Google Application Default Credentials (see https://cloud.google.com/docs/authentication/production), or by providing a JSON key file for a Service Account. See the authentication examples in this package for details. By default, all requests in sub-packages will run indefinitely, retrying on transient errors when correctness allows. To set timeouts or arrange for cancellation, use contexts. See the examples for details. Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context. Connection pooling differs in clients based on their transport. Cloud clients either rely on HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP (bigquery, compute, storage, and translate) rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the default http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport. For gRPC clients (all others in this repo), connection pooling is configurable. Users of cloud client libraries may specify option.WithGRPCConnectionPool(n) as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and addressed in a round robin fashion. Minimal docker images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/GoogleCloudPlatform/google-cloud-go/issues/928 for more information. To see gRPC logs, set the environment variable GRPC_GO_LOG_SEVERITY_LEVEL. See https://godoc.org/google.golang.org/grpc/grpclog for more information. For HTTP logging, set the GODEBUG environment variable to "http2debug=1" or "http2debug=2". Google Application Default Credentials is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://developers.google.com/identity/protocols/application-default-credentials. To arrange for an RPC to be canceled, use context.WithCancel. You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.developers.google.com/permissions/serviceaccounts. This example uses the Datastore client, but the same steps apply to the other client libraries underneath this package. In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. The google package in this example is at golang.org/x/oauth2/google. This example uses the PubSub client, but the same steps apply to the other client libraries underneath this package. To set a timeout for an RPC, use context.WithTimeout.
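As a minimal sketch of the credential and timeout advice above, using the Datastore client (the project ID and key file path are illustrative):

	// import "cloud.google.com/go/datastore", "google.golang.org/api/option" and "time"
	ctx := context.Background()
	client, err := datastore.NewClient(ctx, "my-project",
		option.WithCredentialsFile("/path/to/service-account-key.json"))
	if err != nil {
		// handle err
	}

	// Bound an individual RPC with a timeout; dialing itself is non-blocking.
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()
	key := datastore.NameKey("Task", "task1", nil)
	if _, err := client.Put(ctx, key, &struct{ Done bool }{}); err != nil {
		// handle err
	}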
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. Note: This package is in beta. Some backwards-incompatible changes may occur. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may abort and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can call use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use SQL to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include `<`, `<=`, `>`, `>=`, `==`, and 'array-contains'. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it.
Package logging contains a Stackdriver Logging client suitable for writing logs. For reading logs, and working with sinks, metrics and monitored resources, see package cloud.google.com/go/logging/logadmin. This client uses Logging API v2. See https://cloud.google.com/logging/docs/api/v2/ for an introduction to the API. Note: This package is in beta. Some backwards-incompatible changes may occur. Use a Client to interact with the Stackdriver Logging API. For most use cases, you'll want to add log entries to a buffer to be periodically flushed (automatically and asynchronously) to the Stackdriver Logging service. You should call Client.Close before your program exits to flush any buffered log entries to the Stackdriver Logging service. For critical errors, you may want to send your log entries immediately. LogSync is slow and will block until the log entry has been sent, so it is not recommended for normal use. An entry payload can be a string, as in the examples above. It can also be any value that can be marshaled to a JSON object, like a map[string]interface{} or a struct: If you have a []byte of JSON, wrap it in json.RawMessage: You may want use a standard log.Logger in your program. An Entry may have one of a number of severity levels associated with it. You can view Stackdriver logs for projects at https://console.cloud.google.com/logs/viewer. Use the dropdown at the top left. When running from a Google Cloud Platform VM, select "GCE VM Instance". Otherwise, select "Google Project" and then the project ID. Logs for organizations, folders and billing accounts can be viewed on the command line with the "gcloud logging read" command. To group all the log entries written during a single HTTP request, create two Loggers, a "parent" and a "child," with different log IDs. Both should be in the same project, and have the same MonitoredResouce type and labels. - Parent entries must have HTTPRequest.Request populated. (Strictly speaking, only the URL is necessary.) - A child entry's timestamp must be within the time interval covered by the parent request (i.e., older than parent.Timestamp, and newer than parent.Timestamp - parent.HTTPRequest.Latency, assuming the parent timestamp marks the end of the request. - The trace field must be populated in all of the entries and match exactly. You should observe the child log entries grouped under the parent on the console. The parent entry will not inherit the severity of its children; you must update the parent severity yourself.
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use SQL to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include `<`, `<=`, `>`, `>=`, `==`, and 'array-contains'. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it.
Package logging contains a Stackdriver Logging client suitable for writing logs. For reading logs, and working with sinks, metrics and monitored resources, see package cloud.google.com/go/logging/logadmin. This client uses Logging API v2. See https://cloud.google.com/logging/docs/api/v2/ for an introduction to the API. Note: This package is in beta. Some backwards-incompatible changes may occur. Use a Client to interact with the Stackdriver Logging API. For most use cases, you'll want to add log entries to a buffer to be periodically flushed (automatically and asynchronously) to the Stackdriver Logging service. You should call Client.Close before your program exits to flush any buffered log entries to the Stackdriver Logging service. For critical errors, you may want to send your log entries immediately. LogSync is slow and will block until the log entry has been sent, so it is not recommended for normal use. An entry payload can be a string, as in the examples above. It can also be any value that can be marshaled to a JSON object, like a map[string]interface{} or a struct: If you have a []byte of JSON, wrap it in json.RawMessage: You may want use a standard log.Logger in your program. An Entry may have one of a number of severity levels associated with it. You can view Stackdriver logs for projects at https://console.cloud.google.com/logs/viewer. Use the dropdown at the top left. When running from a Google Cloud Platform VM, select "GCE VM Instance". Otherwise, select "Google Project" and then the project ID. Logs for organizations, folders and billing accounts can be viewed on the command line with the "gcloud logging read" command. To group all the log entries written during a single HTTP request, create two Loggers, a "parent" and a "child," with different log IDs. Both should be in the same project, and have the same MonitoredResouce type and labels. - Parent entries must have HTTPRequest.Request populated. (Strictly speaking, only the URL is necessary.) - A child entry's timestamp must be within the time interval covered by the parent request (i.e., older than parent.Timestamp, and newer than parent.Timestamp - parent.HTTPRequest.Latency, assuming the parent timestamp marks the end of the request. - The trace field must be populated in all of the entries and match exactly. You should observe the child log entries grouped under the parent on the console. The parent entry will not inherit the severity of its children; you must update the parent severity yourself.
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. Note: This package is in beta. Some backwards-incompatible changes may occur. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may abort and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can call use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
Package logging contains a Stackdriver Logging client suitable for writing logs. For reading logs, and working with sinks, metrics and monitored resources, see package cloud.google.com/go/logging/logadmin. This client uses Logging API v2. See https://cloud.google.com/logging/docs/api/v2/ for an introduction to the API. Note: This package is in beta. Some backwards-incompatible changes may occur. Use a Client to interact with the Stackdriver Logging API. For most use cases, you'll want to add log entries to a buffer to be periodically flushed (automatically and asynchronously) to the Stackdriver Logging service. You should call Client.Close before your program exits to flush any buffered log entries to the Stackdriver Logging service. For critical errors, you may want to send your log entries immediately. LogSync is slow and will block until the log entry has been sent, so it is not recommended for normal use. An entry payload can be a string, as in the examples above. It can also be any value that can be marshaled to a JSON object, like a map[string]interface{} or a struct: If you have a []byte of JSON, wrap it in json.RawMessage: You may want use a standard log.Logger in your program. An Entry may have one of a number of severity levels associated with it. You can view Stackdriver logs for projects at https://console.cloud.google.com/logs/viewer. Use the dropdown at the top left. When running from a Google Cloud Platform VM, select "GCE VM Instance". Otherwise, select "Google Project" and then the project ID. Logs for organizations, folders and billing accounts can be viewed on the command line with the "gcloud logging read" command. To group all the log entries written during a single HTTP request, create two Loggers, a "parent" and a "child," with different log IDs. Both should be in the same project, and have the same MonitoredResouce type and labels. - Parent entries must have HTTPRequest.Request populated. (Strictly speaking, only the URL is necessary.) - A child entry's timestamp must be within the time interval covered by the parent request (i.e., older than parent.Timestamp, and newer than parent.Timestamp - parent.HTTPRequest.Latency, assuming the parent timestamp marks the end of the request. - The trace field must be populated in all of the entries and match exactly. You should observe the child log entries grouped under the parent on the console. The parent entry will not inherit the severity of its children; you must update the parent severity yourself.
Package bigtable is an API to Google Cloud Bigtable. See https://cloud.google.com/bigtable/docs/ for general product documentation.

See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

Use NewClient or NewAdminClient to create a client that can be used to access the data or admin APIs respectively. Both require credentials that have permission to access the Cloud Bigtable API.

If your program is run on Google App Engine or Google Compute Engine, using the Application Default Credentials (https://developers.google.com/accounts/docs/application-default-credentials) is the simplest option. Those credentials will be used by default when NewClient or NewAdminClient are called.

To use alternate credentials, pass them to NewClient or NewAdminClient using option.WithTokenSource. For instance, you can use service account credentials by visiting https://cloud.google.com/console/project/MYPROJECT/apiui/credential, creating a new OAuth "Client ID", storing the JSON key somewhere accessible, and writing

Here, `google` means the golang.org/x/oauth2/google package and `option` means the google.golang.org/api/option package.

The principal way to read from a Bigtable is to use the ReadRows method on *Table. A RowRange specifies a contiguous portion of a table. A Filter may be provided through RowFilter to limit or transform the data that is returned.

To read a single row, use the ReadRow helper method.

This API exposes two distinct forms of writing to a Bigtable: a Mutation and a ReadModifyWrite. The former expresses idempotent operations. The latter expresses non-idempotent operations and returns the new values of updated cells. These operations are performed by creating a Mutation or ReadModifyWrite (with NewMutation or NewReadModifyWrite), building up one or more operations on that, and then using the Apply or ApplyReadModifyWrite methods on a Table.

For instance, to set a couple of cells in a table:

To increment an encoded value in one cell:

If a read or write operation encounters a transient error it will be retried until a successful response, an unretryable error or the context deadline is reached. Non-idempotent writes (where the timestamp is set to ServerTime) will not be retried. In the case of ReadRows, retried calls will not re-scan rows that have already been processed.
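A sketch of creating a data client with service account credentials along the lines described above; the key path, project and instance names are placeholders, and ctx is an assumed context.Context:

    jsonKey, err := ioutil.ReadFile("/path/to/json/key.json")
    if err != nil {
        // TODO: handle error.
    }
    config, err := google.JWTConfigFromJSON(jsonKey, bigtable.Scope) // or bigtable.AdminScope, etc.
    if err != nil {
        // TODO: handle error.
    }
    client, err := bigtable.NewClient(ctx, "my-project", "my-instance",
        option.WithTokenSource(config.TokenSource(ctx)))
    if err != nil {
        // TODO: handle error.
    }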
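A sketch of the read paths, assuming client is the *bigtable.Client from the previous example; the table name, row keys and the "links" column family are made up:

    tbl := client.Open("mytable")

    // Read all rows whose keys begin with a prefix, limited to one column family.
    err := tbl.ReadRows(ctx, bigtable.PrefixRange("com.google."),
        func(r bigtable.Row) bool {
            // TODO: use r.
            return true // return false to stop scanning
        },
        bigtable.RowFilter(bigtable.FamilyFilter("links")))
    if err != nil {
        // TODO: handle error.
    }

    // Read a single row.
    row, err := tbl.ReadRow(ctx, "com.google.cloud")
    if err != nil {
        // TODO: handle error.
    }
    _ = row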
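Sketches of the two write forms, reusing tbl from above; the column family and column names are invented for illustration:

    // Idempotent write: set a couple of cells.
    mut := bigtable.NewMutation()
    mut.Set("links", "maps.google.com", bigtable.Now(), []byte("1"))
    mut.Set("links", "golang.org", bigtable.Now(), []byte("1"))
    if err := tbl.Apply(ctx, "com.google.cloud", mut); err != nil {
        // TODO: handle error.
    }

    // Non-idempotent write: increment a big-endian encoded value and get the new cell back.
    rmw := bigtable.NewReadModifyWrite()
    rmw.Increment("counters", "visits", 1)
    updated, err := tbl.ApplyReadModifyWrite(ctx, "com.google.cloud", rmw)
    if err != nil {
        // TODO: handle error.
    }
    _ = updated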