com.google.cloud:google-cloud-os-config
This package provides OS management tools that can be used for patch management, patch compliance, and configuration management on VM instances.
Java idiomatic client for Google Cloud Platform services.
Libraries are available on GitHub and Maven Central for developing Java applications that interact with individual Google Cloud services:
If the service is not listed, google-api-java-client interfaces with additional Google Cloud APIs using a legacy REST interface.
When building Java applications, preference should be given to the libraries listed in the table.
Most google-cloud libraries require a project ID. There are multiple ways to specify this project ID.
When using google-cloud libraries from within Compute/App Engine, there's no need to specify a project ID; it is automatically inferred from the production environment.
When using google-cloud libraries elsewhere, you can do one of the following:
Supply the project ID when building the service options. For example, to use Datastore from a project with ID "PROJECT_ID", you can write:
Datastore datastore = DatastoreOptions.newBuilder().setProjectId("PROJECT_ID").build().getService();
Specify the environment variable GOOGLE_CLOUD_PROJECT to be your desired project ID.
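For example, in a Bash-like shell (PROJECT_ID is a placeholder):
export GOOGLE_CLOUD_PROJECT=PROJECT_ID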
Set the project ID using the Google Cloud SDK. To use the SDK, download the SDK if you haven't already, and set the project ID from the command line. For example:
gcloud config set project PROJECT_ID
google-cloud determines the project ID from the following sources in the listed order, stopping once it finds a value:
- the GOOGLE_CLOUD_PROJECT environment variable
- the GOOGLE_APPLICATION_CREDENTIALS environment variable
In cases where the library may expect a project ID explicitly, we provide a helper that can provide the inferred project ID:
import com.google.cloud.ServiceOptions;
...
String projectId = ServiceOptions.getDefaultProjectId();
google-cloud-java uses https://github.com/googleapis/google-auth-library-java to authenticate requests. google-auth-library-java supports a wide range of authentication types; see the project's README and javadoc for more details.
When using Google Cloud libraries from a Google Cloud Platform environment such as Compute Engine, Kubernetes Engine, or App Engine, no additional authentication steps are necessary.
For example:
Storage storage = StorageOptions.getDefaultInstance().getService();
or:
CloudTasksClient cloudTasksClient = CloudTasksClient.create();
When running outside of Google Cloud, you can authenticate using a service account key file. After downloading that key, you must do one of the following:
Define the environment variable GOOGLE_APPLICATION_CREDENTIALS to be the location of the key. For example:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/my/key.json
Supply the JSON credentials file when building the service options. For example:
Storage storage = StorageOptions.newBuilder()
    .setCredentials(ServiceAccountCredentials.fromStream(new FileInputStream("/path/to/my/key.json")))
    .build()
    .getService();
If running locally for development/testing, you can use the Google Cloud SDK. Create Application Default Credentials with gcloud auth application-default login, and then google-cloud will automatically detect such credentials.
If you already have an OAuth2 access token, you can use it to authenticate (notice that in this case, the access token will not be automatically refreshed):
Credentials credentials = GoogleCredentials.create(new AccessToken(accessToken, expirationTime));
Storage storage = StorageOptions.newBuilder()
    .setCredentials(credentials)
    .build()
    .getService();
or:
Credentials credentials = GoogleCredentials.create(new AccessToken(accessToken, expirationTime));
CloudTasksSettings cloudTasksSettings = CloudTasksSettings.newBuilder()
    .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
    .build();
CloudTasksClient cloudTasksClient = CloudTasksClient.create(cloudTasksSettings);
If no credentials are provided, google-cloud will attempt to detect them from the environment using GoogleCredentials.getApplicationDefault(), which will search for Application Default Credentials in the following locations (in order):
- the credentials file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable
- the credentials provided by the gcloud auth application-default login command
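As an illustrative sketch (not from the original README), Application Default Credentials can also be resolved explicitly and passed to a client builder; the result is equivalent to supplying no credentials at all:
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
// Resolve Application Default Credentials explicitly (searches the locations listed above).
GoogleCredentials credentials = GoogleCredentials.getApplicationDefault(); // throws IOException if none are found
Storage storage = StorageOptions.newBuilder()
    .setCredentials(credentials)
    .build()
    .getService();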
Authenticating with API Keys is supported by a handful of Google Cloud APIs.
We are actively exploring ways to improve the API Key experience. Currently, to use an API Key with a Java client library, you need to set the header for the relevant service Client manually.
For example, to set the API Key with the Language service:
public LanguageServiceClient createGrpcClientWithApiKey(String apiKey) throws Exception {
  // Manually set the api key via the header
  Map<String, String> header = new HashMap<String, String>() {{ put("x-goog-api-key", apiKey); }};
  FixedHeaderProvider headerProvider = FixedHeaderProvider.create(header);
  // Create the client
  TransportChannelProvider transportChannelProvider =
      InstantiatingGrpcChannelProvider.newBuilder().setHeaderProvider(headerProvider).build();
  LanguageServiceSettings settings =
      LanguageServiceSettings.newBuilder().setTransportChannelProvider(transportChannelProvider).build();
  LanguageServiceClient client = LanguageServiceClient.create(settings);
  return client;
}
An example instantiation with the Language client using REST:
public LanguageServiceClient createRestClientWithApiKey(String apiKey) throws Exception {
  // Manually set the api key header
  Map<String, String> header = new HashMap<String, String>() {{ put("x-goog-api-key", apiKey); }};
  FixedHeaderProvider headerProvider = FixedHeaderProvider.create(header);
  // Create the client
  TransportChannelProvider transportChannelProvider =
      InstantiatingHttpJsonChannelProvider.newBuilder().setHeaderProvider(headerProvider).build();
  LanguageServiceSettings settings =
      LanguageServiceSettings.newBuilder().setTransportChannelProvider(transportChannelProvider).build();
  LanguageServiceClient client = LanguageServiceClient.create(settings);
  return client;
}
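Either client can then be used like any other Language client. A hedged usage sketch (Document and AnalyzeSentimentResponse come from com.google.cloud.language.v1; the text content is a placeholder):
LanguageServiceClient client = createGrpcClientWithApiKey(apiKey);
Document doc = Document.newBuilder()
    .setContent("Hello, world!")
    .setType(Document.Type.PLAIN_TEXT)
    .build();
// The request is sent with the x-goog-api-key header configured above.
AnalyzeSentimentResponse response = client.analyzeSentiment(doc);
System.out.println(response.getDocumentSentiment().getScore());
client.close();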
To get help, follow the instructions in the Troubleshooting document.
Google Cloud client libraries use HTTPS and gRPC in the underlying communication with the services. In both protocols, you can configure a proxy using the https.proxyHost and (optional) https.proxyPort properties.
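For example, a minimal sketch of setting these properties programmatically before any client is created (the host and port values are placeholders; the same values can also be passed as -D JVM flags):
// Configure the proxy used by the client libraries (placeholder values).
System.setProperty("https.proxyHost", "proxy.example.com");
System.setProperty("https.proxyPort", "3128");
Storage storage = StorageOptions.getDefaultInstance().getService();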
For a more custom proxy with gRPC, you will need to supply a ProxyDetector to the ManagedChannelBuilder:
import com.google.api.core.ApiFunction;
import com.google.api.gax.rpc.TransportChannelProvider;
import com.google.cloud.tasks.v2.CloudTasksClient;
import com.google.cloud.tasks.v2.CloudTasksSettings;
import com.google.cloud.tasks.v2.stub.CloudTasksStubSettings;
import io.grpc.HttpConnectProxiedSocketAddress;
import io.grpc.ManagedChannelBuilder;
import io.grpc.ProxiedSocketAddress;
import io.grpc.ProxyDetector;
import javax.annotation.Nullable;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
public CloudTasksClient getService() throws IOException {
  TransportChannelProvider transportChannelProvider =
      CloudTasksStubSettings.defaultGrpcTransportProviderBuilder()
          .setChannelConfigurator(
              new ApiFunction<ManagedChannelBuilder, ManagedChannelBuilder>() {
                @Override
                public ManagedChannelBuilder apply(ManagedChannelBuilder managedChannelBuilder) {
                  return managedChannelBuilder.proxyDetector(
                      new ProxyDetector() {
                        @Nullable
                        @Override
                        public ProxiedSocketAddress proxyFor(SocketAddress socketAddress)
                            throws IOException {
                          return HttpConnectProxiedSocketAddress.newBuilder()
                              .setUsername(PROXY_USERNAME)
                              .setPassword(PROXY_PASSWORD)
                              .setProxyAddress(new InetSocketAddress(PROXY_HOST, PROXY_PORT))
                              .setTargetAddress((InetSocketAddress) socketAddress)
                              .build();
                        }
                      });
                }
              })
          .build();
  CloudTasksSettings cloudTasksSettings =
      CloudTasksSettings.newBuilder()
          .setTransportChannelProvider(transportChannelProvider)
          .build();
  return CloudTasksClient.create(cloudTasksSettings);
}
Long running operations (LROs) are often used for API calls that are expected to take a long time to complete (e.g. provisioning a GCE instance or a Dataflow pipeline). The initial API call creates an "operation" on the server and returns an Operation ID to track its progress. LRO RPCs have the suffix Async appended to the call name (e.g. clusterControllerClient.createClusterAsync()).
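For illustration only (assuming the Dataproc client and CreateClusterRequest constructed as in the full example further below), the Async variant starts the operation and returns an OperationFuture directly:
// Hedged sketch: createClusterAsync starts the LRO and returns an OperationFuture.
OperationFuture<Cluster, ClusterOperationMetadata> future =
    clusterControllerClient.createClusterAsync(request);
Cluster cluster = future.get(); // blocks until the operation completes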
Our generated clients provide a nice interface for starting the operation and then waiting for the operation to complete. This is accomplished by returning an OperationFuture. When calling get() on the OperationFuture, the client library will poll the operation to check the operation's status.
For example, take a sample createCluster Operation in google-cloud-dataproc v4.20.0:
try (ClusterControllerClient clusterControllerClient = ClusterControllerClient.create()) {
  CreateClusterRequest request =
      CreateClusterRequest.newBuilder()
          .setProjectId("{PROJECT_ID}")
          .setRegion("{REGION}")
          .setCluster(Cluster.newBuilder().build())
          .setRequestId("{REQUEST_ID}")
          .setActionOnFailedPrimaryWorkers(FailureAction.forNumber(0))
          .build();
  OperationFuture<Cluster, ClusterOperationMetadata> future =
      clusterControllerClient.createClusterOperationCallable().futureCall(request);
  // Do something.
  Cluster response = future.get();
} catch (CancellationException e) {
  // Exceeded the default RPC timeout without the Operation completing.
  // Library is no longer polling for the Operation status. Consider
  // increasing the timeout.
}
The polling operations have a default timeout that varies from service to service. The library will throw a java.util.concurrent.CancellationException with the message "Task was cancelled." if the operation exceeds the total timeout. A CancellationException does not mean that the backend GCP Operation was cancelled. This exception is thrown from the client library when it has exceeded the total timeout without receiving a successful status from the operation.
Our client libraries respect the configured values set in the OperationTimedPollAlgorithm for each RPC.
Note: The client library handles the Operation's polling mechanism for you. By default, there is no need to manually poll the status yourself.
Each LRO RPC has a set of pre-configured default values. You can find these values in each Client's StubSettings class; the default LRO settings are initialized inside the initDefaults() method in the nested Builder class.
For example, in google-cloud-aiplatform v3.24.0, the default OperationTimedPollAlgorithm has these default values:
OperationTimedPollAlgorithm.create(
    RetrySettings.newBuilder()
        .setInitialRetryDelay(Duration.ofMillis(5000L))
        .setRetryDelayMultiplier(1.5)
        .setMaxRetryDelay(Duration.ofMillis(45000L))
        .setInitialRpcTimeout(Duration.ZERO)
        .setRpcTimeoutMultiplier(1.0)
        .setMaxRpcTimeout(Duration.ZERO)
        .setTotalTimeout(Duration.ofMillis(300000L))
        .build())
Both retries and LROs share the same RetrySettings class. Note that the RPC timeout values have no use in LROs and can be omitted or set to the default values (Duration.ZERO for the timeouts, 1.0 for the multiplier).
To configure the LRO values, create an OperationTimedPollAlgorithm object and update the RPC's polling algorithm. For example:
ClusterControllerSettings.Builder settingsBuilder = ClusterControllerSettings.newBuilder();
TimedRetryAlgorithm timedRetryAlgorithm = OperationTimedPollAlgorithm.create(
    RetrySettings.newBuilder()
        .setInitialRetryDelay(Duration.ofMillis(500L))
        .setRetryDelayMultiplier(1.5)
        .setMaxRetryDelay(Duration.ofMillis(5000L))
        .setInitialRpcTimeout(Duration.ZERO) // ignored
        .setRpcTimeoutMultiplier(1.0) // ignored
        .setMaxRpcTimeout(Duration.ZERO) // ignored
        .setTotalTimeout(Duration.ofHours(24L)) // set polling timeout to 24 hours
        .build());
settingsBuilder.createClusterOperationSettings()
    .setPollingAlgorithm(timedRetryAlgorithm);
ClusterControllerClient clusterControllerClient = ClusterControllerClient.create(settingsBuilder.build());
Note: The configuration above only modifies the LRO values for the createClusterOperation RPC. The other RPCs in the Client will still use each RPC's pre-configured LRO values.
If you are using more than one Google Cloud client library, we recommend you use one of our Bill of Materials (BOM) artifacts to help manage dependency versions. For more information, see Using the Cloud Client Libraries.
Java 8 or above is required for using the clients in this repository.
Clients in this repository use either HTTP or gRPC for the transport layer. All HTTP-based clients should work in all environments.
For clients that use gRPC, the supported platforms are constrained by the platforms that Forked Tomcat Native supports, which for architectures means only x86_64, and for operating systems means Mac OS X, Windows, and Linux. Additionally, gRPC constrains the use of platforms with threading restrictions.
Thus, the following are not supported:
The following environments should work (among others):
This library provides tools to help write tests for code that uses google-cloud services.
See TESTING to read more about using our testing helpers.
This library follows Semantic Versioning, with some additional qualifications:
Components marked with @BetaApi or @Experimental are considered to be "0.x" features inside a "1.x" library. This means they can change between minor and patch releases in incompatible ways. These features should not be used by any library "B" that itself has consumers, unless the components of library B that use @BetaApi features are also marked with @BetaApi. Features marked as @BetaApi are on a path to eventually become "1.x" features with the marker removed.
Special exception for google-cloud-java: google-cloud-java is allowed to depend on @BetaApi features in gax-java without declaring the consuming code @BetaApi, because gax-java and google-cloud-java move in step with each other. For this reason, gax-java should not be used independently of google-cloud-java.
Components marked with @InternalApi are technically public, but only because of the limitations of Java's access modifiers. For the purposes of semver, they should be considered private.
Interfaces marked with @InternalExtensionOnly are public, but should only be implemented by internal classes. For the purposes of semver, we reserve the right to add to these interfaces without default implementations (for Java 7).
Please note these clients are currently under active development. Any release versioned 0.x.y is subject to backwards incompatible changes at any time.
Libraries defined at a Stable quality level are expected to be stable and all updates in the libraries are guaranteed to be backwards-compatible. Any backwards-incompatible changes will lead to the major version increment (1.x.y -> 2.0.0).
Libraries defined at a Preview quality level are still a work-in-progress and are more likely to get backwards-incompatible updates. Additionally, it's possible for Preview libraries to get deprecated and deleted before ever being promoted to Stable.
If you're using IntelliJ or Eclipse, you can add client libraries to your project using these IDE plugins:
Besides adding client libraries, the plugins provide additional functionality, such as service account key management. Refer to the documentation for each plugin for more details.
These client libraries can be used on the App Engine standard for Java 8 runtime and App Engine flexible (including the Compat runtime). Most of the libraries do not work on the App Engine standard for Java 7 runtime. However, Datastore, Storage, and BigQuery should work.
Contributions to this library are always welcome and highly encouraged.
See google-cloud's CONTRIBUTING documentation and the shared documentation for more information on how to get started.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. See Code of Conduct for more information.
Apache 2.0 - See LICENSE for more information.