@google-ai/generativelanguage
Generative Language API client for Node.js
A comprehensive list of changes in each version may be found in the CHANGELOG.
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.
npm install @google-ai/generativelanguage
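The generated snippet below relies on Application Default Credentials. If you are calling the PaLM API with an API key instead, a minimal sketch looks like the following; the google-auth-library dependency and the API_KEY environment variable are assumptions, not part of this package:

// Minimal sketch, assuming authentication with an API key stored in the
// API_KEY environment variable (requires the google-auth-library package).
const {TextServiceClient} = require('@google-ai/generativelanguage').v1beta2;
const {GoogleAuth} = require('google-auth-library');

const client = new TextServiceClient({
  authClient: new GoogleAuth().fromAPIKey(process.env.API_KEY),
});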
/**
* This snippet has been automatically generated and should be regarded as a code template only.
* It will require modifications to work.
* It may require correct/in-range values for request initialization.
* TODO(developer): Uncomment these variables before running the sample.
*/
/**
* Required. The model name to use with the format name=models/{model}.
*/
// const model = 'abc123'
/**
* Required. The free-form input text given to the model as a prompt.
* Given a prompt, the model will generate a TextCompletion response it
* predicts as the completion of the input text.
*/
// const prompt = {
// text: 'abc123'
// }
/**
* Controls the randomness of the output.
* Note: The default value varies by model, see the `Model.temperature`
* attribute of the `Model` returned by the `getModel` function.
* Values can range from 0.0 to 1.0,
* inclusive. A value closer to 1.0 will produce responses that are more
* varied and creative, while a value closer to 0.0 will typically result in
* more straightforward responses from the model.
*/
// const temperature = 1234
/**
* Number of generated responses to return.
* This value must be between 1 and 8, inclusive. If unset, this will default
* to 1.
*/
// const candidateCount = 1234
/**
* The maximum number of tokens to include in a candidate.
* If unset, this will default to 64.
*/
// const maxOutputTokens = 1234
/**
* The maximum cumulative probability of tokens to consider when sampling.
* The model uses combined Top-k and nucleus sampling.
* Tokens are sorted based on their assigned probabilities so that only the
* most likely tokens are considered. Top-k sampling directly limits the
* maximum number of tokens to consider, while nucleus sampling limits the
* number of tokens based on the cumulative probability.
* Note: The default value varies by model, see the `Model.top_p`
* attribute of the `Model` returned by the `getModel` function.
*/
// const topP = 1234
/**
* The maximum number of tokens to consider when sampling.
* The model uses combined Top-k and nucleus sampling.
* Top-k sampling considers the set of `top_k` most probable tokens.
* Defaults to 40.
* Note: The default value varies by model, see the `Model.top_k`
* attribute of the `Model` returned by the `getModel` function.
*/
// const topK = 1234
/**
* The set of character sequences (up to 5) that will stop output generation.
* If specified, the API will stop at the first appearance of a stop
* sequence. The stop sequence will not be included as part of the response.
*/
// const stopSequences = ['abc123']
// Imports the Generativelanguage library
const {TextServiceClient} = require('@google-ai/generativelanguage').v1beta2;
// Instantiates a client
const generativelanguageClient = new TextServiceClient();
async function callGenerateText() {
  // Construct request
  const request = {
    model,
    prompt,
  };

  // Run request
  const response = await generativelanguageClient.generateText(request);
  console.log(response);
}

callGenerateText();
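The template above only sends model and prompt. As a concrete illustration, here is a hedged sketch of the same generateText call with the optional tuning parameters filled in; the model name, prompt text, and parameter values are illustrative assumptions:

// A sketch of generateText with the optional parameters documented above.
// The model name and all values below are illustrative assumptions.
const {TextServiceClient} = require('@google-ai/generativelanguage').v1beta2;

const client = new TextServiceClient();

async function callGenerateTextWithOptions() {
  const request = {
    model: 'models/text-bison-001',
    prompt: {text: 'Write a haiku about the ocean.'},
    temperature: 0.7,        // 0.0 to 1.0, inclusive
    candidateCount: 2,       // 1 to 8, inclusive
    maxOutputTokens: 128,
    topP: 0.95,
    topK: 40,
    stopSequences: ['\n\n'],
  };

  // Generated clients resolve with an array; the first element is the response.
  const [response] = await client.generateText(request);
  for (const candidate of response.candidates) {
    console.log(candidate.output);
  }
}

callGenerateTextWithOptions();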
Samples are in the samples/ directory. Each sample's README.md has instructions for running its sample.
Sample | Source Code | Try it |
---|---|---|
Generative_service.batch_embed_contents | source code | |
Generative_service.count_tokens | source code | |
Generative_service.embed_content | source code | |
Generative_service.generate_content | source code | |
Generative_service.stream_generate_content | source code | |
Model_service.get_model | source code | |
Model_service.list_models | source code | |
Discuss_service.count_message_tokens | source code | |
Discuss_service.generate_message | source code | |
Generative_service.batch_embed_contents | source code | |
Generative_service.count_tokens | source code | |
Generative_service.embed_content | source code | |
Generative_service.generate_answer | source code | |
Generative_service.generate_content | source code | |
Generative_service.stream_generate_content | source code | |
Model_service.create_tuned_model | source code | |
Model_service.delete_tuned_model | source code | |
Model_service.get_model | source code | |
Model_service.get_tuned_model | source code | |
Model_service.list_models | source code | |
Model_service.list_tuned_models | source code | |
Model_service.update_tuned_model | source code | |
Permission_service.create_permission | source code | |
Permission_service.delete_permission | source code | |
Permission_service.get_permission | source code | |
Permission_service.list_permissions | source code | |
Permission_service.transfer_ownership | source code | |
Permission_service.update_permission | source code | |
Retriever_service.batch_create_chunks | source code | |
Retriever_service.batch_delete_chunks | source code | |
Retriever_service.batch_update_chunks | source code | |
Retriever_service.create_chunk | source code | |
Retriever_service.create_corpus | source code | |
Retriever_service.create_document | source code | |
Retriever_service.delete_chunk | source code | |
Retriever_service.delete_corpus | source code | |
Retriever_service.delete_document | source code | |
Retriever_service.get_chunk | source code | |
Retriever_service.get_corpus | source code | |
Retriever_service.get_document | source code | |
Retriever_service.list_chunks | source code | |
Retriever_service.list_corpora | source code | |
Retriever_service.list_documents | source code | |
Retriever_service.query_corpus | source code | |
Retriever_service.query_document | source code | |
Retriever_service.update_chunk | source code | |
Retriever_service.update_corpus | source code | |
Retriever_service.update_document | source code | |
Text_service.batch_embed_text | source code | |
Text_service.count_text_tokens | source code | |
Text_service.embed_text | source code | |
Text_service.generate_text | source code | |
Discuss_service.count_message_tokens | source code | |
Discuss_service.generate_message | source code | |
Model_service.get_model | source code | |
Model_service.list_models | source code | |
Text_service.embed_text | source code | |
Text_service.generate_text | source code | |
Discuss_service.count_message_tokens | source code | |
Discuss_service.generate_message | source code | |
Model_service.create_tuned_model | source code | |
Model_service.delete_tuned_model | source code | |
Model_service.get_model | source code | |
Model_service.get_tuned_model | source code | |
Model_service.list_models | source code | |
Model_service.list_tuned_models | source code | |
Model_service.update_tuned_model | source code | |
Permission_service.create_permission | source code | |
Permission_service.delete_permission | source code | |
Permission_service.get_permission | source code | |
Permission_service.list_permissions | source code | |
Permission_service.transfer_ownership | source code | |
Permission_service.update_permission | source code | |
Text_service.batch_embed_text | source code | |
Text_service.count_text_tokens | source code | |
Text_service.embed_text | source code | |
Text_service.generate_text | source code | |
Quickstart | source code |
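The services listed above go beyond text generation. As one example, here is a hedged sketch of an embedText call with the same text service client; the embedding model name is an assumption and may differ from the models available to you:

// A sketch of the EmbedText RPC from the same package; the model name is an
// assumption, not taken from the samples above.
const {TextServiceClient} = require('@google-ai/generativelanguage').v1beta2;

const client = new TextServiceClient();

async function callEmbedText() {
  const [response] = await client.embedText({
    model: 'models/embedding-gecko-001',
    text: 'Hello world',
  });
  console.log(response.embedding.value);
}

callEmbedText();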
The Generative Language API Node.js Client API Reference documentation also contains samples.
Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js. If you are using an end-of-life version of Node.js, we recommend that you update as soon as possible to an actively supported LTS version.
Google's client libraries support legacy versions of Node.js runtimes on a best-efforts basis with the following warnings:
Client libraries targeting some end-of-life versions of Node.js are available, and can be installed through npm dist-tags. The dist-tags follow the naming convention legacy-(version). For example, npm install @google-ai/generativelanguage@legacy-8 installs client libraries for versions compatible with Node.js 8.
This library follows Semantic Versioning.
This library is considered to be in preview. This means it is still a work-in-progress and under active development. Any release is subject to backwards-incompatible changes at any time.
More Information: Google Cloud Platform Launch Stages
Contributions welcome! See the Contributing Guide.
Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in the central templates directory.
Apache Version 2.0
See LICENSE
FAQs
Generative Language API client for Node.js
The npm package @google-ai/generativelanguage receives a total of 27,455 weekly downloads. As such, @google-ai/generativelanguage's popularity was classified as popular.
We found that @google-ai/generativelanguage demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.