
Generic AI client for Kotlin

Easy-to-use generic Kotlin API client for connecting to various AI providers with multiplatform support. Supported AI providers: AI21 Lab, Anthropic, AWS Bedrock, Azure OpenAI, Cerebras, Cohere, DeepSeek, Google Gemini, Grok, Inception Labs, Mistral, Novita, OpenAI, SambaNova, Together AI, Yandex AI Studio.

🛠️ Setup

  • To install the Generic-AI Kotlin client, add this dependency to your build.gradle file:
repositories {
    mavenCentral()
}

dependencies {
    implementation("io.github.bay73:generic-ai:0.6.6")
}

Generic-AI uses the Ktor library for HTTP requests, so you need to include the Ktor client engine corresponding to your platform.
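For example, on the JVM you might add the CIO engine (the engine choice and version here are illustrative; pick the engine and version matching your platform from the Ktor documentation):

```
dependencies {
    // Ktor client engine for the JVM target (version is illustrative)
    implementation("io.ktor:ktor-client-cio:2.3.12")
}
```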

Multiplatform

In multiplatform projects, include the Generic-AI client dependency in commonMain and select a specific Ktor engine for each target.
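A minimal sketch of such a configuration (the source-set names follow the usual Kotlin Multiplatform conventions; engine artifacts and versions are illustrative):

```kotlin
kotlin {
    sourceSets {
        commonMain.dependencies {
            implementation("io.github.bay73:generic-ai:0.6.6")
        }
        jvmMain.dependencies {
            implementation("io.ktor:ktor-client-cio:2.3.12")    // JVM engine
        }
        iosMain.dependencies {
            implementation("io.ktor:ktor-client-darwin:2.3.12") // iOS engine
        }
    }
}
```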

🚀 Basic usage

import com.bay.aiclient.AiClient

suspend fun getResponse() {
    // Create a client for the specified AI provider.
    val client = AiClient.get(AiClient.Type.OPEN_AI) { // Choose provider
        apiKey = "put your API key here"
        defaultModel = "gpt-4o-mini"    // Choose model
    }
    // Start a request to the AI model.
    val job = client.generateText { prompt = "When was the first LLM created?" }
    // Wait for execution and get the response.
    println(job.await().getOrThrow().response)

    // Get the list of available models.
    val models = client.models()
    models.models.forEach { println(it.id) }
}

Usage in Java

The library is designed for use in Kotlin code, so usage from Java is possible but cumbersome. A dedicated AiClientJava wrapper is provided that uses CompletableFuture for asynchronous execution. Here is a code example:

import com.bay.aiclient.AiClient;
import com.bay.aiclient.AiClientJava;
import com.bay.aiclient.domain.GenerateTextRequest;
import com.bay.aiclient.domain.GenerateTextResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import kotlin.Result;

public static void main(String[] args) throws ExecutionException, InterruptedException {
    AiClient.Builder clientBuilder = AiClient.Companion.getBuilder(AiClient.Type.OPEN_AI); // Choose provider
    clientBuilder.setApiKey("put your API key here");
    clientBuilder.setDefaultModel("gpt-4o-mini");       // Choose model
    AiClient client = clientBuilder.build();
    AiClientJava javaClient = new AiClientJava(client);  // Java client uses CompletableFuture

    // Start a request to the AI model.
    GenerateTextRequest.Builder requestBuilder = client.textGenerationRequestBuilder();
    requestBuilder.setPrompt("When was the first LLM released?");
    CompletableFuture<Result<GenerateTextResponse>> response = javaClient.generateText(requestBuilder.build());

    // Wait for execution and get the response.
    response.join();
    if (response.isDone()) {
        System.out.println(response.get());
    }
}
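The CompletableFuture pattern the wrapper relies on is standard Java; here is a self-contained illustration of the same join-then-get flow with no library dependencies (the simulated call is hypothetical):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class FutureDemo {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        // Simulate an asynchronous text-generation call.
        CompletableFuture<String> response =
            CompletableFuture.supplyAsync(() -> "generated text");

        // Wait for execution, then read the completed result.
        response.join();
        if (response.isDone()) {
            System.out.println(response.get());
        }
    }
}
```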

🔧 Generic settings

There is a set of generic settings that can be used with any AI provider to customize model behavior for a specific request.

suspend fun customizeRequest(client: AiClient) {
    val response = client.generateText {
        model = "model_id" // Model id to use for this specific request.
        prompt = "" // User prompt that initiates generation.
        systemInstructions = "" // Additional system instructions to adjust AI behavior.
        responseFormat = ResponseFormat.JSON_OBJECT // Specifies the response format. See details below.
        chatHistory = listOf<TextMessage>() // A list of chat messages in chronological order, representing a conversation between the user and the model.
        maxOutputTokens = 100 // The maximum number of tokens that can be generated as part of the response.
        stopSequences = listOf<String>() // A list of strings that the model uses to stop generation.
        temperature = 0.1 // A non-negative float that tunes the degree of randomness in generation. Lower temperatures mean less random generations.
        topP = 0.5 // An alternative way of controlling the diversity of the model's responses. It's recommended to use either temperature or topP, not both.
    }
}

responseFormat allows specifying plain text, generic JSON, or a validated JSON schema as the return format. Support for this parameter varies by provider:

| Provider         | Text | Generic JSON | JSON schema           |
|------------------|------|--------------|-----------------------|
| AI21 Lab         | ✅   | ✅           |                       |
| Anthropic        | ✅   |              |                       |
| AWS Bedrock      | ✅   |              |                       |
| Azure OpenAI     | ✅   | ✅           | ✅                    |
| Cerebras         | ✅   | ✅           |                       |
| Cohere           | ✅   | ✅           | ✅                    |
| DeepSeek         | ✅   | ✅           |                       |
| Google Gemini    | ✅   | ✅           | ✅                    |
| Grok             | ✅   | ✅           | ✅                    |
| Inception Labs   | ✅   |              |                       |
| Mistral          | ✅   | ✅           |                       |
| Novita           | ✅   | ✅           | ✅                    |
| OpenAI           | ✅   | ✅           | ✅ depending on model |
| SambaNova        | ✅   |              |                       |
| Together AI      | ✅   | ✅           | ✅ depending on model |
| Yandex AI        | ✅   | ✅           | ✅                    |
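For instance, requesting a generic JSON object looks like this (a sketch reusing the request builder shown above; the prompt is illustrative, and the chosen provider must support generic JSON per the table):

```kotlin
suspend fun getJsonResponse(client: AiClient) {
    val job = client.generateText {
        prompt = "List three uses of LLMs as a JSON object."
        responseFormat = ResponseFormat.JSON_OBJECT // Provider must support generic JSON.
        maxOutputTokens = 200
    }
    println(job.await().getOrThrow().response)
}
```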

⚙️ Provider specific settings

Additional generation parameters

Some AI providers have additional settings that can be adjusted. To use them, request a client of the specific provider class:

suspend fun getResponseWithSpecificParameters() {
    // Create a client for the specified AI provider.
    val client = AiClient.get(CohereClient::class) { // Choose provider
        apiKey = "put your API key here"
        defaultModel = "command-r"    // Choose model
    }
    // Start a request to the AI model.
    val response = client.generateText {
        prompt = "When was the first LLM created?"
        seed = 5  // See provider documentation for the usage of specific parameters.
        frequencyPenalty = 0.1
    }
}

AWS Bedrock authentication

AWS Bedrock doesn't support token-based authentication. To use it, you need to pass a special credentials object to the client builder:

fun getBedrockClient() {
    // You can either rely on the AWS DefaultCredentialsProvider chain
    // (https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials-chain.html) ...
    val credentials = BedrockClient.Credentials("region", true)
    // ... or specify the access key and token directly:
    // val credentials = BedrockClient.Credentials("region", false, "accessKeyId", "secretAccessKey", "sessionToken")

    // Create a client using the credentials.
    val client = BedrockClient.Builder(credentials).build()
}

Azure OpenAI connection

Azure OpenAI requires a resource name, which is used as part of the service endpoint. To specify it, use the dedicated client builder, which allows setting the resource name as a string:

fun getAzureOpenAiClient() {
    // Create a client using specific builder
    val client = AzureOpenAiClient.Builder().apply {
        resourceName = "your-resource-name"
        apiKey = "your-api-key"
    }.build()
}

Further usage of this client is the same as for all other AI clients.

Yandex AI Studio connection

Yandex requires specifying a folder, a space where Yandex Cloud resources are created and grouped; it is used as part of the foundation model URI. To specify the folder, use the dedicated client builder, which allows setting the resource folder as a string:

fun getYandexOpenAiClient() {
    // Create a client using the specific builder
    val client = YandexOpenAiClient.Builder().apply {
        resourceFolder = "your-folder-id"
        apiKey = "your-api-key"
    }.build()
}

Further usage of this client is the same as for all other AI clients.

💡 Sample application

A sample multiplatform application using the library is available on GitHub.

📄 License

The Generic AI Kotlin API Client is open-source software licensed under the MIT license. Please note that this is an unofficial library and is not affiliated with or endorsed by any AI provider. Contributions are always welcome!

Package last updated on 31 Jul 2025
