Orama Cloud Client
Install
npm i @oramacloud/client
Integrating with Orama Cloud
import { OramaClient } from "@oramacloud/client";
const client = new OramaClient({
endpoint: "<Your Orama Cloud Endpoint>",
api_key: "<Your Orama Cloud API Key>",
});
const results = await client.search({
term: "red leather shoes",
});
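The response exposes the matching documents under `hits`, with the original document on each hit's `document` property (the same fields the React example in this README reads). A minimal sketch of consuming the results; the `count` field and the `title` document property are assumptions for illustration, not the full response type:

```javascript
// Sketch: summarize a search response into { id, title } pairs.
// `hits`, `id`, and `document` match the fields used elsewhere in
// this README; `count` and `title` are hypothetical here.
function summarizeResults(results) {
  return results.hits.map((hit) => ({
    id: hit.id,
    title: hit.document.title,
  }));
}

// Usage with a mocked response object:
const mockResults = {
  count: 1,
  hits: [{ id: "1", score: 0.9, document: { title: "Red leather shoes" } }],
};
console.log(summarizeResults(mockResults));
// [ { id: '1', title: 'Red leather shoes' } ]
```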
Advanced search
const results = await client.search({
term: "red leather shoes",
where: {
price: {
lte: 9.99,
},
gender: "unisex",
},
limit: 5,
offset: 1,
});
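`limit` and `offset` together implement pagination: `limit` caps the number of hits returned and `offset` skips that many hits first. A small helper converting a 1-based page number into an offset; `pageToOffset` is our own name, not part of the client:

```javascript
// Hypothetical helper (not part of @oramacloud/client): derive the
// `offset` to pass to client.search for a 1-based page number and a
// page size equal to `limit`.
function pageToOffset(page, limit) {
  return (page - 1) * limit;
}

// Page 3 with 5 results per page starts after the first 10 hits:
console.log(pageToOffset(3, 5)); // 10
console.log(pageToOffset(1, 5)); // 0
```

You would then call `client.search({ term, limit, offset: pageToOffset(page, limit) })`.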
Generating embeddings via the Secure Proxy
import { OramaProxy } from "@oramacloud/client";
const proxy = new OramaProxy({
api_key: "<Your Orama Secure Proxy API Key>",
});
const embeddings = await proxy.generateEmbeddings(
"red leather shoes",
"openai/text-embedding-ada-002"
);
console.log(embeddings);
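The result is a plain numeric vector (for example, 1536 dimensions for openai/text-embedding-ada-002). A common next step is comparing two such vectors by cosine similarity; the function below is a self-contained sketch of that comparison, not part of the client API:

```javascript
// Cosine similarity between two equal-length numeric vectors, such as
// embeddings returned by generateEmbeddings. Not part of the client.
function cosineSimilarity(a, b) {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing the same way score 1, orthogonal vectors score 0:
console.log(cosineSimilarity([1, 0], [2, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```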
Available models:
orama/gte-small: 384 dimensions, operated by Orama Cloud (preferred)
orama/gte-medium: 768 dimensions, operated by Orama Cloud
orama/gte-large: 1024 dimensions, operated by Orama Cloud
openai/text-embedding-ada-002: 1536 dimensions, proxied to OpenAI
openai/text-embedding-3-small: 1536 dimensions, proxied to OpenAI
openai/text-embedding-3-large: 3072 dimensions, proxied to OpenAI
Generating chat completions via the Secure Proxy
You can generate chat completions via the Secure Proxy in two different ways:
Returning a single string
import { OramaProxy } from "@oramacloud/client";
const proxy = new OramaProxy({
api_key: "<Your Orama Secure Proxy API Key>",
});
const chatParams = {
model: "openai/gpt-4",
messages: [{ role: "user", content: "Who is Michael Scott?" }],
};
const response = await proxy.chat(chatParams);
console.log(response);
Available models:
openai/gpt-4-1106-preview
openai/gpt-4
openai/gpt-3.5-turbo
openai/gpt-3.5-turbo-16k
Returning a stream (via async iterators)
import { OramaProxy } from "@oramacloud/client";
const proxy = new OramaProxy({
api_key: "<Your Orama Secure Proxy API Key>",
});
const chatParams = {
model: "openai/gpt-4",
messages: [{ role: "user", content: "Who is Michael Scott?" }],
};
for await (const message of proxy.chatStream(chatParams)) {
console.log(message);
}
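The stream yields partial messages as they arrive; to reconstruct the full reply, concatenate the chunks. A sketch with a mocked stream standing in for `proxy.chatStream(chatParams)`, assuming each yielded chunk is a string (as the `console.log` above suggests):

```javascript
// Collect a streamed chat reply into a single string. Any async
// iterable of string chunks works, which is how chatStream is
// consumed above via for await.
async function collectStream(stream) {
  let full = "";
  for await (const chunk of stream) {
    full += chunk;
  }
  return full;
}

// Mocked stream in place of proxy.chatStream(chatParams):
async function* mockChatStream() {
  yield "Michael Scott is ";
  yield "a character from The Office.";
}

collectStream(mockChatStream()).then((reply) => console.log(reply));
// "Michael Scott is a character from The Office."
```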
With React
import { OramaCloud, useSearch } from "@oramacloud/client/react";
export function App() {
return (
<OramaCloud
endpoint="<Your Orama Cloud Endpoint>"
apiKey="<Your Orama Cloud API Key>"
>
<Search />
</OramaCloud>
);
}
function Search() {
const { results, error } = useSearch({
term: "red leather shoes",
limit: 10,
offset: 5,
});
return (
<>
{results.hits.map((hit) => (
<div key={hit.id}>
<p> {hit.document.myCustomProperty} </p>
</div>
))}
</>
);
}
With Vue
Import the composable into your component and it's ready to use.
<template>
<li v-for="hit in results?.hits" :key="hit.id">
{{ hit.id }}
</li>
</template>
<script setup>
import { useSearch } from "@oramacloud/client/vue";
const { results } = useSearch({
cloudConfig: {
apiKey: "<Your Orama Cloud API Key>",
endpoint: "<Your Orama Cloud Endpoint>",
},
term: "guitar",
limit: 5
});
</script>