
> [!TIP]
> ai.matey works well with ai.captain
To help work with Chrome's experimental `window.ai` API, this package provides:

- Documentation for the `window.ai` API
- A mock implementation of `window.ai` and its sub-modules that can be used for testing
- Multiple API-compatible clients that mirror `window.ai`. They can be used as drop-in replacements for `window.ai` or as standalone clients.
Documentation for the `window.ai` API is provided here: https://github.com/johnhenry/ai.matey/blob/main/docs/api.md
To use the mock implementation, import the mock from `ai.matey/mock`.

Import directly from a CDN:

```js
import ai from "https://cdn.jsdelivr.net/npm/ai.matey@0.0.42/mock/index.mjs";
// OR "https://ga.jspm.io/npm:ai.matey@0.0.42/mock/index.mjs"
```
Or install via npm:

```sh
npm install ai.matey
```

```js
import ai from "ai.matey/mock";
//...
```

Usage:

```js
import ai from "ai.matey/mock";

const model = await ai.languageModel.create();
const poem = await model.prompt("write a poem");
console.log(poem);
```
To use the polyfill implementation, import the polyfill from `ai.matey/mock/polyfill`. This will automatically detect whether the `window.ai` object is already defined and will not overwrite it.

```js
import "ai.matey/mock/polyfill";

const model = await window.ai.languageModel.create();
const poem = await model.prompt("write a poem");
console.log(poem);
```
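Conceptually, the non-overwriting polyfill behaves like the following sketch (an illustration of the documented detect-and-skip behavior, not the package's actual source):

```js
import ai from "ai.matey/mock";

// Only install the mock when window.ai is not already present.
if (typeof window.ai === "undefined") {
  window.ai = ai;
}
```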
To overwrite the `window.ai` object, import the polyfill from `ai.matey/mock/polyfill-overwrite`:

```js
import "ai.matey/mock/polyfill-overwrite";

const model = await window.ai.languageModel.create();
const poem = await model.prompt("write a poem");
console.log(poem);
```
Use the OpenAI-compatible clients standalone, or as drop-in replacements for `window.ai`. Note that, unlike the mock implementation, these require instantiation.
Import the clients directly from the CDN:

```js
import Ollama from "https://cdn.jsdelivr.net/npm/ai.matey@0.0.42/ollama/index.mjs";
// OR "https://ga.jspm.io/npm:ai.matey@0.0.42/ollama/index.mjs"

const ai = new Ollama();
```

Or install via npm:

```sh
npm install ai.matey
```

```js
import Ollama from "ai.matey/ollama";
```
To use a client, import it from `ai.matey/<client name>`. Each client is pre-configured with a default endpoint and model that can be overridden.
| Client | Default Endpoint | Default Model | OpenAI API | CORS Compatible |
| --- | --- | --- | --- | --- |
| anthropic | https://api.anthropic.com | claude-3-opus-20240229 | x | ✅ |
| deepseek | https://api.deepseek.com | deepseek-chat | ✅ | ? |
| gemini | https://generativelanguage.googleapis.com | gemini-2.0-flash-exp | x | ✅ |
| groq | https://api.groq.com/openai | llama3-8b-8192 | ✅ | ✅ |
| huggingface | https://api-inference.huggingface.co | mistralai/Mixtral-8x7B-Instruct-v0.1 | x | ✅ |
| lmstudio | http://localhost:1234 | gemma-3-1b-it-qat | ✅ | ? |
| mistral | https://api.mistral.ai | mistral-small-latest | ✅ | ✅ |
| nvidia | https://integrate.api.nvidia.com | meta/llama-3.1-8b-instruct | ✅ | x |
| ollama | http://localhost:11434 | llama3.2:latest | ✅ | ✅ |
| openai | https://api.openai.com | gpt-4o-mini | ✅ | ✅ |
Except for the Ollama and LM Studio clients, you must provide a `credentials` object with a valid `apiKey` property in the constructor's settings object.
```js
import Client from "ai.matey/<client name>";

const ai = new Client({
  endpoint: "<endpoint>", // optional
  model: "<model>", // optional
  credentials: {
    apiKey: "<api key>", // required, except for the Ollama and LM Studio clients
  },
});
```
```js
import Ollama from "ai.matey/ollama";

// Instantiate with default options
const ai = new Ollama();

// Use the newly created `ai` object as you would `window.ai`
const model = await ai.languageModel.create();
const poem = await model.prompt("write a poem");
console.log(poem);
```
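The default endpoint and model from the table above can be overridden at construction. A minimal sketch (the endpoint and model values here are illustrative placeholders, not package defaults):

```js
import Ollama from "ai.matey/ollama";

// Point the client at a remote Ollama host and pick a specific model.
// (Host and model are hypothetical values for illustration.)
const ai = new Ollama({
  endpoint: "http://my-ollama-host:11434",
  model: "llama3.2:latest",
});

const model = await ai.languageModel.create();
console.log(await model.prompt("write a poem"));
```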
```js
import OpenAI from "ai.matey/openai";

const ai = new OpenAI({ credentials: { apiKey: "<OPEN_AI_API_KEY>" } }); // use default endpoint
```
The library also provides a `createClient` function that can be used to create any of the clients by name.
```js
import createClient from "ai.matey";

const ai = createClient("openai", {
  credentials: { apiKey: "<OPEN_AI_API_KEY>" },
}); // use default endpoint
```
There are some differences between the client implementations and the base `window.ai` object:

- The `window.ai` object is a singleton, while the clients are not.
- `ai.<module>.create()` takes additional options:
  - `maxHistorySize` - the maximum number of messages to keep in the conversation history (default: `0`; `-1` denotes no limit). See the sketch after this list.
- `<model>.chat()` simulates OpenAI `chat.completions.create` requests and responses. A module, `ai.matey/window.ai.chat`, implements this chat interface atop `window.ai`. See the sketch after this list.
- `<model>.$` and `<model>.$$` are proxies used to invoke the "anymethod" pattern, where any method name can be used to query the model.
  - `$` methods are asynchronous -- returning a promise fulfilled with the result: `const poem = await model.$.write_a_poem()`
  - `$$` methods are streaming -- returning an async iterator yielding the result: `for await (const chunk of model.$$.write_a_poem()) { console.log(chunk); }`
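A minimal sketch of the `maxHistorySize` option and the chat interface described above. The README says `chat()` simulates OpenAI `chat.completions.create` requests and responses, so an OpenAI-style messages array and response shape are assumed here; the exact argument shape is an assumption, not confirmed API:

```js
import Ollama from "ai.matey/ollama";

const ai = new Ollama();

// Keep at most 10 messages of conversation history (-1 would mean no limit).
const model = await ai.languageModel.create({ maxHistorySize: 10 });

// chat() mimics an OpenAI chat.completions.create request/response,
// so the request and response shapes below follow that convention.
const response = await model.chat({
  messages: [{ role: "user", content: "write a poem" }],
});
console.log(response.choices[0].message.content);
```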
Check out the playground in this repository. Run a static server (`npx live-server .`) and navigate to `playground.html`.