ellma
Easy LLM Assistants
Concepts
To get the best out of ellma, there are some concepts that you should understand.
Adapters
In order to keep this library flexible, while also maintaining reasonable defaults, features that relate to external runtime functionality should be implemented with the adapter pattern. In this library, the adapter pattern consists of 2 main concepts: the interface and the adapter. The interface refers to the internal interface that we will use throughout the codebase. The adapter maps that internal interface to the external interface that a given implementation provides. Some examples of this are:
- The openai endpoint for chat completions (api.openai.com/v1/chat/completions) mapped to the ChatIntegration interface used by chat models.
- The node:readline terminal IO utilities mapped to the IoPeripheral interface used by features that deal with user input and output.
Take a look at the following interface for ChatIntegration.
export type ChatIntegration = {
  chat: (messages: ChatMessage[]) => Promise<ChatMessage>,
}
The interface is meant to be simple for the generic chat model to consume, so it has a single chat property. The chat property is a function that takes an array of ChatMessage objects (the conversation so far) and returns a single ChatMessage object (the reply). The interface for the function that calls /v1/chat/completions, however, is a bit more complicated.
export type OpenAiChatApi = (config: {
  apiKey: string,
  messages: OpenAiChatMessage[],
  model?: string,
  organizationId?: string,
  peripherals?: Partial<Peripherals>,
}) => Promise<OpenAiChatApiResponse>
Not only do we need the messages, but we also need the API key, the preferred model, and more. The function is essentially a raw implementation of the corresponding openai endpoint, and an adapter must be used to map it to the ChatIntegration interface.
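For illustration, here is a minimal sketch of such an adapter. The createChatAdapter name, the mapping helpers, and the message/response shapes below are assumptions made for the sake of the example, not ellma's actual internals.

// A minimal adapter sketch. Everything other than ChatIntegration and
// OpenAiChatApi is assumed for illustration; ellma's internals may differ.
type ChatMessage = { role: string, text: string } // assumed internal shape
type OpenAiChatMessage = { role: string, content: string } // assumed external shape
type OpenAiChatApiResponse = { choices: { message: OpenAiChatMessage }[] } // assumed response shape

const toOpenAiChatMessage = (message: ChatMessage): OpenAiChatMessage => ({
  role: message.role,
  content: message.text,
})

const toChatMessage = (message: OpenAiChatMessage): ChatMessage => ({
  role: message.role,
  text: message.content,
})

// The adapter maps the raw OpenAiChatApi function to the internal
// ChatIntegration interface used by chat models.
const createChatAdapter = (api: OpenAiChatApi, config: { apiKey: string, model?: string }): ChatIntegration => ({
  chat: async (messages) => {
    const response = await api({
      apiKey: config.apiKey,
      messages: messages.map(toOpenAiChatMessage),
      model: config.model,
    })

    return toChatMessage(response.choices[0].message)
  },
})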
Peripherals
Peripherals wrap environment-specific functionality that we use in our models, integrations, or even other peripherals. The io peripheral, described below, is one example.
To better understand this concept, take a look at the io implementation under ./peripherals. The adapter interface is defined by IoAdapter as an object that has two async function properties: read and write. The terminal adapter conforms to that interface, and the useIo peripheral maps the terminal adapter to the IoPeripheral interface. This allows us to use the IoPeripheral interface throughout the codebase without knowledge of the specific implementations that end users might choose. Additionally, we can expose helper functions in the peripheral that utilize the underlying adapter interface without requiring adapters to implement them directly. The prompt function is one example: it uses the write function to output something to a user, followed by the read function to receive user input.
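As a rough sketch of that mapping (the signatures here are assumed; see ./peripherals for the real definitions):

// A sketch of the adapter-to-peripheral mapping described above. The exact
// signatures in ellma may differ; this only illustrates the pattern.
type IoAdapter = {
  read: () => Promise<string>,
  write: (text: string) => Promise<void>,
}

type IoPeripheral = IoAdapter & {
  prompt: (question: string) => Promise<string>,
}

const useIo = (adapter: IoAdapter): IoPeripheral => ({
  ...adapter,
  // A helper built on top of the adapter: write a question, then read the answer.
  prompt: async (question) => {
    await adapter.write(question)

    return adapter.read()
  },
})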
Models
These are the various types of AI models that we can use in our agents. Models are responsible for taking input and producing output. They can be used in a variety of ways, but they are typically used as the higher-level building blocks of an agent. For example, a model might be used to generate a response to a user's input, or it might be used to generate a new piece of content based on a given prompt. The embedding model, described below, is one example.
The embedding model is a special type of model that is used to convert text into a vector representation. This is useful for a variety of tasks, including Q&A on a specific dataset. For example, we can generate vector representations of text with the embedding model and then store those vectors in a database alongside the text they represent. Then, we can generate a vector representation of a given input, query the database for the most similar pieces of text, and include one or more of those results in the prompt for the model.
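Here is a rough sketch of that retrieval flow. The embed function, the in-memory "database", and the similarity metric are assumptions for illustration, not ellma's API.

// A rough sketch of embedding-based retrieval. The `embed` function and the
// in-memory store are assumed for illustration.
type Entry = { text: string, vector: number[] }

declare const embed: (text: string) => Promise<number[]> // assumed embedding model

const cosineSimilarity = (a: number[], b: number[]): number => {
  const dot = a.reduce((sum, value, index) => sum + value * b[index], 0)
  const norm = (vector: number[]) => Math.sqrt(vector.reduce((sum, value) => sum + value * value, 0))

  return dot / (norm(a) * norm(b))
}

// Store vectors alongside the text they represent.
const store = async (texts: string[]): Promise<Entry[]> => {
  return Promise.all(texts.map(async (text) => ({ text, vector: await embed(text) })))
}

// Query for the most similar pieces of text and include them in the prompt.
const buildPrompt = async (question: string, entries: Entry[], limit = 3): Promise<string> => {
  const queryVector = await embed(question)
  const context = [...entries]
    .sort((a, b) => cosineSimilarity(b.vector, queryVector) - cosineSimilarity(a.vector, queryVector))
    .slice(0, limit)
    .map((entry) => entry.text)

  return `Context:\n${context.join('\n')}\n\nQuestion: ${question}`
}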
Integrations
These are the interfaces that allow us to communicate with third-party services. Integrations wrap the functionality of third-party services for use by models or peripherals. Integrations are organized by provider (e.g. openai) and may expose multiple models or peripherals. The interface for an integration is defined by the consumer of the specific implementation. For example, the openai integration exposes a chat function that conforms to the ChatIntegration interface defined by the chat model in ./models.
Example model integrations
Integrations can also be used to wrap peripherals. An integration for firebase, for example, could be used by a custom adapter for the storage peripheral, as sketched below.
Example storage integrations
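As a hedged sketch, a custom storage adapter backed by Firebase's Firestore might look something like this. The StorageAdapter shape is assumed for illustration and is not ellma's actual interface.

// A rough sketch of a firebase-backed storage adapter. The StorageAdapter
// interface here is assumed, not ellma's actual API.
import { initializeApp } from 'firebase/app'
import { doc, getDoc, getFirestore, setDoc } from 'firebase/firestore'

type StorageAdapter = {
  get: (key: string) => Promise<string | undefined>,
  set: (key: string, value: string) => Promise<void>,
}

const firebaseStorage = (config: { projectId: string }): StorageAdapter => {
  const app = initializeApp({ projectId: config.projectId })
  const db = getFirestore(app)

  return {
    // Read a value from a Firestore document keyed by `key`.
    get: async (key) => {
      const snapshot = await getDoc(doc(db, 'storage', key))

      return snapshot.data()?.value
    },
    // Write a value to a Firestore document keyed by `key`.
    set: async (key, value) => {
      await setDoc(doc(db, 'storage', key), { value })
    },
  }
}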
Use ellma
Install it with your preferred package manager.
# npm
npm i ellma
# pnpm
pnpm add ellma
# yarn
yarn add ellma
Import (or create) an integration, and use it to initialize a model. Use the model to generate output.
import { useChat } from 'ellma'
import { openai } from 'ellma/integrations'
const integration = openai({ apiKey: 'your-private-api-key' })
const { factory, model } = useChat({ integration })
const greeting = factory.human({ text: 'Good morning!' })
const reply = await model.generate(greeting)
console.log(reply.text) // 'Good morning! How may I assist you today?'
For more examples, check out the playground directory.
Contribute to ellma
Things are still changing, but I recommend you read through the "Concepts" section above before you get started.
Clone the repo to your machine.
git clone git@github.com:davidmyersdev/ellma.git
Install dependencies with pnpm.
# ~/path/to/ellma
pnpm i
Create your .env file.
# ~/path/to/ellma
cp .env.example .env
Add your OpenAI API key and (optionally) add your organization and user keys if you have them.
# ~/path/to/ellma/.env
VITE_OPENAI_API_KEY=your-api-key
# The rest are optional.
VITE_OPENAI_ORGANIZATION_ID=
VITE_OPENAI_USER_ID=
Run a playground example with pnpm vite-node ./playground/<example>.ts. To try out the basic chat implementation, run the following.
# ~/path/to/ellma
pnpm vite-node ./playground/chat-basic.ts