dollm

A module for interacting with large language models. Concepts:

  • use shard to define response characteristics.
  • define a character with shards.
  • trigger custom actions with parser.
  • support custom actions in shard.
  • on-demand shard and character loading.
  • connect to different LLM providers with adapter.
  • manage conversations with thread.
  • hot-plug / switch shard, character, and adapter in a thread.
  • @plotdb/block-based chatroom module with headless logic for quick frontend integration.

Usage

Install with npm (TODO):

npm install --save dollm

Load the required lib files:

<script src="path-to/index.min.js"></script>

API

  • context(ctx): configure dependencies. Accepted dependencies:

    • fetch: browser fetch or an equivalent runtime (e.g., Node.js) counterpart.
      • sample usage for local dev (with rejectUnauthorized set to false):

        require! <[axios https]>
        dollm.context(fetch: axios.create httpsAgent: new https.Agent rejectUnauthorized: false)
        
  • send(opt): request an LLM response.

    • the parameter is an object with the following options:
      • messages: array of message objects.
      • proc: callback for text update events.
      • forward: response object for forwarding a remote response. optional.
      • controller: for aborting the request. optional.
      • adapter: adapter object or constructor options providing model source information.
      • fetch: (deprecated) alternative fetch API (e.g., axios). optional.
        • this is deprecated. Use dollm.context instead.
    • returns a Promise resolving with a {content: '...'} object.
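The streaming contract of send can be sketched with a mock: proc receives the accumulated text as chunks arrive, and the Promise resolves with a {content} object once streaming finishes. This is a hypothetical illustration of the contract described above, not dollm's actual implementation.

```javascript
// Hypothetical sketch of the send() contract -- not dollm internals.
// proc is called with the accumulated text on each update; the returned
// Promise resolves with {content} when streaming completes.
function mockSend({ messages, proc }) {
  const chunks = ["Hel", "lo", "!"]; // stand-in for streamed LLM chunks
  let content = "";
  for (const chunk of chunks) {
    content += chunk;
    proc(content); // text update event
  }
  return Promise.resolve({ content });
}

mockSend({
  messages: [{ role: "user", content: "hi" }],
  proc: (text) => {},
}).then(({ content }) => console.log(content)); // prints "Hello!"
```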

dollm also works with the following member classes. For example:

adapter = dollm.adapter.from.ollama do
  model: \gemma3:12b, url: \https://dollm.loco/api/ollama/chat

character = new dollm.character do
  name: "Singer Bot"
  system: "You are a bot that always reply with a song. (also with musical emoji)"

thread = new dollm.thread!
thread.init {character, adapter}
  .then -> thread.send {message: "Hi there?", proc: (->)}
  .then -> console.log it

which uses

  • character to define system prompt
  • adapter to define LLM provider
  • thread to maintain the chat session

Beyond this minimal example, you can also use:

  • shard to extend a given character
  • parser to trigger custom actions based on pattern in response
  • manager to specify how resources such as character and shard are loaded

See below for more information.

dollm.adapter

An adapter specifies how dollm connects to a given LLM provider. dollm.adapter also provides factory functions for common providers such as Grok, OpenAI, Claude, or Ollama running in a local environment.

To manually create an adapter:

apt = new dollm.adapter(opt)

where opt is an object with the following fields:

  • provider: provider name. For informational purposes only.
  • name: adapter name. For informational purposes only.
  • model: model name, usually a unique identifier (such as gpt-5-mini) provided by the provider.
  • dummy: true if this is a dummy adapter (a fake or placeholder adapter).
  • url: API endpoint.
  • opt: additional options for API requests. Possible fields:
    • stream: whether the request should be streamed. default true.
    • headers: key/value hash of additional headers passed along with the API request.
  • corsRequired: whether CORS is required when using this adapter. default false.
  • payload(payload): optional request payload mutator function.
    • receives the payload that is about to be sent to the specified API endpoint.
    • should return a payload based on the given parameter.
  • parse({buf, content}): an optional parsing function called when an input message is received.
    • should return the parsed text based on the given buf and content values.
    • see google in src/adapter.ls for sample usage.
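A payload mutator, for instance, receives the outgoing request payload and returns an adjusted copy. The sketch below is an assumption for illustration; the temperature field is made up and not part of dollm's documented payload shape.

```javascript
// Hypothetical payload mutator: returns an adjusted copy of the outgoing
// payload without mutating the original (temperature is an assumed field).
const payloadMutator = (payload) => ({ ...payload, temperature: 0.2 });

const outgoing = {
  model: "gemma3:12b",
  messages: [{ role: "user", content: "hi" }],
};
const mutated = payloadMutator(outgoing);
console.log(mutated.temperature);  // 0.2
console.log(outgoing.temperature); // undefined (original left untouched)
```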

While you can construct an adapter manually, you can also use the factory functions under dollm.adapter.from:

  • dummy({model})
    • dummy of the returned adapter will be set to true when using this factory.
  • ollama({model, url}):
    • url: optional, defaults to http://localhost:11434/api/chat
  • openai({model, apikey, temperature, max_tokens})
    • model defaults to gpt-4o-mini if omitted
  • perplexity({model, apikey, temperature, max_tokens})
    • model defaults to sonar if omitted
  • deepseek({model, apikey, temperature, max_tokens})
    • model defaults to deepseek-chat if omitted
  • xai({model, apikey, temperature, max_tokens})
    • model defaults to grok-beta if omitted
  • google({model, apikey, temperature, max_tokens, api_version})
    • model defaults to gemini-1.5-flash-8b if omitted.
    • api_version defaults to v1 if omitted
  • anthropic({model, apikey, temperature, max_tokens})
    • model defaults to claude-3-haiku-20240307 if omitted
  • proxy({from, route, .... })
    • this factory constructs an adapter that goes through a local API serving as a proxy for other providers. It chooses a provider based on the from option, which points to one of the factories above, and makes requests through route with all additional options passed on.
    • options:
      • from: one of the names available in dollm.adapter.from
      • route: local proxy API endpoint. /api/chat/proxy if omitted.
    • side notes:
      • model defaults to llama3.1:8b if omitted.
      • "(via proxy)" will be appended to the provider information.

dollm.character

dollm.character controls the characteristics of a chatbot by tweaking the system prompt. Used along with dollm.thread. Usage:

thread = new dollm.thread(...)
ch = new dollm.character(opt)
ch.init!then -> thread.character(ch)

alternatively:

thread = new dollm.thread(...)
thread.init {
  character: new dollm.character(opt)
  adapter: new dollm.adapter(...)
}
  .then -> ...

where opt is an object with the following fields:

  • id: ID of this character.
  • name: name of this character.
  • system: text to use as the system prompt. "you are a general chatbot." if omitted.
  • url: optional. if provided, init fetches the system prompt from this URL, expecting a plain text file.
  • manager: optional. dollm manager for fetching shards.
    • required only when the character needs to look up shards that are not yet available.
  • adapter: optional. the adapter to be used along with this character.
    • can be an adapter instance or an options object for the constructor.
  • shards: an array of shards (either instances of dollm.shard or constructor options) for this character.

Object API:

  • init(): initialize this character.
    • returns a Promise which resolves when initialized.
  • id(): return the id of this character.
  • name(): return the name of this character.
  • system(prompt): return the system prompt of this character.
    • when prompt is provided, replace the current prompt with it.
  • adapter(): return the adapter of this character (may not exist).
  • adapt(o): set the adapter of this character.
    • o: an instance of dollm.adapter or an options object for the constructor.
  • adopt(s): adopt the given shard (an instance of dollm.shard, or an options object for its constructor).
    • returns a Promise which resolves when the shard is initialized.
  • discard(s): discard the given shard from this character.
  • shards(): return the list of shards used by this character.
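The adopt/discard lifecycle and the way shard prompts extend a character's system prompt can be sketched as follows. This is a minimal mock under assumed semantics, not the real dollm.character; the effectivePrompt helper and the haiku shard are made up for illustration.

```javascript
// Minimal mock of adopt()/discard() and system-prompt composition
// (assumed semantics; the real dollm.character differs).
const character = {
  system: "you are a general chatbot.",
  shards: [],
  adopt(s) { this.shards.push(s); return Promise.resolve(); },
  discard(s) { this.shards = this.shards.filter((x) => x !== s); },
  // hypothetical helper: base prompt extended by each shard's prompt
  effectivePrompt() {
    return [this.system, ...this.shards.map((s) => s.prompt)].join("\n");
  },
};

const haiku = { id: "haiku", prompt: "always answer in haiku." };
character.adopt(haiku).then(() => {
  console.log(character.effectivePrompt());
  // you are a general chatbot.
  // always answer in haiku.
});
```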

dollm.shard

Similar to character, but a smaller fragment of a characteristic, serving as a plugin for a character. Used along with dollm.character. Usage:

shard = new dollm.shard(opt)
shard.init!then -> ...

where opt is an object with following fields:

  • id: id of this shard.
  • name: optional. name of this shard. auto-generated from the prompt if omitted.
  • url: optional. if provided, init fetches the prompt from this URL, expecting a plain text file.
  • get: optional. a function returning (a Promise of) an (array of) text to use as the prompt for this shard.
  • actions: for action execution (TBD)
  • prompt: prompt(s) of this shard. can be a string or an array of strings.

dollm.thread

Chat thread controller. It manages the chat messages, LLM characteristics, and provider information. Usage:

thread = new dollm.thread(...)
thread.adapt(someAdapter)
thread.send(...).then ({content}) ->

You can give the LLM some character:

thread = new dollm.thread(...)
thread.adapt(someAdapter)
thread.character(someCharacter)
  .then -> thread.send(...)
  .then ({content}) ->

or with init helper function:

thread = new dollm.thread(...)
thread.init {character, adapter}
  .then -> thread.send(...)
  .then -> ...

Constructor options:

  • name: the name of this thread. default: general thread.
  • id: the id of this thread.
    • when omitted, suuid is used to generate a random one if available; otherwise Math.random is used.
  • stateless: default false. indicates whether this thread is stateless.
    • if true, all chat history is discarded; every message is considered the beginning of a new chat.
  • proc: TBD
  • opt: TBD
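The effect of the stateless option on the outgoing message list can be sketched like this. The buildMessages helper is an assumption about internals, for illustration only.

```javascript
// Sketch of stateless vs stateful message assembly (assumed internals).
function buildMessages(history, message, stateless) {
  return stateless
    ? [{ role: "user", content: message }]              // fresh chat every time
    : [...history, { role: "user", content: message }]; // full history kept
}

const history = [
  { role: "user", content: "hi" },
  { role: "assistant", content: "hello" },
];
console.log(buildMessages(history, "next", true).length);  // 1
console.log(buildMessages(history, "next", false).length); // 3
```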

Class method:

  • factory({character, adapter}): construct and initialize a thread with given options.
    • return a promise which resolves with the initialized thread object.

API:

  • on(name, cb)
  • fire(name, value)
  • name(): return the name of this thread
  • model(): return the model name used by this thread.
  • init(opt): initialize with the given options opt.
    • return a Promise which resolves when initialized.
    • options:
      • character: character to use. Either an instance of dollm.character or the object for constructor.
      • adapter: adapter to use. Either an instance of dollm.adapter or the object for constructor.
  • character(c): set the character to use.
    • parameter is either an instance of dollm.character or the object for constructor.
    • return a Promise which resolves when the character is initialized.
  • adapt(o): set the adapter to be used in this thread.
    • the parameter is either an instance of dollm.adapter or an options object for the constructor.
    • the adapter from the character will be used if the given adapter is a dummy adapter.
  • adapter(): return the adapter currently in use.
  • send(o):
    • returns a Promise which resolves with an object containing the response from the LLM:
      • send in turn calls dollm.send and returns its return value.
    • options:
      • message: text message to be sent to the LLM.
      • transient: the message will be stored as a truncated short text after the LLM responds.
      • proc: see dollm.send
      • forward: see dollm.send
      • fetch: see dollm.send
  • abort(): immediately stop the current transaction. fires an aborted event.
  • reset(): reset the conversation.
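Since dollm.send accepts a controller option, abort() presumably cancels the in-flight request through an AbortController signal. A minimal standalone sketch of that mechanism (an assumption, not dollm's actual code):

```javascript
// Sketch: cancelling an in-flight request via AbortController, which is
// the standard mechanism behind a `controller` option.
const controller = new AbortController();
controller.signal.addEventListener("abort", () => {
  console.log("aborted"); // where a thread would fire its aborted event
});
controller.abort();
console.log(controller.signal.aborted); // true
```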

dollm.manager

For loading and storing characters, shards and adapters. Usage:

new dollm.manager { ... }

where the constructor options:

  • apikey: an object storing apikeys as provider / key pairs.

APIs:

  • builtin(): return builtin (i.e., available) resources as a categorized object keyed by name, such as

    {
        characters: { ... }
        shards: { ... }
        adapters: { ... }
    }
    
  • load(opt): load a bundle of resources.

    • returns a Promise which resolves when loaded.
    • options:
      • url: URL to fetch. expects a JSON object with optional characters, shards, and adapters fields,
        • each a list of constructor options for the given type.
  • shard({id}): request the shard with the given id.

    • returns a Promise which resolves with the requested shard, or rejects if no such shard is available.
  • character({id}): request the character with the given id.

    • returns a Promise which resolves with the requested character, or rejects if no such character is available.
  • adapter({id}): request the adapter with the given id.

    • returns a Promise which resolves with the requested adapter, or rejects if no such adapter is available.
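The load/lookup behavior above amounts to a registry keyed by id. The following is an assumed sketch of that shape, not dollm.manager's real internals; the haiku shard is made up for illustration.

```javascript
// Sketch of a manager-style registry keyed by id (assumed internals).
const registry = { characters: {}, shards: {}, adapters: {} };

function load({ characters = [], shards = [], adapters = [] }) {
  for (const c of characters) registry.characters[c.id] = c;
  for (const s of shards) registry.shards[s.id] = s;
  for (const a of adapters) registry.adapters[a.id] = a;
  return Promise.resolve();
}

function shard({ id }) {
  return registry.shards[id]
    ? Promise.resolve(registry.shards[id])
    : Promise.reject(new Error(`no such shard: ${id}`)); // reject when missing
}

load({ shards: [{ id: "haiku", prompt: "always answer in haiku." }] })
  .then(() => shard({ id: "haiku" }))
  .then((s) => console.log(s.id)); // prints "haiku"
```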

dollm.parser

Note: dollm.parser is under development. Its design and interface are subject to change.

dollm.parser provides a way to parse returned content for programmatic purposes. It usually comes along with a series of actions that can be identified after parsing the LLM's response. For example, here is a sample response:

Let me generate a random number for you:

```run {method: "getRandomNumber", params: []}```

In this example, we expect our parser to identify the whole content in the run quote and parse the inner content, in this case as JSON. The parser then looks up a proper handler, an action, to take care of it.

Usually dollm.parser is not used directly, but is depended on by actions through their host field and is loaded indirectly. Furthermore, dollm natively supports a parser with 3 quote marks followed by the name exec, such as:

```exec
{action: "some action name"}
```

and this default parser will be used for actions that don't specify a parser.

Usage:

parser = new dollm.parser( ... )

# alternatively, get default parser from factory
parser = dollm.parser.factory!

constructor options:

  • id: parser id.
  • proc({ctx}): optional. mutator of the given context object.
    • should return the modified version of the given ctx object
    • ctx: an object storing chat information, including:
      • content: content of the current message.
      • idx: index of the current message.
      • data: parsed data in matchStart, if any.
      • ext: the object {id, result, feedback} where
        • id: the id of the given parser. (TODO: rename to parser with bid?)
        • result: not available here, but will store the result parsed by the action. can be an object.
        • feedback: not available here, but will store the action result for the LLM to read. should be a string.
      • error: the error message if an error occurred during parsing.
  • render({ctx, node}): optional. renderer for the given ctx object. parameters:
    • ctx: the context object, as described in proc({ctx}).
    • node: the DOM node into which this context object is rendered.
  • matchStart(text): detect whether new action data is starting, based on the current ongoing input text.
    • if not detected, should return null. otherwise, should return an object with the following fields:
      • index: position of the beginning of the matched text.
      • len: length of the matched text.
      • data: optional. additional data parsed from the start pattern.
  • matchEnd(text): detect whether the new action data ends.
    • if not detected, should return null. otherwise, should return an object with the following fields:
      • index: position of the beginning of the matched text.
      • len: length of the matched text.
      • stop: optional. abort the ongoing message if true.
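A matchStart/matchEnd pair for the built-in exec fence could look like the sketch below. This is an assumption about the default parser for illustration only; the fence strings are built with repeat so the example stays self-contained.

```javascript
// Sketch of matchStart/matchEnd for the triple-backquote exec fence
// (assumed behavior of the default parser, for illustration only).
const OPEN = "`".repeat(3) + "exec\n"; // opening mark: ```exec + newline
const CLOSE = "\n" + "`".repeat(3);    // closing mark: newline + ```

function matchStart(text) {
  const i = text.indexOf(OPEN);
  return i < 0 ? null : { index: i, len: OPEN.length };
}
function matchEnd(text) {
  const i = text.indexOf(CLOSE);
  return i < 0 ? null : { index: i, len: CLOSE.length };
}

const streamed = "sure:\n" + OPEN + '{"action": "roll"}' + CLOSE + " done";
const start = matchStart(streamed);
console.log(start.index); // 6
const rest = streamed.slice(start.index + start.len);
const end = matchEnd(rest);
console.log(rest.slice(0, end.index)); // {"action": "roll"}
```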

Class Methods:

  • factory(opt): get prebuilt parsers. opt is an object with following fields:
    • name: parser name. default if omitted.
      • for now, only default is supported.

APIs:

  • id(): return the id of this parser.
  • add(actions): add additional action(s). parameter is an action or a list of actions.
  • remove(actions): remove specified action(s). parameter is an action or a list of actions.
  • clear(): remove all actions.
  • action(id): return the action with id id; throws an Error with lderror id 404 if not found.
  • proc({ctx}): mutate the given ctx object.
  • render({ctx, node}): render on the node based on the given ctx object.

action object

An action is a plain object with the following fields:

  • host: the parser this action depends on.
  • id: the id of this action.
  • opt: optional. additional customizable object which is passed into the action's proc() function.
  • proc({ctx, opt}): process the given ctx object.
    • should return an object {result, feedback}, where result and feedback are described in the parser's constructor options.
    • ctx: context object described above.
    • opt: the action's opt field described above.
  • render({ctx, node}): render on the node based on the given ctx object.
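A hypothetical action for the getRandomNumber example earlier might look like this. The field names follow the list above; the fixed value is a deterministic stand-in so the sketch stays reproducible.

```javascript
// Hypothetical action object: proc returns {result, feedback}, where result
// is for the program and feedback is text for the LLM to read.
const getRandomNumber = {
  id: "getRandomNumber",
  opt: { max: 6 }, // assumed customizable option, passed into proc
  proc({ ctx, opt }) {
    const value = 4; // deterministic stand-in for a random roll up to opt.max
    return { result: { value }, feedback: `rolled ${value} (max ${opt.max})` };
  },
};

const { result, feedback } = getRandomNumber.proc({
  ctx: {},
  opt: getRandomNumber.opt,
});
console.log(result.value); // 4
console.log(feedback);     // "rolled 4 (max 6)"
```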

headless block

dollm also provides a @plotdb/block-based headless block for quickly constructing a user interface. A demo block is also available as a reference for how to use it and what to expect from it.

Usage (demo block):

mgr = block.manager(...)
mgr.from {name: "dollm", path: "block/fancy"}, {root: ..., data: ...} .then (ret) ->
  ret.interface.adapt!
  ...

block/fancy is for demo purposes only. Since the base block is headless, you have to implement a workable block yourself. Check src/block/fancy for a reference on how to implement a block with the headless block.

Additionally, please note that JS in the headless block runs in its own context, and things may break if you also load dollm independently; e.g., instanceof will return false when comparing an object you constructed with a constructor in the headless block. To solve this issue, use manager.scope.load or block extending to align libraries between both contexts.

Block construction data (the data object passed in the mgr.from call):

  • threadOpt: optional. options object for constructing the thread.

Interface API:

  • adapt(opt): configure the block with additional resources and the desired adapter and character. options:
    • opt is an object with the following fields:
      • mgr: optional. an instance of dollm.manager.
      • actions: optional. a list of actions.
      • adapter: optional. dollm.adapter instance or constructor options.
      • character: optional. dollm.character instance or constructor options.
    • either call adapt explicitly or fire a dollm:adapt event (described below) to complete block initialization.
  • thread(): return the thread used by this block
  • reset(): reset conversation

Block Events:

  • dollm:adapt: for a child block to invoke the base block's adapt directly. event options:
    • see the parameters of the adapt function in the Interface API above.

Run in Shell

Install dollm globally to run it as a shell command, or use npx dollm:

npm install -g dollm
npx dollm

which reads the following environment variables to determine the model you want to use:

DOLLM_PROFILE             # DEFAULT if omitted
DOLLM_<PROFILE>_PROVIDER  # provider name. check `dollm.adapter` section for available providers
DOLLM_<PROFILE>_MODEL     # model name
DOLLM_<PROFILE>_URL       # api endpoint for the given provider. optional
DOLLM_<PROFILE>_APIKEY    # api key. optional
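For example, a profile pointing dollm at a locally running Ollama could be configured as below. The profile name and values here are assumptions for illustration, following the variable pattern above.

```shell
# Hypothetical LOCAL profile for a local ollama instance (values assumed).
export DOLLM_PROFILE=LOCAL
export DOLLM_LOCAL_PROVIDER=ollama
export DOLLM_LOCAL_MODEL=llama3.1:8b
export DOLLM_LOCAL_URL=http://localhost:11434/api/chat
```

After setting the profile, `npx dollm` would pick up these variables.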

Discussion

dollm provides a means to access LLMs programmatically with controllable characteristics; however, advanced topics such as prompt engineering / management, LLM orchestration, etc. may also be important and applicable to this module.

Following are in the roadmap:

  • manage multiple threads with scene
  • customize scene with orcha.
    • orcha can be recursive, controlling other orchas
    • orchas are responsible for managing messages in every thread in every scene.
  • we can still consider using shard and character to construct scenes or orchas.

A sample usage configuration:

  • orcha scene:
    • characters
      • thread A: Coder (character with coding shard and claude sonnet 4 adapter)
      • thread B: QA ( character with playwright MCP shard and gemini 2.5 flash adapter )
      • thread C: Summarizer
      • thread D: user
    • script:
      • start with a user prompt sent to Coder
      • Coder prepares the required code based on user input
      • Coder summarizes the result and passes it along to QA with testing instructions.
      • QA triggers Playwright and conducts testing based on the instructions.
      • QA summarizes the result and passes it along to the user with the testing result.
      • execution results are summarized and used to replace previous messages in each thread.

License

  • dollm: released under the AGPL license.
  • mcp-sdk-bundle (mcp-sdk-bundle.js / mcp-sdk-bundle.min.js): prebundled files from @modelcontextprotocol/sdk, released under the MIT license.

Package last updated on 30 Oct 2025