dollm
A module for interacting with large language models. Concepts:
character: defines response characteristics (the system prompt); a character can be extended with shards.
shard: a smaller fragment of a characteristic, serving as a plugin for a character.
parser: parses responses to trigger custom actions.
manager: handles shard and character loading.
adapter: defines the LLM provider and model source.
thread: maintains a chat session with shard, character, and adapter.
block: a @plotdb/block-based chatroom module with headless logic for quick frontend integration.
Install with npm (TODO):
npm install --save dollm
load the required lib files:
<script src="path-to/index.min.js"></script>
context(ctx): configure dependencies. accepted dependencies:
fetch: browser fetch or an equivalent runtime (e.g., Node.js) counterpart.
A sample usage for local dev (with rejectUnauthorized set to false):
require! <[axios https]>
dollm.context(fetch: axios.create httpsAgent: new https.Agent rejectUnauthorized: false)
send(opt): request an LLM response. opt is an object with the following fields:
messages: Array of message objects.
proc: callback for the text updating event.
forward: response object for forwarding a remote response. optional.
controller: for aborting the request. optional.
adapter: adapter object or constructor option providing model source information.
fetch: (deprecated) alternative fetch API (e.g., axios). optional. use dollm.context instead.
send resolves to a {content: '...'} object.
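For instance, here is a minimal sketch of calling dollm.send directly (the message object shape {role, content} is an assumption based on common chat APIs; the model value is illustrative):
p = dollm.send {
  adapter: dollm.adapter.from.ollama {model: 'gemma3:12b'}
  # message object shape is an assumption
  messages: [{role: \user, content: "Hi there?"}]
  proc: (->)   # called on text updating events
}
p.then ({content}) -> console.log content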
Also, dollm works with the following member classes. E.g., here is an example:
adapter = dollm.adapter.from.ollama do
  model: \gemma3:12b, url: \https://dollm.loco/api/ollama/chat
character = new dollm.character do
  name: "Singer Bot"
  system: "You are a bot that always reply with a song. (also with musical emoji)"
thread = new dollm.thread!
thread.init {character, adapter}
.then -> thread.send {message: "Hi there?", proc: (->)}
.then -> console.log it
which uses:
character to define the system prompt
adapter to define the LLM provider
thread to maintain the chat session
Beyond this minimal example, you can also use:
shard to extend a given character
parser to trigger custom actions based on patterns in the response
manager to specify how resources such as characters and shards are loaded
See below for more information.
dollm.adapter
An adapter specifies how dollm can connect to a given LLM provider. dollm.adapter also provides factory functions for commonly seen providers such as Grok, OpenAI, Claude, or Ollama running in a local environment.
To manually create an adapter:
apt = new dollm.adapter(opt)
where opt is an object with the following fields:
provider: provider name. informative purpose.
name: adapter name. informative purpose.
model: model name, usually a unique identifier such as gpt-5-mini provided by the provider.
dummy: true if this is a dummy adapter (a fake or placeholder adapter).
url: API endpoint.
opt: additional options for API requests. possible fields:
stream: whether the request should be streamed. default true.
headers: key/value pair hash of additional headers passed along with the API request.
corsRequired: whether CORS is required when using this adapter. default false.
payload(payload): optional. request payload mutator function.
parse({buf, content}): optional parsing function called when receiving an input message; it works with the buf and content values. See google in src/adapter.ls for sample usage.
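A minimal sketch of manual construction using the fields above (the url and model values are illustrative):
adapter = new dollm.adapter do
  provider: \ollama
  name: "local ollama"
  model: 'gemma3:12b'
  url: 'http://localhost:11434/api/chat'
  opt: {stream: true}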
While you can construct an adapter manually, you can also use the factory functions under dollm.adapter.from:
dummy({model}):
the dummy field of the returned adapter will be set to true when using this factory.
ollama({model, url}):
url defaults to http://localhost:11434/api/chat if omitted.
openai({model, apikey, temperature, max_tokens}):
model defaults to gpt-4o-mini if omitted.
perplexity({model, apikey, temperature, max_tokens}):
model defaults to sonar if omitted.
deepseek({model, apikey, temperature, max_tokens}):
model defaults to deepseek-chat if omitted.
xai({model, apikey, temperature, max_tokens}):
model defaults to grok-beta if omitted.
google({model, apikey, temperature, max_tokens, api_version}):
model defaults to gemini-1.5-flash-8b if omitted. api_version defaults to v1 if omitted.
anthropic({model, apikey, temperature, max_tokens}):
model defaults to claude-3-haiku-20240307 if omitted.
proxy({from, route, ...}):
creates a proxy adapter from the from option, which points to one of the factories above, and makes requests through route with all additional options passed on.
from: one of the names available in dollm.adapter.from.
route: local proxy API endpoint. /api/chat/proxy if omitted.
model: defaults to llama3.1:8b if omitted.
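For example, a proxy adapter forwarding requests for an Ollama model through a local endpoint might look like this (a minimal sketch; the route shown is the default above):
adapter = dollm.adapter.from.proxy do
  from: \ollama
  route: '/api/chat/proxy'
  model: 'llama3.1:8b'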
dollm.character
dollm.character controls the characteristics of a chatbot by tweaking the system prompt. Used along with dollm.thread. Usage:
thread = new dollm.thread(...)
ch = new dollm.character(opt)
ch.init!then -> thread.character(ch)
alternatively:
thread = new dollm.thread(...)
thread.init {
  character: new dollm.character(opt)
  adapter: new dollm.adapter(...)
}
.then -> ...
where opt is an object with the following fields:
id: ID of this character.
name: name of this character.
system: text to provide for the system prompt. "you are a general chatbot." if omitted.
url: optional. if provided, init fetches the system prompt from this url, expecting a plain text file.
manager: optional. dollm manager for fetching shards.
adapter: optional. the adapter to be used along with this character.
shards: an array of shards (either instances of dollm.shard or constructor options) for this character.
Object API:
init(): initialize this character.
id(): return the id of this character.
name(): return the name of this character.
system(prompt): return the prompt of this character. if prompt is provided, replace the current prompt with it.
adapter(): return the adapter of this character (may not exist).
adapt(o): set the adapter of this character. o: an instance of dollm.adapter or the object for its constructor.
adopt(s): adopt the given shard (can be an instance of dollm.shard, or an object for its constructor).
discard(s): discard the given shard from this character.
shards(): return the list of shards used by this character.
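A minimal sketch of extending a character with a shard (assuming ch is an initialized dollm.character as in the usage above; the shard id and prompt are illustrative):
shard = new dollm.shard {id: \rhyme, prompt: "Always answer in rhyme."}
ch.adopt shard
console.log ch.shards!
ch.discard shard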
dollm.shard
Similar to a character, but a smaller fragment of a characteristic, serving as a plugin for a character. Used along with dollm.character. Usage:
shard = new dollm.shard(opt)
shard.init!then -> ...
where opt is an object with the following fields:
id: id of this shard.
name: optional. name of this shard. auto-generated from prompt if omitted.
url: optional. if provided, init fetches the prompt from this url, expecting a plain text file.
get: optional. should be a function returning (a promise of) an (array of) text as the prompt for this shard.
actions: for action execution (TBD).
prompt: prompt(s) of this shard. can be a string or an array of strings.
dollm.thread
Chat thread controller. It controls the chat messages, LLM characteristics, and provider information. Usage:
thread = new dollm.thread(...)
thread.adapt(someAdapter)
thread.send(...).then ({content}) ->
You can give the LLM some character:
thread = new dollm.thread(...)
thread.adapt(someAdapter)
thread.adapt(someCharacter)
someCharacter.init!
.then -> thread.send(...)
.then ({content}) ->
or with init helper function:
thread = new dollm.thread(...)
thread.init {character, adapter}
.then -> thread.send(...)
.then -> ...
Constructor options:
name: the name of this thread. default general thread.
id: the id of this thread. suuid is used to generate a random one if available; otherwise Math.random is used.
stateless: default false. indicates whether this thread is stateless.
proc: TBD
opt: TBD
Class method:
factory({character, adapter}): construct and initialize a thread with the given options. resolves to the thread object.
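A minimal sketch using factory (the model and system prompt are illustrative):
dollm.thread.factory {
  adapter: dollm.adapter.from.ollama {model: 'gemma3:12b'}
  character: {name: "Singer Bot", system: "You always reply with a song."}
}
.then (thread) -> thread.send {message: "Hi there?", proc: (->)}
.then ({content}) -> console.log content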
API:
on(name, cb): register an event listener.
fire(name, value): fire an event.
name(): return the name of this thread.
model(): return the model name used by this thread.
init(opt): initialize with the given option opt:
character: character to use. either an instance of dollm.character or the object for its constructor.
adapter: adapter to use. either an instance of dollm.adapter or the object for its constructor.
character(c): set the character to use. c: an instance of dollm.character or the object for its constructor.
adapt(o): set the adapter to be used in this thread. o: an instance of dollm.adapter or the object for its constructor.
adapter(): return the adapter currently in use.
send(o): send in turn calls dollm.send and returns its return value. fields of o:
message: text message to be sent to the LLM.
transient: the message will be stored as a truncated short text after the LLM responds.
proc: see dollm.send
forward: see dollm.send
fetch: see dollm.send
abort(): immediately stop the current transaction. fires the aborted event.
reset(): reset the conversation.
dollm.manager
For loading and storing characters, shards, and adapters. Usage:
new dollm.manager { ... }
where the constructor options are:
apikey: an object storing API keys as provider/key pairs.
APIs:
builtin(): return builtin (i.e., available) resources as a categorized object with corresponding names, such as:
{
  characters: { ... }
  shards: { ... }
  adapters: { ... }
}
load(opt): load a bundle of resources.
url: url to fetch. expects a JSON object with optional characters, shards, and adapters fields.
shard({id}): request the shard with the given id.
character({id}): request the character with the given id.
adapter({id}): request the adapter with the given id.
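A minimal sketch (the apikey, url, and id values are illustrative; this assumes load and the getters return promises):
mgr = new dollm.manager {apikey: {openai: \sk-mykey}}
mgr.load {url: '/api/dollm/bundle.json'}
.then -> mgr.character {id: \singer-bot}
.then (ch) -> console.log ch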
dollm.parser
Note: dollm.parser is under development. Design and interface are subject to change.
dollm.parser provides a way to parse returned content for programmatic purposes. It usually comes along with a series of actions which can be identified after parsing the LLM's response. For example, here is a sample response:
Let me generate a random number for you:
```run {method: "getRandomNumber", params: []}```
In this example, we expect our parser to identify the whole content of the run quote and to parse the inner content, in this case as JSON. The parser then looks up a proper handler, an action, to take care of it.
Usually dollm.parser is not used directly, but is depended on by actions through their host field and is loaded indirectly. Furthermore, dollm natively supports a parser for three quote marks followed by the name exec, such as:
```exec
{action: "some action name"}
```
and this default parser will be used for actions that don't specify a parser.
Usage:
parser = new dollm.parser( ... )
# alternatively, get default parser from factory
parser = dollm.parser.factory!
constructor options:
id: parser id.proc({ctx}): optional. mutator of the given context object.
ctx: an object storing chat information, including:
content: content of the current message.idx: index of the current message.data: parsed data in matchStart, if any.ext: the object {id, result, feedback} where
id is the id of the given parser. (TODO: rename to parser with bid?)result: not available here but will store parsed result by action. Can be an object.feedback: not available here but will store the action result for LLM to read. Should be a string.error: the error message if error occurred during parsing.render({ctx, node}): optional. renderer for the given ctx object. parameters:
ctx: the context object, as described in proc({ctx}).node: the DOM node of which this context object is rendered.matchStart(text): detect if an new action data is starting based on current ongoing input text.
index: position of the beginning of the matched text.len: length of the matched text.data: optional. additional data parsed in the start pattern.matchEnd(text): detech if the new action data ends.
index: position of the beginning of the matched text.len: length of the matched text.stop: optional. abort the ongoing message if true.Class Methods:
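A minimal sketch of a custom parser matching the native exec pattern (the regexes are illustrative; returning null when nothing matches is an assumption):
parser = new dollm.parser do
  id: \exec-parser
  matchStart: (text) ->
    m = /```exec/.exec text
    if m then {index: m.index, len: m.0.length} else null
  matchEnd: (text) ->
    m = /\n```/.exec text
    if m then {index: m.index, len: m.0.length} else null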
Class Methods:
factory(opt): get prebuilt parsers. opt is an object with the following fields:
name: parser name. default if omitted. currently only default is supported.
APIs:
id(): return the id of this parser.
add(actions): add additional action(s). parameter is an action or a list of actions.
remove(actions): remove specified action(s). parameter is an action or a list of actions.
clear(): remove all actions.
action(id): return the action with id id; throws an Error with lderror id 404 if not found.
proc({ctx}): mutate the given ctx object.
render({ctx, node}): render on the node based on the given ctx object.
An action is a plain object with the following fields:
host: the parser this action depends on.
id: the id of this action.
opt: optional. additional customizable object which is passed into the action's proc() function.
proc({ctx, opt}): process the given ctx object. should return {result, feedback}, where result and feedback are described in the parser's constructor options.
ctx: the context object described above.
opt: the action's opt field described above.
render({ctx, node}): render on the node based on the given ctx object.
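A minimal sketch of an action handling the sample response above (the id and feedback text are illustrative; how the parser dispatches parsed data to an action id is an assumption):
parser = dollm.parser.factory!
parser.add do
  host: parser
  id: \getRandomNumber
  proc: ({ctx, opt}) ->
    result = Math.random!
    {result, feedback: "random number: #result"}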
block
dollm also provides a @plotdb/block-based headless block for quickly constructing a user interface. A demo block is also available for reference on how to use it and what the expected result looks like.
Usage (demo block):
mgr = block.manager(...)
mgr.from {name: "dollm", path: "block/fancy"}, {root: ..., data: ...} .then (ret) ->
  ret.interface.adapt!
  ...
block/fancy is for demo purposes only. Since the base block is headless, you have to implement a workable block yourself. Check src/block/fancy for reference on how to implement a block on top of the headless block.
Additionally, please note that JS in the headless block runs in its own context, and things may break if you also load dollm independently; e.g., instanceof will return false when comparing an object you constructed with the constructor in the headless block. To solve this issue, use manager.scope.load or block extending to align libraries between both contexts.
Block construction data (the data object passed in the mgr.from call):
threadOpt: optional. object of options for constructing the thread.
Interface API:
adapt(opt): configure the block with additional resources and the desired adapter and character. options:
mgr: optional. an instance of dollm.manager.
actions: optional. a list of actions.
adapter: optional. dollm.adapter instance or constructor option.
character: optional. dollm.character instance or constructor option.
Call adapt explicitly or fire a dollm:adapt event (described below) to complete block initialization.
thread(): return the thread used by this block.
reset(): reset the conversation.
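A minimal sketch of completing initialization (assuming ret from the mgr.from call above; the model and system prompt are illustrative):
ret.interface.adapt do
  adapter: dollm.adapter.from.ollama {model: 'gemma3:12b'}
  character: {name: "Singer Bot", system: "You always reply with a song."}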
Block Events:
dollm:adapt: for a child block to invoke the base block's adapt directly. event options are passed to the adapt function in dollm.thread.
CLI
Install dollm globally to run it as a shell command, or use npx dollm:
npm install -g dollm
npx dollm
which looks for the following environment variables for the model you want to use:
DOLLM_PROFILE # DEFAULT if omitted
DOLLM_<PROFILE>_PROVIDER # provider name. check `dollm.adapter` section for available providers
DOLLM_<PROFILE>_MODEL # model name
DOLLM_<PROFILE>_URL # api endpoint for the given provider. optional
DOLLM_<PROFILE>_APIKEY # api key. optional
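For example, to chat with a local Ollama model (assuming <PROFILE> expands to DEFAULT as described above; the model value is illustrative):
export DOLLM_DEFAULT_PROVIDER=ollama
export DOLLM_DEFAULT_MODEL=gemma3:12b
npx dollm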
Roadmap
dollm provides means to access LLMs programmatically with controllable characteristics; however, advanced topics such as prompt engineering / management, LLM orchestration, etc. may also be important and applicable in this module. The following are in the roadmap:
scene: a scene works with orchas.
orcha: orchas can be recursive, controlling other orchas. orchas are responsible for managing messages in every thread in every scene.
shard and character can be used to construct a scene or orcha.
Bundled files (mcp-sdk-bundle.js / mcp-sdk-bundle.min.js):
these are prebundled files from @modelcontextprotocol/sdk, released under the MIT License.