mlx-ts

AI SDK provider for local MLX (Swift) models on Apple Silicon (macOS).

latest: 0.0.4 (npm) · 1 maintainer

mlx-ts

Local LLM inference on macOS using a Swift MLX host process + a TypeScript client / AI SDK provider.

This README is shown on the npm package page. The repo contains additional development notes.

Quickstart (end users)

Requirements

  • macOS Apple Silicon (darwin/arm64)
  • Node.js

Install

npm i mlx-ts

During install, mlx-ts downloads a prebuilt mlx-host (Swift) binary + mlx.metallib from GitHub Releases (no Xcode required).
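The download location can be redirected with `MLX_TS_HOST_BASE_URL` (see Runtime configuration below). A minimal sketch of how such a base URL maps to the two assets; `assetUrls` is a hypothetical helper name, and the package's actual postinstall logic may differ:

```typescript
// Sketch only: maps a base URL to the two assets fetched at install time.
// assetUrls is a hypothetical name, not part of the mlx-ts API.
function assetUrls(baseUrl: string): { host: string; metallib: string } {
  const base = baseUrl.replace(/\/+$/, ""); // tolerate a trailing slash
  return {
    host: `${base}/mlx-host`,         // prebuilt Swift host binary
    metallib: `${base}/mlx.metallib`, // Metal shader library
  };
}
```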

Use with the AI SDK

import { createMlxProvider } from "mlx-ts";
import { generateText, streamText } from "ai";

const modelId = "mlx-community/Llama-3.2-1B-Instruct-4bit";

const mlx = createMlxProvider({
  model: modelId,
  // optional:
  // modelsDir: "/path/to/your/models-cache",
  // hostPath: process.env.MLX_HOST_BIN,
});

const model = mlx.languageModel(modelId);

// stream
const s = await streamText({
  model,
  maxTokens: 64,
  messages: [{ role: "user", content: "Say hello from a local MLX model." }],
});
for await (const chunk of s.textStream) process.stdout.write(chunk);
process.stdout.write("\n");

// one-shot
const g = await generateText({
  model,
  maxTokens: 64,
  messages: [{ role: "user", content: "Summarize MLX in one sentence." }],
});
console.log(g.text);

Runtime configuration

  • Force CPU vs GPU: set MLX_HOST_DEVICE=cpu (default is gpu).
  • Override host binary: set MLX_HOST_BIN=/path/to/mlx-host or pass { hostPath } to createMlxProvider.
  • Default model cache dir: OS cache directory (macOS: ~/Library/Caches/mlx-ts/models).
  • Override where models are cached: pass { modelsDir } to createMlxProvider or set MLX_MODELS_DIR.
  • Override where mlx-ts downloads assets from: set MLX_TS_HOST_BASE_URL (base URL containing mlx-host and mlx.metallib).
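The model-cache precedence above (explicit option, then `MLX_MODELS_DIR`, then the OS cache directory) can be sketched as follows; `resolveModelsDir` is a hypothetical helper, not the package's actual export, and the default path shown is the macOS one listed above:

```typescript
import * as os from "node:os";
import * as path from "node:path";

// Sketch of the documented precedence: { modelsDir } option wins, then the
// MLX_MODELS_DIR environment variable, then the macOS cache directory.
function resolveModelsDir(opts: { modelsDir?: string } = {}): string {
  if (opts.modelsDir) return opts.modelsDir;
  if (process.env.MLX_MODELS_DIR) return process.env.MLX_MODELS_DIR;
  return path.join(os.homedir(), "Library", "Caches", "mlx-ts", "models");
}
```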

OpenCode integration

OpenCode supports OpenAI-compatible providers: you can point it at a custom endpoint via options.baseURL (see OpenCode's Providers docs) and select models via provider_id/model_id (see OpenCode's Models docs).

mlx-ts ships a small OpenAI-compatible local server:

# Start local server (choose any MLX model id)
npx mlx-ts-opencode --model mlx-community/Llama-3.2-1B-Instruct-4bit --port 3755

# Generate an opencode.json snippet
npx mlx-ts-opencode --print-config --model mlx-community/Llama-3.2-1B-Instruct-4bit --port 3755 > opencode.json
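The generated snippet points OpenCode at the local server through options.baseURL. An illustrative shape is shown below; the field names follow OpenCode's provider config conventions and the /v1 suffix assumes a typical OpenAI-compatible route, but the authoritative output comes from --print-config above:

```json
{
  "provider": {
    "mlx": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:3755/v1" },
      "models": {
        "mlx-community/Llama-3.2-1B-Instruct-4bit": {}
      }
    }
  }
}
```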


Package last updated on 11 Jan 2026
