AgentMark CLI

The command-line tool for developing, testing, and evaluating AI agents with AgentMark.

Installation

npm install -g @agentmark-ai/cli

Or use it directly with npx:

npx @agentmark-ai/cli dev

Quick Start

# Scaffold a new project
npm create agentmark@latest

# Start the dev server (API + trace UI + hot reload)
agentmark dev

# Run a prompt with its test props
agentmark run-prompt my-prompt.prompt.mdx

# Run an experiment against a dataset
agentmark run-experiment my-prompt.prompt.mdx

Commands

agentmark dev

Start the local development environment: API server, webhook server, and trace UI.

agentmark dev
agentmark dev --api-port 9418 --app-port 3000
agentmark dev --remote    # Connect to AgentMark Cloud (login + trace forwarding)
agentmark dev --tunnel    # Expose webhook server publicly

The dev server auto-detects your project language (TypeScript or Python), finds your Python virtual environment when present, and resolves port conflicts automatically.

agentmark run-prompt <filepath>

Execute a single prompt and display the result.

# Run with test props from the prompt's frontmatter
agentmark run-prompt customer-support.prompt.mdx

# Run with custom props
agentmark run-prompt customer-support.prompt.mdx --props '{"customer_question": "Where is my order?"}'

# Run with props from a file
agentmark run-prompt customer-support.prompt.mdx --props-file test-data.json

Output includes the LLM response, token usage, cost, and a link to the trace in the local UI.
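Inline JSON passed to --props is easy to break with shell quoting; for anything beyond a trivial payload, writing the props to a file and using --props-file is simpler. A minimal sketch (the file name and props key are just the examples from above):

```shell
# Write the props to a JSON file instead of quoting them inline.
cat > test-data.json <<'EOF'
{"customer_question": "Where is my order?"}
EOF

# Then pass the file to run-prompt:
#   agentmark run-prompt customer-support.prompt.mdx --props-file test-data.json
```

Single-quoting the heredoc delimiter ('EOF') keeps the shell from expanding anything inside the JSON body.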

agentmark run-experiment <filepath>

Run a prompt against every item in its dataset, with optional evaluations.

# Run with evals (default)
agentmark run-experiment my-prompt.prompt.mdx

# Skip evals
agentmark run-experiment my-prompt.prompt.mdx --skip-eval

# Output as JSON instead of table
agentmark run-experiment my-prompt.prompt.mdx --format json

# Fail if pass rate is below 80%
agentmark run-experiment my-prompt.prompt.mdx --threshold 80

Output formats: table (default), csv, json, jsonl.
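The --threshold flag lends itself to a CI gate. A hedged sketch, assuming (the docs above do not state it explicitly) that run-experiment exits nonzero when the pass rate falls below the threshold:

```shell
# Save a small gate script (ci-evals.sh) to run as a CI step. With `set -e`,
# a failed threshold check aborts the script with a nonzero exit status,
# which fails the build.
cat > ci-evals.sh <<'EOF'
#!/bin/sh
set -e
agentmark run-experiment my-prompt.prompt.mdx --threshold 80 --format json
echo "eval pass rate >= 80%"
EOF
chmod +x ci-evals.sh
```

The --format json flag is optional here; it just makes the step's output easier to archive or parse in later pipeline stages.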

agentmark build

Pre-compile .prompt.mdx files into JSON for production use with the file loader.

agentmark build
agentmark build --out dist/prompts

agentmark generate-types

Generate TypeScript type definitions from your prompts for type-safe usage in code.

agentmark generate-types

agentmark generate-schema

Generate a JSON Schema for .prompt.mdx frontmatter, enabling IDE validation and autocomplete.

agentmark generate-schema
agentmark generate-schema --out .agentmark

agentmark pull-models

Interactively select and add LLM models from a provider to your agentmark.json.

agentmark pull-models

agentmark login / agentmark logout

Authenticate with AgentMark Cloud.

agentmark login
agentmark logout

agentmark link

Link your project to an AgentMark Cloud app for trace forwarding.

agentmark link
agentmark link --app-id <uuid>

Documentation

Full documentation at docs.agentmark.co.

License

MIT
