milieu-cli

npm · latest version 0.1.10

The API is the new product UI. Your customers are no longer just humans. They're AI agents, LLM pipelines, and automated workflows. These customers don't browse your marketing site or read your docs the way people do. They parse your machine-readable signals, probe your endpoints, and decide in milliseconds whether your product is usable.

As an industry, we've spent decades perfecting UI & UX design for humans. Now we need the same rigor for the interface AI agents actually see: your product's milieu.

Table of contents

  • What is milieu?
  • Quick start
  • The 5 Bridges
  • Install
  • CI/CD integration
  • Options
  • How scoring works
  • Programmatic API
  • Requirements
  • License

What is milieu?

Milieu is the totality of machine-readable signals that surround your product — the environment an AI agent encounters when it tries to discover, understand, and integrate with what you've built. It's not any single file or endpoint. It's robots.txt and OpenAPI specs and llms.txt and JSON-LD and developer docs and SDK references, all working together. It's the difference between a product that AI agents can use and one they walk past.

Good design made products usable for humans. Good milieu design makes products usable for agents.

milieu-cli measures this. It scans your product surface and tells you what AI agents can actually see.

# Scan any product surface (no config, no API keys)
npx milieu-cli scan petstore.swagger.io

# Gate your pipeline on agent-readiness (milieu alias after global install)
milieu scan api.example.com --threshold 70

Quick start

Run your first scan in under a minute:

npx milieu-cli scan petstore.swagger.io

No config, no API keys. You'll get a scored report showing what AI agents can see when they visit that product surface.

Example products to try

Each of these exercises different parts of the scanner:

# The classic OpenAPI demo — Swagger spec at a well-known path
npx milieu-cli scan petstore.swagger.io

# Rich structured data — JSON-LD and Schema.org markup
npx milieu-cli scan schema.org

# Minimal API service — clean reachability, few standards signals
npx milieu-cli scan httpbin.org

Add --verbose to any scan to see individual check results and explanations:

npx milieu-cli scan petstore.swagger.io --verbose

Once you've seen how these score, scan your own product surface and compare.

The 5 Bridges

Milieu evaluates your product through five progressive bridges. Each one represents a layer of machine legibility that AI agents need, from "can I reach you?" to "can I trust you?"

| Bridge | Question | What milieu checks | Score |
|---|---|---|---|
| 1. Reachability | Can agents reach you? | HTTPS, HTTP status, robots.txt (RFC 9309), per-bot crawler policies (GPTBot, ClaudeBot, CCBot, Googlebot, Bingbot, PerplexityBot), meta robots, X-Robots-Tag | 0–100 |
| 2. Standards | Can agents read you? | OpenAPI spec, GraphQL introspection, XML sitemap, markdown content negotiation, llms.txt, llms-full.txt, MCP endpoint, JSON-LD, Schema.org, security.txt, ai-plugin.json | 0–100 |
| 3. Separation | Can agents integrate with you? | API endpoints, developer docs, SDK/package references, webhook support | Detection only* |
| 4. Schema | Can agents use you correctly? | Planned | n/a |
| 5. Context | Can agents trust you? | Planned | n/a |

*Bridge 3 reports what's present rather than scoring quality.

The bridges are progressive: there's no point checking your OpenAPI spec (Bridge 2) if agents can't even reach your product surface (Bridge 1). There's no point looking for SDK references (Bridge 3) if you don't publish machine-readable standards (Bridge 2). Each bridge builds on the last.

Bridge 1 — Reachability is the front door. Can AI agents get to your content at all? Are you blocking specific crawlers without realizing it? This is the most actionable bridge for most products — many are unknowingly blocking GPTBot or ClaudeBot in their robots.txt.

Bridge 2 — Standards is the shared language. Do you speak the protocols AI agents understand? OpenAPI specs, GraphQL endpoints, XML sitemaps, markdown content negotiation, llms.txt, MCP endpoints, structured data — these are the machine-readable standards that let agents go beyond scraping your HTML. milieu also checks if your server returns markdown via HTTP content negotiation (Accept: text/markdown), which cuts agent token usage by ~80% compared to raw HTML.

Bridge 3 — Separation is the developer surface. Do you have a clear API boundary? Developer docs? SDKs? Webhooks? This is where agents look to determine if your product is something they can build with, not just read from.

Bridges 4-5 — Schema and Context are deeper evaluations of whether your APIs are well-designed and whether agents can trust the data. These require analysis beyond automated checks.
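The markdown content negotiation mentioned under Bridge 2 is easiest to picture from the server side. Below is a minimal sketch of an Accept-header chooser; `pickRepresentation` is a hypothetical helper for illustration, not part of milieu-cli, and a production server should follow the full RFC 9110 rules (wildcards, q-value ties, unacceptable types):

```typescript
// Pick "markdown" when the client's Accept header prefers text/markdown,
// otherwise fall back to HTML. Hypothetical helper, for illustration only.
type Representation = "markdown" | "html";

function pickRepresentation(accept: string | undefined): Representation {
  if (!accept) return "html";
  // Split "text/markdown;q=0.9, text/html" into { type, q } entries.
  const prefs = accept.split(",").map((part) => {
    const [type, ...params] = part.trim().split(";");
    const qParam = params.find((p) => p.trim().startsWith("q="));
    const q = qParam ? parseFloat(qParam.trim().slice(2)) : 1.0;
    return { type: type.trim().toLowerCase(), q };
  });
  const md = prefs.find((p) => p.type === "text/markdown");
  const html = prefs.find((p) => p.type === "text/html" || p.type === "*/*");
  if (md && md.q > 0 && (!html || md.q >= html.q)) return "markdown";
  return "html";
}
```

An agent that sends Accept: text/markdown then receives the lighter markdown body instead of full HTML, which is where the token savings come from.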

Crawler policies

The single most actionable finding for most products: are you blocking AI agents? milieu checks your robots.txt for policies on six specific bots:

  • GPTBot (OpenAI) · ClaudeBot (Anthropic) · CCBot (Common Crawl)
  • Googlebot (Google) · Bingbot (Microsoft) · PerplexityBot (Perplexity)

Each policy is checked individually — you might be allowing Googlebot but blocking GPTBot without realizing it. Use --verbose to see per-bot results.
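To make the per-bot idea concrete, here is a simplified sketch of a robots.txt group check. `isBotBlocked` is a hypothetical function, not milieu-cli's parser; full RFC 9309 matching also handles Allow precedence, path prefixes, and wildcards in rules:

```typescript
// Return true if `bot` has a blanket "Disallow: /" in this robots.txt.
// Simplified: consecutive User-agent lines share one rule group, and the
// bot's own group takes precedence over the "*" group.
function isBotBlocked(robotsTxt: string, bot: string): boolean {
  type Group = { agents: string[]; disallowAll: boolean };
  const groups: Group[] = [];
  let current: Group | null = null;
  let lastWasAgent = false;

  for (const raw of robotsTxt.split("\n")) {
    const line = raw.split("#")[0].trim(); // strip comments and whitespace
    const idx = line.indexOf(":");
    if (idx < 0) continue;
    const key = line.slice(0, idx).trim().toLowerCase();
    const value = line.slice(idx + 1).trim();
    if (key === "user-agent") {
      if (!current || !lastWasAgent) {
        current = { agents: [], disallowAll: false };
        groups.push(current);
      }
      current.agents.push(value.toLowerCase());
      lastWasAgent = true;
    } else {
      if (key === "disallow" && current && value === "/") {
        current.disallowAll = true;
      }
      lastWasAgent = false;
    }
  }

  const own = groups.find((g) => g.agents.includes(bot.toLowerCase()));
  const wildcard = groups.find((g) => g.agents.includes("*"));
  return (own ?? wildcard)?.disallowAll ?? false;
}
```

Note how a GPTBot-specific group overrides the "*" group: that is exactly the "allowing Googlebot but blocking GPTBot" case the per-bot check surfaces.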

Install

npx milieu-cli scan petstore.swagger.io   # one-off, no install
npm install -g milieu-cli                 # global install
milieu scan petstore.swagger.io           # short alias after install

Both milieu and milieu-cli work as commands after global install.

CI/CD integration

Track agent-readiness over time and prevent regressions:

# Fail the build if score drops below 70
milieu scan api.mycompany.com --threshold 70 --quiet

# Capture structured results
milieu scan api.mycompany.com --json > milieu-report.json

# Pretty-print for debugging
milieu scan api.mycompany.com --json --pretty

Exit codes: 0 = score meets threshold (or no threshold set), 1 = score below threshold or scan error.

Options

| Flag | Description | Default |
|---|---|---|
| --json | Output raw JSON to stdout | off |
| --pretty | Pretty-print JSON (use with --json) | off |
| --verbose | Show individual check details with explanations | off |
| --explain-all | Show explanations on all checks, not just failures (use with --verbose) | off |
| --timeout <ms> | Per-request timeout in milliseconds | 10000 |
| --threshold <n> | Exit non-zero if overall score < n | off |
| --quiet | Suppress terminal output | off |

Check explanations

In --verbose mode, non-passing checks include a "why this matters" explanation: a plain-language sentence describing what the result means for AI agents. These explanations are status-aware: a failing robots.txt check tells you agents have no crawling guidance, while a passing one confirms your guidance is clear.

These explanations also appear in --json output as a why field on every check, designed for both human readers and LLMs generating recommendations.
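One way to picture the status-aware lookup is a small table keyed by check id and status. The sketch below is hypothetical; the function name and explanation strings are invented stand-ins, not milieu-cli's actual copy:

```typescript
type CheckStatus = "pass" | "partial" | "fail";

// Hypothetical explanation table keyed by check id, then status.
// milieu-cli ships its own texts; these strings are illustrative only.
const explanations: Record<string, Partial<Record<CheckStatus, string>>> = {
  robots_txt: {
    pass: "robots.txt found; agents have clear crawling guidance.",
    fail: "Without robots.txt, AI agents have no guidance on what they can access.",
  },
};

// Look up the explanation for a (check, status) pair, if one exists.
function resolveExplanationSketch(
  checkId: string,
  status: CheckStatus,
): string | undefined {
  return explanations[checkId]?.[status];
}
```

milieu-cli exposes this behavior through its resolveExplanation export, covered later under Programmatic API.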

How scoring works

The overall score is the average of scored bridges (currently Bridges 1 and 2). Bridge 3 reports detection status only and is excluded from the average.

Each check within a scored bridge contributes: pass = 1, partial = 0.5, fail = 0. Bridge score = (points / total_checks) * 100.

A "partial" means the signal exists but is incomplete: an OpenAPI spec served as YAML (detected but not fully parseable), or a robots.txt with valid structure but no explicit allow/disallow rules.

All checks are reproducible: same product surface state produces the same score every time.
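The arithmetic is easy to verify by hand: four checks scoring pass, pass, partial, fail give 2.5 points, so the bridge scores 62.5. A sketch of the scoring rules as described above (function names are illustrative, not milieu-cli exports):

```typescript
type CheckStatus = "pass" | "partial" | "fail";

// Points per check: pass = 1, partial = 0.5, fail = 0.
// Bridge score = (points / total_checks) * 100.
function bridgeScore(checks: CheckStatus[]): number {
  const points = checks.reduce(
    (sum, s) => sum + (s === "pass" ? 1 : s === "partial" ? 0.5 : 0),
    0,
  );
  return (points / checks.length) * 100;
}

// Overall score: average of scored bridges only; unscored bridges
// (detection-only or planned) are passed as null and excluded.
function overallScore(bridgeScores: (number | null)[]): number {
  const scored = bridgeScores.filter((s): s is number => s !== null);
  return scored.reduce((a, b) => a + b, 0) / scored.length;
}

bridgeScore(["pass", "pass", "partial", "fail"]); // → 62.5
```

A partial counting half a point is what lets a detected-but-YAML OpenAPI spec move the score without fully passing.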

Programmatic API

import { scan } from "milieu-cli";
import type { ScanResult, ScanOptions, BridgeResult, Check, CheckStatus } from "milieu-cli";

const options: ScanOptions = {
  timeout: 15000,  // per-request timeout in ms (default: 10000)
  verbose: true,   // include check details in result
  silent: true,    // suppress spinner output (recommended for library use)
};

const result = await scan("https://petstore.swagger.io", options);

console.log(result.overallScore);      // number (average of scored bridges)
console.log(result.overallScoreLabel); // "pass" | "partial" | "fail"
console.log(result.bridges);           // 5-element tuple of BridgeResult

Note: result.bridges always returns 5 elements. Bridges 1-2 have numeric scores. Bridge 3 has score: null (detection inventory). Bridges 4-5 have score: null (not yet evaluated). Handle nulls when mapping:

const scoredBridges = result.bridges.filter(b => b.score !== null);

Check explanations are available as a separate export for library consumers:

import { resolveExplanation } from "milieu-cli";

// Get the status-aware explanation for a check
const why = resolveExplanation("robots_txt", "fail");
// → "Without robots.txt, AI agents have no guidance on what they can access..."

Requirements

  • Node.js 18+

License

Apache-2.0

Keywords

ai

Package last updated on 24 Mar 2026