
visual-ai-assertions

AI-powered visual assertions for E2E tests. Send screenshots to Claude, GPT, or Gemini and get structured, typed results.
```bash
# Install the library (includes OpenAI SDK by default)
npm install visual-ai-assertions

# Optional: install additional provider SDKs
npm install @anthropic-ai/sdk   # for Claude
npm install @google/genai       # for Gemini

# Zod is a peer dependency
npm install zod
```
This library uses sharp for image processing. Sharp downloads native binaries automatically for most supported platforms.
If installation fails in CI, Docker, or a minimal Linux image, install the system vips library (for example `apk add --no-cache vips-dev` on Alpine), pin the image platform with `--platform=linux/amd64`, or install the required build tools.

```ts
import { test, expect } from "@playwright/test";
import { visualAI } from "visual-ai-assertions";

// Provider auto-inferred from ANTHROPIC_API_KEY env var
const ai = visualAI();

test("login page looks correct", async ({ page }) => {
  await page.goto("https://myapp.com/login");
  const screenshot = await page.screenshot();

  const result = await ai.check(screenshot, [
    "A login form is visible with email and password fields",
    "A 'Sign In' button is present and visually enabled",
    "The company logo appears in the header",
    "No error messages are displayed",
  ]);

  // Simple pass/fail
  expect(result.pass).toBe(true);

  // Or inspect individual statements
  for (const stmt of result.statements) {
    expect(stmt.pass, `Failed: ${stmt.statement} — ${stmt.reasoning}`).toBe(true);
  }
});
```
```ts
import { visualAI } from "visual-ai-assertions";

// Provider inferred from model prefix
const ai = visualAI({ model: "gpt-5-mini" });

describe("Product Page", () => {
  it("should display all required elements", async () => {
    await browser.url("https://myapp.com/products/1");
    const screenshot = await browser.saveScreenshot("./screenshot.png");

    const result = await ai.elementsVisible(screenshot, [
      "Product title",
      "Price tag",
      "Add to Cart button",
      "Product image",
    ]);

    expect(result.pass).toBe(true);
  });
});
```
`visualAI(config?)`

Create an AI visual analysis instance. Provider is auto-inferred from the model name or API key environment variable.
```ts
import { visualAI, Provider, Model } from "visual-ai-assertions";

// Minimal — provider inferred from ANTHROPIC_API_KEY env var
const ai = visualAI();

// Explicit configuration
const ai = visualAI({
  model: "claude-sonnet-4-6", // optional, sensible defaults per provider
  apiKey: "sk-...", // optional, defaults to provider env var
  debug: true, // optional, logs prompts/responses to stderr
  maxTokens: 4096, // optional, default 4096
  reasoningEffort: "high", // optional, "low" | "medium" | "high" | "xhigh"
  trackUsage: false, // optional, defaults to false — usage stats to stderr
});

// Use constants for IDE autocomplete
const ai = visualAI({
  model: Model.Anthropic.SONNET_4_6,
});
```
`ai.check(image, statements, options?)`

Visual assertion. Returns pass: true only if ALL statements are true.
```ts
// Single statement
const result = await ai.check(screenshot, "The login button is visible");

// Multiple statements
const result = await ai.check(screenshot, [
  "The login button is visible",
  "No error messages are displayed",
]);

// With instructions
const result = await ai.check(screenshot, ["The form is submitted"], {
  instructions: ["Ignore loading spinners that appear briefly"],
});
```
Returns: CheckResult
```ts
{
  pass: boolean; // true only if ALL statements pass
  reasoning: string; // overall summary
  issues: Issue[]; // structured findings
  statements: StatementResult[]; // per-statement breakdown
  usage?: {
    inputTokens: number;
    outputTokens: number;
    estimatedCost?: number; // USD
    durationSeconds?: number; // API call duration
  };
}
```
`ai.ask(image, prompt, options?)`

Free-form analysis. Returns structured issues with priority and category.
```ts
const result = await ai.ask(screenshot, "Analyze this page for UI issues");

// Filter by priority
const critical = result.issues.filter((i) => i.priority === "critical");

// With instructions
const result = await ai.ask(screenshot, "Check for accessibility issues", {
  instructions: ["Ignore contrast on decorative elements"],
});
```
Returns: AskResult
```ts
{
  summary: string; // high-level analysis
  issues: Issue[]; // categorized findings
  usage?: {
    inputTokens: number;
    outputTokens: number;
    estimatedCost?: number;
    durationSeconds?: number;
  };
}
```
`ai.compare(imageA, imageB, options?)`

Compare two images and get structured differences.
```ts
import { writeFileSync } from "node:fs";

// Basic comparison
const result = await ai.compare(before, after);
// gemini-3-flash-preview includes an annotated diff by default.
// Pass { diffImage: false } to opt out.

// With custom prompt and instructions
const result = await ai.compare(before, after, {
  prompt: "Focus on header layout changes",
  instructions: ["Ignore date/time differences"],
});

// With AI-generated diff image (supported only by gemini-3-flash-preview)
const result = await ai.compare(before, after, {
  diffImage: true,
});
if (result.diffImage) {
  writeFileSync("diff.png", result.diffImage.data);
}
```
Returns: CompareResult
```ts
{
  pass: boolean; // true if no critical/major changes
  reasoning: string; // overall summary
  changes: ChangeEntry[]; // list of visual differences
  diffImage?: { // present when diffing is enabled explicitly or by Gemini 3 preview defaults
    data: Buffer; // PNG image data
    width: number;
    height: number;
    mimeType: "image/png";
  };
  usage?: UsageInfo;
}
```
Where ChangeEntry is:
```ts
{
  description: string; // what changed
  severity: "critical" | "major" | "minor";
}
```
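If your suite should fail only on meaningful regressions, the ChangeEntry list can be filtered by severity before asserting. A minimal sketch of that gate follows; the `blockingChanges` helper and the sample data are illustrative, not part of the library:

```typescript
// Severity gate sketch: fail only on critical/major changes, mirroring the
// documented pass rule (minor changes do not block a comparison).
type ChangeEntry = {
  description: string;
  severity: "critical" | "major" | "minor";
};

function blockingChanges(changes: ChangeEntry[]): ChangeEntry[] {
  // Keep everything that is not minor.
  return changes.filter((c) => c.severity !== "minor");
}

// Illustrative sample data, not real ai.compare() output.
const sample: ChangeEntry[] = [
  { description: "Header logo moved 40px left", severity: "major" },
  { description: "Footer copyright year updated", severity: "minor" },
];

console.log(blockingChanges(sample).length); // → 1
```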
Type-safe methods for common visual QA checks. All return CheckResult. Use Accessibility, Layout, and Content constants for IDE autocomplete.
```ts
import { Accessibility, Layout, Content } from "visual-ai-assertions";

// Check that UI elements are visible
await ai.elementsVisible(screenshot, ["Submit button", "Nav bar", "Footer"]);

// Check that UI elements are hidden
await ai.elementsHidden(screenshot, ["Loading spinner", "Error modal"]);

// Accessibility checks (contrast, readability, interactive visibility)
await ai.accessibility(screenshot);
await ai.accessibility(screenshot, {
  checks: [Accessibility.CONTRAST, Accessibility.READABILITY],
});

// Layout checks (overlap, overflow, alignment)
await ai.layout(screenshot);
await ai.layout(screenshot, {
  checks: [Layout.OVERLAP, Layout.OVERFLOW],
  instructions: ["Sticky headers may overlap content — ignore if < 10px"],
});

// Page load verification
await ai.pageLoad(screenshot);
await ai.pageLoad(screenshot, { expectLoaded: false }); // expect loading state

// Content checks (placeholder text, errors, broken images)
await ai.content(screenshot);
await ai.content(screenshot, {
  checks: [Content.PLACEHOLDER_TEXT, Content.ERROR_MESSAGES],
});
```
Every issue includes:
```ts
{
  priority: "critical" | "major" | "minor";
  category:
    | "accessibility"
    | "missing-element"
    | "layout"
    | "content"
    | "styling"
    | "functionality"
    | "performance"
    | "other";
  description: string; // what the issue is
  suggestion: string; // how to fix it
}
```
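Because every issue carries a priority, a CI step can summarize findings before deciding whether to fail. A sketch of that tally is below; the `countByPriority` helper and the demo issues are illustrative, not library output:

```typescript
// Tally issues per priority so a CI step can fail fast on criticals.
type Issue = {
  priority: "critical" | "major" | "minor";
  category: string;
  description: string;
  suggestion: string;
};

function countByPriority(issues: Issue[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const issue of issues) {
    counts[issue.priority] = (counts[issue.priority] ?? 0) + 1;
  }
  return counts;
}

// Illustrative sample, not real result.issues content.
const demo: Issue[] = [
  { priority: "critical", category: "accessibility", description: "Text contrast below 3:1", suggestion: "Darken the text color" },
  { priority: "minor", category: "styling", description: "Inconsistent button radius", suggestion: "Use the shared button component" },
  { priority: "critical", category: "layout", description: "CTA overlaps footer", suggestion: "Add bottom margin" },
];

console.log(countByPriority(demo)); // → { critical: 2, minor: 1 }
```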
Accepts multiple formats:
```ts
// Buffer (from Playwright screenshot)
const screenshot = await page.screenshot();
await ai.check(screenshot, "...");

// File path
await ai.check("./screenshots/page.png", "...");

// Base64 string
await ai.check(base64String, "...");

// URL
await ai.check("https://example.com/screenshot.png", "...");
```
Oversized images are automatically resized to provider limits.
```ts
import {
  formatCheckResult,
  formatCompareResult,
  assertVisualResult,
  assertVisualCompareResult,
} from "visual-ai-assertions";

// Pretty-print results to console
const result = await ai.check(screenshot, ["Login form is visible"]);
console.log(formatCheckResult(result, "login-page"));

// Throw VisualAIAssertionError on failure (includes full result on error)
assertVisualResult(result, "login-page");

// Same for compare results
const diff = await ai.compare(before, after);
console.log(formatCompareResult(diff));
assertVisualCompareResult(diff, "regression-check");
```
All errors extend VisualAIError, and every concrete error includes an error.code string for programmatic handling:
```ts
import { isVisualAIKnownError } from "visual-ai-assertions";

try {
  const result = await ai.check(screenshot, "Page is loaded");
} catch (error) {
  if (isVisualAIKnownError(error)) {
    switch (error.code) {
      case "AUTH_FAILED":
        // Invalid or missing API key
        break;
      case "RATE_LIMITED":
        // Rate limited — error.retryAfter has seconds to wait
        break;
      case "IMAGE_INVALID":
        // Invalid image: corrupt, unsupported format, etc.
        break;
      case "RESPONSE_PARSE_FAILED":
        // AI returned unparseable response — error.rawResponse has raw text
        break;
      case "CONFIG_INVALID":
        // Provider SDK not installed or invalid config
        break;
      case "ASSERTION_FAILED":
        // assertVisualResult threw — error.result has the full failed result
        break;
      case "PROVIDER_ERROR":
      case "VISUAL_AI_ERROR":
        break;
    }
  }
}
```
The VisualAIKnownError union and isVisualAIKnownError() helper are useful when you want switch (error.code) to narrow to subclass-specific fields such as retryAfter, statusCode, or rawResponse. Class-based instanceof checks continue to work too.
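One practical use of code-based narrowing is deciding whether (and how long) to wait before retrying. The sketch below shows that pattern against a stand-in error object; only the `code` and `retryAfter` fields come from the documentation above, and the helper name and fallback delay are assumptions:

```typescript
// Retry-delay sketch based on the documented error codes. KnownErrorLike is a
// stand-in shape for illustration, not the library's actual error classes.
type KnownErrorLike = { code: string; retryAfter?: number };

function retryDelaySeconds(error: KnownErrorLike, fallback = 30): number | null {
  switch (error.code) {
    case "RATE_LIMITED":
      // Prefer the provider-supplied wait, fall back to a fixed delay.
      return error.retryAfter ?? fallback;
    case "PROVIDER_ERROR":
      // Transient upstream failure: retry after a fixed delay.
      return fallback;
    default:
      // AUTH_FAILED, CONFIG_INVALID, etc. are not retryable.
      return null;
  }
}

console.log(retryDelaySeconds({ code: "RATE_LIMITED", retryAfter: 7 })); // → 7
```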
| Provider | Environment Variable |
|---|---|
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| Gemini | GOOGLE_API_KEY |
| Variable | Description |
|---|---|
| VISUAL_AI_MODEL | Default model when model is not set in config. Overrides the provider's default model. |
| VISUAL_AI_DEBUG | Enable error diagnostic logging to stderr. Does not enable prompt/response logging. Use "true" or "1". |
| VISUAL_AI_DEBUG_PROMPT | Enable prompt-only debug logging to stderr. Use "true" or "1". |
| VISUAL_AI_DEBUG_RESPONSE | Enable response-only debug logging to stderr. Use "true" or "1". |
| VISUAL_AI_TRACK_USAGE | Enable usage tracking (token counts and cost) to stderr. Use "true" or "1". |
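In CI, these variables are typically exported before the test runner starts. One possible shell setup (the key value is a placeholder, and the model override is only an example):

```shell
# Wire the provider key and overrides into the environment before running tests.
export ANTHROPIC_API_KEY="sk-ant-..."       # placeholder, not a real key
export VISUAL_AI_MODEL="claude-haiku-4-5"   # example override of the provider default
export VISUAL_AI_TRACK_USAGE="1"            # log token usage and cost to stderr
```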
| Option | Type | Default | Description |
|---|---|---|---|
| apiKey | string | env var | API key for the provider |
| model | string | provider default | Model to use |
| debug | boolean | false | Enable error diagnostic logging to stderr |
| debugPrompt | boolean | false | Log prompts to stderr |
| debugResponse | boolean | false | Log responses to stderr |
| maxTokens | number | 4096 | Max tokens for AI response |
| reasoningEffort | string | undefined | "low" / "medium" / "high" / "xhigh" — controls how deeply the model reasons |
| trackUsage | boolean | false | Log token usage and estimated cost to stderr |
```ts
import type {
  AskResult,
  CheckResult,
  CompareResult,
  SupportedMimeType,
  VisualAIConfig,
  VisualAIErrorCode,
} from "visual-ai-assertions";
```

SupportedMimeType is the exported image MIME union:

```ts
type SupportedMimeType = "image/jpeg" | "image/png" | "image/webp" | "image/gif";
```
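When screenshots arrive from outside the test runner (say, a content-type header), a guard can narrow a plain string to this union before handing it to the library. A sketch of one such guard; the union is restated locally so the example runs standalone, and `isSupportedMimeType` is not a library export:

```typescript
// Narrow an arbitrary string to the documented MIME union.
type SupportedMimeType = "image/jpeg" | "image/png" | "image/webp" | "image/gif";

const SUPPORTED_MIME_TYPES: readonly SupportedMimeType[] = [
  "image/jpeg",
  "image/png",
  "image/webp",
  "image/gif",
];

function isSupportedMimeType(value: string): value is SupportedMimeType {
  return (SUPPORTED_MIME_TYPES as readonly string[]).includes(value);
}

console.log(isSupportedMimeType("image/png"));     // → true
console.log(isSupportedMimeType("image/svg+xml")); // → false
```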
Default models:
| Provider | Default Model |
|---|---|
| Anthropic | claude-sonnet-4-6 |
| OpenAI | gpt-5-mini |
| Gemini | gemini-3-flash-preview |
Control how deeply the model reasons before responding. Higher effort produces more thorough analysis but uses more tokens and takes longer.
```ts
const ai = visualAI({
  reasoningEffort: "high", // "low" | "medium" | "high" | "xhigh"
});
```
When omitted, each provider uses its default behavior. The "xhigh" level enables maximum reasoning depth (maps to Anthropic's "max" effort and OpenAI's "xhigh" via the Responses API).
| Provider | Native Parameter | "xhigh" maps to |
|---|---|---|
| Anthropic | thinking.type: "adaptive" + output_config.effort | effort: "max" |
| OpenAI | reasoning.effort (Responses API) | effort: "xhigh" |
| Gemini | thinkingConfig.thinkingBudget (1024 / 8192 / 24576) | 24576 (max budget) |
All listed models support image/vision input. Pass any model ID to the model config option.
| Model | Model ID | Input $/MTok | Output $/MTok | Notes |
|---|---|---|---|---|
| Claude Opus 4.6 | claude-opus-4-6 | $5 | $25 | Most capable, 128K max output |
| Claude Sonnet 4.6 | claude-sonnet-4-6 | $3 | $15 | Default — best value |
| Claude Haiku 4.5 | claude-haiku-4-5 | $1 | $5 | Fastest, budget-friendly |
| Model | Model ID | Input $/MTok | Output $/MTok | Notes |
|---|---|---|---|---|
| GPT-5.4 Pro | gpt-5.4-pro | $30 | $180 | Most capable, extended context |
| GPT-5.4 | gpt-5.4 | $2.50 | $15 | Best vision quality |
| GPT-5.2 | gpt-5.2 | $1.75 | $14 | Balanced quality and cost |
| GPT-5.4 mini | gpt-5.4-mini | $0.75 | $4.50 | Fast and affordable |
| GPT-5.4 nano | gpt-5.4-nano | $0.20 | $1.25 | Cheapest OpenAI option |
| GPT-5 mini | gpt-5-mini | $0.25 | $2 | Default — fast and cheap |
| Model | Model ID | Input $/MTok | Output $/MTok | Notes |
|---|---|---|---|---|
| Gemini 3.1 Pro | gemini-3.1-pro-preview | $2 | $12 | Preview — most advanced reasoning |
| Gemini 3.1 Flash Lite | gemini-3.1-flash-lite-preview | $0.25 | $1.50 | Preview — lightweight and cheap |
| Gemini 3 Flash | gemini-3-flash-preview | $0.50 | $3 | Default — fast and capable |
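The $/MTok figures above make back-of-envelope cost estimates straightforward. A sketch follows; the prices are copied from the tables for the three default models, while the `estimateCostUSD` helper and the token counts in the example are assumptions, not library behavior:

```typescript
// Rough cost estimate from the per-million-token (MTok) prices listed above.
const PRICES: Record<string, { input: number; output: number }> = {
  "claude-sonnet-4-6": { input: 3, output: 15 },
  "gpt-5-mini": { input: 0.25, output: 2 },
  "gemini-3-flash-preview": { input: 0.5, output: 3 },
};

function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  if (!p) throw new Error(`Unknown model: ${model}`);
  return (inputTokens / 1_000_000) * p.input + (outputTokens / 1_000_000) * p.output;
}

// Hypothetical check on the Anthropic default: 2,000 input + 500 output tokens,
// which works out to about $0.0135.
console.log(estimateCostUSD("claude-sonnet-4-6", 2_000, 500));
```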
MIT