Evaluate statically analyzable expressions
Gets the job done when JSON.stringify can't
Safely evaluate JavaScript (estree) expressions, sync and async.
Define a lazily evaluated property on an object
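The lazy-property pattern behind entries like this one is easy to sketch in plain JavaScript (a generic illustration, not the package's actual API): define a getter that computes the value on first access, then replaces itself with an ordinary data property.

```js
// Generic lazy-property helper (illustrative, not any listed package's API).
// The getter runs `compute` on first access, then redefines the property
// as a plain data property so later reads skip the computation.
function defineLazyProperty(obj, name, compute) {
  Object.defineProperty(obj, name, {
    configurable: true,
    enumerable: true,
    get() {
      const value = compute();
      Object.defineProperty(obj, name, {
        configurable: true,
        enumerable: true,
        writable: true,
        value,
      });
      return value;
    },
  });
  return obj;
}

// Usage: the expensive call only happens when `config.data` is first read.
const config = {};
defineLazyProperty(config, 'data', () => JSON.parse('{"loaded": true}'));
console.log(config.data.loaded); // true (computed here)
console.log(config.data.loaded); // true (cached plain property)
```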
Evaluate Content Security Policies for a wide range of bypasses and weaknesses
A flexible math expression evaluator
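Several entries in this list are math expression evaluators. As a rough illustration of what such a package does internally (a toy sketch, not any listed package's implementation), a recursive-descent evaluator for `+ - * /` and parentheses looks like this:

```js
// Toy recursive-descent evaluator: numbers, + - * /, and parentheses only.
// Real packages in this list add functions, variables, and precedence tables.
function evaluateMath(input) {
  const tokens = input.match(/\d+(?:\.\d+)?|[()+\-*/]/g) || [];
  let pos = 0;

  function parseExpression() { // handles + and -
    let value = parseTerm();
    while (tokens[pos] === '+' || tokens[pos] === '-') {
      const op = tokens[pos++];
      const rhs = parseTerm();
      value = op === '+' ? value + rhs : value - rhs;
    }
    return value;
  }

  function parseTerm() { // handles * and /
    let value = parseFactor();
    while (tokens[pos] === '*' || tokens[pos] === '/') {
      const op = tokens[pos++];
      const rhs = parseFactor();
      value = op === '*' ? value * rhs : value / rhs;
    }
    return value;
  }

  function parseFactor() { // numbers and parenthesized groups
    if (tokens[pos] === '(') {
      pos++; // consume '('
      const value = parseExpression();
      pos++; // consume ')'
      return value;
    }
    return Number(tokens[pos++]);
  }

  return parseExpression();
}

console.log(evaluateMath('2 * (3 + 4) - 5')); // 9
```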
Return a value or an evaluated function (with arguments).
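The value-or-callable pattern this entry describes can be sketched in a few lines (an illustrative helper, not the package's exported function):

```js
// If `value` is callable, invoke it with the given arguments;
// otherwise return it unchanged.
function resultOf(value, ...args) {
  return typeof value === 'function' ? value(...args) : value;
}

console.log(resultOf(42));                    // 42
console.log(resultOf((a, b) => a + b, 2, 3)); // 5
```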
Simple JavaScript expression evaluator
Gets the job done when JSON.stringify can't
Mathematical expression evaluator
A cross-browser / node.js validator powered by JSON Schema
JavaScript/TypeScript bindings for QuickJS, a modern JavaScript interpreter, compiled to WebAssembly.
The modern build of lodash’s internal `baseValues` as a module.
The modern build of lodash’s internal `reEvaluate` as a module.
path.evaluate wrapped in a try catch
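A sketch of that wrapper pattern, assuming Babel's `NodePath#evaluate`, which returns `{ confident, value }` and can throw on some inputs (illustrative only, not this package's source):

```js
// Fall back to a non-confident result instead of letting the exception escape.
function tryEvaluate(path) {
  try {
    return path.evaluate();
  } catch (error) {
    return { confident: false, value: undefined };
  }
}
```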
Mark scopes for deopt which contain a direct eval call
Safe builders for Trusted Types values
An interpreter for TypeScript that can evaluate an arbitrary Node within a TypeScript AST
Universal library for evaluating AI models
JavaScript expression parsing and evaluation.
An interpreter for TypeScript that can evaluate an arbitrary Node within a TypeScript AST
Stringify is to `eval` as `JSON.stringify` is to `JSON.parse`
Mathematical expression evaluator fork with an exports map and security fixes for prototype pollution and code injection
Evaluates the result of an expression given as postfix terms
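Postfix (RPN) evaluation is the classic stack algorithm; a minimal sketch of the technique (not this package's implementation):

```js
// Stack-based postfix evaluation: terms are numbers or binary operators.
function evaluatePostfix(terms) {
  const ops = {
    '+': (a, b) => a + b,
    '-': (a, b) => a - b,
    '*': (a, b) => a * b,
    '/': (a, b) => a / b,
  };
  const stack = [];
  for (const term of terms) {
    if (term in ops) {
      const b = stack.pop();
      const a = stack.pop();
      stack.push(ops[term](a, b));
    } else {
      stack.push(Number(term));
    }
  }
  return stack.pop();
}

// "3 4 2 * +" → 3 + (4 * 2) = 11
console.log(evaluatePostfix(['3', '4', '2', '*', '+'])); // 11
```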
TypeScript definitions for revalidator
An ASCII and LaTeX math parser and evaluator
Eval a string with a passed scope
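One common way to evaluate a string against an explicit scope is to build a `Function` whose parameters are the scope's keys; the sketch below illustrates the idea only (it is not a sandbox, and not necessarily how this package works):

```js
// Build a Function whose parameters are the scope keys, then call it with
// the scope values. The evaluated code still runs with full privileges.
function evalWithScope(source, scope = {}) {
  const keys = Object.keys(scope);
  const values = keys.map((key) => scope[key]);
  return new Function(...keys, `return (${source});`)(...values);
}

console.log(evalWithScope('x * y + 1', { x: 3, y: 4 })); // 13
```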
Reduce function calls in a string, using a callback
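The idea can be sketched with a regex replace whose callback decides each replacement (a simplified illustration that only handles non-nested argument lists, not the package's algorithm):

```js
// Find calls to `name(...)` in a string and let a callback compute
// the replacement for each one.
function reduceFunctionCall(input, name, callback) {
  const pattern = new RegExp(`${name}\\(([^()]*)\\)`, 'g');
  return input.replace(pattern, (match, args) => callback(args, match));
}

// Collapse `double(...)` calls into their computed value.
const out = reduceFunctionCall('width: double(21)px', 'double',
  (args) => String(Number(args) * 2));
console.log(out); // "width: 42px"
```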
Pre-evaluate code at build-time
The WebAssembly partial evaluator
Stub TypeScript definitions entry for math-expression-evaluator, which provides its own type definitions
JavaScript expression parsing and evaluation.
Alias for eval global.
Evaluate a polynomial using double-precision floating-point arithmetic.
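Horner's scheme is the standard way to do this; a minimal sketch (not this package's source), with coefficients ordered from the constant term upward:

```js
// Horner's scheme: c[0] + c[1]*x + c[2]*x^2 + ... evaluated with one
// multiply-add per coefficient, in double-precision floating point.
function evalPoly(coefficients, x) {
  let result = 0;
  for (let i = coefficients.length - 1; i >= 0; i -= 1) {
    result = result * x + coefficients[i];
  }
  return result;
}

// p(x) = 1 + 2x + 3x^2 at x = 2 → 1 + 4 + 12 = 17
console.log(evalPoly([1, 2, 3], 2)); // 17
```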
A comprehensive evaluation framework for assessing AI model outputs across multiple dimensions.
Safer version of eval()
Journey Evaluator library
Evaluator
MongoDB Top Level API Package
GitHub Action for evaluating MCP server tool calls using LLM-based scoring
Much like tests in traditional software, evals are an important part of bringing LLM applications to production. The goal of this package is to help provide a starting point for you to write evals for your LLM applications, from which you can write more customized evals specific to your application.
Pre-evaluate code at build-time with babel-macros