
textprompts
So simple it's not even worth vibe coding, yet it just makes so much sense.
TypeScript/JavaScript companion to textprompts for loading and formatting prompt files.
Are you tired of vendors trying to sell you fancy UIs for prompt management that just make your system more confusing and harder to debug? Isn't it nice to just have your prompts next to your code?
But then you worry: Did my formatter change my prompt? Are those spaces at the beginning actually part of the prompt or just indentation?
textprompts solves this elegantly: treat your prompts as text files and keep your linters and formatters away from them. And you get prompt metadata headers for free!
textprompts/core entry point with zero node: imports for Cloudflare Workers, Deno Deploy, Vercel Edge
# With npm
npm install textprompts
# With Bun
bun add textprompts
# With pnpm
pnpm add textprompts
Super simple by default - TextPrompts just loads text files with optional metadata:
Create a prompt file (greeting.txt):
---
title = "Customer Greeting"
version = "1.0.0"
description = "Friendly greeting for customer support"
---
Hello {customer_name}!
Welcome to {company_name}. We're here to help you with {issue_type}.
Best regards,
{agent_name}
import { loadPrompt, Prompt } from "textprompts";
// Just load it - works with or without metadata
const prompt = await loadPrompt("greeting.txt");
// Or use the static method
const alt = await Prompt.fromPath("greeting.txt");
// Use it safely - all placeholders must be provided
const message = prompt.prompt.format({
customer_name: "Alice",
company_name: "ACME Corp",
issue_type: "billing question",
agent_name: "Sarah"
});
console.log(message);
// Or use partial formatting when needed
const partial = prompt.prompt.format(
{ customer_name: "Alice", company_name: "ACME Corp" },
{ skipValidation: true }
);
// Result: "Hello Alice!\n\nWelcome to ACME Corp. We're here to help you with {issue_type}.\n\nBest regards,\n{agent_name}"
// Prompt objects expose `.meta` and `.prompt`.
// Use `prompt.prompt.format()` for safe formatting or `String(prompt)` for raw text.
Even simpler - no metadata required:
// simple_prompt.txt contains just: "Analyze this data: {data}"
const prompt = await loadPrompt("simple_prompt.txt"); // Just works!
const result = prompt.prompt.format({ data: "sales figures" });
Problem: Modern bundlers (Vite, Webpack, Rollup) often don't include .txt files in your bundle by default.
Solution: Load prompts directly from strings using Prompt.fromString():
import { Prompt } from "textprompts";
// Vite: Use ?raw suffix to import as string
import greetingContent from "./greeting.txt?raw";
// Or with Webpack using raw-loader
// import greetingContent from "raw-loader!./greeting.txt";
// Load from the string content
const prompt = Prompt.fromString(greetingContent);
// Works identically to file-based loading
const message = prompt.format({
customer_name: "Alice",
company_name: "ACME Corp",
issue_type: "billing question",
agent_name: "Sarah"
});
With metadata support:
import promptContent from "./system-prompt.txt?raw";
// The ?raw import includes TOML front-matter if present
const prompt = Prompt.fromString(promptContent, {
meta: "allow", // or MetadataMode.ALLOW
path: "system-prompt.txt" // Optional: for better error messages
});
console.log(prompt.meta?.title); // Access metadata
console.log(prompt.meta?.version); // Works like fromPath
When to use fromString vs fromPath:
- fromPath() for Node.js/Bun server-side code
- fromString() for bundled frontend code (Vite, Webpack, etc.)
- fromString() when loading prompts from APIs or databases
// Edge runtimes (Cloudflare Workers, Deno Deploy, Vercel Edge, browsers)
import { Prompt, parseString, PromptString } from "textprompts/core";
// Node.js (includes file-system APIs)
import { loadPrompt, savePrompt } from "textprompts";
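As a minimal sketch, here is a Cloudflare Workers-style handler that uses only the core entry point; the handler shape and the placeholder values are illustrative, not part of the library:
import { PromptString } from "textprompts/core";
// No node: imports, so this module also runs on Deno Deploy and Vercel Edge
const template = new PromptString("Hello {name}! How can we help with {issue_type}?");
export default {
  async fetch(request: Request): Promise<Response> {
    // Format the prompt per request; the values here are illustrative
    const body = template.format({ name: "edge user", issue_type: "billing" });
    return new Response(body);
  },
};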
Never ship a prompt with missing variables again:
import { PromptString } from "textprompts";
const template = new PromptString("Hello {name}, your order {order_id} is {status}");
// ✅ Strict formatting - all placeholders must be provided
const result = template.format({ name: "Alice", order_id: "12345", status: "shipped" });
// ❌ This catches the error by default
try {
template.format({ name: "Alice" }); // Missing order_id and status
} catch (error) {
console.error(error.message); // Missing format variables: ["order_id", "status"]
}
// ✅ Partial formatting - replace only what you have
const partial = template.format(
{ name: "Alice" },
{ skipValidation: true }
);
console.log(partial); // "Hello Alice, your order {order_id} is {status}"
TextPrompts is designed to be super simple by default - just load text files with optional metadata when available. No configuration needed!
import { loadPrompt, setMetadata, MetadataMode } from "textprompts";
// Default behavior: load metadata if available, otherwise just use the file content
const prompt = await loadPrompt("my_prompt.txt"); // Just works!
// Three modes available for different use cases:
// 1. ALLOW (default): Load metadata if present, don't worry if it's incomplete
setMetadata(MetadataMode.ALLOW); // Flexible metadata loading (default)
const flexible = await loadPrompt("prompt.txt"); // Loads any metadata found
// 2. IGNORE: Treat as simple text file, use filename as title
setMetadata(MetadataMode.IGNORE); // Super simple file loading
const simple = await loadPrompt("prompt.txt"); // No metadata parsing
console.log(simple.meta?.title); // "prompt" (from filename)
// 3. STRICT: Require complete metadata for production use
setMetadata(MetadataMode.STRICT); // Prevent errors in production
const strict = await loadPrompt("prompt.txt"); // Must have title, description, version
// Override per prompt when needed
const override = await loadPrompt("prompt.txt", { meta: "strict" });
Why this design?
import OpenAI from "openai";
import { loadPrompt } from "textprompts";
const systemPrompt = await loadPrompt("prompts/system.txt");
const client = new OpenAI();
const response = await client.chat.completions.create({
model: "gpt-5-mini",
messages: [
{
role: "system",
content: systemPrompt.prompt.format({
company_name: "ACME Corp",
tone: "professional"
})
},
{ role: "user", content: "Hello!" }
]
});
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { loadPrompt } from "textprompts";
const systemPrompt = await loadPrompt("prompts/system.txt");
const result = streamText({
model: openai('gpt-5-mini'),
messages: [
{
role: 'system',
content: systemPrompt.prompt.format({
company_name: "ACME Corp",
tone: "friendly"
})
},
{ role: 'user', content: 'Hello!' }
]
});
for await (const delta of result.textStream) {
process.stdout.write(delta);
}
import Anthropic from "@anthropic-ai/sdk";
import { loadPrompt } from "textprompts";
const systemPrompt = await loadPrompt("prompts/system.txt");
const anthropic = new Anthropic();
const message = await anthropic.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 1024,
system: systemPrompt.prompt.format({
company_name: "ACME Corp",
tone: "professional"
}),
messages: [
{ role: "user", content: "Hello!" }
]
});
import { loadPrompt } from "textprompts";
const env = process.env.NODE_ENV || "development";
const systemPrompt = await loadPrompt(`prompts/${env}/system.txt`);
// prompts/development/system.txt - verbose logging
// prompts/production/system.txt - concise responses
import { loadPrompt } from "textprompts";
// Easy A/B testing
const promptVersion = "v2"; // or "v1", "experimental", etc.
const prompt = await loadPrompt(`prompts/${promptVersion}/system.txt`);
// Git handles the rest:
// git checkout experiment-branch
// git diff main -- prompts/
TextPrompts uses TOML front-matter (optional) followed by your prompt content:
---
title = "My Prompt"
version = "1.0.0"
author = "Your Name"
description = "What this prompt does"
created = "2024-01-15"
---
Your prompt content goes here.
Use {variables} for templating.
Choose the right level of strictness for your use case:
You can also set the environment variable TEXTPROMPTS_METADATA_MODE to one of strict, allow, or ignore before importing the library to configure the default mode.
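A minimal sketch of doing this from within a Node.js script, assuming the variable is read when the module is first loaded (hence the dynamic import):
// Set the default mode before textprompts is imported
process.env.TEXTPROMPTS_METADATA_MODE = "strict";
const { loadPrompt } = await import("textprompts");
const prompt = await loadPrompt("prompts/system.txt");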
import { setMetadata, MetadataMode } from "textprompts";
// Set globally
setMetadata(MetadataMode.ALLOW); // Default: flexible metadata loading
setMetadata(MetadataMode.IGNORE); // Simple: no metadata parsing
setMetadata(MetadataMode.STRICT); // Production: require complete metadata
// Or override per prompt
const prompt = await loadPrompt("file.txt", { meta: "strict" });
loadPrompt(path, options?)
Load a single prompt file.
async function loadPrompt(
path: string,
options?: {
meta?: MetadataMode | string | null;
}
): Promise<Prompt>
- path: Path to the prompt file
- meta: Metadata handling mode - MetadataMode.STRICT, MetadataMode.ALLOW, MetadataMode.IGNORE, or string equivalents. null uses the global config.
Returns a Prompt object with:
- prompt.meta: Metadata from TOML front-matter (always present)
- prompt.prompt: The prompt content as a PromptString
- prompt.path: Path to the original file
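For example (the file name is illustrative):
import { loadPrompt } from "textprompts";
const prompt = await loadPrompt("prompts/system.txt");
console.log(prompt.meta?.title);  // metadata parsed from the front-matter
console.log(prompt.prompt.value); // the raw prompt text
console.log(prompt.path);         // "prompts/system.txt"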
setMetadata(mode) / getMetadata()
Set or get the global metadata handling mode.
function setMetadata(mode: MetadataMode | string): void
function getMetadata(): MetadataMode
- mode: MetadataMode.STRICT, MetadataMode.ALLOW, MetadataMode.IGNORE, or string equivalents
savePrompt(path, content)
Save a prompt to a file.
async function savePrompt(
path: string,
content: string | Prompt
): Promise<void>
- path: Path to save the prompt file
- content: Either a string (creates a template with the required fields) or a Prompt object
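A short sketch based on the signature above; the paths and prompt content are illustrative. Per the parameter description, passing a string creates a template with the required metadata fields:
import { savePrompt, loadPrompt } from "textprompts";
// Passing a string writes a new prompt file with a metadata template
await savePrompt("prompts/summary.txt", "Summarize {document} in {word_count} words.");
// A loaded Prompt object can be written back out as-is
const existing = await loadPrompt("prompts/summary.txt");
await savePrompt("prompts/summary-copy.txt", existing);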
parseSections(text) and section utilities
Parse mixed Markdown/XML prompt structure directly from a string or Uint8Array.
- parseSections(text): Returns a ParseResult with sections, anchors, duplicateAnchors, frontmatter, and totalChars
- generateSlug(heading): Creates the same auto-anchor slug used by the parser (lowercase, non-alphanumeric runs → _)
- normalizeAnchorId(id): Canonical normalization: lowercase, collapse non-alphanumeric runs to _, strip leading/trailing _
- injectAnchors(text): Inserts missing <a id="..."></a> lines before Markdown headings
- renderToc(result, path): Renders a human-readable table of contents
- getSectionText(text, anchorId): Look up a section body by anchor ID (fuzzy: normalizes both the query and the stored IDs)
- sliceSectionContent(text, section): Extract the body text of a section using its content boundary fields
- loadSection(path, anchorId, options?): Load a named section from a file as a Prompt
import { injectAnchors, loadSection, parseSections, renderToc, getSectionText, sliceSectionContent, normalizeAnchorId } from "textprompts";
const text = "## Intro\n\nBody.";
const result = parseSections(text);
console.log(result.sections[0].anchorId); // "intro"
const anchored = injectAnchors(text);
console.log(anchored.text); // <a id="intro"></a>\n## Intro...
console.log(renderToc(anchored.result, "prompt.txt"));
// Look up a section body (tolerates "my-section", "my_section", "MY_SECTION")
const sectionText = getSectionText(text, "intro");
console.log(sectionText); // "Body."
// Extract body content of a section (excludes heading line)
const body = sliceSectionContent(text, result.sections[0]);
console.log(body); // "Body."
// Load a named XML section from a multi-section file
// agents.txt: <system id="default">...</system> <system id="expert">...</system>
const expert = await loadSection("agents.txt", "expert");
console.log(String(expert)); // "You are an expert assistant..."
// normalizeAnchorId is applied universally: XML tags, id= attrs, headings
console.log(normalizeAnchorId("My-Section")); // "my_section"
console.log(normalizeAnchorId("USER_TEMPLATE")); // "user_template"
All anchor IDs use a single canonical form: lowercase, non-alphanumeric runs collapsed to _, leading/trailing _ stripped.
| Source | Raw | Normalized |
|---|---|---|
| XML tag name | <user_template> | user_template |
| XML id= attr | id="my-section" | my_section |
| Markdown heading | ## My Section | my_section |
| <a id=""> | <a id="custom-ID"> | custom_id |
This means loadSection("file.txt", "my-section"), "my_section", and "MY_SECTION" all find the same section.
PromptString
A string wrapper that validates format() calls:
class PromptString {
readonly value: string;
readonly placeholders: Set<string>;
constructor(value: string);
format(options?: FormatOptions): string;
format(args: unknown[], kwargs?: Record<string, unknown>, options?: FormatCallOptions): string;
toString(): string;
valueOf(): string;
strip(): string;
slice(start?: number, end?: number): string;
get length(): number;
}
interface FormatOptions {
args?: unknown[];
kwargs?: Record<string, unknown>;
skipValidation?: boolean;
}
Examples:
import { PromptString } from "textprompts";
const template = new PromptString("Hello {name}, you are {role}");
// Strict formatting (default) - all placeholders required
const result = template.format({ name: "Alice", role: "admin" }); // ✅ Works
// template.format({ name: "Alice" }); // ❌ Throws Error
// Partial formatting - replace only available placeholders
const partial = template.format(
{ name: "Alice" },
{ skipValidation: true }
); // ✅ "Hello Alice, you are {role}"
// Access placeholder information
console.log([...template.placeholders]); // ['name', 'role']
Prompt
The main prompt object:
class Prompt {
readonly path: string;
readonly meta: PromptMeta | null;
readonly prompt: PromptString;
static async fromPath(path: string, options?: { meta?: MetadataMode | string | null }): Promise<Prompt>;
static fromString(content: string, options?: { path?: string; meta?: MetadataMode | string | null }): Prompt;
toString(): string;
valueOf(): string;
strip(): string;
format(options?: FormatOptions): string;
format(args: unknown[], kwargs?: Record<string, unknown>, options?: FormatCallOptions): string;
get length(): number;
slice(start?: number, end?: number): string;
}
interface PromptMeta {
title?: string | null;
version?: string | null;
author?: string | null;
created?: string | null;
description?: string | null;
}
Prompt.fromString(content, options?)
Load a prompt from a string (useful for bundlers):
static fromString(
content: string,
options?: {
path?: string; // Optional path for metadata/error messages (default: "<string>")
meta?: MetadataMode | string | null; // Metadata mode (default: global config)
}
): Prompt
- content: String containing the prompt (may include TOML front-matter)
- path: Optional path for better error messages and metadata extraction (defaults to "<string>")
- meta: Metadata handling mode (same as fromPath)
Returns a Prompt object with the same structure as fromPath.
Examples:
import { Prompt, MetadataMode } from "textprompts";
// Simple usage
const simple = Prompt.fromString("Hello {name}!");
// With Vite raw import
import content from "./prompt.txt?raw";
const fromRaw = Prompt.fromString(content, { path: "prompt.txt" });
// With strict metadata validation
const strict = Prompt.fromString(content, { meta: MetadataMode.STRICT });
TextPrompts provides specific exception types:
import {
TextPromptsError, // Base exception
FileMissingError, // File not found
MissingMetadataError, // No TOML front-matter when required
InvalidMetadataError, // Invalid TOML syntax
MalformedHeaderError, // Malformed front-matter structure
} from "textprompts";
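A minimal sketch of handling them; the path and strict mode are illustrative:
import { loadPrompt, FileMissingError, MissingMetadataError, TextPromptsError } from "textprompts";
try {
  const prompt = await loadPrompt("prompts/system.txt", { meta: "strict" });
  console.log(prompt.meta?.title);
} catch (error) {
  if (error instanceof FileMissingError) {
    console.error("Prompt file not found");
  } else if (error instanceof MissingMetadataError) {
    console.error("Strict mode requires TOML front-matter");
  } else if (error instanceof TextPromptsError) {
    console.error("Other textprompts error:", error.message);
  } else {
    throw error;
  }
}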
Organize by purpose: Group related prompts in folders
prompts/
├── customer-support/
├── content-generation/
└── code-review/
Use semantic versioning: Version your prompts like code
version = "1.2.0" # major.minor.patch
Document your variables: List expected variables in descriptions
description = "Requires: customer_name, issue_type, agent_name"
Test your prompts: Write unit tests for critical prompts
import { test, expect } from "bun:test";
import { loadPrompt } from "textprompts";
test("greeting prompt formats correctly", async () => {
const prompt = await loadPrompt("greeting.txt");
const result = prompt.prompt.format({
customer_name: "Test",
company_name: "Test Corp",
issue_type: "test",
agent_name: "Bot"
});
expect(result).toContain("Test");
});
Use environment-specific prompts: Different prompts for dev/prod
const env = process.env.NODE_ENV || "development";
const prompt = await loadPrompt(`prompts/${env}/system.txt`);
You could, but then you lose:
See the examples/ directory for complete, runnable examples:
Run them with:
bun examples/basic-usage.ts
bun examples/fromstring-example.ts
bun examples/sections-usage.ts
bun examples/openai-example.ts
bun examples/aisdk-example.ts
Full documentation is available in the docs/ directory:
MIT License - see LICENSE for details.
textprompts - Because your prompts deserve better than being buried in code strings. 🚀