textprompts 0.8.0 · npm
textprompts

So simple it's not even worth vibe coding, yet it just makes so much sense.

TypeScript/JavaScript companion to textprompts for loading and formatting prompt files.

Are you tired of vendors trying to sell you fancy UIs for prompt management that just make your system more confusing and harder to debug? Isn't it nice to just have your prompts next to your code?

But then you worry: Did my formatter change my prompt? Are those spaces at the beginning actually part of the prompt or just indentation?

textprompts solves this elegantly: treat your prompts as text files and keep your linters and formatters away from them. And you get prompt metadata headers for free!

Why textprompts?

  • Prompts live next to your code - no external systems to manage
  • Git is your version control - diff, branch, and experiment with ease
  • No formatter headaches - your prompts stay exactly as you wrote them
  • Minimal markup - just TOML front-matter when you need metadata (or no metadata if you prefer!)
  • Lightweight dependencies - minimal footprint with just TOML and YAML parsers
  • Safe formatting - catch missing variables before they cause problems
  • Works with everything - OpenAI, Anthropic, local models, function calls
  • Node.js & Bun compatible - works seamlessly with both runtimes
  • Dual ESM/CJS build support - works with both module systems
  • Edge-ready - textprompts/core entry point with zero node: imports for Cloudflare Workers, Deno Deploy, Vercel Edge

Installation

# With npm
npm install textprompts

# With Bun
bun add textprompts

# With pnpm
pnpm add textprompts

Quick Start

Super simple by default - TextPrompts just loads text files with optional metadata:

Loading from Files

  • Create a prompt file (greeting.txt):
---
title = "Customer Greeting"
version = "1.0.0"
description = "Friendly greeting for customer support"
---
Hello {customer_name}!

Welcome to {company_name}. We're here to help you with {issue_type}.

Best regards,
{agent_name}
  • Load and use it (no configuration needed):
import { loadPrompt, Prompt } from "textprompts";

// Just load it - works with or without metadata
const prompt = await loadPrompt("greeting.txt");

// Or use the static method
const alt = await Prompt.fromPath("greeting.txt");

// Use it safely - all placeholders must be provided
const message = prompt.prompt.format({
  customer_name: "Alice",
  company_name: "ACME Corp",
  issue_type: "billing question",
  agent_name: "Sarah"
});

console.log(message);

// Or use partial formatting when needed
const partial = prompt.prompt.format(
  { customer_name: "Alice", company_name: "ACME Corp" },
  { skipValidation: true }
);
// Result: "Hello Alice!\n\nWelcome to ACME Corp. We're here to help you with {issue_type}.\n\nBest regards,\n{agent_name}"

// Prompt objects expose `.meta` and `.prompt`.
// Use `prompt.prompt.format()` for safe formatting or `String(prompt)` for raw text.

Even simpler - no metadata required:

// simple_prompt.txt contains just: "Analyze this data: {data}"
const prompt = await loadPrompt("simple_prompt.txt");  // Just works!
const result = prompt.prompt.format({ data: "sales figures" });

Loading from Strings (for Bundlers)

Problem: Modern bundlers (Vite, Webpack, Rollup) often don't include .txt files in your bundle by default.

Solution: Load prompts directly from strings using Prompt.fromString():

import { Prompt } from "textprompts";

// Vite: Use ?raw suffix to import as string
import greetingContent from "./greeting.txt?raw";

// Or with Webpack using raw-loader
// import greetingContent from "raw-loader!./greeting.txt";

// Load from the string content
const prompt = Prompt.fromString(greetingContent);

// Works identically to file-based loading
const message = prompt.format({
  customer_name: "Alice",
  company_name: "ACME Corp",
  issue_type: "billing question",
  agent_name: "Sarah"
});

With metadata support:

import promptContent from "./system-prompt.txt?raw";

// The ?raw import includes TOML front-matter if present
const prompt = Prompt.fromString(promptContent, {
  meta: "allow",  // or MetadataMode.ALLOW
  path: "system-prompt.txt"  // Optional: for better error messages
});

console.log(prompt.meta?.title);     // Access metadata
console.log(prompt.meta?.version);   // Works like fromPath

When to use fromString vs fromPath:

  • Use fromPath() for Node.js/Bun server-side code
  • Use fromString() for bundled frontend code (Vite, Webpack, etc.)
  • Use fromString() when loading prompts from APIs or databases

Edge Runtimes

// Edge runtimes (Cloudflare Workers, Deno Deploy, Vercel Edge, browsers)
import { Prompt, parseString, PromptString } from "textprompts/core";

// Node.js (includes file-system APIs)
import { loadPrompt, savePrompt } from "textprompts";

Core Features

Safe String Formatting

Never ship a prompt with missing variables again:

import { PromptString } from "textprompts";

const template = new PromptString("Hello {name}, your order {order_id} is {status}");

// ✅ Strict formatting - all placeholders must be provided
const result = template.format({ name: "Alice", order_id: "12345", status: "shipped" });

// ❌ This catches the error by default
try {
  template.format({ name: "Alice" });  // Missing order_id and status
} catch (error) {
  console.error(error.message);  // Missing format variables: ["order_id", "status"]
}

// ✅ Partial formatting - replace only what you have
const partial = template.format(
  { name: "Alice" },
  { skipValidation: true }
);
console.log(partial);  // "Hello Alice, your order {order_id} is {status}"

Simple & Flexible Metadata Handling

TextPrompts is designed to be super simple by default - just load text files with optional metadata when available. No configuration needed!

import { loadPrompt, setMetadata, MetadataMode } from "textprompts";

// Default behavior: load metadata if available, otherwise just use the file content
const prompt = await loadPrompt("my_prompt.txt");  // Just works!

// Three modes available for different use cases:
// 1. ALLOW (default): Load metadata if present, don't worry if it's incomplete
setMetadata(MetadataMode.ALLOW);  // Flexible metadata loading (default)
const flexible = await loadPrompt("prompt.txt");  // Loads any metadata found

// 2. IGNORE: Treat as simple text file, use filename as title
setMetadata(MetadataMode.IGNORE);  // Super simple file loading
const simple = await loadPrompt("prompt.txt");  // No metadata parsing
console.log(simple.meta?.title);  // "prompt" (from filename)

// 3. STRICT: Require complete metadata for production use
setMetadata(MetadataMode.STRICT);  // Prevent errors in production
const strict = await loadPrompt("prompt.txt");  // Must have title, description, version

// Override per prompt when needed
const override = await loadPrompt("prompt.txt", { meta: "strict" });

Why this design?

  • Default = Flexible: Parse metadata if present, no friction if absent
  • No configuration needed: Just load files and it works
  • Production-Safe: Use strict mode to catch missing metadata before deployment

Real-World Examples

OpenAI Integration

import OpenAI from "openai";
import { loadPrompt } from "textprompts";

const systemPrompt = await loadPrompt("prompts/system.txt");

const client = new OpenAI();
const response = await client.chat.completions.create({
  model: "gpt-5-mini",
  messages: [
    {
      role: "system",
      content: systemPrompt.prompt.format({
        company_name: "ACME Corp",
        tone: "professional"
      })
    },
    { role: "user", content: "Hello!" }
  ]
});

Vercel AI SDK Integration

import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { loadPrompt } from "textprompts";

const systemPrompt = await loadPrompt("prompts/system.txt");

const result = streamText({
  model: openai('gpt-5-mini'),
  messages: [
    {
      role: 'system',
      content: systemPrompt.prompt.format({
        company_name: "ACME Corp",
        tone: "friendly"
      })
    },
    { role: 'user', content: 'Hello!' }
  ]
});

for await (const delta of result.textStream) {
  process.stdout.write(delta);
}

Anthropic Claude Integration

import Anthropic from "@anthropic-ai/sdk";
import { loadPrompt } from "textprompts";

const systemPrompt = await loadPrompt("prompts/system.txt");

const anthropic = new Anthropic();

const message = await anthropic.messages.create({
  model: "claude-3-5-sonnet-20241022",
  max_tokens: 1024,
  system: systemPrompt.prompt.format({
    company_name: "ACME Corp",
    tone: "professional"
  }),
  messages: [
    { role: "user", content: "Hello!" }
  ]
});

Environment-Specific Prompts

import { loadPrompt } from "textprompts";

const env = process.env.NODE_ENV || "development";
const systemPrompt = await loadPrompt(`prompts/${env}/system.txt`);

// prompts/development/system.txt - verbose logging
// prompts/production/system.txt - concise responses

Prompt Versioning & Experimentation

import { loadPrompt } from "textprompts";

// Easy A/B testing
const promptVersion = "v2";  // or "v1", "experimental", etc.
const prompt = await loadPrompt(`prompts/${promptVersion}/system.txt`);

// Git handles the rest:
// git checkout experiment-branch
// git diff main -- prompts/

File Format

TextPrompts uses TOML front-matter (optional) followed by your prompt content:

---
title = "My Prompt"
version = "1.0.0"
author = "Your Name"
description = "What this prompt does"
created = "2024-01-15"
---
Your prompt content goes here.

Use {variables} for templating.
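Conceptually, the loader splits the file at the leading `---` delimiters: TOML front-matter between them, prompt body after. A rough illustration of that split (a sketch of the idea, not textprompts' actual parser):

```typescript
// Illustrative sketch of the front-matter split (not the library's real parser).
const raw = `---
title = "My Prompt"
---
Your prompt content goes here.`;

// Front-matter sits between the two leading "---" lines; the rest is the body.
const match = raw.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
const frontMatter = match ? match[1] : "";
const body = match ? match[2] : raw;

console.log(frontMatter); // 'title = "My Prompt"'
console.log(body);        // "Your prompt content goes here."
```

A file with no `---` header simply fails the match, so the whole file becomes the body, which is why metadata is optional.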

Metadata Modes

Choose the right level of strictness for your use case:

  • ALLOW (default) - Load metadata if present, don't worry about completeness
  • IGNORE - Simple text file loading, filename becomes title
  • STRICT - Require complete metadata (title, description, version) for production safety

You can also set the environment variable TEXTPROMPTS_METADATA_MODE to one of strict, allow, or ignore before importing the library to configure the default mode.
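For example, in a deployment script or CI step (the `node app.js` line is a placeholder for your own entry point):

```shell
# Set the library-wide default before the process that imports textprompts starts.
export TEXTPROMPTS_METADATA_MODE=strict
echo "$TEXTPROMPTS_METADATA_MODE"   # strict
# node app.js   # loadPrompt() would now require complete metadata by default
```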

import { setMetadata, MetadataMode } from "textprompts";

// Set globally
setMetadata(MetadataMode.ALLOW);    // Default: flexible metadata loading
setMetadata(MetadataMode.IGNORE);   // Simple: no metadata parsing
setMetadata(MetadataMode.STRICT);   // Production: require complete metadata

// Or override per prompt
const prompt = await loadPrompt("file.txt", { meta: "strict" });

API Reference

loadPrompt(path, options?)

Load a single prompt file.

async function loadPrompt(
  path: string,
  options?: {
    meta?: MetadataMode | string | null;
  }
): Promise<Prompt>
  • path: Path to the prompt file
  • meta: Metadata handling mode - MetadataMode.STRICT, MetadataMode.ALLOW, MetadataMode.IGNORE, or string equivalents. null uses global config.

Returns a Prompt object with:

  • prompt.meta: Metadata from TOML front-matter, or null when none is available
  • prompt.prompt: The prompt content as a PromptString
  • prompt.path: Path to the original file

setMetadata(mode) / getMetadata()

Set or get the global metadata handling mode.

function setMetadata(mode: MetadataMode | string): void
function getMetadata(): MetadataMode
  • mode: MetadataMode.STRICT, MetadataMode.ALLOW, MetadataMode.IGNORE, or string equivalents

savePrompt(path, content)

Save a prompt to a file.

async function savePrompt(
  path: string,
  content: string | Prompt
): Promise<void>
  • path: Path to save the prompt file
  • content: Either a string (creates template with required fields) or a Prompt object

parseSections(text) and section utilities

Parse mixed Markdown/XML prompt structure directly from a string or Uint8Array.

  • parseSections(text): Returns a ParseResult with sections, anchors, duplicateAnchors, frontmatter, and totalChars
  • generateSlug(heading): Creates the same auto-anchor slug used by the parser (lowercase, non-alphanumeric runs → _)
  • normalizeAnchorId(id): Canonical normalization — lowercase, collapse non-alphanumeric runs to _, strip leading/trailing _
  • injectAnchors(text): Inserts missing <a id="..."></a> lines before Markdown headings
  • renderToc(result, path): Renders a human-readable table of contents
  • getSectionText(text, anchorId): Look up a section body by anchor ID (fuzzy: normalizes both query and stored IDs)
  • sliceSectionContent(text, section): Extract the body text of a section using its content boundary fields
  • loadSection(path, anchorId, options?): Load a named section from a file as a Prompt
import { injectAnchors, loadSection, parseSections, renderToc, getSectionText, sliceSectionContent, normalizeAnchorId } from "textprompts";

const text = "## Intro\n\nBody.";

const result = parseSections(text);
console.log(result.sections[0].anchorId); // "intro"

const anchored = injectAnchors(text);
console.log(anchored.text); // <a id="intro"></a>\n## Intro...

console.log(renderToc(anchored.result, "prompt.txt"));

// Look up a section body (tolerates "my-section", "my_section", "MY_SECTION")
const sectionText = getSectionText(text, "intro");
console.log(sectionText); // "Body."

// Extract body content of a section (excludes heading line)
const body = sliceSectionContent(text, result.sections[0]);
console.log(body); // "Body."

// Load a named XML section from a multi-section file
// agents.txt: <system id="default">...</system>  <system id="expert">...</system>
const expert = await loadSection("agents.txt", "expert");
console.log(String(expert)); // "You are an expert assistant..."

// normalizeAnchorId is applied universally: XML tags, id= attrs, headings
console.log(normalizeAnchorId("My-Section")); // "my_section"
console.log(normalizeAnchorId("USER_TEMPLATE")); // "user_template"

Anchor ID normalization

All anchor IDs use a single canonical form: lowercase, non-alphanumeric runs collapsed to _, leading/trailing _ stripped.

| Source | Raw | Normalized |
| --- | --- | --- |
| XML tag name | <user_template> | user_template |
| XML id= attr | id="my-section" | my_section |
| Markdown heading | ## My Section | my_section |
| <a id=""> | <a id="custom-ID"> | custom_id |

This means loadSection("file.txt", "my-section"), "my_section", and "MY_SECTION" all find the same section.
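The rule can be sketched in a few lines (an illustration of the documented behavior, not the library's internal code):

```typescript
// Illustrative re-implementation of the normalization rule described above.
function normalize(id: string): string {
  return id
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "_")  // collapse non-alphanumeric runs to "_"
    .replace(/^_+|_+$/g, "");     // strip leading/trailing "_"
}

console.log(normalize("My-Section"));     // "my_section"
console.log(normalize("custom-ID"));      // "custom_id"
console.log(normalize("USER_TEMPLATE"));  // "user_template"
```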

PromptString

A string wrapper that validates format() calls:

class PromptString {
  readonly value: string;
  readonly placeholders: Set<string>;

  constructor(value: string);

  format(options?: FormatOptions): string;
  format(args: unknown[], kwargs?: Record<string, unknown>, options?: FormatCallOptions): string;

  toString(): string;
  valueOf(): string;
  strip(): string;
  slice(start?: number, end?: number): string;
  get length(): number;
}

interface FormatOptions {
  args?: unknown[];
  kwargs?: Record<string, unknown>;
  skipValidation?: boolean;
}

Examples:

import { PromptString } from "textprompts";

const template = new PromptString("Hello {name}, you are {role}");

// Strict formatting (default) - all placeholders required
const result = template.format({ name: "Alice", role: "admin" });  // ✅ Works
// template.format({ name: "Alice" });  // ❌ Throws Error

// Partial formatting - replace only available placeholders
const partial = template.format(
  { name: "Alice" },
  { skipValidation: true }
);  // ✅ "Hello Alice, you are {role}"

// Access placeholder information
console.log([...template.placeholders]);  // ['name', 'role']

Prompt

The main prompt object:

class Prompt {
  readonly path: string;
  readonly meta: PromptMeta | null;
  readonly prompt: PromptString;

  static async fromPath(path: string, options?: { meta?: MetadataMode | string | null }): Promise<Prompt>;
  static fromString(content: string, options?: { path?: string; meta?: MetadataMode | string | null }): Prompt;

  toString(): string;
  valueOf(): string;
  strip(): string;
  format(options?: FormatOptions): string;
  format(args: unknown[], kwargs?: Record<string, unknown>, options?: FormatCallOptions): string;
  get length(): number;
  slice(start?: number, end?: number): string;
}

interface PromptMeta {
  title?: string | null;
  version?: string | null;
  author?: string | null;
  created?: string | null;
  description?: string | null;
}

Prompt.fromString(content, options?)

Load a prompt from a string (useful for bundlers):

static fromString(
  content: string,
  options?: {
    path?: string;  // Optional path for metadata/error messages (default: "<string>")
    meta?: MetadataMode | string | null;  // Metadata mode (default: global config)
  }
): Prompt
  • content: String containing the prompt (may include TOML front-matter)
  • path: Optional path for better error messages and metadata extraction (defaults to "<string>")
  • meta: Metadata handling mode (same as fromPath)

Returns a Prompt object with the same structure as fromPath.

Examples:

import { Prompt, MetadataMode } from "textprompts";

// Simple usage
const prompt = Prompt.fromString("Hello {name}!");

// With Vite raw import
import content from "./prompt.txt?raw";
const prompt = Prompt.fromString(content, { path: "prompt.txt" });

// With strict metadata validation
const prompt = Prompt.fromString(content, { meta: MetadataMode.STRICT });

Error Handling

TextPrompts provides specific exception types:

import {
  TextPromptsError,       // Base exception
  FileMissingError,       // File not found
  MissingMetadataError,   // No TOML front-matter when required
  InvalidMetadataError,   // Invalid TOML syntax
  MalformedHeaderError,   // Malformed front-matter structure
} from "textprompts";
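Since TextPromptsError is the base exception, instanceof checks let you handle specific failures and still fall back to a catch-all. A self-contained sketch of that pattern, using stand-in classes so it runs on its own (in real code, import the classes above from textprompts):

```typescript
// Stand-in declarations mirroring the hierarchy above; the real classes
// come from "textprompts".
class TextPromptsError extends Error {}
class FileMissingError extends TextPromptsError {}
class MissingMetadataError extends TextPromptsError {}

function describe(error: unknown): string {
  if (error instanceof FileMissingError) return "prompt file not found";
  if (error instanceof MissingMetadataError) return "front-matter required but absent";
  if (error instanceof TextPromptsError) return "other textprompts failure";
  throw error; // not one of ours: re-throw
}

console.log(describe(new FileMissingError("greeting.txt")));    // "prompt file not found"
console.log(describe(new MissingMetadataError("strict mode"))); // "front-matter required but absent"
```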

Best Practices

  • Organize by purpose: Group related prompts in folders

    prompts/
    ├── customer-support/
    ├── content-generation/
    └── code-review/
    
  • Use semantic versioning: Version your prompts like code

    version = "1.2.0"  # major.minor.patch
    
  • Document your variables: List expected variables in descriptions

    description = "Requires: customer_name, issue_type, agent_name"
    
  • Test your prompts: Write unit tests for critical prompts

    import { test, expect } from "bun:test";
    import { loadPrompt } from "textprompts";
    
    test("greeting prompt formats correctly", async () => {
      const prompt = await loadPrompt("greeting.txt");
      const result = prompt.prompt.format({
        customer_name: "Test",
        company_name: "Test Corp",
        issue_type: "test",
        agent_name: "Bot"
      });
      expect(result).toContain("Test");
    });
    
  • Use environment-specific prompts: Different prompts for dev/prod

    const env = process.env.NODE_ENV || "development";
    const prompt = await loadPrompt(`prompts/${env}/system.txt`);
    

Why Not Just Use Template Strings?

You could, but then you lose:

  • Metadata tracking (versions, authors, descriptions)
  • Safe formatting (catch missing variables)
  • Organized storage (searchable, documentable)
  • Version control benefits (proper diffs, blame, history)
  • Tooling support (CLI, validation, testing)
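The safe-formatting point is concrete: a plain template literal interpolates a missing value as "undefined" and ships it silently, while a validated formatter fails fast. A minimal illustration (safeFormat is a hypothetical stand-in for PromptString.format, not the library's implementation):

```typescript
// Plain template literal: a missing variable fails silently.
const name: string | undefined = undefined;
const unsafe = `Hello ${name}!`; // "Hello undefined!" ships to your model

// A tiny validated formatter in the spirit of PromptString (illustrative only).
function safeFormat(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_: string, key: string) => {
    if (!(key in vars)) throw new Error(`Missing format variable: ${key}`);
    return vars[key];
  });
}

console.log(unsafe);                                          // "Hello undefined!"
console.log(safeFormat("Hello {name}!", { name: "Alice" })); // "Hello Alice!"
// safeFormat("Hello {name}!", {});  // throws instead of shipping "undefined"
```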

Examples

See the examples/ directory for complete, runnable examples.

Run them with:

bun examples/basic-usage.ts
bun examples/fromstring-example.ts
bun examples/sections-usage.ts
bun examples/openai-example.ts
bun examples/aisdk-example.ts

Documentation

Full documentation is available in the docs/ directory.

License

MIT License - see LICENSE for details.

textprompts - Because your prompts deserve better than being buried in code strings. 🚀

Package last updated on 29 Mar 2026