# @mondaydotcomorg/atp-runtime

Runtime APIs available to agents during code execution (LLM, embedding, approval, cache, logging, progress).
## Overview

This package provides the `atp.*` runtime APIs that agents can use when executing code on ATP servers. These APIs enable LLM calls, embeddings, approvals, caching, logging, and progress reporting.
## Installation

```bash
npm install @mondaydotcomorg/atp-runtime
```
## Architecture

```mermaid
graph TB
  Runtime[Runtime Registry] --> LLM[atp.llm.*]
  Runtime --> Embedding[atp.embedding.*]
  Runtime --> Approval[atp.approval.*]
  Runtime --> Cache[atp.cache.*]
  Runtime --> Log[atp.log.*]
  Runtime --> Progress[atp.progress.*]
  LLM --> Pause[Pause Mechanism]
  Approval --> Pause
  Embedding --> Pause
  Cache --> Provider[CacheProvider]
  Log --> Logger[Logger]
```
## Runtime APIs

### atp.llm.*

LLM operations that pause execution and route the request to the client-provided LLM.
```typescript
// General-purpose call
const response = await atp.llm.call({
  prompt: 'What is the capital of France?',
  model: 'gpt-4',
  temperature: 0.7,
  systemPrompt: 'You are a helpful assistant',
});

// Structured extraction against a JSON schema
const user = await atp.llm.extract({
  prompt: 'Extract user info: John Doe, john@example.com',
  schema: {
    type: 'object',
    properties: {
      name: { type: 'string' },
      email: { type: 'string' },
    },
    required: ['name', 'email'],
  },
});

// Classification into fixed categories
const category = await atp.llm.classify({
  text: 'This product is amazing!',
  categories: ['positive', 'negative', 'neutral'],
});
```
### atp.embedding.*

Embedding operations for semantic search.
```typescript
// Embed content and store the vector
const embeddingId = await atp.embedding.embed('Important document content');

// Semantic search over stored embeddings
const results = await atp.embedding.search('find similar documents', {
  topK: 5,
  minSimilarity: 0.7,
});

// Vector utilities
const similarity = await atp.embedding.similarity(vec1, vec2);
const all = await atp.embedding.getAll();
const count = await atp.embedding.count();
await atp.embedding.clear();
```
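The similarity metric behind `atp.embedding.similarity` isn't specified here; cosine similarity is a common choice for comparing embedding vectors, sketched below for intuition (a hypothetical standalone helper, not part of the package):

```typescript
// Cosine similarity between two equal-length vectors: 1 for identical
// direction, 0 for orthogonal, -1 for opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vector dimensions must match');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const same = cosineSimilarity([1, 0], [1, 0]);       // identical direction
const orthogonal = cosineSimilarity([1, 0], [0, 1]); // orthogonal
```

A `minSimilarity` of 0.7 in a search would, under this metric, keep only results whose vectors point in a closely aligned direction to the query embedding.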
### atp.approval.*

Request human approval during execution.
```typescript
const result = await atp.approval.request('Delete all user data?', {
  critical: true,
  affectedUsers: 150,
});

if (result.approved) {
  await deleteData();
} else {
  return { cancelled: true, reason: result.response };
}
```
### atp.cache.*

Cache data with TTL support.
```typescript
await atp.cache.set('user:123', userData, 3600); // store with a TTL
const cached = await atp.cache.get('user:123');
const exists = await atp.cache.has('user:123');
await atp.cache.delete('user:123');
```
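The TTL unit isn't stated here; assuming seconds (so 3600 is one hour), the expiry semantics can be sketched as follows (a hypothetical illustration, not the package's `CacheProvider`):

```typescript
// Minimal TTL cache: entries expire once `now` passes their deadline.
// The clock is injected as a parameter so behavior is testable.
class TtlCache {
  private store = new Map<string, { value: unknown; expiresAt: number }>();

  set(key: string, value: unknown, ttlSeconds: number, now = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + ttlSeconds * 1000 });
  }

  get(key: string, now = Date.now()): unknown {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }
}

const cache = new TtlCache();
cache.set('user:123', 'Ada', 3600, 0);          // stored at t = 0s
const fresh = cache.get('user:123', 1_000);      // t = 1s: still valid
const stale = cache.get('user:123', 3_601_000);  // t = 3601s: expired
```

Injecting the clock rather than calling `Date.now()` directly inside the methods is a small design choice that makes expiry behavior deterministic under test.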
### atp.log.*

Structured logging with multiple levels.
```typescript
atp.log.trace('Detailed trace', { requestId: '123' });
atp.log.debug('Debug info', { state });
atp.log.info('User logged in', { userId: '456' });
atp.log.warn('Deprecated API used', { api: 'v1' });
atp.log.error('Failed to connect', { error, retries: 3 });
atp.log.fatal('System crash', { reason });
```
### atp.progress.*

Report progress for long-running operations.
```typescript
atp.progress.report({
  current: 5,
  total: 10,
  message: 'Processing items...',
  metadata: {
    itemsPerSecond: 2.5,
  },
});
```
## Usage in Agent Code

When agents execute code on an ATP server, these APIs are automatically available:
```typescript
const items = ['apple', 'banana', 'cherry', 'date', 'elderberry'];
const results = [];

atp.log.info('Starting fruit analysis', { count: items.length });

for (let i = 0; i < items.length; i++) {
  atp.progress.report({
    current: i + 1,
    total: items.length,
    message: `Processing ${items[i]}`,
  });

  const analysis = await atp.llm.call({
    prompt: `Analyze this fruit: ${items[i]}`,
  });

  const embeddingId = await atp.embedding.embed(analysis);
  await atp.cache.set(`analysis:${items[i]}`, analysis, 3600);

  results.push({ fruit: items[i], analysis, embeddingId });
}

const approval = await atp.approval.request('Analysis complete. Proceed with storage?', {
  resultCount: results.length,
});

if (!approval.approved) {
  atp.log.warn('User rejected storage');
  return { cancelled: true };
}

atp.log.info('Analysis complete', { results: results.length });
return results;
```
## Pause/Resume Mechanism

```mermaid
sequenceDiagram
  participant Code
  participant Runtime
  participant Server
  participant Client
  participant LLM
  Code->>Runtime: atp.llm.call()
  Runtime->>Server: Pause execution
  Server->>Client: Request LLM callback
  Client->>LLM: Call LLM API
  LLM-->>Client: Response
  Client->>Server: Resume with result
  Server->>Runtime: Restore state
  Runtime-->>Code: Return LLM result
```
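The shape of this mechanism can be sketched in a few lines: a runtime call that has no result yet throws a pause signal, the host fulfills the callback out-of-band, and the code is re-run with the result now available. All names below are hypothetical; this is not the ATP implementation, just the control-flow idea.

```typescript
// Pause signal carrying what kind of callback is needed and its payload.
class PauseError extends Error {
  constructor(public callbackType: string, public payload: string) {
    super(`Paused for ${callbackType}`);
  }
}

// Results that the host has already resolved, keyed by prompt.
const resolved = new Map<string, string>();

function llmCall(prompt: string): string {
  const cached = resolved.get(prompt);
  if (cached !== undefined) return cached; // resumed run: result injected
  throw new PauseError('llm', prompt);     // first run: pause execution
}

// Host loop: run the code, fulfill any pause, and retry until it completes.
function runWithHost(code: () => string, host: (prompt: string) => string): string {
  for (;;) {
    try {
      return code();
    } catch (err) {
      if (!(err instanceof PauseError)) throw err;
      resolved.set(err.payload, host(err.payload));
    }
  }
}

const answer = runWithHost(
  () => llmCall('Hello'),
  (prompt) => `echo:${prompt}`, // stands in for the client-side LLM
);
```

The real runtime restores execution state across the pause rather than naively re-running the code, but the division of labor is the same: the agent code never talks to the LLM directly, it only yields to the host.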
## Initialization

Runtime APIs are automatically initialized by the ATP server. For standalone use:
```typescript
import {
  setClientLLMCallback,
  initializeCache,
  initializeApproval,
  initializeVectorStore,
  initializeLogger,
} from '@mondaydotcomorg/atp-runtime';

setClientLLMCallback({
  call: async (prompt, options) => {
    // Invoke your LLM provider here and return its response
  },
});

initializeCache(cacheProvider);

initializeApproval({
  request: async (message, context) => {
    // Surface the request to a human and return their decision
  },
});

initializeVectorStore(embeddingHandler);

initializeLogger({
  level: 'info',
  pretty: true,
});
```
## Replay Mode

For deterministic execution and testing:

```typescript
import { setReplayMode } from '@mondaydotcomorg/atp-runtime';

setReplayMode(true);

const result = await atp.llm.call({ prompt: 'Hello' });
```
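One common way replay modes achieve determinism is to record each call's result by sequence number during a live run, then serve those recorded results on replay instead of re-invoking the LLM. The sketch below illustrates that idea with hypothetical names; it is not the package's internal mechanism.

```typescript
// Recorded results, indexed by call sequence number.
const recording: string[] = [];
let sequence = 0;
let replay = false;

function call(live: (prompt: string) => string, prompt: string): string {
  const seq = sequence++;
  if (replay) return recording[seq]; // deterministic: reuse recorded result
  const result = live(prompt);
  recording[seq] = result;           // record for later replay
  return result;
}

// A stand-in "live LLM" that counts how often it is actually invoked.
let invocations = 0;
const liveLLM = (p: string) => {
  invocations++;
  return p.toUpperCase();
};

const first = call(liveLLM, 'hello'); // live call, result recorded
replay = true;
sequence = 0;                         // rewind the sequence counter
const second = call(liveLLM, 'hello'); // served from the recording
```

Keying on sequence numbers rather than prompts is what makes replay robust even when the same prompt appears multiple times with different results.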
## Type Definitions

The runtime exports TypeScript definitions for all APIs:

```typescript
import type {
  LLMCallOptions,
  LLMExtractOptions,
  LLMClassifyOptions,
  EmbeddingSearchOptions,
  EmbeddingSearchResult,
  ApprovalRequest,
  ApprovalResponse,
  ProgressUpdate,
} from '@mondaydotcomorg/atp-runtime';
```
## Metadata Generation

Runtime APIs are decorated with metadata for automatic discovery:

```typescript
import { GENERATED_METADATA } from '@mondaydotcomorg/atp-runtime';

console.log(GENERATED_METADATA);
```
## Error Handling

```typescript
import { PauseExecutionError, isPauseError } from '@mondaydotcomorg/atp-runtime';

try {
  const result = await atp.llm.call({ prompt: 'Hello' });
} catch (error) {
  if (isPauseError(error)) {
    console.log('Paused for:', error.callbackType);
  } else {
    throw error;
  }
}
```
## Advanced Features

### Sequence Numbers

Track call order for replay:

```typescript
import { getCallSequenceNumber } from '@mondaydotcomorg/atp-runtime';

const seq = getCallSequenceNumber();
console.log('Current sequence:', seq);
```

### Execution Context

Run code in a specific execution context:

```typescript
import { runInExecutionContext } from '@mondaydotcomorg/atp-runtime';

runInExecutionContext('exec-123', async () => {
  const result = await atp.llm.call({ prompt: 'Hello' });
});
```
## TypeScript Support

Full TypeScript definitions with strict typing.

## License

MIT