
@moderation-api/sdk
The Moderation API Node library provides convenient access to the Moderation API from applications written in server-side JavaScript.
Use the Moderation API to analyze text and images for offensive content, profanity, toxicity, discrimination, sentiment, language, and more, or to detect, hide, and extract data entities such as emails, phone numbers, and addresses.
See the moderation-api-node API docs for Node.js.
Install the package with:
npm install @moderation-api/sdk
# or
pnpm add @moderation-api/sdk
The package needs to be configured with your project's API key, which is available in your Project Dashboard.
The API key can be provided in two ways:
MODAPI_SECRET_KEY environment variable
Passing the key directly to the constructor (overrides the environment variable)
import ModerationAPI from '@moderation-api/sdk';
// Option 1: Use environment variable MODAPI_SECRET_KEY
const moderationApi = new ModerationAPI();
// Option 2: Pass key explicitly (overrides environment variable)
const moderationApi = new ModerationAPI({
  key: 'proj_...',
});
// Submit content for moderation
const result = await moderationApi.content.submit({
  content: {
    type: 'text',
    text: 'Hello world!',
  },
});
// Check if content was flagged
console.log(result.evaluation.flagged);
// Use the API's recommendation
console.log(result.recommendation.action); // 'allow', 'review', or 'reject'
The client works with TypeScript and is fully typed.
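For example, you can derive the response type of content.submit from the client itself using TypeScript's built-in utility types. A minimal sketch; the SubmitResult alias below is illustrative and is not a name exported by the SDK:
import ModerationAPI from '@moderation-api/sdk';
const moderationApi = new ModerationAPI();
// Illustrative alias derived from the client's own typings (not exported by the SDK)
type SubmitResult = Awaited<ReturnType<typeof moderationApi.content.submit>>;
// The derived type can then be reused in your own helpers
function logDecision(result: SubmitResult): void {
  console.log(result.recommendation.action);
}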
The SDK's main features (content moderation, queue management, wordlists, authors, and webhook verification) are covered below.
Use content.submit to moderate text, images, video, audio, or complex objects:
// Text moderation
const result = await moderationApi.content.submit({
  content: {
    type: 'text',
    text: 'Your text here',
  },
  contentId: 'message-123',
  authorId: 'user-123',
  conversationId: 'room-123',
  metaType: 'message',
  metadata: {custom: 'data'},
});
// Image moderation
const result = await moderationApi.content.submit({
  content: {
    type: 'image',
    url: 'https://example.com/image.jpg',
  },
  contentId: 'image-456',
});
// Video moderation
const result = await moderationApi.content.submit({
  content: {
    type: 'video',
    url: 'https://example.com/video.mp4',
  },
});
// Audio moderation
const result = await moderationApi.content.submit({
  content: {
    type: 'audio',
    url: 'https://example.com/audio.mp3',
  },
});
// Object moderation (for complex data with multiple fields)
const result = await moderationApi.content.submit({
  content: {
    type: 'object',
    data: {
      title: {type: 'text', text: 'Post title'},
      body: {type: 'text', text: 'Post content'},
      thumbnail: {type: 'image', url: 'https://example.com/thumb.jpg'},
    },
  },
});
The response includes both a flagged field and a recommendation with the API's suggested action:
const result = await moderationApi.content.submit({
  content: {type: 'text', text: 'Some content'},
});
// Simple boolean check
if (result.evaluation.flagged) {
  console.log('Content was flagged by policies');
}
// Use the API's recommendation (considers severity, thresholds, and more)
switch (result.recommendation.action) {
  case 'reject':
    // Block the content completely
    console.log('Content should be rejected');
    break;
  case 'review':
    // Send to moderation queue for human review
    console.log('Content needs manual review');
    break;
  case 'allow':
    // Content is safe to publish
    console.log('Content is approved');
    break;
}
// Access detailed policy results
result.policies.forEach(policy => {
  console.log(`Policy ${policy.id}: flagged=${policy.flagged}, probability=${policy.probability}`);
});
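In practice you will often wrap this pattern in a single helper. A minimal sketch using only the fields shown above; the canPublish name and the allow-only policy are our own choices, not part of the SDK:
// Minimal sketch: submit a chat message and report whether it may be published
async function canPublish(text: string, authorId: string): Promise<boolean> {
  const result = await moderationApi.content.submit({
    content: {type: 'text', text},
    authorId,
  });
  // Treat anything other than an explicit 'allow' as not publishable
  return result.recommendation.action === 'allow';
}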
The following endpoints are deprecated. Please use content.submit instead:
// DEPRECATED - Use content.submit instead
await moderationApi.moderate.text({value: 'text'});
await moderationApi.moderate.image({url: 'url'});
await moderationApi.moderate.video({url: 'url'});
await moderationApi.moderate.audio({url: 'url'});
await moderationApi.moderate.object({value: {}});
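As a rough migration sketch, each deprecated call maps onto a content.submit payload. For example, the deprecated text endpoint above could be rewritten as follows (the example string is a placeholder):
// Before (deprecated): await moderationApi.moderate.text({value: 'Some text'});
// After: the same check expressed with content.submit
const result = await moderationApi.content.submit({
  content: {
    type: 'text',
    text: 'Some text',
  },
});
console.log(result.recommendation.action);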
Moderation queues can be inspected and their items resolved or unresolved:
// Get queue stats
const stats = await moderationApi.queueView.getStats();
// Get queue items
const items = await moderationApi.queueView.getItems();
// Resolve/unresolve items
await moderationApi.queueView.resolveItem('item_id');
await moderationApi.queueView.unresolveItem('item_id');
Wordlists can be listed and have words added or removed:
// Get wordlists
const wordlists = await moderationApi.wordlist.list();
// Add words to wordlist
await moderationApi.wordlist.addWords('wordlist_id', {
  words: ['word1', 'word2'],
});
// Remove words from wordlist
await moderationApi.wordlist.removeWords('wordlist_id', {
  words: ['word1'],
});
Authors can be created, listed, updated, and deleted:
// Create an author
const author = await moderationApi.author.create({
  authorId: 'user_123',
  username: 'john_doe',
  email: 'john@example.com',
});
// List authors
const authors = await moderationApi.author.list();
// Get author details
const authorDetails = await moderationApi.author.get('author_id');
// Update author
await moderationApi.author.update('author_id', {
  username: 'jane_doe',
  email: 'jane@example.com',
});
// Delete author
await moderationApi.author.delete('author_id');
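Tying this together with the authorId field on content.submit shown earlier, a minimal end-to-end sketch (the ids and message text are placeholders):
// Register an author, then attribute submitted content to them via authorId
const author = await moderationApi.author.create({
  authorId: 'user_456',
  username: 'jane_doe',
  email: 'jane@example.com',
});
const result = await moderationApi.content.submit({
  content: {type: 'text', text: 'A message written by jane_doe'},
  authorId: 'user_456',
});
console.log(result.recommendation.action);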
// Get account information
const account = await moderationApi.account.get();
Moderation API can optionally sign the webhook events it sends to your endpoint, allowing you to validate that they were not sent by a third party. You can read more about this in the Moderation API documentation.
The webhook secret can be provided in two ways:
MODAPI_WEBHOOK_SECRET environment variable
Passing the secret directly to constructEvent() (overrides the environment variable)
Please note that you must pass the raw request body, exactly as received from Moderation API, to the constructEvent() function; this will not work with a parsed (i.e., JSON) request body.
Here's what it looks like using Next.js:
import {buffer} from 'micro';
const handler = async (req, res) => {
  const webhookRawBody = await buffer(req);
  const webhookSignatureHeader = req.headers['modapi-signature'];
  // Option 1: Use environment variable MODAPI_WEBHOOK_SECRET
  const payload = await moderationApi.webhooks.constructEvent(
    webhookRawBody,
    webhookSignatureHeader
  );
  // Option 2: Pass secret explicitly (overrides environment variable)
  const payload = await moderationApi.webhooks.constructEvent(
    webhookRawBody,
    webhookSignatureHeader,
    'whsec_...'
  );
};
// disable body parser so we can access raw body
export const config = {
  api: {
    bodyParser: false,
  },
};
export default handler;
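The same raw-body requirement applies outside Next.js. A hedged sketch of an Express route, assuming express.raw() is mounted so the body reaches constructEvent() unparsed; the route path and port are placeholders, and the webhook secret is read from MODAPI_WEBHOOK_SECRET as in Option 1 above:
import express from 'express';
import ModerationAPI from '@moderation-api/sdk';
const moderationApi = new ModerationAPI();
const app = express();
// express.raw() keeps the body as a Buffer, exactly as received
app.post('/webhooks/moderation', express.raw({type: '*/*'}), async (req, res) => {
  try {
    const payload = await moderationApi.webhooks.constructEvent(
      req.body,
      req.headers['modapi-signature'] as string
    );
    console.log('Verified webhook event:', payload);
    res.sendStatus(200);
  } catch (err) {
    // Signature verification failed
    res.sendStatus(400);
  }
});
app.listen(3000);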
The SDK uses typed errors for better error handling:
try {
  const result = await moderationApi.content.submit({
    content: {type: 'text', text: 'Hello world!'},
  });
} catch (error) {
  if (error.status === 401) {
    console.error('Invalid API key');
  } else if (error.status === 429) {
    console.error('Rate limit exceeded');
  } else {
    console.error('An error occurred:', error.message);
  }
}
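Because errors carry a status code, you can also back off and retry on rate limits. A minimal sketch; the attempt count and delay are arbitrary choices, not SDK defaults:
// Minimal sketch: retry content.submit a few times when the API returns 429
async function submitWithRetry(text: string, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await moderationApi.content.submit({
        content: {type: 'text', text},
      });
    } catch (error: any) {
      if (error.status === 429 && attempt < maxAttempts) {
        // Wait a little longer after each rate-limited attempt
        await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
        continue;
      }
      throw error;
    }
  }
}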
New features and bug fixes are released on the latest major version of the @moderation-api/sdk package. If you are on an older major version, we recommend upgrading to the latest so you receive new features and bug fixes, including security fixes. Older major versions remain available but no longer receive updates.
Reach out at support@moderationapi.com