localizer-ai

0.0.1 · latest · npm

Content Localizer CLI Documentation



A powerful CLI tool for automating content localization using OpenAI GPT-4 or MistralAI, helping to localize apps. Supports multiple file types, preserves formatting, and enables context-aware translations.

Installation · Documentation · Contributing


Features

  • 🤖 AI-powered translations using OpenAI GPT-4 or MistralAI
  • 📁 Smart directory structure replication
  • 🔄 Support for multiple file types (.md, .txt, .json)
  • 🎯 Format preservation (markdown, special characters)
  • 🚀 Parallel processing with rate limiting
  • 📦 Perfect for VitePress projects
  • 🌍 Context-aware translations

Why Choose Content Localizer AI CLI?

Advantages Over Traditional LLM Solutions

  1. Format-Aware Translation

    • Intelligently preserves complex Markdown structures
    • Maintains JSON/JavaScript object hierarchies
  2. Smart Context Management

    • File-level context support for accurate translations
    • Deep context for nested JSON structures
  3. Developer-Centric Features

    • Native support for VitePress and documentation frameworks
    • Built-in parallel processing with rate limiting
  4. Project Structure Preservation

    • Maintains source directory hierarchy
    • Handles multiple file types in single pass
  5. Developer Experience

    • Simple CLI interface
    • Minimal configuration needed

This tool combines the power of LLMs with specialized handling for development-focused content, making it better suited than generic translation services or naive LLM prompting for technical documentation and code-related content.

Limitations

⚠️ Early Stage Project: This tool is in its early stages of development.

  • Markdown Formatting: Some complex markdown structures may not be perfectly preserved during translation
  • Text File Formatting: Special formatting in .txt files might require manual review
  • Work in Progress: Active development is ongoing to improve formatting accuracy
  • Rate limiting: Free-tier accounts may hit the rate limits of the AI service you are using.

These limitations are being actively addressed and will be improved in future versions. For best results, review translated output for critical content.

Prerequisites

  • Node.js >= 16.0.0
  • npm or yarn
  • OpenAI API key or MistralAI API key

Installation

# Install globally
npm install -g localizer-ai

# Set up API key for OpenAI
npm config set -g openai-key YOUR_API_KEY

# Or set up API key for MistralAI
npm config set -g mistralai-key YOUR_API_KEY

Usage

Quick Start

  1. Create a new configuration file:
localizer-ai create-config
  2. Start the translation process:
localizer-ai translate

Configuration

Create a localizer-ai.config.json file:

{
  "source": "docs/en",
  "fileTypes": [".md", ".txt"],
  "locales": ["fr", "es", "de"],
  "from": "en",
  "destination": "localized",
  "aiServiceProvider": "openAI",
  "parallelProcessing": true,
  "llmConfig": {
    "temperature": 0.4,
    "maxTokens": 1000
  }
}
Configuration Options

| Option             | Description                | Default        |
| ------------------ | -------------------------- | -------------- |
| source             | Source directory path      | Required       |
| fileTypes          | Array of file extensions   | Required       |
| locales            | Target language codes      | Required       |
| from               | Source language code       | "en"           |
| destination        | Output directory           | Same as source |
| aiServiceProvider  | AI service to use          | "openAI"       |
| parallelProcessing | Enable parallel processing | true           |
| llmConfig          | AI model configuration     | {}             |

Note on Default Models:

  • OpenAI: Uses gpt-4o-mini model by default
  • MistralAI: Uses open-mistral-nemo model by default

Translation Context

File-level Context
{
  "docs/api.md": "Technical API documentation",
  "docs/guide.md": "User guide content"
}
Deep Context (JSON files)
{
  "docs/config.json": {
    "api.endpoints": "API endpoint descriptions",
    "$fileContext": "Configuration documentation"
  }
}
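The deep-context lookup above can be sketched as a small helper. This is a hypothetical illustration of the documented behavior, not the package's actual code: an exact dot-path match wins, shorter key prefixes are tried next, and `$fileContext` serves as the file-wide fallback.

```javascript
// Hypothetical sketch of deep-context resolution for JSON files.
// Assumed order: exact dot-path match, then shorter prefixes,
// then the "$fileContext" file-wide fallback.
function resolveContext(contextMap, keyPath) {
  if (contextMap[keyPath]) return contextMap[keyPath];
  const parts = keyPath.split(".");
  while (parts.length > 1) {
    parts.pop();
    const prefix = parts.join(".");
    if (contextMap[prefix]) return contextMap[prefix];
  }
  return contextMap["$fileContext"] ?? null;
}

const ctx = {
  "api.endpoints": "API endpoint descriptions",
  "$fileContext": "Configuration documentation",
};
console.log(resolveContext(ctx, "api.endpoints.users")); // prefix match
console.log(resolveContext(ctx, "ui.labels")); // file-wide fallback
```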

Architecture

Core Components

  1. CLI Interface (src/cli/commandExecutor.js)
async function commandExecutor() {
  displayWelcomeMessage();
  const args = process.argv.slice(2);
  // ... command handling logic
}
  2. Translation Engine (src/utils/textTranslator.js)
async function translateText({ content, from, to, localeContext, fileType }) {
  // ... translation logic
}
  3. File Processing (src/utils/fileReplicator.js)
async function replicateFiles(
  sourcePath,
  locales,
  fileTypes,
  from,
  destinationPath
) {
  // ... file replication logic
}
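How these components might fit together, shown as a runnable stub. The `translateText` body here is a placeholder rather than the real engine, and files are modeled as an in-memory map instead of a directory tree:

```javascript
// Simplified, hypothetical pipeline: replicate every source file
// once per target locale. translateText is a stand-in for the AI call.
async function translateText({ content, from, to }) {
  return `[${from}->${to}] ${content}`; // placeholder translation
}

async function replicateFiles(files, locales, from) {
  const output = {};
  for (const locale of locales) {
    output[locale] = {};
    for (const [path, content] of Object.entries(files)) {
      // The real tool mirrors the directory structure on disk;
      // here paths are just object keys.
      output[locale][path] = await translateText({ content, from, to: locale });
    }
  }
  return output;
}
```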

AI Integration

OpenAI Implementation
const askOpenAI = async ({ question, systemPrompt }) => {
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: question },
    ],
    temperature: 0.4,
    ...llmConfig,
  });
  return response.choices[0].message.content;
};
MistralAI Implementation
const askMistralAI = async ({ question, systemPrompt }) => {
  const chatResponse = await client.chat.complete({
    model: "open-mistral-nemo",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: question },
    ],
    ...llmConfig,
  });
  return chatResponse.choices[0].message.content;
};
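A plausible sketch of how the `aiServiceProvider` config value could select between these two implementations. The provider functions below are echo stand-ins, not real API calls:

```javascript
// Hypothetical dispatch keyed by the "aiServiceProvider" config value.
// Stand-in providers echo the question instead of hitting an API.
const providers = {
  openAI: async ({ question }) => `openAI: ${question}`,
  mistralAI: async ({ question }) => `mistralAI: ${question}`,
};

async function ask(providerName, args) {
  const provider = providers[providerName];
  if (!provider) {
    throw new Error(`Unknown aiServiceProvider: ${providerName}`);
  }
  return provider(args);
}
```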

Advanced Features

Format Preservation

The tool maintains formatting for:

Markdown
  • Headers (h1-h6)
  • Code blocks with language specification
  • Lists (ordered and unordered)
  • Links and images
  • Bold and italic text
  • Task lists
  • Tables
JSON
  • Nested structure preservation
  • Type consistency
  • Formatting maintenance
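The JSON guarantees above amount to translating only string leaves while leaving keys, nesting, and non-string types untouched. A minimal sketch of that traversal, with an uppercasing stand-in for the translator:

```javascript
// Structure-preserving JSON translation sketch: only string values
// are passed to the translator; keys, numbers, booleans, and null
// pass through unchanged, and nesting is preserved.
async function translateJson(value, translate) {
  if (typeof value === "string") return translate(value);
  if (Array.isArray(value)) {
    return Promise.all(value.map((item) => translateJson(item, translate)));
  }
  if (value !== null && typeof value === "object") {
    const out = {};
    for (const [key, inner] of Object.entries(value)) {
      out[key] = await translateJson(inner, translate); // keys stay as-is
    }
    return out;
  }
  return value;
}
```

With `const fakeTranslate = async (s) => s.toUpperCase();`, calling `translateJson({ title: "hello", count: 3 }, fakeTranslate)` resolves to `{ title: "HELLO", count: 3 }`.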

Rate Limiting

class AIRequestQueue {
  constructor(delayMs = 1500) {
    this.queue = [];
    this.isProcessing = false;
    this.delayMs = delayMs;
  }

  async processQueue() {
    // ... queue processing logic with rate limiting
  }
}
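The elided `processQueue` logic might look like the following: tasks run one at a time with a fixed pause between them. This is a hedged reconstruction, not the package's actual implementation:

```javascript
// Hypothetical fleshed-out AIRequestQueue: each enqueued task runs
// serially, with delayMs of idle time between requests.
class AIRequestQueue {
  constructor(delayMs = 1500) {
    this.queue = [];
    this.isProcessing = false;
    this.delayMs = delayMs;
  }

  enqueue(task) {
    // Wrap the task so callers get a promise for its result.
    return new Promise((resolve, reject) => {
      this.queue.push({ task, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.isProcessing) return; // only one drain loop at a time
    this.isProcessing = true;
    while (this.queue.length > 0) {
      const { task, resolve, reject } = this.queue.shift();
      try {
        resolve(await task());
      } catch (err) {
        reject(err);
      }
      // Pause between requests to respect the provider's rate limit.
      await new Promise((r) => setTimeout(r, this.delayMs));
    }
    this.isProcessing = false;
  }
}
```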

Contributing

  1. Fork the repository
  2. Create your feature branch:
git checkout -b feature/AmazingFeature
  3. Install dependencies:
npm install
  4. Run in development mode:
npm run dev
  5. Commit your changes:
git commit -m 'Add some AmazingFeature'
  6. Push to the branch:
git push origin feature/AmazingFeature
  7. Open a Pull Request

Development Guidelines

  1. Code Style
  • Use ESLint configuration
  • Follow JSDoc documentation standards
  • Maintain test coverage
  2. Commit Messages
  • Use conventional commits format
  • Include issue references
  3. Testing
npm test

API Documentation

Core Functions

translateText
/**
 * Translates content from one language to another
 * @param {Object} options Translation options
 * @param {string} options.content Content to translate
 * @param {string} options.from Source language
 * @param {string} options.to Target language
 * @returns {Promise<string>} Translated content
 */
async function translateText(options) {
  // Implementation
}
replicateFiles
/**
 * Replicates directory structure with translations
 * @param {string} sourcePath Source directory
 * @param {string[]} locales Target locales
 * @param {string[]} fileTypes File types to process
 * @returns {Promise<void>}
 */
async function replicateFiles(sourcePath, locales, fileTypes) {
  // Implementation
}

Security

  • API keys stored securely using npm config
  • Rate limiting for API calls
  • Input validation for file operations
  • Safe file system operations

Roadmap

  • Add support for more AI providers
  • Support for more file formats
  • Batch processing optimization
  • Translation quality metrics

License

This project is licensed under the MIT License - see the LICENSE file for details.

Support

  • Create an issue
  • Star the project
  • Follow updates

Credits

Created by Takasi Venkata Sandeep


Package last updated on 21 Nov 2024
