
Clipify

An AI-powered video processing toolkit for creating social media-optimized content with automated transcription, captioning, and thematic segmentation.


🌟 Key Features

Content Processing

  • Video Processing Pipeline
    • Automated audio extraction and speech-to-text conversion
    • Smart thematic segmentation using AI
    • Mobile-optimized format conversion (9:16, 4:5, 1:1)
    • Intelligent caption generation and overlay

AI Capabilities

  • Advanced Analysis
    • Context-aware content segmentation
    • Dynamic title generation
    • Smart keyword and hashtag extraction
    • Sentiment analysis for content optimization

Platform Options

  • Desktop Application

    • Intuitive graphical interface
    • Drag-and-drop functionality
    • Real-time processing feedback
    • Batch processing capabilities
  • Server Deployment

    • RESTful API integration (a hypothetical client sketch follows this list)
    • Asynchronous processing with webhooks
    • Multi-tenant architecture
    • Containerized deployment support
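
To make the server workflow concrete, here is a minimal client sketch. The endpoint path, payload fields, and webhook contract below are illustrative assumptions only, not Clipify's documented API; check the Clipify-hub repository for the real interface.

import requests

# All endpoint paths and payload fields here are illustrative assumptions,
# not a documented Clipify API.
SERVER_URL = "http://localhost:8000"

# Submit a video for asynchronous processing. Under a webhook contract,
# the server would POST results back to `webhook_url` when processing finishes.
response = requests.post(
    f"{SERVER_URL}/process",
    json={
        "video_url": "https://example.com/input.mp4",
        "webhook_url": "https://example.com/hooks/clipify-done",
        "mobile_ratio": "9:16",
        "add_captions": True,
    },
    timeout=30,
)
response.raise_for_status()
print("Job accepted:", response.json())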

🚀 Quick Start

Desktop Application

🚀 Check out our full project built on Clipify at https://github.com/adelelawady/Clipify-hub 🚀

Download and install the latest version:

  • Download Installable
  • Download Server

Python Package Installation

# Via pip
pip install clipify

# From source
git clone https://github.com/adelelawady/Clipify.git
cd Clipify
pip install -r requirements.txt
pip install .  # install the package itself
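
To verify the install, import the package. This assumes a __version__ attribute is set in clipify/__init__.py, as the project structure below suggests:

import clipify

# Assumption: __version__ is defined in clipify/__init__.py
print(clipify.__version__)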

💻 Usage Examples

Basic Implementation

from clipify.core.clipify import Clipify

# Initialize with basic configuration
clipify = Clipify(
    provider_name="hyperbolic",
    api_key="your-api-key",
    model="deepseek-ai/DeepSeek-V3",
    convert_to_mobile=True,
    add_captions=True
)

# Process video
result = clipify.process_video("input.mp4")

# Handle results
if result:
    print(f"Created {len(result['segments'])} segments")
    for segment in result['segments']:
        print(f"Segment {segment['segment_number']}: {segment['title']}")

Advanced Configuration

clipify = Clipify(
    # AI Configuration
    provider_name="hyperbolic",
    api_key="your-api-key",
    model="deepseek-ai/DeepSeek-V3",
    max_tokens=5048,
    temperature=0.7,
    
    # Video Processing
    convert_to_mobile=True,
    add_captions=True,
    mobile_ratio="9:16",
    
    # Caption Styling
    caption_options={
        "font": "Bangers-Regular.ttf",
        "font_size": 60,
        "font_color": "white",
        "stroke_width": 2,
        "stroke_color": "black",
        "highlight_current_word": True,
        "word_highlight_color": "red",
        "shadow_strength": 0.8,
        "shadow_blur": 0.08,
        "line_count": 1,
        "padding": 50,
        "position": "bottom"
    }
)

AudioExtractor

from clipify.audio.extractor import AudioExtractor

# Initialize audio extractor
extractor = AudioExtractor()

# Extract audio from video
audio_path = extractor.extract_audio(
    video_path="input_video.mp4",
    output_path="extracted_audio.wav"
)

if audio_path:
    print(f"Audio successfully extracted to: {audio_path}")

SpeechToText

from clipify.audio.speech import SpeechToText

# Initialize speech to text converter
converter = SpeechToText(model_size="base")  # Options: tiny, base, small, medium, large

# Convert audio to text with timing
result = converter.convert_to_text("audio_file.wav")

if result:
    print("Transcript:", result['text'])
    print("\nWord Timings:")
    for word in result['word_timings'][:5]:  # Show first 5 words
        print(f"Word: {word['text']}")
        print(f"Time: {word['start']:.2f}s - {word['end']:.2f}s")

VideoConverter

from clipify.video.converter import VideoConverter

# Initialize video converter
converter = VideoConverter()

# Convert video to mobile format with blurred background
result = converter.convert_to_mobile(
    input_video="landscape_video.mp4",
    output_video="mobile_video.mp4",
    target_ratio="9:16"  # Options: "1:1", "4:5", "9:16"
)

if result:
    print("Video successfully converted to mobile format")

VideoConverterStretch

from clipify.video.converterStretch import VideoConverterStretch

# Initialize stretch converter
stretch_converter = VideoConverterStretch()

# Convert video using stretch method
result = stretch_converter.convert_to_mobile(
    input_video="landscape.mp4",
    output_video="stretched.mp4",
    target_ratio="4:5"  # Options: "1:1", "4:5", "9:16"
)

if result:
    print("Video successfully converted using stretch method")

VideoProcessor

from clipify.video.processor import VideoProcessor

# Initialize video processor with caption styling
processor = VideoProcessor(
    # Font settings
    font="Bangers-Regular.ttf",
    font_size=60,
    font_color="white",
    
    # Text effects
    stroke_width=2,
    stroke_color="black",
    shadow_strength=0.8,
    shadow_blur=0.08,
    
    # Caption behavior
    highlight_current_word=True,
    word_highlight_color="red",
    line_count=1,
    padding=50,
    position="bottom"  # Options: "bottom", "top", "center"
)

# Process video with captions
result = processor.process_video(
    input_video="input_video.mp4",
    output_video="captioned_output.mp4",
    use_local_whisper="auto"  # Options: "auto", True, False
)

if result:
    print("Video successfully processed with captions")

# Process multiple video segments
segment_files = ["segment1.mp4", "segment2.mp4", "segment3.mp4"]
processed_segments = processor.process_video_segments(
    segment_files=segment_files,
    output_dir="processed_segments"
)

The VideoProcessor provides powerful captioning capabilities:

  • Customizable font styling and text effects
  • Word-level highlighting for better readability
  • Shadow and stroke effects for visibility
  • Automatic speech recognition using Whisper
  • Support for batch processing multiple segments

VideoCutter

from clipify.video.cutter import VideoCutter

# Initialize video cutter
cutter = VideoCutter()

# Cut a specific segment
result = cutter.cut_video(
    input_video="full_video.mp4",
    output_video="segment.mp4",
    start_time=30.5,  # Start at 30.5 seconds
    end_time=45.2     # End at 45.2 seconds
)

if result:
    print("Video segment successfully cut")

SmartTextProcessor

from clipify.core.text_processor import SmartTextProcessor
from clipify.core.ai_providers import HyperbolicAI

# Initialize AI provider and text processor
ai_provider = HyperbolicAI(api_key="your_api_key")
processor = SmartTextProcessor(ai_provider)

# Process text content
text = "Your long text content here..."
segments = processor.segment_by_theme(text)

if segments:
    for segment in segments['segments']:
        print(f"\nTitle: {segment['title']}")
        print(f"Keywords: {', '.join(segment['keywords'])}")
        print(f"Content length: {len(segment['content'])} chars")

📦 Project Structure

clipify/
├── clipify/
│   ├── __init__.py                 # Package initialization and version
│   ├── core/
│   │   ├── __init__.py
│   │   ├── clipify.py             # Main Clipify class
│   │   ├── processor.py           # Content processing logic
│   │   ├── text_processor.py      # Text analysis and segmentation
│   │   └── ai_providers.py        # AI provider implementations
│   ├── video/
│   │   ├── __init__.py
│   │   ├── cutter.py             # Video cutting functionality
│   │   ├── converter.py          # Mobile format conversion
│   │   ├── converterStretch.py   # Alternative conversion method
│   │   └── processor.py          # Video processing and captions
│   ├── audio/
│   │   ├── __init__.py
│   │   ├── extractor.py          # Audio extraction from video
│   │   └── speech.py             # Speech-to-text conversion
│   └── utils/                    # Utility functions
│       ├── __init__.py
│       └── helpers.py
├── .gitignore                   # Git ignore rules
├── LICENSE                      # MIT License
├── MANIFEST.in                  # Package manifest
├── README.md                    # Project documentation
├── requirements.txt             # Dependencies
└── setup.py                     # Package setup

🛠️ Configuration Options

AI Providers

  • hyperbolic: Default provider with DeepSeek-V3 model
  • openai: OpenAI GPT models support
  • anthropic: Anthropic Claude models
  • ollama: Local model deployment
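
All four providers are selected through the same Clipify constructor shown in the usage examples above. A minimal sketch, assuming each provider accepts the same provider_name/api_key/model arguments (the model names here are illustrative):

from clipify.core.clipify import Clipify

# Same constructor as the Basic Implementation example; only the
# provider-specific values change. Model names are illustrative assumptions.
clipify_openai = Clipify(
    provider_name="openai",
    api_key="your-openai-key",
    model="gpt-4o",
)

clipify_local = Clipify(
    provider_name="ollama",
    api_key="",  # assumption: a local Ollama deployment may not require a key
    model="llama3",
)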

Video Formats

  • Aspect Ratios: 1:1, 4:5, 9:16
  • Output Formats: MP4, MOV
  • Quality Presets: Low, Medium, High
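
Each aspect ratio maps directly to the target_ratio argument of the converters shown above. A short sketch producing one output per supported ratio (file names are illustrative):

from clipify.video.converter import VideoConverter

converter = VideoConverter()

# One mobile-format output per supported aspect ratio.
for ratio in ("1:1", "4:5", "9:16"):
    output_name = f"output_{ratio.replace(':', 'x')}.mp4"
    if converter.convert_to_mobile(
        input_video="landscape_video.mp4",
        output_video=output_name,
        target_ratio=ratio,
    ):
        print(f"Wrote {output_name}")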

Caption Customization

  • Font customization
  • Color schemes
  • Position options
  • Animation effects
  • Word highlighting

🤝 Contributing

We welcome contributions! Here's how you can help:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Please read our Contributing Guidelines for details.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🌐 Support

🙏 Acknowledgments

  • FFmpeg for video processing
  • OpenAI for AI capabilities
  • PyTorch community
  • All contributors and supporters

Buy me a coffee
