vargai

AI video generation SDK — JSX for videos. Generate, compose and render AI videos with Kling, Flux, ElevenLabs, and more through one API. Built on Vercel AI SDK.

- Source: npm
- Latest version: 0.4.0-alpha113
- Weekly downloads: 380 (+117.14%)
- Maintainers: 2
varg — AI Video Generation SDK

Create AI videos with JSX. One SDK for Kling, Flux, ElevenLabs, Sora and more. Built on Vercel AI SDK.


Docs Ā· Dashboard Ā· Quickstart Ā· Models Ā· Discord

varg is an open-source TypeScript SDK for AI video generation. One API key, one gateway — generate images, video, speech, music, lipsync, and captions through varg.* providers. Write videos as JSX components (like React), render locally or in the cloud.

Get started

Install the varg skill into Claude Code, Cursor, Windsurf, or any agent that supports skills. Zero code — just prompt.

# 1. Install the varg skill
npx -y skills add vargHQ/skills --all --copy -y

# 2. Set your API key (get one at app.varg.ai)
export VARG_API_KEY=varg_live_xxx

# 3. Create your first video
claude "create a 10-second product video for white sneakers, 9:16, UGC style, with captions and background music"

The agent writes declarative JSX, varg handles AI generation + caching + rendering.

For developers

# Install with bun (recommended)
bun install vargai ai

# Or with npm
npm install vargai ai

# Set up project (auth, skills, hello.tsx, cache dirs)
bunx vargai init

vargai init handles everything: signs you in, installs the agent skill, creates a starter template, and sets up your project structure.

Then render the starter template:

bunx vargai render hello.tsx

Or ask your AI agent to create something from scratch.

How it works

Your prompt / JSX code
        |
   varg gateway (api.varg.ai)
   /     |      \        \
 Kling  Flux  ElevenLabs  Wan ...   (AI providers)
   \     |      /        /
    varg render engine
        |
   output.mp4
  • One API key (VARG_API_KEY) routes to all providers through the varg gateway
  • Declarative JSX — compose videos like React components with <Clip>, <Video>, <Music>, <Captions>
  • Automatic caching — same props = instant cache hit at $0. Re-render without re-generating
  • Local or cloud — render with bunx vargai render locally, or submit via the Cloud Render API
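The caching bullet above implies a content-addressed lookup: identical generation props map to the same cache entry, so re-renders skip regeneration. varg's internal cache format isn't documented here, so the following is a hypothetical TypeScript sketch of how such a deterministic key could be derived; the `cacheKey` and `canonical` names are illustrative, not part of the SDK.

```typescript
import { createHash } from "node:crypto";

type Props = Record<string, unknown>;

// Canonicalize values so { a, b } and { b, a } serialize identically:
// object keys are sorted before serialization.
function canonical(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonical).join(",")}]`;
  if (value && typeof value === "object") {
    const entries = Object.entries(value as Props)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => `${JSON.stringify(k)}:${canonical(v)}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

// Same model + same props → same key → cache hit at $0.
export function cacheKey(model: string, props: Props): string {
  return createHash("sha256")
    .update(`${model}\n${canonical(props)}`)
    .digest("hex");
}
```

Any prop change (prompt text, aspect ratio, model) produces a new key, which is why editing one clip only regenerates that clip.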

Quick examples

Image to video

import { Render, Clip, Image, Video } from "vargai/react";
import { varg } from "vargai/ai";

const character = Image({
  prompt: "cute kawaii orange cat, round body, big eyes, Pixar style",
  model: varg.imageModel("nano-banana-pro"),
  aspectRatio: "9:16",
});

export default (
  <Render width={1080} height={1920}>
    <Clip duration={5}>
      <Video
        prompt={{ text: "cat waves hello, bounces happily", images: [character] }}
        model={varg.videoModel("kling-v3")}
      />
    </Clip>
  </Render>
);

bunx vargai render hello.tsx

With music and captions

import { Render, Clip, Image, Video, Speech, Captions, Music } from "vargai/react";
import { varg } from "vargai/ai";

const character = Image({
  model: varg.imageModel("nano-banana-pro"),
  prompt: "friendly robot, blue metallic, expressive eyes",
  aspectRatio: "9:16",
});

const voiceover = Speech({
  model: varg.speechModel("eleven_v3"),
  voice: "adam",
  children: "Hello! I'm your AI assistant. Let me show you something cool!",
});

export default (
  <Render width={1080} height={1920}>
    <Music prompt="upbeat electronic, cheerful" model={varg.musicModel()} volume={0.15} />
    <Clip duration={5}>
      <Video
        prompt={{ text: "robot talking, subtle head movements", images: [character] }}
        model={varg.videoModel("kling-v3")}
      />
    </Clip>
    <Captions src={voiceover} style="tiktok" color="#ffffff" withAudio />
  </Render>
);

Talking head with lipsync

import { Render, Clip, Image, Video, Speech, Captions, Music } from "vargai/react";
import { varg } from "vargai/ai";

const voiceover = Speech({
  model: varg.speechModel("eleven_v3"),
  voice: "josh",
  children: "With varg, you can create any videos at scale!",
});

const baseCharacter = Image({
  prompt: "woman, sleek black bob hair, fitted black t-shirt, natural look",
  model: varg.imageModel("nano-banana-pro"),
  aspectRatio: "9:16",
});

const animatedCharacter = Video({
  prompt: {
    text: "woman speaking naturally, subtle head movements, friendly expression",
    images: [baseCharacter],
  },
  model: varg.videoModel("kling-v3"),
});

export default (
  <Render width={1080} height={1920}>
    <Music prompt="modern tech ambient, subtle electronic" model={varg.musicModel()} volume={0.1} />
    <Clip duration={5}>
      <Video
        prompt={{ video: animatedCharacter, audio: voiceover }}
        model={varg.videoModel("sync-v2-pro")}
      />
    </Clip>
    <Captions src={voiceover} style="tiktok" color="#ffffff" withAudio />
  </Render>
);

Components

| Component | Purpose | Key props |
| --- | --- | --- |
| `<Render>` | Root container | width, height, fps |
| `<Clip>` | Time segment | duration, transition, cutFrom, cutTo |
| `<Image>` | AI or static image | prompt, src, model, zoom, aspectRatio, resize |
| `<Video>` | AI or source video | prompt, src, model, volume, cutFrom, cutTo |
| `<Speech>` | Text-to-speech | voice, model, volume, children |
| `<Music>` | Background music | prompt, src, model, volume, loop, ducking |
| `<Title>` | Text overlay | position, color, start, end |
| `<Subtitle>` | Subtitle text | backgroundColor |
| `<Captions>` | Auto-generated subs | src, srt, style, color, activeColor, withAudio |
| `<Overlay>` | Positioned layer | left, top, width, height, keepAudio |
| `<Split>` | Side-by-side | direction |
| `<Slider>` | Before/after reveal | direction |
| `<Swipe>` | Tinder-style cards | direction, interval |
| `<TalkingHead>` | Animated character | character, src, voice, model, lipsyncModel |
| `<Packshot>` | End card with CTA | background, logo, cta, blinkCta |

Caption styles

<Captions src={voiceover} style="tiktok" />     // word-by-word highlight
<Captions src={voiceover} style="karaoke" />    // fill left-to-right
<Captions src={voiceover} style="bounce" />     // words bounce in
<Captions src={voiceover} style="typewriter" /> // typing effect

Transitions

67 GL transitions available:

<Clip transition={{ name: "fade", duration: 0.5 }}>
<Clip transition={{ name: "crossfade", duration: 0.5 }}>
<Clip transition={{ name: "wipeleft", duration: 0.5 }}>
<Clip transition={{ name: "cube", duration: 0.8 }}>
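Conceptually, a fade or crossfade is a per-pixel linear blend between the outgoing and incoming frames over the transition window. varg's transitions run as GL shaders; the following is an illustrative CPU-side sketch of that blend, and the `fadeBlend` helper is hypothetical, not an SDK export.

```typescript
// Linear blend of one channel value from frame A into frame B.
// t is the normalized position inside the transition window:
// t = 0 → fully frame A, t = 1 → fully frame B.
export function fadeBlend(a: number, b: number, t: number): number {
  return Math.round(a * (1 - t) + b * t);
}
```

A 0.5-second fade in a 30 fps render simply sweeps `t` from 0 to 1 across 15 frames; the other GL transitions replace this linear mix with a shader-specific mixing function.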

Models

All models are accessed through varg.* — one API key, one provider.

import { varg } from "vargai/ai";

Video

| Model | Use case | Credits (5s) |
| --- | --- | --- |
| `varg.videoModel("kling-v3")` | Best quality, latest | 150 |
| `varg.videoModel("kling-v3-standard")` | Good quality, cheaper | 50 |
| `varg.videoModel("kling-v2.5")` | Previous gen, reliable | 50 |
| `varg.videoModel("wan-2.5")` | Good for characters | 50 |
| `varg.videoModel("minimax")` | Alternative | 50 |
| `varg.videoModel("sync-v2-pro")` | Lipsync (video + audio) | 50 |

Image

| Model | Use case | Credits |
| --- | --- | --- |
| `varg.imageModel("nano-banana-pro")` | Versatile, fast | 5 |
| `varg.imageModel("nano-banana-pro/edit")` | Image-to-image editing | 5 |
| `varg.imageModel("flux-schnell")` | Fast generation | 5 |
| `varg.imageModel("flux-pro")` | High quality | 25 |
| `varg.imageModel("recraft-v3")` | Alternative | 10 |

Audio

| Model | Use case | Credits |
| --- | --- | --- |
| `varg.speechModel("eleven_v3")` | Text-to-speech | 25 |
| `varg.speechModel("eleven_multilingual_v2")` | Multilingual TTS | 25 |
| `varg.musicModel()` | Music generation | 25 |
| `varg.transcriptionModel("whisper")` | Speech-to-text | 5 |

1 credit = $0.01. Cache hits are always free.

CLI

bunx vargai login                              # sign in (email OTP or API key)
bunx vargai init                               # set up project (auth + skills + template)
bunx vargai render video.tsx                   # render a video
bunx vargai render video.tsx --preview         # free preview with placeholders
bunx vargai render video.tsx --verbose         # render with detailed output
bunx vargai balance                            # check credit balance
bunx vargai topup                              # add credits
bunx vargai run image --prompt "sunset"        # generate a single image
bunx vargai run video --prompt "ocean waves"   # generate a single video
bunx vargai list                               # list available models and actions
bunx vargai studio                             # open visual editor

Environment

# Required — one key for everything
VARG_API_KEY=varg_live_xxx

Get your API key at app.varg.ai. Bun auto-loads .env files.

Bring your own keys (optional)

You can use provider keys directly if you prefer:

FAL_API_KEY=fal_xxx                # fal.ai direct
ELEVENLABS_API_KEY=xxx             # ElevenLabs direct
OPENAI_API_KEY=sk_xxx              # OpenAI / Sora
REPLICATE_API_TOKEN=r8_xxx         # Replicate

See the BYOK docs for details.

Pricing

| Action | Model | Credits | Cost |
| --- | --- | --- | --- |
| Image | nano-banana-pro | 5 | $0.05 |
| Image | flux-pro | 25 | $0.25 |
| Video (5s) | kling-v3 | 150 | $1.50 |
| Speech | eleven_v3 | 25 | $0.25 |
| Music | music_v1 | 25 | $0.25 |
| Cache hit | any | 0 | $0.00 |

A typical 3-clip video costs $2-5. Cache hits are always free.
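Using the credit values from the table above (1 credit = $0.01, cache hits free), a rough cost can be estimated before rendering. This is an illustrative sketch, not an SDK API: the `estimateCost` helper and its credit map are assumptions taken solely from the pricing table.

```typescript
// Credit values copied from the pricing table above; 1 credit = $0.01.
const CREDITS: Record<string, number> = {
  "nano-banana-pro": 5,
  "flux-pro": 25,
  "kling-v3": 150, // per 5-second clip
  "eleven_v3": 25,
  "music_v1": 25,
};

// Sum credits for each generation job; cached jobs cost nothing.
export function estimateCost(jobs: { model: string; cached?: boolean }[]): number {
  const credits = jobs.reduce(
    (sum, j) => sum + (j.cached ? 0 : CREDITS[j.model] ?? 0),
    0,
  );
  return credits * 0.01; // dollars
}
```

For example, a 3-clip video with one character image, one voiceover, and music at kling-v3 rates comes to 150 Ɨ 3 + 5 + 25 + 25 = 505 credits ($5.05); swapping in a 50-credit video model brings it near $2, which is consistent with the $2-5 range quoted above.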

Star History

[Star history chart]

Contributing

See CONTRIBUTING.md for development setup.

License

Apache-2.0 — see LICENSE.md

Keywords

ai-video

Package last updated on 30 Apr 2026
