lmapp

Local LLM Made Simple - Consumer-friendly AI assistant

Source: PyPI (pip) · Version: 0.4.1 · Maintainers: 1

LMAPP

Local LLM CLI – AI everywhere. Easy, simple, and undeniable.
Online or offline. The future is yours to command.

License: MIT

v0.4.0 - Agent Mode. Production Ready. Fully Featured. Free.

See Demo & Features for examples and use cases.

🚀 Quick Start

Full installation and setup: see QUICKSTART.md. Customize your AI's behavior: see Roles & Workflows Guide.

Everyday commands:

lmapp chat          # Start chatting locally
lmapp chat --agent  # Start in auto-Agent Mode (Copilot-like)
lmapp server start  # Start API server (Web App & VS Code)
lmapp status        # Check backend/model status

🌐 Web Interface (New in v0.4.0)

Access lmapp from your browser without installing the desktop app.

  • Start the server: lmapp server start
  • Open http://localhost:8000 in your browser.
  • Chat, run workflows, and manage settings.
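For scripting against the local server, a chat request is just an HTTP POST. The route name and payload shape below (`/api/chat`, a `message` field) are assumptions for illustration, not documented lmapp API; check API_REFERENCE.md for the real contract.

```python
import json
from urllib import request

def build_chat_request(prompt: str, base_url: str = "http://localhost:8000") -> request.Request:
    """Build a POST request for the local lmapp server.

    NOTE: the /api/chat route and {"message": ...} payload are hypothetical;
    substitute the endpoint documented in API_REFERENCE.md.
    """
    payload = json.dumps({"message": prompt}).encode("utf-8")
    return request.Request(
        f"{base_url}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Explain quantum computing in simple terms")
print(req.full_url)  # http://localhost:8000/api/chat
```

Actually sending the request (`request.urlopen(req)`) only works while `lmapp server start` is running.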

🎥 Demonstrations

Recommended: Check out our Feature Tour to see lmapp in action!

We offer two ways to explore lmapp: the Feature Tour above, and real-world use cases including:

  • 🤖 auto-Agent Mode (Autonomous terminal & file operations)
  • 📝 Content Generation (Blogs, Emails, Code)
  • 🔍 Document Analysis (Summarization, Q&A)
  • 📊 Data Extraction (JSON from text)

🎯 Features

🤖 auto-Agent Mode (New in v0.4.0)

Turn your terminal into an autonomous coding assistant.

$ lmapp chat --agent
> Create a python script to calculate fibonacci
[AGENT] Creating fib.py...
[AGENT] Running fib.py...
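For reference, a script like the one the agent creates above might look like this (our own sketch, not actual agent output):

```python
def fibonacci(n: int) -> list[int]:
    """Return the first n Fibonacci numbers."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

if __name__ == "__main__":
    print(fibonacci(10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```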

💬 Chat

$ lmapp chat --model mistral
╔════════════════════════════════════════════╗
║        Chat with Mistral (Local)           ║
╚════════════════════════════════════════════╝

You: Explain quantum computing in simple terms

AI: Quantum computers use quantum bits (qubits) instead of regular bits.
While regular bits are 0 or 1, qubits can be both at once (superposition).
This lets them solve certain problems exponentially faster...

You: What are the use cases?

AI: Key use cases include:
  • Drug discovery (molecular simulation)
  • Finance (portfolio optimization)
  • Cryptography (breaking encryption)
  • Machine learning (optimization)

💻 VS Code Integration

Turn VS Code into an AI-powered IDE with our extension.

  • Start the Server:
    lmapp server start
    
  • Open Dashboard: Go to http://localhost:8000 to see status and chat.
  • Install Extension: Install lmapp-vscode (coming soon to marketplace).
  • Enjoy: Get inline code completions and chat directly in your editor.

🔍 RAG Document Search

$ lmapp rag index ~/my_docs
📁 Indexing documents...
✓ Processed: README.md (1,234 tokens)
✓ Processed: GUIDE.pdf (5,678 tokens)
✓ Processed: NOTES.txt (892 tokens)
✓ Index created: 7,804 tokens in 12 documents

$ lmapp rag search "how to optimize python code"
📊 Search Results (3 matches):

1. GUIDE.pdf - Line 45 (score: 0.92)
   "Optimization techniques include: list comprehensions,
    caching, and using built-in functions instead of loops"

2. NOTES.txt - Line 12 (score: 0.88)
   "Profile code with cProfile before optimizing"

3. README.md - Line 89 (score: 0.81)
   "Performance tips for production code"

$ lmapp chat --with-context
You: Summarize the best Python optimization tips from my docs

AI: Based on your documents, here are the key optimization tips:
  1. Use list comprehensions instead of loops
  2. Profile with cProfile before optimizing
  3. Leverage built-in functions (map, filter, etc.)
  4. Implement caching for expensive operations
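The relevance scores above come from semantic similarity between the query and indexed chunks. As a rough intuition only (lmapp's actual implementation uses embeddings and is not shown here), a toy bag-of-words cosine similarity already ranks documents:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    # Toy bag-of-words similarity; real RAG pipelines compare embedding vectors.
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Tiny stand-ins for the indexed documents from the demo above.
docs = {
    "GUIDE.pdf": "optimization techniques caching and built in functions for python code",
    "NOTES.txt": "profile code with cProfile before optimizing",
    "README.md": "performance tips for production code",
}
query = "how to optimize python code"
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # GUIDE.pdf
```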

📦 Batch Processing

$ lmapp batch create inputs.json
Processing 5 queries in batch...
[████████████████████] 100% (5/5)

Job created: batch_20250211_143022
Estimated time: 45 seconds

$ lmapp batch results batch_20250211_143022 --json
{
  "job_id": "batch_20250211_143022",
  "status": "completed",
  "results": [
    {"input": "Explain AI", "output": "AI is..."},
    {"input": "What is ML?", "output": "Machine learning..."},
    ...
  ],
  "completed_at": "2025-02-11T14:30:47Z"
}
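The exact schema of inputs.json isn't documented in this README; a minimal guess, mirroring the "input" keys shown in the results output above, can be generated like this:

```python
import json
from pathlib import Path

# Assumed schema: a list of {"input": ...} objects, mirroring the
# "input" field in the `lmapp batch results` output above.
queries = ["Explain AI", "What is ML?"]
Path("inputs.json").write_text(json.dumps([{"input": q} for q in queries], indent=2))
```

Then run `lmapp batch create inputs.json` as shown above.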

🔌 Plugins

$ lmapp plugin list
Available Plugins:
  ✓ translator     - Real-time translation (8 languages)
  ✓ summarizer     - Extract key points from long text
  ✓ code-reviewer  - Analyze code and suggest improvements
  ✓ sql-generator  - Write SQL queries from descriptions
  ✓ regex-helper   - Build and test regex patterns
  ✓ json-validator - Validate and format JSON
  ✓ git-helper     - Explain git commands and operations
  ✓ api-tester     - Test REST APIs interactively

$ lmapp plugin install translator
Installing translator plugin...
✓ Downloaded (245 KB)
✓ Installed successfully
Ready to use: lmapp translate --help

$ lmapp translate --text "Hello World" --to spanish
Translation (Spanish):
"¡Hola Mundo!"
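Conceptually, each plugin takes text in and returns text out. lmapp's real plugin contract isn't documented in this README, so the interface below is a hypothetical sketch (names are ours, not lmapp's):

```python
from typing import Protocol

class Plugin(Protocol):
    # Hypothetical interface; lmapp's actual plugin API may differ.
    name: str
    def run(self, text: str) -> str: ...

class Summarizer:
    """Toy stand-in for the summarizer plugin: keep only the first sentence."""
    name = "summarizer"

    def run(self, text: str) -> str:
        return text.split(". ")[0]

print(Summarizer().run("Qubits superpose. They also entangle."))  # Qubits superpose
```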

⚙️ Configuration

$ lmapp config show
Current Configuration:
  Model: mistral (7B)
  Temperature: 0.7
  Max Tokens: 2048
  Context Size: 4096
  System Prompt: You are a helpful AI assistant

$ lmapp config set temperature 0.3
✓ Configuration updated

$ lmapp config --set-prompt
Enter your custom system prompt:
> You are a Python expert. Help with code, explain concepts clearly.
✓ System prompt saved
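Under the hood, a `config set` call amounts to validating and writing one key. A sketch of what the temperature update might check (the 0.0-2.0 range is a common LLM sampling convention, not a documented lmapp limit):

```python
def set_temperature(config: dict, value: float) -> dict:
    # The 0.0-2.0 bound is a common LLM sampling convention, assumed here.
    if not 0.0 <= value <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    updated = dict(config)  # copy rather than mutate in place
    updated["temperature"] = value
    return updated

cfg = set_temperature({"model": "mistral", "temperature": 0.7}, 0.3)
print(cfg["temperature"])  # 0.3
```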

$ lmapp status
Status Report:
  ✓ Backend: Ollama (running)
  ✓ Model: mistral (7.4B)
  ✓ Memory: 6.2 GB / 16 GB
  ✓ Performance: 45 tokens/sec

💡 Who Is This For?

Perfect Fit

  • Developers - Code explanations, debugging, documentation, CLI workflows
  • Students & Researchers - Study partner, research assistance, offline-first
  • SysAdmins - Command lookups, automation scripts, system analysis
  • Professionals - Writing, analysis, research, note-taking
  • Privacy-Conscious Users - Want AI without cloud dependencies
  • Gamers - Quick tips, cheat codes, game help without searching websites
  • Travelers - Offline knowledge access (camping, flights, road trips, deployment)
  • Anyone who values control over convenience

🌟 Real-World Examples

Example 1: No Internet? No Problem!

Scenario: Camping, road trip, flight, deployment - anywhere without internet
Question: "Why does the sky look blue?"

$ lmapp chat
> AI: Hello, how may I help you?
> You: Why does the sky look blue?

> AI: Short answer: the sky is blue because air molecules scatter 
     sunlight more strongly at short (blue) wavelengths than at long 
     (red) wavelengths — a process called Rayleigh scattering.

     A bit more detail:
     • Sunlight contains all visible colors
     • Rayleigh scattering scales as 1/λ^4
     • Blue light (~450 nm) scattered many times more than red (~650 nm)
     • At sunrise/sunset, blue light scattered out → sky looks red/orange
     • Clouds look white (Mie scattering of larger water droplets)
     
     Extra neat fact: scattered sky light is partially polarized!

> AI: Is there anything else I can help you with?

Key Point: Full scientific answers WITHOUT internet

Example 2: Gaming Power User

Scenario: Playing GTA San Andreas, need cheat codes and tips

$ lmapp chat
> You: I'm playing GTA San Andreas, show me cheat codes and top 20 tips

> AI: Nice! Which platform? (PC, PS2/3/4/5, Xbox)
> You: PC

> AI: Here are PC cheat codes (type during gameplay):
     HESOYAM - Health, Armor, $250,000
     AEZAKMI - Never Wanted
     BAGUVIX - Infinite Health
     FULLCLIP - Infinite Ammo
     [20+ more codes...]

     Top 20 Tips & Tricks:
     1. Save often — use multiple save slots
     2. Complete side jobs for cash/stat boosts (Ambulance, Firefighter)
     3. Grind stats: stamina, muscle, driving/flying skill
     4. Learn weapon ranges, aim for headshots
     5. Keep Armor on you at all times
     [15+ more detailed tips...]

> AI: Need anything else? Specific mission help?

Key Point: Skip endless website searches, get comprehensive game help instantly

📖 Basic Usage

# Start chat
lmapp chat

# Use specific model
lmapp chat --model mistral

# Check status
lmapp status

# View configuration
lmapp config show

Supported Backends: Ollama, llamafile (auto-detected). Extensible architecture supports custom backends.
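Backend auto-detection presumably boils down to probing for a known runtime; a minimal sketch of the idea (not lmapp's actual detection code):

```python
import shutil
from typing import Optional

def detect_backend() -> Optional[str]:
    # Hypothetical detection: look for known runtime binaries on PATH,
    # in the order the backends are listed above (Ollama, then llamafile).
    for backend in ("ollama", "llamafile"):
        if shutil.which(backend):
            return backend
    return None
```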

See QUICKSTART.md for complete usage guide.

✅ Quality & Features

  • 🧪 587 tests (100% coverage)
  • 🔒 100% private (no cloud, no tracking)
  • ⚡ Fast & lightweight (<200ms startup)
  • 🔌 8 production plugins
  • 🔍 RAG system (semantic search)
  • 📦 Batch processing
  • 💾 Session persistence
  • 🌐 Web UI (optional)

🔐 Privacy & Security

  • 100% Local - Everything runs on your device
  • No Cloud - No internet after setup
  • No Telemetry - Zero tracking
  • Open Source - MIT licensed
  • Your Data - You own it all

🗺️ Roadmap

v0.4.x (Current) - Production ready
v0.5.0+ - Mobile/desktop apps, team features, enterprise tier

🤝 Contributing

Help wanted! See Contributing Guide for code contributions, bug reports, or feature ideas.

All contributions welcome: bug fixes, features, documentation, tests, and ideas.

💬 Support

⚙️ Troubleshooting

Issue                  Solution
command not found      Add ~/.local/bin to $PATH or use pipx install lmapp
ModuleNotFoundError    Reinstall: pip install --upgrade lmapp
Debian/Ubuntu issues   Use pipx install lmapp instead of pip

See Troubleshooting Guide for more.

❓ FAQ

Q: How do I install?
pip install lmapp

Q: How do I update?
pip install --upgrade lmapp

Q: Can I use commercially?
Yes! MIT License allows it. See LICENSE.

Q: Does it collect data?
No. 100% local, no telemetry.

More questions? See Troubleshooting Guide.


📄 License

MIT License - See LICENSE file

This means:

  • ✅ Use commercially
  • ✅ Modify and distribute
  • ✅ Include in closed-source projects
  • ✅ Just include the license

Third-Party Licenses

  • Ollama: MIT License
  • llamafile: Apache 2.0 License
  • Pydantic: MIT License
  • Pytest: MIT License
  • AI Models: Various (see model documentation)

🙏 Built With

  • Ollama - LLM management platform
  • llamafile - Portable LLM runtime
  • Pydantic - Data validation
  • Pytest - Testing framework
  • Meta, Mistral, and other amazing AI model creators

⭐ Show Your Support

If lmapp helps you, please:

  • ⭐ Star this repository
  • 🐛 Report bugs and suggest features
  • 📢 Share with friends and colleagues
  • 🤝 Contribute improvements
  • 📝 Share your use cases

📞 Get Started Now

pip install lmapp
lmapp chat

📖 Documentation Map

Document               Purpose
QUICKSTART.md          5-minute setup guide ⭐ Start here
docs/installation.md   Installation methods for all platforms
docs/CONFIGURATION.md  Configuration, environment, and settings
docs/development.md    Developer workflow and tips
TROUBLESHOOTING.md     Solutions for common issues
SECURITY.md            Security policy and vulnerability reporting
CHANGELOG.md           Release history
CONTRIBUTING.md        Contribution guidelines
CODE_OF_CONDUCT.md     Community standards
LICENSE                License terms
DEMO.md                Live examples and feature tour
API_REFERENCE.md       Lightweight CLI + HTTP API reference


Welcome to the future of local AI. 🚀 This is the way...
