
# lance-context

An MCP plugin that adds semantic code search to Claude Code and other AI coding agents, giving them deep context from your entire codebase.
AI coding agents typically need to read entire files to understand your codebase, which consumes significant context tokens. lance-context dramatically reduces token usage:
| Without lance-context | With lance-context | Savings |
|---|---|---|
| Read 5-10 files to find auth code (~5000 lines) | search_code returns 3 chunks (~150 lines) | ~97% |
| Read entire file to understand structure | get_symbols_overview returns compact list | ~80-90% |
| Explore many files to understand codebase | summarize_codebase + list_concepts | ~95% |
| Read and compare files for duplicates | search_similar returns targeted results | ~90% |
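The savings figures above are simple line-count ratios: reading ~150 lines instead of ~5000 avoids about 97% of the tokens. A quick sketch of the arithmetic (illustrative only; actual token counts depend on the tokenizer):

```typescript
// Estimate savings as the fraction of lines an agent no longer has to read.
const savingsPercent = (linesBefore: number, linesAfter: number): number =>
  Math.round((1 - linesAfter / linesBefore) * 100);

console.log(savingsPercent(5000, 150)); // 97
```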
The web dashboard displays real-time token savings statistics:
Add lance-context to Claude Code:

```shell
claude mcp add --scope user --transport stdio lance-context -- npx -y lance-context
```

Restart Claude Code to start using semantic search.
For faster startup (no npm check on each run):

```shell
npm install -g lance-context
```

This automatically registers lance-context with Claude Code. Update manually with `npm update -g lance-context`.
If automatic registration didn't work, manually add to Claude Code:

```shell
claude mcp add --scope user --transport stdio lance-context -- npx -y lance-context@latest
```

In Claude Code, run `/mcp` to see lance-context in the list of MCP servers.
For project-specific MCP configuration, add a `.mcp.json` to your project root:

```json
{
  "mcpServers": {
    "lance-context": {
      "command": "npx",
      "args": ["-y", "lance-context@latest"]
    }
  }
}
```
Create a `.lance-context.json` file in your project root to customize indexing behavior. All options are optional; lance-context works out of the box with sensible defaults.
For most projects, you only need to specify what to include:
```json
{
  "patterns": ["**/*.ts", "**/*.js"],
  "instructions": "This is a TypeScript monorepo. Use semantic search to find relevant utilities."
}
```
A full example with every option set:

```json
{
  "patterns": ["**/*.ts", "**/*.tsx", "**/*.js", "**/*.jsx"],
  "excludePatterns": ["**/node_modules/**", "**/dist/**", "**/*.test.ts"],
  "embedding": {
    "backend": "gemini"
  },
  "chunking": {
    "maxLines": 100,
    "overlap": 20
  },
  "search": {
    "semanticWeight": 0.7,
    "keywordWeight": 0.3
  },
  "dashboard": {
    "enabled": true,
    "port": 24300,
    "openBrowser": true
  },
  "instructions": "Project-specific instructions for AI agents working with this codebase."
}
```
| Option | Description | Default |
|---|---|---|
| patterns | Glob patterns for files to index | ["**/*.ts", "**/*.tsx", "**/*.js", "**/*.jsx", "**/*.py", "**/*.go", "**/*.rs", "**/*.java", "**/*.rb", "**/*.php", "**/*.c", "**/*.cpp", "**/*.h", "**/*.hpp", "**/*.cs", "**/*.swift", "**/*.kt"] |
| excludePatterns | Glob patterns for files to exclude | ["**/node_modules/**", "**/dist/**", "**/.git/**", "**/build/**", "**/target/**", "**/__pycache__/**", "**/venv/**", "**/.venv/**", "**/vendor/**", "**/*.min.js", "**/*.min.css"] |
| embedding.backend | Embedding provider: "gemini" or "ollama" | Auto-detect based on available API keys |
| embedding.model | Override the default embedding model | Backend default |
| embedding.ollamaConcurrency | Max concurrent Ollama requests (1-200) | 100 |
| indexing.batchSize | Texts per embedding batch request (1-1000) | 200 |
| chunking.maxLines | Maximum lines per chunk | 100 |
| chunking.overlap | Overlapping lines between chunks for context continuity | 20 |
| search.semanticWeight | Weight for semantic (vector) similarity (0-1) | 0.7 |
| search.keywordWeight | Weight for BM25 keyword matching (0-1) | 0.3 |
| dashboard.enabled | Enable the web dashboard | true |
| dashboard.port | Port for the dashboard server | 24300 |
| dashboard.openBrowser | Auto-open browser when dashboard starts | true |
| instructions | Project-specific instructions returned by get_project_instructions | None |
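To see how chunking.maxLines and chunking.overlap interact, here is a minimal line-based sketch. It is illustrative only: the package's real chunker is AST-aware, and `chunkLines` is a hypothetical helper, not part of its API.

```typescript
// Split lines into chunks of up to `maxLines`, stepping back by `overlap`
// lines so adjacent chunks share context. Illustrative sketch only.
function chunkLines(lines: string[], maxLines = 100, overlap = 20): string[][] {
  const chunks: string[][] = [];
  let start = 0;
  while (start < lines.length) {
    const end = Math.min(start + maxLines, lines.length);
    chunks.push(lines.slice(start, end));
    if (end === lines.length) break;
    start = end - overlap; // next chunk repeats the last `overlap` lines
  }
  return chunks;
}

const chunks = chunkLines(Array.from({ length: 250 }, (_, i) => `line ${i}`));
console.log(chunks.length); // 3 chunks: lines 0-99, 80-179, 160-249
```

With the defaults, a 250-line file becomes three chunks, each sharing 20 lines with its neighbor so that no definition is cut off without surrounding context.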
Without a `.lance-context.json` file, lance-context will:

- Use the defaults listed in the options table above
- Use Gemini if `GEMINI_API_KEY` is set, otherwise use local Ollama with `qwen3-embedding:0.6b`

Set these environment variables to configure embedding backends:
| Variable | Description | Default |
|---|---|---|
| GEMINI_API_KEY | Google Gemini API key for cloud embeddings (free tier available) | None |
| OLLAMA_URL | Custom Ollama server URL for local embeddings | http://localhost:11434 |
| LANCE_CONTEXT_PROJECT | Override the project path to index | Current working directory |
Backend Selection Priority:

1. If embedding.backend is set in config, use that backend
2. If GEMINI_API_KEY is set, use Gemini
3. Otherwise, use local Ollama

The overall architecture:

```
┌─────────────────────────────────────────────────────────────┐
│ MCP Server (index.ts)                                       │
│ Exposes tools: index_codebase, search_code                  │
└─────────────────┬───────────────────────────────────────────┘
                  │
┌─────────────────▼───────────────────────────────────────────┐
│ CodeIndexer (indexer.ts)                                    │
│ - AST-aware chunking for supported languages                │
│ - Incremental indexing (only re-index changed files)        │
│ - Hybrid search (semantic + keyword scoring)                │
└─────────────────┬───────────────────────────────────────────┘
                  │
┌─────────────────▼───────────────────────────────────────────┐
│ Embedding Backends (embeddings/)                            │
│ Gemini │ Ollama (local)                                     │
└─────────────────┬───────────────────────────────────────────┘
                  │
┌─────────────────▼───────────────────────────────────────────┐
│ LanceDB Vector Store                                        │
│ Stored in .lance-context/ directory                         │
└─────────────────────────────────────────────────────────────┘
```
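The hybrid search step blends vector similarity with BM25 keyword matching using the search.semanticWeight and search.keywordWeight options. A minimal sketch, assuming both scores are already normalized to [0, 1] (the package's internal scoring may differ):

```typescript
// Blend semantic (vector) similarity with a BM25 keyword score using the
// semanticWeight / keywordWeight config options. Assumes scores in [0, 1].
// Hypothetical helper for illustration, not the package's actual function.
function hybridScore(
  semantic: number,
  keyword: number,
  semanticWeight = 0.7,
  keywordWeight = 0.3,
): number {
  return semanticWeight * semantic + keywordWeight * keyword;
}

console.log(hybridScore(1, 0)); // 0.7
```

With the default 0.7/0.3 split, a chunk that is a perfect semantic match but has no keyword overlap still scores 0.7, so conceptually similar code surfaces even when identifiers differ.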
lance-context automatically selects the best available backend (in priority order):

1. Google Gemini (if `GEMINI_API_KEY` is set, free tier available)

   ```shell
   export GEMINI_API_KEY=AIza...
   ```

2. Ollama (recommended for most users: free, local, no rate limits)

Ollama provides free, local embeddings with no API rate limits. Perfect for indexing large codebases.
Requirements: Ollama 0.2.0 or newer (for the batch embedding API)

1. Install Ollama from ollama.com

2. Verify the version (must be 0.2.0+):

   ```shell
   ollama --version
   ```

3. Pull the embedding model:

   ```shell
   ollama pull qwen3-embedding:0.6b
   ```

4. Verify it's working:

   ```shell
   ollama run qwen3-embedding:0.6b "test"
   ```

That's it! lance-context will automatically use Ollama when no Gemini API key is set.
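The fallback behavior described above can be sketched as a small helper. This is a hypothetical illustration of the documented priority order; the real detection lives inside the package:

```typescript
// Sketch of backend auto-selection: explicit config wins, then a Gemini
// API key, then local Ollama. Hypothetical helper, not the actual API.
type Backend = "gemini" | "ollama";

function selectBackend(
  configBackend: Backend | undefined,
  env: Record<string, string | undefined>,
): Backend {
  if (configBackend) return configBackend; // 1. embedding.backend from config
  if (env.GEMINI_API_KEY) return "gemini"; // 2. Gemini if an API key is set
  return "ollama";                         // 3. fall back to local Ollama
}

console.log(selectBackend(undefined, {})); // "ollama"
```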
| Model | Size | Quality | Best For |
|---|---|---|---|
| qwen3-embedding:0.6b | 639MB | Good | Most users (default) |
| qwen3-embedding:4b | 2.5GB | Better | Users with 16GB+ RAM |
| qwen3-embedding:8b | 4.7GB | Best | Users with 32GB+ RAM |
To use a different model, add to your `.lance-context.json`:

```json
{
  "embedding": {
    "backend": "ollama",
    "model": "qwen3-embedding:4b"
  }
}
```
See Project Configuration for all configuration options including how to specify a backend.
Once installed, you'll have access to these tools:

### index_codebase

Index your codebase for semantic search:

```
> index_codebase
Indexed 150 files, created 800 chunks.
```

With custom patterns:

```
> index_codebase(patterns: ["**/*.py"], excludePatterns: ["**/tests/**"])
```

### search_code

Search using natural language:

```
> search_code(query: "authentication middleware")
## Result 1: src/middleware/auth.ts:1-50
...
```

### get_index_status

Check index status:

```
> get_index_status
{
  "indexed": true,
  "fileCount": 150,
  "chunkCount": 800,
  "lastUpdated": "2024-12-27T12:00:00Z"
}
```

### clear_index

Clear the index:

```
> clear_index
Index cleared.
```

### get_project_instructions

Get project-specific instructions from the config:

```
> get_project_instructions
Use semantic search for exploring this codebase. Always run tests before committing.
```
lance-context includes a web dashboard for monitoring index status and usage.
The dashboard starts automatically when the MCP server runs and is available at:
http://127.0.0.1:24300
The browser opens automatically on startup (configurable).
Configure the dashboard via the dashboard options in .lance-context.json. See Configuration Options Reference for details.
The index is stored in the `.lance-context/` directory.

Supported languages: TypeScript, JavaScript, Python, Go, Rust, Java, Ruby, PHP, C/C++, C#, Swift, Kotlin, and more.
This error means no API keys are set and Ollama is not running/accessible.

Solutions:

```shell
# Install from https://ollama.com, then:
ollama pull qwen3-embedding:0.6b
```

Or set a Gemini API key:

```shell
export GEMINI_API_KEY=AIza...
```

Vector dimension mismatch: this occurs when switching between embedding backends (e.g., from Gemini to Ollama). Each backend produces different vector dimensions.
Solution: Force a full reindex:

```
> index_codebase(forceReindex: true)
```
Large codebases may take time to index initially.
Tips:
- Use excludePatterns to skip unnecessary directories (tests, generated code)

If you encounter strange search results or errors:
Solution: Clear and rebuild the index:

```
> clear_index
> index_codebase
```

Or manually delete the `.lance-context/` directory and re-index.
MIT - See LICENSE for details.
Contributions welcome! Please read our Contributing Guide before submitting PRs.