
nascoder-terminal-browser-mcp

Source: npm · Version: 1.0.3 (6 versions published) · Weekly downloads: 16 (-20%) · Maintainers: 1

🌐 NasCoder Terminal Browser MCP

Ultra-Pro Terminal Browser MCP Server - Browse websites, scrape documentation, and extract content directly in your terminal without saving any files!


⚡ Quick Start (2 minutes)

Method 1: Simple CLI Tool (Easiest)

# Install
npm install -g nascoder-terminal-browser-mcp

# Use immediately
browse https://example.com
browse https://docs.python.org --format summary
browse https://news.ycombinator.com --format links

Method 2: Amazon Q CLI Integration

1. Install the Package

# Install
npm install -g nascoder-terminal-browser-mcp

2. Add to Q CLI

Edit ~/.config/amazonq/mcp.json:

{
  "mcpServers": {
    "nascoder-terminal-browser": {
      "command": "npx",
      "args": ["nascoder-terminal-browser-mcp"],
      "timeout": 30000,
      "disabled": false
    }
  }
}

3. Restart Q CLI

# Exit Q CLI
/quit

# Start again
q chat

4. Try It!

Browse https://example.com and show me the content

🔥 Why Choose This MCP?

| Feature | Standard Tools | NasCoder Terminal Browser |
|---|---|---|
| File downloads | ❌ Creates files | ✅ No files - terminal only |
| Browser support | Limited | ✅ Multiple engines (lynx, w3m, links) |
| Fallback method | None | ✅ fetch+html-to-text backup |
| Link extraction | Manual | ✅ Automatic link parsing |
| Content formatting | Raw HTML | ✅ Clean terminal formatting |
| Error handling | Basic | ✅ Advanced retry & fallback |
| Output control | Fixed | ✅ Multiple format options |

🎯 What You Get

Terminal Web Browsing

  • No file pollution - Everything displayed directly in terminal
  • Multiple browser engines - lynx, w3m, links, elinks with auto-selection
  • Smart fallback - Uses fetch+html-to-text if no terminal browsers available
  • Clean formatting - Optimized for terminal reading
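The smart-fallback path can be illustrated with a minimal sketch. This is not the package's actual code — the server presumably uses the html-to-text npm package, while `htmlToText` below is a deliberately tiny stand-in that only shows the idea of converting fetched HTML in memory, with no files written:

```javascript
// Illustrative sketch of the fetch + html-to-text fallback path.
// Assumption: the real server uses the html-to-text npm package;
// this minimal converter only demonstrates the concept.
function htmlToText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop scripts
    .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop stylesheets
    .replace(/<[^>]+>/g, " ")                    // strip remaining tags
    .replace(/&nbsp;/g, " ")
    .replace(/&amp;/g, "&")
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/\s+/g, " ")                        // collapse whitespace
    .trim();
}

// Fallback browsing: fetch the page, convert in memory, display in terminal.
async function fetchFallback(url) {
  const res = await fetch(url);
  return htmlToText(await res.text());
}
```

Because the conversion happens entirely in memory, nothing ever touches the filesystem.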

Advanced Features

  • Link extraction - Automatically find and list all page links
  • Content truncation - Prevent overwhelming output with length limits
  • Multiple formats - Choose between full, content-only, links-only, or summary
  • Error resilience - Multiple fallback methods ensure success
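The automatic link extraction above can be sketched as a small function (an illustrative approximation, not the package's implementation) that collects anchor tags and caps the result count, mirroring the `maxLinks` behavior:

```javascript
// Illustrative sketch of automatic link extraction with a result cap.
function extractLinks(html, maxLinks = 50) {
  const links = [];
  const re = /<a\b[^>]*href=["']([^"']+)["'][^>]*>([\s\S]*?)<\/a>/gi;
  let m;
  while ((m = re.exec(html)) !== null && links.length < maxLinks) {
    links.push({
      url: m[1],
      text: m[2].replace(/<[^>]+>/g, "").trim(), // strip nested markup
    });
  }
  return links;
}
```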

Developer Friendly

  • Zero configuration - Works out of the box
  • Comprehensive logging - Debug issues easily
  • Flexible options - Customize behavior per request
  • MCP standard - Integrates with any MCP-compatible system

🛠️ Available Tools

1. terminal_browse

Browse websites and display content directly in terminal.

Parameters:

  • url (required) - Website URL to browse
  • browser - Terminal browser to use (auto, lynx, w3m, links, elinks)
  • format - Output format (full, content-only, links-only, summary)
  • extractLinks - Extract page links (true/false)
  • maxLength - Maximum content length to prevent overwhelming output

Example:

Use terminal_browse to visit https://docs.github.com with format=summary
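The `maxLength` parameter above amounts to simple output truncation. A sketch of the idea (the marker text is hypothetical, not the tool's actual output):

```javascript
// Sketch of maxLength truncation to prevent overwhelming terminal output.
function truncateContent(text, maxLength) {
  if (!maxLength || text.length <= maxLength) return text;
  return text.slice(0, maxLength) + "\n... [truncated]";
}
```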

2. check_browsers

Check which terminal browsers are available on your system.

Example:

Check what terminal browsers are available

3. extract_links

Extract all links from a webpage without showing full content.

Parameters:

  • url (required) - Website URL to extract links from
  • maxLinks - Maximum number of links to return (default: 50)

Example:

Extract all links from https://news.ycombinator.com

🚀 Usage Examples

🎯 Simple CLI Commands

# Browse any website
browse https://example.com

# Get page summary with stats
browse https://docs.python.org --format summary

# Extract all links
browse https://news.ycombinator.com --format links

# Full content with metadata
browse https://github.com/trending --format full

# Limit content length
browse https://very-long-page.com --max-length 1000

# Use specific browser
browse https://example.com --browser lynx

📋 Available Formats

  • content - Clean page text (default)
  • summary - Brief overview with stats
  • links - All extracted links
  • full - Complete content with links

🔧 CLI Options

browse <url> [options]

Options:
  --format, -f     Output format (content, summary, links, full)
  --max-length, -l Maximum content length [default: 2000]
  --browser, -b    Browser to use (auto, lynx, w3m, links)
  --help, -h       Show help

🤖 Amazon Q CLI Integration

Basic Browsing

Browse https://example.com

Documentation Reading

Browse https://docs.python.org/3/ with format=content-only
Extract links from https://github.com/trending

Quick Summary

Browse https://news.ycombinator.com with format=summary

Specific Browser

Browse https://example.com using lynx browser

📊 Output Formats

Full Format (default)

  • Complete page content
  • All extracted links
  • Metadata and statistics
  • Method used for browsing

Content-Only

  • Just the page text content
  • No links or metadata
  • Clean reading experience

Links-Only

  • Only the extracted links
  • Perfect for navigation
  • Numbered list format

Summary

  • Brief content preview
  • Key statistics
  • Quick overview

🔧 Terminal Browser Support

Supported Browsers

  • lynx - Best text formatting, recommended
  • w3m - Good table support, images in some terminals
  • links - Interactive features, mouse support
  • elinks - Enhanced links with more features

Installation Commands

# macOS (Homebrew)
brew install lynx w3m links

# Ubuntu/Debian
sudo apt install lynx w3m links elinks

# CentOS/RHEL
sudo yum install lynx w3m links elinks

Auto-Selection Priority

  1. lynx (best formatting)
  2. w3m (good compatibility)
  3. links (interactive features)
  4. elinks (enhanced features)
  5. fetch+html-to-text (always-available fallback)
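The priority order above reduces to a one-line selection rule. An illustrative sketch (not the package's actual code): take the first installed browser in priority order, else fall back to fetch+html-to-text.

```javascript
// Sketch of auto-selection: first available browser in priority order wins.
const PRIORITY = ["lynx", "w3m", "links", "elinks"];

function pickBrowser(available) {
  return PRIORITY.find((b) => available.includes(b)) || "fetch+html-to-text";
}
```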

🎨 Advanced Usage

Custom Content Length

{
  "url": "https://very-long-page.com",
  "maxLength": 5000
}

Link Extraction Limit

{
  "url": "https://documentation-site.com",
  "format": "links-only",
  "maxLinks": 100
}

Specific Browser

{
  "url": "https://example.com",
  "browser": "w3m",
  "extractLinks": false
}

🔍 Troubleshooting

"No terminal browsers found"

# Install at least one terminal browser
brew install lynx  # macOS
sudo apt install lynx  # Ubuntu

"Browser failed" errors

  • The tool automatically falls back to fetch+html-to-text
  • Check internet connectivity
  • Some sites may block terminal browsers

Content too long

Use maxLength parameter to limit output:
Browse https://long-page.com with maxLength=2000

Q CLI doesn't see MCP

  • Check ~/.config/amazonq/mcp.json syntax
  • Restart Q CLI (/quit then q chat)
  • Verify package installation: npm list -g nascoder-terminal-browser-mcp

📈 Performance Features

Smart Caching

  • No file system caching (by design)
  • Memory-efficient processing
  • Fast response times

Error Handling

  • Multiple fallback methods
  • Graceful degradation
  • Comprehensive error messages

Resource Management

  • 30-second timeout protection
  • Memory-conscious content truncation
  • Efficient link extraction
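The 30-second timeout protection can be sketched as a promise race (an assumed approach, not the package's actual code): the work either settles in time or is rejected by a timer.

```javascript
// Sketch of timeout protection: race the work against a 30-second timer.
function withTimeout(promise, ms = 30000) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```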

🎉 Success Stories

"Finally, a way to browse documentation without cluttering my filesystem with temp files!" - Developer

"The automatic fallback from lynx to fetch+html-to-text saved my workflow when lynx wasn't available." - DevOps Engineer

"Perfect for scraping API docs directly in my terminal. The link extraction is incredibly useful." - API Developer

📋 Comparison

| Tool | Files Created | Browser Support | Link Extraction | Fallback Method |
|---|---|---|---|---|
| NasCoder Terminal Browser | ✅ None | ✅ 4 browsers | ✅ Automatic | ✅ fetch+html-to-text |
| curl + html2text | ❌ Temp files | ❌ None | ❌ Manual | ❌ None |
| wget + pandoc | ❌ Downloads | ❌ None | ❌ Manual | ❌ None |
| lynx alone | ❌ Can save files | ✅ lynx only | ❌ Manual | ❌ None |

📄 License

MIT - Feel free to use, modify, and distribute

🚀 Ready to browse the web in your terminal without file clutter?

Install now and experience the difference!

npm install -g nascoder-terminal-browser-mcp

Built with ❤️ by NasCoder (@freelancernasimofficial)

Keywords

mcp

Package last updated on 13 Jul 2025