grokipedia-api
A client library for accessing content from Grokipedia, an open-source, comprehensive collection of all knowledge.
Available in both Python and JavaScript/TypeScript!
pip install grokipedia-api
For async support:
pip install grokipedia-api[async]
For MCP server functionality (Python 3.10+):
pip install grokipedia-api[mcp]
For all features:
pip install grokipedia-api[all]
Or install from source:
git clone https://github.com/AkeBoss-tech/grokipedia-api.git
cd grokipedia-api
pip install -e .
from grokipedia_api import GrokipediaClient
# Create a client
client = GrokipediaClient()
# Search for articles
results = client.search("Python programming")
print(f"Found {len(results['results'])} results")
# Get a specific page
page = client.get_page("United_Petroleum")
print(f"Title: {page['page']['title']}")
print(f"Content: {page['page']['content'][:200]}...")
from grokipedia_api import GrokipediaClient
client = GrokipediaClient()
# Search with pagination
results = client.search("machine learning", limit=20, offset=0)
for result in results['results']:
    print(f"- {result['title']}")
    print(f"  Slug: {result['slug']}")
    print(f"  Views: {result['viewCount']}")
    print()
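The limit/offset pair above can be wrapped in a small helper that walks through every page of results. This is a sketch assuming only the search() signature shown here; iter_all_results is not part of grokipedia-api.

```python
def iter_all_results(client, query, page_size=12):
    """Yield every result for `query`, advancing `offset` until a page comes back short.

    Works with any object exposing search(query, limit=..., offset=...) as shown
    above; this helper itself is illustrative, not part of the library.
    """
    offset = 0
    while True:
        batch = client.search(query, limit=page_size, offset=offset)['results']
        yield from batch
        if len(batch) < page_size:
            return
        offset += page_size

# Usage sketch:
# client = GrokipediaClient()
# for result in iter_all_results(client, "machine learning"):
#     print(result['title'])
```

Stopping when a page comes back short avoids an extra empty request when the total happens to be a multiple of the page size only in the common case; the final short page is the termination signal.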
from grokipedia_api import GrokipediaClient
client = GrokipediaClient()
# Get full page with all content
page = client.get_page("United_Petroleum", include_content=True)
# Access structured data
title = page['page']['title']
content = page['page']['content']
citations = page['page']['citations']
print(f"Article: {title}")
print(f"\nCitations: {len(citations)}")
for citation in citations:
    print(f"- [{citation['id']}] {citation['title']}")
    print(f"  {citation['url']}")
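The citation dicts shown above can also be rendered into a reusable format. A minimal sketch, assuming the id/title/url keys from the loop above; citations_to_markdown is illustrative, not part of grokipedia-api.

```python
def citations_to_markdown(citations):
    """Render citation dicts (the id/title/url shape shown above) as a Markdown list.

    Illustrative helper only, not part of the library.
    """
    return "\n".join(f"- [{c['title']}]({c['url']})" for c in citations)

# print(citations_to_markdown(page['page']['citations']))
```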
For faster, concurrent operations with async/await:
import asyncio
from grokipedia_api import AsyncGrokipediaClient, search_many, get_many_pages
async def main():
    # Basic async usage
    async with AsyncGrokipediaClient() as client:
        results = await client.search("Python programming")
        print(f"Found {len(results['results'])} results")

        page = await client.get_page("United_Petroleum")
        print(f"Title: {page['page']['title']}")

    # Search multiple queries concurrently
    queries = ["Python", "JavaScript", "Rust"]
    all_results = await search_many(queries, limit=5)
    print(f"Total results: {len(all_results)}")

    # Get multiple pages concurrently
    slugs = ["United_Petroleum", "Python_(programming_language)"]
    pages = await get_many_pages(slugs)
    for page_data in pages:
        print(f"✓ {page_data['page']['title']}")

asyncio.run(main())
from grokipedia_api import GrokipediaClient
# Use context manager for automatic cleanup
with GrokipediaClient() as client:
    results = client.search("Python")
    for result in results['results']:
        print(result['title'])
For AI agent integrations:
# Start the MCP server
grokipedia-mcp
The server exposes tools for searching and retrieving Grokipedia content via the Model Context Protocol. See MCP_SERVER.md for detailed documentation.
# Search for articles
grokipedia search "Python programming"
# Get a specific page
grokipedia get "United_Petroleum" --citations
# Get full content
grokipedia get "United_Petroleum" --full
The main client class for interacting with Grokipedia.
search(query, limit=12, offset=0)
Search for articles in Grokipedia.
Parameters:
- query (str): Search query string
- limit (int): Maximum number of results to return (default: 12)
- offset (int): Number of results to skip for pagination (default: 0)
Returns:
- results: List of search result dictionaries
- total_count: Total number of results (if available)
Example:
results = client.search("Python programming", limit=20)
get_page(slug, include_content=True, validate_links=True)
Get a specific page by its slug.
Parameters:
- slug (str): Page slug (e.g., "United_Petroleum")
- include_content (bool): Whether to include full content (default: True)
- validate_links (bool): Whether to validate links (default: True)
Returns:
- page: Page information including title, content, citations, images, etc.
- found: Boolean indicating if the page was found
Example:
page = client.get_page("Python_(programming_language)")
search_pages(query, limit=12)
Search for pages and return results as a list.
Parameters:
- query (str): Search query string
- limit (int): Maximum number of results to return (default: 12)
Returns: A list of search result dictionaries
Example:
pages = client.search_pages("machine learning")
The library provides custom exceptions:
- GrokipediaError: Base exception for all Grokipedia errors
- GrokipediaNotFoundError: Raised when a requested resource is not found
- GrokipediaAPIError: Raised when there's an API-related error
- GrokipediaRateLimitError: Raised when the rate limit is exceeded

from grokipedia_api import GrokipediaClient
from grokipedia_api.exceptions import GrokipediaNotFoundError
client = GrokipediaClient()
try:
    page = client.get_page("NonExistentPage")
except GrokipediaNotFoundError:
    print("Page not found!")
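Since GrokipediaRateLimitError is also raised on throttling, a retry wrapper can make batch jobs more robust. The library does not document retry timing, so the backoff delays below are arbitrary, and with_retry is a generic sketch rather than part of grokipedia-api.

```python
import time

def with_retry(fn, rate_limit_error, retries=3, base_delay=1.0):
    """Call fn(), sleeping with exponential backoff whenever rate_limit_error is raised.

    Generic sketch: delays are arbitrary and this helper is not part of the library.
    """
    for attempt in range(retries):
        try:
            return fn()
        except rate_limit_error:
            if attempt == retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)

# Usage sketch:
# from grokipedia_api import GrokipediaClient
# from grokipedia_api.exceptions import GrokipediaRateLimitError
# client = GrokipediaClient()
# page = with_retry(lambda: client.get_page("United_Petroleum"), GrokipediaRateLimitError)
```

Passing the exception class in as an argument keeps the wrapper reusable for any client or error type.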
npm install grokipedia-api
Or with yarn:
yarn add grokipedia-api
const { GrokipediaClient } = require('grokipedia-api');
// Create a client
const client = new GrokipediaClient();
// Search for articles
client.search('Python programming')
  .then(results => {
    console.log(`Found ${results.results.length} results`);
    results.results.forEach(result => {
      console.log(`- ${result.title} (${result.slug})`);
    });
  })
  .catch(error => {
    console.error('Error:', error.message);
  });
// Get a specific page
client.getPage('United_Petroleum')
  .then(page => {
    console.log(`Title: ${page.page.title}`);
    console.log(`Content: ${page.page.content.substring(0, 200)}...`);
  })
  .catch(error => {
    console.error('Error:', error.message);
  });
import { GrokipediaClient } from 'grokipedia-api';
const client = new GrokipediaClient();
// Search for articles
const results = await client.search('machine learning', 20);
console.log(`Found ${results.results.length} results`);
// Get a specific page
const page = await client.getPage('United_Petroleum', true);
console.log(`Title: ${page.page.title}`);
console.log(`Citations: ${page.page.citations?.length || 0}`);
const { GrokipediaClient } = require('grokipedia-api');
async function main() {
  const client = new GrokipediaClient();

  try {
    // Search with pagination
    const results = await client.search('machine learning', 20, 0);
    console.log(`Total results: ${results.total_count || 'unknown'}`);

    for (const result of results.results) {
      console.log(`- ${result.title}`);
      console.log(`  Slug: ${result.slug}`);
      console.log(`  Views: ${result.viewCount || 'N/A'}`);
    }

    // Get full page content
    const page = await client.getPage('United_Petroleum', true);
    console.log(`\nArticle: ${page.page.title}`);
    console.log(`\nCitations: ${page.page.citations?.length || 0}`);

    if (page.page.citations) {
      page.page.citations.forEach(citation => {
        console.log(`- [${citation.id}] ${citation.title}`);
        console.log(`  ${citation.url}`);
      });
    }
  } catch (error) {
    console.error('Error:', error.message);
  } finally {
    client.close();
  }
}

main();
const { GrokipediaClient } = require('grokipedia-api');
// Create client with custom options
const client = new GrokipediaClient({
  baseUrl: 'https://grokipedia.com', // Optional: custom base URL
  timeout: 30000,                    // Request timeout in ms (default: 30000)
  useCache: true,                    // Enable caching (default: true)
  cacheTtl: 604800,                  // Cache TTL in seconds (default: 7 days)
});
// Search for pages (returns just the results array)
const pages = await client.searchPages('Python', 10);
pages.forEach(page => {
  console.log(`${page.title}: ${page.snippet.substring(0, 100)}...`);
});
// Clear cache if needed
client.clearCache();
The main client class for interacting with Grokipedia.
new GrokipediaClient(options?: ClientOptions)
Options:
- baseUrl (string, optional): Custom base URL (default: https://grokipedia.com)
- timeout (number, optional): Request timeout in milliseconds (default: 30000)
- useCache (boolean, optional): Enable caching (default: true)
- cacheTtl (number, optional): Cache time-to-live in seconds (default: 604800 = 7 days)

search(query, limit?, offset?)
Search for articles in Grokipedia.
Parameters:
- query (string): Search query string
- limit (number, optional): Maximum number of results to return (default: 12)
- offset (number, optional): Number of results to skip for pagination (default: 0)
Returns: Promise&lt;SearchResponse&gt;
Example:
const results = await client.search('Python programming', 20);
getPage(slug, includeContent?, validateLinks?)
Get a specific page by its slug.
Parameters:
- slug (string): Page slug (e.g., "United_Petroleum")
- includeContent (boolean, optional): Whether to include full content (default: true)
- validateLinks (boolean, optional): Whether to validate links (default: true)
Returns: Promise&lt;PageResponse&gt;
Example:
const page = await client.getPage('Python_(programming_language)');
searchPages(query, limit?)
Search for pages and return results as a list.
Parameters:
- query (string): Search query string
- limit (number, optional): Maximum number of results to return (default: 12)
Returns: Promise&lt;SearchResult[]&gt;
Example:
const pages = await client.searchPages('machine learning');
clearCache()
Clear the in-memory cache.
Example:
client.clearCache();
close()
Close the client and clean up resources.
Example:
client.close();
The package includes full TypeScript definitions. Import types as needed:
import {
  GrokipediaClient,
  Page,
  PageResponse,
  SearchResult,
  SearchResponse,
  Citation,
  Image,
  ClientOptions,
} from 'grokipedia-api';
The library provides custom exceptions:
- GrokipediaError: Base exception for all Grokipedia errors
- GrokipediaNotFoundError: Raised when a requested resource is not found
- GrokipediaAPIError: Raised when there's an API-related error
- GrokipediaRateLimitError: Raised when the rate limit is exceeded

const { GrokipediaClient, GrokipediaNotFoundError } = require('grokipedia-api');
const client = new GrokipediaClient();
try {
  const page = await client.getPage('NonExistentPage');
} catch (error) {
  if (error instanceof GrokipediaNotFoundError) {
    console.log('Page not found!');
  } else {
    console.error('Error:', error.message);
  }
}
Both Python and JavaScript versions include:
Python-specific features:
JavaScript-specific features:
Both libraries provide structured data models:
- Page: Represents a full Grokipedia page
- Citation: Represents a citation in an article
- Image: Represents an image in an article
- SearchResult: Represents a search result

# Clone the repository
git clone https://github.com/AkeBoss-tech/grokipedia-api.git
cd grokipedia-api
# Install in development mode with dev dependencies
pip install -e ".[dev]"
# Run tests
pytest
# Format code
black grokipedia_api tests
# Lint code
ruff check grokipedia_api tests
# Clone the repository
git clone https://github.com/AkeBoss-tech/grokipedia-api.git
cd grokipedia-api
# Install dependencies
npm install
# Build TypeScript
npm run build
# Run tests
npm test
# Lint code
npm run lint
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
If you encounter any issues or have questions, please open an issue on GitHub.