A lightweight, dependency-free Python class that acts as a wrapper for the Crawlbase API.
Install the package with pip:
pip install crawlbase
Then import CrawlingAPI, ScraperAPI, etc. as needed.
from crawlbase import CrawlingAPI, ScraperAPI, LeadsAPI, ScreenshotsAPI, StorageAPI
First initialize the CrawlingAPI class.
api = CrawlingAPI({ 'token': 'YOUR_CRAWLBASE_TOKEN' })
Pass the URL that you want to scrape plus any of the options available in the API documentation.
api.get(url, options = {})
Example:
response = api.get('https://www.facebook.com/britneyspears')
if response['status_code'] == 200:
    print(response['body'])
You can pass any options from Crawlbase API.
Example:
response = api.get('https://www.reddit.com/r/pics/comments/5bx4bx/thanks_obama/', {
    'user_agent': 'Mozilla/5.0 (Windows NT 6.2; rv:20.0) Gecko/20121202 Firefox/30.0',
    'format': 'json'
})
if response['status_code'] == 200:
    print(response['body'])
Pass the URL that you want to scrape and the data that you want to send, which can be either a dict or a string, plus any of the options available in the API documentation.
api.post(url, dictionary or string data, options = {})
Example:
response = api.post('https://producthunt.com/search', { 'text': 'example search' })
if response['status_code'] == 200:
    print(response['body'])
You can send the data as application/json instead of x-www-form-urlencoded by setting the post_content_type option to json.
import json
response = api.post('https://httpbin.org/post', json.dumps({ 'some_json': 'with some value' }), { 'post_content_type': 'json' })
if response['status_code'] == 200:
    print(response['body'])
If you need to scrape any website built with JavaScript (React, Angular, Vue, etc.), just pass your JavaScript token and use the same calls. Note that only .get is available for JavaScript, not .post.
api = CrawlingAPI({ 'token': 'YOUR_JAVASCRIPT_TOKEN' })
response = api.get('https://www.nfl.com')
if response['status_code'] == 200:
    print(response['body'])
In the same way, you can pass additional JavaScript options.
response = api.get('https://www.freelancer.com', { 'page_wait': 5000 })
if response['status_code'] == 200:
    print(response['body'])
You can always get the original status and the Crawlbase status from the response. Read the Crawlbase documentation to learn more about those statuses.
response = api.get('https://craiglist.com')
print(response['headers']['original_status'])
print(response['headers']['pc_status'])
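As a rough sketch, those headers can drive a simple retry loop. This is only an illustration: get_with_retry is a hypothetical helper, the header values are assumed to come back as strings, and treating a pc_status of 200 as success follows the examples above.
import time

def get_with_retry(api, url, options=None, attempts=3):
    # Retry until Crawlbase reports a successful crawl (pc_status of '200').
    for _ in range(attempts):
        response = api.get(url, options or {})
        if response['headers'].get('pc_status') == '200':
            return response
        time.sleep(1)  # brief pause before trying again
    return response

response = get_with_retry(api, 'https://www.nfl.com')
print(response['headers']['original_status'])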
If you have questions or need help using the library, please open an issue or contact us.
Usage of the Scraper API is very similar; just change the class that you initialize.
scraper_api = ScraperAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = scraper_api.get('https://www.amazon.com/DualSense-Wireless-Controller-PlayStation-5/dp/B08FC6C75Y/')
if response['status_code'] == 200:
    print(response['json']['name'])  # Will print the name of the Amazon product
To find email leads, you can use the Leads API; check the full API documentation if needed.
leads_api = LeadsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = leads_api.get_from_domain('microsoft.com')
if response['status_code'] == 200:
    print(response['json']['leads'])
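As a rough sketch, you can also iterate over the returned leads instead of printing the whole list; the exact shape of each lead entry is not shown here, so this simply prints each item as returned (check the Leads API documentation for the actual fields).
response = leads_api.get_from_domain('microsoft.com')
if response['status_code'] == 200:
    for lead in response['json']['leads']:
        print(lead)  # each lead entry, with whatever fields the API returns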
Initialize with your Screenshots API token and call the get method.
screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com')
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])
Or specify a file path:
screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com', { 'save_to_path': 'apple.jpg' })
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])
Or, if you set store=true, then screenshot_url is set in the returned headers:
screenshots_api = ScreenshotsAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
response = screenshots_api.get('https://www.apple.com', { 'store': 'true' })
if response['status_code'] == 200:
    print(response['headers']['success'])
    print(response['headers']['url'])
    print(response['headers']['remaining_requests'])
    print(response['file'])
    print(response['headers']['screenshot_url'])
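If you do use store=true, here is a short sketch of downloading the stored screenshot afterwards with the standard library. It assumes screenshot_url is directly fetchable; the local filename is only an example.
import urllib.request

response = screenshots_api.get('https://www.apple.com', { 'store': 'true' })
if response['status_code'] == 200:
    # Fetch the stored screenshot from the URL Crawlbase returned and save it locally.
    urllib.request.urlretrieve(response['headers']['screenshot_url'], 'apple_stored.jpg')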
Note that the screenshots_api.get(url, options) method accepts an options dictionary.
Initialize the Storage API using your private token.
storage_api = StorageAPI({ 'token': 'YOUR_NORMAL_TOKEN' })
Pass the URL that you want to get from Crawlbase Storage.
response = storage_api.get('https://www.apple.com')
if response['status_code'] == 200:
    print(response['headers']['original_status'])
    print(response['headers']['pc_status'])
    print(response['headers']['url'])
    print(response['headers']['rid'])
    print(response['headers']['stored_at'])
    print(response['body'])
Or you can use the RID:
response = storage_api.get('RID_REPLACE')
if response['status_code'] == 200:
    print(response['headers']['original_status'])
    print(response['headers']['pc_status'])
    print(response['headers']['url'])
    print(response['headers']['rid'])
    print(response['headers']['stored_at'])
    print(response['body'])
Note: Either the RID or the URL must be sent; each is optional on its own, but you must send one of the two.
To delete a storage item from your storage area, use the correct RID:
if storage_api.delete('RID_REPLACE'):
    print('delete success')
else:
    print('Unable to delete')
To do a bulk request with a list of RIDs, send the RIDs as an array:
response = storage_api.bulk(['RID1', 'RID2', 'RID3', ...])
if response['status_code'] == 200:
    for item in response['json']:
        print(item['original_status'])
        print(item['pc_status'])
        print(item['url'])
        print(item['rid'])
        print(item['stored_at'])
        print(item['body'])
To request a bulk list of RIDs from your storage area:
rids = storage_api.rids()
print(rids)
You can also specify a limit as a parameter:
storage_api.rids(100)
To get the total number of documents in your storage area:
total_count = storage_api.totalCount()
print(total_count)
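As a rough sketch, the storage calls above can be combined to export everything currently stored. This assumes rids() returns a plain list that bulk() accepts and that the stored bodies are text; the 100-item limit and the filenames are only illustrative.
rids = storage_api.rids(100)  # fetch up to 100 stored RIDs
response = storage_api.bulk(rids)
if response['status_code'] == 200:
    for item in response['json']:
        # Write each stored document body to a file named after its RID.
        with open(item['rid'] + '.html', 'w') as f:
            f.write(item['body'])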
If you need to use a custom timeout, you can pass it when creating the class instance, like the following:
api = CrawlingAPI({ 'token': 'TOKEN', 'timeout': 120 })
Timeout is in seconds.
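As a rough sketch, once a timeout is set you can guard calls against slow pages; the concrete exception type raised on a timeout is not documented here, so this catches broadly.
api = CrawlingAPI({ 'token': 'TOKEN', 'timeout': 30 })
try:
    response = api.get('https://www.nfl.com')
except Exception as exc:  # timeout or network error; the exact exception type is an assumption
    print('request failed:', exc)
else:
    if response['status_code'] == 200:
        print(response['body'])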
Copyright 2023 Crawlbase