scraper-api-datachaser
The scraping SaaS platform provides a RESTful API for developers to perform web scraping tasks. Users can submit scraping tasks, monitor task status, retrieve scraped data, and manage their account through the API.
ScraperApi - JavaScript client for scraper_api. This SDK is automatically generated by the OpenAPI Generator project.
To publish the library as an npm package, please follow the procedure in "Publishing npm packages".
Then install it via:
npm install scraper_api --save
Finally, you need to build the module:
npm run build
To use the library locally without publishing to a remote npm registry, first install the dependencies by changing into the directory containing package.json (and this README). Let's call this JAVASCRIPT_CLIENT_DIR. Then run:
npm install
Next, link it globally in npm with the following, also from JAVASCRIPT_CLIENT_DIR:
npm link
To use the link you just defined in your project, switch to the directory you want to use your scraper_api from, and run:
npm link /path/to/<JAVASCRIPT_CLIENT_DIR>
Finally, you need to build the module:
npm run build
If the library is hosted at a git repository, e.g. https://github.com/GIT_USER_ID/GIT_REPO_ID, then install it via:
npm install GIT_USER_ID/GIT_REPO_ID --save
The library also works in the browser environment via npm and browserify. After following the above steps with Node.js and installing browserify with npm install -g browserify, perform the following (assuming main.js is your entry file):
browserify main.js > bundle.js
Then include bundle.js in the HTML pages.
Using Webpack you may encounter the following error: "Module not found: Error: Cannot resolve module". In that case you most likely need to disable the AMD loader. Add/merge the following section to your webpack config:
module: {
  rules: [
    {
      parser: {
        amd: false
      }
    }
  ]
}
Please follow the installation instructions and execute the following JS code:
var ScraperApi = require('scraper_api');

var defaultClient = ScraperApi.ApiClient.instance;
// Configure API key authorization: key
var key = defaultClient.authentications['key'];
key.apiKey = "YOUR API KEY";
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//key.apiKeyPrefix['Key'] = "Token";

var api = new ScraperApi.APIKeysApi();
var body = new ScraperApi.DeleteApiKeyRequest(); // {DeleteApiKeyRequest}
var callback = function(error, data, response) {
  if (error) {
    console.error(error);
  } else {
    console.log('API called successfully. Returned data: ' + data);
  }
};
api.keysDelete(body, callback);
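The generated methods all use Node-style callbacks with the signature (error, data, response). If you prefer promises, a small wrapper can adapt any of them. This is a sketch: the api object below is a stub standing in for a real instance such as new ScraperApi.APIKeysApi().

```javascript
// Wrap a Node-style callback method (error, data, response) in a Promise.
function promisify(api, method) {
  return (...args) => new Promise((resolve, reject) => {
    api[method](...args, (error, data, response) => {
      if (error) reject(error);
      else resolve(data);
    });
  });
}

// Stubbed API object for illustration only; a real client would be
// e.g. new ScraperApi.APIKeysApi() after configuring authentication.
const api = {
  keysGet(callback) { callback(null, [{ id: 1 }], { status: 200 }); }
};

promisify(api, 'keysGet')().then(keys => console.log(keys.length)); // prints 1
```

The same wrapper works for any generated method, since the callback is always the last argument.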
All URIs are relative to https://scraper.datachaser.local/api/v1
| Class | Method | HTTP request | Description |
| --- | --- | --- | --- |
| ScraperApi.APIKeysApi | keysDelete | DELETE /keys | Delete API key by id |
| ScraperApi.APIKeysApi | keysGet | GET /keys | Retrieve API keys list |
| ScraperApi.APIKeysApi | keysPost | POST /keys | Create new API key |
| ScraperApi.AuthenticationApi | accountPut | PUT /account | Update user account |
| ScraperApi.DataApi | dataGet | GET /data | Get all data |
| ScraperApi.DataApi | dataIdDelete | DELETE /data/{id} | Delete data by id |
| ScraperApi.DataApi | dataJobIdGet | GET /data/job/{id} | Get data by job id |
| ScraperApi.DataApi | dataUserIdPost | POST /data/user/{id} | Webhook that delivers data, once scraping is done, by user id |
| ScraperApi.JobsApi | jobAsyncPost | POST /job/async | Receives a job and processes it asynchronously |
| ScraperApi.JobsApi | jobGet | GET /job | Get jobs |
| ScraperApi.JobsApi | jobIdDelete | DELETE /job/{id} | Delete existing job |
| ScraperApi.JobsApi | jobIdGet | GET /job/{id} | Retrieve specific job status |
| ScraperApi.JobsApi | jobIdPut | PUT /job/{id} | Update existing job |
| ScraperApi.JobsApi | jobPost | POST /job | Create new job |
| ScraperApi.JobsApi | jobSyncPost | POST /job/sync | Receives a job and processes it synchronously |
| ScraperApi.LogsApi | logJobIdGet | GET /log/job/{id} | Get log by job id |
| ScraperApi.LogsApi | logsGet | GET /logs | Get all logs |
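A common pattern with this API is to submit a job asynchronously (POST /job/async) and then poll GET /job/{id} until it finishes. The sketch below shows only the polling loop; the status values 'done' and 'failed' and the injected request helper are assumptions for illustration, not documented by this API.

```javascript
// Hypothetical polling helper: repeatedly fetch a job's status until it
// reaches a terminal state. The `request(method, path)` function is
// injected so the loop can be exercised without a live server.
async function waitForJob(request, jobId, { intervalMs = 1000, maxTries = 30 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const job = await request('GET', `/job/${jobId}`);
    // Assumed terminal statuses; adjust to whatever the API actually returns.
    if (job.status === 'done' || job.status === 'failed') return job;
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Job ${jobId} did not finish after ${maxTries} polls`);
}
```

Once the job is done, the scraped output can be fetched via GET /data/job/{id}.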
Authentication schemes defined for the API: key (API key, configured on ApiClient as shown in the example above).