A Python SEO scraper to extract data from major search engine result pages (SERPs). Extract the URL, title, snippet, rich snippet, and result type from search results for given keywords. Detect ads or take automated screenshots. You can also fetch the text content of URLs found in the search results, or of URLs you provide yourself. It is useful for SEO and business-related research tasks.
.. image:: https://img.shields.io/pypi/v/SerpScrap.svg
   :target: https://pypi.python.org/pypi/SerpScrap

.. image:: https://readthedocs.org/projects/serpscrap/badge/?version=latest
   :target: http://serpscrap.readthedocs.io/en/latest/
   :alt: Documentation Status

.. image:: https://travis-ci.org/ecoron/SerpScrap.svg?branch=master
   :target: https://travis-ci.org/ecoron/SerpScrap

.. image:: https://img.shields.io/docker/pulls/ecoron/serpscrap.svg
   :target: https://hub.docker.com/r/ecoron/serpscrap
SerpScrap can also take a screenshot of each result page, scrape the text content of each result URL, and save the results as CSV for later analysis. If required, you can use your own proxy list.
See http://serpscrap.readthedocs.io/en/latest/ for documentation.
Source is available at https://github.com/ecoron/SerpScrap
The easy way to install:

.. code-block:: bash

   pip uninstall SerpScrap -y
   pip install SerpScrap --upgrade
More details in the `install`_ section of the documentation.
SerpScrap in your applications
.. code-block:: python

   #!/usr/bin/python3
   import pprint
   import serpscrap

   keywords = ['example']

   config = serpscrap.Config()
   config.set('scrape_urls', False)

   scrap = serpscrap.SerpScrap()
   scrap.init(config=config.get(), keywords=keywords)
   results = scrap.run()

   for result in results:
       pprint.pprint(result)
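Since the results returned by ``scrap.run()`` are plain dictionaries, they can be post-processed with the standard library alone. A minimal sketch of writing results to CSV (the sample data and the field names ``url``, ``title``, and ``snippet`` are illustrative assumptions, not necessarily the library's exact keys):

```python
import csv
import io

# Hypothetical results, shaped like the dictionaries scrap.run() yields.
results = [
    {'url': 'https://example.com', 'title': 'Example Domain', 'snippet': 'Example text'},
    {'url': 'https://example.org', 'title': 'Example Org', 'snippet': 'More text'},
]

# Write to an in-memory buffer; use open('results.csv', 'w', newline='')
# to write to a file instead.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=['url', 'title', 'snippet'])
writer.writeheader()
writer.writerows(results)

csv_text = buffer.getvalue()
print(csv_text)
```

This mirrors the CSV export described above, but implemented by hand so it works with any dictionary-shaped result set.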
More details in the `examples`_ section of the documentation.
To avoid encode/decode issues, run these commands (Windows) before you start using SerpScrap on the command line.

.. code-block:: bash

   chcp 65001
   set PYTHONIOENCODING=utf-8
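Alternatively (a sketch, not taken from the SerpScrap docs), the output encoding can be forced from inside a Python script, which has the same effect as ``set PYTHONIOENCODING=utf-8`` on Python 3.7+:

```python
import sys

# Force UTF-8 on the standard output streams (Python 3.7+). This avoids
# UnicodeEncodeError when printing scraped titles/snippets on consoles
# whose default code page is not UTF-8.
for stream in (sys.stdout, sys.stderr):
    # Guard: reconfigure() only exists on real TextIOWrapper streams.
    if hasattr(stream, 'reconfigure'):
        stream.reconfigure(encoding='utf-8')
```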
.. image:: https://raw.githubusercontent.com/ecoron/SerpScrap/master/docs/logo.png
   :target: https://github.com/ecoron/SerpScrap
Notes about major changes between releases
I recommend updating to the latest version of SerpScrap, because the search engine has updated the markup of its search result pages (SERPs).
SerpScrap uses `Chrome headless`_ and `lxml`_ to scrape SERP results. For the raw text content of fetched URLs, it uses `beautifulsoup4`_.

SerpScrap also supports `PhantomJs`_ (deprecated), a scriptable headless WebKit, which is installed automatically on the first run (Linux, Windows).

The scrapcore was based on `GoogleScraper`_, an outdated project, and has many changes and improvements.
.. target-notes::
.. _install: http://serpscrap.readthedocs.io/en/latest/install.html
.. _examples: http://serpscrap.readthedocs.io/en/latest/examples.html
.. _Chrome headless: http://chromedriver.chromium.org/
.. _lxml: https://lxml.de/
.. _beautifulsoup4: https://www.crummy.com/software/BeautifulSoup/
.. _PhantomJs: https://github.com/ariya/phantomjs
.. _GoogleScraper: https://github.com/NikolaiT/GoogleScraper