http-crawler
============

http-crawler is a library for crawling websites. It uses requests_ to speak HTTP.
Installation
~~~~~~~~~~~~
Install with pip_:
.. code-block:: console

    $ pip install http-crawler
Usage
~~~~~
The ``http_crawler`` module provides one generator function, ``crawl``.
``crawl`` is called with a URL, and yields instances of requests_'s |Response|_ class.
``crawl`` will request the page at the given URL, and will extract all URLs from the response. It will then make a request for each of those URLs, and will repeat the process until it has requested every URL linked to from pages on the original URL's domain. It will not extract or process URLs from any page with a different domain to the original URL.
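Because ``crawl`` is a generator function, responses are yielded as the crawl proceeds, so you can stop iterating at any point. A minimal sketch (``itertools.islice`` here simply caps the crawl at the first ten responses; the URL is illustrative):

.. code-block:: pycon

    >>> from itertools import islice
    >>> from http_crawler import crawl
    >>> # Inspect only the first ten responses, then stop crawling
    >>> for rsp in islice(crawl('http://www.example.com'), 10):
    ...     print(rsp.url, rsp.status_code)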
For instance, this is how you would use ``crawl`` to find and log any broken links on a site:
.. code-block:: pycon

    >>> from http_crawler import crawl
    >>> for rsp in crawl('http://www.example.com'):
    ...     if rsp.status_code != 200:
    ...         print('Got {} at {}'.format(rsp.status_code, rsp.url))
``crawl`` has a number of options (see the combined example after this list):
- ``follow_external_links`` (default ``True``) If set, ``crawl`` will make a request for every URL it encounters, including ones with a different domain to the original URL. If not set, ``crawl`` will ignore all URLs that have a different domain to the original URL. In either case, ``crawl`` will not extract further URLs from a page with a different domain to the original URL.
- ``ignore_fragments`` (default ``True``) If set, ``crawl`` will ignore the fragment part of any URL. This means that if ``crawl`` encounters ``http://domain/path#anchor``, it will make a request for ``http://domain/path``. Moreover, it means that if ``crawl`` encounters ``http://domain/path#anchor1`` and ``http://domain/path#anchor2``, it will only make one request.
- ``verify`` (default ``True``) This option controls the behaviour of SSL certificate verification. See the `requests documentation`_ for more details.
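For example, to crawl a staging site without leaving its domain, collapsing fragment-only URL variants, and skipping SSL certificate verification (a sketch; the hostname is illustrative and the option values are just one plausible combination):

.. code-block:: pycon

    >>> from http_crawler import crawl
    >>> for rsp in crawl(
    ...     'https://staging.example.com',   # illustrative hostname
    ...     follow_external_links=False,     # stay on the original domain
    ...     ignore_fragments=True,           # treat /path#a and /path#b as one URL
    ...     verify=False,                    # e.g. for a self-signed certificate
    ... ):
    ...     print(rsp.url, rsp.status_code)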
Motivation
~~~~~~~~~~
Why another crawling library? There are certainly lots of Python tools for crawling websites, but all that I could find were either too complex or too simple, or had too many dependencies.
http-crawler is designed to be a `library and not a framework`_, so it should be straightforward to use in applications or other libraries.
Contributing
~~~~~~~~~~~~
There are a handful of enhancements on the `issue tracker`_ that would be suitable for somebody looking to contribute to Open Source for the first time.

For instructions about making Pull Requests, see `GitHub's guide`_.

All contributions should include tests with 100% code coverage, and should comply with `PEP 8`_. The project uses tox_ for running tests and checking code quality metrics.
To run the tests:
.. code-block:: console

    $ tox
.. _requests: http://docs.python-requests.org/en/master/
.. _pip: https://pip.pypa.io/en/stable/
.. |Response| replace:: Response
.. _Response: http://docs.python-requests.org/en/master/api/#requests.Response
.. _library and not a framework: http://tomasp.net/blog/2015/library-frameworks/
.. _issue tracker: https://github.com/inglesp/http-crawler/issues
.. _GitHub's guide: https://help.github.com/articles/using-pull-requests/
.. _PEP 8: https://www.python.org/dev/peps/pep-0008/
.. _tox: https://tox.readthedocs.io/en/latest/
.. _requests documentation: http://docs.python-requests.org/en/master/user/advanced/#ssl-cert-verification