h1. Cobweb v1.1.0
"@cobweb_gem":https://twitter.com/cobweb_gem
!https://badge.fury.io/rb/cobweb.png!:http://badge.fury.io/rb/cobweb
!https://circleci.com/gh/stewartmckee/cobweb.svg?style=shield&circle-token=07357f0bd17ac67e21ea161fb9abdb35ecac4c2e!
!https://gemnasium.com/stewartmckee/cobweb.png!
!https://coveralls.io/repos/stewartmckee/cobweb/badge.png?branch=master(Coverage Status)!:https://coveralls.io/r/stewartmckee/cobweb
h2. Intro
Cobweb has three modes of running. Firstly, it is an http client that allows get and head requests, returning a hash of data relating to the requested resource. Secondly, it can combine this with the power of Resque to cluster crawls, allowing you to crawl quickly. Lastly, you can run the crawler with a block that works with each of the pages found in the crawl.
I've created a sample app to help with setting up cobweb at http://github.com/stewartmckee/cobweb_sample
h3. Resque
When running on resque, pass in a class and queue name and it will enqueue all resources to that queue for processing, passing in the hash it has generated. You then implement the perform method to process the resource for your own application.
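For example, a minimal sketch (@MyContentJob@ is an illustrative name for a class you define with a @perform@ method, see the Processing Queue section below):

bc. crawler = Cobweb.new(:processing_queue => "MyContentJob")
crawler.start("http://example.com/")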
h3. Standalone
CobwebCrawler takes the same options as cobweb itself, so you can use any of the options available for that. An example is listed below.
While the crawler is running, you can view statistics on http://localhost:4567
h3. Command Line
Cobweb can also be run from the command line to perform various pre-canned tasks.
- report - outputs a csv with data from the crawl
- export - creates a local replication of the data on the server based on the url structure. Text data is stored in yaml format.
Run "cobweb --help" for more info
h3. Data Returned For Each Page
The data available in the returned hash are:
- @:url@ - url of the resource requested
- @:status_code@ - status code of the resource requested
- @:response_time@ - response time of the resource requested
- @:mime_type@ - content type of the resource
- @:character_set@ - character set of content determined from content type
- @:length@ - length of the content returned
- @:body@ - content of the resource
- @:location@ - location header if returned
- @:redirect_through@ - if you're following redirects, the urls you were redirected through on the way to the final location are stored here
- @:headers@ - hash of the headers returned
- @:links@ - hash of links on the page, split into types
** @:links@ - urls from a tags within the resource
** @:images@ - urls from img tags within the resource
** @:related@ - urls from link tags
** @:scripts@ - urls from script tags
** @:styles@ - urls from link tags with a rel of stylesheet and from url() directives within stylesheets
The sources for the links can be overridden; contact me for the syntax (I don't have time to put it into this documentation, but will as soon as I have time!)
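For example, a short sketch of reading a few of these keys from a direct @get@ (see the Cobweb usage section below; the url is illustrative):

bc. content = Cobweb.new(:cache => nil).get("http://example.com/")
puts "#{content[:url]} returned #{content[:status_code]} (#{content[:mime_type]})"
content[:links][:images].each { |image_url| puts image_url }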
h3. Statistics
Statistics are available during the crawl; create a Stats object, passing in a hash with @redis_options@ and @crawl_id@. Stats has a @get_statistics@ method that returns a hash of the statistics available to you. The same hash is also returned by default from the @CobwebCrawler.crawl@ standalone crawling method.
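For example, a hedged sketch, assuming you already know the @crawl_id@ and redis details of a running crawl (the keys available in the returned hash are listed below):

bc. stats = Stats.new(:redis_options => {:host => "localhost"}, :crawl_id => crawl_id)
puts stats.get_statistics[:page_count]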
The data available within statistics is as follows:
- @:average_length@ - average size of each object
- @:minimum_length@ - minimum length returned
- @:queued_at@ - date and time that the crawl was started (eg: "2012-09-10T23:10:08+01:00")
- @:maximum_length@ - maximum length of object received
- @:status_counts@ - hash with the status returned as the key and value as number of pages (eg: {"404" => 1, "200" => 1})
- @:mime_counts@ - hash containing the mime type as key and count of pages as value (eg: {"text/html" => 8, "image/jpeg" => 25})
- @:queue_counter@ - size of queue waiting to be processed for crawl
- @:page_count@ - number of html pages retrieved
- @:total_length@ - total size of data received
- @:current_status@ - current status of the crawl
- @:asset_count@ - count of non-html objects received
- @:page_size@ - total size of pages received
- @:average_response_time@ - average response time of all objects
- @:crawl_counter@ - number of objects that have been crawled
- @:minimum_response_time@ - quickest response time of crawl
- @:maximum_response_time@ - longest response time of crawl
- @:asset_size@ - total size of all non-html objects received
h2. Installation
Install crawler as a gem
bc. gem install cobweb
or in a @Gemfile@
bc. gem 'cobweb'
h2. Usage
h3. Cobweb
h4. new(options)
Creates a new crawler object with the supplied options.
- @options@ - The following hash keys can be defined:
** @:follow_redirects@ - transparently follows redirects and populates the :redirect_through key in the content hash (Default: true)
** @:redirect_limit@ - sets the limit for the number of consecutive redirects that will be followed (Default: 10)
** @:queue_system@ - sets the queue system :resque or :sidekiq (Default: :resque)
** @:processing_queue@ - specifies the processing queue for content to be sent to (Default: 'CobwebProcessJob' when using resque, 'CrawlProcessWorker' when using sidekiq)
** @:crawl_finished_queue@ - specifies the processing queue for statistics to be sent to after finishing crawling (Default: 'CobwebFinishedJob' when using resque, 'CrawlFinishedWorker' when using sidekiq)
** @:debug@ - enables debug output (Default: false)
** @:quiet@ - hides default output (Default: false)
** @:cache@ - sets the ttl for caching pages, set to nil to disable caching (Default: 300)
** @:timeout@ - http timeout for requests (Default: 10)
** @:redis_options@ - hash containing the initialization options for redis (e.g. {:host => "redis.mydomain.com"}) (Default: {})
** @:internal_urls@ - array of url strings, which may contain * wildcards, representing urls internal to your site (ie pages within the same domain) (eg: ['http://test.com/', 'http://blog.test.com/', 'http://externaltest.com/']) (Default: [], although your first url's scheme, host and domain are added)
** @:first_page_redirect_internal@ - if true and the first page crawled is a redirect, it will add the final destination of redirects to the internal_urls (e.g. http://www.test.com gets redirected to http://test.com) (Default: true)
** @:crawl_id@ - the id used internally for identifying the crawl. Can be used by the processing job to separate crawls
** @:external_urls@ - an array of urls with * wildcards that represent urls external to the site (overrides internal_urls)
** @:seed_urls@ - an array of urls that are put into the queue regardless of any other setting, combine with {:external_urls => ""} to limit to seed urls
** @:obey_robots@ - boolean determining if robots.txt should be honoured. (default: false)
** @:user_agent@ - user agent string to match in robots.txt (not sent as user_agent of requests yet) (default: cobweb)
** @:crawl_limit_by_page@ - sets the crawl counter to only use html page types when counting objects crawled
** @:valid_mime_types@ - an array of mime types that takes wildcards (eg @'text/*'@) (Default: @['*/*']@)
** @:direct_call_process_job@ - boolean that specifies whether objects should be passed directly to a processing method or should be put onto a queue
** @:raise_exceptions@ - defaults to handling exceptions with debug output; setting this to true will raise exceptions in your app
** @:use_encoding_safe_process_job@ - Base64-encode the body when storing job in queue; set to true when you are expecting non-ASCII content (Default: false)
** @:proxy_addr@ - hostname of a proxy to use for crawling (eg: 'myproxy.example.net') (default: nil)
** @:proxy_port@ - port number of the proxy (default: nil)
** @:treat_https_as_http@ - determines whether https and http urls are treated as the same (defaults to true, ie treated as the same)
bc. crawler = Cobweb.new(:follow_redirects => false)
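A slightly fuller sketch combining several of the options above (all values are illustrative):

bc. crawler = Cobweb.new(
  :follow_redirects => true,
  :redirect_limit => 5,
  :timeout => 15,
  :obey_robots => true,
  :valid_mime_types => ["text/*"],
  :redis_options => {:host => "redis.mydomain.com"})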
h4. start(base_url)
Starts a crawl through resque. Requires the @:processing_queue@ to be set to a valid class for the resque job to work with the data retrieved.
- @base_url@ - the url to start the crawl from
Once the crawler starts, if the first page is redirected (eg from http://www.test.com to http://test.com) then the endpoint scheme, host and domain are added to the internal_urls automatically.
bc. crawler.start("http://www.google.com/")
h4. get(url)
Simple get that obeys the options supplied in new.
bc. crawler.get("http://www.google.com/")
h4. head(url)
Simple head request that obeys the options supplied in new.
bc. crawler.head("http://www.google.com/")
h4. Processing Queue
The @:processing_queue@ option specifies the class whose resque perform method the crawled content is passed to. Define this class in your application to carry out whatever processing you need on the content. There are two ways of running it. By default, the crawled content is pushed onto a resque queue for that class, which gives you the flexibility of running the processing queues on separate machines etc. The main drawback is that all of the content sits in redis while it is queued, which can be memory intensive if you are crawling large sites or large pieces of content. To get around this you can have the crawl job call the perform method on the processing class directly, so the content never takes up memory in redis. This is controlled by @:direct_call_process_job@; set it to true and, instead of the job being queued, it will be executed within the crawl_job queue.
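As a hedged sketch (the class and queue names here are illustrative, not part of cobweb), a resque processing class might look like the following; point @:processing_queue@ at whatever class you define:

bc. class ContentProcessJob
  @queue = :content_process_job
  def self.perform(content)
    # content is the hash described under "Data Returned For Each Page";
    # keys may arrive as strings once the hash has passed through the queue.
    puts "Processed #{content['url']} with status #{content['status_code']}"
  end
end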
h3. CobwebCrawler
CobwebCrawler is the standalone crawling class. If you don't want to use resque or sidekiq and just want to crawl the site within your ruby process, you can use this class.
bc. crawler = CobwebCrawler.new(:cache => 600)
statistics = crawler.crawl("http://www.pepsico.com")
You can also run within a block and get access to each page as it is being crawled.
bc. statistics = CobwebCrawler.new(:cache => 600).crawl("http://www.pepsico.com") do |page|
puts "Just crawled #{page[:url]} and got a status of #{page[:status_code]}."
end
puts "Finished Crawl with #{statistics[:page_count]} pages and #{statistics[:asset_count]} assets."
There are some options specific to CobwebCrawler in addition to the normal cobweb options:
- @thread_count@ - specifies the number of threads used by the crawler, defaults to 1
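For example, a hedged sketch crawling with four threads (the values are illustrative):

bc. crawler = CobwebCrawler.new(:thread_count => 4, :cache => 600)
statistics = crawler.crawl("http://www.pepsico.com")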
h3. CobwebCrawlHelper
The CobwebCrawlHelper class is a helper class to assist in getting information about a crawl and performing actions against it.
bc. crawl = CobwebCrawlHelper.new(options)
- @options@ - the hash of options passed into Cobweb.new (must include a @:crawl_id@)
h2. Contributing/Testing
Feel free to contribute small or large bits of code; just please make sure that there are rspec tests for the features you're submitting. Continuous integration testing is performed by the excellent Travis, so you can see the state of the project at http://travis-ci.org/#!/stewartmckee/cobweb
h2. Todo
- Tidy up classes with link parsing
- Refactoring of code to simplify design
- Remove requirement of redis from standalone crawler
- Add redis settings to standalone crawler (ie to connect to remote redis)
- Add ability to start and stop crawls from web interface
- Allow crawler to start as web interface only (ie not run crawls at start)
- Fix content encoding issue requiring separate process job
- DRY up the cobweb get/head calls, there's a lot of duplication
- Investigate using event machine for single threaded crawling
h3. Big changes
- Refactor into a module and refactor class names to remove cobweb and increase simplicity
h2. License
h3. The MIT License
Copyright (c) 2013 Active Information Design
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.