robotstxt
a robots.txt parser for node.js
Install:
npm install robotstxt
All examples use CoffeeScript syntax.
require:
robotsTxt = require 'robotstxt'
parse a robots.txt:
#robotsTxt(url, user_agent)
google_robots_txt = robotsTxt 'http://www.google.com/robots.txt', 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.iamnnotreallyagooglebot.com/)'
assign an event handler that fires once all the parsing is done:
google_robots_txt.on 'ready', (gate_keeper) ->
  #returns false
  #note: only the path and the query of the URL get parsed; everything else (i.e. the protocol and the domain name) gets ditched
  console.log gate_keeper.isAllowed 'http://www.google.com/setnewsprefs?sfsdfg'
  #returns false
  console.log gate_keeper.isAllowed '/setnewsprefs?sfsdfg'
  #returns true
  #note: only the path and the query of the URL get parsed; everything else (i.e. the protocol and the domain name) gets ditched
  console.log gate_keeper.isDisallowed 'http://www.google.com/setnewsprefs?sfsdfg'
  #returns true
  console.log gate_keeper.isDisallowed '/setnewsprefs?sfsdfg'
gate_keeper methods:
#asks the gate_keeper whether it is OK to crawl a URL
isAllowed url
#asks the gate_keeper whether it is not OK to crawl a URL
isDisallowed url
#answers the question why a URL is allowed/disallowed
why url
#changes the user agent that is used for these questions
setUserAgent user_agent
#tells you which robots.txt group is used for a given user_agent
#by default it uses the user agent set with setUserAgent
getGroup (user_agent)
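For example, a sketch of these methods inside the 'ready' handler shown above (the output depends on the robots.txt that was actually fetched, and the user agent string is just a placeholder):
google_robots_txt.on 'ready', (gate_keeper) ->
  #explain why this path is allowed or disallowed
  console.log gate_keeper.why '/setnewsprefs?sfsdfg'
  #switch to another user agent for the following questions
  gate_keeper.setUserAgent 'Mozilla/5.0 (compatible; MyAppBot/2.1; +http://www.example.com/)'
  #show which robots.txt group now answers those questions
  console.log gate_keeper.getGroup()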
robotsTxt methods:
#fetches and parses the robots.txt at url with the given user_agent
#returns a robots_txt event emitter
robotsTxt(url, user_agent)
#blank robots_txt object
blank_robots_txt = robotsTxt()
#crawls and parses a robots.txt
#emits a 'crawled' event
blank_robots_txt.crawl(protocol, host, port, path, user_agent, encoding)
#parses a txt string line by line
#emits a 'ready' event
blank_robots_txt.parse(txt)
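For example, a sketch of using a blank robots_txt object to parse a string directly (the robots.txt content here is made up for illustration; register the 'ready' listener before calling parse):
blank_robots_txt = robotsTxt()
blank_robots_txt.on 'ready', (gate_keeper) ->
  #should return false given the made-up rules below
  console.log gate_keeper.isAllowed '/private/secret.html'
blank_robots_txt.parse 'User-agent: *\nDisallow: /private/'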
robotsTxt events:
#emitted after the whole robots.txt is crawled
robotsTxt.on 'crawled', (txt) -> ...
#emitted after all lines of the robots.txt are parsed
robotsTxt.on 'ready', (gate_keeper) -> ...
#emitted if something did not quite work
#note: it's a good idea to always implement this error listener
#to prevent strange-looking error messages in case there are internet connection issues
robotsTxt.on 'error', (error) -> ...
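Putting the three listeners together, a sketch (the URL, user agent, and path are placeholders):
my_robots_txt = robotsTxt 'http://www.example.com/robots.txt', 'Mozilla/5.0 (compatible; MyAppBot/2.1; +http://www.example.com/)'
my_robots_txt.on 'crawled', (txt) ->
  console.log 'robots.txt fetched'
my_robots_txt.on 'ready', (gate_keeper) ->
  console.log gate_keeper.isAllowed '/some/path'
my_robots_txt.on 'error', (error) ->
  #keeps connection problems from showing up as strange error messages
  console.log error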
NOTES
the default user agent (used by robotsTxt(url, user_agent) if you don't supply one) is:
Mozilla/5.0 (compatible; Open-Source-Coffee-Script-Robots-Txt-Checker/2.1; +http://example.com/bot.html)
I strongly recommend using your own user agent, e.g.:
myapp_robots_txt = robotsTxt 'http://www.google.com/robots.txt', 'Mozilla/5.0 (compatible; MyAppBot/2.1; +http://www.example.com/)'
If you want to simulate another crawler (for testing purposes only, of course), see this list for the correct user agent strings.
A web interface can be found at https://github.com/franzenzenhofer/webrobotstxt