Rack middleware adhering to the Google AJAX Crawling Scheme: it uses a headless browser to render JavaScript-heavy pages and serves a DOM snapshot of the rendered state to a requesting search engine. In brief, the scheme maps a pretty URL containing #!route to a crawler request with ?_escaped_fragment_=route, which this middleware intercepts.
Details of the scheme can be found at: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started
Install with:

gem install google_ajax_crawler
In your config.ru:

require 'google_ajax_crawler'

use GoogleAjaxCrawler::Crawler do |config|
  config.page_loaded_js = "MyApp.isPageLoaded()"
end

# Rack response bodies must respond to #each, hence the array
app = -> env { [200, { 'Content-Type' => 'text/plain' }, ['b']] }
run app
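With this in place, when a search bot requests a URL such as http://yoursite.com/?_escaped_fragment_=mypage, the middleware loads the corresponding #!mypage route in the headless browser, waits for rendering to complete, and returns the resulting static HTML in place of the unrendered page.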
For Rails, create google_ajax_crawler_middleware.rb in config/initializers containing:
if defined?(Rails.configuration) && Rails.configuration.respond_to?(:middleware)
  require 'google_ajax_crawler'

  Rails.configuration.middleware.use GoogleAjaxCrawler::Crawler do |config|
    config.page_loaded_test = -> driver { driver.page.evaluate_script('document.getElementById("loading") == null') }
  end
end
Concurrent requests must be enabled to allow your site to snapshot itself. If concurrent requests are not allowed, the site will simply hang on a crawler request, as the headless browser's request to render the page can never be served.
In config/application.rb:
config.threadsafe!
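Note that config.threadsafe! is a Rails 3 setting; Rails 4 and later behave as though it is enabled by default. On Rails 5+, the closest explicit equivalent (an assumption here, not something the gem documents; verify against your Rails version) is:

config.allow_concurrency = true  # Rails 5+: avoids wrapping requests in Rack::Lock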
In the examples folder, each driver has a rackup file (at the moment only one driver, capybara-webkit, exists) which can be launched with:
rackup examples/capybara_webkit.ru
Examples of how to use the crawler with Backbone.js, AngularJS, and plain old JavaScript are accessible via curl, or by opening a browser at http://localhost:9292/[framework]#!test and viewing the page source... This is how a search engine will see your page before snapshotting. NOTE: don't inspect the markup through a web inspector, as it will most likely display DOM elements rendered on the fly by JS.
Change the URL to http://localhost:9292/[framework]?_escaped_fragment_=test, then again curl or view source to see how the DOM state has been captured.
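For example, with an example app running (the backbone path below is illustrative; substitute whichever framework example you launched):

# the raw, unrendered page -- the #! fragment never reaches the server
curl http://localhost:9292/backbone

# the snapshot as a search bot would receive it, rendered by the middleware
curl "http://localhost:9292/backbone?_escaped_fragment_=test"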
As determining when a page has completed rendering can depend on a number of qualitative factors (e.g. all AJAX requests have received responses, certain content has been displayed, or no loaders/spinners remain visible on the page), you can tell the crawler in one of two ways that your page has finished loading/rendering, at which point a snapshot of the rendered DOM is returned.

First, tell the crawler about a client side JavaScript function (returning true/false) you have created that determines when your page has finished loading/rendering:
use GoogleAjaxCrawler::Crawler do |config|
  config.page_loaded_js = "MyApp.isPageLoaded()"
end
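Since the configured value appears to be a JavaScript expression evaluated in the page, any true/false expression should work equally well; a minimal sketch (the expression is illustrative, not from the gem's docs):

use GoogleAjaxCrawler::Crawler do |config|
  # snapshot once the browser reports the document fully loaded
  config.page_loaded_js = "document.readyState === 'complete'"
end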
Alternatively, configure a server side test determining when your page has finished loading/rendering. The configured crawler driver is passed to the lambda, allowing the current page's DOM state to be queried from the server side:
use GoogleAjaxCrawler::Crawler do |config|
  config.page_loaded_test = -> driver { driver.page.has_css?('.loading') == false }
end
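The same mechanism can wait for content to appear rather than for a loader to disappear; a sketch along the same lines (the selector is illustrative):

use GoogleAjaxCrawler::Crawler do |config|
  # snapshot once the main content container has been rendered
  config.page_loaded_test = -> driver { driver.page.has_css?('#content') }
end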
Other configuration options:

- The max time (in seconds) the crawler should wait before returning a response. After the timeout has been reached, a snapshot of the DOM in its current state is returned. Defaults to 30 seconds.
- The google ajax crawler driver used to query the current page state. Defaults to capybara_webkit.
- How often (in seconds) to test the page state with the configured page_loaded_test. Defaults to 0.5 seconds.
- The response headers to return with the DOM snapshot. The default headers specify the content type text/html.
- The parameter name used by a search bot to identify which client side route to snapshot. Defaults to escaped_fragment.
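Putting the options together, a combined configuration might look like the sketch below. The option names timeout, driver, poll_interval, response_headers and requested_route_key are inferred from the descriptions above rather than confirmed against the gem source, so treat them as assumptions and verify before relying on them:

use GoogleAjaxCrawler::Crawler do |config|
  config.page_loaded_js      = "MyApp.isPageLoaded()"
  config.timeout             = 30    # assumed name: max seconds to wait before snapshotting anyway
  config.driver              = GoogleAjaxCrawler::Drivers::CapybaraWebkit  # assumed constant for the default driver
  config.poll_interval       = 0.5   # assumed name: seconds between page state checks
  config.response_headers    = { 'Content-Type' => 'text/html' }  # assumed name: snapshot response headers
  config.requested_route_key = 'escaped_fragment'  # assumed name: the search bot's query param
end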
Snapshot requests are passed an additional query string param (?search_engine=true), allowing you to optionally execute or skip client side code. This is particularly handy should you have stats tracking code (e.g. Google Analytics) which you don't want executed or counted when search engines are trawling your site: the page can check location.search for search_engine=true before loading the tracker.
All free - use, modify, fork to your heart's content... See LICENSE.txt for further details.