SeoCache is a dedicated cache for SEO with JavaScript rendering :fire:
A more detailed explanation of the purpose and usage of this gem can be found in this article.
Google's credo is: don't waste my bot's time!
To reduce Googlebot crawling time, let's serve the complete HTML files from a dedicated cache.
This cache is suitable for static pages (generated or not), but not for private user pages.
Add this line to your application's Gemfile:
gem 'seo_cache'
And then execute:
$ bundle
Or install it yourself as:
$ gem install seo_cache
Install Chromium or Chrome on your device (the chromedriver will be downloaded automatically).
Declare the middleware, for instance in config/initializers/seo_cache.rb:
require 'seo_cache'
# See options below
Rails.application.config.middleware.use SeoCache::Middleware
Requires at least Ruby 2.6.
Chrome path (required for both disk and memory modes):
SeoCache.chrome_path = Rails.env.development? ? '/usr/bin/chromium-browser' : '/usr/bin/chromium'
Choose a cache mode (memory (default) or disk):
SeoCache.cache_mode = 'memory'
Cache path (required):
SeoCache.cache_path = Rails.root.join('public', 'seo_cache')
Redis URL (required if memory cache):
SeoCache.redis_url = "redis://localhost:6379/"
Redis prefix (if needed):
SeoCache.redis_namespace = '_my_project:seo_cache'
Redis database index (if needed):
SeoCache.redis_db_index = 1
Time to wait for the page to be fully loaded (optional, in seconds; useful for async elements):
SeoCache.wait_time_for_page_loading = 3
Specific log file (if you want to log missed cache URLs):
SeoCache.logger_path = Rails.root.join('log', 'seo_cache.log')
Activate logging of missed cache URLs:
SeoCache.log_missed_cache = true
Domains to whitelist (authorize only domains which contain these hosts):
SeoCache.whitelist_hosts = []
URLs to blacklist:
SeoCache.blacklist_urls = %w[^/assets/.* ^/admin.*]
Params to blacklist:
SeoCache.blacklist_params = %w[page]
URLs to whitelist:
SeoCache.whitelist_urls = []
Cache pages even if a user is logged in (be careful when caching sensitive data):
SeoCache.cache_with_user_connected = true
Parameter to add manually to the URL to force page caching, if you want to cache a specific URL (e.g. https://<my_website>/?_seo_cache_=true):
SeoCache.force_cache_url_param = '_seo_cache_'
Cache only pages with these HTTP status codes (by default, not-found and error pages are not cached):
SeoCache.cache_only_status = [<your_list>]
URL extensions to ignore when caching (a default list is already defined):
SeoCache.extensions_to_ignore = [<your_list>]
List of bot user agents (a default list is already defined):
SeoCache.crawler_user_agents = [<your_list>]
Parameter added to the URL when generating the page, to avoid infinite rendering (override only if this name is already used):
SeoCache.prerender_url_param = '_prerender_'
If you encounter the error DevToolsActivePort file doesn't exist, you can add the following option:
SeoCache.chrome_debugging_port = '9222'
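To summarize, a minimal config/initializers/seo_cache.rb for a disk cache setup could look like the following sketch (paths and values are illustrative, adapt them to your environment):
require 'seo_cache'

SeoCache.chrome_path = Rails.env.development? ? '/usr/bin/chromium-browser' : '/usr/bin/chromium'
SeoCache.cache_mode  = 'disk'
SeoCache.cache_path  = Rails.root.join('public', 'seo_cache')

SeoCache.blacklist_urls    = %w[^/assets/.* ^/admin.*]
SeoCache.blacklist_params  = %w[page]
SeoCache.cache_only_status = [200] # illustrative value: cache only successful pages

SeoCache.logger_path      = Rails.root.join('log', 'seo_cache.log')
SeoCache.log_missed_cache = true

Rails.application.config.middleware.use SeoCache::Middleware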
Be aware that JS will be rendered twice: once by this gem and once by the client. For React this is not a problem, but with jQuery plugins it can duplicate elements in the page (you have to check for redundancy).
Disk cache is recommended by default: Nginx will fetch the HTML files directly from disk, and the TTFB (time to first byte) will be under 200ms :). You can use the memory cache if you have a lot of RAM, but if you shut down your server, you will lose all the generated pages! So prefer disk cache storage.
You can check whether SEO mode is active in your controllers with the following method:
def check_seo_mode
  @seo_mode = (params.key?(SeoCache.prerender_url_param) || params.key?(SeoCache.force_cache_url_param))
end
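For instance, you can run this check on every request via a before_action (a minimal sketch; this wiring is an assumption, not something the gem sets up for you):
class ApplicationController < ActionController::Base
  # Sets @seo_mode on every request, using the check_seo_mode method above
  before_action :check_seo_mode
end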
And if you want to access this variable in JS files (when the JS is generated through a Ruby template):
window.seoMode = "#{@seo_mode}"
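For example, in an ERB layout you could expose the flag before your JS bundle loads (a sketch, assuming an app/views/layouts/application.html.erb layout):
<script>
  // Make the SEO mode flag available to client-side code
  window.seoMode = "<%= @seo_mode %>";
</script>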
To see the cached page in a browser, set the user agent to:
Googlebot (Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html))
For instance, with Chrome or Chromium, you can change the user agent in the "Network" / "Network conditions" panel of the developer tools.
To (re)cache a page, add this parameter to the URL (the browser must use the Googlebot user agent):
/?_seo_cache_=true
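You can also trigger (re)caching from a script; here is a minimal Ruby sketch using Net::HTTP and the Googlebot user agent (the domain is a placeholder to fill in):
require 'net/http'

uri = URI('https://<your-domain-name>/?_seo_cache_=true')
request = Net::HTTP::Get.new(uri, 'User-Agent' => 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)')

Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
  # The middleware detects the bot user agent and the force cache param, then caches the rendered page
  http.request(request)
end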
To automate caching, create a cron rake task (e.g. in lib/tasks/populate_seo_cache.rake):
namespace :my_project do
  desc 'Populate cache for SEO'
  task populate_seo_cache: :environment do |_task, _args|
    require 'seo_cache/populate_cache'

    paths_to_cache = public_paths_like_sitemap
    SeoCache::PopulateCache.new('https://<your-domain-name>', paths_to_cache).perform
  end
end
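To run it daily you can use any scheduler; for instance with the whenever gem (an assumption, not a dependency of seo_cache), in config/schedule.rb:
# Run the rake task defined above every night (time is illustrative)
every 1.day, at: '3:00 am' do
  rake 'my_project:populate_seo_cache'
end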
You can add the force_cache: true option to SeoCache::PopulateCache to overwrite already cached data.
If your website has several domains (.com, .fr, ...), you need to generate your URLs for each locale (each key is the locale) and use the with_locale_keys: true option.
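A minimal sketch of both options (it assumes they are passed as options to the PopulateCache constructor; paths and locales are illustrative):
require 'seo_cache/populate_cache'

# Overwrite pages already present in the cache
SeoCache::PopulateCache.new('https://<your-domain-name>', ['/', '/about'], force_cache: true).perform

# Multi-locale site: one list of paths per locale key
paths_by_locale = { en: ['/about'], fr: ['/a-propos'] }
SeoCache::PopulateCache.new('https://<your-domain-name>', paths_by_locale, with_locale_keys: true).perform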
If you want to populate the cache only through a rake task, you can comment out the line which includes the middleware: keep all options configured and remove only the middleware. All pages will then be cached by the task, and SeoCache isn't called for pages missing from the cache. This is useful if you have a script which generates all website pages (based on the sitemap, for instance) and you run it every day.
If you use disk caching, add this to your Nginx configuration:
# Before any block:
map $http_user_agent $limit_bots {
    default 0;
    ~*(google|bing|yandex|msnbot) 1;
}

map $request_method $is_get {
    default 0;
    GET 1;
}

# Before location block:
set $bot_request '';
if ($is_get) {
    set $bot_request 'GET_';
}
if ($limit_bots) {
    set $bot_request "${bot_request}BOT";
}

set $cache_extension '';
if ($bot_request = GET_BOT) {
    set $cache_extension '.html';
}

# Inside location block:
location / {
    # Ignore url with blacklisted params (e.g. page)
    if ($arg_page) {
        break;
    }
    if ($arg__seo_cache_) {
        break;
    }

    # Local rewrite to remove trailing slash
    rewrite ^/(.*)/$ /$1 last;

    try_files /seo_cache/$uri/index$cache_extension /seo_cache/$uri$cache_extension /seo_cache/$uri $uri @rubyproxy;

    # Or for multi-domains:
    # try_files /seo_cache/fr/$uri/index$cache_extension /seo_cache/fr/$uri$cache_extension /seo_cache/fr/$uri $uri @rubyproxy;
}

location @rubyproxy {
    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass_header Server;

    proxy_pass http://0.0.0.0:3020;
}
This configuration allows you to use the SEO cache only for bots (you can add other bots at the beginning of the file) and only for HTML GET requests. For logged-in users, you don't want to serve the cached page (user session, basket, ...). Performance is less critical for users than for bots.
If you use a Heroku server, you can't store files on dynos, but you have two alternatives:
Use the memory mode
Use a second (dedicated) server to store the HTML files and combine it with Nginx.
To intercept the request, use the following middleware in Rails:
In config/initializers, create a new file:
require 'bot_redirector'

if Rails.env.production?
  Rails.application.config.middleware.insert_before ActionDispatch::Static, BotRedirector
end
Then, in the lib directory for instance, manage the request:
# lib/bot_redirector.rb
require 'net/http'

class BotRedirector
  CRAWLER_USER_AGENTS = ['googlebot', 'yahoo', 'bingbot', 'baiduspider', 'facebookexternalhit', 'twitterbot', 'rogerbot', 'linkedinbot', 'embedly', 'bufferbot', 'quora link preview', 'showyoubot', 'outbrain', 'pinterest/0.', 'developers.google.com/+/web/snippet', 'www.google.com/webmasters/tools/richsnippets', 'slackbot', 'vkShare', 'W3C_Validator', 'redditbot', 'Applebot', 'WhatsApp', 'flipboard', 'tumblr', 'bitlybot', 'SkypeUriPreview', 'nuzzel', 'Discordbot', 'Google Page Speed', 'Qwantify'].freeze

  IGNORE_URLS = [
    '/robots.txt'
  ].freeze

  def initialize(app)
    @app = app
  end

  def call(env)
    if env['HTTP_USER_AGENT'].present? && CRAWLER_USER_AGENTS.any? { |crawler_user_agent| env['HTTP_USER_AGENT'].downcase.include?(crawler_user_agent.downcase) }
      begin
        request = Rack::Request.new(env)
        return @app.call(env) if IGNORE_URLS.any? { |ignore_url| request.fullpath.downcase =~ /^#{ignore_url.downcase}/ }

        # Fetch the pre-rendered page from the dedicated SEO server
        url = URI.parse(ENV['SEO_SERVER'] + request.fullpath)
        headers = {
          'User-Agent'      => env['HTTP_USER_AGENT'],
          'Accept-Encoding' => 'gzip'
        }
        req = Net::HTTP::Get.new(url.request_uri, headers)
        # req.basic_auth(ENV['SEO_USER_ID'], ENV['SEO_PASSWD']) # if authentication mechanism
        http = Net::HTTP.new(url.host, url.port)
        http.use_ssl = true if url.scheme == 'https'
        response = http.request(req)

        # Decompress the body before returning it to the bot
        if response['Content-Encoding'] == 'gzip'
          response.body = ActiveSupport::Gzip.decompress(response.body)
          response['Content-Length'] = response.body.length
          response.delete('Content-Encoding')
        end

        return [response.code.to_i, { 'Content-Type' => response.header['Content-Type'] }, [response.body]]
      rescue => error
        Rails.logger.error("[bot_redirection] #{error.message}")
        @app.call(env)
      end
    else
      @app.call(env)
    end
  end
end
If you use a second server, all links in your HTML files must be relative, to avoid multi-domain links.
Inspired by the prerender gem.
Bug reports and pull requests are welcome on GitHub at https://github.com/floXcoder/seo_cache. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.
The gem is available as open source under the terms of the MIT License.
Everyone interacting in the SeoCache project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.