= Custom Benchmarks
== About
Custom Benchmarks allow you to easily add your own information to a
benchmark line logged at the end of each request. e.g.,
  Finished JsonController#index in 0.40306 (2 reqs/sec) | Rendering: 0.05003 (12%) | DB: 0.04571 (11%) | Search: 0.16429,1 (40%) | Time: 1163542055 | PID: 22426 | 200 OK [http://www.zvents.com/welcome/index]
At present, this line appears in addition to the standard log line (described
below) that starts with "Completed in". I was unable to replace the
existing log line without incurring other side effects. I believe the
problem stems from attempting to monkey patch a method (perform_action)
that has been aliased several times by the Rails core. Adding an extra
line to the log file is not optimal, but it seems to be the best choice
for now.
Typically, the standard log line includes the latency associated with
executing specific parts of a request. In the example above, we have added a
measurement of search latency. But you can use Custom Benchmarks to add
any information to the log line. The example above also shows the ID of
the process (PID) that served this request. The PID is useful when parsing
information from logs that contain data from multiple processes.
== Installation
Install the plugin or the gem:

  $ script/plugin install svn://rubyforge.org/var/svn/zventstools/projects/custom_benchmarks

- OR -

  $ gem install custom_benchmarks
== Simple Example: Logging the Process ID
To add the PID as a custom benchmark field, simply add a custom_benchmark
line like the following to your ApplicationController:
  class ApplicationController < ActionController::Base
    custom_benchmark {|runtime| " | PID: #{$$}" }
    ...
  end
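Outside of Rails, the block passed to custom_benchmark is just an ordinary Proc that receives the request time and returns a string fragment. A minimal sketch (plain Ruby, no Rails required) showing the fragment this particular block produces:

  # The custom_benchmark block receives the total request time ("runtime")
  # and returns a string that is appended to the benchmark log line.
  pid_benchmark = lambda { |runtime| " | PID: #{$$}" }

  # $$ is Ruby's built-in variable holding the current process ID,
  # so the fragment embeds the PID of the process serving the request.
  puts pid_benchmark.call(0.40306)   # e.g. " | PID: 22426"
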
Declare your custom_benchmark with a block that expects an input parameter
called runtime. runtime, which isn't used in this example, contains the
overall latency of the entire request. We'll show an example of using
runtime to calculate percentage latency below. custom_benchmark
expects your block to return a string, which will be inserted in the
log line immediately before the status (e.g., 200 OK [http://www.zvents.com/]).
== Complex Example: Logging Arbitrary Latency
Let's say that your application includes a search function that is powered
by Lucene. Like SQL calls issued to a database, calls to Lucene can take
a while so you want to log your search latency.
The first step is to set up a mechanism that allows you to record your
search latency for each request. You can do that with something like this:
  class MySearch
    @@latency = 0.0
    cattr_accessor :latency

    def run_search
      @@latency = Benchmark::measure {
        # execute the call to Lucene here
      }.real
    end

    def self.get_timing_summary(runtime)
      summary = " | Search: #{sprintf("%.5f", @@latency)} (#{sprintf("%d", (@@latency * 100) / runtime)}%)"
      @@latency = 0.0
      summary
    end
  end
The run_search method uses Benchmark::measure to record the latency of the
search. The get_timing_summary class method, which will be invoked by
a custom_benchmark, returns a formatted string summarizing the search
latency in absolute and percentage terms. It also resets the value
of @@latency to avoid affecting subsequent queries.
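The formatting logic can be checked in isolation. This standalone sketch reproduces the string that get_timing_summary builds (without the class variable or cattr_accessor), using the figures from the sample log line above: 0.16429 seconds of search time within a 0.40306-second request.

  # Standalone version of the summary formatting: absolute latency to five
  # decimal places, plus its share of the total request time as an integer
  # percentage (sprintf("%d", ...) truncates toward zero).
  def timing_summary(latency, runtime)
    " | Search: #{sprintf("%.5f", latency)} (#{sprintf("%d", (latency * 100) / runtime)}%)"
  end

  puts timing_summary(0.16429, 0.40306)
  # => " | Search: 0.16429 (40%)"
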
Finally, we just need to add a custom_benchmark statement to the
ApplicationController:
  custom_benchmark {|runtime| MySearch.get_timing_summary(runtime) }
== Default Integrations: Adapters
custom_benchmarks now ships with default integrations with external
libraries. The first, and only, integration (as of the 0.0.3 release) is
with MemCacheClient. Follow the instructions in
vendor/plugins/custom_benchmarks/init.rb to enable adapters.