NAME
  bj

SYNOPSIS
  bj (migration_code|generate_migration|migrate|setup|plugin|run|submit|list|set|config|pid) [options]+
DESCRIPTION
Backgroundjob (Bj) is a brain dead simple, zero admin, background priority queue for Rails. Bj is robust, platform independent (including Windows), and supports internal or external management of the background runner process.
Jobs can be submitted to the queue directly using the api or from the command line using ./script/bj:
api:
Bj.submit 'cat /etc/password'
command line:
bj submit cat /etc/password
Bj's priority queue lives in the database and is therefore durable - your jobs will survive an app crash or machine reboot. The job management is comprehensive, capturing stdout, stderr, exit_status, and temporal statistics about each job:
jobs = Bj.submit array_of_commands, :priority => 42
...
jobs.each do |job|
  if job.finished?
    p job.stdout
    p job.stderr
    p job.exit_status
    p job.started_at
    p job.finished_at
  end
end
In addition, the background runner process logs all commands run and their exit_status to a log named using the following convention:
rails_root/log/bj.#{ HOSTNAME }.#{ RAILS_ENV }.log
Bj allows you to submit jobs to multiple databases; for instance, if your application is running in development mode you may do:
Bj.in :production do
  Bj.submit 'my_job.exe'
end
Bj manages the ever growing list of jobs run by automatically archiving them into another table (by default, jobs more than 24 hours old are archived) to prevent the jobs table from becoming bloated.
All Bj's tables are namespaced and accessible via the Bj module:
Bj.table.job.find(:all) # jobs table
Bj.table.job_archive.find(:all) # archived jobs
Bj.table.config.find(:all) # configuration and runner state
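Since these tables respond to the usual ActiveRecord finders (as the find(:all) calls above show), they can be queried like any other model. A minimal sketch, assuming the jobs tables expose the columns referenced elsewhere in this document (command, tag, exit_status):

# sketch only -- assumes the command, tag, and exit_status columns shown
# elsewhere in this document

# finished jobs carrying a (hypothetical) 'bulkmail' tag
finished = Bj.table.job.find(:all,
  :conditions => ["tag = ? and exit_status is not null", "bulkmail"])

finished.each do |job|
  puts "#{ job.command } exited with #{ job.exit_status }"
end

# how many jobs have been moved to the archive table so far
puts Bj.table.job_archive.count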
Bj always arranges for submitted jobs to run with a current working directory of RAILS_ROOT and with the correct RAILS_ENV setting. For example, if you submit a job in production it will have ENV['RAILS_ENV'] == 'production'.
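So a job script can rely on relative paths and on ENV['RAILS_ENV'] without any setup of its own. A minimal, hypothetical example (the filename and log path are illustrative only):

#!/usr/bin/env ruby
# ./jobs/heartbeat.rb -- hypothetical job script. Bj runs it with the cwd
# set to RAILS_ROOT and with RAILS_ENV present in the environment, so both
# can be used directly.

env = ENV['RAILS_ENV'] || 'development'

File.open("log/heartbeat.#{ env }.log", "a") do |f|
  f.puts "ran at #{ Time.now } from #{ Dir.pwd } in #{ env } mode"
end

It would be submitted like any other command, e.g. Bj.submit './jobs/heartbeat.rb'.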
When Bj manages the background runner it will never outlive the rails application - it is started and stopped on demand as the rails app is started and stopped. This is also true for ./script/console - Bj will automatically fire off the background runner to process jobs submitted using the console.
Bj ensures that only one background process is running for your application - firing up three mongrels or fcgi processes will result in only one background runner being started. Note that the number of background runners does not determine throughput - that is determined primarily by the nature of the jobs themselves and how much work they perform per process.
If one ignores platform specific details the design of Bj is quite simple: the main Rails application submits jobs to a table stored in the database. The act of submitting triggers exactly one of two things to occur:
1) a new long running background runner to be started
2) an existing background runner to be signaled
The background runner refuses to run two copies of itself for a given hostname/rails_env combination. For example you may only have one background runner processing jobs on localhost in development mode.
The background runner, under normal circumstances, is managed by Bj itself - you need do nothing to start, monitor, or stop it - it just works. However, some people will prefer to manage their own background process; see the 'External Runner' section below for more on this.
The runner simply processes each job in a highest-priority, oldest-in fashion, capturing stdout, stderr, exit_status, etc. and storing the information back into the database while logging its actions. When there are no jobs to run the runner sleeps for 42 seconds; this sleep is interruptible, however, such as when the runner is signaled that a new job has been submitted, so under normal circumstances there will be zero lag between job submission and job running for an empty queue.
For the paranoid control freaks out there (myself included) it is quite possible to manage and monitor the runner process manually. This can be desirable in production setups where monitoring software may kill leaking rails apps periodically.
Recalling that Bj will only allow one copy of itself to process jobs per hostname/rails_env pair, we can simply do something like this in cron:
cmd = bj run --forever --rails_env=development --rails_root=/Users/ahoward/rails_root

*/15 * * * * $cmd
this will simply attempt to start the background runner every 15 minutes if, and only if, it's not already running.
In addition to this you'll want to tell Bj not to manage the runner itself using
Bj.config["production.no_tickle"] = true
Note that, for clustering setups, it's as simple as adding a crontab and config entry like this for each host. Because Bj throttles background runners per hostname, this allows one runner per hostname - making it quite simple to cluster three nodes behind a besieged rails application.
Bj runs its jobs as command line applications. It ensures that all jobs run in RAILS_ROOT, so it's quite natural to apply a pattern such as
mkdir ./jobs
edit ./jobs/background_job_to_run
...
Bj.submit "./jobs/background_job_to_run"
If you need to run your jobs under an entire rails environment you'll need to do this:
Bj.submit "./script/runner ./jobs/background_job_to_run"
Obviously "./script/runner" loads the rails environment for you. It's worth noting that this happens for each job and that this is by design: the reason is that most rails applications leak memory like a sieve so, if one were to spawn a long running process that used the application code base you'd have a lovely doubling of memory usage on you app servers. Although loading the rails environment for each background job requires a little time, a little cpu, and a lot less memory. A future version of Bj will provide a way to load the rails environment once and to process background jobs in this environment, but anyone wanting to use this in production will be required to duct tape their entire chest and have a team of oxen rip off the tape without screaming to prove steelyness of spirit and profound understanding of the other side.
Don't forget that you can submit jobs with command line arguments:
Bj.submit "./jobs/a.rb 1 foobar --force"
You can also do powerful things by passing stdin to a job that powers through a list of work. For instance, assume a "./jobs/bulkmail" job resembling
STDIN.each do |line|
  address = line.strip
  mail_message_to address   # stand-in for your actual mailer call
end
then you could
stdin = [
  "foo@bar.com",
  "bar@foo.com",
  "ara.t.howard@codeforpeople.com",
]

Bj.submit "./script/runner ./jobs/bulkmail", :stdin => stdin
and all those emails would be sent in the background.
Bj's power is putting jobs in the background in a simple and robust fashion. It's your task to build intelligent jobs that leverage batch processing and other possibilities. The upshot of building tasks this way is that they are quite easy to test before submitting them from inside your application.
Bj can be installed two ways: as a plugin or via rubygems
plugin:
1) ./script/plugin install http://codeforpeople.rubyforge.org/svn/rails/plugins/bj
2) ./script/bj setup
gem:
1) $ sudo gem install bj
2) add "require 'bj'" to config/environment.rb
3) bj setup
Bj.submit submits jobs for background processing. 'jobs' can be a string or an array of strings. Options are applied to each job in 'jobs', and the list of submitted jobs is always returned. Options (string or symbol) can be:
:rails_env => production|development|key_in_database_yml
    when given, this keyword causes bj to submit jobs to the specified
    database. default is RAILS_ENV.

:priority => any number, including negative ones. default is zero.

:tag => a tag added to the job. simply makes searching easier.

:env => a hash specifying any additional environment vars the background
    process should have.

:stdin => any stdin the background process should have. must respond_to #to_s.
eg:
jobs = Bj.submit 'echo foobar', :tag => 'simple job'
jobs = Bj.submit '/bin/cat', :stdin => 'in the hat', :priority => 42
jobs = Bj.submit './script/runner ./scripts/a.rb', :rails_env => 'production'
jobs = Bj.submit './script/runner /dev/stdin',
                 :stdin => 'p RAILS_ENV',
                 :tag => 'dynamic ruby code'

jobs = Bj.submit array_of_commands, :priority => 451
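None of the examples above exercises the :env option; here is a hedged sketch (the variable name and job script are illustrative, not part of Bj):

# sketch only -- REPORT_FORMAT and ./jobs/report.rb are illustrative names
jobs = Bj.submit './jobs/report.rb',
                 :env => { 'REPORT_FORMAT' => 'csv' },
                 :tag => 'nightly report'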
When jobs are run, they are run in RAILS_ROOT. Various attributes are available only once the job has finished. You can check whether or not a job is finished by using the #finished? method, which simply does a reload and checks to see if the exit_status is non-nil.
eg:
jobs = Bj.submit list_of_jobs, :tag => 'important'
...
jobs.each do |job|
  if job.finished?
    p job.exit_status
    p job.stdout
    p job.stderr
  end
end
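Putting this together, a caller can poll a submitted job until it finishes. A minimal sketch relying only on the behavior described above (and assuming the background runner is up so the job actually gets executed):

# sketch: block until a single submitted job finishes, then inspect it.
# #finished? reloads the record and checks exit_status, as described above.
job = Bj.submit('echo done').first

sleep 1 until job.finished?   # waits forever if no background runner is processing jobs

p job.exit_status   # => 0
p job.stdout        # expected to contain "done"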
See lib/bj/api.rb for more details.
http://quintess.com/
http://www.engineyard.com/
http://igicom.com/
http://eparklabs.com/
http://your_company.com/ <<-- (targeted marketing aimed at *you*)
1.0.1
PARAMETERS
  --rails_root=rails_root, -R (0 ~> rails_root=)
      the rails_root will be guessed unless you set this
  --rails_env=rails_env, -E (0 ~> rails_env=development)
      set the rails_env
  --log=log, -l (0 ~> log=STDERR)
      set the logfile
  --help, -h
AUTHOR
  ara.t.howard@gmail.com
URIS
  http://codeforpeople.com/lib/ruby/
  http://rubyforge.org/projects/codeforpeople/
  http://codeforpeople.rubyforge.org/svn/rails/plugins/