Twitterpunch =============== Twitterpunch is designed to work with PhotoBooth and OS X Folder Actions. When this script is called with the name of an image file, it will post the image to Twitter, along with a message randomly chosen from a list and a specified hashtag. If you call the script with the `--stream` argument instead, it will listen for tweets to that hashtag and download them to a specified directory. If the tweet came from another user, Twitterpunch will speak it aloud. Typically, you'll run one copy on an OSX laptop with PhotoBooth, and a separate copy on another machine (either Windows or OSX) for the viewer. You can also use a mobile device as a remote control, if you like. This will allow the user to enter a custom message for each photo that gets tweeted out, if they'd like. Configuration =========== Configure the program via the `~/.twitterpunch/config.yaml` YAML file. This file should look similar to the example below. --- :twitter: # twitter configuration :consumer_key: <consumer key> :consumer_secret: <consumer secret> :access_token: <access token> :access_token_secret: <access secret> :messages: # list of messages to attach - Hello there # to outgoing tweets - I'm a posting fool - minimally viable product :hashtag: Twitterpunch # The hashtag to post and listen to :handle: Twitterpunch # The twitter username to post as :photodir: ~/Pictures/twitterpunch/ # Where to save downloaded images :logfile: ~/.twitterpunch/activity.log # Where to save logs :viewer: # Use the built-in slideshow viewer :count: 5 # How many images to have onscreen at once :remote: :timeout: 45 # How long the button should remain disabled for :apptitle: dslrBooth # The photo booth application title :hotkey: space # Which hotkey to send to trigger a photo 1. Generate a skeleton configuration file * `twitterpunch --configure` 1. Edit the configuration file as needed. You'll be prompted with the path. * If you have your own Twitter application credentials, you're welcome to use them. 1. Authorize the application with the Twitter API. * `twitterpunch --authorize` Usage ========== ### Using OS X PhotoBooth 1. Start PhotoBooth at least once to generate its library. 1. Install the Twitterpunch Folder Action * `twitterpunch --install` * It may claim that it could not be attached, fear not. 1. Profit! * _and by that, I mean take some shots with PhotoBooth!_ *Note*: if the folder action doesn't seem to work and photos aren't posted to Twitter, here are some troubleshooting steps to take: 1. Run Twitterpunch by hand with photos as arguments. This may help you isolate configuration or authorization issues. * `twitterpunch foo.jpg` 1. Correct the path in the workflow. * `which twitterpunch` * Edit the Twitterpunch folder action to include that path. #### Using the remote web app Configure the remote web app using the `:remote` hash in `config.yaml`. You can usually find the title of the app using `system_profiler -detailLevel full SPApplicationsDataType` and grepping for the name or path to the `.app`. In this example, the title is _dslrBooth_. [ben@ganymede] ~ $ system_profiler -detailLevel full SPApplicationsDataType | grep -B8 dslrBooth.app dslrBooth: Version: 2.9 Obtained from: Identified Developer Last Modified: 10/14/17, 9:50 PM Kind: Intel 64-Bit (Intel): Yes Signed by: Developer ID Application: Hope Pictures LLC (MZR5GHAQX4), Developer ID Certification Authority, Apple Root CA Location: /Applications/dslrBooth.app 1. Run the app with `twitterpunch --remote` 1. 
Browse to the app with http://{address}:8080 1. [optional] If on an iOS device, add to your homescreen * This will give you "app behaviour", such as full screen, and a nice icon #### Troubleshooting. 1. Make sure the folder action is installed properly 1. Use the Finder to navigate to `~/Pictures/` 1. Right click on the `Photo Booth Library` icon and choose _Show Package Contents_. 1. Right click on the `Pictures` folder and choose `Services > Folder Actions Setup` 1. Make sure that the `Twitterpunch` action is attached. 1. Install the folder action 1. Open the `resources` folder of this gem. * Likely to be found in `/Library/Ruby/Gems/{version}/gems/twitterpunch-#{version}/resources/`. 1. Double click on the `Twitterpunch` folder action and install it. * It may claim that it could not be attached, fear not. ### Using something besides PhotoBooth Configure the program you are using for your photo shoot to call Twitterpunch each time it snaps a photo. Pass the name of the new photo as a command line argument. Alternatively, you could batch them, as Twitterpunch can accept multiple files at once. [ben@ganymede] ~ $ twitterpunch photo.jpg [photo2.jpg photo3.jpg photo4.jpg] You can manually install the Folder Action, or you can follow the automated install process after tweaking the workflow slightly. 1. Identify where the app stores the resulting image files. 1. Edit the Twitterpunch folder action to include that path. 1. Follow the steps above to install the Folder Action. ### Viewing the Twitter stream Twitterpunch will run on OS X or Windows equally well. Simply configure it on the computer that will act as the Twitter display and then run in streaming mode. [ben@ganymede] ~ $ twitterpunch --stream There are two modes that Twitterpunch can operate in. 1. If a `:hashtag` is defined then all images tweeted to the configured hashtag will be displayed in the slideshow. 1. Otherwise, Twitterpunch will stream the `:handle` Twitter user's stream and display all images either posted by that user or addressed to that user. With protected tweets, you can have rudimentary access control. In either mode, tweets that come from any other user will also be spoken aloud. If you don't want to use the built-in slideshow viewer, you can disable it by removing the `:viewer` key from your `~/.twitterpunch/config.yaml` config file. Twitterpunch will then simply download the tweeted images and save them into the `:photodir` directory. You can then use anything you like to view them. There are currently two decent viewing options I am aware of. * Windows background image: * Configure the Windows background to randomly cycle through photos in a directory. * Hide desktop icons. * Hide the taskbar. * Disable screensaver and power savings. * Drawbacks: You're using Windows and you have to install Ruby & RubyGems manually. * OS X screensaver: * Choose one of the sexy screensavers and configure it to show photos from the `:photodir` * Set screensaver to a super short timeout. * Disable power savings. * Drawbacks: The screensaver doesn't reload dynamically, so I have to kick it and you'll see it reloading each time a new tweet comes in. Limitations =========== * It currently requires manual setup for Folder Actions. * Rubygame is kind of a pain to set up. Contact ======= * Author: Ben Ford * Email: binford2k@gmail.com * Twitter: @binford2k * IRC (Freenode): binford2k
Germinate is a tool for writing about code. With Germinate, the source code IS the article. For example, given the following source code: # #!/usr/bin/env ruby # :BRACKET_CODE: <pre>, </pre> # :PROCESS: ruby, "ruby %f" # :SAMPLE: hello def hello(who) puts "Hello, #{who}" end hello("World") # :TEXT: # Check out my amazing program! Here's the hello method: # :INSERT: @hello:/def/../end/ # And here's the output: # :INSERT: @hello|ruby When we run the <tt>germ format</tt> command the following output is generated: Check out my amazing program! Here's the hello method: <pre> def hello(who) puts "Hello, #{who}" end </pre> And here's the output: <pre> Hello, World </pre> To get a better idea of how this works, please take a look at link:examples/basic.rb, or run: germ generate > basic.rb To generate an example article to play with. Germinate is particularly useful for writing articles, such as blog posts, which contain code excerpts. Instead of forcing you to keep a source code file and an article document in sync throughout the editing process, the Germinate motto is "The source code IS the article". Specially marked comment sections in your code file become the article text. Wherever you need to reference the source code in the article, use insertion directives to tell Germinate what parts of the code to excerpt. An advanced selector syntax enables you to be very specific about which lines of code you want to insert. If you also want to show the output of your code, Germinate has you covered. Special "process" directives enable you to define arbitrary commands which can be run on your code. The output of the command then becomes the excerpt text. You can define an arbitrary number of processes and have different excerpts showing the same code as processed by different commands. You can even string processes together into pipelines. Development of Germinate is graciously sponsored by Devver, purveyor of fine cloud-based services to busy Ruby developers. If you like this tool please check them out at http://devver.net.
Process YAML files with embedded Ruby
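A rough illustration of that idea using only Ruby's standard library (ERB plus YAML); this is not the gem's own API, and the file name and keys are made up:

```ruby
require 'erb'
require 'yaml'

# Render the Ruby embedded in a YAML file with ERB, then parse the result.
template = File.read('settings.yml')           # e.g. "port: <%= 3000 + 1 %>"
rendered = ERB.new(template).result(binding)
settings = YAML.safe_load(rendered)

p settings                                     # => {"port"=>3001}
```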
Backup data is saved to YAML files. The reverse process loads the YAML and restores the records. Works for new records and updates, but not for deletions.
Loose collection of CLI commands to sort and process FLAC files
Helper to process different types of files (JSON, CSV, etc.) line-by-line from the command line
Log file lines should be prefixed with log level, timestamp, process id and an easy-to-grep "end of prefix" marker. This subclass of Logger::Formatter does those things.
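A minimal sketch of such a formatter; the field order and the end-of-prefix marker are illustrative, not necessarily what this gem emits:

```ruby
require 'logger'
require 'time'

# Prefix every log line with level, timestamp, pid, and a greppable
# "end of prefix" marker.
class PrefixedFormatter < Logger::Formatter
  END_OF_PREFIX = ' ||| '   # illustrative marker, easy to grep for

  def call(severity, time, _progname, msg)
    prefix = format('%-5s %s pid=%d', severity, time.utc.iso8601(3), Process.pid)
    "#{prefix}#{END_OF_PREFIX}#{msg2str(msg)}\n"
  end
end

logger = Logger.new($stdout)
logger.formatter = PrefixedFormatter.new
logger.info('backup finished')
```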
Processes a UIAutomation plist file, as generated by TuneupJS, into JUnit XML format so it can be parsed by a CI server such as Jenkins.
QSpec inserts spec files into a queue. Workers process that queue one by one.
A Jekyll hook plugin for generating a sitemap.txt file during the Jekyll build process.
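A rough sketch of how a Jekyll hook plugin can write a sitemap.txt after the site is built; this is the general pattern, not necessarily this gem's code, and the file name and config keys are assumptions:

```ruby
# _plugins/sitemap_txt.rb -- illustrative sketch only
Jekyll::Hooks.register :site, :post_write do |site|
  base = site.config['url'].to_s
  urls = (site.pages + site.posts.docs).map { |doc| base + doc.url }
  File.write(File.join(site.dest, 'sitemap.txt'), urls.join("\n") + "\n")
end
```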
If you have files that you somehow need to process, file_worker is your friend.
Its only purpose is to provide some reusable defaults for processing my .xls, .csv, and .txt files
Library to process structured CSV files
A simple, parallel processing framework for CSV files.
Turnitin Core API (TCA) provides direct API access to the core functionality provided by Turnitin. TCA supports file submission, similarity report generation, group management, and visualization of report matches via Cloud Viewer or PDF download. Below is the full flow to set up an integration scope and an API Key, and then make calls to TCA. Integration Scope and API Key management is done via the Admin Console UI by logging in as an admin user. For more details, go to our [developer portal documentation page](https://developers.turnitin.com/docs). ## Integration Scope and API Key Management TCA API calls must provide an API Key for authentication, so you must first have at least one integration scope associated with at least one API Key to use TCA. ### Admin Console UI First, log in to the Admin Console UI as an *Admin* user with permission to create Integration Scopes, under a tenant that is licensed to use the TCA product. Integration Scopes (you can create a new one, or add keys to an existing one) * Click `Integrations` in the side bar --> `+ Add Integration` at the top of the page --> Enter a name --> the `Add` button API Keys * Click `Integrations` in the side bar --> the `Create API Key` button next to a given Integration Scope --> Enter a name --> click the `Create and View` button * Copy/save the key manually or click the save-to-clipboard button to copy it (this is the only time it will be shown) ## TCA Flow * Register a webhook * Create a submission * Upload a file for the submission * Wait for the submission upload to process * If you registered a webhook, a callback will be sent to it when the upload is complete * The status of the *submission* will also update to `COMPLETE` * Request a Similarity Report * Wait for the similarity report to process * If you registered a webhook, a callback will be sent to it when the report is complete * The status of the *report* will also be updated to `COMPLETE` * Request a URL with parameters to view the Similarity Report
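A hedged sketch of that flow in Ruby. The base URL, endpoint paths, payload fields, and status values below are assumptions drawn from the flow description above, not confirmed routes; consult the TCA developer documentation for the real API, and prefer webhooks over the polling shown here:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Assumed base URL and API key location -- placeholders only.
BASE = 'https://example.turnitin.com/api/v1'
KEY  = ENV.fetch('TCA_API_KEY')

def tca(method, path, body = nil, content_type: 'application/json')
  uri = URI("#{BASE}#{path}")
  req = Net::HTTP.const_get(method).new(uri)
  req['Authorization'] = "Bearer #{KEY}"
  req['Content-Type']  = content_type
  req.body = body if body
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
  res.body && !res.body.empty? ? JSON.parse(res.body) : {}
end

# 1. create a submission, 2. upload the file, 3. poll until COMPLETE,
# 4. request a similarity report (a registered webhook would replace the polling).
sub = tca(:Post, '/submissions', { title: 'Essay 1' }.to_json)
tca(:Put, "/submissions/#{sub['id']}/original", File.binread('essay.docx'),
    content_type: 'binary/octet-stream')
sleep 5 until tca(:Get, "/submissions/#{sub['id']}")['status'] == 'COMPLETE'
tca(:Put, "/submissions/#{sub['id']}/similarity", { generation_settings: {} }.to_json)
```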
AWS Lambda integration plugin for Shrine File Attachment toolkit for Ruby applications. Used for invoking AWS Lambda functions for processing files already stored in some AWS S3 bucket.
Miscellaneous methods that may or may not be useful. sh:: Safely pass untrusted parameters to sh scripts. fork_and_check:: Run a block in a forked process and raise an exception if the process returns a non-zero value. do_and_exit, do_and_exit!:: Run a block. If the block does not call exit!, a successful exec, or equivalent, run exit(1) or exit!(1) ourselves. Useful to make sure a forked block either runs a successful exec or dies. Any exceptions from the block are printed to standard error. overwrite:: Safely replace a file. Writes to a temporary file and then moves it over the old file. tempname_for:: Generates a unique temporary path based on a filename. The generated filename resides in the same directory as the original one. try_n_times:: Retries a block of code until it succeeds or a maximum number of attempts (default 10) is exceeded. Exception#to_formatted_string:: Returns a string that looks like how Ruby would dump an uncaught exception. IO#best_datasync:: Tries fdatasync, falling back to fsync, falling back to flush.
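Minimal standalone sketches of the overwrite and try_n_times behaviors described above, written from that description rather than taken from the gem itself:

```ruby
require 'tempfile'
require 'fileutils'

# Write to a temporary file in the same directory, then move it over the
# original (the "overwrite" idea: the old file is never left half-written).
def overwrite(path)
  tmp = Tempfile.create(File.basename(path), File.dirname(path))
  yield tmp
  tmp.close
  FileUtils.mv(tmp.path, path)
end

# Retry a block until it succeeds or the attempt limit is reached
# (the "try_n_times" idea, default 10 attempts as described).
def try_n_times(n = 10)
  attempts = 0
  begin
    yield
  rescue StandardError
    attempts += 1
    retry if attempts < n
    raise
  end
end

overwrite('config.txt') { |f| f.puts 'fresh content' }
try_n_times(3) { File.read('config.txt') }
```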
It automates the Heroku backup process and uploads the backup file to Amazon S3.
This is a development tool for restarting processes when you update a file
Ply is a Ruby gem for reading Stanford PLY-format 3D model files. The PLY file format is a flexible format for storing semi-structured binary data, and is often used to store polygonalized 3D models generated with range scanning hardware. You can find some examples of the format at the {Stanford 3D Scanning Repository}[http://graphics.stanford.edu/data/3Dscanrep/]. Ply provides a simple API for quick access to the data in a PLY file (including examining the structure of a particular file's content), and an almost-as-simple event-driven API which can be used to process extremely large PLY files in a streaming fashion, without needing to keep the full dataset represented in the file in memory. Ply handles all three types of PLY files (ASCII, binary-big-endian and binary-little-endian). If you don't have any Stanford PLY files on hand, you probably don't need this gem, but if you're curious, the PLY file format is described at Wikipedia[http://en.wikipedia.org/wiki/PLY_(file_format)].
This is a buildr extension that processes OSGi bundles using the iPojo "pojoization" tool that generates appropriate metadata from annotations and a config file.
Library which can traverse an archived file (using `libarchive` under the hood) and yield an IO-like object for each file entry inside it for further processing.
ansible-make-role processes a single-file role definition and generates an Ansible role from it. The role is defined in the role.yml file and contains a section for each Ansible main.yml file: 'defaults', 'vars', 'tasks', and 'handlers'. Definitions outside of those sections (notably 'dependencies') go to the meta/main.yml file
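A rough sketch of the transformation described above (read role.yml, emit one main.yml per section, and route everything else to meta/main.yml); this is the idea only, not ansible-make-role's actual code, and the output path is assumed:

```ruby
require 'yaml'
require 'fileutils'

role_dir = 'roles/myrole'                      # assumed output location
role     = YAML.safe_load(File.read('role.yml')) || {}
sections = %w[defaults vars tasks handlers]

# One main.yml per recognized section.
sections.each do |section|
  next unless role.key?(section)
  dir = File.join(role_dir, section)
  FileUtils.mkdir_p(dir)
  File.write(File.join(dir, 'main.yml'), YAML.dump(role[section]))
end

# Everything else (notably 'dependencies') goes to meta/main.yml.
meta = role.reject { |key, _| sections.include?(key) }
unless meta.empty?
  FileUtils.mkdir_p(File.join(role_dir, 'meta'))
  File.write(File.join(role_dir, 'meta', 'main.yml'), YAML.dump(meta))
end
```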
A regex-based parser that processes the iTunes Music Library.xml file and generates a SQLite3 database for additional data mining. Also generates treemaps from the parsed data.
Imports CSV files as arrays of hashes, with features for processing large CSV files and better support for file encodings.
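For reference, the core idea with nothing but Ruby's standard CSV library (this gem layers large-file handling and encoding support on top of it); the file name and headers are made up:

```ruby
require 'csv'

# Stream a CSV file row by row, collecting each row as a hash keyed by header.
rows = []
CSV.foreach('people.csv', headers: true, encoding: 'bom|utf-8') do |row|
  rows << row.to_h                 # e.g. {"name" => "Ada", "age" => "36"}
end
```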
gephi_keeper is a Ruby conversion gem that links YourTwapperKeeper (http://your.twapperkeeper.com/) to Gephi (http://www.gephi.org) by converting JSON output from YTK to Gephi's GEXF format. YourTwapperKeeper is a self-hosted service that archives tweets. Gephi is a graph visualization tool. YTK exports the archived tweets in a number of formats, one of which is JSON. The JSON files produced by YTK are converted by gephi_keeper into Gephi-readable GEXF files. The GEXF files can then be processed by Gephi to produce colorful graph representations of Twitter output.
This is an extraction of a useful pattern for pulling emails and their attached files and processing them.
Ruby module which processes flat files from IANA
An extension to the Potluck gem that provides control over the Nginx process and its configuration files from Ruby.
Allows CarrierWave to process Base64-encoded file uploads by providing a `#{mounted_as}_data=` method that accepts Base64-encoded strings and sets the `#{mounted_as}` column accordingly.
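Roughly, such a method can be built like the sketch below. This shows the general idea, not the gem's actual implementation; the model, uploader, and filename are assumptions, and it presupposes a Rails app with CarrierWave installed:

```ruby
require 'base64'
require 'stringio'

# CarrierWave expects uploaded objects to respond to #original_filename,
# so wrap the decoded bytes in a StringIO that provides one.
class Base64StringIO < StringIO
  attr_reader :original_filename

  def initialize(data, original_filename)
    @original_filename = original_filename
    super(data)
  end
end

class User < ApplicationRecord
  mount_uploader :avatar, AvatarUploader      # assumed uploader

  # Accepts a raw Base64 string and assigns it to the mounted uploader.
  def avatar_data=(encoded)
    self.avatar = Base64StringIO.new(Base64.decode64(encoded), 'upload.bin')
  end
end
```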
EasyCRC lets you calculate CRC32 checksums of files without locking the whole Ruby process
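For comparison, computing a file's CRC32 incrementally with the standard library looks like the sketch below; reading in chunks keeps memory flat, though how EasyCRC avoids blocking the Ruby process is not shown here:

```ruby
require 'zlib'

# Compute the CRC32 of a file in 64 KiB chunks instead of slurping it whole.
def crc32_of(path)
  crc = 0
  File.open(path, 'rb') do |f|
    while (chunk = f.read(64 * 1024))
      crc = Zlib.crc32(chunk, crc)
    end
  end
  crc
end

puts crc32_of('backup.tar').to_s(16)
```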
Middleware to process directives in your JavaScript and CSS (maybe other stuff later?) and then compile it to a single file. There are also plans to add a caching layer via Redis or Memcached so it's not so I/O-bound
Custom formatter for xcpretty that saves all the errors and warnings to a PMD file, so you can process them easily later.
ffi-generator is a ruby-ffi wrapper code generator based on SWIG interfaces. ffi-generator is able to traverse an XML parse tree file generated by SWIG and to produce a Ruby file with ruby-ffi wrapper code inside. ffi-generator is shipped with a command line tool (ffi-gen) and a rake task that automates the code generation process. ffi-generator's XML capabilities are provided by nokogiri.
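The generated wrapper code is plain ruby-ffi. A hand-written example of the kind of output such a generator produces (the library and function names here are invented for illustration and are not taken from any real SWIG interface):

```ruby
require 'ffi'

# What a generated wrapper roughly looks like for a C function
#   int frobnicate(const char *name, int flags);
module LibFrob
  extend FFI::Library
  ffi_lib 'frob'                                   # invented library name
  attach_function :frobnicate, [:string, :int], :int
end

LibFrob.frobnicate('example', 0)
```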
Concatenating and processing asset files with possibly complicated dependencies.
Formatter with additional processing to write to stdout and a log file.
A tool to combine PDF tools, OCR tools and image processing into a single interface as both a CLI and a library.
EventLoggerRails is a Rails engine for emitting structured events in logs during the execution of business processes for analysis and visualization. It allows teams to define events in a simple, centralized configuration file, and then log those events in JSON format for further processing.
Haecksler is the German word for a wood chipper; the library processes fixed-length files
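For a sense of what processing fixed-length files involves, here is a plain-Ruby sketch using String#unpack; the column widths, field names, and file name are invented, and this is not Haecksler's API:

```ruby
# Each record: 10-char name, 3-char age, 8-char date -- made-up layout.
LAYOUT = 'A10A3A8'
FIELDS = %i[name age date]

File.foreach('people.dat') do |line|
  record = FIELDS.zip(line.unpack(LAYOUT)).to_h
  p record            # e.g. {:name=>"Ada", :age=>"36", :date=>"20240101"}
end
```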
Inspect and process video or audio files
# Rake::ToolkitProgram Create toolkit programs easily with `Rake` and `OptionParser` syntax. Bash completions and usage help are baked in. ## Installation Add this line to your application's Gemfile: ```ruby gem 'rake-toolkit_program' ``` And then execute: $ bundle Or install it yourself as: $ gem install rake-toolkit_program ## Quickstart * Shebang it up (in a file named `awesome_tool.rb`) ```ruby #!/usr/bin/env ruby ``` * Require the library ```ruby require 'rake/toolkit_program' ``` * Make your life easier ```ruby Program = Rake::ToolkitProgram ``` * Define your command tasks ```ruby Program.command_tasks do desc "Build it" task 'build' do # Ruby code here end desc "Test it" task 'test' => ['build'] do # Rake syntax ↑↑↑↑↑↑↑ for dependencies # Ruby code here end end ``` You can use `Program.args` in your tasks to access the other arguments on the command line. For argument parsing integrated into the help provided by the program, see the use of `Rake::Task(Rake::ToolkitProgram::TaskExt)#parse_args` below. * Wire the mainline ```ruby Program.run(on_error: :exit_program!) if $0 == __FILE__ ``` * In the shell, prepare to run the program (UNIX/Linux systems only) ```console $ chmod +x awesome_tool.rb $ ./awesome_tool.rb --install-completions Completions installed in /home/rtweeks/.bashrc Source /home/rtweeks/.bash-complete/awesome_tool.rb-completions for immediate availability. $ source /home/rtweeks/.bash-complete/awesome_tool.rb-completions ``` * Ask for help ```console $ ./awesome_tool.rb help *** ./awesome_tool.rb Toolkit Program *** . . . ``` ## Usage Let's look at a short sample toolkit program -- put this in `awesome.rb`: ```ruby #!/usr/bin/env ruby require 'rake/toolkit_program' require 'ostruct' ToolkitProgram = Rake::ToolkitProgram ToolkitProgram.title = "My Awesome Toolkit of Awesome" ToolkitProgram.command_tasks do desc <<-END_DESC.dedent Fooing myself I'm not sure what I'm doing, but I'm definitely fooing! END_DESC task :foo do a = ToolkitProgram.args puts "I'm fooed#{' on a ' if a.implement}#{a.implement}" end.parse_args(into: OpenStruct.new) do |parser, args| parser.no_positional_args! parser.on('-i', '--implement IMPLEMENT', 'An implement on which to be fooed') do |val| args.implement = val end end end if __FILE__ == $0 ToolkitProgram.run(on_error: :exit_program!) end ``` Make sure to `chmod +x awesome.rb`! What does this support? $ ./awesome.rb foo I'm fooed $ ./awesome.rb --help *** My Awesome Toolkit of Awesome *** Usage: ./awesome.rb COMMAND [OPTION ...] Avaliable options vary depending on the command given. For details of a particular command, use: ./awesome.rb help COMMAND Commands: foo Fooing myself help Show a list of commands or details of one command Use help COMMAND to get more help on a specific command. $ ./awesome.rb help foo *** My Awesome Toolkit of Awesome *** Usage: ./awesome.rb foo [OPTION ...] Fooing myself I'm not sure what I'm doing, but I'm definitely fooing! Options: -i, --implement IMPLEMENT An implement on which to be fooed $ ./awesome.rb --install-completions Completions installed in /home/rtweeks/.bashrc Source /home/rtweeks/.bash-complete/awesome.rb-completions for immediate availability. 
$ source /home/rtweeks/.bash-complete/awesome.rb-completions $ ./awesome.rb <tab><tab> foo help $ ./awesome.rb f<tab> ↳ ./awesome.rb foo $ ./awesome.rb foo <tab> ↳ ./awesome.rb foo -- $ ./awesome.rb foo --<tab><tab> --help --implement $ ./awesome.rb foo --i<tab> ↳ ./awesome.rb foo --implement $ ./awesome.rb foo --implement <tab><tab> --help awesome.rb $ ./awesome.rb foo --implement spoon I'm fooed on a spoon ### Defining Toolkit Commands Just define tasks in the block of `Rake::ToolkitProgram.command_tasks` with `task` (i.e. `Rake::DSL#task`). If `desc` is used to provide a description, the task will become visible in help and completions. When a command task is initially defined, positional arguments to the command are available as an `Array` through `Rake::ToolkitProgram.args`. ### Option Parsing This gem extends `Rake::Task` with a `#parse_args` method that creates a `Rake::ToolkitProgram::CommandOptionParser` (derived from the standard library's `OptionParser`) and an argument accumulator and `yield`s them to its block. * The arguments accumulated through the `Rake::ToolkitProgram::CommandOptionParser` are available to the task in `Rake::ToolkitProgram.args`, replacing the normal `Array` of positional arguments. * Use the `into:` keyword of `#parse_args` to provide a custom argument accumulator object for the associated command. The default argument accumulator constructor can be defined with `Rake::ToolkitProgram.default_parsed_args`. Without either of these, the default accumulator is a `Hash`. * Options defined using `OptionParser#on` (or any of the variants) will print in the help for the associated command. ### Positional Arguments Accessing positional arguments given after the command name depends on whether or not `Rake::Task(Rake::ToolkitProgram::TaskExt)#parse_args` has been called on the command task. If this method is not called, positional arguments will be an `Array` accessible through `Rake::ToolkitProgram.args`. When `Rake::Task(Rake::ToolkitProgram::TaskExt)#parse_args` is used: * `Rake::ToolkitProgram::CommandOptionParser#capture_positionals` can be used to define how positional arguments are accumulated. * If the argument accumulator is a `Hash`, the default (without calling this method) is to assign the `Array` of positional arguments to the `nil` key of the `Hash`. * For other types of accumulators, the positional arguments are only accessible if `Rake::ToolkitProgram::CommandOptionParser#capture_positionals` is used to define how they are captured. * If a block is given to this method, the block of the method will receive the `Array` of positional arguments. If it is passed an argument value, that value is used as the key under which to store the positional arguments if the argument accumulator is a `Hash`. * `Rake::ToolkitProgram::CommandOptionParser#expect_positional_cardinality` can be used to set a rule for the count of positional arguments. This will affect the _usage_ presented in the help for the associated command. * `Rake::ToolkitProgram::CommandOptionParser#map_positional_args` may be used to transform (or otherwise process) positional arguments one at a time and in the context of options and/or arguments appearing earlier on the command line. ### Convenience Methods * `Rake::Task(Rake::ToolkitProgram::TaskExt)#prohibit_args` is a quick way, for commands that accept no options or positional arguments, to declare this so the help and bash completions reflect this. 
It is equivalent to using `#parse_args` and telling the parser `parser.expect_positional_cardinality(0)`. * `Rake::ToolkitProgram::CommandOptionParser#no_positional_args!` is a shortcut for calling `#expect_positional_cardinality(0)` on the same object. * `Rake::Task(Rake::ToolkitProgram::TaskExt)#invalid_args!` and `Rake::ToolkitProgram::CommandOptionParser#invalid_args!` are convenient ways to raise `Rake::ToolkitProgram::InvalidCommandLine` with a message. ## OptionParser in Rubies Before and After v2.4 The `OptionParser` class was extended in Ruby 2.4 to simplify capturing options into a `Hash` or other container implementing `#[]=` in a similar way. This gem supports that, but it means that behavior varies somewhat between the pre-2.4 era and the 2.4+ era. To have consistent behavior across that version change, the recommendation is to use a `Struct`, `OpenStruct`, or custom class to hold program options rather than `Hash`. ## Development After checking out the repo, run `bin/setup` to install dependencies. You can also run `bin/console` for an interactive prompt that will allow you to experiment. To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org). To run the tests, use `rake`, `rake test`, or `rspec spec`. Tests can only be run on systems that support `Kernel#fork`, as this is used to present a pristine and isolated environment for setting up the tool. If run using Ruby 2.3 or earlier, some tests will be pending because functionality expects Ruby 2.4's `OptionParser`. ## Contributing Bug reports and pull requests are welcome on GitHub at https://github.com/PayTrace/rake-toolkit_program. For further details on contributing, see [CONTRIBUTING.md](./CONTRIBUTING.md).
When using a read-only file system, it makes sense to upload assets directly to cloud storage. This gem makes it easy to upload directly to an S3 bucket using the Plupload JS library. Callback functions can be triggered to run once files have finished uploading so that they can be processed by your application.
Molder is a handy command line tool for generating and running (in parallel, using a pool of processes with a configurable size) a set of related and yet different commands. A YAML file defines both the attributes and the command template, and Molder then merges the two with CLI arguments to give you a consistent set of commands for, e.g., provisioning thousands of virtual hosts in a cloud. The gem is not limited to any particular cloud, tool, or command, and can be used across various domains to generate a consistent set of commands based on the YAML-supplied attributes and templates that might vary across custom dimensions. For example, you could generate 600 provisioning commands for hosts in EC2, numbered from 1 to 100, but constrained to the zones "a", "b", "c", and data centers "dc" (values: ['us-west2', 'us-east1']). Behind the scenes, Molder uses another Ruby gem, Parallel, to actually run the provisioning commands.
Library and DSL to process log files with a pipe-and-filter architecture
This gem provides a Rake task that records the git status in a separate file, which can then be included, e.g. as part of a file-based build process.
siRESTa is a DSL for declarative REST APIs. It can generate a Ruby API (with Sinatra) and a client (with Excon) for you, based on a YAML file. Processing requests is done using a monad.
Turnstile is a Redis-based library that can accurately track the total number of concurrent users accessing a web/API-based server application. It can break the count down by "platform" or device type, and returns data in JSON, CSV or NAD formats. While user tracking may happen synchronously using a Rack middleware, another method is provided that is based on log file analysis, and can therefore be performed outside the web server process.
== Welcome to syc-spector home :: https://github.com/sugaryourcoffee/syc-spector == Description The sycspector scans a file for patterns provided on the command line. Lines that match the pattern are saved to a file with valid values, and lines that don't match the pattern are added to a file with invalid values. The valid and invalid files, as well as the pattern used, are stored in a history file. The saved values are used for a subsequent call to sycspector with --show to show the results, or with -f (fix) to prompt for the invalid values so they can be fixed. Fixed values can be appended to the valid values file. == Installation sycspector can be installed as a gem from http://RubyGems.org with $ gem install syc-spector == Invocation Examples Searches for email addresses in the provided file 'email_addresses' $ sycspector email_addresses -p email Lines that are not recognized can be prompted, fixed and appended to the valid file with $ sycspector -fa To show the result of the invocation use $ sycspector --show To fix the values from the input file at the first scan $ sycspector -f email-addresses -p email To sort the values $ sycspector -s email-addresses -p email To fix, sort and remove duplicates (individualize) $ sycspector -fsi email-addresses -p email Matching patterns like 'name, firstname' $ sycspector name -p "\w+, \w+" To scan only whole lines use $ sycspector name -p "\A\w+, \w+\Z" If the file contains lines like "Doe, John and Doe, Jane" these won't be saved at the first scan but can be scanned with the --fix switch and appended to the valid values from the last run $ sycspector -fa Fixing a specific file by specifying the invalid file as the input file $ sycspector -fa 2013016-083346_invalid_name -o 2013016-083346_valid_name Specifying the file where the results (valid and invalid) should go $ sycspector -fa -o outputfile To process all at once $ sycspector -fis inputfile -o outputfile -p "\A\w+, \w+\Z" --show == License syc-spector is released under the {MIT License}[http://www.opensource.org/licenses/MIT].
Template Configurator is a utility to write configuration files from ERB templates. When the file's content changes, it can then call an init script to intelligently reload the configuration. Throughout the entire process, exclusive file locks are used on the output file and the JSON file to help ensure they are not manipulated during the transformation process.
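A stripped-down sketch of that flow (render an ERB template, take an exclusive lock on the output file, and only rewrite and reload when the content actually changed); the file names and the reload command are placeholders, and this is the idea rather than the gem's implementation:

```ruby
require 'erb'
require 'digest'

# Render the template with whatever variables are in scope here.
rendered = ERB.new(File.read('nginx.conf.erb')).result(binding)

File.open('nginx.conf', File::RDWR | File::CREAT) do |f|
  f.flock(File::LOCK_EX)                       # exclusive lock while we compare and write
  if Digest::SHA256.hexdigest(f.read) != Digest::SHA256.hexdigest(rendered)
    f.rewind
    f.truncate(0)
    f.write(rendered)
    f.flush
    system('service nginx reload')             # placeholder init-script call
  end
end
```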