Simple converter between URIs and pathnames. It creates valid, unique and readable filenames from URIs, and vice versa. It can be used to name files while saving data from websites and, conversely, to read the files assigned to URIs, for instance when simulating or stubbing web accesses by reading files instead.
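The core idea can be sketched with percent-encoding (an illustration of the concept only; the gem's actual escaping scheme, API and module name may differ):

```ruby
require 'cgi'

# Toy sketch: percent-encode a URI so it becomes a valid, unique and
# readable filename, and decode it back. Module name is hypothetical.
module UriToPath
  def self.to_filename(uri)
    CGI.escape(uri)
  end

  def self.to_uri(filename)
    CGI.unescape(filename)
  end
end
```

Because the encoding is reversible, a directory of saved responses can be mapped back to the URIs they were fetched from.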
Converts Ableton Live .als files to XML as they're saved. The point is to let you use git with your .als files; git is a lot more useful with line-based text files than with binary files.
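An .als file is gzip-compressed XML, so the conversion itself is a one-step decompression (sketch only; the gem hooks this into the save event, and the function name here is illustrative):

```ruby
require 'zlib'

# Decompress a gzipped Ableton Live set (.als) into plain XML so git
# can diff it line by line. Function name is hypothetical.
def als_to_xml(als_path, xml_path)
  xml = Zlib::GzipReader.open(als_path) { |gz| gz.read }
  File.write(xml_path, xml)
end
```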
Library for managing config files, based on OpenStruct; includes file saving and prompting at creation.
Make your field act as a git repo. Save the content to a file, and load the content from a file.
h1. Morning Glory

Morning Glory comprises a rake task and helper methods that manage the deployment of static assets into an Amazon CloudFront CDN's S3 bucket, improving the performance of static assets on your Rails web applications.

_NOTE: You will require an Amazon Web Services (AWS) account in order to use this gem. Specifically: S3 for storing the files you wish to distribute, and CloudFront for CDN distribution of those files._

This version of Morning Glory works with Rails 3.x and Ruby 1.9.x.

h2. What does it do?

Morning Glory provides an easy way to deploy Ruby on Rails application assets to the Amazon CloudFront CDN. It solves a number of common issues with S3/CloudFront. For instance, CloudFront won't automatically expire old assets stored on edge nodes when you redeploy new assets (the CloudFront expiry time is 24 hours minimum). To fix this, Morning Glory automatically namespaces asset releases for you, then updates all references to those renamed assets within your stylesheets, ensuring there are no broken asset links. It also provides a helper method to rewrite all standard Rails asset helper generated URLs to your CloudFront CDN distributions, as well as handling switching between HTTP and HTTPS.

Morning Glory was also built with Sass (Syntactically Awesome Stylesheets) in mind. If you use Sass for your stylesheets they will automatically be built before deployment to the CDN. See http://sass-lang.com/ for more information on Sass.

h2. What it doesn't do

Morning Glory cannot configure your CloudFront distributions for you automatically. You will have to manually log in to your AWS Management Console account, "https://console.aws.amazon.com/cloudfront/home":https://console.aws.amazon.com/cloudfront/home, and set up a distribution pointing to an S3 bucket.

h2.
Installation

<pre>
gem 'morning_glory'
</pre>

h2. Usage

Morning Glory provides its functionality via rake tasks. You'll need to specify the target Rails environment configuration you want to deploy for by using the @RAILS_ENV={env}@ parameter (for example, @RAILS_ENV=production@).

<pre>
rake morning_glory:cloudfront:deploy RAILS_ENV={YOUR_TARGET_ENVIRONMENT}
</pre>

h2. Configuration

h3. The Morning Glory configuration file, @config/morning_glory.yml@

You can specify a configuration section for every Rails environment (production, staging, testing, development). This section can have the following properties defined:

<pre>
---
production:
  enabled: true                   # Is MorningGlory enabled for this environment?
  bucket: cdn.production.foo.com  # The bucket to deploy your assets into
  s3_logging_enabled: true        # Log the deployment to S3
  revision: "20100317134627"      # The revision prefix. This timestamp is automatically generated on deployment
  delete_prev_rev: true           # Delete the previous asset release (saves on S3 storage space)
</pre>

h3. The Amazon S3 authentication keys configuration file, @config/s3.yml@

This file provides the access credentials for your Amazon AWS S3 account. You can configure keys for all your environments (production, staging, testing, development).

<pre>
---
production:
  access_key_id: YOUR_ACCESS_KEY
  secret_access_key: YOUR_SECRET_ACCESS_KEY
</pre>

Note: If you are deploying your system to Heroku, you can configure your Amazon AWS S3 information with the environment variables S3_KEY and S3_SECRET instead of using a configuration file.

h3. Set up an asset_host

For each environment that you'd like to utilise the CloudFront CDN for, you'll need to define the asset_host within the @config/environments/{ENVIRONMENT}.rb@ configuration file. As of June 2010 AWS supports HTTPS requests on the CloudFront CDN, so you no longer have to worry about switching servers. (Yay!)

h4.
Example config/environments/production.rb @asset_host@ snippet: Here we're targeting a CNAME domain with HTTP support.

<pre>
ActionController::Base.asset_host = Proc.new { |source, request|
  if request.ssl?
    "#{request.protocol}#{request.host_with_port}"
  else
    "#{request.protocol}assets.example.com"
  end
}
</pre>

h3. Why do we have to use a revision-number/namespace/timestamp?

Once an asset has been deployed to the Amazon CloudFront edge servers it cannot be modified - the version exists until it expires (minimum of 24 hours). To get around this we need to prefix the asset path with a revision of some sort - in Morning Glory's case we use a timestamp. That way you can deploy many times during a 24-hour period and always have your latest revision available on your web site.

h2. Dependencies

h3. AWS S3

Required for uploading the assets to the Amazon Web Services S3 buckets. See "http://amazon.rubyforge.org/":http://amazon.rubyforge.org/ for more documentation on installation.

h2. About the name

Perhaps not what you'd expect; a "Morning Glory":http://en.wikipedia.org/wiki/Morning_Glory_cloud is a rare cloud formation observed by glider pilots in Australia (see my side project, "YourFlightLog.com for flight-logging software for paraglider and hang-glider pilots":http://www.yourflightlog.com, from which the Morning Glory plugin was originally extracted).

Copyright (c) 2010 "@AdamBurmister":http://twitter.com/adamburmister/, released under the MIT license
A Cinch plugin to count +1s; scores are saved in a YAML file for persistence.
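The persistence half of such a plugin can be sketched like this (class and method names are hypothetical; the Cinch wiring is omitted):

```ruby
require 'yaml'

# Keep per-nick scores in a Hash and write it to a YAML file after
# every increment, so counts survive bot restarts.
class ScoreBoard
  def initialize(path)
    @path = path
    @scores = File.exist?(path) ? YAML.load_file(path) : {}
    @scores.default = 0
  end

  def plus_one(nick)
    @scores[nick] += 1
    File.write(@path, @scores.to_yaml)
    @scores[nick]
  end
end
```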
README
======

This is a simple API to evaluate information retrieval results. It allows you to load ranked and unranked query results and calculate various evaluation metrics (precision, recall, MAP, kappa) against a previously loaded gold standard.

Start this program from the command line with:

    retreval -l <gold-standard-file> -q <query-results> -f <format> -o <output-prefix>

The options are outlined when you pass no arguments and just call

    retreval

You will find further information in the RDoc documentation and the HOWTO section below. If you want to see an example, use this command:

    retreval -l example/gold_standard.yml -q example/query_results.yml -f yaml -v

INSTALLATION
============

If you have RubyGems, just run

    gem install retreval

You can manually download the sources and build the Gem from there by `cd`ing to the folder where this README is saved and calling

    gem build retreval.gemspec

This will create a gem file, which you just have to install with `gem install <file>` and you're done.

HOWTO
=====

This API supports the following evaluation tasks:

- Loading a gold standard that takes a set of documents, queries and corresponding judgements of relevancy (i.e. "Is this document relevant for this query?")
- Calculation of the _kappa measure_ for the given gold standard
- Loading ranked or unranked query results for a certain query
- Calculation of _precision_ and _recall_ for each result
- Calculation of the _F-measure_ for weighing precision and recall
- Calculation of _mean average precision_ for multiple query results
- Calculation of the _11-point precision_ and _average precision_ for ranked query results
- Printing of summary tables and results

Typically, you will want to use this Gem either standalone or within another application's context.

Standalone Usage
================

Call parameters
---------------

After installing the Gem (see INSTALLATION), you can always call `retreval` from the command line.
The typical call is:

    retreval -l <gold-standard-file> -q <query-results> -f <format> -o <output-prefix>

Where you have to define the following options:

- `gold-standard-file` is a file in a specified format that includes all the judgements
- `query-results` is a file in a specified format that includes all the query results in a single file
- `format` is the format that the files will use (either "yaml" or "plain")
- `output-prefix` is the prefix of the output files that will be created

Formats
-------

Right now, we focus on the formats you can use to load data into the API. Currently, we support YAML files that must adhere to a special syntax. So, in order to load a gold standard, we need a file in the following format:

* "query" denotes the query
* "documents" these are the documents judged for this query
* "id" the ID of the document (e.g. its filename, etc.)
* "judgements" an array of judgements, each one with:
    * "relevant" a boolean value of the judgement (relevant or not)
    * "user" an optional identifier of the user

Example file, with one query, two documents, and one judgement:

    - query: 12th air force germany 1957
      documents:
      - id: g5701s.ict21311
        judgements: []
      - id: g5701s.ict21313
        judgements:
        - relevant: false
          user: 2

So, when calling the program, specify the format as `yaml`. For the query results, a similar format is used. Note that it is necessary to specify whether the result sets are ranked or not, as this will heavily influence the calculations. You can specify the score for a document. By "score" we mean the score that your retrieval algorithm has given the document. But this is not necessary. The documents will always be ranked in the order of their appearance, regardless of their score. Thus in the following example, the document with "07" at the end is the first and "25" is the last, regardless of the score.
    ---
    query: 12th air force germany 1957
    ranked: true
    documents:
    - score: 0.44034874
      document: g5701s.ict21307
    - score: 0.44034874
      document: g5701s.ict21309
    - score: 0.44034874
      document: g5701s.ict21311
    - score: 0.44034874
      document: g5701s.ict21313
    - score: 0.44034874
      document: g5701s.ict21315
    - score: 0.44034874
      document: g5701s.ict21317
    - score: 0.44034874
      document: g5701s.ict21319
    - score: 0.44034874
      document: g5701s.ict21321
    - score: 0.44034874
      document: g5701s.ict21323
    - score: 0.44034874
      document: g5701s.ict21325
    ---
    query: 1612
    ranked: true
    documents:
    - score: 1.0174774
      document: g3290.np000144
    - score: 0.763108
      document: g3201b.ct000726
    - score: 0.763108
      document: g3400.ct000886
    - score: 0.6359234
      document: g3201s.ct000130
    ---

**Note**: You can also use the `plain` format, which will load the gold standard in a different way (but not the results):

    my_query	my_document_1	false
    my_query	my_document_2	true

See that every query/document/relevancy pair is separated by a tabulator? You can also add the user's ID in the fourth column if necessary.

Running the evaluation
----------------------

After you have specified the input files and the format, you can run the program. If needed, the `-v` switch will turn on verbose messages, such as information on how many judgements, documents and users there are, but this shouldn't be necessary. The program will first load the gold standard and then calculate the statistics for each result set. The output files are automatically created and contain a YAML representation of the results. Calculations may take a while depending on the number of judgements and documents. If there are a thousand judgements, allow a few seconds for each result set.

Interpreting the output files
-----------------------------

Two output files will be created:

- `output_avg_precision.yml`
- `output_statistics.yml`

The first lists the average precision for each query in the query result file.
The second file lists all supported statistics for each query in the query results file. For example, for a ranked evaluation, the first two entries of such a query result statistic look like this:

    ---
    12th air force germany 1957:
    - :precision: 0.0
      :recall: 0.0
      :false_negatives: 1
      :false_positives: 1
      :true_negatives: 2516
      :true_positives: 0
      :document: g5701s.ict21313
      :relevant: false
    - :precision: 0.0
      :recall: 0.0
      :false_negatives: 1
      :false_positives: 2
      :true_negatives: 2515
      :true_positives: 0
      :document: g5701s.ict21317
      :relevant: false

You can see the precision and recall for that specific point and also the number of documents for the contingency table (true/false positives/negatives). Also, the document identifier is given.

API Usage
=========

Using this API in another Ruby application is probably the more common use case. All you have to do is include the Gem in your Ruby or Ruby on Rails application. For details about available methods, please refer to the API documentation generated by RDoc.

**Important**: For this implementation, we use the document ID, the query and the user ID as the primary keys for matching objects. This means that your documents and queries are identified by a string, and thus the strings should be sanitized first.

Loading the Gold Standard
-------------------------

Once you have loaded the Gem, you will probably start by creating a new gold standard.

    gold_standard = GoldStandard.new

Then, you can load judgements into this standard, either from a file, or manually:

    gold_standard.load_from_yaml_file "my-file.yml"
    gold_standard.add_judgement :document => doc_id, :query => query_string, :relevant => boolean, :user => John

There is a nice shortcut for the `add_judgement` method.
Both lines are essentially the same:

    gold_standard.add_judgement :document => doc_id, :query => query_string, :relevant => boolean, :user => John
    gold_standard << :document => doc_id, :query => query_string, :relevant => boolean, :user => John

Note the usage of typical Rails hashes for better readability (also, this Gem was developed to be used in a Rails webapp). Now that you have loaded the gold standard, you can do things like:

    gold_standard.contains_judgement? :document => "a document", :query => "the query"
    gold_standard.relevant? :document => "a document", :query => "the query"

Loading the Query Results
-------------------------

Now we want to create a new `QueryResultSet`. A query result set can contain more than one result, which is what we normally want. It is important that you specify the gold standard it belongs to.

    query_result_set = QueryResultSet.new :gold_standard => gold_standard

Just like the gold standard, you can read a query result set from a file:

    query_result_set.load_from_yaml_file "my-results-file.yml"

Alternatively, you can load the query results one by one. To do this, you have to create the results (either ranked or unranked) and then add documents:

    my_result = RankedQueryResult.new :query => "the query"
    my_result.add_document :document => "test_document 1", :score => 13
    my_result.add_document :document => "test_document 2", :score => 11
    my_result.add_document :document => "test_document 3", :score => 3

This result would be ranked, obviously, and contain three documents. Documents can have a score, but this is optional. You can also create an Array of documents first and add them all together:

    documents = Array.new
    documents << ResultDocument.new :id => "test_document 1", :score => 20
    documents << ResultDocument.new :id => "test_document 2", :score => 21
    my_result = RankedQueryResult.new :query => "the query", :documents => documents

The same applies to `UnrankedQueryResult`s, obviously.
The order of ranked documents is the same as the order in which they were added to the result. The `QueryResultSet` will now contain all the results. They are stored in an array called `query_results`, which you can access. So, to iterate over each result, you might want to use the following code:

    query_result_set.query_results.each_with_index do |result, index|
      # ...
    end

Or, more simply:

    for result in query_result_set.query_results
      # ...
    end

Calculating statistics
----------------------

Now to the interesting part: calculating statistics. As mentioned before, there is a conceptual difference between ranked and unranked results. Unranked results are much easier to calculate and thus take less CPU time. No matter if unranked or ranked, you can get the most important statistics by just calling the `statistics` method.

    statistics = my_result.statistics

In the simple case of an unranked result, you will receive a hash with the following information:

* `precision` - the precision of the results
* `recall` - the recall of the results
* `false_negatives` - number of not retrieved but relevant items
* `false_positives` - number of retrieved but nonrelevant items
* `true_negatives` - number of not retrieved and nonrelevant items
* `true_positives` - number of retrieved and relevant items

In the case of a ranked result, you will receive an Array that consists of _n_ such Hashes, depending on the number of documents. Each Hash will give you the information at a certain rank, e.g. the following two lines return the recall at the fourth rank:

    statistics = my_ranked_result.statistics
    statistics[3][:recall]

In addition to the information mentioned above, you can also get for each rank:

* `document` - the ID of the document that was returned at this rank
* `relevant` - whether the document was relevant or not

Calculating statistics with missing judgements
----------------------------------------------

Sometimes, you don't have judgements for all document/query pairs in the gold standard.
If this happens, the results will be cleaned up first. This means that every document in the results that doesn't appear to have a judgement will be removed temporarily. As an example, take the following results:

* A
* B
* C
* D

Our gold standard only contains judgements for A and C. The results will be cleaned up first, thus leading to:

* A
* C

With this approach, we can still provide meaningful results (for precision and recall).

Other statistics
----------------

There are several other statistics that can be calculated, for example the **F measure**. The F measure weighs precision and recall and has one parameter, either "alpha" or "beta". Get the F measure like so:

    my_result.f_measure :beta => 1

If you don't specify either alpha or beta, we will assume that beta = 1.

Another interesting measure is **Cohen's kappa**, which tells us about the inter-agreement of assessors. Get the kappa statistic like this:

    gold_standard.kappa

This will calculate the average kappa for each pairwise combination of users in the gold standard.

For ranked results one might also want to calculate an **11-point precision**. Just call the following:

    my_ranked_result.eleven_point_precision

This will return a Hash that has indices at the 11 recall levels from 0 to 1 (in steps of 0.1) and the corresponding precision at that recall level.
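For reference, the weighted F measure follows the standard formula F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); a minimal sketch (not the gem's internal implementation):

```ruby
# Weighted F-measure: beta > 1 favours recall, beta < 1 favours precision.
def f_measure(precision, recall, beta: 1.0)
  return 0.0 if precision.zero? && recall.zero?
  (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
end
```

With beta = 1 this reduces to the familiar harmonic mean of precision and recall.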
Parses a hash string of the format `'{ :a => "something" }'` into an actual Ruby hash object `{ a: "something" }`. This is useful when you have mistakenly serialized hashes and saved them in a database column or a text file, and you want to convert them back to hashes without the security issues of executing `eval(hash_string)`. By default, only the following classes are allowed to be deserialized: * TrueClass * FalseClass * NilClass * Numeric * String * Array * Hash A HashParser::BadHash exception is raised if unserializable values are present.
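To illustrate the eval-free idea on the trivial flat case (this toy is not the gem's parser, which handles nesting and all the listed types):

```ruby
# Toy parser for flat hash strings like '{ :a => "x", :b => "y" }'.
# It never executes the input, so malicious strings cannot run code.
def parse_simple_hash(str)
  inner = str.strip.sub(/\A\{/, '').sub(/\}\z/, '')
  inner.split(',').each_with_object({}) do |pair, h|
    key, value = pair.split('=>', 2)
    h[key.strip.sub(/\A:/, '').to_sym] = value.strip.gsub(/\A"|"\z/, '')
  end
end
```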
Creates aliases for class methods, instance methods, constants, delegated methods and more. Aliases can be easily searched or saved as YAML config files to load later. Custom alias types are easy to create with the DSL Alias provides. Although Alias was created with the irb user in mind, any Ruby console program can hook into Alias for creating configurable aliases.
This gem provides an IO-like object that can be used with any logging class (such as Ruby's native Logger). The object saves its input to a file and supports rotation by time/date, compression of old files, and removal of old compressed files. Its purpose is to supplement logging classes, allowing everything related to log management to be done within Ruby, without relying on external tools (such as logrotate).
== DESCRIPTION:

Provides a script and library to parse stories saved in the RSpec plain text story format and create a PDF file with printable 3"x5" index cards suitable for use in Agile planning and prioritization.

== FEATURES/PROBLEMS:

* Create a PDF with each page as a 3x5 sheet, or as 4 cards per 8.5 x 11 sheet
* Included script reads stories from STDIN and writes PDF to STDOUT
* TODO: Improve test coverage
* TODO: Improve documentation

== SYNOPSIS:

From the command line with

  stories2cards < /path/to/stories.txt

Or via Ruby

  story_text = File.read('my_story')
  pdf_content = PDF::Storycards::Writer.make_pdf(story_text, :style => :card_1up)

== REQUIREMENTS:
'currently' is a console application designed to keep a running 'concentration log' of what you are currently doing in the moment. It is ideal for capturing your thoughts at work before the weekend so you're ready to go on Monday, or for keeping you focused if you work in a distracting environment. 'currently' can save to a local file or connect to catch.com via its API.
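The local-file mode amounts to appending timestamped entries to a log; a sketch of the idea (the function name is made up):

```ruby
# Append one timestamped thought to the concentration log.
def log_currently(text, path)
  File.open(path, 'a') do |f|
    f.puts "#{Time.now.strftime('%Y-%m-%d %H:%M')}  #{text}"
  end
end
```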
Zero configuration file watcher for Objective-C that runs tests, installs Pods each time you save a file
This gem creates a set of starter files in new Rails applications via `rails g binstall`, saving time by generating the same baseline files each time: the pg and compass gems, plus haml, bcrypt and omniauth-twitter. New utilities will be added in the future.
A Cinch plugin to store and replay messages; messages are saved in a YAML file for persistence.
Simple authentication gem that asks the user for a username/password and saves them to a file if desired.
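A minimal sketch of that flow (names are illustrative; the gem's real API may differ), reading the password without terminal echo where possible:

```ruby
require 'io/console'
require 'yaml'

# Prompt for a username and password, suppressing echo when the
# input stream supports it (a real TTY does; a StringIO does not).
def ask_credentials(input: $stdin, output: $stdout)
  output.print 'Username: '
  user = input.gets.chomp
  output.print 'Password: '
  pass = input.respond_to?(:noecho) ? input.noecho(&:gets).chomp : input.gets.chomp
  output.puts
  { 'username' => user, 'password' => pass }
end

# Optionally persist the answers to a YAML file.
def save_credentials(creds, path)
  File.write(path, creds.to_yaml)
end
```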
==== Topic Maps for Rails (rtm-rails)

RTM-Rails is the Rails adapter for Ruby Topic Maps. It allows simple configuration of topic maps in config/topicmaps.yml.

==== Overview

From a developer's perspective, RTM is a schema-less database management system. The Topic Maps standard (described below) on which RTM is based provides a way of creating a self-describing schema just by using it. You can use RTM as a complementary data store to ActiveRecord in your Rails apps.

==== Quickstart - existing Rails project

  jruby script/generate topicmaps

Run the command above after installing rtm-rails. This will create:

* a minimal default configuration: config/topicmaps.yml
* a file with more examples and explanations: config/topicmaps.example.yml
* a file README.topicmaps.txt which contains more information on how to use it and where to find more information
* an initializer to load the topic maps at startup
* a rake task to migrate the topic map backends in your Rails application

==== Quickstart - new Rails project

For a new Rails application these are the complete initial steps:

  jruby -S rails my_topicmaps_app
  cd my_topicmaps_app
  jruby -S script/generate jdbc
  jruby -S script/generate topicmaps

  # The following lines are necessary because Rails does not have a template
  # for the H2 database and Ontopia does not support the Rails default SQLite3.
  sed -e "s/sqlite3/h2/" config/database.yml > config/database.yml.h2
  mv config/database.yml.h2 config/database.yml

  # Prepare the database and then check if all is OK
  jruby -S rake topicmaps:migrate_backends
  jruby -S rake topicmaps:check

==== Usage inside the application

When everything is fine, let's create our first topic:

  jruby -S script/console
  TM[:example].get!("http://example.org/my/first/topic")
  # and save the topic map
  TM[:example].commit

Access the configured topic maps anywhere in your application like this:

  TM[:example]

To retrieve all topics, you can do:

  TM[:example].topics

To retrieve a specific topic by its subject identifier:

  TM[:example].get("http://example.org/my/topic")

Commit the changes to the database permanently:

  TM[:example].commit

... or abort the transaction:

  TM[:example].abort

More information can be found at http://rtm.topicmapslab.de/

==== Minimal configuration

  default:
    topicmaps:
      example: http://rtm.topicmapslab.de/example1/

The minimal configuration creates a single topic map, named :example, with the locator given. This topic map will be persisted in the same database as your ActiveRecord connection if not specified otherwise. The default backend is OntopiaRDBMS (from the rtm-ontopia gem). A more complete configuration can be found in config/topicmaps.example.yml after running "jruby script/generate topicmaps". It also shows how to specify multiple connections to different data stores and so on.

==== Topic Maps

Topic Maps is an international industry standard (ISO13250) for interchangeably representing information about the structure of information resources used to define topics, and the relationships between topics. A set of one or more interrelated documents that employs the notation defined by this International Standard is called a topic map.
A topic map defines a multidimensional topic space - a space in which the locations are topics, and in which the distances between topics are measurable in terms of the number of intervening topics which must be visited in order to get from one topic to another, and the kinds of relationships that define the path from one topic to another, if any, through the intervening topics, if any. In addition, information objects can have properties, as well as values for those properties, assigned to them. The Topic Maps Data Model used in this implementation can be found at http://www.isotopicmaps.org/sam/sam-model/.

==== License

Copyright 2009 Topic Maps Lab, University of Leipzig. Apache License, Version 2.0
Updates to PDFKit (by Jared Pace and Relevance) implementing a Stack Overflow suggestion by Ashish to save the PDF file on the server side, plus code to save the PDF file to a repository.
Save your various database credentials to a configuration file and zip around easily
Keeps your files more secure by ensuring saved files are chmodded to 0700 by the same user who is running the process.
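The mechanism is small enough to sketch directly (function name hypothetical):

```ruby
# Write the file, then restrict permissions to rwx------ so only the
# user running the process can read it.
def save_private(path, contents)
  File.write(path, contents)
  File.chmod(0700, path)
end
```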
A simple Ruby API for extracting contact data, such as emails, addresses, and phone numbers from text documents and hyperlinks. Also has the ability to save the extracted data as JSON objects and files. For more info, see https://github.com/jweinst1/ContactDetective
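The email half of the extraction can be sketched with a simple pattern (the gem's real patterns and method names will differ):

```ruby
require 'json'

# Pull email-like tokens out of free text, then serialize as JSON.
def extract_emails(text)
  text.scan(/[\w.+-]+@[\w-]+\.[\w.]+/)
end

def emails_as_json(text)
  JSON.generate(emails: extract_emails(text))
end
```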
Change translations directly in a published site. Use by visiting http://.../locales/edit. Saving will overwrite YAML locale files.
Twitterpunch
===============

Twitterpunch is designed to work with PhotoBooth and OS X Folder Actions. When this script is called with the name of an image file, it will post the image to Twitter, along with a message randomly chosen from a list and a specified hashtag.

If you call the script with the `--stream` argument instead, it will listen for tweets to that hashtag and download them to a specified directory. If the tweet came from another user, Twitterpunch will speak it aloud.

Typically, you'll run one copy on an OSX laptop with PhotoBooth, and a separate copy on another machine (either Windows or OSX) for the viewer. You can also use a mobile device as a remote control, if you like. This will allow the user to enter a custom message for each photo that gets tweeted out, if they'd like.

Configuration
===========

Configure the program via the `~/.twitterpunch/config.yaml` YAML file. This file should look similar to the example below.

    ---
    :twitter:                    # twitter configuration
      :consumer_key: <consumer key>
      :consumer_secret: <consumer secret>
      :access_token: <access token>
      :access_token_secret: <access secret>
    :messages:                   # list of messages to attach to outgoing tweets
      - Hello there
      - I'm a posting fool
      - minimally viable product
    :hashtag: Twitterpunch       # The hashtag to post and listen to
    :handle: Twitterpunch        # The twitter username to post as
    :photodir: ~/Pictures/twitterpunch/      # Where to save downloaded images
    :logfile: ~/.twitterpunch/activity.log   # Where to save logs
    :viewer:                     # Use the built-in slideshow viewer
      :count: 5                  # How many images to have onscreen at once
    :remote:
      :timeout: 45               # How long the button should remain disabled for
      :apptitle: dslrBooth       # The photo booth application title
      :hotkey: space             # Which hotkey to send to trigger a photo

1. Generate a skeleton configuration file
    * `twitterpunch --configure`
1. Edit the configuration file as needed. You'll be prompted with the path.
* If you have your own Twitter application credentials, you're welcome to use them.
1. Authorize the application with the Twitter API.
    * `twitterpunch --authorize`

Usage
==========

### Using OS X PhotoBooth

1. Start PhotoBooth at least once to generate its library.
1. Install the Twitterpunch Folder Action
    * `twitterpunch --install`
    * It may claim that it could not be attached, fear not.
1. Profit!
    * _and by that, I mean take some shots with PhotoBooth!_

*Note*: if the folder action doesn't seem to work and photos aren't posted to Twitter, here are some troubleshooting steps to take:

1. Run Twitterpunch by hand with photos as arguments. This may help you isolate configuration or authorization issues.
    * `twitterpunch foo.jpg`
1. Correct the path in the workflow.
    * `which twitterpunch`
    * Edit the Twitterpunch folder action to include that path.

#### Using the remote web app

Configure the remote web app using the `:remote` hash in `config.yaml`. You can usually find the title of the app using `system_profiler -detailLevel full SPApplicationsDataType` and grepping for the name or path to the `.app`. In this example, the title is _dslrBooth_.

    [ben@ganymede] ~ $ system_profiler -detailLevel full SPApplicationsDataType | grep -B8 dslrBooth.app
    dslrBooth:
      Version: 2.9
      Obtained from: Identified Developer
      Last Modified: 10/14/17, 9:50 PM
      Kind: Intel
      64-Bit (Intel): Yes
      Signed by: Developer ID Application: Hope Pictures LLC (MZR5GHAQX4), Developer ID Certification Authority, Apple Root CA
      Location: /Applications/dslrBooth.app

1. Run the app with `twitterpunch --remote`
1. Browse to the app with http://{address}:8080
1. [optional] If on an iOS device, add to your homescreen
    * This will give you "app behaviour", such as full screen, and a nice icon

#### Troubleshooting

1. Make sure the folder action is installed properly
    1. Use the Finder to navigate to `~/Pictures/`
    1. Right click on the `Photo Booth Library` icon and choose _Show Package Contents_.
    1.
Right click on the `Pictures` folder and choose `Services > Folder Actions Setup`
    1. Make sure that the `Twitterpunch` action is attached.
1. Install the folder action
    1. Open the `resources` folder of this gem.
        * Likely to be found in `/Library/Ruby/Gems/{version}/gems/twitterpunch-#{version}/resources/`.
    1. Double click on the `Twitterpunch` folder action and install it.
        * It may claim that it could not be attached, fear not.

### Using something besides PhotoBooth

Configure the program you are using for your photo shoot to call Twitterpunch each time it snaps a photo. Pass the name of the new photo as a command line argument. Alternatively, you could batch them, as Twitterpunch can accept multiple files at once.

    [ben@ganymede] ~ $ twitterpunch photo.jpg [photo2.jpg photo3.jpg photo4.jpg]

You can manually install the Folder Action, or you can follow the automated install process after tweaking the workflow slightly.

1. Identify where the app stores the resulting image files.
1. Edit the Twitterpunch folder action to include that path.
1. Follow the steps above to install the Folder Action.

### Viewing the Twitter stream

Twitterpunch will run on OS X or Windows equally well. Simply configure it on the computer that will act as the Twitter display and then run it in streaming mode.

    [ben@ganymede] ~ $ twitterpunch --stream

There are two modes that Twitterpunch can operate in:

1. If a `:hashtag` is defined, then all images tweeted to the configured hashtag will be displayed in the slideshow.
1. Otherwise, Twitterpunch will stream the `:handle` Twitter user's stream and display all images either posted by that user or addressed to that user. With protected tweets, you can have rudimentary access control.

In either mode, tweets that come from any other user will also be spoken aloud. If you don't want to use the built-in slideshow viewer, you can disable it by removing the `:viewer` key from your `~/.twitterpunch/config.yaml` config file.
Twitterpunch will then simply download the tweeted images and save them into the `:photodir` directory. You can then use anything you like to view them. There are currently two decent viewing options I am aware of. * Windows background image: * Configure the Windows background to randomly cycle through photos in a directory. * Hide desktop icons. * Hide the taskbar. * Disable screensaver and power savings. * Drawbacks: You're using Windows and you have to install Ruby & RubyGems manually. * OS X screensaver: * Choose one of the sexy screensavers and configure it to show photos from the `:photodir` * Set screensaver to a super short timeout. * Disable power savings. * Drawbacks: The screensaver doesn't reload dynamically, so I have to kick it and you'll see it reloading each time a new tweet comes in. Limitations =========== * It currently requires manual setup for Folder Actions. * Rubygame is kind of a pain to set up. Contact ======= * Author: Ben Ford * Email: binford2k@gmail.com * Twitter: @binford2k * IRC (Freenode): binford2k
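The Twitterpunch configuration keys mentioned above (`:hashtag`, `:handle`, `:photodir`, `:viewer`, `:remote`) live in `~/.twitterpunch/config.yaml`. A hypothetical sketch — the values and exact schema below are illustrative, not taken from the gem:

```yaml
# Hypothetical ~/.twitterpunch/config.yaml sketch. Key names come from the
# description above; the values and nesting are guesses for illustration.
:hashtag: myparty            # show all images tweeted to this hashtag, or...
:handle: binford2k           # ...stream this user's timeline instead
:photodir: ~/Pictures/party  # where downloaded images are saved
:viewer: slideshow           # remove this key to disable the built-in viewer
:remote:
  :port: 8080                # the remote web app answers on http://{address}:8080
```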
Minimise your CSS files by compressing and combining them into one file. Reduce HTTP requests, file size, and save bandwidth. See http://github.com/nathankleyn/mini_css for more information.
Get Facebook comments and save them to txt or csv files.
A gem that creates a data file based on arrays or strings that you feed it. When you save the file, you can choose from multiple encryption methods: asymmetric, symmetric, etc.
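As a sketch of the symmetric option, here is how such a gem might encrypt a serialized array using Ruby's stdlib OpenSSL. The gem's real API is not shown in its description, so the function names and format (IV prepended to the ciphertext) are assumptions:

```ruby
require 'openssl'
require 'json'

# Illustrative symmetric encryption of a data array (AES-256-CBC).
# NOT the gem's actual API -- just the stdlib technique it likely wraps.
def encrypt_data(data, key)
  cipher = OpenSSL::Cipher.new('aes-256-cbc')
  cipher.encrypt
  cipher.key = key
  iv = cipher.random_iv
  # Prepend the IV so decryption can recover it later
  iv + cipher.update(JSON.generate(data)) + cipher.final
end

def decrypt_data(blob, key)
  cipher = OpenSSL::Cipher.new('aes-256-cbc')
  cipher.decrypt
  cipher.key = key
  cipher.iv = blob[0, 16]  # the IV was prepended to the ciphertext
  JSON.parse(cipher.update(blob[16..-1]) + cipher.final)
end

key  = OpenSSL::Random.random_bytes(32)
blob = encrypt_data(%w[alpha beta], key)
decrypt_data(blob, key)  # => ["alpha", "beta"]
```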
RSpecTimer will track the amount of time each of your tests takes to complete, and when it's done, can save the data to a YAML file.
Backup data is saved to YAML files. The reverse process loads the YAML and restores the records. Works for creates and updates, but not for deletions.
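The dump/restore cycle can be sketched with stdlib YAML. The record shape (an array of attribute hashes) and function names are assumptions for illustration, not the gem's actual format:

```ruby
require 'yaml'

# Dump records (hashes of attributes) to a YAML backup file...
def backup(records, path)
  File.write(path, YAML.dump(records))
end

# ...and load them back later. Restoring would update existing ids and
# create new ones; deleted rows are not tracked, so deletions can't be undone.
def restore(path)
  YAML.safe_load(File.read(path))
end

backup([{ 'id' => 1, 'title' => 'hello' }], 'backup.yml')
restore('backup.yml')  # => [{"id"=>1, "title"=>"hello"}]
```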
Wor-prof (Wprof) is a gem for Ruby on Rails whose only purpose is to measure a RoR app's performance through a profile of different response times. It catches every request and saves it to a database or CSV file, or sends it to an external service; it's easy to configure and use. Wprof can also measure HTTParty requests and your own methods.
Create configuration files from templates using environment variables or property lists. Save as text, YAML, or JSON.
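The template-plus-environment idea can be sketched with stdlib ERB. The gem's real interface isn't given in the description, so the function name and template variable below are illustrative:

```ruby
require 'erb'
require 'json'
require 'yaml'

# Render a config template, filling placeholders from environment variables.
# Illustrative only; the gem's actual interface may differ.
def render_config(template)
  ERB.new(template).result_with_hash(env: ENV)
end

ENV['DB_HOST'] = 'localhost'
yaml_text = render_config("db:\n  host: <%= env['DB_HOST'] %>\n")
config    = YAML.safe_load(yaml_text)  # => {"db"=>{"host"=>"localhost"}}
JSON.generate(config)                  # same config, saved as JSON instead
```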
This ActiveRecord extension lets you pass a symbol to ActiveRecord::Base#find_by_sql, which refers to a query file saved under app/queries/. Variable replacement and ERB are available, and comments and indentation are stripped.
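A rough sketch of what such an extension does under the hood — resolve the symbol to `app/queries/<name>.sql`, expand ERB, and strip comments and indentation. The file layout follows the description; the implementation details are guesses:

```ruby
require 'erb'

# Resolve a symbol to app/queries/<name>.sql, expand ERB, and strip
# SQL comments and leading indentation (mirroring the description).
def load_query(name, dir: 'app/queries', vars: {})
  sql = File.read(File.join(dir, "#{name}.sql"))
  sql = ERB.new(sql).result_with_hash(vars)
  sql.lines
     .map { |l| l.sub(/--.*$/, '').strip }  # drop "-- ..." comments, indentation
     .reject(&:empty?)
     .join(' ')
end

# Hypothetical usage inside the extension:
#   User.find_by_sql(:recent_users)  # reads app/queries/recent_users.sql
```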
This gem is intended for use in a single-file Ruby script that uses bundler/inline. You start your script once with `ruby your-script.rb`, and keepgoing will take control and re-run your script every time you save it. You can concentrate on tinkering while keepgoing will, well, keep going, providing you with a fast and effortless feedback loop for your Ruby experiments.
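The re-run-on-save loop can be sketched with a simple mtime poll. keepgoing's actual watching mechanism isn't described here, so this is just the general technique under assumed names:

```ruby
# Detect whether a file has been saved since we last looked at it.
def changed?(path, last_mtime)
  File.mtime(path) != last_mtime
end

# Poll a script's mtime and re-run it on every save.
# Illustrative only; the real gem may watch files differently.
def watch(path, interval: 0.5)
  last = File.mtime(path)
  loop do
    sleep interval
    next unless changed?(path, last)
    last = File.mtime(path)
    yield path  # e.g. system('ruby', path)
  end
end

# watch('your-script.rb') { |f| system('ruby', f) }
```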
The XBD gem allows you to create, load and save XBD files. XBD files are arbitrary, self-describing, hierarchical, binary data structures consisting of "tags", "attributes", and "sub-tags".
Turnitin Core API (TCA) provides direct API access to the core functionality provided by Turnitin. TCA supports file submission, similarity report generation, group management, and visualization of report matches via Cloud Viewer or PDF download. Below is the full flow to successfully set up an integration scope and an API Key, and make calls to TCA. Integration Scope and API Key management is done via the Admin Console UI by logging in as an admin user. For more details, go to our [developer portal documentation page](https://developers.turnitin.com/docs).

## Integration Scope and API Key Management

TCA API calls must provide an API Key for authentication, so you must first have at least one integration scope associated with at least one API Key to use TCA.

### Admin Console UI

First, log in to Admin Console UI as an *Admin* user with permission to create Integration Scopes, under a tenant that is licensed to use the TCA product.

Integration Scopes (you can create a new one, or add keys to an existing one):

* Click `Integrations` in the side bar --> `+ Add Integration` at the top of the page --> Enter a name --> `Add` button

API Keys:

* Click `Integrations` in the side bar --> `Create API Key` button next to a given Integration Scope --> Enter a name --> click the `Create and View` button
* Copy/save the key manually, or click the save-to-clipboard button to copy it (this is the only time it will be shown)

## TCA Flow

* Register a webhook
* Create a submission
* Upload a file for the submission
* Wait for the submission upload to process
  * If you registered a webhook, a callback will be sent to it when the upload is complete
  * The status of the *submission* will also update to `COMPLETE`
* Request a Similarity Report
* Wait for the similarity report to process
  * If you registered a webhook, a callback will be sent to it when the report is complete
  * The status of the *report* will also be updated to `COMPLETE`
* Request a URL with parameters to view the Similarity Report
An ActionMailer delivery mechanism to save mails as files in a directory.
Provides an ActiveRecord-like method of loading and saving static content files (from Bridgetown, Jekyll, etc.)
Store your results file in a neat file structure
Generate word lists and display them on STDOUT or save them to a file.
I propose a new file format called "XYML." The Xyml module has the following functions: * loads an XYML file as an instance of Xyml::Document, and saves an instance of Xyml::Document as an XYML file. * loads an XML subset file as an instance of Xyml::Document, and saves an instance of Xyml::Document as an XML file. * saves an instance of Xyml::Document as a JSON file. * converts an instance of Xyml::Document to an instance of REXML::Document and vice versa. The instance of REXML::Document supports a subset of the XML specifications. The Xyml_element module provides XYML element API methods. These API methods can be applied to elements in an Xyml::Document.
CanCamel allows you to manage access levels with all the power of the cancan style (preserving its semantics). This means that you can use the `can?` method anywhere you want to check the privileges of a selected user to access something in some way. See the readme file at https://github.com/JelF/can_camel/blob/master/README.md to learn more.
Edit SQL files in your Rails project with your favourite editor; View results in your browser in realtime as your files are saved.
isi.rb converts ISI Export Format to BibTeX Format. This is a Ruby script. You can use this script as a library. You can get the tagged Marked List in Web of Science by pushing the [SAVE TO FILE] button.
Custom formatter for xcpretty that saves all errors and warnings to a PMD file, so you can process them easily later.
Deploy with Rsync to your server from any local (or remote) repository. Saves you the need to install Git on your production machine and deploy all of your development files each time! Works with the new Capistrano v3! Suitable for deploying any apps, be it Ruby or Node.js. The gem was initially developed by Moll (https://github.com/moll/capistrano-rsync). SCM support was introduced by Seantan (https://github.com/seantan/capistrano-rsync).
ObStore allows you to save any object to a file along with metadata about that object. You can also set an expiry for an object and ObStore will lazily delete the data for you.
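A plain-Ruby sketch of the idea — serialize the object together with its metadata and expiry, and treat expired entries as missing on read, deleting them lazily. The class and method names below are invented for illustration; ObStore's real API may differ:

```ruby
# Store an object with metadata and an optional expiry; expired entries
# read back as nil and are deleted lazily. Names are illustrative only.
class TinyStore
  def initialize(path)
    @path = path
  end

  def save(object, metadata: {}, expires_in: nil)
    expires_at = expires_in && Time.now + expires_in
    record = { data: object, meta: metadata, expires_at: expires_at }
    File.binwrite(@path, Marshal.dump(record))
  end

  def fetch
    return nil unless File.exist?(@path)
    record = Marshal.load(File.binread(@path))
    if record[:expires_at] && Time.now > record[:expires_at]
      File.delete(@path)  # lazy deletion of expired data
      return nil
    end
    record
  end
end

store = TinyStore.new('party.dat')
store.save({ name: 'photo1' }, metadata: { taken_by: 'ben' }, expires_in: 3600)
store.fetch[:data]  # => {:name=>"photo1"}
```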
= dm-is-published

This plugin makes it very easy to add different states to your models, like 'draft' vs 'live'. By default it also adds validations of the field value.

Originally inspired by the Rails plugin +acts_as_publishable+ by <b>fr.ivolo.us</b>.

== Installation

  # Add GitHub to your RubyGems sources
  $ gem sources -a http://gems.github.com

  $ (sudo)? gem install kematzy-dm-is-published

<b>NB! Depends upon the whole DataMapper suite being installed, and has ONLY been tested with DM 0.10.0 (next branch).</b>

== Getting Started

First of all, for a better understanding of this gem, make sure you study the '<tt>dm-is-published/spec/integration/published_spec.rb</tt>' file.

----

Require +dm-is-published+ in your app.

  require 'dm-core'         # must be required first
  require 'dm-is-published'

Let's say we have an Article class, and each Article can have a current state, i.e. whether it's Live, Draft or an Obituary awaiting the death of someone famous (real or rumored).

  class Article
    include DataMapper::Resource
    property :id,    Serial
    property :title, String
    ...<snip>

    is :published
  end

Once you have your Article model, we can create our Articles just as normal:

  Article.create(:title => 'Example 1')

The instance of <tt>Article.get(1)</tt> now has the following things for free:

* a <tt>:publish_status</tt> attribute with the value <tt>'live'</tt>. Default choices are <tt>[ :live, :draft, :hidden ]</tt>.
* <tt>:is_live?</tt>, <tt>:is_draft?</tt> or <tt>:is_hidden?</tt> methods that return true/false based upon the state.
* <tt>:save_as_live</tt>, <tt>:save_as_draft</tt> or <tt>:save_as_hidden</tt> methods that convert the instance to the given state and save it.
* a <tt>:publishable?</tt> method that returns true for models where <tt>is :published</tt> has been declared, but <b>false</b> for those where it has not been declared.
The Article class also gets a bit of new functionality:

  Article.all(:draft)
    => finds all Articles with :publish_status = :draft

  Article.all(:draft, :author => @author_joe)
    => finds all Articles with :publish_status = :draft and author == Joe

=== Todo

Need to write more documentation here..

== Usage Scenarios

In a Blog/Publishing scenario you could use it like this:

  class Article
    ...<snip>...
    is :published, :live, :draft, :hidden
  end

Whereas in another scenario - like in a MenuItem model for a Restaurant - you could use it like this:

  class MenuItem
    ...<snip>...
    is :published, :on, :off  # the item is either on the menu or not
  end

== RTFM

As I said above, for a better understanding of this gem/plugin, make sure you study the '<tt>dm-is-published/spec/integration/published_spec.rb</tt>' file.

== Errors / Bugs

If something is not behaving intuitively, it is a bug, and should be reported. Report it here: http://github.com/kematzy/dm-is-published/issues

== Credits

Copyright (c) 2009-07-11 [kematzy gmail com]

Loosely based on the ActsAsPublishable plugin by [http://fr.ivolo.us/posts/acts-as-publishable]

== Licence

Released under the MIT license.
This package provides a handy command to help you recover from Vim recovery files. Vim makes .swp files as you edit your files so that if an edit session crashes you can recover the latest changes you haven't saved. Unfortunately though, there's no easy way to compare a saved file with the recovery file, so if a session does crash you are often left with many .swp files that you have to save under new names one by one and compare to the original files. vimrecover searches for swap files in the current directory, converts them to files that can be compared to the original files, lets you compare them with meld, deletes them if they are the same as the original files and generally helps you clean up the mess. vimrecover is an interactive command-line program.
Guard::Java automatically runs your unit tests when you save files (much like Eclipse's Build on Save).
File editor makes it easy to edit a file in place, with options for saving a backup
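In-place editing with a backup can be sketched with the stdlib alone. The gem's actual interface isn't given in its description, so the function name and backup suffix are a generic pattern, not its API:

```ruby
require 'fileutils'

# Edit a file in place, optionally keeping a backup copy first.
# Generic pattern; not the gem's actual API.
def edit_in_place(path, backup: '.bak')
  FileUtils.cp(path, path + backup) if backup   # save a backup alongside
  contents = File.read(path)
  File.write(path, yield(contents))             # write transformed text back
end

File.write('notes.txt', 'hello world')
edit_in_place('notes.txt') { |text| text.upcase }
File.read('notes.txt')      # => "HELLO WORLD"
File.read('notes.txt.bak')  # => "hello world"
```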