wortsammler is an environment for managing documentation. It comprises * a directory structure to organize the document sources * a manifest file to control the publication process * a tool to produce the documents Particular features of wortsammler are * various output formats * support for requirements management * generation of documents for different audiences from single sources * text snippets (markdown and xlsx) wortsammler is based on Ruby, pandoc and LaTeX
This action processes and uploads your symbol files to Dynatrace
Use SeedHelper to create rake tasks to be used in your Seeds file. Use the output formatters to provide feedback on the results of your Seeds process.
Library for processing flat files
This plugin simplifies and clarifies the multistage deploy process by reading settings from a simple YAML file that can be updated programmatically. Even if the file is only managed by humans, there are still several benefits, including centralizing stage/role configuration in one file, discouraging per-stage logic in favor of properly hooked before/after callbacks, and simplified configuration reuse.
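A minimal sketch of the idea of driving a deploy from a YAML settings file. The file name and keys below are illustrative assumptions, not this plugin's actual schema:

    # Illustrative only: read per-stage settings from a YAML file so that
    # stage/role configuration lives in data rather than per-stage code.
    require 'yaml'

    stages = YAML.load_file('config/stages.yml')
    # e.g. { "production" => { "servers" => ["app1", "app2"], "branch" => "main" } }

    stage    = ENV.fetch('STAGE', 'staging')
    settings = stages.fetch(stage) { raise "unknown stage #{stage.inspect}" }

    settings['servers'].each do |host|
      puts "would deploy branch #{settings['branch']} to #{host}"
    end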
With daemon-spawn you can start, stop and restart processes that run in the background. Processes are tracked by a simple PID file written to disk.
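A sketch of the typical subclass-and-spawn! pattern, written from memory of the gem's README; the require name, option names, and callback signatures are assumptions and should be checked against the version you install:

    # Assumed API: subclass DaemonSpawn::Base, implement start/stop, then spawn!.
    require 'daemon_spawn'   # may be 'daemon-spawn' depending on the gem version

    class TickerDaemon < DaemonSpawn::Base
      def start(args)
        loop do
          # long-running background work goes here
          sleep 60
        end
      end

      def stop
        # clean up before the daemon exits
      end
    end

    TickerDaemon.spawn!(:working_dir => Dir.pwd,
                        :pid_file    => '/tmp/ticker.pid',
                        :log_file    => '/tmp/ticker.log')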
Pannier is a Ruby tool for the processing of web assets like CSS and JavaScript files. It can be used as a standalone asset organizer or mounted within any Rack-compatible application.
Applies ERB template processing to a LaTeX file and compiles it to a PDF. Supports layouts, partials, and string escaping. Also supplies a Guard task that watches for modifications and rebuilds files automatically.
File-based queue for coordinating multiple processes.
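A conceptual sketch of how a file-based queue can coordinate processes with an advisory lock; this is a generic illustration, not this gem's API:

    # Append items to a queue file and pop them under an exclusive flock,
    # so multiple processes can enqueue and dequeue safely.
    def enqueue(path, item)
      File.open(path, 'a') do |f|
        f.flock(File::LOCK_EX)
        f.puts(item)
      end
    end

    def dequeue(path)
      File.open(path, File::RDWR | File::CREAT) do |f|
        f.flock(File::LOCK_EX)
        lines = f.readlines
        return nil if lines.empty?
        f.rewind
        f.truncate(0)
        f.write(lines.drop(1).join)
        lines.first.chomp
      end
    end

    enqueue('/tmp/jobs.queue', 'job-1')
    puts dequeue('/tmp/jobs.queue')   # => "job-1"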
A tool for creating LaTeX files from ERB templates and processing them into PDF format.
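A generic sketch of the underlying two-step idea (render an ERB template to a .tex file, then shell out to pdflatex); the template path and variables are hypothetical and this is not the gem's own API:

    require 'erb'

    # Render report.tex.erb (hypothetical) with some template variables...
    tex = ERB.new(File.read('report.tex.erb'))
             .result_with_hash(title: 'Monthly Report', rows: [1, 2, 3])

    # ...then write the LaTeX source and compile it to report.pdf.
    File.write('report.tex', tex)
    system('pdflatex', '-interaction=nonstopmode', 'report.tex') or abort 'pdflatex failed'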
The DevCreek gem enables programmers to collect and transmit metrics from their Ruby Test::Unit and RSpec test suites to a DevCreek server. Please visit the DevCreek site (http://devcreek.com/index.html) for more info. == FEATURES/PROBLEMS: Supported frameworks include Test::Unit and RSpec (> 1.10). == SYNOPSIS: The DevCreek Ruby Gem is a library that, when loaded, will automatically listen to and collect metrics from your Test::Unit/RSpec unit tests. All you have to do is load the DevCreek library in your code and give it your DevCreek account info so that it can transmit the metrics to the server. Here is the simplest example of how to load DevCreek: -------- #Load the devcreek gem require 'rubygems' require 'devcreek' #set your account info DevCreek::Core.instance().load_from_yaml("#{ENV['HOME']}/.yoursettingsfile.devcreek.yml") -------- There are two ways to provide DevCreek with your account settings. The first (as shown above) is to point DevCreek to a settings file. The 'enabled' attribute tells devcreek whether or not it should actually transmit the metrics that it collects. The yaml file would look like this: -------- user: your_devcreek_username password: your_devcreek_password project: your_devcreek_project enabled: true -------- The other way to provide DevCreek with your settings is via a hash. So, instead of loading a yaml file, you could do this: -------- #Load the devcreek gem require 'rubygems' require 'devcreek' #set your account info DevCreek::Core.instance().load( :user => 'your_devcreek_username', :password => 'your_devcreek_password', :project => 'your_devcreek_project', :enabled => true ) -------- The first method is preferable because it allows you to keep your account settings outside of your project (and therefore your source control tool). If you only have one test file, you can place the code to load devcreek in the test file and you're done. However, most projects will have many test files. In this case, you need to make sure that the Ruby interpreter loads devcreek before running the test classes. This can be done via the Ruby '-r' option. For example, assuming your code to load devcreek is in a file called foo.rb, you would run your tests from the command line like this: ruby -r foo.rb test/test_* If you run your tests from a Rakefile, then you need to tell rake to include the -r option when it runs the tests (rake runs its tests in a separate Ruby process). You can do this pretty easily in your Rakefile, like so: -------- require 'rake/testtask' Rake::TestTask.new('all_tests') do |t| t.ruby_opts = ['-r foo.rb'] t.test_files = ['test/test_*.rb'] end --------
A command line tool to tag MP4 TV shows with metadata pulled from TheTVDB.com. It uses AtomicParsley to process the file.
Manage credit card processing with ease via a command-line app with file input
The private iOS iPhone and iPad frameworks are full of wonderful goodies that are just waiting to be discovered. The fantastic class-dump tool lets you peek into those frameworks and generates the header files that you need to use them in your project. Private Dumper greatly simplifies the process of dumping the header files of all private frameworks for a given SDK version.
Inspect and process video or audio files.
Extract Curves is a simplistic GTK Ruby-based application which can convert the raster image file resulting from a geometric-trace-producing process's interaction with the characteristic of motion of another (interesting) process into a list of rectangular coordinates (in the raster image's coordinate system) representing the inferred characteristic of motion of an image blob. Blob recognition is done by color: * by maximum pixel neighbor-to-neighbor difference * by maximum difference from the blob's average color * by maximum difference from a pixel neighborhood's average color (using RGB or HSV). Use other software to pre-process (e.g. enhance contrast, or even reduce to grayscale), but Extract Curves's skeletonization is done based on the hypothesis of a recognized image blob, as opposed to a collection of pixels. Output is human-readable (tab-separated).
Extract Curves is a simplistic GTK Ruby-based application which can convert the raster image file resulting from a geometric-trace-producing process's interaction with the characteristic of motion of another (interesting) process into a list of rectangular coordinates (in the raster image's coordinate system) representing the inferred characteristic of motion of the midline of an image blob. Blob recognition is done by color: * by maximum pixel neighbor-to-neighbor difference * by maximum difference from the blob's average color * by maximum difference from a pixel neighborhood's average color (using RGB or HSV). Use other software to pre-process (e.g. enhance contrast, or even reduce to grayscale), but Extract Curves's skeletonization is done based on the hypothesis of a recognized image blob, as opposed to a collection of pixels. Output is human-readable (tab-separated).
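A tiny illustration of one of the criteria listed above (maximum difference from the blob's average color); the pixel data and threshold are made up, and this is not the application's actual code:

    # Keep the pixels whose Euclidean RGB distance from the average colour is
    # below a threshold; those pixels are treated as belonging to the blob.
    def distance(a, b)
      Math.sqrt(a.zip(b).sum { |x, y| (x - y)**2 })
    end

    pixels    = [[255, 0, 0], [250, 10, 5], [0, 0, 255]]   # RGB triples (toy data)
    average   = pixels.transpose.map { |channel| channel.sum / pixels.size.to_f }
    threshold = 60.0

    blob = pixels.select { |px| distance(px, average) <= threshold }
    puts "#{blob.size} of #{pixels.size} pixels assigned to the blob"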
Going through many job listings and finding the right one may be a time-consuming process. That's why this tool has been built. It allows you to automate the process, retrieve the necessary data and store it in a CSV file in just a few minutes. The main focus is to inform the user about the required location (time zone).
Configurable S3 or file system image storage and processing HTTP API server. It uses HTTP Thumbnailer as the image processing backend.
$Id: README.txt 204 2010-11-30 02:20:04Z pwilkins $ sm-transcript reads results of SLS processing and produces transcripts for the SpokenMedia browser. For each file in the source folder whose extension matches the source type, a file of destination type is created in the destination folder. All of these parameters have default values. Note: Examples of the commands you enter in the terminal are for *nix. The command prompt in the examples is: felix$ <command line> If you are a Windows user, make the usual adjustments. Requirements: sm-transcript is written in Ruby and packaged as a RubyGem. Since Ruby is not a compiled language, you will need to have Ruby installed on your machine to run sm-transcript. You can determine if Ruby is installed by typing "ruby -v" at a terminal prompt. It should return the version of Ruby that is installed. If Ruby is not installed on your machine, navigate to http://www.ruby-lang.org/ and follow the installation instructions. sm-transcript was developed using Ruby 1.8. Other Ruby versions have not been tested as of this release. Installation: You can get sm-transcript as either a RubyGem or as source from svn. The preferred way to install this package is as a Rubygem. You can download and install the gem with this command: felix$ sudo gem install [--verbose] sm-transcript This command downloads the most recent version of the gem from rubygems.org and makes it active. Previous versions of the gem remain installed, but are deactivated. You must use "sudo" to properly install the gem. If you execute "gem install" (omitting the "sudo") the gem is installed in your home gem repository and it isn't in your path without additional configuration. Note: You need sudo privileges to run the command as written. If you can't sudo, then you can install it locally and will need some additional configuration. Contact me (or your local Ruby wizard) for assistance. The executable is now in your path. You can cleanly uninstall the gem with this command: felix$ sudo gem uninstall sm-transcript If you have access to our svn repository, you are welcome to check out the code. Be warned that the trunk tip is not necessarily stable. It changes frequently as enhancements (and bug fixes) are added. (note that the 'smb_transcript' in the command line below is not a typo.) svn co svn+ssh://svn.mit.edu/oeit-tsa/SMB/smb_transcript/trunk sm_transcript build the gem by running this command from the directory you installed the source. This is what it looks like on my machine: felix$ rake gem The gem will be built and put in ./pkg You can now use the gem installation instructions above. Using the App: Run with no command line parameters, the app reads *.wrd files out of ./results and writes *.t1.html files to ./transcripts. These directories are relative to where sm_transcript is called. Note: destination files are overwritten without a warning prompt. If you want to preserve an existing output file, rename it before running the app again. For example, run the app by navigating to the bin folder and enter projects/sm_transcript/bin felix$ sm_transcript This command run from this folder will read *.wrd files from bin/results and write *-t1.html to bin/transcripts. 
Usage: sm_transcript [options] --srcdir PATH Read files from this folder (Default: ./results) --destdir PATH Write files to this folder (Default: ./transcripts) --srctype wrd | seg | txt | ttml | srt Kind of file to process (Default: wrd) --desttype html | ttml | datajs | json Kind of file to output (Default: html) -h, --help Show this message There is a serious gotcha in specifying the srctype parameter: it must match the case of the file extension that you're processing. This means that if the srt files that you are processing have the extension .SRT, then you must specify the srctype as "SRT". Pretty lame, I know. I will update the gem with a fix shortly. My apologies until then. Troubleshooting: sm-transcript requires additional gems to operate. The RubyGem installation should install dependencies automatically, but when it doesn't and you get an error that includes ... no such file to load -- builder (LoadError) in the first few lines when you run sm-transcript, the problem is a missing dependent gem. (the error above indicates that the Builder gem is missing.) Try installing the missing gem. For the error above, the command looks like this on my computer: felix$ sudo gem install builder See "Required Gems" below for more information. A warning message such as: "WARNING: Nokogiri was built against LibXML version 2.7.6, but has dynamically loaded 2.7.7" may be safely ignored. If you continue to have trouble, feel free to contact me. Upgrading: You can easily upgrade by simply executing the same command you used to install the gem. Running install again will add the newer version and make it active. By default the most recent version is used, but older versions are still available, simply inactive. If you are using svn, you should already know what to do. Required Gems: builder - create structured data, such as XML extensions - added for the 'require_relative' command. (To get this command in Ruby 1.8 you need to install this gem, for Ruby 1.9 the command is already part of the core.) htmlentities - html parsing json - create JSON structured data nokogiri - xml parsing library optparse - option parsing of command line ostruct - open data structures ppcommand - pp is a pretty printer. It is used only for debugging rake - make for Ruby rubygems - support for gems (shouldn't be needed for Ruby 1.9) shoulda - enhancement for Test::Unit This command installs gems on OSX and Linux: felix$ sudo gem install <gem name> I recommend running the following command to update to the latest version of RubyGems before loading new gems. felix$ sudo gem update --system Unit Tests: You may run all unit tests by navigating to the test folder and running rake with no parameters (the default rake task runs all tests). On my computer, it looks like this: projects/sm_transcript/test felix$ rake Release Notes: Initial Version - runs under Ruby 1.8.x. version 0.0.4 - fixes bug when processing .WRD files with CRLF line endings. version 0.0.5 - removed due to posting error version 0.0.6 - added srctype of ttml and desttype of json, fixed bug where beginning time of word was actually for previous word. version 0.0.7 - added srt as srctype version 0.0.8 - fixed bug that dropped last phrase from transcripts version 1.0.0 - declared this version 1.0.0 to conform more closely with gem numbering conventions. All tests run successfully. To Do: - specify individual files for processing rather than folders - fix bug in srt processing: can't read Creole srt content. 
- allow the user to modify the "t1" file extension for additional languages of the same transcript. - update code to run under Ruby 1.9
This gem will help you deploy the application to multiple servers in parallel. It takes the original mina deploy.rb file, changes application_name and domain, and starts the deployment process.
Bunnicula is a simple AMQP relay implemented as a Ruby daemon (a la daemon-kit). Similar in intent to shovel, Bunnicula is intended to enable the common messaging scenario where services and applications publish messages to an AMQP broker on the local LAN for speed and reliability, and the messages are then relayed to a remote AMQP instance by a relay process that isn't as irritable as message producers tend to be when it comes to network speed and reliability. Bunnicula can be configured via a configuration file (a Ruby DSL) or, for most common configurations, through command line arguments.
Inventory-Rake-Tasks-YARD Inventory-Rake-Tasks-YARD provides Rake¹ tasks for YARD² using your Inventory³. ¹ See http://rake.rubyforge.org/ ² See http://yardoc.org/ ³ See http://disu.se/software/inventory/ § Installation Install Inventory-Rake-Tasks-YARD with % gem install inventory-rake-tasks-yard § Usage Include the following code in your ‹Rakefile› (assuming that you've already set up Inventory-Rake¹): Inventory::Rake::Tasks.unless_installing_dependencies do require 'inventory-rake-tasks-yard-1.0' Inventory::Rake::Tasks::YARD.new end This'll define the following tasks: = .yardopts (file). = Create .yardopts file; depends on the file defining this task and Rakefile. = html. = Generate documentation in HTML format for all lib files in the inventory; depends on .yardopts file. ‹Inventory::Rake::Tasks::YARD› takes a couple of options, but the ones you might want to adjust are = :options. = The options to pass to YARD; will be passed to `Shellwords.shelljoin`. = :globals. = The globals to pass to YARD. = :files. = The files to process; mainly used if you want to add additional files to process beyond the lib files in the inventory. The options passed to YARD will be augmented with any options you list in a file named ‹.yardopts.task›, where ‹task› is the name of the Rake task invoking YARD, for example, ‹.yardopts.html› for the default HTML-generating task. You can use this to add options that are local to your installation and should thus not be listed in the Rakefile itself. See the {API documentation}² for more information. ¹ See http://disu.se/software/inventory-rake/ ² See http://disu.se/software/inventory-rake-tasks-yard/api/Inventory/Rake/Tasks/YARD/ § Financing Currently, most of my time is spent at my day job and in my rather busy private life. Please motivate me to spend time on this piece of software by donating some of your money to this project. Yeah, I realize that requesting money to develop software is a bit, well, capitalistic of me. But please realize that I live in a capitalistic society and I need money to have other people give me the things that I need to continue living under the rules of said society. So, if you feel that this piece of software has helped you out enough to warrant a reward, please PayPal a donation to now@disu.se¹. Thanks! Your support won't go unnoticed! ¹ Send a donation: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=now%40disu%2ese&item_name=Inventory-Rake-Tasks-YARD § Reporting Bugs Please report any bugs that you encounter to the {issue tracker}¹. ¹ See https://github.com/now/inventory-rake-tasks-yard/issues § Authors Nikolai Weibull wrote the code, the tests, the manual pages, and this README. § Licensing Inventory-Rake-Tasks-YARD is free software: you may redistribute it and/or modify it under the terms of the {GNU Lesser General Public License, version 3}¹ or later², as published by the {Free Software Foundation}³. ¹ See http://disu.se/licenses/lgpl-3.0/ ² See http://gnu.org/licenses/ ³ See http://fsf.org/
erbextensions is a library that extends the standard ERB library. One key piece of functionality that it provides is a method for programmatically processing a named ERB file, which is very useful when including one ERB document into another ERB document.
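A sketch of the concept using only the standard ERB class; the render_erb helper below is hypothetical and not erbextensions' actual method:

    require 'erb'

    # Hypothetical helper: render a named ERB file with some local variables,
    # so one template can embed the output of another.
    def render_erb(path, locals = {})
      ERB.new(File.read(path)).result_with_hash(locals)
    end

    # inner.erb contains:  Hello, <%= name %>!
    # outer.erb contains:  Greeting: <%= render_erb('inner.erb', name: name) %>
    puts render_erb('outer.erb', name: 'World')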
Ame Ame provides a simple command-line interface API for Ruby¹. It can be used to provide both simple interfaces like that of ‹rm›² and complex ones like that of ‹git›³. It uses Ruby's own classes, methods, and argument lists to provide an interface that is both simple to use from the command-line side and from the Ruby side. The provided command-line interface is flexible and follows common standards for command-line processing. ¹ See http://ruby-lang.org/ ² See http://pubs.opengroup.org/onlinepubs/9699919799/utilities/rm.html ³ See http://git-scm.com/docs/ § Usage Let's begin by looking at two examples, one where we mimic the POSIX¹ command-line interface to the ‹rm› command. Looking at the entry² in the standard, ‹rm› takes the following options: = -f. = Do not prompt for confirmation. = -i. = Prompt for confirmation. = -R. = Remove file hierarchies. = -r. = Equivalent to /-R/. It also takes the following arguments: = FILE. = A pathname or directory entry to be removed. And actually allows one or more of these /FILE/ arguments to be given. We also note that the ‹rm› command is described as a command to "remove directory entries". ¹ See http://pubs.opengroup.org/onlinepubs/9699919799/utilities/contents.html ² See http://pubs.opengroup.org/onlinepubs/9699919799/utilities/rm.html Let's turn this specification into one using Ame's API. We begin by adding a flag for each of the options listed above: class Rm < Ame::Root flag 'f', '', false, 'Do not prompt for confirmation' flag 'i', '', nil, 'Prompt for confirmation' do |options| options['f'] = false end flag 'R', '', false, 'Remove file hierarchies' flag 'r', '', nil, 'Equivalent to -R' do |options| options['R'] = true end A flag¹ is a boolean option that doesn't take an argument. Each flag gets a short and long name, where an empty name means that there's no corresponding short or long name for the flag, a default value (true, false, or nil), and a description of what the flag does. Each flag can also optionally take a block that can do further processing. In this case we use this block to modify the Hash that maps option names to their values, which is passed to the block, to set other flags' values than the one that the block is associated with. As these flags ("i" and "r") aren't themselves of interest, their default values have been set to nil, which means that they won't be included in the Hash that maps option names to their values when passed to the method. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#flag-class-method There are quite a few other kinds of options besides flags that can be defined using Ame, but flags are all that are required for this example. We'll get to the other kinds in later examples. Next we add a "splus" argument. splus 'FILE', String, 'File to remove' A splus¹ argument is like a Ruby "splat", that is, an Array argument at the end of the argument list to a method preceded by a star, except that a splus requires at least one argument. A splus argument gets a name for the argument (‹FILE›), the type of argument it represents (String), and a description. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#splus-class-method Then we add a description of the command (method) itself: description 'Remove directory entries' Descriptions¹ will be used in help output to assist the user in using the command. 
¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#description-class-method Finally, we add the Ruby method that'll implement the command (all preceding code included here for completeness): class Rm < Ame::Root version '1.0.0' flag 'f', '', false, 'Do not prompt for confirmation' flag 'i', '', nil, 'Prompt for confirmation' do |options| options['f'] = false end flag 'R', '', false, 'Remove file hierarchies' flag 'r', '', nil, 'Equivalent to -R' do |options| options['R'] = true end splus 'FILE', String, 'File to remove' description 'Remove directory entries' def rm(files, options = {}) require 'fileutils' FileUtils.send options['R'] ? :rm_r : :rm, files, :force => options['f'] end end Actually, another bit of code was also added, namely version '1.0.0' This sets the version¹ String of the command. This information is used when the command is invoked with the ‹--version› flag. This flag is automatically added, so you don't need to add it yourself. Another flag, ‹--help›, is also added automatically. When given, this flag'll make Ame output usage information of the command. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#version-class-method To actually run the command, all you need to do is invoke Rm.process This'll invoke the command using the command-line arguments stored in ‹ARGV›, but you can also specify other ones if you want to: Rm.process 'rm', %w[-r /tmp/*] The first argument to #process¹ is the name of the method to invoke, which defaults to ‹File.basename($0)›, and the second argument is an Array of Strings that should be processed as command-line arguments passed to the command. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#process-class-method If you'd store the complete ‹Rm› class defined above in a file called ‹rm› and add ‹#! /usr/bin/ruby -w› at the beginning and ‹Rm.process› at the end, you'd have a fully functional ‹rm› command (after making it executable). Let's see it in action: % rm --help Usage: rm [OPTIONS]... FILE... Remove directory entries Arguments: FILE... File to remove Options: -R Remove file hierarchies -f Do not prompt for confirmation --help Display help for this method -i Prompt for confirmation -r Equivalent to -R --version Display version information % rm --version rm 1.0.0 Some commands are more complex than ‹rm›. For example, ‹git›¹ has a rather complex command-line interface. We won't mimic it all here, but let's introduce the rest of the Ame API using a fake ‹git› clone as an example. ¹ See http://git-scm.com/docs/ ‹Git› uses sub-commands to achieve most things. Implementing sub-commands with Ame is done using a "dispatch". We'll discuss dispatches in more detail later, but suffice it to say that a dispatch delegates processing to a child class that'll handle the sub-command in question. We begin by defining our main ‹git› command using a class called ‹Git› under the ‹Git::CLI› namespace: module Git end class Git::CLI < Ame::Root version '1.0.0' class Git < Ame::Class description 'The stupid content tracker' def initialize; end We're setting things up to use the ‹Git› class as a dispatch in the ‹Git::CLI› class. The description on the ‹initialize› method will be used as a description of the ‹git› dispatch command itself. 
Next, let's add the ‹format-patch›¹ sub-command: description 'Prepare patches for e-mail submission' flag ?n, 'numbered', false, 'Name output in [PATCH n/m] format' flag ?N, 'no-numbered', nil, 'Name output in [PATCH] format' do |options| options['numbered'] = false end toggle ?s, 'signoff', false, 'Add Signed-off-by: line to the commit message' switch '', 'thread', 'STYLE', nil, Ame::Types::Enumeration[:shallow, :deep], 'Controls addition of In-Reply-To and References headers' flag '', 'no-thread', nil, 'Disables addition of In-Reply-To and Reference headers' do |options, _| options.delete 'thread' end option '', 'start-number', 'N', 1, 'Start numbering the patches at N instead of 1' multioption '', 'to', 'ADDRESS', String, 'Add a To: header to the email headers' optional 'SINCE', 'N/A', 'Generate patches for commits after SINCE' def format_patch(since = '', options = {}) p since, options end ¹ See http://git-scm.com/docs/git-format-patch/ We're using quite a few new Ame commands here. Let's look at each in turn: toggle ?s, 'signoff', false, 'Add Signed-off-by: line to the commit message' A "toggle"¹ is a flag that also has an inverse. Beyond the flags "s" and "signoff", the toggle also defines "no-signoff", which will set "signoff" to false. This is useful if you want to support configuration files that set "signoff"'s default to true, but still allow it to be overridden on the command line. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#toggle-class-method When using the short form of a toggle (and flag and switch), multiple ones may be juxtaposed after the initial one. For example, ‹-sn› is equivalent to ‹-s -n› to ‹git format-patch›. switch '', 'thread', 'STYLE', nil, Ame::Types::Enumeration[:shallow, :deep], 'Controls addition of In-Reply-To and References headers' A "switch"¹ is an option that takes an optional argument. This allows you to have separate defaults for when the switch isn't present on the command line and for when it's given without an argument. The third argument to a switch is the name of the argument. We're also introducing a new concept here in ‹Ame::Types::Enumeration›. An enumeration² allows you to limit the allowed input to a set of Symbols. An enumeration also has a default value, given as the first item to its constructor (which is aliased as ‹.[]›). In this case, the "thread" switch defaults to nil, but, when given, will default to ‹:shallow› if no argument is given. If an argument is given it must be either "shallow" or "deep". A switch isn't required to take an enumeration as its argument default and can take any kind of default value for its argument that Ame knows how to handle. We'll look at this in more detail later, but know that the type of the default value will be used to inform Ame how to parse a command-line argument into a Ruby value. An argument to a switch must be given, in this case, as ‹--thread=deep› on the command line. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#switch-class-method ² See http://disu.se/software/ame-1.0/api/user/Ame/Types/Enumeration/ option '', 'start-number', 'N', 1, 'Start numbering the patches at N instead of 1' An "option"¹ is an option that takes an argument. The argument must always be present and may be given, in this case, as ‹--start-number=2› or ‹--start-number 2› on the command line. 
For a short-form option, anything that follows the option is seen as an argument, so assuming that "start-number" also had a short name of "S", ‹-S2› would be equivalent to ‹-S 2›, which would be equivalent to ‹--start-number 2›. Note that ‹-snS2› would still work as expected. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#option-class-method multioption '', 'to', 'ADDRESS', String, 'Add a To: header to the email headers' A "multioption"¹ is an option that takes an argument and may be repeated any number of times. Each argument will be added to an Array stored in the Hash that maps option names to their values. Instead of taking a default argument, it takes a type for the argument (String, in this case). Again, types are used to inform Ame how to parse command-line arguments into Ruby values. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#multioption-class-method optional 'SINCE', 'N/A', 'Generate patches for commits after SINCE' An "optional"¹ argument is an argument that isn't required. If it's not present on the command line it'll get its default value (the String ‹'N/A'›, in this case). ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#optional-class-method We've now covered all kinds of options and one new kind of argument. There are three more types of argument (one that we've already seen and two new) that we'll look into now: "argument", "splat", and "splus". description 'Annotate file lines with commit information' argument 'FILE', String, 'File to annotate' def annotate(file) p file end An "argument"¹ is an argument that's required. If it's not present on the command line, an error will be raised (and by default reported to the terminal). As it's required, it doesn't take a default, but rather a type. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#argument-class-method description 'Add file contents to the index' splat 'PATHSPEC', String, 'Files to add content from' def add(paths) p paths end A "splat"¹ is an argument that's not required, but may be given any number of times. The type of a splat is the type of one argument and the type of a splat as a whole is an Array of values of that type. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#splat-class-method description 'Display gitattributes information' splus 'PATHNAME', String, 'Files to list attributes of' def check_attr(paths) p paths end A "splus"¹ is an argument that's required, but may also be given any number of times. The type of a splus is the type of one argument and the type of a splus as a whole is an Array of values of that type. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#splus-class-method Now that we've seen all kinds of options and arguments, let's look at an additional tool at our disposal, the dispatch¹. class Remote < Ame::Class description 'Manage set of remote repositories' def initialize; end description 'Shows a list of existing remotes' flag 'v', 'verbose', false, 'Show remote URL after name' def list(options = {}) p options end description 'Adds a remote named NAME for the repository at URL' argument 'name', String, 'Name of the remote to add' argument 'url', String, 'URL to the repository of the remote to add' def add(name, url) p name, url end end ¹ See http://disu.se/software/ame-1.0/api/user/Ame/Class#dispatch-class-method Here we're defining a child class to Git::CLI::Git called "Remote" that doesn't introduce anything new. 
Then we set up the dispatch: dispatch Remote, :default => 'list' This adds a method called "remote" to Git::CLI::Git that will dispatch processing of the command line to an instance of the Remote class when ‹git remote› is seen on the command line. The "remote" method expects an argument that'll be used to decide what sub-command to execute. Here we've specified that in the absence of such an argument, the "list" method should be invoked. We add the same kind of dispatch to Git under Git::CLI: dispatch Git and then we're done. Here's all the previous code in its entirety: module Git end class Git::CLI < Ame::Root version '1.0.0' class Git < Ame::Class description 'The stupid content tracker' def initialize; end description 'Prepare patches for e-mail submission' flag ?n, 'numbered', false, 'Name output in [PATCH n/m] format' flag ?N, 'no-numbered', nil, 'Name output in [PATCH] format' do |options| options['numbered'] = false end toggle ?s, 'signoff', false, 'Add Signed-off-by: line to the commit message' switch '', 'thread', 'STYLE', nil, Ame::Types::Enumeration[:shallow, :deep], 'Controls addition of In-Reply-To and References headers' flag '', 'no-thread', nil, 'Disables addition of In-Reply-To and Reference headers' do |options, _| options.delete 'thread' end option '', 'start-number', 'N', 1, 'Start numbering the patches at N instead of 1' multioption '', 'to', 'ADDRESS', String, 'Add a To: header to the email headers' optional 'SINCE', 'N/A', 'Generate patches for commits after SINCE' def format_patch(since = '', options = {}) p since, options end description 'Annotate file lines with commit information' argument 'FILE', String, 'File to annotate' def annotate(file) p file end description 'Add file contents to the index' splat 'PATHSPEC', String, 'Files to add content from' def add(paths) p paths end description 'Display gitattributes information' splus 'PATHNAME', String, 'Files to list attributes of' def check_attr(paths) p paths end class Remote < Ame::Class description 'Manage set of remote repositories' def initialize; end description 'Shows a list of existing remotes' flag 'v', 'verbose', false, 'Show remote URL after name' def list(options = {}) p options end description 'Adds a remote named NAME for the repository at URL' argument 'name', String, 'Name of the remote to add' argument 'url', String, 'URL to the repository of the remote to add' def add(name, url) p name, url end end dispatch Remote, :default => 'list' end dispatch Git end If we put this code in a file called "git" and add ‹#! /usr/bin/ruby -w› at the beginning and ‹Git::CLI.process› at the end, you'll have a very incomplete git command-line interface on your hands. Let's look at what some of its ‹--help› output looks like: % git --help Usage: git [OPTIONS]... METHOD [ARGUMENTS]... The stupid content tracker Arguments: METHOD Method to run [ARGUMENTS]... Arguments to pass to METHOD Options: --help Display help for this method --version Display version information Methods: add Add file contents to the index annotate Annotate file lines with commit information check-attr Display gitattributes information format-patch Prepare patches for e-mail submission remote Manage set of remote repositories % git format-patch --help Usage: git format-patch [OPTIONS]... 
[SINCE] Prepare patches for e-mail submission Arguments: [SINCE=N/A] Generate patches for commits after SINCE Options: -N, --no-numbered Name output in [PATCH] format --help Display help for this method -n, --numbered Name output in [PATCH n/m] format --no-thread Disables addition of In-Reply-To and Reference headers -s, --signoff Add Signed-off-by: line to the commit message --start-number=N Start numbering the patches at N instead of 1 --thread[=STYLE] Controls addition of In-Reply-To and References headers --to=ADDRESS* Add a To: header to the email headers % git remote --help Usage: git remote [OPTIONS]... [METHOD] [ARGUMENTS]... Manage set of remote repositories Arguments: [METHOD=list] Method to run [ARGUMENTS]... Arguments to pass to METHOD Options: --help Display help for this method Methods: add Adds a remote named NAME for the repository at URL list Shows a list of existing remotes § API The previous section gave an introduction to the whole user API in an informal and introductory way. For an in-depth reference to the user API, see the {user API documentation}¹. ¹ See http://disu.se/software/ame-1.0/api/user/Ame/ If you want to extend the API or use it in some way other than as a command-line-interface writer, see the {developer API documentation}¹. ¹ See http://disu.se/software/ame-1.0/api/developer/Ame/ § Financing Currently, most of my time is spent at my day job and in my rather busy private life. Please motivate me to spend time on this piece of software by donating some of your money to this project. Yeah, I realize that requesting money to develop software is a bit, well, capitalistic of me. But please realize that I live in a capitalistic society and I need money to have other people give me the things that I need to continue living under the rules of said society. So, if you feel that this piece of software has helped you out enough to warrant a reward, please PayPal a donation to now@disu.se¹. Thanks! Your support won't go unnoticed! ¹ Send a donation: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=now@disu.se&item_name=Ame § Reporting Bugs Please report any bugs that you encounter to the {issue tracker}¹. ¹ See https://github.com/now/ame/issues § Authors Nikolai Weibull wrote the code, the tests, the documentation, and this README. § Licensing Ame is free software: you may redistribute it and/or modify it under the terms of the {GNU Lesser General Public License, version 3}¹ or later², as published by the {Free Software Foundation}³. ¹ See http://disu.se/licenses/lgpl-3.0/ ² See http://gnu.org/licenses/ ³ See http://fsf.org/
jekyll-i18n is a Jekyll plugin that introduces a 't' tag/filter that translates phrases based on translation files found in _i18n/*.yml. It also introduces new mechanisms in the build process for generating language-specific posts and pages based on the language specified in the filename.
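A rough sketch of how such a translation tag can be wired into Jekyll via Liquid; the class name, lookup logic, and fixed 'en' locale below are illustrative assumptions, not jekyll-i18n's actual implementation:

    require 'yaml'
    require 'liquid'

    # Hypothetical tag: {% t greeting %} looks up the key "greeting" in a
    # per-language YAML hash loaded from _i18n/en.yml.
    class TranslateTag < Liquid::Tag
      TRANSLATIONS = File.exist?('_i18n/en.yml') ? YAML.load_file('_i18n/en.yml') : {}

      def initialize(tag_name, markup, tokens)
        super
        @key = markup.strip
      end

      def render(_context)
        TRANSLATIONS.fetch(@key, @key)   # fall back to the key itself
      end
    end

    Liquid::Template.register_tag('t', TranslateTag)
    puts Liquid::Template.parse('{% t greeting %}').render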
DSL for reading and writing temporary files in an object-oriented way (in the system tmp directory). Manage tmp files the super easy way! This DSL gives you a simple way to run commands and create variables on the file system, by default in the system's (cross-platform) tmp folder. Sometimes it can be useful for multi-processing (forked processes), but the main goal is not shared-memory management! The goal is to provide a DSL for easily creating tmp files on the filesystem in an object-oriented way (real objects and not simply strings). By default it is always IO work and not memory; everything you save with this will be an IO operation and not memory
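A plain-Ruby illustration of the underlying idea using only the standard library (not this gem's DSL):

    require 'tmpdir'

    # The value lives in a file under the system tmp directory (IO, not shared
    # memory), so a forked child process can read it too.
    path = File.join(Dir.tmpdir, 'example_value.txt')
    File.write(path, 'hello from disk')

    if (pid = fork)            # fork is available on Unix-like systems
      Process.wait(pid)
    else
      puts File.read(path)     # the child sees the same file on the filesystem
      exit!
    end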
file-processing-job allows you to distribute the processing load of large files to clients across the network. It is a thin wrapper on top of the EventMachine library.
LayerCake is a simple gem that allows you to specify more than one cache store in Rails. It is built on the idea that the memory store is the most efficient store, with no network or file overhead, but it does not serve multi-process or multi-server architectures, hence a fallback like the file store or memcached store is necessary.
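A conceptual sketch of the layering idea (check the fast in-memory store first, fall back to a shared store on a miss); this is illustrative only, not LayerCake's API:

    # Two-layer cache: reads hit the in-memory Hash first; misses fall through
    # to the shared (file/memcached-like) layer or to the supplied block.
    class LayeredCache
      def initialize(primary, fallback)
        @primary, @fallback = primary, fallback
      end

      def fetch(key)
        return @primary[key] if @primary.key?(key)
        value = @fallback.key?(key) ? @fallback[key] : (@fallback[key] = yield)
        @primary[key] = value
      end
    end

    memory = {}
    shared = {}   # stands in for a file store or memcached
    cache  = LayeredCache.new(memory, shared)

    cache.fetch('answer') { 42 }                                # computed once
    puts cache.fetch('answer') { raise 'served from memory' }   # => 42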
Geoptima is a suite of applications for measuring and locating mobile/cellular subscriber experience on GPS-enabled smartphones. It is produced by AmanziTel AB in Helsingborg, Sweden, and supports many phone manufacturers, with free downloads from the various app stores, markets or marketplaces. This Ruby library is capable of reading the JSON format files produced by these phones and reformatting them as CSV, GPX and PNG for further analysis in Excel. This is a simple and independent way of analysing the data, when compared to the full-featured analysis applications and servers available from AmanziTel. If you want to analyse a limited amount of data in Excel, or with Ruby, then this GEM might be for you. If you want to analyse large amounts of data, from many subscribers, or over long periods of time, then rather consider the NetView and Customer IQ applications from AmanziTel at www.amanzitel.com. Current features available in the library and the show_geoptima command: * Import one or many JSON files * Organize data by device id (IMEI) into datasets * Split by event type * Time ordering and time correlation (associate data from one event to another): ** Add GPS locations to other events (time window and interpolation algorithms) ** Add signal strength, battery level, etc. to other events * Export event tables to CSV format for further processing in Excel * Make and export GPS traces in GPX and PNG format for simple map reports The amount of data possible to process is limited by memory, since all data is imported into ruby data structures for processing. If you need to process larger amounts of data, you will need a database-driven approach, like that provided by AmanziTel's NetView and Customer IQ solutions. This Ruby gem is actually used by parts of the data pre-processing chain of 'Customer IQ', but it is not used by the main database and statistics engine that generates the reports.
This gem creates folders and files that are meant for the views of your Rails web app so as to provide an easier starting point for your development process.
The Mutagem library provides file-based mutexes for recursion protection and classes for threading of external processes with support for output and exit status capturing. A test suite is provided for both unit and functional testing. The code is documented using YARD.
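A generic sketch of a lock-file mutex (acquire by atomically creating the file, release by deleting it); this illustrates the concept only and is not Mutagem's API:

    # Only one process at a time can create the lock file exclusively, so the
    # block is effectively a cross-process critical section.
    def with_lockfile(path)
      acquired = false
      File.open(path, File::WRONLY | File::CREAT | File::EXCL) { |f| f.puts(Process.pid) }
      acquired = true
      yield
    ensure
      File.delete(path) if acquired
    end

    with_lockfile('/tmp/example.lock') do
      puts "critical section running in pid #{Process.pid}"
    end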
Handle resource issues in forked Ruby processes - File descriptors, DB connections, etc.
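A generic illustration of the problem space (resources that wrap descriptors or connections should not be shared across fork); the Connection struct below is a hypothetical stand-in, not this gem's API:

    # Hypothetical resource standing in for a DB/socket connection: the child
    # discards the inherited instance and establishes its own after fork.
    Connection = Struct.new(:owner_pid) do
      def self.establish
        new(Process.pid)
      end

      def close; end
    end

    conn = Connection.establish
    puts "parent connection owned by pid #{conn.owner_pid}"

    if (pid = fork)                   # fork is available on Unix-like systems
      Process.wait(pid)
    else
      conn.close                      # drop the inherited resource...
      conn = Connection.establish     # ...and reopen it in the child
      puts "child connection owned by pid #{conn.owner_pid}"
      exit!
    end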
Parallelize processing of large files and/or data using Resque, Redis and MongoDB
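A sketch of the general pattern (split a large file into line ranges and enqueue one Resque job per chunk); the job class, queue name, and chunk size are hypothetical, not this gem's API:

    require 'resque'

    # Hypothetical worker: each job processes one slice of the input file.
    class ChunkJob
      @queue = :file_chunks

      def self.perform(path, offset, count)
        lines = File.readlines(path)[offset, count] || []
        # ...real per-chunk processing would go here...
        puts "processed #{lines.size} lines of #{path}"
      end
    end

    # Enqueue one job per 10_000-line chunk; Resque workers run them in parallel.
    path  = 'data/big_input.csv'   # hypothetical input file
    total = File.foreach(path).count
    (0...total).step(10_000) { |offset| Resque.enqueue(ChunkJob, path, offset, 10_000) }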
Processr is a simple text processing and concatenation library. It takes a number of input strings (or files) and outputs a single string (or file) containing the result. Text can be passed through filters to modify the output.
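A generic illustration of the filter idea (concatenate the inputs, then pass the text through callable filters in order); this is not Processr's actual API:

    # Concatenate the inputs and run the result through each filter in turn.
    def process(inputs, filters)
      text = inputs.join("\n")
      filters.reduce(text) { |out, filter| filter.call(out) }
    end

    strip_blank_lines = ->(text) { text.lines.reject { |l| l.strip.empty? }.join }
    upcase            = ->(text) { text.upcase }

    puts process(['hello', '', 'world'], [strip_blank_lines, upcase])
    # prints "HELLO" and "WORLD" on separate lines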
= Simple task organizer syctask can be used to create, plan, prioritize and schedule tasks. ==Install The application can be installed with $ gem install syc-task == Usage syctask provides basic task organizer functions such as create, update, list and complete a task. Additional functions are to plan tasks you want to accomplish today. If you are not sure in which sequence to conduct the tasks you can prioritize them with a pairwise comparison. You can time tasks with start and stop and you can finally extract tasks from a minutes of meetings file. The schedule command will print a graphical timeline of the working day, assigning the planned tasks to the timeline. Busy times are marked red. Meetings are listed with associated tasks that are assigned to the meetings. With the statistics command you can print a statistical evaluation of task duration and count. ===Create tasks with new Create a new task in the default task directory ~/.tasks $ syctask new "My first task" Provide a description $ syctask new "My first task" --description "Explanation of my first task" Schedule a task with a follow-up and due date $ syctask new "My first task" --follow-up "2013-02-25" --due "2013-03-11" Set a priority for a task $ syctask new "My first task" --prio 3 Prompt for task input $ syctask new will prompt for task titles. Ctrl-D will end input. Except for --description you can also provide short forms for the options. ===Create tasks by scanning from files When writing minutes of meetings, tasks that should be followed up in syctask can be annotated so they will be recognized by the scan command. The following structure shows how to annotate tasks Some text before @task; title;description;follow_up;due_date;prio Schedule meeting;Invite all developers;2016-09-12;2016-10-12;1 Write letter;Practice writing letters;;;3 Some text after The above annotation will only scan the next task because of the singular 'task' where the task values are separated with ';'. The line after the annotation '@task' lists the sequence of the fields of the task. It is also possible to list the tasks in a table, e.g. markdown Some text before @tasks| title |description |follow_up |due_date |prio ----------------|--------------------------|----------|----------|---- Schedule meeting|Invite all developers |2016-09-12|2016-10-12|1 Write letter |Practice writing letters | | |3 Some text after Call partner |Ask for project's progress|2016-09-14| |1 Even more text The example above scans all tasks due to the plural 'tasks'. It also scans all tasks that are separated with non-task text and occur after the annotation and conform to the field structure. Lines that start with '-' will be ignored. So if you want to skip only a few tasks within a task list prepend them with '-'. If you have tasks with different fields then you have to add another annotation with the new field structure. Possible fields are title - the title of the task - mandatory field! description - the description of the task follow_up - the follow-up date of the task in the form yyyy-mm-dd due_date - the due-date of the task in the form yyyy-mm-dd prio - the priority of the task tags - tags the task is annotated with note - a note for the task Note: follow_up and due_date can also be written as Follow-up and Due-Date. Also case is ignored. As indicated in the list the title column is mandatory. Without the title column scan will raise an error during a scan. Fields that are not part of the above list will be ignored. 
# | Title | Who - | ------------------------------------ | --- 1 | Schedule meeting with all developers | Me 2 | Write letter to practice writing | You In the table only the column Title will be scanned. The '#' and 'Who' columns will be ignored during scan. This table also shows the minimal structure needed for a scan. You need to provide at least a title column so the scan function will recognize the table as a task list. Scanning tasks from files $ syctask scan 2016-09-10-mom.md 2016-09-09-mom.md ===Plan tasks The plan command will print tasks and prompt whether to (a)dd or (s)kip the task. If (q)uit is selected the tasks already added will be added to today's task list. If (c)omplete is selected the complete task will be printed and the user will be prompted again for adding the task. Invoke plan without filter $ syctask plan 1 - My first task (a)dd, (c)omplete, (s)kip, (q)uit? a Duration (1 = 15 minutes, return 30 minutes): 3 --> 1 task(s) planned Invoke plan with a filter $ syctask plan --id "1,3,5,8" 1 - My first task (a)dd, (c)omplete, (s)kip, (q)uit? Move tasks to another day's plan $ syctask plan today --move tomorrow --id 3,5 This will move the tasks with ID 3 and 5 from today's plan to tomorrow's plan. The duration will be set to the remaining processing time but at least to 30 minutes. ===Prioritize tasks Planned tasks can be prioritized in a pairwise comparison. So each task is compared to all other tasks. The task with the highest priority will bubble to the top followed by the task with the next highest priority and so on. $ syctask prio 1: My first task 2: My second task Task 1 has (h)igher or (l)ower priority, or (q)uit: h 1: My first task 2: My third task Task 1 has (h)igher or (l)ower priority, or (q)uit: l 1: My third task 2: My fourth task Task 1 has (h)igher or (l)ower priority, or (q)uit: h ... syctask schedule will then print tasks as follows Tasks ----- 0: 10 - My fourth task 1: 7 - My third task 2: 3 - My first task 3: 9 - My second task ... Instead of conducting a pairwise comparison the order of the tasks in the plan can be specified with the -o flag $ syctask plan -o 7,3,10,9 The plan or schedule command will print the tasks in the specified order Tasks ----- 0: 7 - My third task 1: 3 - My first task 2: 10 - My fourth task 3: 9 - My second task If only a part of the tasks is provided the rest of the tasks are appended to the end of the task plan. If you specify a position flag the prioritized tasks are added at the provided position. $ syctask plan -o 7,9 -p 2 Tasks ----- 0: 3 - My first task 1: 10 - My fourth task 2: 7 - My third task 3: 9 - My second task ===Create schedule The schedule command will print a graphical schedule, assigning the tasks selected with plan. When the schedule command is invoked the planned tasks are added at or after the current time within the time schedule. Tasks that are done and scheduled in the future are not shown. Tasks done and in the past are shown with the actual processing time. The day starts at 00:00 and ends at 23:59. So 24:00 should be 00:00. Create a schedule with working time from 8a.m. to 6p.m. and meetings between 9a.m. and 9.30a.m. and 1p.m. and 2.45p.m. 
$ syctask schedule -w "8:00-18:00" -b "9:00-9:30,13:00-14:45" Add titles to the meetings $ syctask schedule -m "Project status,Management meeting" The output will be Meetings -------- A - Project status B - Management meeting A B xxx-///-|---|---|---///////-|---|---|---| 8 9 10 11 12 13 14 15 16 17 18 1 Tasks ----- 0 - 1: My first task Adding a task to a meeting $ syctask schedule -a "A:0" will print Meetings -------- A - Project status 1 - My first task B - Management meeting A B ----///-|---|---|---///////-|---|---|---| 8 9 10 11 12 13 14 15 16 17 18 Tasks ----- 0: 1 - My first task A task that is re-scheduled with $ syctask update 1 -f tomorrow will be shown as done (green) in the schedule and instead of the separator - it shows ~. Tasks ---- 0: 1 ~ My first task A started task will be indicated by * $ syctask start 1 $ syctask sche Tasks ----- 0: 1 * My first task ===List tasks List tasks that are not marked as done in short form $ syctask list List all tasks in long form $ syctask list --all --complete Search tasks that match a pattern $ syctask list --id "<10" --follow_up ">2013-02-25" --title "My \w task" ===Inspect tasks Lists each unplanned task and allows you to edit, delete, mark as done or plan it for today or another day $ syctask inspect 0016 Create command for inspection (e)dit, (d)one, de(l)ete, (p)lan, da(t)e, (c)omplete, (s)kip, (b)ack, (q)uit ===Edit task Edit a task with ID 10 in vi $ syctask edit 10 ===Update tasks Except for title and id all values can be updated. Note and tags are not overridden but rather supplemented with the update value. Update task with ID 1 and provide some informative note $ syctask update 1 --note "Some explanation about the progress on the task" ===Complete tasks Complete the task with ID 1 and provide a final note $ syctask done 1 --note "Finalize my first task" ===Delete tasks Delete tasks with ID 1,3 and 5 from the default task directory $ syctask delete --id 1,3,5 Delete tasks with ID 8 and 12 from the planned tasks of today. The tasks are only removed from the planned tasks and not physically deleted. $ syctask delete --plan today --id 8,12 ===Settings The settings command allows you to define default values for the task directory and to create general purpose tasks that can be used for tracking and later statistical evaluation. Create general purpose tasks for phone and talk $ syctask setting --general PHONE,TALK List all settings $ syctask setting --list ===Info Info searches for the location of a task and lists all task directories Search for task with id 102 $ syctask info --id 102 List all task directories $ syctask info --taskdir ===Statistics Shows statistics for work and meeting times as well as for task processing Evaluate the complete log file $ syctask statistics Evaluate work times, meetings and tasks between 2013-01-01 and 2013-04-14 $ syctask statistics 2013-01-01 2013-04-14 Evaluate yesterday and today $ syctask statistics yesterday today ===Task directory and project directory The global options --taskdir and --project determine where the command finds or creates the tasks. The default task directory is ~/.tasks, so if no task directory is specified all commands obtain tasks from or create tasks in ~/.tasks. If a project is specified the tasks will be saved to or obtained from the task directory's subdirectory specified with the --project flag. --taskdir --project Tasks in - - default_task_dir x - task_dir - x default_task_dir/project x x task_dir/project In the following table the relation of commands to --taskdir and --project is listed. 
Command --taskdir --project Comment delete x x deletes the tasks in taskdir/project done x x marks tasks in taskdir/project as done help - - inspect x x lists task to edit, done, delete, plan list x x lists tasks in taskdir/project new x x creates tasks in taskdir/project plan x x retrieves tasks to plan from taskdir/project prio - - input to prio are planned tasks (see plan) scan x x creates scanned tasks in taskdir/project schedule - - schedules the planned tasks (see plan) start - - starts task from planned tasks (see plan) statistics - - shows statistics of time and count stop - - stops task from planned task update x x updates task in taskdir/project ===Files * ID id file contains the last issued id. * IDS ids file contains all issued ids. * Task files The tasks are named ID.task where ID is any Integer, such as 10.task. The files are saved as YAML files and can be edited directly. * Planned tasks files The planned tasks are saved to YYYY-MM-DD_planned_tasks in syctask's system directory. Each task is saved with the task's directory and the ID. * Schedule files The schedule is saved to YYYY-MM-DD_time_schedule in the default task directory. The files are saved as YAML files and can be changed manually. * Log file Creating schedules and task processing is logged to tasks.log. For example when a task is started and stopped this action is saved to tasks.log. * Tracked file A started task is saved to tracked_tasks. A semaphore file is created with ID.track when the task ID is started. When the task is stopped the semaphore file is deleted. * General purpose tasks With syctask setting -g PHONE so called general purpose tasks can be created. These tasks can be used for time tracking and later statistical evaluation to determine the amount of disturbances, e.g. by phone. These tasks are saved to default_tasks. The general purpose tasks themselves are also saved to the .syc/syctask directory as regular task files. * Default task dir The default task directory that is used e.g. with list is saved to default_tasks_dir. This can be set with the setting command. ==Working with syctask To work with syctask and get the most out of it there is a certain process to follow. ===Creating a schedule ==== View tasks In the morning before I start to work I scan my tasks with syctask list or syctask inspect to get an overview of my open tasks. $ syctask list ==== Plan tasks Next I start the planning phase with syctask plan. If I have a specific schedule for the day I will filter for the respective tasks $ syctask plan ==== Prioritize tasks (optionally) If I want to process the tasks in a specific sequence I prioritize the tasks with $ syctask prio ==== Create schedule I create a schedule with my working hours and meetings that have been scheduled with $ syctask schedule -w "8:00-18:00" -b "9:00-10:00,14:30-16:00" -m "Team,Status" ==== Create an agenda I assign the topics I want to discuss in the meetings to the meetings with syctask schedule -a "A:1,3,6;B:3,5" ==== Start a task To begin I start the first task in the schedule with syctask start -p ID (where ID is the ID of the planned (-p) tasks) $ syctask start -p 10 ==== End a task To end the task I invoke $ syctask stop This will stop the last started task ==== Re-schedule a task If I cannot finish a task then I update the task with a new follow-up date $ syctask update 23 -f tomorrow The task will be shown in today's schedule as done. 
==== Complete a task When the task is done I call $ syctask done 23 ===Attachments * E-mails If an e-mail creates a task I create a new task with syctask new title_of_task. The subject of the e-mail I prepend with the ID and move the e-mail to an <b>open topics</b> directory. * Files If I create files in the course of a task I create a folder in the task directory with the ID and save the files in this directory. If there is an existing directory I link to the file from the ID directory ==Supported platform syc-task up to version 0.4.2 has been tested with Ruby 1.9.3. Version 0.4.2 also runs with Ruby 2.7. It also works in Windows using Cygwin. Version 1.0.0 has been upgraded to Ruby 3.2. ==Add TAB-completion to syctask To activate bash's TAB-completion the following lines have to be added to ~/.bashrc complete -F get_syctask_commands syctask function get_syctask_commands { if [ -z $2 ] ; then COMPREPLY=(`syctask help -c`) else COMPREPLY=(`syctask help -c $2`) fi } After ~/.bashrc has been updated the shell session has to be restarted with $ source ~/.bashrc Now syctask followed by TAB TAB will print $ syctask <TAB><TAB> delete done list plan scan stop _doc help new prio schedule start update To complete a command we can type $ syctask sch<TAB> which will complete to $ syctask schedule ==Output to Printer To print syctask's output to a printer pipe the command to lpr $ syctask schedule | lpr This will print the schedule to the default printer. To determine all available printers lpstat can be used with the lpstat -a command $ lpstat -a Canon-LBP6650-3470 accepting requests since Sat 16 Mar 2013 04:26:15 PM CET Dell-B1160w-Mono accepting requests since Sat 16 Mar 2013 04:27:45 PM CET To print to Dell-B1160w-Mono the following command can be used $ syctask schedule | lpr -P Dell-B1160w-Mono ==Release Notes ===Version 0.0.1 Implementation of new, update, list and done commands. ===Version 0.0.4 * delete: deleting tasks or removing tasks from a task plan * plan: plan tasks and add them to the task plan * schedule: create a schedule with work and busy time and assign the tasks from the task plan to the free times ===Version 0.0.6 * start: start a task and track the lead time * stop: stop the tracking and print the lead time of the task * start, stop: the task is logged in the ~/.tasks/task.log file when added and when stopped * prio: prioritize tasks in the task plan, that is specifying the sequence in which the tasks should be conducted * plan: --move flag added to move tasks from the specified plan to another day's task plan * update, new: when a follow-up or a due date is provided the task is added to the provided date's task plan. If both dates are set the task is added to both dates' task plans ===Version 0.0.7 * updated rdoc ===Version 0.1.15 * IDs are now unique independent of the task or project directory. After upgrading from a version 0.0.7 or older the user is asked whether to re-index the tasks. It is advised to tar the tasks before re-indexing with $ tar cvfz tasks.tar.gz .tasks other_task_directories * start will now show a timer in the upper right corner of the screen when started with the -t (--timer) flag. $ syctask start 10 -t In order to use the task timer ncurses has to be installed as the task timer uses tput from the ncurses library. * The schedule has a heading with the schedule's date and the working time * Planned tasks are now added at or after the current time if they are not done yet. Done tasks are shown in the past with the actual processing time. 
  Tasks done before the start of the schedule are not shown in the schedule.
* Meetings that are at the current time are indicated with a *. Active tasks are indicated with a star, re-scheduled tasks are indicated with a ~.
* Assigning tasks to meetings in a schedule is now done with the task ID
* Statistics show statistics about work time, meeting times, general purpose tasks and task processing. Total, min, max and average time and count are listed. If you have used version 0.0.7 it is advised to delete tasks.log, which lives in ~/.tasks before upgrading or in ~/.syc/syctask after upgrading. Otherwise the statistics results will seem odd.
* Meeting time in the time line now shows the correct duration
* The info command searches for the location of a task and lists all task directories with the tasks contained.
* The plan move command sets the duration to the remaining processing time, but to at least 15 minutes
* With the setting command the default task directory can be set and general purpose tasks can be created. A general purpose task can be used for tracking, to analyse how much time is taken up e.g. by phone calls. setting -l lists all general purpose tasks and the default task directory
* The prio command now takes a position flag together with the order flag to determine where to insert the newly ordered tasks
* All commands that take an ID as argument (done, edit, start, update) look up the task file associated with the ID in the ids file. If it is found, the provided task directory is not considered for the task file. If the ID is not contained in the ids file the task is looked up in the provided directory
* The inspect command allows listing each of today's unplanned tasks to edit, delete, mark as done or plan
* The update command now has a duration flag to set the task's duration

===Version 0.2.0
* Migrated from TestUnit to Minitest
* Implemented _timeleap_ {<img src="https://badge.fury.io/rb/timeleap.svg" alt="Gem Version" />}[http://badge.fury.io/rb/timeleap], which allows specifying additional time distances besides yesterday, today and tomorrow. Time distances come in two flavors, long and short forms.

  Examples for long forms are
  - yesterday|today|tomorrow
  - next|previous_monday|tuesday|...|sunday
  - monday|tuesday|...|sunday_in|back_1_week|month|year
  - in|back_10_days|weeks|months|years

  Examples for short forms are
  - y|tod|tom
  - n|pmo|tu|..|su
  - mo|tu|...|sui|b1w|m|y
  - i|b10d|w|m|y

===Version 0.2.1
* Fix a bug in `syctask delete --plan`
* Add indicator '>' to the task list when a task contains notes
* Refactor migration from version 0.0.7 and for when the user has deleted system files. The user can now specify the directories where the tasks are located and can also define directories to be excluded. This is especially helpful to omit searching large mounted directories, like those from NAS servers.

===Version 0.3.1
* Add csv output separated by ';' to the list command
* Fix bug when the schedule file is empty
* Add scan command to scan tasks from files

===Version 0.3.2
* Fix bug caused by the missing class lib/syctask/scanner.rb

===Version 0.4.2
* The delete command can now take ranges of IDs, e.g. 1,2,4-8,5,20-25
* inspect can now go back in the task list
* inspect will now show the updated task after making changes to the task in edit
* inspect allows specifying a follow_up date
* scan will ignore columns that are not part of a syctask task
* scan recognizes 'Follow-up' as well as 'follow_up' now.
  That is, an underscore can be replaced with '-'.
* Fix bug when scanning tables that have spaces between the separator and the column
* When the tasks.log file is missing, `syctask inspect` prints a warning with the reason why statistics cannot be printed

===Version 1.0.0
* Upgrade to Ruby 3.2.2

==Development
Pull from GitHub and then run

  $ bundle install

New classes have to be added to 'lib/syctask.rb'.

Debugging the interface can be done with GLI_DEBUG:

  $ bundle exec env GLI_DEBUG=true bin/syctask

Building and pushing the gem to RubyGems

  $ gem build syctask.gemspec
  $ gem push syc-task-0.2.1.gem

==Tests
The test files live in the folder test and start with test_. A rake task is available to run all tests

  $ rake test

The CLI is tested with Cucumber. To run the Cucumber features in verbose mode

  $ cucumber

or, if you prefer cleaner output, run

  $ rake features

==License
syc-task is released under the {MIT License}[http://opensource.org/licenses/MIT]

==Links
* [http://www.github.com/sugaryourcoffee/syc-task] - Source code on GitHub
* [https://rubygems.org/gems/syc-task] - RubyGems
Library to tail files and process them in Ruby
This library provides support for loading and processing data from Collada Digital Asset Exchange files. These files are typically used for sharing geometry and scenes.
Daemon launching and management made dead simple. With daemon-spawn you can start, stop and restart processes that run in the background. Processes are tracked by a simple PID file written to disk. In addition, you can choose to either execute Ruby in your daemonized process or 'exec' another process altogether (handy for wrapping other services).
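As a rough sketch of how such a daemon might be defined, the following assumes daemon-spawn exposes a DaemonSpawn::Base class with start/stop hooks and a spawn! entry point; treat the require name, class name and option keys as assumptions and check the gem's own documentation before relying on them.

  # Minimal sketch under the assumptions stated above, not a verified
  # example of daemon-spawn's actual API.
  require 'daemon_spawn'   # require name is an assumption

  class EchoDaemon < DaemonSpawn::Base
    def start(args)
      # work done by the background process; its PID is tracked on disk
      loop do
        File.open('/tmp/echo_daemon.out', 'a') { |f| f.puts Time.now }
        sleep 10
      end
    end

    def stop
      # clean-up hook invoked when the daemon is asked to shut down
    end
  end

  # Hypothetical invocation, e.g. `ruby echo_daemon.rb start|stop|restart`
  EchoDaemon.spawn!(
    working_dir: '/tmp',                   # option names are assumptions
    pid_file:    '/tmp/echo_daemon.pid',
    log_file:    '/tmp/echo_daemon.log'
  )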
RDaux creates beautiful documentation websites from markdown files. It is inspired by daux.io and uses redcarpet with pygments.rb to process GitHub-flavored markdown, and it supports ASCII art with the help of ditaa.
Thebes is a thin binding layer for Rails and Sphinx via Riddle and Mysql2. Thebes expects you to write Sphinx configuration files by hand and have a rich understanding of Sphinx, but provides configuration files and templates to ease the process.
Integrates data into MS Word docx template files. Processing supports loops and replacement of strings of data both outside and within loops.
GemExefy is a RubyGems plugin aimed at replacing batch files (.bat) with executables of the same name. This gem works only on RubyInstaller Ruby installations and requires the RubyInstaller DevKit. The reason for replacing batch files with executable stubs is twofold. First, when execution of a batch file is interrupted with the Ctrl-C key combination, the user is faced with the confusing question "Terminate batch job (Y/N)?", which is avoided after the replacement. Second, there is the appearance of processes in Task Manager (or Process Explorer): in the case of batch files all processes are visible as ruby.exe, so program arguments must be examined to distinguish between them. In addition, having one process name makes it hard to define firewall rules. Having executable versions instead of batch files facilitates process identification in the task list as well as defining firewall rules, and it makes it possible to create selective firewall rules for different Ruby gems. Installing Ruby applications as Windows services should also be much easier when an executable stub is used instead of a batch file.
Log2json lets you read, filter and send logs as JSON objects via Unix pipes. It is inspired by Logstash, and is meant to be compatible with it at the JSON event/record level so that it can easily work with Kibana. Reading logs is done via a shell script (e.g., `tail`) running in its own process. You then configure (see the `syslog2json` or the `nginxlog2json` script for examples) and run your filters in Ruby using the `Log2Json` module and its contained helper classes. `Log2Json` reads logs from stdin (one log record per line), parses the log lines into JSON records, and then serializes and writes the records to stdout, which can then be piped to another process for further processing or for sending them somewhere else. Currently, Log2json ships with a `tail-log` script that can be run as the input process. It's the same as using the Linux `tail` utility with the `-v -F` options, except that it also tracks the positions (as the number of lines read from the beginning of the files) in a few files in the file system, so that if the input process is interrupted it can continue reading from where it left off the next time the files are followed. This feature is similar to the sincedb feature in Logstash's file input. Note: if you don't need the tracking feature (i.e., you are fine with always tailing from the end of the file with `-v -F -n0`), then you can just use the `tail` utility that comes with your Linux distribution (or, more specifically, the `tail` from GNU coreutils). Other versions of the `tail` utility may also work, but are not tested. The input protocol expected by Log2json is very simple and documented in the source code. ** The `tail-log` script uses a patched version of `tail` from the GNU coreutils package. A binary of the `tail` utility compiled for Ubuntu 12.04 LTS is included with the Log2json gem. If the binary doesn't work for your distribution, you'll need to get GNU coreutils-8.13, apply the patch (it can be found in the src/ directory of the installed gem), and then replace the bin/tail binary in the directory of the installed gem with your version of the binary. ** P.S. If you know of a way to configure and compile ONLY the tail program in coreutils, please let me know! The reason I'm not building tail after gem installation is that it takes too long to configure && make, because that actually builds every utility in coreutils. For shipping logs to Redis, there's the `lines2redis` script that can be used as the output process in the pipe. For shipping logs from Redis to ElasticSearch, Log2json provides a `redis2es` script. Finally, here's an example of Log2json in action.

From a client machine:

  tail-log /var/log/{sys,mail}log /var/log/{kern,auth}.log | syslog2json | queue=jsonlogs \
    flush_size=20 \
    flush_interval=30 \
    lines2redis host.to.redis.server 6379 0  # use redis DB 0

On the Redis server:

  redis_queue=jsonlogs redis2es host.to.es.server

Resources that help writing log2json filters:
- look at the log2json.rb source and example filters
- http://grokdebug.herokuapp.com/
- http://www.ruby-doc.org/stdlib-1.9.3/libdoc/date/rdoc/DateTime.html#method-i-strftime
This Ruby gem helps to manage processes in your application so that a new process won't start while the previous one is still running. It uses the well-known 'lock file' approach to figure out whether a process is running or not.
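The gem's own API is not shown here; as a generic illustration of the lock file approach itself (not this gem's interface), a process can take a non-blocking exclusive lock on a well-known file and bail out if the lock is already held:

  # Generic illustration of the lock file approach, using only the Ruby
  # standard library. The lock path is a hypothetical example.
  LOCK_PATH = '/tmp/my_job.lock'

  lock = File.open(LOCK_PATH, File::RDWR | File::CREAT, 0o644)
  unless lock.flock(File::LOCK_EX | File::LOCK_NB)
    # another instance still holds the lock, so this run is skipped
    warn 'Previous run still in progress, skipping.'
    exit 1
  end

  begin
    # ... do the actual work here ...
  ensure
    lock.flock(File::LOCK_UN)
    lock.close
  end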
This tool can process fastq files, using fastq_quality_trimmer and quake to correct them, and then provide a quality assessment of the data.
Paraphraser is a very simple gem. It adds a rake task that will drop, re-create and migrate a database (default is test) and output the SQL generated in the migration process. The SQL is written to the screen as well as to a file, ./migration.sql.
This is a library for transmitting file descriptors between processes.
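This gem's own interface is not documented here; for orientation, the underlying technique (passing an open file descriptor over a Unix domain socket) can be sketched with Ruby's standard library alone:

  # Generic illustration of file descriptor passing using Ruby's standard
  # library (UNIXSocket#send_io / #recv_io), not this gem's API.
  require 'socket'

  parent, child = UNIXSocket.pair

  if fork
    child.close
    io = File.open('/etc/hostname')   # any open file descriptor
    parent.send_io(io)                # transmit the descriptor to the child
    parent.close
    Process.wait
  else
    parent.close
    received = child.recv_io          # returns an IO wrapping the passed fd
    puts received.read
    child.close
  end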
Dump C-level and Ruby-level backtraces from a live process or core file using gdb.
FatTable is a gem that treats tables as a data type. It provides methods for constructing tables from a variety of sources, building them row-by-row, extracting rows, columns, and cells, and performing aggregate operations on columns. It also provides a set of SQL-esque methods for manipulating table objects: select for filtering by columns or for creating new columns, where for filtering by rows, order_by for sorting rows, distinct for eliminating duplicate rows, group_by for aggregating multiple rows into single rows and applying column aggregate methods to ungrouped columns, a collection of join methods for combining tables, and more. Furthermore, FatTable provides methods for formatting tables and producing output that targets various output media: text, ANSI terminals, ruby data structures, LaTeX tables, Emacs org-mode tables, and more. The formatting methods can specify cell formatting in a way that is uniform across all the output methods and can also decorate the output with any number of footers, including group footers. FatTable applies formatting directives to the extent they make sense for the output medium and treats other formatting directives as no-ops. FatTable can be used to perform operations on data that are naturally best conceived of as tables, which in my experience is quite often. It can also serve as a foundation for providing reporting functions where flexibility about the output medium can be quite useful. Finally, FatTable can be used within Emacs org-mode files in code blocks targeting the Ruby language. Org mode tables are presented to a ruby code block as an array of arrays, so FatTable can read them in with its .from_aoa constructor. A FatTable table can output as an array of arrays with its .to_aoa output function and will be rendered in an org-mode buffer as an org-table, ready for processing by other code blocks.
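As a rough sketch of the workflow described above, built only from the methods named in the description (from_aoa, where, order_by, to_aoa); the exact argument conventions, including how header rows are recognized and how where expressions are written, are assumptions, so consult the gem's documentation for the real signatures.

  # Rough sketch under the assumptions stated above; not a verified example
  # of FatTable's exact API.
  require 'fat_table'   # require name is an assumption

  aoa = [
    %w[name dept salary],              # assumed to be taken as the header row
    ['Ann',  'eng',   120_000],
    ['Bob',  'sales',  90_000],
    ['Cara', 'eng',   110_000]
  ]

  tab = FatTable.from_aoa(aoa)         # build a table from an array of arrays
  eng = tab.where('dept == "eng"')     # SQL-esque row filtering (assumed syntax)
           .order_by(:salary)          # sort rows by a column
  p eng.to_aoa                         # back to an array of arrays, e.g. for org-mode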