Shrine is a toolkit for file attachments in Ruby applications. It supports uploading, downloading, processing and deleting IO objects, backed by various storage engines. It uses efficient streaming for low memory usage. Shrine comes with a high-level interface for attaching uploaded files to database records, saving their location and metadata to a database column, and tying them to the record's lifecycle. It natively supports background jobs and direct uploads for a fully asynchronous user experience.
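A minimal sketch of the attachment flow described above, assuming the usual Shrine setup; the storage paths, the Photo model and its image_data column are illustrative and not part of this description.

```ruby
require "shrine"
require "shrine/storage/file_system"

Shrine.storages = {
  cache: Shrine::Storage::FileSystem.new("uploads", prefix: "cache"), # temporary
  store: Shrine::Storage::FileSystem.new("uploads"),                  # permanent
}

Shrine.plugin :activerecord  # ties attachments to the record's lifecycle

class ImageUploader < Shrine
end

class Photo < ActiveRecord::Base
  include ImageUploader::Attachment(:image)  # assumes an `image_data` text column
end

photo = Photo.new(image: File.open("photo.jpg", "rb"))
photo.save                    # uploads the file, saves location + metadata in image_data
photo.image.url               # location of the stored file
photo.image.metadata["size"]  # metadata saved alongside it
```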
ZenTest provides 4 different tools: zentest, unit_diff, autotest, and multiruby. zentest scans your target and unit-test code and writes your missing code based on simple naming rules, enabling XP at a much quicker pace. zentest only works with Ruby and Minitest or Test::Unit. There is enough evidence to show that this is still proving useful to users, so it stays. unit_diff is a command-line filter to diff expected results from actual results and allow you to quickly see exactly what is wrong. Do note that minitest 2.2+ provides an enhanced assert_equal, obviating the need for unit_diff. autotest is a continuous testing facility meant to be used during development. As soon as you save a file, autotest will run the corresponding dependent tests. multiruby runs anything you want on multiple versions of ruby. Great for compatibility checking! Use multiruby_setup to manage your installed versions. *NOTE:* The next major release of zentest will not include autotest (use minitest-autotest instead) and multiruby will use rbenv / ruby-build for version management.
Creates aliases for class methods, instance methods, constants, delegated methods and more. Aliases can be easily searched or saved as YAML config files to load later. Custom alias types are easy to create with the DSL Alias provides. Although Alias was created with the irb user in mind, any Ruby console program can hook into Alias for creating configurable aliases.
Use simple_form (>= v2.0) custom inputs to get image previews or a link to the uploaded file. Save time and code when you need useful file uploads.
Custom formatter for xcpretty that saves all the errors, warnings and test failures to a JSON file, so you can process them easily later.
autotest is a continuous testing facility meant to be used during development. As soon as you save a file, autotest will run the corresponding dependent tests. minitest-autotest is the latest incarnation of the venerable and wise autotest. This time, it talks to minitest via minitest-server. As a result, there is no output parsing. There are no regexps to tweak. There's no cruft or overhead.
PathExpander helps pre-process command-line arguments by expanding directories into their constituent files. It further helps by providing additional mechanisms to make specifying subsets easier, with path subtraction and the ability to save command-line arguments in a file. NOTE: this is NOT an options processor. It is a path processor (basically everything else besides options). It does provide a mechanism for pre-filtering cmdline options, but not with the intent of actually processing them in PathExpander. Use OptionParser to deal with options either before or after passing ARGV through PathExpander.
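A small sketch of how this is typically wired up, assuming the `PathExpander.new(args, glob)` / `#process` API; the `**/*.rb` glob is an illustrative choice.

```ruby
require "path_expander"

# Expand any directories given on the command line into their constituent
# .rb files; option flags are left for OptionParser to handle separately.
expander = PathExpander.new(ARGV, "**/*.rb")
files    = expander.process

files.each { |path| puts path }
```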
pbxplorer is a set of Ruby classes for parsing, editing, and saving Xcode project (.pbxproj) files. It can also be used to explore the contents of a project using the interactive Ruby shell.
Deploy with Rsync to your server from any local (or remote) repository. Saves you the need to install Git on your production machine and deploy all of your development files each time! Works with the new Capistrano v3! Suitable for deploying any apps, be it Ruby or Node.js.
Sym is a ruby library (gem) that offers both a command line interface (CLI) and a set of rich Ruby APIs, which make it rather trivial to add encryption and decryption of sensitive data to your development or deployment workflow. For additional security, the private key itself can be encrypted with a user-generated password. For decryption using the key, the password can be entered via STDIN, defined by an ENV variable, or stored in an OS-X Keychain entry. Unlike many other existing encryption tools, Sym focuses on getting out of your way by offering a streamlined interface with password caching (if MemCached is installed and running locally), in the hope of making encryption of application secrets nearly completely transparent to developers. Sym uses symmetric 256-bit key encryption with the AES-256-CBC cipher, the same cipher used by the US Government. For password-protecting the key, Sym uses the AES-128-CBC cipher. The resulting data is zlib-compressed and base64-encoded. The keys are also base64-encoded for easy copying/pasting/etc. Sym accomplishes encryption transparency by combining several convenient features: 1. Sym can read the private key from multiple source types, such as a pathname, an environment variable name, a keychain entry, or a CLI argument. You simply pass any of these to the -k flag — one flag that works for all source types. 2. By utilizing the OS-X Keychain on a Mac, Sym offers a truly secure way of storing the key on a local machine, much more secure than storing it on a file system. 3. By using a local password cache (activated with -c) via an in-memory provider such as memcached, sym invocations take advantage of the password cache and only ask for a password once per configurable time period. 4. By using the SYM_ARGS environment variable, where common flags can be saved. This is activated with sym -A. 5. By reading the key from the default key source file ~/.sym.key, which requires no flags at all. 6. By utilizing the --negate option to quickly encrypt a regular file, or decrypt an encrypted file with the extension .enc. 7. By implementing the -t (edit) mode, which opens an encrypted file in your $EDITOR and replaces the encrypted version upon save & exit, optionally creating a backup. 8. By offering the Sym::MagicFile ruby API to easily read encrypted files into memory. Please refer to the module documentation available here: https://www.rubydoc.info/gems/sym
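A hypothetical Ruby sketch of the Sym::MagicFile usage mentioned in point 8; the constructor argument and the #read call are assumptions based on this description, not a verified signature.

```ruby
require 'sym'

# Assumption: MagicFile is given the encrypted file's path and resolves the
# key from the usual sources (the -k equivalent, ~/.sym.key, keychain, ENV).
secrets = Sym::MagicFile.new('config/secrets.yml.enc')

# Assumption: #read returns the decrypted contents as a String.
puts secrets.read
```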
Deploy with Rsync to your server from any local (or remote) repository. Saves you the need to install Git on your production machine and deploy all of your development files each time! Suitable for deploying any apps, be it Ruby or Node.js.
The library creates, saves, and loads configuration files, which are in a user's home directory or a specified directory.
@extendable %placeholders for common CSS3 code provided by Compass. For example use `@extend %hide-text;` instead of `@include hide-text;` to save file size and speed up rendering.
Updates records from the console via your preferred editor. You can update a record's columns as well as <i>any attribute</i> that has accessor methods. Records are edited via a temporary file and, once saved, the records are updated. Records go through a filter before and after editing the file. YAML is the default filter, but you can define your own filter with just a module and two expected methods. See ConsoleUpdate::Filter for more details. Compatible with all major Ruby versions and Rails 2.3.x and up.
An ActionMailer delivery mechanism to save mails as files in a directory.
Wraps the deliver! method on ActionMailer to save the outgoing mail to a .eml file, which can be opened by most email clients. Also provides a mechanism for only sending to an approved list of email recipients, which is useful for ensuring your application doesn't send email outside of an organization.
Parses a hash string of the format `'{ :a => "something" }'` into an actual ruby hash object `{ a: "something" }`. This is useful when you have mistakenly serialized hashes and saved them in a database column or a text file, and you want to convert them back to hashes without the security issues of executing `eval(hash_string)`. By default only the following classes are allowed to be deserialized:

* TrueClass
* FalseClass
* NilClass
* Numeric
* String
* Array
* Hash

A HashParser::BadHash exception is thrown if unserializable values are present.
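A quick sketch of the round trip described above; the `HashParser.new.safe_load` entry point follows the gem's typical usage but should be treated as an assumption here.

```ruby
require 'hash_parser'

hash_string = '{ :a => "something", :b => 1 }'

# safe_load parses the string without eval'ing it
hash = HashParser.new.safe_load(hash_string)
# => { a: "something", b: 1 }

# Values outside the allowed classes raise HashParser::BadHash
begin
  HashParser.new.safe_load('{ :a => Kernel }')
rescue HashParser::BadHash => e
  puts "refused to deserialize: #{e.message}"
end
```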
Generate decks ready to be imported into Anki! Just pass an array of hashes and get back a string that you can save in a file and import directly into Anki.
A simple gem to add accessors on Rails models for file uploads of pictures/assets (which will be persisted to the remote media disk on object save). Refer to the documentation for more help. Latest happening: now with Base64-encoded image support.
This plugin provides "magical translations" in your .haml files. What does it mean? It means that all raw text in your templates will be automatically translated by the GetText, FastGettext or Gettext backend from I18n. No more complicated translation keys and ugly translation methods in views. Now you can simply write in your own language, nothing more. At the end of your work you can easily find all phrases to translate and generate .po files for them. This type of file is also more readable and easier to translate, so you save time on translations.
Save and load NumPy npy and npz files in Ruby
A library for loading, modifying and saving BVH motion capture files.
Hotloader uses ActionCable to refresh your page every time you save a file in your app dir. That's your views, CSS, controllers, etc.
Gracefully handles invalid form submissions, so users don't have to resubmit the file. Supports transforming the file before saving, e.g. scaling an image. Compatible with any storage mechanism, including the local filesystem and the cloud.
"Load, save and use YAML files as if they were objects"
A DSL for reading and writing temporary files in an object-oriented way (using the system tmp directory). Manage tmp files the super easy way! This DSL gives you a simple way to run commands and create variables on the file system, by default in the system's (cross-platform) tmp folder. Sometimes it can be useful for multi-processing (forked processes), but the main goal is not shared memory management! The goal is to provide a DSL for easily creating tmp files on the filesystem in an object-oriented way (real objects, not simply strings). By default it is always IO work, not memory: everything you save with this library goes through IO commands rather than memory.
A simple program that listens on port 80 for uploads and saves the uploaded files to disk
Slim2pdf renders a Slim template with a data hash and saves the result as a PDF file.
ZenTest provides 4 different tools: zentest, unit_diff, autotest, and multiruby. ZenTest scans your target and unit-test code and writes your missing code based on simple naming rules, enabling XP at a much quicker pace. ZenTest only works with Ruby and Test::Unit. Nobody uses this tool anymore but it is the package namesake, so it stays. unit_diff is a command-line filter to diff expected results from actual results and allow you to quickly see exactly what is wrong. Do note that minitest 2.2+ provides an enhanced assert_equal, obviating the need for unit_diff. autotest is a continuous testing facility meant to be used during development. As soon as you save a file, autotest will run the corresponding dependent tests. multiruby runs anything you want on multiple versions of ruby. Great for compatibility checking! Use multiruby_setup to manage your installed versions.
A Rails plugin to manage batch scripts. Provides a web interface to create, edit and execute batch scripts simply. Automatically saves the log to a file.
A Cinch plugin to define keywords and display a description when a keyword matches. Keywords are saved in a YAML file for persistence.
Instead of sending down a dozen JavaScript and CSS files full of formatting and comments, this gem makes it simple to merge and compress these into one or more files, increasing speed and saving bandwidth.
ZenTest provides 4 different tools: zentest, unit_diff, autotest, and multiruby. ZenTest scans your target and unit-test code and writes your missing code based on simple naming rules, enabling XP at a much quicker pace. ZenTest only works with Ruby and Test::Unit. Nobody uses this tool anymore but it is the package namesake, so it stays. unit_diff is a command-line filter to diff expected results from actual results and allow you to quickly see exactly what is wrong. Do note that minitest 2.2+ provides an enhanced assert_equal, obviating the need for unit_diff. autotest is a continuous testing facility meant to be used during development. As soon as you save a file, autotest will run the corresponding dependent tests. multiruby runs anything you want on multiple versions of ruby. Great for compatibility checking! Use multiruby_setup to manage your installed versions.
# SecureDataBag / Knife Secure Bag

Knife Secure Bag provides a consistent interface to DataBagItem, EncryptedDataBagItem as well as the custom created SecureDataBagItem, while also providing a few extra handy features to help in your DataBag workflows.

SecureDataBagItem can not only manage your existing DataBagItems and EncryptedDataBagItems, but it also provides you with a DataBag type which enables you to selectively encrypt only some of the fields in your DataBag, thus allowing you to search for the remaining fields.

## Installation

To build and install the plugin, add it to your Gemfile or run:

```shell
gem install secure_data_bag
```

## Configuration

#### Knife Secure Bag

Defaults for the Knife command may be provided in your _knife.rb_ file.

```ruby
knife[:secure_data_bag][:encrypted_keys] = %w( password ssh_keys ssh_ids public_keys private_keys keys secret )
knife[:secure_data_bag][:secret_file] = "#{local_dir}/secret.pem"
knife[:secure_data_bag][:export_root] = "#{kitchen_dir}/data_bags"
knife[:secure_data_bag][:export_on_upload] = true
knife[:secure_data_bag][:defaults][:secrets][:export_format] = 'plain'
```

To break this up:

`knife[:secure_data_bag][:encrypted_keys] = []`

When Knife Secure Bag encrypts a hash with an _encryption format_ of *nested*, it will recursively walk through the hash from the bottom up and encrypt any key found within this array.

`knife[:secure_data_bag][:secret_file]`

When encryption is required, the shared secret found at this location will be loaded.

`knife[:secure_data_bag][:export_root]`

When exporting a data\_bag\_item, files will be created below this root directory. Typically this would be the data\_bag folder located within your kitchen.

`knife[:secure_data_bag][:export_on_upload]`

When a data\_bag\_item is edited using `knife secure bag edit`, it may be automatically exported to the _export\_root_.

`knife[:secure_data_bag][:defaults][:secrets][:export_format]`

The configuration file additionally supports the _defaults_ hash which provides default values for all _command line arguments_ that one might use. Of all of them, only the _export\_format_ key is likely to be of much use.

## Examples

#### Chef cookbook recipe

```ruby
metadata = {}

# Define the keys we wish to encrypt
metadata[:encrypted_keys] = %w(encoded)

# Optionally load a specific shared secret. Otherwise, the global
# encrypted_data_bag_secret will be automatically used.
secret_key = SecureDataBagItem.load_key("/path/to/secret")

# Create a hash of data to use as an example
raw_data = {
  id: "item",
  data_bag: "data_bag",
  encoded: "my string",
  unencoded: "other string"
}

# Instantiate a SecureDataBagItem from a hash
item = SecureDataBagItem.from_hash(raw_data, metadata)

# Or more explicitly
item = SecureDataBagItem.from_hash(raw_data, encrypted_keys: %w(encoded))

# Or load from server
item = SecureDataBagItem.load("data_bag", "item")

# Print the un-encrypted raw data
pp item.raw_data

# Print the un-encrypted `encoded` key
pp item['encoded']

# Print the encrypted hash as a data_bag_item hash
pp item.to_hash
=begin
{
  id: "item",
  data_bag: "data_bag",
  encoded: {
    encrypted_data: "encoded",
    cipher: "aes-256-cbc",
    iv: "13453453dkgfefg==",
    version: 1
  },
  unencoded: "other string",
}
=end
```

## Usage

#### Knife commands

Print a DataBagItem, EncryptedDataBagItem or SecureDataBagItem as plain text, auto-detecting the encryption method used.
```shell
knife secure bag show -F js secrets secret_item
```

Print a DataBagItem, EncryptedDataBagItem or SecureDataBagItem as a SecureDataBagItem in encrypted format, auto-detecting the encryption method used.

```shell
knife secure bag show -F js secrets secret_item --enc-format nested
```

Edit an EncryptedDataBagItem, preserve its encryption type, and export a copy to the _data\_bag_ folder in your kitchen.

```shell
knife secure bag edit secrets secret_item --export
```

## Knife SubCommands

Most of the SubCommands support the following command-line options:

`--enc-format [plain,encrypted,nested]`

Ensure that, when displaying or uploading the data\_bag\_item, we forcibly encrypt the data\_bag\_item using the specified format instead of preserving the existing format. In this case:

- plain: refers to a DataBagItem
- encrypted: refers to an EncryptedDataBagItem
- nested: refers to a SecureDataBagItem

`--dec-format [plain,encrypted,nested]`

Attempt to decrypt the data\_bag\_item using the given format rather than the auto-detected one. The only real reason to use this is when you wish to specifically select _plain_ as the format so as to not decrypt the item.

`--enc-keys key1,key2,key3`

Provide a comma-delimited list of hash keys which should be encrypted when encrypting the data\_bag\_item. This list will be concatenated with any key names listed in the configuration file or which were previously encrypted.

`--export`

Export the data\_bag\_item to a json file in either of _export-format_ or _enc-format_.

`--export-format`

Overrides the encryption format only for the _export_ feature.

`--export-root`

Root directory under which a folder should exist for each _data_bag_ into which to export _data_bag_items_ as json files.

When displaying the content of the _data\_bag\_item_, an additional key of *_secure_metadata* will be added to the output which contains gem-specific metadata such as the encryption formats and any encrypted keys found. This key will _not_ be saved with the item, however it may be manipulated to alter the behavior of the _edit_ or _export_ commands.

#### knife secure bag show DATA_BAG ITEM

This command functions just like `knife data bag show` and is used to print out the content of either a DataBagItem, EncryptedDataBagItem or SecureDataBagItem. By default, it will auto-detect the Item type and print its unencrypted version to the terminal. This behavior, however, may be altered using the previously mentioned command line options.

#### knife secure bag open PATH

This command functions much like `knife secure bag show`, however it is designed to load a _data\_bag\_item_ from disk as opposed to loading it from the Chef server. This may be of use when viewing the content of an exported encrypted file.

#### knife secure bag edit DATA_BAG DATA_BAG_ITEM

This command functions just like `knife data bag edit` and is used to edit either a DataBagItem, EncryptedDataBagItem or a SecureDataBagItem. It supports all of the same options as `knife secure bag show`.

#### knife secure bag from file DATA_BAG PATH

This command functions just like `knife data bag from file` and is used to upload either a DataBagItem, EncryptedDataBagItem or a SecureDataBagItem. It supports all of the same options as `knife secure bag show`.

## Recipe DSL

The gem additionally provides a few Recipe DSL methods which may be useful.
```ruby
load_secure_item = secure_data_bag_item(
  data_bag_name,
  data_bag_item,
  cache: false
)

load_plain_item = data_bag_item(data_bag_name, data_bag_item)

convert_plain_to_secure = secure_data_bag_item!(load_plain_item)
```
= Simple task organizer

syctask can be used to create, plan, prioritize and schedule tasks.

==Install

The application can be installed with

    $ gem install syc-task

== Usage

syctask provides basic task organizer functions as create, update, list and complete a task. Additional functions are to plan tasks you want to accomplish today. If you are not sure in which sequence to conduct the tasks you can prioritize them with a pairwise comparison. You can time tasks with start and stop and you can finally extract tasks from a minutes of meetings file. The schedule task command will print a graphical timeline of the working day, assigning the planned tasks to the timeline. Busy times are marked red. Meetings are listed with associated tasks that are assigned to the meetings. With the statistics command you can print a statistical evaluation of task duration and count.

===Create tasks with new

Create a new task in the default task directory ~/.tasks

    $ syctask new "My first task"

Provide a description

    $ syctask new "My first task" --description "Explanation of my first task"

Schedule a task with a follow-up and due date

    $ syctask new "My first task" --follow-up "2013-02-25" --due "2013-03-11"

Set a priority for a task

    $ syctask new "My first task" --prio 3

Prompt for task input

    $ syctask new

will prompt for task titles. Ctrl-D will end input. Except for --description you can also provide short forms for the options.

===Create tasks by scanning from files

When writing minutes of meetings, tasks that should be followed up in syctask can be annotated so they will be recognized by the scan command. The following structure shows how to annotate tasks

    Some text before
    @task;
    title;description;follow_up;due_date,prio
    Schedule meeting;Invite all developers;2016-09-12;2016-10-12;1
    Write letter;Practice writing letters;;;3
    Some text after

The above annotation will only scan the next task because of the singular 'task', where the task values are separated with ';'. The line after the annotation '@task' lists the sequence of the fields of the task. It is also possible to list the tasks in a table, e.g. markdown

    Some text before
    @tasks|
    title           |description               |follow_up |due_date  |prio
    ----------------|--------------------------|----------|----------|----
    Schedule meeting|Invite all developers     |2016-09-12|2016-10-12|1
    Write letter    |Practice writing letters  |          |          |3
    Some text after
    Call partner    |Ask for project's progress|2016-09-14|          |1
    Even more text

The example above scans all tasks due to the plural 'tasks'. It also scans all tasks that are separated with non-task text and occur after the annotation and conform to the field structure. Lines that start with '-' will be ignored. So if you want to skip only a few tasks within a task list prepend them with '-'. If you have tasks with different fields then you have to add another annotation with the new field structure.

Possible fields are

    title       - the title of the task - mandatory field!
    description - the description of the task
    follow_up   - the follow-up date of the task in the form yyyy-mm-dd
    due_date    - the due-date of the task in the form yyyy-mm-dd
    prio        - the priority of the task
    tags        - tags the task is annotated with
    note        - a note for the task

Note: follow_up and due_date can also be written as Follow-up and Due-Date. Also case is ignored. As indicated in the list the title column is mandatory. Without the title column scan will raise an error during a scan. Fields that are not part of the above list will be ignored.
    # | Title                                | Who
    - | ------------------------------------ | ---
    1 | Schedule meeting with all developers | Me
    2 | Write letter to practice writing     | You

In the table only the column Title will be scanned. The '#' and 'Who' columns will be ignored during scan. This table is also a table for a minimum scan structure. You need at least to provide a title column so the scan function will recognize the table as a task list.

Scanning tasks from files

    $ syctask scan 2016-09-10-mom.md 2016-09-09-mom.md

===Plan tasks

The plan command will print tasks and prompt whether to (a)dd or (s)kip the task. If (q)uit is selected the tasks already added will be added to today's task list. If (c)omplete is selected the complete task will be printed and the user will be prompted again for adding the task.

Invoke plan without filter

    $ syctask plan
    1 - My first task
    (a)dd, (c)omplete, (s)kip, (q)uit? a
    Duration (1 = 15 minutes, return 30 minutes): 3
    --> 1 task(s) planned

Invoke plan with a filter

    $ syctask plan --id "1,3,5,8"
    1 - My first task
    (a)dd, (c)omplete, (s)kip, (q)uit?

Move tasks to another day's plan

    $ syctask plan today --move tomorrow --id 3,5

This will move the tasks with ID 3 and 5 from today's plan to tomorrow's plan. The duration will be set to the remaining processing time but at least to 30 minutes.

===Prioritize tasks

Planned tasks can be prioritized in a pairwise comparison. So each task is compared to all other tasks. The task with the highest priority will bubble on top, followed by the task with the next highest priority and so on.

    $ syctask prio
    1: My first task
    2: My second task
    Task 1 has (h)igher or (l)ower priority, or (q)uit: h

    1: My first task
    2: My third task
    Task 1 has (h)igher or (l)ower priority, or (q)uit: l

    1: My third task
    2: My fourth task
    Task 1 has (h)igher or (l)ower priority, or (q)uit: h
    ...

syctask schedule will then print tasks as follows

    Tasks
    -----
    0: 10 - My fourth task
    1: 7 - My third task
    2: 3 - My first task
    3: 9 - My second task
    ...

Instead of conducting pairwise comparison the order of the tasks in the plan can be specified with the -o flag

    $ syctask plan -o 7,3,10,9

The plan or schedule command will print the tasks in the specified order

    Tasks
    -----
    0: 7 - My third task
    1: 3 - My first task
    2: 10 - My fourth task
    3: 9 - My second task

If only a part of the tasks is provided the rest of the tasks is appended to the end of the task plan. If you specify a position flag the prioritized tasks are added at the provided position.

    $ syctask plan -o 7,9 -p 2
    Tasks
    -----
    0: 3 - My first task
    1: 10 - My fourth task
    2: 7 - My third task
    3: 9 - My second task

===Create schedule

The schedule command will print a graphical schedule, assigning the tasks selected with plan. When the schedule command is invoked the planned tasks are added at or after the current time within the time schedule. Tasks that are done and scheduled in the future are not shown. Tasks done and in the past are shown with the actual processing time.

Create a schedule with working time from 8a.m. to 6p.m. and meetings between 9a.m. and 9.30a.m. and 1p.m. and 2.45p.m.
    $ syctask schedule -w "8:00-18:00" -b "9:00-9:30,13:00-14:45"

Add titles to the meetings

    $ syctask schedule -m "Project status,Management meeting"

The output will be

    Meetings
    --------
    A - Project status
    B - Management meeting

        A                   B
    xxx-///-|---|---|---///////-|---|---|---|
    8   9  10  11  12  13  14  15  16  17  18

    Tasks
    -----
    0 - 1: My first task

Adding a task to a meeting

    $ syctask schedule -a "A:0"

will print

    Meetings
    --------
    A - Project status
        1 - My first task
    B - Management meeting

        A                   B
    ----///-|---|---|---///////-|---|---|---|
    8   9  10  11  12  13  14  15  16  17  18

    Tasks
    -----
    0: 1 - My first task

A task that is re-scheduled with

    $ syctask update 1 -f tomorrow

will be shown as done (green) in the schedule, and instead of the separator - it shows ~.

    Tasks
    -----
    0: 1 ~ My first task

A started task will be indicated by *

    $ syctask start 1
    $ syctask sche

    Tasks
    -----
    0: 1 * My first task

===List tasks

List tasks that are not marked as done in short form

    $ syctask list

List all tasks in long form

    $ syctask list --all --complete

Search tasks that match a pattern

    $ syctask list --id "<10" --follow_up ">2013-02-25" --title "My \w task"

===Inspect tasks

Lists each unplanned task and allows you to edit, delete, mark as done or plan for today or another day

    $ syctask inspect
    0016 Create command for inspection
    (e)dit, (d)one, de(l)ete, (p)lan, da(t)e, (c)omplete, (s)kip, (b)ack, (q)uit

===Edit task

Edit a task with ID 10 in vi

    $ syctask edit 10

===Update tasks

Except for title and id all values can be updated. Note and tags are not overridden but rather supplemented with the update value.

Update task with ID 1 and provide some informative note

    $ syctask update 1 --note "Some explanation about the progress on the task"

===Complete tasks

Complete the task with ID 1 and provide a final note

    $ syctask done 1 --note "Finalize my first task"

===Delete tasks

Delete tasks with ID 1,3 and 5 from the default task directory

    $ syctask delete --id 1,3,5

Delete tasks with ID 8 and 12 from the planned tasks of today. The tasks are only removed from the planned tasks and not physically deleted.

    $ syctask delete --plan today --id 8,12

===Settings

The settings command allows you to define default values for the task directory and to create general purpose tasks that can be used for tracking and later statistical evaluation.

Create general purpose tasks for phone and talk

    $ syctask setting --general PHONE,TALK

List all settings

    $ syctask setting --list

===Info

Info searches for the location of a task and lists all task directories

Search for task with id 102

    $ syctask info --id 102

List all task directories

    $ syctask info --taskdir

===Statistics

Shows statistics for work and meeting times as well as for task processing

Evaluate the complete log file

    $ syctask statistics

Evaluate work times, meetings and tasks between 2013-01-01 and 2013-04-14

    $ syctask statistics 2013-01-01 2013-04-14

Evaluate yesterday and today

    $ syctask statistics yesterday today

===Task directory and project directory

The global options --taskdir and --project determine where the command finds or creates the tasks. The default task directory is ~/.tasks, so if no task directory is specified all commands obtain tasks from or create tasks in ~/.tasks. If a project is specified the tasks will be saved to or obtained from the task directory's subdirectory specified with the --project flag.

    --taskdir --project Tasks in
    -         -         default_task_dir
    x         -         task_dir
    -         x         default_task_dir/project
    x         x         task_dir/project

In the table the relation of commands to --taskdir and --project is listed.
    Command    --taskdir --project Comment
    delete     x         x         deletes the tasks in taskdir/project
    done       x         x         marks tasks in taskdir/project as done
    help       -         -
    inspect    x         x         lists task to edit, done, delete, plan
    list       x         x         lists tasks in taskdir/project
    new        x         x         creates tasks in taskdir/project
    plan       x         x         retrieves tasks to plan from taskdir/project
    prio       -         -         input to prio are planned tasks (see plan)
    scan       x         x         creates scanned tasks in taskdir/project
    schedule   -         -         schedules the planned tasks (see plan)
    start      -         -         starts task from planned tasks (see plan)
    statistics -         -         shows statistics of time and count
    stop       -         -         stops task from planned task
    update     x         x         updates task in taskdir/project

===Files

* ID - the id file contains the last issued id.
* IDS - the ids file contains all issued ids.
* Task files - The tasks are named ID.task where ID is any Integer, as in 10.task. The files are saved as YAML files and can be edited directly.
* Planned tasks files - The planned tasks are saved to YYYY-MM-DD_planned_tasks in syctask's system directory. Each task is saved with the task's directory and the ID.
* Schedule files - The schedule is saved to YYYY-MM-DD_time_schedule in the default task directory. The files are saved as YAML files and can be changed manually.
* Log file - Creating schedules and task processings is logged to tasks.log. For example, when a task is started and stopped this action is saved to tasks.log.
* Tracked file - A started task is saved to tracked_tasks. A semaphore file ID.track is created when the task ID is started. When the task is stopped the semaphore file is deleted.
* General purpose tasks - With syctask setting -g PHONE so-called general purpose tasks can be created. These tasks can be used for time tracking and later statistical evaluation to determine the amount of disturbances, e.g. by phone. These tasks are saved to default_tasks. The general purpose tasks themselves are also saved to the .syc/syctask directory as regular task files.
* Default task dir - The default task directory that is used e.g. with list is saved to default_tasks_dir. This can be set with the setting command.

==Working with syctask

To work with syctask and get the most out of it there is a certain process to follow.

===Creating a schedule

==== View tasks

In the morning before I start to work I scan my tasks with syctask list or syctask inspect to get an overview of my open tasks.

    $ syctask list

==== Plan tasks

Next I start the planning phase with syctask plan. If I have a specific schedule for the day I will filter for the respective tasks

    $ syctask plan

==== Prioritize tasks (optionally)

If I want to process the tasks in a specific sequence I prioritize the tasks with

    $ syctask prio

==== Create schedule

I create a schedule with my working hours and meetings that have been scheduled with

    $ syctask -w "8:00-18:00" -b "9:00-10:00,14:30-16:00" -m "Team,Status"

==== Create an agenda

I assign the topics I want to discuss in the meetings to the meetings with

    $ syctask schedule -a "A:1,3,6;B:3,5"

==== Start a task

To begin I start the first task in the schedule with syctask start -p ID (where ID is the ID of the planned (-p) tasks)

    $ syctask start -p 10

==== End a task

To end the task I invoke

    $ syctask stop

This will stop the last started task.

==== Re-schedule a task

If I cannot finish a task then I update the task with a new follow-up date

    $ syctask update 23 -f tomorrow

The task will be shown in today's schedule as done.
==== Complete a task

When the task is done I call

    $ syctask done 23

===Attachments

* E-mails - If an e-mail creates a task I create a new task with syctask new title_of_task. I prepend the subject of the e-mail with the ID and move the e-mail to an <b>open topics</b> directory.
* Files - If I create files in the course of a task I create a folder in the task directory with the ID and save the files in this directory. If there is an existing directory I link to the file from the ID directory.

==Supported platform

syc-task has been tested with Ruby 1.9.3. It also works on Windows using Cygwin.

==Add TAB-completion to syctask

To activate bash's TAB-completion the following lines have to be added to ~/.bashrc

    complete -F get_syctask_commands syctask

    function get_syctask_commands
    {
      if [ -z $2 ] ; then
        COMPREPLY=(`syctask help -c`)
      else
        COMPREPLY=(`syctask help -c $2`)
      fi
    }

After ~/.bashrc has been updated the shell session has to be restarted with

    $ source ~/.bashrc

Now syctask followed by TAB TAB will print

    $ syctask <TAB><TAB>
    delete done list plan scan stop _doc help new prio schedule start update

To complete a command we can type

    $ syctask sch<TAB>

which will complete to

    $ syctask schedule

==Output to Printer

To print syctask's output to a printer pipe the command to lpr

    $ syctask schedule | lpr

This will print the schedule to the default printer. To determine all available printers lpstat can be used with the -a option

    $ lpstat -a
    Canon-LBP6650-3470 accepting requests since Sat 16 Mar 2013 04:26:15 PM CET
    Dell-B1160w-Mono accepting requests since Sat 16 Mar 2013 04:27:45 PM CET

To print to Dell-B1160w-Mono the following command can be used

    $ syctask schedule | lpr -P Dell-B1160w-Mono

==Release Notes

===Version 0.0.1

Implementation of new, update, list and done commands.

===Version 0.0.4

* delete: delete tasks or remove tasks from a task plan
* plan: plan tasks and add them to the task plan
* schedule: create a schedule with work and busy time and assign the tasks from the task plan to the free times

===Version 0.0.6

* start: start a task and track the lead time
* stop: stop the tracking and print the lead time of the task
* start, stop: the task is logged in the ~/.tasks/task.log file when added and when stopped
* prio: prioritize tasks in the task plan, that is, specifying the sequence in which the tasks should be conducted
* plan: --move flag added to move tasks from the specified plan to another day's task plan
* update, new: when a follow-up or a due date is provided the task is added to the provided date's task plan. If both dates are set the task is added to both dates' task plans

===Version 0.0.7

* updated rdoc

===Version 0.1.15

* IDs are now unique independent of the task or project directory. After upgrading from version 0.0.7 or older the user is asked whether to re-index the tasks. It is advised to tar the tasks before re-indexing with

    $ tar cvfz tasks.tar.gz .tasks other_task_directories

* start will now show a timer in the upper right corner of the screen when started with the -t (--timer) flag.

    $ syctask start 10 -t

  In order to use the task timer ncurses has to be installed as the task timer uses tput from the ncurses library.
* The schedule has a heading with the schedule's date and the working time
* Planned tasks are now added at or after the current time if they are not done yet. Done tasks are shown in the past with the actual processing time. Tasks done before the start of the schedule are not shown in the schedule.
* Meetings that are at the current time are indicated with a *.
  Active tasks are indicated with a star, re-scheduled tasks are indicated with a ~.
* Assigning tasks to meetings in a schedule is now done with the task ID
* Statistics show statistics about work time, meeting times, general purpose tasks and task processing. Total, min, max and average time and count are listed. If you have used version 0.0.7 it is advised to delete tasks.log, which lives in ~/.tasks before upgrading or in ~/.syc/syctask after upgrading. Otherwise the statistic results seem odd.
* Meeting time in the time line now shows the correct duration
* Info command searches for the location of a task and lists all task directories with the tasks contained.
* Plan move command sets the duration to the remaining processing time but at least to 15 minutes
* With the setting command the default task directory can be set and general purpose tasks can be created. A general purpose task can be used for tracking to analyse how much time is occupied, for example, by phone calls. setting -l lists all general purpose tasks and the default task directory
* Prio command now takes a position flag together with the order flag to determine where to insert the newly ordered tasks
* All commands that take an ID as argument (done, edit, start, update) look up the task file associated with the id in the ids file. If it is found the provided task directory is not considered for the task file. If the id is not contained in the ids file the task is looked up in the provided directory
* Inspect command allows to list each of today's unplanned tasks to edit, delete, mark as done or plan
* Update command now has a duration flag to set the task's duration

====Version 0.2.0

* Migrated from TestUnit to Minitest
* Implemented _timeleap_ {<img src="https://badge.fury.io/rb/timeleap.svg" alt="Gem Version" />}[http://badge.fury.io/rb/timeleap] which allows to specify additional time distances to yesterday, today, tomorrow. Time distances come in two flavors, as long and short forms. Examples for long forms are
  - yesterday|today|tomorrow
  - next|previous_monday|tuesday|...|sunday
  - monday|tuesday|...|sunday_in|back_1_week|month|year
  - in|back_10_days|weeks|months|years

  Examples for short forms are
  - y|tod|tom
  - n|pmo|tu|..|su
  - mo|tu|...|sui|b1w|m|y
  - i|b10d|w|m|y

====Version 0.2.1

* Fix a bug in `syctask delete --plan`
* Add indicator '>' to task list when task contains notes
* Refactor migration from version 0.0.7 and when the user has deleted system files. The user can now specify the directories where the tasks are located and can also define directories to be excluded. This is especially helpful to omit searching large mounted directories, like those from NAS servers.

====Version 0.3.1

* Add csv output separated by ';' to the list command
* Fix bug when schedule file is empty
* Add scan command to scan tasks from files

====Version 0.3.2

* Fix bug of missing class lib/syctask/scanner.rb

====Version 0.4.2

* delete command can now take ranges of ids, e.g. 1,2,4-8,5,20-25
* inspect can now go back in the task list
* inspect will now show the updated task after making changes to the task in edit
* inspect allows to specify a follow_up date
* scan will ignore columns that are not part of a syctask task
* scan recognizes 'Follow-up' as well as 'follow_up' now.
  That is, an underscore can be replaced with '-'.
* Fix bug when scanning tables that have spaces between separator and column
* When the tasks.log file is missing `syctask inspect` prints a warning with the reason why statistics cannot be printed

==Development

Pull from Github and then run

    $ bundle install

New classes have to be added to 'lib/syctask.rb'

Debugging the interface can be done with GLI_DEBUG:

    $ bundle exec env GLI_DEBUG=true bin/syctask

Building and pushing the gemfile to Rubygems

    $ gem build syctask.gemspec
    $ gem push syc-task-0.2.1.gem

==Tests

The test files live in the folder test and start with test_. There is a rake file available to run all tests

    $ rake test

The CLI is tested with Cucumber. To run the Cucumber features in verbose mode

    $ cucumber

or if you prefer cleaner output run

    $ rake features

==License

syc-task is released under the {MIT License}[http://opensource.org/licenses/MIT]

==Links

* [http://www.github.com/sugaryourcoffee/syc-task] - Source code on GitHub
* [https://rubygems.org/gems/syc-task] - RubyGems
Instead of copying and pasting large outputs or big ruby structures into all the affected test files every time your code changes, you can do it the easy way, possibly saving many hours of boring maintenance work!
Download all files from UWAce, skipping ones already saved
Fleek automatically injects any updated stylesheets when you change and save a file. This allows for faster iteration when working in CSS or any variant of it.
While developing Rails apps, it is often difficult to remember which prefixes route to which controller#action. That is why it is useful to run `rails routes` and put that output in a file for later reference. This gem does that for you. After installing the gem in your Rails project, it listens for saved changes to your `config/routes.rb`. Every time you make a change to routes.rb and save it, restful_routing will look for `restful_routing.rb` in your root directory. It will update it if it is there, or create it if not. `restful_routing.rb` will contain the output of `rails routes`.
Save your work.

    $ wipit
    > git add . && git rm $(git ls-files --deleted) && git commit -m 'WIP'

Save your work to a topic branch.

    $ wipit my_wip_branch
    > git checkout -b my_wip_branch && git add . && git rm $(git ls-files --deleted) && git commit -m 'WIP'

Save your work to a topic branch and push to origin.

    $ wipit my_wip_branch -p
    > git checkout -b my_wip_branch && git add . && git rm $(git ls-files --deleted) && git commit -m 'WIP' && git push origin my_wip_branch
Make your field act as a file. Save the content to a file, and load the content from a file.
Paperclip is intended as an easy file attachment library for ActiveRecord. The intent behind it was to keep setup as easy as possible and to treat files as much like other attributes as possible. This means they aren't saved to their final locations on disk, nor are they deleted if set to nil, until ActiveRecord::Base#save is called. It manages validations based on size and presence, if required. It can transform its assigned image into thumbnails if needed, and the prerequisites are as simple as installing ImageMagick (which, for most modern Unix-based systems, is as easy as installing the right packages). Attached files are saved to the filesystem and referenced in the browser by an easily understandable specification, which has sensible and useful defaults.
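A minimal sketch of the attachment pattern described above, assuming the customary has_attached_file / validates_attachment macros; the User model, the avatar name and the thumbnail style are illustrative.

```ruby
class User < ActiveRecord::Base
  # The attachment behaves like any other attribute: nothing is written to its
  # final location on disk until #save, and setting it to nil removes it on save.
  has_attached_file :avatar, styles: { thumb: "100x100>" }

  validates_attachment :avatar,
    presence: true,
    content_type: { content_type: /\Aimage\/.*\z/ },
    size: { less_than: 5.megabytes }
end

user = User.new
user.avatar = File.open("portrait.jpg")  # assigned, not yet written to disk
user.save                                # files written; thumbnail generated via ImageMagick
user.avatar.url(:thumb)                  # browser-facing path with sensible defaults
```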
Handles uploading files and saving to S3
== DESCRIPTION:

Creates a configuration controller and model that can be used to quickly create a configuration table for your system, so you can store system-wide variables that you'd like the user to be able to set. This gem contains a generator to create a simple configuration model, migration, and interface for your application, complete with working tests.

== FEATURES

* Generates the controller, model, and the associated files.
* You can specify the model name and set the fields for the migrations via the generator.

== SYNOPSIS:

=== Setup and overview

Generate a new configuration system for your application by executing the generator from the root of your application.

    ruby script/generate rails_config_model Configuration

You can also specify the model fields much like the scaffold_resource generator

    ruby script/generate rails_config_model Configuration contact_email:string site_name:string welcome_message:text max_number_of_events:integer

Once installed, you modify the generated migration to include the fields you want to configure. There are a few defaults there to give you an idea. The generator will create a controller mounted at /configuration so you can edit your configurations. Modify this as needed to provide for security.

=== The Edit form

The application's edit form uses the *form* helper method to auto-generate the fields. This may not be optimal and you may wish to actually write your own view instead. See app/views/configuration/edit.rhtml for more details.

=== Usage

Configuration is simply a model for this table. It is designed to handle a single row of a table, and so additional rows cannot be created. If you have a table that looks like this:

    id
    contact_email
    site_name
    welcome_message
    max_number_of_events

You simply grab the row from the table

    @configuration = Configuration.load

and then grab the values out.

    email = @configuration.contact_email

Or save new values

    @configuration = Configuration.load
    @configuration.welcome_message = "This is the default message."
    @configuration.save
Interactive testing and experimentation with regular expressions. To start, type 'regexp-bench'. First command: help. Current nicest feature: saving and restoring test strings from a YAML file. Biggest wishlist item: automatic rspec output.
Jekyll copies static files and duplicates storage use. Hard links point to the same file instead of making copies, thus saving storage.
A local server for saving HTML elements as PNG files.
CSV files: I hate them, you probably do too, but sometimes you need to get data into your system and this is the only way it's happening. If you're deploying a Rails app in a cloud setup, you may have trouble if you're trying to store an uploaded file locally and process it later in a background thread (I know I have). cumulus_csv is one way to solve that problem. You can save your file to your S3 account, and loop over the data inside it at your convenience later. So it doesn't matter where you're doing the processing; you just need the key you used to store the file, and you can process away.
Saves files to Aliyun OSS
Makes a Rails model act like ActiveRecord, but saves data to a cache system, a file, or Redis instead of the database