Documentation DSL that provides method-level comments and links or imports to other comments. Comments can be written in Markdown format, and the current method can be transformed from Ruby code into a Markdown-readable format. Static instance methods, class methods, and constants can be called and used inside ERB tags. All defined areas are generated into a Markdown file per class.
CI::Reporter is an add-on to Test::Unit, RSpec and Cucumber that allows you to generate XML reports of your test, spec and/or feature runs. The resulting files can be read by a continuous integration system that understands Ant's JUnit report XML format, thus allowing your CI system to track test/spec successes and failures.
Synfeld is a web application framework that does practically nothing. Synfeld is little more than a small wrapper for Rack::Mount (see http://github.com/josh/rack-mount). If you want a web framework that is mostly just going to serve up JSON blobs, and occasionally serve up some simple content (e.g. help files) and media, Synfeld makes that easy. The sample app below shows pretty much everything there is to know about Synfeld, in particular:

* How to define routes.
* Simple rendering of erb, haml, html, json, and static files.
* In the case of erb and haml, passing variables into the template is demonstrated.
* A dynamic action where the status code, headers, and body are created 'manually' (/my/special/route below).
* A simple way of creating format-sensitive routes (/alphabet.html vs. /alphabet.json).
* The erb demo link also demos the rendering of a partial (not visible in the code below; you have to look at the template file examples/public/erb_files/erb_test.erb).
Converts Stripe's IIF transaction file into a QBO file for importing into Quickbooks Online. A QBO file is in OFX (Open Financial Exchange) format.
A crossword file format converter
A parser for CREMUL payment transaction files. It parses the CREMUL file and creates a Ruby object structure corresponding to the elements in the file. Also supports converting a CREMUL file to a CSV file format.
ToARFF is a Ruby gem to convert an SQLite database file to an ARFF (Attribute-Relation File Format) file.
One extends standard I18n so that you can store your translations in Comma-Separated Values (CSV) files in a key-value manner, where the key is a word, a phrase, or even a poem if you wish. No limits here (except be aware to escape symbols so the CSV format is kept). The value is the same text as the key but translated to the language specified by the file name you are using (for example, you could write one line to an sp.csv file: `"hello!","hola!"` and use `t 'hello!'` with a Spanish locale to get the "hola!" text).
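The key-value layout itself can be sketched with Ruby's standard CSV library. This only illustrates the file format; it is not this gem's own API:

```ruby
require 'csv'

# Illustration of the key-value layout only (not this gem's API):
# each row of sp.csv pairs the source phrase with its translation.
csv_line = %("hello!","hola!")
translations = CSV.parse(csv_line).to_h
translations['hello!']  # => "hola!"
```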
This library provides a handy interface for creating Wavefront .obj files. You can add vertices and faces to define a 3D object. It handles the syntax of the .obj file format and takes care of vertex definition (no vertex is defined twice, which reduces file size). You can access the result as raw data or write it to a file.
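The .obj syntax the library manages is simple; a hand-rolled sketch (not this gem's API) that emits one triangle shows why index reuse keeps files small:

```ruby
# Minimal illustration of the Wavefront .obj syntax (not this gem's API):
# "v x y z" defines a vertex, "f i j k" a face using 1-based vertex indices.
# Referencing indices instead of re-emitting "v" lines is what avoids
# duplicate vertex definitions.
vertices = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
faces    = [[1, 2, 3]]  # one triangle referencing the vertices above

obj  = vertices.map { |x, y, z| "v #{x} #{y} #{z}" }
obj += faces.map    { |idxs| "f #{idxs.join(' ')}" }
puts obj.join("\n")
# v 0 0 0
# v 1 0 0
# v 0 1 0
# f 1 2 3
```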
gcov2x digests .gcov files generated by llvm-cov and translates them into various common formats
This gem adds the capability to convert GPS Exchange Format (GPX) files to Keyhole Markup Language (KML) files and vice versa.
Convertr works with a database and handles converting tasks. It fetches files from remote sources and converts them to appropriate formats with ffmpeg.
The US Census can be hard to digest for mere mortals. Geographic data is hidden away in shapefiles, a format unsupported by freely available mapping sites like Google Maps and OpenStreetMap. Map servers, like GeoServer and MapServer, have support for shapefiles, but those solutions are often too much for smaller organizations to set up and maintain. Scensus is a project to bring simple mapping of US Census data to the rest of us. Scensus-utils is a set of Ruby scripts and files necessary to transform the census data in use for the Scensus project. You do not need to install Scensus-utils to run Scensus, but they are provided to foster further collaboration on the techniques and tools used to map.
Projectionist allows you to quickly edit files in a project from the command line, using the projections.json format
Easily format the output of your seeds and parse YAML files
Given a file (or a string) containing a container, along with options, it will return a hash of those values. Great for importing poorly formatted CSV files.
Easily process flat files with Flat. Specify the format in a subclass of Flat::File and read and write until the cows come home.
SiteFuel is a Ruby program and lightweight API for processing the source code behind your static and dynamic websites. SiteFuel can remove comments and unneeded whitespace from your CSS, HTML, and JavaScript files (as well as fragments in RHTML and PHP). It can also losslessly compress your PNG and JPEG images. SiteFuel can also deploy your website from SVN or Git. Support for more formats and repositories is planned for future versions.
Given a seeds directory and a file name, it will try to import the most expedient format available.
Plugin for the omnifocus gem to provide rt BTS synchronization. The first time this runs it creates a YAML file in your home directory for the rt url, username, password, default queue and query. The query is optional. If you don't supply it, omnifocus-rt will pull all tickets from the default queue assigned to the specified user. To use a custom query you must supply it in the config file. omnifocus-rt uses the REST interface to RT. More information about query formatting is available here: http://wiki.bestpractical.com/view/REST

Example:

  :rt_url: rt
  :queue: QA
  :username: user
  :password: pass
  :query: "Queue='QA'ANDOwner='Nobody'ANDStatus!='rejected'"
Tool to generate a CloudFormation parameters file in JSON format.
RUIC is a library that understands the XML formats used by NVIDIA's "UI Composer" tool suite. In addition to APIs for analyzing and manipulating these files—the UIC portion of the library—it also includes a mini DSL for writing scripts that can be run by the `ruic` interpreter.
A tool for working with common tokamak file formats from tools like EFIT, CHEASE, TRANSP, CRONOS etc. Contains commands for plotting a summary of content, manipulating the data within them and most importantly, converting one format to another.
Miscellaneous methods that may or may not be useful.

sh:: Safely pass untrusted parameters to sh scripts. Raise an exception if the script returns a non-zero value.
fork_and_check:: Run a block in a forked process and raise an exception if the process returns a non-zero value.
do_and_exit, do_and_exit!:: Run a block. If the block does not run exit!, a successful exec or equivalent, run exit(1) or exit!(1) ourselves. Useful to make sure a forked block either runs a successful exec or dies. Any exceptions from the block are printed to standard error.
overwrite:: Safely replace a file. Writes to a temporary file and then moves it over the old file.
tempname_for:: Generates a unique temporary path based on a filename. The generated filename resides in the same directory as the original one.
try_n_times:: Retries a block of code until it succeeds or a maximum number of attempts (default 10) is exceeded.
Exception#to_formatted_string:: Return a string that looks like how Ruby would dump an uncaught exception.
IO#best_datasync:: Try fdatasync, falling back to fsync, falling back to flush.
Random#exp:: Return a random integer 0 ≤ n < 2^argument (using SecureRandom).
Random#float:: Return a random float 0.0 ≤ n < argument (using SecureRandom).
Random#int:: Return a random integer 0 ≤ n < argument (using SecureRandom).
Password:: A small wrapper for String#crypt that does secure salt generation and easy password verification.
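The overwrite pattern can be sketched with the standard library alone. The helper name below is hypothetical; this is not the gem's code, just the technique it describes:

```ruby
require 'tempfile'

# Sketch of the safe-replace pattern (hypothetical helper name, not this
# gem's code): write the new content to a temporary file in the *same
# directory* as the target, so the final rename stays on one filesystem
# and is atomic on POSIX, then move it over the original.
def overwrite_sketch(path)
  tmp = Tempfile.new(File.basename(path), File.dirname(path))
  yield tmp
  tmp.close                    # flushes buffered data
  File.rename(tmp.path, path)  # atomic replacement of the old file
end
```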
Formatting of status messages and comments for terminal output and source files.
Formats Avro files for other file output plugins.
Wrapper around Google Drive to convert files to different formats.
Can preview files and format the view.
The topologygenerator gem is a tool for building a custom output file format out of a given network topology. The topology can be retrieved from a custom file written in Ruby by the user, or from an SDN controller (by specifying the API URI). The ONOS controller is currently supported, while support for OpenDaylight is in progress. When building your output, you have to write a module that describes how to handle each class defined in the network topology. The topologygenerator gem will then use the defined modules to generate the desired output. You can see examples of how to use this gem on its public GitHub page.
Seeds your ActiveRecord models from YAML files, when the YAML files are formatted like test fixtures
Ruby Cloud SDK wraps Aspose.Cells REST API so you could seamlessly integrate Microsoft Excel® spreadsheet generation, manipulation, conversion & inspection features into your own applications. Aspose.Cells Cloud for Ruby enables you to handle various aspects of Excel files, including cell data, styles, formulas, charts, pivot tables, data validation, comments, drawing objects, images, hyperlinks, and so on. Additionally, it supports operations such as splitting, merging, repairing, and converting to other compatible file formats.
Bindery is a Ruby library for easy packaging of ebooks. You supply the chapter content (in HTML format) and explain the book's structure to bindery, and bindery generates the various other files required by ebook formats and assembles them into a completed book suitable for installation on an ebook reader.
h1. Morning Glory

Morning Glory is comprised of a rake task and helper methods that manage the deployment of static assets into an Amazon CloudFront CDN's S3 bucket, improving the performance of static assets on your Rails web applications.

_NOTE: You will require an Amazon Web Services (AWS) account in order to use this gem. Specifically: S3 for storing the files you wish to distribute, and CloudFront for CDN distribution of those files._

This version of Morning Glory works with Rails 3.x and Ruby 1.9.x.

h2. What does it do?

Morning Glory provides an easy way to deploy Ruby on Rails application assets to the Amazon CloudFront CDN. It solves a number of common issues with S3/CloudFront. For instance, CloudFront won't automatically expire old assets stored on edge nodes when you redeploy new assets (the CloudFront expiry time is 24 hours minimum). To fix this Morning Glory will automatically namespace asset releases for you, then update all references to those renamed assets within your stylesheets, ensuring there are no broken asset links. It also provides a helper method to rewrite all standard Rails asset helper generated URLs to your CloudFront CDN distributions, as well as handling switching between HTTP and HTTPS.

Morning Glory was also built with Sass (Syntactically Awesome Stylesheets) in mind. If you use Sass for your stylesheets they will automatically be built before deployment to the CDN. See http://sass-lang.com/ for more information on Sass.

h2. What it doesn't do

Morning Glory cannot configure your CloudFront distributions for you automatically. You will manually have to log in to your AWS Management Console account, "https://console.aws.amazon.com/cloudfront/home":https://console.aws.amazon.com/cloudfront/home, and set up a distribution pointing to an S3 bucket.

h2. 
Installation

<pre> gem 'morning_glory' </pre>

h2. Usage

Morning Glory provides its functionality via rake tasks. You'll need to specify the target Rails environment configuration you want to deploy for by using the @RAILS_ENV={env}@ parameter (for example, @RAILS_ENV=production@).

<pre> rake morning_glory:cloudfront:deploy RAILS_ENV={YOUR_TARGET_ENVIRONMENT} </pre>

h2. Configuration

h3. The Morning Glory configuration file, @config/morning_glory.yml@

You can specify a configuration section for every Rails environment (production, staging, testing, development). This section can have the following properties defined:

<pre>
---
production:
  enabled: true                   # Is MorningGlory enabled for this environment?
  bucket: cdn.production.foo.com  # The bucket to deploy your assets into
  s3_logging_enabled: true        # Log the deployment to S3
  revision: "20100317134627"      # The revision prefix. This timestamp is automatically generated on deployment
  delete_prev_rev: true           # Delete the previous asset release (save on S3 storage space)
</pre>

h3. The Amazon S3 authentication keys configuration file, @config/s3.yml@

This file provides the access credentials for your Amazon AWS S3 account. You can configure keys for all your environments (production, staging, testing, development).

<pre>
---
production:
  access_key_id: YOUR_ACCESS_KEY
  secret_access_key: YOUR_SECRET_ACCESS_KEY
</pre>

Note: If you are deploying your system to Heroku, you can configure your Amazon AWS S3 information with the environment variables S3_KEY and S3_SECRET instead of using a configuration file.

h3. Set up an asset_host

For each environment that you'd like to utilise the CloudFront CDN for, you'll need to define the asset_host within the @config/environments/{ENVIRONMENT}.rb@ configuration file. As of June 2010 AWS supports HTTPS requests on the CloudFront CDN, so you no longer have to worry about switching servers. (Yay!)

h4. 
Example config/environments/production.rb @asset_host@ snippet:

Here we're targeting a CNAME domain with HTTP support.

<pre>
ActionController::Base.asset_host = Proc.new { |source, request|
  if request.ssl?
    "#{request.protocol}#{request.host_with_port}"
  else
    "#{request.protocol}assets.example.com"
  end
}
</pre>

h3. Why do we have to use a revision-number/namespace/timestamp?

Once an asset has been deployed to the Amazon CloudFront edge servers it cannot be modified - the version exists until it expires (minimum of 24 hours). To get around this we need to prefix the asset path with a revision of some sort - in MorningGlory's case we use a timestamp. That way you can deploy many times during a 24 hour period and always have your latest revision available on your web site.

h2. Dependencies

h3. AWS S3

Required for uploading the assets to the Amazon Web Services S3 buckets. See "http://amazon.rubyforge.org/":http://amazon.rubyforge.org/ for more documentation on installation.

h2. About the name

Perhaps not what you'd expect; a "Morning Glory":http://en.wikipedia.org/wiki/Morning_Glory_cloud is a rare cloud formation observed by glider pilots in Australia (see my side project, "YourFlightLog.com for flight-logging software for paraglider and hang-glider pilots":http://www.yourflightlog.com, from which the Morning Glory plugin was originally extracted).

Copyright (c) 2010 "@AdamBurmister":http://twitter.com/adamburmister/, released under the MIT license
Aspose.Email Cloud is a REST API for creating email applications that work with common email file formats.
Take snapshots of websites. This is a Ruby/RubyCocoa port of the webkit2png.py script by Paul Hammond (http://www.paulhammond.org/webkit2png/) with some minor modifications. Generates a set of image files representing the thumbnail, clipped, and full-size views of a web page in PNG format.
A wrapper of the aws-sdk-polly gem which caches audio files locally. Defaults to a British voice and OGG format.
This gem is under development. Generates a Liveblog indexed file in JSON format as well as Polyrex format.
Generate requests reports in HTML, OrgMode, and SQLite format from an Apache log file. Superseded by Log Sense (https://rubygems.org/gems/log_sense).
A C extension library for parsing accounting files in acct(5) format
This set of commands converts a CSV file to the following formats:

- .strings (iOS)
- .xml (Android)
- .json
- .php
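The per-format output is mechanical. A hedged sketch (not this tool's code) of turning one CSV row into an iOS .strings line and a JSON pair:

```ruby
require 'csv'
require 'json'

# Sketch of the conversion idea (not this gem's implementation): each CSV
# row holds a key and its translated value; emit it in .strings and JSON form.
row = CSV.parse_line(%(greeting,"Hello, world"))
key, value = row

strings_line = %("#{key}" = "#{value}";)  # iOS .strings syntax
json_line    = { key => value }.to_json   # one entry of the JSON output

puts strings_line  # "greeting" = "Hello, world";
puts json_line     # {"greeting":"Hello, world"}
```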
A Ruby library for parsing Xcode file formats.
The purpose of this gem is to provide fast IP lookups for the MaxMindDB file format without parsing the data stored.
AbsoluteRenamer extension that provides date functions (such as NOW or file date, ...) to include in the filename format
README
======

This is a simple API to evaluate information retrieval results. It allows you to load ranked and unranked query results and calculate various evaluation metrics (precision, recall, MAP, kappa) against a previously loaded gold standard.

Start this program from the command line with:

    retreval -l <gold-standard-file> -q <query-results> -f <format> -o <output-prefix>

The options are outlined when you pass no arguments and just call `retreval`. You will find further information in the RDOC documentation and the HOWTO section below. If you want to see an example, use this command:

    retreval -l example/gold_standard.yml -q example/query_results.yml -f yaml -v

INSTALLATION
============

If you have RubyGems, just run

    gem install retreval

You can manually download the sources and build the Gem from there by `cd`ing to the folder where this README is saved and calling

    gem build retreval.gemspec

This will create a gem file, which you just have to install with `gem install <file>` and you're done.

HOWTO
=====

This API supports the following evaluation tasks:

- Loading a Gold Standard that takes a set of documents, queries and corresponding judgements of relevancy (i.e. "Is this document relevant for this query?")
- Calculation of the _kappa measure_ for the given gold standard
- Loading ranked or unranked query results for a certain query
- Calculation of _precision_ and _recall_ for each result
- Calculation of the _F-measure_ for weighing precision and recall
- Calculation of _mean average precision_ for multiple query results
- Calculation of the _11-point precision_ and _average precision_ for ranked query results
- Printing of summary tables and results

Typically, you will want to use this Gem either standalone or within another application's context.

Standalone Usage
================

Call parameters
---------------

After installing the Gem (see INSTALLATION), you can always call `retreval` from the command line.
The typical call is:

    retreval -l <gold-standard-file> -q <query-results> -f <format> -o <output-prefix>

Where you have to define the following options:

- `gold-standard-file` is a file in a specified format that includes all the judgements
- `query-results` is a file in a specified format that includes all the query results in a single file
- `format` is the format that the files will use (either "yaml" or "plain")
- `output-prefix` is the prefix of the output files that will be created

Formats
-------

Right now, we focus on the formats you can use to load data into the API. Currently, we support YAML files that must adhere to a special syntax. So, in order to load a gold standard, we need a file in the following format:

* "query" denotes the query
* "documents" these are the documents judged for this query
* "id" the ID of the document (e.g. its filename, etc.)
* "judgements" an array of judgements, each one with:
  * "relevant" a boolean value of the judgement (relevant or not)
  * "user" an optional identifier of the user

Example file, with one query, two documents, and one judgement:

    - query: 12th air force germany 1957
      documents:
      - id: g5701s.ict21311
        judgements: []
      - id: g5701s.ict21313
        judgements:
        - relevant: false
          user: 2

So, when calling the program, specify the format as `yaml`. For the query results, a similar format is used. Note that it is necessary to specify whether the result sets are ranked or not, as this will heavily influence the calculations. You can specify the score for a document. By "score" we mean the score that your retrieval algorithm has given the document. But this is not necessary. The documents will always be ranked in the order of their appearance, regardless of their score. Thus in the following example, the document with "07" at the end is the first and "25" is the last, regardless of the score.
    ---
    query: 12th air force germany 1957
    ranked: true
    documents:
    - score: 0.44034874
      document: g5701s.ict21307
    - score: 0.44034874
      document: g5701s.ict21309
    - score: 0.44034874
      document: g5701s.ict21311
    - score: 0.44034874
      document: g5701s.ict21313
    - score: 0.44034874
      document: g5701s.ict21315
    - score: 0.44034874
      document: g5701s.ict21317
    - score: 0.44034874
      document: g5701s.ict21319
    - score: 0.44034874
      document: g5701s.ict21321
    - score: 0.44034874
      document: g5701s.ict21323
    - score: 0.44034874
      document: g5701s.ict21325
    ---
    query: 1612
    ranked: true
    documents:
    - score: 1.0174774
      document: g3290.np000144
    - score: 0.763108
      document: g3201b.ct000726
    - score: 0.763108
      document: g3400.ct000886
    - score: 0.6359234
      document: g3201s.ct000130
    ---

**Note**: You can also use the `plain` format, which will load the gold standard in a different way (but not the results):

    my_query	my_document_1	false
    my_query	my_document_2	true

See that every query/document/relevancy pair is separated by a tabulator? You can also add the user's ID in the fourth column if necessary.

Running the evaluation
----------------------

After you have specified the input files and the format, you can run the program. If needed, the `-v` switch will turn on verbose messages, such as information on how many judgements, documents and users there are, but this shouldn't be necessary. The program will first load the gold standard and then calculate the statistics for each result set. The output files are automatically created and contain a YAML representation of the results. Calculations may take a while depending on the amount of judgements and documents. If there are a thousand judgements, always consider a few seconds for each result set.

Interpreting the output files
-----------------------------

Two output files will be created:

- `output_avg_precision.yml`
- `output_statistics.yml`

The first lists the average precision for each query in the query result file.
The second file lists all supported statistics for each query in the query results file. For example, for a ranked evaluation, the first two entries of such a query result statistic look like this:

    ---
    12th air force germany 1957:
    - :precision: 0.0
      :recall: 0.0
      :false_negatives: 1
      :false_positives: 1
      :true_negatives: 2516
      :true_positives: 0
      :document: g5701s.ict21313
      :relevant: false
    - :precision: 0.0
      :recall: 0.0
      :false_negatives: 1
      :false_positives: 2
      :true_negatives: 2515
      :true_positives: 0
      :document: g5701s.ict21317
      :relevant: false

You can see the precision and recall for that specific point and also the number of documents for the contingency table (true/false positives/negatives). Also, the document identifier is given.

API Usage
=========

Using this API in another Ruby application is probably the more common use case. All you have to do is include the Gem in your Ruby or Ruby on Rails application. For details about available methods, please refer to the API documentation generated by RDoc.

**Important**: For this implementation, we use the document ID, the query and the user ID as the primary keys for matching objects. This means that your documents and queries are identified by a string and thus the strings should be sanitized first.

Loading the Gold Standard
-------------------------

Once you have loaded the Gem, you will probably start by creating a new gold standard.

    gold_standard = GoldStandard.new

Then, you can load judgements into this standard, either from a file, or manually:

    gold_standard.load_from_yaml_file "my-file.yml"
    gold_standard.add_judgement :document => doc_id, :query => query_string, :relevant => boolean, :user => John

There is a nice shortcut for the `add_judgement` method.
Both lines are essentially the same:

    gold_standard.add_judgement :document => doc_id, :query => query_string, :relevant => boolean, :user => John
    gold_standard << :document => doc_id, :query => query_string, :relevant => boolean, :user => John

Note the usage of typical Rails hashes for better readability (also, this Gem was developed to be used in a Rails webapp). Now that you have loaded the gold standard, you can do things like:

    gold_standard.contains_judgement? :document => "a document", :query => "the query"
    gold_standard.relevant? :document => "a document", :query => "the query"

Loading the Query Results
-------------------------

Now we want to create a new `QueryResultSet`. A query result set can contain more than one result, which is what we normally want. It is important that you specify the gold standard it belongs to.

    query_result_set = QueryResultSet.new :gold_standard => gold_standard

Just like the Gold Standard, you can read a query result set from a file:

    query_result_set.load_from_yaml_file "my-results-file.yml"

Alternatively, you can load the query results one by one. To do this, you have to create the results (either ranked or unranked) and then add documents:

    my_result = RankedQueryResult.new :query => "the query"
    my_result.add_document :document => "test_document 1", :score => 13
    my_result.add_document :document => "test_document 2", :score => 11
    my_result.add_document :document => "test_document 3", :score => 3

This result would be ranked, obviously, and contain three documents. Documents can have a score, but this is optional. You can also create an Array of documents first and add them altogether:

    documents = Array.new
    documents << ResultDocument.new :id => "test_document 1", :score => 20
    documents << ResultDocument.new :id => "test_document 2", :score => 21
    my_result = RankedQueryResult.new :query => "the query", :documents => documents

The same applies to `UnrankedQueryResult`s, obviously.
The order of ranked documents is the same as the order in which they were added to the result. The `QueryResultSet` will now contain all the results. They are stored in an array called `query_results`, which you can access. So, to iterate over each result, you might want to use the following code:

    query_result_set.query_results.each_with_index do |result, index|
      # ...
    end

Or, more simply:

    for result in query_result_set.query_results
      # ...
    end

Calculating statistics
----------------------

Now to the interesting part: calculating statistics. As mentioned before, there is a conceptual difference between ranked and unranked results. Unranked results are much easier to calculate and thus take less CPU time. No matter if unranked or ranked, you can get the most important statistics by just calling the `statistics` method.

    statistics = my_result.statistics

In the simple case of an unranked result, you will receive a hash with the following information:

* `precision` - the precision of the results
* `recall` - the recall of the results
* `false_negatives` - number of not retrieved but relevant items
* `false_positives` - number of retrieved but nonrelevant items
* `true_negatives` - number of not retrieved and nonrelevant items
* `true_positives` - number of retrieved and relevant items

In case of a ranked result, you will receive an Array that consists of _n_ such Hashes, depending on the number of documents. Each Hash will give you the information at a certain rank, e.g. the following two lines return the recall at the fourth rank.

    statistics = my_ranked_result.statistics
    statistics[3][:recall]

In addition to the information mentioned above, you can also get for each rank:

* `document` - the ID of the document that was returned at this rank
* `relevant` - whether the document was relevant or not

Calculating statistics with missing judgements
----------------------------------------------

Sometimes, you don't have judgements for all document/query pairs in the gold standard.
If this happens, the results will be cleaned up first. This means that every document in the results that doesn't appear to have a judgement will be removed temporarily. As an example, take the following results:

* A
* B
* C
* D

Our gold standard only contains judgements for A and C. The results will be cleaned up first, thus leading to:

* A
* C

With this approach, we can still provide meaningful results (for precision and recall).

Other statistics
----------------

There are several other statistics that can be calculated, for example the **F measure**. The F measure weighs precision and recall and has one parameter, either "alpha" or "beta". Get the F measure like so:

    my_result.f_measure :beta => 1

If you don't specify either alpha or beta, we will assume that beta = 1. Another interesting measure is **Cohen's Kappa**, which tells us about the inter-agreement of assessors. Get the kappa statistic like this:

    gold_standard.kappa

This will calculate the average kappa for each pairwise combination of users in the gold standard. For ranked results one might also want to calculate an **11-point precision**. Just call the following:

    my_ranked_result.eleven_point_precision

This will return a Hash that has indices at the 11 recall levels from 0 to 1 (with steps of 0.1) and the corresponding precision at that recall level.
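The F measure combines precision and recall in the standard way. A sketch of the formula itself (with beta defaulting to 1, matching the gem's default):

```ruby
# Standard F-measure formula: F = (1 + b^2) * P * R / (b^2 * P + R).
# With beta = 1 this is the harmonic mean of precision and recall.
def f_measure(precision, recall, beta: 1.0)
  return 0.0 if precision.zero? && recall.zero?
  ((1 + beta**2) * precision * recall) / (beta**2 * precision + recall)
end

f_measure(0.5, 0.5)  # => 0.5
```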
Parses a hash string of the format `'{ :a => "something" }'` into an actual Ruby hash object `{ a: "something" }`. This is useful when you have mistakenly serialized hashes and saved them in a database column or a text file, and you want to convert them back to hashes without the security issues of executing `eval(hash_string)`. By default only the following classes are allowed to be deserialized:

* TrueClass
* FalseClass
* NilClass
* Numeric
* String
* Array
* Hash

A HashParser::BadHash exception is thrown if unserializable values are present.
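The eval-free approach can be illustrated with a toy parser for the flat case. This is a hypothetical sketch, NOT the gem's implementation (which supports the full class whitelist above):

```ruby
# Hypothetical minimal sketch (not this gem's implementation): parse a flat
# hash string of the form '{ :key => "value", ... }' without calling eval.
# Naively splits on commas, so values must not contain commas themselves.
def parse_simple_hash(str)
  inner = str.strip.sub(/\A\{/, '').sub(/\}\z/, '')
  inner.split(',').each_with_object({}) do |pair, hash|
    key, value = pair.split('=>', 2).map(&:strip)
    hash[key.sub(/\A:/, '').to_sym] = value.gsub(/\A"|"\z/, '')
  end
end

parse_simple_hash('{ :a => "something" }')  # => { a: "something" }
```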
Read & write metadata tags, convert audio files to alternate formats, manage files on the filesystem.
+rdoc2md+ is a utility for converting Rdoc style documents into markdown. The primary motivation is to make a Hoe gem project more github friendly. Hoe depends on a README.txt file in Rdoc format. Github expects a README.md file to display nicely on the webpage. This utility lets you make the .txt file the master and autogenerate the .md version without Repeating Yourself. Incidentally, if you are reading this on github, this README was produced by +rdoc2md+. Kinda meta, eh?
vollbremsung is a HandBrake bulk encoding tool, designed to comfortably re-encode a file structure into a format compatible with DLNA-enabled TVs.
Guesses the format and encoding of .csv/.tsv files to generate options compatible with Ruby's CSV class. Works with Ruby 2.0.
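The guessing idea can be sketched with a simple heuristic. This is an illustration of the technique, not this gem's API:

```ruby
require 'csv'

# Hedged sketch (not this gem's API): pick the candidate separator that
# occurs most often in the first line, and return options ready to pass
# to Ruby's CSV class.
def guess_csv_options(first_line)
  col_sep = ["\t", ";", ","].max_by { |sep| first_line.count(sep) }
  { col_sep: col_sep }
end

opts = guess_csv_options("a\tb\tc")
CSV.parse_line("a\tb\tc", **opts)  # => ["a", "b", "c"]
```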
RDocF95 is an improved RDoc for generating documentation of Fortran 90/95 programs. Differences from the original are given below.

<b>Enhancement of "parser/f95.rb"</b> :: The Fortran 90/95 parse script "parser/f95.rb" (in rdoc-f95, the old name "parsers/parse_f95.rb" is still used) is modified in order to parse almost all entities of the Fortran 90/95 Standard.

<b>Addition of <tt>--ignore-case</tt> option</b> :: In the Fortran 90/95 Standard, upper case letters are not distinguished from lower case letters, although original RDoc produces case-dependent cross-references of Classes and Methods. When this option is specified, upper case is not distinguished from lower case.

<b>Cross-reference of file names</b> :: Cross-referencing of file names is available, as well as of modules, subroutines, and so on.

<b>Modification of <tt>--style</tt> option</b> :: Original RDoc cannot handle a relative-path stylesheet. This patch fixes that.

<b>Conversion of TeX formulae into MathML</b> :: TeX formulae can be converted into MathML format with the --mathml option, if <b>MathML library for Ruby version 0.6b -- 0.8</b> is installed. This library is available from {Bottega of Hiraku (only JAPANESE)}[http://www.hinet.mydns.jp/~hiraku/]. See {RDocF95::Markup::ToXHtmlTexParser}[link:classes/RDocF95/Markup/ToXHtmlTexParser.html] about the format.

<b>*** Caution ***</b> Documents generated with the "--mathml" option may not be displayed correctly depending on the browser and/or its settings. We have confirmed that documents generated with the "--mathml" option are displayed correctly with {Mozilla Firefox}[http://www.mozilla.org/products/firefox/] and Internet Explorer (+ {MathPlayer}[http://www.dessci.com/en/products/mathplayer/]). See {MathML Software - Browsers}[http://www.w3.org/Math/Software/mathml_software_cat_browsers.html] for other browsers.

Some formats of comments in the HTML documents have been changed to improve the analysis features.
See {parse_f95.rb}[link:files/lib/rdoc-f95/parsers/parse_f95_rb.html]