YARD is a documentation generation tool for the Ruby programming language. It enables the user to generate consistent, usable documentation that can easily be exported to a number of formats, and also supports extension for custom Ruby constructs such as custom class-level definitions.
The mime-types library provides a library and registry for information about MIME content type definitions. It can be used to determine defined filename extensions for MIME types, or to use filename extensions to look up the likely MIME type definitions. Version 3.0 is a major release that requires Ruby 2.0 compatibility and removes deprecated functions. The columnar registry format introduced in 2.6 has been made the primary format; the registry data has been extracted from this library and put into {mime-types-data}[https://github.com/mime-types/mime-types-data]. Additionally, mime-types is now licensed exclusively under the MIT licence and there is a code of conduct in effect. There are a number of other smaller changes described in the History file.
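For illustration, the two registry lookups described above look roughly like this with the mime-types 3.x API (the file name is made up for the example):

```ruby
require 'mime/types'

# Filename extension to the likely MIME type definitions:
MIME::Types.type_for('report.pdf').first            # the most likely type, "application/pdf"

# MIME type to its defined filename extensions:
MIME::Types['application/pdf'].first.extensions     # extensions registered for the type, e.g. ["pdf"]
```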
Diff::LCS computes the difference between two Enumerable sequences using the McIlroy-Hunt longest common subsequence (LCS) algorithm. It includes utilities to create a simple HTML diff output format and a standard diff-like tool. This is release 1.4.3, which provides a simple extension that allows Diff::LCS::Change objects to be treated implicitly as arrays and fixes a number of formatting issues. Ruby versions below 2.5 are soft-deprecated, which means that older versions are no longer part of the CI test suite. If any changes have been introduced that break those versions, bug reports and patches will be accepted, but it will be up to the reporter to verify any fixes prior to release. The next major release will completely break compatibility.
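For illustration, the core entry points can be exercised like this (a brief sketch; the sample sequences are made up):

```ruby
require 'diff/lcs'

a = %w[a b c d e]
b = %w[a x c d f]

Diff::LCS.LCS(a, b)    # => ["a", "c", "d"]   (a longest common subsequence)
Diff::LCS.diff(a, b)   # hunks of Diff::LCS::Change objects (here: - "b" + "x", - "e" + "f")
Diff::LCS.sdiff(a, b)  # a flat, side-by-side list of Diff::LCS::ContextChange records
```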
Fast international phone number (E164 standard) normalizing, splitting and formatting. Lots of formatting options: International (+.., 00..), national (0..), and local.
The Google libphonenumber library was taken as the basis for this gem, which uses its data file for validation and number formatting.
A nifty gem, in pure Ruby, to parse PDF files and combine (merge) them with other PDF files, number the pages, watermark them or stamp them, create tables, add basic text objects, etc. (all using the PDF file format).
Although made popular by Windows, INI files can be used on any system thanks to their flexibility. They allow a program to store configuration data, which can then be easily parsed and changed. Two notable systems that use the INI format are Samba and Trac. More information about INI files can be found on the [Wikipedia Page](http://en.wikipedia.org/wiki/INI_file).

### Properties

The basic element contained in an INI file is the property. Every property has a name and a value, delimited by an equals sign *=*. The name appears to the left of the equals sign and the value to the right.

    name=value

### Sections

Section declarations start with *[* and end with *]* as in `[section1]` and `[section2]` shown in the example below. The section declaration marks the beginning of a section. All properties after the section declaration will be associated with that section.

### Comments

All lines beginning with a semicolon *;* or a number sign *#* are considered to be comments. Comment lines are ignored when parsing INI files.

### Example File Format

A typical INI file might look like this:

    [section1]
    ; some comment on section1
    var1 = foo
    var2 = doodle
    var3 = multiline values \
    are also possible

    [section2]
    # another comment
    var1 = baz
    var2 = shoodle
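To make the format above concrete, here is a minimal pure-Ruby sketch of parsing the constructs just described (sections, properties, comments, and backslash line continuations). It is illustrative only, not this gem's API; the `parse_ini` helper and the `settings.ini` path are made up for the example.

```ruby
# Parse [sections], name = value properties, ; and # comments,
# and backslash-continued multiline values.
def parse_ini(text)
  data    = Hash.new { |hash, key| hash[key] = {} }
  section = nil
  pending = nil # property name whose value continues on the next line

  text.each_line do |raw|
    line = raw.strip
    next if line.empty? || line.start_with?(';', '#')   # comments are ignored

    if pending                                          # continuation of a multiline value
      data[section][pending] << ' ' << line.chomp('\\').strip
      pending = nil unless line.end_with?('\\')
    elsif line =~ /\A\[(.+)\]\z/                        # [section] declaration
      section = Regexp.last_match(1)
    elsif line.include?('=')                            # name = value property
      name, value = line.split('=', 2).map(&:strip)
      pending = name if value.end_with?('\\')
      data[section][name] = value.chomp('\\').strip
    end
  end
  data
end

config = parse_ini(File.read('settings.ini'))
config['section1']['var1'] # => "foo"
config['section1']['var3'] # => "multiline values are also possible"
```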
Validate, generate and format CPF/CNPJ numbers. Includes command-line tools.
Docsplit is a command-line utility and Ruby library for splitting apart documents into their component parts: searchable UTF-8 plain text, page images or thumbnails in any format, PDFs, single pages, and document metadata (title, author, number of pages...)
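For illustration, the extractions described above might look like the following sketch from the Ruby side; the paths, sizes, and output directories are assumptions chosen for the example.

```ruby
require 'docsplit'

Docsplit.extract_text(['report.pdf'], output: 'text')      # searchable UTF-8 plain text
Docsplit.extract_images(['report.pdf'],
                        size: '700x', format: [:png],
                        output: 'images')                   # page images / thumbnails
Docsplit.extract_pages(['report.pdf'], output: 'pages')     # one PDF per page
Docsplit.extract_length('report.pdf')                       # => number of pages
```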
Phone number parsing, validation and formatting.
Multiple worksheets can be added to a workbook and formatting can be applied to cells. Text, numbers, formulas, hyperlinks and images can be written to the cells.
GlobalPhone parses, validates, and formats local and international phone numbers according to the E.164 standard using the rules specified in Google's libphonenumber database.
Text::Format provides the ability to nicely format fixed-width text with knowledge of the writeable space (number of columns), margins, and indentation settings. Text::Format can work with either TeX::Hyphen or Text::Hyphen to hyphenate words when formatting.
These functions tell you whether a credit card number is self-consistent using known algorithms for credit card numbers. All non-integer values are removed from the string before parsing so that you don't have to worry about the format of the string.
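The "known algorithms" referred to above are checksum tests such as the standard Luhn (mod-10) check; the sketch below shows that idea in plain Ruby. The `luhn_valid?` helper is an illustrative name, not necessarily this library's API.

```ruby
# Luhn (mod-10) self-consistency check; all non-integer characters are stripped first.
def luhn_valid?(number)
  digits = number.to_s.gsub(/\D/, '').chars.map(&:to_i)
  return false if digits.empty?

  sum = digits.reverse.each_with_index.sum do |digit, index|
    next digit if index.even?          # the check digit and every other digit stay as-is
    doubled = digit * 2
    doubled > 9 ? doubled - 9 : doubled
  end
  (sum % 10).zero?
end

luhn_valid?('4539 1488 0343 6467')  # => true
luhn_valid?('4539-1488-0343-6468')  # => false
```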
Phone number parsing, validation and formatting
Debuggers are great! They help us troubleshoot complicated programming problems by inspecting values produced by code, line by line. They are invaluable when trying to understand what is going on in a large application composed of thousands or millions of lines of code. In day-to-day test-driven development and simple debugging though, a puts statement can be a lot quicker in revealing what is going on than halting execution completely just to inspect a single value or a few. This is certainly true when writing the simplest possible code that could possibly work, and running a test every few seconds or minutes. Problem is you need to locate puts statements in large output logs, know which file names, line numbers, classes, and methods contained the puts statements, find out what variable names are being printed, and see nicely formatted output. Enter puts_debuggerer. A guilt-free puts debugging Ruby gem FTW that prints file names, line numbers, class names, method names, and code statements; and formats output nicely courtesy of awesome_print. Partially inspired by this blog post: https://tenderlovemaking.com/2016/02/05/i-am-a-puts-debuggerer.html (Credit to Tenderlove.)
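For illustration, a typical call looks roughly like this, using the gem's `pd` helper; the surrounding variables are made up for the example.

```ruby
require 'puts_debuggerer'

prices      = [19.99, 5.25, 3.50]
order_total = prices.sum

pd order_total  # prints the file name, line number, the "order_total" source
                # expression, and its nicely formatted value
```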
An inputmask helps the user with input by ensuring a predefined format. This can be useful for dates, numerics, phone numbers...
The autoNumeric.js library automatically formats currency and numbers. With a UJS flavor it is now directly usable in a Rails project. All credits for the autoNumeric.js library itself go to its creator, BobKnothe.
Docsplit is a command-line utility and Ruby library for splitting apart documents into their component parts: searchable UTF-8 plain text, page images or thumbnails in any format, PDFs, single pages, and document metadata (title, author, number of pages...)
A Ruby library for prying open files you can convert to a previewable format, such as video, image and audio files. It includes a number of parser modules that try to recover metadata useful for post-processing and layout while reading the absolute minimum amount of data possible.
Ruby library that parses a phone number and automatically formats it correctly, depending on the country/locale you set.
A Ruby library for parsing and formatting UK phone numbers.
Parse and format i18n messages using ICU MessageFormat patterns, including simple placeholders, number and date placeholders, and selecting among submessages for gender and plural arguments.
Hirb currently provides a mini view framework for console applications, designed to improve irb's default output. Hirb improves console output by providing a smart pager and auto-formatting output. The smart pager detects when an output exceeds a screenful and thus only pages output as needed. Auto-formatting adds a view to an output's class. This is helpful in separating views from content (MVC anyone?). The framework encourages reusing views by letting you package them in classes and associate them with any number of output classes.
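For illustration, turning Hirb on inside an irb session is a minimal sketch using the documented `Hirb.enable`/`Hirb.disable` toggles:

```ruby
require 'hirb'

Hirb.enable    # install the smart pager and auto-formatted views in this console
# ... output that exceeds a screenful is now paged, and registered output classes render through their views
Hirb.disable   # restore irb's default output
```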
The e164 gem can parse and normalize numbers into the e164 format. It provides extra information on the Country Code and National Destination Codes. It can be used standalone or mixed into a model.
Formats a number with SI prefix
Format your telephone numbers
Lookout Lookout is a unit testing framework for Ruby¹ that puts your results in focus. Tests (expectations) are written as follows expect 2 do 1 + 1 end expect ArgumentError do Integer('1 + 1') end expect Array do [1, 2, 3].select{ |i| i % 2 == 0 } end expect [2, 4, 6] do [1, 2, 3].map{ |i| i * 2 } end Lookout is designed to encourage – force, even – unit testing best practices such as • Setting up only one expectation per test • Not setting expectations on non-public APIs • Test isolation This is done by • Only allowing one expectation to be set per test • Providing no (additional) way of accessing private state • Providing no setup and tear-down methods, nor a method of providing test helpers Other important points are • Putting the expected outcome of a test in focus with the steps of the calculation of the actual result only as a secondary concern • A focus on code readability by providing no mechanism for describing an expectation other than the code in the expectation itself • A unified syntax for setting up both state-based and behavior-based expectations The way Lookout works has been heavily influenced by expectations², by {Jay Fields}³. The code base was once also heavily based on expectations, based at Subversion {revision 76}⁴. A lot has happened since then and all of the work past that revision are due to {Nikolai Weibull}⁵. ¹ Ruby: http://ruby-lang.org/ ² Expectations: http://expectations.rubyforge.org/ ³ Jay Fields’s blog: http://blog.jayfields.com/ ⁴ Lookout revision 76: https://github.com/now/lookout/commit/537bedf3e5b3eb4b31c066b3266f42964ac35ebe ⁵ Nikolai Weibull’s home page: http://disu.se/ § Installation Install Lookout with % gem install lookout § Usage Lookout allows you to set expectations on an object’s state or behavior. We’ll begin by looking at state expectations and then take a look at expectations on behavior. § Expectations on State: Literals An expectation can be made on the result of a computation: expect 2 do 1 + 1 end Most objects, in fact, have their state expectations checked by invoking ‹#==› on the expected value with the result as its argument. Checking that a result is within a given range is also simple: expect 0.099..0.101 do 0.4 - 0.3 end Here, the more general ‹#===› is being used on the ‹Range›. § Regexps ‹Strings› of course match against ‹Strings›: expect 'ab' do 'abc'[0..1] end but we can also match a ‹String› against a ‹Regexp›: expect %r{a substring} do 'a string with a substring' end (Note the use of ‹%r{…}› to avoid warnings that will be generated when Ruby parses ‹expect /…/›.) § Modules Checking that the result includes a certain module is done by expecting the ‹Module›. expect Enumerable do [] end This, due to the nature of Ruby, of course also works for classes (as they are also modules): expect String do 'a string' end This doesn’t hinder us from expecting the actual ‹Module› itself: expect Enumerable do Enumerable end or the ‹Class›: expect String do String end for obvious reasons. As you may have figured out yourself, this is accomplished by first trying ‹#==› and, if it returns ‹false›, then trying ‹#===› on the expected ‹Module›. This is also true of ‹Ranges› and ‹Regexps›. 
§ Booleans Truthfulness is expected with ‹true› and ‹false›: expect true do 1 end expect false do nil end Results equaling ‹true› or ‹false› are slightly different: expect TrueClass do true end expect FalseClass do false end The rationale for this is that you should only care if the result of a computation evaluates to a value that Ruby considers to be either true or false, not the exact literals ‹true› or ‹false›. § IO Expecting output on an IO object is also common: expect output("abc\ndef\n") do |io| io.puts 'abc', 'def' end This can be used to capture the output of a formatter that takes an output object as a parameter. § Warnings Expecting warnings from code isn’t very common, but should be done: expect warning('this is your final one!') do warn 'this is your final one!' end expect warning('this is your final one!') do warn '%s:%d: warning: this is your final one!' % [__FILE__, __LINE__] end ‹$VERBOSE› is set to ‹true› during the execution of the block, so you don’t need to do so yourself. If you have other code that depends on the value of $VERBOSE, you can control that with ‹#with_verbose› expect nil do with_verbose nil do $VERBOSE end end § Errors You should always be expecting errors from – and in, but that’s a different story – your code: expect ArgumentError do Integer('1 + 1') end Often, not only the type of the error, but its description, is important to check: expect StandardError.new('message') do raise StandardError.new('message') end As with ‹Strings›, ‹Regexps› can be used to check the error description: expect StandardError.new(/mess/) do raise StandardError.new('message') end § Queries Through Symbols Symbols are generally matched against symbols, but as a special case, symbols ending with ‹?› are seen as expectations on the result of query methods on the result of the block, given that the method is of zero arity and that the result isn’t a Symbol itself. Simply expect a symbol ending with ‹?›: expect :empty? do [] end To expect its negation, expect the same symbol beginning with ‹not_›: expect :not_nil? do [1, 2, 3] end This is the same as expect true do [].empty? end and expect false do [1, 2, 3].empty? end but provides much clearer failure messages. It also makes the expectation’s intent a lot clearer. § Queries By Proxy There’s also a way to make the expectations of query methods explicit by invoking methods on the result of the block. For example, to check that the odd elements of the Array ‹[1, 2, 3]› include ‹1› you could write expect result.to.include? 1 do [1, 2, 3].reject{ |e| e.even? } end You could likewise check that the result doesn’t include 2: expect result.not.to.include? 2 do [1, 2, 3].reject{ |e| e.even? } end This is the same as (and executes a little bit slower than) writing expect false do [1, 2, 3].reject{ |e| e.even? }.include? 2 end but provides much clearer failure messages. Given that these last two examples would fail, you’d get a message saying “[1, 2, 3]#include?(2)” instead of the terser “true≠false”. It also clearly separates the actual expectation from the set-up. The keyword for this kind of expectation is ‹result›. This may be followed by any of the methods • ‹#not› • ‹#to› • ‹#be› • ‹#have› or any other method you will want to call on the result. The methods ‹#to›, ‹#be›, and ‹#have› do nothing except improve readability. The ‹#not› method inverts the expectation.
§ Literal Literals If you need to literally check against any of the types of objects otherwise treated specially, that is, any instances of • ‹Module› • ‹Range› • ‹Regexp› • ‹Exception› • ‹Symbol›, given that it ends with ‹?› you can do so by wrapping it in ‹literal(…)›: expect literal(:empty?) do :empty? end You almost never need to do this, as, for all but symbols, instances will match accordingly as well. § Expectations on Behavior We expect our objects to be on their best behavior. Lookout allows you to make sure that they are. Reception expectations let us verify that a method is called in the way that we expect it to be: expect mock.to.receive.to_str(without_arguments){ '123' } do |o| o.to_str end Here, ‹#mock› creates a mock object, an object that doesn’t respond to anything unless you tell it to. We tell it to expect to receive a call to ‹#to_str› without arguments and have ‹#to_str› return ‹'123'› when called. The mock object is then passed in to the block so that the expectations placed upon it can be fulfilled. Sometimes we only want to make sure that a method is called in the way that we expect it to be, but we don’t care if any other methods are called on the object. A stub object, created with ‹#stub›, expects any method and returns a stub object that, again, expects any method, and thus fits the bill. expect stub.to.receive.to_str(without_arguments){ '123' } do |o| o.to_str if o.convertable? end You don’t have to use a mock object to verify that a method is called: expect Object.to.receive.name do Object.name end As you have figured out by now, the expected method call is set up by calling ‹#receive› after ‹#to›. ‹#Receive› is followed by a call to the method to expect with any expected arguments. The body of the expected method can be given as the block to the method. Finally, an expected invocation count may follow the method. Let’s look at this formal specification in more detail. The expected method arguments may be given in a variety of ways. Let’s introduce them by giving some examples: expect mock.to.receive.a do |m| m.a end Here, the method ‹#a› must be called with any number of arguments. It may be called any number of times, but it must be called at least once. If a method must receive exactly one argument, you can use ‹Object›, as the same matching rules apply for arguments as they do for state expectations: expect mock.to.receive.a(Object) do |m| m.a 0 end If a method must receive a specific argument, you can use that argument: expect mock.to.receive.a(1..2) do |m| m.a 1 end Again, the same matching rules apply for arguments as they do for state expectations, so the previous example expects a call to ‹#a› with 1, 2, or the Range 1..2 as an argument on ‹m›. If a method must be invoked without any arguments you can use ‹without_arguments›: expect mock.to.receive.a(without_arguments) do |m| m.a end You can of course use both ‹Object› and actual arguments: expect mock.to.receive.a(Object, 2, Object) do |m| m.a nil, 2, '3' end The body of the expected method may be given as the block. Here, calling ‹#a› on ‹m› will give the result ‹1›: expect mock.to.receive.a{ 1 } do |m| raise 'not 1' unless m.a == 1 end If no body has been given, the result will be a stub object. To take a block, grab a block parameter and ‹#call› it: expect mock.to.receive.a{ |&b| b.call(1) } do |m| j = 0 m.a{ |i| j = i } raise 'not 1' unless j == 1 end To simulate an ‹#each›-like method, ‹#call› the block several times. 
Invocation count expectations can be set if the default expectation of “at least once” isn’t good enough. The following expectations are possible • ‹#at_most_once› • ‹#once› • ‹#at_least_once› • ‹#twice› And, for a given ‹N›, • ‹#at_most(N)› • ‹#exactly(N)› • ‹#at_least(N)› § Utilities: Stubs Method stubs are another useful thing to have in a unit testing framework. Sometimes you need to override a method that does something a test shouldn’t do, like access and alter bank accounts. We can override – stub out – a method by using the ‹#stub› method. Let’s assume that we have an ‹Account› class that has two methods, ‹#slips› and ‹#total›. ‹#Slips› retrieves the bank slips that keep track of your deposits to the ‹Account› from a database. ‹#Total› sums the ‹#slips›. In the following test we want to make sure that ‹#total› does what it should do without accessing the database. We therefore stub out ‹#slips› and make it return something that we can easily control. expect 6 do |m| stub(Class.new{ def slips raise 'database not available' end def total slips.reduce(0){ |m, n| m.to_i + n.to_i } end }.new, :slips => [1, 2, 3]){ |account| account.total } end To make it easy to create objects with a set of stubbed methods there’s also a convenience method: expect 3 do s = stub(:a => 1, :b => 2) s.a + s.b end This short-hand notation can also be used for the expected value: expect stub(:a => 1, :b => 2).to.receive.a do |o| o.a + o.b end and also works for mock objects: expect mock(:a => 2, :b => 2).to.receive.a do |o| o.a + o.b end Blocks are also allowed when defining stub methods: expect 3 do s = stub(:a => proc{ |a, b| a + b }) s.a(1, 2) end If need be, we can stub out a specific method on an object: expect 'def' do stub('abc', :to_str => 'def'){ |a| a.to_str } end The stub is active during the execution of the block. § Overriding Constants Sometimes you need to override the value of a constant during the execution of some code. Use ‹#with_const› to do just that: expect 'hello' do with_const 'A::B::C', 'hello' do A::B::C end end Here, the constant ‹A::B::C› is set to ‹'hello'› during the execution of the block. None of the constants ‹A›, ‹B›, and ‹C› need to exist for this to work. If a constant doesn’t exist it’s created and set to a new, empty, ‹Module›. The value of ‹A::B::C›, if any, is restored after the block returns and any constants that didn’t previously exist are removed. § Overriding Environment Variables Another thing you often need to control in your tests is the value of environment variables. Depending on such global values is, of course, not a good practice, but is often unavoidable when working with external libraries. ‹#With_env› allows you to override the value of environment variables during the execution of a block by giving it a ‹Hash› of key/value pairs where the key is the name of the environment variable and the value is the value that it should have during the execution of that block: expect 'hello' do with_env 'INTRO' => 'hello' do ENV['INTRO'] end end Any overridden values are restored and any keys that weren’t previously a part of the environment are removed when the block returns. § Overriding Globals You may also want to override the value of a global temporarily: expect 'hello' do with_global :$stdout, StringIO.new do print 'hello' $stdout.string end end You thus provide the name of the global and a value that it should take during the execution of a block of code. 
The block gets passed the overridden value, should you need it: expect true do with_global :$stdout, StringIO.new do |overridden| $stdout != overridden end end § Integration Lookout can be used from Rake¹. Simply install Lookout-Rake²: % gem install lookout-rake and add the following code to your Rakefile require 'lookout-rake-3.0' Lookout::Rake::Tasks::Test.new Make sure to read up on using Lookout-Rake for further benefits and customization. ¹ Read more about Rake at http://rake.rubyforge.org/ ² Get information on Lookout-Rake at http://disu.se/software/lookout-rake/ § API Lookout comes with an API¹ that let’s you create things such as new expected values, difference reports for your types, and so on. ¹ See http://disu.se/software/lookout/api/ § Interface Design The default output of Lookout can Spartanly be described as Spartan. If no errors or failures occur, no output is generated. This is unconventional, as unit testing frameworks tend to dump a lot of information on the user, concerning things such as progress, test count summaries, and flamboyantly colored text telling you that your tests passed. None of this output is needed. Your tests should run fast enough to not require progress reports. The lack of output provides you with the same amount of information as reporting success. Test count summaries are only useful if you’re worried that your tests aren’t being run, but if you worry about that, then providing such output doesn’t really help. Testing your tests requires something beyond reporting some arbitrary count that you would have to verify by hand anyway. When errors or failures do occur, however, the relevant information is output in a format that can easily be parsed by an ‹'errorformat'› for Vim or with {Compilation Mode}¹ for Emacs². Diffs are generated for Strings, Arrays, Hashes, and I/O. ¹ Read up on Compilation mode for Emacs at http://www.emacswiki.org/emacs/CompilationMode ² Visit The GNU Foundation’s Emacs’ software page at http://www.gnu.org/software/emacs/ § External Design Let’s now look at some of the points made in the introduction in greater detail. Lookout only allows you to set one expectation per test. If you’re testing behavior with a reception expectation, then only one method-invocation expectation can be set. If you’re testing state, then only one result can be verified. It may seem like this would cause unnecessary duplication between tests. While this is certainly a possibility, when you actually begin to try to avoid such duplication you find that you often do so by improving your interfaces. This kind of restriction tends to encourage the use of value objects, which are easy to test, and more focused objects, which require simpler tests, as they have less behavior to test, per method. By keeping your interfaces focused you’re also keeping your tests focused. Keeping your tests focused improves, in itself, test isolation, but let’s look at something that hinders it: setup and tear-down methods. Most unit testing frameworks encourage test fragmentation by providing setup and tear-down methods. Setup methods create objects and, perhaps, just their behavior for a set of tests. This means that you have to look in two places to figure out what’s being done in a test. This may work fine for few methods with simple set-ups, but makes things complicated when the number of tests increases and the set-up is complex. 
Often, each test further adjusts the previously set-up object before performing any verifications, further complicating the process of figuring out what state an object has in a given test. Tear-down methods clean up after tests, perhaps by removing records from a database or deleting files from the file-system. The duplication that setup methods and tear-down methods hope to remove is better avoided by improving your interfaces. This can be done by providing better set-up methods for your objects and using idioms such as {Resource Acquisition Is Initialization}¹ for guaranteed clean-up, test or no test. By not using setup and tear-down methods we keep everything pertinent to a test in the test itself, thus improving test isolation. (You also won’t {slow down your tests}² by keeping unnecessary state.) Most unit test frameworks also allow you to create arbitrary test helper methods. Lookout doesn’t. The same rationale as that which has been crystallized in the preceding paragraphs applies. If you need helpers, your interface isn’t good enough. It really is as simple as that. To clarify: there’s nothing inherently wrong with test helper methods, but they should be general enough that they reside in their own library. The support for mocks in Lookout is provided through a set of test helper methods that make it easier to create mocks than it would have been without them. Lookout-rack³ is another example of a library providing test helper methods (well, one method, actually) that are very useful in testing web applications that use Rack⁴. A final point at which some unit test frameworks try to fragment tests further is documentation. These frameworks provide ways of describing the whats and hows of what’s being tested, the rationale being that this will provide documentation of both the test and the code being tested. Describing how a stack data structure is meant to work is a common example. A stack is, however, a rather simple data structure, so such a description provides little, if any, additional information that can’t be extracted from the implementation and its tests themselves. The implementation and its tests are, in fact, their own best documentation. Taking the points made in the previous paragraphs into account, we should already have simple, self-describing interfaces that have easily understood tests associated with them. Rationales for the use of a given data structure or system design are better suited to separate documentation focused on describing exactly those issues. ¹ Read the Wikipedia entry for Resource Acquisition Is Initialization at http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization ² Read how 37signals had problems with slow Test::Unit tests at http://37signals.com/svn/posts/2742-the-road-to-faster-tests/ ³ Visit the Lookout-rack home page at http://disu.se/software/lookout-rack/ ⁴ Visit the Rack Rubyforge project page at http://rack.rubyforge.org/ § Internal Design The internal design of Lookout has had a couple of goals.
• As few external dependencies as possible • As few internal dependencies as possible • Internal extensibility provides external extensibility • As fast load times as possible • As high a ratio of value objects to mutable objects as possible • Each object must have a simple, obvious name • Use mix-ins, not inheritance for shared behavior • As few responsibilities per object as possible • Optimizing for speed can only be done when you have all the facts § External Dependencies Lookout used to depend on Mocha for mocks and stubs. While benchmarking I noticed that a method in Mocha was taking up more than 300 percent of the runtime. It turned out that Mocha’s method for cleaning up back-traces generated when a mock failed was doing something incredibly stupid: backtrace.reject{ |l| Regexp.new(@lib).match(File.expand_path(l)) } Here ‹@lib› is a ‹String› containing the path to the lib sub-directory in the Mocha installation directory. I reported it, provided a patch five days later, then waited. Nothing happened. {254 days later}¹, according to {Wolfram Alpha}², half of my patch was, apparently – I say “apparently”, as I received no notification – applied. By that time I had replaced the whole mocking-and-stubbing subsystem and dropped the dependency. Many Ruby developers claim that Ruby and its gems are too fast-moving for normal package-managing systems to keep up. This is testament to the fact that this isn’t the case and that the real problem is instead related to sloppy practices. Please note that I don’t want to single out the Mocha library nor its developers. I only want to provide an example where relying on external dependencies can be “considered harmful”. ¹ See the Wolfram Alpha calculation at http://www.wolframalpha.com/input/?i=days+between+march+17%2C+2010+and+november+26%2C+2010 ² Check out the Wolfram Alpha computational knowledge engine at http://www.wolframalpha.com/ § Internal Dependencies Lookout has been designed so as to keep each subsystem independent of any other. The diff subsystem is, for example, completely decoupled from any other part of the system as a whole and could be moved into its own library at a time where that would be of interest to anyone. What’s perhaps more interesting is that the diff subsystem is itself very modular. The data passes through a set of filters that depends on what kind of diff has been requested, each filter yielding modified data as it receives it. If you want to read some rather functional Ruby I can highly recommend looking at the code in the ‹lib/lookout/diff› directory. This lookout on the design of the library also makes it easy to extend Lookout. Lookout-rack was, for example, written in about four hours and about 5 of those 240 minutes were spent on setting up the interface between the two. § Optimizing For Speed The following paragraph is perhaps a bit personal, but might be interesting nonetheless. I’ve always worried about speed. The original Expectations library used ‹extend› a lot to add new behavior to objects. Expectations, for example, used to hold the result of their execution (what we now term “evaluation”) by being extended by a module representing success, failure, or error. For the longest time I used this same method, worrying about the increased performance cost that creating new objects for results would incur. I finally came to a point where I felt that the code was so simple and clean that rewriting this part of the code for a benchmark wouldn’t take more than perhaps ten minutes. 
Well, ten minutes later I had my results and they confirmed that creating new objects wasn’t harming performance. I was very pleased. § Naming I hate low lines (underscores). I try to avoid them in method names and I always avoid them in file names. Since the current “best practice” in the Ruby community is to put ‹BeginEndStorage› in a file called ‹begin_end_storage.rb›, I only name constants using a single noun. This has had the added benefit that classes seem to have acquired less behavior, as using a single noun doesn’t allow you to tack on additional behavior without questioning if it’s really appropriate to do so, given the rather limited range of interpretation for that noun. It also seems to encourage the creation of value objects, as something named ‹Range› feels a lot more like a value than ‹BeginEndStorage›. (To reach object-oriented-programming Nirvana you must achieve complete value.) § News § 3.0.0 The ‹xml› expectation has been dropped. It wasn’t documented, didn’t suit very many use cases, and can be better implemented by an external library. The ‹arg› argument matcher for mock method arguments has been removed, as it didn’t provide any benefit over using Object. The ‹#yield› and ‹#each› methods on stub and mock methods have been removed. They were slightly weird and their use case can be implemented using block parameters instead. The ‹stub› method inside ‹expect› blocks now stubs out the methods during the execution of a provided block instead of during the execution of the whole expect block. When a mock method is called too many times, this is reported immediately, with a full backtrace. This makes it easier to pin down what’s wrong with the code. Query expectations were added. Explicit query expectations were added. Fluent boolean expectations, for example, ‹expect nil.to.be.nil?›, have been replaced by query expectations (‹expect :nil? do nil end›) and explicit query expectations (‹expect result.to.be.nil? do nil end›). This was done to discourage creating objects as the expected value and creating objects that change during the course of the test. The ‹literal› expectation was added. Equality (‹#==›) is now checked before “caseity” (‹#===›) for modules, ranges, and regular expressions to match the documentation. § Financing Currently, most of my time is spent at my day job and in my rather busy private life. Please motivate me to spend time on this piece of software by donating some of your money to this project. Yeah, I realize that requesting money to develop software is a bit, well, capitalistic of me. But please realize that I live in a capitalistic society and I need money to have other people give me the things that I need to continue living under the rules of said society. So, if you feel that this piece of software has helped you out enough to warrant a reward, please PayPal a donation to now@disu.se¹. Thanks! Your support won’t go unnoticed! ¹ Send a donation: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=now%40disu%2ese&item_name=Lookout § Reporting Bugs Please report any bugs that you encounter to the {issue tracker}¹. ¹ See https://github.com/now/lookout/issues § Contributors Contributors to the original expectations codebase are mentioned there. We hope no one on that list feels left out of this list. Please {let us know}¹ if you do.
• Nikolai Weibull ¹ Add an issue to the Lookout issue tracker at https://github.com/now/lookout/issues § Licensing Lookout is free software: you may redistribute it and/or modify it under the terms of the {GNU Lesser General Public License, version 3}¹ or later², as published by the {Free Software Foundation}³. ¹ See http://disu.se/licenses/lgpl-3.0/ ² See http://gnu.org/licenses/ ³ See http://fsf.org/
PrettyDiff is a highly customizable library for creating fully featured HTML listings out of unified diff format strings. Includes copy/paste-safe line numbers and built-in syntax highlighting.
Docsplit is a command-line utility and Ruby library for splitting apart documents into their component parts: searchable UTF-8 plain text, page images or thumbnails in any format, PDFs, single pages, and document metadata (title, author, number of pages...)
A JavaScript library for formatting and manipulating numbers.
A tiny JavaScript library for number, money and currency formatting with the Rails asset pipeline
A mixin to parse "spintax", a text format used for automated article generation. Can handle nested spintax, and can count the total number of unique variations. Now also supports consistent unspinning!
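For illustration, spintax templates look like `{Hello|Hi} {world|reader}`, where one option from each braced group is chosen. The sketch below is a plain-Ruby illustration of spinning and counting variations, not this gem's mixin API; the `Spintax` module name and its methods are made up for the example.

```ruby
module Spintax
  GROUP = /\{([^{}]+)\}/  # an innermost {option1|option2|...} group

  module_function

  # Expand every unique variation (fine for small templates; grows combinatorially).
  def variations(text)
    match = text.match(GROUP) or return [text]
    match[1].split('|', -1).flat_map do |option|
      variations(match.pre_match + option + match.post_match)
    end.uniq
  end

  # Pick a single random variation by resolving innermost groups repeatedly,
  # which also handles nested spintax.
  def spin(text)
    text = text.sub(GROUP) { $1.split('|', -1).sample } while text.match?(GROUP)
    text
  end
end

template = '{Hello|Hi|Hey} {world|{dear|kind} reader}'
Spintax.spin(template)             # => e.g. "Hi dear reader"
Spintax.variations(template).size  # => 9
```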
# Overview ## Authentication LaunchDarkly's REST API uses the HTTPS protocol with a minimum TLS version of 1.2. All REST API resources are authenticated with either [personal or service access tokens](https://docs.launchdarkly.com/home/account/api), or session cookies. Other authentication mechanisms are not supported. You can manage personal access tokens on your [**Authorization**](https://app.launchdarkly.com/settings/authorization) page in the LaunchDarkly UI. LaunchDarkly also has SDK keys, mobile keys, and client-side IDs that are used by our server-side SDKs, mobile SDKs, and JavaScript-based SDKs, respectively. **These keys cannot be used to access our REST API**. These keys are environment-specific, and can only perform read-only operations such as fetching feature flag settings. | Auth mechanism | Allowed resources | Use cases | | ----------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- | -------------------------------------------------- | | [Personal or service access tokens](https://docs.launchdarkly.com/home/account/api) | Can be customized on a per-token basis | Building scripts, custom integrations, data export. | | SDK keys | Can only access read-only resources specific to server-side SDKs. Restricted to a single environment. | Server-side SDKs | | Mobile keys | Can only access read-only resources specific to mobile SDKs, and only for flags marked available to mobile keys. Restricted to a single environment. | Mobile SDKs | | Client-side ID | Can only access read-only resources specific to JavaScript-based client-side SDKs, and only for flags marked available to client-side. Restricted to a single environment. | Client-side JavaScript | > #### Keep your access tokens and SDK keys private > > Access tokens should _never_ be exposed in untrusted contexts. Never put an access token in client-side JavaScript, or embed it in a mobile application. LaunchDarkly has special mobile keys that you can embed in mobile apps. If you accidentally expose an access token or SDK key, you can reset it from your [**Authorization**](https://app.launchdarkly.com/settings/authorization) page. > > The client-side ID is safe to embed in untrusted contexts. It's designed for use in client-side JavaScript. ### Authentication using request header The preferred way to authenticate with the API is by adding an `Authorization` header containing your access token to your requests. The value of the `Authorization` header must be your access token. Manage personal access tokens from the [**Authorization**](https://app.launchdarkly.com/settings/authorization) page. ### Authentication using session cookie For testing purposes, you can make API calls directly from your web browser. If you are logged in to the LaunchDarkly application, the API will use your existing session to authenticate calls. If you have a [role](https://docs.launchdarkly.com/home/account/built-in-roles) other than Admin, or have a [custom role](https://docs.launchdarkly.com/home/account/custom-roles) defined, you may not have permission to perform some API calls. You will receive a `401` response code in that case. > ### Modifying the Origin header causes an error > > LaunchDarkly validates that the Origin header for any API request authenticated by a session cookie matches the expected Origin header. The expected Origin header is `https://app.launchdarkly.com`. 
> > If the Origin header does not match what's expected, LaunchDarkly returns an error. This error can prevent the LaunchDarkly app from working correctly. > > Any browser extension that intentionally changes the Origin header can cause this problem. For example, the `Allow-Control-Allow-Origin: *` Chrome extension changes the Origin header to `http://evil.com` and causes the app to fail. > > To prevent this error, do not modify your Origin header. > > LaunchDarkly does not require origin matching when authenticating with an access token, so this issue does not affect normal API usage. ## Representations All resources expect and return JSON response bodies. Error responses also send a JSON body. To learn more about the error format of the API, read [Errors](/#section/Overview/Errors). In practice this means that you always get a response with a `Content-Type` header set to `application/json`. In addition, request bodies for `PATCH`, `POST`, and `PUT` requests must be encoded as JSON with a `Content-Type` header set to `application/json`. ### Summary and detailed representations When you fetch a list of resources, the response includes only the most important attributes of each resource. This is a _summary representation_ of the resource. When you fetch an individual resource, such as a single feature flag, you receive a _detailed representation_ of the resource. The best way to find a detailed representation is to follow links. Every summary representation includes a link to its detailed representation. ### Expanding responses Sometimes the detailed representation of a resource does not include all of the attributes of the resource by default. If this is the case, the request method will clearly document this and describe which attributes you can include in an expanded response. To include the additional attributes, append the `expand` request parameter to your request and add a comma-separated list of the attributes to include. For example, when you append `?expand=members,maintainers` to the [Get team](/tag/Teams#operation/getTeam) endpoint, the expanded response includes both of these attributes. ### Links and addressability The best way to navigate the API is by following links. These are attributes in representations that link to other resources. The API always uses the same format for links: - Links to other resources within the API are encapsulated in a `_links` object - If the resource has a corresponding link to HTML content on the site, it is stored in a special `_site` link Each link has two attributes: - An `href`, which contains the URL - A `type`, which describes the content type For example, a feature resource might return the following: ```json { "_links": { "parent": { "href": "/api/features", "type": "application/json" }, "self": { "href": "/api/features/sort.order", "type": "application/json" } }, "_site": { "href": "/features/sort.order", "type": "text/html" } } ``` From this, you can navigate to the parent collection of features by following the `parent` link, or navigate to the site page for the feature by following the `_site` link. Collections are always represented as a JSON object with an `items` attribute containing an array of representations. Like all other representations, collections have `_links` defined at the top level. Paginated collections include `first`, `last`, `next`, and `prev` links containing a URL with the respective set of elements in the collection. ## Updates Resources that accept partial updates use the `PATCH` verb. 
Most resources support the [JSON patch](/reference#updates-using-json-patch) format. Some resources also support the [JSON merge patch](/reference#updates-using-json-merge-patch) format, and some resources support the [semantic patch](/reference#updates-using-semantic-patch) format, which is a way to specify the modifications to perform as a set of executable instructions. Each resource supports optional [comments](/reference#updates-with-comments) that you can submit with updates. Comments appear in outgoing webhooks, the audit log, and other integrations. When a resource supports both JSON patch and semantic patch, we document both in the request method. However, the specific request body fields and descriptions included in our documentation only match one type of patch or the other. ### Updates using JSON patch [JSON patch](https://datatracker.ietf.org/doc/html/rfc6902) is a way to specify the modifications to perform on a resource. JSON patch uses paths and a limited set of operations to describe how to transform the current state of the resource into a new state. JSON patch documents are always arrays, where each element contains an operation, a path to the field to update, and the new value. For example, in this feature flag representation: ```json { "name": "New recommendations engine", "key": "engine.enable", "description": "This is the description", ... } ``` You can change the feature flag's description with the following patch document: ```json [{ "op": "replace", "path": "/description", "value": "This is the new description" }] ``` You can specify multiple modifications to perform in a single request. You can also test that certain preconditions are met before applying the patch: ```json [ { "op": "test", "path": "/version", "value": 10 }, { "op": "replace", "path": "/description", "value": "The new description" } ] ``` The above patch request tests whether the feature flag's `version` is `10`, and if so, changes the feature flag's description. Attributes that are not editable, such as a resource's `_links`, have names that start with an underscore. ### Updates using JSON merge patch [JSON merge patch](https://datatracker.ietf.org/doc/html/rfc7386) is another format for specifying the modifications to perform on a resource. JSON merge patch is less expressive than JSON patch. However, in many cases it is simpler to construct a merge patch document. For example, you can change a feature flag's description with the following merge patch document: ```json { "description": "New flag description" } ``` ### Updates using semantic patch Some resources support the semantic patch format. A semantic patch is a way to specify the modifications to perform on a resource as a set of executable instructions. Semantic patch allows you to be explicit about intent using precise, custom instructions. In many cases, you can define semantic patch instructions independently of the current state of the resource. This can be useful when defining a change that may be applied at a future date. To make a semantic patch request, you must append `domain-model=launchdarkly.semanticpatch` to your `Content-Type` header. Here's how: ``` Content-Type: application/json; domain-model=launchdarkly.semanticpatch ``` If you call a semantic patch resource without this header, you will receive a `400` response because your semantic patch will be interpreted as a JSON patch. The body of a semantic patch request takes the following properties: * `comment` (string): (Optional) A description of the update. 
* `environmentKey` (string): (Required for some resources only) The environment key. * `instructions` (array): (Required) A list of actions the update should perform. Each action in the list must be an object with a `kind` property that indicates the instruction. If the instruction requires parameters, you must include those parameters as additional fields in the object. The documentation for each resource that supports semantic patch includes the available instructions and any additional parameters. For example: ```json { "comment": "optional comment", "instructions": [ {"kind": "turnFlagOn"} ] } ``` Semantic patches are not applied partially; either all of the instructions are applied or none of them are. If **any** instruction is invalid, the endpoint returns an error and will not change the resource. If all instructions are valid, the request succeeds and the resources are updated if necessary, or left unchanged if they are already in the state you request. ### Updates with comments You can submit optional comments with `PATCH` changes. To submit a comment along with a JSON patch document, use the following format: ```json { "comment": "This is a comment string", "patch": [{ "op": "replace", "path": "/description", "value": "The new description" }] } ``` To submit a comment along with a JSON merge patch document, use the following format: ```json { "comment": "This is a comment string", "merge": { "description": "New flag description" } } ``` To submit a comment along with a semantic patch, use the following format: ```json { "comment": "This is a comment string", "instructions": [ {"kind": "turnFlagOn"} ] } ``` ## Errors The API always returns errors in a common format. Here's an example: ```json { "code": "invalid_request", "message": "A feature with that key already exists", "id": "30ce6058-87da-11e4-b116-123b93f75cba" } ``` The `code` indicates the general class of error. The `message` is a human-readable explanation of what went wrong. The `id` is a unique identifier. Use it when you're working with LaunchDarkly Support to debug a problem with a specific API call. ### HTTP status error response codes | Code | Definition | Description | Possible Solution | | ---- | ----------------- | ------------------------------------------------------------------------------------------- | ---------------------------------------------------------------- | | 400 | Invalid request | The request cannot be understood. | Ensure JSON syntax in request body is correct. | | 401 | Invalid access token | Requestor is unauthorized or does not have permission for this API call. | Ensure your API access token is valid and has the appropriate permissions. | | 403 | Forbidden | Requestor does not have access to this resource. | Ensure that the account member or access token has proper permissions set. | | 404 | Invalid resource identifier | The requested resource is not valid. | Ensure that the resource is correctly identified by ID or key. | | 405 | Method not allowed | The request method is not allowed on this resource. | Ensure that the HTTP verb is correct. | | 409 | Conflict | The API request can not be completed because it conflicts with a concurrent API request. | Retry your request. | | 422 | Unprocessable entity | The API request can not be completed because the update description can not be understood. | Ensure that the request body is correct for the type of patch you are using, either JSON patch or semantic patch. | 429 | Too many requests | Read [Rate limiting](/#section/Overview/Rate-limiting). 
| Wait and try again later. | ## CORS The LaunchDarkly API supports Cross Origin Resource Sharing (CORS) for AJAX requests from any origin. If an `Origin` header is given in a request, it will be echoed as an explicitly allowed origin. Otherwise the request returns a wildcard, `Access-Control-Allow-Origin: *`. For more information on CORS, read the [CORS W3C Recommendation](http://www.w3.org/TR/cors). Example CORS headers might look like: ```http Access-Control-Allow-Headers: Accept, Content-Type, Content-Length, Accept-Encoding, Authorization Access-Control-Allow-Methods: OPTIONS, GET, DELETE, PATCH Access-Control-Allow-Origin: * Access-Control-Max-Age: 300 ``` You can make authenticated CORS calls just as you would make same-origin calls, using either [token or session-based authentication](/#section/Overview/Authentication). If you are using session authentication, you should set the `withCredentials` property for your `xhr` request to `true`. You should never expose your access tokens to untrusted entities. ## Rate limiting We use several rate limiting strategies to ensure the availability of our APIs. Rate-limited calls to our APIs return a `429` status code. Calls to our APIs include headers indicating the current rate limit status. The specific headers returned depend on the API route being called. The limits differ based on the route, authentication mechanism, and other factors. Routes that are not rate limited may not contain any of the headers described below. > ### Rate limiting and SDKs > > LaunchDarkly SDKs are never rate limited and do not use the API endpoints defined here. LaunchDarkly uses a different set of approaches, including streaming/server-sent events and a global CDN, to ensure availability to the routes used by LaunchDarkly SDKs. ### Global rate limits Authenticated requests are subject to a global limit. This is the maximum number of calls that your account can make to the API per ten seconds. All service and personal access tokens on the account share this limit, so exceeding the limit with one access token will impact other tokens. Calls that are subject to global rate limits may return the headers below: | Header name | Description | | ------------------------------ | -------------------------------------------------------------------------------- | | `X-Ratelimit-Global-Remaining` | The maximum number of requests the account is permitted to make per ten seconds. | | `X-Ratelimit-Reset` | The time at which the current rate limit window resets in epoch milliseconds. | We do not publicly document the specific number of calls that can be made globally. This limit may change, and we encourage clients to program against the specification, relying on the two headers defined above, rather than hardcoding to the current limit. ### Route-level rate limits Some authenticated routes have custom rate limits. These also reset every ten seconds. Any service or personal access tokens hitting the same route share this limit, so exceeding the limit with one access token may impact other tokens. Calls that are subject to route-level rate limits return the headers below: | Header name | Description | | ----------------------------- | ----------------------------------------------------------------------------------------------------- | | `X-Ratelimit-Route-Remaining` | The maximum number of requests to the current route the account is permitted to make per ten seconds. | | `X-Ratelimit-Reset` | The time at which the current rate limit window resets in epoch milliseconds. 
| A _route_ represents a specific URL pattern and verb. For example, the [Delete environment](/tag/Environments#operation/deleteEnvironment) endpoint is considered a single route, and each call to delete an environment counts against your route-level rate limit for that route. We do not publicly document the specific number of calls that an account can make to each endpoint per ten seconds. These limits may change, and we encourage clients to program against the specification, relying on the two headers defined above, rather than hardcoding to the current limits. ### IP-based rate limiting We also employ IP-based rate limiting on some API routes. If you hit an IP-based rate limit, your API response will include a `Retry-After` header indicating how long to wait before re-trying the call. Clients must wait at least `Retry-After` seconds before making additional calls to our API, and should employ jitter and backoff strategies to avoid triggering rate limits again. ## OpenAPI (Swagger) and client libraries We have a [complete OpenAPI (Swagger) specification](https://app.launchdarkly.com/api/v2/openapi.json) for our API. We auto-generate multiple client libraries based on our OpenAPI specification. To learn more, visit the [collection of client libraries on GitHub](https://github.com/search?q=topic%3Alaunchdarkly-api+org%3Alaunchdarkly&type=Repositories). You can also use this specification to generate client libraries to interact with our REST API in your language of choice. Our OpenAPI specification is supported by several API-based tools such as Postman and Insomnia. In many cases, you can directly import our specification to explore our APIs. ## Method overriding Some firewalls and HTTP clients restrict the use of verbs other than `GET` and `POST`. In those environments, our API endpoints that use `DELETE`, `PATCH`, and `PUT` verbs are inaccessible. To avoid this issue, our API supports the `X-HTTP-Method-Override` header, allowing clients to "tunnel" `DELETE`, `PATCH`, and `PUT` requests using a `POST` request. For example, to call a `PATCH` endpoint using a `POST` request, you can include `X-HTTP-Method-Override:PATCH` as a header. ## Beta resources We sometimes release new API resources in **beta** status before we release them with general availability. Resources that are in beta are still undergoing testing and development. They may change without notice, including becoming backwards incompatible. We try to promote resources into general availability as quickly as possible. This happens after sufficient testing and when we're satisfied that we no longer need to make backwards-incompatible changes. We mark beta resources with a "Beta" callout in our documentation, pictured below: > ### This feature is in beta > > To use this feature, pass in a header including the `LD-API-Version` key with value set to `beta`. Use this header with each call. To learn more, read [Beta resources](/#section/Overview/Beta-resources). > > Resources that are in beta are still undergoing testing and development. They may change without notice, including becoming backwards incompatible. ### Using beta resources To use a beta resource, you must include a header in the request. If you call a beta resource without this header, you receive a `403` response. Use this header: ``` LD-API-Version: beta ``` ## Federal environments The version of LaunchDarkly that is available on domains controlled by the United States government is different from the version of LaunchDarkly available to the general public. 
If you are an employee or contractor for a United States federal agency and use LaunchDarkly in your work, you likely use the federal instance of LaunchDarkly. If you are working in the federal instance of LaunchDarkly, the base URI for each request is `https://app.launchdarkly.us`. In the "Try it" sandbox for each request, click the request path to view the complete resource path for the federal environment. To learn more, read [LaunchDarkly in federal environments](https://docs.launchdarkly.com/home/infrastructure/federal). ## Versioning We try hard to keep our REST API backwards compatible, but we occasionally have to make backwards-incompatible changes in the process of shipping new features. These breaking changes can cause unexpected behavior if you don't prepare for them accordingly. Updates to our REST API include support for the latest features in LaunchDarkly. We also release a new version of our REST API every time we make a breaking change. We provide simultaneous support for multiple API versions so you can migrate from your current API version to a new version at your own pace. ### Setting the API version per request You can set the API version on a specific request by sending an `LD-API-Version` header, as shown in the example below: ``` LD-API-Version: 20240415 ``` The header value is the version number of the API version you would like to request. The number for each version corresponds to the date the version was released in `yyyymmdd` format. In the example above the version `20240415` corresponds to April 15, 2024. ### Setting the API version per access token When you create an access token, you must specify a specific version of the API to use. This ensures that integrations using this token cannot be broken by version changes. Tokens created before versioning was released have their version set to `20160426`, which is the version of the API that existed before the current versioning scheme, so that they continue working the same way they did before versioning. If you would like to upgrade your integration to use a new API version, you can explicitly set the header described above. > ### Best practice: Set the header for every client or integration > > We recommend that you set the API version header explicitly in any client or integration you build. > > Only rely on the access token API version during manual testing. ### API version changelog |<div style="width:75px">Version</div> | Changes | End of life (EOL) |---|---|---| | `20240415` | <ul><li>Changed several endpoints from unpaginated to paginated. 
Use the `limit` and `offset` query parameters to page through the results.</li> <li>Changed the [list access tokens](/tag/Access-tokens#operation/getTokens) endpoint: <ul><li>Response is now paginated with a default limit of `25`</li></ul></li> <li>Changed the [list account members](/tag/Account-members#operation/getMembers) endpoint: <ul><li>The `accessCheck` filter is no longer available</li></ul></li> <li>Changed the [list custom roles](/tag/Custom-roles#operation/getCustomRoles) endpoint: <ul><li>Response is now paginated with a default limit of `20`</li></ul></li> <li>Changed the [list feature flags](/tag/Feature-flags#operation/getFeatureFlags) endpoint: <ul><li>Response is now paginated with a default limit of `20`</li><li>The `environments` field is now only returned if the request is filtered by environment, using the `filterEnv` query parameter</li><li>The `filterEnv` query parameter supports a maximum of three environments</li><li>The `followerId`, `hasDataExport`, `status`, `contextKindTargeted`, and `segmentTargeted` filters are no longer available</li></ul></li> <li>Changed the [list segments](/tag/Segments#operation/getSegments) endpoint: <ul><li>Response is now paginated with a default limit of `20`</li></ul></li> <li>Changed the [list teams](/tag/Teams#operation/getTeams) endpoint: <ul><li>The `expand` parameter no longer supports including `projects` or `roles`</li><li>In paginated results, the maximum page size is now 100</li></ul></li> <li>Changed the [get workflows](/tag/Workflows#operation/getWorkflows) endpoint: <ul><li>Response is now paginated with a default limit of `20`</li><li>The `_conflicts` field in the response is no longer available</li></ul></li> </ul> | Current | | `20220603` | <ul><li>Changed the [list projects](/tag/Projects#operation/getProjects) return value:<ul><li>Response is now paginated with a default limit of `20`.</li><li>Added support for filter and sort.</li><li>The project `environments` field is now expandable. This field is omitted by default.</li></ul></li><li>Changed the [get project](/tag/Projects#operation/getProject) return value:<ul><li>The `environments` field is now expandable. This field is omitted by default.</li></ul></li></ul> | 2025-04-15 | | `20210729` | <ul><li>Changed the [create approval request](/tag/Approvals#operation/postApprovalRequest) return value. It now returns HTTP Status Code `201` instead of `200`.</li><li> Changed the [get users](/tag/Users#operation/getUser) return value. It now returns a user record, not a user. </li><li>Added additional optional fields to environment, segments, flags, members, and segments, including the ability to create big segments. </li><li> Added default values for flag variations when new environments are created. </li><li>Added filtering and pagination for getting flags and members, including `limit`, `number`, `filter`, and `sort` query parameters. </li><li>Added endpoints for expiring user targets for flags and segments, scheduled changes, access tokens, Relay Proxy configuration, integrations and subscriptions, and approvals. </li></ul> | 2023-06-03 | | `20191212` | <ul><li>[List feature flags](/tag/Feature-flags#operation/getFeatureFlags) now defaults to sending summaries of feature flag configurations, equivalent to setting the query parameter `summary=true`. Summaries omit flag targeting rules and individual user targets from the payload. 
</li><li> Added endpoints for flags, flag status, projects, environments, audit logs, members, users, custom roles, segments, usage, streams, events, and data export. </li></ul> | 2022-07-29 | | `20160426` | <ul><li>Initial versioning of API. Tokens created before versioning have their version set to this.</li></ul> | 2020-12-12 | To learn more about how EOL is determined, read LaunchDarkly's [End of Life (EOL) Policy](https://launchdarkly.com/policies/end-of-life-policy/).
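As a worked example of the header conventions above (explicit per-request API versioning and method overriding), the hedged Ruby sketch below sends a `PATCH` tunneled through `POST` with a pinned `LD-API-Version`. The resource path and request body are placeholders, not a documented endpoint.

```ruby
require 'net/http'
require 'uri'
require 'json'

# Unofficial sketch only. The resource path and patch body are placeholders;
# the header names come from the sections above.
uri = URI('https://app.launchdarkly.com/api/v2/some-resource')

request = Net::HTTP::Post.new(uri)
request['Authorization']          = ENV.fetch('LD_ACCESS_TOKEN', 'api-key-placeholder')
request['LD-API-Version']         = '20240415'  # or 'beta' for beta resources
request['X-HTTP-Method-Override'] = 'PATCH'     # tunnel PATCH through POST
request['Content-Type']           = 'application/json'
request.body = JSON.generate([{ op: 'replace', path: '/name', value: 'example' }])

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts "#{response.code} #{response.message}"
```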
Rails validator for checking the format of a Czech identification number (IC).
CommandSet is a user interface framework. Its focus is a DSL for defining commands, much like Rake or RSpec. A default readline-based terminal interpreter (complete with context-sensitive tab completion and the amenities of readline, such as history editing) is included. It could also be adapted to interact with CGI or a GUI; both are planned. CommandSet has a lot of nice features. First is the domain-specific language for defining commands and sets of commands. Those sets can be neatly composed into larger interfaces, so that useful or standard commands can be reused. It also offers optional application modes, much like Cisco's IOS, but with a little more flexibility. Arguments have their own sub-language that allows them to provide interface hints (like tab completion) as well as input validation. On the output side, CommandSet has a flexible output-capturing mechanism that builds a tree of data as output is generated, even capturing writes to multiple destinations at once (including from multiple threads) and keeping everything straight. Methods that normally write to stdout are interposed and fed into the tree, so you can plug in existing scripts with minimal adjustment. The final output can be presented to the user in a number of formats, including contextual coloring and indentation, or even progress hashes. XML is also provided, although it needs some work; templates are on the way. While you're developing your application, you might find the record and playback utilities useful: cmdset-record starts up with the defaults for your command set and spits out an interaction script, and cmdset-playback replays that script against the live set. This is great for ad hoc testing, usability surveys, and general demos.
Provides date, time, number, and list formatting functionality for various Twitter-supported locales in JavaScript.
Generates classes (or class-like things) that can translate currency values in a specific currency to a number of other currencies. Other currency sources and new formats can be added.
Phone number parsing, validation and formatting.
Allows you to easily format and validate phone numbers in any format you desire (sensible defaults provided).
Format numbers and dates according to The New York Times Manual of Style
Formats RSpec 3 test reports in TAP format with a properly nested display of example groups, and includes stats for the total number of passed, failed, and pending tests per example group. It supports four different TAP format styles.
Plugs directly into Google's native C++ [libphonenumber](https://github.com/google/libphonenumber) for extremely _fast_ and _robust_ phone number parsing, validation, and formatting. On average, most methods are 40x to 50x faster than other Ruby phone number libraries.
R18n backend for Rails I18n, plus R18n filters and a loader to support the Rails translation format. R18n has a nice Ruby-style syntax, filters, flexible locales, custom loaders, translation support for any classes, time and number localization, support for several user languages, and an agnostic core package with out-of-the-box support for Rails, Sinatra, and desktop applications.
Formats a number into a Japanese Yen string (e.g., 12万3,456円).
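To illustrate the target format (and not this gem's actual implementation), a minimal Ruby sketch of the 万-grouped rendering in the example might look like the following; it ignores larger units such as 億 and non-integer amounts.

```ruby
# Illustrative only: renders an integer yen amount with 万 (10,000) grouping,
# e.g. 123456 => "12万3,456円". The real gem may differ in API and edge cases.
def yen_string(amount)
  man, rest = amount.divmod(10_000)
  grouped = rest.to_s.reverse.scan(/\d{1,3}/).join(',').reverse
  return "#{grouped}円" if man.zero?
  rest.zero? ? "#{man}万円" : "#{man}万#{grouped}円"
end

yen_string(123_456) # => "12万3,456円"
```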
A few CLI and general utilities. Includes a numbered-menu select loop utility, an ANSI formatting escape code handler, a text-based histogram maker, k-means and n-means (k-means with minimum optimal k) calculators, various collection utility methods, and a utility for using OptionParser with less code.
Docsplit is a command-line utility and Ruby library for splitting apart documents into their component parts: searchable UTF-8 plain text, page images or thumbnails in any format, PDFs, single pages, and document metadata (title, author, number of pages...)
A plugin for the Merb framework that provides i18n support to translate your site into several languages. It is just a wrapper for the R18n core library. It can format numbers and times according to the rules of the user's locale, has translations for common words, stores translations in YAML format with pluralization and procedures, and has special support for countries with two official languages.
These functions tell you whether a credit card number is self-consistent using known algorithms for credit card numbers. All non-digit characters are removed from the string before parsing, so you don't have to worry about the format of the string.
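The classic self-consistency check for card numbers is the Luhn algorithm. The sketch below is an illustrative Ruby version, not this gem's code; like the description above, it strips non-digit characters before checking.

```ruby
# Illustrative Luhn check, not the gem's actual implementation.
def luhn_valid?(raw)
  digits = raw.to_s.gsub(/\D/, '').chars.map(&:to_i)
  return false if digits.empty?

  # Walking from the rightmost digit, double every second digit and
  # subtract 9 when doubling overflows a single digit.
  sum = digits.reverse.each_with_index.sum do |digit, index|
    if index.odd?
      doubled = digit * 2
      doubled > 9 ? doubled - 9 : doubled
    else
      digit
    end
  end
  (sum % 10).zero?
end

luhn_valid?('4111-1111-1111-1111') # => true
luhn_valid?('4111 1111 1111 1112') # => false
```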
GlobalPhone parses, validates, and formats local and international phone numbers according to the E.164 standard using the rules specified in Google's libphonenumber database.
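A rough usage sketch follows. The method names (`db_path=`, `parse`, `valid?`, `international_string`, `normalize`) and the database path are recalled from the gem's README and should be treated as assumptions to verify against the current documentation.

```ruby
require 'global_phone'

# Assumed setup: GlobalPhone reads a JSON database generated from libphonenumber data.
GlobalPhone.db_path = 'db/global_phone.json' # placeholder path

number = GlobalPhone.parse('(312) 555-1212', :us)
number.valid?                           # true if the number is valid for the territory
number.international_string             # E.164 string such as "+13125551212"
GlobalPhone.normalize('(312) 555-1212') # parse and return the E.164 string in one call
```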