Fluentd is an open source data collector designed to scale and simplify log management. It can collect, process and ship many kinds of data in near real-time.
Process manager for applications with multiple components
Open a child process with handles on its pid, stdin, stdout, and stderr: manage child processes and their IO handles easily.
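For illustration, Ruby's standard library provides the same primitives such a gem wraps; a minimal sketch with Open3 (the grep command is just an example):

  require 'open3'

  # Spawn a child and get handles on its pid and all three IO streams.
  stdin, stdout, stderr, wait_thr = Open3.popen3('grep', 'ruby')
  pid = wait_thr.pid

  stdin.puts "ruby is fun\npython too"
  stdin.close                      # signal EOF so grep can finish

  puts "pid:    #{pid}"
  puts "stdout: #{stdout.read}"    # => "ruby is fun"
  puts "stderr: #{stderr.read}"
  puts "status: #{wait_thr.value.exitstatus}"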
[Deprecated] The PayPal REST SDK provides Ruby APIs to create, process and manage payments. It is recommended to use paypal-server-sdk instead.
Serv-O-Lux is a collection of Ruby classes that are useful for daemon and process management, and for writing your own Ruby services. The code is well documented and tested. It works with Ruby and JRuby, supporting the 1.9 and 2.0 interpreters.
RExec stands for Remote Execute and provides support for executing processes both locally and remotely. It provides a number of different tools to assist with running Ruby code, including remote code execution, daemonization, task management and privilege management.
Manages the process of loading decorators into your Rails application.
Mongrel plugin that provides commands and Capistrano tasks for managing multiple Mongrel processes.
A library utilizing the lsof Unix command for process management.
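As a rough sketch of the underlying approach (the helper below is hypothetical, not this gem's API), lsof's machine-readable field output can be parsed directly; with `-F n`, file names are emitted on lines prefixed with "n":

  # Hypothetical helper: list the files a process has open by shelling
  # out to lsof, the same Unix tool the gem builds on.
  def open_files(pid)
    out = IO.popen(['lsof', '-p', pid.to_s, '-F', 'n'], &:read)
    out.lines.filter_map { |l| l.chomp[1..] if l.start_with?('n') }
  end

  p open_files(Process.pid).first(5)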
A sophisticated process manager for applications running on Linux & macOS.
Guard::Spring automatically manages the Spring process.
Something small for process management
Run and manage multiple processes in separate fibers with predictable behaviour.
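As a rough illustration of the underlying idea (plain fibers, cooperative interleaving only; the gem's actual API and any non-blocking scheduling differ), two commands' output can be stepped through in separate fibers:

  # Each fiber drains one child's output, yielding a line at a time.
  readers = {
    'ticker'  => IO.popen(['ruby', '-e', '3.times { |i| puts "tick #{i}" }']),
    'counter' => IO.popen(['ruby', '-e', '3.times { |i| puts i }'])
  }

  fibers = readers.map do |name, io|
    Fiber.new do
      io.each_line { |line| Fiber.yield "#{name}: #{line}" }
      io.close
      nil
    end
  end

  until fibers.empty?
    fibers.reject! do |f|
      line = f.resume          # run the fiber until its next yield
      print line if line
      !f.alive?                # drop fibers whose output is drained
    end
  end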
ruby-wmi by Gordon Thiesfeld
http://ruby-wmi.rubyforge.org/
gthiesfeld@gmail.com

== DESCRIPTION:

ruby-wmi is an ActiveRecord-style interface for Microsoft's Windows Management Instrumentation provider. Many of the methods in WMI::Base are borrowed directly, or with some modification, from ActiveRecord (http://api.rubyonrails.org/classes/ActiveRecord/Base.html).

The major tool in this library is the #find method. For more information, see WMI::Base. There is also a WMI.sublasses method included for reflection.

== SYNOPSIS:

  # The following code sample kills all processes of a given name
  # (in this case, Notepad), except the oldest.
  require 'ruby-wmi'

  procs = WMI::Win32_Process.find(:all,
    :conditions => { :name => 'Notepad.exe' }
  )
  morituri = procs.sort_by{ |p| p.CreationDate } # those who are about to die
  morituri.shift
  morituri.each{ |p| p.terminate }

== REQUIREMENTS:

* Windows 2000 or newer
* Ruby 1.8

== INSTALL:

  gem install ruby-wmi

== LICENSE:

(The MIT License)

Copyright (c) 2007 Gordon Thiesfeld

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Standardize your website's look and feel with PHCDevworks' Rails view helpers. Our helpers simplify the coding process for managing page headings, taglines, SEO, and title tags, ensuring that your pages always look their best.
Dataflow is a managed service for executing a wide variety of data processing patterns. Note that google-cloud-dataflow-v1beta3 is a version-specific client library. For most uses, we recommend installing the main client library google-cloud-dataflow instead. See the readme for more details.
Small gem for managing looping processes and their signal handling.
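The pattern it automates looks roughly like this with the standard library (illustrative only, not this gem's API):

  # A work loop that exits cleanly on SIGINT or SIGTERM.
  running = true
  %w[INT TERM].each do |sig|
    Signal.trap(sig) { running = false }  # only flip a flag inside the handler
  end

  while running
    puts "working... (pid #{Process.pid})"
    sleep 1
  end
  puts 'shutting down cleanly'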
ChildLabor is a gem that helps manage child processes.
Tokyo Tyrant is a package providing a network interface to the DBM called Tokyo Cabinet. Though the DBM has high performance, it can be troublesome for multiple processes to share the same database, or for remote processes to access it. Thus, Tokyo Tyrant provides concurrent and remote connections to Tokyo Cabinet. It is composed of a server process managing a database and an access library for client applications.
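For illustration: Tokyo Tyrant's protocol is HTTP-compatible, so the standard library alone can talk to it. This is a sketch, not this gem's client API, and it assumes a local ttserver on the default port 1978:

  require 'net/http'

  http = Net::HTTP.new('localhost', 1978)
  http.send_request('PUT', '/greeting', 'hello')  # store key "greeting"
  puts http.get('/greeting').body                 # => "hello"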
Rich-CMS is a module of E9s (http://github.com/archan937/e9s) which provides a frontend for your CMS content. You can use this gem to manage CMS content or translations (in an internationalized application). Installation and setup are very easy: you register content with the Rich-CMS engine and specify the authentication mechanism, and both are one-liners.
Rails engine allowing you to configure and manage business processes in Rails, including user operations, background operations, etc.
It launches and interacts with a set of services from a single terminal window.

* Colorizes output for each process to help distinguish them.
* Useful for integration-test applications where you need to start up several processes and test them all.
* Built because servolux wasn't working very well for me.
* Can detect exactly when a service is successfully launched by watching the output of the process.
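A stripped-down sketch of the same idea (not this gem's API): spawn each service, color its lines, and watch the output for a readiness marker:

  COLORS = ["\e[32m", "\e[36m"]
  services = {
    'ticker' => ['ruby', '-e', '$stdout.sync = true; puts "ready"; 3.times { puts Time.now; sleep 0.2 }'],
    'echoer' => ['ruby', '-e', '$stdout.sync = true; puts "ready"; 3.times { |i| puts i; sleep 0.2 }'],
  }

  threads = services.each_with_index.map do |(name, cmd), i|
    Thread.new do
      IO.popen(cmd, err: [:child, :out]) do |io|   # merge stderr into stdout
        io.each_line do |line|
          puts "#{COLORS[i]}#{name} | #{line.chomp}\e[0m"
          puts "#{name} is up" if line.include?('ready')  # launch detection
        end
      end
    end
  end
  threads.each(&:join)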
Generates WSDL automatically. It manages jobs as asynchronous processes.
Process manager for applications with multiple components
A modular CMS for Ruby on Rails 5 with an intuitive out-of-the-box interface to manage and customize components. Use Binda if you want to speed up your setup process and concentrate directly on what your application is about. Please read the documentation for more information and a quick start guide.
Many other Ruby libraries that simplify parallel execution support one primary use case - crunching through a large queue of small, similar tasks as quickly and efficiently as possible. This library primarily supports the use case of executing a few larger and unrelated tasks in parallel, automatically managing the stdout and passing return values back to the main process. This library was created to be used by Puppet's Beaker test framework to enable parallel execution of some of the framework's tasks, and allow users to execute code in parallel within their tests.
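The core mechanism (forked children passing Marshal-dumped return values back to the parent over a pipe) can be sketched in a few lines; this is a generic illustration, not this library's interface:

  # Run a block in a forked child (Unix only) and return [pid, reader].
  def run_in_child
    reader, writer = IO.pipe
    pid = fork do
      reader.close
      result = yield                       # the task's return value
      writer.write(Marshal.dump(result))
      writer.close
    end
    writer.close
    [pid, reader]
  end

  jobs = [ -> { (1..100).sum }, -> { `hostname`.strip } ]
  handles = jobs.map { |job| run_in_child(&job) }

  handles.each do |pid, reader|
    value = Marshal.load(reader.read)      # read the child's return value
    Process.wait(pid)
    puts "child #{pid} returned #{value.inspect}"
  end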
A simple managed process builder for Ruby projects
Lookout

Lookout is a unit testing framework for Ruby¹ that puts your results in focus. Tests (expectations) are written as follows:

  expect 2 do
    1 + 1
  end

  expect ArgumentError do
    Integer('1 + 1')
  end

  expect Array do
    [1, 2, 3].select{ |i| i % 2 == 0 }
  end

  expect [2, 4, 6] do
    [1, 2, 3].map{ |i| i * 2 }
  end

Lookout is designed to encourage – force, even – unit testing best practices such as

• Setting up only one expectation per test
• Not setting expectations on non-public APIs
• Test isolation

This is done by

• Only allowing one expectation to be set per test
• Providing no (additional) way of accessing private state
• Providing no setup and tear-down methods, nor a method of providing test helpers

Other important points are

• Putting the expected outcome of a test in focus with the steps of the calculation of the actual result only as a secondary concern
• A focus on code readability by providing no mechanism for describing an expectation other than the code in the expectation itself
• A unified syntax for setting up both state-based and behavior-based expectations

The way Lookout works has been heavily influenced by expectations², by {Jay Fields}³. The code base was once also heavily based on expectations, based at Subversion {revision 76}⁴. A lot has happened since then and all of the work past that revision is due to {Nikolai Weibull}⁵.

¹ Ruby: http://ruby-lang.org/
² Expectations: http://expectations.rubyforge.org/
³ Jay Fields’s blog: http://blog.jayfields.com/
⁴ Lookout revision 76: https://github.com/now/lookout/commit/537bedf3e5b3eb4b31c066b3266f42964ac35ebe
⁵ Nikolai Weibull’s home page: http://disu.se/

§ Installation

Install Lookout with

  % gem install lookout

§ Usage

Lookout allows you to set expectations on an object’s state or behavior. We’ll begin by looking at state expectations and then take a look at expectations on behavior.

§ Expectations on State: Literals

An expectation can be made on the result of a computation:

  expect 2 do
    1 + 1
  end

Most objects, in fact, have their state expectations checked by invoking ‹#==› on the expected value with the result as its argument. Checking that a result is within a given range is also simple:

  expect 0.099..0.101 do
    0.4 - 0.3
  end

Here, the more general ‹#===› is being used on the ‹Range›.

§ Regexps

‹Strings› of course match against ‹Strings›:

  expect 'ab' do
    'abc'[0..1]
  end

but we can also match a ‹String› against a ‹Regexp›:

  expect %r{a substring} do
    'a string with a substring'
  end

(Note the use of ‹%r{…}› to avoid warnings that will be generated when Ruby parses ‹expect /…/›.)

§ Modules

Checking that the result includes a certain module is done by expecting the ‹Module›:

  expect Enumerable do
    []
  end

This, due to the nature of Ruby, of course also works for classes (as they are also modules):

  expect String do
    'a string'
  end

This doesn’t hinder us from expecting the actual ‹Module› itself:

  expect Enumerable do
    Enumerable
  end

or the ‹Class›:

  expect String do
    String
  end

for obvious reasons. As you may have figured out yourself, this is accomplished by first trying ‹#==› and, if it returns ‹false›, then trying ‹#===› on the expected ‹Module›. This is also true of ‹Ranges› and ‹Regexps›.
§ Booleans

Truthfulness is expected with ‹true› and ‹false›:

  expect true do
    1
  end

  expect false do
    nil
  end

Results equaling ‹true› or ‹false› are slightly different:

  expect TrueClass do
    true
  end

  expect FalseClass do
    false
  end

The rationale for this is that you should only care if the result of a computation evaluates to a value that Ruby considers to be either true or false, not the exact literals ‹true› or ‹false›.

§ IO

Expecting output on an IO object is also common:

  expect output("abc\ndef\n") do |io|
    io.puts 'abc', 'def'
  end

This can be used to capture the output of a formatter that takes an output object as a parameter.

§ Warnings

Expecting warnings from code isn’t very common, but should be done:

  expect warning('this is your final one!') do
    warn 'this is your final one!'
  end

  expect warning('this is your final one!') do
    warn '%s:%d: warning: this is your final one!' % [__FILE__, __LINE__]
  end

‹$VERBOSE› is set to ‹true› during the execution of the block, so you don’t need to do so yourself. If you have other code that depends on the value of ‹$VERBOSE›, that can be done with ‹#with_verbose›:

  expect nil do
    with_verbose nil do
      $VERBOSE
    end
  end

§ Errors

You should always be expecting errors from – and in, but that’s a different story – your code:

  expect ArgumentError do
    Integer('1 + 1')
  end

Often, not only the type of the error, but its description, is important to check:

  expect StandardError.new('message') do
    raise StandardError.new('message')
  end

As with ‹Strings›, ‹Regexps› can be used to check the error description:

  expect StandardError.new(/mess/) do
    raise StandardError.new('message')
  end

§ Queries Through Symbols

Symbols are generally matched against symbols, but as a special case, symbols ending with ‹?› are seen as expectations on the result of query methods on the result of the block, given that the method is of zero arity and that the result isn’t a Symbol itself. Simply expect a symbol ending with ‹?›:

  expect :empty? do
    []
  end

To expect its negation, expect the same symbol beginning with ‹not_›:

  expect :not_nil? do
    [1, 2, 3]
  end

This is the same as

  expect true do
    [].empty?
  end

and

  expect false do
    [1, 2, 3].empty?
  end

but provides much clearer failure messages. It also makes the expectation’s intent a lot clearer.

§ Queries By Proxy

There’s also a way to make the expectations of query methods explicit by invoking methods on the result of the block. For example, to check that the odd elements of the Array ‹[1, 2, 3]› include ‹1› you could write

  expect result.to.include? 1 do
    [1, 2, 3].reject{ |e| e.even? }
  end

You could likewise check that the result doesn’t include 2:

  expect result.not.to.include? 2 do
    [1, 2, 3].reject{ |e| e.even? }
  end

This is the same as (and executes a little bit slower than) writing

  expect false do
    [1, 2, 3].reject{ |e| e.even? }.include? 2
  end

but provides much clearer failure messages. Given that these two last examples would fail, you’d get a message saying “[1, 2, 3]#include?(2)” instead of the terser “true≠false”. It also clearly separates the actual expectation from the set-up.

The keyword for this kind of expectation is ‹result›. This may be followed by any of the methods

• ‹#not›
• ‹#to›
• ‹#be›
• ‹#have›

or any other method you will want to call on the result. The methods ‹#to›, ‹#be›, and ‹#have› do nothing except improve readability. The ‹#not› method inverts the expectation.
§ Literal Literals

If you need to literally check against any of the types of objects otherwise treated specially, that is, any instances of

• ‹Module›
• ‹Range›
• ‹Regexp›
• ‹Exception›
• ‹Symbol›, given that it ends with ‹?›

you can do so by wrapping it in ‹literal(…)›:

  expect literal(:empty?) do
    :empty?
  end

You almost never need to do this, as, for all but symbols, instances will match accordingly as well.

§ Expectations on Behavior

We expect our objects to be on their best behavior. Lookout allows you to make sure that they are. Reception expectations let us verify that a method is called in the way that we expect it to be:

  expect mock.to.receive.to_str(without_arguments){ '123' } do |o|
    o.to_str
  end

Here, ‹#mock› creates a mock object, an object that doesn’t respond to anything unless you tell it to. We tell it to expect to receive a call to ‹#to_str› without arguments and have ‹#to_str› return ‹'123'› when called. The mock object is then passed in to the block so that the expectations placed upon it can be fulfilled.

Sometimes we only want to make sure that a method is called in the way that we expect it to be, but we don’t care if any other methods are called on the object. A stub object, created with ‹#stub›, expects any method and returns a stub object that, again, expects any method, and thus fits the bill.

  expect stub.to.receive.to_str(without_arguments){ '123' } do |o|
    o.to_str if o.convertable?
  end

You don’t have to use a mock object to verify that a method is called:

  expect Object.to.receive.name do
    Object.name
  end

As you have figured out by now, the expected method call is set up by calling ‹#receive› after ‹#to›. ‹#Receive› is followed by a call to the method to expect with any expected arguments. The body of the expected method can be given as the block to the method. Finally, an expected invocation count may follow the method. Let’s look at this formal specification in more detail.

The expected method arguments may be given in a variety of ways. Let’s introduce them by giving some examples:

  expect mock.to.receive.a do |m|
    m.a
  end

Here, the method ‹#a› must be called with any number of arguments. It may be called any number of times, but it must be called at least once.

If a method must receive exactly one argument, you can use ‹Object›, as the same matching rules apply for arguments as they do for state expectations:

  expect mock.to.receive.a(Object) do |m|
    m.a 0
  end

If a method must receive a specific argument, you can use that argument:

  expect mock.to.receive.a(1..2) do |m|
    m.a 1
  end

Again, the same matching rules apply for arguments as they do for state expectations, so the previous example expects a call to ‹#a› with 1, 2, or the Range 1..2 as an argument on ‹m›.

If a method must be invoked without any arguments you can use ‹without_arguments›:

  expect mock.to.receive.a(without_arguments) do |m|
    m.a
  end

You can of course use both ‹Object› and actual arguments:

  expect mock.to.receive.a(Object, 2, Object) do |m|
    m.a nil, 2, '3'
  end

The body of the expected method may be given as the block. Here, calling ‹#a› on ‹m› will give the result ‹1›:

  expect mock.to.receive.a{ 1 } do |m|
    raise 'not 1' unless m.a == 1
  end

If no body has been given, the result will be a stub object. To take a block, grab a block parameter and ‹#call› it:

  expect mock.to.receive.a{ |&b| b.call(1) } do |m|
    j = 0
    m.a{ |i| j = i }
    raise 'not 1' unless j == 1
  end

To simulate an ‹#each›-like method, ‹#call› the block several times.
Invocation count expectations can be set if the default expectation of “at least once” isn’t good enough. The following expectations are possible:

• ‹#at_most_once›
• ‹#once›
• ‹#at_least_once›
• ‹#twice›

And, for a given ‹N›,

• ‹#at_most(N)›
• ‹#exactly(N)›
• ‹#at_least(N)›

§ Utilities: Stubs

Method stubs are another useful thing to have in a unit testing framework. Sometimes you need to override a method that does something a test shouldn’t do, like access and alter bank accounts. We can override – stub out – a method by using the ‹#stub› method. Let’s assume that we have an ‹Account› class that has two methods, ‹#slips› and ‹#total›. ‹#Slips› retrieves the bank slips that keep track of your deposits to the ‹Account› from a database. ‹#Total› sums the ‹#slips›. In the following test we want to make sure that ‹#total› does what it should do without accessing the database. We therefore stub out ‹#slips› and make it return something that we can easily control.

  expect 6 do |m|
    stub(Class.new{
           def slips
             raise 'database not available'
           end

           def total
             slips.reduce(0){ |m, n| m.to_i + n.to_i }
           end
         }.new,
         :slips => [1, 2, 3]){ |account| account.total }
  end

To make it easy to create objects with a set of stubbed methods there’s also a convenience method:

  expect 3 do
    s = stub(:a => 1, :b => 2)
    s.a + s.b
  end

This short-hand notation can also be used for the expected value:

  expect stub(:a => 1, :b => 2).to.receive.a do |o|
    o.a + o.b
  end

and also works for mock objects:

  expect mock(:a => 2, :b => 2).to.receive.a do |o|
    o.a + o.b
  end

Blocks are also allowed when defining stub methods:

  expect 3 do
    s = stub(:a => proc{ |a, b| a + b })
    s.a(1, 2)
  end

If need be, we can stub out a specific method on an object:

  expect 'def' do
    stub('abc', :to_str => 'def'){ |a| a.to_str }
  end

The stub is active during the execution of the block.

§ Overriding Constants

Sometimes you need to override the value of a constant during the execution of some code. Use ‹#with_const› to do just that:

  expect 'hello' do
    with_const 'A::B::C', 'hello' do
      A::B::C
    end
  end

Here, the constant ‹A::B::C› is set to ‹'hello'› during the execution of the block. None of the constants ‹A›, ‹B›, and ‹C› need to exist for this to work. If a constant doesn’t exist it’s created and set to a new, empty, ‹Module›. The value of ‹A::B::C›, if any, is restored after the block returns and any constants that didn’t previously exist are removed.

§ Overriding Environment Variables

Another thing you often need to control in your tests is the value of environment variables. Depending on such global values is, of course, not a good practice, but is often unavoidable when working with external libraries. ‹#With_env› allows you to override the value of environment variables during the execution of a block by giving it a ‹Hash› of key/value pairs, where the key is the name of the environment variable and the value is the value that it should have during the execution of that block:

  expect 'hello' do
    with_env 'INTRO' => 'hello' do
      ENV['INTRO']
    end
  end

Any overridden values are restored and any keys that weren’t previously a part of the environment are removed when the block returns.

§ Overriding Globals

You may also want to override the value of a global temporarily:

  expect 'hello' do
    with_global :$stdout, StringIO.new do
      print 'hello'
      $stdout.string
    end
  end

You thus provide the name of the global and a value that it should take during the execution of a block of code.
The block gets passed the overridden value, should you need it:

  expect true do
    with_global :$stdout, StringIO.new do |overridden|
      $stdout != overridden
    end
  end

§ Integration

Lookout can be used from Rake¹. Simply install Lookout-Rake²:

  % gem install lookout-rake

and add the following code to your Rakefile:

  require 'lookout-rake-3.0'

  Lookout::Rake::Tasks::Test.new

Make sure to read up on using Lookout-Rake for further benefits and customization.

¹ Read more about Rake at http://rake.rubyforge.org/
² Get information on Lookout-Rake at http://disu.se/software/lookout-rake/

§ API

Lookout comes with an API¹ that lets you create things such as new expected values, difference reports for your types, and so on.

¹ See http://disu.se/software/lookout/api/

§ Interface Design

The default output of Lookout can Spartanly be described as Spartan. If no errors or failures occur, no output is generated. This is unconventional, as unit testing frameworks tend to dump a lot of information on the user, concerning things such as progress, test count summaries, and flamboyantly colored text telling you that your tests passed. None of this output is needed. Your tests should run fast enough to not require progress reports. The lack of output provides you with the same amount of information as reporting success. Test count summaries are only useful if you’re worried that your tests aren’t being run, but if you worry about that, then providing such output doesn’t really help. Testing your tests requires something beyond reporting some arbitrary count that you would have to verify by hand anyway.

When errors or failures do occur, however, the relevant information is output in a format that can easily be parsed by an ‹'errorformat'› for Vim or with {Compilation Mode}¹ for Emacs². Diffs are generated for Strings, Arrays, Hashes, and I/O.

¹ Read up on Compilation mode for Emacs at http://www.emacswiki.org/emacs/CompilationMode
² Visit The GNU Foundation’s Emacs’ software page at http://www.gnu.org/software/emacs/

§ External Design

Let’s now look at some of the points made in the introduction in greater detail.

Lookout only allows you to set one expectation per test. If you’re testing behavior with a reception expectation, then only one method-invocation expectation can be set. If you’re testing state, then only one result can be verified.

It may seem like this would cause unnecessary duplication between tests. While this is certainly a possibility, when you actually begin to try to avoid such duplication you find that you often do so by improving your interfaces. This kind of restriction tends to encourage the use of value objects, which are easy to test, and more focused objects, which require simpler tests, as they have less behavior to test, per method. By keeping your interfaces focused you’re also keeping your tests focused.

Keeping your tests focused improves, in itself, test isolation, but let’s look at something that hinders it: setup and tear-down methods. Most unit testing frameworks encourage test fragmentation by providing setup and tear-down methods. Setup methods create objects and, perhaps, just their behavior for a set of tests. This means that you have to look in two places to figure out what’s being done in a test. This may work fine for few methods with simple set-ups, but makes things complicated when the number of tests increases and the set-up is complex.
Often, each test further adjusts the previously set-up object before performing any verifications, further complicating the process of figuring out what state an object has in a given test. Tear-down methods clean up after tests, perhaps by removing records from a database or deleting files from the file-system.

The duplication that setup methods and tear-down methods hope to remove is better avoided by improving your interfaces. This can be done by providing better set-up methods for your objects and using idioms such as {Resource Acquisition Is Initialization}¹ for guaranteed clean-up, test or no test. By not using setup and tear-down methods we keep everything pertinent to a test in the test itself, thus improving test isolation. (You also won’t {slow down your tests}² by keeping unnecessary state.)

Most unit test frameworks also allow you to create arbitrary test helper methods. Lookout doesn’t. The same rationale as that crystallized in the preceding paragraphs applies: if you need helpers, your interface isn’t good enough. It really is as simple as that.

To clarify: there’s nothing inherently wrong with test helper methods, but they should be general enough that they reside in their own library. The support for mocks in Lookout is provided through a set of test helper methods that make it easier to create mocks than it would have been without them. Lookout-rack³ is another example of a library providing test helper methods (well, one method, actually) that are very useful in testing web applications that use Rack⁴.

A final point at which some unit test frameworks try to fragment tests further is documentation. These frameworks provide ways of describing the whats and hows of what’s being tested, the rationale being that this will provide documentation of both the test and the code being tested. Describing how a stack data structure is meant to work is a common example. A stack is, however, a rather simple data structure, so such a description provides little, if any, additional information that can’t be extracted from the implementation and its tests themselves. The implementation and its tests are, in fact, their own best documentation. Taking the points made in the previous paragraphs into account, we should already have simple, self-describing interfaces that have easily understood tests associated with them. Rationales for the use of a given data structure, or system-design documentation, are better suited to separate documentation focused on describing exactly those issues.

¹ Read the Wikipedia entry for Resource Acquisition Is Initialization at http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Initialization
² Read how 37signals had problems with slow Test::Unit tests at http://37signals.com/svn/posts/2742-the-road-to-faster-tests/
³ Visit the Lookout-rack home page at http://disu.se/software/lookout-rack/
⁴ Visit the Rack Rubyforge project page at http://rack.rubyforge.org/

§ Internal Design

The internal design of Lookout has had a couple of goals.
• As few external dependencies as possible
• As few internal dependencies as possible
• Internal extensibility provides external extensibility
• As fast load times as possible
• As high a ratio of value objects to mutable objects as possible
• Each object must have a simple, obvious name
• Use mix-ins, not inheritance, for shared behavior
• As few responsibilities per object as possible
• Optimizing for speed can only be done when you have all the facts

§ External Dependencies

Lookout used to depend on Mocha for mocks and stubs. While benchmarking I noticed that a method in Mocha was taking up more than 300 percent of the runtime. It turned out that Mocha’s method for cleaning up back-traces generated when a mock failed was doing something incredibly stupid:

  backtrace.reject{ |l| Regexp.new(@lib).match(File.expand_path(l)) }

Here ‹@lib› is a ‹String› containing the path to the lib sub-directory in the Mocha installation directory. I reported it, provided a patch five days later, then waited. Nothing happened. {254 days later}¹, according to {Wolfram Alpha}², half of my patch was, apparently – I say “apparently”, as I received no notification – applied. By that time I had replaced the whole mocking-and-stubbing subsystem and dropped the dependency.

Many Ruby developers claim that Ruby and its gems are too fast-moving for normal package-managing systems to keep up. This is testament to the fact that this isn’t the case and that the real problem is instead related to sloppy practices. Please note that I don’t want to single out the Mocha library nor its developers. I only want to provide an example where relying on external dependencies can be “considered harmful”.

¹ See the Wolfram Alpha calculation at http://www.wolframalpha.com/input/?i=days+between+march+17%2C+2010+and+november+26%2C+2010
² Check out the Wolfram Alpha computational knowledge engine at http://www.wolframalpha.com/

§ Internal Dependencies

Lookout has been designed so as to keep each subsystem independent of any other. The diff subsystem is, for example, completely decoupled from any other part of the system as a whole and could be moved into its own library at a time where that would be of interest to anyone. What’s perhaps more interesting is that the diff subsystem is itself very modular. The data passes through a set of filters that depends on what kind of diff has been requested, each filter yielding modified data as it receives it. If you want to read some rather functional Ruby I can highly recommend looking at the code in the ‹lib/lookout/diff› directory.

This lookout on the design of the library also makes it easy to extend Lookout. Lookout-rack was, for example, written in about four hours, and about 5 of those 240 minutes were spent on setting up the interface between the two.

§ Optimizing For Speed

The following paragraph is perhaps a bit personal, but might be interesting nonetheless. I’ve always worried about speed. The original Expectations library used ‹extend› a lot to add new behavior to objects. Expectations, for example, used to hold the result of their execution (what we now term “evaluation”) by being extended by a module representing success, failure, or error. For the longest time I used this same method, worrying about the increased performance cost that creating new objects for results would incur. I finally came to a point where I felt that the code was so simple and clean that rewriting this part of the code for a benchmark wouldn’t take more than perhaps ten minutes.
Well, ten minutes later I had my results and they confirmed that creating new objects wasn’t harming performance. I was very pleased.

§ Naming

I hate low lines (underscores). I try to avoid them in method names and I always avoid them in file names. Since the current “best practice” in the Ruby community is to put ‹BeginEndStorage› in a file called ‹begin_end_storage.rb›, I only name constants using a single noun. This has had the added benefit that classes seem to have acquired less behavior, as using a single noun doesn’t allow you to tack on additional behavior without questioning if it’s really appropriate to do so, given the rather limited range of interpretation for that noun. It also seems to encourage the creation of value objects, as something named ‹Range› feels a lot more like a value than ‹BeginEndStorage›. (To reach object-oriented-programming Nirvana you must achieve complete value.)

§ News

§ 3.0.0

The ‹xml› expectation has been dropped. It wasn’t documented, didn’t suit very many use cases, and can be better implemented by an external library.

The ‹arg› argument matcher for mock method arguments has been removed, as it didn’t provide any benefit over using Object.

The ‹#yield› and ‹#each› methods on stub and mock methods have been removed. They were slightly weird and their use case can be implemented using block parameters instead.

The ‹stub› method inside ‹expect› blocks now stubs out the methods during the execution of a provided block instead of during the execution of the whole expect block.

When a mock method is called too many times, this is reported immediately, with a full backtrace. This makes it easier to pin down what’s wrong with the code.

Query expectations were added.

Explicit query expectations were added.

Fluent boolean expectations, for example, ‹expect nil.to.be.nil?›, have been replaced by query expectations (‹expect :nil? do nil end›) and explicit query expectations (‹expect result.to.be.nil? do nil end›). This was done to discourage creating objects as the expected value and creating objects that change during the course of the test.

The ‹literal› expectation was added.

Equality (‹#==›) is now checked before “caseity” (‹#===›) for modules, ranges, and regular expressions to match the documentation.

§ Financing

Currently, most of my time is spent at my day job and in my rather busy private life. Please motivate me to spend time on this piece of software by donating some of your money to this project. Yeah, I realize that requesting money to develop software is a bit, well, capitalistic of me. But please realize that I live in a capitalistic society and I need money to have other people give me the things that I need to continue living under the rules of said society. So, if you feel that this piece of software has helped you out enough to warrant a reward, please PayPal a donation to now@disu.se¹. Thanks! Your support won’t go unnoticed!

¹ Send a donation: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=now%40disu%2ese&item_name=Lookout

§ Reporting Bugs

Please report any bugs that you encounter to the {issue tracker}¹.

¹ See https://github.com/now/lookout/issues

§ Contributors

Contributors to the original expectations codebase are mentioned there. We hope no one on that list feels left out of this list. Please {let us know}¹ if you do.
• Nikolai Weibull

¹ Add an issue to the Lookout issue tracker at https://github.com/now/lookout/issues

§ Licensing

Lookout is free software: you may redistribute it and/or modify it under the terms of the {GNU Lesser General Public License, version 3}¹ or later², as published by the {Free Software Foundation}³.

¹ See http://disu.se/licenses/lgpl-3.0/
² See http://gnu.org/licenses/
³ See http://fsf.org/
Dataproc Metastore is a fully managed, highly available within a region, autohealing serverless Apache Hive metastore (HMS) on Google Cloud for data analytics products. It supports HMS and serves as a critical component for managing the metadata of relational entities and provides interoperability between data processing applications in the open source data ecosystem. Note that google-cloud-metastore-v1 is a version-specific client library. For most uses, we recommend installing the main client library google-cloud-metastore instead. See the readme for more details.
ZiYa allows you to easily create interactive charts, gauges and maps for your web applications. ZiYa leverages Flash, which offloads heavy server-side processing to the client. At the root, ZiYa allows you to easily generate the XML files that will be downloaded to the client for rendering. Using this gem, you will be able to easily create great-looking charts for your application. You will also be able to use the charts, gauges and maps as a navigation scheme by embedding various links in the graphical components, thus bringing to the table an ideal scheme for reporting and dashboard-like applications. Your manager will love you for it!!

Sample site: http://ziya.liquidrail.com
Documentation: http://ziya.liquidrail.com/docs
Forum: http://groups.google.com/group/ziya-plugin
Repositories: git://github.com/derailed/ziya.git
Dataproc Metastore is a fully managed, highly available within a region, autohealing serverless Apache Hive metastore (HMS) on Google Cloud for data analytics products. It supports HMS and serves as a critical component for managing the metadata of relational entities and provides interoperability between data processing applications in the open source data ecosystem. Note that google-cloud-metastore-v1beta is a version-specific client library. For most uses, we recommend installing the main client library google-cloud-metastore instead. See the readme for more details.
Capistrano tasks to manage and monitor processes by eye
Manage your application processes in production hassle-free, like the Heroku CLI, with a Procfile and systemd.
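For reference, tools in this family read a Procfile that maps process names to the commands that run them; a minimal example (the process names and commands are placeholders for your own app):

  web: bundle exec puma -C config/puma.rb
  worker: bundle exec sidekiq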
[Deprecated] It is recommended to use paypal-server-sdk instead. The PayPal Adaptive Payments SDK provides Ruby APIs to create, process and manage simple and complex (parallel and chained) payments, and pre-approvals using the Adaptive Payments API.
# Overview

## Authentication

LaunchDarkly's REST API uses the HTTPS protocol with a minimum TLS version of 1.2.

All REST API resources are authenticated with either [personal or service access tokens](https://docs.launchdarkly.com/home/account/api), or session cookies. Other authentication mechanisms are not supported. You can manage personal access tokens on your [**Authorization**](https://app.launchdarkly.com/settings/authorization) page in the LaunchDarkly UI.

LaunchDarkly also has SDK keys, mobile keys, and client-side IDs that are used by our server-side SDKs, mobile SDKs, and JavaScript-based SDKs, respectively. **These keys cannot be used to access our REST API**. These keys are environment-specific, and can only perform read-only operations such as fetching feature flag settings.

| Auth mechanism | Allowed resources | Use cases |
| --- | --- | --- |
| [Personal or service access tokens](https://docs.launchdarkly.com/home/account/api) | Can be customized on a per-token basis | Building scripts, custom integrations, data export. |
| SDK keys | Can only access read-only resources specific to server-side SDKs. Restricted to a single environment. | Server-side SDKs |
| Mobile keys | Can only access read-only resources specific to mobile SDKs, and only for flags marked available to mobile keys. Restricted to a single environment. | Mobile SDKs |
| Client-side ID | Can only access read-only resources specific to JavaScript-based client-side SDKs, and only for flags marked available to client-side. Restricted to a single environment. | Client-side JavaScript |

> #### Keep your access tokens and SDK keys private
>
> Access tokens should _never_ be exposed in untrusted contexts. Never put an access token in client-side JavaScript, or embed it in a mobile application. LaunchDarkly has special mobile keys that you can embed in mobile apps. If you accidentally expose an access token or SDK key, you can reset it from your [**Authorization**](https://app.launchdarkly.com/settings/authorization) page.
>
> The client-side ID is safe to embed in untrusted contexts. It's designed for use in client-side JavaScript.

### Authentication using request header

The preferred way to authenticate with the API is by adding an `Authorization` header containing your access token to your requests. The value of the `Authorization` header must be your access token.

Manage personal access tokens from the [**Authorization**](https://app.launchdarkly.com/settings/authorization) page.
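For example, a minimal authenticated call can be made with Ruby's standard library alone; this is a sketch, assuming your token is stored in the `LD_API_KEY` environment variable and using the projects list as an example resource:

```ruby
require 'net/http'

uri = URI('https://app.launchdarkly.com/api/v2/projects')
req = Net::HTTP::Get.new(uri)
req['Authorization'] = ENV.fetch('LD_API_KEY')  # your access token

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
puts res.code          # 200 on success
puts res.body[0, 200]  # first part of the JSON payload
```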
### Authentication using session cookie

For testing purposes, you can make API calls directly from your web browser. If you are logged in to the LaunchDarkly application, the API will use your existing session to authenticate calls.

If you have a [role](https://docs.launchdarkly.com/home/account/built-in-roles) other than Admin, or have a [custom role](https://docs.launchdarkly.com/home/account/custom-roles) defined, you may not have permission to perform some API calls. You will receive a `401` response code in that case.

> ### Modifying the Origin header causes an error
>
> LaunchDarkly validates that the Origin header for any API request authenticated by a session cookie matches the expected Origin header. The expected Origin header is `https://app.launchdarkly.com`.
>
> If the Origin header does not match what's expected, LaunchDarkly returns an error. This error can prevent the LaunchDarkly app from working correctly.
>
> Any browser extension that intentionally changes the Origin header can cause this problem. For example, the `Allow-Control-Allow-Origin: *` Chrome extension changes the Origin header to `http://evil.com` and causes the app to fail.
>
> To prevent this error, do not modify your Origin header.
>
> LaunchDarkly does not require origin matching when authenticating with an access token, so this issue does not affect normal API usage.

## Representations

All resources expect and return JSON response bodies. Error responses also send a JSON body. To learn more about the error format of the API, read [Errors](/#section/Overview/Errors).

In practice this means that you always get a response with a `Content-Type` header set to `application/json`. In addition, request bodies for `PATCH`, `POST`, and `PUT` requests must be encoded as JSON with a `Content-Type` header set to `application/json`.

### Summary and detailed representations

When you fetch a list of resources, the response includes only the most important attributes of each resource. This is a _summary representation_ of the resource. When you fetch an individual resource, such as a single feature flag, you receive a _detailed representation_ of the resource.

The best way to find a detailed representation is to follow links. Every summary representation includes a link to its detailed representation.

### Expanding responses

Sometimes the detailed representation of a resource does not include all of the attributes of the resource by default. If this is the case, the request method will clearly document this and describe which attributes you can include in an expanded response. To include the additional attributes, append the `expand` request parameter to your request and add a comma-separated list of the attributes to include. For example, when you append `?expand=members,maintainers` to the [Get team](/tag/Teams#operation/getTeam) endpoint, the expanded response includes both of these attributes.

### Links and addressability

The best way to navigate the API is by following links. These are attributes in representations that link to other resources. The API always uses the same format for links:

- Links to other resources within the API are encapsulated in a `_links` object
- If the resource has a corresponding link to HTML content on the site, it is stored in a special `_site` link

Each link has two attributes:

- An `href`, which contains the URL
- A `type`, which describes the content type

For example, a feature resource might return the following:

```json
{
  "_links": {
    "parent": {
      "href": "/api/features",
      "type": "application/json"
    },
    "self": {
      "href": "/api/features/sort.order",
      "type": "application/json"
    }
  },
  "_site": {
    "href": "/features/sort.order",
    "type": "text/html"
  }
}
```

From this, you can navigate to the parent collection of features by following the `parent` link, or navigate to the site page for the feature by following the `_site` link.

Collections are always represented as a JSON object with an `items` attribute containing an array of representations. Like all other representations, collections have `_links` defined at the top level. Paginated collections include `first`, `last`, `next`, and `prev` links containing a URL with the respective set of elements in the collection.

## Updates

Resources that accept partial updates use the `PATCH` verb.
Most resources support the [JSON patch](/reference#updates-using-json-patch) format. Some resources also support the [JSON merge patch](/reference#updates-using-json-merge-patch) format, and some resources support the [semantic patch](/reference#updates-using-semantic-patch) format, which is a way to specify the modifications to perform as a set of executable instructions. Each resource supports optional [comments](/reference#updates-with-comments) that you can submit with updates. Comments appear in outgoing webhooks, the audit log, and other integrations.

When a resource supports both JSON patch and semantic patch, we document both in the request method. However, the specific request body fields and descriptions included in our documentation only match one type of patch or the other.

### Updates using JSON patch

[JSON patch](https://datatracker.ietf.org/doc/html/rfc6902) is a way to specify the modifications to perform on a resource. JSON patch uses paths and a limited set of operations to describe how to transform the current state of the resource into a new state. JSON patch documents are always arrays, where each element contains an operation, a path to the field to update, and the new value.

For example, in this feature flag representation:

```json
{
  "name": "New recommendations engine",
  "key": "engine.enable",
  "description": "This is the description",
  ...
}
```

You can change the feature flag's description with the following patch document:

```json
[{ "op": "replace", "path": "/description", "value": "This is the new description" }]
```

You can specify multiple modifications to perform in a single request. You can also test that certain preconditions are met before applying the patch:

```json
[
  { "op": "test", "path": "/version", "value": 10 },
  { "op": "replace", "path": "/description", "value": "The new description" }
]
```

The above patch request tests whether the feature flag's `version` is `10`, and if so, changes the feature flag's description.

Attributes that are not editable, such as a resource's `_links`, have names that start with an underscore.

### Updates using JSON merge patch

[JSON merge patch](https://datatracker.ietf.org/doc/html/rfc7386) is another format for specifying the modifications to perform on a resource. JSON merge patch is less expressive than JSON patch. However, in many cases it is simpler to construct a merge patch document. For example, you can change a feature flag's description with the following merge patch document:

```json
{
  "description": "New flag description"
}
```

### Updates using semantic patch

Some resources support the semantic patch format. A semantic patch is a way to specify the modifications to perform on a resource as a set of executable instructions.

Semantic patch allows you to be explicit about intent using precise, custom instructions. In many cases, you can define semantic patch instructions independently of the current state of the resource. This can be useful when defining a change that may be applied at a future date.

To make a semantic patch request, you must append `domain-model=launchdarkly.semanticpatch` to your `Content-Type` header. Here's how:

```
Content-Type: application/json; domain-model=launchdarkly.semanticpatch
```

If you call a semantic patch resource without this header, you will receive a `400` response because your semantic patch will be interpreted as a JSON patch.

The body of a semantic patch request takes the following properties:

* `comment` (string): (Optional) A description of the update.
* `environmentKey` (string): (Required for some resources only) The environment key.
* `instructions` (array): (Required) A list of actions the update should perform. Each action in the list must be an object with a `kind` property that indicates the instruction. If the instruction requires parameters, you must include those parameters as additional fields in the object. The documentation for each resource that supports semantic patch includes the available instructions and any additional parameters.

For example:

```json
{
  "comment": "optional comment",
  "instructions": [ {"kind": "turnFlagOn"} ]
}
```

If any instruction in the patch encounters an error, the endpoint returns an error and will not change the resource. In general, each instruction silently does nothing if the resource is already in the state you request.
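As an illustration, such a request could be sent from Ruby like this (a sketch; the project key, flag key, environment key, and token are placeholders):

```ruby
require 'net/http'
require 'json'

uri = URI('https://app.launchdarkly.com/api/v2/flags/my-project/my-flag')
req = Net::HTTP::Patch.new(uri)
req['Authorization'] = ENV.fetch('LD_API_KEY')
req['Content-Type']  = 'application/json; domain-model=launchdarkly.semanticpatch'
req.body = JSON.dump({
  comment: 'optional comment',
  environmentKey: 'production',
  instructions: [{ kind: 'turnFlagOn' }]
})

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
puts res.code
```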
### Updates with comments

You can submit optional comments with `PATCH` changes.

To submit a comment along with a JSON patch document, use the following format:

```json
{
  "comment": "This is a comment string",
  "patch": [{ "op": "replace", "path": "/description", "value": "The new description" }]
}
```

To submit a comment along with a JSON merge patch document, use the following format:

```json
{
  "comment": "This is a comment string",
  "merge": { "description": "New flag description" }
}
```

To submit a comment along with a semantic patch, use the following format:

```json
{
  "comment": "This is a comment string",
  "instructions": [ {"kind": "turnFlagOn"} ]
}
```

## Errors

The API always returns errors in a common format. Here's an example:

```json
{
  "code": "invalid_request",
  "message": "A feature with that key already exists",
  "id": "30ce6058-87da-11e4-b116-123b93f75cba"
}
```

The `code` indicates the general class of error. The `message` is a human-readable explanation of what went wrong. The `id` is a unique identifier. Use it when you're working with LaunchDarkly Support to debug a problem with a specific API call.

### HTTP status error response codes

| Code | Definition | Description | Possible Solution |
| ---- | ---------- | ----------- | ----------------- |
| 400 | Invalid request | The request cannot be understood. | Ensure JSON syntax in request body is correct. |
| 401 | Invalid access token | Requestor is unauthorized or does not have permission for this API call. | Ensure your API access token is valid and has the appropriate permissions. |
| 403 | Forbidden | Requestor does not have access to this resource. | Ensure that the account member or access token has proper permissions set. |
| 404 | Invalid resource identifier | The requested resource is not valid. | Ensure that the resource is correctly identified by ID or key. |
| 405 | Method not allowed | The request method is not allowed on this resource. | Ensure that the HTTP verb is correct. |
| 409 | Conflict | The API request can not be completed because it conflicts with a concurrent API request. | Retry your request. |
| 422 | Unprocessable entity | The API request can not be completed because the update description can not be understood. | Ensure that the request body is correct for the type of patch you are using, either JSON patch or semantic patch. |
| 429 | Too many requests | Read [Rate limiting](/#section/Overview/Rate-limiting). | Wait and try again later. |

## CORS

The LaunchDarkly API supports Cross Origin Resource Sharing (CORS) for AJAX requests from any origin. If an `Origin` header is given in a request, it will be echoed as an explicitly allowed origin. Otherwise the request returns a wildcard, `Access-Control-Allow-Origin: *`. For more information on CORS, read the [CORS W3C Recommendation](http://www.w3.org/TR/cors). Example CORS headers might look like:

```http
Access-Control-Allow-Headers: Accept, Content-Type, Content-Length, Accept-Encoding, Authorization
Access-Control-Allow-Methods: OPTIONS, GET, DELETE, PATCH
Access-Control-Allow-Origin: *
Access-Control-Max-Age: 300
```

You can make authenticated CORS calls just as you would make same-origin calls, using either [token or session-based authentication](/#section/Overview/Authentication). If you are using session authentication, you should set the `withCredentials` property for your `xhr` request to `true`. You should never expose your access tokens to untrusted entities.

## Rate limiting

We use several rate limiting strategies to ensure the availability of our APIs. Rate-limited calls to our APIs return a `429` status code. Calls to our APIs include headers indicating the current rate limit status. The specific headers returned depend on the API route being called. The limits differ based on the route, authentication mechanism, and other factors. Routes that are not rate limited may not contain any of the headers described below.

> ### Rate limiting and SDKs
>
> LaunchDarkly SDKs are never rate limited and do not use the API endpoints defined here. LaunchDarkly uses a different set of approaches, including streaming/server-sent events and a global CDN, to ensure availability to the routes used by LaunchDarkly SDKs.

### Global rate limits

Authenticated requests are subject to a global limit. This is the maximum number of calls that your account can make to the API per ten seconds. All service and personal access tokens on the account share this limit, so exceeding the limit with one access token will impact other tokens. Calls that are subject to global rate limits may return the headers below:

| Header name | Description |
| ----------- | ----------- |
| `X-Ratelimit-Global-Remaining` | The maximum number of requests the account is permitted to make per ten seconds. |
| `X-Ratelimit-Reset` | The time at which the current rate limit window resets in epoch milliseconds. |

We do not publicly document the specific number of calls that can be made globally. This limit may change, and we encourage clients to program against the specification, relying on the two headers defined above, rather than hardcoding to the current limit.

### Route-level rate limits

Some authenticated routes have custom rate limits. These also reset every ten seconds. Any service or personal access tokens hitting the same route share this limit, so exceeding the limit with one access token may impact other tokens. Calls that are subject to route-level rate limits return the headers below:

| Header name | Description |
| ----------- | ----------- |
| `X-Ratelimit-Route-Remaining` | The maximum number of requests to the current route the account is permitted to make per ten seconds. |
| `X-Ratelimit-Reset` | The time at which the current rate limit window resets in epoch milliseconds. |

A _route_ represents a specific URL pattern and verb.
For example, the [Delete environment](/tag/Environments#operation/deleteEnvironment) endpoint is considered a single route, and each call to delete an environment counts against your route-level rate limit for that route.

We do not publicly document the specific number of calls that an account can make to each endpoint per ten seconds. These limits may change, and we encourage clients to program against the specification, relying on the two headers defined above, rather than hardcoding to the current limits.

### IP-based rate limiting

We also employ IP-based rate limiting on some API routes. If you hit an IP-based rate limit, your API response will include a `Retry-After` header indicating how long to wait before re-trying the call. Clients must wait at least `Retry-After` seconds before making additional calls to our API, and should employ jitter and backoff strategies to avoid triggering rate limits again.
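A client-side retry loop honoring `Retry-After` with jitter might be sketched like this (generic Ruby; `perform_request` is a hypothetical helper that returns a `Net::HTTPResponse`):

```ruby
# Retry on 429, waiting at least Retry-After seconds plus a little jitter.
def with_backoff(max_attempts: 5)
  attempts = 0
  loop do
    res = yield
    return res unless res.code == '429' && (attempts += 1) < max_attempts
    wait = (res['Retry-After'] || 1).to_f
    sleep wait + rand * 0.5   # jitter to avoid synchronized retries
  end
end

# response = with_backoff { perform_request }
```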
## OpenAPI (Swagger) and client libraries

We have a [complete OpenAPI (Swagger) specification](https://app.launchdarkly.com/api/v2/openapi.json) for our API. We auto-generate multiple client libraries based on our OpenAPI specification. To learn more, visit the [collection of client libraries on GitHub](https://github.com/search?q=topic%3Alaunchdarkly-api+org%3Alaunchdarkly&type=Repositories). You can also use this specification to generate client libraries to interact with our REST API in your language of choice.

Our OpenAPI specification is supported by several API-based tools, such as Postman and Insomnia. In many cases, you can directly import our specification to explore our APIs.

## Method overriding

Some firewalls and HTTP clients restrict the use of verbs other than `GET` and `POST`. In those environments, our API endpoints that use `DELETE`, `PATCH`, and `PUT` verbs are inaccessible. To avoid this issue, our API supports the `X-HTTP-Method-Override` header, allowing clients to "tunnel" `DELETE`, `PATCH`, and `PUT` requests using a `POST` request. For example, to call a `PATCH` endpoint using a `POST` request, you can include `X-HTTP-Method-Override: PATCH` as a header.
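A sketch of tunneling a `PATCH` through `POST` in Ruby follows; the flag resource path and the JSON Patch body are illustrative placeholders:

```ruby
require "net/http"
require "json"
require "uri"

# Illustrative: tunnel a PATCH through POST for environments that
# block non-GET/POST verbs. The resource path and patch body are
# placeholders.
uri = URI("https://app.launchdarkly.com/api/v2/flags/my-project/my-flag")
request = Net::HTTP::Post.new(uri)
request["Authorization"] = ENV.fetch("LD_API_TOKEN")
request["Content-Type"] = "application/json"
request["X-HTTP-Method-Override"] = "PATCH"
request.body = JSON.generate([{ op: "replace", path: "/description", value: "Updated" }])

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts response.code
```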
## Beta resources

We sometimes release new API resources in **beta** status before we release them with general availability. Resources that are in beta are still undergoing testing and development. They may change without notice, including becoming backwards incompatible. We try to promote resources into general availability as quickly as possible. This happens after sufficient testing and when we're satisfied that we no longer need to make backwards-incompatible changes.

We mark beta resources with a "Beta" callout in our documentation, pictured below:

> ### This feature is in beta
>
> To use this feature, pass in a header including the `LD-API-Version` key with value set to `beta`. Use this header with each call. To learn more, read [Beta resources](/#section/Overview/Beta-resources).
>
> Resources that are in beta are still undergoing testing and development. They may change without notice, including becoming backwards incompatible.

### Using beta resources

To use a beta resource, you must include a header in the request. If you call a beta resource without this header, you receive a `403` response. Use this header:

```
LD-API-Version: beta
```
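In Ruby, attaching the header is one line on the request object; the same pattern works for pinning a dated API version, as described in the next section:

```ruby
# Illustrative: opt a single request into a beta resource.
# A dated version such as "20240415" can be set the same way.
request = Net::HTTP::Get.new(uri) # uri as in the earlier sketches
request["LD-API-Version"] = "beta"
```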
## Federal environments

The version of LaunchDarkly that is available on domains controlled by the United States government is different from the version of LaunchDarkly available to the general public. If you are an employee or contractor for a United States federal agency and use LaunchDarkly in your work, you likely use the federal instance of LaunchDarkly.

If you are working in the federal instance of LaunchDarkly, the base URI for each request is `https://app.launchdarkly.us`. In the "Try it" sandbox for each request, click the request path to view the complete resource path for the federal environment.

To learn more, read [LaunchDarkly in federal environments](https://docs.launchdarkly.com/home/infrastructure/federal).

## Versioning

We try hard to keep our REST API backwards compatible, but we occasionally have to make backwards-incompatible changes in the process of shipping new features. These breaking changes can cause unexpected behavior if you don't prepare for them accordingly.

Updates to our REST API include support for the latest features in LaunchDarkly. We also release a new version of our REST API every time we make a breaking change. We provide simultaneous support for multiple API versions so you can migrate from your current API version to a new version at your own pace.

### Setting the API version per request

You can set the API version on a specific request by sending an `LD-API-Version` header, as shown in the example below:

```
LD-API-Version: 20240415
```

The header value is the version number of the API version you would like to request. The number for each version corresponds to the date the version was released in `yyyymmdd` format. In the example above, the version `20240415` corresponds to April 15, 2024.

### Setting the API version per access token

When you create an access token, you must specify a specific version of the API to use. This ensures that integrations using this token cannot be broken by version changes.

Tokens created before versioning was released have their version set to `20160426`, which is the version of the API that existed before the current versioning scheme, so that they continue working the same way they did before versioning.

If you would like to upgrade your integration to use a new API version, you can explicitly set the header described above.

> ### Best practice: Set the header for every client or integration
>
> We recommend that you set the API version header explicitly in any client or integration you build.
>
> Only rely on the access token API version during manual testing.

### API version changelog

| <div style="width:75px">Version</div> | Changes | End of life (EOL) |
|---|---|---|
| `20240415` | <ul><li>Changed several endpoints from unpaginated to paginated. Use the `limit` and `offset` query parameters to page through the results.</li><li>Changed the [list access tokens](/tag/Access-tokens#operation/getTokens) endpoint:<ul><li>Response is now paginated with a default limit of `25`</li></ul></li><li>Changed the [list account members](/tag/Account-members#operation/getMembers) endpoint:<ul><li>The `accessCheck` filter is no longer available</li></ul></li><li>Changed the [list custom roles](/tag/Custom-roles#operation/getCustomRoles) endpoint:<ul><li>Response is now paginated with a default limit of `20`</li></ul></li><li>Changed the [list feature flags](/tag/Feature-flags#operation/getFeatureFlags) endpoint:<ul><li>Response is now paginated with a default limit of `20`</li><li>The `environments` field is now only returned if the request is filtered by environment, using the `filterEnv` query parameter</li><li>The `filterEnv` query parameter supports a maximum of three environments</li><li>The `followerId`, `hasDataExport`, `status`, `contextKindTargeted`, and `segmentTargeted` filters are no longer available</li></ul></li><li>Changed the [list segments](/tag/Segments#operation/getSegments) endpoint:<ul><li>Response is now paginated with a default limit of `20`</li></ul></li><li>Changed the [list teams](/tag/Teams#operation/getTeams) endpoint:<ul><li>The `expand` parameter no longer supports including `projects` or `roles`</li><li>In paginated results, the maximum page size is now 100</li></ul></li><li>Changed the [get workflows](/tag/Workflows#operation/getWorkflows) endpoint:<ul><li>Response is now paginated with a default limit of `20`</li><li>The `_conflicts` field in the response is no longer available</li></ul></li></ul> | Current |
| `20220603` | <ul><li>Changed the [list projects](/tag/Projects#operation/getProjects) return value:<ul><li>Response is now paginated with a default limit of `20`.</li><li>Added support for filter and sort.</li><li>The project `environments` field is now expandable. This field is omitted by default.</li></ul></li><li>Changed the [get project](/tag/Projects#operation/getProject) return value:<ul><li>The `environments` field is now expandable. This field is omitted by default.</li></ul></li></ul> | 2025-04-15 |
| `20210729` | <ul><li>Changed the [create approval request](/tag/Approvals#operation/postApprovalRequest) return value. It now returns HTTP status code `201` instead of `200`.</li><li>Changed the [get users](/tag/Users#operation/getUser) return value. It now returns a user record, not a user.</li><li>Added additional optional fields to environment, segments, flags, members, and segments, including the ability to create big segments.</li><li>Added default values for flag variations when new environments are created.</li><li>Added filtering and pagination for getting flags and members, including `limit`, `number`, `filter`, and `sort` query parameters.</li><li>Added endpoints for expiring user targets for flags and segments, scheduled changes, access tokens, Relay Proxy configuration, integrations and subscriptions, and approvals.</li></ul> | 2023-06-03 |
| `20191212` | <ul><li>[List feature flags](/tag/Feature-flags#operation/getFeatureFlags) now defaults to sending summaries of feature flag configurations, equivalent to setting the query parameter `summary=true`. Summaries omit flag targeting rules and individual user targets from the payload.</li><li>Added endpoints for flags, flag status, projects, environments, audit logs, members, users, custom roles, segments, usage, streams, events, and data export.</li></ul> | 2022-07-29 |
| `20160426` | <ul><li>Initial versioning of API. Tokens created before versioning have their version set to this.</li></ul> | 2020-12-12 |

To learn more about how EOL is determined, read LaunchDarkly's [End of Life (EOL) Policy](https://launchdarkly.com/policies/end-of-life-policy/).
Cloud Life Sciences is a suite of services and tools for managing, processing, and transforming life sciences data. It also enables advanced insights and operational workflows using highly scalable and compliant infrastructure. Note that google-cloud-life_sciences-v2beta is a version-specific client library. For most uses, we recommend installing the main client library google-cloud-life_sciences instead. See the readme for more details.
Manages the lifecycle of HTTP server processes
A Chef cookbook for managing the Monit process manager.
Manage long running forked processes
This package provides reliable messaging and persistent queues for building asynchronous applications in Ruby. It supports transaction processing, message selectors, priorities, delivery semantics, remote queue managers, disk-based and MySQL message stores and more.
Provides a basic framework for managing stateful processes with Collins.
Utility for managing subprocesses.
A simple parallel processing fork manager, based on the Perl module.
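As a rough illustration of the fork-manager pattern (not this gem's actual API, which may differ), a manager forks workers and caps concurrency by reaping a child before starting the next one. A minimal sketch in plain Ruby:

```ruby
# Hypothetical sketch of the fork-manager pattern: run jobs in child
# processes while keeping at most MAX_PROCS children alive at once.
MAX_PROCS = 4
jobs = (1..10).to_a
running = []

jobs.each do |job|
  # Block until a slot frees up, then remove the reaped child's pid.
  running.delete(Process.wait) if running.size >= MAX_PROCS
  running << fork do
    puts "child #{Process.pid} handling job #{job}"
    sleep 1 # simulated work
  end
end
running.each { Process.wait } # reap the remaining children
```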
Email newsletter templating and management system which allows a designer to develop templates and elements that are email-friendly, and allows a user to create newsletters without HTML/CSS knowledge using a WYSIWYG interface. Also available with the mail manager (including contact and mailing list management, double opt-in mailing list sign-up, mailer, and bounce processing) and user access as the iReach gem.
Simplifies cloud management, thanks to predefined processes for managing cloud objects.
A Ruby implementation of Consul Template with support for ERB templating, high performance on large clusters, and advanced process management features.
This library runs GNU Go in a separate process and allows you to communicate with it using the Go Text Protocol (GTP). This makes it easy to manage full games of Go, work with SGF files, analyze Go positions, and more.
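For context, GTP itself is a simple line-oriented protocol, so a session can be sketched without the gem by talking to a `gnugo` binary directly. This bypasses the library's own API (which is not shown here) and assumes GNU Go is on the `PATH`:

```ruby
# Hypothetical raw-GTP session with a gnugo binary; the gem wraps
# this protocol in a higher-level API.
io = IO.popen("gnugo --mode gtp", "r+")

def gtp(io, command)
  io.puts(command)
  reply = +""
  reply << io.gets until reply.end_with?("\n\n") # GTP replies end with a blank line
  reply.strip # "= ..." on success, "? ..." on failure
end

puts gtp(io, "boardsize 9")
puts gtp(io, "genmove black") # e.g. "= E5"
gtp(io, "quit")
io.close
```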
Daedalus is a build system based on years of attempting to build Rubinius with a collection of Rake tasks. Rubinius is a complex system. It has dependencies on external C libraries (some of which are vendored), internal C/C++ libraries, Ruby C-extensions, and compiled Ruby code. The Rubinius bytecode compiler is written in Ruby, so we have to bootstrap compiling it. Rake fails at this task because there is no way to manage multiple, independent dependency trees without subprocessing another Rake process. This results in unreasonable and unmanageable complexity.
Capistrano integration for node.js process manager PM2
NuGet is a free, open source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. This is the package creation piece of the toolset.