# async

Tiny futures library for Go. 🔮

- Supports eager and lazy evaluation, both synchronous and asynchronous
- Generics-based
- Propagates panics
- Designed to interoperate with `context`
## Usage

Let's start with a simple example:

```go
f := NewFuture(func() ([]byte, error) {
	res, err := http.Get("http://www.example.com")
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	return io.ReadAll(res.Body)
})
```
We have just created a Future `f` that can be used to wait for and obtain the result of the function passed to `NewFuture`. At this point, execution of the function wrapped in the `Future` has not started yet.

There are multiple ways to start evaluation, the simplest being calling `Result`, as this will start evaluation and wait until either the result is available or the context is cancelled.
Note that if multiple calls to `Result` are made concurrently, only the first one starts execution of the wrapped function; when the wrapped function completes, the same result is returned to all current (and future) callers of `Result`:

```go
go func() {
	buf, err := f.Result(ctx1)
}()
go func() {
	buf, err := f.Result(ctx2)
}()
```
A call to `Result` returns immediately if the context is cancelled. This does not cancel execution of the wrapped function (to cancel execution of the wrapped function, use a context or another cancellation mechanism inside the wrapped function). So, for example, in the code above, if `ctx1` is cancelled, the call to `Result` in the first goroutine will return immediately, but the call in the second goroutine will continue waiting until the wrapped function returns (or `ctx2` is cancelled).
An important feature of the futures provided by this library is that they propagate panics: in the example above, if the function wrapped by the `Future` panicked, the panic would be caught and each call to `Result` would panic instead (if `Result` is never called and the wrapped function panics, the panic is delivered to the Go runtime instead, crashing the process as if the panic had not been recovered).
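To make the mechanism concrete, here is a minimal sketch of how panic propagation can work: the wrapped function's panic is recovered, stored, and re-raised in each caller asking for the result. This is illustrative only, with made-up names, and unlike the library it does not re-deliver the panic to the runtime when the result is never requested:

```go
package main

import "fmt"

// panicFuture stores either a value or a recovered panic.
type panicFuture struct {
	done     chan struct{}
	val      int
	panicVal any
}

func startPanicking(fn func() int) *panicFuture {
	f := &panicFuture{done: make(chan struct{})}
	go func() {
		defer close(f.done) // runs last: panicVal is set first
		defer func() {
			f.panicVal = recover() // capture a panic instead of crashing
		}()
		f.val = fn()
	}()
	return f
}

func (f *panicFuture) result() int {
	<-f.done
	if f.panicVal != nil {
		panic(f.panicVal) // re-raise the panic in the caller
	}
	return f.val
}

func main() {
	f := startPanicking(func() int { panic("boom") })
	defer func() {
		fmt.Println("caller saw panic:", recover())
	}()
	f.result()
}
```

Because the panic value is stored before `done` is closed (deferred functions run in LIFO order), every caller of `result` observes it, no matter when they start waiting.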
## A more complex example

The real power of this library lies in its ability to quickly build lazy evaluation trees that allow efficient, concurrent evaluation of the desired results.

As an example, let's consider the case in which we need to construct a response based on three subrequests (`foo`, `bar`, `baz`) whose results are used to construct the two fields in the response (`x` and `y`).
```go
ctx, cancel := context.WithCancel(ctx)
defer cancel()
foo := NewFuture(func() (Foo, error) {
	// ...perform the foo subrequest...
	return Foo{}, nil
})
bar := NewFuture(func() (Bar, error) {
	// ...perform the bar subrequest...
	return Bar{}, nil
})
baz := NewFuture(func() (Baz, error) {
	// ...perform the baz subrequest...
	return Baz{}, nil
})
x := NewFuture(func() (string, error) {
	bar.Eager() // start bar while we wait for foo
	res, err := foo.Result(ctx)
	if err != nil {
		return "", err
	}
	res2, err := bar.Result(ctx)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%v,%v", res, res2), nil
})
y := NewFuture(func() (string, error) {
	baz.Eager() // start baz while we wait for foo
	res, err := foo.Result(ctx)
	if err != nil {
		return "", err
	}
	res2, err := baz.Result(ctx)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%v,%v", res, res2), nil
})
```
We have now built the evaluation trees. Instead of using futures, we could have simply started eager evaluation of all these functions, and this would work in simple cases.
Now consider, though, what would happen if you did not always need both `x` and `y` to be populated in the response, and instead needed to populate them only if requested (or only under some other dynamic condition).

Executing all the functions anyway, just in case they are needed, would be extremely resource-inefficient, even if you could prune unneeded functions by selectively cancelling their contexts. Another option would be to start only the goroutines that are needed, but this would require spreading the logic that controls this across multiple places.

Alternatively, you could execute everything serially, once you are certain that each function needs to be executed, but this could be very slow (e.g. if they involve performing network requests).
Using this library you can instead do:

```go
if req.needY {
	y.Eager()
}
res := &Response{}
if req.needX {
	r, err := x.Result(ctx)
	if err != nil {
		return nil, err
	}
	res.x = r
}
if req.needY {
	r, err := y.Result(ctx)
	if err != nil {
		return nil, err
	}
	res.y = r
}
return res, nil
```
This will concurrently execute all functions required to satisfy the request, and none of the functions that are not required, while maximizing readability and separation of concerns: the resulting code is linear, as all synchronization happens behind the scenes, regardless of how complex the logic to be implemented is. Specifically, in the case above:

- if both `needX` and `needY` are true, all futures defined above are started and execute concurrently
- if only `needX` is true, only `x`, `foo` and `bar` are executed
- if only `needY` is true, only `y`, `foo` and `baz` are executed
Note that thanks to the context defined above, as soon as any future returns an error or panics, the context is cancelled, which makes all futures using that context return. As such, this is an effective replacement for `errgroup` when it is used to coordinate the execution of multiple parts of a request.
## Examples
## Future plans

Some potential ideas:

- Also support promises
- Adapters for common patterns