github.com/cafxx/async
Tiny futures library for Go. 🔮
Let's start with a simple example:
```go
f := NewFuture(func() ([]byte, error) {
	res, err := http.Get("http://www.example.com")
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	return io.ReadAll(res.Body)
})
```
We have just created a `Future` `f` that can be used to wait for and obtain the result of the function passed to `NewFuture`. At this point, execution of the function wrapped in the `Future` has not started yet.
There are multiple ways to start evaluation, the simplest being calling `Result`, as this will start evaluation and wait until either the result is available or the context is cancelled. Note that if multiple calls to `Result` are made concurrently, only the first one starts execution of the wrapped function; when the wrapped function completes, the same result is returned to all current (and future) callers of `Result`:
```go
go func() {
	buf, err := f.Result(ctx1)
	// use buf and err
}()

go func() {
	buf, err := f.Result(ctx2)
	// use buf and err
}()
```
A call to `Result` returns immediately if the context is cancelled. This does not cancel execution of the wrapped function (to cancel execution of the wrapped function, use a context or other cancellation mechanism inside the wrapped function itself). So, in the code above, if `ctx1` is cancelled, the call to `Result` in the first goroutine will return immediately, but the call in the second goroutine will continue waiting until the wrapped function returns (or `ctx2` is cancelled).
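For example, one way to make the wrapped function itself cancellable is to give it its own context. This is a minimal sketch, assuming only the `NewFuture` API shown above; `reqCtx` is introduced here purely for illustration:

```go
reqCtx, cancelReq := context.WithCancel(context.Background())
defer cancelReq() // cancelling reqCtx aborts the work inside the wrapped function

f := NewFuture(func() ([]byte, error) {
	// Tie the HTTP request to reqCtx so that cancellation reaches the
	// wrapped function itself, not just the callers waiting in Result.
	req, err := http.NewRequestWithContext(reqCtx, http.MethodGet, "http://www.example.com", nil)
	if err != nil {
		return nil, err
	}
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	return io.ReadAll(res.Body)
})
```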
An important feature of the futures provided by this library is that they propagate panics: in the example above, if the function wrapped by the `Future` panicked, the panic would be caught and each call to `Result` would panic instead (if `Result` is never called and the wrapped function panics, the panic is delivered to the Go runtime instead, crashing the process as if the panic had not been recovered).
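As a sketch of this behaviour (again assuming only the `NewFuture`/`Result` API from the examples above), a panic in the wrapped function resurfaces in the goroutine that calls `Result` and can be recovered there like any other panic:

```go
f := NewFuture(func() ([]byte, error) {
	panic("boom") // captured by the Future instead of crashing its goroutine
})

// Each caller of Result observes the panic; recover it as usual.
defer func() {
	if r := recover(); r != nil {
		log.Printf("wrapped function panicked: %v", r)
	}
}()
_, _ = f.Result(ctx)
```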
The real power of this library lies in its ability to quickly build lazy evaluation trees that allow performant and efficient concurrent evaluation of the desired results.
As an example, let's consider the case in which we need to construct a response based on three subrequests (foo, bar, baz) whose results are used to construct the two fields in the response (x and y).
```go
ctx, cancel := context.WithCancel(ctx)
defer cancel()

foo := NewFuture(func() (Foo, error) {
	return /* ... */
})
bar := NewFuture(func() (Bar, error) {
	return /* ... */
})
baz := NewFuture(func() (Baz, error) {
	return /* ... */
})

x := NewFuture(func() (string, error) {
	bar.Eager() // start eager evaluation of bar
	res, err := foo.Result(ctx)
	if err != nil {
		return "", err
	}
	res2, err := bar.Result(ctx)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%v,%v", res, res2), nil
})

y := NewFuture(func() (string, error) {
	baz.Eager() // start eager evaluation of baz
	res, err := foo.Result(ctx) // note: foo's result will be reused
	if err != nil {
		return "", err
	}
	res2, err := baz.Result(ctx)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%v,%v", res, res2), nil
})
```
We have now built the evaluation trees. Instead of using futures, we could have simply started eager evaluation of all these functions, and this would work in simple cases.
Now consider, though, what would happen if you did not always need both x and y to be populated in the response, and instead needed to populate them only if requested (or only under some other dynamic condition).
Executing all the functions anyway, just in case they are needed, would be extremely resource-inefficient, even if you could prune unneeded functions by selectively cancelling their contexts. Another option would be to start only the goroutines that are needed, but this would require spreading the controlling logic across multiple places.
Alternatively, you could execute everything serially, once you are certain that each function needs to be executed, but this could be very slow (e.g. if the functions perform network requests).
Using this library you can instead do:
```go
if req.needY {
	y.Eager()
}

res := &Response{}
if req.needX {
	r, err := x.Result(ctx)
	if err != nil {
		return nil, err
	}
	res.x = r
}
if req.needY {
	r, err := y.Result(ctx)
	if err != nil {
		return nil, err
	}
	res.y = r
}
return res, nil
```
This will concurrently execute all the functions required to satisfy the request, and none of the functions that are not required, while maximizing readability and separation of concerns: the resulting code is linear, as all synchronization happens behind the scenes, regardless of how complex the logic to be implemented is.
Specifically, in the case above:

- if only x is needed, foo and bar are evaluated concurrently;
- if only y is needed, foo and baz are evaluated concurrently;
- if both are needed, foo, bar, and baz are all evaluated concurrently, and the result of foo is shared between x and y;
- futures whose results are not needed are never started.
Note that thanks to the context defined above, as soon as any future returns an error or panics, the context is cancelled, which causes all futures using that context to return. As such, this is an effective replacement for `errgroup` when it is used to coordinate the execution of multiple parts of a request.
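For comparison, the eager pattern this replaces would look roughly like the sketch below, built on `golang.org/x/sync/errgroup` with hypothetical `fetchFoo`/`fetchBar` helpers; note that with plain `errgroup` every subrequest runs unconditionally, whereas the futures above only execute what is actually needed:

```go
g, gctx := errgroup.WithContext(ctx)

var fooRes Foo
var barRes Bar
g.Go(func() error {
	var err error
	fooRes, err = fetchFoo(gctx) // hypothetical helper performing the foo subrequest
	return err
})
g.Go(func() error {
	var err error
	barRes, err = fetchBar(gctx) // hypothetical helper performing the bar subrequest
	return err
})
if err := g.Wait(); err != nil { // first error cancels gctx, like the shared ctx above
	return nil, err
}
res.x = fmt.Sprintf("%v,%v", fooRes, barRes)
```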
Some potential ideas: