@chainlink/ea-bootstrap

Bootstrap an external adapter with this package

The core framework that every External Adapter uses.

Detailed here is optional configuration that can be provided to any EA through environment variables.


Table of Contents

  1. Server configuration
  2. Performance
  3. Metrics
  4. Websockets

Server configuration

| Required? | Name | Description | Options | Defaults to |
|-----------|------|-------------|---------|-------------|
|  | BASE_URL | Set a base URL that is used for setting up routes on the external adapter. Typically an external adapter is served on the root, so you would make requests to /; setting BASE_URL to /coingecko would instead have requests made to /coingecko. Useful when multiple external adapters are hosted under the same domain and path mapping is used to route between them. |  | / |
|  | EA_PORT | The port to run the external adapter's server on. |  | 8080 |
|  | UUID | A universally unique identifier that is used to identify the EA. |  | (generated randomly) |
|  | DEBUG | Toggles debug mode. |  | false |
|  | NODE_ENV | Toggles development mode. When set to development, log messages are prettified to be more readable. | development | undefined |
|  | LOG_LEVEL | The winston log level. Set to debug for full log messages. | info, debug, trace | info |
|  | API_TIMEOUT | The number of milliseconds a request can be pending before returning a timeout error. |  | 30000 |
|  | API_ENDPOINT | Override the base URL within the EA. |  | Defined in EA |
|  | WS_API_ENDPOINT | Override the base websocket URL within the EA. |  | Defined in EA |
|  | API_VERBOSE | Toggle whether the response from the EA should contain just the results or also include the full response body from the queried API. |  | false |
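
For example, to host this adapter under a shared domain at /coingecko while keeping the default port, the configuration could look like the following minimal sketch (the /coingecko path and the LINK/USD request body are purely illustrative):

export BASE_URL=/coingecko
export EA_PORT=8080

# Requests are then made against the base URL instead of the root
curl -X POST http://localhost:8080/coingecko \
  -H 'Content-Type: application/json' \
  -d '{"id": 1, "data": {"base": "LINK", "quote": "USD"}}'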

Performance

Caching

To cache data, every adapter using the bootstrap package has access to a simple LRU cache that caches successful 200 responses, using a SHA1 hash of the input as the key.

⚠️ Note

Please check that caching is allowed and not in violation of the Terms of Service of the data provider's API. Disable the caching flags if caching is not permitted by the provider's ToS.

To configure caching these environment variables are available:

| Required? | Name | Description | Options | Defaults to |
|-----------|------|-------------|---------|-------------|
|  | CACHE_ENABLED | Toggle caching. |  | true |
|  | CACHE_TYPE | Which cache type should be used. | local, redis | local |
|  | CACHE_KEY_GROUP | Set to a specific group ID to group the cached data for this adapter with other instances in the same group. Applicable only in remote cache scenarios, where multiple adapter instances share the cache. |  | UUID of the adapter |
|  | CACHE_KEY_IGNORED_PROPS | Keys to ignore while deriving the cache key, delimited by ",". The key set will be added to the default ignored keys. |  | ['id', 'maxAge', 'meta', 'rateLimitMaxAge', 'debug', 'metricsMeta'] |
|  | CACHE_MAX_AGE | Maximum age in ms. Items are not pro-actively pruned as they age, but if you try to get an item that is too old, it will be dropped and undefined returned instead. If set to 0 the default will be used; if set to < 0 entries will not persist in the cache. |  | 90000 (1.5 minutes) |
|  | CACHE_MIN_AGE | Minimum age in ms. |  | 30000 (30 seconds) |
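
As an illustration, a local cache with a longer maximum age could be configured as follows (the values are examples only, not recommendations):

export CACHE_ENABLED=true
export CACHE_TYPE=local
export CACHE_MAX_AGE=120000   # keep successful responses for up to 2 minutes
export CACHE_MIN_AGE=30000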

Cache key

The cache key of a stored request is derived by hashing the input object with the SHA1 hash function, by default ignoring the keys ['id', 'maxAge', 'meta', 'rateLimitMaxAge', 'debug']. So, for example, these requests will all derive the same key:

  • {"id": 1, "data": {"base":"LINK", "quote": "USD"}}
  • {"id": 2, "data": {"base":"LINK", "quote": "USD", "maxAge": 10000}}
  • {"id": 3, "data": {"base":"LINK", "quote": "USD"}}

The maxAge input argument can be used to set a per-item maxAge parameter. If not set, or set to 0, the cache-level maxAge option will be used. Every time the maxAge input argument changes, the item will be cached with the new maxAge parameter. To avoid hitting the cache for a specific item, set maxAge: -1 (any value < 0).
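
For instance, a caller could bypass the cache for a single request by passing a negative maxAge. This sketch assumes the adapter is served on the default port and root path:

# maxAge < 0 skips the cache for this request only
curl -X POST http://localhost:8080/ \
  -H 'Content-Type: application/json' \
  -d '{"id": 1, "data": {"base": "LINK", "quote": "USD", "maxAge": -1}}'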

Ignoring keys

If you want specific input data object keys to be excluded from key derivation, use the CACHE_KEY_IGNORED_PROPS environment variable.

For example, if CACHE_KEY_IGNORED_PROPS=timestamp is set, these requests will derive the same key:

  • {"id": 1, "data": {"base":"LINK", "quote": "USD", "timestamp": 1598874704}}
  • {"id": 2, "data": {"base":"LINK", "quote": "USD", "timestamp": 1598874721}}
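
Multiple keys can be ignored by separating them with commas, for example (region is a hypothetical extra input key):

export CACHE_KEY_IGNORED_PROPS=timestamp,region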

Local cache

| Required? | Name | Description | Options | Defaults to |
|-----------|------|-------------|---------|-------------|
|  | CACHE_MAX_ITEMS | The maximum size of the cache, checked by applying the length function to all values in the cache. |  | 500 |
|  | CACHE_UPDATE_AGE_ON_GET | When using time-expiring entries with maxAge, setting this to true will make each item's effective time update to the current time whenever it is retrieved from cache, causing it to not expire. (It can still fall out of cache based on recency of use, of course.) |  | false |

Redis

| Required? | Name | Description | Options | Defaults to |
|-----------|------|-------------|---------|-------------|
|  | CACHE_REDIS_HOST | IP address of the Redis server. |  | 127.0.0.1 |
|  | CACHE_REDIS_PORT | Port of the Redis server. |  | 6379 |
|  | CACHE_REDIS_PATH | The UNIX socket string of the Redis server. |  | undefined |
|  | CACHE_REDIS_URL | The URL of the Redis server. Format: [redis[s]:]//[[user][:password@]][host][:port][/db-number][?db=db-number[&password=bar[&option=value]]] |  | undefined |
|  | CACHE_REDIS_PASSWORD | The password required for Redis auth. |  | null |
|  | CACHE_REDIS_TIMEOUT | The timeout in ms if the connection to Redis errors or is not responding. |  | 500 |

For local development, run a Redis Docker container:

docker run -p 6379:6379 --name ea-redis -d redis redis-server --requirepass SUPER_SECRET

For ElastiCache Redis deployments: if encryption in transit is used, CACHE_REDIS_URL needs to be set with the rediss://... protocol in order to connect.
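
Putting this together, an adapter could be pointed at the local container above either through the individual settings or through a single connection URL. This is a sketch; the password matches the docker example above:

# Option A: individual settings
export CACHE_ENABLED=true
export CACHE_TYPE=redis
export CACHE_REDIS_HOST=127.0.0.1
export CACHE_REDIS_PORT=6379
export CACHE_REDIS_PASSWORD=SUPER_SECRET

# Option B: a single connection URL (use rediss:// when encryption in transit is required)
export CACHE_REDIS_URL=redis://:SUPER_SECRET@127.0.0.1:6379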

Rate Limiting

To avoid hitting rate limit issues with the data provider subscription, a rate limit capacity per minute can be set:

| Required? | Name | Description | Options | Defaults to |
|-----------|------|-------------|---------|-------------|
|  | RATE_LIMIT_ENABLED | Enable the rate limit functionality. |  | true |

  • Option 1, manual capacity setting:

    | Required? | Name | Description | Options | Defaults to |
    |-----------|------|-------------|---------|-------------|
    |  | RATE_LIMIT_CAPACITY | Maximum capacity in requests per minute. |  | undefined |

  • Option 2, capacity by reference. Check your plan against the provider limits defined below and use it with the following configuration (see the sketch after this list):

    | Required? | Name | Description | Options | Defaults to |
    |-----------|------|-------------|---------|-------------|
    |  | RATE_LIMIT_API_PROVIDER | Name of the provider. |  | The derived name of the running External Adapter |
    |  | RATE_LIMIT_API_TIER | Plan you are subscribed to. |  | undefined |
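
For example, either of the following could be used; the provider and tier names are illustrative and must correspond to an entry in limits.json:

# Option 1: manual capacity
export RATE_LIMIT_ENABLED=true
export RATE_LIMIT_CAPACITY=60

# Option 2: capacity by reference
export RATE_LIMIT_ENABLED=true
export RATE_LIMIT_API_PROVIDER=coinmarketcap
export RATE_LIMIT_API_TIER=free
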
Provider Limits

Each provider is defined within limits.json as follows:

{
  "[provider-name]": {
    "http": {
      "[plan-name]": {
        "rateLimit1s": 1,
        "rateLimit1m": 30,
        "rateLimit1h": 200
      },
      "premium": {
        "rateLimit1s": 10,
        "rateLimit1m": 300,
        "rateLimit1h": 2000
      }
    },
    "ws": {
      "[plan-name]": {
        "connections": 1,
        "subscriptions": 10
      }
    }
  },
  "[another-provider-name]": { ... }
}

Where:

  • provider-name: The provider name, e.g. "amberdata" or "coinmarketcap".
  • plan-name: The provider plan name, used as an identifier for the plan, e.g. "free" or "premium".
  • There are two protocols with different limit types:
    • http: With rateLimit1s, rateLimit1m and rateLimit1h, which stand for requests per second, minute and hour respectively. If only one is provided, the rest will be calculated based on it.
    • ws: Websocket limits, which accept connections and subscriptions. If websockets are not supported by the provider, this can be left empty as ws: {}.

Cache Warming

*To use this feature, the CACHE_ENABLED environment variable must also be set to true.

| Required? | Name | Description | Options | Defaults to |
|-----------|------|-------------|---------|-------------|
|  | WARMUP_ENABLED | Enable the cache warmer functionality. |  | true |
|  | WARMUP_UNHEALTHY_THRESHOLD | The number of times a warmup execution can fail before the warmup subscription for a particular cache key is dropped. Set to -1 to disable. |  | 3 |
|  | WARMUP_SUBSCRIPTION_TTL | The maximum duration between requests to the external adapter for a cache key before the cache warmer unsubscribes from warming up that key. |  | 3600000 (1 hour) |
|  | WARMUP_INTERVAL | The interval at which the cache warmer should send requests to warm the cache. |  | The cache's minimum TTL (30s) |
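
A sketch of enabling the warmer alongside the cache it depends on (the threshold and TTL simply restate the defaults):

export CACHE_ENABLED=true
export WARMUP_ENABLED=true
export WARMUP_UNHEALTHY_THRESHOLD=3        # drop a warmup subscription after 3 failed executions
export WARMUP_SUBSCRIPTION_TTL=3600000     # stop warming keys that have not been requested for 1 hour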

Request Coalescing

One final consideration is the “thundering herd” situation, in which many clients make requests that need the same uncached downstream resource at approximately the same time. This can also occur when a server comes up and joins the fleet with an empty local cache. This results in a large number of requests from each server going to the downstream dependency, which can lead to throttling/brownout. To remedy this issue we use request coalescing, where the servers or external cache ensure that only one pending request is out for uncached resources. Some caching libraries provide support for request coalescing, and some external inline caches (such as Nginx or Varnish) do as well. In addition, request coalescing can be implemented on top of existing caches. -- Amazon on Caching challenges and strategies

To configure request coalescing, these environment variables are available:

| Required? | Name | Description | Options | Defaults to |
|-----------|------|-------------|---------|-------------|
|  | REQUEST_COALESCING_ENABLED | Enable request coalescing. |  | false |
|  | REQUEST_COALESCING_INTERVAL | Interval in milliseconds for the exponential back-off function. |  | 100 |
|  | REQUEST_COALESCING_INTERVAL_MAX | Maximum back-off in milliseconds. |  | 1000 |
|  | REQUEST_COALESCING_INTERVAL_COEFFICIENT | A coefficient used as the base multiplier for the exponential back-off interval function. |  | 2 |
|  | REQUEST_COALESCING_ENTROPY_MAX | Amount of random delay (entropy) in milliseconds added to requests. Avoids the issue where the request coalescing key is not yet set before multiple other instances in a burst try to access the same key. |  | 0 |
|  | REQUEST_COALESCING_MAX_RETRIES | Maximum number of attempts to wait for the request coalescing key to be deleted before continuing with the request. |  | 5 |
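
With the default interval and coefficient, the wait between coalescing retries roughly doubles from 100 ms up to the 1000 ms cap. A sketch of enabling the feature with a little added jitter (values are examples only):

export REQUEST_COALESCING_ENABLED=true
export REQUEST_COALESCING_ENTROPY_MAX=250   # add up to 250 ms of random delay per request
export REQUEST_COALESCING_MAX_RETRIES=5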

Metrics

A metrics server can be exposed that returns Prometheus-compatible data on the metrics endpoint on the specified port. When enabled, a metrics endpoint is opened on /metrics, which can be prepended with the BASE_URL if METRICS_USE_BASE_URL is enabled.

*Please note that this feature is EXPERIMENTAL.

| Required? | Name | Description | Options | Defaults to |
|-----------|------|-------------|---------|-------------|
|  | EXPERIMENTAL_METRICS_ENABLED | Set to true to enable metrics collection. |  | false |
|  | METRICS_USE_BASE_URL | Set to true to have the internal metrics endpoint use the supplied base URL. |  | false |
|  | METRICS_PORT | The port the metrics endpoint is served on. |  | 9080 |
|  | METRICS_NAME | Set to apply a name label to each metric. |  | undefined |
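
For example, metrics could be enabled and scraped locally as follows (a sketch assuming the default METRICS_PORT; the METRICS_NAME value is hypothetical):

export EXPERIMENTAL_METRICS_ENABLED=true
export METRICS_NAME=coingecko-adapter   # hypothetical label value

# Prometheus-compatible output is then served on the metrics port
curl http://localhost:9080/metrics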

To run Prometheus and Grafana with the development setup:

yarn dev:metrics

Websockets

Adapters that interact with data providers supporting websockets can use them, offering a WS interface. Each adapter will have its corresponding WS documentation.

Multiple subscription channels are multiplexed over one connection.

For every type of request, the adapter will subscribe to the corresponding channel.

From the moment the subscription is confirmed, the adapter will start receiving messages with the relevant information and pipe it into the cache. On future requests, the adapter will always have fresh data saved in the cache. If there is no data available in the cache, the adapter will continue with its default execution.

| Required? | Name | Description | Options | Defaults to |
|-----------|------|-------------|---------|-------------|
|  | WS_ENABLED | Set to true to enable WS support (on adapters that support it). |  | false |
|  | WS_SUBSCRIPTION_TTL | Subscription expiration time in ms. If no new incoming requests ask for this information during this time, the subscription will be cancelled. |  | 120000 |
|  | WS_SUBSCRIPTION_UNRESPONSIVE_TTL | Unresponsive subscription expiration time in ms. If the adapter does not receive messages from an open subscription during this time, a resubscription will be attempted. |  | 120000 |

*For websockets to be effective, caching needs to be enabled.
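
A minimal sketch of turning on websocket support together with the cache it relies on (the TTL values simply restate the defaults):

export CACHE_ENABLED=true
export WS_ENABLED=true
export WS_SUBSCRIPTION_TTL=120000               # cancel subscriptions idle for 2 minutes
export WS_SUBSCRIPTION_UNRESPONSIVE_TTL=120000  # resubscribe if no messages arrive for 2 minutes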
