
@envelop/response-cache
- Skip the execution phase and reduce server load by caching execution results in-memory.
- Customize cache entry time to live based on fields and types within the execution result.
- Automatically invalidate the cache based on mutation selection sets.
- Customize the cache storage (e.g. Redis via `@envelop/response-cache-redis`) by implementing the `Cache` interface.

Check out the GraphQL Response Cache Guide for more information.
Watch Episode #34 of graphql.wtf for a quick introduction to using the Response Cache plugin with Envelop:
yarn add @envelop/response-cache
When configuring `useResponseCache`, you can choose the type of cache:

- In-memory LRU cache (the default, provided by `@envelop/response-cache`)
- Redis cache (provided by `@envelop/response-cache-redis`)

This plugin relies on a custom executor to work. This means that in most cases it should be placed last in the plugin list; otherwise, another plugin might override the custom executor.
For example, this would not work:
import { execute, parse, specifiedRules, subscribe, validate } from 'graphql'
import { envelop, useEngine } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
// Don't
const getEnveloped = envelop({
plugins: [
useResponseCache(),
// Here, useEngine will override the `execute` function, leading to a non-working cache.
useEngine({ parse, validate, specifiedRules, execute, subscribe })
]
})
// Do
const getEnveloped = envelop({
plugins: [
useEngine({ parse, validate, specifiedRules, execute, subscribe }),
// Here, the plugin can control the `execute` function
useResponseCache()
]
})
The in-memory LRU cache is used by default.
import { execute, parse, specifiedRules, subscribe, validate } from 'graphql'
import { envelop, useEngine } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
plugins: [
useEngine({ parse, validate, specifiedRules, execute, subscribe }),
// ... other plugins ...
useResponseCache({
// use global cache for all operations
session: () => null
})
]
})
Or, you may create the in-memory LRU cache explicitly.
import { execute, parse, specifiedRules, subscribe, validate } from 'graphql'
import { envelop, useEngine } from '@envelop/core'
import { createInMemoryCache, useResponseCache } from '@envelop/response-cache'
const cache = createInMemoryCache()
const getEnveloped = envelop({
plugins: [
useEngine({ parse, validate, specifiedRules, execute, subscribe }),
// ... other plugins ...
useResponseCache({
cache,
session: () => null // use global cache for all operations
})
]
})
Note: The in-memory LRU cache is not suitable for serverless deployments. Instead, consider the Redis cache provided by `@envelop/response-cache-redis`.
import { execute, parse, specifiedRules, subscribe, validate } from 'graphql'
import { envelop, useEngine } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
plugins: [
useEngine({ parse, validate, specifiedRules, execute, subscribe }),
// ... other plugins ...
useResponseCache({
ttl: 2000,
// context is the GraphQL context used for execution
session: context => String(context.user?.id)
})
]
})
yarn add @envelop/response-cache-redis
In order to use the Redis cache, you need to:

- Collect the connection settings of your Redis database (`host`, `port`, `username`, `password`, `tls`, etc.), or alternatively its connection string
- Create the Redis cache with `createRedisCache` and pass it to the `useResponseCache` plugin options

import { parse, validate, execute, subscribe } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
import { createRedisCache } from '@envelop/response-cache-redis'
import Redis from 'ioredis'
const redis = new Redis({
host: 'my-redis-db.example.com',
port: 30652,
password: '1234567890'
})
// or, alternatively, pass a connection string:
// const redis = new Redis('rediss://:1234567890@my-redis-db.example.com:30652')
const cache = createRedisCache({ redis })
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
cache,
session: () => null // use global cache for all operations
})
]
})
Note: In the Recipes below, be sure to provide your Redis `cache` instance with `useResponseCache({ cache })`.
yarn add @envelop/response-cache-cloudflare-kv
In order to use the Cloudflare KV cache, you need to:

- Create a KV namespace and bind it in your `wrangler.toml` in order to access it from your worker. Read the KV docs to get started.
- Create the cache with the `createKvCache` function and set it in the `useResponseCache` plugin options. See the example below.

The example below demonstrates how to use this with graphql-yoga within a Cloudflare Worker script.
import { createSchema, createYoga, YogaInitialContext } from 'graphql-yoga'
import { useResponseCache } from '@envelop/response-cache'
import { createKvCache } from '@envelop/response-cache-cloudflare-kv'
import { resolvers } from './graphql-schema/resolvers.generated'
import { typeDefs } from './graphql-schema/typeDefs.generated'
export type Env = {
GRAPHQL_RESPONSE_CACHE: KVNamespace
}
const graphqlServer = createYoga<Env & ExecutionContext>({
schema: createSchema({ typeDefs, resolvers }),
plugins: [
useResponseCache({
cache: createKvCache({
KVName: 'GRAPHQL_RESPONSE_CACHE',
keyPrefix: 'graphql' // optional
}),
session: () => null,
includeExtensionMetadata: true,
ttl: 1000 * 10 // 10 seconds
})
]
})
export default {
fetch: graphqlServer
}
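The worker above assumes a KV namespace bound as `GRAPHQL_RESPONSE_CACHE`. A minimal `wrangler.toml` sketch of that binding (the worker name and namespace id are placeholders, not part of the plugin):

```toml
name = "graphql-worker"
main = "src/index.ts"
compatibility_date = "2024-01-01"

# the binding name must match the `KVName` passed to createKvCache
# and the key declared in the `Env` type above
kv_namespaces = [
  { binding = "GRAPHQL_RESPONSE_CACHE", id = "<your-kv-namespace-id>" }
]
```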
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000, // cached execution results become stale after 2 seconds
session: () => null // use global cache for all operations
})
]
})
Note: Setting `ttl: 0` disables caching for all types by default. You can use this if you wish to turn caching off globally and then enable it only for specific types using `ttlPerType`.
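For instance, a minimal sketch of that pattern (mirroring the examples in this section) that caches nothing except results containing a `Stock` object:

```ts
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'

const getEnveloped = envelop({
  parse,
  validate,
  execute,
  subscribe,
  plugins: [
    useResponseCache({
      // disable caching by default ...
      ttl: 0,
      ttlPerType: {
        // ... and only cache results containing a `Stock` object, for 500ms
        Stock: 500
      },
      session: () => null
    })
  ]
})
```

The example below instead keeps a global default TTL of 2 seconds and shortens it for results containing a `Stock` object: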
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000,
ttlPerType: {
// cached execution results that contain a `Stock` object become stale after 500ms
Stock: 500
}
})
]
})
It is also possible to define the TTL by using the `@cacheControl` directive in your schema.
import { execute, parse, subscribe, validate, buildSchema } from 'graphql'
import { envelop, useSchema } from '@envelop/core'
import { useResponseCache, cacheControlDirective } from '@envelop/response-cache'
const schema = buildSchema(/* GraphQL */ `
${cacheControlDirective}
type Stock @cacheControl(maxAge: 500) {
# ... stock fields ...
}
# ... rest of the schema ...
`)
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
useSchema(schema),
// ... other plugins ...
useResponseCache({ ttl: 2000 })
]
})
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000,
ttlPerSchemaCoordinate: {
// cached execution results that select the `Query.rocketCoordinates` field become stale after 100ms
'Query.rocketCoordinates': 100
}
})
]
})
It is also possible to define the TTL by using the `@cacheControl` directive in your schema.
import { buildSchema, execute, parse, subscribe, validate } from 'graphql'
import { envelop, useSchema } from '@envelop/core'
import { cacheControlDirective, useResponseCache } from '@envelop/response-cache'
const schema = buildSchema(/* GraphQL */ `
${cacheControlDirective}
type Query {
rocketCoordinates: Coordinates @cacheControl(maxAge: 100)
}
# ... rest of the schema ...
`)
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
useSchema(schema),
// ... other plugins ...
useResponseCache({ ttl: 2000 })
]
})
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000,
// context is the GraphQL context used for execution
enabled: context => context.user?.role !== 'admin',
session: () => null
})
]
})
Some types or fields in the schema should never be cached globally because their data is always linked to a session or user. The `PRIVATE` scope enforces this and ensures that responses containing a `PRIVATE`-scoped type or field are never cached without a session. The default scope for all types and fields is `PUBLIC`.
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000,
session: (request) => getSessionId(request), // your own session lookup
scopePerSchemaCoordinate: {
// Set scope for an entire type
PrivateProfile: 'PRIVATE',
// Set scope for a single field
'Profile.privateData': 'PRIVATE',
}
})
]
})
It is also possible to define scopes using the `@cacheControl` directive in your schema.
import { execute, parse, subscribe, validate, buildSchema } from 'graphql'
import { envelop, useSchema } from '@envelop/core'
import { useResponseCache, cacheControlDirective } from '@envelop/response-cache'
const schema = buildSchema(/* GraphQL */`
${cacheControlDirective}
type PrivateProfile @cacheControl(scope: PRIVATE) {
# ...
}
type Profile {
privateData: String @cacheControl(scope: PRIVATE)
}
`)
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
useSchema(schema),
// ... other plugins ...
useResponseCache({
ttl: 2000,
session: (request) => getSessionId(request), // your own session lookup
scopePerSchemaCoordinate: {
// Set scope for an entire type
PrivateProfile: 'PRIVATE',
// Set scope for a single field
'Profile.privateData': 'PRIVATE',
}
})
]
})
You can define a custom function used to check if a query operation execution result should be cached.
type ShouldCacheResultFunction = (params: { result: ExecutionResult }) => boolean
This is useful for advanced use cases, e.g. if you want to cache results with certain error types.
By default, the `defaultShouldCacheResult` function is used, which never caches query operation execution results that include any errors (unexpected, EnvelopError, or GraphQLError).
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { ShouldCacheResultFunction, useResponseCache } from '@envelop/response-cache'
export const myCustomShouldCacheResult: ShouldCacheResultFunction = (params): boolean => {
// cache any query operation execution result
// even if it includes errors
return true
}
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
shouldCacheResult: myCustomShouldCacheResult,
session: () => null
})
]
})
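If you only want to cache results with certain error types, a sketch along these lines works too; the error codes checked here are assumptions about your own schema's error conventions, not something the plugin defines:

```ts
import { ShouldCacheResultFunction } from '@envelop/response-cache'

// error codes we treat as "expected" and safe to cache alongside partial data (illustrative values)
const CACHEABLE_ERROR_CODES = new Set(['NOT_FOUND', 'FORBIDDEN'])

export const cacheExpectedErrors: ShouldCacheResultFunction = ({ result }) => {
  // no errors: cache, matching the default behavior
  if (!result.errors || result.errors.length === 0) {
    return true
  }
  // with errors: cache only when every error carries a known, expected code
  return result.errors.every(error => CACHEABLE_ERROR_CODES.has(String(error.extensions?.code)))
}
```

Pass it the same way as above, via `useResponseCache({ shouldCacheResult: cacheExpectedErrors, session: () => null })`.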
By default, introspection query operations are not cached. In case you want to cache them, you can do so with the `ttlPerSchemaCoordinate` parameter.
Infinite caching
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttlPerSchemaCoordinate: {
'Query.__schema': undefined // cache infinitely
},
session: () => null
})
]
})
TTL caching
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttlPerSchemaCoordinate: {
'Query.__schema': 10_000 // cache for 10 seconds
},
session: () => null
})
]
})
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000, // cached execution results become stale after 2 seconds
session: () => null
})
]
})
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000,
// use the `_id` field instead of the `id` field
idFields: ['_id'],
session: () => null
})
]
})
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000,
// some might prefer invalidating based on a database write log
invalidateViaMutation: false,
session: () => null
})
]
})
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { createInMemoryCache, useResponseCache } from '@envelop/response-cache'
import { emitter } from './eventEmitter'
// we create our cache instance, which allows calling all methods on it
const cache = createInMemoryCache()
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000,
// we pass the cache instance to the plugin
cache,
session: () => null
})
]
})
emitter.on('invalidate', resource => {
cache.invalidate([
{
typename: resource.type,
id: resource.id
}
])
})
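You can also invalidate every cached operation that contains a given type by omitting the `id`. A small sketch, assuming the same `cache` instance and a hypothetical bulk event:

```ts
// e.g. after a bulk import that touched many users at once,
// drop every cached result that contains any `User` entity
emitter.on('bulk-import-finished', () => {
  cache.invalidate([{ typename: 'User' }])
})
```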
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { createInMemoryCache, useResponseCache } from '@envelop/response-cache'
import { emitter } from './eventEmitter'
// we create our cache instance, which allows calling all methods on it
const cache = createInMemoryCache({
// in Relay we have globally unique ids, so there is no need to use `typename:id`
makeId: (typename, id) => id ?? typename
})
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000,
// we pass the cache instance to the plugin
cache,
session: () => null
})
]
})
If you have some kind of custom logic that should be used to calculate the TTL at runtime, you can use the `onTtl` option. The following example reads the `Cache-Control` header from a remote server and uses it to calculate the TTL.
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop, useSchema } from '@envelop/core'
import { makeExecutableSchema } from '@graphql-tools/schema'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
useSchema(
makeExecutableSchema({
typeDefs: /* GraphQL */ `
type Query {
dataFromRemote: String
}
`,
resolvers: {
Query: {
dataFromRemote: async (_, __, context) => {
const res = await fetch('https://api.example.com/data')
const cacheControlHeader = res.headers.get('Cache-Control')
if (cacheControlHeader) {
const maxAgeInSeconds = cacheControlHeader.match(/max-age=(\d+)/)
if (maxAgeInSeconds) {
const ttl = parseInt(maxAgeInSeconds[1]) * 1000
if (context.ttl == null || ttl < context.ttl) {
context.ttl = ttl
}
}
}
return res.text()
}
}
}
})
),
useResponseCache({
session: () => null,
onTtl({ ttl, context }) {
if (context.ttl != null && context.ttl < ttl) {
return context.ttl
}
return ttl
}
})
]
})
For debugging or monitoring it might be useful to know whether a response got served from the cache or not.
import { execute, parse, subscribe, validate } from 'graphql'
import { envelop } from '@envelop/core'
import { useResponseCache } from '@envelop/response-cache'
const getEnveloped = envelop({
parse,
validate,
execute,
subscribe,
plugins: [
// ... other plugins ...
useResponseCache({
ttl: 2000,
includeExtensionMetadata: true,
session: () => null
})
]
})
This option will attach the following fields to the execution result if set to `true` (or if `process.env["NODE_ENV"]` is `"development"`):

- `extensions.responseCache.hit` - Whether the result was served from the cache or not
- `extensions.responseCache.invalidatedEntities` - Entities that got invalidated by a mutation operation

Cache miss (response is generated by executing the query):
query UserById {
user(id: "1") {
id
name
}
}
{
"data": {
"user": {
"id": "1",
"name": "Laurin"
}
},
"extensions": {
"responseCache": {
"hit": false
}
}
}
Cache hit (response served from response cache):
query UserById {
user(id: "1") {
id
name
}
}
{
"data": {
"user": {
"id": "1",
"name": "Laurin"
}
},
"extensions": {
"responseCache": {
"hit": true
}
}
}
Invalidation via Mutation:
mutation SetNameMutation {
userSetName(name: "NotLaurin") {
user {
id
name
}
}
}
{
"data": {
"userSetName": {
"user": {
"id": "1",
"name": "NotLaurin"
}
}
},
"extensions": {
"responseCache": {
"invalidatedEntities": [{ "id": "1", "typename": "User" }]
}
}
}