@sanity/client
Client for retrieving, creating and patching data from Sanity.io
@sanity/client is a JavaScript client for Sanity.io, a platform for structured content. It allows developers to interact with the Sanity content lake, enabling them to fetch, create, update, and delete documents, as well as perform queries and mutations.
Fetching Documents
This feature allows you to fetch documents from your Sanity dataset using GROQ queries. The example fetches all documents of type 'post'.
const {createClient} = require('@sanity/client');
const client = createClient({
projectId: 'yourProjectId',
dataset: 'yourDataset',
useCdn: true
});
client.fetch('*[_type == "post"]').then(posts => {
console.log(posts);
});
Creating Documents
This feature allows you to create new documents in your Sanity dataset. The example creates a new document of type 'post' with a title and body.
const {createClient} = require('@sanity/client');
const client = createClient({
projectId: 'yourProjectId',
dataset: 'yourDataset',
useCdn: false
});
client.create({
_type: 'post',
title: 'Hello World',
body: 'This is my first post!'
}).then(response => {
console.log('Document created:', response);
});
Updating Documents
This feature allows you to update existing documents in your Sanity dataset. The example updates the title of a document with a specific ID.
const {createClient} = require('@sanity/client');
const client = createClient({
projectId: 'yourProjectId',
dataset: 'yourDataset',
useCdn: false
});
client.patch('documentId')
.set({ title: 'Updated Title' })
.commit()
.then(updatedDocument => {
console.log('Document updated:', updatedDocument);
});
Deleting Documents
This feature allows you to delete documents from your Sanity dataset. The example deletes a document with a specific ID.
const {createClient} = require('@sanity/client');
const client = createClient({
projectId: 'yourProjectId',
dataset: 'yourDataset',
useCdn: false
});
client.delete('documentId').then(response => {
console.log('Document deleted:', response);
});
Contentful is a content management platform that provides a similar set of functionalities for managing structured content. It allows you to fetch, create, update, and delete content entries. Compared to @sanity/client, Contentful offers a more traditional CMS experience with a focus on ease of use and integration with various front-end frameworks.
Prismic is another headless CMS that offers a JavaScript client for interacting with its API. It provides functionalities for querying, creating, and managing content. Prismic's API is more RESTful compared to Sanity's GROQ-based querying, and it emphasizes a slice-based content modeling approach.
Strapi is an open-source headless CMS that provides a JavaScript SDK for interacting with its API. It allows you to perform CRUD operations on your content types. Strapi is highly customizable and can be self-hosted, offering more control over the backend compared to Sanity.
JavaScript client for Sanity. Works in modern browsers, as well as runtimes like Node.js, Bun, Deno, and Edge Runtime
Install the client with a package manager:
npm install @sanity/client
Import and create a new client instance, and use its methods to interact with your project's Content Lake. Below are some simple examples in plain JavaScript. Read further for more comprehensive documentation.
// sanity.js
import {createClient} from '@sanity/client'
// Import using ESM URL imports in environments that support it:
// import {createClient} from 'https://esm.sh/@sanity/client'
export const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
// token: process.env.SANITY_SECRET_TOKEN // Needed for certain operations like updating content or accessing previewDrafts perspective
})
// uses GROQ to query content: https://www.sanity.io/docs/groq
export async function getPosts() {
const posts = await client.fetch('*[_type == "post"]')
return posts
}
export async function createPost(post) {
  const result = await client.create(post)
  return result
}
export async function updateDocumentTitle(_id, title) {
  const result = await client.patch(_id).set({title}).commit()
  return result
}
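For illustration, here is a minimal sketch of how these helpers might be consumed elsewhere in an app (assuming the file above is saved as sanity.js and your dataset contains documents of type post; the consuming file name is hypothetical):

// app.js (hypothetical consumer of sanity.js)
import {getPosts, updateDocumentTitle} from './sanity.js'

const posts = await getPosts()
console.log(`Fetched ${posts.length} posts`)

// Update the title of the first post, if there is one
if (posts.length > 0) {
  const updated = await updateDocumentTitle(posts[0]._id, 'A better title')
  console.log('New title:', updated.title)
}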
The full migration guide at the end of this document covers these breaking changes:

v5
- useCdn now defaults to true

v4
- ES5 is no longer supported
- Node.js v12 is no longer supported
- The default export is replaced with the named export createClient
- client.assets.delete is removed
- client.assets.getImageUrl is removed, replace with @sanity/image-url
- SanityClient static properties moved to named exports
- client.clientConfig is removed, replace with client.config()
- client.isPromiseAPI() is removed, replace with an instanceof check
- client.observable.isObservableAPI() is removed, replace with an instanceof check
- client._requestObservable is removed, replace with client.observable.request
- client._dataRequest is removed, replace with client.dataRequest
- client._create is removed, replace with one of client.create, client.createIfNotExists or client.createOrReplace
- client.patch.replace is removed, replace with client.createOrReplace
- client.auth is removed, replace with client.request
Sanity Client transpiles syntax for modern browsers. The JavaScript runtime must support ES6 features such as class, rest parameters, spread syntax and more. Most modern web frameworks, browsers, and developer tooling support ES6 today.
For legacy ES5 environments we recommend v4.
The client can be installed from npm:
npm install @sanity/client
# Alternative package managers
yarn add @sanity/client
pnpm install @sanity/client
const client = createClient(options)
Initializes a new Sanity Client. Required options are projectId, dataset, and apiVersion. We encourage setting useCdn to either true or false. The default is true. If you're not sure which option to choose, we recommend starting with true and revising later if you find that you require uncached content. Our awesome Slack community can help guide you on how to avoid stale data, tailored to your tech stack and architecture.
import {createClient} from '@sanity/client'
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
const data = await client.fetch(`count(*)`)
console.log(`Number of documents: ${data}`)
const {createClient} = require('@sanity/client')
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
client
.fetch(`count(*)`)
.then((data) => console.log(`Number of documents: ${data}`))
.catch(console.error)
import {createClient, type ClientConfig} from '@sanity/client'
const config: ClientConfig = {
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
}
const client = createClient(config)
const data = await client.fetch<number>(`count(*)`)
// data is typed as `number`
console.log(`Number of documents: ${data}`)
We're currently exploring typed GROQ queries that are runtime safe, and will share more when we've landed on a solution we're satisfied with. Until then you can achieve this using Zod:
import {createClient} from '@sanity/client'
import {z} from 'zod'
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
const schema = z.number()
const data = schema.parse(await client.fetch(`count(*)`))
// data is guaranteed to be `number`, or zod will throw an error
console.log(`Number of documents: ${data}`)
Another alternative is groqd.
import {createClient} from '@sanity/client'
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
export default async function ReactServerComponent() {
const data = await client.fetch<number>(
`count(*[_type == "page"])`,
{},
{
// You can set any of the `cache` and `next` options as you would on a standard `fetch` call
cache: 'force-cache',
next: {tags: ['pages']},
},
)
return <h1>Number of pages: {data}</h1>
}
The cache and next options are documented in the Next.js documentation.
Since request memoization is supported, it's unnecessary to use the React.cache API.
To opt out of memoization, set the signal property:
const {signal} = new AbortController()
// By passing `signal` this request will not be memoized and `now()` will execute for every React Server Component that runs this query
const data = await client.fetch<number>(`{"dynamic": now()}`, {}, {signal})
bun init
bun add @sanity/client
open index.ts
// index.ts
import {createClient} from '@sanity/client'
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
const data = await client.fetch<number>(`count(*)`)
console.write(`Number of documents: ${data}`)
bun run index.ts
# Number of documents ${number}
deno init
open main.ts
// main.ts
import {createClient} from 'https://esm.sh/@sanity/client'
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
const data = await client.fetch<number>(`count(*)`)
console.log(`Number of documents: ${data}`)
deno run --allow-net --allow-env main.ts
# Number of documents ${number}
npm install next
// pages/api/total.ts
import {createClient} from '@sanity/client'
import type {NextRequest} from 'next/server'
export const config = {
runtime: 'edge',
}
export default async function handler(req: NextRequest) {
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
const count = await client.fetch<number>(`count(*)`)
return new Response(JSON.stringify({count}), {
status: 200,
headers: {
'content-type': 'application/json',
},
})
}
npx next dev
# Open http://localhost:3000/api/total
# {"count": number}
Using esm.sh you can either load the client using a <script type="module"> tag:
<script type="module">
import {createClient} from 'https://esm.sh/@sanity/client'
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
const data = await client.fetch(`count(*)`)
document.getElementById('results').innerText = `Number of documents: ${data}`
</script>
<div id="results"></div>
Or from anywhere using a dynamic import():
// You can run this snippet from your browser DevTools console.
// Super handy when you're quickly testing out queries.
const {createClient} = await import('https://esm.sh/@sanity/client')
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
const data = await client.fetch(`count(*)`)
console.log(`Number of documents: ${data}`)
Loading the UMD script creates a SanityClient global that has the same exports as import * as SanityClient from '@sanity/client':
<script src="https://unpkg.com/@sanity/client"></script>
<!-- Unminified build for debugging -->
<!--<script src="https://unpkg.com/@sanity/client/umd/sanityClient.js"></script>-->
<script>
const {createClient} = SanityClient
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
client.fetch(`count(*)`).then((data) => console.log(`Number of documents: ${data}`))
</script>
The require-unpkg library lets you consume npm packages from unpkg.com, similar to how esm.sh lets you import() anything:
<div id="results"></div>
<script src="https://unpkg.com/require-unpkg"></script>
<script>
;(async () => {
// const {createClient} = await require('@sanity/client')
const [$, {createClient}] = await require(['jquery', '@sanity/client'])
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2023-05-03', // use current date (YYYY-MM-DD) to target the latest API version
})
const data = await client.fetch(`count(*)`)
$('#results').text(`Number of documents: ${data}`)
})()
</script>
Sanity uses ISO dates (YYYY-MM-DD) in UTC timezone for versioning. The explanation for this can be found in the documentation.
In general, unless you know what API version you want to use, you'll want to statically set it to today's UTC date when starting a new project. By doing this, you'll get all the latest bugfixes and features, while locking the API to prevent breaking changes.
Note: Do not be tempted to use a dynamic value for the apiVersion. The reason for setting a static value is to prevent unexpected, breaking changes.
In future versions, specifying an API version will be required. For now (to maintain backwards compatibility) not specifying a version will trigger a deprecation warning and fall back to using v1.
Request tags are values assigned to API and CDN requests that can be used to filter and aggregate log data within request logs from your Sanity Content Lake.
Sanity Client has out-of-the-box support for tagging every API and CDN request on two levels: the requestTagPrefix client configuration parameter, and the tag option on individual requests.
The following example will result in a query with tag=website.landing-page:
const client = createClient({
projectId: '<project>',
dataset: '<dataset>',
useCdn: true,
apiVersion: '2024-01-24',
requestTagPrefix: 'website', // Added to every request
})
const posts = await client.fetch('*[_type == "post"]', {
  tag: 'landing-page', // Appended to requestTagPrefix for this individual request
})
const query = '*[_type == "bike" && seats >= $minSeats] {name, seats}'
const params = {minSeats: 2}
client.fetch(query, params).then((bikes) => {
console.log('Bikes with more than one seat:')
bikes.forEach((bike) => {
console.log(`${bike.name} (${bike.seats} seats)`)
})
})
client.fetch(query, params = {})
Perform a query using the given parameters (if any).
The perspective option can be used to specify special filtering behavior for queries. The default value is raw, which means no special filtering is applied, while published and previewDrafts can be used to optimize for specific use cases.
published
Useful for when you want to be sure that draft documents are not returned in production. Pairs well with private datasets.
With a dataset that looks like this:
[
{
"_type": "author",
"_id": "ecfef291-60f0-4609-bbfc-263d11a48c43",
"name": "George Martin"
},
{
"_type": "author",
"_id": "drafts.ecfef291-60f0-4609-bbfc-263d11a48c43",
"name": "George R.R. Martin"
},
{
"_type": "author",
"_id": "drafts.f4898efe-92c4-4dc0-9c8c-f7480aef17e2",
"name": "Stephen King"
}
]
And a query like this:
import {createClient} from '@sanity/client'
const client = createClient({
...config,
useCdn: true, // set to `false` to bypass the edge cache
perspective: 'published',
})
const authors = await client.fetch('*[_type == "author"]')
Then authors will only contain documents that don't have a drafts. prefix in their _id, in this case just "George Martin":
[
{
"_type": "author",
"_id": "ecfef291-60f0-4609-bbfc-263d11a48c43",
"name": "George Martin"
}
]
previewDrafts
Designed to help answer the question "What is our app going to look like after all the draft documents are published?".
Given a dataset like this:
[
{
"_type": "author",
"_id": "ecfef291-60f0-4609-bbfc-263d11a48c43",
"name": "George Martin"
},
{
"_type": "author",
"_id": "drafts.ecfef291-60f0-4609-bbfc-263d11a48c43",
"name": "George R.R. Martin"
},
{
"_type": "author",
"_id": "drafts.f4898efe-92c4-4dc0-9c8c-f7480aef17e2",
"name": "Stephen King"
},
{
"_type": "author",
"_id": "6b3792d2-a9e8-4c79-9982-c7e89f2d1e75",
"name": "Terry Pratchett"
}
]
And a query like this:
import {createClient} from '@sanity/client'
const client = createClient({
...config,
useCdn: false, // the `previewDrafts` perspective requires this to be `false`
perspective: 'previewDrafts',
})
const authors = await client.fetch('*[_type == "author"]')
Then authors will look like this. Note that the result dedupes documents with a preference for the draft version:
[
{
"_type": "author",
"_id": "ecfef291-60f0-4609-bbfc-263d11a48c43",
"_originalId": "drafts.ecfef291-60f0-4609-bbfc-263d11a48c43",
"name": "George R.R. Martin"
},
{
"_type": "author",
"_id": "f4898efe-92c4-4dc0-9c8c-f7480aef17e2",
"_originalId": "drafts.f4898efe-92c4-4dc0-9c8c-f7480aef17e2",
"name": "Stephen King"
},
{
"_type": "author",
"_id": "6b3792d2-a9e8-4c79-9982-c7e89f2d1e75",
"_originalId": "6b3792d2-a9e8-4c79-9982-c7e89f2d1e75",
"name": "Terry Pratchett"
}
]
Since the query simulates what the result will be after publishing the drafts, the _id doesn't contain the drafts. prefix. If you want to check whether a document is a draft or not you can use the _originalId field, which is only available when using the previewDrafts perspective.
const authors = await client.fetch(`*[_type == "author"]{..., "status": select(
_originalId in path("drafts.**") => "draft",
"published"
)}`)
Which changes the result to be:
[
{
"_type": "author",
"_id": "ecfef291-60f0-4609-bbfc-263d11a48c43",
"_originalId": "drafts.ecfef291-60f0-4609-bbfc-263d11a48c43",
"name": "George R.R. Martin",
"status": "draft"
},
{
"_type": "author",
"_id": "f4898efe-92c4-4dc0-9c8c-f7480aef17e2",
"_originalId": "f4898efe-92c4-4dc0-9c8c-f7480aef17e2",
"name": "Stephen King",
"status": "published"
}
]
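For completeness, a minimal sketch of switching perspective per request rather than on the client instance. This assumes the third fetch argument accepts a perspective option in your client version (the query and dataset are the ones from the examples above); the previewDrafts perspective typically also requires an authenticated client:

// Only published documents for this one query
const publishedAuthors = await client.fetch('*[_type == "author"]', {}, {perspective: 'published'})

// Simulate "after publish" for this one query (usually needs a token and useCdn: false)
const draftPreview = await client.fetch('*[_type == "author"]', {}, {perspective: 'previewDrafts'})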
Content Source Maps annotate fragments in your query results with metadata about their origin: the field, document, and dataset they originated from.
[!IMPORTANT]
Content Source Maps are supported in the Content Lake API versions 2021-03-25 and later.
Before diving in, review the Content Source Maps introduction and keep the Content Source Maps reference within reach for a quick lookup.
Enabling Content Source Maps is a two-step process:
1. Update your client configuration with resultSourceMap.
import {createClient} from '@sanity/client'
const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
useCdn: true, // set to `false` to bypass the edge cache
apiVersion: '2021-03-25', // use current date (YYYY-MM-DD) to target the latest API version
resultSourceMap: true, // tells the API to start sending source maps, if available
})
2. On client.fetch calls, add {filterResponse: false} to return the full response on queries.
// Before
// const result = await client.fetch(query, params)
// After adding `filterResponse: false`
const {result, resultSourceMap} = await client.fetch(query, params, {filterResponse: false})
// Build something cool with the source map
console.log(resultSourceMap)
If your apiVersion is 2021-03-25 or later, the resultSourceMap property will always exist in the response after enabling it. If there is no source map, resultSourceMap is an empty object.
This is the corresponding TypeScript definition:
import type {ContentSourceMapping} from '@sanity/client'
const {result, resultSourceMap} = await client.fetch(query, params, {filterResponse: false})
function useContentSourceMap(resultSourceMap: ContentSourceMapping): unknown {
// Sky's the limit
}
useContentSourceMap(resultSourceMap)
A turnkey integration with Visual Editing is available in @sanity/client, with additional utils available on @sanity/client/stega. It creates edit intent links for all the string values in your query result, using steganography under the hood. The code that handles stega is lazy loaded on demand when client.fetch is called, if client.config().stega.enabled is true.
import {createClient} from '@sanity/client'
const client = createClient({
// ...base config options
stega: {
// If you use Vercel Visual Editing, we recommend enabling it for Preview deployments
enabled: process.env.VERCEL_ENV === 'preview',
// Required: Set it to the relative or absolute URL of your Sanity Studio instance
studioUrl: '/studio', // or 'https://your-project-name.sanity.studio'
// To resolve Cross Dataset References, pass a function returning a URL
studioUrl: (sourceDocument: ContentSourceMapDocument | ContentSourceMapRemoteDocument) => {
// If `sourceDocument` has a projectId and a dataset, then it's a Cross Dataset Reference
if (sourceDocument._projectId && sourceDocument._dataset) {
return 'https://acme-global.sanity.studio'
}
return 'https://acme-store.sanity.studio'
},
// If your Studio has Workspaces: https://www.sanity.io/docs/workspaces
// and if your Cross Dataset References are available in a workspace, you can return an object to let the client set up the URL
studioUrl: (sourceDocument) => {
// This organization has a single studio with everything organized in workspaces
const baseUrl = 'https://acme.sanity.studio'
// If `sourceDocument` has a projectId and a dataset, then it's a Cross Dataset Reference
if (sourceDocument._projectId && sourceDocument._dataset) {
return {baseUrl, workspace: 'global'}
}
return {baseUrl, workspace: 'store'}
},
// Optional, to control which fields have stega payloads
filter: (props) => {
const {resultPath, sourcePath, sourceDocument, value} = props
if (sourcePath[0] === 'externalurl') {
return false
}
// The default behavior is packaged into `filterDefault`, allowing you to enable encoding fields that are skipped by default
return props.filterDefault(props)
},
// Optional, to log what's encoded and what isn't
// logger: console,
},
})
// Disable on demand
client.config({stega: {enabled: false}})
// New client with different stega settings
const debugClient = client.withConfig({
stega: {studioUrl: 'https://your-project-name.sanity.studio', logger: console},
})
Removing stega from part of the result, available on @sanity/client/stega:
import {stegaClean} from '@sanity/client/stega'
const result = await client.fetch('*[_type == "video"][0]')
// Remove stega from the payload sent to a third party library
const videoAsset = stegaClean(result.videoAsset)
If you want to create an edit link to something that isn't a string, or a field that isn't rendered directly, like a slug used in a URL but not rendered on the page, you can use the resolveEditUrl function.
import {createClient} from '@sanity/client'
import {resolveEditUrl} from '@sanity/client/csm'
const client = createClient({
// ... standard client config
// Required: the new 'withKeyArraySelector' option is used here instead of 'true' so that links to array items and portable text are stable even if the array is reordered
resultSourceMap: 'withKeyArraySelector',
})
const {result, resultSourceMap} = await client.fetch(
`*[_type == "author" && slug.current == $slug][0]{name, pictures}`,
{slug: 'john-doe'},
// Required, otherwise you can't access `resultSourceMap`
{filterResponse: false},
)
// The `result` looks like this:
const result = {
name: 'John Doe',
pictures: [
{
_type: 'image',
alt: 'A picture of exactly what you think someone named John Doe would look like',
_key: 'cee5fbb69da2',
asset: {
_ref: 'image-a75b03fdd5b5fa36947bf2b776a542e0c940f682-1000x1500-jpg',
_type: 'reference',
},
},
],
}
const studioUrl = 'https://your-project-name.sanity.studio'
resolveEditUrl({
// The URL resolver requires the base URL of your Sanity Studio instance
studioUrl,
// It also requires a Content Source Map for the query result you want to create an edit intent link for
resultSourceMap,
// The path to the field you want to edit. You can pass a string
resultPath: 'pictures[0].alt',
// or an array of segments
resultPath: ['pictures', 0, 'alt'],
})
// ^? 'https://your-project-name.sanity.studio/intent/edit/mode=presentation;id=462efcc6-3c8b-47c6-8474-5544e1a4acde;type=author;path=pictures[_key=="cee5fbb69da2"].alt'
const query = '*[_type == "comment" && authorId != $ownerId]'
const params = {ownerId: 'bikeOwnerUserId'}
const subscription = client.listen(query, params).subscribe((update) => {
const comment = update.result
console.log(`${comment.author} commented: ${comment.text}`)
})
// to unsubscribe later on
subscription.unsubscribe()
client.listen(query, params = {}, options = {includeResult: true})
Open a query that listens for updates on matched documents, using the given parameters (if any). The return value is an RxJS Observable. When calling .subscribe() on the returned observable, a subscription is returned, and this can be used to unsubscribe from the query later on by calling subscription.unsubscribe().
The update events which are emitted always contain mutation, which is an object containing the mutation which triggered the document to appear as part of the query.
By default, the emitted update event will also contain a result property, which contains the document with the mutation applied to it. In case of a delete mutation, however, this property will not be present. You can also tell the client not to return the document (to save bandwidth, or in cases where the mutation or the document ID is the only relevant factor) by setting the includeResult property to false in the options.
Likewise, you can also have the client return the document before the mutation was applied, by setting includePreviousRevision to true in the options, which will include a previous property in each emitted object.
If it's not relevant to know which mutations were applied, you can also set includeMutation to false in the options, which will save some additional bandwidth by omitting the mutation property from the received events.
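As an illustration of these options, here is a minimal sketch that skips the result and requests the previous revision instead (the query and parameter names are placeholders):

const subscription = client
  .listen(
    '*[_type == "comment" && authorId == $authorId]',
    {authorId: 'some-author-id'},
    {includeResult: false, includePreviousRevision: true},
  )
  .subscribe((update) => {
    // `update.result` is omitted because `includeResult` is false
    console.log('Mutation that triggered the event:', update.mutation)
    console.log('Document before the mutation:', update.previous)
  })

// later, when no longer needed
subscription.unsubscribe()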
This will fetch a document from the Doc endpoint. This endpoint cuts through any caching/indexing middleware that may involve delayed processing. As it is less scalable/performant than the other query mechanisms, it should be used sparingly. Performing a query is usually a better option.
client.getDocument('bike-123').then((bike) => {
console.log(`${bike.name} (${bike.seats} seats)`)
})
This will fetch multiple documents in one request from the Doc endpoint. This endpoint cuts through any caching/indexing middleware that may involve delayed processing. As it is less scalable/performant than the other query mechanisms, it should be used sparingly. Performing a query is usually a better option.
client.getDocuments(['bike123', 'bike345']).then(([bike123, bike345]) => {
console.log(`Bike 123: ${bike123.name} (${bike123.seats} seats)`)
console.log(`Bike 345: ${bike345.name} (${bike345.seats} seats)`)
})
Note: Unlike in the HTTP API, the order/position of documents is preserved based on the original array of IDs. If any of the documents are missing, they will be replaced by a null entry in the returned array:
const ids = ['bike123', 'nonexistent-document', 'bike345']
client.getDocuments(ids).then((docs) => {
// the docs array will be:
// [{_id: 'bike123', ...}, null, {_id: 'bike345', ...}]
})
[!NOTE]
Live Content is experimental and requires your client config to be set up with apiVersion: 'vX'.
// Subscribe to live updates
const subscription = client.live.events().subscribe((event) => {
// Check if incoming tags match saved sync tags
if (event.type === 'message' && event.tags.some((tag) => syncTags.includes(tag))) {
// Refetch with ID to get latest data
render(event.id)
}
if (event.type === 'restart') {
// A restart event is sent when the `lastLiveEventId` we've been given earlier is no longer usable
render()
}
})
// Later, unsubscribe when no longer needed (such as on unmount)
// subscription.unsubscribe()
client.live.events(options)
Listen to live content updates. Returns an RxJS Observable. When calling .subscribe() on the returned observable, a subscription is returned, which can be used to unsubscribe from the events later on by calling subscription.unsubscribe().
The options object can contain the following properties:
- includeDrafts (boolean) - Whether to include draft documents in the events. Default is false. Note: This is an experimental API and may change or be removed.
- tag (string) - Optional request tag for the listener. Use to identify the request in logs.
The method will emit different types of events:
- message: Regular event messages.
- restart: Emitted when the event stream restarts.
- reconnect: Emitted when the client reconnects to the event stream.
- welcome: Emitted when the client successfully connects to the event stream.
To listen to updates in draft content, set includeDrafts to true and configure the client with a token or withCredentials: true. The token should have the lowest possible access role.
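For example, a short sketch of listening to draft content with a request tag, based on the options listed above (it assumes the client is configured with apiVersion: 'vX' and a token, as described):

const subscription = client.live.events({includeDrafts: true, tag: 'draft-preview'}).subscribe((event) => {
  if (event.type === 'welcome') {
    console.log('Connected to the live event stream')
  }
  if (event.type === 'message') {
    console.log('Received sync tags:', event.tags)
  }
})

// subscription.unsubscribe() when no longer needed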
const doc = {
_type: 'bike',
name: 'Sanity Tandem Extraordinaire',
seats: 2,
}
client.create(doc).then((res) => {
console.log(`Bike was created, document ID is ${res._id}`)
})
client.create(doc)
client.create(doc, mutationOptions)
Create a document. Argument is a plain JS object representing the document. It must contain a _type attribute. It may contain an _id. If an ID is not specified, it will automatically be created.
To create a draft document, prefix the document ID with drafts. - e.g. _id: 'drafts.myDocumentId'. To auto-generate a draft document ID, set _id to drafts. (nothing after the .).
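For example, a small sketch of creating draft documents (the bike fields mirror the example above):

// Create a draft with a known ID
client.create({
  _id: 'drafts.my-bike',
  _type: 'bike',
  name: 'Sanity Tandem Extraordinaire',
  seats: 2,
})

// Create a draft with an auto-generated ID (note the trailing dot)
client
  .create({
    _id: 'drafts.',
    _type: 'bike',
    name: 'Sanity Tandem Extraordinaire',
    seats: 2,
  })
  .then((res) => {
    console.log(`Draft was created, document ID is ${res._id}`)
  })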
const doc = {
_id: 'my-bike',
_type: 'bike',
name: 'Sanity Tandem Extraordinaire',
seats: 2,
}
client.createOrReplace(doc).then((res) => {
console.log(`Bike was created, document ID is ${res._id}`)
})
client.createOrReplace(doc)
client.createOrReplace(doc, mutationOptions)
If you are not sure whether or not a document exists but want to overwrite it if it does, you can use the createOrReplace() method. When using this method, the document must contain an _id attribute.
const doc = {
_id: 'my-bike',
_type: 'bike',
name: 'Sanity Tandem Extraordinaire',
seats: 2,
}
client.createIfNotExists(doc).then((res) => {
console.log('Bike was created (or was already present)')
})
client.createIfNotExists(doc)
client.createIfNotExists(doc, mutationOptions)
If you want to create a document if it does not already exist, but fall back without error if it does, you can use the createIfNotExists() method. When using this method, the document must contain an _id attribute.
client
.patch('bike-123') // Document ID to patch
.set({inStock: false}) // Shallow merge
.inc({numSold: 1}) // Increment field by count
.commit() // Perform the patch and return a promise
.then((updatedBike) => {
console.log('Hurray, the bike is updated! New document:')
console.log(updatedBike)
})
.catch((err) => {
console.error('Oh no, the update failed: ', err.message)
})
Modify a document. patch takes a document ID. set merges the partialDoc with the stored document. inc increments the given field with the given numeric value. commit executes the given patch. Returns the updated object.
client.patch(docId)
  [operations]
  .commit(mutationOptions)
client.patch('bike-123').setIfMissing({title: 'Untitled bike'}).commit()
client.patch('bike-123').unset(['title', 'price']).commit()
client
.patch('bike-123')
.inc({price: 88, numSales: 1}) // Increment `price` by 88, `numSales` by 1
.dec({inStock: 1}) // Decrement `inStock` by 1
.commit()
You can use the ifRevisionId(rev) method to specify that you only want the patch to be applied if the stored document matches a given revision.
client
.patch('bike-123')
.ifRevisionId('previously-known-revision')
.set({title: 'Little Red Tricycle'})
.commit()
The patch operation insert takes a location (before, after or replace), a path selector and an array of elements to insert.
client
.patch('bike-123')
// Ensure that the `reviews` arrays exists before attempting to add items to it
.setIfMissing({reviews: []})
// Add the items after the last item in the array (append)
.insert('after', 'reviews[-1]', [{title: 'Great bike!', stars: 5}])
.commit({
// Adds a `_key` attribute to array items, unique within the array, to
// ensure it can be addressed uniquely in a real-time collaboration context
autoGenerateArrayKeys: true,
})
The operations of appending and prepending to an array are so common that they have been given their own methods for better readability:
client
.patch('bike-123')
.setIfMissing({reviews: []})
.append('reviews', [{title: 'Great bike!', stars: 5}])
.commit({autoGenerateArrayKeys: true})
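Prepending works the same way; a minimal sketch using prepend, the counterpart to append in the example above:

client
  .patch('bike-123')
  .setIfMissing({reviews: []})
  // Add the item before the first item in the array (prepend)
  .prepend('reviews', [{title: 'Got here first!', stars: 4}])
  .commit({autoGenerateArrayKeys: true})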
Each entry in the unset array can be either an attribute or a JSON path.
In this example, we remove the first review and the review with _key: 'abc123' from the bike.reviews array:
const reviewsToRemove = ['reviews[0]', 'reviews[_key=="abc123"]']
client.patch('bike-123').unset(reviewsToRemove).commit()
A single document can be deleted by specifying a document ID:
client.delete(docId)
client.delete(docId, mutationOptions)
client
.delete('bike-123')
.then(() => {
console.log('Bike deleted')
})
.catch((err) => {
console.error('Delete failed: ', err.message)
})
One or more documents can be deleted by specifying a GROQ query (and optionally, params):
client.delete({ query: "GROQ query", params: { key: value } })
// Without params
client
.delete({query: '*[_type == "bike"][0]'})
.then(() => {
console.log('The document matching *[_type == "bike"][0] was deleted')
})
.catch((err) => {
console.error('Delete failed: ', err.message)
})
// With params
client
.delete({query: '*[_type == $type][0..1]', params: {type: 'bike'}})
.then(() => {
console.log('The documents matching *[_type == "bike"][0..1] were deleted')
})
.catch((err) => {
console.error('Delete failed: ', err.message)
})
const namePatch = client.patch('bike-310').set({name: 'A Bike To Go'})
client
.transaction()
.create({name: 'Sanity Tandem Extraordinaire', seats: 2})
.delete('bike-123')
.patch(namePatch)
.commit()
.then((res) => {
console.log('Whole lot of stuff just happened')
})
.catch((err) => {
console.error('Transaction failed: ', err.message)
})
client.transaction().create(doc).delete(docId).patch(patch).commit()
Create a transaction to perform chained mutations.
client
.transaction()
.create({name: 'Sanity Tandem Extraordinaire', seats: 2})
.patch('bike-123', (p) => p.set({inStock: false}))
.commit()
.then((res) => {
console.log('Bike created and a different bike is updated')
})
.catch((err) => {
console.error('Transaction failed: ', err.message)
})
client.transaction().create(doc).patch(docId, p => p.set(partialDoc)).commit()
A patch can be performed inline on a transaction.
Transactions and patches can also be built outside the scope of a client:
import {createClient, Patch, Transaction} from '@sanity/client'
const client = createClient({
projectId: 'your-project-id',
dataset: 'bikeshop',
})
// Patches:
const patch = new Patch('<documentId>')
client.mutate(patch.inc({count: 1}).unset(['visits']))
// Transactions:
const transaction = new Transaction().create({_id: '123', name: 'FooBike'}).delete('someDocId')
client.mutate(transaction)
const patch = new Patch(docId)
const transaction = new Transaction()
An important note on this approach is that you cannot call commit() on transactions or patches instantiated this way; instead you have to pass them to client.mutate().
The Actions API provides a new interface for creating, updating and publishing documents; client.action() is a wrapper around this HTTP API.
This API is only available from API version v2024-05-23.
The following options are available for actions, and can be applied as the second argument to action().
- transactionId: If set, this ID is used as the transaction ID for the action instead of an autogenerated one.
- dryRun (true|false) - default false. If true, the mutation will be a dry run - the response will be identical to the one returned had this property been omitted or false (including error responses) but no documents will be affected.
- skipCrossDatasetReferenceValidation (true|false) - default false. If true, validation of cross dataset references will be skipped. This is useful when you are creating a document that references a document in a different dataset, and you want to skip the validation to avoid an error.
A document draft can be created by specifying a create action type:
client
.action(
{
actionType: 'sanity.action.document.create',
publishedId: 'bike-123',
attributes: {name: 'Sanity Tandem Extraordinaire', _type: 'bike', seats: 1},
ifExists: 'fail',
},
actionOptions,
)
.then(() => {
console.log('Bike draft created')
})
.catch((err) => {
console.error('Create draft failed: ', err.message)
})
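For illustration, the actionOptions object used in these examples might look like this, a sketch built from the options listed above (the transaction ID is a placeholder):

const actionOptions = {
  // Optional: use a specific transaction ID instead of an autogenerated one
  transactionId: 'my-custom-transaction-id',
  // Optional: set to `true` to validate and get a normal response without affecting any documents
  dryRun: false,
  // Optional: skip validation of cross dataset references
  skipCrossDatasetReferenceValidation: false,
}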
A published document can be deleted by specifying a delete action type, optionally including some drafts:
client
.action(
{
actionType: 'sanity.action.document.delete',
publishedId: 'bike-123',
includeDrafts: ['drafts.bike-123'],
},
actionOptions,
)
.then(() => {
console.log('Bike deleted')
})
.catch((err) => {
console.error('Delete failed: ', err.message)
})
A draft document can be deleted by specifying a discard action type:
client
.action(
{
actionType: 'sanity.action.document.discard',
draftId: 'drafts.bike-123',
},
actionOptions,
)
.then(() => {
console.log('Bike draft deleted')
})
.catch((err) => {
console.error('Discard failed: ', err.message)
})
A patch can be applied to an existing document draft or create a new one by specifying an edit action type:
client
.action(
{
actionType: 'sanity.action.document.edit',
publishedId: 'bike-123',
attributes: {name: 'Sanity Tandem Extraordinaire', _type: 'bike', seats: 2},
},
actionOptions,
)
.then(() => {
console.log('Bike draft edited')
})
.catch((err) => {
console.error('Edit draft failed: ', err.message)
})
A draft document can be published by specifying a publish action type, optionally with revision ID checks:
client
.action(
{
actionType: 'sanity.action.document.publish',
draftId: 'drafts.bike-123',
ifDraftRevisionId: '<previously-known-revision>',
publishedId: 'bike-123',
ifPublishedRevisionId: '<previously-known-revision>',
},
actionOptions,
)
.then(() => {
console.log('Bike draft published')
})
.catch((err) => {
console.error('Publish draft failed: ', err.message)
})
An existing document draft can be deleted and replaced by a new one by specifying a replaceDraft action type:
client
.action(
{
actionType: 'sanity.action.document.replaceDraft',
publishedId: 'bike-123',
attributes: {name: 'Sanity Tandem Extraordinaire', _type: 'bike', seats: 1},
},
actionOptions,
)
.then(() => {
console.log('Bike draft replaced')
})
.catch((err) => {
console.error('Replace draft failed: ', err.message)
})
A published document can be retracted by specifying an unpublish action type:
client
.action(
{
actionType: 'sanity.action.document.unpublish',
draftId: 'drafts.bike-123',
publishedId: 'bike-123',
},
actionOptions,
)
.then(() => {
console.log('Bike draft unpublished')
})
.catch((err) => {
console.error('Unpublish draft failed: ', err.message)
})
Assets can be uploaded using the client.assets.upload(...) method.
client.assets.upload(type: 'file' | 'image', body: File | Blob | Buffer | NodeJS.ReadableStream, options = {}): Promise<AssetDocument>
👉 Read more about assets in Sanity
// Upload a file from the file system
client.assets
.upload('file', fs.createReadStream('myFile.txt'), {filename: 'myFile.txt'})
.then((document) => {
console.log('The file was uploaded!', document)
})
.catch((error) => {
console.error('Upload failed:', error.message)
})
// Upload an image file from the file system
client.assets
.upload('image', fs.createReadStream('myImage.jpg'), {filename: 'myImage.jpg'})
.then((document) => {
console.log('The image was uploaded!', document)
})
.catch((error) => {
console.error('Upload failed:', error.message)
})
// Create a file with "foo" as its content
const file = new File(['foo'], 'foo.txt', {type: 'text/plain'})
// Upload it
client.assets
.upload('file', file)
.then((document) => {
console.log('The file was uploaded!', document)
})
.catch((error) => {
console.error('Upload failed:', error.message)
})
// Draw something on a canvas and upload as image
const canvas = document.getElementById('someCanvas')
const ctx = canvas.getContext('2d')
ctx.fillStyle = '#f85040'
ctx.fillRect(0, 0, 50, 50)
ctx.fillStyle = '#fff'
ctx.font = '10px monospace'
ctx.fillText('Sanity', 8, 30)
canvas.toBlob(uploadImageBlob, 'image/png')
function uploadImageBlob(blob) {
client.assets
.upload('image', blob, {contentType: 'image/png', filename: 'someText.png'})
.then((document) => {
console.log('The image was uploaded!', document)
})
.catch((error) => {
console.error('Upload failed:', error.message)
})
}
// Extract palette of colors as well as GPS location from exif
client.assets
.upload('image', someFile, {extract: ['palette', 'location']})
.then((document) => {
console.log('The file was uploaded!', document)
})
.catch((error) => {
console.error('Upload failed:', error.message)
})
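Once an upload resolves, you typically reference the returned asset document from another document. A minimal sketch, assuming a hypothetical post document type with an image field named mainImage, and myImageFile as a placeholder for a File/Blob/stream:

client.assets
  .upload('image', myImageFile, {filename: 'myImage.jpg'})
  .then((imageAsset) =>
    client.create({
      _type: 'post',
      title: 'A post with an image',
      mainImage: {
        _type: 'image',
        asset: {_type: 'reference', _ref: imageAsset._id},
      },
    }),
  )
  .then((post) => {
    console.log('Created post referencing the uploaded image:', post._id)
  })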
Deleting an asset document will also trigger deletion of the actual asset.
client.delete(assetDocumentId: string): Promise
client.delete('image-abc123_someAssetId-500x500-png').then((result) => {
console.log('deleted imageAsset', result)
})
The following options are available for mutations, and can be applied either as the second argument to create(), createOrReplace(), createIfNotExists(), delete() and mutate() - or as an argument to the commit() method on patches and transactions.
- visibility ('sync'|'async'|'deferred') - default 'sync'
  - sync: request will not return until the requested changes are visible to subsequent queries.
  - async: request will return immediately when the changes have been committed, but it might still be a second or more until changes are reflected in a query. Unless you are immediately re-querying for something that includes the mutated data, this is the preferred choice.
  - deferred: fastest way to write - bypasses real-time indexing completely, and should be used in cases where you are bulk importing/mutating a large number of documents and don't need to see that data in a query for tens of seconds.
- dryRun (true|false) - default false. If true, the mutation will be a dry run - the response will be identical to the one returned had this property been omitted or false (including error responses) but no documents will be affected.
- autoGenerateArrayKeys (true|false) - default false. If true, the mutation API will automatically add _key attributes to objects in arrays that are missing them. This makes array operations more robust by having a unique key within the array available for selections, which helps prevent race conditions in real-time, collaborative editing.
Requests can be aborted (or cancelled) in two ways:
Sanity Client supports the AbortController API and can receive an abort signal that is used to cancel the request. Here's an example that will abort the request if it takes more than 200ms to complete:
const abortController = new AbortController()
// note the lack of await here
const request = client.fetch('*[_type == "movie"]', {}, {signal: abortController.signal})
// this will abort the request after 200ms
setTimeout(() => abortController.abort(), 200)
try {
const response = await request
//…
} catch (error) {
if (error.name === 'AbortError') {
console.log('Request was aborted')
} else {
// rethrow in case of other errors
throw error
}
}
When using the Observable API (e.g. client.observable.fetch()), you can cancel the request by simply unsubscribing from the returned observable:
const subscription = client.observable.fetch('*[_type == "movie"]').subscribe((result) => {
/* do something with the result */
})
// this will cancel the request
subscription.unsubscribe()
const config = client.config()
console.log(config.dataset)
client.config()
Get client configuration.
client.config({dataset: 'newDataset'})
client.config(options)
Set client configuration. Required options are projectId and dataset.
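If you need a second client with slightly different settings, client.withConfig returns a new client based on the current configuration without mutating the original; this is the same helper used in the stega example above. A short sketch (the staging dataset name is a placeholder):

const previewClient = client.withConfig({
  dataset: 'staging',
  useCdn: false,
})

console.log(previewClient.config().dataset) // 'staging'
console.log(client.config().dataset) // unchanged on the original client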
MIT © Sanity.io
v5
useCdn is changed to true
It was previously false. If you were relying on the default being false, you can continue using the live API by setting it in the constructor:
import {createClient} from '@sanity/client'
export const client = createClient({
projectId: 'your-project-id',
dataset: 'your-dataset-name',
apiVersion: '2023-03-12',
useCdn: false, // set to `true` to use the edge cache
})
v4
ES5 is no longer supported
The target is changed to modern browsers that support ES6 class, {...rest} syntax and more. You may need to update your bundler to a recent major version. Or you could configure your bundler to transpile @sanity/client, and get-it, which is the engine that powers @sanity/client and uses the same output target.
Node.js v12 is no longer supported
Upgrade to the LTS release, or one of the Maintenance releases.
The default export is replaced with the named export createClient
Before:
import createClient from '@sanity/client'
const client = createClient()
import SanityClient from '@sanity/client'
const client = new SanityClient()
After:
import {createClient} from '@sanity/client'
const client = createClient()
client.assets.delete is removed
Before:
client.assets.delete('image', 'abc123_foobar-123x123-png')
client.assets.delete('image', 'image-abc123_foobar-123x123-png')
client.assets.delete({_id: 'image-abc123_foobar-123x123-png'})
After:
client.delete('image-abc123_foobar-123x123-png')
client.assets.getImageUrl is removed, replace with @sanity/image-url
Before:
import createClient from '@sanity/client'
const client = createClient({projectId: 'abc123', dataset: 'foo'})
client.assets.getImageUrl('image-abc123_foobar-123x123-png')
client.assets.getImageUrl('image-abc123_foobar-123x123-png', {auto: 'format'})
client.assets.getImageUrl({_ref: 'image-abc123_foobar-123x123-png'})
client.assets.getImageUrl({_ref: 'image-abc123_foobar-123x123-png'}, {auto: 'format'})
After:
npm install @sanity/image-url
import imageUrlBuilder from '@sanity/image-url'
const builder = imageUrlBuilder({projectId: 'abc123', dataset: 'foo'})
const urlFor = (source) => builder.image(source)
urlFor('image-abc123_foobar-123x123-png').url()
urlFor('image-abc123_foobar-123x123-png').auto('format').url()
urlFor({_ref: 'image-abc123_foobar-123x123-png'}).url()
urlFor({_ref: 'image-abc123_foobar-123x123-png'}).auto('format').url()
SanityClient static properties moved to named exports
Before:
import SanityClient from '@sanity/client'
const {Patch, Transaction, ClientError, ServerError, requester} = SanityClient
After:
import {Patch, Transaction, ClientError, ServerError, requester} from '@sanity/client'
client.clientConfig is removed, replace with client.config()
Before:
import createClient from '@sanity/client'
const client = createClient()
console.log(client.clientConfig.projectId)
After:
import {createClient} from '@sanity/client'
const client = createClient()
console.log(client.config().projectId)
client.isPromiseAPI() is removed, replace with an instanceof check
Before:
import createClient from '@sanity/client'
const client = createClient()
console.log(client.isPromiseAPI())
console.log(client.clientConfig.isPromiseAPI)
console.log(client.config().isPromiseAPI)
After:
import {createClient, SanityClient} from '@sanity/client'
const client = createClient()
console.log(client instanceof SanityClient)
client.observable.isObservableAPI() is removed, replace with an instanceof check
Before:
import createClient from '@sanity/client'
const client = createClient()
console.log(client.observable.isObservableAPI())
After:
import {createClient, ObservableSanityClient} from '@sanity/client'
const client = createClient()
console.log(client.observable instanceof ObservableSanityClient)
client._requestObservable is removed, replace with client.observable.request
Before:
import createClient from '@sanity/client'
const client = createClient()
client._requestObservable({uri: '/ping'}).subscribe()
After:
import {createClient} from '@sanity/client'
const client = createClient()
client.observable.request({uri: '/ping'}).subscribe()
client._dataRequest is removed, replace with client.dataRequest
Before:
import createClient from '@sanity/client'
const client = createClient()
client._dataRequest(endpoint, body, options)
After:
import {createClient} from '@sanity/client'
const client = createClient()
client.dataRequest(endpoint, body, options)
client._create is removed, replace with one of client.create, client.createIfNotExists or client.createOrReplace
Before:
import createClient from '@sanity/client'
const client = createClient()
client._create(doc, 'create', options)
client._create(doc, 'createIfNotExists', options)
client._create(doc, 'createOrReplace', options)
After:
import {createClient} from '@sanity/client'
const client = createClient()
client.create(doc, options)
client.createIfNotExists(doc, options)
client.createOrReplace(doc, options)
client.patch.replace is removed, replace with client.createOrReplace
Before:
import createClient from '@sanity/client'
const client = createClient()
client.patch('tropic-hab').replace({name: 'Tropical Habanero', ingredients: []}).commit()
After:
import {createClient} from '@sanity/client'
const client = createClient()
client.createOrReplace({
_id: 'tropic-hab',
_type: 'hotsauce',
name: 'Tropical Habanero',
ingredients: [],
})
client.auth is removed, replace with client.request
Before:
import createClient from '@sanity/client'
const client = createClient()
/**
* Fetch available login providers
*/
const loginProviders = await client.auth.getLoginProviders()
/**
* Revoke the configured session/token
*/
await client.auth.logout()
After:
import {createClient, type AuthProviderResponse} from '@sanity/client'
const client = createClient()
/**
* Fetch available login providers
*/
const loginProviders = await client.request<AuthProviderResponse>({uri: '/auth/providers'})
/**
* Revoke the configured session/token
*/
await client.request<void>({uri: '/auth/logout', method: 'POST'})