azure-blob-storage
This library wraps an Azure Blob Storage container which stores objects in JSON format.
DataContainer is a wrapper over an Azure Blob Storage container which stores only objects in JSON format. Every stored object is validated against the schema provided when the container is created.
Create a DataContainer with an options object, described below, then call its async `init` method before doing anything else.
```js
let {DataContainer} = require('azure-blob-storage');
let container = new DataContainer({
  containerName: 'AzureContainerName', // Azure container name
  credentials: ...,                    // see below
  schema: '...',                       // JSON schema object
  schemaVersion: 1,                    // JSON schema version (optional; defaults to 1)

  // Max number of update blob request retries
  updateRetries: 10,
  // Multiplier for computation of retry delay: 2 ^ retry * delayFactor
  updateDelayFactor: 100,
  // Randomization factor added as:
  // delay = delay * random([1 - randomizationFactor; 1 + randomizationFactor])
  updateRandomizationFactor: 0.25,
  // Maximum retry delay in ms (defaults to 30 seconds)
  updateMaxDelay: 30 * 1000,
});
await container.init();
```
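As a point of reference, here is a small sketch (not part of the library's API) of how the update retry parameters above combine into an actual delay, following the formulas in the comments:

```js
// Sketch only: how the documented retry parameters combine into a delay.
// These constants mirror the options above; the function is illustrative,
// not a library export.
const updateDelayFactor = 100;          // ms
const updateRandomizationFactor = 0.25;
const updateMaxDelay = 30 * 1000;       // ms

function retryDelay(retry) {
  // 2 ^ retry * delayFactor
  let delay = Math.pow(2, retry) * updateDelayFactor;
  // delay = delay * random([1 - randomizationFactor; 1 + randomizationFactor])
  delay *= 1 + (2 * Math.random() - 1) * updateRandomizationFactor;
  // capped at updateMaxDelay
  return Math.min(delay, updateMaxDelay);
}

// e.g. retryDelay(3) is roughly 800 ms, give or take 25%
```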
Credentials can be specified to this library in a variety of ways. Note that these match those of the fast-azure-storage library.
Given an accountId and accompanying accessKey, configure access like this:
```js
{
  // Azure connection details
  containerName: 'AzureContainerName',
  // Azure credentials
  credentials: {
    accountId: '...',
    accessKey: '...',
  },
}
```
The underlying fast-azure-storage library allows use of SAS credentials, including dynamic generation of SAS credentials as needed. That support can be used transparently from this library:
```js
{
  containerName: 'AzureContainerName',
  credentials: {
    accountId: '...',
    sas: sas, // sas in querystring form: "se=...&sp=...&sig=..."
  },
}
```
or
```js
{
  containerName: 'AzureContainerName',
  credentials: {
    accountId: '...',
    sas: function() {
      return new Promise(/* fetch SAS from somewhere */);
    },
    minSASAuthExpiry: 15 * 60 * 1000, // time before refreshing the SAS
  },
}
```
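For illustration, a dynamic `sas` function might fetch fresh credentials from a service you run. The endpoint URL and response shape below are hypothetical, not part of this library:

```js
// Hypothetical example: the endpoint and response shape are illustrative.
// Uses the global fetch available in Node 18+.
let credentials = {
  accountId: '...',
  sas: async () => {
    let res = await fetch('https://example.com/api/azure-sas'); // hypothetical service
    let body = await res.json();
    return body.sas; // SAS in querystring form: "se=...&sp=...&sig=..."
  },
  minSASAuthExpiry: 15 * 60 * 1000, // time before refreshing the SAS
};
```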
Once constructed, the container must be initialized:

```js
let container = new DataContainer({ /* ... */ });
await container.init();
```

The `ensureContainer` method ensures that the underlying Azure container exists. It is called by `init`, so there is never any need to call this method.

```js
await container.ensureContainer();
```

The `removeContainer` method deletes the underlying Azure container and all blobs in it.

```js
await container.removeContainer();
```
The `listBlobs` method lists the blobs in the container, optionally filtered by prefix.

```js
let blobs = await container.listBlobs({
  prefix: 'state',
  maxResults: 1000,
});
```
The `scanDataBlockBlob` method scans the data block blobs in the container, calling the async `handler` for each blob matching the given options.

```js
let handler = async (blob) => {
  await blob.modify((content) => {
    content.version += 1;
  });
};
let options = {
  prefix: 'state',
};
await container.scanDataBlockBlob(handler, options);
```
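For a read-only pass over the same blobs, a handler can simply load each one; a minimal sketch, assuming the handler receives DataBlockBlob instances as above:

```js
// Read-only scan: collect the content of every matching blob.
let contents = [];
let collect = async (blob) => {
  contents.push(await blob.load());
};
await container.scanDataBlockBlob(collect, {prefix: 'state'});
```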
The `createDataBlockBlob` method is equivalent to creating a new DataBlockBlob instance with the given options (see below), then calling its `create` method. This will unconditionally overwrite any existing blob with the same name.
```js
let options = {
  name: 'state-blob',
  cacheContent: true,
};
let content = {
  value: 30,
};
let dataBlob = await container.createDataBlockBlob(options, content);
```
The `createAppendDataBlob` method is equivalent to creating a new AppendDataBlob instance with the given options (see below), then calling its `create` and (if `content` is provided) `append` methods.
```js
let options = {
  name: 'auth-log',
};
let content = {
  user: 'test',
};
let appendBlob = await container.createAppendDataBlob(options, content);
```
The `load` method loads the blob with the given name. Set `ignoreIfNotExists` to true to ignore the error that is otherwise thrown when the blob does not exist.

```js
let blob = await container.load('state-blob', false);
```
The `remove` method deletes the blob with the given name. It returns true if the blob was deleted; it makes sense to read the return value only if `ignoreIfNotExists` is set.

```js
await container.remove('state-blob', true);
```
Each blob has an associated schema version, and all schema versions are stored in the blob storage alongside the blobs containing user data. The version declared to the constructor defines the "current" version, but blobs may exist that use older versions.
When a blob is loaded, it is validated against the schema with which it was stored.
When a blob is written (via `create`), it is validated against the current schema. However, note that an existing object cannot be modified to a more recent schema version. This is a bug.
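For illustration, a minimal JSON schema matching the `{value: ...}` blobs used in the examples in this README might look like the sketch below; the exact schema is up to your application:

```js
// Illustrative schema only; pass an object like this as the `schema` option.
let schema = {
  $schema: 'http://json-schema.org/draft-06/schema#',
  title: 'state blob',
  type: 'object',
  properties: {
    value: {type: 'integer'},
  },
  additionalProperties: false,
  required: ['value'],
};
```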
DataBlockBlob is a wrapper over an Azure block blob which stores JSON data conforming to the schema defined at the container level.
AppendDataBlob is a wrapper over an Azure append blob. This type is optimized for fast append operations: all writes happen at the end of the blob, and updating or deleting existing content is not supported. This type of blob is useful for, e.g., logging or auditing.
The constructor of the blob takes the following options:
```js
let {DataBlockBlob, AppendDataBlob} = require('azure-blob-storage');
{
  name: '...',               // The name of the blob (required)
  container: '...',          // An instance of DataContainer (required)
  contentEncoding: '...',    // The content encoding of the blob
  contentLanguage: '...',    // The content language of the blob
  cacheControl: '...',       // The cache control of the blob
  contentDisposition: '...', // The content disposition of the blob
  cacheContent: true|false,  // Set true to keep a reference to the blob content
                             // (default: false)
}
```
The `cacheContent` option can be set to true only for DataBlockBlob, because AppendDataBlob does not support caching of its content.
Note that the `createDataBlockBlob` and `createAppendDataBlob` methods of DataContainer provide shortcuts for calling these constructors.
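Putting these options together, constructing and writing a blob directly looks like this sketch, which should be equivalent to the `createDataBlockBlob` shortcut shown earlier:

```js
let {DataBlockBlob} = require('azure-blob-storage');

// Equivalent to container.createDataBlockBlob({name, cacheContent}, content)
let blob = new DataBlockBlob({
  name: 'state-blob',
  container: container, // an initialized DataContainer
  cacheContent: true,
});
await blob.create({value: 30});
```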
The `create` method of DataBlockBlob writes the blob content, after validating it against the schema. The `options`, if given, are passed to `putBlob`.

```js
let content = {
  value: 40,
};
let options = {
  ifMatch: 'abcd',
};
content = await dataBlob.create(content, options);
```
To conditionally create a blob, use `ifNoneMatch: '*'` and catch the `BlobAlreadyExists` error:
```js
try {
  await dataBlob.create(content, {ifNoneMatch: '*'});
} catch (e) {
  if (e.code !== 'BlobAlreadyExists') {
    throw e;
  }
  console.log('blob already exists, not overwriting.');
}
```
The `load` method returns the content of the blob; the cached copy is returned if `cacheContent` was set.

```js
let content = await dataBlob.load();
```
The `modify` method applies a change to the blob content. The `modifier` is a function that will be called with a clone of the blob content as its first argument, and it should apply the desired changes to that object. The `options`, if given, are passed to `putBlob`, with `type` and `ifMatch` used to achieve atomicity.

```js
let modifier = (data) => {
  data.value = 'new value';
};
let options = {
  ifUnmodifiedSince: new Date(2017, 1, 1),
};
await dataBlob.modify(modifier, options);
```
This method uses ETags to ensure that modifications are atomic: if some other process writes to the blob while `modifier` is executing, `modify` will automatically fetch the updated blob and call `modifier` again, retrying several times.

Note that the `modifier` function must be synchronous.
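Conceptually, the retry loop follows the standard optimistic-concurrency pattern sketched below. This is an illustration of the idea, not the library's actual implementation; `read` and `write` stand in for the underlying Azure calls:

```js
// Sketch of optimistic concurrency with ETags (illustrative only).
// `read` resolves to {content, eTag}; `write(content, {ifMatch})` rejects
// with code 'ConditionNotMet' if the blob changed since it was read.
async function modifyWithRetries(read, write, modifier, retries = 10) {
  for (let attempt = 0; ; attempt++) {
    let {content, eTag} = await read();
    let clone = JSON.parse(JSON.stringify(content)); // modifier gets a clone
    modifier(clone); // must be synchronous
    try {
      return await write(clone, {ifMatch: eTag});
    } catch (e) {
      if (e.code !== 'ConditionNotMet' || attempt >= retries) {
        throw e;
      }
      // another writer won the race: back off, re-fetch, and try again
      await new Promise((resolve) => setTimeout(resolve, Math.pow(2, attempt) * 100));
    }
  }
}
```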
For AppendDataBlob, the `create` method creates the (empty) append blob. The `options`, if given, are passed to `putBlob`.

```js
await logBlob.create();
```

The `append` method appends JSON content to the end of the blob.

```js
let content = {
  user: 'test2',
};
await logBlob.append(content);
```

The `load` method returns the full content of the blob.

```js
let content = await logBlob.load();
```
To test this library, set the environment variables `AZURE_ACCOUNT_KEY` and `AZURE_ACCOUNT` appropriately before running the tests.