Azurite is an open-source Azure Storage API compatible server (emulator). It provides a local environment for testing and development of applications that use Azure Storage services, including Blob, Queue, and Table storage.
Blob Storage
This code snippet demonstrates how to start the Azurite Blob service on port 10000. Blob storage is used for storing large amounts of unstructured data such as text or binary data.
const azurite = require('azurite');
azurite().blob().listen(10000, () => {
console.log('Azurite Blob service is running on port 10000');
});
Queue Storage
This code snippet demonstrates how to start the Azurite Queue service on port 10001. Queue storage is used for storing large numbers of messages that can be accessed from anywhere via authenticated calls.
const azurite = require('azurite');
azurite().queue().listen(10001, () => {
console.log('Azurite Queue service is running on port 10001');
});
Table Storage
This code snippet demonstrates how to start the Azurite Table service on port 10002. Table storage is used for storing structured NoSQL data.
const azurite = require('azurite');
azurite().table().listen(10002, () => {
console.log('Azurite Table service is running on port 10002');
});
LocalStack is a fully functional local AWS cloud stack. It provides a local testing environment for AWS services, including S3, DynamoDB, and SQS. Unlike Azurite, which focuses on Azure Storage services, LocalStack emulates a wide range of AWS services.
MinIO is a high-performance, S3-compatible object storage system. It is designed to be used for large-scale data infrastructure. While Azurite emulates Azure Blob storage, MinIO provides an alternative for S3-compatible object storage.
Fake S3 is a lightweight server that emulates the S3 API. It is useful for testing S3 interactions locally. Unlike Azurite, which emulates Azure Storage services, Fake S3 focuses solely on the S3 API.
Note: The latest Azurite V3 code, which supports Blob, Queue, and Table (preview) is in the main branch. The legacy Azurite V2 code is in the legacy-master branch.
Version | Azure Storage API Version | Service Support | Description | Reference Links |
---|---|---|---|---|
3.33.0 | 2025-01-05 | Blob, Queue and Table (preview) | Azurite V3 based on TypeScript & New Architecture | NPM - Docker - Visual Studio Code Extension |
Legacy (v2) | 2016-05-31 | Blob, Queue and Table | Legacy Azurite V2 | NPM |
Azurite is an open source Azure Storage API compatible server (emulator). Based on Node.js, Azurite provides cross platform experiences for customers wanting to try Azure Storage easily in a local environment. Azurite simulates most of the commands supported by Azure Storage with minimal dependencies.
Azurite V2 is hand-written in pure JavaScript and is popular and active as an open source project. However, the Azure Storage APIs keep growing and being updated, and manually keeping Azurite up to date is inefficient and prone to bugs. JavaScript also lacks strong type validation, which hinders collaboration.
Compared to V2, Azurite V3 implements a new architecture leveraging code generated by a TypeScript Server Code Generator we created. The generator uses the same swagger (modified) used by the new Azure Storage SDKs. This reduces manual effort and facilitates better code alignment with storage APIs.
3.0.0-preview is the first release version using Azurite's new architecture.
Try any of the following ways to start an Azurite V3 instance.
After cloning the source code, execute the following commands to install and start Azurite V3.
npm ci
npm run build
npm install -g
azurite
In order to run Azurite V3 you need Node.js installed on your system. Azurite works cross-platform on Windows, Linux, and macOS, and is compatible with the currently supported Node.js LTS versions.
After installing Node.js, you can install Azurite with npm, the package management tool included with every Node.js installation.
npm install -g azurite
Simply start it with the following command:
azurite -s -l c:\azurite -d c:\azurite\debug.log
or,
azurite --silent --location c:\azurite --debug c:\azurite\debug.log
This tells Azurite to store all data in the directory c:\azurite. If the -l option is omitted, the current working directory is used. You can also selectively start individual storage services.
For example, to start blob service only:
azurite-blob -l path/to/azurite/workspace
Start queue service only:
azurite-queue -l path/to/azurite/workspace
Start table service only:
azurite-table -l path/to/azurite/workspace
Azurite V3 can be installed from the Visual Studio Code extension marketplace.
You can quickly start or close Azurite by clicking the Azurite status bar item or using the following commands.
The extension supports the following Visual Studio Code commands:
- Azurite: Start - Start all Azurite services
- Azurite: Close - Close all Azurite services
- Azurite: Clean - Reset all Azurite services persistency data
- Azurite: Start Blob Service - Start blob service
- Azurite: Close Blob Service - Close blob service
- Azurite: Clean Blob Service - Clean blob service
- Azurite: Start Queue Service - Start queue service
- Azurite: Close Queue Service - Close queue service
- Azurite: Clean Queue Service - Clean queue service
- Azurite: Start Table Service - Start table service
- Azurite: Close Table Service - Close table service
- Azurite: Clean Table Service - Clean table service

The following extension configurations are supported:
- azurite.blobHost - Blob service listening endpoint, by default 127.0.0.1
- azurite.blobPort - Blob service listening port, by default 10000
- azurite.queueHost - Queue service listening endpoint, by default 127.0.0.1
- azurite.queuePort - Queue service listening port, by default 10001
- azurite.tableHost - Table service listening endpoint, by default 127.0.0.1
- azurite.tablePort - Table service listening port, by default 10002
- azurite.location - Workspace location folder path (can be relative or absolute). By default, in the VS Code extension, the currently opened folder is used. If launched from the command line, the current process working directory is the default. Relative paths are resolved relative to the default folder.
- azurite.silent - Silent mode to disable the access log in the Visual Studio Code channel, by default false
- azurite.debug - Output the debug log into the Azurite channel, by default false
- azurite.loose - Enable loose mode, which ignores unsupported headers and parameters, by default false
- azurite.cert - Path to a PEM or PFX cert file. Required by HTTPS mode.
- azurite.key - Path to a PEM key file. Required when azurite.cert points to a PEM file.
- azurite.pwd - PFX cert password. Required when azurite.cert points to a PFX file.
- azurite.oauth - OAuth authentication level. Candidate level values: basic.
- azurite.skipApiVersionCheck - Skip the request API version check, by default false.
- azurite.disableProductStyleUrl - Force parsing the storage account name from the request URI path, instead of from the request URI host.
- azurite.inMemoryPersistence - Disable persisting any data to disk. If the Azurite process is terminated, all data is lost.
- azurite.extentMemoryLimit - When using in-memory persistence, limit the total size of extents (blob and queue content) to a specific number of megabytes. This does not limit blob, queue, or table metadata. Defaults to 50% of total memory.

Note. Find more docker image tags at https://mcr.microsoft.com/v2/azure-storage/azurite/tags/list
docker run -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite
- -p 10000:10000 will expose the blob service's default listening port.
- -p 10001:10001 will expose the queue service's default listening port.
- -p 10002:10002 will expose the table service's default listening port.
Or just run blob service:
docker run -p 10000:10000 mcr.microsoft.com/azure-storage/azurite azurite-blob --blobHost 0.0.0.0
docker run -p 10000:10000 -p 10001:10001 -v c:/azurite:/data mcr.microsoft.com/azure-storage/azurite
-v c:/azurite:/data will use and map the host path c:/azurite as Azurite's workspace location.
docker run -p 7777:7777 -p 8888:8888 -p 9999:9999 -v c:/azurite:/workspace mcr.microsoft.com/azure-storage/azurite azurite -l /workspace -d /workspace/debug.log --blobPort 7777 --blobHost 0.0.0.0 --queuePort 8888 --queueHost 0.0.0.0 --tablePort 9999 --tableHost 0.0.0.0 --loose --skipApiVersionCheck --disableProductStyleUrl
The above command starts the Azurite image with the following configuration:
- -l //workspace defines the folder /workspace as Azurite's location path inside the docker instance, while /workspace is mapped to c:/azurite in the host environment by -v c:/azurite:/workspace
- -d //workspace/debug.log enables the debug log at /workspace/debug.log inside the docker instance. debug.log will also be mapped to c:/azurite/debug.log on the host machine because of the docker volume mapping.
- --blobPort 7777 makes the Azurite blob service listen on port 7777, while -p 7777:7777 redirects requests from the host machine's port 7777 to the docker instance.
- --blobHost 0.0.0.0 defines the blob service listening endpoint to accept requests from the host machine.
- --queuePort 8888 makes the Azurite queue service listen on port 8888, while -p 8888:8888 redirects requests from the host machine's port 8888 to the docker instance.
- --queueHost 0.0.0.0 defines the queue service listening endpoint to accept requests from the host machine.
- --tablePort 9999 makes the Azurite table service listen on port 9999, while -p 9999:9999 redirects requests from the host machine's port 9999 to the docker instance.
- --tableHost 0.0.0.0 defines the table service listening endpoint to accept requests from the host machine.
- --loose enables loose mode, which ignores unsupported headers and parameters.
- --skipApiVersionCheck skips the request API version check.
- --disableProductStyleUrl forces parsing the storage account name from the request URI path, instead of from the request URI host.
If you use customized azurite parameters for the docker image, --blobHost 0.0.0.0 and --queueHost 0.0.0.0 are required parameters.
In the above sample, you need to use a double leading forward slash for the location and debug path parameters to avoid a known issue with Git on Windows.
More release channels for Azurite V3 will be supported in the future.
To run Azurite in Docker Compose, you can start with the following configuration:
---
version: "3.9"
services:
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite
    container_name: "azurite"
    hostname: azurite
    restart: always
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
Releasing Azurite V3 to NuGet is under investigation.
Integrating Azurite with Visual Studio is under investigation.
Optional. By default, Azurite V3 listens on 127.0.0.1 as a local server. You can customize the listening address per your requirements.
--blobHost 127.0.0.1
--queueHost 127.0.0.1
--tableHost 127.0.0.1
--blobHost 0.0.0.0
--queueHost 0.0.0.0
--tableHost 0.0.0.0
Optional. By default, Azurite V3 listens on port 10000 for the blob service, port 10001 for the queue service, and port 10002 for the table service. You can customize the listening ports per your requirements.
Warning: After customizing a port, you need to update the connection string or configuration correspondingly in your storage tools or SDKs. If Azurite fails to start with an error such as Error: listen EACCES 0.0.0.0:10000, the TCP port is most likely already occupied by another process.
--blobPort 8888
--queuePort 9999
--tablePort 11111
--blobPort 0
--queuePort 0
--tablePort 0
Note: The port in use is displayed on Azurite startup.
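For example, if the blob service was started with --blobPort 8888, the blob endpoint in the connection string has to change accordingly (a sketch assuming the default devstoreaccount1 account and host):
DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:8888/devstoreaccount1;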
Optional. Azurite V3 needs to persist metadata and binary data to local disk during execution.
You can provide a customized path as the workspace location; by default, the current process working directory is used.
-l c:\azurite
--location c:\azurite
Optional. By default Azurite displays an access log in the console. Disable it with:
-s
--silent
Optional. Debug log includes detailed information on every request and exception stack traces. Enable it by providing a valid local file path for the debug log destination.
-d path/debug.log
--debug path/debug.log
Optional. By default Azurite will apply strict mode. Strict mode will block unsupported request headers or parameters. Disable it by enabling loose mode:
-L
--loose
Optional. By default Azurite will listen on HTTP protocol. Provide a PEM or PFX certificate file path to enable HTTPS mode:
--cert path/server.pem
When --cert is provided for a PEM file, you must also provide the corresponding --key:
--key path/key.pem
When --cert is provided for a PFX file, you must also provide the corresponding --pwd:
--pwd pfxpassword
Optional. By default, Azurite does not support OAuth or bearer tokens. Enable OAuth authentication for Azurite with:
--oauth basic
Note. OAuth requires an HTTPS endpoint. Make sure HTTPS is enabled by providing the --cert parameter along with the --oauth parameter.
Currently, Azurite supports the following OAuth authentication levels:
At the basic level (--oauth basic), Azurite performs basic authentication: it validates the incoming bearer token and checks the issuer, audience, and expiry. However, Azurite does NOT check the token signature or permissions.
Optional. By default Azurite checks that the request API version is a valid API version. Skip the API version check with:
--skipApiVersionCheck
Optional. When an FQDN is used instead of an IP in the request URI host, by default Azurite parses the storage account name from the request URI host. Force parsing the storage account name from the request URI path with:
--disableProductStyleUrl
Optional. Disable persisting any data to disk and only store data in memory. If the Azurite process is terminated, all data is lost. By default, LokiJS persists blob and queue metadata to disk and content to extent files. Table storage persists all data to disk. This behavior can be disabled using this option. This setting is rejected when the SQL based metadata implementation is enabled (via AZURITE_DB). This setting is also rejected when the --location option is specified.
--inMemoryPersistence
By default, the in-memory extent store (for blob and queue content) is limited to 50% of the total memory on the host machine, evaluated using os.totalmem(). This limit can be overridden using the --extentMemoryLimit <megabytes> option. There is no restriction on the value specified for this option, but virtual memory may be used if the limit exceeds the amount of available physical memory as provided by the operating system. A high limit may eventually lead to out-of-memory errors or reduced performance.
As blob or queue content (i.e. bytes in the in-memory extent store) is deleted, the memory is not freed immediately. Similar to the default file-system based extent store, both the blob and queue service have an extent garbage collection (GC) process. This process is in addition to the standard Node.js runtime GC. The extent GC periodically detects unused extents and deletes them from the extent store. This happens on a regular time period rather than immediately after the blob or queue REST API operation that caused some content to be deleted. This means that process memory consumed by the deleted blob or queue content will only be released after both the extent GC and the runtime GC have run. The extent GC will remove the reference to the in-memory byte storage and the runtime GC will free the unreferenced memory some time after that. The blob extent GC runs every 10 minutes and the queue extent GC runs every 1 minute.
The queue and blob extent storage count towards the same limit. The --extentMemoryLimit setting is rejected when --inMemoryPersistence is not specified. LokiJS storage (blob and queue metadata and table data) does not contribute to this limit and is unbounded, which is the same as without the --inMemoryPersistence option.
--extentMemoryLimit <megabytes>
This option is rejected when --inMemoryPersistence is not specified.
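For example, a minimal invocation that keeps everything in memory and caps the extent store at roughly 256 MB (flag names as documented above; the value is just an illustration):
azurite --inMemoryPersistence --extentMemoryLimit 256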
When the limit is reached, write operations to the blob or queue endpoints which carry content will fail with an HTTP 409 status code, a custom storage error code of MemoryExtentStoreAtSizeLimit, and a helpful error message. Well-behaved storage SDKs and tools will not retry on this failure and will return a related error message. If this error is met, consider deleting some in-memory content (blobs or queues), raising the limit, or restarting the Azurite server, thus resetting the storage completely.
Note that if many hundreds of megabytes of content (queue message or blob content) are stored in-memory, it can take noticeably longer than usual for the process to terminate since all the consumed memory needs to be released.
Azurite V3 supports SharedKey, account shared access signature (SAS), service SAS, OAuth, and public container access authentication. You can use any Azure Storage SDK or tools such as Storage Explorer to connect to Azurite V3 with any of these authentication strategies.
An option to bypass authentication is NOT provided in Azurite V3.
When starting Azurite from the npm command line (azurite) or from the docker image, the following environment variables are supported for advanced customization.
Azurite V3 allows customizing storage account names and keys by providing the environment variable AZURITE_ACCOUNTS with the format account1:key1[:key2];account2:key1[:key2];...
For example, customize one storage account which has only one key:
set AZURITE_ACCOUNTS=account1:key1
Or customize multiple storage accounts, each with two keys:
set AZURITE_ACCOUNTS=account1:key1:key2;account2:key1:key2
Azurite refreshes the customized account names and keys from the environment variable every minute by default. With this feature, you can dynamically rotate account keys or add new storage accounts on the fly without restarting the Azurite instance.
Note. The default storage account devstoreaccount1 will be disabled when customized storage accounts are provided.
Note. The account keys must be base64 encoded strings.
Note. Update your connection string accordingly if you use a customized account name and key.
Note. Use the export keyword to set the environment variable in Linux-like environments, and set on Windows.
Note. When changing the storage account name, keep in mind the same rules as for an Azure Storage account:
- Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.
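Putting these rules together, here is a hedged end-to-end illustration (the account name and key below are invented; the key is simply a base64-encoded string):
set AZURITE_ACCOUNTS=myaccount1:bXlhY2NvdW50MWtleQ==
The matching connection string would then look like:
DefaultEndpointsProtocol=http;AccountName=myaccount1;AccountKey=bXlhY2NvdW50MWtleQ==;BlobEndpoint=http://127.0.0.1:10000/myaccount1;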
By default, Azurite leverages LokiJS as its metadata database. However, as an in-memory database, LokiJS limits Azurite's scalability and data persistence. Set the environment variable AZURITE_DB=dialect://[username][:password][@]host:port/database to make the Azurite blob service switch to SQL-database-based metadata storage, such as MySQL or SQL Server.
For example, connect to MySQL or SQL Server by setting the environment variables:
set AZURITE_DB=mysql://username:password@localhost:3306/azurite_blob
set AZURITE_DB=mssql://username:password@localhost:1024/azurite_blob
When Azurite starts with the above environment variable, it connects to the configured database and creates the tables if they do not exist. This feature is in preview; when Azurite changes the database table schema, you need to drop the existing tables and let Azurite regenerate them.
Note. You need to create the database manually before starting the Azurite instance.
Note. Blob Copy & Page Blob are not supported by SQL based metadata implementation.
Tips. You can create a database instance quickly with docker, for example docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest. Grant external access and create the database azurite_blob using docker exec mysql mysql -u root -pmy-secret-pw -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION; FLUSH PRIVILEGES; create database azurite_blob;". Note that the above commands are examples; you need to carefully define the access permissions in your production environment.
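Similarly, a SQL Server instance for local testing could be created with docker. This is a hedged sketch, not from the Azurite docs: the image tag and SA password are assumed examples, you still need to create the azurite_blob database yourself, and the port in AZURITE_DB must match the one you expose.
docker run --name mssql -p 1433:1433 -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=My-Secret-Pw1" -d mcr.microsoft.com/mssql/server:2022-latest
set AZURITE_DB=mssql://sa:My-Secret-Pw1@localhost:1433/azurite_blob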
Azurite natively supports HTTPS with self-signed certificates via the --cert and --key/--pwd options. You have two certificate type options: PEM or PFX. PEM certificates are split into "cert" and "key" files. A PFX certificate is a single file that can be assigned a password.
You have a few options to generate PEM certificate and key files. We'll show you how to use mkcert and OpenSSL.
mkcert is a utility that makes the entire self-signed certificate process much easier because it wraps a lot of the complex commands that you need to manually execute with other utilities.
Install mkcert (for example with choco install mkcert), but you can install it with any mechanism you'd like. Then run the following commands to install the local root CA and create a certificate for 127.0.0.1:
mkcert -install
mkcert 127.0.0.1
That will create two files: a certificate file, 127.0.0.1.pem, and a key file, 127.0.0.1-key.pem.
Then you start Azurite with that cert and key.
azurite --cert 127.0.0.1.pem --key 127.0.0.1-key.pem
If you start Azurite with docker, you need to map the folder that contains the cert and key files into the container. In the following example, the local folder c:/azurite contains the cert and key files and is mapped to /workspace in the container.
docker run -p 10000:10000 -p 10001:10001 -p 10002:10002 -v c:/azurite:/workspace mcr.microsoft.com/azure-storage/azurite azurite --blobHost 0.0.0.0 --queueHost 0.0.0.0 --tableHost 0.0.0.0 --cert /workspace/127.0.0.1.pem --key /workspace/127.0.0.1-key.pem
OpenSSL is a TLS/SSL toolkit. You can use it to generate certificates. It is more involved than mkcert, but has more options.
On Windows, after installing OpenSSL, point it at its configuration file and add it to your PATH:
set OPENSSL_CONF=c:\OpenSSL-Win32\bin\openssl.cfg
set Path=%PATH%;c:\OpenSSL-Win32\bin
Execute the following command to generate a cert and key with OpenSSL.
openssl req -newkey rsa:2048 -x509 -nodes -keyout key.pem -new -out cert.pem -sha256 -days 365 -addext "subjectAltName=IP:127.0.0.1" -subj "/C=CO/ST=ST/L=LO/O=OR/OU=OU/CN=CN"
The -subj values are required, but do not have to be valid. The subjectAltName must contain the Azurite IP address.
You then need to add that certificate to the Trusted Root Certification Authorities. This is required to work with Azure SDKs and Storage Explorer.
Here's how to do that on Windows:
certutil -addstore -enterprise -f "Root" cert.pem
Then you start Azurite with that cert and key.
azurite --cert cert.pem --key key.pem
NOTE: If you are using the Azure SDKs, then you will also need to pass the --oauth basic option.
You first need to generate a PFX file to use with Azurite.
You can use the following command to generate a PFX file with dotnet dev-certs, which is installed with the .NET Core SDK.
dotnet dev-certs https --trust -ep cert.pfx -p <password>
Storage Explorer does not currently work with certificates produced by dotnet dev-certs. While you can use them for Azurite and the Azure SDKs, you won't be able to access the Azurite endpoints with Storage Explorer if you are using the certs created with dotnet dev-certs. We are tracking this issue on GitHub here: https://github.com/microsoft/AzureStorageExplorer/issues/2859
Then you start Azurite with that cert and password.
azurite --cert cert.pfx --pwd pfxpassword
NOTE: If you are using the Azure SDKs, then you will also need to pass the --oauth basic option.
Azurite V3 provides support for a default storage account as General Storage Account V2 and associated features.
- Account name: devstoreaccount1
- Account key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==
Note. Besides SharedKey authentication, Azurite V3 supports account SAS, service SAS, and OAuth authentication. Anonymous access is also available when a container is set to allow public access.
As mentioned in the section above, Azurite V3 allows customizing storage account names and keys by providing the environment variable AZURITE_ACCOUNTS with the format account1:key1[:key2];account2:key1[:key2];.... Account keys must be base64 encoded strings.
For example, customize one storage account which has only one key:
set AZURITE_ACCOUNTS="account1:key1"
Or customize multiple storage accounts, each with two keys:
set AZURITE_ACCOUNTS="account1:key1:key2;account2:key1:key2"
You can pass the following connection strings to the Azure SDKs or tools (like Azure CLI 2.0 or Storage Explorer).
The full connection string is:
DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;TableEndpoint=http://127.0.0.1:10002/devstoreaccount1;
For the blob service only, the full connection string is:
DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;
Or, if the SDK or tool supports it, the following short connection string:
UseDevelopmentStorage=true;
The full HTTPS connection string is:
DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;TableEndpoint=https://127.0.0.1:10002/devstoreaccount1
To use the Blob service only, the HTTPS connection string is:
DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;
If you used dotnet dev-certs to generate your self-signed certificate, then you need to use the following connection string, because that only generates a cert for localhost, not 127.0.0.1.
DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://localhost:10000/devstoreaccount1;QueueEndpoint=https://localhost:10001/devstoreaccount1;
To use Azurite with the Azure SDKs, you can use OAuth with HTTPS options:
azurite --oauth basic --cert certname.pem --key certname-key.pem
You can then instantiate BlobContainerClient, BlobServiceClient, or BlobClient.
// With container url and DefaultAzureCredential
var client = new BlobContainerClient(new Uri("https://127.0.0.1:10000/devstoreaccount1/container-name"), new DefaultAzureCredential());
// With connection string
var client = new BlobContainerClient("DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;", "container-name");
// With account name and key
var client = new BlobContainerClient(new Uri("https://127.0.0.1:10000/devstoreaccount1/container-name"), new StorageSharedKeyCredential("devstoreaccount1", "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="));
You can also instantiate QueueClient or QueueServiceClient.
// With queue url and DefaultAzureCredential
var client = new QueueClient(new Uri("https://127.0.0.1:10001/devstoreaccount1/queue-name"), new DefaultAzureCredential());
// With connection string
var client = new QueueClient("DefaultEndpointsProtocol=https;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;", "queue-name");
// With account name and key
var client = new QueueClient(new Uri("https://127.0.0.1:10001/devstoreaccount1/queue-name"), new StorageSharedKeyCredential("devstoreaccount1", "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="));
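If you are working from Node.js instead of .NET, here is a minimal sketch using the @azure/storage-blob and @azure/storage-queue SDK packages (not part of Azurite itself). It assumes the default devstoreaccount1 account over plain HTTP, and the container, queue, and blob names are made-up examples.
// npm install @azure/storage-blob @azure/storage-queue
const { BlobServiceClient } = require("@azure/storage-blob");
const { QueueServiceClient } = require("@azure/storage-queue");

// Default Azurite account over HTTP; adjust endpoints if you changed hosts or ports.
const connectionString =
  "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;" +
  "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;" +
  "BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;" +
  "QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;";

async function main() {
  // Create a container and upload a small blob against the local emulator.
  const blobService = BlobServiceClient.fromConnectionString(connectionString);
  const container = blobService.getContainerClient("sample-container");
  await container.createIfNotExists();
  await container.getBlockBlobClient("hello.txt").upload("Hello Azurite", 13);

  // Create a queue and send a message.
  const queueService = QueueServiceClient.fromConnectionString(connectionString);
  const queue = queueService.getQueueClient("sample-queue");
  await queue.createIfNotExists();
  await queue.sendMessage("Hello Azurite");
}

main().catch(console.error);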
Connect to Azurite by clicking the "Add Account" icon, then selecting "Attach to a local emulator" and clicking "Connect".
By default Storage Explorer will not open an HTTPS endpoint that uses a self-signed certificate. If you are running Azurite with HTTPS, then you are likely using a self-signed certificate. Fortunately, Storage Explorer allows you to import SSL certificates via the Edit -> SSL Certificates -> Import Certificates dialog.
- For mkcert, find the root certificate folder by running mkcert -CAROOT. You want to import the RootCA.pem file, not the certificate file you created.
- Certificates created with dotnet dev-certs currently do not work with Storage Explorer. We are tracking this issue on GitHub here: https://github.com/microsoft/AzureStorageExplorer/issues/2859

If you do not set this, then you will get the following error:
unable to verify the first certificate
or
self signed certificate in chain
Follow these steps to add Azurite HTTPS to Storage Explorer:
You can now explore the Azurite HTTPS endpoints with Storage Explorer.
The following files or folders may be created when initializing Azurite in the selected workspace location:
- azurite_db_blob.json - Metadata file used by the Azurite blob service. (Not created when starting Azurite against an external database.)
- azurite_db_blob_extent.json - Extent metadata file used by the Azurite blob service. (Not created when starting Azurite against an external database.)
- blobstorage - Persisted binary data from the Azurite blob service.
- azurite_db_queue.json - Metadata file used by the Azurite queue service. (Not created when starting Azurite against an external database.)
- azurite_db_queue_extent.json - Extent metadata file used by the Azurite queue service. (Not created when starting Azurite against an external database.)
- queuestorage - Persisted binary data from the Azurite queue service.
- azurite_db_table.json - Metadata file used by the Azurite table service.

Note. Delete the above files and folders and restart Azurite to clean it up. This will remove all data stored in Azurite!!
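For example, with the default LokiJS layout listed above, you could reset a workspace from a Linux-like shell as follows (a hedged example; exact file names can vary between Azurite versions, so check your workspace folder first):
rm -rf azurite_db_blob.json azurite_db_blob_extent.json blobstorage azurite_db_queue.json azurite_db_queue_extent.json queuestorage azurite_db_table.json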
Because Azurite runs as a local instance for persistent data storage, there are differences in functionality between Azurite and an Azure storage account in the cloud.
You could enable multiple accounts by setting the environment variable AZURITE_ACCOUNTS; see the section above.
Optionally, you could modify your hosts file to access accounts with a production-style URL; see the section below.
The service endpoints for Azurite are different from those of an Azure storage account. The difference is that Azurite runs on the local computer, and normally no DNS resolves the address to local.
When you address a resource in an Azure storage account, use the following scheme. The account name is part of the URI host name, and the resource being addressed is part of the URI path:
<http|https>://<account-name>.<service-name>.core.windows.net/<resource-path>
For example, the following URI is a valid address for a blob in an Azure storage account:
https://myaccount.blob.core.windows.net/mycontainer/myblob.txt
However, because Azurite runs on the local computer, it uses an IP-style URI by default, and the account name is part of the URI path instead of the host name. Use the following URI format for a resource in Azurite:
http://<local-machine-address>:<port>/<account-name>/<resource-path>
For example, the following address might be used for accessing a blob in Azurite:
http://127.0.0.1:10000/myaccount/mycontainer/myblob.txt
The service endpoints for Azurite blob service:
http://127.0.0.1:10000/<account-name>/<resource-path>
Optionally, you could modify your hosts file, to access an account with production-style URL.
First, add line(s) to your hosts file, like:
127.0.0.1 account1.blob.localhost
127.0.0.1 account1.queue.localhost
127.0.0.1 account1.table.localhost
Secondly, set environment variables to enable customized storage accounts & keys:
set AZURITE_ACCOUNTS="account1:key1:key2"
You could add more accounts. See the section above.
Finally, start Azurite and use a customized connection string to access your account.
In the connection string below, it is assumed default ports are used.
DefaultEndpointsProtocol=http;AccountName=account1;AccountKey=key1;BlobEndpoint=http://account1.blob.localhost:10000;QueueEndpoint=http://account1.queue.localhost:10001;TableEndpoint=http://account1.table.localhost:10002;
Note. Do not access the default account this way with Azure Storage Explorer. There is a bug where Storage Explorer always adds the account name to the URL path, causing failures.
Note. When using a production-style URL to access Azurite, by default the account name should be the host name in the FQDN, like "http://devstoreaccount1.blob.localhost:10000/container". To use a production-style URL with the account name in the URL path, like "http://foo.bar.com:10000/devstoreaccount1/container", please start Azurite with --disableProductStyleUrl.
Note. If "host.docker.internal" is used as the request URI host, like "http://host.docker.internal:10000/devstoreaccount1/container", Azurite will always get the account name from the request URI path, no matter whether Azurite was started with --disableProductStyleUrl or not.
Please reach out to us if you have requirements or suggestions for a distributed Azurite implementation or higher performance.
Azurite is not a scalable storage service and does not support many concurrent clients. There is also no performance or TPS guarantee; these depend heavily on the environment in which Azurite is deployed.
Please reach out to us if you have requirements or suggestions for specific error handling.
Azurite tries to align with Azure Storage error handling logic and provides best-effort alignment based on the Azure Storage online documentation, but it CANNOT provide 100% alignment; for example, error messages (returned in the error response body) may differ, while error status codes will align.
Azurite V3 follows a try-best-to-serve-compatible strategy with Azure Storage API versions: requests carrying an x-ms-version that Azurite cannot serve are rejected (HTTP status code 400 - Bad Request) unless the API version check is skipped.

Azurite supports read-access geo-redundant replication (RA-GRS). For storage resources both in the cloud and in the local emulator, you can access the secondary location by appending -secondary to the account name. For example, the following address might be used for accessing a blob using the secondary location in Azurite:
http://127.0.0.1:10000/devstoreaccount1-secondary/mycontainer/myblob.txt
Note. The secondary endpoint is not read-only in Azurite, which differs from Azure Storage.
Both Azurite V3 and Azurite V2 aim to provide a convenient emulation for customers to quickly try out Azure Storage services locally. There are lots of differences between Azurite V3 and legacy Azurite V2.
The architecture in Azurite V3 has been refactored; it is more flexible and robust, and provides the flexibility to support more scenarios in the future.
Azurite V3 leverages a TypeScript server code generator based on Azure Storage REST API swagger specifications. This reduces manual efforts and ensures alignment with the API implementation.
Azurite V3 selected TypeScript as its programming language, as this facilitates broad collaboration, whilst also ensuring quality.
Legacy Azurite V2 supports the Azure Storage Blob, Queue and Table services. Azurite V3 initially supported only the blob service; queue service support was added after V3.2.0-preview, and table service support is currently in preview.
Azurite V3 supports features from Azure Storage API version 2023-01-03, and will maintain parity with the latest API versions on a more frequent update cadence than legacy Azurite V2.
Azurite V3 leverages a TypeScript Node.js Server Code Generator to generate the majority of code from Azure Storage REST APIs swagger specification.
Currently, the generator project is private, under development and only used by Azurite V3.
We have plans to make the TypeScript server generator public after Azurite V3 releases.
All the generated code is kept in the generated folder, including the generated middleware and the request and response models.
The latest release targets API version 2025-01-05 for the blob service.
Detailed support matrix:
Supported Vertical Features
Supported REST APIs
The following features or REST APIs are NOT supported or only partially supported in this release (more features will be supported in future releases based on customer feedback):
The latest version supports API version 2025-01-05 for the queue service. Detailed support matrix:
The latest version supports API version 2025-01-05 for the table service (preview). Detailed support matrix:
This project is licensed under MIT.
Go to the GitHub project page or GitHub issues for the milestone and TODO items we use to track upcoming features and bug fixes.
We are currently working on Azurite V3 to implement the remaining Azure Storage REST APIs. We have finished the basic structure and the majority of features in Blob Storage, as can be seen in the support matrix. The detailed work items are also tracked in GitHub repository projects and issues.
Any contributions and suggestions for Azurite V3 are welcome; please go to CONTRIBUTION.md for detailed contribution guidelines. Alternatively, you can open GitHub issues to vote for any missing features in Azurite V3.
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
2024.10 Version 3.33.0
General:
Blob: