
azure-storage - npm Package Compare versions

Comparing version 1.0.1 to 1.1.0


ChangeLog.md
Note: This is an Azure Storage only package. The all-up Azure Node.js SDK still contains the old storage bits; in a future release those bits will be removed and replaced with an npm dependency on this storage SDK.
This is a GA release, and the changes described below are relative to the Azure Node.js SDK 0.9.8 available here - https://github.com/Azure/azure-sdk-for-node.
2016.06 Version 1.1.0
ALL
* Fixed an issue where using a SAS did not work against the storage emulator.
* Fixed an issue where the service SAS signature was incorrect when the protocol parameter was specified.
* Fixed an issue where the timeout query string was sent in milliseconds instead of the expected seconds.
BLOB
* Added a snapshotId parameter to the BlobService.getUrl function to support getting the URL of a specific snapshot (see the sketch following this version's change list).
* Fixed an issue where getUrl did not work against the storage emulator.
* Fixed a race condition in the BlockRangeStream._getTypeList function where the _rangeList could be deleted before it was used.
* Fixed an issue where downloading a block blob larger than 32MB failed when using anonymous credentials.
* Added `CREATE` to `BlobUtilities.SharedAccessPermissions`.
TABLE
* Supported plain string values for the entity PartitionKey and RowKey.
* Supported implicit Edm type values for entity properties. The supported implicit Edm types include Int32, Double, Bool, DateTime and String (see the sketch following this version's change list).
FILE
* Fixed an issue where getUrl did not work against the storage emulator.
* Added `CREATE` to `FileUtilities.SharedAccessPermissions`.
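
To make the 1.1.0 additions above concrete, here is a minimal, hypothetical sketch of the new snapshotId parameter on BlobService.getUrl and the implicit Edm typing for table entities. The container, blob, table and snapshot values are placeholders, and the parameter order follows the getUrl signature documented for this release.

```javascript
// Illustrative only: container, blob, table names and the snapshot ID are placeholders.
var azure = require('azure-storage');

// Reads AZURE_STORAGE_CONNECTION_STRING or AZURE_STORAGE_ACCOUNT/AZURE_STORAGE_ACCESS_KEY.
var blobService = azure.createBlobService();

// getUrl now accepts a snapshot ID, so the returned URL points at that snapshot.
var snapshotUrl = blobService.getUrl(
  'mycontainer', 'myblob', null /* sasToken */, true /* primary */,
  '2016-06-01T00:00:00.0000000Z' /* snapshotId, placeholder value */);
console.log(snapshotUrl);

// Table entities can now use implicit Edm types: plain JavaScript values are mapped
// to Int32, Double, Bool, DateTime or String without entityGenerator wrappers.
var tableService = azure.createTableService();
var entity = {
  PartitionKey: 'partition1',  // plain string, no wrapper object needed
  RowKey: 'row1',
  count: 42,                   // implicit Int32
  ratio: 0.5,                  // implicit Double
  active: true,                // implicit Bool
  updated: new Date()          // implicit DateTime
};
tableService.insertEntity('mytable', entity, function (error) {
  if (!error) {
    console.log('entity inserted');
  }
});
```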
2016.05 Version 1.0.1

@@ -5,0 +28,0 @@


CONTRIBUTING.md
# Contribute Code or Provide Feedback
If you would like to become an active contributor to this project please follow the instructions provided in [Windows Azure Projects Contribution Guidelines](http://azure.github.io/guidelines.html).
If you would like to become an active contributor to this project please follow the instructions provided in [Microsoft Azure Projects Contribution Guidelines](https://azure.github.io/guidelines/).
If you encounter any bugs with the library please file an issue in the [Issues](https://github.com/Azure/azure-storage-node/issues) section of the project.
## Project Setup
The Azure Storage development team uses Visual Studio Code so instructions will be tailored to that preference. However, any preferred IDE or other toolset should be usable.
### Install
* Node v0.10, v0.12 or v4
* [Visual Studio Code](https://code.visualstudio.com/)
### Development Environment Setup
To get the source code of the SDK via **git** just type:
```bash
git clone https://github.com/Azure/azure-storage-node.git
cd ./azure-storage-node
```
Then, run NPM to install all the NPM dependencies:
```bash
npm install
```
## Tests
### Running
Unit tests don't require real credentials or any environment variables to be set. By default, the unit tests run against Nock-recorded data.
If you would like to run the unit tests against a live storage account, you will need to set up environment variables with your credentials. The tests will use those credentials to run live against Azure. Note that you will be charged for storage usage, and you should verify that the clean-up script did its job at the end of a test run.
Unit tests can then be run from root directory using:
```bash
npm test
```
To run unit tests against a live storage account, turn off Nock by setting the following environment variable:
```bash
export NOCK_OFF=true
```
and set up environment variables for the storage account credentials with either:
```bash
export AZURE_STORAGE_CONNECTION_STRING="valid storage connection string"
```
or
```bash
export AZURE_STORAGE_ACCOUNT="valid storage account name"
export AZURE_STORAGE_ACCESS_KEY="valid storage account key"
```
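
Putting those steps together, a typical live test run looks like the following sketch (assuming a real storage account you are willing to be charged for):

```bash
# Example live test run against a real storage account (you will be charged for usage)
export NOCK_OFF=true
export AZURE_STORAGE_ACCOUNT="valid storage account name"
export AZURE_STORAGE_ACCESS_KEY="valid storage account key"
npm test
```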
### Testing Features
As you develop a feature, you'll need to write tests to ensure quality. Your changes should be covered by unit tests. You should also run existing tests related to your change to catch any unexpected breaks.
## Pull Requests
### Guidelines
The following are the minimum requirements that any pull request must meet before contributions can be accepted.
* Make sure you've signed the [CLA](https://cla.azure.com/) before you start working on any change.
* Discuss any proposed contribution with the team via a GitHub issue **before** starting development.
* Code must be professional quality
* No style issues
* You should strive to mimic the style with which we have written the library
* Clean, well-commented, well-designed code
* Try to limit the number of commits for a feature to 1-2. If you end up having too many we may ask you to squash your changes into fewer commits.
* [ChangeLog.md](ChangeLog.md) needs to be updated describing the new change
* Thoroughly test your feature
### Branching Policy
Changes should be based on the **dev** branch, not **master**, as master is considered publicly released code. Each breaking change should be recorded in [BreakingChanges.md](BreakingChanges.md).
### Adding Features for All Platforms
We strive to release each new feature for each of our environments at the same time. Therefore, we ask that all contributions be written for Node v0.10 and later.
### Review Process
We expect all guidelines to be met before accepting a pull request. As such, we will work with you to address issues we find by leaving comments in your code. Please understand that it may take a few iterations before the code is accepted as we maintain high standards on code quality. Once we feel comfortable with a contribution, we will validate the change and accept the pull request.
Thank you for any contributions! Please let the team know if you have any questions or concerns about our contribution policy.


examples/samples/sassample.js

@@ -36,4 +36,4 @@ //

var azure;
if (fs.existsSync('absolute path to azure-storage.js')) {
azure = require('absolute path to azure-storage');
if (fs.existsSync('../../lib/azure-storage.js')) {
azure = require('../../lib/azure-storage');
} else {

@@ -200,7 +200,3 @@ azure = require('azure-storage');

console.log('Downloaded the blob ' + blob + ' by using the shared access signature URL: \n ' + sharedBlobService.getUrl(container, blob, sharedAccessSignatureToken));
assert.equal(headers.cacheControl, result.cacheControl);
assert.equal(headers.contentDisposition, result.contentDisposition);
assert.equal(headers.contentEncoding, result.contentEncoding);
assert.equal(headers.contentLanguage, result.contentLanguage);
assert.equal(headers.contentType, result.contentType);
}

@@ -207,0 +203,0 @@

@@ -221,3 +221,3 @@ //

* @param {string} sharedAccessPolicy.AccessPolicy.IPAddressOrRange The permission type. Refer to `Constants.AccountSasConstants.ResourceTypes`.
* @param {string} sharedAccessPolicy.AccessPolicy.Protocol The possible protocol. Refer to `Constants.AccountSasConstants.ResourceTypes`.
* @param {string} sharedAccessPolicy.AccessPolicy.Protocols The possible protocols. Refer to `Constants.AccountSasConstants.ResourceTypes`.
*/

@@ -326,3 +326,3 @@ exports.generateAccountSharedAccessSignature = function(storageAccountOrConnectionString, storageAccessKey, sharedAccessAccountPolicy)

* @property {string} IPAddressOrRange An IP address or a range of IP addresses from which to accept requests. When specifying a range, note that the range is inclusive.
* @property {string} Protocol The protocol permitted for a request made with the SAS.
* @property {string} Protocols The protocols permitted for a request made with the SAS.
* @property {string} Services The services (blob, file, queue, table) for a shared access signature associated with this shared access policy.

@@ -329,0 +329,0 @@ * @property {string} ResourceTypes The resource type for a shared access signature associated with this shared access policy.

@@ -510,3 +510,3 @@ //

if(!azureutil.objectIsNull(options.timeoutIntervalInMs) && options.timeoutIntervalInMs > 0) {
webResource.withQueryOption(QueryStringConstants.TIMEOUT, options.timeoutIntervalInMs);
webResource.withQueryOption(QueryStringConstants.TIMEOUT, Math.ceil(options.timeoutIntervalInMs / 1000));
}
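
For reference, the conversion above exists because the service's `timeout` query parameter is expressed in seconds while the SDK option is given in milliseconds; a trivial, illustrative sketch:

```javascript
// Illustrative only: the timeout option is supplied in milliseconds, but the
// service expects the 'timeout' query parameter in seconds, hence Math.ceil(ms / 1000).
var options = { timeoutIntervalInMs: 30000 };
var timeoutInSeconds = Math.ceil(options.timeoutIntervalInMs / 1000);
console.log(timeoutInSeconds); // 30
```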

@@ -817,3 +817,8 @@

webResource.uri = url.resolve(host, url.format({pathname: webResource.path, query: webResource.queryString}));
if(host && host.lastIndexOf('/') !== (host.length - 1)){
host = host + '/';
}
var fullPath = url.format({pathname: webResource.path, query: webResource.queryString});
webResource.uri = url.resolve(host, fullPath);
webResource.path = url.parse(webResource.uri).pathname;

@@ -824,6 +829,4 @@ };

* Retrieves the normalized path to be used in a request.
* This takes into consideration the usePathStyleUri object field
* which specifies if the request is against the emulator or against
* the production service. It also adds a leading "/" to the path in case
* it's not there before.
* It also removes any leading "/" of the path in case
* it's there before.
* @ignore

@@ -835,11 +838,7 @@ * @param {string} path The path to be normalized.

if (path === null || path === undefined) {
path = '/';
} else if (path.indexOf('/') !== 0) {
path = '/' + path;
path = '';
} else if (path.indexOf('/') === 0) {
path = path.substring(1);
}
if (this.usePathStyleUri) {
path = '/' + this.storageAccount + path;
}
return path;

@@ -846,0 +845,0 @@ };

@@ -62,3 +62,3 @@ //

* @param {string} sharedAccessPolicy.AccessPolicy.IPAddressOrRange An IP address or a range of IP addresses from which to accept requests. When specifying a range, note that the range is inclusive.
* @param {string} sharedAccessPolicy.AccessPolicy.Protocol The protocol permitted for a request made with the account SAS.
* @param {string} sharedAccessPolicy.AccessPolicy.Protocols The protocols permitted for a request made with the account SAS.
* Possible values are both HTTPS and HTTP (https,http) or HTTPS only (https). The default value is https,http.

@@ -131,3 +131,3 @@ * Refer to `Constants.AccountSasConstants.Protocols`.

* @param {string} sharedAccessPolicy.AccessPolicy.IPAddressOrRange An IP address or a range of IP addresses from which to accept requests. When specifying a range, note that the range is inclusive.
* @param {string} sharedAccessPolicy.AccessPolicy.Protocol The protocol permitted for a request made with the account SAS.
* @param {string} sharedAccessPolicy.AccessPolicy.Protocols The protocols permitted for a request made with the account SAS.
* Possible values are both HTTPS and HTTP (https,http) or HTTPS only (https). The default value is https,http.

@@ -290,3 +290,3 @@ * Refer to `Constants.AccountSasConstants.Protocols`.

* @param {string} [sharedAccessPolicy.AccessPolicy.IPAddressOrRange] An IP address or a range of IP addresses from which to accept requests. When specifying a range, note that the range is inclusive.
* @param {string} [sharedAccessPolicy.AccessPolicy.Protocol] The protocol permitted for a request made with the account SAS.
* @param {string} [sharedAccessPolicy.AccessPolicy.Protocols] The protocols permitted for a request made with the account SAS.
* Possible values are both HTTPS and HTTP (https,http) or HTTPS only (https). The default value is https,http.

@@ -418,3 +418,3 @@ * @param {string} sasVersion A string indicating the desired SAS Version to use, in storage service version format. Value must be 2012-02-12 or later.

* @param {string} [sharedAccessPolicy.AccessPolicy.IPAddressOrRange] An IP address or a range of IP addresses from which to accept requests. When specifying a range, note that the range is inclusive.
* @param {string} [sharedAccessPolicy.AccessPolicy.Protocol] The protocol permitted for a request made with the account SAS.
* @param {string} [sharedAccessPolicy.AccessPolicy.Protocols] The protocols permitted for a request made with the account SAS.
* Possible values are both HTTPS and HTTP (https,http) or HTTPS only (https). The default value is https,http.
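
The hunks above rename the documented `Protocol` field to `Protocols` in the shared access policy. As a hedged sketch (account name, key, dates, permissions and IP range below are placeholders; the field names follow the JSDoc shown above), an account SAS using that field could be generated like this:

```javascript
// Illustrative account SAS generation; all values below are placeholders.
var azure = require('azure-storage');

var sharedAccessAccountPolicy = {
  AccessPolicy: {
    Services: 'b',                              // blob service
    ResourceTypes: 'sco',                       // service, container, object
    Permissions: 'rl',                          // read + list
    Protocols: 'https',                         // HTTPS only; default is 'https,http'
    IPAddressOrRange: '168.1.5.60-168.1.5.70',  // inclusive range
    Start: new Date('2016-06-01T00:00:00Z'),
    Expiry: new Date('2016-07-01T00:00:00Z')
  }
};

var sasToken = azure.generateAccountSharedAccessSignature(
  'myaccount', 'myaccountkey', sharedAccessAccountPolicy);
console.log(sasToken);
```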

@@ -474,3 +474,3 @@ * @param {string} sasVersion A string indicating the desired SAS Version to use, in storage service version format. Value must be 2012-02-12 or later.

getvalueToAppend(sharedAccessPolicy.Id) +
getvalueToAppend(sharedAccessPolicy.AccessPolicy ? sharedAccessPolicy.AccessPolicy.Protocol : '') +
getvalueToAppend(sharedAccessPolicy.AccessPolicy ? sharedAccessPolicy.AccessPolicy.Protocols : '') +
getvalueToAppend(sharedAccessPolicy.AccessPolicy ? sharedAccessPolicy.AccessPolicy.IPAddressOrRange : '') +

@@ -477,0 +477,0 @@ sasVersion;

@@ -34,3 +34,3 @@ //

*/
USER_AGENT_PRODUCT_VERSION: '1.0.1',
USER_AGENT_PRODUCT_VERSION: '1.1.0',

@@ -37,0 +37,0 @@ /**

@@ -32,4 +32,5 @@ //

SharedAccessPermissions: {
READ: 'r',
ADD: 'a',
READ: 'r',
CREATE: 'c',
WRITE: 'w',

@@ -36,0 +37,0 @@ DELETE: 'd',

@@ -63,8 +63,8 @@ //

if (!options.bloblistType) {
options.bloblistType = BlobUtilities.BlockListFilter.ALL;
if (!options.blockListFilter) {
options.blockListFilter = BlobUtilities.BlockListFilter.ALL;
}
var self = this;
this.blobServiceClient.listBlocks(this.container, this.blob, options.bloblistType, options, function (error, blocklist, response) {
this.blobServiceClient.listBlocks(this.container, this.blob, options.blockListFilter, options, function (error, blocklist, response) {
if (error) {

@@ -108,16 +108,18 @@ callback(error);

var typeStart = false;
for (var blockType in this._rangelist) {
if (this._rangelist.hasOwnProperty(blockType)) {
if (this._emittedRangeType === null || typeStart || this._emittedRangeType == blockType) {
this._emittedRangeType = blockType;
typeStart = true;
} else if (this._emittedRangeType !== blockType) {
continue;
if (this._rangelist) {
for (var blockType in this._rangelist) {
if (this._rangelist.hasOwnProperty(blockType)) {
if (this._emittedRangeType === null || typeStart || this._emittedRangeType == blockType) {
this._emittedRangeType = blockType;
typeStart = true;
} else if (this._emittedRangeType !== blockType) {
continue;
}
if (this._paused) {
return;
}
this._emitBlockRange (blockType, callback);
}
if (this._paused) {
return;
}
this._emitBlockRange (blockType, callback);
}

@@ -124,0 +126,0 @@ }

@@ -33,2 +33,3 @@ //

READ: 'r',
CREATE: 'c',
WRITE: 'w',

@@ -35,0 +36,0 @@ DELETE: 'd',

@@ -36,18 +36,36 @@ //

exports.serializeJson = function (entity) {
function normalizeEntityProperty(property) {
if(azureutil.objectIsNull(property)) {
return { _: property };
}
if (typeof property === 'object' && property.hasOwnProperty(Constants.TableConstants.ODATA_VALUE_MARKER)) {
return property;
}
var result = { _: property };
result[Constants.TableConstants.ODATA_TYPE_MARKER] = edmHandler.propertyType(property, true);
return result;
}
var result = {};
for (var propName in entity) {
// ignore if .metadata or null or undefined
if ((propName !== Constants.TableConstants.ODATA_METADATA_MARKER) && !azureutil.objectIsNull(entity[propName][Constants.TableConstants.ODATA_VALUE_MARKER])) {
var value = entity[propName][Constants.TableConstants.ODATA_VALUE_MARKER];
var type = entity[propName][Constants.TableConstants.ODATA_TYPE_MARKER];
if (propName !== Constants.TableConstants.ODATA_METADATA_MARKER) {
var property = normalizeEntityProperty(entity[propName]);
if (!azureutil.objectIsNull(property[Constants.TableConstants.ODATA_VALUE_MARKER])) {
var value = property[Constants.TableConstants.ODATA_VALUE_MARKER];
var type = property[Constants.TableConstants.ODATA_TYPE_MARKER];
if (type === undefined) {
type = edmHandler.propertyType(value, true);
}
if (type === undefined) {
type = edmHandler.propertyType(value, true);
}
result[propName] = edmHandler.serializeValue(type, value);
if (edmHandler.isTypeRequired(type, value)) {
result[propName + Constants.TableConstants.ODATA_TYPE_SUFFIX] = type;
result[propName] = edmHandler.serializeValue(type, value);
if (edmHandler.isTypeRequired(type, value)) {
result[propName + Constants.TableConstants.ODATA_TYPE_SUFFIX] = type;
}
}
}
}
}

@@ -67,3 +85,3 @@

};
return JSON.stringify(result, replacer);

@@ -70,0 +88,0 @@ };

@@ -75,4 +75,19 @@ //

} else {
var path = getEntityPath(table, entityDescriptor.PartitionKey[TableConstants.ODATA_VALUE_MARKER], entityDescriptor.RowKey[TableConstants.ODATA_VALUE_MARKER]);
var partitionKey;
var rowKey;
if (typeof (entityDescriptor.PartitionKey) === 'string') {
partitionKey = entityDescriptor.PartitionKey;
} else {
partitionKey = entityDescriptor.PartitionKey[TableConstants.ODATA_VALUE_MARKER];
}
if (typeof (entityDescriptor.RowKey) === 'string') {
rowKey = entityDescriptor.RowKey;
} else {
rowKey = entityDescriptor.RowKey[TableConstants.ODATA_VALUE_MARKER];
}
var path = getEntityPath(table, partitionKey, rowKey);
if (operation === TableConstants.Operations.DELETE) {

@@ -79,0 +94,0 @@ webResource = WebResource.del(path);

{
"name": "azure-storage",
"author": "Microsoft Corporation",
"version": "1.0.1",
"version": "1.1.0",
"description": "Microsoft Azure Storage Client Library for Node.js",

@@ -6,0 +6,0 @@ "tags": [

# Microsoft Azure Storage SDK for Node.js
[![NPM version](https://badge.fury.io/js/azure-storage.svg)](http://badge.fury.io/js/azure-storage) [![Build Status](https://travis-ci.org/Azure/azure-storage-node.svg?branch=master)](https://travis-ci.org/Azure/azure-storage-node)
[![Coverage Status](https://coveralls.io/repos/Azure/azure-storage-node/badge.svg?branch=master&service=github)](https://coveralls.io/github/Azure/azure-storage-node?branch=master)
[![NPM version](https://badge.fury.io/js/azure-storage.svg)](http://badge.fury.io/js/azure-storage)
* Master [![Build Status](https://travis-ci.org/Azure/azure-storage-node.svg?branch=master)](https://travis-ci.org/Azure/azure-storage-node/branches) [![Coverage Status](https://coveralls.io/repos/Azure/azure-storage-node/badge.svg?branch=master&service=github)](https://coveralls.io/github/Azure/azure-storage-node?branch=master)
* Dev [![Build Status](https://travis-ci.org/Azure/azure-storage-node.svg?branch=dev)](https://travis-ci.org/Azure/azure-storage-node/branches) [![Coverage Status](https://coveralls.io/repos/Azure/azure-storage-node/badge.svg?branch=dev&service=github)](https://coveralls.io/github/Azure/azure-storage-node?branch=dev)
This project provides a Node.js package that makes it easy to consume and manage Microsoft Azure Storage Services.

@@ -452,2 +454,3 @@

```Batchfile
set NODE_TLS_REJECT_UNAUTHORIZED=0
set HTTP_PROXY=http://127.0.0.1:8888

@@ -479,4 +482,6 @@ ```

- Forums: Interact with the development teams on StackOverflow or the Microsoft Azure Forums
- Source Code Contributions: If you would like to become an active contributor to this project please follow the instructions provided in [Microsoft Azure Projects Contribution Guidelines](http://azure.github.com/guidelines.html).
- Source Code Contributions: If you would like to become an active contributor to this project please follow the instructions provided in [Contributing.md](CONTRIBUTING.md).
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
For general suggestions about Microsoft Azure please use our [UserVoice forum](http://feedback.azure.com/forums/34192--general-feedback).

Sorry, the diffs of the remaining files are too big to display.
