This package can be installed using:
pip install adlfs
or
conda install -c conda-forge adlfs
The adl:// and abfs:// protocols are included in fsspec's known_implementations registry in fsspec > 0.6.1; otherwise, users must explicitly inform fsspec about the supported adlfs protocols.
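A minimal registration sketch for older fsspec releases, assuming a version that exposes fsspec.register_implementation (the class paths come from this package):

import fsspec

# Explicitly register the adlfs protocols (only needed if fsspec does not
# already know about them, i.e. fsspec <= 0.6.1).
fsspec.register_implementation('abfs', 'adlfs.AzureBlobFileSystem')
fsspec.register_implementation('az', 'adlfs.AzureBlobFileSystem')
fsspec.register_implementation('adl', 'adlfs.AzureDatalakeFileSystem')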
To use the Gen1 filesystem:
import dask.dataframe as dd
storage_options={'tenant_id': TENANT_ID, 'client_id': CLIENT_ID, 'client_secret': CLIENT_SECRET}
dd.read_csv('adl://{STORE_NAME}/{FOLDER}/*.csv', storage_options=storage_options)
To use the Gen2 filesystem you can use the protocol abfs or az:
import dask.dataframe as dd
storage_options={'account_name': ACCOUNT_NAME, 'account_key': ACCOUNT_KEY}
ddf = dd.read_csv('abfs://{CONTAINER}/{FOLDER}/*.csv', storage_options=storage_options)
ddf = dd.read_parquet('az://{CONTAINER}/folder.parquet', storage_options=storage_options)
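The filesystem class can also be used directly, without Dask, for listing and reading blobs. A minimal sketch, with placeholder container and file names:

import adlfs

# Instantiate the Gen2 filesystem with account credentials (placeholders).
fs = adlfs.AzureBlobFileSystem(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)

# List a container and read a single blob (placeholder paths).
print(fs.ls('{CONTAINER}'))
with fs.open('{CONTAINER}/{FOLDER}/example.csv', 'rb') as f:
    first_line = f.readline()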
Accepted protocol / uri formats include:
'PROTOCOL://container/path-part/file'
'PROTOCOL://container@account.dfs.core.windows.net/path-part/file'
Or, optionally, if AZURE_STORAGE_ACCOUNT_NAME and an AZURE_STORAGE_<CREDENTIAL> are set as environment variables, then storage_options will be read from the environment variables.
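For example, credentials could be supplied through the environment rather than storage_options. A sketch, assuming the key credential variable is named AZURE_STORAGE_ACCOUNT_KEY following the AZURE_STORAGE_<CREDENTIAL> pattern:

import os
import dask.dataframe as dd

# Assumed variable names following the AZURE_STORAGE_<CREDENTIAL> pattern.
os.environ['AZURE_STORAGE_ACCOUNT_NAME'] = ACCOUNT_NAME
os.environ['AZURE_STORAGE_ACCOUNT_KEY'] = ACCOUNT_KEY

# No storage_options needed; credentials are read from the environment.
ddf = dd.read_csv('abfs://{CONTAINER}/{FOLDER}/*.csv')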
To read from a public storage blob you are required to specify the 'account_name'.
For example, you can access NYC Taxi & Limousine Commission as:
storage_options = {'account_name': 'azureopendatastorage'}
ddf = dd.read_parquet('az://nyctlc/green/puYear=2019/puMonth=*/*.parquet', storage_options=storage_options)
The package includes pythonic filesystem implementations for both Azure Datalake Gen1 and Azure Datalake Gen2 that facilitate interactions between both Azure Datalake implementations and Dask. This is done by leveraging the intake/filesystem_spec base class and the Azure Python SDKs.
Operations against the Gen1 Datalake currently only work with an Azure ServicePrincipal with suitable credentials to perform operations on the resources of choice.
Operations against the Gen2 Datalake are implemented by leveraging the Azure Blob Storage Python SDK.
The storage_options can be instantiated with a variety of keyword arguments depending on the filesystem. The most commonly used arguments are:
connection_string
account_name
account_key
sas_token
tenant_id, client_id, and client_secret are combined for an Azure ServicePrincipal, e.g. storage_options={'account_name': ACCOUNT_NAME, 'tenant_id': TENANT_ID, 'client_id': CLIENT_ID, 'client_secret': CLIENT_SECRET}
anon: bool, optional. The value to use for whether to attempt anonymous access if no other credential is passed. By default (None), the AZURE_STORAGE_ANON environment variable is checked. False values (false, 0, f) will resolve to False and anonymous access will not be attempted; otherwise the value for anon resolves to True.
location_mode: valid values are "primary" or "secondary" and apply to RA-GRS accounts.

For more argument details see all arguments for AzureBlobFileSystem here and AzureDatalakeFileSystem here.
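As an illustration, a few of these credential combinations passed directly to AzureBlobFileSystem (a sketch only; the upper-case names are placeholders for your own values):

import adlfs

# Full connection string (placeholder value).
fs = adlfs.AzureBlobFileSystem(connection_string=CONNECTION_STRING)

# Account name plus SAS token (placeholder values).
fs = adlfs.AzureBlobFileSystem(account_name=ACCOUNT_NAME, sas_token=SAS_TOKEN)

# Azure ServicePrincipal credentials (placeholder values).
fs = adlfs.AzureBlobFileSystem(
    account_name=ACCOUNT_NAME,
    tenant_id=TENANT_ID,
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
)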
Environment variables of the form AZURE_STORAGE_<CREDENTIAL> (for example AZURE_STORAGE_ACCOUNT_NAME and AZURE_STORAGE_ACCOUNT_KEY) can also be set and picked up for authentication.
The filesystem can be instantiated for different use cases based on a variety of storage_options combinations. The following list describes some common use cases utilizing AzureBlobFileSystem, i.e. protocols abfs or az. Note that all cases require the account_name argument to be provided:
storage_options={'account_name': ACCOUNT_NAME, 'anon': True} will assume the ACCOUNT_NAME points to a public container, and attempt to use an anonymous login (a sketch follows this list). Note that the default value for anon is True.
storage_options={'account_name': ACCOUNT_NAME, 'anon': False} will use DefaultAzureCredential to get valid credentials to the container ACCOUNT_NAME. DefaultAzureCredential attempts to authenticate via the mechanisms and order visualized here.
Setting the environment variable AZURE_STORAGE_ANON to false (without passing anon in storage_options) also results in automatic credential resolution, which is useful for compatibility with fsspec.
tenant_id, client_id, and client_secret are all used as credentials for an Azure ServicePrincipal, e.g. storage_options={'account_name': ACCOUNT_NAME, 'tenant_id': TENANT_ID, 'client_id': CLIENT_ID, 'client_secret': CLIENT_SECRET}.

The AzureBlobFileSystem accepts all of the Async BlobServiceClient arguments.
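A sketch of the anonymous-access case, reusing the public azureopendatastorage account shown earlier:

import adlfs

# Anonymous access to a public account; no credential is passed.
fs = adlfs.AzureBlobFileSystem(account_name='azureopendatastorage', anon=True)

# List the top level of the public nyctlc container.
print(fs.ls('nyctlc'))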
By default, write operations create BlockBlobs in Azure, which, once written, cannot be appended. It is possible to create an AppendBlob using mode="ab" when creating and operating on blobs. Currently, AppendBlobs are not available if hierarchical namespaces are enabled.
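A minimal sketch of writing an AppendBlob with mode="ab" (placeholder account, container, and blob names; the account must not have hierarchical namespaces enabled):

import adlfs

fs = adlfs.AzureBlobFileSystem(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)

# mode="ab" creates an AppendBlob, or appends to an existing one.
with fs.open('{CONTAINER}/logs/events.log', mode='ab') as f:
    f.write(b'new log line\n')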