DataFusion in Python
This is a Python library that binds to the Apache Arrow in-memory query engine, DataFusion.
DataFusion's Python bindings can be used as a foundation for building new data systems in Python. Here are some examples:
- Dask SQL uses DataFusion's Python bindings for SQL parsing, query
planning, and logical plan optimizations, and then transpiles the logical plan to Dask operations for execution.
- DataFusion Ballista is a distributed SQL query engine that extends
DataFusion's Python bindings for distributed use cases.
It is also possible to use these Python bindings directly for DataFrame and SQL operations, but you may find that
Polars and DuckDB are more suitable for this use case, since they have
more of an end-user focus and are more actively maintained than these Python bindings.
Features
- Execute queries using SQL or DataFrames against CSV, Parquet, and JSON data sources.
- Queries are optimized using DataFusion's query optimizer.
- Execute user-defined Python code from SQL.
- Exchange data with Pandas and other DataFrame libraries that support PyArrow.
- Serialize and deserialize query plans in Substrait format.
- Experimental support for transpiling SQL queries to DataFrame calls with Polars, Pandas, and cuDF.
Example Usage
The following example demonstrates running a SQL query against a Parquet file using DataFusion, storing the results
in a Pandas DataFrame, and then plotting a chart.
The Parquet file used in this example (yellow_tripdata_2021-01.parquet) can be downloaded from the NYC Taxi & Limousine Commission trip record data page.
from datafusion import SessionContext
ctx = SessionContext()
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')
df = ctx.sql("select passenger_count, count(*) "
"from taxi "
"where passenger_count is not null "
"group by passenger_count "
"order by passenger_count")
pandas_df = df.to_pandas()
fig = pandas_df.plot(kind="bar", title="Trip Count by Number of Passengers").get_figure()
fig.savefig('chart.png')
This produces a bar chart of trip counts by number of passengers, saved as chart.png.
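The same aggregation can also be expressed with the DataFrame API instead of SQL. The sketch below is illustrative rather than taken from the bundled examples: it assumes the same Parquet file, omits the null filter for brevity, and uses the column and functions.count expression helpers as exposed by recent releases of the bindings.

from datafusion import SessionContext, column, functions as f

ctx = SessionContext()
ctx.register_parquet('taxi', 'yellow_tripdata_2021-01.parquet')

# Group by passenger_count and count rows, mirroring the SQL query above
# (the "is not null" filter is omitted here for brevity).
df = (
    ctx.table("taxi")
    .aggregate(
        [column("passenger_count")],
        [f.count(column("passenger_count"))],
    )
    .sort(column("passenger_count").sort(ascending=True))
)
print(df.to_pandas())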
Configuration
It is possible to configure both runtime settings (memory and disk) and session configuration when creating a context.
from datafusion import RuntimeConfig, SessionConfig, SessionContext

runtime = (
RuntimeConfig()
.with_disk_manager_os()
.with_fair_spill_pool(10000000)
)
config = (
SessionConfig()
.with_create_default_catalog_and_schema(True)
.with_default_catalog_and_schema("foo", "bar")
.with_target_partitions(8)
.with_information_schema(True)
.with_repartition_joins(False)
.with_repartition_aggregations(False)
.with_repartition_windows(False)
.with_parquet_pruning(False)
.set("datafusion.execution.parquet.pushdown_filters", "true")
)
ctx = SessionContext(config, runtime)
Refer to the API documentation for more information.
Printing the context will show the current configuration settings.
print(ctx)
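Because the information schema was enabled above, the effective settings can also be inspected from SQL. A minimal sketch, assuming the ctx created with the configuration shown earlier:

# Query the current settings through SQL (requires with_information_schema(True)).
print(ctx.sql("select * from information_schema.df_settings").to_pandas())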
More Examples
See examples for more information.
- Executing Queries with DataFusion
- Running User-Defined Python Code (a minimal UDF sketch follows below)
- Substrait Support
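As a taste of the user-defined Python code examples, the sketch below registers a Python function as a scalar UDF and calls it from SQL. The name is_shared_ride and the lambda are purely illustrative, and the udf() argument order (including the name keyword) follows recent releases of the bindings and may differ in older versions.

import pyarrow as pa
import pyarrow.compute as pc
from datafusion import SessionContext, udf

ctx = SessionContext()

# Wrap a Python function operating on PyArrow arrays as a scalar UDF.
# "is_shared_ride" is an illustrative name, not part of the library.
is_shared_ride = udf(
    lambda passengers: pc.greater(passengers, 1),
    [pa.int64()],   # input types
    pa.bool_(),     # return type
    "stable",       # volatility
    name="is_shared_ride",
)
ctx.register_udf(is_shared_ride)

# Once registered, the UDF can be called from SQL like a built-in function.
print(ctx.sql("select is_shared_ride(2) as shared").to_pandas())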
How to install
Pip
pip install datafusion
python -m pip install datafusion
Conda
conda install -c conda-forge datafusion
You can verify the installation by running:
>>> import datafusion
>>> datafusion.__version__
'0.6.0'
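A quick functional check that needs no external data (the query itself is arbitrary):

from datafusion import SessionContext

# Run a trivial query to confirm the installation works end to end;
# collect() returns the results as PyArrow record batches.
ctx = SessionContext()
print(ctx.sql("select 1 + 1 as result").collect())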
How to develop
This assumes that you have Rust and Cargo installed. We use the workflow recommended by PyO3 and Maturin.
The Maturin tooling used in this workflow can be installed via either Conda or Pip. Both approaches offer the same experience and are provided only to suit developer preference. Bootstrapping for each is shown below.
Bootstrap (Conda):
git clone git@github.com:apache/datafusion-python.git
conda env create -f ./conda/environments/datafusion-dev.yaml -n datafusion-dev
conda activate datafusion-dev
Alternatively, if you are on an OS that supports the CUDA Toolkit, you can use -f ./conda/environments/datafusion-cuda-dev.yaml.
Bootstrap (Pip):
git clone git@github.com:apache/datafusion-python.git
python3 -m venv venv
source venv/bin/activate
python -m pip install -U pip
python -m pip install -r requirements.in
The tests rely on test data in git submodules.
git submodule init
git submodule update
Whenever Rust code changes (your changes or via git pull):
maturin develop
python -m pytest
Running & Installing pre-commit hooks
arrow-datafusion-python takes advantage of pre-commit to assist developers with code linting to help reduce
the number of commits that ultimately fail in CI due to linter errors. Using the pre-commit hooks is optional for the
developer but certainly helpful for keeping PRs clean and concise.
Our pre-commit hooks can be installed by running pre-commit install, which installs the configurations from your ARROW_DATAFUSION_PYTHON_ROOT/.github directory and runs each time you perform a commit, aborting the commit if an offending lint is found so that you can fix it locally before pushing.
The pre-commit hooks can also be run ad hoc, without installing them, by simply running pre-commit run --all-files.
Running linters without using pre-commit
There are scripts in ci/scripts for running Rust and Python linters.
./ci/scripts/python_lint.sh
./ci/scripts/rust_clippy.sh
./ci/scripts/rust_fmt.sh
./ci/scripts/rust_toml_fmt.sh
How to update dependencies
To change test dependencies, edit requirements.in and run:
python -m pip install pip-tools
python -m piptools compile --generate-hashes -o requirements-310.txt
To update dependencies, run with -U
python -m piptools compile -U --generate-hashes -o requirements-310.txt
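To upgrade only a single dependency rather than everything, pip-tools also accepts -P/--upgrade-package (the package name below is just an example):

python -m piptools compile -P pandas --generate-hashes -o requirements-310.txt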
More details are available in the pip-tools documentation.