
Apache Beam is a unified model for defining both batch and streaming data-parallel processing pipelines, as well as a set of language-specific SDKs for constructing pipelines and Runners for executing them on distributed processing backends, including Apache Flink, Apache Spark, Google Cloud Dataflow, and Hazelcast Jet.
Beam provides a general approach to expressing embarrassingly parallel data processing pipelines and supports three categories of users, each with relatively disparate backgrounds and needs.
The model behind Beam evolved from several internal Google data processing projects, including MapReduce, FlumeJava, and Millwheel. This model was originally known as the "Dataflow Model".
To learn more about the Beam Model (though still under the original name of Dataflow), see the World Beyond Batch: Streaming 101 and Streaming 102 posts on O'Reilly's Radar site, and the VLDB 2015 paper.
The key concepts in the Beam programming model are:

- `PCollection`: represents a collection of data, which could be bounded or unbounded in size.
- `PTransform`: represents a computation that transforms input PCollections into output PCollections.
- `Pipeline`: manages a directed acyclic graph of PTransforms and PCollections that is ready for execution.
- `PipelineRunner`: specifies where and how the pipeline should execute.
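A minimal word-count sketch tying these concepts together (the input strings and step labels are illustrative, not from the Beam examples):

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# The runner option selects the PipelineRunner; DirectRunner executes locally.
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as pipeline:
    # beam.Create yields a bounded PCollection from an in-memory list.
    lines = pipeline | "Create" >> beam.Create(["hello world", "hello beam"])
    counts = (
        lines
        # Each "| label >> transform" applies a PTransform to a PCollection.
        | "Split" >> beam.FlatMap(str.split)
        | "PairWithOne" >> beam.Map(lambda word: (word, 1))
        | "CountPerWord" >> beam.CombinePerKey(sum)
    )
    counts | "Print" >> beam.Map(print)
```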
Beam supports executing programs on multiple distributed processing backends through PipelineRunners. Currently, the following PipelineRunners are available:

- `DirectRunner` runs the pipeline on your local machine.
- `PrismRunner` runs the pipeline on your local machine using Beam Portability.
- `DataflowRunner` submits the pipeline to Google Cloud Dataflow.
- `FlinkRunner` runs the pipeline on an Apache Flink cluster. The code has been donated from dataArtisans/flink-dataflow and is now part of Beam.
- `SparkRunner` runs the pipeline on an Apache Spark cluster.
- `JetRunner` runs the pipeline on a Hazelcast Jet cluster. The code has been donated from hazelcast/hazelcast-jet and is now part of Beam.
- `Twister2Runner` runs the pipeline on a Twister2 cluster. The code has been donated from DSC-SPIDAL/twister2 and is now part of Beam.

Have ideas for new Runners? See the runner-ideas label.
Get started with the Beam Python SDK quickstart to set up your Python development environment, get the Beam SDK for Python, and run an example pipeline. Then, read through the Beam programming guide to learn the basic concepts that apply to all SDKs in Beam.
See the Python API reference for more information on individual APIs.
Python streaming pipeline execution is available (with some limitations) starting with Beam SDK version 2.5.0.
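A minimal sketch of enabling streaming mode (the Pub/Sub topic is a placeholder, and an unbounded source needs a runner that supports it):

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# streaming=True corresponds to the --streaming pipeline flag.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     # ReadFromPubSub produces an unbounded PCollection of messages.
     | beam.io.ReadFromPubSub(topic="projects/my-project/topics/my-topic")
     | beam.Map(print))
```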
Python is a dynamically-typed language with no static type checking. The Beam SDK for Python uses type hints during pipeline construction and runtime to try to emulate the correctness guarantees achieved by true static typing. Ensuring Python Type Safety walks through how to use type hints, which help you to catch potential bugs up front with the Direct Runner.
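For example, transforms can carry inline type hints that are checked when the pipeline is constructed; a sketch using the SDK's `with_input_types`/`with_output_types` helpers:

```python
import apache_beam as beam

with beam.Pipeline() as pipeline:
    numbers = (
        pipeline
        | beam.Create(["1", "2", "3"])
        # Declare that this transform consumes str and produces int.
        | beam.Map(int).with_input_types(str).with_output_types(int)
    )
    numbers | beam.Map(lambda n: n * 2) | beam.Map(print)
    # A mismatched hint such as:
    #   numbers | beam.Map(len).with_input_types(str)
    # would raise a type-hint error at construction time, before any data flows.
```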
When you run your pipeline locally, the packages that your pipeline depends on are available because they are installed on your local machine. However, when you want to run your pipeline remotely, you must make sure these dependencies are available on the remote machines. Managing Python Pipeline Dependencies shows you how to make your dependencies available to the remote workers.
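As a sketch, the standard setup options can name the dependencies to stage; the file paths here are placeholders for your project's own files:

```python
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    requirements_file="requirements.txt",  # PyPI packages installed on each worker
    # setup_file="./setup.py",             # for pipelines split across local modules
    # extra_packages=["./dist/my_pkg-0.1.tar.gz"],  # locally built distributions
)
```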
The Beam SDK for Python provides an extensible API that you can use to create new I/O connectors. See the Developing I/O connectors overview for information about developing new I/O connectors and links to language-specific implementation guidance.
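The simplest starting point is a composite `PTransform` built from existing primitives; the toy "connector" below is hypothetical and skips the splittable-DoFn machinery that real, parallel connectors use:

```python
import apache_beam as beam

class ReadFromInMemoryList(beam.PTransform):
    """Hypothetical mini-connector that 'reads' from a Python list."""

    def __init__(self, records):
        self._records = records

    def expand(self, pbegin):
        # A composite transform just wires existing transforms together.
        return pbegin | beam.Create(self._records)

with beam.Pipeline() as pipeline:
    pipeline | ReadFromInMemoryList(["a", "b", "c"]) | beam.Map(print)
```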
To integrate machine learning models into your pipelines for making inferences, use the RunInference API for PyTorch and Scikit-learn models. If you are using TensorFlow models, you can make use of the library from `tfx_bsl`.
You can create multiple types of transforms using the RunInference API: the API takes multiple types of setup parameters from model handlers, and the parameter type determines the model implementation. For more information, see About Beam ML.
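A sketch with a scikit-learn handler (the model path is a placeholder; the handler class is what selects the framework-specific implementation):

```python
import apache_beam as beam
import numpy as np
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Placeholder path to a pickled scikit-learn model.
model_handler = SklearnModelHandlerNumpy(model_uri="gs://my-bucket/model.pkl")

with beam.Pipeline() as pipeline:
    (pipeline
     | beam.Create([np.array([1.0, 2.0]), np.array([3.0, 4.0])])
     | RunInference(model_handler)  # emits PredictionResult elements
     | beam.Map(print))
```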
TensorFlow Extended (TFX) is an end-to-end platform for deploying production ML pipelines. TFX is integrated with Beam. For more information, see TFX user guide.
Apache Beam lets you combine transforms written in any supported SDK language and use them in one multi-language pipeline. To learn how to create a multi-language pipeline using the Python SDK, see the Python multi-language pipelines quickstart.
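For instance, `ReadFromKafka` in the Python SDK wraps Beam's Java Kafka connector, so any pipeline using it is already multi-language (the broker and topic below are placeholders, and expanding the transform requires a Java expansion service or Docker):

```python
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka

with beam.Pipeline() as pipeline:
    (pipeline
     # Expands to the Java KafkaIO transform behind the scenes.
     | ReadFromKafka(
           consumer_config={"bootstrap.servers": "localhost:9092"},
           topics=["my-topic"])
     | beam.Map(print))
```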
Some common errors can occur during worker start-up and prevent jobs from starting. To learn about these errors and how to troubleshoot them in the Python SDK, see Unrecoverable Errors in Beam Python.
Here are some resources actively maintained by the Beam community to help you get started:
| Resource | Details |
|---|---|
| Apache Beam Website | Our website discussing the project and its specifics. |
| Python Quickstart | A guide to getting started with the Python SDK. |
| Tour of Beam | A comprehensive, interactive learning experience covering Beam concepts in depth. |
| Beam Quest | A certification granted by Google Cloud, certifying proficiency in Beam. |
| Community Metrics | Beam's Git Community Metrics. |
Instructions for building and testing Beam itself are in the contribution guide.
To get involved with Apache Beam:

- Subscribe to or e-mail the user@beam.apache.org list.
- Subscribe to or e-mail the dev@beam.apache.org list.
- Report an issue in the Apache Beam GitHub repository.