
Aggregates hi-res data from ATC traffic signal controllers into 15-minute binned ATSPM/performance measures.
atspm is a lightweight Python package that transforms raw traffic signal controller event logs into meaningful Traffic Signal Performance Measures (TSPMs). These measures help transportation agencies continuously monitor and optimize signal timing performance, detect issues, and take proactive actions, all in real time.
Unlike traditional traffic signal optimization tools like Synchro, which rely on periodic manual data collection and simulation models, ATSPM uses real-time data collected directly from signal controllers installed at intersections (inside the ITS cabinets). This real-time reporting capability allows agencies to generate performance data for any selected time range, making it ideal for continuously monitoring signal performance and diagnosing problems before they escalate.
Traditional signal retiming projects often depend on infrequent manual traffic studies and citizen complaints to detect problems. This reactive approach can delay maintenance, increase congestion, and compromise road safety. ATSPMs, on the other hand, enable proactive management by continuously collecting data and monitoring traffic signal performance, allowing agencies to solve issues before they lead to major traffic disruptions.
The Python atspm project is inspired by UDOT ATSPM (https://github.com/udotdevelopment/ATSPM), which is a full-stack application for collecting data from signal controllers and visualizing it at the intersection level for detailed real-time troubleshooting and analysis. This atspm package focuses instead on aggregation and analytics, enabling more of a system-wide monitoring approach. The two projects are complementary and can be deployed together.
With over 330,000 traffic signals operating in the US, agencies typically retime these signals every three to five years at a cost of around $4,500 per intersection. ATSPMs provide a significant improvement over this traditional model by offering continuous performance monitoring, reducing the need for costly manual interventions. (https://ops.fhwa.dot.gov/publications/fhwahop20002/ch2.htm)
This project focuses only on transforming event logs into performance measures and troubleshooting data; it does not include data visualization. Feel free to submit feature requests or bug reports, or to reach out with questions or comments. Contributions are welcome!
```shell
pip install atspm
```
Or pinned to a specific version:
```shell
pip install atspm==1.x.x
```
atspm works on Python 3.10-3.12 and is tested on Ubuntu, Windows, and macOS.
The best place to start is with these self-contained examples in Colab!
In this section, we will walk through an example of using the atspm package. It can be installed easily using pip as shown above.
The first step in running the tool is to define the parameters that will dictate how the data is processed. The parameters include global settings for input data, output formats, and options to select specific performance measures.
The key parameters are:

- `raw_data`: here set to `sample_data.data`. In a real-world scenario, this would be a DataFrame or file path (CSV/Parquet/JSON) containing traffic event logs.
- `detector_config`: defines how the detectors at the intersections are configured (e.g., their location, type).
- `output_dir`: here set to `test_folder`. This can be customized based on the user's needs.
- `output_format`: CSV in this example (`output_format: 'csv'`), but the package also supports other formats like Parquet or JSON.

```python
params = {
    'raw_data': sample_data.data,       # Path to raw event data
    'detector_config': sample_data.config,
    'bin_size': 15,                     # 15-minute aggregation bins
    'output_dir': 'test_folder',        # Output directory for results
    'output_format': 'csv',             # Output format (CSV/Parquet/JSON)
    'output_file_prefix': 'prefix_',    # Optional file prefix
    'remove_incomplete': True,          # Remove periods with incomplete data
    'verbose': 1,                       # Verbosity level (1: performance logging)
    'aggregations': [                   # Performance measures to calculate
        {'name': 'has_data', 'params': {'no_data_min': 5, 'min_data_points': 3}},
        {'name': 'actuations', 'params': {}},
        {'name': 'arrival_on_green', 'params': {'latency_offset_seconds': 0}},
        {'name': 'split_failures', 'params': {'red_time': 5, 'red_occupancy_threshold': 0.80, 'green_occupancy_threshold': 0.80}},
        # ... other performance measures
    ]
}
```
The core of atspm is calculating various traffic signal performance measures from the raw event log data. Each measure is based on specific traffic signal controller events such as vehicle actuations, pedestrian button presses, or signal changes (green, yellow, red).
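For context, a hi-res event log is essentially a stream of timestamped controller events. The sketch below shows what such input might look like; the column names and event codes here follow the common Indiana hi-res enumeration (e.g., 82 = detector on) and are illustrative assumptions, so verify them against your controller's export format.

```python
import pandas as pd

# Hypothetical sketch of a hi-res event log: one row per controller event.
# Column names and event codes are illustrative assumptions, not taken
# from the atspm package itself.
events = pd.DataFrame({
    "TimeStamp": pd.to_datetime([
        "2025-03-04 08:00:01.3",
        "2025-03-04 08:00:02.1",
        "2025-03-04 08:00:05.0",
    ]),
    "DeviceId": [1, 1, 1],
    "EventId": [82, 81, 1],   # e.g. 82=detector on, 81=detector off, 1=phase begin green
    "Parameter": [3, 3, 2],   # detector channel or phase number
})
print(events)
```

Aggregating thousands of such rows into 15-minute bins is what turns raw logs into performance measures.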
Each of these measures can be configured in the `params` dictionary. You can also add or remove measures based on your analysis needs.
```python
{
    'name': 'split_failures',
    'params': {
        'red_time': 5,                      # Minimum red time for a split failure
        'red_occupancy_threshold': 0.80,    # Threshold for red signal occupancy
        'green_occupancy_threshold': 0.80,  # Threshold for green signal occupancy
        'by_approach': True                 # Aggregate split failures by approach
    }
}
```
After setting the parameters, the next step is to run the data processor. This involves loading the raw data, performing the aggregations, and saving the results.
```python
from atspm import SignalDataProcessor

processor = SignalDataProcessor(**params)
processor.load()       # Load raw event data
processor.aggregate()  # Perform data aggregation
processor.save()       # Save aggregated results to the output folder
```
The `aggregate()` function computes the defined performance measures, while `save()` outputs the results to the specified folder.
After running the code, your output folder (e.g., `test_folder/`) will contain the results of the analysis, with the data split into subdirectories based on the performance measures.
```
test_folder/
├── actuations/
├── arrival_on_green/
├── split_failures/
└── ...
```
Inside each folder, there will be a CSV file named `prefix_.csv` with the aggregated performance data. In production, the prefix could be named using the date/time of the run, or you can output everything to a single folder.
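One way to build such a run-timestamped prefix (a sketch; the format string is a choice, not something the package prescribes):

```python
from datetime import datetime, timezone

# Sketch: a run-timestamped file prefix so each run's output files sort
# chronologically, e.g. "2025-03-04_0815_".
run_prefix = datetime.now(timezone.utc).strftime("%Y-%m-%d_%H%M") + "_"
params_update = {"output_file_prefix": run_prefix}
print(run_prefix)
```

Passing this as `output_file_prefix` keeps each run's files distinct without overwriting earlier results.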
You can also manually query the results from the internal database and retrieve the data as a Pandas DataFrame for further analysis:
```python
# Query results from the processor and convert to a Pandas DataFrame
results = processor.conn.query("SELECT * FROM actuations ORDER BY TimeStamp").df()
print(results.head())
```
Once you've collected a significant amount of data (e.g., 5 weeks), you can run advanced measures like detector health, which uses time series decomposition for anomaly detection. This feature allows you to identify malfunctioning detectors and impute missing data.
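The package's detector health implementation is not reproduced here; the following standalone sketch only illustrates the underlying idea (decompose a detector's binned counts into a seasonal profile plus residuals, then flag extreme residuals). All names and the synthetic data are assumptions for illustration, not atspm's API.

```python
import numpy as np
import pandas as pd

# Illustration of the underlying idea only -- not the package's actual
# detector-health implementation. Estimate the typical count for each
# 15-minute time-of-day slot, then flag bins with extreme residuals.
rng = np.random.default_rng(42)
idx = pd.date_range("2025-01-01", periods=96 * 14, freq="15min")  # 2 weeks
tod = np.arange(len(idx)) % 96  # time-of-day slot (96 bins per day)
counts = 50 + 30 * np.sin(2 * np.pi * tod / 96) + rng.normal(0, 3, len(idx))
counts[500:510] = 0.0  # simulate a stuck-off detector

df = pd.DataFrame({"count": counts, "tod": tod}, index=idx)
seasonal = df.groupby("tod")["count"].transform("median")  # robust profile
resid = df["count"] - seasonal
z = (resid - resid.median()) / resid.std()
anomalies = df[z.abs() > 4]
print(f"{len(anomalies)} anomalous bins flagged")
```

With enough history, the same principle lets stuck-on/stuck-off detectors stand out clearly from normal day-to-day variation.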
The package can also estimate pedestrian volumes from push button actuations using the methodology established in traffic studies. This is especially useful for understanding pedestrian activity at intersections.
```python
params = {
    'raw_data': 'path/to/ped_data.parquet',
    'bin_size': 15,  # Binned at 15-minute intervals
    'aggregations': [
        {'name': 'full_ped', 'params': {'seconds_between_actuations': 15, 'return_volumes': True}},
    ]
}
processor = SignalDataProcessor(**params)
processor.load()
processor.aggregate()
```
The output will provide an estimated count of pedestrian volumes at various intersections.
The data produced by atspm can easily be visualized using tools like Power BI, Plotly, or other data visualization platforms. This allows users to create dashboards showing key traffic metrics such as pedestrian volumes, signal timings, and detector health.
You can generate interactive maps of pedestrian volumes using plotly to create a visual representation of pedestrian activity:
```python
import plotly.graph_objects as go

# ped_data is assumed to be a DataFrame with Longitude, Latitude, Name,
# and PedVolumes columns, e.g. full_ped output joined to intersection locations.
fig = go.Figure(data=go.Scattermapbox(
    lon=ped_data['Longitude'],
    lat=ped_data['Latitude'],
    text=ped_data['Name'] + '<br>Pedestrian Volume: ' + ped_data['PedVolumes'].astype(str),
    mode='markers',
    marker=dict(
        size=ped_data['PedVolumes'] / 50,
        color=ped_data['PedVolumes'],
        colorscale='Viridis'
    )
))
# Note: the 'outdoors' style requires a Mapbox access token;
# 'open-street-map' works without one.
fig.update_layout(mapbox=dict(style='outdoors', zoom=5))
fig.show()
```
A good way to use the data is to output Parquet files to separate folders; a data visualization tool like Power BI can then read all the files in each folder and build a dashboard. For example, see: Oregon DOT ATSPM Dashboard
Avoid CSV files in production; use the Parquet format instead, which is significantly faster, smaller, and enforces data types.
The following performance measures are included:
Coming Soon:
Detailed documentation for each measure is coming soon.
Filling in missing time periods for detectors with zero actuations didn't work for incremental processing. This has been fixed by tracking a list of known detectors between runs, similar to the unmatched event tracking. It works like this: you provide a DataFrame or file path of known detectors, the package filters out detectors last seen more than n days ago, and then fills in missing time periods with zeros for the remaining detectors.
```python
known_detectors_df = 'path/to/known_detectors.csv'
# or supply a Pandas DataFrame directly

from atspm import SignalDataProcessor, sample_data

# Set up all parameters
params = {
    # Global Settings
    'raw_data': sample_data.data,
    'bin_size': 15,
    # Performance Measures
    'aggregations': [
        {'name': 'actuations', 'params': {
            'fill_in_missing': True,
            'known_detectors_df_or_path': known_detectors_df,
            'known_detectors_max_days_old': 2
        }}
    ]
}
```
After you run the processor, here's how to query the known detectors table:
```python
processor = SignalDataProcessor(**params)
processor.load()
processor.aggregate()
# Query the known detectors table as a DataFrame
known_detectors_df = processor.conn.query("SELECT * FROM known_detectors;").df()
```
Here's what the known detectors table could look like:
| DeviceId | Detector | LastSeen |
|---|---|---|
| 1 | 1 | 2025-03-04 00:00:00 |
| 1 | 2 | 2025-03-04 00:00:00 |
| 2 | 1 | 2025-03-04 00:00:00 |
Added option to fill in missing time periods for detector actuations with zeros. This makes it clearer when there are no actuations for a detector vs no data due to comm loss. Having zero-value actuation time periods also allows detector health to better identify anomalies due to stuck on/off detectors.
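The zero-filling described above can be sketched with a plain pandas reindex (illustrative only; the package handles this internally):

```python
import pandas as pd

# Sketch: reindex one detector's 15-minute bins over the full period and
# fill gaps with 0, so "no actuations" is distinguishable from "no data row".
obs = pd.Series(
    [12, 15, 9],
    index=pd.to_datetime(["2025-03-04 08:00", "2025-03-04 08:15", "2025-03-04 09:00"]),
)
full = pd.date_range("2025-03-04 08:00", "2025-03-04 09:00", freq="15min")
filled = obs.reindex(full, fill_value=0)
print(filled)
```

The 08:30 and 08:45 bins, absent from the raw output, now appear explicitly as zeros, which is what lets detector health distinguish quiet detectors from comm loss.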
New timeline events:
Also updated tests to include these new features. This is a lot of new events to process, so be sure to test thoroughly before deploying to production.
Fixed a timestamp conversion issue when reading unmatched events from a CSV file. Updated the unit tests to catch this issue in the future.
Ideas and contributions are welcome! Please feel free to submit a Pull Request. Note that GitHub Actions will automatically run unit tests on your code.
This project is licensed under the MIT License - see the LICENSE file for details.