
Highlights of the most important features delivered in recent BehaveX releases:
Tag Expressions v2 Support (v4.6.0) - Native support for Cucumber-style tag expressions with boolean logic (and, or, not), parentheses grouping, wildcard matching (@prefix*, @*suffix, @substring), and complex filtering. Supported in Behave 1.3.0+ with zero external dependencies. See Tag Expressions for comprehensive examples and usage.
Enhanced Behave Integration (v4.5.0) - Added support for newer Behave versions (>= 1.3.0). Also, a major performance overhaul using direct Behave Runner class integration, providing better programmatic control with improved status detection efficiency. See Migration to BehaveX 4.5.0 for upgrade considerations.
Enhanced Error Status Handling (v4.5.0) - Comprehensive improvements in "error" status management, now preserving the original "error" status instead of converting it to "failed" for more accurate reporting.
Interactive Execution Timeline Chart (v4.5.0) - New visual timeline in HTML reports displaying scenario execution order, duration, and status across parallel processes.
Test Execution Ordering (v4.4.1) - Control the sequence of scenario and feature execution during parallel runs using order tags (e.g., @ORDER_001, @ORDER_010). Now includes a strict ordering mode (--order-tests-strict) for scenarios that must wait for lower-order tests to complete.
Allure Reports Integration (v4.2.1) - Generate beautiful, comprehensive test reports with Allure framework integration.
Console Progress Bar (v3.2.13) - Real-time progress tracking during parallel test execution.
BehaveX is a BDD testing solution built on top of the Python Behave library, orchestrating parallel test sessions to enhance your testing workflow with additional features and performance improvements. It's particularly beneficial when you need to speed up test runs through parallel execution, produce richer test reports, or apply additional execution controls such as muting, retrying, and ordering scenarios.
BehaveX provides the following features:
Parallel test executions, by feature or by scenario.
HTML, JSON, and JUnit reports, enriched with execution evidence, logs, and metrics.
A @MUTE tag to execute test scenarios without including them in the JUnit reports.
An enhanced dry-run mode built on top of the -d Behave argument.
An @AUTORETRY tag to automatically re-execute failing scenarios. You can also re-run all failing scenarios using the failing_scenarios.txt file.
To install BehaveX, execute the following command:
pip install behavex
BehaveX is compatible with the following Behave versions:
Behave 1.2.6 (stable)
Behave 1.3.0 or newer
BehaveX automatically installs a compatible version of Behave. If you need to use a specific version of Behave, you can install it explicitly:
# For Behave 1.2.6 (stable)
pip install behavex behave==1.2.6
# For Behave 1.3.0 or newer (latest)
pip install behavex "behave>=1.3.0"
Note: BehaveX includes compatibility fixes to ensure all features work correctly with multiple Behave versions.
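If you need to check programmatically which Behave version is installed (for example, to decide whether Tag Expressions v2 syntax is available), a minimal sketch using only the Python standard library could look like this:
# Minimal sketch (not part of BehaveX): detect the installed Behave version
# to decide whether Cucumber-style (v2) tag expressions can be used.
from importlib.metadata import version

behave_version = version("behave")  # e.g. "1.2.6" or "1.3.0"
major, minor = (int(part) for part in behave_version.split(".")[:2])
supports_v2_tags = (major, minor) >= (1, 3)
print(f"Behave {behave_version} - Tag Expressions v2 supported: {supports_v2_tags}")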
When upgrading to BehaveX 4.5.0 with Behave 1.3.0 or newer, be aware of the following potential challenges:
Case-sensitive step definitions: Step definitions are case-sensitive. If the case doesn't match exactly between your feature files and step definitions, you'll encounter "undefined step" errors (see the sketch below).
Trailing colons in steps: Steps with trailing colons (:) are no longer automatically cleaned by Behave and may not be detected properly.
Relative imports: Using relative paths in imports may cause issues. Consider updating to absolute import paths for better compatibility.
For complete details on Behave breaking changes, refer to the official Behave release notes and changelog.
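As a quick illustration of the case-sensitivity point above, here is a minimal, hypothetical step definition; with Behave 1.3.0+ the decorator text must match the feature file step exactly, including case:
from behave import when

# Feature file step:  When I open the Settings page
@when('I open the Settings page')  # matches only with identical casing
def step_open_settings(context):
    context.current_page = 'settings'

# A definition written as @when('i open the settings page') would no longer
# match, and the step would be reported as undefined.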
Execute BehaveX from the command line in the same way as Behave, using the behavex command. Here are some examples:
Run scenarios tagged as TAG_1 but not TAG_2:
# v1 syntax (all Behave versions)
behavex -t=@TAG_1 -t=~@TAG_2
# v2 syntax (Cucumber Style, supported in Behave 1.3.0+)
behavex -t="@TAG_1 and not @TAG_2"
Run scenarios tagged as TAG_1 or TAG_2:
# v1 syntax (all Behave versions)
behavex -t=@TAG_1,@TAG_2
# v2 syntax (Cucumber Style, supported in Behave 1.3.0+)
behavex -t="@TAG_1 or @TAG_2"
Run scenarios tagged as TAG_1 using 4 parallel processes:
behavex -t=@TAG_1 --parallel-processes=4 --parallel-scheme=scenario
Run scenarios located at specific folders using 2 parallel processes:
behavex features/features_folder_1 features/features_folder_2 --parallel-processes=2
Run scenarios from a specific feature file using 2 parallel processes:
behavex features_folder_1/sample_feature.feature --parallel-processes=2
Run scenarios tagged as TAG_1 from a specific feature file using 2 parallel processes:
behavex features_folder_1/sample_feature.feature -t=@TAG_1 --parallel-processes=2
Run scenarios located at specific folders using 2 parallel processes:
behavex features/feature_1 features/feature_2 --parallel-processes=2
Run scenarios tagged as TAG_1, using 5 parallel processes, executing a feature on each process:
behavex -t=@TAG_1 --parallel-processes=5 --parallel-scheme=feature
Perform a dry run of the scenarios tagged as TAG_1, and generate the HTML report:
behavex -t=@TAG_1 --dry-run
Run scenarios tagged as TAG_1, generating the execution evidence into a specific folder:
behavex -t=@TAG_1 -o=execution_evidence
Run scenarios with execution ordering enabled (requires parallel execution):
behavex -t=@TAG_1 --order-tests --parallel-processes=2
Run scenarios with strict execution ordering (tests wait for lower order tests to complete):
behavex -t=@TAG_1 --order-tests-strict --parallel-processes=2
Run complex tag expressions (Cucumber Style, supported in Behave 1.3.0+):
# Advanced filtering with wildcards
behavex -t="@smoke* and not @*_slow" --parallel-processes=3
# Production-ready filtering
behavex -t="(@api or @ui) and @high_priority and not @flaky" --parallel-processes=4
Run scenarios with custom order tag prefix and parallel execution:
behavex --order-tests --order-tag-prefix=PRIORITY --parallel-processes=3
BehaveX supports two types of tag expressions for filtering test scenarios:
Tag Expressions v1 use a simple syntax compatible with all Behave versions:
Basic Examples:
# Run scenarios with a specific tag
behavex -t=@smoke
# Exclude scenarios with a tag
behavex -t=~@slow
# Multiple conditions (AND logic)
behavex -t=@smoke -t=~@slow
# Multiple tags (OR logic)
behavex -t=@smoke,@regression
Advanced v1 Examples:
# Run smoke tests but exclude slow ones
behavex -t=@smoke -t=~@slow
# Run regression or integration tests
behavex -t=@regression,@integration
# Run critical tests but exclude known issues
behavex -t=@critical -t=~@known_issue
# Complex filtering with multiple exclusions
behavex -t=@api -t=~@slow -t=~@flaky
Note: Tag Expressions v2 (Cucumber Style) require Behave 1.3.0 or newer. BehaveX will automatically detect v2 syntax and use Behave's native parser.
Tag Expressions v2 support advanced boolean logic with a more intuitive syntax:
Boolean Operators:
# AND logic
behavex -t="@smoke and @api"
# OR logic
behavex -t="@smoke or @regression"
# NOT logic
behavex -t="not @slow"
# Complex combinations
behavex -t="@smoke and not @slow"
Parentheses Grouping:
# Group conditions with parentheses
behavex -t="(@smoke or @regression) and not @slow"
# Complex nested grouping
behavex -t="(@smoke and @api) or (@regression and @ui)"
# Deep nesting
behavex -t="(((@smoke or @regression) and @api) or @critical) and not @slow"
Wildcard Matching (Cucumber Style Feature, supported in Behave 1.3.0+):
# Prefix matching
behavex -t="@smoke*" # Matches @smoke, @smoke_test, @smoke_api
# Suffix matching
behavex -t="@*_test" # Matches @api_test, @ui_test, @smoke_test
# Substring matching
behavex -t="@*smoke*" # Matches @smoke, @smoke_test, @test_smoke
# Complex wildcard combinations
behavex -t="@smoke* and not @*_slow" # Smoke tests excluding slow ones
behavex -t="@*_api or @*_ui" # All API or UI tests
Advanced v2 Examples:
# Production-ready test filtering
behavex -t="(@smoke or @regression) and not (@slow or @flaky)"
# Environment-specific testing
behavex -t="@api and (@staging or @production) and not @experimental"
# Feature-based filtering with wildcards
behavex -t="@user* and (@*_positive or @*_critical) and not @*_slow"
# Complex business logic filtering
behavex -t="((@smoke and @high_priority) or @critical) and not (@known_issue or @skip)"
# Multi-level wildcard filtering
behavex -t="(@auth* or @payment*) and (@*_test or @*_check) and not @*_manual"
Multiple Tag Arguments (Combined with AND logic):
# Multiple -t arguments are combined with AND
behavex -t="@smoke or @regression" -t="not @slow"
# Equivalent to: (@smoke or @regression) and (not @slow)
# Complex multi-argument filtering
behavex -t="@api* and @*_test" -t="not @experimental" -t="@high_priority or @critical"
Feature | Behave 1.2.6 | Behave 1.3.0+ |
---|---|---|
Tag Expressions v1 | ✅ Full Support | ✅ Full Support |
Tag Expressions v2 | ❌ Not Supported | ✅ Full Support |
Boolean operators (and, or, not) | ❌ | ✅ |
Parentheses grouping | ❌ | ✅ |
Wildcard matching | ❌ | ✅ |
When upgrading to Behave 1.3.0+, you can migrate your tag expressions:
# v1 Format                               # v2 Equivalent
behavex -t=@smoke -t=~@slow            →  behavex -t="@smoke and not @slow"
behavex -t=@smoke,@regression          →  behavex -t="@smoke or @regression"
behavex -t=@api -t=~@slow -t=~@flaky   →  behavex -t="@api and not @slow and not @flaky"
Tip: After migrating, use --dry-run to verify that your tag expressions still select the expected scenarios.
BehaveX manages concurrent executions of Behave instances in multiple processes. You can perform parallel test executions by feature or scenario. When the parallel scheme is by scenario, the examples of a scenario outline are also executed in parallel.
Keep in mind that the environment.py module will run in each parallel process. This includes the before_all and after_all hooks, which will execute in every parallel process. The same is true for the before_feature and after_feature hooks when parallel execution is organized by scenario.
Important: Some arguments do not apply when executing tests with more than one parallel process, such as stop and color.
behavex --parallel-processes=3
behavex -t=@TAG --parallel-processes=3
behavex -t=@TAG --parallel-processes=2 --parallel-scheme=scenario
behavex -t=@TAG --parallel-processes=5 --parallel-scheme=feature
behavex -t=@TAG --parallel-processes=5 --parallel-scheme=feature --show-progress-bar
For advanced tag filtering examples, see the Tag Expressions section.
BehaveX populates the Behave context with the worker_id user-specific data. This variable contains the id of the current Behave process.
For example, if BehaveX is started with --parallel-processes 2, the first instance of Behave will receive worker_id=0, and the second instance will receive worker_id=1.
This variable can be accessed within the python tests using context.config.userdata['worker_id'].
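For example, a minimal environment.py sketch could use worker_id to give each parallel process its own resources (the port and database naming scheme below is an illustrative assumption, not something BehaveX prescribes):
# environment.py (minimal sketch)
def before_all(context):
    # Runs once per parallel process when executing through BehaveX
    worker_id = int(context.config.userdata.get('worker_id', 0))
    # Give each worker its own port and database name to avoid collisions
    context.app_port = 8000 + worker_id
    context.db_name = f"test_db_worker_{worker_id}"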
BehaveX provides the ability to control the execution order of your test scenarios and features using special order tags when running tests in parallel. This feature ensures that tests run in a predictable sequence during parallel execution, which is particularly useful for setup/teardown scenarios, or when you need specific tests to run before others.
Test execution ordering is valuable for setup and teardown scenarios, dependent test flows, and any case where specific tests must run before others.
BehaveX provides three arguments to control test execution ordering during parallel execution: --order-tests, --order-tests-strict, and --order-tag-prefix.
To control execution order, add tags to your scenarios using the following format:
@ORDER_001
Scenario: This scenario will run first
Given I perform initial setup
When I execute the first test
Then the setup should be complete
@ORDER_010
Scenario: This scenario will run second
Given the initial setup is complete
When I execute the dependent test
Then the test should pass
@ORDER_100
Scenario: This scenario will run last
Given all previous tests have completed
When I perform cleanup
Then all resources should be cleaned up
Important Notes:
Test ordering only applies to parallel executions (--parallel-processes > 1).
Order tags are sorted numerically (e.g., @ORDER_001 runs before @ORDER_010).
Scenarios without order tags are assigned a default order value (@ORDER_10000), so they run after the ordered ones.
Regular ordering (--order-tests): When the number of parallel processes equals or exceeds the number of ordered scenarios, ordering has no practical effect since all scenarios can run simultaneously.
Strict ordering (--order-tests-strict): Tests will always wait for lower-order tests to complete, regardless of available processes, which may reduce overall execution performance.
Use zero-padded numbers (e.g., 001, 010, 100) for better sorting visualization.
--order-tests-strict automatically enables --order-tests, so you don't need to specify both arguments.
# Enable test ordering with default ORDER prefix (requires parallel execution)
behavex --order-tests --parallel-processes=2 -t=@SMOKE
# Enable test ordering with custom prefix
behavex --order-tests --order-tag-prefix=PRIORITY --parallel-processes=3 -t=@REGRESSION
# Enable strict test ordering - tests wait for lower order tests to complete
behavex --order-tests-strict --parallel-processes=3 -t=@INTEGRATION
# Strict ordering with custom prefix
behavex --order-tests-strict --order-tag-prefix=SEQUENCE --parallel-processes=2
# Note: --order-tests-strict automatically enables --order-tests, so you don't need both
# Order tests and run with parallel processes by scenario
behavex --order-tests --parallel-processes=4 --parallel-scheme=scenario
# Order tests and run with parallel processes by feature
behavex --order-tests --parallel-processes=3 --parallel-scheme=feature
# Custom order prefix with parallel execution
behavex --order-tests --order-tag-prefix=SEQUENCE --parallel-processes=2
# Strict ordering by scenario (tests wait for completion of lower order tests)
behavex --order-tests-strict --parallel-processes=4 --parallel-scheme=scenario
# Strict ordering by feature
behavex --order-tests-strict --parallel-processes=3 --parallel-scheme=feature
Regular Ordering (--order-tests): Tests are launched following the order tags, but they do not wait for each other. For example, with tests tagged @ORDER_001, @ORDER_002, and @ORDER_003 and enough parallel processes, all three tests start at the same time.
Strict Ordering (--order-tests-strict): Tests wait for all lower-order tests to complete before starting. For example, @ORDER_002 tests won't start until all @ORDER_001 tests are finished.
Performance Comparison:
# Scenario: 6 tests with ORDER_001, ORDER_002, ORDER_003 tags and 3 parallel processes
# Regular ordering (--order-tests):
# Time 0: ORDER_001, ORDER_002, ORDER_003 all start simultaneously
# Total time: ~1 minute (all tests run in parallel)
# Strict ordering (--order-tests-strict):
# Time 0: Only ORDER_001 tests start
# Time 1: ORDER_001 finishes → ORDER_002 tests start
# Time 2: ORDER_002 finishes → ORDER_003 tests start
# Total time: ~3 minutes (sequential execution)
You can customize the order tag prefix to match your team's naming conventions:
# Using PRIORITY prefix
behavex --order-tests --order-tag-prefix=PRIORITY
# Now use tags like @PRIORITY_001, @PRIORITY_010, etc.
@PRIORITY_001
Scenario: High priority scenario
Given I need to run this first
@PRIORITY_050
Scenario: Medium priority scenario
Given this can run after high priority
@PRIORITY_100
Scenario: Low priority scenario
Given this runs last
When using --parallel-scheme=feature, the ordering is determined by ORDER tags placed directly on the feature itself:
@ORDER_001
Feature: Database Setup Feature
Scenario: Create database schema
Given I create the database schema
Scenario: Insert initial data
Given I insert the initial data
# This entire feature will be ordered as ORDER_001 (tag on the feature)
@ORDER_002
Feature: Application Tests Feature
Scenario: Test user login
Given I test user login
# This entire feature will be ordered as ORDER_002 (tag on the feature)
Feature: Unordered Feature
Scenario: Some test
Given I perform some test
# This feature has no ORDER tag, so it gets the default order 9999
Contains information about test scenarios and execution status. This is the base report generated by BehaveX, which is used to generate the HTML report. Available at:
<output_folder>/report.json
A friendly test execution report containing information related to test scenarios, execution status, evidence, and metrics. Available at:
<output_folder>/report.html
One JUnit file per feature, available at:
<output_folder>/behave/*.xml
The default Behave JUnit reports have been replaced by the ones generated by the test wrapper, mainly to support muting test scenarios on build servers.
You can attach images or screenshots to the HTML report using your own mechanism to capture screenshots or retrieve images. Utilize the attach_image_file or attach_image_binary methods provided by the wrapper.
These methods can be called from hooks in the environment.py file or directly from step definitions.
from behave import given
from behavex_images import image_attachments

@given('I take a screenshot from the current page')
def step_impl(context):
    image_attachments.attach_image_file(context, 'path/to/image.png')
from behavex_images import image_attachments
from behavex_images.image_attachments import AttachmentsCondition

def before_all(context):
    # Attach captured images to the HTML report only when the scenario fails
    image_attachments.set_attachments_condition(context, AttachmentsCondition.ONLY_ON_FAILURE)

def after_step(context, step):
    # 'selenium_driver' is a placeholder for your own WebDriver instance
    image_attachments.attach_image_binary(context, selenium_driver.get_screenshot_as_png())
By default, images are attached to the HTML report only when the test fails. You can modify this behavior by setting the condition using the set_attachments_condition method.
For more information, check the behavex-images library, which is included with BehaveX 3.3.0 and above.
If you are using BehaveX < 3.3.0, you can still attach images to the HTML report by installing the behavex-images package with the following command:
pip install behavex-images
Providing ample evidence in test execution reports is crucial for identifying the root cause of issues. Any evidence file generated during a test scenario can be stored in a folder path provided by the wrapper for each scenario.
The evidence folder path is automatically generated and stored in the "context.evidence_path" context variable. This variable is updated by the wrapper before executing each scenario, and all files copied into that path will be accessible from the HTML report linked to the executed scenario.
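For instance, a step definition could copy a generated file into that folder so it shows up in the report; the file name and step text below are hypothetical:
import shutil

from behave import then

@then('the API response is archived as evidence')
def step_archive_response(context):
    # context.evidence_path is set by BehaveX before each scenario runs
    shutil.copy('output/api_response.json', context.evidence_path)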
The HTML report includes detailed test execution logs for each scenario. These logs are generated using the logging library and are linked to the specific test scenario. This feature allows for easy debugging and analysis of test failures.
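As a minimal sketch (the step text is hypothetical), messages emitted with the standard logging module inside step definitions end up in the per-scenario log shown in the HTML report:
import logging

from behave import when

@when('I transfer {amount:d} credits')
def step_transfer_credits(context, amount):
    logging.info("Starting transfer of %s credits", amount)
    # ... perform the transfer using your application code ...
    logging.info("Transfer completed successfully")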
The HTML report provides a range of metrics to help you understand the performance and effectiveness of your test suite.
BehaveX enhances the traditional Behave dry run feature to provide more value. The HTML report generated during a dry run can be shared with stakeholders to discuss scenario specifications and test plans.
To execute a dry run, use the --dry-run argument:
behavex --dry-run
behavex -t=@TAG --dry-run
For advanced tag filtering in dry runs, see the Tag Expressions section.
In some cases, you may want to mute test scenarios that are failing but are not critical to the build process. This can be achieved by adding the @MUTE tag to the scenario. Muted scenarios will still be executed, but their failures will not be reported in the JUnit reports. However, the execution details will be visible in the HTML report.
For scenarios that are prone to intermittent failures or are affected by infrastructure issues, you can use the @AUTORETRY tag. This tag enables automatic re-execution of the scenario in case of failure.
You can also specify the number of retries by adding the total retries as a suffix in the @AUTORETRY tag. For example, @AUTORETRY_3 will retry the scenario 3 times if the scenario fails.
The re-execution is performed immediately after a failing execution arises, and only the latest execution is reported.
After executing tests, if there are failing scenarios, a failing_scenarios.txt file will be generated in the output folder. This file allows you to rerun all failed scenarios using the following command:
behavex -rf=./<OUTPUT_FOLDER>/failing_scenarios.txt
or
behavex --rerun-failures=./<OUTPUT_FOLDER>/failing_scenarios.txt
To avoid overwriting the previous test report, it is recommended to specify a different output folder using the -o or --output-folder argument.
Note that the -o or --output-folder argument does not work with parallel test executions.
When running tests in parallel, you can display a progress bar in the console to monitor the test execution progress. To enable the progress bar, use the --show-progress-bar argument:
behavex --parallel-processes=3 --show-progress-bar
behavex -t=@TAG --parallel-processes=3 --show-progress-bar
For advanced tag filtering with progress bar, see the Tag Expressions section.
If you are printing logs in the console, you can configure the progress bar to display updates on a new line by adding the following setting to the BehaveX configuration file:
[progress_bar]
print_updates_in_new_lines="true"
BehaveX provides integration with Allure, a flexible, lightweight multi-language test reporting tool. The Allure formatter creates detailed and visually appealing reports that include comprehensive test information, evidence, and categorization of test results.
Note: Since BehaveX is designed to run tests in parallel, the Allure formatter processes the consolidated report.json file after all parallel test executions are completed. This ensures that all test results from different parallel processes are properly aggregated before generating the final Allure report.
To generate Allure reports, use the --formatter argument to specify the Allure formatter:
behavex --formatter=behavex.outputs.formatters.allure_behavex_formatter:AllureBehaveXFormatter
behavex -t=@TAG --formatter=behavex.outputs.formatters.allure_behavex_formatter:AllureBehaveXFormatter
By default, the Allure results will be generated in the output/allure-results directory. You can specify a different output directory using the --formatter-outdir argument:
behavex -t=@TAG --formatter=behavex.outputs.formatters.allure_behavex_formatter:AllureBehaveXFormatter --formatter-outdir=my-allure-results
For advanced tag filtering with Allure reports, see the Tag Expressions section.
When using Allure reports, you should continue to use the same methods for attaching screenshots and evidence as described in the sections above:
For screenshots: Use the methods described in the Attaching Images to the HTML Report section. The attach_image_file() and attach_image_binary() methods from the behavex_images library will automatically work with Allure reports.
For additional evidence: Use the approach described in the Attaching Additional Execution Evidence to the HTML Report section. Files stored in the context.evidence_path will be automatically included in the Allure reports.
The evidence and screenshots attached using these methods will be seamlessly integrated into your Allure reports, providing comprehensive test execution documentation.
After running the tests, you can generate and view the Allure report using the following commands:
# Serve the report (opens in a browser)
allure serve output/allure-results
# Or... generate a single HTML file report
allure generate output/allure-results --output output/allure-report --clean --single-file
# Or... generate a static report
allure generate output/allure-results --output output/allure-report --clean
By default, scenario.log files are attached to each scenario in the Allure report. You can disable this by passing the --no-formatter-attach-logs argument:
behavex --formatter behavex.outputs.formatters.allure_behavex_formatter:AllureBehaveXFormatter --no-formatter-attach-logs
BehaveX includes additional utility scripts in the scripts/ folder to help with common tasks.
Generate HTML reports from existing report.json files without re-running tests:
# Generate HTML in the same directory as the JSON file
python scripts/generate_html_from_json.py output/report.json
# Generate HTML in a specific directory
python scripts/generate_html_from_json.py output/report.json my_reports/
# Works with any BehaveX JSON report
python scripts/generate_html_from_json.py /path/to/old_execution/report.json
This utility is helpful when you want to regenerate or share an HTML report from a previous test execution without re-running the tests.
If you find this project helpful or interesting, we would appreciate it if you could give it a star (:star:). It's a simple way to show your support and let us know that you find value in our work.
By starring this repository, you help us gain visibility among other developers and contributors. It also serves as motivation for us to continue improving and maintaining this project.
Thank you in advance for your support! We truly appreciate it.