adotestplan-to-pytestbdd

Utility for translating Azure DevOps Test Plans to Gherkin feature files and pytest-bdd runners

Summary

This package provides a utility that translates test plans, suites, and cases in Azure DevOps (ADO) into validated Gherkin feature files, and then uses pytest-bdd generate to create the runners for those tests.

After that, it can validate that the test directory contains all of the pytest-bdd given/when/then step fixtures needed to run the generated tests under pytest.

It leverages ADO's notion of Shared Steps to reduce duplication when authoring features and scenarios. This lets given/when/then clauses be written once and used many times.

It can also leverage parameters on the test case (both "shared" and "non-shared") as Examples, creating a Scenario Outline instead of a standard Scenario.
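
For illustration, a parameterized test case might translate into a Scenario Outline along the following lines (a hypothetical sketch; the feature name, steps, and parameter values are invented, not output from a real test plan):

Feature: Sign in

  Scenario Outline: Sign in with different roles
    Given a user with the <role> role
    When the user signs in
    Then the <role> dashboard is displayed

    Examples:
      | role   |
      | admin  |
      | viewer |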

Installation

This package is written in pure Python and can be installed using:

$ pip install adotestplan-to-pytestbdd

Usage

from adotestplan_to_pytestbdd import ADOTestPlan
url = 'https://dev.azure.com/[ORGANIZATION_HERE]'
pat = '[PAT_HERE]'
project = '[PROJECT_HERE]'
out_dir = 'output'
tp = ADOTestPlan(organization_url=url, pat=pat, project=project, out_dir=out_dir)

The above example hasn't yet "done anything", and is equivalent to the following:

from adotestplan_to_pytestbdd import ADOTestPlan
tp = ADOTestPlan()
tp.url = 'https://dev.azure.com/[ORGANIZATION_HERE]'
tp.pat = '[PAT_HERE]'
tp.project = '[PROJECT_HERE]'
tp.out_dir = 'output'

Put differently, until one of the built-in methods is invoked, properties can be set interchangeably via the initializer or via property access.

To populate the internal memory structures from ADO:

tp.populate()

Next, to write feature files to disk from the populated structures:

tp.write_feature_files()

At this point, the ADO test plan has been synchronized to feature files on disk. It's possible that this is a sufficient stopping point.

Beyond this begins the pytest-bdd integration.

First, use this method:

tp.write_pytestbdd_runners()

to create test_xyz.py files on disk corresponding to the feature files generated above. This is a wrapper around pytest-bdd generate (see ado_test_plan.py).
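
Under the hood, this corresponds roughly to invoking pytest-bdd's own generator directly, e.g. (file names illustrative):

$ pytest-bdd generate output/sign_in.feature > tests/test_sign_in.py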

One reason this is useful is that it avoids checking in boilerplate/generated code. The test methods created here are essentially stubs; the majority of each test occurs in the given/when/then fixtures. With this approach, the test_xyz.py files can be just as ephemeral as the .feature files they are generated from; the one piece that is persistent and checked in is the fixtures, where the actual test implementation lives.
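
As a sketch of what that means in practice, a generated runner might look roughly like the following (hypothetical; the exact output of pytest-bdd generate varies by version, and the file and scenario names continue the invented example above):

"""Sign in feature tests."""

from pytest_bdd import scenario


@scenario('sign_in.feature', 'Sign in with different roles')
def test_sign_in_with_different_roles():
    """Stub only; the real behavior lives in the given/when/then fixtures."""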

At this point, call:

tp.validate_pytestbdd_runners_against_feature_files()

This final call uses pytest utilities to collect all fixtures in the specified test directory and compares them against the fixtures needed, as determined during the populate() phase. It prints informative messages and, if any fixtures are missing, raises an exception at the end.
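
The fixtures it looks for are ordinary pytest-bdd step definitions, kept in a persistent file such as conftest.py. A minimal sketch, continuing the invented example above (step text and names are hypothetical):

from pytest_bdd import given, parsers, then, when


@given(parsers.parse('a user with the {role} role'), target_fixture='user')
def user_with_role(role):
    # Hypothetical setup: create or look up a user with the given role.
    return {'role': role}


@when('the user signs in')
def sign_in(user):
    # Hypothetical action: sign the user in.
    user['signed_in'] = True


@then(parsers.parse('the {role} dashboard is displayed'))
def dashboard_displayed(user, role):
    # Hypothetical assertion: the signed-in user sees their role's dashboard.
    assert user['signed_in'] and user['role'] == role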

Testing

Please see TESTING.md for notes on running the tests associated with this package. Note that this refers to the unit tests validating the package itself, not the tests generated by running the package normally; those can be run after code generation via a normal call to pytest.

Possible Enhancements

  • Split the two basic pieces of functionality into separate packages: one for ADO-to-feature-file translation, the other for checking feature files against the fixture "pool".
  • Use pytest --fixtures to collect available fixtures instead of raw searching through files. This is likely a more robust way of making sure fixtures aren't being missed. It was looked into briefly, but appears to be MUCH less performant than raw searching (on the order of 7s compared to 30ms), so it was tabled for now.
  • Document how "tags" can be used to filter test cases, for anyone wanting to extend this utility for their own workflows.
  • A lot of the functionality in the tasks.py methods delete_test_work_items and generate_test_work_items may be relevant for "round-tripping" this utility, i.e. going from .feature files into ADO, which has some utility in and of itself, for instance when migrating from a plain-text-file approach to an ADO Test Plan backed approach.
