The problem: I have a single test function that I want to run many times on different data. I don't want to just get a single pass/fail result — I want to see whether each item of data makes the test pass or fail individually. But I also don't want to write a whole bunch of nearly identical tests, with only the data changing each time.
The solution: "parametrization" in the Pytest tool. Parametrization means defining a set of parameter values that Pytest feeds, one at a time, into your test function's arguments.
Define an iterable (here called the_iterable) containing the items you want passed, one by one, to your test function. Pass both it and a string naming the corresponding argument (here called argument_name) to a decorator applied to the test function (here called test_name).
# test_file.py
import pytest

the_iterable = ['sequence', 'of', 'items', 'etc.']  # Etc.

@pytest.mark.parametrize('argument_name', the_iterable)
def test_name(argument_name):
    # Code here makes use of each item of `the_iterable` in turn, passed
    # in as `argument_name`.
    pass
The names the_iterable, argument_name, and test_name are dummies that you can change to whatever you want, although the name of the function has to start with test_ in order for Pytest to discover it.
You don't need to iterate manually through the_iterable or pull out specific indices of it, as the_iterable[0], etc. Pytest handles that: it iterates through the_iterable, assigns each of its elements in turn to the argument argument_name, calls your test function test_name once for each element, and reports the results to you one by one.
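For instance, here is a minimal, concrete version of the pattern (the names and data are just illustrative): each word becomes its own test case, reported separately as a pass or a fail.

# test_lowercase.py
import pytest

words = ['sequence', 'of', 'items']

@pytest.mark.parametrize('word', words)
def test_word_is_lowercase(word):
    # Runs once per element of `words`; each run receives one word.
    assert word == word.lower()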
Run from the command line as
py.test test_file.py
Notes:
- test_file.py can contain as many test functions as you like, and each can be decorated with pytest.mark.parametrize if needed.
- Pytest normally runs all tests in sequence even if some fail. If you want Pytest to stop on the first failing test, use the option -x.
- To see which arguments are passed into each test, use the option -v. Unless the arguments are strings, numbers, or booleans, they will probably appear as automatically generated id-names; to give each data item an identifying string of your own, use the keyword argument ids:
@pytest.mark.parametrize(
    'argument_name',
    the_iterable,
    ids=['list', 'of', 'id-names', 'etc.'])  # One id per item of `the_iterable`.
def test_name(argument_name):
    pass
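With ids supplied, a verbose run (-v) labels each generated test with the corresponding id instead of an auto-generated one. Roughly like this (illustrative, abridged output; the exact format depends on your Pytest version):

py.test -v test_file.py
test_file.py::test_name[list] PASSED
test_file.py::test_name[of] PASSED
test_file.py::test_name[id-names] PASSED
test_file.py::test_name[etc.] PASSED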
The string argument_name can instead name multiple arguments, comma-separated, if your test function takes multiple arguments (and if the_iterable contains tuples of matching length). For instance:
the_iterable = [('sequence', 56), ('of', 3), ('items', 0)]  # Etc.

@pytest.mark.parametrize('arg1, arg2', the_iterable)
def test_name(arg1, arg2):
    # Code here makes use of each tuple `(arg1, arg2)` in turn.
    pass
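Here is a minimal, concrete sketch of the two-argument form (names and data are illustrative): each tuple supplies a word and its expected length.

import pytest

cases = [('sequence', 8), ('of', 2), ('items', 5)]

@pytest.mark.parametrize('word, expected_length', cases)
def test_word_length(word, expected_length):
    # One test run per tuple; the tuple is unpacked into the two arguments.
    assert len(word) == expected_length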
I have used this pattern to pass in 2-tuples containing an external function to be called and a boolean describing whether that function is expected to return True or False. (A more official way of marking specific test cases as "expected to fail" is with the pytest.mark.xfail mark, attached to individual items within the_iterable; see the sketch just below.)
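In current Pytest versions, the usual way to do that is to wrap the relevant item in pytest.param and attach the mark with the marks keyword (a sketch; the data is illustrative):

import pytest

the_iterable = [
    ('sequence', 56),
    ('of', 3),
    pytest.param('items', 0, marks=pytest.mark.xfail),  # expected to fail
]

@pytest.mark.parametrize('arg1, arg2', the_iterable)
def test_name(arg1, arg2):
    pass

The marked case is then reported as xfail (or xpass) rather than as an ordinary failure or pass.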
An alternative way to supply the data is a parametrized fixture: the pytest.fixture decorator with the params keyword. Pytest calls the fixture function once per value in params, handing it the current value as request.param, and any test that names the fixture as an argument runs once per value:

@pytest.fixture(params=['sequence', 'of', 'items', 'etc.'])  # Etc.
def argument_name(request):
    # Each element of `params` arrives, in turn, as `request.param`.
    return request.param

def test_name(argument_name):
    # Code here makes use of each item of `params` in turn, passed
    # in as `argument_name`.
    pass
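A common arrangement (this layout is my assumption, not part of the original example) is to put the parametrized fixture in conftest.py, so that any test file in the directory can use it simply by naming the argument:

# conftest.py
import pytest

@pytest.fixture(params=['sequence', 'of', 'items', 'etc.'])
def argument_name(request):
    return request.param

# test_file.py
def test_name(argument_name):
    # Runs once per value in `params`.
    assert isinstance(argument_name, str)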
Note that the first decorator example above is shorthand for the following:
def pytest_generate_tests(metafunc):
    if 'argument_name' in metafunc.fixturenames:
        metafunc.parametrize("argument_name", the_iterable)

def test_name(argument_name):
    # Code here makes use of each item of `the_iterable` in turn, passed
    # in as `argument_name`.
    pass
The official Pytest documentation has more details on parametrization, along with further examples.