@pytest.mark.skip vs @pytest.mark.xfail in Pytest


I have test1() marked with @pytest.mark.skip and test2() marked with @pytest.mark.xfail, both asserting True, as shown below:

import pytest

@pytest.mark.skip
def test1():
    assert True

@pytest.mark.xfail
def test2():
    assert True

Then, when I run pytest, I get the output shown below:

$ pytest
=================== test session starts ===================
platform win32 -- Python 3.9.13, pytest-7.4.0, pluggy-1.2.0
django: settings: core.settings (from ini)
rootdir: C:\Users\kai\test-django-project2
configfile: pytest.ini
plugins: django-4.5.2
collected 2 items                                 

tests\test_store.py sX                               [100%]

============== 1 skipped, 1 xpassed in 0.10s ============== 

Next, I have test1() marked with @pytest.mark.skip and test2() marked with @pytest.mark.xfail, both asserting False, as shown below:

import pytest

@pytest.mark.skip
def test1():
    assert False

@pytest.mark.xfail
def test2():
    assert False

Then, when I run pytest, I get very similar output, as shown below:

$ pytest
=================== test session starts ===================
platform win32 -- Python 3.9.13, pytest-7.4.0, pluggy-1.2.0
django: settings: core.settings (from ini)
rootdir: C:\Users\kai\test-django-project2
configfile: pytest.ini
plugins: django-4.5.2
collected 2 items

tests\test_store.py sx                               [100%]

============== 1 skipped, 1 xfailed in 0.24s ==============

So, what is the difference between @pytest.mark.skip and @pytest.mark.xfail?

1 Answer
The marks do different things and have different purposes; the output just looks similar in your trivial case.

Tests with the xfail mark are executed and expected to fail, while tests with the skip mark are not executed at all.
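
You can observe this difference with a side effect. Here is a minimal sketch, using a hypothetical executed list as a probe: the skipped body never appends to it, while the xfail body runs and then fails:

import pytest

executed = []  # hypothetical probe to record which test bodies actually ran

@pytest.mark.skip
def test_skipped():
    executed.append("skip")  # never reached: pytest reports "s" without calling the body

@pytest.mark.xfail
def test_xfailed():
    executed.append("xfail")  # reached: the body runs, and the failure is reported as "x"
    assert False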

The purpose is different, too. Tests are usually skipped because some condition is not fulfilled yet, or not at the moment. More common is the skipif mark, which comes with an explicit condition and is therefore self-explanatory, but a plain skip mark can be used, for example, to mark tests that may pass in the future.
A common reason is a test that fails due to a bug that cannot be fixed easily; in that case it is better to skip the test with a descriptive reason (which can be shown during the test run) than to just comment it out. Sometimes tests are also written for features that are not yet implemented, with the same reasoning: to make it visible that something is still missing, and as a kind of specification.
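
For illustration, a minimal sketch of both variants (the bug reference and the version requirement are made up); the reasons appear in the test summary when pytest is run with -rs:

import sys

import pytest

@pytest.mark.skip(reason="fails due to a known parser bug, re-enable once fixed")  # hypothetical bug
def test_blocked_by_bug():
    ...

@pytest.mark.skipif(sys.version_info < (3, 10), reason="needs Python 3.10+")  # hypothetical requirement
def test_needs_newer_python():
    ...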

Tests that are expected to fail are probably used less frequently. Your simple case of:

@pytest.mark.xfail
def test():
    assert False

is equivalent, as far as the overall suite outcome is concerned (both count as non-failures), to

def test():
    assert True

so it does not really make sense. There are cases, however, where you want to show that a specific test fails (instead of just inverting the condition to make it pass). One example is regression tests that demonstrate a failure when some parameter has not been set, paired with a test that succeeds once it is set correctly. An example in my own code (in pyfakefs) is tests that show that pyfakefs does not correctly patch a module (i.e. does not behave as expected) if some additional argument is not used.
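
A minimal sketch of such a pair, using a hypothetical parse_int() that only validates its input when validate=True is passed:

import pytest

def parse_int(text, validate=False):
    # hypothetical function under test: it only rejects garbage when validate=True is passed
    if validate and not text.strip().lstrip("-").isdigit():
        raise ValueError(f"not an integer: {text!r}")
    return text

@pytest.mark.xfail(reason="without validate=True, garbage input is accepted")
def test_rejects_garbage_without_validate():
    # expected to fail: no ValueError is raised, which documents the limitation
    with pytest.raises(ValueError):
        parse_int("abc")

def test_rejects_garbage_with_validate():
    # passes: with the parameter set correctly, the bad input is rejected
    with pytest.raises(ValueError):
        parse_int("abc", validate=True)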

I have seen other cases where the xfail marker is set on tests that currently fail but should succeed; this is another take on the skip scenario mentioned above. Unlike a skipped test, an xfailed test still runs, so it becomes more visible when the behavior changes due to a fix: the test then shows up as unexpectedly passing (or, with strict=True, as an outright failure), making the change clear.
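
A minimal sketch, assuming a hypothetical round_half_up() whose bug has not been fixed yet:

import pytest

def round_half_up(x):
    # hypothetical buggy implementation: Python's round() uses banker's
    # rounding, so round(2.5) == 2 instead of the intended 3
    return round(x)

@pytest.mark.xfail(reason="known rounding bug", strict=True)
def test_round_half_up():
    # runs and currently xfails; once the bug is fixed the test would XPASS,
    # and strict=True turns that into a hard failure so the change cannot go unnoticed
    assert round_half_up(2.5) == 3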

There are other use cases for both markers, depending on the developer's preferences, but I hope you get the gist...