Test strategy for non-functional test cases in continuous integration


In large-system development, the non-functional requirements are frequently the most important, and implementing them takes the majority of the development time. Non-functional tests are expensive and often take a long time to run, so they frequently cannot be run in the normal continuous-integration cycle - a stability test, for example, might take two weeks.

Can anyone suggest a good test strategy for manual execution of non-functional tests in a continuous integration process where an automated build is created every 2 hours?


There are 2 answers below.


Some lengthy tests can (and if so should) be split into several shorter tests that can be executed in parallel.

In some cases it may be worth spending money to increase the number of testbeds, and thus the overall test bandwidth/capacity. That allows multiple test runs to overlap each other, reducing or even eliminating the impact of the long test duration, so you can still use the long tests in (some) CI systems. No one says that pipelines which start every 2 hours must also complete within 2 hours: they can continue and overlap (staggered) as long as the resource capacity allows it, and any decent CI system should support such overlapping.
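A minimal sketch of that overlapping idea, assuming a fixed pool of testbeds and placeholder functions (`run_long_suite`, `pipeline`, `TESTBED_COUNT` are made up for illustration, not part of any real CI system):

```python
# Sketch only: pipelines started every "2 hours" overlap on a shared pool of
# testbeds; a semaphore keeps concurrency within the lab's capacity.
import threading
import time

TESTBED_COUNT = 3                      # assumed number of available testbeds
testbeds = threading.Semaphore(TESTBED_COUNT)

def run_long_suite(build_id: str) -> None:
    """Placeholder long-running non-functional suite."""
    print(f"[{build_id}] long suite started")
    time.sleep(5)                      # stands in for hours of real testing
    print(f"[{build_id}] long suite finished")

def pipeline(build_id: str) -> None:
    # ... fast build + functional tests would run here ...
    with testbeds:                     # wait for a free testbed, so overlapping
        run_long_suite(build_id)       # pipelines never exceed capacity

# Simulate builds triggered every "2 hours" (compressed to 1 second here).
for i in range(6):
    threading.Thread(target=pipeline, args=(f"build-{i}",)).start()
    time.sleep(1)
```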

Alternatively, the CI system could be configured to selectively run longer tasks depending on capacity: do the typical work in every pipeline (2 hours apart), but run a test with a capacity of one execution per day only once every 12 pipelines, or whenever resources for the long test are available. Preferably pick a pipeline that has already passed the shorter verifications, which gives a higher chance of passing the longer test and more meaningful results. This could even be done "manually", by firing the long tests with artifacts from a subset of the CI executions.
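One hedged way to picture this "selective" gating is a small script a CI job could call to decide whether the current pipeline should also fire the long test. `BUILD_NUMBER`, `testbed_is_free()` and `shorter_checks_passed()` are hypothetical stand-ins for whatever your CI server and lab actually provide:

```python
# Sketch: run the long (e.g. stability) test only on every Nth pipeline, and
# only when the build already looks healthy and a testbed is free.
import os

LONG_TEST_EVERY_N_PIPELINES = 12       # 12 builds * 2 h = roughly once a day

def testbed_is_free() -> bool:
    # Replace with a real query against your lab / resource manager.
    return True

def shorter_checks_passed() -> bool:
    # Replace with a check of the current pipeline's earlier stages.
    return True

def should_run_long_test(build_number: int) -> bool:
    scheduled = build_number % LONG_TEST_EVERY_N_PIPELINES == 0
    return scheduled and shorter_checks_passed() and testbed_is_free()

if __name__ == "__main__":
    build_number = int(os.environ.get("BUILD_NUMBER", "0"))
    if should_run_long_test(build_number):
        print("trigger long non-functional test for this build")
    else:
        print("skip long non-functional test for this build")
```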

In some cases the long duration is a side effect of limitations of the testing infrastructure or of the test code itself, for example an inability to execute tasks in parallel even where that would not fundamentally affect the test. In such cases, switching to a more appropriate infrastructure or, respectively, re-writing the tests to allow/improve parallelism can shorten the test duration, sometimes significantly.
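As a rough illustration of that re-writing, here is a sketch of turning a sequential run of independent test cases into a parallel one. The case names and `run_case()` body are placeholders; a real suite would more likely use its framework's parallel runner:

```python
# Independent non-functional checks run side by side instead of one after another.
import time
from concurrent.futures import ThreadPoolExecutor

def run_case(name: str) -> tuple[str, bool]:
    """Placeholder for one independent test case."""
    time.sleep(2)                      # stands in for a long-running check
    return name, True

cases = ["throughput", "memory_leak", "failover", "startup_time"]

# Sequential: ~8 s with the sleeps above. Parallel: ~2 s, assuming the cases
# really are independent and the environment can host them side by side.
with ThreadPoolExecutor(max_workers=len(cases)) as pool:
    for name, passed in pool.map(run_case, cases):
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
```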


First of all, congratulations on understanding the importance of non-functional requirements - this is still uncommon knowledge!

You've mentioned running tests for 2 weeks - that seems far too long to me. Continuous integration is about an immediate feedback loop. If a test takes that long, you may be notified of a serious problem only 2 weeks after it was introduced. I'd think twice about whether it really has to be that way.

Manual execution of non-functional testing in continuous integration should be avoided as much as possible. Tests should run automatically straight after deployment. If for some reason certain tests can't run in this fashion (e.g. because they take too long to execute), they should be triggered periodically - automatically, of course.
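A minimal sketch of such a periodic, automatic trigger, assuming hypothetical hooks `latest_green_build()` and `start_nft_run()` into your CI system (in practice a cron entry or the CI server's own scheduled pipelines would play this role):

```python
# Once a day, pick the newest build that passed the functional stage and start
# the long non-functional run against it.
import time

RUN_INTERVAL_SECONDS = 24 * 60 * 60    # once a day

def latest_green_build() -> str:
    # Replace with a query to your CI server for the newest functionally-green build.
    return "build-42"

def start_nft_run(build_id: str) -> None:
    # Replace with whatever deploys the build and kicks off the NFT suite.
    print(f"starting long non-functional run against {build_id}")

while True:
    start_nft_run(latest_green_build())
    time.sleep(RUN_INTERVAL_SECONDS)
```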

There are a couple of options to speed up NFT execution time:

  1. Scale down the tests - e.g. instead of 1000 threads with ramp-up = x, run 100 threads with ramp-up = x/10. If you scale all the necessary parameters proportionally, you may get accurate feedback much earlier (see the sketch after this list).

  2. Parallelise NFT execution across a number of test environments once the functional tests have passed. If you use a platform like Amazon, this should be perfectly possible, and if you pay only for the time the machines are up, it doesn't have to raise costs significantly - the overall test execution time may be similar.
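For option 1, a small sketch of scaling every parameter by the same factor so the short run stays proportional to the full one. The parameter names mirror typical JMeter-style settings, but the exact values here are made up for illustration:

```python
# Divide every time/volume parameter by the same factor to get a proportional,
# much shorter "smoke" version of the full load profile.
def scale_profile(threads: int, ramp_up_s: int, duration_s: int, factor: int) -> dict:
    return {
        "threads": threads // factor,
        "ramp_up_s": ramp_up_s // factor,
        "duration_s": duration_s // factor,
    }

full_run = {"threads": 1000, "ramp_up_s": 600, "duration_s": 7200}
smoke_run = scale_profile(**full_run, factor=10)

print(smoke_run)   # {'threads': 100, 'ramp_up_s': 60, 'duration_s': 720}
```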