Performance Testing: Flag when a UI/API test's runtime increases by x% (Selenium C#)


Our team currently uses Selenium with C# and NUnit to run automated UI tests. All tests have been written by hand; no recorders have been used.

Issue: We now have a request that these tests track their own performance (including past performance) and raise a warning when their runtime increases by x% (5%, 10%, etc.).

Question: What would be the best way to accomplish this? Should we build a tool from scratch to analyze the performance history of these UI and API tests, or are there existing tools we can leverage?

Blogs and Stack Exchange questions discussing load/performance testing usually reference three main tools for C# (NeoLoad, SilkPerformer, LoadRunner Professional). However, I'm not sure that what I'm being asked to do is performance testing (load testing) in the purest sense, and therefore I'm not sure whether the tools mentioned above will help achieve the overall goal. Those discussions also usually treat performance/load testing as separate from UI/API testing.

Summary: I'm looking for advice on what direction to take and/or what to read up on for this type of testing.


2 Answers

Accepted Answer

IMHO there are too many things wrong, on too many levels, with this request.

these tests track their own performance (including past performance) and raise a warning when their runtime increases by x% (5%, 10%, etc.)

Performance metrics of a functional UI/API test prove what, exactly?

However, if this still has to be implemented, I would suggest a simple approach: use Stopwatch before/after each C# Selenium test and store the result in a central database, which you can later query and flag when a runtime increases.

using System.Diagnostics;
using NUnit.Framework;

private Stopwatch stopwatch;

[SetUp]
public void StartTimer() => stopwatch = Stopwatch.StartNew();

[TearDown]
public void StopTimer()
{
    stopwatch.Stop();
    // store ElapsedMilliseconds in the DB alongside the test case id (TestContext gives you the name)
    TestContext.WriteLine($"{TestContext.CurrentContext.Test.FullName}: {stopwatch.ElapsedMilliseconds} ms");
}
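To close the loop on the "query and flag" part, a minimal sketch of the comparison could look like the following. It assumes a hypothetical MetricsDb context with a TestRuns table holding TestCaseId and ElapsedMs columns; adapt it to whatever storage you pick. NUnit's Assert.Warn raises a warning without failing the test.

using System.Linq;
using NUnit.Framework;

// Flag the latest run if it is more than `threshold` slower than the historical average.
// MetricsDb and TestRuns are illustrative names, not an existing library.
void FlagIfSlower(MetricsDb db, string testCaseId, long latestElapsedMs, double threshold = 0.10)
{
    double historicalAvg = db.TestRuns
        .Where(r => r.TestCaseId == testCaseId)
        .Average(r => r.ElapsedMs);

    if (latestElapsedMs > historicalAvg * (1 + threshold))
    {
        Assert.Warn($"{testCaseId} took {latestElapsedMs} ms, " +
                    $"{latestElapsedMs / historicalAvg - 1:P0} above its historical average.");
    }
}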
Second Answer

First of all, kudos for tracking single-user performance. Up to 80% of all performance problems can be fixed simply by making certain that every single request meets performance goals.

Second, there are some data points that you should be picking up but likely are not, which can make your job a lot easier: the W3C time-taken field in the HTTP access logs, and the W3C Navigation Timing stats from a RUM (real user monitoring) JavaScript add-in. You can push that data to an open-source solution, such as the Elastic Stack, for managing it.

Assuming you know the start and end times of your functional runs, you simply need to compare stats across runs, such as:

Run time = maximum log entry time - minimum log entry time (for the test set on a given set of test hosts). If your developers can include a version flag on each top-level request as part of the query string after the '?', all the better, as you can then use the version information as your grouping item for query purposes.
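As a rough sketch (assuming the access-log entries have been parsed into objects carrying a timestamp and that version flag; the names below are illustrative), the grouping is a simple LINQ query:

using System;
using System.Collections.Generic;
using System.Linq;

record LogEntry(DateTime Timestamp, string Version);

// Run time per version = max entry time - min entry time for that version's entries.
IEnumerable<(string Version, TimeSpan RunTime)> RunTimesByVersion(IEnumerable<LogEntry> entries) =>
    entries
        .GroupBy(e => e.Version)
        .Select(g => (g.Key, g.Max(e => e.Timestamp) - g.Min(e => e.Timestamp)));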

You can then look at the data for each request in the logs to see which requests are improving or degrading from build to build.

You can also look at the RUM stats, such as domInteractive and domComplete, to check end-user response times and whether they are improving or declining.

This is all passive, without modifying your scripts: simply change your logging settings, add a RUM agent, and collect that data to a location where you can dashboard the changes.
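If you do end up touching the scripts anyway, a subset of the same Navigation Timing numbers can also be read straight from the browser inside the existing Selenium tests; a rough sketch, with driver being your existing IWebDriver instance:

using OpenQA.Selenium;

// Read Navigation Timing marks from the browser after the page has loaded.
// Values are milliseconds relative to navigationStart; store them next to your run times.
var js = (IJavaScriptExecutor)driver;
long domInteractive = (long)js.ExecuteScript(
    "return window.performance.timing.domInteractive - window.performance.timing.navigationStart;");
long domComplete = (long)js.ExecuteScript(
    "return window.performance.timing.domComplete - window.performance.timing.navigationStart;");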