.NET Automated Testing for Long Running Processes


We would like to do some automated integration testing of a process that requires sending data to an external source and then validating that the data has been correctly displayed on their website.

However, it may take several hours before the data appears on the website.

The problem with traditional NUnit or MSTest is that the test will be held up for hours waiting for the result.

I have seen PNUnit, which could be used to run all the tests in parallel, but that doesn't seem like an elegant solution to me. What if there are 1,000 tests? Won't this create a huge number of processes/threads on the server? And how would we keep track of them all?

So has anyone solved this problem? Did you home-grow a solution, or is there an open-source solution to this?


There are 4 answers below.


There is a discussion about exactly this problem on the NUnit-Discuss Google Group right now: http://groups.google.com/group/nunit-discuss/browse_thread/thread/645ecaefb4f978fa?hl=en

I hope this helps :)


This problem can be solved quite easily by separating test data insertion from verification. Load all of the available test data into the system, wait the several hours until processing is done, and then execute the verification tests, as sketched below.
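As a hedged sketch of that split (the URLs, the JSON payload and the category names are placeholders for your real feed client and website check), the two phases could live in separate NUnit fixtures selected by category:

    // Hedged sketch only: the URLs, payload and category names are placeholders.
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using NUnit.Framework;

    [TestFixture, Category("LoadPhase")]
    public class DataLoadTests
    {
        [Test]
        public async Task SendBatchToExternalFeed()
        {
            using var client = new HttpClient();
            var payload = new StringContent("{\"orderId\":\"TEST-001\"}", Encoding.UTF8, "application/json");
            var response = await client.PostAsync("https://partner.example.com/feed", payload);

            // Only assert that the hand-off succeeded; the display is verified later.
            Assert.That(response.IsSuccessStatusCode, Is.True);
        }
    }

    [TestFixture, Category("VerifyPhase")]
    public class DataVerificationTests
    {
        [Test]
        public async Task BatchAppearsOnPartnerWebsite()
        {
            // Run this fixture hours later, e.g. from a second scheduled CI job.
            using var client = new HttpClient();
            var html = await client.GetStringAsync("https://partner.example.com/orders/TEST-001");
            Assert.That(html, Does.Contain("TEST-001"));
        }
    }

With the NUnit 3 test adapter the two phases can then be run separately, e.g. something like dotnet test --filter TestCategory=LoadPhase in the first job and TestCategory=VerifyPhase in a job scheduled a few hours later.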


Martin, before I get to my solution: for unit testing, it would seem to me that you want to test only what you can control. What you describe above is more like what I call regression testing. I am assuming 'their' website is someone else's website. May I ask what happens if you follow the rules of the interface/integration but nothing ever shows up on their screen? It may be a problem, but what would or could you do about it? Moreover, what happens when they change their website or algorithms? You will end up writing code based on whatever they do, which sucks.

That said, as mentioned above, you can separate the data-loading tests from the data-verification tests. I confess I know nothing about PNUnit, but simply throwing threads at the problem isn't going to remove the three hours of latency from each round-trip test.

If you need to run synchronously, you could load all the data in ClassInitialize(), then sleep until it is time to verify and run the actual tests, as in the sketch below.
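A minimal sketch of that synchronous variant with MSTest; the feed URL, the test payload and the three-hour wait are assumptions, not part of the original answer:

    // Minimal sketch of the synchronous variant: load once, sleep, then verify.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class EndToEndFeedTests
    {
        [ClassInitialize]
        public static void LoadAllTestData(TestContext context)
        {
            // Push the test data once for the whole class...
            using var client = new HttpClient();
            var payload = new StringContent("{\"orderId\":\"TEST-001\"}", Encoding.UTF8, "application/json");
            client.PostAsync("https://partner.example.com/feed", payload).GetAwaiter().GetResult();

            // ...then block until the external site has (hopefully) processed it.
            Thread.Sleep(TimeSpan.FromHours(3));
        }

        [TestMethod]
        public void OrderIsDisplayedOnPartnerWebsite()
        {
            using var client = new HttpClient();
            var html = client.GetStringAsync("https://partner.example.com/orders/TEST-001").GetAwaiter().GetResult();
            StringAssert.Contains(html, "TEST-001");
        }
    }

The obvious cost is that the test run, and whatever build agent it occupies, sits blocked for the entire wait.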

If it were me, I'd just have one test project for the loading tests and another project for verifying the results a few hours later. Making it synchronous doesn't seem to buy you much beyond ensuring the precondition passes before you test the results, and that can be handled in other ways as well.


PNUnit seems like it would be a good solution. If you are worried about too many processes/threads on the server, just throttle how many tests PNUnit can run at once (say, a maximum of N) and schedule the next test whenever one completes. I'm not asserting that PNUnit knows how to do this out of the box; you may have to implement it yourself, along the lines of the sketch below.
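A hedged sketch of that throttling idea, independent of PNUnit (the ThrottledRunner class and its parameters are hypothetical): a SemaphoreSlim caps how many long-running checks are in flight, and each completed check frees a slot so the next queued check can start.

    // Run at most maxConcurrent "test" actions at a time.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    public static class ThrottledRunner
    {
        public static async Task RunAsync(IEnumerable<Func<Task>> tests, int maxConcurrent)
        {
            using var gate = new SemaphoreSlim(maxConcurrent);

            var running = tests.Select(async test =>
            {
                await gate.WaitAsync();      // wait for a free slot
                try { await test(); }        // run one long-running test
                finally { gate.Release(); }  // free the slot for the next test
            }).ToList();

            await Task.WhenAll(running);
        }
    }

    // Usage: 1,000 pending verifications, at most 20 in flight at any time.
    // await ThrottledRunner.RunAsync(allVerificationChecks, maxConcurrent: 20);

That way 1,000 tests never turn into 1,000 simultaneous processes or threads on the server.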