Should E2E tests be run in production?


Apologies if this is open-ended.

My team and I are currently working out our end-to-end (E2E) testing strategy, and we are unsure whether we should execute our E2E tests against our staging site or our production site. We have gathered that there are pros and cons to both.

Pro Staging Tests

  • Won't corrupt analytics data on production.
  • Can detect failures before they hit production.

Pro Production Tests

  • Will exercise the actual components of the system, including the database and other configuration, and may catch issues specific to the prod configs.

I am sometimes not sure whether we are conflating E2E testing with a monitoring service (if such a thing exists). Does anyone have an opinion on the matter?

In addition, when are E2E tests run? Since every part of the system is being tested, there doesn't seem to be a single owner of the test suite, which makes it hard to determine when the E2E suite should run. We were hoping to run E2E in some sort of pipeline before hitting production. Does that mean I should run these tests whenever either the front end or the back end changes? Or would you rather run the E2E suite on an interval, regardless of any change?


2 Answers


In my team, experience has shown that test automation is better done on a dedicated test server, periodically, with new code deployed only after passing several test sessions in a row.

Local test runs are for developing and debugging the test automation itself.

A test server is for scheduled runs, because no matter how good you are at writing tests, at some point the suite will take many hours to run, and you need reliable statistics over time, using fake data that won't break the production server.

I disagree with @MetaWhirledPeas on the point of pursuing only fast test runs. Your priority should always be better coverage and reduced flakiness. You can always reduce the run time through parallelization.

Running in production: I have seen many situations where a test leaves the official site in an embarrassing state and the company's reputation suffers. Other dangers are:

  1. Breaking your database.
  2. Making purchases from non-existent users and losing money.
  3. Putting unnecessary strain on the official site's API, degrading the client experience during the run or even bringing the server down completely.

So, in our team we have a dedicated manual tester for the production site.


You might not have all the best options at your disposal depending on how your department/environment/projects are set up, but ideally you do not want to test in production.

I'd say the general desire is to use fake data as often as possible, and curate it to cover real-world scenarios. If your prod configs and setup are different from those of your testing environment, do the hard work to ensure the testing environment's configuration matches prod as closely as possible. This is easier to accomplish if you're using CI tools, but discipline is required no matter what your setup may be.
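One way to keep the test environment aligned with prod is to parameterize the environment-specific values rather than hard-coding them per environment, so the same specs run everywhere and only the target URL changes. A minimal sketch, assuming Cypress (which the tools mentioned below come from); the variable names here are illustrative, not from the original post:

```javascript
// cypress.config.js — a sketch; variable names are hypothetical
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  e2e: {
    // The same spec files run against local, test, or staging;
    // only the environment variables change per pipeline stage.
    baseUrl: process.env.CYPRESS_BASE_URL || 'http://localhost:3000',
    env: {
      apiUrl: process.env.CYPRESS_API_URL || 'http://localhost:3001',
    },
  },
});
```

The CI job for each stage then sets `CYPRESS_BASE_URL` accordingly, which keeps configuration drift between environments visible in one place.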

When the tests run will depend on a few things.

  • If you've made your website and dependencies trivial to spin up, and if you are already using a continuous integration workflow, you might be able to have the code build and launch tests during the pull request evaluation. This is the ideal.
  • If you have a slow build/deploy process you'll probably want to keep a permanent test environment running. You can then launch the tests after each deployment to your test environment, or run them ad hoc.

You could also schedule the tests to run periodically, but usually this indicates that the tests are taking too long. Strive to create quick tests to leave the door open for integration with your CI tools at some point. Parallelization will help, but your biggest gains will come from using cy.request() to fly through repetitive tasks like logging in, and using cy.intercept() to stub responses instead of waiting for a service.
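As a rough illustration of both techniques — logging in via `cy.request()` instead of the UI, and stubbing a slow service with `cy.intercept()` — here is a Cypress spec sketch. The endpoint paths, payloads, and fixture contents are assumptions for the example, not anything from the original post:

```javascript
// dashboard.cy.js — a sketch; endpoints and payloads are hypothetical
describe('dashboard', () => {
  beforeEach(() => {
    // Log in through the API rather than driving the login form;
    // cy.request() skips the browser round-trips and is much faster.
    cy.request('POST', '/api/login', {
      username: 'test-user',
      password: 'test-pass',
    }).then(({ body }) => {
      window.localStorage.setItem('token', body.token);
    });

    // Stub the slow reports service so the test never waits on it.
    cy.intercept('GET', '/api/reports', { fixture: 'reports.json' }).as('reports');
  });

  it('renders report data from the stub', () => {
    cy.visit('/dashboard');
    cy.wait('@reports'); // resolves from the stub, not a real service
    cy.contains('Quarterly Report');
  });
});
```

The stub keeps the test deterministic; a separate, smaller suite can still hit the real service if you need coverage of the integration itself.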