Every tester is familiar, in one way or another, with randomly failing automated tests.
Such tests fail intermittently on the same step, or fail in a different way each time, even though nothing has changed in the test environment or in the feature under test.
Analyzing the results of these tests can be very time-consuming, so some teams prefer to simply rerun a test when it fails.
But is this efficient? The answer is not as obvious as it seems.
Why tests fail: the main reasons
- The test environment is unreliable. It often lacks the hardware resources to run properly under the load generated by automated tests, or it is simply configured incorrectly;
- Waits are not used (in the case of Selenium tests). The test is written incorrectly and does not account for the asynchronous events that occur in the interface under test. Heavy use of JavaScript on the page can make such tests even less reliable (see the sketch after this list).
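To illustrate the point about waits, here is a minimal sketch of the difference between a fragile element lookup and an explicit wait, using Selenium's Python bindings; the URL and element ID are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/orders")  # hypothetical page under test

# Flaky: assumes the element already exists the moment the script reaches this line.
# If the page renders it asynchronously, the test fails at random.
# driver.find_element(By.ID, "order-status").click()

# Reliable: wait (up to 10 s) until the asynchronously rendered element is clickable.
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "order-status"))
)
button.click()
driver.quit()
```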
To get nothing but successful (green) results after a test run, we usually resort to retries.
A retry mechanism reruns failed tests once, or as many times as required.
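To show what a retry mechanism typically looks like, here is a hypothetical decorator sketch, not any specific framework's API; plugins such as pytest-rerunfailures provide the same behavior through a `--reruns` option:

```python
import functools
import time

def retry(times=2, delay=1.0):
    """Hypothetical retry decorator: rerun a failing test up to `times` extra attempts."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            for attempt in range(times + 1):
                try:
                    return test_fn(*args, **kwargs)
                except Exception:
                    if attempt == times:
                        raise  # out of attempts: finally report the failure
                    time.sleep(delay)  # note: the first failure is never analyzed
        return wrapper
    return decorator

@retry(times=2)
def test_order_is_created():
    ...  # a test that fails intermittently will often pass on a retry
```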
But tests may be failing for a reason: the error may be caused by a real defect in the system.
A test that fails on the first run and passes on the second can still be pointing at a genuine error.
Some examples:
- Sometimes a test catches a defect that occurs only during the first request to a freshly launched test server;
- In other cases, a test fails because of a real bug that shows up only under certain conditions inside the test environment. The conditions were met and the bug revealed itself, but on the second run the test passed, nobody analyzed the situation, and the failure was written off as a glitch in the test environment. QA consultants and the client are satisfied that the test passed on the second try.
Apart from hiding failures from analysis, retries have one more drawback: a flaky test takes at least twice as long to run. It runs, fails, and has to be run again (at least once).
Is there an alternative to retries?
- If the environment is slow or unreliable, fix the environment. This keeps you from getting lost among test paths, and it definitely makes the test code clearer: no tries, retries, and so on;
- If the problem is in the tests themselves, fix them or replace them with better ones. A QA team should produce the best and most reliable version of a test, not just patch pieces of code or delete tests that seem unreliable. If neither the environment nor the content of a test has changed, every run should return the same result, and the test should finish within a reasonable time frame, not twice as long (see the sketch below).
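One common fix of this kind is replacing a fixed sleep with polling for the condition the test actually depends on. A minimal sketch, assuming a hypothetical `api` test client; the helper names and timeouts are illustrative:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.2):
    """Poll `condition` until it returns True or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Flaky version: a fixed sleep that is sometimes too short and sometimes wastefully long.
#   time.sleep(5)
#   assert api.order_exists("A-42")
#
# Deterministic version: poll for the condition the test depends on,
# so the test waits exactly as long as needed, up to an explicit limit.
def test_order_is_created(api):  # `api` is a hypothetical client fixture
    api.create_order("A-42")
    assert wait_until(lambda: api.order_exists("A-42")), "order was not created in time"
```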
Short conclusion
If you ignore such random failures, the tests lose their importance and stop being rerun.
Or failures will be ignored entirely: everyone knows that this test tends to fail, so when it eventually fails for a serious reason, it will be dismissed as a coincidence.
Nobody will analyze the causes.
Therefore, it is easy to miss a bug, with all the consequences that follow.
And this proves once again that the software testing and verification process itself should be debugged to perfection.