
Using Test Retries as a Method to Hide Bugs

Every tester is familiar, to one degree or another, with the phenomenon of randomly failing automated tests.

These tests fail intermittently on the same step, or fail in a different way each time, even though nothing has changed in the test environment (or in the functionality under test).

Analyzing the results of such tests can be very time-consuming, so some teams prefer simply to rerun a test whenever it fails.

But is this efficient? The answer is not as obvious as it seems.

Why tests fail: the main reasons

  • The test environment is unreliable. It often lacks the hardware resources to cope with the load generated by automated tests, or it is simply configured incorrectly;
  • Waits are not used (in the case of Selenium tests). The test itself is written incorrectly and does not account for the asynchronous events that occur in the interface while a part of the software is being exercised. Heavy use of JavaScript can make such tests even less reliable (see the sketch after this list).
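
To illustrate the missing-waits problem, here is a minimal Selenium sketch in Python that replaces an immediate assertion with an explicit wait. The URL, the #results selector, and the expected text are purely illustrative assumptions, not part of any real project.

```python
# A minimal sketch (Python + Selenium): wait for an asynchronously loaded
# element instead of asserting immediately and failing on slow responses.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/search")  # hypothetical page under test

# Instead of checking the element right away, wait up to 10 seconds until it
# actually becomes visible; the assertion itself stays deterministic.
results = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "#results"))
)
assert "found" in results.text

driver.quit()
```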

To get nothing but successful results after a test run (a green build), teams usually turn to the method of retries.

It reruns failed tests once, or as many times as required.
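
To make the mechanism concrete, here is a rough, hand-rolled sketch of what such a retry wrapper does. It is not tied to any particular framework; the max_retries parameter and the test name are assumptions for illustration only.

```python
# A hand-rolled sketch of the retry approach, showing what it does (and hides).
import functools

def retry(max_retries=2):
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            last_error = None
            for attempt in range(1, max_retries + 1):
                try:
                    return test_func(*args, **kwargs)  # green on any attempt
                except AssertionError as error:
                    last_error = error   # earlier failures are silently dropped
            raise last_error             # reported only if every attempt fails
        return wrapper
    return decorator

@retry(max_retries=2)
def test_first_request_after_server_start():
    ...  # an intermittently failing test now looks permanently green
```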

But a failure may well mean that the test is failing for a reason, and that the error is caused by a real defect in the system.

Tests that fail on the first run but pass on the second can therefore be hiding real errors.

Some examples:

  1. Sometimes, after the server is started, a test exposes a defect that occurs only on the very first request to the test server;
  2. In other cases, a test fails because of a real bug that shows up only under certain conditions in the test environment. Those conditions were met and the bug revealed itself, but on the second run the test passed, nobody analyzed the situation, and the failure was written off as a glitch in the environment. QA consultants and the client are satisfied that the test passed on the second try.

Apart from the fact that failures never get analyzed, the retry method has one more drawback: a test takes at least twice as long to run. It runs, it fails, and it has to be run again (at least once more).

Is there an alternative to retries?

  • If the environment is slow or unreliable, fix the environment. This keeps you from getting lost among test paths, and it definitely makes the test code clearer: it no longer contains tries, retries, and so on;
  • If the problem lies in the tests, edit them or replace them with better ones. Members of a QA lab should produce the most reliable version of a test they can, not simply tweak parts of the code and delete tests that look unreliable to them. If neither the environment nor the content of a test has changed, every run should return the same result, and it should complete within a reasonable time frame, not twice as long (one way to check this is sketched below).
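
One practical way to enforce that rule is to run a suspect test many times against an unchanged environment and treat even a single failure as something to investigate rather than retry. A minimal sketch, where the run count of 20 and the test_checkout function are illustrative assumptions:

```python
# A minimal stability check: rerun a suspect test N times against an unchanged
# environment and report every inconsistency instead of hiding it.
def check_stability(test_func, runs=20):
    failures = []
    for i in range(runs):
        try:
            test_func()
        except AssertionError as error:
            failures.append((i, error))   # record the failure, do not retry it
    if failures:
        raise AssertionError(
            f"{test_func.__name__} failed {len(failures)}/{runs} runs: {failures[0][1]}"
        )

def test_checkout():
    ...  # the test under suspicion

check_stability(test_checkout)
```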

Short conclusion

If you ignore such random failures, the affected tests gradually come to be seen as unimportant and stop being rerun at all.

Or the failures themselves are ignored because everyone knows that this test tends to fail, and when it fails for a serious reason, it is dismissed as just another coincidence.

Nobody will analyze the reasons.

As a result, it is all too easy to miss a bug, with all the problems that follow.

And this proves once again that the testing and verification process itself should be debugged to perfection.

