When to automate a test case?

Tests are an important part of the software development process. Indeed, some styles of programming, such as Test-Driven Development, even create the tests before the development work itself.

A common practice is to create test cases that need to be executed for each application feature. Those tests usually include the “happy path” as well as the “edge” cases.


Around 78% of companies use test automation for functional or regression testing (https://theqalead.com/topics/2020-software-testing-trends-qa-technologies-data-statistics/#testingtools). One of the common problems those companies face is deciding which automation strategy to follow.
In this article I will try to cover two different approaches to this problem: automate “all the things” and automate the “right things”.

Automate "all the things" approach

The main benefit of this approach is quite intuitive: you can be almost sure that when you execute your test suites, you are testing almost everything (“almost” because it’s really hard to cover 100% of the scenarios).

But some cons need to be considered:

  • Automation requires maintenance.

The code of the application under test is not static. The development team is not only adding new features but also updating existing ones, and with every change, no matter how small, our automated tests will need to be updated as well.

  • Lots of automation requires lots of time.
    • Time for coding the automation.
    • Time for execution.
    • Time for debugging when something fails, and then identifying whether the problem is in the automation code or the production code.

Following this approach, you will need people working all the time on updating the automation project and adding new tests. Executing a large test suite also takes time. Imagine that your company uses a CI/CD environment and you execute your entire test suite as part of the deployment pipeline: if you have to wait many hours for the tests to finish, that can compromise your deployment goals.

Finally, in scenarios with many failures we may have duplicated application paths, and we won’t know whether all the failures are related to the same issue, so we will need to verify test by test to confirm. That takes time too.

  • Redundant automation.

As hinted at in the previous point, if everything is automated, the probability of having overlapping tests is high.

There are, though, some scenarios where automating “all the things” could be a good approach:

  • If you have a small project with a low number of test cases.
  • If your project is stable and rarely changes.

Automate "the right thing" approach

From my point of view, this is the approach that should fit most projects. The key question to ask for each test case is whether we really need to automate it. Experience and a good understanding of the application under test are also key factors.

There are a lot of things to review before deciding whether a test case should be automated, for example:

  • Is this something on the critical path or something that's frequently used?
  • Do we need to test this because of a legal requirement?
  • Does it require a lot of data and environment setup?
  • Is this something that has multiple reuses?
  • How easy is it to automate this feature?
  • How important is the feature for the client?
  • How critical could a bug be in the feature?

But even with the answers, it could be difficult to decide what should be automated.

In this session, Angie Jones (@techgirl1908) provides a mathematical approach to solving this problem, which I will try to explain.

Her method provides a way to identify those test cases by assigning values to some of the factors above. It may not be a perfect method, but it gives you more objective information before making a decision.

Elements to take into account:

  • Customer risk (I * P)
    • Impact (if broken, what is the impact on customers?) (0-5, where 5 is high impact)
    • Probability of use (how frequently do customers use it?) (0-5, where 5 is high probability)
  • Value of the test (D * A)
    • Distinctness (does this test provide new information?) (0-5, where 5 is very distinct)
    • Induction to action (how quickly would a failure be fixed?) (0-5, where 5 means it would be fixed with high priority)
  • Cost-efficiency (Q * E)
    • Quickness (how quickly can this be scripted?) (0-5, where 5 is quickest)
    • Ease (how easy will it be to script this?) (0-5, where 5 is easiest)
  • History (S * F)
    • Similarity to weak areas (volume of historical failures in related areas) (0-5, where 5 is high similarity)
    • Frequency of breaks (volume of historical failures for this test) (0-5, where 5 is high frequency)

Each element is the product of two 0-5 scores, so it ranges from 0 to 25. For example, a feature with Impact 5 and Probability of use 4 contributes 20 points of customer risk. Summing the four elements gives a total score between 0 and 100.

Then we can map the total score to the following ranges:

  • From 67 to 100 → Automate
  • From 34 to 66 → Possibly automate
  • From 0 to 33 → Don't automate

And this could work as a guide to deciding what to automate.
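
To make the calculation concrete, here is a minimal sketch in Python of how such a scorer could look. The factors, the products, and the threshold ranges come from the description above; the function names, the dataclass, and the example values are illustrative assumptions, not part of Angie Jones's material.

```python
from dataclasses import dataclass

@dataclass
class TestCaseScores:
    """Each factor is scored 0-5, following the rubric above."""
    impact: int                    # if broken, impact on customers
    probability_of_use: int        # how frequently customers use it
    distinctness: int              # does the test provide new information?
    induction_to_action: int       # how quickly would a failure be fixed?
    quickness: int                 # how quickly can it be scripted?
    ease: int                      # how easy is it to script?
    similarity_to_weak_areas: int  # historical failures in related areas
    frequency_of_breaks: int       # historical failures for this test

def automation_score(s: TestCaseScores) -> int:
    """Sum the four products; each product is 0-25, so the total is 0-100."""
    customer_risk = s.impact * s.probability_of_use
    test_value = s.distinctness * s.induction_to_action
    cost_efficiency = s.quickness * s.ease
    history = s.similarity_to_weak_areas * s.frequency_of_breaks
    return customer_risk + test_value + cost_efficiency + history

def recommendation(score: int) -> str:
    """Map the 0-100 total to the decision ranges above."""
    if score >= 67:
        return "Automate"
    if score >= 34:
        return "Possibly automate"
    return "Don't automate"

# Hypothetical example: a login test for a heavily used, business-critical flow.
login_test = TestCaseScores(
    impact=5, probability_of_use=5,                     # customer risk: 25
    distinctness=3, induction_to_action=4,              # value of test: 12
    quickness=4, ease=4,                                # cost-efficiency: 16
    similarity_to_weak_areas=2, frequency_of_breaks=3,  # history: 6
)
total = automation_score(login_test)   # 25 + 12 + 16 + 6 = 59
print(total, recommendation(total))    # prints: 59 Possibly automate
```

Note how in this hypothetical case a critical, cheap-to-automate but historically stable test lands in the “possibly automate” band, which matches the idea that the numbers inform the decision rather than make it for you.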

From my point of view, adding all these calculations to your planning activities takes time, and each team member could score the factors differently. If that is something your team can handle, go for it; if not, you can still use the questions behind it as a guideline for selecting which test cases to automate.

Following the Automate "the right thing" approach has some advantages:

  • Easier to avoid duplicated automated tests.
  • Your resources will be working on the important test cases.
  • The automation project code will be easier to maintain.
  • The test execution will be faster.
  • Possible bugs will be easier to find and debug.

But it also has one main con:

  • You won't be testing the entire application, so there is a risk of bugs reaching production. However, if you followed a sound process to select which test cases to automate, the impact of such a bug should be minor.

Conclusion

There is no wrong answer when selecting one approach or the other. The point is to choose what’s best for your organization, taking into consideration the product you are building, the resources you have, and your expectations for testing, among other factors.

