Stop automating everything



There’s no doubt that test automation is a key part of modern software development: it can increase the efficiency of the delivery process whilst improving the quality of the end product. However, test automation should not be performed just for its own sake; it deserves to be more than a buzzword thrown around by upper management to please the number crunchers.

There is a common expression in the industry that ‘all tests should be automated’, sometimes even listed as a ‘definition of done’ criterion against Agile stories. This expression, and the approach behind it, need to disappear as quickly as they arrived.

The cost savings and benefits of test automation can easily be unwound by spending time automating the wrong things.

So, what should we automate?

Let’s investigate what types of tests should be automated and what should remain part of the ‘manual’ testing process.

The goal of test automation

Before looking into what should be automated, we need to understand what we aim to achieve by test automation. Automation should not aim to replace testers but instead provide them with a tool to become more productive whilst reducing any repetitive and laborious tasks in their workload. No one likes to spend three days executing the same regression pack repeatedly to prove the quality of a release.

Typically, the aims of test automation should be to bring focused, repeatable, informative, and reliable test results into your release cycle.

What kinds of tests can be automated?

When deciding what to automate, we can consider some general characteristics of tests to determine the feasibility of automating that specific test case.

These characteristics may include things such as:

+ Is it a repeatable test case? Is it high value? Will it be added to the regression suite?

+ Is it a time-consuming test that does not require manual step intervention during execution?

+ Is it easily subject to human error?

+ Is it a stable scenario that is unlikely to change during the application lifecycle or perhaps only requires minor modification (i.e. core application functionality)?

+ Is it a multi-stage test case with significant wait times or alternative actions required between stages?

+ Does it need to be executed under different configurations or environmental conditions (e.g.: UI tests under different browsers, devices, and OS)?

+ Is it a performance test that may include load and stress tests with specific metrics?

+ Is it a test that involves the input or processing of a large amount of data?

Any test case that can be automated must bring tangible benefits through automation, such as time savings, increased accuracy, and minimising the potential for human error.
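A repeatable, data-driven check is the archetypal automation candidate from the list above. As a sketch, here is how such a test might look with pytest’s parameterisation; `apply_discount` is a hypothetical function standing in for real application logic:

```python
import pytest

# Hypothetical function under test: a simple discount calculator.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# One test definition covers many input combinations, which is
# exactly the kind of repeatable, high-value check worth automating.
@pytest.mark.parametrize("price, percent, expected", [
    (100.0, 10, 90.0),
    (59.99, 0, 59.99),
    (20.0, 50, 10.0),
])
def test_apply_discount(price, percent, expected):
    assert apply_discount(price, percent) == expected
```

Running the same logic by hand across all of those combinations is exactly the kind of laborious, error-prone task automation exists to remove.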

What types of test to automate?

Tests can be broken down into two categories: functional and non-functional, both supporting various degrees of automation.

Functional testing:
Functional testing pertains to testing the functionality of the software. It ensures that the application matches the specified requirements.

Non-functional testing:
As the name suggests, non-functional testing covers the other aspects of the application, such as performance, security, compliance, and resource utilisation.

Test automation can range from simple unit tests to functional, API, and even UI testing phases in various test types. Now let’s look at some common test types and see whether they should be automated.

Smoke testing:
Smoke testing provides a quick way to check if a specified functionality is working and determines the stability of the test environment and systems. It also acts as the gateway to subject the functionality to further testing. Smoke testing can be automated with relative ease since it needs to be a frequent and fast verification of functionality.

System testing:
System testing seeks to execute all the possible functions against an individual system. It is usually extensive and exhaustive and is always valuable even when systems claim to be ‘out of the box’ solutions. Since many of these tests might need to be repeated – as long as the system remains part of our software solution – generally automation is a great idea. 

Integration testing:
Integration testing takes all the individual components of the application and tests them as a combined entity to ensure correct behaviour between all components. It can also be automated depending on the requirements and the exact functionality to be tested.

Regression testing:
Regression testing ensures that any new changes to the application have not affected the existing functionality. Usually, regression testing includes running already executed functional and non-functional test cases to re-verify the functionality. Thus, it squarely sits on the automatable section.

Security and compliance testing:
This testing is entirely dependent on the scope and the specific requirements. Some tests can be automated, such as vulnerability testing for known attack vectors, enforcing administrative user policies, and testing correct data retention policy configuration. Yet, other aspects like penetration testing may require manual testing.

Performance testing:
Performance testing verifies whether the application performs at an acceptable level under varying load and stress conditions. Because it deals in quantifiable metrics, it can be completely automated regardless of the goal: network latency, CPU and memory usage, database query execution time, caching performance, and even storage performance can all be exercised via test automation. Automation allows testers to easily simulate different conditions and verify the stability, reliability, and robustness of an application.
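At its simplest, an automated performance check times an operation and asserts it stays within a budget. The operation and threshold below are illustrative assumptions; real budgets should come from your non-functional requirements:

```python
import time

# Hypothetical operation under test: a keyed record lookup.
def lookup(records, key):
    return records.get(key)

def test_lookup_latency_budget():
    records = {i: i * 2 for i in range(100_000)}
    start = time.perf_counter()
    for i in range(10_000):
        lookup(records, i)
    elapsed = time.perf_counter() - start
    # The 1-second budget is illustrative, not a real NFR.
    assert elapsed < 1.0, f"lookup too slow: {elapsed:.3f}s"
```

Dedicated tools (JMeter, k6, Locust and the like) scale this idea up to full load and stress scenarios, but the principle is the same: measure, then assert against an agreed threshold.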

Acceptance testing:
As the final testing stage of any delivery process, the goal of acceptance testing is to ensure that the application meets the specified requirements. This type of testing typically involves end-users. As it is the final verification and relies upon end-user involvement, typically these tests should not be automated.

While a test type or phase may lend itself to test automation, it doesn’t mean that all tests in that particular phase should be automated. Each group of tests within that phase should be considered on their own merits as to whether it makes sense to automate them or not.

Of course, the test types listed above are not necessarily separate phases. When it comes to agile methodologies your tests may take on multiple flavours from the menu above. But the same logic applies… consider the merit of automating the test before starting.

As testers, we should naturally question everything. If you don’t see value in automating a particular test set then ask someone who does to explain to you why they think it is necessary. Allow yourself to be convinced if they can present a compelling case for it, otherwise present yours.

Communication and collaboration will always result in an improved outcome.

Automating testing environments and configurations

Tests are not the only thing that can be automated in the overall testing phase of the SDLC. Testing also relies on external factors such as creating test data sets and setting up test environments. These will provide the correct framework and data to give confidence in the coverage and correctness of the testing efforts.

Consider these factors when automating testing and include data preparation in the list of ‘things to automate’.

External factors must also play a part in the final decision on what should be automated. Even if the test itself can be automated, it is not an ideal candidate if it depends on a manual task, such as data setup, each time it runs.
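Data setup is often the easiest of these external factors to automate. As a sketch, the helper below gives each test a fresh, seeded in-memory database, removing any manual preparation step (the schema and seed rows are hypothetical):

```python
import sqlite3
from contextlib import contextmanager

# Automated test-data setup: every test gets a fresh, seeded
# database and teardown is guaranteed, even on failure.
@contextmanager
def seeded_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "Ada"), (2, "Grace")])
    try:
        yield conn
    finally:
        conn.close()

def test_user_count():
    with seeded_db() as db:
        (count,) = db.execute("SELECT COUNT(*) FROM users").fetchone()
        assert count == 2
```

In a pytest codebase the same idea is usually expressed as a fixture, so every test that needs data simply declares it as a parameter.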

Which tests should NOT be automated?

Now we have outlined what should be automated, let’s briefly look at the kind of tests we should not automate.

+ Tests that only need to be run once. The exception is one-off tests involving extensive data sets, where manual execution would be too time-consuming.

+ Experimental feature tests for developers to quickly get feedback from the test, especially in the development phase. It’s always easier and faster to run these tests manually rather than creating an automated test and running it through the delivery pipeline.

+ Tests that can only be partially automated; there is no benefit in automating part of a test case if it still requires manual steps to complete.

+ User experience tests that depend on the perspective of the end-user and can be subjective – whereas test automation results are binary, an automated test either passes or fails, there’s no in-between.

+ Exploratory testing where test cases are not predefined and instead dependent on the expertise of the individual tester.

Conclusion

Test automation can undoubtedly improve the overall testing maturity of your software development process. However, organisations and users need to carefully consider what to automate and only automate those tests that provide notable benefits to maximise the ROI of test automation – therefore recognising, supporting, and preserving its place in the SDLC.

If you are about to embrace test automation or need help formulating a solid Test Automation Strategy, then reach out to the luvo testing team via [email protected]

Gary Brookes

Director of Testing | [email protected]
