Automated software testing is nearly as old as software development itself. It is also chronically misunderstood. Many core beliefs are myths. Believing these myths can cripple engineering teams. But the teams that move past these myths can turn quality into a strategic advantage.
Here are the top 3 myths of automated testing.
Myth #1 - Automated testing tells us if the app is behaving correctly.
Fact: Automated testing only tells us if the app behaves the same way it did when we created the tests.
Proof: Test engineers spend at least 40% of their test coding time on test "maintenance" - fixing broken tests.
This myth is the most common justification for investing in traditional automated testing. We want to believe that our test coverage is both broad and deep enough to catch every bug. But because each test merely asserts that a single element equals a specific value, manually creating every test that such coverage would require is beyond any test team's capacity.
This is especially true with automated UI testing, where the app's behavior changes frequently.
If automated testing told us whether the app was behaving correctly, a test would never need to be updated once written. Yet test engineers spend at least 40% of their test coding time on test "maintenance" - fixing tests that no longer capture the app's correct behavior.
Myth #2 - Automated testing replaces manual testing.
Fact: Automated testing only replaces repeated manual testing.
Proof: The act of writing an automated test requires that the test engineer perform the test manually.
The act of authoring an automated test requires knowing at least:
- the URL of the page to test
- one or more unique identifiers for the element to be tested
- one or more correct values for the element
Usually, the URL of the page isn't known and must be copied and pasted from a browser into a test. The unique identifiers must also be copied and pasted from the browser into the test. In the case of checkboxes, selects, and other form elements, even the values must be copied and pasted.
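A sketch of what those copy-pasted ingredients become once they land in a test. Every value here is hypothetical, and a lambda over a dictionary substitutes for navigating a real browser:

```python
# The three ingredients of a typical UI test, each copied from the browser:
TEST_URL = "https://example.com/checkout"  # from the address bar
ELEMENT_ID = "order-total"                 # from the DOM inspector
EXPECTED_VALUE = "$42.00"                  # from the rendered page

def run_test(fetch_element_text) -> bool:
    """fetch_element_text stands in for navigate + find-element + read-text."""
    actual = fetch_element_text(TEST_URL, ELEMENT_ID)
    return actual == EXPECTED_VALUE

# Simulated browser for illustration only:
fake_dom = {("https://example.com/checkout", "order-total"): "$42.00"}
assert run_test(lambda url, element_id: fake_dom[(url, element_id)])
```

Note that none of the three constants can be typed from memory: each one exists only because an engineer drove the browser to that page and read it off the screen.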
The test engineer must manually follow a test script to drive the browser to where they will copy and paste the data. They are performing the test manually.
While manually performing the tests, they must also use their judgment to determine whether the behavior they see is correct. If it isn't, they should file a bug rather than capture the incorrect behavior in the automated test.
Myth #3 - Automated test coverage of critical paths is good enough.
Fact: Nearly every defect, once a user finds it, is worth testing for.
Proof: It is considered best practice to create automated tests for every user-found defect to ensure that the defect doesn't happen again.
"Good enough," or best effort testing forces teams to choose which parts of the app they test before the release and which parts the users will test for them after the release. Yet, once a user finds a bug, it becomes important enough to cover with automated tests to prevent it from being reintroduced in the future.