Manual Testing and Automated Testing – the myths, the misconceptions and the reality…

There are many misconceptions in the software industry regarding both Manual Testing and Automated Testing. Some people believe that Automated Testing is the bee’s knees and exists as a replacement for Manual Testing. Others believe that Manual Testing is a simple set of step-by-step tasks that anyone can run through to check an expected output, and that it’s dying out.

The truth is that both are very important and necessary; they go hand in hand and complement each other. The bottom line is that in order to produce the highest-quality app, you should have a strong manual testing element in place alongside an automated framework.

So let us start by explaining a little bit about Automated Testing…

Automated Testing is a form of testing that utilises scripts to automatically run a set of procedures on the software under test, to check that the steps coded in the script work as expected.

For example: if we had a script that logged into the website, then added an item to the basket and placed the order, a basic automated test would check that this path through the system is operational – the “happy path” (an affirmation that the function operates without causing any known validation errors or exceptions, and produces an expected output). It does not check anything that is not written in the script.
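As an illustration, here is a minimal sketch of what such a happy-path check might look like, assuming Python with Selenium WebDriver – the URL, element IDs and credentials are hypothetical placeholders rather than details of any real system:

# A minimal happy-path check: log in, add an item to the basket, place the
# order, and confirm that the confirmation message appears. The URL, element
# IDs and credentials below are hypothetical examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://shop.example.com/login")
    driver.find_element(By.ID, "username").send_keys("test.user@example.com")
    driver.find_element(By.ID, "password").send_keys("s3cret-password")
    driver.find_element(By.ID, "login-button").click()

    driver.get("https://shop.example.com/products/42")
    driver.find_element(By.ID, "add-to-basket").click()
    driver.find_element(By.ID, "place-order").click()

    # The script checks ONLY what it was told to check: that the expected
    # confirmation text appears. Nothing else on the page is examined.
    confirmation = driver.find_element(By.ID, "order-confirmation").text
    assert "Thank you for your order" in confirmation
    print("PASS: happy path check")
finally:
    driver.quit()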

The key word here, relating to automation, is “check”. Computers can’t think for themselves; they can only follow a set of commands that we give them, which offer a “yes/no” response. Anything that has a set “expected result” can be classed as a check, where no real sapience is required. You simply “observe” > “compare” > “report”, and this process occurs with every automated test: “Step 1: Do this – observe – does it work as expected? Yes – report that this check has passed… Step 2: Do this – observe – does it work as expected? No – report that this check has failed”.
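In code, that observe > compare > report loop reduces to something as simple as the sketch below (plain Python; the check names and values are made up purely for illustration):

# Each "check" is simply: perform a step, observe the actual value, compare
# it with the expected value, and report a pass or a fail. No sapience needed.
def run_checks(checks):
    for name, observe, expected in checks:
        actual = observe()                 # Step: do this and observe
        if actual == expected:             # Compare against the expectation
            print(f"PASS: {name}")         # Report that the check passed
        else:
            print(f"FAIL: {name} (expected {expected!r}, got {actual!r})")

# Hypothetical checks; in a real suite each observation would drive the app.
run_checks([
    ("login page title", lambda: "Login", "Login"),
    ("basket count after adding one item", lambda: 1, 1),
])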

[Image: “Checking” diagram – adapted from James Bach’s presentation “Testing Without Testing” (http://skillsmatter.com/podcast/agile-testing/london-tester-gathering-with-james-bach)]

Don’t think of this as being negative though! Automation is great, and it is essential to include it as part of your testing if you want to be Lean and keep a good focus on quality while working in any type of fast-paced development environment, such as Agile or RAD.

If you find yourself repeatedly running through manual test scripts (e.g. for regression or smoke testing), then these are areas that you should be automating, to speed up your process and free your manual testing to focus on other important areas.

Regression testing is where you check that the existing, working parts of the system still work after a new feature has been implemented. This ensures that the code change introduced with the new feature has not broken any of the existing functionality. As code interacts with other code, a system can become very complex, and there are many ways in which new code might affect unchanged parts of the system without us realising. This is why it is important to perform regression testing.

Smoke testing is where you perform a very basic check of the system, touching every part of it in a cursory way. It is a simple check done to discover whether the system is functioning correctly, without bothering with the finer details of each feature… Wide but shallow.

It is definitely worth automating any manual script with stepped procedures that has to be run for every release. Automating these scripts helps to reduce the time and effort required to manually check that the existing functionality has not been disturbed.
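As a rough sketch of how those repeatable checks might be organised – assuming pytest, with hypothetical stand-ins for real application code – each check can be tagged as smoke or regression and run as a subset for every release:

# test_shop.py – illustrative only; the two helper functions are
# hypothetical stand-ins for calls into the real application.
import pytest

def get_status(url: str) -> int:
    return 200  # stand-in for an HTTP request to the application

def apply_discount(total: int, code: str) -> int:
    return total - 10 if code == "SAVE10" else total  # stand-in logic

@pytest.mark.smoke
def test_homepage_responds():
    # Wide but shallow: just confirm the page responds at all.
    assert get_status("https://shop.example.com/") == 200

@pytest.mark.regression
def test_existing_discount_still_applies():
    # Existing behaviour that a new feature must not break.
    assert apply_discount(total=100, code="SAVE10") == 90

A quick pre-release pass would then be “pytest -m smoke”, while “pytest -m regression” re-checks existing behaviour after a change (in a real project the markers would also be registered in pytest.ini to avoid warnings).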

Manual Testing, on the other hand, is more than just checking.
Although “checking” is an important part of manual testing, it is in fact only a small part of it… The most useful and important tool when it comes to manual testing is your brain – something a computer doesn’t have!
There are various things that an automated script cannot do for you. The diagram below shows only a subset of the factors that are utilised in a manual testing approach!

[Image: “Testing” diagram – adapted from James Bach’s presentation “Testing Without Testing” (http://skillsmatter.com/podcast/agile-testing/london-tester-gathering-with-james-bach)]

To explain the meaning of a few of these areas (based on my own interpretation of them):

  • Functional Testing: Automation can’t test what the system CAN and CAN’T do. Automation can only check what you know (and specify) that the system SHOULD and SHOULDN’T do. It cannot deviate from this path.
  • Usability Testing: Automation can’t test the layout or “look and feel” of the website. Nor can it test how intuitive the website is.
  • Requirements Analysis: It’s not possible for a computer to ask questions of your requirements.
  • Tacit Test Procedures: These are the tests that you KNOW you need to perform, but don’t tend to write down… A few examples might be: “I know that pound symbols can sometimes cause problems in text fields”, or “Invalid dates should cause a suitable error message to be displayed rather than a ‘server error’ message”, or “The Surname field should allow dashes and single-quote symbols, as some surnames contain these”. These tend to be spur-of-the-moment tests that you perform while you are exploring. Adding them to an automated script has to be planned and maintained.
  • Domain Testing (or data testing): This is testing how the system processes data – looking at both the inputs and the outputs, and tracking the data through the entire system.
  • Risk Testing: This is identifying and imagining what kinds of problems or bugs the function might have, and then looking for them. It’s impossible to think of all the scenarios up front, before you are able to see the system…
  • Sympathetic Testing: Similar to “happy path” testing, but with an extra element of being even more gentle with the system. The main aim is not to find bugs but to build a model of the system – to consider the potential benefits of the software before you start trying to find bugs. Automated scripts can’t be sympathetic.
  • Lateral Thinking: A major part of testing software is being able to think laterally about the functional areas under test, and about all the different ways they might be used… Did you know that there are over 50 ways that a plain input field might break? Computers can’t think for themselves.
  • Bug Investigation: It is simply not possible for a computer to investigate a bug, to uncover the details of the situation that caused it or its root cause. A computer can only provide information that allows us to investigate the problem manually.
  • Perspective/Prospective: Perspective is something that needs to be thought about with every bug found. You have to consider the defect from multiple perspectives to determine the severity of the issue, or even whether it is actually a defect at all. Prospective is another key element in testing. This is where you have an expectation of the system (based on various factors: the customer’s needs, the customer’s wants, cost, time, quality, value, etc.), many of which you might not be able to gather until you have a product in front of you… And this affects your testing, as it sets your consistency oracles running.
  • Consistency Oracles: This is the niggle in your head that makes you think “that’s a bug”, or makes you recognise a problem when you see it.
  • Playing: It’s important to “play” with the system – to explore, learn, question, break and generally get to know the system so you can test it better.
  • Claims Testing: This is where you verify every claim made about the product, from every source available (implicit or explicit), considering SLAs, advertisements, specifications, help text, manuals, communication with developers and customers, etc.
  • Galumphing: The act of randomness – using true randomness for creating test data and for performing random actions during testing (“true” as opposed to the subconscious fake randomness that we all fall into…). A sketch of this follows below.
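To make the “true randomness” idea concrete, here is a minimal sketch in Python – the character pools and maximum field length are arbitrary choices for illustration, not anything prescribed:

# Generate genuinely random input-field data, rather than the same
# subconscious favourites we all tend to type when testing by hand.
import random
import string

POOLS = [
    string.ascii_letters,
    string.digits,
    string.punctuation,
    "£€üñéç嗨☃",   # accented, non-Latin and symbol characters
    " \t",          # whitespace that often trips up trimming logic
]

def random_field_value(max_length: int = 50) -> str:
    rng = random.SystemRandom()  # OS entropy rather than a predictable seed
    length = rng.randint(0, max_length)
    # Pick a random pool, then a random character from it, for each position.
    return "".join(rng.choice(rng.choice(POOLS)) for _ in range(length))

for _ in range(5):
    print(repr(random_field_value()))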

And these are just a handful of the activities surrounding what’s involved in manual testing…

So to conclude: there will always be a need for manual, sapient testing in the software industry, but being able to utilise automation for the checking activities is highly beneficial. Both testing methods are important for an effective process that focuses on building quality products in a fast-paced environment. Automation frees up the time and effort that a tester would otherwise spend performing the “checking” tasks, allowing testers to focus on the sapient testing required to discover an accurate picture of the quality of the software.

Markus Gärtner (the author of the book “ATDD by Example”) summed it up nicely when he said: “While automated tests focus on codifying knowledge we have today, exploratory testing helps us discover and understand stuff we might need tomorrow”.

[This blog post was originally posted at: http://danashby04.wordpress.com]


4 thoughts on “Manual Testing and Automated Testing – the myths, the misconceptions and the reality…”

  1. Great arguments in defence of manual testing! Well written and refreshingly free of unnecessary jargon.
    Being the horrible nit-picker that I am, however, I feel that some of the examples quoted in “Tacit Test Procedures” SHOULD be part of the planned automated tests. I always include them as part of my “handling errors and niggles gracefully” tests.
    That said, this was a really interesting and informative article. Thank you for posting!

    • Hi Anthony! Thanks for the reply!

      You are right in the sense that some of the tacit tests might be included in an automated script (I could have picked my examples a little better!).
      I’ll try to re-phrase what I was meaning… If I use an example of a plain input field on a web form – I know from experience that there are over 50 different ways that this input field could break, so that’s over 50 tests that I know I could run.

      I wouldn’t necessarily write these down in any test documentation. And I wouldn’t expect the automation script to cover all 50+ tests, as there might not be any need to repeat all of the tests for each iteration…

      Automation should cover the happy path and the validation rules though, and I guess some of my examples would fit into that validation checking. :)
