What is Exploratory Testing?
Exploratory testing is an approach to testing. Traditionally, testing has taken a "scripted" approach: tests are planned from a list of requirements supplied by the business stakeholders, and these scripted tests are written long before any software exists. Scripted tests are still useful in some circumstances within Net-A-Porter (in the regression team, for example), but a scripted approach can be painful in others, such as testing new functionality in the streams. If you spend a lot of time and effort preparing scripted tests, only for the new functionality they are based on to change (which happens a lot, especially in an Agile environment!), then all of that preparation is wasted, and you now have less time to write new scripts and perform the testing.
Exploratory Testing can be regarded as a cure for this problem. It is simultaneous learning, test design and test execution, which is the opposite of predefined, scripted testing procedures.
This means that exploratory tests are not defined in advance. You think of test ideas and perform the tests as you navigate and explore the system, taking into account any possible modifications or unexpected functions as they occur.
Several studies suggest that Exploratory Testing can be considerably more productive than Scripted Testing. In my own experience, exploratory testing is the quickest way to find important bugs in any product.
So you just click around the software then?
A lot of people confuse Exploratory Testing with "ad hoc" testing – they are not the same thing.
Ad hoc testing is random and improvised – you click around at random and enter unplanned data into fields. Exploratory Testing is certainly not random in this manner; it requires a sophisticated, thoughtful approach in which you strategically choose the best tests to perform using experience and knowledge. It is not random data into any field, but planned data into the specific field you are planning to test.
Take a "Surname" field for example: an ad hoc way to test it is to enter garbled characters of any length (e.g. "yueriwqyruie" – the text is unplanned and random). An exploratory test, on the other hand, might be to put in the surname "McDonald-O'Brian" with the hyphen and the apostrophe (symbols that are fairly common in surnames). This is a strategic test, planned during the exploration.
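To make the difference concrete, here is a minimal sketch of how those planned inputs might be checked against a surname validator. The `is_valid_surname` function and its rule (letters with internal hyphens and apostrophes) are hypothetical assumptions for illustration, not real Net-A-Porter code:

```python
import re

# Hypothetical surname validator (an assumption for this sketch):
# letters, with hyphens or apostrophes allowed between letter groups.
def is_valid_surname(surname: str) -> bool:
    """Return True if the surname matches the assumed validation rule."""
    return bool(re.fullmatch(r"[A-Za-z]+(?:['-][A-Za-z]+)*", surname))

# Planned exploratory inputs: each case is chosen deliberately,
# unlike ad hoc testing's random keyboard mashing.
planned_cases = {
    "McDonald-O'Brian": True,   # hyphen and apostrophe, common in surnames
    "O'Neill": True,            # apostrophe only
    "Smith": True,              # plain baseline (happy path)
    "": False,                  # blank field
    "Sm1th": False,             # digits should be rejected
}

for surname, expected in planned_cases.items():
    assert is_valid_surname(surname) == expected, surname
```

Each entry in `planned_cases` encodes a reason for the test, which is exactly what separates a strategic exploratory input from a random one.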
Exploratory testing relies (to an extent) on epistemology (how you know what you know) and cognition (how you actually think – your senses, beliefs, biases and ideas, all of which experience can alter). It's about considering every angle of the product – the functions, the screens, the fields and the end-to-end processes – asking difficult questions of all of these areas to expose problems, and gathering evidence of the quality of the product.
Within Net-A-Porter, many testers in each of the streams incorporate an exploratory approach into their testing tasks. Some teams use an almost entirely exploratory approach: the NAP mobile team, for example, takes an exploratory approach for roughly 95% of their testing, and only ever uses a scripted approach for the regression testing they carry out.
But what about documentation?
Doing exploratory testing does not mean there is no traceability or documentation; this is a false belief. Exploration notes are usually taken (such as session notes), and plans/mind maps are created and expanded throughout the testing process. I prefer mind maps because they let me easily and effectively "brain dump" everything I know about the system as I learn, and they help me map out each functional area, which stems additional scenarios to test as I think of them.
The exploration notes are useful for detailing the tests that have been attempted and any data that has been used or created, and also for understanding the thought process of the tester exploring the system. Notes are great for recording what has been tested and how, along with any concerns the tester has spotted; however, I prefer to use my mind map for tracking the status of my testing and any concerns I find.
Mind maps can be written at either a high or low level of detail, depending on their purpose. If your mind map is only there to aid your own exploration and nobody else will look at it, it might not be as detailed as it would be if you were also going to use it to report the status of testing to business stakeholders. Mind maps are also great for seeing the relationships between functional areas and for doing some risk analysis to determine the important areas to focus testing on first.
How do you do Exploratory Testing?
If you are a tester at Net-A-Porter, chances are you already use an element of exploratory testing in the testing you do every day, even if you do Scripted Testing – possibly even subconsciously. How many times have you deviated from the script (even slightly) after noticing a field or function on the system that isn't detailed in your script? That is you exploring. You are departing from the script and using your own intuition to test part of the system, and you use your intuition again when you spot an unexpected response or output from the system.
The best way to learn about testing in an exploratory manner is probably to use an example: Let’s say we have been tasked to test a “change password” function on a web app…
- We'd probably want to start with a happy path test to make sure the function is actually working: enter an acceptable new password, submit the change, then log out and back in with the new password to confirm it worked.
- Then we'd want to look at the validation rules around the new password and consider some negative tests (tests aimed at causing the system to act out of the ordinary). We'd test the maximum character limit, the minimum number of characters accepted, capital/lowercase letters, numbers, symbols, spaces (leading and trailing too), a blank field, and any other rule that might cause the field's validation to show a message to the user.
- We could then look at the different ways of entering data into the field: keyboard input, pasting from the keyboard (Ctrl + V), pasting with the mouse (right-clicking), pasting via the browser's menu, and there might be an accessibility requirement for the web app to work with voice-to-text applications, etc.
- Security is important to test. You could try some XSS and some SQL injection in the field to see how the system handles the scripts. You could also use a proxy tool to intercept the browser's communication with the server, to investigate the data being submitted and attempt to modify it.
- You could look at other possible interactions and functions around the change password functionality. There might be a timeout that forces the user onto the change password screen once a month, or some data shown to the user about the last time they changed their password. Changing the password might trigger an automated notification email to the account holder's email address. How is the function affected by the browser's "back" button? What about the "password changed successfully" screen?
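The happy path and negative tests from the first two bullets above could be sketched as checks against a password validator. This is a minimal sketch under stated assumptions, not Net-A-Porter's actual implementation: the `PasswordPolicy` class and its rules (8–64 characters, at least one number, no leading or trailing whitespace) are hypothetical, invented for illustration.

```python
# Hypothetical password policy for a "change password" function.
# The limits below are assumptions for this sketch, not real NAP rules.
class PasswordPolicy:
    MIN_LEN = 8   # assumed minimum length
    MAX_LEN = 64  # assumed maximum length

    @classmethod
    def validate(cls, password: str) -> list:
        """Return a list of validation messages; an empty list means acceptable."""
        errors = []
        if password != password.strip():
            errors.append("leading/trailing whitespace")
        if len(password) < cls.MIN_LEN:
            errors.append("too short")
        if len(password) > cls.MAX_LEN:
            errors.append("too long")
        if not any(c.isdigit() for c in password):
            errors.append("needs a number")
        return errors

# Happy path: an acceptable password produces no validation messages.
assert PasswordPolicy.validate("Str0ngPassw0rd") == []

# Negative tests: each input deliberately breaks one rule from the list.
assert "too short" in PasswordPolicy.validate("Ab1")
assert "too long" in PasswordPolicy.validate("A1" * 40)
assert "leading/trailing whitespace" in PasswordPolicy.validate(" Passw0rd1 ")
assert "too short" in PasswordPolicy.validate("")
```

Note how each negative test targets exactly one validation rule: that one-rule-per-test discipline is what makes the exploration strategic rather than ad hoc.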
There is a vast number of test ideas around this functionality – too many to keep listing – but take a look at the mind map below for general test ideas and heuristics that will help stem scenarios to cover when the next release comes round!