Manual testing offers distinct and powerful benefits such as understanding system behaviour with minimal documentation, verifying changes rapidly in multiple environments and empathising with end-users. Structuring your manual test efforts compounds these benefits.
While automated testing methods (e.g. unit, integration and end-to-end tests) have long been established in the software development process, comparatively little attention has been paid to manual testing.
However, manual testing is far from "dead". Software developers still routinely verify their work by using products manually. Further, developers are usually required to take responsibility for the end-to-end functioning of their software, not just writing quality code and passing unit tests. They are usually encouraged not to lean too heavily on QA.
In this article, I will:
- outline the specific benefits that manual testing offers
- show a simple, structured format for writing manual test cases
- walk through a realistic scenario where structured manual testing pays off
- share techniques for tagging test cases, managing test data, recording test runs and organising artifacts
- suggest ways to make manual testing a regular habit
Manual testing allows you to achieve certain specific goals which may not be available through automated testing:
- understanding and representing current and desired system behaviour
- making fast progress in challenging environments with limited documentation and test coverage
- verifying changes rapidly in multiple environments
- verifying complex workflows end-to-end
- empathising with end-users
As with any activity, manual testing can offer maximal benefit when performed in a structured manner.
In my experience, this involves:
- writing test cases as simple, structured documents (e.g. Markdown files)
- tagging test cases with consistent phrases so they are searchable
- recording test data, test runs and artifacts alongside each test case
Here's a simple example of a test case involving a user logging in:
user_can_login.md:
# User can login
Users who have an account should be able to log in.
## Steps
1. Go to the homepage
2. Click the login button
3. Expect that the login screen is shown
4. Enter username
5. Enter password
6. Click login screen submit button
7. Expect that you are shown logged in, in the header section
Notice we have a brief heading and description, followed by neatly numbered steps.
Steps can be:
- actions: things you do, such as "Click the login button"
- expectations: things you verify, such as "Expect that the login screen is shown"
This format allows us to quickly follow the steps of the test case (actions) and know what to look at to determine whether the test passed or failed (expectations).
A realistic scenario might make it easier for you to see how manual testing can help.
Imagine you begin work as a software engineer at a rapidly growing startup, building a complex product with many user flows.
You are assigned to work on the sign-up experience. Users provide various personal details, such as their country of residence. Based on these, the system shows various prompts and then accepts payment.
You are given your first development task:
"Please fix the flow for Japanese customers. They are getting stuck at the point where they submit their personal details, but before they have paid for the product."
This is based on direct customer contact. No one in the company can tell you exactly what "stuck" means or in exactly which part of the flow this is occurring.
There is also minimal unit test code, code quality is not good and there's little documentation. Remember, it's a fast-growth startup – they don't have the same time and resources as a more mature company.
How would you go about solving this? Your approach might look like this:
1. Write a test case describing the sign-up flow for a Japanese customer, step by step, with expectations at each point.
2. Create a test user whose country is set to Japan and run through the test case manually in a testing environment.
3. Identify the exact step where the flow breaks; now you know precisely what "stuck" means and where it occurs.
4. Fix the underlying code.
5. Re-run the test case to verify the fix, recording the run and capturing artifacts.
Notice how documented manual testing helped you to solve this problem:
- it turned a vague report into a precise, reproducible sequence of steps
- it produced documentation of the sign-up flow where little existed before
- it let you verify your fix end-to-end, as a real user would experience it
- it left behind a test case that can be re-run whenever this flow changes
As we'll soon see, this is only the beginning of the benefits!
Tagging can be a powerful way of making your whole test case collection searchable.
Suppose every time you refer to the login screen in your Markdown files, you use the exact phrase "login screen". Perhaps wrap it in brackets: "(login screen)".
Now this exact phrase is searchable, via a simple find-in-files in your text editor. By searching for the string "(login screen)" you can find every test case involving that screen.
For example, your search might yield the following results:
user_can_login.md
user_can_recover_forgotten_password.md
user_cannot_login_with_wrong_credentials.md
user_can_login_from_another_country.md
user_can_login_with_a_linked_google_account.md
This gives you powerful new capabilities, such as:
- regression testing: after changing the login screen, quickly locate and re-run every test case that touches it
- exploratory testing: discover related screens and flows you may not have known existed
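You can even script this search. Here's a minimal sketch in Python, assuming your test cases live in a test_cases folder (the script name and folder layout are assumptions):
find_tagged_tests.py:
from pathlib import Path

# The tag phrase to search for, exactly as it appears in the test cases
TAG = "(login screen)"

# Print the name of every test case file containing the tag
for path in sorted(Path("test_cases").glob("*.md")):
    if TAG in path.read_text(encoding="utf-8"):
        print(path.name)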
Suppose a feature you want to test relies on certain data existing in the system beforehand.
For example, you might need a certain kind of user account, such as a user who has their country set to Japan.
You could create a test user in your testing environment – hiroshi@yopmail.com – and save it in your test case under a "Test data" heading.
user_can_login.md:
# User can login
## Steps
1. Go to the homepage
...
## Test data
- User: hiroshi@yopmail.com / P@ssw0rd
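If your product exposes an API for creating users, you could even script this setup. Here's a rough sketch; the endpoint URL and payload are entirely hypothetical and would need to match your own system:
seed_test_user.py:
import requests

# Hypothetical user-creation endpoint; replace with your system's real API
resp = requests.post(
    "https://staging.example.com/api/users",
    json={
        "email": "hiroshi@yopmail.com",
        "password": "P@ssw0rd",
        "country": "JP",  # a user whose country is set to Japan
    },
    timeout=10,
)
resp.raise_for_status()
print("Test user created")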
It can be very useful to know the full list of dates/times when you ran your test and what the result was on each run.
These can be added to a "runs" section of the test case file.
user_can_login.md:
# User can login
## Steps
1. Go to the homepage
...
## Runs
| Date/time | Result |
| ----------------------- | --------- |
| 2024-10-01 9:00 AM | Succeeded |
| 2024-09-04 10:00 AM | Failed |
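Adding these rows by hand works fine, but you could also script it. A minimal sketch in Python (the script name and command-line interface are my own invention, and it assumes the Runs table sits at the end of the file, as above):
log_test_run.py:
from datetime import datetime
from pathlib import Path
import sys

# Usage: python log_test_run.py test_cases/user_can_login.md Succeeded
test_case = Path(sys.argv[1])
result = sys.argv[2]

# Append a new row to the Runs table (assumed to be last in the file)
timestamp = datetime.now().strftime("%Y-%m-%d %I:%M %p")
with test_case.open("a", encoding="utf-8") as f:
    f.write(f"| {timestamp} | {result} |\n")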
How might this be useful? For example:
- if a test succeeded on one date and failed on a later one, you can narrow down when a regression was introduced and correlate it with deployments or code changes
- you have evidence that a feature worked correctly at a particular date/time
- recurring or intermittent failures show up as patterns rather than one-off surprises
With manual testing, it is common for engineers to capture artifacts of their work, such as screenshots, screen recordings and copies of log output. These serve to demonstrate work done, prove that things worked correctly at a certain date/time and capture extra information that could help identify further problems or improvement opportunities.
Artifacts from manual tests can be organised alongside test cases, using a structured folder naming system.
I have found it best to keep artifacts in folders named after the test cases and test run dates from which they were generated.
Here's an example:
/test_cases
user_can_login.md
user_can_recover_forgotten_password.md
/test_artifacts
/user_can_login
/2024_10_01_9_00_AM
Screen Recording 2024-10-01 at 9.01.55 am.mov
Untitled2.png
/user_can_recover_forgotten_password
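Creating these dated folders by hand gets tedious, so this too can be scripted. A minimal sketch, assuming the folder layout shown above (the script name is hypothetical):
new_artifact_folder.py:
from datetime import datetime
from pathlib import Path
import sys

# Usage: python new_artifact_folder.py user_can_login
test_case_name = sys.argv[1]

# Mirror the date/time naming used in the layout above
stamp = datetime.now().strftime("%Y_%m_%d_%I_%M_%p")
folder = Path("test_artifacts") / test_case_name / stamp
folder.mkdir(parents=True, exist_ok=True)
print(f"Save screenshots and recordings in: {folder}")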
You can make manual testing a regular, consistent part of your workflow. As you strengthen this habit, your work quality and overall knowledge of the system should improve.
Here are some ideas:
- run the relevant test cases before raising a pull request or after merging a change
- write a new test case whenever you fix a bug, so the fix can be verified again later
- record a run (and capture artifacts) every time you test, so your history builds up
- use your tags to re-run related test cases after changing a shared screen or component
There is also a range of software tools to help you write and manage test cases.
Manual testing offers distinct and powerful benefits, not offered by automated testing, such as understanding and representing current and desired system behaviour, making fast progress in challenging environments with limited documentation and test coverage, verifying changes in multiple environments, verifying complex workflows and empathising with end-users.
Structuring your manual test efforts compounds these benefits: you can quickly locate related tests (enabling regression and exploratory testing), ease your test efforts (using test data) and keep track of test results (helping you identify patterns in failures or find the root cause of an issue).