Our product testing workflow

Ivica

Tester

24 May  ·  12 min read


Quality is an important company value, and we’ve been working hard to improve our processes. And because we see testing as one of the biggest quality drivers, we’d like to walk you through our flow!

If you’ve spent any time at all learning about product quality, you’ll know testing is a crucial part of the puzzle. Let’s start with the basics! The purpose of testing is to make sure the product reaches the standard of quality that was agreed on with the client. To do this, we defined a few key testing values:

  • We test throughout the project lifecycle rather than only at the very end, at different points in the development cycle – which you can read more about here.
  • We’d rather prevent bugs than find them. Our approach means edge cases and pitfalls come to the tester’s attention sooner, before they can turn into bugs.
  • Quality is the responsibility of the entire project team, not only the tester. Testing throughout the development cycle makes testing a shared responsibility – and the product’s all the better for it.

As you can probably see, those values already hint heavily at an agile testing methodology. So when we were looking to establish a solid testing process, we didn’t need to look far…

What is agile testing?

Agile testing is the practice of testing software for bugs or performance issues within the context of an agile workflow, with testing evolving iteratively with the product. In each cycle, product requirements can change, and development will need to adapt. Naturally, this means that testing must adapt too.

Simply put: our testing approach can’t follow a classic waterfall approach, because our product management and development cycles are agile.

In the waterfall approach to testing, test cases are prepared at the start of the project. They are not updated until testing (which usually happens at the end of the development process) reveals that they are out of date. This means a test case can already be obsolete by the time testing starts, making it a complete waste of time. These are the kinds of scenarios that have led to the rise of agile methodologies! If you want to learn more about agile testing, we recommend Agile Testing by Lisa Crispin and Janet Gregory: this is the book we used as a starting point.

In November Five’s workflow, requirements can change with each new development cycle, and our test cases should change accordingly. Another important point is that we aim to deliver a release candidate after every development cycle, which brings us to the next challenge:

If a release might go out every week, when is the best time for the tester to start testing?

The answer, of course, is to make sure testing is ingrained in every stage of the typical release cycle: define, develop, stabilise, and support. When we defined the purpose, activities and deliverables for each stage, testing was included, from writing test cases to providing maintenance and support on our production environments.

The goal of agile testing is to enable us, as the product creators, to fulfil our customers’ requests by replacing functional silos with self-organising project teams. At November Five, everyone is expected to work closely together towards a single goal: high-quality products with each iteration.

We don’t test to find problems, we test to make sure the product is of the highest possible quality!

So, practically speaking: what does our testing process look like?

Our testing follows the process of internal releases: each cycle deliverable needs to be tested, no matter how little functionality has been added. We go through four stages:

  • Define
  • Develop
  • Stabilise
  • Regression testing (post-release)

Define

This is the initial stage of the testing process. We create a test plan that describes the scope, approach, and deliverables of the intended test activities. It identifies the user stories to be tested, the testing tasks, and the test environments and devices we’ll be using.

As input, we take the user stories defined for development. Initially, the tester goes through the acceptance criteria of the stories and challenges them. This is important because it provides early feedback on potential risks further down the road of the project, and it ensures that our developers get more insight into the full scope of each story.

After challenging the stories, the tester creates a set of test cases (a test suite) to ensure the test coverage of the feature. A test case consists of preconditions (e.g. I am logged in to the app as user X, user X has an account balance of 40 euros), steps to execute (e.g. click on the menu button, select the ‘consult’ menu item) and an expected outcome (e.g. the consult page is shown, the user account balance of 40 euros is shown).
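To make that structure concrete, here’s a minimal sketch of how such a test case could be represented in code. The field names and example values are purely illustrative – in practice we manage our test cases in TestRail, as described further down.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TestCase:
    """A single test case: preconditions, steps to execute, and an expected outcome."""
    title: str
    preconditions: List[str]
    steps: List[str]
    expected_outcome: List[str]
    priority: str = "normal"  # assigned later, based on the risk analysis

consult_balance = TestCase(
    title="Consult page shows the user's account balance",
    preconditions=[
        "I am logged in to the app as user X",
        "User X has an account balance of 40 euros",
    ],
    steps=[
        "Click on the menu button",
        "Select the 'consult' menu item",
    ],
    expected_outcome=[
        "The consult page is shown",
        "The user account balance of 40 euros is shown",
    ],
)
```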

Defining test cases

This predefined set of test cases ensures that the product can be validated consistently, multiple times, in later stages, without the tester having to reread all the documentation and reacquaint themselves with the product.

Each test case also indicates the exact, unique scenario that leads to the desired behaviour. If the product deviates from that behaviour in any stage of development, something’s not right.

Because we never have too much time on our hands (who does?), test cases need to be prioritised. The quickest way to do this is to look at how risky it would be if a certain story failed, which we know from the risk analysis we always execute as part of the project’s functional analysis. The risk priority assigned to a user story in the backlog impacts the velocity and planning of the sprints. Unsurprisingly, high-risk user stories require more effort in both development and testing: testers apply their heavier test-design techniques to them and spend more of their time testing them.

An example: in an app that requires you to log in before you access any of the features – think Twitter, or your banking app – a failing login would carry a much higher priority than, say, the feature that lets you change your email address.
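As a rough sketch of how the risk analysis could feed into that prioritisation – the mapping and the wording of the effort levels below are made up for illustration, not our exact scale:

```python
# Illustrative mapping from a user story's risk level to a test priority and effort.
RISK_TO_PRIORITY = {
    "high":   ("critical", "apply heavier test-design techniques, retest every cycle"),
    "medium": ("high",     "cover the main flows and the most likely edge cases"),
    "low":    ("normal",   "cover the happy path; edge cases only when time allows"),
}

def plan_for(story_risk):
    """Return the test priority and testing effort implied by a story's risk level."""
    return RISK_TO_PRIORITY[story_risk]

priority, effort = plan_for("high")  # e.g. the login story of a banking app
```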

The main deliverables in the define stage are a test plan and a set of test cases with priority assigned to them.

Develop

This phase repeats continuously during product development, as it’s an integral part of the development cycles.

The testing cycle

First off, we check if there have been any changes to the product scope since the previous cycle. If anything has changed, we update our test suite first.

Next, we check whether any of the defects from the previous cycle were fixed, and validate those fixes. If a fix doesn’t hold up, the defect is reopened; if it does, the defect is closed.

Once the test suite is ready and defects are validated, it is time to do the work. We run every test on each device and make sure that everything works as expected. If things don’t work the way they should, we create new defects based on the failed test cases. We run through the test cases from the current and the previous cycle, to ensure that all functionalities are checked at least twice before a story is marked as done.

After this test execution, we create a report that gives everyone involved in the project a general overview of the product’s health.
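Put together, one develop-cycle test run boils down to something like the sketch below. It assumes a simple in-memory model and a generic execute(case, device) callable; in reality the execution is manual and tracked in TestRail.

```python
def run_cycle(test_suite, devices, open_defects, execute):
    """One develop-cycle test run: validate previous fixes, execute the suite, report.

    `execute(case, device)` stands in for whatever actually runs a test case on a
    device and returns True/False.
    """
    # 1. Validate the defects that were marked as fixed in the previous cycle.
    for defect in open_defects:
        if defect["marked_fixed"]:
            fixed = execute(defect["test_case"], defect["device"])
            defect["status"] = "closed" if fixed else "reopened"

    # 2. Run every test case on every device; failed cases become new defects.
    results, new_defects = [], []
    for device in devices:
        for case in test_suite:
            passed = execute(case, device)
            results.append({"case": case, "device": device, "passed": passed})
            if not passed:
                new_defects.append({"test_case": case, "device": device,
                                    "marked_fixed": False, "status": "open"})

    # 3. The report gives everyone a general overview of the product's health.
    report = {
        "executed": len(results),
        "passed": sum(r["passed"] for r in results),
        "new_defects": len(new_defects),
    }
    return report, new_defects
```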

The main deliverables for this phase are the updated test suite, the test report for the deliverable candidate of that cycle, and a list of defects.

Stabilise

The whole is more than the sum of its parts, which is where the stabilisation period comes in. In this period, we prepare the release candidate for an actual release.

We fully test the actual product that will end up in the hands of the customers.

Stabilisation and full testing

We first make sure that all test cases (of the entire product) are up to date. If there haven’t been any changes since the last sprint, this should be the case by default.

Second, we make sure the release candidate is ready. If the development team completed all backlog stories, all work should be merged and ready to go.

Once these requirements are met, we can start testing the product again. This time, we use a more extensive testing process, which includes more devices and the complete set of test cases.

Once the tests are completed, we create a test report, which we present to the project manager and delivery manager in a go/no-go meeting. If the results are acceptable, the delivery process can continue. If they aren’t, the release candidate has failed, and a new release candidate needs to be delivered with fixes for all the issues flagged in the meeting. This step is repeated until a release candidate is completely ready to go.
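The go/no-go itself is a judgement call made in that meeting, but the kind of check the report feeds into could be sketched like this – the pass-rate threshold is a made-up example, not a fixed rule we apply:

```python
def go_no_go(report, open_defects, blocking=("critical", "high"), threshold=0.98):
    """Illustrative go/no-go check on a release candidate's test report."""
    pass_rate = report["passed"] / report["executed"]
    blockers = [d for d in open_defects if d.get("priority") in blocking]
    # No blocking defects left open, and a (hypothetical) minimum pass rate.
    return pass_rate >= threshold and not blockers
```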

The main deliverables for this phase are the release candidate report and a list of defects found on that release candidate.

Regression testing (post-release)

Once a product is live, we don’t stop testing! In the post-release phase, we shift our focus to regression testing, both on new development and on the existing version.

Firstly, as stated earlier, our development flows are agile – meaning a released product will often be improved iteratively. In this case, the new version of the product will go through all of the steps above – with all new features being tested fully – and then through additional regression testing before heading into the stabilisation phase. The main idea is to make sure that the new changes or elements haven’t broken any of the existing features.

We use a specific regression test suite to do this, which consists of the most important test cases. This ensures that the most important product features are fully functional. After all testing is done, we create a test report of the release candidate and continue the process as outlined above.
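Selecting that regression suite can be as simple as filtering the full suite on priority – sketched below, reusing the illustrative priority field from earlier:

```python
def regression_suite(full_suite, keep=("critical", "high")):
    """The regression suite: the most important test cases of the entire product."""
    return [case for case in full_suite if case.priority in keep]
```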

The main deliverables for this phase are the release candidate report (regression + new features) and a list of defects found in that release candidate.

Secondly, we also use regression testing on products that we offer continuous support for. This means we’ll retest an unchanged, released product whenever the OS or the APIs it depends on change.

Our tools: TestRail, Confluence and JIRA

At November Five, we use three main tools to support our testing process: TestRail, Confluence, and JIRA.

TestRail is the tool we use to create and manage test cases. We did customise it slightly to fit our existing workflow: to keep everything in one place and ensure transparency across the team, we sync test cases to Confluence. We use Confluence for our complete project documentation, with a standardised structure across all projects, so linking test case documentation to our user story documentation made sense. This also allows us to share test information with clients more easily.

In TestRail, all test suites have the following structure:

The TestRail test suite tree

Organising our test suite this way allows us to easily indicate which stories we need to include in the test plan. All we need to know is which stories are marked as done – we then select all test cases related to those stories.
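In other words, building the test plan becomes a simple lookup: take the stories marked as done and collect their test cases. A sketch, assuming each test case carries a reference to its story:

```python
def build_test_plan(full_suite, stories):
    """Select the test cases of every story that is marked as done."""
    done_stories = {story["id"] for story in stories if story["status"] == "done"}
    # The link between a test case and its story comes from the suite structure above.
    return [case for case in full_suite if case["story_id"] in done_stories]
```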

We also use TestRail to execute the tests. For every cycle, we create a test plan, which consists of multiple test runs with test cases for different product environments.

Finally, after each test plan, we use TestRail to report on test plan statistics. Before each demo, the team looks at these stats to validate which stories need to be checked and, if the issues are too complex to fix before the demo, which stories need to be reopened in the next sprint.

TestRail also has a JIRA integration, so we found it convenient to push issues from TestRail to JIRA.

Within November Five, each product has a JIRA project, including multiple affected versions (the current and next release versions are always available) and multiple platforms (Android, iOS, Web, etc.).

Once a bug is created, we assign it a platform and a version, and follow it up for the release. There are five possible states for a bug ticket: New, Open, In progress, Ready for test, or Closed.

Once all tickets are closed for a certain version, it can be released. If certain issues are low in priority, they are either assigned to one of the future versions or rejected by the project manager. And if an issue is rejected by the project manager, the test case that detected it is updated or deleted.
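Put differently, a bug ticket moves through a small state machine, and a version is only releasable once all of its tickets have reached Closed. The sketch below illustrates the idea; the exact transitions are an assumption for illustration, not our literal JIRA configuration:

```python
# The five states a bug ticket can move through, and illustrative transitions between them.
TRANSITIONS = {
    "New":            {"Open"},
    "Open":           {"In progress", "Closed"},   # Closed: rejected or moved to a future version
    "In progress":    {"Ready for test"},
    "Ready for test": {"Closed", "Open"},          # fix validated, or defect reopened
    "Closed":         set(),
}

def can_release(tickets, version):
    """A version can only be released once all of its tickets are closed."""
    return all(t["status"] == "Closed" for t in tickets if t["fix_version"] == version)
```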

The future of testing at November Five

Of course, while this way of working is already serving us well, we’re not done yet!

We are working on an in-house tool that gives us more insight into the data exchanged over the network, helps us mock data, and offers a host of other possibilities that help manual testers try out any test case they can think of.

Alongside that, we are currently working on the integration and automation of the defect tracking systems we use. This will allow us to file tickets with more metadata more efficiently.

Even with those improvements making a tester’s life easier, manual testing can be a time-consuming and relatively slow process: for a product with a hundred user stories, an average of five test cases per user story and a test run on ten different device configurations, testing easily takes up more than two man-days.
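To put a rough number on that – the per-case timing below is an assumption, purely to show the order of magnitude:

```python
stories = 100
cases_per_story = 5
device_configs = 10
minutes_per_case = 2          # assumed average, including setup and note-taking

executions = stories * cases_per_story * device_configs      # 5,000 test executions
hours = executions * minutes_per_case / 60                    # ~167 hours for a truly full run
# In practice not every case runs on every device, but even a fraction of this
# quickly exceeds two man-days.
```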

We’re convinced that we’ll never be able to automate away the need for manual testing, but once a regression suite is clearly defined, automation can help speed up the regression testing significantly.

We’re using behavioural or UI testing frameworks like XCTest (iOS), Espresso (Android), Behat (PHP), Enzyme/Jest (React) and Behave (Python) to implement the validated test cases as features of the application. This allows us to retest stable features much, much more quickly, extending our test coverage to a much larger pool of device-OS-environment combinations.
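To give an idea of what that looks like on the Python side, a validated test case like the ‘consult’ example above might translate into Behave step definitions along these lines. The context.app driver is a placeholder for whatever automation layer the product uses:

```python
# features/steps/consult_steps.py — illustrative Behave step definitions.
from behave import given, when, then

@given("I am logged in to the app as user {user}")
def step_login(context, user):
    context.app.login(user)            # context.app: hypothetical driver for the product under test

@when("I select the '{item}' menu item")
def step_select_menu_item(context, item):
    context.app.open_menu()
    context.app.select(item)

@then("the account balance of {amount:d} euros is shown")
def step_check_balance(context, amount):
    assert context.app.visible_balance() == amount
```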

For our native clients, we’ve been using Amazon Device Farm extensively. Amazon houses racks full of actual smartphones in its data centres, and provides APIs to run both monkey tests (randomly clicking around in an app) and regression test suites on an endless pool of devices.
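Those APIs can be called with boto3; below is a stripped-down sketch, where the project, app and device pool ARNs are placeholders and BUILTIN_FUZZ is Device Farm’s built-in monkey-test type:

```python
import boto3

# Device Farm's API lives in the us-west-2 region.
devicefarm = boto3.client("devicefarm", region_name="us-west-2")

# Schedule a monkey test (random UI input) on a pool of real devices.
run = devicefarm.schedule_run(
    projectArn="arn:aws:devicefarm:...:project:...",        # placeholder ARNs
    appArn="arn:aws:devicefarm:...:upload:...",              # the uploaded app binary
    devicePoolArn="arn:aws:devicefarm:...:devicepool:...",
    name="nightly-monkey-test",
    test={"type": "BUILTIN_FUZZ"},
)
print(run["run"]["arn"], run["run"]["status"])
```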

We’re firm believers in this approach, and we’re looking into putting even more time into automating such test cases for the products we’ve built for our clients.

Think you can teach us a thing or two about testing? We’re looking for a Test engineer! Or join team Product Operations as our new Support manager!

