Adam Fanello
Adam.Fanello<Building Apps in the Cloud>

Software Testing Strategies

Explained for non-developers

Adam Fanello
May 7, 2022

In 2021, I was asked to participate in a podcast explaining software testing strategies to a team of people who were technical, but not themselves software developers. While I cannot share the actual podcast, I can share the prepared Q&A that was used as the basis of the live discussion.

Can you identify some of the most common testing strategies in this industry and define what those are?

That's a huge question! Testing strategies can be broken down into a matrix of categories. First, there is manual vs automated. That's easy to understand: automated tests are written as code and can run automatically. Manual tests are procedures, in the human sense, in which a person performs each step.

From those two broad categories, we then have to look at scope: what is included in the test, and what is not. Software is built in layers. Some of these are easy to see: the app running on your iPhone, vs the custom business logic running in the cloud, vs third-party services like those AWS provides. With IoT, the actual hardware devices are another layer. When you decide how many of these layers you want to test together, you are defining the scope of the test.

When we only test one layer in isolation, that's a unit test - one unit of code. If two or more layers are tested together, that's an integration test. If all layers are tested together, that's called end-to-end testing.

We also have regression testing. Regression testing refers to automated tests that are designed to catch when something changes unexpectedly. They are often written after fixing a bug that existing tests failed to catch. Once written though, they’re just another automated test.

There are more special adjectives for different scopes of integration tests. UI tests are scoped around the user interface. API tests are scoped to start at the application programming interfaces - the server endpoints - and run down through all layers of the server.

Can you dive deeper into unit tests and explain how those are typically set up?

Unit testing allows us to focus on the smallest scope of code and ensure that it is working correctly. This requires isolating it by replacing the layers around it with fake or "mock" implementations. Those surrounding layers are the input into the unit being tested. We code up a variety of inputs to feed into that unit of code and verify that the resulting output is what we want. If it is, the test passes.
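As a minimal sketch of that idea, here is a Python unit test in which a made-up `shipping_charge` function is the unit, and a mock stands in for the surrounding rate-service layer (both names are invented for this example):

```python
from unittest.mock import Mock

# Hypothetical unit under test: prices a shipment using a rate
# service that, in production, would be a whole other layer.
def shipping_charge(weight_kg, rate_service):
    rate = rate_service.rate_per_kg()
    return round(weight_kg * rate, 2)

# Isolate the unit by replacing that layer with a mock.
mock_rates = Mock()
mock_rates.rate_per_kg.return_value = 2.5

# Feed in an input and verify the resulting output.
assert shipping_charge(4, mock_rates) == 10.0
```

Because the mock returns a fixed rate, the test is deterministic and needs no network or real pricing service.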

This sort of isolation is only really possible for coded tests, which means automated ones. Because these are entirely automated and isolated, they are fast and easy to run in the various continuous integration pipelines like Bitbucket Pipelines, AWS CodePipeline, Jenkins, etc.

What about UI tests, can you speak to when these might be the most useful and what type of tools would best accomplish this?

UI tests are typically automated integration tests that focus on the user interface. The automated test, which is code, interacts with the user interface like a user would and checks that the results on the screen are as expected. These work well for testing entire user stories, although they aren't limited to that.

The most popular tool for UI testing has long been Selenium, along with other tools that build on top of it. They're rather difficult to work with, though. For webapp testing, a relative newcomer called Cypress is far more pleasant to work with.

What about testing with hardware? Is there a lot more to define there?

Firmware and device testing has all the same kinds of considerations as software. The manufacturer of hardware will be concerned about physical testing - that's hardware testing - but that is outside my area of expertise. These devices are, however, programmed. The programming on hardware devices is often called firmware, but that's more of a historical artifact than a real distinction these days. Devices now have an operating system and run application software. No surprise, then, that the same concepts of testing still apply. The distinction is how we isolate all the physical inputs to perform automated testing. That sometimes involves another piece of hardware to electronically push buttons and flip switches, or it may connect via a communication port and simulate inputs that way.

What experience do you have with load testing?

The most common approach: throw it into production, monitor as use increases, and react. When you're dealing with a new product bringing customers in very gradually, that isn't as terrible as it sounds. With a hyperscaler like AWS, a well-architected solution will be fine.

Better, of course, is a proper load test, and I led this effort for an IoT customer that asked for it. In this case, they were planning to very quickly migrate tens of thousands of devices and thousands of users from a legacy platform to a new one that we had built, so we had to be ready.

Load testing is just a form of integration test, usually scoped to just your cloud-side layers. A pretty basic load test will hit one application endpoint hard and see if or when it breaks. More sophisticated is to script an automated integration test that acts like real users. Then you can use something like AWS Fargate to scale out your simulated users to produce a heavy load. For an application with input coming from somewhere other than end users, such as IoT devices, the load test is modeled much like a unit test - mocking the inputs from every direction into the cloud. With a single automated test simulating both thousands of IoT devices and the thousands of users interacting with them in realistic ways, you have a real cloud end-to-end test under load.
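As an illustrative sketch of the "scripted simulated users" idea - not the actual tooling from that project - one could fan out fake users concurrently, each recording its own latency. Here the API call is replaced by a short sleep so the example is self-contained:

```python
import random
import threading
import time

results = []              # latencies collected from all simulated users
lock = threading.Lock()   # guard shared list across threads

def simulated_user(requests_per_user):
    """One fake user making a series of requests."""
    for _ in range(requests_per_user):
        start = time.perf_counter()
        time.sleep(random.uniform(0.001, 0.005))  # stand-in for a real API call
        latency = time.perf_counter() - start
        with lock:
            results.append(latency)

# Scale out: 50 concurrent users, 10 requests each.
threads = [threading.Thread(target=simulated_user, args=(10,)) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} requests, "
      f"avg latency {sum(results) / len(results) * 1000:.1f} ms")
```

In a real load test, each thread (or Fargate task) would drive the application's actual endpoints, and the recorded latencies and failures become the test report.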

Cloud monitoring tools can monitor the system health during a load test. The testing scripts themselves can also track latency and failures from their end and report results.

Can you talk to us a bit about automated testing, which I think might tie into test driven development, and if not, can you explain the differences?

Automated tests are written so that they can be run again and again without human involvement. That is in contrast to manual tests.

Test Driven Development is a discipline whereby the programmer writes the automated unit tests at the same time as implementing the logic. We start by writing a test the way we want to use the logic, and it initially fails because the logic code doesn't yet exist. We then write just enough logic until the test passes, perhaps by writing a function that does nothing. Then we write some more test code until it fails again, and write more logic until it passes again. This continues back and forth until the solution is complete. It sounds absolutely crazy, but it can expose nice clean solutions that we might not have thought of without writing a fake consumer of the logic right from the start. You end up with just the amount of solution code and unit tests you need, and no more. Integration tests can be created this way as well.
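A tiny illustration of that back-and-forth rhythm in Python, using a hypothetical `slugify` function (the names and steps are invented for the example):

```python
import re

# Step 1: write a test for logic that doesn't exist yet - it fails.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2: write just enough logic to make the test pass.
def slugify(title):
    return title.lower().replace(" ", "-")

# Step 3: write more test until it fails again...
def test_slugify_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# Step 4: ...then write more logic until it passes again.
def slugify(title):
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# Both tests now pass against the final version.
test_slugify()
test_slugify_punctuation()
```

Each pass through the loop is small, and the tests accumulate as a permanent safety net for the finished function.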

While this approach takes a little extra upfront effort, when done right, bugs become rare.

What can you tell me about working with external QA teams? In your experience, have you seen an efficient workflow from external QA teams?

Oh yes. Quality assurance team members are there to find problems before the end users do. That's a valuable service! Developers can get tunnel vision; we know what we expect the user to do and code for that. You need a separate person to do the unexpected to find bugs - and to try the same thing in two versions each of three different browsers, for instance.

On the other side of that, what have you seen that simply just doesn’t work?

The key to successful QA is a respectful relationship. Not only do the developers and testers need to recognize that we are there to support each other, but the customer must not poison that. When a customer is willing to pay for QA, and then is shocked and assigns blame when QA actually finds bugs, that destroys the relationship. It can't be competition.

What are the best testing tools you would recommend?

I already mentioned Cypress for UI testing of web applications. Other than that, the tools vary by programming language. Unit tests must be written in the same language as the code they test. That isn't required for integration tests, but using the same tool for integration testing as for unit testing often reduces cognitive load and allows for reuse of some code.

Can you describe what the ideal test coverage looks like?

Testing should be targeted to where the smallest amount of it can have the biggest impact. If you unit test, and API integration test, and end-to-end test the same code - you’re testing the same logic three times. There are always some added benefits to more test coverage, but the returns diminish.

That said, my default for unit tests is 100% function and line coverage. That is often reached, but I'm not going to spend much time trying to hit one hard-to-reach branch that is not even expected to ever happen. For APIs, with the code behind them already unit tested, the target is every API but with just a representative sample of possible inputs - maybe one success and one failure.
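As one way to hold a default like that, most coverage tools can fail the build when coverage drops below a threshold. For example, a hypothetical `.coveragerc` for Python's coverage.py:

```ini
# .coveragerc - fail the unit test run below 100% line coverage
[report]
fail_under = 100
show_missing = True
```

Equivalent threshold settings exist in most coverage tools, so the rule is enforced by the pipeline rather than by memory.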
