Performance or Functional Testing: What Should Your DevOps Team Choose?

It’s becoming increasingly important for companies to conduct performance testing regularly. Scott Moore, director of customer engineering at Tricentis, explains the differences between performance and functional testing, why both matter, and what can happen if you aren’t running and monitoring these tests.

To illustrate how the mindset around performance testing has changed over the years, I’ll use an analogy with the appliances we use in our homes. It used to be that performance testing was the steam cleaner. If you had one, it was only brought out every six months or when there was a big mess to clean up. As Agile software development became more popular, we all realized performance testing could and should be done more frequently – like the dishwasher. We could even run it more than once a day!

Today’s applications are being built with many more moving parts on infrastructure that is always changing and more dynamic than ever. This shift in the way we build things requires a shift in how we approach performance. It must now be more like the refrigerator. It’s always on, always working for us, and always providing value.  

With the move to cloud-native applications built on microservices and containerized environments, it is easier than ever to correlate poor performance with higher costs. Companies should run performance tests continuously and monitor application performance to ensure a great user experience, reduce the risk of outages under load, and keep cloud bills as low as possible.

Performance or Functional Testing

Functional and performance tests are indeed different. This is not only because one runs at the network layer and the other typically runs at the UI layer; it is also because performance testing has additional, completely different requirements to validate. It is about more than whether a page loads in less than five seconds after a button is pressed. There are other specific goals to meet that a functional tester probably won’t think of, and that is fine. They should not have to think of them, which is why there are performance engineers. The problem with trying to make functional and performance scripts the same is that it tends to turn the performance test INTO a functional test, and we don’t want that. There are other things to measure that go beyond a page load, the code, or something within the GUI that functional testers typically care about.
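
To make that concrete, here is a minimal sketch (not from the article) of what each kind of check might assert against a hypothetical service. The endpoint, concurrency level, and thresholds are illustrative assumptions, and in practice a dedicated load-testing tool would generate the traffic.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "https://example.com/api/checkout"  # hypothetical service under test


def functional_check():
    """Functional test: is the response correct?"""
    resp = requests.post(ENDPOINT, json={"cart_id": "abc123"})
    assert resp.status_code == 200
    assert resp.json().get("order_id") is not None


def performance_check(users=50, requests_per_user=20):
    """Performance test: how does the service behave under concurrent load?"""
    def one_request(_):
        start = time.perf_counter()
        resp = requests.post(ENDPOINT, json={"cart_id": "abc123"})
        return time.perf_counter() - start, resp.status_code < 500

    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one_request, range(users * requests_per_user)))

    latencies = [elapsed for elapsed, _ in results]
    error_rate = sum(1 for _, ok in results if not ok) / len(results)
    p95 = statistics.quantiles(latencies, n=100)[94]

    # Goals a functional tester typically would not think about:
    assert p95 < 0.5          # 95th percentile response time under 500 ms
    assert error_rate < 0.01  # fewer than 1% server errors under load
```

Even in this toy example, the two scripts ask different questions, which is why forcing them into one script tends to blur the performance test into a functional one.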

Is it Feedback or Noise?

Many companies are using standard continuous integration practices and generally do a good job of deploying code through a CI pipeline. Some have added automated functional testing to their pipelines. However, when it comes to performance, fewer have ventured forth. Why? Is it because performance testing within a CI pipeline is harder? You may disagree, but I don’t think so. I think some have attempted it and have not been successful because the idea of simple pass/fail criteria from a functional perspective doesn’t always apply.

This means there can be instances where performance issues still slip through, so the test results become less trustworthy. There are times when we need more than a single set of pass/fail criteria to make a decision. Just because the trendline shows a function getting faster doesn’t always mean it should pass. What if it is faster, but consumes twice the RAM and CPU? That might increase both the cost and the risk of the deployment, but if it isn’t considered, the build can be deployed anyway and the company suffers the consequences later, after it’s in production.
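
As a rough illustration, here is a sketch of a gate that treats a run as a failure even when latency improves, if CPU or memory consumption grows too much. This is not the author’s tooling; the metric names and thresholds are illustrative assumptions, and the numbers would normally come from your load-test results and monitoring.

```python
def evaluate_run(current, baseline):
    """Return (passed, reasons) for a performance run compared to a baseline."""
    reasons = []

    # Response time is still checked...
    if current["p95_latency_ms"] > baseline["p95_latency_ms"] * 1.10:
        reasons.append("p95 latency regressed by more than 10%")

    # ...but a speed-up bought with extra infrastructure cost also fails the gate.
    if current["peak_cpu_pct"] > baseline["peak_cpu_pct"] * 1.25:
        reasons.append("peak CPU grew by more than 25%")
    if current["peak_rss_mb"] > baseline["peak_rss_mb"] * 1.25:
        reasons.append("peak memory grew by more than 25%")

    return (not reasons, reasons)


# Latency improved, but CPU and memory roughly doubled: the run fails,
# and the cost increase becomes part of the deployment decision.
passed, reasons = evaluate_run(
    current={"p95_latency_ms": 310, "peak_cpu_pct": 80, "peak_rss_mb": 2048},
    baseline={"p95_latency_ms": 420, "peak_cpu_pct": 40, "peak_rss_mb": 1024},
)
print(passed, reasons)
```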

Creating a CI pipeline for performance testing just to say you have done it isn’t the point. The goal is to get feedback about performance characteristics earlier and faster, rather than waiting until the last minute, so that any issues that come up can be addressed sooner. It isn’t just to speed up the performance testing exercise. If no one looks at the results, or the results aren’t useful, where is the value? There needs to be some level of inspection, and the output needs to be in a format that allows solid business decisions to be made from it. If you aren’t getting this from your performance testing CI pipeline, you should take a closer look at why you are doing it. Without value, the data points in the results are just noise.
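
One way to keep the results from becoming noise is to have the CI step emit both a machine-readable report that gets archived and a human-readable summary in the build log, with the exit code gating the build. The following sketch assumes hypothetical file names and fields; it is one possible shape for that output, not a prescribed one.

```python
import json
import sys


def publish(results: dict, report_path: str = "perf-report.json") -> int:
    """Archive the full numbers and summarize pass/fail for the build log."""
    with open(report_path, "w") as fh:
        json.dump(results, fh, indent=2)

    for check, outcome in results["checks"].items():
        status = "PASS" if outcome["passed"] else "FAIL"
        print(f"{status}: {check} - {outcome['detail']}")

    # A non-zero exit fails the CI stage so regressions cannot slide through silently.
    return 0 if all(c["passed"] for c in results["checks"].values()) else 1


if __name__ == "__main__":
    sys.exit(publish({
        "build": "1234",
        "checks": {
            "p95_latency": {"passed": True, "detail": "310 ms (budget 500 ms)"},
            "peak_memory": {"passed": False, "detail": "2.0 GB vs 1.0 GB baseline"},
        },
    }))
```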

Testing or Monitoring: What Should Your DevOps Team Focus on?

The final aspect I want to address in the continuous realm is the cousin of performance testing: performance monitoring. This ALSO needs to be continuous. I am surprised that in 2021 I still come across clients that have almost no monitoring, or whose monitoring is not useful enough to isolate a performance bottleneck. Production is the first place performance monitoring is generally considered, but it’s not just for production. Many companies are running load tests all the time (even in CI pipelines) with ZERO monitoring of the infrastructure. They only have end-user timings or basic API performance timings. There is no correlation to what runs everything – the thought is, “Well, we don’t have to worry about that. The cloud scales, so hardware isn’t the problem anymore.” Really? How did that work out last Black Friday for the many e-commerce retailers that were cloud-native in a DevOps culture? Hint: not so good.

Some have figured out that monitoring under load makes it much easier to find bottlenecks, but it’s not just something to do during a load test. Performance monitoring should be shifted left as well. When a developer completes a feature, function, or service, it should be tested not only for the time it takes to execute but also for the resources it consumes. By adding traditional application performance monitoring (APM) and/or observability metrics in the development environment, the infrastructure costs can be estimated early as well. I’m amazed more organizations have not figured this out.
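
As a small illustration of that shift-left idea, a developer can measure a new piece of code for resource consumption as well as elapsed time right in the development environment. The sketch below uses the third-party psutil package rather than any particular APM product, and the workload is a stand-in.

```python
import time

import psutil


def profile(func, *args, **kwargs):
    """Run func and report elapsed time, CPU usage, and memory growth."""
    proc = psutil.Process()
    rss_before = proc.memory_info().rss
    proc.cpu_percent(interval=None)  # prime the per-process CPU counter
    start = time.perf_counter()

    result = func(*args, **kwargs)

    elapsed = time.perf_counter() - start
    cpu_pct = proc.cpu_percent(interval=None)  # CPU used since the priming call
    rss_growth = proc.memory_info().rss - rss_before
    print(f"elapsed={elapsed:.3f}s cpu={cpu_pct:.1f}% rss_growth={rss_growth / 1e6:.1f} MB")
    return result


# The feature may be fast, but the memory it allocates is visible too.
profile(lambda: [i * i for i in range(2_000_000)])
```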

Change or Stagnate

The approach to performance testing (and monitoring) has to change with the way software is developed. The old ways of doing it won’t work in a world of IT that is both dynamic and continuous. Testing should be continuous. Monitoring should be continuous. The value provided by these efforts should also be continuous. If your performance testing isn’t keeping up with the development cycle, re-evaluate the approach and determine how to be a value add instead of a blocker. The place to start is taking ownership of performance at every level, making sure it’s part of the entire lifecycle, and ensuring it plays a role in continuous feedback all along the way.

Did you find this article helpful? Tell us what you think on LinkedIn, Twitter, or Facebook. We’d be thrilled to hear from you.