
Opsview & Purposeful Performance Testing


This blog is written by Scott Johnson, Test Lead at Opsview.

Customer Focussed

I have recently been assessing the requirements for Opsview’s performance test environment and have found myself deliberating on what data I should be collecting, and how best to use it.

In the majority of cases, if you asked someone which single trait of their software they would most like performance testing to reveal, the answer would almost certainly be ‘under what conditions does the software fail’. This is obviously a useful piece of information, but it is not going to help you proactively ensure that your existing customer base remains unaffected by performance issues from one release to the next. Customers will not tolerate poor performance from an application, and with the abundance of choice in today’s market it does not take much to induce them to look elsewhere. This risk becomes even more relevant with subscription-based software, as the customer doesn’t just need to be impressed on day 1, but also 12 months later when the subscription is due for renewal.

Segmenting Groups

With this in mind I decided to focus the testing not on searching for the maximum tolerances of the various key features of the software, but on the specific performance requirements of Opsview’s current customer base. To do this it was first necessary to divide our existing customers into rational groups, allowing us to create representative test cases for each. The easiest and most logical way to do this was by subscription level, as this gives us a definite specification to test against. Now, there is every possibility that this testing won’t hit any limits within either Opsview or the hardware. What we will come away with, however, is a set of performance baselines that represent the majority of our customer base. Going forward we can then compare each new release of Opsview against these baselines to see what potential impact it may have on performance. It may be the case that the smaller groups exhibit no impact, whereas system resources in the larger groups, which were previously fine, are starting to be exhausted.
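As a rough illustration of what a per-segment baseline comparison might look like, here is a minimal sketch in Python. The segment names, metrics and 10% tolerance are hypothetical, not taken from Opsview’s actual test configuration.

```python
# Illustrative sketch only: segment names, metrics and the 10% tolerance
# are hypothetical, not Opsview's real test configuration.

# Baseline metrics recorded for each subscription-level segment on the
# previous release (e.g. average check latency in ms, peak CPU %).
baselines = {
    "small":  {"check_latency_ms": 120, "peak_cpu_pct": 35},
    "medium": {"check_latency_ms": 180, "peak_cpu_pct": 55},
    "large":  {"check_latency_ms": 310, "peak_cpu_pct": 80},
}

# The same metrics measured for the release candidate.
candidate = {
    "small":  {"check_latency_ms": 125, "peak_cpu_pct": 36},
    "medium": {"check_latency_ms": 210, "peak_cpu_pct": 61},
    "large":  {"check_latency_ms": 400, "peak_cpu_pct": 93},
}

TOLERANCE = 0.10  # flag anything more than 10% worse than its baseline


def regressions(baselines, candidate, tolerance):
    """Yield (segment, metric, baseline, new) for every metric that
    degrades by more than the allowed tolerance."""
    for segment, metrics in baselines.items():
        for metric, base_value in metrics.items():
            new_value = candidate[segment][metric]
            if new_value > base_value * (1 + tolerance):
                yield segment, metric, base_value, new_value


for segment, metric, base, new in regressions(baselines, candidate, TOLERANCE):
    print(f"{segment}: {metric} regressed from {base} to {new}")
```

Run against each release candidate, a report like this would show exactly which customer segments are drifting away from their baselines, even when the smaller segments look completely healthy.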

By then running these tests regularly we can identify situations such as this and proactively introduce pre-release optimisations that offset any potentially detrimental effect. Additionally, being able to identify which groups of customers may be affected will enable us to approach them proactively and advise them of potential hardware shortfalls before they move to the next version of Opsview. By doing this we should avoid situations where customers experience severe performance issues after upgrading, which could irrevocably damage their view of the software and create a serious impediment when it comes to renewal time.

Whilst the above approach will form the primary objective of the performance testing, there are also many other pieces of related information that will be collected and utilised. For example, as well as announcing the expected maximum performance thresholds, we should also set a minimum level of performance that we guarantee each version of Opsview will meet.
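One way to make such a guarantee concrete is to express the minimum level as a set of hard ceilings that a release must satisfy before it ships. The sketch below assumes hypothetical limits; the real guaranteed figures would come from the published specification for each subscription level.

```python
# Purely illustrative ceilings; real guaranteed minimums would come from the
# published specification for each subscription level.
GUARANTEED_CEILINGS = {
    "small":  {"check_latency_ms": 200},
    "medium": {"check_latency_ms": 300},
    "large":  {"check_latency_ms": 500},
}


def release_gate(measured, ceilings=GUARANTEED_CEILINGS):
    """Return a list of (segment, metric, value, ceiling) breaches;
    an empty list means the release meets the guaranteed minimum."""
    breaches = []
    for segment, metrics in ceilings.items():
        for metric, ceiling in metrics.items():
            value = measured[segment][metric]
            if value > ceiling:
                breaches.append((segment, metric, value, ceiling))
    return breaches
```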

This customer-led approach to performance testing, if managed correctly, stands to add significant value: if you can address issues before they start affecting your existing customers’ experience, why would they look elsewhere for alternatives?
