When I set out to develop a high-level test strategy for my latest project, I knew that I wanted to learn how to conduct meaningful performance tests. I had never done any kind of performance testing before, but I knew it was time (probably past time) to cross the Rubicon.
I am the lead on the project, and my own testing effort involves Web services, so I had to figure out:
- Which aspects of performance I wanted to look at – meaning which questions I wanted to ask about performance
- How to use the tools at my disposal – or learn how to use FOSS tools – to answer those questions
- How to report results so that my project team could use the information I found
Now, if I wanted to learn how to write a Python program, I would have numerous online and print resources at my disposal. If I wanted to learn how to test Web service performance, I would have to look elsewhere. I became aware that I didn’t even know the correct questions to ask.
I knew that some of my company’s online applications had had performance issues in the past, so I consulted with the people who had looked at those issues in depth.
I also looked at the testing tools we had in-house that could measure performance; their documentation yielded some information on possible questions I could ask.
It seemed that one critical (and obvious) question would be: what is the response time for a request? Even that question poses several new ones, though. Among those subquestions are:
- How many requests should I submit to get a decent sample size?
- Should I space those requests out evenly over time? Or should I vary their frequency?
- How do I make sure I get a realistic sample of the request data that could be submitted to the service?
- Does response time vary predictably with any other parameter? How about the size in bytes of the response? What about other load on the system that the Web services under test share with other applications?
- Should I use our production region to do performance testing? Or can I get away with using the test region? (It turned out that for various reasons our test region would not give us a realistic idea of performance. My tests, run in parallel in both regions, established this beyond a shadow of a doubt. So I had to bargain with our prod support folks to continue to run my tests in the prod region.)
- Should I look at average or median response time? (I had to refresh my decades-old introductory statistics course knowledge with online resources to answer that question.)
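To make a few of these subquestions concrete, here is a minimal sketch (not my actual test harness) that submits a series of evenly spaced requests, times each one, and reports both the average and the median response time. The `time_one_request`, `run_samples`, and `fake_request` names are my own illustrative inventions; in a real run, the stubbed-out call would be replaced with an actual HTTP request to the Web service under test.

```python
import statistics
import time

def time_one_request(send_request):
    """Time a single request in milliseconds. send_request is a
    callable that performs the actual call to the service under test."""
    start = time.perf_counter()
    send_request()
    return (time.perf_counter() - start) * 1000

def run_samples(send_request, n_requests=30, spacing_s=1.0):
    """Submit n_requests evenly spaced requests and collect timings.
    n_requests addresses sample size; spacing_s addresses frequency."""
    timings = []
    for _ in range(n_requests):
        timings.append(time_one_request(send_request))
        time.sleep(spacing_s)
    return timings

# Stand-in for a real service call (hypothetical): just sleeps 10 ms.
def fake_request():
    time.sleep(0.01)

timings = run_samples(fake_request, n_requests=5, spacing_s=0.0)
# The median is less sensitive than the mean to a few slow outliers.
print(f"mean:   {statistics.mean(timings):.1f} ms")
print(f"median: {statistics.median(timings):.1f} ms")
```

Varying `spacing_s` (or randomizing it per request) is one simple way to experiment with the even-versus-varied frequency question above.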
I also had to look at the performance requirement that my project team had stipulated. I learned early on from an old hand that without a specific performance requirement, your performance data will not be terribly useful to the team. Note, though, that your data might well help establish a realistic performance requirement where there is none.
More detail to come.