Web services performance testing: a pilgrim’s progress, part 2

Here is where I vent some frustration. I’m self-taught on all the tools I use, and I’m the only person in my shop who knows how to use them this way. What’s more, I’m the only Java/Groovy person at my shop right now. Being an autodidact can make one feel nice and self-sufficient, but the (large) holes in my knowledge are starting to cause me trouble. Why does the tool I use appear to do something very different from what I expect after reading the documentation? How do I instrument the tool to put load on a Web site through the UI, or should I be using a different tool?

Fortunately there’s been a shift in leadership at my workplace, so if I need to make the case for training, I’m more likely to be heard. I’ll probably be doing that shortly. That being said, if the training doesn’t exist, I’m not sure what I will do. Is there training for JMeter, or do developers just pick it up on their own? And if you don’t work in a Java shop, what do you do? Should I start looking into LoadRunner? I suppose I could put this question out to sqa.stackexchange or Software Testing Club and see what I come back with.

The testing community needs good TESTING tools – which are not necessarily automation tools – and reliable documentation, training, and support on those tools. There is DEFINITELY a market for this.

Also: I need the support of prod support/sysadmin types who, frankly, have better things to do than monitor a service in test for hours a day. Now that several stress/performance tests have yielded conflicting results, my best bet is probably to learn how to use the monitors they use and conduct some tests on my own time.

Finally: it’s an education to watch how quickly other people (and I) can fall into the confirmation bias trap. Example: on Friday I ran a low-volume performance test against the production region, one I’ve run several times before, apparently without incident. Sure enough, the system most likely to have trouble with the load I create started going haywire during the test. It continued to do so after I shut down the script, which makes me wonder whether my tool’s UI is telling me everything I need to know. (A tool that monitors incoming requests against the system under load would be helpful here.)
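I don’t have that tool yet, but even something as crude as tailing the service’s access log and counting new lines would give me an independent view of the incoming traffic, separate from whatever my load tool’s UI claims it is sending. Here’s a rough Groovy sketch of the idea; the log path and the polling window are made up, and the real thing would depend on what the service actually logs:

    // Count incoming requests by watching the service's access log,
    // assuming one log line per request.
    // The path and window size are placeholders, not my real setup.
    def logFile = new File('/var/log/myservice/access.log') // hypothetical path
    def windowSeconds = 5

    long offset = logFile.length() // start at the end: count only new requests
    while (true) {
        sleep(windowSeconds * 1000)
        long newLength = logFile.length()
        if (newLength < offset) offset = 0 // log was rotated; start over
        int count = 0
        logFile.withInputStream { input ->
            input.skip(offset)        // jump past what we've already counted
            input.eachLine { count++ }
        }
        offset = newLength
        printf('%tT  %d requests in last %d s (%.1f req/s)%n',
               new Date(), count, windowSeconds, count / (double) windowSeconds)
    }

Run on the box receiving the traffic while the test executes, this would tell me whether the request rate arriving at the service matches what the load tool says it is sending; a mismatch would be worth knowing about before blaming (or absolving) the load test.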

Right away, people (including me) wanted to ascribe the problems with the system in production to the load against the system under test. However, there’s really no proof of that unless it happens repeatedly, and repeated production failures are not something I would wish on anyone. Sadly, our test system is not an exact replica of our production system (shared resources get allocated to prod first), so I’m not sure what performance testing on the test system will tell me.

So I have some new goals today:

  • Get a monitor that gives a different view of the system under test (incoming requests), and learn how to use it
  • Seek out some REAL training on the tools I use, or on different ones. First, though, I have to find the right place to ask those questions.
  • See if performance testing in the test region will tell us anything useful.