How to Do Performance Testing Part 2 – Virtual Machines, Performance Testing Plan and Lessons Learned

This is part two of a three-part series on Implementing Performance Testing. You can read Part 1: Setup here and Part 3: Cloning to the cloud here. This post will focus on virtual machines and a performance testing plan.

————————————————————————————————————————————————

In my previous post, I described how, when we started performance testing at RES in 2010, we configured a Remote Desktop Server on a physical machine and put load on it from only three Agents. This post looks at how to complete a performance testing plan.

The story goes on:

It all changed in 2012, when a new server component was developed to concentrate traffic from multiple Agents to a Database. To test this new component, we needed 1000 Agents. Obviously, three machines with Agents were not enough, and where do you get 1000 machines from, where do you place them, and how do you maintain them?

Instead of acquiring 1000 machines with one piece of Agent software installed on each, we decided to host the processes of multiple Agents on a single machine. We purchased three blades, installed hypervisors, and deployed 10 Virtual Machines (VMs) per blade (the number of VMs was limited by the number of cores per blade), giving us 30 VMs in total. Each VM could host a maximum of 50 Agent processes.
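To put these numbers in perspective, here is a quick back-of-the-envelope check (a sketch in Python, using only the figures mentioned above) of why this setup covered the 1000-Agent target:

```python
# Capacity check for the virtualized Agent setup, using the figures from the text.
blades = 3
vms_per_blade = 10      # limited by the number of cores per blade
agents_per_vm = 50      # maximum Agent processes hosted per VM

total_vms = blades * vms_per_blade          # 30 VMs
max_agents = total_vms * agents_per_vm      # 1500 Agent processes

print(f"Total VMs: {total_vms}, maximum Agents: {max_agents}")
# 1500 Agents comfortably covers the 1000-Agent target.
```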

Our Agent software caches data from the Database, so an Agent needs disk space to store its cache. Initially we wanted to keep these caches on the blades’ local storage, but we soon found that it could not hold the caches of 50 × 10 processes per blade, and that its performance was not adequate for 10 VMs. As a workaround, we borrowed SAN storage from our IT department. This took us a long way, but space was limited, and we could not go beyond 1200 Agents (SSDs were not available to us in those days).
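A rough sizing sketch makes the problem visible; the per-Agent cache size and per-VM OS disk below are assumptions for illustration only, not figures from our actual setup:

```python
# Rough storage estimate for the Agent caches on a single blade.
# cache_per_agent_gb and os_disk_per_vm_gb are hypothetical values.
agents_per_blade = 10 * 50       # 10 VMs x 50 Agent processes
cache_per_agent_gb = 2           # assumed cache size per Agent
os_disk_per_vm_gb = 20           # assumed OS disk per VM

cache_total_gb = agents_per_blade * cache_per_agent_gb
required_gb = cache_total_gb + 10 * os_disk_per_vm_gb

print(f"Cache storage per blade: {cache_total_gb} GB")
print(f"Total per blade incl. VM OS disks: {required_gb} GB")
# With numbers like these, local blade disks fill up quickly,
# which is why we fell back to borrowed SAN storage.
```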

In addition to our server component on Microsoft Windows, a new development effort was launched for a Linux version. To be able to compare performance, we wanted to switch between Linux and Windows using the same Server hardware and the same IP address. We did not want to use our deprecated mechanism of switching Operating System images (see the first post), and decided to give virtualization a try, with the limitation that only one VM image would be active at a time.

So we did a Physical-to-Virtual (P2V) migration of the Server to a VM image, installed a hypervisor on the machine, changed the IP address, deployed the VM on the hypervisor, and wondered whether the measurement results would differ. In fact, it went pretty well in our situation, and we have not gone back to physical machines since; it is so much easier to test with VMs. Next, we created a VM with the Linux version of the server component, which made it possible to repeat the same test, with the test automation configured to turn on the right VM (Windows or Linux) before executing the tests.
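The switching step itself can be very small. The sketch below shows the idea in Python; the `hvctl` command and the VM names are placeholders, since the actual calls depend on the management tooling of your hypervisor (PowerShell, virsh, and so on):

```python
import subprocess

# Hypothetical hypervisor CLI ("hvctl") and VM names for illustration only;
# substitute whatever management commands your hypervisor provides.
VM_IMAGES = {"windows": "perf-server-win", "linux": "perf-server-linux"}

def activate_server_vm(platform: str) -> None:
    """Ensure only the requested server VM is running before a test run."""
    target = VM_IMAGES[platform]
    # Stop the other server image first, so only one VM owns the shared IP address.
    for name in VM_IMAGES.values():
        if name != target:
            subprocess.run(["hvctl", "stop", name], check=True)
    subprocess.run(["hvctl", "start", target], check=True)

# Example: run the same test suite against both platforms.
# activate_server_vm("windows"); run_performance_tests()
# activate_server_vm("linux");  run_performance_tests()
```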

While this was a great improvement, new limitations appeared:

  • Unfortunately, due to limited storage space, the test load could not go beyond 1200 Agents.
  • Due to the arrival of the Linux Server component tests, the number of tests increased. By the autumn of 2012, the time needed to run all tests only just fit within one night – there was no room for more nightly tests.
  • And as a result of having to develop and maintain tests during the day, nightly tests often broke down.

After the summer of 2012, things moved quickly: we created a plan to extend our on-premises test hardware. While discussing this plan, it was suggested that we also look at the Cloud as a possible place to host test machines, and I got a 3-month Azure trial license to investigate.

We had already purchased MSDN subscriptions to obtain test systems and keys as a replacement for the MSDN DVD distributions, and when my subscription became available in October, I found out it included Azure time! At the end of October 2012, a meeting with Microsoft to find out whether it would be possible to host test systems in Azure only raised more questions. Therefore, in November, I joined an Azure Acceleration Lab, which gave me access to people with lots of hands-on experience, although unfortunately not much of it involved hosting test machines.

Eventually, at the end of November, at an agile testers conference, I was able to speak to someone experienced in performance testing in the cloud. This person convinced me it could be done, provided you keep an eye on the performance of the resources you are given.

Lessons learned:

  • As long as you understand how hardware timesharing works, virtualization makes performance testing much more agile.
  • It is important to read about, and speak with, people in the testing community!
  • Embracing new developments simply takes time. Don’t wait until things get stuck; keep exploring new things alongside your daily work.

This was intended to be a guide to a performance testing plan and how to go about Performance Testing. Being convinced that hosting automated performance tests in Azure can be done is one thing; actually getting there is something else. More about that in the next episode.

Read How to Do Performance Testing Part 3 here

About the Author

Bart

Grew up with packet radio and X.25, got a job working with Ethernet on yellow cables, then became interested in PCs and the internet, got involved with Windows network card device drivers, Java and network management systems, and now I’ve got lost in Azure and performance test (automation).
Find out more about @bwithaar