How to Do Performance Testing: Part 4 – Automated Performance Tests on Demand In The Cloud

Posted: 18/09/2017
Reading Time: 4 minutes

This is the fourth part of a series of posts on How to Do Performance Testing. The series follows our author's journey in establishing performance testing in his team. So far, the series has looked back on the Setup of Performance Testing, Virtual Machines and Performance testing plan, and Cloning To The Cloud. This post will look at automated performance tests.

————————————————————————————————————————————————

If you read Setup of Performance Testing, Virtual Machines and Performance testing plan, and Cloning To The Cloud, you know that we got Frank, two clones of our physical lab, and an Azure Enterprise Agreement. No more worries about running out of our monthly Azure MSDN credits!


More Capacity

Starting in 2015, the idea of using Azure as an environment to build and maintain our automated performance tests evolved into using Azure as a place to run extra tests beyond what we could host in the physical lab. In each Azure clone, we configured the same Windows machine names, IP addresses, and SQL Server databases. This enabled us to port existing tests from our physical lab into Azure rather quickly. First, we ported our simplest Identity Director test, quickly followed by the RDP session load test for our RES ONE Workspace product.

We ended up creating more Azure lab clones, because this made it possible to run more tests in parallel each night. However, it became tedious to clone a lab through the Azure console. This is when we learned the Azure command line, which at the time was a mix of a NodeJS-based tool (the predecessor of the Azure CLI) and PowerShell.

Improving while Testing

It’s good to investigate ideas when they surface, but even if they qualify, there is no need to implement them right away: put just enough research into them and then put them on the backlog. To name a few, and what we did with them later:

  • a PowerShell script to create a VM in a Windows domain and assign a fixed IP address to the VM (implemented);
  • creating a lab clone entirely from PowerShell scripts, with no NodeJS involved (implemented);
  • investigating how to take snapshots in Azure, as we had with Hyper-V snapshots in the physical lab; VHD templates were the outcome of the investigation (not used yet);
  • investigating what to do when an OS disk becomes too small: a VHD in Hyper-V can be enlarged, but how do you do that in Azure? (only used manually);
  • how to stop and start VMs using a (then-preview) feature called Azure Automation (not implemented; for now, we use an on-premises system to schedule running tests in Azure).
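As an illustration of the first backlog item, here is a minimal sketch using the classic (Service Management) Azure PowerShell cmdlets of that era. All names, sizes, credentials and addresses are hypothetical placeholders, not our actual lab values:

```powershell
# Sketch: create a domain-joined VM with a fixed IP (classic/ASM cmdlets).
# $adminPassword and $domainPassword are assumed to be set beforehand.
$vm = New-AzureVMConfig -Name "AGENT01" -InstanceSize "Small" `
        -ImageName "WindowsServer2012R2Image" |
      Add-AzureProvisioningConfig -WindowsDomain -AdminUsername "labadmin" `
        -Password $adminPassword -JoinDomain "lab.local" `
        -Domain "lab" -DomainUserName "labadmin" -DomainPassword $domainPassword |
      Set-AzureStaticVNetIP -IPAddress "10.0.0.21"

New-AzureVM -ServiceName "perflab-clone1" -VNetName "perflab-vnet" -VMs $vm
```

Pinning the IP with Set-AzureStaticVNetIP is what made identically addressed clones reproducible.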

Sometimes, you run into problems that need an immediate fix:

  • Each VM reboot left behind more hidden Virtual Machine Bus Network Adapters, and once the number of hidden adapters reached about 100, the File System Redirector service seemed to stop working. We implemented a script that removes those adapters at boot.
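A boot-time cleanup along these lines can be sketched with devcon.exe from the Windows Driver Kit. This is an illustrative sketch, not our exact script; the output parsing assumes devcon's usual "instance-id: description" line format:

```powershell
# Sketch: remove hidden (non-present) Virtual Machine Bus Network Adapters.
# Requires devcon.exe (Windows Driver Kit); must run elevated, e.g. as a
# scheduled task at system startup.
& devcon.exe findall =net |
  Select-String 'VMBUS' |
  ForEach-Object {
      # Everything before the first colon is the device instance ID.
      $instanceId = ($_.Line -split ':', 2)[0].Trim()
      & devcon.exe remove "@$instanceId"
  }
```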


A problem with our Azure clones is that they are isolated: the IP addresses are identical, so you can’t connect them with a VPN. Azure came to the rescue with the Azure File Service (at that time in preview). This worked fine, and we embraced it as follows:

  • We scheduled a daily AzCopy job to transfer a zipped archive of the builds under test to a Storage Account in Azure. Then, in each clone, prior to running a test, the control server retrieved the package by accessing a shared drive.
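The flow above can be sketched with the classic AzCopy syntax of the time plus a standard net use mount of the Azure File share. The storage account name, key variable, share and paths are all placeholders:

```powershell
# On-premises side: push the nightly build archive to the Azure File share.
# $storageKey is assumed to hold the storage account key.
& AzCopy.exe /Source:"C:\builds\nightly" `
             /Dest:"https://perflabstore.file.core.windows.net/builds" `
             /DestKey:$storageKey /Pattern:"build.zip" /Y

# Inside each clone: the control server mounts the same share as a drive
# and copies the package locally before the test starts.
net use Z: \\perflabstore.file.core.windows.net\builds /user:perflabstore $storageKey
Copy-Item Z:\build.zip C:\tests\
```

Because the File share lives outside the clones, it reaches every isolated lab without any VPN between them.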


With all that functionality in place, what about performance? It became clear that the machines we were using were too slow. In particular, the Azure A-series disk performance of 500 IOPS was not enough and always felt sluggish. The SQLIO disk utility gave us quick and simple insight into the disk performance of a Windows drive letter. We experimented with combining disks into a stripe set, but felt it was too much work to set up over and over again. The solution was to use D-series VMs, which are a bit more expensive, but have faster disks. In particular, the temporary data disk D: was fast enough, so we could go with that. Prior to starting SQL Server, we first moved the two database files to a folder on D:, and then attached them to the SQL Server instance. Admittedly, it would have been easier to use faster C: drive storage, but then the VMs would have become too expensive.
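The database shuffle before each run can be sketched like this. The service, directory, database and file names are hypothetical; D: is the VM's temporary disk, which is wiped on redeploy, which is exactly why the copy-and-attach has to happen before every test:

```powershell
# Sketch: place the two database files on the fast temporary D: disk,
# then attach them to SQL Server (default instance assumed).
Stop-Service MSSQLSERVER -Force
New-Item -ItemType Directory -Path 'D:\SqlData' -Force | Out-Null
Copy-Item 'C:\Baseline\PerfDb.mdf','C:\Baseline\PerfDb_log.ldf' 'D:\SqlData\'
Start-Service MSSQLSERVER

# Attach the copied files as a database (requires the SQL PowerShell module).
Invoke-Sqlcmd -ServerInstance 'localhost' -Query @"
CREATE DATABASE PerfDb
ON (FILENAME = 'D:\SqlData\PerfDb.mdf'),
   (FILENAME = 'D:\SqlData\PerfDb_log.ldf')
FOR ATTACH;
"@
```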

Flexible lab access

RES hired new colleagues in Bucharest, Romania, and an updated version of Workspace had to be tested, both functionally and for performance, and compared with the existing version. Since we had already ported the RDP session Workspace product test to Azure, this was an excellent opportunity to simply provide an Azure clone of it:

  • It could run both on demand and scheduled;
  • It ran isolated, in parallel with our existing tests;
  • There was no need to host extra machines in our physical performance testing lab, or in Bucharest;
  • RDP response time was better;
  • Our colleagues could make their own modifications to our tests without causing problems;
  • It was easy to turn off when not in use and to throw away after use.

Higher Test Load

After the summer of 2015, most of the functionality of the physical lab had been ported to Azure. Now that we had the power of Azure, we wanted to put some more load on our Workspace Relay Server. In the physical lab, the Agents used to generate load for the Relay Server test were hosted on 10 VMs, running on 3 hypervisors. We did not want to simply copy 30 identical VHDs to Azure, because:

  • Each VHD needs periodic patching (Windows Update);
  • It takes time to add VMs and configure them, in case we needed more;
  • Each VHD consumes blob space (much cheaper than CPU time, but still);
  • VM startup time: in the classic Azure Cloud Service, you can start or stop only one VM at a time.

Azure Worker Roles (today: scale sets) proved to be a good implementation for the Agent-type VMs:

  • Tell Azure how many Windows machines we need;
  • Let Azure start the machines and join them to the Windows domain;
  • Run a post-installation script to install the necessary tools.
    • After that, the test script was executed to install the Workspace Agent software.
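Today the same pattern maps onto VM scale sets. With the current Az PowerShell module, a minimal sketch of the "tell Azure how many machines" step, using the simplified quick-create parameter set, looks like this (all resource names and counts are placeholders):

```powershell
# Sketch: ask Azure for N identical Windows agent instances as a scale set.
# Domain join and tool installation would be layered on via VM extensions
# or a post-installation script, as described above.
$cred = Get-Credential   # local admin credentials for the instances
New-AzVmss -ResourceGroupName 'perflab-rg' -Location 'westeurope' `
    -VMScaleSetName 'agents' -InstanceCount 10 `
    -VirtualNetworkName 'perflab-vnet' -SubnetName 'agents' `
    -PublicIpAddressName 'agents-ip' -LoadBalancerName 'agents-lb' `
    -Credential $cred
```

Scaling up later is then a matter of updating the instance count rather than cloning and configuring VHDs by hand.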


Lessons Learned

  • Standardizing on one scripting language, PowerShell, for both test scripts and the framework made new development and maintenance more efficient.
  • Provided there is enough disk performance, performance tests in Azure run just fine.
  • Save money: cloud VMs only incur cost when turned on, therefore:
    • only run tests when there’s a new build with relevant changes;
    • when a project is abandoned, just stop scheduling the test.
  • Generating Azure VMs (scale sets) just in time from patched-up OS templates saves a lot of time and effort in VM maintenance.
  • Lab clones provide flexibility:
    • isolating test development and maintenance from production;
    • running tests in parallel;
    • providing teams in other locations with excellent-quality access to a lab.

Time’s up. In the next episode we’ll look at test result reporting, and do some serious reflection, like: why do we still have that lab of physical machines?

Read Part 5: Integrate performance tests into teams


Blog Post Added By

Bart has worked in Ethernet and ATM computer networks since 1987, initially in support, and from 1995 at FORE Systems in software maintenance of network interface device drivers and network management products. He joined RES in 2009 as a tester, and has since focused on developing automated performance testing in an Agile environment.
