This is the final post in our series on how to do performance testing. The series has looked back on the Setup of Performance Testing, Virtual Machines and Performance testing plan, Cloning To The Cloud, and Automated Performance Tests on Demand In The Cloud. This final post looks at some of the lessons we learned along the journey.
————————————————————————————————————————————————
The work at RES has evolved from on-premises testing on recycled hardware to parallel testing in the cloud. It’s time to answer some basic questions and see what still needs to improve.
Goodbye physical lab
After all this work on developing cloud deployment, the question comes up: do we still need the physical lab? We turned it off earlier this year and never felt the need to start it up again, so in spring 2017 the systems were wiped and reconfigured for other possible duties. No, we don’t need the physical lab anymore.
Verify VM deployment
It would be wise to insert a VM deployment sanity test before starting the tests, to make sure the storage, CPU, memory and network provided are up to spec. Usually the cloud provides good VMs, but a couple of times Azure did not behave well, typically showing degraded disk performance. In those exceptional cases it would have been very nice to receive an early warning, skip the test run entirely, and avoid triggering an unnecessary investigation.
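We never built this check, but a minimal sketch in Python could look like the following. The thresholds, the 256 MB write and the idea of aborting with a non-zero exit code are illustrative assumptions, not the checks we actually ran at RES; a real check should also cover memory and network.

```python
"""Minimal VM sanity-check sketch. Thresholds and sizes are illustrative."""
import os
import tempfile
import time


def disk_write_mb_per_sec(size_mb: int = 256) -> float:
    """Write a temporary file and return the observed throughput in MB/s."""
    chunk = b"\0" * (1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        start = time.perf_counter()
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
    os.unlink(f.name)
    return size_mb / elapsed


def cpu_loop_seconds() -> float:
    """Time a fixed amount of arithmetic; lower is better."""
    start = time.perf_counter()
    total = 0
    for i in range(5_000_000):
        total += i * i
    return time.perf_counter() - start


if __name__ == "__main__":
    disk = disk_write_mb_per_sec()
    cpu = cpu_loop_seconds()
    print(f"disk: {disk:.0f} MB/s, cpu loop: {cpu:.2f} s")
    # Abort early instead of producing misleading performance results.
    if disk < 50 or cpu > 5.0:  # example thresholds only
        raise SystemExit("VM below spec -- skip this performance test run")
```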
Verify the build to be tested
Sometimes a (nightly) build produced incorrect output or no output at all. Our test scheduler did not know about builds, so the automated performance tests would be scheduled anyway: either the test ran (again) on an older build, or it failed while loading a broken build.
Of course, it only makes sense to execute an automated performance test after the build and a functional smoke (happy flow) test have completed successfully. This requires integration between the build system, the functional testing system and the automated performance testing system.
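As a sketch of what such a gate in front of the scheduler could look like; the URLs and JSON fields below are hypothetical placeholders for whatever your build and functional test systems actually expose:

```python
"""Pre-flight gate sketch: only schedule the performance test when the build
and the functional smoke test both passed. Endpoints are hypothetical."""
import json
import urllib.request

BUILD_STATUS_URL = "https://build.example.com/api/builds/latest"  # hypothetical
SMOKE_STATUS_URL = "https://tests.example.com/api/smoke/latest"   # hypothetical


def fetch_status(url: str) -> dict:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


def perf_test_allowed() -> bool:
    build = fetch_status(BUILD_STATUS_URL)
    smoke = fetch_status(SMOKE_STATUS_URL)
    # Both must refer to the same build number and both must have succeeded.
    return (
        build.get("result") == "success"
        and smoke.get("result") == "success"
        and build.get("number") == smoke.get("build_number")
    )


if __name__ == "__main__":
    if not perf_test_allowed():
        raise SystemExit("Build or smoke test not green -- skipping performance test")
    print("Gate passed, scheduling performance test")
```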
Integrate performance tests into teams
In our organization, the automated performance tests had been created and maintained by Frank and me, which made the two of us bottlenecks. But how do we get an agile team to care about test results if they did not create the tests themselves, and if they cannot immediately tell whether a test succeeded or failed?
In August 2016, Frank embedded into the RES ONE Workspace (Agent) development team that was building a renewed version, to make sure its performance was equal to or better than the existing version. I embedded into the development team building a new RES ONE Workspace (Portable) Relay Server, again to compare performance against the existing version. To be able to keep working on the test framework and still do some work for other teams, we each contributed at most 50% of our time per day. Both our test development work and the testing itself became part of the scrum process. It really did help to get the teams interested in test results!
However, we were not able to add logic to our automated performance tests to report a simple success or failure. Perhaps we focused too much on a generic solution, where a system is intelligent enough to interpret and compare graphs and raise a flag if the difference from yesterday’s graphs becomes too large. I guess something like that requires AI. But it is an important problem that will need to be solved: without that ability, a human is always required to look through the pictures and classify the results. Guess who…. right, that’s us, the two friendly guys who created all those tests…
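We never got this far, but a non-AI starting point could be a crude comparison of summary metrics against yesterday’s baseline. In the sketch below the CSV layout, metric names and the 10% tolerance are all illustrative assumptions:

```python
"""Crude pass/fail sketch (never built at RES): compare today's summary
metrics against yesterday's baseline and fail on regressions."""
import csv

TOLERANCE = 0.10  # fail when a metric regresses by more than 10%


def load_metrics(path: str) -> dict:
    """Read a CSV with columns: metric,value (e.g. login_time_s,4.2)."""
    with open(path, newline="") as f:
        return {row["metric"]: float(row["value"]) for row in csv.DictReader(f)}


def compare(baseline: dict, current: dict) -> list:
    failures = []
    for metric, old in baseline.items():
        new = current.get(metric)
        if new is None:
            failures.append(f"{metric}: missing in current run")
        elif old > 0 and (new - old) / old > TOLERANCE:
            failures.append(f"{metric}: {old:.2f} -> {new:.2f}")
    return failures


if __name__ == "__main__":
    failures = compare(load_metrics("baseline.csv"), load_metrics("current.csv"))
    if failures:
        raise SystemExit("FAILED: " + "; ".join(failures))
    print("PASSED")
```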
Test Reporting
Had the above improvements been in place, test result reporting could have been greatly improved to send a test result email only when really needed. So, it’s not good to:
- Send stakeholders an email about every test;
- Send no emails at all and tell stakeholders to check a website for test results (they forget, and results might get lost).
The test result email should provide just enough detail to attract attention:
- Pinpoint where the failure occurred: in the test deployment, the functional tests, or the performance test itself;
- State which test/subtest failed, give a reason string, and provide a URL to the archived details;
- Do send an email when a test was triggered or scheduled but turns out not to have run.
In short, if the test ran fine, don’t tell anyone.
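A sketch of what such failure-only reporting could look like; the SMTP host, addresses and the shape of the result dictionary are hypothetical:

```python
"""Failure-only reporting sketch: a short email with just enough detail and a
link to the archived results. Host, addresses and result shape are hypothetical."""
import smtplib
from email.message import EmailMessage


def report(result: dict) -> None:
    # Stay silent when everything passed.
    if result["status"] == "passed":
        return
    msg = EmailMessage()
    msg["Subject"] = f"[PERF] {result['test_id']} failed in {result['stage']}"
    msg["From"] = "perf-tests@example.com"  # hypothetical address
    msg["To"] = "team@example.com"          # hypothetical address
    msg.set_content(
        f"Test:    {result['test_id']}\n"
        f"Stage:   {result['stage']}\n"      # deployment / functional / performance
        f"Reason:  {result['reason']}\n"
        f"Details: {result['archive_url']}\n"
    )
    with smtplib.SMTP("smtp.example.com") as smtp:  # hypothetical SMTP host
        smtp.send_message(msg)
```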
Archive everything
Frank developed a prototype of this: at the end of each test run, simply zip the test folder in the Azure clone, named after its Test-ID, and use AzCopy to upload the zip to a test-specific container in a central blob location in our Azure storage. More generally, this is what’s needed (a rough sketch of the archiving step follows the list):
- Store data in CSV file format: easy to import/export, easy to store, and easy to access with a utility program (spreadsheet);
- Collect trace files of the applications under test during the test run;
- Collect system vitals of the VMs during the test run;
- Collect at least a minimal level of event logs/system trace files on the VMs during the test run;
- Store all of this in a simple file system folder per test, named after the Test-ID;
- Archive and compress for later use.
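The sketch below roughly approximates that archiving step (it is not Frank’s actual script): it zips the test folder named after its Test-ID and uploads it with AzCopy. The folder layout and SAS URL are placeholders, and the command assumes AzCopy v10 syntax.

```python
"""Rough approximation of the archiving step: zip the test folder and upload
it with AzCopy to a central blob container. Paths and SAS URL are placeholders."""
import shutil
import subprocess


def archive_test_run(test_id: str, sas_container_url: str) -> None:
    # Compress tests/<Test-ID> into <Test-ID>.zip in the current directory.
    zip_path = shutil.make_archive(test_id, "zip", root_dir=f"tests/{test_id}")
    # Upload the zip into the central blob container for this test.
    subprocess.run(
        ["azcopy", "copy", zip_path, f"{sas_container_url}/{test_id}.zip"],
        check=True,
    )


# Example (the storage account and SAS token are placeholders):
# archive_test_run("T-2017-0421-01",
#                  "https://mystorage.blob.core.windows.net/perf-archive?<SAS>")
```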
Have a website
The website should display:
- Currently running tests and their status;
- History of tests, their status and test results;
- Documentation on all tests;
- Option to export test results to PDF (compiles test history and test documentation into a document);
- Option to request tests on demand;
- Option to schedule tests;
- [Option to run tests after a build, though it’s usually more logical to administer these in the build system itself.]
The easiest web location to start with is a wiki related to your agile team; use, for example, Python to update the pages with the latest test results.
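For example, a small script run at the end of each test could push a summary to the wiki. In the sketch below the REST endpoint, token and payload format are hypothetical and would need to be adapted to whatever your wiki (Confluence, MediaWiki, etc.) actually exposes:

```python
"""Sketch of pushing the latest results to a team wiki page.
The endpoint, auth token and payload format are hypothetical."""
import json
import urllib.request

WIKI_PAGE_URL = "https://wiki.example.com/api/pages/perf-test-results"  # hypothetical
API_TOKEN = "replace-me"                                                # hypothetical


def publish(summary_lines: list) -> None:
    body = "\n".join(summary_lines)
    req = urllib.request.Request(
        WIKI_PAGE_URL,
        data=json.dumps({"content": body}).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()


# Example:
# publish(["T-0421-01: PASSED", "T-0421-02: FAILED (disk below spec)"])
```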
Goodbye automated performance testing?
Now that we:
- as performance testers are better integrated into the agile teams;
- have some automated functional tests in place to make sure the software build is eligible for the automated performance test;
- have reached a point where automated performance tests can simply report failed or succeeded;
why should functional and performance tests still be separated, in different frameworks, with different people working on them, and with different reports?
On top of that, in the cloud where we run the performance tests you have more flexibility to manage the larger number of different images and application setups that functional tests in particular require, since handling VMs and disk space is easier there.
Wrap up
This was the last blog on this topic. We hope you enjoyed the lessons and final thoughts we shared. Our company RES has been acquired by Ivanti, where Frank Brouwers continues the automated testing, while for me it’s time to start a new job. Goodbye from us, thanks for following, and happy testing!