Recently, I was working as a performance tester for a cloud-based application using LoadRunner. While creating the performance scripts, I faced a few difficulties. In this blog article, I will discuss those performance scripting difficulties and how I overcame them quickly based on my understanding.
Some requests are missing:
For this cloud performance testing engagement, I was creating performance scripts using LoadRunner VuGen (2021) on a cloud virtual machine over VPN. While recording the scripts, I observed a mismatch of requests for a few transactions in the web scenarios. These web scenarios were recorded via the Web (HTTP/HTML) protocol.
As a best practice, I always record the baseline version with the LoadRunner tool twice with the same set of parameters, and record another baseline version with a different set of parameters, to ensure all the requests are correct and that the measured response times are accurate. This also helps in identifying parameters easily, both for parameterization and for server-side dynamic values that need correlation. In addition, I verify the requests with a web debugging tool such as Fiddler, comparing them thoroughly against the LoadRunner requests to confirm that all the requests are correct and sit under the correct transactions.
For this engagement, I initially thought it was just a mismatch of requests for the Java Server Pages, but later I noticed that some requests were missing altogether from the transactions. As this was a very time-bound engagement, I quickly tried different recording options. When that did not show much progress, I turned to the web debugging tool Fiddler. Fortunately, I could see all the requests in Fiddler. After thorough verification, I manually added the correct requests to the LoadRunner script under the appropriate transactions (even the Fiddler-to-LoadRunner conversion dropped some requests). My suggestion is to verify the requests immediately after recording and before performance scripting; otherwise the overall scripting, and especially script debugging, will take much longer.
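To illustrate what adding a missing request by hand can look like, here is a minimal VuGen-style sketch: a request observed in Fiddler but absent from the recording is rebuilt with web_custom_request and placed under its transaction. The URL, body, parameter, and transaction names below are placeholders I made up for illustration, not values from the real application.

```c
// Hypothetical missing request, reconstructed from the Fiddler capture
// and placed under the transaction it belongs to.
lr_start_transaction("T02_Submit_Order");

web_custom_request("submitOrder",
    "URL=https://app.example.com/order/submit.jsp",   // placeholder URL
    "Method=POST",
    "Resource=0",
    "RecContentType=text/html",
    "Mode=HTML",
    "Body=orderId={p_OrderId}&action=submit",         // placeholder body
    LAST);

lr_end_transaction("T02_Submit_Order", LR_AUTO);
```

Copying the method, headers, and body from the Fiddler session into web_custom_request keeps the replayed traffic faithful to what the browser actually sent.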
Object version changes on every page traversal
Another thing I observed in this engagement: every time you traverse a page, there is a value called Response.objVersion. This value changes for every page, and it also changes for the same page when you click it again. From a performance testing point of view, the script is executed iteration-wise, so this value changes every iteration. Initially, I tried different options, such as capturing the value at run time, without success. I then considered recording the value per iteration (for example, page 1 iteration 1: value 5; page 1 iteration 2: value 9; page 2 iteration 1: value 17; page 2 iteration 2: value 19; completely random numbers) and using different sets of requests with the changing values across iterations. However, while this idea might work for load/stress testing, which runs for one or two hours at most, it is not a good solution for longer-duration testing such as endurance. I also realized that if two different users ever ended up with the same value, there would be a chance of transaction failures.
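For context, the standard way to handle a server-side dynamic value like this is correlation: capture it from the response and substitute it into later requests. A sketch of such an attempt is below; the left and right boundaries are hypothetical, since they depend entirely on how Response.objVersion actually appears in the server response (in this engagement, capturing it this way was not successful).

```c
// Typical correlation attempt (sketch only): register a capture for the
// dynamic value before the request whose response contains it. The
// boundaries below are assumed, not taken from the real application.
web_reg_save_param_ex(
    "ParamName=p_objVersion",
    "LB=objVersion=",        // hypothetical left boundary
    "RB=&",                  // hypothetical right boundary
    SEARCH_FILTERS,
    "Scope=Body",
    LAST);

// Later requests would then send {p_objVersion} in place of the
// hard-coded value captured at recording time.
```

When correlation works, each virtual user carries its own freshly captured value, which avoids both the per-iteration drift and the duplicate-value risk described above.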
I explained these observations to the team, and we discussed whether we could comment out this attribute in all our requests from a performance test execution perspective. The team agreed, and I commented out the attribute in all the requests for all the traversed pages. We then verified this live by executing the requests from LoadRunner VuGen for multiple iterations with different users and checking the application logs for confirmation.
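The agreed workaround can be sketched as follows: the objVersion attribute is simply dropped from the request body, with the original recorded body kept as a comment for traceability. As before, the URL, page name, and body values are placeholders, not the real application's.

```c
// Sketch of the agreed workaround: the dynamic attribute is removed
// from the request body before execution. All names/values are placeholders.
web_custom_request("loadPage",
    "URL=https://app.example.com/page1.jsp",  // placeholder URL
    "Method=POST",
    "Resource=0",
    "Mode=HTML",
    // Original recorded body: "Body=pageId=1&objVersion=17"
    "Body=pageId=1",   // objVersion dropped, as agreed with the team
    LAST);
```

Verifying the replay against the application logs, as described above, is what confirms the server genuinely tolerates the missing attribute.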
This blog covered the performance scripting difficulties from my recent engagement and how I overcame them quickly based on my understanding. While these difficulties may be very specific to this engagement, I hope this post will be useful to others if they come across similar challenges in their performance testing.
Check out all the software testing webinars and eBooks here on EuroSTARHuddle.com