- July 17, 2015 at 4:42 pm #8807
Scenario: you’re a small team heading for Release_1. You have some automated tests and a large bucket of functional tests, say 3,000. The manual tests are prioritised, so you target the higher-priority tests first. The testing effort could take, say, three weeks with a handful of testers. The industry, however, expects a faster release turnaround. You finish Release_1 and don’t want to go through a similar test cycle again; now you’re preparing for Release_2. Bear in mind that some products may be very technical to automate, so manual testing is currently required, which does slam one into Waterfall.
What could be streamlined more to make this process far more efficient and fun?
How could we as testers be more creative about this (almost like throwing AllPairs at a release to glean as much coverage in the least amount of time)?
One could do one or more of the following:
– Prioritise your test requirements and test only the higher-priority requirements
– Code more automated tests (a lot of time)
– Get more testers (budget constraints)
One thought we had was to put a hold on the function tests and focus instead on creating User Scenarios incorporating a number of functions. So instead of testing 100 individual functions, we execute 10 User Scenarios in less time, covering all 100 function areas. The function tests would still be generated during a Kanban or Sprint cycle, so they could still be referenced.
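The AllPairs idea mentioned above can be made concrete. Here is a minimal greedy pairwise-suite sketch (the parameter names and values are hypothetical examples, not from the scenario): instead of testing every combination of functions or configurations, pick full combinations until every pair of values has appeared together at least once.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedily pick full combinations until every value pair is covered."""
    names = list(params)
    values = [params[n] for n in names]
    pair_indices = list(combinations(range(len(names)), 2))
    # Every (parameter, value) pairing that must co-occur in some test.
    uncovered = set()
    for i, j in pair_indices:
        for va, vb in product(values[i], values[j]):
            uncovered.add((i, va, j, vb))
    suite = []
    candidates = list(product(*values))
    while uncovered:
        # Choose the candidate test covering the most still-uncovered pairs.
        best = max(candidates, key=lambda c: sum(
            (i, c[i], j, c[j]) in uncovered for i, j in pair_indices))
        suite.append(dict(zip(names, best)))
        for i, j in pair_indices:
            uncovered.discard((i, best[i], j, best[j]))
    return suite

params = {"browser": ["Chrome", "Firefox"],
          "os": ["Windows", "Linux"],
          "role": ["admin", "user"]}
suite = pairwise_suite(params)  # typically 4-6 tests instead of 8 exhaustive
```

The same trade-off applies to the 10-scenarios-for-100-functions idea: you give up exhaustive coverage of combinations in exchange for a much smaller run that still touches every pair.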
I’d be interested in your thoughts, experience and expertise.
Graeme

- July 21, 2015 at 7:13 pm #8838
Jesper Participant @jesper-lindholt-ottosen
Set a fixed amount of time for testing, and test only what is achievable in that timeframe. … Quite a bomb, but think about it. The amount of testing is a decision: if the business requires less than three weeks, and that is more important than the amount of testing, then make it so. If they don’t agree that testing can be “reduced”, then ask for the policy / reasons behind that, compared to time to market.
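A minimal sketch of that "fixed timebox" idea, assuming each manual test has a rough priority and duration estimate (the test names and numbers here are made up for illustration): sort by priority and fill the box with whatever fits.

```python
def plan_timebox(tests, budget_minutes):
    """Greedily fill a fixed timebox with the highest-priority tests that fit.

    Each test is a dict with: name, priority (1 = highest), minutes.
    """
    chosen, used = [], 0
    for t in sorted(tests, key=lambda t: t["priority"]):
        if used + t["minutes"] <= budget_minutes:
            chosen.append(t["name"])
            used += t["minutes"]
    return chosen

backlog = [
    {"name": "login smoke", "priority": 1, "minutes": 30},
    {"name": "payment flow", "priority": 1, "minutes": 60},
    {"name": "report export", "priority": 2, "minutes": 45},
    {"name": "settings page", "priority": 3, "minutes": 45},
]
plan = plan_timebox(backlog, budget_minutes=150)
# → ['login smoke', 'payment flow', 'report export'] (settings page doesn't fit)
```

The point is not the code but the conversation it forces: whatever doesn't fit in the box is explicitly, visibly not tested, and the business owns that decision.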
(Side note: challenge the claim that the “industry” requires faster testing. Perhaps they want faster time to market for the whole process. Perhaps the market doesn’t want fast crap, but a-bit-slower good stuff. Yet sometimes the first mover takes it all.)
One formal approach to timeboxed testing is session-based test management. It can work with exploratory test cases as well as prepared test cases / scripts. The whole idea is the timebox > refactor cycle. Or use Scrum, or perhaps a Kanban board: a separate one for the testing tasks or testing timeslot?
Exploration practice examples here:
Or apply some test case reduction techniques, such as equivalence partitioning of test cases.
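To illustrate equivalence partitioning with a toy example (the `classify_age` function and its boundaries are hypothetical, not from the scenario): inputs that the system treats the same belong to one partition, so one representative per partition, plus the boundary values, replaces testing every possible input.

```python
def classify_age(age):
    """Hypothetical system under test: classify a customer's age."""
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    if age <= 65:
        return "adult"
    return "senior"

# One value inside each partition plus the boundaries,
# instead of exercising every age from -1 to 120.
test_inputs = [-1, 0, 17, 18, 65, 66]
results = [classify_age(a) for a in test_inputs]
# → ['invalid', 'minor', 'minor', 'adult', 'adult', 'senior']
```

Applied to a 3,000-case bucket, the same reasoning often reveals clusters of cases that exercise the same behaviour and can be collapsed into one.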
Discuss the level of coverage actually required:
https://jlottosen.wordpress.com/2012/11/05/fell-in-the-trap-of-total-coverage/ (my blog)

- July 23, 2015 at 11:41 am #8847
Thanks Jesper, some food for thought. Another concept to throw into the pot is DevOps; there’s an interesting talk from Brian Harry:
https://channel9.msdn.com/Events/Visual-Studio/Visual-Studio-2015-Final-Release-Event/Keynote-Visual-Studio-2015-Any-app-Any-developer#time=1h4m00s:paused

- July 23, 2015 at 1:54 pm #8849
Ron Participant @ronp
I think one would need to take company policy into consideration. What is the acceptable number of unknown bugs that could potentially be included in a release? Also, if a particular feature of a release has not been tested, would it be possible to leave that feature out of the next release and schedule it for the following one? Now that I’ve said it, it sounds like better management of the product backlog.

- August 27, 2015 at 2:32 pm #9169
Paul Participant @paul-madden
Hi Graeme, it’d be interesting to hear about the approach you took in this scenario and how it worked out (or how it is progressing)?

- August 27, 2015 at 3:54 pm #9171
Hi Paul, we’ve implemented pairing a Dev and a Tester for feature testing, once the feature is completed. At that point automated tests can be written as well. The pairing has been very welcome among our Devs; they actually enjoy it. How does this fit into the topic? Well, we noticed a larger grouping of bugs towards release completion on some features. The testers were so focused on user stories during feature implementation that the bugs were only being logged during exploratory sessions well after the feature was completed. We needed to allow for more feature exploration sooner, instead of bunching it up later. We’ll run with this change for the next release: 1) keep the automated tests stable and running, 2) streamline our exploratory sessions on our new features.