August 20, 2015 at 4:54 pm #9103
Ronan Healy (Keymaster) @ronan
I hadn’t thought about testing and time management before, for some reason, until I read a blog post on what you should test when you are running out of time for testing.
So that got me thinking: if you are running out of time to test, what should you test? What do you prioritise? I suppose it depends on what area of testing you are working in.
Is it a case that some tests should always be run? Or is it a question of risk analysis, factoring in which tests cover the most important areas of a product, etc.?

August 21, 2015 at 9:42 am #9106
Patrick (Participant) @sysmod

August 21, 2015 at 9:51 am #9107
Ramesh (Participant) @rameshkumar20
From my experience it is always important to prioritize tests from the beginning of the project onwards. Especially in the Agile world, customers expect an executable piece on a regular basis. Waiting until the end to prioritize the tests would lead to missing key test cases.
We used to categorize the tests as P1, P2, P3 and P4. All the test cases were automated. P1 and P2 tests were scheduled to execute on a daily basis, while P3 and P4 were added at the time of sprint closure. Those which could not be automated were marked as Manual.
What happens if there is not enough time to test before delivery?
Because all the test cases were automated and P1 and P2 were executed daily on the latest build, there weren’t many problems even when we had to deliver the product within a short span of time.
Manual QA would focus on non-automated test case execution along with ad-hoc testing, whereas automation took care of P1 and P2 test case execution. Since automated testing consumes machine time rather than tester time, it didn’t impact the delivery.
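As a rough sketch of the scheme described above (the suite names, triggers, and data structures here are invented for illustration, not Ramesh’s actual tooling):

```python
# Sketch only: choose which automated suites to run for a given trigger,
# mirroring a P1-P4 priority scheme. All names here are hypothetical.

SCHEDULE = {
    "daily": {"P1", "P2"},                       # minimum regression, every build
    "sprint_closure": {"P1", "P2", "P3", "P4"},  # full automated pass
}

def suites_to_run(test_cases, trigger):
    """Return names of automated tests whose priority matches the trigger."""
    wanted = SCHEDULE[trigger]
    return [name for name, prio, automated in test_cases
            if automated and prio in wanted]

tests = [
    ("checkout_happy_path", "P1", True),
    ("search_filters",      "P2", True),
    ("csv_export",          "P3", True),
    ("ui_pixel_check",      "P4", False),  # cannot be automated: marked Manual
]

print(suites_to_run(tests, "daily"))  # only the P1 and P2 suites
```

The point of the split is that the daily run stays fast while the full set still gets exercised before sprint closure.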
This is the approach we have followed for the last four years, and we never faced any issues even when delivery timelines were tight.
Feel free to add to or comment on the points above, as they are just from past experience.
Ramesh Kadali

August 21, 2015 at 10:00 am #9108
aditya (Participant) @adihere
Prioritisation is the first obvious option, and there are many ways to get there: risk-based testing or a simple MoSCoW classification!
How about asking the team to help?
– developers can test, and so can business analysts, delivery managers…
Crowd-source it within your organisation, or outside it through platforms like uTest (if feasible).
adi

August 21, 2015 at 10:33 am #9113
Anonymous (Inactive)
I think you would inevitably use a prioritised approach, with input from the various stakeholders, as has been discussed here.
If you are fortunate enough to supplement this with any automated tests that have been developed by this stage, then you are in a good position, as you will surely also want to run at least a minimum smoke test of the overall basic functionality at that stage of the project/delivery.
Daren

August 21, 2015 at 10:45 am #9114
Ramesh (Participant) @rameshkumar20
Yup, but it’s not a smoke test. It’s what we termed Minimum Regression, comprising the P1 and P2 tests.
Those are high-priority tests that are mandatory to execute for every build.

August 21, 2015 at 1:54 pm #9115
Eran (Participant) @ek121268
Especially in the mobile space, whether you work in an Agile method or not, the prioritization of tests should also include the devices/OS/environments on which you test your apps.
If you are short on time, you should focus on the most relevant devices and operating systems and complement them with additional assets as time permits (http://go.perfectomobile.com/test-coverage-index/).
In general (for mobile or other platforms), automation is key to success and Continuous Quality is the enabler for such a method.
Try to implement CI (using Jenkins or similar) which can run a good subset of your functional and performance tests on nightly builds, and add to these cycles additional tests which are either more error-prone or harder to automate.

August 21, 2015 at 4:09 pm #9116
Charles Taylor (Participant) @charles-taylor
I always think about risk! I assume you are talking about testing things that have not been automated yet. In that case, I like to use a variation of the FMEA RPN (Risk Priority Number) system. Estimate 1 to 10 for each of the factors: s (severity), o (likelihood of occurrence) and e (likelihood of escape), then calculate the product.
s – Severity is how big an impact there will be to the customer if a failure mode occurs.
o – Occurrence assesses how likely it is that a failure mode will occur, which in this case is related to which part of the code base was touched. For example, if a new feature was thoroughly tested during development but some of that code affects a legacy feature that was not fully tested, then the likelihood of failure would be set higher for the legacy feature’s test cases.
e – Escape assesses the risk that you will not catch the problem if you do not run a specific test case. For example, you will be logging in many times during testing, so you would assign a low e and not usually run a specific login test, because if something is wrong with login you will catch it while executing other tests. However, if the login code was just changed, then e and o might be set higher than normal.
I have varied my use of this a lot over the years and have never bothered much with the forum arguments around FMEA and RPN for software. It is a great way to get a handle on work planning and to justify the plan to management. Keep in mind that if you are short on time then you will not have time to do this analysis! It should be done early in a project and should involve a cross-functional team, because a tester might not know the customer risk or all the risks and relationships in the code base. If you do the work, then the priority sort of falls into your lap, and the analysis itself is very eye-opening for everyone involved on the team.
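As a hedged sketch, the RPN calculation described above might look like this in Python (the ratings and test names are invented examples, not real project data):

```python
# Illustrative FMEA-style Risk Priority Number for test prioritization.
# RPN = severity * occurrence * escape, each rated 1 to 10.

def rpn(severity, occurrence, escape):
    for factor in (severity, occurrence, escape):
        if not 1 <= factor <= 10:
            raise ValueError("each factor must be rated 1-10")
    return severity * occurrence * escape

# Hypothetical test cases: (name, s, o, e)
candidates = [
    ("login_basic",           8, 2, 2),  # severe if broken, but exercised constantly
    ("legacy_report_export",  6, 7, 8),  # indirectly touched by new code, rarely exercised
    ("profile_avatar_upload", 3, 4, 6),
]

# Run the highest-risk tests first
ranked = sorted(candidates, key=lambda t: rpn(*t[1:]), reverse=True)
for name, s, o, e in ranked:
    print(f"{name}: RPN = {rpn(s, o, e)}")
```

Note how the frequently exercised login test sinks to the bottom despite its high severity, exactly because its escape likelihood is low.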
Here is an example of the wealth of information available on FMEA and RPN: http://www.ihi.org/resources/Pages/Measures/RiskPriorityNumberfromFailureModesandEffectsAnalysis.aspx

August 24, 2015 at 9:38 am #9121
Raul (Participant) @raul
Regarding test prioritization: on a previous project we created a comprehensive list of test cases and ordered them by priority using TestLink. Test cases with prio 1 were included in a build verification suite, to be run once on every build. Test cases with prio 2 were up next, and test cases with prio 3 (edge cases, or those of low importance) were to be run only if time allowed for it.
So to summarize:
prio 1 – always run
prio 2 – try to run
prio 3 – optional to run
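A minimal sketch of this three-tier policy, assuming each test carries a rough duration estimate (the durations and names below are made up for illustration):

```python
# Sketch: prio-1 tests always run; prio-2 and prio-3 run only while the
# time budget lasts. Durations are hypothetical estimates in minutes.

def plan_run(tests, budget_minutes):
    """tests: list of (name, prio, est_minutes); returns names to run."""
    selected, spent = [], 0
    for name, prio, minutes in sorted(tests, key=lambda t: t[1]):
        if prio == 1 or spent + minutes <= budget_minutes:
            selected.append(name)
            spent += minutes
    return selected

suite = [
    ("build_verification", 1, 10),  # always run
    ("core_regression",    2, 30),  # try to run
    ("edge_cases",         3, 45),  # optional
]

print(plan_run(suite, 45))  # drops the optional edge cases
```

Sorting by priority first means lower tiers only consume whatever budget the higher tiers leave behind.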
We are trying to implement a similar system on our current project, to make sure that important test cases are run every time and we don’t waste too much time on tests of lower importance. I find this system simpler to develop and use, and relatively time-efficient, compared to doing a full risk analysis of the product.

August 24, 2015 at 10:15 am #9123
Saga B. (Participant) @sagadavids
Hi, I wrote an article on Test Huddle about this subject: how to prioritize tests when you have little time to test. Hope this will be helpful:

August 25, 2015 at 5:24 pm #9148
Stephen (Participant) @stevean
A bit late to this discussion – because of workload and compressed timescales.
In my current role, in fact in all my roles, I have faced compressed timescales and limited time and resources to perform the planned level of testing. My first approach is to explain the situation to the team, particularly the Product Owner and Project Managers. The situation being: we have a resource triangle of time, staffing/automation CPU, and amount of testing (coverage, etc.). You cannot change one, e.g. time, without changing at least one other, and probably both. The response is usually “Let’s throw more people at it!”, but that can only cover so much lost time. Remember, you can’t have 20 people dig one hole; it just doesn’t work. So more people or processing power is only part of the answer. Then we come to:
OK, so now we have the optimum number of test resources and we still don’t have enough time to perform all the planned tests. What do we do?
There has already been a lot of talk of risk-based testing and prioritising tests, and I agree; I do prioritise all testing at the outset of a project. But the real questions for the team are:
- What tests can we afford to drop?
- For the Product Owner: which requirements are you prepared to drop, or to release with minimal validation?
- For the Developers: which features are stable and safe enough to reduce the testing on without concern?
- For the Project Manager: can you really not give us more time? Are you happy to accept the additional risk? And if so, in your project/programme view, whom do you trust, and which areas do you see as safest for a light touch?
- You might even ask customers what they don’t mind getting a light touch, or being left until after the release.
Of course, you run the risk of all of these answers conflicting and getting no help at all. But at least you have some valuable information on which to base your decision, and support when presenting your revised test approach and, eventually, your results/report.
Of course, to do any of this you need enough notice that you’re losing time to test. If you don’t have time to research and plan, then it’s back to flying by the seat of your pants: a rushed assessment to focus testing efforts, assessing each test as you come to it (to do or not to do).
At this point your reporting is key and you need to be open and transparent about what you are doing, why and the risk/impact. Make sure everyone is aware of the risks every day, until either you get the help you need or people accept the risk. At least this way, each day you are performing your end goal of providing the information to the stakeholders to enable them to make decisions. If they play the Ostrich game… well what can you do with a headless chicken? If you feel people are not responding, hold a meeting to make sure everyone reads and understands your reports. It may be that your information is not clear for them. (But that’s another subject).
In short, find out what can be dropped, and report constantly on status, risk and omissions.

August 25, 2015 at 7:16 pm #9152
Jesper (Participant) @jesper-lindholt-ottosen
Prioritizing tests up front is one approach I have used a lot. We gave the priorities names according to the MoSCoW method: Must test cases, Should test cases, and the left-behind Could test cases. Until the day the manager came and said that only 20% of the identified test cases could be 1-Must cases… It was a really clever customer who then identified so many test cases that the ones she wanted as Must test cases were exactly 20% of the total. She realized that you can always identify more test cases.
My personal favourite would be Session-Based Test Management, perhaps supported by scrum boards and exploratory testing.

April 11, 2016 at 11:35 am #11363