Slow Down to Speed Up – Leveraging Quality to Enable Productivity and Speed
This topic has 22 replies, 9 voices, and was last updated 10 years, 1 month ago by Padmaraj.
October 1, 2014 at 8:50 am #4454
I hope you enjoyed my webinar. If you have any questions please type them below…
You can view the slides and webinar recording Here
October 1, 2014 at 1:06 pm #4466
It's OK – let's go! 😀
October 1, 2014 at 1:27 pm #4467
We see challenges in validating whether the team clearly understands the objectives behind the tests they are performing. How can this be addressed to benefit the outcome of the testing?
Thanks,
Vaibhav
October 1, 2014 at 1:33 pm #4468
The slides are not properly centered in the Citrix display.
October 1, 2014 at 1:49 pm #4469
In a room full of testers and developers, we all can't help but notice the incorrect spelling of Consultant 😛 ("Consultan"). This would be on the main title page 🙂
October 1, 2014 at 1:55 pm #4470
😀
The more I hear about testing in agile teams, the more amazed I am by my dev team – the functional tester speaking 😀
October 1, 2014 at 1:55 pm #4471
"Definition of Done – Sprint level: bugs committed in the sprint are resolved."
I disagree; it might be that the bug isn't that important and the fix is moved to the next sprint or even later. And/or the bug is found very late, leaving no time for fixing (even if testing is integrated as shown in example C).
October 1, 2014 at 1:57 pm #4472
@Pia, it depends whether you are working to a Zero Defects policy or not.
Personally I disagree with Zero Defects; I expect some defects to be identified as more costly to fix than they are worth, and that's what a release note is for.
October 1, 2014 at 1:59 pm #4473
Here we report (in Jira) only the bugs that the team cannot fix in the current sprint, so they can be handled as PBIs and planned for one of the next sprints, depending on the severity/priority of the bug.
October 1, 2014 at 2:00 pm #4474
Ha, Emma, you don't miss a thing. I was editing the slide and hadn't hit save 🙂
October 1, 2014 at 2:01 pm #4475
Vaibhav: "We see challenges in validating the understanding of the team on clarity of the objectives behind the tests they are performing. How can this be addressed to benefit the outcome of the testing? Thanks, –Vaibhav"
Fran: I think this comes down to the test strategy being defined, being clear, and using terminology everyone can understand. It should state the types of testing needed, why, their objectives, etc., so that the problem you describe is less likely to happen. Maybe combine this with some mentoring or review of team practices to highlight any issues/misunderstandings.
October 1, 2014 at 2:01 pm #4476
It's interesting to see how Agile has evolved. I remember my first agile project back in 2001 as being very much the first chart you showed, with the testing being a sprint behind.
Since then I have worked in the second chart, with 'system testing' at the end of the sprint. In this case the DoD differs depending on the person working on the story, i.e. the developers' DoD is not the same as the testers'.
I’ve yet to find a team that has evolved to the final chart where dev and test are simultaneous and symbiotic to the point of not being identified as separate tasks.
October 1, 2014 at 2:02 pm #4477
Hi Marzio,
I will edit the slides and audio together so that you can see everything 🙂
October 1, 2014 at 2:03 pm #4478
Emma: "In a room full of testers and developers we all can't help but notice the incorrect spelling of Consultant – 'Consultan'. This would be on the main title page."
I'll have a chat with Daragh! 🙂
October 1, 2014 at 2:04 pm #4479
Alt: "the more I listen about testing in agile teams the more amazed I am about my dev team – the functional tester speaking"
Is that in a good way?
October 1, 2014 at 2:09 pm #4480
Pia: "Definition of Done – Sprint level: bugs committed in sprint resolved. I disagree, might be that bug isn't that important and fix is moved to next sprint or even future. And/or bug found very late, leaving no time for fixing (even if integrated as shown in example C)."
I guess an agile team needs to decide what a 'bug' is. A bug should be defined as anything you intend to fix. In general I then see agile teams saying, based on this definition, that they are not done until the bugs are fixed, so they don't demo and get credit for work that is not 'done'. The unfinished story and its bug go back on the product backlog (with a new estimate) for the product owner to prioritise for the next sprint. It seems strict, but if you don't adhere to your definition of done it's a slippery slope…
October 1, 2014 at 2:10 pm #4481
Has anyone managed to bring UAT into the sprint? Or does it still get done after the sprint, possibly at the very end? As was mentioned, leaving defects to the end in UAT is costly and can destroy the benefits of agile development, which is what I find all the time where I am: either functional testing isn't being done against the correct user stories, or the customer is not getting what they want. So as a UAT Manager I am facing way too many defects, when I should just be 'demoing' the release.
If you have managed to bring it in, how?
October 1, 2014 at 2:16 pm #4482
Stephen: "It's interesting to see how Agile has evolved. I remember my first agile project back in 2001 as being very much the first chart you showed with the testing being a sprint behind. Since then I have worked in the second chart with 'system testing' being at the end of the sprint. In this case the DoD is different depending on the person working on the story, i.e. Dev's DoD is not the same as Testing. I've yet to find a team that has evolved to the final chart where dev and test are simultaneous and symbiotic to the point of not being identified as separate tasks."
I have seen many examples of scenario C. It is basically a by-product of the whole-team approach whereby there is no 'them and us' (i.e. developers and testers), just a team with a blend of competencies working together and sharing tasks to a common definition of done. Any test professionals are just another member of the team, involved right from the start. 'System' tests of the increment, when automated, can be incrementally built up and run on the evolving increment throughout the sprint. It may seem inefficient for testing, but it's a similar issue to saying performance testing should be left until just before release because it's more efficient to do it once at the end rather than incrementally.
October 1, 2014 at 2:19 pm #4483
Stephen:
We do UAT during the deployment, at the end of the sprint.
We have two days dedicated to deployment, UAT, planning & grooming, spikes and all the other things we don't want happening during the sprint.
We manage bugs raised on those days as new PBIs, or as a friendly chat with the team involved (depending on the severity/priority of the bug).
So far it works for our needs, but I cannot say whether it is the fairest way or not.
October 1, 2014 at 2:27 pm #4484
Stephen:
"Has anyone managed to bring UAT into the sprint? Or does that still get done after the sprint and possibly at the end? As was mentioned, leaving defects to the end in UAT is costly and can destroy the benefits of agile development. Which I find all the time where I am. Either functional testing isn't being done with correct user stories, or the customer is not getting what they want. So as a UAT Manager I am facing way too many defects, when I should just be 'demoing' the release. If you have managed to bring it in, how?"
Again, I usually see agile teams start with a weak definition of done whereby UAT is performed once at the end of the project, before a release, after a series of sprints. However, through engagement of business representatives at the sprint reviews/demos at the end of sprints, feedback is elicited to start to mitigate the risk of developing the wrong system. Formal UAT may then follow and be performed at key points through the project (not necessarily every sprint, but when some meaningful business features/workflows are available for scenario-based testing). It does require engagement with the business to help this change along, though, as the business is used to being involved only at the start and end in plan-driven projects. I have not yet seen UAT happening within sprints.
October 1, 2014 at 2:33 pm #4485
Fran:
I guess it's down to my recent exposure to true Agile. I've worked with so many teams that claim Agile but don't know much beyond a mini-waterfall approach.
Marzio:
Sounds like you have a good working relationship with end users. I'd love to be able to get our UAT into the development phase, but I'm struggling because the client isn't involved early enough and the offshore dev and system test teams seem afraid of anything customer-facing.
New task for me: get closer to the dev and system phases and encourage them to talk to the customer and involve them. That's a big one though.
October 1, 2014 at 4:00 pm #4487
Yes → "Is that in a good way?"
We have passing GUI tests in the definition of done. Even if the only thing pending is the GUI tests, the story does not get closed.
As the person with testing capability, I bring up the cases I'm concerned about (behaviour-wise); the guys discuss them and point out why they would or would not consider them a problem.
We don't automate everything, only the things that make sense and/or are cost-effective. If it's a check, it goes into the GUI tests; the only exception is the rare cases that are too expensive to test via scripts and don't hold massive value to the business. If a case for expanding the framework is presented that holds value, the devs help to expand the testing framework (introduced testability plus actual improvements to the framework).
We don't work like clockwork, and we still have loads to improve, but we don't have much of the waste that could be there 🙂
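For readers less familiar with this setup, here is a minimal sketch of the kind of GUI check that might sit behind such a definition of done, assuming a Selenium-style setup in Python. The URL, element IDs and credentials are invented for illustration and are not taken from the team described above.

```python
# Illustrative only: a Selenium-style GUI check that could gate a story's
# definition of done. The URL, element IDs and credentials are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_login_lands_on_dashboard():
    driver = webdriver.Firefox()
    try:
        driver.get("https://example.test/login")                  # hypothetical app under test
        driver.find_element(By.ID, "username").send_keys("demo")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        # The agreed behaviour: a successful login shows the dashboard page.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()
```

If a check like this is red, the story stays open – which is exactly the "story does not get closed" rule described above.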
October 3, 2014 at 8:10 am #4550
Hi,
I'd like to draw attention to "Testing Pyramid & Inverting the Testing Pyramid": http://blogs.agilefaqs.com/2011/02/01/inverting-the-testing-pyramid/
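For context, the pyramid the article discusses puts many fast unit tests at the base, fewer service/API tests in the middle, and only a handful of slow GUI tests at the top; "inverting" it means most checks end up at the GUI level. A rough sketch of that layering in Python/pytest terms – the function and test names are invented for illustration and are not from the linked article:

```python
# Illustrative sketch of the classic testing pyramid, not code from the article.

def apply_discount(price: float, rate: float) -> float:
    """A tiny piece of production logic exercised by the example tests."""
    return round(price * (1 - rate), 2)


# Base of the pyramid: many cheap, fast unit tests against pure logic.
def test_discount_is_applied():
    assert apply_discount(100.0, 0.1) == 90.0


def test_zero_rate_leaves_price_unchanged():
    assert apply_discount(49.99, 0.0) == 49.99


# Middle layer: fewer service/API tests that exercise a component end to end.
# Top of the pyramid: a handful of slow GUI tests, like the Selenium sketch
# earlier in this thread. An inverted pyramid has most of its checks up there.
```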