Lessons Learnt Integrating Test Into The Agile Lifecycle

    #7234
    Fran
    Participant
    @franoh

    In my webinar I shared my experiences of the lessons learnt by agile teams when attempting to integrate test into the agile lifecycle. Many teams still struggle to achieve the required level of quality, and I hope my insights into why this is the case have helped.

    Have you any insights or lessons of your own on how to improve quality when integrating test into the agile lifecycle?

    Thanks
    Fran

    View Webinar & Slides Here

    #7240
    Soile Sainio
    Participant
    @soikka

    Hi Fran

    Thank you for a good presentation. You talked a lot about the DoD. In our organisation we use a Definition of Ready (DoR) alongside the DoD. The DoR includes a criterion that a backlog item must be testable, and it is the people with test competence who can evaluate whether that criterion is met. Quite often – in my opinion – test planning has already started in this phase. Are you familiar with the DoR, and what is your opinion of it?

    Br, Soile

    #7241
    Rene
    Participant
    @renetuinhout

    Thanks, Fran, for your great presentation.
    There are many quotes and diagrams I can use in an organisation that is setting up Agile, to reiterate the importance of including test competence in the agile teams!

    #7243
    James Readhead
    Participant
    @james-readhead

    Our delivery team is struggling to move from ideal days, after years, to a points-based estimation model – any suggestions?

    #7244
    Paul
    Participant
    @pjb1981

    Hi Fran,

    Thanks for the presentation.

    I found that using the first 1–2 days of a sprint to plan exploratory sessions, ‘gherkin-ize’ acceptance criteria and select stories that were prime candidates for automation was a good use of time.
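
    To illustrate what I mean by ‘gherkin-izing’, here is a minimal sketch in pytest-bdd style – the feature file, scenario wording and fixture names are invented for this example, not taken from the webinar:

        # discount.feature (illustrative)
        #   Scenario: Apply a discount code
        #     Given a basket totalling 100 euros
        #     When the customer applies the code SAVE10
        #     Then the basket total is 90 euros

        from pytest_bdd import scenario, given, when, then

        @scenario("discount.feature", "Apply a discount code")
        def test_apply_discount():
            pass

        @given("a basket totalling 100 euros", target_fixture="basket")
        def basket():
            return {"total": 100}

        @when("the customer applies the code SAVE10")
        def apply_code(basket):
            basket["total"] -= 10  # stand-in for the real discount logic

        @then("the basket total is 90 euros")
        def check_total(basket):
            assert basket["total"] == 90

    Once a criterion is written this way it is also an obvious candidate for the automation selection I mentioned.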

    This often meant that those smaller stories that you highlighted were ready for test by the time this exercise was complete.

    Cheers,
    Paul.

    #7245
    Nina Perta
    Participant
    @nina-perta

    Thanks Fran, great presentation.

    Collaboration seems to be the key to success, along with having the right mix of competences and personalities in your team.

    Good point about clarifying the definition of ‘Developer’; at least in my experience, this is quite often misunderstood here in Finland.

    I would like to hear a little more about how you think former test managers could and should use their experience to become better advisers. What skills should testers enhance to be more useful to agile teams? Do you think coding skills are always needed?

    #7246
    Fran
    Participant
    @franoh

    Thanks Soile,

    Yes, I have seen the Definition of Ready used in some teams to good effect. In addition to the testability criterion I often see INVEST used:
    Independent
    Negotiable (a requirement rather than a prescriptive solution)
    Valuable
    Estimable
    Small
    Testable

    I think the whole team, including those with test competence, should be involved in evaluating the backlog items against these criteria.
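
    As a rough illustration only (the criteria keys and the example story are invented for the sketch), a team could capture that readiness check as simply as:

        # Minimal sketch of a Definition of Ready check against INVEST
        INVEST = ["independent", "negotiable", "valuable", "estimable", "small", "testable"]

        def is_ready(item):
            # 'Ready' only when the whole team agrees every criterion holds
            return all(item.get(criterion, False) for criterion in INVEST)

        story = {
            "title": "Customer can reset a forgotten password",
            "independent": True, "negotiable": True, "valuable": True,
            "estimable": True, "small": True,
            "testable": True,  # confirmed with the test-competent team members
        }
        print(is_ready(story))  # True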

    I agree that analysis/planning starts here, but this is at the backlog item level and there is still a need for bigger-picture test planning.

    Thanks again
    Fran

    #7247
    Fran
    Participant
    @franoh

    Thanks and Good luck!

    Fran

    #7248
    Stuart
    Participant
    @stuartpates

    I am involved with a global organisation where agile is an evolving culture and differs across locations and projects.

    Towards the end of your presentation you talked about the ‘Test Strategy’. One of the main conversations I am currently having concerns the project manager’s question: “How do I see the whole scope of testing when the Test Strategy only mentions areas of risk, testing tasks and guidance?” It seems management do miss the large Test Plan; I think this is more to do with having something to hit you with if it all goes wrong in live – but I could be wrong 🙂

    One of the proposals being thought of is to have Testing Epics (such as Performance, Security, etc.), so that scope which does not easily fit on a card can be captured and later cross-referenced to one or more “functional tickets” where developers are more comfortable. This gives the various managers visibility of the testing to be undertaken as the project evolves. It seems to fit with the slide showing usability and non-functional task effort increasing. Is it a valid approach? Do you see any flaws?

    #7249
    Fran
    Participant
    @franoh

    Hi James
    Get one or two teams to just try it out as an experiment – we should always be looking for ways to adapt and improve. The experience can then be used to judge its value for your teams.

    Maybe explain the benefits:
    – Relative sizing is probably easier to do and we are better at it.
    – My ideal day may be different from your idea of an ideal day.
    – Time-based estimates may therefore be less additive.
    – Time-based estimates are prone to becoming stale (if we become more productive, the estimate drifts and we may need to re-estimate). Points are more stable, and we just use velocity to adjust for changes in productivity (see the sketch below).
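
    As a rough sketch of that last point (the numbers are purely illustrative):

        # Points stay stable; velocity absorbs changes in productivity
        completed_points = [21, 18, 24, 23]          # points done in recent sprints
        velocity = sum(completed_points) / len(completed_points)

        remaining_backlog_points = 160               # the estimates themselves never change
        sprints_remaining = remaining_backlog_points / velocity

        print(f"velocity ~ {velocity:.1f} points/sprint")
        print(f"forecast ~ {sprints_remaining:.1f} sprints to finish")

    If the team gets faster or slower, only the velocity figure moves – the point estimates never go stale.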

    I would also highly recommend using planning poker for story point estimation, as it ensures all perspectives are represented and there is usually a significant improvement in the common understanding of the requirements after it is used – this is defect prevention, and it is very powerful.

    Thanks
    Fran

    #7251
    Fran
    Participant
    @franoh

    Thanks Paul
    Yes, that makes sense. Exploratory testing, though, can also be used in a more reactive manner, based on the results of earlier (automated) tests and an assessment of residual risk, weak areas and gaps in your testing. So a blend of planning some sessions and leaving room for others later can be useful.

    I agree… ATDD/BDD and any associated early definition of acceptance criteria/tests are powerful for preventing defects and getting testing started early and better integrated into sprints.

    I would still keep the focus on the test tasks for the highest-priority stories, so we try to get 2–3 stories fully done. Based on the guideline I quoted, the largest story is 6 days of effort, which includes the test effort. Even with only one person implementing and one performing high-level testing, that should still mean executing those tests after 4–5 days of effort; if two people are implementing in parallel, we should be executing within about 2 days…

    Cheers
    Fran

    #7253
    Fran
    Participant
    @franoh

    Thanks Nina
    Re test managers – for the line function, I believe they should lead the way on the agile test strategy: providing/sourcing agile-test-related training, provisioning test environments and tooling (related to the CI framework), hiring the right testers for the agile teams’ needs, setting up the test CoP, etc.
    For the release/project test manager, they should help with high-level risk analysis, test approach, test planning, etc. on the agile release/project. They may be a full-time member of an agile team doing this, but I often see that they are in short supply, so they get shared by a number of teams (acting as a test consultant/coach, helping to co-ordinate test dependencies between teams, etc.).

    Some of the above can of course be delegated to agile coaches or ScrumMasters who have an organisational focus and the required test competency.

    Re testers…
    It may help for testers to have coding skills, but no, I don’t think coding skills are a prerequisite. The slide which showed whole-team collaboration was based on a functional tester having a programmer automate their Q2 tests on their behalf. This approach is advocated by Lisa Crispin and Janet Gregory, who speak of testers needing ‘technical awareness’ (as they need to communicate effectively with designers/programmers and identify risks) but not necessarily programming skills. However, over time we can of course encourage testers to become more T-shaped – some may be more suited to enhancing their domain expertise, others to improving technical skills.

    Kind regards
    Fran

    #7254
    Fran
    Participant
    @franoh

    Thanks Stuart – interesting question.
    Firstly, to clarify that we are on the same page re the test strategy… I see it as covering the quality risks and then how these should be addressed in terms of the ‘levels’ or types of dynamic testing as well as static testing, e.g. some blend of static analysis, code reviews, unit testing, component integration testing, story/acceptance testing, security testing (functional tests, tools, static analysers, etc.) and various non-functional testing such as usability (static/dynamic), performance/stress, etc. It should cover techniques such as code coverage and relevant black-box techniques. It should address how all of these are spread across sprints and relative to release points. It can make the link to the definition of done, where the strategy effectively gets captured/implemented for the team and therefore determines the tasks they will have to do to be ‘done’. Again, we need to balance things so the strategy isn’t too prescriptive but instead is supportive.

    Managers wanting to see the whole scope of testing just need to know that working software is done – that is the key measure of progress. When the software is done, it includes whatever testing is required, as made explicit in the corresponding level of done (PBI, increment/sprint or release level). It’s OK to develop a test plan or include test planning in a release plan, but the danger is that we become plan-driven rather than embracing change. If test plans add value, they should still remain lightweight and adaptive.
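
    As a rough sketch of what I mean by the corresponding levels of done (the task names are illustrative, not a prescribed checklist):

        # 'Levels of done' captured as data the whole team can inspect
        DEFINITION_OF_DONE = {
            "story":   ["acceptance criteria automated", "unit tests green",
                        "code reviewed", "exploratory session held"],
            "sprint":  ["regression pack green in CI", "performance smoke test run",
                        "no open severity-1 defects"],
            "release": ["security scan complete", "usability review signed off",
                        "rollback plan agreed"],
        }

        def outstanding(level, completed):
            # Tasks at this level that still block 'done'
            return [task for task in DEFINITION_OF_DONE[level] if task not in completed]

        print(outstanding("sprint", {"regression pack green in CI"}))
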
    Again, the whole team are accountable, so they would need to ‘hit’ the whole team if something goes wrong in live (I would prefer to see them suggesting opportunities for improvement, of course!) – there are no functional silos to blame. Agile/lean are about systems thinking rather than locally optimising functional areas.

    Re the non-functional Epics – I would see them as requirements rather than testing tasks. They will of course generate testing tasks, to do security testing etc. A functional story/epic will result in functionality being implemented and associated testing tasks. A non-functional story/epic will be a constraint to be considered when developing the other functional stories, and will itself directly result in testing tasks. But it is still a requirement in the backlog, not a task. There is no generally accepted single approach to handling NFRs in agile – some teams stick them on the wall beside their sprint backlog to remind them of the constraints they must meet when developing functional stories. If the constraint refers to one story, it goes in as acceptance criteria for that story (I guess this would also work for a small number of stories). However, if it relates to large features/areas of the system, then I have seen teams put them into the definition of done to help generate testing tasks at the sprint (or release) level. The bottom line is that you need to define the requirement up front, design and develop functional stories with it in mind, and test to make sure it has been met.
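
    For a constraint that applies to a single story, the acceptance criterion can even be made executable. A minimal sketch (the endpoint and the 2-second threshold are invented for the example):

        # A story-level performance constraint expressed as an acceptance check
        import time
        import requests

        def test_search_responds_within_two_seconds():
            start = time.monotonic()
            response = requests.get("https://example.test/api/search", params={"q": "agile"})
            elapsed = time.monotonic() - start

            assert response.status_code == 200
            assert elapsed < 2.0, f"search took {elapsed:.2f}s, constraint is 2s"

    Constraints that cut across many stories are better handled via the definition of done, as described above.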

    Hope this helps a bit
    Fran
