    #21278
    @darwin

    What are the different levels of KPI for a software tester?

    #21300
    @rpwheeler

    As far as I see, it depends on different schools of thinking about testing.

    If you look at testing as a “factory line” activity, then the KPI is some numerical measure, like the number of bugs discovered in testing versus the number discovered in production.

    If you look at testing as an investigative / learning activity, there should be no KPI, because investigation and learning are not about KPIs.

    I chose the latter point of view. When investigating what others did / created, you never know what you will get. And in my experience, KPIs do nothing good. They treat humans like machines with some mechanical parameters.

    #21355

    Ken
    @kenm

    I’ve never been a huge fan of KPIs for testers. Ultimately, what you measure is what you get more of, as it becomes the sole focus, and is that the right thing?

     

    Testing is inspection… and finding defects. If that’s your sole focus, then great.

    I prefer to think of our profession as champions and custodians of quality, driving behaviours that prevent defects from occurring in the first place: well-groomed and reviewed user stories, test case review in advance of code development, etc.

     

    Ultimately, KPIs will be used to evaluate and compare testers. If you have a great QA driving defect prevention in a team, leading to few defects, versus a poorly performing team with a lot of defects, then the second tester, by KPI, looks better… I know I prefer the former by far: a lot more cost-effective software production and development. Hope it helps.

    [Typos edited by mod/DS]

    #21359
    @darwin

    @rpwheeler, I do agree that it depends on how we think about testing. But the point is to make it fit our environment and to help by measuring & monitoring performance against a metric.

    Key Performance Indicators for a tester in an organization should be based on their roles & responsibilities. Most importantly, they have to be discussed between the tester & test manager. Hence, for testers, KPIs can be modelled as follows:

    70% + 20% + 10%

    70% of which can be based on the day-to-day tasks (Eg: shipping the product with the least possible issues, improving the process, improving the knowledge base of your project)

    20% of which can be learning what you are really interested in (Eg: sitting with the developers of your team and understanding what they do and how they do it)

    10% of which can be trying out something new (Eg: trying out a new framework)

    These don’t necessarily have to be in the same split or proportion; it can vary. But the end goal is that the tester should continue doing the daily activities in the best possible way, and at the same time there should be enough room to learn and grow.

    Note: This answer is a summary of responses contributed by test professionals in a Slack channel.

    #21369
    @sagarleo1

    As rpwheeler said, it depends on different schools of thinking about testing. Still, you can refer to some of the points I’ve given below.

    1 – Active Defects

    Tracking active defects is a pretty simple KPI that you should be monitoring regardless. The Active Defects KPI is better when the values are lower. Every software IT project comes with its fair share of defects. Depending on the magnitude and complexity of the project, I have seen 250+ defects active at any given time. The word “active” for this KPI could mean the status is either new, open, or fixed (and waiting for re-test). Basically, if the defect is getting “worked”, then it’s active. As a Test Manager, you should set the threshold based on historical data of the IT projects you have oversight on. Whether that’s 100 defects, 50 defects, or 25 defects – your threshold will determine when it is OK and when it is not OK. Anything above the threshold you set is “Not OK” and should be flagged for immediate action.
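    The "active means still being worked" rule above can be sketched in a few lines. This is a minimal illustration, not from the post itself: the status names and the sample threshold of 50 match the description, but the dict field names are assumptions.

```python
# Hypothetical sketch of the Active Defects KPI: count defects whose status
# means they are still being worked, and flag the count against a threshold.
# Field names ("id", "status") are assumptions for illustration.
ACTIVE_STATUSES = {"new", "open", "fixed"}  # "fixed" = waiting for re-test

def active_defect_count(defects):
    """Count defects that are still being 'worked'."""
    return sum(1 for d in defects if d["status"] in ACTIVE_STATUSES)

def kpi_ok(defects, threshold=50):
    """Lower is better: anything above the threshold is 'Not OK'."""
    return active_defect_count(defects) <= threshold

defects = [
    {"id": 1, "status": "open"},
    {"id": 2, "status": "closed"},
    {"id": 3, "status": "fixed"},
]
print(active_defect_count(defects))       # 2
print(kpi_ok(defects, threshold=50))      # True
```

    The threshold itself would come from historical data on your own projects, as the post suggests.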

    2 – Authored Tests

    This KPI is important for Test Managers because it helps them monitor the test design activity of their Business Analysts and Testing Engineers. As new requirements are written, it’s important to develop associated system tests and decide whether those test cases should be flagged for your regression test suite. In other words, is the test that your Test Engineer is writing going to cover a critical piece of functionality in your Application Under Test (AUT)? If yes, then flag it for your regression testing suite and slot it for automation. If no, then add it to the bucket of manual tests that can be executed ad hoc when necessary. Our suggestion is to track the “Authored Tests” in relation to the number of Requirements for a given IT project. In other words, if you subscribe to the philosophy that every requirement should have test coverage (i.e., an associated test), then you should set the threshold for this KPI to equal the number of requirements or user stories outlined for a sprint. That would equate to one (1) test case for every requirement in “Ready” status.
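    The one-test-per-"Ready"-requirement rule could be checked as follows. This is a hypothetical sketch: the field names (`id`, `status`, `requirement_id`) are assumptions, not anything from HP ALM or the post.

```python
# Hypothetical sketch: which 'Ready' requirements have no authored test yet?
def authored_tests_gap(requirements, tests):
    """Return the 'Ready' requirement ids that lack an associated test."""
    covered = {t["requirement_id"] for t in tests}
    return [r["id"] for r in requirements
            if r["status"] == "Ready" and r["id"] not in covered]

requirements = [
    {"id": "REQ-1", "status": "Ready"},
    {"id": "REQ-2", "status": "Ready"},
    {"id": "REQ-3", "status": "Draft"},
]
tests = [{"id": "T-1", "requirement_id": "REQ-1"}]
print(authored_tests_gap(requirements, tests))  # ['REQ-2']
```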

    3 – Automated Tests

    We have to admit that this is a tricky KPI to track. Opinions abound on what to automate vs. what not to automate, as well as the costs associated with maintaining the automation of system test cases. Generally speaking, the more automated tests you have in place – the more likely it is that you’ll trap critical defects introduced to your software delivery stream. What we would suggest doing with this KPI is to start small and adjust upwards as your QA team evolves and matures. Set a threshold that 20% of test cases should be automated. Tracking this in HP ALM testing is simple to do through Project Planning and Tracking (PPT) – which is not available in HP Quality Center Enterprise Edition.
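    The suggested 20% starting threshold is simple arithmetic; a minimal sketch, with the `automated` flag field being an assumption for illustration:

```python
# Hypothetical sketch of the Automated Tests KPI against a 20% target.
def automated_share(test_cases):
    """Fraction of test cases flagged as automated."""
    return sum(1 for t in test_cases if t["automated"]) / len(test_cases)

cases = [{"automated": True}] + [{"automated": False}] * 4  # 1 of 5 = 20%
print(automated_share(cases) >= 0.20)  # True: meets the starting threshold
```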

    4 – Covered Requirements

    As a former QA Test Manager, this is by far my favorite KPI to track. Here we’ll track the percentage of requirements covered by at least one test. One hundred percent test coverage should be the goal for your QA organization in 2016. The validity of a requirement hinges on whether a test exists to prove whether it works or not. The same holds true for a test that lives in your test plan: its validity hinges upon whether it was designed to test out a requirement. If it’s not traced back to a requirement, why do you need the test? Every day as a Test Manager you should monitor this KPI and question the value of orphaned requirements and orphaned tests. If they are orphaned, find them a home: trace each orphaned test to a specific requirement, and author a test for each uncovered requirement.

    5 – Defects Fixed Per Day

    Don’t lose sight of how efficiently your development counterparts are working to rectify the defects you brought to their attention. The Defects Fixed Per Day KPI ensures that your development team is hitting the “standard” when it comes to turning around fixes and keeping the build moving forward.
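    A minimal sketch of computing that fix rate over a reporting window; the function name and inputs are assumptions for illustration:

```python
# Hypothetical sketch of the Defects Fixed Per Day KPI.
from datetime import date

def defects_fixed_per_day(fix_dates, start, end):
    """Average fixes per calendar day over [start, end], inclusive."""
    days = (end - start).days + 1
    fixed = sum(1 for d in fix_dates if start <= d <= end)
    return fixed / days

fixes = [date(2016, 3, 1), date(2016, 3, 2), date(2016, 3, 2), date(2016, 3, 9)]
# 3 fixes fall inside the 4-day window -> 0.75 fixes per day
print(defects_fixed_per_day(fixes, date(2016, 3, 1), date(2016, 3, 4)))  # 0.75
```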

    6 – Passed Requirements

    Measuring your passed requirements is an effective method of taking the pulse on a given testing cycle. It is also a good measure to consider during a Go/No-Go meeting for a large release.

    7 – Passed Tests

    Sometimes you need to look beyond the requirements level and peer into the execution of every test configuration within a test. A test configuration is basically a permutation of a test case that inputs different data values. The Passed Tests KPI is complementary to your Passed Requirements KPI and helps you understand how effective your test configurations are in trapping defects. Keep in mind that you can be quickly fooled into thinking you have a quality build on your hands with this KPI if you don’t have a good handle on the test design process. Low-quality test cases often yield passing results when in fact there are still issues with the build. Make sure that your team is diligent in exercising different branches of logic when designing test cases, and this KPI will be of more value.

    8 – Rejected Defects

    The Rejected Defects KPI is known for its ability to identify a training opportunity for our Software Testing Engineers. Think about it for a minute. If your development team is rejecting a high number of defects with a comment like “works as designed”, maybe you should take your team through the design documentation of the application under test. No more than 5% of the defects submitted should ever be rejected.
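    The 5% ceiling above is a simple ratio check. A hypothetical sketch, with the status value `"rejected"` assumed for illustration:

```python
# Hypothetical sketch of the Rejected Defects KPI against a 5% ceiling.
def rejection_rate(defects):
    """Fraction of submitted defects the developers rejected."""
    return sum(1 for d in defects if d["status"] == "rejected") / len(defects)

defects = [{"status": "rejected"}] + [{"status": "fixed"}] * 19  # 1 of 20 = 5%
print(rejection_rate(defects) <= 0.05)  # True: right at the ceiling
```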

    9 – Reviewed Requirements

    The Reviewed Requirements KPI is more of a “Prevention KPI” rather than a “Detection KPI.” If you have noticed, several of the KPIs we have listed focus on the detection of defects, rather than how they can be prevented in ALM testing. However, this KPI focuses on identifying which requirements (or user stories) have been reviewed for ambiguity. As we know, ambiguous requirements lead to bad design decisions and ultimately wasted resources. As a QA or Testing Manager, it is your responsibility to monitor whether each of the requirements has been reviewed by a subject matter expert (SME) within your organization who truly understands the business process that the technology is supporting.

    10 – Severe Defects

    We see too many of our clients get hung up on the severity level of defects. It’s a great KPI to monitor, but make certain that your team employs checks and balances when setting the severity of a defect. After you ensure the necessary checks and balances are in place, then you can set a threshold for this KPI. If a defect status is Urgent or Very High, count it against this KPI. If the total count exceeds 10, throw a red flag.
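    The count-and-flag rule above can be sketched directly. The severity labels and the threshold of 10 come from the post; the field names are assumptions:

```python
# Hypothetical sketch of the Severe Defects red flag.
SEVERE = {"Urgent", "Very High"}

def severe_defect_count(defects):
    """Count defects whose severity counts against this KPI."""
    return sum(1 for d in defects if d["severity"] in SEVERE)

def red_flag(defects, threshold=10):
    """Throw a red flag when the severe-defect count exceeds the threshold."""
    return severe_defect_count(defects) > threshold

defects = [{"severity": "Urgent"}] * 11
print(red_flag(defects))  # True: 11 severe defects exceeds 10
```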

    11 – Test Instances Executed

    This KPI only relates to the velocity of your test execution plan. It doesn’t provide insight into the quality of your build; instead, it sheds light on the percentage of the total instances in a test set that have been executed. Think of it as a balance sheet for your test instances in the TEST LAB of HP ALM. As a Test Manager, you can monitor this KPI along with a test execution burndown chart to gauge whether additional testers may be required for projects with a large manual testing focus.
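    As a percentage of the test set, this is one line of arithmetic; a hypothetical sketch with an assumed `executed` flag:

```python
# Hypothetical sketch of the Test Instances Executed KPI.
def execution_progress(instances):
    """Percent of instances in the test set that have been executed."""
    done = sum(1 for i in instances if i["executed"])
    return 100.0 * done / len(instances)

instances = [{"executed": True}] * 3 + [{"executed": False}] * 7
print(execution_progress(instances))  # 30.0
```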

    12 – Tests Executed

    Building this KPI in HP ALM is a way to look beyond the Test Instances and monitor all different types of test execution, including manual, automated, etc. This shouldn’t be your only tool to monitor velocity during a given sprint or test execution cycle. You should also pay close attention to the KPIs described above. This KPI is more or less a velocity KPI, whereas a few of the ones outlined above help you monitor “preventative measures” while comparing them to “detection measures”.
