Differentiate Test and Production defects
This topic has 7 replies, 6 voices, and was last updated 9 years, 2 months ago by Michael.
August 12, 2015 at 9:37 am #9023
Hi Guys,
I am trying to establish metrics to differentiate defects logged during the testing phase from those found in production. What would be the best way to accomplish this? Is adding a field to distinguish test bugs from production bugs the only way to achieve this? I would appreciate it if someone could respond with industry-standard pointers. How do people achieve this?
Thanks,
Himanshu
August 17, 2015 at 1:22 am #9046
Hi Himanshu,
We use two options:
1. A radio button to select between TEST, UAT and PROD.
2. When the software goes live we stop using our defect tracking system and use our service desk software instead. The user would log a ticket saying they have a problem with the software. We would identify that the change that went live caused the bug, and we would then link the service desk ticket to the original change number for the project.
Hope that makes sense.
Berny
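As a rough sketch of how Berny's first option could feed a simple report: the records and the "environment" field below are hypothetical stand-ins for the TEST/UAT/PROD radio button, not any real tracker's API.

```python
from collections import Counter

# Hypothetical defect records; "environment" mirrors the
# TEST/UAT/PROD radio-button field described above.
defects = [
    {"id": 1, "environment": "TEST"},
    {"id": 2, "environment": "PROD"},
    {"id": 3, "environment": "UAT"},
    {"id": 4, "environment": "PROD"},
]

def count_by_environment(defects):
    """Tally defects per environment so test and production
    bugs can be reported separately."""
    return Counter(d["environment"] for d in defects)

counts = count_by_environment(defects)
print(counts["PROD"])  # 2
```

With such a field in place, the same tally can be produced directly by the tracker's own filters; the point is only that a single environment attribute is enough to split the counts.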
August 26, 2015 at 10:49 am #9156
@Bernard
We would identify that the change that went live caused the bug. We would then link the service desk ticket to the original change number for the project.
Live bugs are often the result of the latest change. However, this is not always the case. Older bugs can surface soon after a release, either by chance or because users are more alert and more likely to report issues. Do you dig back to find the actual release that introduced the error?
Another case that occurs from time to time is a bug found in test that proves to exist in the current production version. We would let support know if we identified one of these, but I'm not sure how they were logged if the users weren't reporting them.
September 8, 2015 at 5:12 pm #9272
Hi Himanshu,
To differentiate test and production bugs, we usually add some annotations in our bug tracking system, so that the title of a bug on the test system looks like this:
{Bug number} {Test} {Product version} Bug title
or on the production side:
{Bug number} {Prod} {Product version} Bug title
Depending on the bug tracking system used by the project, details like the product version or the sprint can be entered manually or automatically.

If a bug is found in production it will be entered in the production tracking system. The interesting part comes when we investigate it on the test environment. Generally, the issue is also reproducible on the test system, but sometimes related bugs are discovered as well, something like a defect-masking situation; new bugs must then be created. There are some exceptions where the issue in production cannot be reproduced on the test environment, even if the same software version runs on both environments. This is caused mainly by infrastructure factors such as databases, network connectivity, configuration of third-party systems, etc.
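A small sketch of how titles following this convention could be parsed back into fields. This assumes the braces in the template above are placeholders rather than literal characters in the title, and that the bug number and version contain no spaces; the function name and regex are mine, not part of any tracker.

```python
import re

# Hypothetical parser for the title convention described above:
# "{Bug number} {Test|Prod} {Product version} Bug title"
TITLE_RE = re.compile(
    r"^(?P<number>\S+)\s+(?P<env>Test|Prod)\s+(?P<version>\S+)\s+(?P<title>.+)$"
)

def parse_bug_title(title):
    """Split an annotated bug title into its fields."""
    m = TITLE_RE.match(title)
    if m is None:
        raise ValueError(f"title does not follow the convention: {title!r}")
    return m.groupdict()

bug = parse_bug_title("BUG-123 Prod 2.4.1 Login fails after password reset")
print(bug["env"], bug["version"])  # Prod 2.4.1
```

A downside of encoding the environment in the title is exactly that it must be parsed back out like this; a dedicated field avoids the parsing step entirely.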
Regards,
Alin
September 18, 2015 at 5:40 am #9395
What bug tracking tool are you using? Are you tracking bugs from users too, or just what the testers find in production? One example of how I've done this is to leave the version number empty for non-production bugs and use a version number only when bugs are in production. I didn't create any metrics based on that, since I didn't see any point in it, but we were able to see what kinds of issues there are in production.
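Jari's convention (version set only for production bugs) can be sketched as a one-line classifier. The record shape and field name "affects_version" are hypothetical, chosen only to illustrate the idea.

```python
# Convention described above: bugs without an affected version are
# test bugs; bugs with a version were found in production.
bugs = [
    {"key": "B-1", "affects_version": None},
    {"key": "B-2", "affects_version": "1.3.0"},
]

def is_production_bug(bug):
    """A bug counts as a production bug iff it carries a version."""
    return bug.get("affects_version") is not None

prod_bugs = [b["key"] for b in bugs if is_production_bug(b)]
print(prod_bugs)  # ['B-2']
```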
September 18, 2015 at 11:07 am #9404
I forgot to ask why you have a need (or feel the need) to make metrics based on this.
September 18, 2015 at 11:16 am #9405
Thank you all for your responses.
@jarilaakso – We are using Atlassian JIRA. We want to differentiate production and test bugs because everyone is using the same system to log bugs, and we need to report production issues month-wise or sprint-wise.
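Once an environment field exists, the month-wise production report reduces to grouping by creation month. The issue records below stand in for a JIRA export; the "environment" field is assumed to be a custom field, not a built-in one.

```python
from collections import defaultdict
from datetime import date

# Hypothetical exported issues; "environment" is assumed to be a
# custom field distinguishing where each bug was found.
issues = [
    {"key": "PRJ-1", "environment": "Production", "created": date(2015, 8, 3)},
    {"key": "PRJ-2", "environment": "Test", "created": date(2015, 8, 10)},
    {"key": "PRJ-3", "environment": "Production", "created": date(2015, 9, 1)},
]

def production_issues_per_month(issues):
    """Count production-environment issues per calendar month."""
    counts = defaultdict(int)
    for issue in issues:
        if issue["environment"] == "Production":
            counts[issue["created"].strftime("%Y-%m")] += 1
    return dict(counts)

per_month = production_issues_per_month(issues)
print(per_month)  # {'2015-08': 1, '2015-09': 1}
```

In JIRA itself the same split can usually be done with a saved filter on the custom field plus a created-date range, so the export step may not even be necessary.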
September 21, 2015 at 7:58 pm #9440
Jari: “I forgot to ask why you have a need (or feel the need) to make metrics based on this.”
Himanshu: “We want to differentiate production and test bugs because everyone is using the same system to log bugs, and we need to report production issues month-wise or sprint-wise.”
I notice that you didn’t answer Jari’s question. It sounds like you’re looking to produce some kind of variation on the “defect escape ratio” or “defect detection effectiveness”. Please correct me if I’m wrong. But if I’m not, read this: http://www.developsense.com/blog/2015/07/very-short-blog-posts-29-defective-detection-effectiveness/
If you’re considering measuring something, it’s usually a good idea to think of the what and the why, and not just the how. What is your goal here? What are you trying to understand with the measurement?
You might like to try reading “Software Engineering Metrics: What Do They Measure and How Do We Know?” (Kaner and Bond) (http://www.kaner.com/pdfs/metrics2004.pdf) and reflecting on what you’re trying to understand, and the purpose for which the measurement will be used. You might also benefit from reading Measuring and Managing Performance in Organizations, by Robert Austin, to consider some of the effects that gathering the data might introduce. Austin warns that these effects are likely to include distortion and dysfunction.
There is an alternative to gathering data in this simplistic way: study and learn from every problem, irrespective of where it’s found.
—Michael B.