• Author
  • #9020

    Maybe this is an old school thing, but I recently read on a testing resources site about what to do when bug reports are rejected. I didn’t realise that this was so common.

    Is getting your bug report rejected a common thing? If it has happened to you, how do you deal with it? Is it a discussion you have with your manager, or do you let it go?

    I’d be interested in people’s experience of this.


    I believe that before filing a bug report you want to have all the information needed. I don’t file a report if I think something is broken; I file it when I know it is broken, perhaps with a theory as to why. The same goes for the person who receives the report: they shouldn’t reject the report unless it is totally useless. If that person doesn’t understand the report completely, they should go and talk to the tester and clear things up. Next time the tester will know how to write an understandable bug report, and there will never be a need to reject anything.


    I have had bug reports rejected on occasion. Sometimes it is because the requirement has changed between writing and executing a test case. On other occasions the developer spots an error in my test. Sometimes the developer rejects it in error… If I’m not sure, I raise it as a query: ‘should it be doing this?’

    I have also rejected quite a few bug reports raised by the User Acceptance testers.


    I usually reopen bugs resolved/closed with resolutions like “rejected”, “won’t fix” etc. if no valid reason for it is given in the comments. And “because it’s supposed to be like that” or “because person x says so” are not valid reasons. Mostly I require an explanation as to why fixing this is of no benefit to the business.


    I’m not convinced that we should be thinking about this from the viewpoint of ‘is there value to the business to fix this’. There is always some value to the business to fix a defect in the software. I think the correct question is ‘is there enough business value to fix this above all the other work that delivers value’.

    If it is unlikely that this defect will ever be valuable enough to be prioritised above other work, then it should be removed from the backlog. Whether this is marked as Rejected or Won’t Fix is not the important part*, the important part is that it is removed from the list of work that needs to be reviewed and prioritised.

    * I have a preference for marking valid defects that we do not think we will ever do as Won’t Fix, leaving Rejected for incorrect tickets. While it is an unimportant distinction for whether we address them, it is sometimes useful to review the trends separately when looking at ways to improve your velocity. Plus no one likes having work Rejected!


    Everyone has made some great points in this discussion, and I think we can all agree that a fully comprehensive defect report can give a developer deep insight into an issue. We all know that the quicker a developer can narrow down the scope, the easier the fix is for them, and the greater the odds of project leadership giving more legitimacy to your reports in general. I’ve had loads of defect reports tossed aside, but usually not due to poor reporting.

    In the purest sense, our goal is to help stakeholders (investors, business leadership, and users) mitigate risk. We write objective reports and move along to the next thing. We should do this with no judgement, and certainly no blame. And yes, there have been issues where I have gone above leadership, directly to management, when I felt the risk of NOT fixing was too great and could potentially harm the company’s revenue (e.g. security issues).

    Unilaterally dismissing well-documented issue reports is unnecessary. Each should be reviewed by project leadership and triaged accordingly. At that time they can be prioritized into development lanes, remain in a backlog state for further consideration in a future release, or be archived off when the release/project is complete.

    Each issue report is an opportunity for either the process or the outcome to improve.


    All the previous comments seem to come from people in well-functioning organizations. I think that reflects the population of Test Huddle subscribers. Testers in dysfunctional organizations rarely spend time (or are allowed to spend time) on activities that might improve their professionalism, because they know their organizations won’t value their professional behavior.

    One of the characteristics of such dysfunctional organizations is the common practice of downgrading the seriousness of found defects. Such downgrading frequently involves rejecting uncomfortable defect reports, usually to “improve” the release criteria and make the schedule.

    When auditing a development organization, I find a common practice of rejecting defect reports to be a strong indication of a dysfunctional organization—a dysfunction that usually runs much deeper than this one symptom. Consequently, if you work for such an organization, I recommend you look for another job. If, however, your organization only rarely rejects one of your defect reports and is otherwise well-functioning, then I recommend that you carefully re-examine your report writing. Don’t argue for the defect, but improve your description—or perhaps improve your testing that “found” this defect.


    I have had bug reports rejected many times. If they are not accurately accounted for in the defect management system, I find some way to record them for future reference, or just make my own list, so I can try to slide in a quick fix alongside a similar bug that may present itself down the line.

    If I disagree with the reasoning behind the rejection, I take a few actions to try to make them reconsider, or to better understand why.

    The end result should always have some positive outputs:
    (1) I get to know the product better and can make better judgement calls on what is or is not a “bug”.
    (2) I help the program analyst become aware of ambiguity in the product’s requirements used for its design, which can result in further clarification and communication/updates/changes of the requirement(s).
    (3) I may interface with a UX and/or clinical specialist who may communicate to the requirements engineer (Product Owner, Analyst, or whatever your organization calls them) a use case that shows why the “bug” is of higher concern (or not). This fosters communication that gets the team on the same page about product deliverable expectations.
    (4) Sometimes I find out that a particular feature will be drastically changed in the near future, which does not warrant resource time to actually fix the bug, since it will no longer apply after that change. This makes me more aware of what to look for when testing the product, and I may update my tests to better fit the upcoming vision of the product.
    (5) I am sure there are others, but it all stems from making the group more “tight-knit” through communication.

    In the end, following up on bug rejections can be very beneficial in getting the rest of the development team/group in the same mindset/goal/vision of what the finished product should look like.

    I wrote this rather quickly and do not comment often on boards, due to not having much free time for such things, so I am open to any and all criticism.

    “Whenever you find yourself on the side of the majority, it is time to pause and reflect.” ~Mark Twain



    I believe in delivering value.

    My work requires me to understand the customer requirement, the process and to do the test from the customer point of view.

    Despite this, I do communicate very frequently with the designer and developer.

    Hence, the moment we start to have bugs, I communicate with the designer and developer about how they produced the code.

    A bug can be due to:
    1. A misunderstanding of the requirement. When the requirement asks for a BAU solution, we may need to go back to them and ask: we have BAU1, BAU2 and BAU3, so which BAU are they referring to? Instead of asking, we select the most frequently used BAU type of journey, and get it all wrong.
    2. A bug can also be due to people not understanding how the process works. When they select the wrong type of product, the pricing and billing just don’t seem right at all.
    3. A bug can also be due to change over time: the equipment code was MX64, they changed it to MX60, and you just can’t find it!

    I do not simply raise a lot of bugs to show that I am really working hard. Instead, I look towards understanding the whole situation, and get people to understand it from one end to the other.

    This is never gonna be easy, for sure.


    I’ve had defects rejected, but mostly because requirements changed and the defects were no longer applicable, not because the defects were not reproducible or invalid. There were also times when I misinterpreted a requirement and defined the related test incorrectly; obviously in this case the defect was rejected with a comment from Dev.
    When I log a defect I usually include a reference to where the test comes from, so the expected result / actual result are all there.


    David wrote:

    I have a preference for marking valid defects that we do not think we will ever do as Won’t Fix, leaving Rejected for incorrect tickets. While it is an unimportant distinction for whether we address them, it is sometimes useful to review the trends separately when looking at ways to improve your velocity. Plus no one likes having work Rejected!

    We would close defects with the code ‘tolerate’. This can be useful if the defect is later raised by the client. Mind you, you have to be sure that they have agreed to ‘tolerate’.
    Scott wrote:

    Unilaterally dismissing well documented issue reports is unnecessary. Each should be reviewed by project leadership and triaged accordingly

    I wouldn’t expect the triage of issue reports to go as high as project leadership. Certainly it makes sense for reports to be reviewed by senior testers and/or developers. I would not expect any report to be simply dismissed; rather, a valid explanation and reason for rejection should be provided. Having a record of tolerated defects is useful here, as new testers may find them again in subsequent releases.

    If developers are unable to identify or replicate an issue from the defect report I would expect them to send it back to the test team with a request for further information.


    @Hilary – I love the term ‘tolerate’ – I may steal this 🙂


    Hi Ronan,

    This is an interesting topic and everyone made great points in this conversation. As a software tester I have worked a lot with bug reports, but my bug reports were rarely rejected.
    Generally, on the projects I have worked on there is strong communication between the QA, development and management teams. Therefore, many encountered issues are discussed when it is not clear whether there is a bug or not. Thus, the number of rejected bug reports is strongly reduced. Of course, no discussion is needed if everything is obvious. In terms of bug reporting, the projects I have worked on might have been an exception. This may vary a lot depending on each company or project profile.
    Most rejected reports happened when bugs were hard to reproduce or were masked by other defects. These situations involve a deeper analysis made together with the development and operations teams.
    On the other hand, after a bug report is created, I have seen that developers also prefer to talk to testers to get the needed information more quickly after reading the report. This happens in most cases, whether the report is rejected or not, and even when all the information is clearly provided in the report. Attaching logs to a report can also help reduce the chances of having it rejected. For example, an HTTP 404 error code might be clear to anyone who gets the report.

    We would close defects with the code ‘tolerate’. This can be useful if the defect is later raised by the client. Mind you, you have to be sure that they have agreed to ‘tolerate’.

    I very much like this idea of using the word “tolerate”.



    @jerryweinberg – Jerry, I strongly agree with your position on dismissing reported defects to “make the ship date”, or because “that would be too expensive to fix”. It truly is an indication of a systemic poison and an organization which does not want to make sure its processes improve.

    @hilaryj – Hilary, by project leadership I meant senior-level leaders (Test Lead, Dev Lead, PM) and not management, or maybe management if the issue warrants it. Also, “tolerate” – that’s a keeper.


    Closing a defect doesn’t mean fixing it. But yes, ‘tolerate’ is a new testing term that keeps testers away from minor bugs, leaving those as Won’t Fix unless fixing them adds value to the business. Rejected bug reports leave testers with more focused testing. Moreover, it keeps testers learning new things, and they can come up with new errors that might have more impact on the software.


    Is getting your bug report rejected a common thing?
    Nope. It happens from time to time though.
    If it has happened to you, how do you deal with it?
    First I try to see where the other person is coming from and ask why they rejected it. That usually starts up a conversation, often one around “it works as designed/spec’d”. But the design or spec might’ve been wrong, so I have to try and explain this to them. I often do this by getting them to see how things would look from the user’s perspective.
    If I’m in the wrong, I accept it, and look back to see where I went wrong and learn from it, e.g. I misunderstood something, or my steps to replicate the bug weren’t clear enough.


    Good information with a nice description.
    I have not yet faced a rejected bug report and hope not to. But if the situation comes up, I think this is going to help me. Thanks for sharing!


    It seems there are a few thoughts here on bug reports, and it’s obviously an issue that gets people talking. From what I understand, @David and @Nicola, you are saying that sometimes getting bugs rejected is a good way for testers to reflect.

    Also, ‘tolerate’ is a new phrase to me. Was that something that ye created in-house, @Hilary?


    Oh yes, I have had bugs and defects rejected many times. I find it a natural part of the conversation about defects/bugs/issues. When a rejected bug is assigned to me, I will TEST the arguments of the developer (typically, but it could be others). If she’s right, it’s my bad and I can set it [Closed]; if not, I set it [Reopened] with further examples, screen dumps and arguments. This could result in an over-the-wall battle, especially if we are not in the same office, so keep an eye out. If it cycles too much, set up a meeting with project management, the product owner, subject matter experts, or whoever you have.

    For states see: About Closure [The Testing Planet Nov 2014]. When I’m in a testing activity I want my test cases [Passed], my user stories [Done] and my coffee [black]. Stuff may have a start point, some states in between and an end state. Let’s look at ways to represent states and articulate the meaning of states.



    Thanks for bringing up this interesting topic. I have a feeling that this is a timeless topic 🙂

    Yes, our bugs will someday be rejected

    and no, we don’t want every bug we find to be rejected 😀

    There are many reasons for a defect to be rejected, which the good comments above have covered. To be honest, more often than not, bugs are rejected because the tester didn’t write a good bug report, and the first thing to do when a bug is rejected is to review what’s wrong with my report that made people reject it. Don’t take offence right away.

    I’ve recently written a blog post titled: 3 Simple Reasons Why Your Bug Report Sucks (And How To Fix It) . Take a look and see if you can learn a thing or two from there.



    I would rather have someone raise a bug with limited information than not raise it. Say you see a bug only once, and you don’t know where it came from so you don’t say anything, and then two other people have the same experience. If you’ve made a record of it, then all three of you know that you’re on to something. If no one says anything, you’re going to end up shipping with the bug.

    If my details were sketchy, I would say so. Something like “I’ve tried x, y, and z, and I haven’t been able to reproduce the issue, but I wanted to mention it in case someone else sees something.” And sometimes the developer will have an idea of what went wrong, just from hearing the little information you have.

    If you’re liberal in raising bugs, you might face some rejection once in a while. I’ve rejected my own bugs!


    Pamela’s policy is extremely wise. This process is precisely analogous to the way surgeons handle the evaluation of their colleagues. If a surgeon diagnoses a cancer, for instance, and then operates and finds no cancer, that’s bad, like a rejected bug that can’t be reproduced. Obviously, if a surgeon has a lot of these “rejected” surgeries, s/he is doing a poor job.

    But, if a surgeon NEVER does one of these “unnecessary” surgeons, s/he is downgraded. Why? Because nobody’s that perfect, so s/he is being much too cautious in choosing to operate, and is probably missing some cancers that thus are not removed.

    It’s the same for a tester. If lots of your reports are rejected, you’re probably doing a poor job—perhaps at testing, or perhaps, as was said earlier, at writing reports. But, if a tester NEVER has a rejected report, s/he is probably much too cautious, and is letting bugs, as Pamela says, go unraised, so nobody notices them, even if other testers or developers spot them.

    Consequently, the stats on “rejected” or “ignored” reports are a clue in evaluating the job a tester is doing. Too many, or too few, are equally suspicious.


    Correction: There seems to be no way to edit the post I just sent, so I noticed an error but couldn’t fix it. (Isn’t that a common reason for rejecting an error report!?)

    Anyway, “unnecessary surgeons” in the second paragraph should be “unnecessary surgeries.” (Not that there aren’t sometimes unnecessary surgeons, just as there are sometimes unnecessary testers, but that was not what I meant.)

    Sorry for any confusion.


    It seems that the reasons behind rejected defect reports have already been discussed quite thoroughly in this thread. As a tester and test manager I believe in raising any issues, which may or may not be defects. I think that rejection is not a sign of a bad tester, and no one should take it personally. Sure, the test team should keep track of the reasons why defects get rejected and evaluate their work to see if there is a need for test process improvement in that department. E.g. a changed requirement not being updated in test cases is a defect of the process itself; a lack of communication between different project roles, mostly, in my experience. Of course there may be cases where an individual tester is producing a lot of rejected defects. The test manager should go through the reasons with that tester and offer guidance and support.

    Mostly there is indeed a valid reason, like the fix not having enough business value, e.g. functionality used by one person once a year where a workaround exists. When it comes to requirement clarity, a deviation from a requirement reported as a defect may in fact be a requirement defect in disguise. Also, rejected defects do have an effect on overall quality and user experience, and as such are valid and valuable. I think that the impact of small, “irrelevant” defects should be evaluated together, not just defect by defect. Users may do fine with a couple of minor defects, but any more could push them over the edge of tolerance.

    Project or business management should never measure the quality of testing or of the test team by the number of rejected defects. A good tester explores the system and points out risks, weaknesses and user experience issues in addition to verifying the requirements. It is up to the whole project team, together with the customer’s comments, to decide whether these observations lead to any actions (fixes, improvements etc.). Open and constant communication is the best way to keep rejected defects at a minimum, if that is what matters to the stakeholders.


    I totally agree with Kirsi’s line about “Open and constant communication” being the best way to keep rejected defects at a minimum.

    I work on a very small project and have the good fortune to work very closely with the one developer while he’s implementing and while I’m testing. If I find what I think is a defect I talk to the dev first, if he’s available. If not then I’ll record it and yes, sometimes he’ll want to reject something that I think is wrong. But he’ll talk to me first and we always reach an agreement, which means that occasionally I find myself agreeing to him rejecting a defect.

    This was a bit painful initially, because I’ve previously worked on larger projects where communication between testers and developers wasn’t always part of the corporate culture and all defects were formally managed, with defect review meetings held to analyse them. This often happened after one or two or more had been rejected and the testers entered into open warfare, normally with the Development Manager, to reopen them. However, I now enjoy the constant communication and the cooperative, as opposed to combative, way of working. I’ve even had the developer thanking me for finding errors!


    Is getting your bug report rejected a common thing?
    Yes, it is, depending on the team, product, company culture, etc. I have worked in several companies and seen that 1–10% of defects are rejected, and that is completely fine. People make mistakes, use the wrong version of the system under test, misconfigure the environment, misunderstand requirements…

    If it has happened to you, how do you deal with it?
    As a manager, I strongly recommend monitoring this number – the percentage of rejections – and reacting if it is too big for the whole team or for some individuals. A high(er) number of rejections shows that either something is wrong with the team’s product knowledge or attitude, or that the atmosphere between developers and testers could be better (they work in silos). Usually the number is bigger for new team members and should go down as they gain experience.
    But never ever make this number a personal or team goal! That will result in people being afraid to raise issues, and that is something you don’t want.
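    As a rough illustration of the monitoring described above, here is a minimal Python sketch that computes a per-tester rejection rate and flags anyone outside the 1–10% band mentioned earlier. The data shape, function names and thresholds are all assumptions for the example, not part of any real defect-tracking tool:

    ```python
    # Sketch of rejection-rate monitoring (illustrative assumptions throughout).
    from collections import Counter

    def rejection_rates(reports):
        """reports: iterable of (tester, status) pairs, e.g. ("alice", "rejected")."""
        totals = Counter(tester for tester, _ in reports)
        rejected = Counter(tester for tester, status in reports if status == "rejected")
        # Counter returns 0 for missing keys, so testers with no rejections get 0.0.
        return {t: rejected[t] / totals[t] for t in totals}

    def flag_outliers(rates, low=0.01, high=0.10):
        """Flag testers outside the rough 1-10% band mentioned in the post above."""
        return {t: r for t, r in rates.items() if r < low or r > high}

    reports = [
        ("alice", "fixed"), ("alice", "rejected"), ("alice", "fixed"),
        ("bob", "fixed"), ("bob", "fixed"), ("bob", "fixed"),
    ]
    rates = rejection_rates(reports)
    # alice: 1/3, above the band; bob: 0/3, below it.
    ```

    Note that in this toy data both testers get flagged, one high and one low, which echoes the point made elsewhere in this thread that too many and too few rejections are both suspicious. The flags are a prompt for a conversation, never a personal target.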

    Is it a discussion you have with your manager or do you let it go?
    If it was rejected correctly (you agree) – let it go. A good manager should react when a systemic problem appears – see above.
    In some companies, correctly rejected defects are cancelled at the end of the day. It is good practice to configure the process and tools in such a way that only the person who raised a bug can cancel it. This forces people to understand their mistakes and admit they can be wrong from time to time. Most of them will also learn from this :).


    Hi Ronan,
    First of all, the question is a nice one. But this scenario may not be that common in testing, because when you start testing you have to collect all the valid requirements from the client, and you start testing according to those requirements. As a tester, you always try to report a valid bug.

    A bug report may be rejected because of a misunderstanding of the requirement, a lack of proper communication with the development team, or requirements that change frequently. In these cases your bug report may be rejected because the bugs you reported are wrong.

    Solution: Try to understand the requirement clearly first, and if you have any doubt about the functionality, communicate with the development team about it and try to understand the functionality. When you feel your doubt is cleared and the requirement for that functionality matches your understanding, then start the testing.

    2. In the case of frequently changing requirements, try to communicate in writing, because that is real proof when a developer rejects a bug which you think is valid. If you do not have proof, then you cannot say that all your bugs are valid.

    This is my own view. I do not have a lot of experience in this field, but this is my thinking. Please, no one take it personally, whether as a developer or as a tester.

    Anubhav Khanduri


    There’s a question that’s been mostly begged through this conversation, and getting clear on it might help.

    What, specifically, is being rejected? Is it the bug report that’s being rejected? In that case, there may be a problem with the quality of the report; perhaps the problem isn’t being identified clearly; or there isn’t a clear example of how to observe the problem; or the report is confusing, hard to read, or presumptuous, or impolite (hint: declaring something a “defect” is one good way to be perceived as presumptuous or impolite). Those tend to be problems with the skills and mechanics of describing the problem in a way that’s useful and helpful to your clients, especially the programmers.

    Or is it the conclusion “this is a bug worth fixing” that’s being rejected? That may be less of a technical issue and more of a social or business issue. Remember, as a tester, it’s not your job to get every bug fixed; you are not the gatekeeper of quality. It’s your job, I would suggest, to provide information to your testing clients such that they can make a useful determination of whether the product they’ve got is good enough for their purposes, and for the purposes of their clients. To do that well, it’s not enough to describe the problem; you must describe why the problem matters to people who matter to your clients. This may involve identifying different kinds of users than the programmers or managers have in mind; different contexts in which people might use the product; or ways in which other products might handle things better. This is reflected in the “complex social judgment” that Harry Collins refers to here: http://www.developsense.com/blog/2014/03/harry-collins-motive-for-distinctions/

    —Michael B.


    Firstly, I don’t think the terminology we’re using is helping us. Using the terms ‘defects’ or ‘bug reports’ suggests immediately that there is a problem with the code. In fact, the code could be working perfectly but it is the design or more fundamentally the requirement that is at fault.
    My preference is for finding and logging/recording issues in the system under test. These are logged based on my understanding of the design/requirement as well as expectations based on my experience with the system under test or how other systems have behaved in similar circumstances etc.
    Only on analysis of the issue raised can it be determined whether it is a bug in the code or a problem further up the chain with the design or requirement. Then, when it comes to metrics, a breakdown by root cause is of more use than a more generic ‘bug’ count.
    As for the ‘defect reports’ themselves, these may not be the best solution especially when working in an Agile based environment where quick turnaround is more important and the functionality is incomplete and in a state of flux. A more flexible and less bureaucratic approach is needed to enable more effective and efficient communication with the rest of the Scrum team. If you are interested in more detail on this, have a read of the following which was my solution to this challenge:

    Now back to the original question asked!

    Yes, I have had ‘defect reports’ rejected. Like others on this thread, sometimes it has been due to my misunderstanding of the requirement, and I’m happy to accept the explanation and move on. But when it comes down to a difference of opinion, I will get a business analyst or someone from the user group in to mediate. The more frustrating situation is when the issue I have raised has been dismissed, and it is obvious from the comments that it has not been read or no attempt has been made to understand the problem. I do my best to be patient in those situations, and as it is usually the same people each time, you learn to take the baby by the hand.


    @Michael, you bring up a good point about what is being rejected: the bug or the bug report, and the issue of the relevance of a bug in the wider context. I wonder, is it a common thing that testers have to “pick their battles” over issues, or is that simplifying it?

    @Iain, that’s an interesting thought. Indeed, I had only thought about the code, but incorporating the design/requirements is an obvious addition. Just out of curiosity, why do you think an issue might be rejected when it is obvious that it has not been examined?

    Do you find that it is a personality or technical knowledge issue with the developer?
