Unconventional Wisdom V18: Quicker, Cheaper, and Yet Better Proposal Evaluations

Evaluating vendor proposals can be one of the most intimidating and time-consuming aspects of acquiring software.  Proposal evaluation is a form of testing, and one with potentially far greater impact than the code execution that usually is all that's thought of as testing.  Yet software acquisition ordinarily ignores QA, even though QA could and should participate in it productively if they developed suitable skills and knowledge.

Perhaps it's a mixed blessing that proposal evaluation isn't on QA's radar, since, as the following article discusses, the industry as a whole has a lot of trouble with it, largely because it follows mistaken models and uses practices that don't serve their purposes as well as needed or expected.

Relying on external vendors is a proven way to reduce software development time and risk; yet a large portion of those acquisitions fail to provide desired results.  Ineffective and inefficient proposal evaluation is one of the main reasons.  Because evaluation weaknesses seldom are recognised, they’re seldom improved.

Conventional Wisdom

There seems to be a single way that practically all proposals are evaluated.  It's widely accepted that "it's got to be done that way."

The basic process is for the acquirer (buyer) to issue a Request for Proposals (RFP) to prospective suppliers (vendors).  The RFP describes the acquirer’s requirements and requests vendors to propose how they will satisfy the requirements and what they will charge to do so.  The RFP usually prescribes the format for proposals and also instructs vendors to provide references and other information about themselves and their qualifications to do the proposed work.

To reduce total evaluation time and effort, it’s common for buyers to expend considerable effort creating a “short list” of vendors to receive RFPs.  Sometimes RFPs go only to vendors already on an organisation’s approved vendor list.  In addition to or in the absence of such a list, acquirers often gather and analyse information from various public and private sources, including often-fairly-generic Requests for Information (RFIs) to prospective vendors, to decide whether they think a vendor should be invited to propose.

The vendor’s proposal describes the products and/or services they propose to provide and often includes related sales literature.  To evaluate the various proposals, the acquirer first creates a worksheet listing the key capabilities they believe are necessary in order to accomplish their requirements. Some of the capabilities relate to the work the software is intended to do, and others relate to the vendor’s characteristics and practices.

The needed capabilities typically are prioritised and usually are assigned weights reflecting their relative importance.  Ordinarily, several evaluators review each vendor proposal.  Their evaluation frequently also includes vendor product demonstrations, checking references and other sources of information about the vendor and/or proposed products/services, and visits to vendor and/or vendor client sites.


Each evaluator scores how well they believe the vendor will satisfy each of the listed capabilities.  The score for each capability is multiplied by its weight, the weighted scores are totalled for each evaluator, and the totals are averaged across evaluators.  Finally, each vendor's proposed price is listed and a price-performance (actually performance-price) bang-for-the-buck ratio is calculated for each proposal.  Based on these scores and other, typically subjective, factors, the evaluators recommend the proposal to be selected.
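To make the arithmetic concrete, here is a minimal sketch, in Python, of the conventional calculation just described.  The capability names, weights, scores, and prices are purely hypothetical illustrations rather than data from any real evaluation, and in practice the same calculation is usually done in a spreadsheet rather than code.

    # Hypothetical weighted scoring of two vendor proposals.
    # Weights reflect the relative importance of each required capability;
    # each evaluator scores each capability (here on a 1-5 scale).

    weights = {"fit to requirements": 5, "ease of integration": 3, "vendor support": 2}

    # For each vendor: proposed price plus one score dictionary per evaluator.
    proposals = {
        "Vendor A": {
            "price": 120_000,
            "scores": [
                {"fit to requirements": 4, "ease of integration": 3, "vendor support": 5},
                {"fit to requirements": 5, "ease of integration": 2, "vendor support": 4},
            ],
        },
        "Vendor B": {
            "price": 90_000,
            "scores": [
                {"fit to requirements": 3, "ease of integration": 4, "vendor support": 3},
                {"fit to requirements": 3, "ease of integration": 5, "vendor support": 4},
            ],
        },
    }

    for vendor, data in proposals.items():
        # Weighted total for each evaluator, then averaged across evaluators.
        totals = [
            sum(weights[capability] * score for capability, score in evaluator.items())
            for evaluator in data["scores"]
        ]
        average = sum(totals) / len(totals)
        # "Bang for the buck": average performance divided by price, per 1,000 currency units.
        ratio = average / (data["price"] / 1_000)
        print(f"{vendor}: average weighted score {average:.1f}, "
              f"performance-price ratio {ratio:.2f} per 1,000")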

Unconventional Wisdom—Prequalifying Vendors

The extensive effort spent prequalifying and limiting which vendors are sent RFPs easily can backfire and at best is very wasteful.  Limiting proposals to vendors already on an approved list prevents proposals from other vendors who may have better offerings.  Moreover, the general information typically used to prequalify vendors, and the ordinarily uninformed analysis of it, may inaccurately portray a vendor's actual capabilities and fail to reflect how the vendor actually could meet the acquirer's requirements.


Furthermore, the acquirer's effort evaluating a proposal is a fraction of the vendor's effort preparing one.  For the essentially zero cost of electronically distributing an RFP to potential vendors without prequalification, the vendors themselves will make a far better-informed judgment about whether or not to propose.  In those limited situations where an RFP contains confidential information that should not be distributed openly, ask prospective proposers to provide reliable, relevant information in order to qualify to receive the RFP.

Unconventional Wisdom—Proposal Scoring

Conventional proposal scoring practices again are excessively time-consuming while, ironically, undercutting the main legal purpose of contracting with a vendor.  Consider: if a vendor says it will provide a certain beneficial result, the vendor is legally accountable if the result does not occur.  In contrast, if the acquirer says it thinks what the vendor proposes to do will provide the desired beneficial result, the vendor is far less likely to be held accountable if the result does not occur.

Some evaluations focus on what the evaluators presume the vendor's capabilities to be, with much the same inadequacies as prequalifying vendors to propose at all, rather than on what the vendor actually proposes to do.

Thus, all that extra effort figuring out whether a vendor or a vendor's proposal will satisfy the acquirer's requirements actually shifts the legal burden of responsibility from the vendor to the acquirer.  It's far cheaper, and more reliable, to structure the RFP so the vendor explicitly proposes how it will satisfy each of the acquirer's required capabilities on the score sheet.  My software acquisition training and forthcoming book show proven ways to do it.

“It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”

 – Will Rogers

I hope you enjoyed this ‘Unconventional Wisdom' blog on proposal evaluations.  Much of conventional wisdom is valid and usefully time-saving.  However, too much of it instead is mistaken yet misleadingly accepted blindly as truth, what I call "conventional we's dumb."  In each post, I'll share some alternative, possibly unconventional ideas and perspectives I hope you'll find wise and helpful.

 


About the Author

Robin

Consultant and trainer on quality and testing, requirements, process measurement and improvement, project management, return on investment, metrics
Find out more about @robingoldsmith
