Evaluating Vendor Proposals: There's a Better Way
August 27, 2014
One of the many reasons consultants are hired is to bring an experience-based process to a technical selection when evaluating product purchases.
A common tool used by many project teams is an evaluation spreadsheet that divides the proposals into specific categories. In many cases, certain categories are given more weight than others to reflect their relative importance within that enterprise.
The goal is to add balanced objectivity to an otherwise subjective process. Typically, several evaluators will participate by rating each factor for each proposal prior to a final process that combines the opinions of the evaluation team.
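To make the mechanics concrete, here is a minimal sketch of how such a weighted score sheet combines one evaluator's ratings. The category names, weights, and 1-10 scale are illustrative assumptions, not figures from any actual procurement.

# A minimal sketch of a weighted evaluation sheet. The categories,
# weights, and 1-10 scale below are illustrative assumptions only.
CATEGORY_WEIGHTS = {
    "functionality": 0.40,
    "cost": 0.30,
    "vendor_support": 0.20,
    "implementation_plan": 0.10,
}

def weighted_total(scores):
    """Combine one evaluator's category scores into a weighted total."""
    return sum(CATEGORY_WEIGHTS[category] * score
               for category, score in scores.items())

# One evaluator's ratings (1-10) for a single proposal.
evaluator_scores = {
    "functionality": 8,
    "cost": 6,
    "vendor_support": 9,
    "implementation_plan": 7,
}

print(f"Weighted total: {weighted_total(evaluator_scores):.2f}")  # 7.50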
This process is a proven technique, but it can sometimes produce flawed results. It is common to average the individual scores to produce a blended total outcome. However, averaging can allow a single outlying score to skew the result unfairly. Take the chart that follows as an example, which shows scores from a multi-member evaluation team. In this case, each category is given equal weight.
Example Score Results:
It is clear that the evaluators were not of a like mind in a couple of instances, and the discordance of a single individual (in this case, Evaluator #3) can alter the result. If Evaluator #3's scores had matched the rest of the team (that is, if the statistical median were used), "Vendor B" would equal "Vendor A" and would be a finalist instead of receiving a "Sorry, you lost" letter. This type of unfair result can happen even more easily if the categories with an inconsistent score are given a larger weight.
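The effect is easy to demonstrate with made-up numbers. The sketch below uses hypothetical scores (not the figures from the table above) to show how one discordant evaluator pulls the average down while leaving the median untouched.

from statistics import mean, median

# Hypothetical scores from a five-person team for one category (1-10 scale);
# these are illustrative only, not the figures from the example table.
vendor_a = [8, 8, 8, 8, 8]   # the team agrees
vendor_b = [8, 8, 2, 8, 8]   # one evaluator is far out of line

print(mean(vendor_a), median(vendor_a))   # mean 8,   median 8
print(mean(vendor_b), median(vendor_b))   # mean 6.8, median 8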
One way some teams deal with such variances is to "throw out" high and low scores, similar to the judging of events at the Olympics. However, this is neither completely fair nor reliable as a way to produce accurate results, especially with a small scoring team.
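For completeness, here is a sketch of that Olympic-style trimming, again with hypothetical scores; notice how little data remains once a small team's high and low scores are dropped.

def olympic_average(scores):
    """Drop the single highest and lowest score, then average the rest."""
    if len(scores) < 3:
        raise ValueError("need at least three scores to trim both ends")
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

# With five scores, trimming leaves three; with three, it leaves just one,
# so a single remaining opinion decides the category.
print(olympic_average([8, 8, 2, 8, 8]))  # 8.0
print(olympic_average([9, 2, 8]))        # 8.0 -- based on one score only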
Variations are not as rare as one might think, and they can sometimes be explained. One possibility is that the low score is due to a problem uncovered by a sharp-eyed reader or someone on the team with insight or expertise in the area in question. Of course, the exact opposite can also be true, where an evaluator misses or does not understand the answer. And there is always the possibility of bias, where an evaluator has personal reasons to favor one vendor or be more critical of a specific competitor.
Consensus Evaluation Process
To deal with all of the above, one time-tested approach is to gather the evaluation team for a consensus scoring process. As before, the entire team reads and scores the alternatives first, and then comes to the meeting with reasons for each score. However, instead of just averaging the results, the team discusses each score collectively at the meeting. This allows the team to leverage the expertise of individual members while mitigating the impact of any evaluation errors or personal biases.
After a complete discussion, the team selects a single (consensus) score to assign to that category for that alternative. Naturally, this is repeated for each category and each alternative. The consensus scores are then compared across contenders within the same category to ensure that relative differences between proposals are properly reflected.
The final results are invariably endorsed by the evaluation team as accurate, appropriate for the project, and defensible. This final point is not insignificant when the decision is subject to any protests from proposing vendors. With the averaging method, public sector agencies are often required to produce the individual score sheets (as part of a public disclosure request). This can lead to vendors arguing that certain scores by a single evaluator were unfair, and that a fairer process would have shifted the results in their favor. The agency must then spend significant time either defending both the scores and the process, or altering the results based on the protest. Either situation not only delays the overall project, but can also encourage other vendors (especially the one with an award at risk) to file their own requests to be heard.
The consensus evaluation process reflects a collective score and thus only has to be defended on its merits. This is usually very easy to do because the team's reasoning for the singular score is documented and balanced. In addition, if the evaluations are subject to a public review, it shields the individual evaluators from criticism or accusations of personal bias.
The consensus process may not be for everyone, but for me, 25 years of usage has proven its value.
J.R. Simmons is President and Principal Consultant at COMgroup, Inc., an independent consulting firm, and currently the Executive Vice President for the Society of Communications Technology Consultants (SCTC). The SCTC is an international organization of independent information and communication technology (ICT) professionals serving clients in all business sectors and government worldwide.
Don't miss the Society of Communications Technology Consultants annual conference, open to members and non-members of the SCTC. The event will feature essential technical and logistical updates, networking with peers from around the world, and fun in the sun in San Diego, 29 Sep to 2 Oct. More at http://c.ymcdn.com/sites/www.sctcconsultants.org/resource/resmgr/Docs/Brochure.pdf