
Improving Panel Decision Making: Understanding Methods for Aggregating Reviewer Opinions

PI: Elena A. Erosheva
Sponsor: National Science Foundation (NSF)
Project Period: -
Amount: $419,995.00


This research project will develop mathematical and statistical modeling, as well as machine learning methods, for understanding the decision-making process of peer review panels. Many high-stakes decisions, such as grant funding or candidate hiring, involve peer review panels, in which qualified individual reviewers provide their opinions on grant proposals or job candidates via a predefined process. Despite a potential plurality of opinions among panel members about a single application, the panel-level outcome of peer review is often a single number, such as the average of reviewers' scores. This project will systematically study existing methods for aggregating individual opinions into panel-level decisions. It will also develop a set of tools to communicate panel decision-making information to stakeholders, making the panel decision-making process more transparent. Stakeholders will be able to identify applications for which there is less consensus and more uncertainty, potentially mitigating biases in human judgment. The project will refine the new tools with real data collected from various peer review processes.
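To make the aggregation problem concrete, here is a minimal illustrative sketch (not part of the project's methodology; the application names and scores are invented) showing how averaging individual scores into a single panel-level number can hide disagreement: two applications can share the same mean while differing sharply in consensus.

```python
import statistics

# Hypothetical reviewer scores for two applications (lower = better).
scores = {
    "Application A": [2, 2, 2],  # all reviewers agree
    "Application B": [1, 2, 3],  # same mean, but divergent opinions
}

for app, s in scores.items():
    mean = statistics.mean(s)      # the usual panel-level summary
    spread = statistics.stdev(s)   # one simple measure of (lack of) consensus
    print(f"{app}: mean={mean:.2f}, stdev={spread:.2f}")
```

Both applications average to 2.0, yet only the second would warrant the kind of low-consensus flag the project aims to surface.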

This research project will develop new methodologies to model and represent uncertainty and robustness in panel consensus, whether that consensus forms around one dominant ranking of applications or splits into divergent opinions that yield different rankings depending on the latent opinion group. A decision support system, the Improving Panel Consensus Tool (ImPaCT), will be developed. ImPaCT will present the information and opinions from panel review via a set of visualization tools that can display scores and the relative order of applications, indicate which applications are comparable, and, if desired, assign importance weights to reviewers to determine whether doing so changes the outcome. ImPaCT could be used as an analysis tool for understanding and summarizing reviewers' scientific merit assessments for stakeholders such as funding agencies and program officials, or as an interactive tool for assisting panel members in making panel-level scientific merit decisions. ImPaCT will present a synopsis of the task, flag issues needing extra attention (ties, lack of consensus, or lack of robustness), and offer the relevant information for the current sub-task (e.g., the submission under discussion). By providing a synopsis of reviewers' assessments in real time, the tool will help keep in check what could otherwise be the undue salience of the loudest voices, as well as the undue influence of anchoring, selective memory, and other cognitive biases on individual judgment in the aggregation of complex information under uncertainty.
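The reviewer-weighting capability described above can be illustrated with a small sketch (again purely hypothetical: the `weighted_mean` helper, scores, and weights are invented for illustration, not taken from ImPaCT). It shows how assigning importance weights to reviewers can break a tie between two applications that look identical under equal weighting.

```python
def weighted_mean(scores, weights):
    """Average reviewer scores with per-reviewer importance weights."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

app_x = [1, 5]  # two reviewers disagree sharply
app_y = [3, 3]  # two reviewers agree

equal = [1, 1]        # every reviewer counts the same
favor_first = [3, 1]  # upweight the first reviewer

print(weighted_mean(app_x, equal), weighted_mean(app_y, equal))
print(weighted_mean(app_x, favor_first), weighted_mean(app_y, favor_first))
```

Under equal weights both applications score 3.0; upweighting the first reviewer moves Application X to 2.0 while Application Y stays at 3.0, changing which application ranks first. Surfacing exactly this kind of sensitivity is what the robustness checks in ImPaCT are described as supporting.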

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.