The Scoring Process

Each item writer and expert judge responsible for scoring and commenting on your submission will use the same scoring tool, our Trait Scoring Rubric. The rubric is available to you so that you can consider the four traits that every completed Collection should address. During the Peer-to-Peer Review, each Collection will receive between five and ten reviews from other item writers who have also submitted a Collection. Scores will be calculated using a normalization algorithm that ensures a level playing field for everyone, and the final Peer-to-Peer Review score for a Collection is the average of all normalized scores it receives.
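The Challenge materials do not publish the normalization algorithm itself, so the sketch below is only illustrative. It assumes per-reviewer z-score normalization, one common way to correct for hard and soft graders: each reviewer's raw scores are rescaled by that reviewer's own mean and spread before being averaged. The function names (normalize_scores, final_scores) and the sample data are hypothetical.

```python
import statistics


def normalize_scores(scores_by_reviewer):
    """Convert each reviewer's raw scores into z-scores.

    scores_by_reviewer maps a reviewer ID to {collection_id: raw_score}
    for every Collection that reviewer scored. Returns a mapping of
    collection_id -> list of normalized scores received.
    """
    normalized = {}
    for scores in scores_by_reviewer.values():
        raw = list(scores.values())
        mean = statistics.mean(raw)
        # A reviewer who gives every Collection the same score carries no
        # ranking information; use a spread of 1 to avoid division by zero.
        spread = statistics.pstdev(raw) or 1.0
        for collection, score in scores.items():
            normalized.setdefault(collection, []).append((score - mean) / spread)
    return normalized


def final_scores(scores_by_reviewer):
    """Average the normalized scores each Collection received."""
    return {
        collection: statistics.mean(z_scores)
        for collection, z_scores in normalize_scores(scores_by_reviewer).items()
    }


# A harsh grader and a generous grader who rank the Collections identically
# contribute identical normalized scores, so neither skews the result.
reviews = {
    "writer_a": {"c1": 9, "c2": 7, "c3": 8},  # generous grader
    "writer_b": {"c1": 5, "c2": 3, "c3": 4},  # harsh grader
}
print(final_scores(reviews))  # {'c1': 1.22..., 'c2': -1.22..., 'c3': 0.0}
```

Under this kind of scheme, what matters is how a reviewer ranks your Collection relative to the others they scored, not the raw number they assign.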

During the Expert Review, each Semi-Finalist's Collection will receive five additional reviews from credible authorities. These scores will be run through the same normalization algorithm, so no matter which item writers or experts are assigned to you, and whether they tend to be hard or soft graders, every Collection is treated fairly. Please take time to consider how this scoring process has been designed and to develop a strategy that will appeal to our reviewers.

Meet Our Contributors

We'd like to thank our contributors. The William and Flora Hewlett Foundation generously funded the development of the Algebra Readiness Challenge and the prize purse. CoreSpring provided the authoring tools used to create the items. Parcc Inc. provided invaluable guidance and outreach for the Challenge, and CCSSO provided support and guidance. We'd especially like to thank Ted Coe, Ph.D., Director of Mathematics at Achieve, who donated his time and expertise in developing the Trait Scoring Rubric and judging protocols.