Structuring Fair Evaluation for NGA’s MagQuest Challenge

Challenge

The National Geospatial-Intelligence Agency (NGA) launched MagQuest, a multi-phase incentive prize challenge, to accelerate innovation in measuring and modeling the Earth’s magnetic field. As the challenge reached its fourth phase, NGA needed a robust, transparent system for evaluating competing teams and distributing a multi-million-dollar prize purse. The task was to design a scorecard that balanced scientific rigor, fairness, and clarity, ensuring that both the judging process and the final award allocations would stand up to scrutiny.

My Role

I partnered with the NGA challenge organizers and the HeroX platform team to co-design the scoring framework and build the prize allocation calculator. My contributions included:

  • Facilitation & Alignment: Leading collaborative working sessions with subject matter experts to define evaluation quadrants, criteria, and weighting.

  • Scorecard Design: Developing a structured rubric that captured both technical progress and program reliability.

  • Automation: Creating a Google Sheets–based calculator that converted qualitative assessments into weighted scores and automatically calculated prize money distributions.

Process

1. Collaborative Scorecard Creation
We began with two virtual whiteboarding sessions (on a MURAL/Miro-style canvas) where stakeholders debated and refined the evaluation quadrants. Each quadrant represented a major dimension of success:

  • Technical progress

  • Operational concept viability

  • Measurement performance

  • Program reliability

Within each quadrant, we identified specific, measurable criteria, for example: “How much testing have they done themselves?” and “How does their mission concept affect the magnetometer?” Each criterion was then weighted according to its relative importance.
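The actual criteria and weights belong to NGA and aren’t reproduced here; the sketch below uses hypothetical criterion names and weight values purely to illustrate how a rubric like this might be encoded before scoring.

```python
# Illustrative only: the quadrant names come from the case study, but the
# criterion names and weight values are hypothetical, not the actual
# MagQuest Phase 4 rubric.
RUBRIC = {
    "Technical progress": {
        "Independent testing completed": 0.15,
        "Prototype maturity": 0.10,
    },
    "Operational concept viability": {
        "Mission concept impact on the magnetometer": 0.15,
        "Deployment feasibility": 0.10,
    },
    "Measurement performance": {
        "Accuracy against requirements": 0.20,
        "Coverage and cadence": 0.10,
    },
    "Program reliability": {
        "Schedule credibility": 0.10,
        "Team and partner track record": 0.10,
    },
}

# Weights across all criteria sum to 1.0 so weighted scores are comparable across teams.
assert abs(sum(w for criteria in RUBRIC.values() for w in criteria.values()) - 1.0) < 1e-9
```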

2. Translating into a Rubric
The output of these discussions became a structured scorecard. Judges rated each criterion on a star scale, and each star rating translated into a weighted numeric value. This allowed evaluators to capture nuanced judgments while keeping scoring consistent across teams.
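A minimal sketch of that conversion, assuming a 1–5 star scale normalized to 0–1 and a rubric structured like the hypothetical one above:

```python
def weighted_score(stars_by_criterion, rubric, max_stars=5):
    """Convert one team's star ratings into a single weighted score in [0, 1].

    stars_by_criterion: {criterion name: stars awarded, 0..max_stars}
    rubric: {quadrant: {criterion name: weight}}, weights summing to 1.0
    """
    total = 0.0
    for criteria in rubric.values():
        for criterion, weight in criteria.items():
            stars = stars_by_criterion.get(criterion, 0)
            total += weight * (stars / max_stars)  # normalize stars to 0..1 before weighting
    return total
```

Normalizing each rating by the maximum star count puts every criterion on the same 0–1 scale before its weight is applied, so the weights alone control relative importance.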

3. Prize Allocation Calculator
I then built a Google Sheets tool that automated prize calculations. Judges could enter star ratings for each team, and the sheet would:

  • Apply the weighting logic from the rubric

  • Calculate total team scores

  • Distribute prize money under two different models: percentage of maximum possible score vs. percentage of total awarded stars (both sketched below)
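The sheet formulas themselves aren’t reproduced here, but the two allocation models can be sketched as follows. This is one plausible reading of each model, with hypothetical team names, scores, and purse size: the first scales an equal per-team slice of the purse by the percentage of the maximum possible score a team earned (so an imperfect field leaves part of the purse unawarded), while the second splits the full purse in proportion to the stars actually awarded.

```python
# Hypothetical purse, team names, and scores; one plausible reading of each model,
# not the actual MagQuest Phase 4 formulas.

def allocate_by_max_score(scores, purse, max_score=1.0):
    """Model 1: scale an equal per-team slice of the purse by each team's
    percentage of the maximum possible score. Shortfalls go unawarded."""
    slice_per_team = purse / len(scores)
    return {team: slice_per_team * (score / max_score) for team, score in scores.items()}

def allocate_by_awarded_stars(stars, purse):
    """Model 2: split the full purse in proportion to each team's share of
    all stars actually awarded."""
    total_stars = sum(stars.values())
    return {team: purse * (count / total_stars) for team, count in stars.items()}

if __name__ == "__main__":
    purse = 1_200_000                                           # hypothetical purse
    scores = {"Team A": 0.82, "Team B": 0.64, "Team C": 0.47}   # weighted scores in [0, 1]
    stars = {"Team A": 41, "Team B": 32, "Team C": 24}          # raw stars awarded
    print(allocate_by_max_score(scores, purse))
    print(allocate_by_awarded_stars(stars, purse))
```

Under this reading, the first model rewards teams against an absolute standard, while the second always distributes the entire purse relative to the field.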

Outcome

The final framework achieved several key goals:

  • Transparency: Every decision could be traced back to clear criteria and weights.

  • Fairness: Multiple perspectives were incorporated into the scoring system, reducing bias.

  • Efficiency: Automated prize calculations reduced manual work and minimized the risk of error.

Ultimately, the MagQuest challenge concluded with confidence in both the evaluation process and the award allocations. The system we created not only supported NGA in this phase but also provided a reusable template for future large-scale innovation challenges.

Key Takeaway

By blending facilitation, design thinking, and automation, I helped NGA move from a complex set of expert opinions to a clear, defensible system for rewarding innovation. This case illustrates how structured collaboration and thoughtful tool design can transform high-stakes decision-making into a transparent and trusted process.