Are digital notebooks judged more thoroughly than physical ones?

Our team recently competed in a tournament where notebooks were submitted digitally a few days before the event. We were curious whether notebooks submitted this way are judged more thoroughly, and whether more pages get reviewed, since judges have several days before the tournament starts. Judges usually have much less time to review physical notebooks at a tournament, so we wondered whether they spend the same amount of time on digital and physical submissions.

1 Like

If notebooks for a tournament are submitted digitally ahead of time, they get the same consideration ahead of the tournament. Physical notebooks should not be reviewed at the event if remote notebook review is taking place.

4 Likes

One of the best things you can do to help your team understand the process is to have one of your coaches or parents volunteer as a judge at a local event and see the process for themselves. They can then become an advising resource for your team.

Here’s what I can tell you from my experience as a Judge Advisor for several years. A couple of years ago, we switched our notebook judging from physical to digital and won’t go back. The method we use is similar to Worlds judging (we use RoboMentors, the same judging software used at Worlds), and it ensures better reliability in the rubric scores for digital notebooks.

Here’s how we do it: the Judge Advisor sends each notebook to two different judges for scoring against the rubric. The scoring is done blind; only the JA sees the rubric scores, and the notebook judges don’t see their colleagues’ scores or comments during the initial scoring process. If the two rubric scores agree within 5 or 10 points, the average of the two is used to rank-order the notebooks. If they don’t agree within 5 or 10 points, the JA sends the notebook to one or more additional judges for scoring. This either strengthens the average, when the third judge’s score falls between the first two, or identifies an outlier, when the new score agrees with one of the initial reviews. Sometimes a fourth review is done, depending on how the scores are shaping up. (Remember, we’re looking to identify the top 4 or 5 notebooks to be considered for Design, Excellence, and Innovate. Low-scoring notebooks don’t get much additional attention.)
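To make that agree-within-a-threshold logic concrete, here is a minimal sketch of how the reconciliation step could work, assuming a 10-point threshold. All names here (`reconcile_scores`, `AGREEMENT_THRESHOLD`) are illustrative only and are not part of RoboMentors or any official judging software:

```python
# Illustrative sketch of the blind-score reconciliation described above.
# Assumption: a fixed agreement threshold (the post cites "5 or 10 points").
AGREEMENT_THRESHOLD = 10

def reconcile_scores(scores: list[int]) -> float | None:
    """Return a consensus rubric score, or None if the JA needs another review."""
    # With exactly two blind scores: accept the average if they agree.
    if len(scores) == 2:
        if abs(scores[0] - scores[1]) <= AGREEMENT_THRESHOLD:
            return sum(scores) / 2
        return None  # disagreement: JA requests a third review

    # With three or more scores: find every pair that agrees within the
    # threshold; scores outside all agreeing pairs are treated as outliers.
    agreeing_pairs = [
        (a, b)
        for i, a in enumerate(scores)
        for b in scores[i + 1:]
        if abs(a - b) <= AGREEMENT_THRESHOLD
    ]
    if agreeing_pairs:
        # Average the scores that participate in some agreeing pair.
        kept = {s for pair in agreeing_pairs for s in pair}
        return sum(kept) / len(kept)
    return None  # still no consensus: JA may request a fourth review

# Example runs:
print(reconcile_scores([62, 58]))      # 60.0 -- two judges agree
print(reconcile_scores([62, 45]))      # None -- send to a third judge
print(reconcile_scores([62, 45, 60]))  # 61.0 -- 45 identified as an outlier
```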

When we used to judge notebooks physically, a couple of judges would quickly leaf through each notebook, sorting the “developing” notebooks (which don’t get evaluated) from the good ones. (A typical developing notebook is 5 or 6 pages with maybe a crayon drawing.) Then, depending on the size of the tournament, either all the judges sit down and score a few notebooks before going out to interview, or a couple of judges are dedicated to the task while the others do interviews. The amount of time spent really depends on the initial impression of the notebook.

15 Likes

Amen. Not only should they do this at least once, but having a mentor-type volunteer do it at least once per year keeps them current on the status quo in your region.

There is literally no substitute for this knowledge, as no data may leave the judging room other than what the judges carry in memory. They cannot share details about other teams or specific deliberations, but they can offer invaluable direction and insight in broad strokes.

6 Likes