I’m going to be brutally honest here…
My Past
I was a teaching assistant for 2 years in the College of Engineering at Texas A&M, and I graded thousands of college student papers for $11.50/hour. I'm going to be honest with you: the money is NOT worth the effort in any realistic sense, but as a student it at least helped put food on the table while I was in college. I also feel that teachers nowadays are underpaid for the work they put in, and the shortfall is only getting worse as many cities pull the financial rug out from under them.
However,
The State of AI
I have used numerous AI chat systems, including NVIDIA Chat with RTX (which runs a chat system locally on your PC), as well as ChatGPT and Meta's Llama. And I have consistently concluded that
AI is not at a satisfactory level to be relied on in academia. To put it bluntly, AI is really helpful as a starter template, but I have never had AI code a solution that worked the first time. This is because, in the current state of AI as we know it, systems like ChatGPT hallucinate: they mix the rules of one standardized system with the rules of other systems they learned. For example, if you ask ChatGPT to write an essay with a Works Cited page and proper citations and sources, it won't even ask which format; it may instead write an essay with APA citations, an MLA works cited, and paragraphs in Chicago style. Even if you ask it to write an essay in MLA, it may occasionally deviate and use rules from APA. Additionally, the sources and links it generates often do not exist as valid sources, and even if you point that out, the AI is still likely to spit out the same problem or just make up a random, irrelevant fix. These sorts of responses are known as "hallucinations," where the AI makes up its own rules and gaslights itself into believing it is correct. For these reasons, a system like ChatGPT cannot be fully relied upon to grade papers.
The Truth about AI Grading Solutions
The only solution I know of is Packback, and my oh my, I really dislike that system. For one, I know of peers who have circumvented the AI, tricking it into giving full marks by using special characters, formatting, etc. The point is, if you build an AI grading system, there is a high probability it is flawed, and over time the student body may figure out its patterns and easily trick it into giving full marks, completely discrediting the students who genuinely do put in the effort.
The Verdict
Although I understand the situation of professors/teachers being underpaid, I would rather bite the bullet and make sure that students are graded according to the effort they put into their notebooks, without relying on AI.
Alternative Solutions
Because Engineering Notebooks are writing-intensive, I would highly suggest finding alternatives. For example, students could take AP English to further improve their English comprehension and writing skills, or you could see if the department would consider labeling your class as "Writing Intensive" and making AP English a corequisite, with the added bonus that students could potentially earn college credit as well. Ultimately, Engineering Notebooks come down to
- How well can you write?
- How well can you follow the directions of a notebook template?
- How organized and structured can you make the notebook?
Homing in on the obvious weaknesses and working to fix those shortfalls will help students improve their notebooks naturally, without relying on external factors like a judge to tell them what is wrong. And by doing so, the notebook will surpass the state level and be able to compete at the world level through simple self-assessment, following the notebook's guidelines, and academic-level writing skills and practices.