From Calculators to ChatGPT: A Journey in Educational Transformation

This article was originally published in the Spring ’24 Edition of NJPSA’s Magazine Educational Viewpoints in March, 2024.

In 1958, American engineer Jack Kilby invented the integrated circuit, or “the chip,” paving the way for the evolution of large machines into handheld, compact devices. The personal calculator quickly became a hundred times smaller and five times more affordable than its initial design (Kilby, 2001). While society in general openly embraced personal calculators, educators were less enthusiastic.

Math teachers, not surprisingly, led the resistance. At the time, math education predominantly emphasized rote learning and basic arithmetic, principles that calculators threatened to undermine (Fey, 1978). The dilemma was clear: “How could one possibly teach rote arithmetic if students have a machine that can do it for them?” While a fundamental shift in teaching mathematics was necessary, its how and why were not immediately apparent.

Today, we rarely think twice about calculator usage in schools. More importantly, math education has since evolved towards more meaningful, authentic, experiential learning. With hindsight always 20/20, we now see clearly that this kind of learning is what mattered most all along. Perhaps, instead of asking, “How could one possibly teach rote arithmetic if students have a machine that can do it for them?” we should have asked, “Why should one possibly teach rote arithmetic if students have a machine that can do it for them?”

History Repeats Itself

Enter ChatGPT, released to the world in November 2022. While the awesomeness of handheld calculators, especially those from the 60s, pales in comparison to generative AI such as ChatGPT (and, to be fair, math teachers in the 60s mostly resisted elementary calculator use), we find ourselves at a crossroads once again. Students suddenly have a magic wand for writing, putting a ton of unfair pressure on teachers and the educational system at large.

Sam Altman, CEO of OpenAI (the creators of ChatGPT), even publicly expressed his sympathy for educators, but suggested that we should be more excited than concerned: “We’ll all adapt, and I think be better off for it. And we won’t want to go back” (Mok, 2023). After all, it’s our core mission as educators to prepare students for the future. In a world where that future is rapidly evolving, shouldn’t our teaching practices follow suit?

Reframing Cheating

To move forward, we’ll first have to rethink what constitutes cheating, especially plagiarism. Merriam-Webster (2024) defines plagiarism as “to steal and pass off (the ideas or words of another) as one’s own.” ChatGPT and other AI chatbots are considered generative AI, meaning they use neural networks and machine learning to generate entirely new text that has never been written before. While using chatbots to write essays clearly undermines academic integrity, it’s hard to call it plagiarism.

To combat cheating, many educators have relied on AI detectors such as GPTZero. Unfortunately, these tools have proven to be inaccurate. Worse, they often wrongfully identify writing from English language learners as AI-generated content (Liang et al., 2023), a major red flag from an equity standpoint.

John Nash, Professor of Education at the University of Kentucky, argues that “Whether AI use constitutes cheating depends entirely on the assignment and intended learning goals set by the teacher” (Nash, 2023). If the assignment can be completed in a matter of seconds with an AI chatbot, is it even worth doing at all?

The sooner we can accept these ideas, the sooner we’ll be rewarded with the revolutionary changes that AI promises to bring to our classrooms, such as personalized learning, real-time virtual tutors, and super-powered productivity. Teachers will be able to trade time spent on repetitive tasks for time spent coaching, mentoring, and connecting with students. The shift from “sage on the stage” to “guide on the side” will be more possible than ever.

Pitfalls

But generative AI is far from perfect. ChatGPT 3.5, for instance, is trained on a vast dataset of internet text sources like books and articles, currently confined to information preceding January 2022. Moreover, its reliance on the World Wide Web, a source not exactly known for its reliability or freedom from bias, raises questions about its validity. For these reasons, among others, we should approach AI-generated text with a healthy degree of skepticism.

Also, while text generated by chatbots can appear to be creative, it is considered to be all “left-brain” (analytical and algorithmic) and no “right-brain” (creative and artistic). In other words, it can generate text based on patterns and structures it has learned from its training data, but it is not (yet) able to generate new, innovative, creative ideas to solve complex problems.

Moreover, chatbots have been known to make simple math mistakes and generate text marred by racial, cultural, and political bias (Singh, 2023). Worse, they are even known to “hallucinate,” or present completely false information as factual. This flaw was recently brought to light by the story of two New York lawyers who were caught submitting a brief containing fictitious case law generated by ChatGPT (Merken, 2023).

Recommendations

While there are many reasons to be excited about the future of AI in education, there’s much more urgent work to do in the short term. Experts suggest that our immediate focus should be on the following:

1) Build capacity and understanding. To truly understand the strengths, limitations, awesome potential, and foreseeable risks of AI tools, educators need to go beyond webinars, PD, research, and general discussion. It’s time to get our hands dirty, play around, and kick the tires. If we can’t find a way to make these tools work for us, we’ll at least develop a deeper understanding of what’s possible.

2) Develop a clear policy and guidelines for students. It’s obvious that these tools are here to stay, so let’s prioritize seeking and giving guidance on safe and responsible use. Thinking of a chatbot as a “brainstorming partner” is a concise, overarching way to advise users on what’s appropriate.

3) Put assessments and assignments under the microscope. Revising our assignments and assessments to reflect more current, authentic, higher-order thinking skills, along with relying on more multi-modal tasks for grading, can negate the possible impact of a chatbot. The benefits will go far beyond circumventing chatbots, as these forms of learning amplify students’ voices and empower them in ways traditional learning cannot.

Luckily, there are plenty of educators leading the charge in these areas. Nonprofits AI for Education and AiEDU provide guidance on policy making, robust lesson plans, and insightful frameworks, as well as online courses, webinars, and other free resources. It’s also hard to scan social media or email without an offer for a workshop, webinar, or resource. When faced with a challenge, no group bands together like educators do.

Conclusion

While it’s easy to view this disruption as a setback, real leaders and innovators will frame it as an opportunity. Let us all follow suit. As we did in response to the calculator, let’s prioritize teaching and assessing the skills that matter most, such as the Four C’s, emotional intelligence, complex problem solving, and global awareness, among others. We are entering a world where students can do so much more than they’ve ever been able to before, so let’s challenge them to do so; because in the end, it’s not what you know, but what you do with what you know that matters anyway.

References

Fey, J. T. (1978). U.S.A. Educational Studies in Mathematics, 9(3), 339–353. http://www.jstor.org/stable/3481942

Kilby, J. S. C. (2001). Turning potential into realities: The invention of the integrated circuit (Nobel lecture). ChemPhysChem, 2(8–9), 482–489. https://chemistry-europe.onlinelibrary.wiley.com/doi/full/10.1002/1439-7641%2820010917%292%3A8/9%3C482%3A%3AAID-CPHC482%3E3.0.CO%3B2-Y

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns (New York, N.Y.), 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779

Merken, S. (2023). New York lawyers sanctioned for using fake ChatGPT cases in legal brief. Reuters. https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/

Mok, A. (2023). CEO of ChatGPT maker responds to schools’ plagiarism concerns: ‘We adapted to calculators and changed what we tested in math class’. Business Insider. https://www.businessinsider.com/openai-chatgpt-ceo-sam-altman-responds-school-plagiarism-concerns-bans-2023-1

Nash, J. (2023). Posts [LinkedIn Page]. LinkedIn. Retrieved January 15, 2024, from https://www.linkedin.com/posts/jnash_ai-llm-teaching-activity-7089680104515133442-zGC7/?utm_source=share&utm_medium=member_desktop

Merriam-Webster. (2024). Plagiarism. In Merriam-Webster.com dictionary. Retrieved January 12, 2024, from https://www.merriam-webster.com/dictionary/plagiarism

Singh, S. (2023). Is ChatGPT Biased? A Review.
