AI will revolutionize education — and reality


Opinion

This article was published 18/03/2024, so information in it may no longer be current.

On a recent test, a student submitted a perfect paper. I recognized that he had used OpenAI’s ChatGPT to produce the answer, so I gave him full marks.

COVID-19 transformed how we teach and evaluate. Exams and testing needed to go online. I also decided to make all my exams “open universe.” Students have full access to their notes and the internet. This is how they will work in the “real” world and how we must evaluate them. Of course, this approach has consequences.

It encourages both collaboration and plagiarism. I can police verbal communication when students are in the same room. I cannot monitor email communication, although I embed checks to detect file sharing. Educational software vendors offer “lockdown” browsers to limit virtual collaboration, but these systems also cut off access to the internet, defeating my approach to testing.

Michael Dwyer / The Associated Press files

Artificial intelligence is a game changer, and it’s coming fast. The OpenAI logo is seen on a mobile phone in front of a computer screen displaying output from ChatGPT.

After 50 years of teaching, I can smell plagiarism — like hitting a paved highway after driving down a bumpy country road. Google and other search engines allowed me to locate text that students had lifted.

Now it is getting harder. Many students use writing aids such as Grammarly, which I recommend. Since an increasing number of my students have first languages that are not English, I no longer need to spend time correcting spelling or sentence construction, which is good.

However, AI not only fixes grammar and spelling but can also write in different styles. AI also evolves rapidly: re-entering the same prompt (the term for a question posed to the system) produces a different answer each time. I can no longer do a Google search to check whether a student copied the AI output.

These developments will force universities to revamp their teaching and evaluation. First, since many AI products require payment for the premium version, students with the ability and willingness to pay can draw on more sophisticated systems and produce higher-level work. This creates academic inequality, but it has always existed. In an earlier era, some students paid for tutors. Others paid impersonators for papers, with price determining the quality of the product.

Second, teachers must change how they evaluate students. Some will cling to the multiple-choice format, standard in many undergraduate courses. It is far easier to mark, and heavy marking loads impede promotion by crowding out research. Banning phones, writing tests in lead-lined rooms, and having students sit exams in their bathing suits could be solutions.

For upper-year courses with essays, the topics must become complex with multiple subtopics. Computation exercises need to replicate the tougher problems where students think through the steps needed to solve a general problem. Teachers will need to spend more time designing questions and topics. Marking also becomes much more time-intensive. No easy answers exist.

Research will also change. AI systems are performing more complex numerical analysis. They can input data, complete the analysis, present the code in several programming languages and then prepare a text document describing and interpreting the results. Other systems can analyze text and perform sentiment analysis, detecting the “emotional basis” for human testimony. Finally, AI can now write a decent literature review with citations.

Lest I sound like a fanboy for AI, deep concerns exist. I follow two Substack commentators debating AI. Ethan Mollick, the glass-is-half-full-and-filling guy, posts the latest development extolling the magical wonders of AI. Gary Marcus, the glass-is-half-empty-and-draining guy, takes great glee in citing AI’s latest misadventure.

And misadventures abound, like:

• Making images of Nazi soldiers inclusive by showing women, Asians and Blacks as stormtroopers (Google’s Gemini);

• Depicting ants entering their nest with four rather than six legs (OpenAI’s Sora);

• Offering fabricated citations of scientific articles (OpenAI’s ChatGPT 3.5); and

• In discussions of justice and sentencing, meting out harsher sentences to accused with African-American names (OpenAI).

Despite these high-profile mistakes, I tend to side with Mollick. AI capabilities evolve rapidly, and these misadventures will resolve.

But AI leaves me with deep disquiet.

Those who use AI for computation court disaster. After 50 years of statistics, one develops a “nose” for numbers, like the nose for plagiarism. However, many of us are innumerate.

Fewer people will be able or inclined to perform independent analyses to check results. This is not an issue in an undergraduate test but could be a little more critical when managing flight paths for the thousands of planes in the sky at any moment.

As writers and researchers prepare material with AI assistance, it enters the “corpus,” the vast body of reference material on the internet that large language models such as ChatGPT use to produce their responses.

Misinformation will become embedded as those who know little of history uncritically accept AI’s results. TikTok, X and Meta are pipsqueaks compared to AI’s capacity to unhinge us from reality.

China has the largest saturation of closed-circuit cameras surveilling its population. AI has moved beyond facial recognition and can detect identity using gait and iris scans. How far off is scanning brain waves to interpret our thoughts?

Technofascism will then be complete.

Finally, human progress often requires a startling leap in perspective.

Einstein fixed problems with Newton’s physics by altering the obvious assumptions that space was straight and time constant.

He created physics with curved space and dynamic time, producing predictions that withstand continued challenges.

Such illogical leaps propel science forward. Will AI propel or constrain future intellectual revolutions? For me, that is the deep issue in AI and the advancement of knowledge.

Gregory Mason is an associate professor of economics at the University of Manitoba.
