The impact of AI on education: What ChatGPT's outage taught us

With digital tools making inroads into academic spaces, it's more important than ever to inculcate critical thinking among students

Published: Thu 13 Jun 2024, 8:49 PM

Updated: Fri 14 Jun 2024, 5:44 PM

By Joseph John Nalloor

Last week, ChatGPT had an outage that made global headlines. As one online tech portal summed it up aptly — “Millions forced to use brain as OpenAI’s ChatGPT takes morning off”. We are still at an early stage of artificial intelligence (AI) integration, and as it forges ahead into every aspect of our lives, it raises the question: why do we still need to teach students the basics of language, writing, critical thinking, creativity and ethics as processes become automated?

ChatGPT was introduced on November 30, 2022, smashing records to reach 100 million monthly users in just two months. According to OpenAI, over 100 million people were using ChatGPT every week in 2024. It has been embraced by users the world over to help draft presentations, reports, speeches, office emails and even personal text messages. Overnight, it has transformed the vocabulary of friends, families and colleagues into that of wordsmiths. Messages and replies are peppered with terms straight from a thesaurus, delivered in an overtly positive robotic voice lacking the authentic tone of the person you see in real life. To the chagrin of educators, it has also made the essay and other assignments redundant as students increasingly use it to generate submissions that all read the same.

To add to the chaos, or perhaps for the better, OpenAI recently unveiled ChatGPT Edu, a version of ChatGPT built for university students, faculty and researchers. In their blog post ‘Bringing AI into the new school year’, OpenAI offered universities the ability to build customised versions of ChatGPT to share within university workspaces. It will use GPT-4o, the latest flagship model, with text interpretation, coding and mathematics among other key features, such as data analytics, web browsing and document summarisation, plus the added bonus of working in over 50 languages. Universities would have higher message limits, with the assurance that conversations and data are not used to train OpenAI models.

However, student assignments and academic research around the world now carry elements of Generative AI as universities struggle to form and enforce policies on how to deal with it. Futurists call for its wholehearted integration, seeing it as embracing and mastering the inevitable; a second group is vehemently opposed to it; a third secretly hopes it is a dream that will just go away. And then there is a minority fourth group that wonders: what about the checks and balances for these tools? Last year’s outright bans by schools and universities have evolved into a more cautious approach by academic institutions and educators. Just last month, in mid-May 2024, OpenAI disbanded the Long-Term AI Risk Team that was set up “to think about the question of how to keep AI under control”. The OpenAI blog post of July 2023 announcing the team had stated: “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue. We are assembling a team of top machine learning researchers and engineers to work on this problem.” At the end of May 2024, they replaced it with a new ‘Safety and Security Committee’ to evaluate and develop processes. Whether that will be the safety net we are looking for, or end up mired in ambiguous terms, is yet to be seen.

The technology is awe-inspiring yet frightening for many, as the GPT-4o version of ChatGPT launched in May 2024 can now process text, audio and images, with memory and web browsing thrown in for free to offer a more personalised experience. The Memory option allows ChatGPT to remember details and preferences from conversations to tailor answers, from vegan recipes to restaurant and travel suggestions. It does make you wonder where it is all going. Will it change the learning and teaching process? Will there be an over-dependence on technology?

Educators realise they have a responsibility to adapt to new technology and teach their students how to best use it and work around it. Doomsday predictions were also made at the advent of the Internet, search engines, spreadsheets and even Wikipedia. The Internet didn’t replace teachers, nor has it made everyone smarter, even though every person with a smartphone has access to more information than at any point in the history of humankind. However, Generative AI is a new revolution that requires existing systems to change, and change is difficult. New models of assessment and training will have to be introduced across education systems globally, lest it widen the digital divide.

Will it mean the end of traditional education? Perhaps not — all this could also mean a return to the basics! Just last week, an article in The Guardian described how armed forces around the world are suddenly in overdrive to “train sailors brought up in a digital world to master extremely analog technology, such as the use of sextants to navigate by the stars” lest satellites and the Internet fail, reverting the world to an analog environment. This realisation has led certain armed forces to undertake training in centuries-old knowledge that enables personnel to identify stars and other celestial bodies to estimate wind, direction, distance and position, and even build a basic compass to map a course without GPS.

The immediate challenge lies in giving students a strong foundation in the basics of creativity, cognitive skills, critical thinking, language, art, music, culture and other subjects. Only when they master the basics will they be able to navigate this evolving environment. Educators have the responsibility to teach students how the limitations and biases of the human world are amplified in AI models. Systems trained on biased datasets tend to magnify those biases, a phenomenon known as AI bias, algorithmic bias or machine learning bias. It is common to see headlines exposing glaring racial or gender misrepresentation, or outright fabricated data, generated by AI models.

Ethical guidelines and AI governance policies are being drawn up across the world and need to enter classrooms as well. It is a fast-evolving environment that will require educators, institutions and students to move in tandem — to be not just subject experts but also technology experts, staying ahead of developments while remaining grounded in the basics of knowledge. Tom Fletcher, a former British diplomat, wrote that big data could be worthless unless one knows what questions to ask the machine and how to interpret the data. He stated that people who can “curate, interpret, analyse and present it will wield disproportionate influence”. The same holds true for AI.

Students who have skills in prompt engineering, creativity, original thinking, language and critical reasoning will have an edge over others. They will not be confined by the generated information or content, but will be able to mould it and push the boundaries of this new technology using their own minds.

~ Joseph John Nalloor is the Discipline Lead, School of Media & Communication, Murdoch University Dubai. He has been teaching media for over 19 years in the UAE and serves as a board member on regional industry boards.

wknd@khaleejtimes.com
