Generative Artificial Intelligence (GAI)

Generative artificial intelligence (AI) refers to a broad category of consumer-facing and business-to-business services that can generate cultural material such as text, images, audio, and video in response to user prompts. While the field gained sudden popularity and visibility in late 2022 and early 2023 due to the viral success of ChatGPT, generative AI encompasses a much wider array of applications beyond chatbots.


The core enabling technology behind most current generative AI services is the large language model - a neural network trained on massive text datasets - which allows systems to generate fluent, human-like text. When provided with a text prompt, these models continue the text by predicting the most likely next words and sentences. While the outputs may seem intelligent or creative, these systems do not actually understand language or possess true intelligence or autonomy. Rather, they leverage statistical patterns in their training data to make plausible inferences about how to continue and expand upon the given text prompt in a human-like manner.
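The "predict the next word" idea can be illustrated with a deliberately tiny sketch. The toy bigram model below simply counts which word follows which in a small sample text and then greedily emits the most likely successor at each step; real large language models learn these statistics with neural networks over vast datasets, but the underlying principle is the same. The corpus and function names here are illustrative, not from any real system.

```python
from collections import Counter, defaultdict

# A tiny sample "training corpus" (purely illustrative).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count, for each word, which words were observed to follow it.
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def continue_text(prompt_word, length=6):
    """Greedily extend a one-word prompt with the most likely next word."""
    words = [prompt_word]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break  # no observed continuation for this word
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))
```

The output is fluent-looking but mechanical: the model has no idea what a cat or a mat is, only which words tend to follow which. Scaling this idea up to billions of learned parameters is what makes modern systems convincing.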

With sufficient computing power, massive training datasets, and advances in machine learning techniques like deep learning, large language models can now generate cultural material that meets or exceeds expectations around coherence, fluency, and relevance for many consumer and business applications. However, it is important to recognize the current limitations of generative AI in areas like reasoning, common sense, and originality. While these systems are rapidly evolving to be increasingly useful and versatile, they ultimately rely on inferences from their training data rather than true comprehension or creativity. Thoughtfully recognizing the strengths and limitations of modern generative AI technology will allow us to apply it in responsible and socially beneficial ways.
Generative AI for Coding
Generative AI for Video
The Landscape of Generative AI by Ramsri Goutham
These systems are being trained on huge datasets scraped from the web. Is it fair that private companies train these systems on the outputs of writers and artists? What if these systems eventually replace the very writers and artists on whose work the training depended?
The Limitations of Generative AI
Despite their remarkable ability to respond to natural language prompts, generative AI tools carry significant risks and limitations.
  • Hallucination
    They are prone to statistically plausible but factually incorrect responses, often delivered in a confident tone intended to please. There are many possible causes, such as ambiguous user prompts and flawed or incomplete training data. Most experts believe hallucinations can be minimised but not eliminated entirely.
  • Common Knowledge
    Their reliance on training data means they tend to reproduce common knowledge, i.e. what is widely believed. Lacking any model of truth, they produce what is statistically likely rather than what is accurate. This leaves them prone to reproducing widely held beliefs that may be in error.
  • Unpredictable Consequences
    Chatbots in particular are inherently unpredictable. We simply do not know everything they can do or what effects they will have in the real world. At scale they exhibit emergent properties that even their designers often cannot fully explain. This means that unpredictable consequences for individuals, organisations and societies are inherent in generative AI.
Generative AI and the Crisis of Assessment in Higher Education
The launch of OpenAI's ChatGPT and other highly capable generative AI systems in late 2022 sparked an ongoing crisis around assessment integrity in education as we enter the 2023 academic year. These conversational systems can generate human-like essay and short answer responses to prompts, raising concerns about AI-enabled cheating and plagiarism. However, simply banning these tools is untenable, as generative AI is now part of the technological landscape students have access to.
Attempts to detect AI-generated text with specialized analysis algorithms also have serious limitations: they often misclassify original student work, disproportionately impacting English language learners and other vulnerable groups. Rather than relying on such reactive approaches, educators should take this opportunity to thoughtfully transition assessments towards more holistic, future-facing methods.

This involves incorporating open-ended, creative, and applied components that are difficult for current AI to convincingly replicate. It also requires explicitly integrating AI literacy into curricula to prepare students for a world where generative models are ubiquitous. Students should learn to critically evaluate, responsibly leverage, and complement the strengths of AI tools while recognizing their limitations.

While the assessment implications are pressing, they open up a broader conversation about what comes next after the AI crisis. How can we proactively realign learning objectives, activities, and evaluations to develop human capabilities and knowledge that will remain uniquely valuable? How do we build student resilience and agency in an age of increasingly capable AI partners? Rethinking assessment is just the beginning of reimagining education for an era of exponential technological change. If we seize this challenge, it could catalyze far broader and enduring improvements to our educational philosophy.
