Making the Grade: Exploring the Impact of GenAI on Education Equality 

How Generative AI is shaping the future of education in Ireland

Molly Newell, 13/11/2024

Since the launch of ChatGPT in November 2022 made generative AI (GenAI) tools more accessible than ever before, discussions about the role of artificial intelligence in education have accelerated. Now, both students and teachers have the ability to use GenAI to create, personalise, and expand educational content – transforming learning experiences within classrooms.

Major technology companies like Microsoft – which has ties to ChatGPT developer OpenAI – and Google – which recently launched several partnerships expanding GenAI in educational environments – will play a major role in shaping classrooms. The expanded use of AI for educational purposes promises significant benefits, such as individualised learning and enhanced accessibility. However, with these opportunities come challenges, as the rapid integration of AI also poses risks to equity and inclusivity within education. 

Today's students – most of whom are younger than the iPhone itself – have grown up in a digital world, their lives shaped by constant technological evolution. The pandemic exacerbated this trend as remote learning brought the educational experience almost entirely online. These digital natives are the first to experience AI-based learning aids firsthand, as schools explore how to balance the benefits of AI with the need for responsible, equitable integration. This article examines both the potential benefits of AI for students, and the risks associated with its adoption in classrooms, concluding with recommendations for ensuring equitable, inclusive, and effective AI-powered educational spaces. 

Benefits to Students 

In an environment of limited resources, AI tools can support inclusivity amongst students by efficiently adapting materials to students’ diverse learning needs. Responsible AI integration has the potential to bridge gaps between educators and learners, allowing all students to learn in the format that best suits their needs.  

Enabling Individualised Learning 

Schools around the world recognise the shortcomings of a one-size-fits-all approach to learning, moving instead towards “personalisation” to meet students’ diverse needs in the classroom. Personalised learning is an educational approach that customises learning experiences to address individual strengths, needs, abilities, and interests. “Access to quality education means access to personalized learning,” per UNESCO training documents – emphasising the role that personalisation can play in countering unequal educational outcomes. 

Artificial intelligence could accelerate the promise of personalisation. For a teacher to adapt learning materials to each individual student’s specific learning style and needs would require significant time and investment – two resources in limited supply at most schools. By combining AI techniques with instructional methods and information on students’ learning styles, digital tools can respond to feedback in real time to improve learning outcomes. Intelligent tutoring systems (ITS) mimic a human tutor: AI-powered educational tools track students’ performance, analyse user data to identify strengths, provide individualised feedback, and recommend additional learning activities. While such individualised attention from a human tutor would be financially unattainable for many families, ITS can deliver the adaptation that meets the diverse needs of the modern student body.  
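The feedback loop at the heart of an ITS – track performance, update a learner model, recommend the next activity – can be illustrated with a deliberately minimal sketch. All names and the scoring rule below are invented for illustration; production tutoring systems use far richer learner models.

```python
from collections import defaultdict

# A minimal, hypothetical intelligent-tutoring loop: log per-skill
# answers and recommend practice on the weakest skill so far.
class TinyTutor:
    def __init__(self):
        self.attempts = defaultdict(int)
        self.correct = defaultdict(int)

    def record(self, skill: str, was_correct: bool) -> None:
        """Log one answer so the learner model stays current."""
        self.attempts[skill] += 1
        self.correct[skill] += int(was_correct)

    def recommend(self) -> str:
        """Suggest the skill with the lowest observed success rate."""
        return min(self.attempts, key=lambda s: self.correct[s] / self.attempts[s])

tutor = TinyTutor()
tutor.record("fractions", True)
tutor.record("fractions", False)
tutor.record("algebra", True)
print(tutor.recommend())  # prints "fractions"
```

Even this toy version shows why the approach scales where one-to-one human tutoring does not: the bookkeeping and adaptation happen automatically, per student, at negligible cost.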

For instance, AI-enabled personalisation could support teachers with students who have newly arrived in Ireland and may have unique language, cultural, and learning needs. GenAI can facilitate the development of visual aids or translation – personalisation that can make learning accessible for a wider audience of students. 

Serving Special Education Needs 

Many of the individuals who work most closely with students with intellectual and developmental disabilities (IDD) are optimistic about the role of AI – 64% of educators and 77% of parents of students with IDD view AI as a potentially powerful mechanism to promote more inclusive learning, per a recent study. Just as AI can facilitate personalisation to meet individual students’ learning styles, AI can adapt educational content and experiences to be more accessible, helping to remove barriers for students with visual, auditory, physical, and cognitive differences.  

Addressing these obstacles could increase the participation of students with special education needs (SEN) in mainstream classrooms – an integration that would improve learning experiences for all students. For example, Microsoft Translator provides real-time speech-to-text translations for hearing-impaired students, while Deaf AI offers real-time sign language interpretation. Additionally, AI tools like ECHOES help children with autism develop social communication skills through interactive, playful environments.  

Before a child is diagnosed, AI could help identify challenges and learning differences facing students. Diagnosing a child with a learning disability is a time-intensive, stressful process for many families, but AI tools enabling teachers to recognise developmental differences could facilitate detection and intervention. While classroom-based AI solutions should not replace clinical experts and their diagnoses, digital tools could raise early warnings of a problem. This speed could be a game-changer for many families as early intervention can significantly impact a student’s learning outcomes.  

Creating Enriching Content 

AI-enabled simulations, including game-based learning, chatbots, virtual reality (VR), and augmented reality (AR), offer immersive experiences that capture students’ attention and enhance education across various fields. These technologies present a cost-effective way for students to explore scientific, historical, and cultural topics in context, making “field trips” less dependent on budgetary, geographical, and physical constraints. Initial research finds that AI-enabled simulations enhance learning and memory, but more research is needed.  

Supporting and Advising Students 

AI-powered chatbots offer a budget-friendly way to distribute educational resources and support more equitably. They provide students with instant access to crucial information, such as course materials, deadlines, and well-being resources, regardless of their socio-economic background. Chatbots like Roo can also address sensitive topics like sexual education, offering students anonymity and reducing discomfort. In underserved areas, chatbots can help bridge gaps by offering 24/7 support for academic and mental health issues, giving students continuous, non-judgmental access to help and fostering a more inclusive and supportive educational environment. 

Risks to Students 

Like previous waves of emerging technologies, AI systems – and the data that powers them – have the potential to exacerbate existing inequalities or create new ones. School-based digital transformations must address these risks to ensure that classrooms of the future are more inclusive than those of the past.  

Unequal Access 

The integration of AI-powered education technologies can challenge equity and inclusion due to the “digital divide,” where access to necessary resources varies by income, education, geography, culture, gender, or other demographic markers. Effective use of AI tools requires technology availability, high-speed internet, and access to up-to-date databases. However, schools with a higher percentage of socio-economically disadvantaged students often face greater shortages or poor-quality digital resources. In 2022, nearly 30% of disadvantaged school principals across the OECD reported lacking adequate digital resources, compared to less than 20% in advantaged schools, highlighting the disparity in access and potential for further inequality in education.  

Ireland’s rural communities face additional challenges integrating AI. While Ireland has strong digital skills relative to the rest of the European Union, rural areas and suburbs lag behind cities. In 2021, just 35% of adults living in rural parts of Ireland and 37% of adults living in towns and suburbs had above-basic digital skills – far below the 47% of adults living in cities with similar capabilities. These findings – along with rural Ireland’s relatively poor internet access – suggest that rural Irish schools may struggle to fully leverage the benefits of AI in their classrooms. 

Perpetuating Techno-Ableism 

As administrators explore AI tools for addressing special education needs, care must be taken to ensure that implementation schemes do not perpetuate outdated understandings of disability. Techno-ableism refers to the belief that technology, particularly AI, can "fix" disabilities, framing disability as an individual problem rather than addressing societal barriers. This mindset often aims to help disabled individuals assimilate into an able-bodied, neurotypical world, perpetuating a narrow view of disability. In education, this approach can undermine equity and inclusion by reinforcing the idea that individuals need to be "fixed" instead of challenging societal structures that create barriers. As a result, AI tools designed without inclusion in mind may fail to meet the diverse needs of disabled students, further perpetuating exclusion. 

Encoding Bias 

AI tools are envisioned to be neutral decision-makers – taking in data and providing solutions free from human emotion or error. In reality, algorithms can inherit harmful biases from their training data or developers, perpetuating stereotypes and undermining commitments to inclusion. Breaking past this facade of objectivity is essential to understanding the potential of AI tools to further entrench inequality. 

Algorithmic bias can manifest as allocative harm, which affects resource distribution, and representational harm, which reinforces stereotypes based on characteristics like gender or ethnicity. Evidence of biased AI systems has been found across domains, including criminal justice, banking, medicine, computer vision, hiring, and education. 

Researchers have warned for years of the unreliable effectiveness and limited applicability of educational algorithms across different populations. In practice, this bias can take various forms. For instance, testing algorithms designed to evaluate English language proficiency might consistently undervalue the skills of learners from specific countries, potentially limiting access to higher education. Similarly, algorithms used to predict course completion may inaccurately assess students from certain demographic groups, preventing them from receiving necessary support. These biases can affect educational software, grading systems, and even social communication tools, disadvantaging students from diverse ethnic, linguistic, or socio-economic backgrounds. Furthermore, the underrepresentation of disadvantaged students and those with special education needs in AI bias research exacerbates these disparities. 

The real-world impact of algorithmic bias on students gained widespread attention in 2020 when Ofqual, the United Kingdom's Office of Qualifications and Examinations Regulation, turned to an algorithm to assess students for university admission after the pandemic rendered in-person exams impossible. Some 39% of students received A-level grades below those predicted by their teachers – jeopardising their places at university. Among the factors Ofqual used to determine scores was each school’s historical grade distribution from 2017 to 2019. As a result, students from disadvantaged school communities saw their grades deflated, while students from private schools saw theirs inflated. Hundreds of students quickly protested these flawed results, chanting “F**k the algorithm” in front of the Department for Education. Officials did overturn the results, awarding students the higher of the teacher-predicted grade and the algorithm’s output, but this debacle demonstrates the power of AI systems to undermine trust and damage students’ academic and professional outcomes.  
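The remedy officials settled on – awarding each student the higher of the teacher-assessed grade and the algorithm’s output – amounts to a simple resolution rule, sketched below. This is a hypothetical illustration only: the simplified grade scale and function names are invented, not Ofqual’s actual implementation.

```python
# Simplified A-level grade scale, ordered from lowest to highest.
GRADE_ORDER = ["U", "E", "D", "C", "B", "A", "A*"]

def resolve_grade(teacher_predicted: str, algorithm_awarded: str) -> str:
    """Return the higher of the two grades, mirroring the 2020 reversal."""
    rank = {grade: i for i, grade in enumerate(GRADE_ORDER)}
    return max(teacher_predicted, algorithm_awarded, key=rank.__getitem__)

# A student predicted a B but algorithmically downgraded to a D keeps the B.
print(resolve_grade("B", "D"))  # prints "B"
```

Note that the rule is deliberately one-sided: it can only restore grades the algorithm deflated, while leaving any algorithmically inflated grades in place.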

While the Ofqual incident demonstrated the challenge of algorithmic bias in predictive AI models, GenAI systems present similar challenges. Image generators like Stable Diffusion and text generators like ChatGPT have both been found to produce harmful content that amplifies gender and racial stereotypes.  

Addressing bias in AI and education requires a multifaceted approach, including diverse data collection, transparency in AI development, and continuous monitoring. While AI can sometimes reduce biases, such as gender disparities in certain educational settings, it can also introduce new challenges if biases in training data are not properly addressed. At the same time, reducing bias in teachers also remains crucial. Effective strategies include training educators to recognise conscious and unconscious biases, ensuring fair evaluation processes, and promoting diversity within both AI development teams and the education system. Together, these efforts can help create a more equitable and inclusive learning environment for all students. 

Flawed GenAI Detection 

In situations where student use of GenAI is prohibited, educators are interested in reliable methods to identify instances of academic misconduct. Given the stakes associated with such accusations – loss of reputation, failure, suspension, or expulsion – reliability is of the utmost importance. Research has broadly found that humans struggle to accurately identify GenAI-produced text, making educators themselves ill-suited to the task of evaluation. In this gap, several AI detectors have entered the market, but their effectiveness remains unclear. In July 2023, OpenAI – the developer of ChatGPT – pulled its AI Classifier, citing a low rate of accuracy. 

The unreliability of AI detectors is even more problematic for non-native English speakers, who are far more likely to have their writing falsely identified as AI-generated. A study from Stanford University evaluated the ability of various AI detectors to correctly identify instances of GenAI in the essay portions of the Test of English as a Foreign Language (TOEFL). While the detectors were “nearly perfect” at evaluating essays written by native English speakers, they falsely identified 61.22% of TOEFL essays written by non-native English speakers as AI-generated. For students who already face disproportionately high rates of school discipline, a false accusation could have a profoundly negative impact on their academic career. As Ireland continues to attract a global student body, the appropriateness of these tools for assessing students with diverse linguistic backgrounds must be considered.  

The Role of Teachers  

Teachers are crucial in navigating the integration of AI in classrooms, serving as both guides and facilitators for students’ learning experiences with new technologies. As the primary agents of implementation, they have a unique responsibility to ensure AI tools are used thoughtfully, equitably, and effectively. However, for many teachers, particularly those who began their careers before the current digital revolution, keeping up with rapidly advancing technology poses significant challenges. Most of today's educators are not digital natives, unlike their students, and the technology available now differs vastly from what they may have initially trained with. 

To address these disparities, upskilling and professional development are essential. Teachers must develop a working knowledge of AI and other digital tools so they can make informed choices about how best to use them in their specific classroom settings. Upskilling requires significant time and resource investment, however, and access to such programmes often varies depending on location and funding. For teachers in low-resource schools, these barriers are particularly acute, meaning that without targeted support, some educators may be less equipped to provide AI-supported learning experiences. 

In addition to knowledge of the tools, teachers also need to understand the ethical implications of AI in education, including data privacy, potential bias in AI algorithms, and the importance of fostering a responsible and critical approach to AI among students. They must consider how AI tools impact each student and whether they reinforce biases or create inequities in the classroom. This awareness allows educators to use AI in a way that supports all students’ development, promoting inclusivity rather than inadvertently widening achievement gaps. 

The role of teachers extends beyond instruction. With AI’s potential to automate administrative tasks—such as grading, attendance, and lesson planning—teachers have an opportunity to focus more on direct engagement with students. However, while AI can reduce certain burdens, it may also introduce new responsibilities, such as managing the technology, addressing its limitations, and monitoring its effects on student learning and well-being. Teachers must strike a careful balance between embracing AI’s efficiency and retaining the personal interaction that is foundational to effective teaching. 

Recognising these challenges, initiatives are underway in Ireland to support teachers with AI integration. Google, for instance, has partnered with the Irish education-focused non-profit Kinia to provide bilingual (Irish and English) AI training to educators, reaching thousands of students across the country. Similarly, ADAPT’s AI Literacy in the Classroom programme focuses on equipping secondary school teachers with the necessary skills to incorporate AI into their teaching and empower students to engage critically with AI technologies. 

Ultimately, teachers play a pivotal role in shaping the future of AI in education. With the right support and training, they can harness AI’s benefits to create dynamic, personalised learning experiences that engage and empower all students. Yet, to achieve this, education systems must invest in teacher training and support networks that enable teachers to adapt to this fast-evolving landscape, ensuring that the introduction of AI enhances—not disrupts—student learning.  

Ensuring Equitable Educational Spaces 

The development of these programmes in collaboration with educators offers a promising path forward, but steps must be taken to ensure that AI-powered technologies place student success over corporate returns. While artificial intelligence has the potential to transform education, its benefits will only be fully realised if it is implemented responsibly and equitably. AI tools must support, rather than undermine, inclusion by being accessible to all students, regardless of socio-economic or geographic background. Additionally, teachers need ongoing training and support to navigate these new technologies effectively. By prioritising inclusivity, transparency, and teacher training, Ireland can foster a future where AI in education empowers all students, creating an equitable learning environment that prepares them for a technology-driven world. 

Posted in: Technology

Tagged with: education, equality

Molly Newell


Molly Newell leads TASC’s research on a variety of emerging tech issues, including AI, cybersecurity, digital regulation, and platform economies. An experienced project manager, Molly has led research teams analysing technology, cybersecurity, and security policy. She holds an MSc in Digital Policy from University College Dublin and a BA in Public Policy & Leadership from the University of Virginia.

