
Back to School: How AI can be used in education

What you'll learn:

AI technologies are reshaping edtech, but are we adopting them without enough oversight? In this piece, we discuss why a re-education in AI is critical.

Since the pandemic, there’s been an upsurge of interest in edtech. In fact, Morgan Stanley projects global edtech spending to grow from $250 billion in 2022 to $620 billion in 2030. AI has contributed significantly to this boom, as schools and universities have increasingly adopted AI technologies to collect and leverage data analytics, as well as to provide personalized learning opportunities.

On Tech Can’t Save Us (TCSU), Literal Humans’ weekly podcast, we have been fortunate enough to discuss the edtech boom with innovators in the space. Across these discussions, it’s become clear that AI is no silver bullet for improving student outcomes – it can even have an adverse effect.

To properly identify the potential pitfalls of using AI in education, we all need a re-education in AI. How does it work, both commercially and technically? How does it interact with different communities? And how can it be harnessed to optimize outcomes for all students?

Engaging with AI as a learner opens up a whole new world of possibilities for understanding its use as a teaching tool. Here we’ll discuss insights from the podcast about these possibilities.

How AI can improve educational outcomes

As we’ve already mentioned, AI technologies are making strides in education. For example, AI-driven data insights provided by platforms like Doowii are allowing educators to better identify students’ needs. By analyzing metrics such as attendance rates and grades, these tools offer a new, holistic way of capturing each student’s learning behavior. And simply by automating this administrative workflow, AI reduces educators’ workloads, giving them more time to focus on actually teaching their students.

The data insights can also enhance teaching itself. Take SYNKii, an online music platform, for example. It’s developing technology that collects data on users’ wrist and hand movements, captured by cameras the users set up themselves. According to SYNKii’s founder and CEO, Sunghee Park, this data gives teachers insights into each student’s physical makeup, allowing them to tailor their teaching to the subtle but important differences in carpal movement.

Generative AI tools can also develop practice questions, courses, and other learning resources for students. Cien Solon, co-founder and CEO of LaunchLemonade, a no-code platform that lets individuals and organizations build AI assistants, touched on the demand for these teaching tools: parents, she advises, can enhance teaching with ‘AI co-pilots’. The result is more avenues for students to receive personalized, more effective teaching.

In the careful adoption of AI technologies lies a global opportunity to reduce educational inequity. 

But we need to be wary of AI for AI’s sake

However, our education systems – and their interactions with AI – need to evolve in order to unlock the benefits of these technologies. Currently, there’s a vacuum of good practice and policy around the uptake of AI in educational institutions, and an increasing danger of adopting AI for AI’s sake.

For instance, since ChatGPT launched in November 2022, there’s been a lot of talk about how to stop students from using generative AI to write their homework. But this misses the real issue. If a Large Language Model (LLM) can score full marks on a History GCSE exam with a few prompts, doesn’t that suggest an outdated and inadequate curriculum, marking scheme, and examining practice?

Similarly, the shiny solution – AI plagiarism detectors – misses the point. These technologies promise to flag submissions that cross a threshold of what is deemed to be ‘AI-generated text’. In other words, they provide a quick, hassle-free way for teachers to weed out the cheats. Yet, as Ben Dodson, Doowii’s founder and CEO, points out, the accuracy claims made about these detectors are flawed; results from the more successful trials are generalized:

“For example, when testing essays written by native English speakers at the university level, it might achieve 90% accuracy, yet if you introduce a subpopulation of English-as-second-language learners, that accuracy drops considerably because their writing tends to be more atypical, making it harder to detect whether the text is AI-written or not.”

This bias against non-native English writers – backed by a recent paper from researchers at Stanford – shows how these detectors could disproportionately, and inaccurately, accuse and even punish non-native English speakers for cheating.
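
To make the mechanism concrete, here’s a toy simulation of how a single flagging threshold can penalize one group of human writers more than another. Every number in it – the score distributions, the group sizes, the threshold – is invented purely for illustration; it is not based on any real detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "AI-likeness" scores a detector might assign to
# essays that are ALL human-written. We assume ESL writing looks
# more "atypical" to the model, so its scores skew higher.
native_scores = rng.normal(loc=0.30, scale=0.10, size=10_000)
esl_scores = rng.normal(loc=0.50, scale=0.12, size=10_000)

THRESHOLD = 0.55  # essays scoring above this get flagged as "AI-generated"

# Every essay here is human-written, so any flag is a false accusation.
print(f"Native speakers falsely flagged: {np.mean(native_scores > THRESHOLD):.1%}")
print(f"ESL writers falsely flagged:     {np.mean(esl_scores > THRESHOLD):.1%}")
```

With these made-up distributions, under 1% of native speakers are flagged while roughly a third of ESL writers are – a gap that a single aggregate accuracy figure would hide entirely.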

The accuracy claims made about detectors are further undermined by their ever-dwindling shelf life. The generative AI models students use are constantly changing with each new release from OpenAI, Anthropic, and other companies. So, if an AI testing company claims 95% accuracy based on testing against GPT-3.5, that metric becomes almost irrelevant over time. It is also worth noting, Ben continues, that if machine learning models have a vested interest in sounding more human and genuine, they also have a vested interest in evading detection as ‘AI-generated’.

“So, on the one side you have OpenAI with billions in funding, and, on the other, you have specialized AI detectors with far fewer resources. Who’s winning that arms race?” 

It is no coincidence that OpenAI removed its AI detector over reliability concerns, and that HiveAI did the same.

Finally, Ben points out, there is the moral issue of false positives.

“You could detect half of the cheaters pretty accurately, but if that comes at the expense of 20% false positives, we need to consider what’s an acceptable rate. Should we accept falsely accusing someone 20% of the time just to take action on the 80% who are cheating? That seems fundamentally unfair.”
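
Ben’s numbers are worth playing out. The back-of-envelope sketch below uses his quoted rates – catching half of the cheaters at a 20% false-positive rate – against a hypothetical class of 1,000 students in which 10% actually cheat; the class size and cheating rate are our own assumptions, not figures from the podcast.

```python
# Hypothetical cohort (assumed figures, not from the podcast)
class_size = 1_000
cheating_rate = 0.10             # assume 10% of students actually cheat

# Rates from Ben's example
true_positive_rate = 0.50        # "detect half of the cheaters"
false_positive_rate = 0.20       # "20% false positives"

cheaters = class_size * cheating_rate    # 100 students
honest_students = class_size - cheaters  # 900 students

caught = cheaters * true_positive_rate                    # 50 cheaters caught
falsely_accused = honest_students * false_positive_rate  # 180 honest students flagged

print(f"Cheaters caught:         {caught:.0f}")
print(f"Honest students accused: {falsely_accused:.0f}")
```

Under these assumptions, the detector accuses 180 honest students in order to catch 50 cheaters – most of its accusations are wrong, which is exactly the unfairness Ben describes.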

The issue of AI detectors epitomizes why AI technologies, and the ways in which they are adopted, need rigorous oversight and evaluation before they are introduced into the education system. 

Everyone needs to be educated in AI

To critically evaluate AI, we must first be educated about its potential, its risks, and its history of governance. Will Saunter, co-founder and Biosecurity Lead at BlueDot Impact, a social enterprise offering free, part-time courses on AI and biosecurity, discussed the importance of providing accessible, high-quality education about emerging technologies:

“We have people from over 100 different countries who’ve done our courses, from a whole range of different fields and areas of expertise. And I think both [AI and biosecurity] benefit massively from having a very wide range of voices and areas of expertise contributing to them.”

To ensure a wide range of voices feel empowered to reap the benefits of AI technologies, AI must be demystified for all. This is the guiding principle of Cien’s LaunchLemonade, which, by prioritizing accessible language in its marketing, has achieved a majority of women users in its first eight months:

“I don’t think you’ll hear of anywhere else where an AI platform has more women builders than men, and I think we achieved that through the language that we use and the mission that we talk about. Our mission is very clear: it’s AI for all – anyone can use AI. And I think customers have resonated with my language, with my brand, and with my intention around our marketing strategy: to always use clear and accessible topics and language.”

Cien’s vision of AI for All is one pursued by all the changemakers featured on TCSU in August and September 2024. It is by connecting their efforts that we can see how this vision will become a reality in our education systems.

At Tech Can’t Save Us, the conversation continues beyond each episode. Catch up on our latest discussions, and stay tuned next month for more edtech insights – including one school’s innovative approach to combating students’ social media usage!