
April 28, 2025
Innovation in the Digital Education Landscape
„Von Vision zu Veränderung – Ein Gespräch mit …“ (“From Vision to Change – A Conversation with …”) is our interview format in which we speak each month with inspiring voices in education and learn about their view of the education system. We want to discuss current topics as well as hear about the visions our conversation partners have for the education system.
Berlin/Düsseldorf, April 28, 2025. Prof. Dr. Mutlu Cukurova is a researcher at the intersection of Learning Sciences, Computer Science, and Human-Computer Interaction, exploring how AI can complement human learning rather than replace it. At the UCL Knowledge Lab, he leads large-scale research projects and policy initiatives, focusing on how education systems can move beyond routine cognitive skills to foster collaboration, adaptability, and the ability to “learn how to learn” in an AI-driven future.

The Interview with Prof. Dr. Mutlu Cukurova
Let’s start with your personal approach to what you are doing professionally. What drew you to this field? Why is education important to you?
Oh wow! It’s been a while since I reflected on these questions. But I guess it would be fair to say that I have always been interested in how humans learn and how intelligence works. My background is originally in science and engineering, and then I did my Master’s in analytics before pursuing a PhD in learning sciences. I have been designing and developing various computational and adaptive learning support systems, and that has remained my central focus. Of course, education is much broader than just human learning, but my initial interest in education stemmed from my fascination with learning itself. I was always fascinated by the idea of computationally modeling human learning – not only to understand it better but also to support it more effectively. I believe that is at the core of education.
Let’s stay with a pragmatic perspective on learning for a moment. How would you define successful education in this new landscape where digitality, AI, and other emerging conditions shape the learning environment?
Well, that’s a billion-dollar question. My argument in these discussions is that there are fundamental differences between human intelligence and artificial intelligence. AI, or digital technologies in general, do not process emotions, moral reasoning, or context in the same way humans do. That is to say that human vulnerabilities and limitations shape the way we process information – and, in turn, the way we make decisions.
I start with these differences because, for me, understanding them is key to defining successful education. In my view, education should go beyond straightforward knowledge acquisition and simple information processing tasks. Instead, it should cultivate uniquely human capacities: empathy, ethical and moral reasoning, theory of mind (the ability to consider the perspective of another), as well as our values and virtues. These are the fundamental qualities that shape how we think about the world, and I believe their development should be at the heart of a meaningful education. So, in essence, I see successful education as one that helps people nurture and refine these uniquely human qualities. Does that make sense?
It does! What I feel compelled to ask here is: Should we, in the context of education, consciously focus on what remains uniquely human? AI cannot enjoy fresh air, feel the bark of a tree, or have comparable tactile or aesthetic experiences. This also reminds me of Nicholas Maxwell’s wisdom narrative – the idea that we should value more than knowledge and ought to prioritize “wisdom” over the mere accumulation of information. Do you think we should educate for wisdom rather than just knowledge?
Yeah, that’s an interesting question. Many modern human learning theories – including those rooted in constructivist approaches – view knowledge construction as a highly subjective, shared process rather than as an isolated, intracranial activity in which individuals simply pile up information in their brains. Constructivist learning theories generally emphasize the importance of embodied learning experiences – like the example you gave, where interaction with the physical world plays a key role. They would also resist reducing core educational constructs, such as wisdom, to simplistic, reductionist representations. Instead, they highlight the value of lived learning experiences as the foundation for developing essential competencies and skills: through real-world, open-ended problem-solving and social interactions, individuals cultivate these abilities. So, if we consider wisdom as something that emerges from lived experiences, then it should indeed be at the core of education. Most constructivist learning theories would support this idea.
The challenge, however, is that we still lack well-established methods to measure progress in these areas. And if something cannot be effectively measured, it becomes difficult for education systems to implement it at scale. Bringing the discussion back to AI, the reality is that most digital learning technologies today focus on the aspects of learning that are easiest to automate – things like content acquisition in intelligent tutoring systems or AI-driven content creation tools that help teachers save time. However, innovative constructivist pedagogies supported by AI are still rare and much harder to implement effectively.
You make an interesting point on the necessity to measure learning progress. Given that we have to rank people – to assign degrees, admit students to universities, and qualify people for jobs – there is a lively discourse around the knowledge and skills that should actually be taught. Many argue that the rapid pace of change makes it nearly impossible to determine what students should learn. What kinds of knowledge and skills do you think will remain essential?
That’s a question I think about a lot – both in my role as a professor and from a research perspective. As a university lecturer, I constantly ask myself: What will still be relevant in five years for the students I’m teaching today? I teach Design and Use of AI in Education at UCL, and in the past few years, I have never used the exact same course materials from one year to the next. The field is evolving so quickly that constant updates are necessary. So yes, it’s incredibly difficult to predict what specific knowledge students will need in the future. But there are a few things I’m quite certain about:
First, as we discussed earlier, human learning is fundamentally constructive – we build new knowledge by connecting it to what we already know. Unlike machines, which can process isolated pieces of information, humans don’t learn in a vacuum. This means we cannot justify eliminating foundational knowledge just because new, more “relevant” information emerges. Take mathematics as an example. Just because we have calculators, we have not stopped teaching basic arithmetic. Students need to understand multiplication and basic mathematical operations to build toward more advanced topics like calculus. So, whenever people argue that education should only focus on the most up-to-date knowledge, I think it’s important to push back. We still need to teach fundamental concepts that allow students to engage with future developments in a meaningful way.
Now, from a skills perspective, this question becomes even more interesting. To me, the most critical skills are those that support learning itself – things like learning how to learn, adaptability, and cognitive flexibility. Given the rapid changes in technology and the job market, people will constantly need to acquire new knowledge and adjust to new environments. That is why self-regulation skills, social regulation skills, and metacognitive skills are so important. The ability to reflect on one’s own learning process, adapt strategies, and work collaboratively will only become more valuable. But these competencies need to be integrated into education systems as core components – not just as optional “nice-to-haves.”
You talked about critical thinking but also empathy, and I assume there is an overlap between skills that are essential for the future and those that AI cannot easily replicate. That made me wonder: How can we actually foster these uniquely human skills in education?
Sometimes, when people talk about skills like critical thinking or adaptability, they treat them as if they exist in isolation – as if they could be taught in a vacuum. But in reality, these competencies develop through engagement with specific topics. We need to ensure that students are acquiring these skills within meaningful subject areas, rather than as abstract, standalone abilities.
But beyond that, I think one of the most important factors is ensuring that students have enough opportunities to practice higher-order thinking skills. These skills are very difficult, perhaps impossible, to teach with direct instruction. They can only emerge through students’ engagement in well-designed learning activities and lived experiences. The way we currently structure education often emphasizes content acquisition far more than these broader cognitive abilities. For instance, imagine a school day where the first 60 to 90 minutes are dedicated specifically to knowledge acquisition. AI-driven tutoring systems are fantastic at supporting this kind of structured learning – they can adapt difficulty levels, personalize feedback, and adjust the pace to fit the learner’s needs. This makes them incredibly effective for foundational knowledge acquisition. However, after that initial phase, the rest of the school day could be structured to focus on applying knowledge in open-ended, real-world problem-solving tasks. This is where students engage in complex, collaborative, and creative thinking – where they actually develop and use skills like critical thinking and adaptability.
Instead of being the sole knowledge provider, the teacher would act more like a mentor – a more experienced peer who facilitates discussions, guides exploration, and supports students in developing these higher-order skills. In an open-ended, inquiry-based learning environment, teachers need to be comfortable admitting that they don’t have all the answers. That can be difficult, because there are power dynamics in play. Moving toward a mentorship role requires them to embrace uncertainty and guide students through exploration rather than just delivering information.
Maybe I can shift back to AI more broadly. You’ve worked extensively on the role of AI in education, and I know that you’ve outlined three distinct conceptualizations of AI in this context. How would you explain these three conceptualizations to me in your own words?
Yes, of course. I’ve been thinking about conceptualizations of AI in education mainly through the theoretical lenses of extended cognition and augmented human intelligence. This has led me to develop three primary conceptual frameworks for how AI can be understood and used in educational contexts.
The first is the externalization of human cognition. This refers to identifying a task that could be considered an intelligent act, clearly defining it, and then modeling it computationally or statistically so that it can be performed by an AI system. Whenever we use AI to take over such tasks, we are externalizing human cognition. We see that everywhere. Most common interactions with tools like ChatGPT go something like:
“Write me an essay on X.”
“What is gravity?”
“Create a lesson plan for teaching gravity.”
All of these are cases where the user is asking the AI to perform a task they would normally do themselves, often with the aim of saving time. The problematic aspect is that if we continually offload core professional activities – like lesson planning or content design – we risk cognitive atrophy. These are essential tasks for developing and maintaining expertise. If AI takes them over entirely, we lose the opportunity to practice and grow in our own competence.
The second conceptualization builds on the idea that in any learning situation, people bring with them an explicit or implicit mental model of what they think they should be doing in that particular context. Here, AI can be used to represent what “good” performance might look like in a given learning setting and then reflect that representation back to the learner. The goal is that humans internalize this model to refine or change their own thinking. In education, this could mean promoting learning gains, behavioral shifts, or the development of competencies. For example, a multimodal AI system could track a student’s group work – analyzing who talks to whom, who listens, who builds on others’ ideas, and who dominates the conversation. The AI can then present visual feedback on participation over the last two hours. This kind of reflective feedback doesn’t prescribe actions like “do this to become more collaborative” but instead provides data that helps the student adjust their own mental model of what effective group work looks like. That’s a good example of internalization – supporting learners in adjusting their own thinking.
Finally, AI can be used to extend human cognition by building synergistic systems in which human and artificial agents interact in ways that produce outcomes neither could achieve alone. The key idea here is synergy – the combination should lead to emergent intelligence that exceeds the sum of the parts. It’s not just a matter of the human doing one part and the AI another, but rather of interacting in a tightly coupled way that augments both. We don’t yet have strong, real-world examples of AI systems that truly augment human intelligence in a way that creates something new through interaction. That said, there are glimpses of cognitive extension. For instance, when students develop a solution to a complex problem and then engage in a kind of dialogue with the AI (“Here’s what I did, what do you think?”), you start to see a form of turn-taking that might lead to deeper learning. But overall, this remains a theoretical ideal, currently with limited empirical investigation and evidence of impact.
Wouldn’t this second concept of human-AI interaction – the internalization of AI-generated insights – require a major cultural shift? Teachers, school leaders, parents, and pedagogical staff would need to develop new or deepened competencies to engage with this kind of data or risk feeling intimidated by it.
Absolutely – that’s certainly the case. The way these reflection opportunities are designed is critical. Stakeholders should be actively involved in co-creating these systems. This ensures that their values, prior knowledge, and skills are integrated into the design, making these AI-driven reflections more meaningful and usable for them. But beyond that, involving stakeholders in this process is also a way to improve their data literacy, which I think is going to be essential moving forward.
There are valid critiques of this approach. Some argue that reducing lived experiences to data models risks oversimplifying complex human interactions. And that’s true – we should never assume that a computational model perfectly represents reality. But we already rely on models in many areas of life to help us make decisions. Some models are highly useful, and the challenge is to design better models that offer meaningful insights. The ability to critically analyze data visualizations and extract meaning from them should be considered a core competency for educators and students.
If we are forced to rethink the role of the teacher anyway, maybe this is a good moment to reconsider what it means to be a teacher. Germany offers some of the highest teacher salaries in Europe and yet faces a severe teacher shortage that will not be resolved within the next 5–10 years. How is that possible? Maybe this is an opportunity to nudge the profession in a better direction?
That’s a great point. There is indeed an opportunity here, and perhaps the current developments in generative AI and its impact on teaching could accelerate this transformation. But I also have some serious concerns – especially when I think about this on a global scale and given my work with organizations like UNESCO. A positive shift like this might be realistic for a country like Germany, but for most countries around the world, it simply is not. And that’s exactly where my biggest worry lies: If countries with strong economies can use AI to enhance education, while others rely on it primarily as a cost-cutting measure to automate poor educational practices, we risk widening the global education gap even further.
In many parts of the world, digital technologies (including AI) are seen not as tools for improving education but as ways to reduce the need for human teachers, simply because governments want to save money. In a dystopian future, we could see two very different educational realities emerge: In well-resourced countries, AI could be used thoughtfully to enhance competency-based learning and teacher mentoring. In under-resourced countries, AI could be deployed mainly for efficiency – automating direct instruction, grading essays, and managing basic content delivery – all in the name of economic savings. This would lead to an even greater divide in educational quality.
I want to wrap up with one final, broad question. If you had to identify the biggest structural barrier to achieving this kind of education system – one that truly integrates these new approaches – what would it be?
Yeah, well, that’s a big question. One of the most important things to address is the misconception that digital technologies – especially AI – are some kind of silver bullet that will solve the deep-rooted challenges of education systems. I don’t believe that any single piece of technology, including AI, will democratize education, revolutionize learning, or suddenly make education more equitable. These are systemic challenges that require human-centered, structural changes. And yet, whenever education systems face major problems, there is a tendency to lean on technology as a quick fix. If we only respond to these immediate concerns with AI-driven automation, we risk creating a dystopian scenario where students use AI to complete assignments they don’t actually engage with, and teachers use AI to grade assignments they don’t actually read.
So, if I had to name one core issue that we should be focusing on, it would be this: Using technology as a trigger for pedagogical innovation rather than as a tool for increasing efficiency within outdated systems. And unfortunately, I don’t think enough people are talking about this yet.
Thank you so much, this was really insightful!
About the Vodafone Stiftung Deutschland
The Vodafone Stiftung champions good education in an increasingly digital world – education that responds to students’ individual talents and abilities and empowers teachers to teach digitally. The foundation is committed to imparting 21st century skills and to making better use of digital opportunities in order to take teaching and learning to a new level and create greater educational equity. To this end, we support the innovative forces in education and contribute constructively to structural reforms of the education system. www.vodafone-stiftung.de