The Digital Transformation Playbook

How AI Can Transform Learning for Students with Disabilities

Kieran Gilmurray

Fear often dominates the conversation around artificial intelligence in education, but what if AI could actually bridge gaps we've struggled with for decades? Professor David Brown, a pioneering researcher in AI accessibility at Nottingham Trent University, offers a refreshing perspective that challenges mainstream anxiety.

TLDR:

  • AI for accessibility means using technology ethically with a human-centric approach that puts students, parents, carers, and teachers at the centre
  • Professor Brown's team uses AI to monitor students' emotional states during learning by analyzing facial expressions, eye gaze, and body language
  • A digital divide exists for marginalized communities, requiring prioritization of equity of access through good internet and appropriate devices
  • Young people have already embraced AI; policymakers and educators risk being left behind if they don't adopt it ethically
  • AI should augment teaching practice rather than replace teachers, handling routine tasks to free up time for creative education
  • AI should be inclusive and explainable, making life better for everyone. This will only happen if we intentionally design it to be so, putting vulnerable communities at the center of development processes.

Professor David Brown reveals how AI can positively transform education for students with disabilities when developed ethically and inclusively, challenging the fear narrative often portrayed in media.

When most people think about AI, they picture complex algorithms making incomprehensible decisions.  Brown reframes this narrative by showcasing how thoughtfully designed AI systems can transform educational experiences for students with learning disabilities, autism, and other challenges. 

His groundbreaking work includes developing systems that monitor emotional states during learning—detecting when students are engaged, bored, or frustrated—and automatically adjusting educational content to maintain optimal learning flow.

"Without engagement for these students, there will be no deep learning or meaningful outcome," Brown explains, pinpointing why this technology matters so profoundly.

The conversation explores the critical concept of explainable AI: the principle that while algorithms don't need to make the same choices humans would, people should be able to understand how those decisions are reached.

This transparency builds essential trust, particularly when working with vulnerable populations. Brown's human-centric approach prioritizes inclusive design, placing students with disabilities and their support networks at the center of development processes rather than as afterthoughts.

Perhaps most compelling is Brown's discussion of the digital divide affecting marginalized communities. Rather than developing expensive specialist equipment, his team focuses on creating accessible applications for mainstream devices, training their algorithms on appropriate data that represents the communities they serve. 

From social robotics that engage students with profound learning disabilities to VR rehabilitation systems that provide AI-guided therapy, the potential applications are transformative.

Connect with David here.

Support the show


𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

Kieran Gilmurray:

AI could work for everyone, not just the tech-savvy, but right now parents are scared, students are being left behind, and explainability often comes last. Today we break that down. Hi, I'm Kieran, an author and advisor in AI, automation and digital transformation. I work with some of the world's largest companies to help leaders understand AI's impact on people, performance and progress. My guest today is Professor David Brown, Director of the Computing and Informatics Research Centre and Head of the Interactive Systems Research Group at Nottingham Trent University. David's groundbreaking work includes using AI to support students with learning, physical and sensory impairments, virtual reality in rehabilitation, and social robotics for learners with autism. He's one of the leading voices in AI and accessibility, and today we're diving into how AI can be used for good and what needs to change to make that happen. David, very nice to meet you, and thank you for coming on today.

Dr David Brown:

And you, Kieran. Thank you for inviting me today.

Kieran Gilmurray:

My pleasure, my pleasure. Let's start simple. What is AI and why are so many people, especially parents, afraid of it today?

Dr David Brown:

You know, there are many definitions of AI, but the one we tend to use in computer science is that it's a technology which enables computers, and machines more broadly, to simulate human intelligence processes, specifically around learning, reasoning, problem solving, perception, decision making and creativity. The second part of the question is: why are parents afraid? I think it might just be the way that news and journalism report AI, skewed towards some of the potentially negative outcomes of using it. In the news it's: will people's identities be stolen and reputations damaged? Will it skew election results? I just don't think the positive aspects of AI and its potential uses are reported as much as the negative ones.

Kieran Gilmurray:

Yeah, I kind of say that all the time. A journalist friend of mine explained it. She said: Kieran, if it bleeds, it leads. In other words, all the horrible stories make it to the top of the pile. But is that reflected in the parents you meet?

Dr David Brown:

I'm a governor at a special school, and parents come in and express real fear of AI. They know I'm a researcher using a range of technologies, including AI, to support people with learning disabilities and autism, and they'll say: it all sounds great, but fundamentally I'm afraid of AI.

Kieran Gilmurray:

Gosh, we've done a poor job as an industry if that's how parents feel. Now, you work at the intersection of AI and accessibility. What does AI for disabilities really mean in practice?

Dr David Brown:

So for me, it means using AI ethically and for good. We're using AI to support students with intellectual disabilities and autism, but in a very much human-centric approach: putting those students and their parents and carers and teachers at the centre of any development process, making sure we're using it for the right reasons, developing it with those people in mind, providing training data from those people, and evaluating it with those communities for their benefit.

Kieran Gilmurray:

So what are some examples of where AI has made a real and positive difference for students with learning and physical impairments?

Dr David Brown:

So, Kieran, we've met before and we've talked about this subject before, so I'm going to talk about my experience. We've been using AI in various projects to understand students' emotional states in learning: whether students are engaged, bored or frustrated. And then, what do you do with that information? You can use camera data and look at things like facial emotional expression, eye gaze, body posture and speech, and put those together with machine learning to understand these states. Then, if a student is bored, why are they bored? Is it because the level of challenge of the learning materials is too high? If so, we reduce the level of learning challenge. Or if they're bored because it's just too easy, we automatically increase the level of learning challenge. Really, what we're trying to do is keep students in a state of educational flow. And why is that important for the students I work with? Because without engagement for those students, there will be no deep learning or meaningful outcome. So we're just trying to promote engagement and learning. Shall we carry on?
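To make that adaptation loop concrete, here is a minimal Python sketch of the rule Brown describes. The affective state would come from a multimodal classifier (not shown), and the use of recent task accuracy to decide why a student is bored is an illustrative assumption, not his team's published method.

```python
from enum import Enum

class Affect(Enum):
    ENGAGED = "engaged"
    BORED = "bored"
    FRUSTRATED = "frustrated"

def adjust_difficulty(state: Affect, level: int, recent_accuracy: float,
                      lo: int = 1, hi: int = 10) -> int:
    """Nudge the learning-challenge level to keep the student in flow.

    `state` would come from a multimodal classifier (facial expression,
    eye gaze, posture, speech); `recent_accuracy` is the share of recent
    tasks answered correctly, used here only to illustrate how the two
    causes of boredom could be told apart.
    """
    if state is Affect.FRUSTRATED:
        return max(lo, level - 1)       # challenge too high: ease off
    if state is Affect.BORED:
        if recent_accuracy >= 0.8:
            return min(hi, level + 1)   # bored because it's too easy
        return max(lo, level - 1)       # bored because it's too hard
    return level                        # engaged: leave the flow alone

# Example: a bored student who is acing tasks gets a harder level.
print(adjust_difficulty(Affect.BORED, level=4, recent_accuracy=0.9))  # 5
```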

Kieran Gilmurray:

No, I was going to say I adore that. When you're explaining it, it all makes sense; you and I have talked about this in the past. But I think people don't necessarily understand how AI makes decisions, and therefore we almost need a definition of explainable AI. Can you explain how AI has been used for vulnerable communities, so that it feels important, not risky; helpful, not harmful? Does that make sense?

Dr David Brown:

It does. Many research teams that I know, working at universities and with industry, really prioritize the use of explainable AI. The basic premise, as many researchers have said, is that an algorithm doesn't have to make the same choice as we would make as humans, but a human should at least be able to understand the process by which the algorithm made that decision. So it has to be explainable: the methods and processes behind the decisions that AI systems make should be understandable to humans.

Dr David Brown:

You've probably all heard about black box models, including neural networks, but these are really hard to understand even by the experts who create them: hundreds of hidden layers in some of the very deep neural networks, transmitting millions of signals. They make an inference or a decision, but it's hard to work backwards to understand the process by which those models reached it. Explainability is really important because it provides transparency and clarity about how decisions are made, and if you have that, you will build trust and protect rights, especially in the very vulnerable populations I work with.
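As one hedged illustration of what "explainable" can mean in practice, model-agnostic techniques such as permutation importance report how much a model's decisions rest on each input. The sketch below uses scikit-learn on synthetic data; it is a common approach, not necessarily the one Brown's team uses.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for multimodal features (gaze, posture, speech...).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much accuracy drops: a simple,
# model-agnostic read on which inputs the decision actually rests on.
imp = permutation_importance(model, X_test, y_test, n_repeats=10,
                             random_state=0)
for i, score in enumerate(imp.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```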

Kieran Gilmurray:

I suppose there are a couple of things here that come to mind. When we were at a conference recently for the ETF, the Education and Training Foundation, even the teachers in the room were worried about how AI would be used. I'll use my words, not anybody's specific words: "dumping" vulnerable students onto the AI, the way a student might be handed off to a classroom assistant, so they don't get the same educational opportunity. What you're describing there is actually AI supporting and augmenting the teacher and the student, to identify a particular need and, most importantly, to address that need. But they're scared, and there could be what I call a digital divide. For educators and for students who are already facing barriers to access, through financial reasons, impairment or whatever else, it feels like a daunting task.

Dr David Brown:

I think the first thing that you said, if I understood you correctly, was: oh, we're just going to use AI for vulnerable students, it's an easy thing to do and it will take care of a problem or an issue. I don't actually think that is the case. I think there already is a digital divide for people who are marginalized, including people with disabilities and autism. In order to have equity of access, you need really good internet and you need devices for underserved communities. But we really need to prioritize putting people with disabilities and autism at the centre of any development process. First of all, why are we developing the AI application, and for what reason? It must be to support their behavioural or educational needs rather than some other purpose. Then everything should be developed in an inclusive and human-centric way, so you must include them in the design process. And you've probably all heard about bias in training data. The training examples you give to an algorithm are really important, and quite often you might hear of an algorithm for people with autism that has actually been trained on a typically developing population.

Dr David Brown:

So all of our approaches address this. If we're developing an algorithm to understand, say, emotional dysregulation in people with autism, we train the algorithm on video samples of people with autism, and when we label those data, we do so directed by specialists in autism. Everything we do tries to address potential bias and fairness. Also, some of our algorithms are trained on facial emotional expression, but we never use the raw data. What we do is convert it into a point cloud, a facial mesh, and it's those facial meshes, which are just a series of 500-odd points representing a whole range of facial emotional expression, that we train the algorithms with. So everything we do is designed to address bias and to be safe and privacy-preserving, and that's really how we approach ethical and inclusive development of algorithms. Now, I've often worked in assistive technology projects, and quite often we use assistive technology in special education.
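Here is a hedged sketch of that privacy-preserving step: a face-mesh detector reduces each frame to a cloud of landmark points, and only the normalised points, never the raw pixels, are kept. MediaPipe's 468-point mesh is assumed here for illustration; it is in the same ballpark as the "500-odd" points Brown mentions, but his team's exact tooling isn't specified.

```python
import cv2
import mediapipe as mp
import numpy as np

def frame_to_point_cloud(bgr_frame: np.ndarray) -> np.ndarray | None:
    """Convert a video frame to a normalised facial point cloud.

    Returns a (468, 3) array of landmark coordinates, centred and
    scale-normalised, or None if no face is found. The raw image is
    discarded; only the mesh is retained as a training feature.
    """
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         max_num_faces=1) as fm:
        result = fm.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None
    pts = np.array([[p.x, p.y, p.z]
                    for p in result.multi_face_landmarks[0].landmark])
    pts -= pts.mean(axis=0)            # remove head position
    return pts / np.linalg.norm(pts)   # remove scale
```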

Dr David Brown:

But it's developed on specialist devices, and they tend to be very expensive. I'm all for assistive technology too, but I think a lot of the systems we develop for marginalized communities should be developed and designed inclusively on mainstream devices, and AI is an example of that. A classic example at the moment, in your area of speciality, is generative AI. It has great potential to support students and adults with learning disabilities in making decisions in social care, or about their health options as an adult in the community, because it's a scalable solution. You don't need huge amounts of hardware to make it work, and you can engineer it to provide prompts and responses differentiated into a range of modalities, such as speech, text and pictures, which are accessible to a wide range of people with different types of impairments.

Kieran Gilmurray:

I was at a conference in Belfast a couple of weeks ago and we talked about security by design. But one of the speakers asked: why are we not doing accessibility by design?

Dr David Brown:

But we have done, Kieran. We've been doing this for a long time.

Kieran Gilmurray:

You have, and this is the interesting thing. I sometimes worry, David, because when we were at that conference, you were, to a degree, almost having to over-explain what it is you're doing, because everybody felt threatened by something that had actually been positioned by an expert.

Dr David Brown:

I've done this at the most premier human-computer interaction conferences. Until recently, talking about having the people with learning disabilities that I work with as co-researchers and co-designers seemed a radical concept to some of the researchers at one of the premier human-computer interaction conferences, and that was around 2016. And I thought, well, I've been working here, as have many other researchers in this field, for a long time. Why does this appear to be a new concept, even for academics in the field? And there was a prominent researcher, no names mentioned, who said people with learning disabilities don't want to be co-designers or co-researchers. And I said to them: have you actually asked them? It does feel a bit like security by design or accessibility by design.

Kieran Gilmurray:

It sometimes feels like the concepts get lost and you constantly have to reinvent them every ten years. I don't know why that is either. Because, again, are we setting too high a bar? It's not to say there shouldn't be a high bar around accessibility in general; there should be. There should be a high bar around providing services for people with impairments. But at the conference itself, the questions you were being asked were of such a high bar, and so testing, that it could be very off-putting. The hurdle is just too high to get something into production, and therefore something that could benefit people may be delayed, or the shift may never actually happen.

Dr David Brown:

Quite often in standards, web standards for example, there are a range of conformance levels, as you know. I think that's what should be introduced: targets you can meet to reach different levels of conformance. And rather than using that as a way to punish people, you should reward them for reaching different levels of conformance. But the whole thing is about inclusive design, that you include everybody in your design, and I think many computer scientists do take that approach.

Kieran Gilmurray:

Yeah, I'm starting to see it.

Dr David Brown:

A lot more, yes. You know what I mean: don't design for one population or another, just design inclusively.

Kieran Gilmurray:

Yeah, and I see it. I think part of it is the education of designers. Again, as I said, we take security by design for granted. The challenge currently is that a majority, dare I say, of developers don't have an impairment and therefore don't understand what it actually means. I even noticed it myself; guilty as charged.

Kieran Gilmurray:

I was doing some slides for an education group the other day and I was gorgeously, and I use the word gorgeously, David, gorgeously challenged: look, your slides aren't accessible. And my immediate reaction was not to get defensive; it was: oh my goodness, thank you for telling me, I just didn't know. What can I do to try and make a difference?

Kieran Gilmurray:

So, David, for the people listening to this, and they will be: what do we mean by accessibility by design? What could they be doing today to make an impact, so that this becomes common knowledge and more common practice, and so that there is less of a digital divide?

Dr David Brown:

Yeah, so there's a whole range of standards out there. We have a project working with students with learning difficulties in higher education on how you design university learning materials to be accessible to people with a range of physical and cognitive impairments, including things like dyslexia, dyspraxia and attention deficit hyperactivity disorder. There are a number of standards out there, and they're very simple; I think we should follow this podcast with some of those links. A lot of accessibility by design was accelerated during lockdown, when we had to deliver digital learning materials online. So real moves are being made, and there are very clear guidelines out there about how you design accessibly.
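For listeners who want something actionable today, one of the simplest machine-checkable guidelines, WCAG success criterion 1.1.1 (text alternatives for images), can be audited in a few lines of Python. This is a minimal sketch, not a full accessibility audit.

```python
from bs4 import BeautifulSoup

def images_missing_alt(html: str) -> list[str]:
    """Return <img> tags that have no meaningful alt text.

    Checks only WCAG 1.1.1 (non-text content); full conformance needs
    far more: contrast, headings, keyboard access, captions, and so on.
    """
    soup = BeautifulSoup(html, "html.parser")
    return [str(img) for img in soup.find_all("img")
            if not (img.get("alt") or "").strip()]

page = '<img src="chart.png"><img src="logo.png" alt="NTU logo">'
print(images_missing_alt(page))  # ['<img src="chart.png"/>']
```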

Kieran Gilmurray:

Absolutely, an absolute pleasure. A couple of other things: what role do you see social robotics playing in inclusive education?

Dr David Brown:

So we've had a range of projects. In a couple of European projects we used social robots like the NAO robot to understand traits of autism. I'm not really into diagnosis, because it usually assumes a more medical model, but we looked at how students do turn-taking with robots, and we found that people with autism interact with robots in a significantly different way from a typically developing population. So you might use it in diagnosis, but it's really trait identification. The other strand is how robots can engage students and improve task completion. Engagement is really important for the students I work with: without engagement there is no deep learning, and it's the single best predictor of deep learning. And we found that, working on learning outcomes such as communication and sequencing skills for students with PMLD, profound and multiple learning disabilities, it's at least as effective as traditional classroom instruction. Why is that important? Because if you're working in a special school, you don't just need one approach to engage your students, you need many. I think the only drawback with robotics is the expense.

Kieran Gilmurray:

Really? Yeah, well, hopefully technology gets cheaper all the time.

Dr David Brown:

All the time. But I still think it's one of the barriers to robots in education.

Kieran Gilmurray:

Yeah. I don't think people realise, and not just in terms of disabilities: those of us who have computer technology, cameras, internet access and whatever else can, in the nicest sense, take all of this for granted. But I worked for a social housing charity for a number of years, supporting and volunteering, and even without impairments, people haven't had the same opportunities as you or I. They didn't get the opportunity to learn to read or write. Maybe there are dependency issues, maybe affordability issues, or a whole host of needs that deny access. And that is part of my worry going forward. I hear the phrases every day: AI is making intelligence free, and whatever else. Well, it makes it free if you've got access to all of the tooling and all the kit.

Kieran Gilmurray:

We live in interesting times. It's not that you should feel guilty about what we have; it's about how we actually do something to bridge that gap. You also work in VR and rehabilitation. How does that tie into AI, and how do those tools support people with intellectual disabilities?

Dr David Brown:

So, Kieran, I think we've been using virtual reality in rehabilitation for probably as long as I've been doing any kind of research. The first project is around stroke rehabilitation. After a stroke, people have to practise a high number of quite boring repetitions, such as in upper limb rehabilitation: things like supination, pronation and reach-to-grasp. So we gamified that process and put it in a virtual environment. And then we thought, well, if people are doing this at home independently, without a therapist, they could be doing the exercise in the wrong way, performing what are known as compensatory movements, which can cause further damage. So we use AI and computer vision techniques to work out if they're doing the movement correctly and, if they're not, to give them guidance to do it correctly.
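A minimal sketch of the kind of check involved, under stated assumptions: given shoulder and hip landmarks from any pose estimator (for example MediaPipe Pose), flag a reach as compensatory when the trunk leans past a threshold instead of the arm doing the work. The 15-degree threshold and the trunk-lean heuristic are illustrative, not Brown's published criteria.

```python
import numpy as np

def trunk_lean_deg(shoulder_mid, hip_mid) -> float:
    """Angle of the hip-to-shoulder line from vertical, in degrees.

    Coordinates are 2D image points; image y grows downward, so
    'up' is the -y direction.
    """
    v = np.asarray(shoulder_mid, float) - np.asarray(hip_mid, float)
    up = np.array([0.0, -1.0])
    cos = v @ up / np.linalg.norm(v)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def is_compensatory(shoulder_mid, hip_mid, threshold_deg: float = 15.0) -> bool:
    """Flag a reach where the trunk, not the arm, produces the movement."""
    return trunk_lean_deg(shoulder_mid, hip_mid) > threshold_deg

# Example: landmark midpoints in pixel coordinates from a pose estimator.
print(is_compensatory([330, 180], [300, 400]))  # ~7.8 degree lean -> False
```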

Kieran Gilmurray:

That's interesting; I met a German developer a little while ago who was doing the same thing. He's a physiotherapist. What he was discovering is that his patients would turn up to get their physio treatment, get mended, for want of a better phrase, be given an exercise to do at home, and then come back having done the exercise wrong and harmed themselves. Now, you would think that was a great source of income, but he was correctly and ethically positioned and said: no, I want to do something about this. So he put it on a mobile phone, so when you're doing your exercise there's a digital twin watching you, giving you feedback and telling you to do things differently. So, if we leave with one last thing: what would you say, David, to education leaders, policymakers, parents or listeners who are on the fence about AI, who don't understand this area, don't understand accessibility by design or anything else? What would you like to say to them? What would you like them to do?

Dr David Brown:

Well, those people are usually older, Kieran, I would say: people like us, policymakers, academics, writers like yourself. Young people have already adopted AI; they're pushing the envelope of it. The only people who are going to be left behind are people like us, if we don't adopt it too. But we need to do so in an ethical and trustworthy way: work with your communities to make sure we develop trustworthy, unbiased approaches to AI. The only people who are going to be left behind will be those policymakers if they don't get in there and think: how can we use AI ethically and for good?

Dr David Brown:

See, I work in a university, and I think this is a tremendous opportunity to improve your offering to students and your processes. I don't mean getting rid of teachers; I mean augmenting their practice, taking on some of the boring things like marking and assessment, to free up time for the really creative work of teachers in higher education. And it will give you a commercial advantage too.

Kieran Gilmurray:

Not one but many things, David, thank you. If we summarise anything today, it's your reminder that AI isn't just a tool, it's a responsibility, from explainable systems to inclusive robotics, designed intentionally. I love your work and the way you describe it, because there is fear amongst educators and parents as well.

Dr David Brown:

And that's really the imperative for why we should move forward with our campaign, which you're part of, Kieran: to develop inclusive, human-centric, trustworthy and unbiased approaches to AI. And we must do it now.

Kieran Gilmurray:

Well, we have a call to action: AI should be inclusive, it should be explainable, it should make life better, but only if we make it so. Look, David, for people who want to learn a little bit more about your work and explore inclusive AI, let's give them the links they need at the end of the podcast, and we'll get some video together and get this out there. If we all do a little bit, imagine 8 billion people in the world doing a little bit each; we could certainly make a difference.

Dr David Brown:

Fantastic, Kieran. Thank you for the opportunity today.

Kieran Gilmurray:

My pleasure. I look forward to seeing you very shortly. See you soon, sir. Cheers.
