
Evaluating Accessibility: Meeting Key Challenges
Online Research Symposium November 2023

An EU Project

Introduction

Researchers, practitioners, and users with disabilities participated in an international online symposium exploring best practices and challenges in accessibility evaluation and monitoring.

This online symposium took place on 16 November 2023 and brought together researchers, academics, industry, government, and people with disabilities to explore practices and challenges involved in monitoring and evaluating digital accessibility. The symposium aimed to discuss current challenges and opportunities in three main areas: digital accessibility training, mobile accessibility, and artificial intelligence.

Videos from the sessions will soon be available.

Session 1: Digital Accessibility Training and Education

Transcript of Session 1: Digital Accessibility Training and Education

CARLOS DUARTE: And so now let's move to the first session. This is a session, as I mentioned, on Digital Accessibility Training and Education. It's going to be moderated by Jade from The Open University in the UK. And our two panelists will be Sarah from the University of Southampton, also in the UK, and Audrey from Access42 in France. So Jade, you can begin our first session. Thank you.

JADE MATOS CAREW: Thanks, Carlos. Hi, everyone. My name's Jade. I am Head of Digital Accessibility and Usability at The Open University. It's a real privilege to be here today moderating this session for you. So I am joined by two wonderful experts in the field of digital accessibility training and education. So we've got Sarah Lewthwaite, who is a Senior Research Fellow based at the University of Southampton. We've got Audrey Maniez, who is Director at Access42, and I will let them introduce themselves properly when we get going.

When you registered for today, you sent in some questions, and we had a look at them. Some of them were quite wide and varied. And because we have the experts in the room with us today, we wanted to make sure that the focus was really exclusively on digital accessibility training and education. So apologies if we don't answer your specific question. If you think of any questions along the way, please ask them in the Q&A. And you can also comment in the chat as well. We are really open to welcoming questions and comments. It's important to know what's happening out in the wider world, so we can react to what you are also doing in this field.

I think that's all I need to say. I am going to hand over, maybe I'll hand over to Sarah. Do you want to introduce yourself?

SARAH LEWTHWAITE: Hello, everybody. Welcome to our session. Thank you for joining us. My name is Sarah Lewthwaite. I am here at the University of Southampton, where I lead a project called Teaching Accessibility in the Digital Skillset, and we've been researching the teaching of accessibility: the content, the approaches, the strategies and tactics that educators use in the workplace and also in academia. I am based at the Centre for Research in Inclusion. And I was also previously a member of the Web Accessibility Task Force for Curricula Development, part of the Education and Outreach Working Group of the Web Accessibility Initiative. I will pass over to Audrey.

AUDREY MANIEZ: Hey, I'm Audrey. I am a digital accessibility specialist at Access42. We are a company specialized in digital accessibility, based in France, so English is not my native language, sorry. I have been doing accessibility for more than 12 years now. I do audits, I deliver training, et cetera. And I also manage the training center at Access42, where we offer professional training courses on accessibility.

JADE MATOS CAREW: Thank you. So we've got kind of a broad agenda within education and training, so we are going to be looking at things like resources, training needs in the workplace, and how we can embed accessibility in the curricula. But we thought we might kick off with having a look at some resources, how to get started with this. It's quite a complex area with lots of different themes throughout it. So who wants to take that first? Audrey or Sarah, how do we get started? What resources are we looking for when we are getting started with digital accessibility?

SARAH LEWTHWAITE: Well, I suppose the first thing to say would be that the W3C has some really interesting resources available. And Jade, you might want to talk to some of that. Also, we've obviously got some great repositories of videos and resources. I know the University of Colorado Boulder has a huge collection that they've been building, and Teach Access have also been collecting resources by teachers about how they've been using Teach Access funds to develop accessibility teaching in different classrooms. Audrey, would you like to comment?

AUDREY MANIEZ: Just to say that the resources you cited are really great, and it's important to identify the authors of resources. That's a really great point. Resources created by the W3C are really good, because you can find a lot of articles on the Web, and some of them, well, a lot of them, give false, wrong, or outdated information. So when you find something on the Web about accessibility, you have to be really careful about who wrote it and when it was written. That's really, really important. Free resources are great, but be careful.

SARAH LEWTHWAITE: I think with that, when we've been doing our research with expert teachers, particularly in industry but also in academia, there's that question of where do you send your learners when they want to continue their learning journey? So if you are new to teaching accessibility or if you have established skills but you are aware that you continue to develop them, do reflect on how you continue to develop your skills, where you go for good knowledge, because that information about how we learn accessibility is really important to cascade to your colleagues, to your teams. Because obviously, we are aware this is not a static field. This is an area where we have to keep developing our own learning, even as we are teaching, even as we are researching.

JADE MATOS CAREW: It's a really good point. At the Open University, my team looks after a lot of the training and advocacy for digital accessibility. And when we signpost to external resources, we go through a vetting process to make sure that it's relevant and meaningful for our staff. And every resource that we post, we make sure that it's dated. And we routinely go through and check those dates because links break and things happen, things get outdated. So yeah, it's a real exercise in looking after that as a complete resource.

I am slowly putting links in the chat, by the way, for everybody. And we've just had a question come in: What criteria do you have in mind when deciding which resources to use and where to look? How can an expert or even a non-expert decide?

AUDREY MANIEZ: That's difficult. That's difficult because you have to know a little about the industry to know which authors, which companies, which organizations are relevant and reliable. You have to know a little bit about them. The community can help to identify this. We have a brilliant mailing list where you can post questions, and the accessibility community will answer you. So I don't really have criteria, but it's important to know who is who in the accessibility field, I think.

JADE MATOS CAREW: I think with the WAI resources as well, because of the way in which they are designed and made, you know, by a panel of people who come together and work really hard over the small words and details in all of those resources, you know that you can trust them. They've been through a really rigorous editing process. So personally, whenever I have to direct someone to a resource, that's always top of my list. And there's a whole range of resources there for lots of different needs, including lots of short resources for simple advocacy as well. Sarah, do you have any comments on that last question?

SARAH LEWTHWAITE: No, except to say, obviously, this is an ongoing issue in a lot of fields, you know, the quality of online resources is a huge issue for anyone teaching in higher education. It's a question of how you assess and critique and critically engage with online resources. But as Audrey and Jade have mentioned, there are awesome, very solid, very well developed resources, good places to start from in terms of the field. But I do realize this is a particular challenge in accessibility because it is fast moving, and also because so much education takes place outside of formal learning environments. So you know, you will be learning on the job, learning by doing. There will be informal training, training organized by different organizations, conferences. There are a lot of places to learn. And traditionally, those have been where the majority of learning takes place. So it is a recognized challenge, but it is worth investing time and thought into.

JADE MATOS CAREW: Well, Sarah, one of your recent research papers, which I'll post the link to in the Chat, was looking at workplace approaches to digital accessibility education. And you raised the topic of having a foundational knowledge of accessibility or a baseline knowledge. I was wondering if you could talk to us about what that includes and who decides what it includes.

SARAH LEWTHWAITE: That's a big question. Excuse me. So yes, as I say, with my project, we've been trying to get close to practice but to look across a variety of different locations, from workplace to higher education, to understand what characterizes accessibility as a field, as an educational field. With that, I know when we looked at some of the questions submitted to this session, people wanted those kinds of tricks and tools and tips, and that's why we've kind of started in this resource place. But some of the questions that you will have to ask yourself as an educator are quite fundamental, in the sense that different contexts will make different demands, and different learners will require different resources. And there's a core of central knowledge that you need to establish. We wrote about this foundational approach because we realized that, particularly in the workplace, a lot of effort is put into bringing people onto the same page. So we recognize that accessibility is a shared endeavor. It's not located within one role. It shouldn't be about one accessibility expert serving an entire organization. A lot of people need to know what accessibility is so they can collaborate and recognize it as a shared responsibility, a professional responsibility.

So there are lots of dimensions to that, and when you are coming to this as a trainer or somebody trying to build capacity in your organization, there are a lot of facets that come into play. For example, understanding what prior learning you have, where your learners are coming from, what perspectives they bring. Knowing where there might be misconceptions can also be vitally important to helping people on that journey. And you'll need to do some work defining what the core knowledge is for your organization, what your colleagues really need to know, what the essential points are. And within that, there can be quite complex, different worlds of knowledge that you have to bring together. So, for example, we are here talking about Web standards, Web accessibility standards, but there's also a piece about disability awareness, which can be more conceptual: how people understand "normal" (I am doing inverted commas), what their average user is, and trying to break that down and unlearn some of the assumptions people bring into organizations, sometimes from their educational pathway to date. So there's this kind of conceptual piece about disability, and there's the technical piece. But then in between, there's also a lot of knowledge that people need to gain around how to do accessibility in the field, which can be to do with decision making, process, and often collaboration between and across a workflow. That, then, introduces issues about whether you are bringing different roles together to learn about accessibility and those fundamentals, and how and when you should specialize. I have talked quite a lot there, so... I'll hand over to Audrey because I'd love to know from her side what the view is on that.

AUDREY MANIEZ: Okay. Thank you, Sarah. Great talk. So yeah, on the core knowledge for everybody in an organization to share on accessibility: as you said, awareness about disability, and a deconstruction of assumptions about what disabled people can do. I think there's also knowledge about the political aspects of accessibility, really why we are doing this. Then, on the more technical side, it's also important to know users' needs. That's really a key point in accessibility for all jobs, whether designer or developer or whatever: understand why we are doing things, what kind of problem, what kind of issue we are trying to resolve. That's really the key point for everybody. Understand how users navigate on the Web, how they do or do not get access to information. I think that's the base of knowledge everybody should share.

JADE MATOS CAREW: Does that also include compliance and legislation? This is one of the questions that we had in from a participant. So what role does that play in foundational training?

AUDREY MANIEZ: Yeah, legislation can be complex. Knowing that accessibility is required in some countries is one thing, but really knowing the legislation can be complex. For example, in France, it is becoming really complex to follow all the news about the legislation. So yes, it's important. It's important.

SARAH LEWTHWAITE: I think in teaching, sometimes standards have quite a conflicted role. So some of our experts talked about how sometimes they won't use standards. They'll talk more to why this work is important and focus on the user, and use that as the kind of motivating principle for learners. Others talked about compliance in terms of finding ways to introduce learners to standards without kind of dropping them in at the deep end, to use a metaphor, which means, you know, sometimes using resources which translate standards into a more accessible format for people who are new to the field. Or maybe starting in a small place, taking parts of WCAG and exploring what they mean, to give people that entry route where they feel they can try things out, that they are applying their learning, and that they can then move on to look at the broader standards picture themselves, feeling they've already entered and tried and practiced.

But there is also an important conceptual dynamic, which is, I think, that standards are so important to Web accessibility, but how we present them is also important. So often our experts talk about presenting them as a floor, not a ceiling, in the sense that here's what we are going to try and do, and then you want to go and try and get beyond that. Not that this is what you are aiming for and then you are done. So always encourage developers, designers, content authors to use these structures of Web standards, but also to scrutinize what they are doing. So you are not just learning and designing to the standard. You are always critiquing your own practice, examining what you are doing, why you are doing it, how you are doing it, to keep that balance between the structure versus the creative piece, because creativity is so important in our field. And it's recognizing that Web standards can be part of that and enable it; that they don't close down creativity. Because we know creative learning is so important in terms of getting people motivated and enjoying their work.

JADE MATOS CAREW: In my experience, different types of learners react to standards and guidelines in different ways. So for some people, especially if they don't have, maybe, a technical role, they can switch off if you present them with an overwhelming set of technical standards. So in my context, we have a lot of focus on practical demonstrations and examples rather than going straight to the guidelines. Do you think that following guidelines and standards helps people keep up with a changing landscape of digital accessibility? So this is another question which has come in. How can we keep up with accessibility as it evolves, when it sometimes changes quite quickly?

SARAH LEWTHWAITE: I am going to hand to Audrey. How do you do this in practice?

JADE MATOS CAREW: Okay

AUDREY MANIEZ: How can we evolve even if we follow the standards, that's the question. As you say, the standards are not obstacles. They are just things we have to do to allow people to access Web content. And that's where it's important to know the users' needs. I come back to that because if we know which goal we are trying to reach, we can imagine lots of solutions. That's why we have to know what our users need and how they navigate, because that allows people to create new solutions and still meet the success criteria. It's really important to begin the thinking with the user. You begin with the user, and then you can create a solution, I think.

SARAH LEWTHWAITE: I think that's so important because, you know, the accessibility standards are abstracted knowledge about what disabled people do online, how they use the Web. I think it's great that we've got so many resources from the W3C that show where these are coming from and why they exist, in terms of helping close that gap with the standards. But yes, if you want to stay ahead of the game, it's always working with the people whose knowledge is the foundation for accessibility Web standards. So it's talking to your users, all your users, recognizing the breadth of your users. And it's also hiring for your teams and making sure that your teams reflect the world as it is, which means including disability, including disabled people, and, excuse me, recognizing what we have to bring ourselves to that conversation also.

JADE MATOS CAREW: This links to another question that we've had in. Thank you for all of these questions. Please do keep them coming. And it's about AI. The question says this might be a bit more relevant for later, but this is really forward-thinking stuff. How are we dealing with all of these future evolutions, things like AI coming into different areas of accessibility? And there's even a question there about whether the accessibility requirements will become redundant with AI doing most of the work in building websites. Maybe there won't be a need for training in the future. What do you think? Audrey, what are you seeing in the field?

AUDREY MANIEZ: Oh, that's a really complex question. I don't think AI will solve every accessibility problem. Most accessibility issues are based on understanding context. Even today, automated testing covers only a really, really small piece of our accessibility requirements. Well, I am not sure AI will help much more to detect or fix issues. It can help in other areas of accessibility, but to fix issues, of that I am not sure. Well, really, I am not a specialist in AI.

SARAH LEWTHWAITE: Yeah, I am sure this will come up for discussion later, and I think there will be some really interesting answers in that session. But coming from a background in disability research and disability studies, critical disability studies, the concern I have is that data, be it social statistics or statistical views of populations driven by data, tends to be highly normative. Where data is normative and these ideas of average arise, people who are positioned on the edge are often missing and often further marginalized. So I have major concerns over AI in terms of what it deems "normal," be that websites (do we think the majority of websites are accessible?) or what these tools are able to do in view of, as Audrey says, the changing and contextual nature of accessibility.

I think there are some really interesting discussions happening, and there are some good people looking at how you work with data so it is more inclusive. Jutta Treviranus talks about the bell curve, and the need to take a lawnmower to the bell curve so that you are always including and weighting data to take care of everybody, basically. But that may be a slightly different subject from this automation-of-testing dynamic. I just think people are so often looking for ways to cut real people out of the system and the process, and I think it's really important to recognize the value of authentic experience of our products from our users.

JADE MATOS CAREW: Are you seeing links between accessibility and AI or XR, AR, VR, and is that being brought into training and education for accessibility? New, evolving areas, are they being brought into the curricula, do you think?

SARAH LEWTHWAITE: I don't want to sound downbeat, but I think at the moment there are some tussles happening in the computer science curriculum, which sometimes mean AI is coming in and pushing out other areas. So some of the educators that we've interviewed talked about the need to harness new fields and make sure accessibility is part of that from the get-go. So yeah, we are seeing AI and accessibility courses starting. We are seeing people putting AI at the heart of XR and VR and also robotics. And there are some really exciting things. Whether those are coming from the mainstream of those disciplines or whether it's accessibility people kind of busting in to make things happen is, I think, less clear. So I can't speak to that overarching picture. But it's really important to keep accessibility in these innovative spaces, because standards and so on tend to come a step behind, just by virtue of how they are made and created.

JADE MATOS CAREW: How can we keep up with that? There's a question in the Chat: How can we cope with the fact that advice from yesterday may no longer be relevant today because of evolution in technology?

SARAH LEWTHWAITE: I would say, as we said before, it's that perennial problem. It's an ongoing issue. And where you can, it's maintaining that user research, that accessibility research with real people that's going to help you bridge that gap. So keep that value in your developmental practice, in your learning practice, and then look at how you cascade that knowledge through your organizations. Because there is an organizational change piece here, I think, that we've not talked about yet. And it's a tension for me. My research is very much about what teachers do, what educators do in that kind of local space of the classroom. But there are also the sociocultural dynamics that push and pull on what's possible in education, in the industry. And there is that need to think about the organizational piece. And I know conversations about accessibility maturity and some of these overarching issues are really important, too.

JADE MATOS CAREW: Well, let's think about that change management piece. It's so relevant to accessibility and how we handle it in the workplace. Audrey, I think you have a lot of experience in terms of training in workplace situations. So, in your experience, how is accessibility incorporated into a professional development context?

AUDREY MANIEZ: So yeah, we do a lot of training. We train a lot of people who work in organizations, a lot of developers and auditors. And it's clear that, as you say, the organizations that train their employees already have a political strategy for accessibility. That is the first thing that is needed in an organization, private or public. If the company has no accessibility policy, there's no training for the employees. So it's really a global, political subject in companies and in public organizations, so that people can access training. So yes, we need a clear strategy in organizations so people can be trained in accessibility. It's not usually an individual initiative that leads to training. That's really important. So in the workplace, that's what I can say.

JADE MATOS CAREW: Well, if we can pick that training apart a little bit. Something that interests me in particular is moving away from providing just guidance, and just one-off or ad hoc training that people perhaps go through in a very passive way, where they don't really engage with the materials. So in your experience, for both of you, and Sarah as well, you are interested in the pedagogy behind how we can actually make people do this in reality. What does the training look like? How can we make it effective and meaningful?

AUDREY MANIEZ: Accessibility jobs really are experience-based work. Even if you have followed a training course for two or five days, like we have, for example, at Access42 for designers or developers, after that it's really important to have time to train on real projects. For example, at Access42, we train the auditors we recruit for our own needs, and after the training it can take four to six months for people to be really independent in their work. Really. So you have the knowledge, and then you have to practice to be really effective, to be good at what you do. And you will be better if you are incorporated into a community.

JADE MATOS CAREW: Mm hmm.

AUDREY MANIEZ: That's really important, to have others to share with, to ask questions, to share practices. In an organization, that's really important, yeah, community.

JADE MATOS CAREW: What do those communities look like? So for example, at the Open University, we have internal social media channels. We provide drop-ins and lots of opportunities for staff to network. Different things like that. What kinds of things do you experience?

AUDREY MANIEZ: In our company, for example, there is truly a place to share every day. We do audits every day, so we share every day what we find in accessibility. We have a chat where we talk to each other, to ask questions, to find help, to fix some issues, et cetera. And what is great is that the chat is like a knowledge base. If we recruit a new person, they can read everything we discussed for a year, two years, and that's our knowledge base. That's truly our documentation, our own documentation. That's really, really interesting. And it's the same with the mailing lists, if I can mention WebAIM; those are really rich resources that you can search in. Really, really great documentation too. So yeah, community sharing, that's what we do. And once a month, we all have a video meeting together to share problems and to harmonize the way we work, the way we say things, the way we present things to developers, et cetera. That's what we do.

SARAH LEWTHWAITE: And if I can add, when we've spoken to, basically, government experts about how they build those large scale communities, so if you do have these Q&A spaces, questioning spaces for people to trade information and, you know, knowledge that's really specific to your organization, we've seen strategies by the managers of those lists where the experts will purposefully hold back slightly when a question is raised so that there's an opportunity for people in the community to start to express their expertise and practice and bring to the table, maybe for the first time, their knowledge.

And then you've still got that kind of safety net of the experts on the list, ready to step in if there are any accidents or if anything is slightly incorrect. So if you are building these sorts of online spaces where you are sharing information, think about ways to help bring the new people on board and let them step into their expertise and express what they know, to help build that expert capacity more broadly. So it's not always falling to the champions of accessibility to be the one person who knows everything. Because we know that model is precarious: if that person leaves, you lose the expertise. So much of this is about broadening the knowledge base. And I know many people talk about the importance of everybody knowing a little bit about accessibility. It's from this point that we can then build up the expertise.

JADE MATOS CAREW: We have a really good system at the OU where, if we get asked a question, the first stage is to direct people to our internal social media to ask there. And also, Audrey, as you were saying, to search through what's happened before and whether it's been asked in the past. That's a really, really useful tool to have. But it also encourages other people who aren't accessibility champions to jump in, answer, and share their expertise. And then if we still can't get the question answered in that space, that's when members of our team will come in and try to give an answer from an expert perspective. Thank you. Sarah, I want to ask you about skills. We've spoken a lot about informal ways of sharing knowledge, but what about the formal ways? What kind of skills do people need to teach accessibility effectively?

SARAH LEWTHWAITE: The skills to teach accessibility?

JADE MATOS CAREW: Mm hmm.

SARAH LEWTHWAITE: Well, I think one of the reasons I started my project was because I was aware that sometimes, particularly in academia, where maybe there's more teaching knowledge, more teaching experience, there isn't necessarily the accessibility expertise that you see in industry. And likewise, in industry, a lot of teaching knowledge is quite hard won, by doing the teaching and gaining the knowledge that way. So I was interested in how the pedagogic knowledge and the content knowledge, the knowledge about accessibility, are fused together: what the teaching of accessibility specifically requires, and how to build that knowledge up through research and cross-case research. So I would say, if you are on this call: there's a lot of open-access research about the teaching of accessibility, which I think often isn't where we first go when we are designing teaching, right? There are shared curricula. There are research papers which you can draw on. We wanted to do cross-case research so we could look at a variety of contexts and what's important in those contexts. And of course, it does vary depending on who your learners are and what you are trying to do. So some of the questions that I would put to people on the call are about establishing what your learners need to know about accessibility: what is essential, what are your learning objectives? Try to set those and be clear with yourself so that you can then put them into action. And it's difficult, because I also recognize there's a lot of expertise in this room that we can't see. So, you know, it's recognizing that.

Alongside these accessibility communities we've talked about, I think there's a real need for teaching-accessibility communities, places for teachers and trainers to share what they do and start reflecting on what they do, naming it. So don't be afraid of pedagogic language, and start to think about, you know, the reflexive practitioner, thinking about learning by doing rather than learning through trial and error. When you are getting your teams to do projects, as Audrey described, when people are practicing in the field or in simulated situations, or if you are teaching a graduate program and you are running project-based learning with your learners, there are a range of things that you can put in place around that team to help them, to support them with resources, to check in on what skills that team might need.

But I suppose I am talking around a range of issues. I think I want to come back to that key point around disability awareness: understanding the users, understanding disability, thinking again about ourselves, really. That awareness piece is so fundamental. And then with that, there's the process piece about the doing of accessibility: how are you going to give your learners opportunities to put knowledge into practice? And then also the technical piece: there will be a certain range of techniques, coding, et cetera, that is also part of that kind of learning by doing. So it's bringing those three together, but recognizing that they are quite different worlds of knowledge that you have to synthesize. So you will have learners who are much happier coding, and you will have other learners who are much happier getting into the usability piece, trying to understand what people want and need and thinking about complex issues. Overall, accessibility does deal with uncertain knowledge. You know, we have to work hard to know what to do in any given situation. There aren't always straight answers. And Web standards take us so far, but they can't answer all the questions we have about what our users need.

Now, for some learners, that's deeply uncomfortable. They want to know what to do in any given situation. And it's a real expert competency: dealing with uncertainty is one of those markers of expert knowledge in the vast majority of fields. But for us in accessibility, it's kind of dead center. Do read the papers that have been shared in the chat, and I'd love to hear your thoughts on that as well, because obviously this is a huge field. I am not saying we've answered anywhere near all the questions. We are just getting started looking at this piece. But recognizing that uncertain knowledge, you know, working between compliance versus the realities of complex everyday experience, is a challenging space. And it involves a range of expert competencies that you need to grow. And for some, it will be uncomfortable. So part of it is often bringing that to examples that are as clear as possible.

So often, when we've spoken to people in organizations like Audrey's: if you are going into an organization, you want to show them their own websites and how they might work or might not work. When you are talking about disabled people, you might want to name members of your team and say, you know, is this going to work for Jeff? Is this going to work for me? You know, always trying to bring it back to something concrete, something real, so it's no longer abstract and somewhere else. Because the reality is much closer. It's in everybody's world.

JADE MATOS CAREW: We've had success with that at the OU when we developed our curricula for developer training, using the WAI curricula modules as the foundation and adding really relevant, contextual case studies, examples, and demos. So we've had success with that and with making it really relevant to our audience. Another thing we had success with was accountability partnering: pairing up people from around the OU from different staff groups, and having a really practical example of using guidance and training to make real fixes in their own documents or designs. So that's a really useful thing that we've come across. Where was I going next? There's been a question in the chat, and it's a massive one: How can we integrate accessibility into the university curriculum, taking into account the various roles within the accessibility field and their associated processes? So, does anybody want to take that? Sarah, I think that's probably another one for you.

SARAH LEWTHWAITE: All I can say is it's going to take a big effort. Because, I mean, I've drawn a distinction between academia and the workplace, but I recognize that the university is a workplace, and quite a complicated one. So I think it has to be an effort at a number of different levels that runs across a range of different groups. I mean, really, I should throw it back to you, Jade, because I know the Open University is world leading in so much of this. But I think there's a lot of work that's been done around accessibility maturity for higher education. There are really great conferences and networks; I am thinking of HighEdWeb and their Accessibility Summit, which I think is annual. Obviously, there are lots of lists; I think you've posted the mailing list on assistive technologies, which particularly serves learning developer communities. And there are, obviously, disability support dynamics as well.

I think the challenge for higher education at the moment is that accessibility is largely thought of as being for students. And it doesn't recognize our disabled staff at the level it should. And it doesn't recognize, in that respect, the kinds of platforms that staff have to use, that researchers have to use, and that it's about more than just serving our disabled students. It's about the entire university estate. So for me, it's an accessibility maturity question, and I know there are really great people working in this space. I know AbilityNet have done a lot of really good work on accessibility maturity and higher education. So if you are looking at that piece, that's where I would direct you to go. But I think it's always a work in progress. I also think that, in Europe in particular, the new regs on mobile accessibility and the Web mean that our universities are now being audited for the first time on their public-facing accessibility. And that's a really teachable moment for the sector, in terms of universities trying to turn their ships around. We deal with a lot of legacy systems in particular, which are troublesome. But in my experience, in the UK certainly, it's becoming more positive, in that beyond just serving our students and recognizing our duties and commitments to them, there's a growing understanding of the responsibility that we have to serve wider publics. And I think there's more mobilizing of political dimensions amongst staff to fully recognize the breadth and diversity of our staff groups.

As I say that, I know disability can be at the bottom of the list within even equality, diversity, and inclusion agendas. But I do want to be hopeful about us trying to make changes where we can and using these opportunities to put disability and accessibility front and center. Over to you, Jade. Now tell us what you are doing at the OU.

JADE MATOS CAREW: I was going to throw it to Audrey, actually, and ask about barriers to embedding accessibility into your workplace curriculums and how you deal with staff training. So, barriers to embedding accessibility into your training and your curriculum.

AUDREY MANIEZ: In training, you mean in university or in general?

JADE MATOS CAREW: In general workplace. So in your practical experience.

AUDREY MANIEZ: Okay. The first barrier is always the same: the political barrier. If the management wants people to be trained, people will be trained. Then we can be faced with the problem of the accessibility of the training material itself: the tools we are using to teach, which is a problem when you have disabled students to train, and the content we deliver to people. Those are the main barriers to accessibility training. That's mainly it. And I liked it when Sarah said that in training, students want an answer to each problem. That's a barrier in training too, because they want a clear answer to each problem they will be faced with in the real world, and we can't give them that. But as for teaching barriers, that's all I can say: the tools are really a big problem, because the learning tools are really not accessible at all. They don't allow us to give accessible content to our students, and that's a big problem.

JADE MATOS CAREW: So in what ways? You mentioned the technical requirements of tools. What kind of barriers do you see there?

AUDREY MANIEZ: The technical requirements? Yeah, the tools are supposed to be WCAG compliant, and they are not WCAG compliant. Tools like LMSs are really not taking accessibility into their roadmaps; really few tools do that. I did a little study last year on these tools, and of the more than 30 LMS-type tools we found, just one listed an accessibility report. It's really few. So those tools can't give accessible interfaces to students, and that's a big problem, above all in universities.

JADE MATOS CAREW: Okay. Thank you. I am just trying to keep up with the chat here and check to see if there are any questions. Sorry. Just bear with me for a moment. Sarah, sorry, did you mention that you are leaving, or are you staying, before I direct any questions?

SARAH LEWTHWAITE: I am afraid so. I am aware this session was running until 2:00, sorry, 2:00 local time. I appreciate it's different in Europe. So I only have a couple more minutes.

JADE MATOS CAREW: Okay. I am just wondering, there was one other question here for a university context. In a university, you likely have the opportunity to integrate accessibility into other disciplines so engineering, political science, lots of different things. Do we have any examples of how that's happened, where that's happened, any success stories?

SARAH LEWTHWAITE: I mean, I think it is happening. I am not sure there's visibility. And I think one of the challenges as a field is just the level of knowledge of where and how accessibility is being taught. So I am aware Kristen Shinohara at RIT did that survey of colleagues looking at the teaching of accessibility across the USA, and, you know, whether it's appearing in software engineering and other fields. I think there is just a piece to be done about where accessibility is, because at the moment you only really see the specialist departments publicizing online where it's being taught. So you will see that at some institutions in the UK, say at Dundee, at the Open University, at other leading institutes, but it's difficult to know where and how it is being taught.

Of course, it is being embedded, so I would say look at what research is coming out about this, and I think there is a lot of work about teaching accessibly, which I know the University of Washington have done a lot on in a range of fields. So that is building up. But it's a difficult picture to assess, and I think if you are somebody watching this and you have a conduit to some of the professional organizations, there is that question of raising knowledge of where it's being done, how it's being done, and how it's being done well. I am sorry I don't have those answers. But in the next phase of my research, I am very interested in trying to look into that more fully. I am going to have to step away, so thank you very much, everybody.

JADE MATOS CAREW: Thank you, Sarah.

AUDREY MANIEZ: In France, we have a degree at the University of La Réunion that is focused only on accessibility, to train accessibility managers: they are trained to create an accessibility culture inside organizations and to manage accessibility requirements, audits, training, et cetera. So that's a real degree, in its first year. It's a really, really great project. Yes, we are really proud. And we can see some teaching units about accessibility in some degrees at university. Really few, but you can sometimes find the word accessibility in degree programs. So it's a little shy, but maybe that will grow in the future. And I think that's linked to the needs of the job market. As long as jobs do not require accessibility skills, universities won't train people in accessibility. I mean, we need a real demand in the job market for these skills. Organizations have to put accessibility in job requirements so it becomes a real skill to have, and I think a virtuous circle can start from that.

JADE MATOS CAREW: That's one of the ways that we are looking at this at the Open University, so making sure that accessibility is visible in everything that we do. So if we are talking about accessibility, if we are hosting a presentation, that presentation needs to be accessible in itself. And I think this is really important for students to see as well, that accessibility is being prioritized and that it's visible in learning spaces. And I suppose that means a more holistic and informal approach to advocating for accessibility, raising awareness, and building those skills. Shall we have a look at the chat and see if there's anything we haven't answered before we move away from this? Just seeing if there's anything I have missed. Thank you for all of your questions. There have been some really good ones. There was a question which says: could you recommend any courses that teach the accessibility requirements outlined in EN 301 549 in plain language? I suppose we'll direct you right to the beginning of the chat, where we posted links to some of our favorite resources, in particular the WAI resources from the W3C. Among those, there's an Accessibility Fundamentals course.

AUDREY MANIEZ: Yeah, maybe not about the EN, but yeah.

JADE MATOS CAREW: Maybe not in particular, but I suppose the reason it's beneficial, obviously, is that it references the most up-to-date materials. Do you have anything that you'd like to recommend, Audrey?

AUDREY MANIEZ: About the EN, no, I don't. Just the document itself. I have nothing else to recommend.

JADE MATOS CAREW: Are there any other final questions, perhaps, that haven't been asked, or anything that I have missed that's relevant to our conversation about education and training? Anything else? What is your position, this is a good one that's just come in, on certifications such as those from the IAAP, the International Association of Accessibility Professionals? Audrey, that's a good one for you because you'll have a lot of familiarity with this.

AUDREY MANIEZ: With the certification? I don't have the IAAP certification. My position is that it's not required to be a good professional; we have really good professionals who don't have a certification. But for some people, it gives them a structure, a point to reach, a goal. It can be great to have the certification. I don't know the content of the certification, so I can't tell if it's a good one or not. But the concept is something good, because with a certificate you have proved you can do things, and that's great. We, too, do some certification at Access42. We do training, people have to do some things, we evaluate them, and we give, or don't give, the certification. And that's great for some people for finding a job, because they can prove to their employer that they are capable of doing what is written on the certificate.

JADE MATOS CAREW: I agree. Actually, I think that it demonstrates a commitment to accessibility, a professional commitment to it. And from my experience with IAAP, the content of that exam is quite broad and wide ranging. And it really enables somebody to focus on upskilling their knowledge in that area. So I think they are, on the whole, positive.

AUDREY MANIEZ: Okay.

JADE MATOS CAREW: I think they are still quite new, though, so we've yet to see the impact fully of these certifications. I've just noticed that Sarah has dropped back in, into the meeting. Do you have anything to add on certifications in your experience?

SARAH LEWTHWAITE: I am sorry, I may have a slightly noisy background, but I think the certifications are really important, for the reasons you've raised. I think the only challenge sometimes is, and stop me, Jade, if it is too noisy in the background...

JADE MATOS CAREW: It's okay.

SARAH LEWTHWAITE: ... is the cultural dimension: different territories have slightly different requirements. And sometimes sensitizing those kinds of certifications for, say, the UK or the U.S. or India is really important. And I think that's something the IAAP are doing, and that's really great.

JADE MATOS CAREW: Agree. Right. We'll close there and hand back over to Carlos. Thank you so much, Audrey and Sarah, for your time today and for your answers. I've got a couple more links that I'd like to post in the chat just to a couple more places that we compiled before we met today. And thank you for those who have posted links in the chat also and for your questions. So handing over, Carlos.

CARLOS DUARTE: Thank you so much, Jade, and thank you also, Sarah and Audrey. It was a really, really great discussion. It's great to see that there's also a lot of positive feedback coming into the chat. We'll now have a ten-minute break, so we will be back at a quarter past the hour. Jade is still busy pushing more links into the chat. Great. If any of the panelists would like to answer some of the questions that you haven't been able to tackle live, they are still in the Q&A, so please do. And yeah, in 10 minutes, we'll be back for our second session, on mobile accessibility, so see you in a while.

Session 2: Mobile Accessibility

Transcript of Session 2: Mobile Accessibility

CARLOS DUARTE: Okay. I think we are ready to begin our second session. I hope everyone enjoyed the break. Just another reminder: you can use the Q&A to pose any questions to the session participants, and we'll also be monitoring the chat, but we are mostly using that for any technical issues or for the session participants to share links to relevant resources. For this second session, the topic will be mobile accessibility. Detlev, from DIAS in Germany, is going to moderate it. And our two panelists, who will be joining us for the next hour, are André from the Universidade Federal de Lavras in Brazil, and Paul from Digital Accessibility (the English translation of its Dutch name) in the Netherlands. You can take it away, Detlev.

DETLEV FISCHER: Hello, and welcome. I am trying to emulate what Jade did, because it's always difficult for me to find the right format for introducing everyone. I was prepared to introduce the panelists, but I think it's probably better if they introduce themselves. Carlos has already given the names, so before we start with our topic, I would just like both of you to spend a minute saying who you are and what you are doing, and I'll add to that, and then we can start. Do you want to start, Paul?

PAUL VAN WORKUM: Yes, that's fine. I am Paul van Workum, and I have been working for a few years now in the field of app accessibility. I am one of the founders of the Appt Foundation, where we created a knowledge platform with a lot of app-specific information, like how assistive technologies work and how you can fix certain issues in certain code bases. So that's an interesting resource, I think. Besides that, we have a company where we do everything around app accessibility, from training to testing to user testing, and also helping organizations evolve in their maturity level. Besides that, I work at the Dutch government, helping with the most critical apps and doing, basically, supplier management: helping suppliers of apps to governments become accessible. That's it. André?

DETLEV FISCHER: Do you want to pick up, Andre, and say a few words about yourself?

ANDRE PIMENTA FREIRE: Yeah, sure. Hello, everyone. First of all, thanks to Letícia, Carlos, and all the organizers, and thanks to Detlev and Paul for sharing the session. I think we'll have a very good time here sharing some lessons learned and challenges in evaluating the accessibility of mobile apps. I am an assistant professor at the Federal University of Lavras, officially in the field of human-computer interaction. I also teach optional courses on accessibility. And we've done some research, among other things, on mobile accessibility, including evaluation. So I hope we can share a couple of lessons we've learned from looking at different issues in mobile accessibility evaluation: from technical work we've done jointly with colleagues on automated evaluation, to how to do manual auditing of mobile accessibility, to work on different platforms, and even some more recent studies we've done at the policy level. We may have a couple of particular issues in Brazil to share, which might be applicable to many other developing countries: Brazil has what's not such bad regulation and legislation on accessibility in general, including what is now, I think, eight-year-old legislation covering digital accessibility, which just left mobile accessibility out. And we have looked into how surveillance and law enforcement have worked in that scenario. We have some recent advancements in Brazil. Reinaldo Ferraz is here; he has done a lot of work with the Brazilian national regulatory body to put out a new set of guidelines specifically for mobile accessibility at the end of 2022. So I hope we can share a couple of lessons, from the technical side, through processes in universities, research agencies, and companies, to what we've seen in policies in different countries, both those with advanced legislation, such as in the European Union, and other countries that are kind of catching up with that. So, looking forward to very nice discussions here.

DETLEV FISCHER: Okay. Thank you both. I will just say a few words about me so you know who I am. I am the managing director of DIAS, which is a company specialized in accessibility testing and consulting. And I have been a member of the Accessibility Guidelines Working Group of the W3C for, I think, about 12 years now. So I am actively involved in shaping the new WCAG 3 standard, which is kind of a difficult, or challenging, thing. I've also been involved in policy consulting, in the sense that I have been a member of WADEX, the Web Accessibility Directive Expert Group, which helped the European Commission devise the monitoring scheme so that the Web Accessibility Directive can be monitored across the Member States. So that was interesting as well.

Yeah, that's my personal background. And I thought I'd start with a very, very quick run-through to give some context for what we are discussing today. And that starts with the name of the session itself. It's called "Mobile Accessibility." That term has been a bit under attack within the Accessibility Guidelines Working Group, because it's increasingly a misnomer. There was a time when it was perceived as separate: there were the mobile apps on the smartphones, and there was the world of the desktop. But we increasingly see that apps are also run on a tablet, which has a keyboard, so there are keyboard accessibility issues there. And we also see that desktop environments increasingly have touch, for example, something which was not common, or not even available, ten years ago. So those two worlds seem to be slowly growing together, and we cannot make assumptions as clearly as we used to in the past. And I think the result is that the Working Group has moved away from calling out mobile accessibility towards looking at different input modalities. So you basically have two different input modalities. One is coordinate-based, if you like, so that's your pointer, whether you guide it with a mouse, with your finger on the touchscreen, or with a grid as a speech input user guiding a virtual pointer. And the other is the traditional keyboard input modality, on which other assistive technologies are based. For example, a switch user, who has some motor disability and uses a switch to interact with a webpage or an app, would need good keyboard accessibility to operate things. So that's the way things are going. Mobile accessibility as a term, I think, is probably on the way out. But that doesn't mean that it's not interesting to discuss the particular issues we have.
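As a minimal sketch of what that keyboard point can mean in practice, assuming an Android View-based app written in Kotlin (an illustrative example, not code discussed in the session): a custom control that only reacts to raw touch events never appears in the focus order, while one that is marked focusable and clickable can be reached by keyboard, switch, and D-pad users, and the framework translates Enter and D-pad-center presses into clicks.

    import android.content.Context
    import android.view.View

    // A custom tappable card. Without isFocusable, keyboard and switch users
    // can never reach it; without isClickable, pressing Enter or DPAD-center
    // while it has focus does nothing.
    class TappableCard(context: Context) : View(context) {
        init {
            isFocusable = true   // reachable via keyboard, switch access, D-pad
            isClickable = true   // framework maps Enter/DPAD-center to a click
            setOnClickListener { open() }
        }

        private fun open() {
            // Hypothetical action: navigate to the card's detail screen.
        }
    }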

So one of the things we are faced with as evaluators is that, at least in the normative space, the space of European norms, standards, and directives, there are no specific mobile guidelines. They are basically all derived from Web guidelines: six of the Web accessibility requirements have been taken out, and the rest have basically been put into Chapter 11 of the European norm, which is called Software, and we are now supposed to use those to evaluate apps because they count as software. And obviously, there are some problems with that. That's what we will probably talk about later in more detail: at what points you see differences where the standard requirement cannot easily be applied to apps, and what can we do about that?

There have also been some requirements that have been around in other recommendations for mobile accessibility for some time, and which have only very recently become standard requirements. For example, WCAG 2.2 now has a "target size" criterion, which gives us some idea of how big a touch target should be; that was never a standard requirement before, but several recommendations for the accessibility of mobile apps have included it. And the two big operating system vendors, Apple and Google, have their own guidelines with things like a recommended touch target size.

So it's an interesting field. The thing is, because of the difficulty of applying requirements written for the Web to the mobile space, we also have the problem that in quite a few places it is very difficult to apply them correctly. You have a fairly high margin of error, or of uncertainty, where you wonder: is this a reliable assessment? Especially in the context of conformance assessments, or from a monitoring perspective, where some public body needs to say this is accessible, you have met all the requirements. Is that easy to do? And how do we do it?

The big difference, of course, is that apps normally come in a windowless environment, unlike the desktop, and they are often designed exclusively for touch. When we evaluate things, most of the problems we find are in the area of linear access. For example, if I turn on a screen reader and want to traverse elements, things are not accessible; or if I connect a keyboard, I don't have proper keyboard accessibility. That's simply because people think: this is for mobile use, these are apps on the smartphone, they are used exclusively by touch input, so the rest is not relevant. But the standard says it must be keyboard accessible. And if you have a screen reader turned on, you have to have linear accessibility, and you have to make sure that all the elements you encounter expose their proper name and role, so that a blind user, for example, will know what to do with them. So that's the situation we are in. Another important difference is that apps are not open for code inspection. When we evaluate websites, we can open the developer tools, look at the source code of the page, and check things there. That's not available when we audit apps. I know that Paul has recommended, as part of the Appt evaluation procedure, that the auditor inquires what kind of platform or development environment has been used, and that's certainly fine. But in our practice, when we audit apps, we simply don't have that information. We may get it, but even then it may be difficult for us to know exactly what needs to be done technically to make things accessible, because there are so many different development environments. So we don't have that openness of code.

We also have a much less extensive tool set, and that's something Paul has indicated they have some ideas on how to support and improve for evaluation. We don't have, for example, the bookmarklets we have on the Web to check things. We don't have developer tools where we can run plugins giving us automated tests. There is some of that, like the Accessibility Scanner on Android, and there may be some others we will hear about, but it's a much less broad and powerful tool set at the moment. That means that for testing, we rely very strongly on the screen reader to work out whether all the elements in the app have proper accessibility. You turn it on, and then you can hear what accessible name is behind an element, whether it can take focus, whether it has the right role, and whether you would know, as a blind user, what to expect and how it will behave.

There is also another difference: the operating system is to apps what the browser is to the Web. The accommodations you can make in the browser are not available, but there are other accommodations you can make, as a disabled user, at the operating system level. And there are a number of conformance questions with that. Is it enough if you can, for example, improve the contrast of elements at the operating system level, and then you meet the requirements? Or is the author responsible? So we have this whole question of what is the operating system's responsibility and what is the author's responsibility, which we'll get back to, I think. So I think that's probably enough for now.

Just to open with the participant questions: we received a number of questions before the meeting, and I will briefly mention the topics. One is around available tool sets, and that's certainly something both Andre and Paul can contribute to. There's also something about testing guidance: how do we know how to test, when is something okay and when is it not? There's a scarcity of information on that, I think. Next, there are different platform capabilities for apps, so you may not always be able to meet the requirements of the European norm, depending on how you interpret it; how do we deal with that? Another topic that was raised, which I hope we will cover, is Web views. A Web view means you have a native app, but within that app you have areas, views, where content is pulled in from the Web, and that often creates difficulties: you suddenly have a break in the navigation, you may not be able to focus those Web views easily or get out of them easily, and they may behave quite differently or even carry a different navigation compared to the native app. So that's an interesting question: how do we deal with that? And there was one specific question on reflow. Reflow means that, for example, if you zoom in in a Web browser, you get a different design, which is also used on a mobile phone, often with a little hamburger icon for the navigation. What does the requirement that content can reflow when you zoom in mean for a platform which mostly doesn't have zoom, where the magnification of text normally happens in the operating system by turning on the zoom accessibility function? So those are the four topics I wanted to briefly report.

Maybe we start with the tool sets, because I think several questions homed in on that. What are the ways, or are there good tool sets, to help us in this difficult task of evaluating mobile apps? Does one of you want to pick that question up?

PAUL VAN WORKUM: Yeah, I would be willing to start.

DETLEV FISCHER: Mm hmm.

PAUL VAN WORKUM: I think that there are a few levels. One is the process: how to approach the testing. In the Netherlands, each government body is responsible for producing, for each website and each app, a full report based on WCAG EM. But for apps, it's quite challenging to have a WCAG EM evaluation produce the same results across all auditing firms. Some firms try to find the URLs, and then they find one URL, which is just the App Store or Google Play Store link, and there is no way of cutting it in pieces and making a sample, because there are no URLs. So what we try to do with the process is to identify an app differently from a website. You probably need the version number, because you can't download and archive an app the way you can a website.

Secondly, a screen would be a good alternative for a page. If you have a set of screens identified as your scope, you are able to make a sample. Certain things like that we wrote down differently in the Appt Evaluation Method. That's basically the process part: you need a different process to test apps, and we identified some key differences. We did this as an assignment for the Dutch Government, and it was quite a small project. But I think it would be very interesting to see how you can make that kind of evaluation method, comparable to WCAG EM, that a lot of companies could use, and it could become the next standard. Because we see that auditing firms in the Netherlands are doing it differently; there is no single evaluation method described in such a way that each company does it in the same way. So I think it's an interesting resource, and it's the best that we've found, but there is still a lot to do on this process.

And then, of course, you have the interpretation of WCAG or the EN standard. You already gave a few examples, and there are so many things going on there. Some things you simply can't test. Maybe I will give some examples. Everyone knows that on a website, the language of the page should be identified, and also the language of parts of the page. In software, and a mobile app is software, it means that the language of the software should be identified. So let's say I have a Dutch app and identify the language as Dutch; then I am complying with the EN standard. But for the second one, 3.1.2, Language of Parts, I could have everything read out in Chinese. When I am testing, my screen reader reads everything out wrong, because it's not using the Dutch language; but at the level of the app, the language is set correctly. So I am not able to test it, but I am also not able to fail them on this criterion, because they did set the language of the app correctly. This is one example.
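To make the language-of-parts point concrete, here is a minimal Android sketch in Kotlin; the view and the strings are hypothetical. A LocaleSpan marks an embedded French phrase inside otherwise Dutch text so that TalkBack can switch speech synthesis for just that span. Whether an auditor can verify this from the outside is exactly the problem Paul describes.

```kotlin
import android.text.SpannableString
import android.text.Spanned
import android.text.style.LocaleSpan
import android.widget.TextView
import java.util.Locale

// Hypothetical example: a Dutch UI string containing a French phrase.
fun setMixedLanguageText(textView: TextView) {
    val text = "Welkom terug. Bonne chance!"
    val phrase = "Bonne chance!"
    val spannable = SpannableString(text)
    val start = text.indexOf(phrase)

    // Mark the French run so that TalkBack can switch speech synthesis
    // for just this span; the app-level locale stays Dutch.
    spannable.setSpan(
        LocaleSpan(Locale.FRENCH),
        start,
        start + phrase.length,
        Spanned.SPAN_EXCLUSIVE_EXCLUSIVE
    )
    textView.text = spannable
}
```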

I can give you another. The second one is even funnier. We have an automated testing solution, which we are already using ourselves. What we see there is that companies are adding the role to the name. In the name field they put, let's say, "login button"; that's the name, but the role is empty. As an auditing firm, if you can't go to the source code, you can't know what they programmed, so you are dependent on the screen reader. And the screen reader reads out, in both cases, login, comma, button. Of course, sometimes I notice it's not a comma but a dot or two spaces, and I think: hmm, probably they are cheating on me. But this is what happens. Those are, I think, the two things that, at least in the Netherlands, we are trying to figure out how to deal with. And it's not clear yet.
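As an illustration of the name-versus-role problem Paul describes, here is a minimal Android sketch in Kotlin; `loginButton` is a hypothetical view. A screen reader announces the name and then the role separately, so a role baked into the name is indistinguishable, from the outside, from a correctly exposed role.

```kotlin
import android.widget.Button

// `loginButton` is a hypothetical view somewhere in a screen.
fun labelLoginButton(loginButton: Button) {
    // Anti-pattern: the role is baked into the accessible name.
    // A screen reader announces the name and then the role, so a
    // missing role can hide behind a name like "Login button".
    // loginButton.contentDescription = "Login button"

    // Correct: supply only the name. The Button class itself exposes
    // the role through the accessibility API, so TalkBack announces
    // "Login, button" from real semantics rather than label text.
    loginButton.contentDescription = "Login"

    // (If the button already shows the visible text "Login", no
    // contentDescription is needed at all: the text is the name.)
}
```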

DETLEV FISCHER: I noticed that most resources you find, like WCAG EM, the WCAG Evaluation Methodology you mentioned, which was developed a number of years ago, define certain steps you need to take, for example to set up a representative sample of websites, and what you can and cannot exclude. All of that can be used for apps in a similar way. But it does not give you anything on how you actually evaluate particular success criteria. There's nothing in WCAG EM at all about what reflow means, or what resize text means, for an app. So at that point it becomes more interesting to see what particular testing processes or procedures exist for mobile apps. My suspicion is that companies doing this may often not be willing to detail the exact method they use. So you end up with these very general statements in the European norm or in WCAG, and then you have to scratch your head: what does it actually mean? How do you work it out? How would you, Andre, tackle that? Are there test procedures you use for auditing mobile apps? Which ones are you using, and how do you see the situation?

ANDRE PIMENTA FREIRE: Thanks for the question, Detlev. I think it's a very relevant issue: how do we deal with guidelines and testing procedures when so little is defined, when there are no ground rules or well established procedures as we have for the Web? What I have noticed, from the research perspective, is that many companies have stepped in and defined their own sets of guidelines for native apps and their own testing procedures, so we've come across a couple of guideline sets and testing procedures from companies.

So the BBC have defined their set of guidelines, with a couple of indications of how to test against them. We also followed the work of some of our colleagues in Brazil, who defined a set of guidelines for native apps and how to test them. So many companies are stepping in and defining their own sets of procedures. And in a research study with practitioners, we found, for example, that they found it easier to test against some of the BBC's guidelines, with their well defined testing procedures, than to map the WCAG guidelines onto mobile apps, because many of those still don't have the sufficient and advisory techniques attached to them, which is where you find the well defined testing procedures. So having well defined procedures to test specific guidelines matters. I think this is still an open issue, and we have to work on it. But in practical terms, I think it's worth looking around at what specific companies have done to approach that while we don't have the well defined procedures.

In terms of tools, that's not something I have done research on specifically, but I have collaborated with colleagues on it, and there are a lot of very interesting challenges in native mobile apps, as you mentioned. We don't have pages, so which screens should we check and evaluate when we are doing our sampling? That's very challenging to choose, and we need specific rules and guidance on how to do it. On the other hand, from collaborations with colleagues in software engineering who come from testing, what I have noticed is that when they test native mobile apps, they bring in techniques from the different testing approaches they already have, similar to the Accessibility Scanner we mentioned. In research, some groups have advanced on that and tried to exploit techniques that simulate different interactions with interface components. In the future, by exploring the accessibility APIs more to dig out that information, although we will have a lot of difficulties compared to the Web world, we could also gain some advantages: bringing in advancements from software engineering and software testing that were harder to employ in the Web world, we could see automated approaches that we haven't seen in Web evaluation tools either. So I see a field with a lot of open questions: how to sample, how to have well defined rules to test, to evaluate, and to report on those issues. From the research perspective, I also see a lot of good opportunities.

In the Brazilian context, as Paul was mentioning what's going on with the European standard using the software guidelines and trying to map them onto mobile accessibility, Brazil has been in kind of a funny situation. Our current law for accessibility in general is from 2015, and it broadened the scope to public and private organizations, whereas the previous law was very limited: it only covered the federal government and the executive power. It's much broader now. However, the law says that all private and public organizations should have accessible websites, which doesn't include mobile apps. So we are in kind of a gray area in the country. On the other hand, as I mentioned earlier, and I will post the link there (I am sorry, for those who want it, this is in Portuguese; I think we only have the title in English), Reinaldo Ferraz, here with us, was working very closely in that working group, and we now have a very specific set of guidelines for native mobile apps. But it's only one year old; it was released at the end of 2022. And I haven't seen a lot of work in the country on defining procedures to evaluate against it. As I mentioned, the law doesn't point to that standard, so there's no law enforcement activity in that sense.

So I think we might be in for a lot of work in terms of having specific evaluation tools and procedures for that, but at the moment, as I mentioned, it's still very early days: the standard was published only a year ago, and many people are still getting acquainted with it. Still, I agree with a lot of what Paul and Detlev discussed. There's a lot to do on reporting procedures, standards, and automated evaluation tools. But again, we might have to look at it in a different way: looking at the opportunities we have from the different ways of working with apps, but also at the differences from the Web world, and at work that I think we are going to see in future versions of WCAG and other standards as well.

DETLEV FISCHER: Paul, can you add to that regarding the automated opportunities? I think you mentioned that Abra is also working on some automated solutions. What can it do, and what are the opportunities, in your view?

PAUL VAN WORKUM: Maybe I want to react to that as well. I see two things. One is that I have only been working in the accessibility field for three years, and I am spending, together with the people around me, around 80% to 90% of my time on product development. So we are really digging into certain things, and that's why we were able to gain so much knowledge in such a short time.

But what I also see is that there are some experts from the Web stating to me that apps are the same as websites. What I find really interesting is that there are a lot of experts who know that in Web accessibility you should look at all the small details. In apps, that's quite frustrating: you are looking at the details, but you can't go to the source code, the test rules are not working, and so you have to interpret things. And that basically means two things. One, apps are generally shitty, so there are a lot of improvements that can be made. Let's first make big steps for users, because that's why I am in this field: I want to help users, and to do that, help apps improve. If you don't know the exact rule, think about the principles: is it operable for everyone, for example with a keyboard, and is it a really big issue if it's broken? That way you can quite easily make a big step. Then, of course, in the details it gets complicated, and there's also a lot of discussion, because people interpret things differently. But first make big steps on name, role, value, on font sizes, on contrast, on labels, on images. If you do that, you already make a really big impact for a lot of users, and that's not that complicated; it's quite clear what you need to do there.

So start with the user, I think that's a really good approach, and fall back on the principles if you don't know the exact test rule. It's different with the Web, because on the Web the discussion is already at a level of detail where, as an expert, you can't really do that anymore. With apps, finding the issue is not the problem; most of the time, fixing the issue is the problem. Because fixing it involves not only native iOS and native Android but also different cross platform frameworks, there are a lot of solutions, and some frameworks are simply not able to do certain things. On the Web, which is a kind of markup language, you can always fall back on HTML. With an app, you have a programming language, and it is what it is. If you have an issue with your keyboard, it may mean you have to start over and build a new app in a different programming language. And sometimes my question to the government is: on one side, you want it to be fully accessible; on the other side, you've invested a few million euro in a certain programming language. It's not that you have one module or one element that you have to change. You have to rebuild the whole app, and probably hire a completely new team of developers. So I think that's also quite a big challenge for apps.

And I think we should make accessibility a bit more fun, at least for people who are just getting in touch with it. So, automated testing, that was the question from Detlev. I think automated testing is not a solution for everything, but it is a solution: you can make a big step with the basics, with a lot of developers in the teams. Because if every developer knows when the basics are going wrong, we can make a big step in accessibility. I don't want to do a big promotion, but what we are going to do in January, and I am not allowed to set deadlines for my team, is launch a tool that can test the first screen of an app for free. You just put in the URL, and you get a report on the first screen. And if you want to test more screens, you can do it in the cloud, and it will be a very attractive

DETLEV FISCHER: Because you mentioned a URL: is that for the Web or for apps?

PAUL VAN WORKUM: We only do apps, but you add the URL of the app in the App Store or Play Store, and we will do the rest. So that's one. Secondly, we will have a kind of debugger or accessibility inspector, and it will be able to take all the screens apart. You can inspect the name, the role, the value, and whether elements are focusable. We are investigating whether we can do contrast better, because with a contrast check, if you make a screenshot and send it to your desktop, there is between a 1% and 10% change in the color codes. That means a measured contrast of 4.1 could really be 4.5, or it could be a genuine failure. So how do you deal with this? Do you only fail when it's below 4.1, or do you already fail when it's below 5, because it could potentially be in the danger zone? A lot of questions. But what we can do with automated testing is find name, role, value issues, contrast issues, target size issues, whether labels are added, whether decorative images get focus, and text sizes. When we look at an app that didn't do anything about accessibility, I think out of 100 issues, 80 are name, role, value, text sizes, contrast, stuff like that. So you can make a really big step using this automated testing. Of course, if an organization is further along, the issues get more complicated, and then you can find maybe only 10% or 20% of the issues. So it's not fair to say that we can find 80% in general, and I did not say that. But we can find quite a lot of the issues that occur in apps.
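For reference, the contrast ratio Paul is measuring is the WCAG 2.x formula based on relative luminance. A minimal Kotlin sketch follows, including an illustrative tolerance band of the kind he describes (fail below 4.1, manual review between 4.1 and 5.0); those thresholds are assumptions for screenshot-based measurement, not part of WCAG or the EN.

```kotlin
import kotlin.math.pow

// WCAG 2.x relative luminance of an sRGB color (components 0-255).
fun relativeLuminance(r: Int, g: Int, b: Int): Double {
    fun channel(c: Int): Double {
        val s = c / 255.0
        return if (s <= 0.03928) s / 12.92 else ((s + 0.055) / 1.055).pow(2.4)
    }
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)
}

// Contrast ratio between foreground and background, from 1.0 to 21.0.
fun contrastRatio(fg: Triple<Int, Int, Int>, bg: Triple<Int, Int, Int>): Double {
    val l1 = relativeLuminance(fg.first, fg.second, fg.third)
    val l2 = relativeLuminance(bg.first, bg.second, bg.third)
    val (hi, lo) = if (l1 > l2) l1 to l2 else l2 to l1
    return (hi + 0.05) / (lo + 0.05)
}

// Illustrative tolerance band for screenshot-based measurements;
// the cut-off values here are assumptions, not standard requirements.
fun judge(ratio: Double): String = when {
    ratio < 4.1 -> "fail"            // below 4.5 even allowing for drift
    ratio < 5.0 -> "manual review"   // drift could hide a pass or a fail
    else -> "pass"
}
```

For example, `judge(contrastRatio(Triple(118, 118, 118), Triple(255, 255, 255)))` lands in the manual review band, which is exactly the kind of borderline case Paul describes.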

DETLEV FISCHER: That would be a useful tool to have, and it would probably make testing those things easier. Just to get back to the question of how to apply certain WCAG criteria, I would like to give an example leading to a question about the responsibility of the author versus the responsibility of the operating system and the settings the user can make there. Take text size. On the Web, you have ways of increasing the text size, most commonly by zooming in: you zoom into your browser, the text gets bigger, you have reflow, and at some point it's just one column of text. And you can then check: does it increase text to 200%? In apps, that usually doesn't exist. It's possible for an app to have its own setting for increased text size, but most apps don't. And the common understanding of most practitioners evaluating apps regarding text size is: we actually don't need to look at that at all, because there's a zoom function in the accessibility settings. You can just turn on zoom and magnify up to 500%, or even more by now; 500% is the latest I remember. And that will give you large enough text. But it also means you don't get reflow: you get larger text, but you have to pan the screen horizontally to read the lines. So reading gets a lot more difficult for people with visual impairments, because they constantly have to pan in order to read the text.

So the upshot is: how do you decide, as an evaluator, whether that is good enough, as many people say, or whether you also check that the text in the app changes its size when you change the settings in the operating system? You can increase text size in the operating system's accessibility settings, say large text, and if the app implements it well, the text in the app gets larger. Since you can do this as an app developer, is it something that you also require, and say that if you don't do it, this point fails? Or is it something on top, where you say: this passes anyway, because we have the zoom function at the operating system level? That's my question to you. How would you deal with those questions in an evaluation? What's your line on the author's responsibilities versus operating system level capabilities? Does anyone want to pick up that question? Paul?

PAUL VAN WORKUM: Yeah, I can give a reaction. On my bookshelf, there is a dictionary. And the question is: because I have this dictionary, is it fine to use very complicated words? It's the same with AI. AI may be able to give me feedback on how a screen is set up and what kind of icons are being used; is that enough? It's the same as the dictionary on my shelf. And that's the thing. What we also see in the guidelines for the Web is that if there is a high contrast mode or a big font size option, the argument becomes: if I turn the big font size on, my letters are big enough, so you don't need to meet certain contrast criteria anymore. I think it's a discussion that you don't want to start.

DETLEV FISCHER: I started it. (Laughter)

PAUL VAN WORKUM: Yes, you have. I will give you my answer. I think that an app, without changing any settings for high contrast mode or a bigger font size, should work for the average user. And the users who need a bigger font size should be able to set it in the system to at least 200%, and then all the text in the app should scale. Not necessarily scale to exactly 200%, because in our automated testing tool we found that body text scales to 200%, but text that is already big, at the highest levels, may only scale to 150%. So the rule "text should scale to 200%" is untestable as stated: you would have to put the setting at 300% or 350%, and we still see headings that only scale to 190%. What we do is set it at 200%, and then all text should be visible: each letter should fit inside the box of its button. So you check whether text scales, yes or no, and if it scales, whether all letters are visible. That's basically the simplification. But in apps, in the title bar and the tab bar, if you scale the text, you get issues again. There is a solution with long press: you press a bit longer, and the text is shown enlarged as well. I think that's what you should do as a developer for large text. And the same with contrast: every user of your app, even one who sees really well, still needs a certain amount of contrast when walking outside in the sun. And you don't want to have to go to your settings to make it a little bit bigger or add a little more contrast. No: by default, your app should be usable. So that's why we do it this way. But maybe, Andre, you do it differently. I am very curious.
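On Android, the scaling behavior Paul tests for comes from sizing text in scale independent pixels (sp), which the system multiplies by the user's font scale. A minimal sketch, with a hypothetical TextView:

```kotlin
import android.content.Context
import android.util.TypedValue
import android.widget.TextView

// Hypothetical `label`: text sized in scale-independent pixels (sp)
// follows the user's system font size setting automatically.
fun applyScalableTextSize(context: Context, label: TextView) {
    // 16sp is multiplied by Configuration.fontScale at render time,
    // so a 200% system setting yields roughly twice the pixel size.
    label.setTextSize(TypedValue.COMPLEX_UNIT_SP, 16f)

    // The current user font scale, useful when testing scaling behavior.
    val scale = context.resources.configuration.fontScale
    println("User font scale: $scale")
}
```

Text sized in plain pixels (px) or density pixels (dp) ignores the user setting entirely, which is one way the "text does not scale" failures Paul describes come about.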

ANDRE PIMENTA FREIRE: I totally agree with you, Paul. And from another perspective: when we look into what kind of devices people use, not everyone can afford every device. Let's take an example from a lot of research: the iPhone. Many people have asked, in many research studies, why have you not invited iPhone users to your usability studies on accessibility? And we have actually done a few, but we have very few iPhone users in countries like Brazil. Some surveys have shown that more than 85% of the people surveyed used Android phones, and even within the Android world there is a lot of variability in the types of devices people use, their capabilities, models, et cetera. So relying on the device may be very tricky. I totally agree with you, Paul: we should definitely try to provide the resources people need, even if their devices don't provide extra features to do it on their own.

As Paul mentioned, there may be people who don't use assistive technologies every day, maybe because they are outside and it's sunny. Or take a recent case at our university: an older student who is gradually losing his sight. He can still see a little, but he was not used to assistive technologies from an early age, so now he is struggling to cope with different settings and to learn different things. If he has to do a lot of work to learn how to use the assistive technologies on his mobile phone, that's not easy. So I take from this example that if we can provide these features, especially when they are covered by standards referenced in regulation, I don't see why not to do it. It's good that we have devices with good resources and good assistive technologies, but they are not always available, and people are not always able to use them as we might think. So I totally agree with that, Paul.

DETLEV FISCHER: Yeah, I think it makes a clear case for needing more specific advice on how to apply the success criteria or EN requirements. For the resizing of text, we mentioned there are different ways of doing it: there's zoom, and there's the accessibility setting for larger text. But the requirement does not differentiate between types of text. It says nothing about text that's already large, and nothing about, say, labels in a tab bar, which cannot grow by the same amount because they would break or need to be truncated. Those issues exist. And if you apply the WCAG requirement to the letter, which just says everything has to get 200% larger, you end up with recommendations for developers which may not be useful and may actually make things worse. The thing you mentioned, Paul, that you can have a pop-up with larger text, is a nice way out. It's not really something that has been foreseen, I think, and it would not really be clear whether it meets the requirements, because it requires an extra step to bring up the enlarged text. But it's certainly something that is more germane to the app environment. And there are many cases

PAUL VAN WORKUM: Yeah, I think the problem is that if you say text should be enlarged in the tab bar, what you get is truncation dots, and you can't read it either. And what we also see: landscape mode is quite often used in combination with a larger font size, because then a row fits a lot of words instead of only two; it's very tiring to read only two words per row. So we cannot fail an app on bigger font sizes if the fix breaks other things. You cannot say to the developer, in my opinion: you should do this, but if you do this, I will fail you on something else. So the best practice is long press implemented; but in audits, are you able to fail on this? Because if you enlarge the labels and you have five tabs, the tab bar sometimes gets three or four lines high, meaning you don't have space for your content anymore; or if you have space, you can't keep any overview, especially with a bigger font size on.

I maybe have another one, about lists; I think it was one of the questions from users as well. In an app, if you present a list with dots, with bullets, it is not read out the same way as on a website, where it's announced as a list, one out of four or one out of five items. One of my cofounders made the Dutch COVID app accessible; he was the lead accessibility developer there. And there was an auditing firm that did a lot of Web audits stating that the list was a fail. Because it was very important for the Dutch government to make the app fully comply, he built something so that it was read out as a list, one out of four. But if you have the app in 15 languages, that's a lot of translations and strings, and it took a really long development time. At a certain moment, the auditing firms in the Netherlands said: if you can tap each item separately, that's also good enough. We are organized in an inspection group, and we agreed that we deal with lists in this way.
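On Android, the "one out of four" announcement Paul describes can come from real list semantics rather than translated strings. A minimal sketch using the AndroidX collection info APIs follows; `container` and `items` are hypothetical views in a custom layout (standard RecyclerView lists expose this information automatically):

```kotlin
import android.view.View
import androidx.core.view.AccessibilityDelegateCompat
import androidx.core.view.ViewCompat
import androidx.core.view.accessibility.AccessibilityNodeInfoCompat

// Expose list semantics on a custom layout so TalkBack can announce
// "item 2 of 4" natively, with no translated strings involved.
fun exposeListSemantics(container: View, items: List<View>) {
    ViewCompat.setAccessibilityDelegate(container, object : AccessibilityDelegateCompat() {
        override fun onInitializeAccessibilityNodeInfo(
            host: View, info: AccessibilityNodeInfoCompat
        ) {
            super.onInitializeAccessibilityNodeInfo(host, info)
            // rows, columns, hierarchical
            info.setCollectionInfo(
                AccessibilityNodeInfoCompat.CollectionInfoCompat.obtain(items.size, 1, false)
            )
        }
    })
    items.forEachIndexed { index, item ->
        ViewCompat.setAccessibilityDelegate(item, object : AccessibilityDelegateCompat() {
            override fun onInitializeAccessibilityNodeInfo(
                host: View, info: AccessibilityNodeInfoCompat
            ) {
                super.onInitializeAccessibilityNodeInfo(host, info)
                // rowIndex, rowSpan, columnIndex, columnSpan, heading
                info.setCollectionItemInfo(
                    AccessibilityNodeInfoCompat.CollectionItemInfoCompat
                        .obtain(index, 1, 0, 1, false)
                )
            }
        })
    }
}
```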

DETLEV FISCHER: Yeah.

PAUL VAN WORKUM: But now we go to the European Union, and in other countries they do it differently. I think we should do something about that at a European level, or somewhere: everyone should do it the same way, because otherwise it's unfair that some countries, or some auditing firms, fail things and others don't. We have a lot of discussions in the Netherlands where some auditing firms fail apps on keyboard controls, or on the focus indicator not having enough contrast, and others do not, because they say it's standard platform behavior and we can't fail on this.

DETLEV FISCHER: Yeah, there's also a lot of leeway and a lot of wiggle room in interpreting the requirements, so you can arrive at different results. We have a number of questions from the audience. Maybe I should pick some and throw them to you and see whether you want to answer them. One is: what is your setup for mobile testing? Do you use multiple phones or a mobile emulator? Does one of you want to answer that? Do you use several devices?

ANDRE PIMENTA FREIRE: We've done a couple of studies with different devices, but it's very hard to have all sorts of different configurations. In many situations we've seen, and from what we've heard from developers and testers, people tend to use emulators to get different settings, screen sizes, and OS versions, so I think emulators can really come in handy.

DETLEV FISCHER: Okay. And there's another question: do you know of tools that emulate apps on a desktop, so that ZoomText users can test apps more easily?

PAUL VAN WORKUM: I don't use emulators at all. When I do an audit, I use one device, and I test only Android or only iOS. For each issue I report which screen it is on, I add a description of the problem, and I add a screenshot, because we see that developers can then identify very fast what the issue is. And we add the criterion, meaning that if you then link it to the Appt platform, there's also the code base where you can find the solution to the problem. So you can see a heading, and with the screen reader captions on, it says "login" but not "login, comma, heading"; we say this is wrong, it's 1.3.1, look at the site for a solution. So in a meeting of one hour I can normally describe around 60 issues, because it's: look at the image, this is a heading, next; is this a button? Insufficient contrast. Okay, next. Having that visual information there is, I think, a very important combination, especially for apps. With a website, you can download it and present it later; but with an app, once you update the version, you can't reach the old version anymore. An issue could have appeared only after an update, so without those screenshots you can never tell anymore whether you did it well or not.

DETLEV FISCHER: Yeah, well, in my experience there are differences between devices, especially with Android tests. If you test something on a Pixel phone and then on a Samsung phone or a tablet, you may get different readouts on certain items. So there are differences, and it may be useful to cover those. But in our experience, that is also down to the customer, who may say: please test this also on this device, on this Android skin, because we know it has a large market share and we want it covered. Regarding sharing the screen: internally, we have done quite a few app tests where a blind tester and a sighted assistant work together and share the screen via Zoom. The blind tester shares his screen, and at the same time the assistant has the same app open on their own device, so they can use it independently to verify what the blind tester does. That has turned out to be quite useful. The blind tester can also share the screen reader output, so the assistant can hear what is being spoken. So that is a setting which has been quite useful, but be warned: if you do that kind of testing, it is quite time consuming. I don't know how we are on time. It is now 16:25. Do we have more time, or are we...

CARLOS DUARTE: No.

DETLEV FISCHER: Okay, then. We have many, many questions, and I am very sorry that we haven't covered more of them. I hope I, or the other panelists, can answer some of them in the Question and Answer panel. Thank you both very much for your insights. I think it was an extremely useful discussion; we have just scratched the surface, and there's a lot more to talk about, but this is all we could squeeze in. So I hope it was useful for you.

CARLOS DUARTE: Thank you so much, Detlev, Paul, and Andre. It was definitely really interesting. There are still a lot of open questions in the Q&A, so if some of you can tackle those, it will be very good for everyone involved. It was really insightful and full of actionable material, I would say, so thank you so much for your contributions. And now let's have another ten minute break. We'll be back at 16:35, five minutes past the bottom of the hour, for our final session on Artificial Intelligence for Accessibility Evaluation. See you in ten minutes.

Session 3: Artificial Intelligence for Accessibility Evaluation

Transcript of Session 3: Artificial Intelligence for Accessibility Evaluation

CARLOS DUARTE: Okay. So I think we are ready to start our last panel. The topic for this panel session will be AI for Accessibility Evaluation, and it's going to be moderated by Matthew Atkinson from Samsung R&D in the UK. Our participants will be Yeliz from Middle East Technical University in Türkiye and Alain from SIP in Luxembourg. Once again, a quick reminder for any attendees who have joined in the meantime: we are using the Q&A for posing questions to the panelists or the people in the session, and we are using the chat to share any resources linked to the topics being discussed, or for any technical issues that you might have. So Matthew, you can take over.

MATTHEW ATKINSON: Hi, everyone. Let me just juggle my windows slightly, first of all. Just one second. Okay. So we're very excited to have this chat; it's a privilege to be here, and I welcome everyone. Thanks for your attendance, and thanks to the insightful panels that have gone before us, where this topic has actually come up; we'll try to give you our take on those questions. So, how this is going to work: we are each going to introduce ourselves and speak for a couple of minutes, just to set out our experiences, and you will see there's a lot in common between the three of us in terms of parallel threads. Then we'll move into general topics of discussion. Of course, there are some questions we already got from the audience, which we've looked at, and we will answer things that come up during the session as we can.

So I'll begin. Hello again. I'm Matthew. I am Head of Web Standards at Samsung R&D Institute UK; however, just to be clear, I am not here representing Samsung. I am also co chair of the W3C's Accessible Platform Architectures Working Group, which I will call APA from now on. One of our main jobs is to review W3C's specifications for accessibility, but we also do original research of our own. And whilst I am not speaking on behalf of APA either, we do a lot of research in this area, particularly in our Research Question Task Force, and we have a lot of experts in that task force who look at the trends in this area. So I will relay some of my experience and some of theirs. What follows are my personal opinions, based on some experience of accessibility auditing and a little of academia as well.

So one thing I wanted to do first of all is just distinguish between AI or machine learning and some of the current automated evaluation that we can do. As other people have mentioned, actually, in previous panels, there are automated accessibility evaluation tools, and they just use standard sort of heuristics. And they can capture around 30% of the sorts of problems that the Web Content Accessibility Guidelines, or WCAG, identifies. So they don't capture the majority of the problems, but they can give you a good barometer, a rough estimate of accessibility, and they can be run in an automated way. But they don't use machine learning. So we are not talking about those. We are talking about more recent developments. And on machine learning, you'll notice that we'll talk about risks and opportunities, and we'll also talk about mitigations. And I am just going to highlight one or two of each of those just now. And we'll revisit these as we go through.

So there's a concept from the literature called "burden shifting," or "shifting the burden." A good example of this is automated captions generated on, say, streaming videos. Whilst they can be useful, they might be very reliable, but they are not 100% reliable. And there are some risks presented by that: if you are somebody who can't hear what's being said in the video and you are relying on the captions to be accurate, then the burden of verifying the accuracy of the captions has been shifted onto the person who is least able to do so. So that's one of the big risks, and there are others that we'll talk about as well; Alain has some great examples of those. There are some opportunities, though, because there are some things that machines can do better than humans and that, with some guidance, could present great opportunities. And Yeliz has some really good research on that front, when it comes to accessibility evaluation, that she'll share with you.

And in terms of mitigations, I just wanted to put two links in, which I will copy into the chat whilst I'm talking, and these are two W3C efforts trying to help in this area. So I am just going to paste these in, and I will tell you what they are. There's the W3C's Principles of Ethical Machine Learning, which is an official W3C deliverable, which is being worked on. And then there is also a community group, which isn't official W3C work, but it's something that is being incubated. This community group is called Accessibility at the Edge, and one of the things they are trying to do is gather consensus on where we can and where we might not find machine learning to be helpful. So anybody can join that community group. You don't need to be a member of W3C in the sense of being a paid member or a member organization. You only need a free W3C account. So we welcome feedback on these efforts. Okay. So that's definitely enough from me. So I will hand over, first of all, to Yeliz to give your sort of introductory statement.

YELIZ YESILADA: Hello, everybody. Good afternoon. Thank you, Matthew. First of all, thanks for inviting me here; it's been great, and it was really great to see the first two sessions. I really enjoyed them, especially the mobile accessibility one. I remember it was in 2009 that we created the document in the Education and Outreach Working Group on the common experiences between mobile users and disabled users, so it's really interesting to see the discussions and how they have evolved. Let me introduce myself. I have been in academia for more than 20 years, quite some time. I mainly do research on Web accessibility, and recently, for the last five years, my research has mainly focused on using AI to improve accessibility for disabled users. I also do research in eye tracking and human computer interaction, so we also try to use AI for eye tracking research.

The recent research that Matthew mentioned relates to what we already discussed in the previous session: the importance of WCAG EM, the evaluation methodology. It is a great resource for systematically evaluating websites; here I am using a broad definition of "website." But there are a lot of subjective elements. I guess Paul mentioned in the previous session that, even for the same website, two auditors can reach different conclusions. One of the reasons is that WCAG EM has different stages. One of the stages, for example, is defining the evaluation scope: what you consider to be the website. Then comes exploring the target website and deciding, for example, which pages you need to sample from the site, which pages to consider. That becomes a complex and subjective task. In our research, we propose a parallel methodology, which we call "Optimal EM," where we explore mainly machine learning approaches for doing more systematic sampling. In the chat, I added the two recent papers that we published on this.

So what we try to do is, first of all, establish a population for a website: what pages are there, which ones are used, which ones are not, et cetera. Then we try to cluster the pages on the site using unsupervised approaches, mainly based on statistical techniques, and generate a representative sample. Of course, generating a representative sample for a site is not that straightforward, because you need to consider, for example: do we have enough coverage? Do we cover different pages, the freshness of the pages, the variety of complexity of the pages, et cetera?
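As a rough illustration of the cluster-then-sample idea (a sketch in the spirit of this approach, not the published Optimal EM algorithm), the following Kotlin code reduces pages to feature vectors, groups them with naive k-means, and picks the page nearest each centroid as a representative:

```kotlin
import kotlin.math.sqrt

// A page reduced to a simple numeric feature vector (for instance:
// element count, DOM depth, number of templates matched, and so on).
data class Page(val url: String, val features: DoubleArray)

fun distance(a: DoubleArray, b: DoubleArray): Double =
    sqrt(a.indices.sumOf { (a[it] - b[it]) * (a[it] - b[it]) })

// Naive k-means, then one representative page per cluster.
fun samplePages(pages: List<Page>, k: Int, iterations: Int = 20): List<Page> {
    var centroids = pages.shuffled().take(k).map { it.features.copyOf() }
    var clusters: List<List<Page>> = emptyList()
    repeat(iterations) {
        clusters = pages.groupBy { page ->
            centroids.indices.minByOrNull { distance(page.features, centroids[it]) }!!
        }.values.toList()
        centroids = clusters.map { cluster ->
            DoubleArray(cluster[0].features.size) { d ->
                cluster.sumOf { it.features[d] } / cluster.size
            }
        }
    }
    // The page closest to each centroid stands in for its cluster.
    return clusters.mapIndexed { i, cluster ->
        cluster.minByOrNull { distance(it.features, centroids[i]) }!!
    }
}
```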

We also introduce different metrics that can be used to assess whether you are sampling well or not. This is basically using unsupervised learning to guide which kinds of pages you take from a site; then you sample and do the evaluation. In my research, I am also quite interested in complex structures. For example, tables are complex structures: how do we evaluate their accessibility? Such structures are used not just for presenting data but also for visually laying out the content of a page. So we also use supervised approaches: algorithms that look at data and learn to differentiate, for example, whether tables are used for layout or for structuring data.
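To illustrate what such a supervised layout-versus-data classifier might consume, here is a hand-rolled Kotlin sketch with invented features and an illustrative linear decision rule; a real system would learn the weights from labeled data rather than use these hand-picked values:

```kotlin
// Invented features of the kind a layout-versus-data table classifier
// might consume; the weights and threshold below are illustrative only.
data class TableFeatures(
    val hasHeaderCells: Boolean,  // any <th> present
    val rowCount: Int,
    val numericCellRatio: Double, // fraction of cells that parse as numbers
    val nestedTables: Int         // nesting often signals layout use
)

fun looksLikeDataTable(f: TableFeatures): Boolean {
    var score = 0.0
    if (f.hasHeaderCells) score += 2.0
    score += f.numericCellRatio * 1.5
    score -= f.nestedTables * 1.0
    if (f.rowCount >= 3) score += 0.5
    return score > 1.0  // threshold is illustrative
}
```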

In general, just to set out the overview, these are examples from my research. But I believe AI can help in two ways. AI is not going to be a silver bullet, of course; it's not going to solve all the problems. Matthew mentioned, for example, that around 30% of issues can already be identified automatically. But with the remaining 70%, if AI can help us and automate certain processes, that would be great. So it can be useful in two ways: for testing, and for helping and guiding the authors, or, we might say, repairing accessibility issues. For testing purposes, there are certain areas where I see potential. Language models can be used to assess the complexity of the text or the layout. We can also assess, for example, whether generated alternative text is appropriate for certain kinds of elements. Whether images are used for decorative or for semantic purposes is another area where AI can help differentiate. And page elements: I've been doing research on that for a long time. It's a complex task to take a page and decide what the page elements and their roles are, but machine learning can help there too.
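As a toy example of a text complexity signal, here is the classic Flesch reading ease formula in Kotlin. It is a crude, English-only heuristic, shown purely as a stand-in for the learned language model measures mentioned above:

```kotlin
// Crude English-only syllable estimate: counts vowel groups.
fun countSyllables(word: String): Int {
    val vowels = "aeiouy"
    var count = 0
    var previousWasVowel = false
    for (c in word.lowercase()) {
        val isVowel = c in vowels
        if (isVowel && !previousWasVowel) count++
        previousWasVowel = isVowel
    }
    return maxOf(count, 1)
}

// Flesch reading ease: higher scores mean easier text.
fun fleschReadingEase(text: String): Double {
    val sentences = text.split('.', '!', '?').count { it.isNotBlank() }.coerceAtLeast(1)
    val words = text.split(Regex("\\s+")).filter { it.isNotBlank() }
    val w = words.size.coerceAtLeast(1).toDouble()
    val syllables = words.sumOf { countSyllables(it) }
    return 206.835 - 1.015 * (w / sentences) - 84.6 * (syllables / w)
}
```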

There are also AI approaches that can help at the authoring stage, in my opinion. For example, generating alt text; we see a lot of research on that, especially in image recognition and machine vision. Or automating caption generation. Or automated text translation, because multiple languages can be an issue, so AI models can be useful there too. I guess we will discuss these examples. But besides evaluation and authoring support, there are also tangential issues with these approaches that we have to be careful about, as Matthew already mentioned. These algorithms rely on a lot of data, and on good quality data, so it's critical that we have data and that it is of good quality. It's also important that we avoid bias: we should not have a bias towards certain user groups or disabilities, and we should not exclude users, so the ethical dimension is critical there. And the accuracy and reliability of these approaches are also critical: how successful they are, and how accurately they can help us. They cannot solve the full problem, but they can at least assist, help, and guide in the process. So these are the issues that I wanted to mention as tangential issues.

Matthew, I think that's all I wanted to say. I guess we'll discuss them later again.

MATTHEW ATKINSON: Yes, yeah, lots to discuss, lots of common threads. Thank you for that, Yeliz. And now over to Alain for your introduction.

ALAIN VAGNER: Thank you, Matthew. Yes, I will just briefly present myself. I am an accessibility specialist at the Information and Press Service of the Government in Luxembourg, a small country in Europe. I am also a member of the committee developing the European norm, via CEN and CENELEC. By background, I worked in the field of human computer interaction for several years, and I have also been a software engineer and product manager. At the Information and Press Service of the Luxembourgish Government, I am part of a small team in charge of several topics, like administrative transparency, open data, freedom of information, and digital accessibility. More precisely, we are the organization in charge of monitoring the accessibility of public sector websites and mobile applications, in the framework of the European Web Accessibility Directive.

There are similar organizations doing the same job all across Europe, in all EU Member States. We are also in charge of awareness and training of public servants on digital accessibility, and we monitor complaints coming from end users; for each complaint, we act as a mediator between the end user and the public administration. Regarding the monitoring, we carry out more than 100 audits per year. That may seem few, but we are also a small country. All our audit reports and all the data we produce during this monitoring are published online, under an open license, on the National Open Data Network, and they may be used, for example, to train an AI model. I don't know if it is quality data, but it is some kind of readable data, for sure.

I also wanted to mention that I am not an AI specialist, but I am interested in the topic and in all tools and technologies which could help us improve the performance and execution of our audits. And I wanted to say that, personally, I am quite aligned with Yeliz when she said that AI may not be a silver bullet. I don't think that AI can solve all accessibility issues; we must find the right tool for the right problem. That was it for me. Thanks.

MATTHEW ATKINSON: Thank you very much, Alain. So, lots to talk about. First of all, one of the things we just discussed on the risks side was bias, and trying to avoid excluding certain user groups. And Alain, you actually have a really good example of this involving language and population size. So would you like to tell us about that?

ALAIN VAGNER: Yes. Regarding languages: Luxembourg is a very small country, but we have several languages here. The national languages are German, French, and Luxembourgish, which is a separate language. In the population, 75% of people speak more than one language at work, and 50% speak two to three languages at work. So the multilingual aspect of Luxembourg is very important, and this is reflected on our websites: all the official public sector websites need to be in multiple languages. We also have lots of issues with mixed languages on websites. Because people are really used to speaking multiple languages, it's not uncommon to see a chunk of text whose language is different from the main language of the website. This is really common, but it needs to be appropriately tagged, with the lang attribute in HTML, for example, so that screen readers read it with the right speech synthesis. That's the first point. We also have some issues with videos. We are trying to have subtitles and transcripts for all our videos, but there is no automatic captioning available for small languages. There are 400,000 speakers of Luxembourgish, and the big platforms, big tech, are not really supporting these small languages. If you have a video in French or in German, you will get automatic subtitles on YouTube, for example; but if somebody is speaking Luxembourgish, or worse, speaking multiple languages in the same video, then you are on your own and have to subtitle it yourself. So this can be more costly to produce. We have some ongoing projects on this topic using AI, such as a speech to text engine and a tool for the automatic transcription of videos. We are not there yet, but we are working in this direction. That is one point regarding languages.

And another point is the complexity of the languages. If you are in a multilingual context, you cannot assume that everyone is totally fluent in all the official languages. This also has an impact on accessibility because, for the Deaf community, as you know, people who are born deaf have more problems acquiring languages, and maintaining context is also an issue. So we should also work on easy to read documents and easy to read pages, so that they can help people with cognitive disabilities but also the Deaf community. On our side, the Deaf community is mainly German speaking, so we are working mainly on the (speaking native language), meaning easy to read pages, on our websites.

MATTHEW ATKINSON: Thank you very much. I think there are some really good real world examples there of the implications of the sizes of data sets and those kinds of issues. The example of captions has come up quite a bit, and it's a good example because it allows us to introduce the concept of the time at which we use a machine learning or AI kind of approach. Although captioning is not directly related to evaluation (we will bring it back to that shortly), the example shows us that at one time, authoring time, helping a person make the captions could really speed them up. Now, of course, right now, we are benefitting from human captioners, which is the best you can get and is fantastic. But not everybody is able to support that. Authoring time allows a human the option of correcting the mistakes that they know are there. Runtime does not. So that's a difference in implications because of the time at which you employ the tool.

And talking about accessibility evaluation, doing things at sampling time as opposed to audit time may have very similar implications. Speaking as somebody who has had to do sampling in the past, I would really appreciate being guided by statistical models for looking at large sites. I would, perhaps, be less confident in machine learning's ability to pick up all the issues, maybe even a fraction of the issues, in the real accessibility testing side of things, for reasons that Yeliz has mentioned and that were also discussed previously around the issue of context. So again, guidance, supporting scalability by having the tool guide the human and using it as a tool, more on the authoring time end of the spectrum rather than the runtime end, could, in my view at least, result in more reliable and, therefore, fair usage.

So Yeliz, you already introduced us to Optimal-EM to some degree, and you also talked about ways that machine learning could be used, for example, at authoring time to provide alt text. Could you tell us more about this issue of context? I think you touched upon it with the tables, where the machine learning system has to do some interpretation. What sort of risks might arise from that, and where might there be some opportunities?

YELIZ YESILADA: Of course, identifying context is a big challenge, I think. For a human evaluator, it's also a big challenge, considering different contexts for the evaluation. But for certain complex structures, by having, let's say, relevant data, certain algorithms can be generated to guide the authoring stage, as you mentioned, Matthew. Of course, authoring and evaluation are all intertwined: if problems are corrected at the authoring stage, then it's going to be easier to do the evaluation, and easier to test at the evaluation stage. While the author is, let's say, authoring and generating certain structures, the AI can actually help there to identify, for example, that a certain structure is used not for presenting data but for laying out content, and that it should not have been used that way because it causes problems for screen reader users. That would actually be a great help, as you mentioned, at the authoring stage. But identifying the context is a big challenge, and it will also be algorithmically challenging for AI algorithms, I think. So it's not going to be a straightforward issue.
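
As an illustration of the authoring-time help Yeliz describes, a simple heuristic can flag tables that look like layout tables rather than data tables. The sketch below is my own illustration under stated assumptions (the third-party beautifulsoup4 package), not a panelist's tool; real tooling would need far richer context than this.

```python
# Illustrative authoring-time heuristic (a sketch, not a panelist's tool):
# flag <table> elements with no header cells, caption, or scope/headers
# attributes, which are often used for layout rather than data.
# Assumes the third-party beautifulsoup4 package.
from bs4 import BeautifulSoup

def flag_probable_layout_tables(html: str) -> list[str]:
    soup = BeautifulSoup(html, "html.parser")
    warnings = []
    for i, table in enumerate(soup.find_all("table"), start=1):
        if table.get("role") == "presentation":
            continue  # already explicitly marked as a layout table
        looks_like_data = bool(
            table.find("th")
            or table.find("caption")
            or table.find(attrs={"scope": True})
            or table.find(attrs={"headers": True})
        )
        if not looks_like_data:
            warnings.append(
                f"Table {i}: no headers or caption. If it only arranges "
                'content visually, use CSS or mark it role="presentation"; '
                "if it holds data, add proper <th> header cells."
            )
    return warnings

print(flag_probable_layout_tables(
    "<table><tr><td>menu</td><td>main content</td></tr></table>"
))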

MATTHEW ATKINSON: Indeed. So shifting gears slightly, I'm not sure if we've had any questions. There are certain other topics we'd like to consider. And just a note that you will see me using an assistive technology here, which is called the manual zoom, so that I can check for questions every so often. One of the things that might be useful is, Alain, you had set out a series of requirements that you would have for considering using AI or machine learning technology. Would you like to tell us a bit more about those?

ALAIN VAGNER: Yes, no problem. As a public sector organization, we have, of course, a few requirements regarding AI. I would say the first one is transparency, because in the public sector we need transparency. For AI tools, we need to know, for example, how the tool has been trained and where the data is coming from, because it will also help us with the questions regarding biases. Biases are a frequent issue in the AI field, and we absolutely want to avoid them: for example, a tool being more precise, more performant, on one type of disability and less on another. This we would absolutely like to avoid. And if we had some kind of AI trained on all our reports, we could maybe find some issues automatically. But for the edge cases, where we have less training data, we would have less precision. And on these edge cases, more often than not, these are the issues where we spend lots of time as auditors. So this is something that may be a bit tricky.

I would also like to mention accountability, because we need to be able to explain a decision, and how can we do that if we have just a black box, for example? So this may be an issue with some models. This also relates to the concept that an AI or an algorithm cannot alone be made accountable for a mistake. We cannot use AI to exonerate us from our responsibilities towards persons with disabilities. There were also the questions about metrics, and I think Yeliz already mentioned this a little bit. We would like to know how to evaluate the efficiency of an automated tool, an AI tool. Two basic metrics I see are the detection rate and the false positive rate. These are the two which are really important for us: the tool should be able to detect an issue if there is one, and also avoid saying there is an issue if there is none.
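
The two metrics Alain names have standard definitions, which a short sketch can make concrete. Here, ground truth would come from expert manual audits; the numbers are made up for illustration.

```python
# The two metrics Alain mentions, in their standard form. "Ground truth"
# labels would come from expert manual audits; the numbers below are made up.
def detection_rate(true_positives: int, false_negatives: int) -> float:
    """Share of real issues the tool found (also called recall)."""
    return true_positives / (true_positives + false_negatives)

def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """Share of non-issues the tool wrongly flagged."""
    return false_positives / (false_positives + true_negatives)

# Example: a tool checked 100 elements; experts found 40 real issues.
# The tool reported 45 issues, of which 32 were real.
tp, fp = 32, 13            # 45 reported = 32 real + 13 false alarms
fn, tn = 40 - tp, 60 - fp  # 8 missed issues, 47 correctly passed elements
print(f"detection rate:      {detection_rate(tp, fn):.0%}")       # 80%
print(f"false positive rate: {false_positive_rate(fp, tn):.0%}")  # ~22%
```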

So yes, that's it, I would say. More globally, maybe at a more abstract or political level, when introducing new AI tools we should avoid the risk of the disability dongle, a concept introduced by Liz Jackson. It means that from time to time, we encounter AI technologies that have been developed without including people with disabilities, and they don't really answer the needs of people with disabilities. So this should also, to my mind, be included in our requirements.

MATTHEW ATKINSON: Yes. On that point specifically, I am not actually sure if this is in the principles, the Ethical Machine Learning Principles, but one of the things that was discussed around the development of those (and they are still in development; like most W3C things, feedback is welcome) was the idea that when a system makes a decision about a particular user or a constituency of users, those users need to be involved in the design of the system if it's going to be making decisions about them. And that feels to me like a related thing. You mentioned metrics, false positives and detection rates. And Yeliz was talking earlier about the work with Optimal-EM and getting to the stage where it could do unsupervised work. Could you say a bit more about false positives and detection rates that you've come across in research, Yeliz?

YELIZ YESILADA: Do you mean the metrics that are available, or metrics in general for the sampling work? Because for the sampling work with WCAG EM, in our research we realized that we don't really have metrics to decide. For example, WCAG EM says you should explore the site and pick certain pages that represent the functionality. But these are really subjective definitions, because you can pick a page covering a functionality of the website, let's say, but it is very outdated. So does that mean you covered the functionality or not? In our work, we try to come up with metrics that can really assess whether you are doing good sampling or bad sampling. The metrics that we introduced include, for example, coverage. Let's say you pick certain pages; how much of the whole site are you covering? What's the population that you are covering? In fact, we draw similarities with the census work that governments do. If you have a population and you want to do a survey, you need to make sure the survey is done with a sample that is representative and has full coverage of the population. So we are trying to use these kinds of metrics. Besides coverage and representativeness, we also introduced the idea of freshness. If you are going to sample pages, your pages should be fresh pages, pages that people are using. Let me give you an example from the period of COVID-19. In that period, certain pages related to COVID-19 were very critical for the population. If auditors are picking pages, let's say they are auditing a site, but they are not including those pages, they are missing critical, fresh pages that lots of people are visiting. So we also introduced freshness. And we introduced complexity, because when auditors pick pages, they might pick pages that are simple to evaluate and avoid the complex ones. Of course, the question there is what we mean by complexity. Complexity can be technical, it can be visual, so you can have different kinds of definitions. But we think for sampling that should also be a criterion: when you are picking pages for evaluation, you should not pick only pages that are easy to evaluate, let's say, technically; you should really think of the recent technologies that are used, you know, dynamic content. We know that dynamic content is challenging to evaluate. So does the sample include dynamic content? That's another metric we consider.
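
To give a feel for how such sampling metrics could be scored, here is one possible simplified encoding of freshness and coverage. These definitions are my own illustration, not the exact ones from the research Yeliz describes: freshness as the share of sampled pages that appear among recently high-traffic pages (e.g. from server logs), and coverage as the share of the site's page clusters that the sample touches.

```python
# One possible simplified encoding of two sampling metrics (illustrative
# definitions, not the exact ones from the research discussed here).
def freshness(sample: set[str], recent_top_pages: set[str]) -> float:
    """Share of sampled pages that are among recently high-traffic pages."""
    return len(sample & recent_top_pages) / len(sample)

def coverage(sample: set[str], clusters: list[set[str]]) -> float:
    """Share of page clusters (e.g. by template/structure) the sample touches."""
    touched = sum(1 for cluster in clusters if cluster & sample)
    return touched / len(clusters)

sample = {"/", "/covid-19", "/contact"}
recent_top = {"/", "/covid-19", "/news"}  # e.g. derived from server logs
clusters = [{"/", "/news"}, {"/covid-19"}, {"/forms/a", "/forms/b"}]
print(freshness(sample, recent_top))  # 2/3: two sampled pages are high-traffic
print(coverage(sample, clusters))     # 2/3: the forms cluster is not sampled
```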

Based on these metrics, what we try to do in our work is, let's say, generate the population of the site. We also explore how you generate that population. For example, you can crawl the site automatically and find out all the pages, which we know is not always possible and is technically very difficult. Or you can look at the server side logs, which can also be used to generate a population. And we use these metrics to compare different ways of clustering, using machine learning approaches to cluster the pages. You can cluster them based on complexity, based on structural similarity, or based on freshness. And then you can sample from the different clusters to make sure that you are covering a representative sample from a site. Of course, here we are focusing on websites. In the previous session, there was a very nice discussion about what we consider, and what we sample, from a mobile application. That should, of course, also be considered: for example, different screens, different layouts, different pages generated, et cetera. So there are lots of questions that need to be answered from a research perspective.
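
The cluster-then-sample idea Yeliz outlines can be sketched in a few lines. The following is a generic illustration assuming scikit-learn and made-up page feature vectors, not the Optimal-EM implementation: pages are clustered by features such as structural complexity and traffic, and the page closest to each cluster center is chosen as that cluster's representative for the audit sample.

```python
# Generic cluster-then-sample sketch (not the Optimal-EM implementation):
# cluster pages by feature vectors, then audit one page per cluster.
# Assumes the third-party numpy and scikit-learn packages.
import numpy as np
from sklearn.cluster import KMeans

urls = ["/", "/news", "/covid-19", "/forms/a", "/forms/b", "/contact"]
# Made-up features per page: [DOM depth, number of widgets, recent visits]
features = np.array([
    [12, 30, 900], [14, 35, 400], [10, 20, 850],
    [22, 80, 120], [21, 75, 100], [8, 10, 50],
], dtype=float)

# Normalize so no single feature dominates the distance metric.
features = (features - features.mean(axis=0)) / features.std(axis=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(features)

# Sample the page closest to each cluster center as its representative.
sample = []
for c in range(3):
    members = np.where(labels == c)[0]
    dists = np.linalg.norm(features[members] - kmeans.cluster_centers_[c], axis=1)
    sample.append(urls[members[np.argmin(dists)]])
print(sample)  # one representative page per cluster
```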

MATTHEW ATKINSON: Indeed, yeah.

YELIZ YESILADA: I hope I answered your question about the metrics for sampling.

MATTHEW ATKINSON: Yeah, that was very helpful. That was very helpful indeed. From my experience of doing accessibility audits, it is difficult to come up with a good sample. There is both science and art to it. We would often look at the things that a user could do with the site or the application and try to cover as many of those different things as we could within the budget of the size of sample that we had. And we would generally go for the more complicated seeming pages, so that we were making sure we would cover as much as possible. In some ways, it's easier if you have a smaller number of samples to pick, because you can pick stuff that's obviously different. It gets harder with a bigger site and a bigger volume of pages to be audited, because you want to make sure that each one counts for something and isn't just repeating stuff. And machines are very good at spotting patterns. So as I have said before, I would have been interested in having some guidance, even though, as you've discussed in your answer there, it turns out that counting things is one of the hardest problems there is. Just counting how many things we've got is incredibly difficult.

So we actually had a question earlier; I am just trying to see if we've got any additional ones now. Somebody asked earlier whether we actually need accessibility guidelines so much anymore if AI is going to be building the websites. And I had a couple of perhaps not fully formed thoughts on that. Even if AI was building the websites, and even if a different AI was measuring them (and for my part, I don't think that's going to be 100% of the case in future; I think it's a tool), we would still need accessibility guidelines in order to make sure that the site was being built to a particular standard, and that the site passed a particular standard in terms of requirements, so that it would be expected to be accessible. So I think there's still a need for accessibility guidelines. In a way, my answer to that question probably says more about my perspective, which is that we are building these things for people, and that means, to me, that people really are best placed to be involved in making the creative decisions around both the building of it and the creative or subjective decisions in the testing of it. It remains to be seen how effective machine learning can be as a tool, but there are definitely certain things that seem like exciting avenues for exploration. So that's my thought on that particular question. But I'd like to turn it over to either or both of you to see what you think. And apologies for doing this suddenly. Hopefully my rambling has given you time to think about it.

YELIZ YESILADA: Matthew, I just want to add there: I think we need guidelines in one form or another. Because what I also see in applications of AI is that we really need expertise. We need people who have a good understanding of the requirements of disabled people, such that they can encode them into algorithms. You know? When we say "AI," these AI algorithms have to be developed, they have to be put into action, they have to generate the models. And in order to generate models, we need experts who understand the requirements of disabled users. The understanding of those requirements is encoded in the guidelines. Whether you call them guidelines or requirements documents, in one form or another we will need them, because we need people to have a good understanding of what is needed, I think. I didn't mention it at the beginning, but I also see this as one of the challenges for AI advancement. We need people who are good at algorithm development and at applying and generating models, et cetera, but we also need people with a good understanding of accessibility requirements. I think these guidelines, or "requirements documents," are an excellent place for communicating these kinds of requirements so they can be automated or modeled in one form or another.

MATTHEW ATKINSON: Yeah, and to me, this is a continuation of the well known principle that if you want to really find out how accessible your site is, get some people who are facing accessibility barriers to test it. Somebody like me can come along and tell you where the areas of potential risk are and suggest technological solutions; at least I could in my previous role. And that's all very well and good. And I do have some lived experience of disability; I have a vision impairment. But the best people to tell you are the people who are going to be using your site. So it's always, always the best idea to get input from real people using your products and services, as often as you possibly can. So just coming back to that question: do we need accessibility guidelines, Alain?

ALAIN VAGNER: Yes, I think they are really needed. I just wanted to add something that is probably less interesting for most of you, but it's interesting for me: the legal part of it. For us, for all the public sector websites, it's in the law, so the websites should be compliant with the guidelines. If there are no guidelines, we have an issue. We need, somehow, a scale. We need to be able to compare; we need to be able to say whether a website is compliant or not. And this cannot be done without any guidelines. This is also important for business, because, you know, the European directives often have an economic impact, and one of the aspects of the Web Accessibility Directive was also to develop a uniform market for accessibility in Europe. We need these guidelines to have this uniform market.

MATTHEW ATKINSON: Excellent. Thank you very much for that perspective. We do have some questions that have come in. One I briefly wanted to come back to is the general question we got about whether AI could be trained to evaluate accessibility. I think we've all said that there are possibilities here, but there are challenges. One of the things that was mentioned was this Europe wide monitoring exercise. And Alain, you mentioned that, who knows, maybe some of the data from that could be used to train AI. I am just wondering, Alain and then Yeliz, what your thoughts on that are, and then we can go to some of the questions that we've got in the queue.

ALAIN VAGNER: Yeah, I think it should be possible, but the data should probably be of good quality. This is something Yeliz already mentioned, and we didn't think about it when we produced our reports. So for now, maybe we should also discuss with AI specialists who could tell us what they need as input to be able to train their models. But I think there are some opportunities, and there are also some kinds of pretrained models. I don't know if this totally answers your question, but, for example, we have lots of problems, as I said, linked to languages, and there are some pretrained language models that could help us a lot regarding our mixed language issues in the pages. So I think the models are already there, more or less. Maybe we need to refine them for some of the languages we use here that unfortunately may not be globally supported. Yeah, for the rest, I would say that's it for me. Thank you.

MATTHEW ATKINSON: Okay. Any additional thoughts on that, Yeliz?

YELIZ YESILADA: I just want to say, as I said at the beginning, AI is not a silver bullet. It's not going to solve the full problem, in my opinion, in the near future; we need a lot of development in the area. But of course, there are certain areas, which we already mentioned, where it can really help, so I don't need to repeat them. There are certain things where AI models can be generated to help out with the full process of evaluation, I think. Matthew, I hope I answered.

MATTHEW ATKINSON: Super from my perspective. So there's one question here that I feel like I can answer just myself, although I will invite you guys to chime in. And it's a good question, as they all are. Are there any accessibility guidelines or standards that we can use to evaluate AI or machine learning interfaces such as chat bots or ChatGPT?

From my perspective, the answer is yes: it's WCAG, the existing standards. These interfaces are presented via the Web, and so you can apply WCAG to them. Slightly more specifically, and on a little bit of a tangent, there is a growing resurgence of command line user interfaces, especially for developer tooling. For command line interfaces that actually operate on a machine natively, you can't apply the whole of WCAG, but there is work at W3C that tells you which bits you can apply. Just as we've talked about in other areas, WCAG is being applied in a range of different areas. Whilst these chat bot interfaces might look very conversational, and almost like a command line interface in some ways, they very much are, to my knowledge at least, being presented as Web apps, and, therefore, I would say that WCAG is the best set of guidelines for that. If either of you have any additions to that, or differences of opinion on it, I'll just give you a couple of seconds to say so, and then I will try and pick one of these other questions because we've got a few.

YELIZ YESILADA: I agree with you, Matthew, so I have nothing to add.

ALAIN VAGNER: Same here.

MATTHEW ATKINSON: Okay. So I see a couple of questions along the lines of: could we say that AI will never be able to complete the last kilometer on its own whilst doing accessibility testing or remediation? I think at the moment we are all saying pretty much that. We've talked about that a little bit, but it's a nice way of phrasing it. There's a question here that says: we know that AI is helping with meeting and perhaps even evaluating compliance. Do we know of any examples where AI has broken things that were working well? Would either of you like to talk about that?

YELIZ YESILADA: I can add there. Sorry, I jumped in, I think. I just wanted to say that, of course, in AI algorithms, accuracy is very important. If the accuracy of the models is not high, they will not be able to handle certain things, and they will make wrong decisions. So we can see that they can actually break things. We see in caption generation or alternative text generation that at certain times, for example, the models are not able to generate automated captions or automated alternative text properly. That's just what I wanted to say.

ALAIN VAGNER: I have maybe another example in the same vein, the same idea. I have recently been doing lots of tests of some tools for PDF remediation. We have lots of problems on the public sector websites: there are lots of PDF documents available on the websites, and they are totally not accessible. We have done some statistics, and on the 20 biggest websites in Luxembourg, approximately 60% of the PDF documents are not accessible, which is really big. And on this, some of the organizations asked us: we have tons of PDFs, how will we be able to remediate them? There are some AI tools, so the organizations were testing them, and we also tested them. We have seen that AI is mainly involved in the auto tagging. Tags are, in fact, metadata in PDF documents that are used to express the structure of the document for assistive technologies, in particular for blind people, for example. And this auto tagging using AI is a bit better than auto tagging based on heuristics, but it's still not there. I have seen some companies announcing that their AI is more performant than manual tagging, but from my experience, it's not the case. I would be interested in seeing independent tests on this; that would be really helpful to see to what degree these tools are able to automatically tag documents.

Among the issues we have seen, there were some of the complex problems you mentioned before, like the detection of headings in tables, et cetera, and the detection of artifacts: what is decoration and what is not. Also false reading of complex layouts: when you have complex layouts on pages, the reading is often not really good. And the detection of some patterns. In documents, you have some special patterns, for example a table of contents. The table of contents is a special kind of pattern, and it should be detected by the AI. So these were, a little bit, the points where the one or two AIs I have tested were not able to detect everything. But I think there is some room for improvement there, of course.
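
As a first-pass illustration of the kind of bulk statistics Alain describes, checking whether PDFs are tagged at all can be automated; whether the tags are correct still needs a human, as he notes. Below is a minimal sketch assuming the third-party pikepdf library, with a hypothetical "downloads" directory standing in for a collection of harvested documents.

```python
# Sketch of a bulk first-pass check: is a PDF tagged at all? Whether the
# tags are *correct* still requires manual review, as discussed above.
# Assumes the third-party pikepdf library; "downloads" is a hypothetical
# directory of harvested PDF documents.
from pathlib import Path
import pikepdf

def is_tagged(path: Path) -> bool:
    """Conservative check: structure tree present and document marked tagged."""
    with pikepdf.open(path) as pdf:
        root = pdf.Root
        has_struct_tree = "/StructTreeRoot" in root
        marked = "/MarkInfo" in root and bool(root.MarkInfo.get("/Marked", False))
        return has_struct_tree and marked

pdfs = list(Path("downloads").glob("*.pdf"))
tagged = sum(is_tagged(p) for p in pdfs)
if pdfs:
    print(f"{tagged}/{len(pdfs)} PDFs are tagged ({tagged / len(pdfs):.0%})")
```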

MATTHEW ATKINSON: Okay. We've got three minutes left. I have seen one other question, which I wish I had seen earlier, but I will do my best to set out some general ideas about it. This is a complicated subject, so wish me luck. Somebody's asking: could knowledge of overlay tools be usefully used for technical monitoring?

And I think it's important to introduce people to the concept of an overlay. At its most basic level: imagine your organization has a site, and an overlay is effectively third party code that you import and run on the client side, in the browser. Its goal is to detect accessibility problems and to fix them. As you can see from our discussion, our view is that there is potential for some things to be done with machine learning, and there's still a long way to go with a lot of other things. So there are differences of opinion in the industry as to the efficacy of these tools. But as you have seen from our discussion, there's openness to exploring things. If overlays were run on many sites and had the opportunity to see many sites, I think the question is: can that add up to some useful monitoring experience for us? I am not sure that there would be enough verified data to start forming a model from that. But very quickly, I am just wondering if either of you have a few thoughts on that. I think it's going to have to go to just one of you, because we've only got a minute left, so I apologize for that. But if either of you have any extra thoughts to add to mine, please do.

ALAIN VAGNER: It's a good question. It's difficult to say. From our experience, these tools can be interesting for some fixes, but we should not rely only on them. It could be, for example, a first step: we have done something on our website, we have included a bit, but it is not the end of the road. There is still some stuff that should be done on the side of the authors, on the technical side of the website. You cannot, as we have said, automatically detect all the accessibility issues, and if you cannot detect them, then you cannot fix them. So there is always still room for manual testing, manual evaluation, and, yeah, improvements of the accessibility of websites.

YELIZ YESILADA: I agree with Alain. Matthew, I just wanted to add, and I think we already mentioned it: as with the AI algorithms, I think we have to approach these overlays carefully, especially regarding their accuracy, reliability, and transparency. So rather than aiming to replace the evaluator or the author and to fix the problem, we can actually use them in, like, a supportive role, making sure they are checked afterwards, whether they are doing the right thing or not, based on the reliability, accuracy, and transparency, as I mentioned.

MATTHEW ATKINSON: Cool. We have to wrap it up there, and I apologize for us going over by a minute. I will just say again thank you to everyone, and especially Alain and Yeliz. And if you want to discuss issues like the last question in particular, I think the Accessibility at the Edge Community Group would be a good place to do it, because we are trying to get consensus in the industry on issues just like this. And also please check out the Ethical Machine Learning Principles. Thank you very much. Back over to Carlos.

CARLOS DUARTE: Thank you so much, Matthew, Yeliz, and Alain for another wonderful panel. I think we have been really lucky with the three panels today. It was an excellent experience, with three different perspectives on three different topics on how to increase the digital accessibility of resources.

So once again, just to conclude, many thanks to all our participants today: Jade, Sarah, and Audrey; Detlev, Andre, and Paul; and Matthew, Yeliz, and Alain. It was a privilege to attend this session, and many thanks also to the interpreters and the captioner for their excellent work. As I said at the beginning, this is the final symposium of the WAI-CooP Project, but hopefully this trend will not end here, and W3C can pick up the organization of these symposia and continue them in the future. So thank you so much to all panelists, and thank you so much to all attendees. Looking forward to future developments in Web accessibility. Bye bye.

Organizing Committee

Symposium Chairs

