LLMs and Accessibility: A Quiet Revolution

How large language models are transforming communication and usability for people with dyslexia and other neurodivergent users.

Most conversations about AI focus on big shifts—automation, disruption, transformation. But its quieter, more personal effects are just as important. Especially when it comes to accessibility.

For many people with dyslexia, written communication can be a constant negotiation—what they want to say comes easily, but getting it down in a structured, conventional way is much harder. One common approach is to draft thoughts quickly, then pass them through AI tools to shape them into clearer, more polished language. The result still reflects their voice—just with fewer barriers in the way.

Among younger users in particular, there’s often a patchwork approach to digital accessibility: using voice notes instead of typing, relying on talk-to-type features, defaulting to video over text, or enabling narration tools on consoles and devices. These adaptations work well in familiar environments, but can quickly fall short when interfaces change or unexpected issues arise. What many of these users need is consistency and context awareness—something that traditional tools struggle to provide.

What’s changing now is that conversational AI is beginning to tie these fragmented tools together. Not perfectly, but enough to matter. Users can describe problems in their own words—halting, misspelled, unordered—and get responses that are intelligible, relevant, and human. There’s no need to decipher jargon, no guessing at the right search term. It doesn’t always work. But for the first time, many have access to a single interface that feels like it speaks their language.

This isn’t just anecdotal. Researchers are seeing similar results. One tool, LaMPost, helped adults with dyslexia by giving smart suggestions for subject lines and rewriting awkward phrasing. Another tool, LARF, simplified complex text in real time, improving reading speed and comprehension.

Professor Amanda Kirby has described LLMs as “digital allies” for neurodivergent people—helping with grammar, structure, and translating voice into written text without judgment or friction.

Most accessibility tools today are scattered—speech here, typing tools there, plugins and workarounds everywhere. And none of them talk to each other. But LLMs bring it all together in one place. They’re not built specifically for accessibility, but they act like they are, because they seem to understand what you’re asking and respond the way a person would.

That said, we’re still missing something. There’s no single, AI-driven tool that unifies all these features properly across apps, systems, and interfaces—especially for users who rely entirely on touchscreens. It’s all still a bit fragmented. But the potential? It’s enormous.

This isn’t just about convenience. It’s about dignity. About helping people say what they mean without being tripped up by systems that weren’t designed for them. And about giving them tools that respond to their needs—naturally, conversationally, and reliably.

Hire Me 

Want to build what’s next? I help teams and individuals explore the inclusive use of AI in real-world settings—and I bring experience managing and mentoring neurodivergent team members.

Let's talk.

07771 535 355
[email protected]
