Why workers with ADHD, autism, and dyslexia should use AI agents

Research suggests that neurodivergent professionals may see unique benefits from AI tools and agents. As adoption of AI agents ramps up in 2025, people with conditions such as ADHD, autism, and dyslexia are reporting a more level playing field in the workplace thanks to generative AI.
A study from the UK Department for Business and Trade found that neurodiverse employees were 25% more satisfied with AI assistants and were more likely to recommend the tool than neurotypical participants.
“Getting up and walking around during the meeting means I’m not taking notes, but now AI can come in and synthesize the entire meeting into a transcript and pick out high-level themes,” said Tara DeZao, senior director of product marketing at enterprise low-code platform provider Pega. DeZao, who was diagnosed with ADHD in adulthood, has combined-type ADHD, which includes both inattentive symptoms (time management and executive function problems) and hyperactive symptoms (increased movement).
“I had a hard time finding my way in the business world,” DeZao said. “But these tools are very helpful.”
AI tools in the workplace run the gamut and can have highly specific use cases, but solutions such as note-taking, scheduling assistants, and internal communications support are common. Generative AI is particularly adept at skills like communication, time management, and executive functions, creating a built-in benefit for neurodiverse workers who have had to find ways to adapt to a work culture that was previously created without them in mind.
Because of the skills such as hyperfocus, creativity, empathy, and niche expertise that neurodiverse individuals can bring to the workplace, some research suggests that organizations prioritizing inclusion in this area generate almost a fifth more revenue.
AI ethics and neurodivergent workers
“Investing in ethical guardrails that protect and assist neurodivergent workers is not just the right thing to do,” said Kristi Boyd, an AI expert with the SAS data ethics practice. “It’s a smart way to leverage your organization’s AI investments.”
Boyd cited SAS research finding that companies that invested the most in AI governance and guardrails were 1.6 times more likely to see at least double the ROI on their AI investments. But she highlighted three risks that companies should be aware of when deploying AI tools with neurodivergent and other individuals in mind: competing needs, unconscious bias, and inappropriate disclosure.
“Different neurodiverse conditions may have conflicting needs,” Boyd said. For example, people with dyslexia can benefit from document readers, while people with bipolar disorder or other mental health conditions can benefit from AI-powered scheduling to make the most of productive periods. “Organizations can create tiered arrangements by acknowledging these tensions in advance, or offer choice-based frameworks that balance competing needs while promoting equity and inclusion,” she explained.
Regarding AI’s unconscious biases, algorithms can be (and have been) trained to unintentionally associate neurodivergence with danger, disease, or negativity, as Duke University research has shown. Even today, neurodivergent people can still face discrimination in the workplace, which makes it important for companies to provide safe ways to use these tools without requiring workers to disclose their diagnoses.
‘It’s like someone turned on the light’
As businesses take responsibility for the impact of AI tools in the workplace, Boyd says it’s important to include diverse opinions at all stages, implement regular audits, and create safe ways for employees to report issues anonymously.
Work to make the deployment of AI more equitable, including for neurodivergent people, is just beginning. Humane Intelligence, a nonprofit focused on using artificial intelligence for social good, launched the Bias Prize Competition in early October, in which participants can identify biases with the goal of creating “more inclusive communication platforms, especially for users with cognitive differences, sensory sensitivities, or alternative communication styles.”
For example, emotion AI (when artificial intelligence identifies human emotions) can help people who have difficulty identifying emotions understand their meeting partners on video conferencing platforms such as Zoom. Yet this technology requires attention to bias by ensuring that AI agents fairly and accurately recognize various communication patterns rather than embedding harmful assumptions.
DeZao said her ADHD diagnosis felt “like someone turned on the light in a very dark room.”
“One of the hardest parts of our hyperconnected, fast-paced world is that we are all expected to multitask. With my form of ADHD, multitasking is nearly impossible,” she said.
DeZao says one of the most useful features of AI is its ability to take instructions and carry out a task while the human worker stays focused on the job at hand. “If I’m working on something and a new request comes through Slack or Teams, it completely takes me out of my thought process,” she said. “I can take that request and very quickly outsource it and have it work on it while I continue to work [on my original task]. It’s been a blessing.”




