When Clarity Isn’t Enough: Rethinking AI’s Role in Cognitive Accessibility for Expert Domains
The promise of artificial intelligence in accessibility work is often framed in hopeful terms. Large language models (LLMs) like GPT-4 are increasingly described as potential allies in reducing barriers for individuals with cognitive disabilities—particularly through text simplification. But in high-stakes, concept-dense fields like finance, simplification does not always equal understanding. My recent research highlights a critical limitation: when AI makes technical language easier to read, it sometimes distorts what that language actually means.
This tension became visible in a study where I tested GPT-4’s ability to simplify peer reviewer comments from academic finance. These comments were modeled on real-world feedback that a researcher might receive during the publication process—often dense, jargon-heavy, and methodologically complex. Each comment was processed with two prompts: a general simplification request, and a second framed explicitly for authors with cognitive processing disabilities such as dyslexia or working memory difficulties.
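To make the setup concrete, here is a minimal sketch of what such a two-prompt pipeline might look like in Python, assuming the OpenAI SDK; the prompt wording, the model parameter, and the example comment are illustrative assumptions, not the study’s actual materials.

```python
# A minimal sketch of the two-prompt setup, assuming the openai Python SDK (v1+)
# and an OPENAI_API_KEY in the environment. Prompt wording and the example
# comment are illustrative, not the study's actual materials.
from openai import OpenAI

client = OpenAI()

GENERAL_PROMPT = (
    "Simplify the following peer review comment so it is easier to read, "
    "while keeping its meaning intact:\n\n{comment}"
)

ACCESSIBILITY_PROMPT = (
    "Rewrite the following peer review comment for an author with a cognitive "
    "processing disability such as dyslexia or working memory difficulties. "
    "Use short sentences and plain wording, but preserve every methodological "
    "point:\n\n{comment}"
)

def simplify(comment: str, template: str, model: str = "gpt-4") -> str:
    """Return one simplified version of a reviewer comment."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": template.format(comment=comment)}],
    )
    return response.choices[0].message.content

reviewer_comment = (
    "The identification strategy does not address endogeneity; consider a "
    "difference-in-differences design around the policy announcement."
)
general_version = simplify(reviewer_comment, GENERAL_PROMPT)
accessible_version = simplify(reviewer_comment, ACCESSIBILITY_PROMPT)
```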
While the outputs did show improved readability—shorter sentences, clearer syntax, and softer tone—many simplifications introduced subtle but serious misrepresentations of meaning. In social science research, particularly in economics and finance, language is not merely descriptive; it encodes methodological choices, theoretical positions, and epistemic caution. When these are lost in translation, the damage is not just semantic—it’s interpretive.
Take the concept of causal inference. One reviewer had suggested using a difference-in-differences approach to better isolate the timing and magnitude of market reactions to policy announcements. GPT-4’s simplification rephrased this as “seeing how fast and how strongly the market reacts.” While superficially accurate, this version erased the causal logic of the suggestion and reduced a complex statistical technique to a vague observational insight.
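For readers less familiar with the technique, the sketch below shows the structure the simplification erased: a difference-in-differences design compares the change in a treated group’s outcomes with the change in an untreated group’s, and reads the gap between those changes as the causal effect. The data, variable names, and effect size are made up for illustration.

```python
# Difference-in-differences on synthetic data (all names and numbers are
# illustrative). The coefficient on treated:post is the DiD estimate: how much
# more treated outcomes changed after the announcement than control outcomes did.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # exposed to the policy or not
    "post": rng.integers(0, 2, n),     # observed before or after the announcement
})
# Outcome: group difference + common time trend + a true treatment effect of 2.0
df["returns"] = (
    1.0 * df["treated"] + 0.5 * df["post"]
    + 2.0 * df["treated"] * df["post"]
    + rng.normal(0, 1, n)
)

model = smf.ols("returns ~ treated * post", data=df).fit()
# Approximately 2.0: the causal effect, not merely "how strongly the market reacts"
print(model.params["treated:post"])
```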
In another instance, the technical term endogeneity—central to many econometric critiques—was reworded as “hidden effects.” This simplification may seem helpful at first glance, but it grossly underplays the seriousness of endogeneity, which arises from problems such as simultaneity or omitted variables and biases the very estimates a paper’s conclusions rest on. Misunderstanding this could lead a novice researcher to dismiss the reviewer’s concern or respond with an irrelevant fix.
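A small simulation makes the stakes visible: when an omitted variable drives both the regressor and the outcome, ordinary least squares does not just pick up “hidden effects”; it systematically misestimates the coefficient of interest, and more data does not fix it. The numbers below are purely illustrative.

```python
# Toy simulation of omitted-variable bias, one common source of endogeneity.
# The "hidden" confounder z does not merely add noise: leaving it out biases
# the estimated effect of x, and a larger sample will not correct it.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 10_000
z = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + rng.normal(size=n)             # regressor correlated with z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)   # true effect of x is 1.0

naive = sm.OLS(y, sm.add_constant(x)).fit()
full = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()

print(naive.params[1])  # about 1.98: badly inflated by the omitted confounder
print(full.params[1])   # about 1.0: recovered once z is included
```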
Even foundational behavioral concepts were misrepresented. The idea of bounded rationality, which refers to decision-making under constraints of time, information, and processing ability, was simplified in one version to “limited thinking ability.” Not only is this inaccurate, but it risks being unintentionally condescending—ironic, given that the prompt was aimed at supporting users with cognitive disabilities.
Across the 40 GPT-4 outputs in this study, the pattern was consistent: clarity improved, but fidelity often suffered. Importantly, the accessibility-specific prompt did not reliably yield better or more accurate simplifications than the general one. In fact, the model’s responses varied substantially across runs, even with the same input. A reviewer’s request for subgroup analysis within a dataset, for instance, was sometimes interpreted as a suggestion to add data from “different regions,” which shifted the intent from robustness testing to mere data expansion.
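This is also why readability metrics alone are a misleading yardstick: an automated score happily rewards a simplification that has dropped the causal and econometric content. The toy check below uses the textstat library for readability and a crude key-term count as a stand-in for fidelity; both are illustrative choices, not the evaluation method used in the study.

```python
# Toy contrast between a readability score and a crude fidelity proxy, using
# the textstat package. Key-term matching is only a stand-in for expert
# judgement; the texts below are invented for illustration.
import textstat

original = (
    "Consider a difference-in-differences design to isolate the causal effect "
    "of the policy announcement and to address endogeneity from omitted variables."
)
simplified = "Look at how fast and how strongly the market reacts to the news."

key_terms = ["difference-in-differences", "causal", "endogeneity", "omitted"]

for label, text in [("original", original), ("simplified", simplified)]:
    ease = textstat.flesch_reading_ease(text)  # higher means easier to read
    kept = sum(term in text.lower() for term in key_terms)
    print(f"{label}: reading ease {ease:.0f}, {kept}/{len(key_terms)} key terms kept")
```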
These findings matter deeply for social scientists because they challenge the assumption that AI can be dropped into accessibility workflows without domain-sensitive guardrails. If simplification is to serve inclusion, it must be both understandable and faithful to the original intent. Otherwise, we risk creating an illusion of accessibility—one that hides, rather than removes, cognitive barriers.
Social science disciplines have long understood the importance of communication as a form of epistemic justice. Making research more accessible—especially to those with disabilities—is not just about widening access; it’s about redistributing interpretive power. But power requires precision. Without it, we are not leveling the field—we’re muddying it.
This research suggests a need for more nuanced collaboration between AI developers, social scientists, accessibility advocates, and users with lived experience of cognitive disability. Together, we can build systems that don’t just simplify language, but also safeguard meaning. Otherwise, we risk using AI to open the door to inclusion—only to let distortion walk through first.