Oversights and Opportunities 

AI is here, and educators and educational systems are tasked with an enormous goal: readying their students for an AI-shaped world.

There are courses, frameworks, books, articles, professional development – all designed to give educators (and students) the roadmap to what every school system will eventually need: a structured approach to teaching students how to think about, use, and live alongside artificial intelligence. But there’s a critical oversight.  

All of these resources focus on what students should know. No one is talking about whether students are cognitively ready to know it. 

Why “Reflect” and “Synthesize” Aren’t Just Skills—They’re Signs of Cognitive Readiness 

Across many of the emerging frameworks are recurring calls for students to analyze, evaluate, adapt, synthesize, and reflect. These are critical aspirations. They represent the kinds of deeper thinking and judgment learners ideally bring to AI-supported work. They are precisely what is needed in an age of intelligent systems, where discernment, flexibility, and depth of thought are paramount. 

But what these frameworks often fail to account for is how these abilities actually arise. 

They’re described as skills. Sometimes as competencies. Occasionally seen as qualities of character. But in reality, each is the visible expression of hidden cognitive processes occurring in the brain.

For example, the ability to synthesize ideas—a core AI literacy expectation—requires much more than exposure to multiple sources. It demands a capacity to hold several concepts in mind, to perceive patterns and contradictions, and to resolve complexity into a coherent understanding. These operations depend heavily on Symbol Relations, a cognitive function that enables the brain to process the relationships between concepts quickly and fluidly. Consider how students rely on this function while engaging with AI:

  • Reflect requires self-monitoring, abstract reasoning, and working memory 
  • Evaluate demands attention control, flexible thinking, and conceptual depth 
  • Adapt calls upon mental agility, error detection, and cognitive updating 
  • Resist manipulation relies on internal logic, impulse regulation, and sense-making 

A student with secure Symbol Relations capacity can follow abstract reasoning, move easily between ideas, and make meaningful connections. A student with challenges in this function may struggle to grasp nuance, misinterpret cause and effect, or default to black-and-white thinking—all while appearing to follow instructions. 

It’s a critical distinction in the AI literacy conversation. In fact, it’s critical in any conversation about learning, equity, neurodiversity, educational reform, and future-readying students. Skills are visible outputs we aim to teach and assess. Cognitive capacity refers to the brain-based processes that make skills possible: attention, memory, reasoning, self-monitoring. 

Skills like effective researching and source comparison can be taught, practiced, and improved over time. But whether a student can absorb, process, and apply those skills depends on their cognitive capacity. It’s the brain’s underlying networks—attention, memory, reasoning—that make skill development possible in the first place. 

Again, this is not just about AI literacy. Lifelong skills, commonly called “21st Century” skills, such as communication and collaboration, can be reinforced through experience, but their depth and durability are determined by the brain.

Cognitive capacity is not just adjacent to AI literacy and lifelong learning; it is their foundation.

The Role of Cognitive Diversity 

This brings us to another central yet underacknowledged reality: cognitive diversity. 

Every classroom contains students with different cognitive profiles—variations in processing speed, understanding, memory. These differences are not always visible, nor are they fully captured by test scores or classroom performance. But they shape everything: how students engage with content, how they respond to AI, and how effectively they can meet the demands of frameworks like the one now being proposed. 

Two students may be asked to reflect on a generative AI output. One produces a thoughtful critique, integrating prior knowledge and multiple perspectives. The other gives a surface-level response, vague and repetitive. This isn’t necessarily a difference in motivation or effort. It’s often a difference in the mental processes available to them. 

And because these foundational capacities aren’t guaranteed by age, exposure, or curriculum, they cannot be assumed.

This is the critical gap in today’s AI literacy frameworks. 

They outline what students should do, but they don’t account for whether students have the brain-based capacities required to do it. 

Until we bring cognitive readiness into the center of the conversation, we risk building AI literacy on a foundation that is uneven at best—and absent for many learners. 

What AI Literacy Really Requires 

Being AI-literate is more than just knowing how to use tools or reciting responsible use policies. True AI literacy calls for the kind of deep thinking that allows students to navigate complexity.  

Students must be able to hold multiple ideas in mind, weighing them against one another without becoming overwhelmed. They need to evaluate information critically, distinguishing fact from fiction, relevance from distraction. As AI continues to evolve, learners will be challenged to adapt quickly and flexibly, applying their knowledge in unfamiliar contexts. They must also be able to detect bias, resist manipulation, and monitor their own thinking—skills essential not only for academic success but for civic and personal integrity. 

Without these foundational abilities, students are left with a surface-level relationship to AI—one that may look competent on the outside but lacks the cognitive depth to ensure real understanding. The result? Superficial learning and widening inequities in classrooms where some students are cognitively prepared for this new landscape and others are not. 


A Strategic Risk for Schools 

When schools adopt AI tools and curricula without first developing students’ underlying cognitive functions, they take on a significant strategic risk. 

Students may become proficient in operating AI platforms, but the deeper cognitive capacities required to analyze, question, and synthesize information remain underdeveloped. Learning becomes shallow, centered on performance rather than understanding. 

Over time, students may also develop an overreliance on AI, leaning on it for ideas, structure, and even memory—at the expense of building their own internal reasoning systems. What begins as a useful tool quietly becomes a cognitive substitute. 

Perhaps most concerning is the risk of widening equity gaps. Students who already struggle with cognitive tasks—such as memory, attention, or reasoning—are least likely to benefit from AI as a thinking partner. Without intervention, they fall further behind, not because of access to technology, but because they lack the brain-based infrastructure to use it well. 

In this way, AI doesn’t just amplify existing disparities—it risks hardwiring them into the future of education. 

Evidence from MIT: The Cognitive Cost of AI Reliance 

A study from the MIT Media Lab in June 2025 received a lot of attention—and gave stark evidence of this risk. In the experiment, university-aged participants were split into three groups: “Brain-only,” web search (Google), and ChatGPT use. Researchers monitored neural activity via EEG during essay-writing tasks and found that those who relied on ChatGPT showed reduced neural activity: their brains literally exhibited significantly less engagement during the writing process compared to those who wrote without AI assistance. These subjects also struggled to remember what they had written (pointing to weaker internalization of the material), and they produced essays that were more generic and less reflective of their own voice.

Researchers term this a form of cognitive debt—where reliance on AI depletes the brain’s ability to encode, retrieve, and integrate knowledge, carrying long-term consequences.  

Of course, it's tempting to use this study as a complete indictment of AI in education, but the study's deeper implication is contextual: it reveals the cognitive risks when AI tools are used without effort or competence. 

The findings highlight the urgent need to address not merely tool design, but user readiness. 

Why This Matters for Schools 

Studies like this suggest that when students begin their learning with AI-generated responses—rather than engaging their cognitive faculties—they will: 

  • Bypass attention and effort: AI does the thinking for them. 
  • Fail to internalize knowledge: Without struggle, there is no deep processing or integration. 
  • Develop dependence: Over time, cognitive skills atrophy, and students may lose confidence in their own thinking. 

In effect, AI replaces thinking, rather than amplifies it. 

And of course there’s a broader question: what happens when a generation of learners grows up outsourcing not just information, but thinking, judgment, and reflection? Does gradually weakening the cognitive foundations that support individual growth ultimately affect our ability to sustain an informed, thinking society?

A Strategic Path Forward 

What current AI literacy frameworks overlook is the foundational role of cognitive readiness—the brain’s capacity to actively engage with complex thinking before AI tools are introduced. Neuroscience research makes clear that meaningful learning depends on this active mental effort. When cognitive engagement is missing, the brain’s ability to encode, integrate, and retain knowledge is compromised. 

Educators and policymakers must rethink how AI is integrated into learning. The core principle is straightforward but powerful: 

Students must first engage their own thinking—generating ideas, analyzing information, and organizing arguments—before layering in AI as a tool to support and extend their work. 

Let's go back to the MIT study for context:  

The group of students who started their essay writing independently and then used AI maintained a high level of brain activity during their initial writing (indicating strong cognitive engagement), and their recall of the material was better than that of the group that began with AI (suggesting more solidified learning and understanding).

In other words, treat AI as a digital scaffold, not a shortcut. Preserve and strengthen the mental effort foundational to learning.

Looking ahead 

AI literacy frameworks and curricula are still very much a work in progress for most schools and systems. And yet AI is here, being used in real time. This means students are already losing the opportunity to build a secure cognitive profile, and those who struggle now may see the gap widen precisely when AI is reshaping the contours of education, employment, and civic life. These risks won’t merely linger; without intervention, they will intensify, and struggling students may remain unsupported for years.

Identifying what is absent from current AI literacy frameworks is critical. It requires a deeper examination of the role cognitive capacity plays in student learning. 

Developing a clear understanding of this relationship is crucial for shaping educational approaches that truly prepare students for the demands they will face. 

Post by Tara Bonner
July 16, 2025
Tara Bonner collaborates with professionals and educators worldwide, envisioning the convergence of learning and neuroscience. Tara has witnessed that cognitive programming can be a transformative force not just for struggling learners, but for all seeking to experience learning with ease and joy. She's honored to be part of these discussions and an organization that's revolutionizing education by putting the "Brain in Education."