A comprehensive study from the Center for Universal Education at the Brookings Institution, A New Direction for Students in an AI World: Prosper, Prepare, Protect, outlines the potential risks of integrating generative AI into an educational setting. The report details these risks in a section subtitled “AI Can Diminish Learning if Overused With No Guardrails.” The word can stands out here: the report doesn’t say AI will diminish learning; it says AI adoption can diminish learning if it is overused or implemented with no guardrails.
This word choice aligns with the report’s overall message: while the current risks of AI adoption may outweigh its potential benefits, there is still time to shape that adoption so the risks are minimized and AI has an overall positive impact on education.
Six Risks of Generative AI in Education
Risk 1: “AI can undermine students’ cognitive development.”
This is the most frequent concern raised in the study: participants worry that students will too often use AI to take the easy way out and bypass the challenges of critical thinking, behavior that risks stunting cognitive development.
Many of the Brookings study participants said they were concerned that routine use and overuse of AI may not only harm development but actually lead to cognitive “decline,” much as muscles atrophy from lack of use. The worry is that students will become dependent on AI to think for them, “offloading” more and more of the hard cognitive work to a chatbot.
The study acknowledges that adults have seen tremendous productivity gains from using AI. However, the authors point out that these adults are usually using AI to speed up tasks at which they are already proficient. In other words, they are using AI to supplement, not replace, their critical thinking.
The danger for students is that they have not yet developed these core skills. Rather than leaning on AI for help, they need to first experience productive struggle in order to grow their critical thinking and problem-solving skills.
In some ways, the United States’ education system has contributed to this problem by equating grades with completing assignments and checking boxes. Instead of striving to learn, students are often working through a list of required tasks so they can move on.
In this sense, education has become more transactional than exploratory. In many cases, grades have become the focus over learning. Since AI is really good at checking off boxes and getting good grades, students are increasingly tempted to use it for those purposes.
The authors write, “In a world where AI is always available, motivation and engagement will be the defining factors separating students who think deeply from those who use AI to shortcut their development.” If students aren’t motivated to learn, AI becomes the easy button. If AI is here to stay, the challenge becomes: How do we motivate and engage students so that they want to do the challenging work and resist the temptation to offload their thinking?
The Brookings report indicates that task completion and compliance-based work are currently winning out over motivating and engaging learning experiences. The result is a decline in students’ command of basic facts and in their deeper conceptual understanding of course content.
This trend has also been shown to harm core cross-curricular skills like reading comprehension and writing. For example, when students use AI to simplify long texts or to generate a summary or action plan, they offload the hard work, and their complex thinking skills begin to diminish.
Risk 2: “AI can impede students’ social and emotional development.”
The authors write, “Study participants worry that AI is undermining students’ emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health.” Too often, students rely on AI chatbots to be their friends or emotional support companions. While this can happen with general-purpose chatbots like ChatGPT or Gemini, it is even more common with chatbots designed to foster connection, like Character.AI.
One panelist from the study stated, “AI systems, especially conversational ones, are built to satisfy: to mirror our tone, reinforce our views, and simulate empathy. They create an illusion of connection that is difficult to distinguish from genuine rapport.” The report adds that “This is a particular concern for students who are lonely or who are experiencing mental health or emotional issues.” The connection with AI chatbots becomes even more alluring because the AI is always available and often comes across as more empathetic than another human.
This trend raises valid concerns:
- Will students learn to interact with others based on their chatbot conversations rather than on human interactions?
- With more and more people suffering from loneliness, will people turn to chatbots for companionship because they are always available and feel less risky than real human interaction?
Indications are that this type of interaction generally happens outside of school, but study participants still raised it as a significant concern.
Risk 3: “AI can degrade trust in education.”
The report outlines this breakdown of trust on several levels. The first, and potentially most important, is the teacher-student relationship. Students must have confidence in their teachers’ skills and competencies, and teachers must trust that students are doing their own work rather than relying on AI to do it for them.
In general, students surveyed for the report felt it was fine for teachers to use AI for planning lessons, but they did not like receiving AI-generated feedback and grades, saying that automated responses made them feel they weren’t worth the time it would take the teacher to provide personalized feedback.
Perhaps the biggest danger here is the cat-and-mouse game of trying to catch students cheating. When teachers are constantly trying to determine whether students used AI, a climate of suspicion takes hold: teachers assume students are cheating, and students grow defensive, even when they have done nothing wrong.
Most harmful of all is when students are wrongly accused of cheating with AI. A reliance on AI detection tools is particularly problematic here because these programs regularly produce false positives.
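To see why detector false positives are so corrosive, it helps to run the numbers. The short Python sketch below is a back-of-the-envelope illustration with hypothetical rates; none of these figures come from the Brookings report. It shows how, when most students are honest, even a modest false positive rate means a large share of accusations land on innocent students.

```python
# A minimal sketch of the base-rate problem with AI detectors.
# All numbers are hypothetical assumptions, not figures from the report.

def flag_breakdown(total, cheat_rate, sensitivity, false_positive_rate):
    """Return (correct flags, wrongful flags) for a batch of essays."""
    cheaters = total * cheat_rate
    honest = total - cheaters
    true_flags = cheaters * sensitivity           # AI-written essays correctly caught
    false_flags = honest * false_positive_rate    # honest essays wrongly flagged
    return true_flags, false_flags

# 150 essays, 10% actually AI-written, a detector that catches 90% of them
# but also misfires on 5% of honest work:
tp, fp = flag_breakdown(150, 0.10, 0.90, 0.05)
print(f"Correct flags: {tp:.1f}, wrongful flags: {fp:.1f}")
print(f"Share of accusations that are wrong: {fp / (tp + fp):.0%}")
```

Under these assumed rates, roughly one accusation in three points at an honest student, which is exactly the dynamic of suspicion and defensiveness described above.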
Another area of trust degradation involves the truth and accuracy of information. AI chatbots hallucinate, confidently producing incorrect outputs. Even when people are aware of this, many still do not verify or cross-reference the outputs they receive. Ironically, there is also evidence that people trust an AI’s output more than an answer they might get from another human.
A loss of self-confidence is an additional concern, as individuals may come to rely on AI and trust their own ideas less. Mistrust extends to the corporate level as well: large tech companies are widely distrusted, and people suspect that AI tools are released to collect personal data for ulterior motives rather than for the common good or the basic functionality of the program.
Risk 4: “AI can threaten students’ safety.”
This risk builds on the worries about the intentions of large tech companies. What are their motives, and do those motives drive practices that potentially put people and their data in harm’s way?
There are a number of challenges entwined with keeping students and their information safe. Tech companies harvest data to train their models, yet the regulatory frameworks meant to protect users are inconsistent. School systems are unprepared for these rapidly changing challenges, and students are often too quick to share personal information with AI systems.
While AI tools need some data to function and provide relevant responses, schools must balance those functional needs against their mission of protecting students from unnecessary or potentially harmful harvesting of personal information.
Federal protections, such as FERPA (the Family Educational Rights and Privacy Act) and COPPA (the Children’s Online Privacy Protection Act), help with this process. Still, the tech landscape is changing rapidly, and much remains unknown about how AI systems work and what data they harvest.
Additionally, states set their own regulations and policies beyond FERPA and COPPA. Because this process is decentralized, requirements vary from state to state, adding another layer of confusion. On top of regulatory concerns, cyberattacks are on the rise, potentially compromising student information.
Some of these safety problems are embedded in the technology itself, some depend on available media literacy training, and others stem from insufficient policy safeguards.
Risk 5: “AI dependence can erode students’ autonomy and agency.”
There is concern that as students increase their use of AI, they will become more dependent on it. Overdependence can make students less confident in their own work; they may begin to feel that they need AI and that their work won’t be good enough without it, a spiral that further erodes self-confidence.
AI use can also trigger the AI flywheel effect: as AI gets better and users gain more confidence in its outputs, they use it more and more. They start to ask, “Why wouldn’t I use it if it gives me the best answer?” This can lead to overreliance, with AI used not only for learning but also for entertainment, relationships, and even life decisions.
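The flywheel dynamic can be made concrete with a toy feedback model. The Python sketch below is purely illustrative; the starting confidence and weekly reinforcement rate are invented assumptions, not measurements from the study. It simply shows how a loop in which each satisfying use builds confidence, and confidence drives more use, compounds over a semester.

```python
# A toy model of the AI flywheel: each satisfying use nudges confidence up,
# and higher confidence makes the next use more likely.
# All parameters are invented for illustration; nothing here is measured data.

def simulate_flywheel(weeks=12, confidence=0.2, gain=0.15):
    """Print weekly AI reliance as confidence in the tool compounds."""
    for week in range(1, weeks + 1):
        uses_per_week = round(confidence * 20)  # confidence scaled to weekly uses
        print(f"Week {week:2d}: confidence {confidence:.2f} -> ~{uses_per_week} uses")
        # Feedback step: good results reinforce trust in the tool.
        confidence = min(1.0, confidence * (1 + gain))

simulate_flywheel()
```

Under these invented parameters, weekly reliance roughly quintuples by the end of the simulation; the specific numbers don’t matter, but the compounding shape of the curve is the point.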
Risk 6: “AI can deepen equity divides.”
The first divide is about access. Not everyone has the same access to AI. In the U.S., this divide is perhaps less significant than in other places, since most schools provide some level of access to AI tools. However, schools in some parts of the world have limited or no access at all.
The socioeconomic divide also shapes access, both worldwide and within the U.S. Wealthier schools can afford elite tools with stronger security provisions and better functionality and accuracy, while schools that rely on the free tiers of AI products get significantly more limited functionality and lower-quality output.
In this context, the report goes so far as to state, “This may be the first time in the history of educational technology that schools must pay more for accurate factual information.” Beyond that, there is an urban/rural divide, with uneven policy implementation to guide use and ensure safety. Though there are exceptions, the economic and policy advantages often favor urban areas over less regulated and poorer rural areas.
In addition to outlining these risks, the Brookings study also highlights potential benefits of using AI in schools as well as 12 suggested action steps for reducing risks and advancing benefits.
AVID Connections
This resource connects with the following components of the AVID College and Career Readiness Framework:
- Instruction
- Systems
- Leadership
- Culture
- Relational Capacity
- Rigorous Academic Preparedness
- Student Agency
- Insist on Rigor
Extend Your Learning
- A New Direction for Students in an AI World: Prosper, Prepare, Protect (The Center for Universal Education at Brookings Institution)
- FERPA (U.S. Department of Education)
- COPPA (Federal Trade Commission)
- The Pros and Cons of AI in Education: Benefits, Risks, and Real Examples (Michael Healey via Discovery Education)