A comprehensive study from the Center for Universal Education at the Brookings Institution, A New Direction for Students in an AI World: Prosper, Prepare, Protect, outlines the potential benefits and risks of integrating generative AI into the education setting. The report was authored by Mary Burns, Rebecca Winthrop, Natasha Luther, Emma Venetis, and Rida Karim.
After weighing the advantages and disadvantages, the authors write, “Without proactive and comprehensive intervention, the risks and harms we have identified are likely to persist and intensify, potentially outweighing the benefits of AI and widening disparities in access and effective use.” They follow this statement with two important points: first, that “it is not too late” to shift the current trajectory, and second, that “we must act with urgency.”
In response to their own call for action, the authors conclude their report by outlining 12 action steps, divided into three subcategories—Prosper, Prepare, and Protect—which form the structure for their framework for action. To help steer our schools in a positive direction with AI adoption, they urge anyone reading the report to identify and act on at least one recommendation within the next three years.
Prosper
The first recommendations are organized under the Prosper pillar, which focuses on “transforming teaching and learning experiences so that children and youth can thrive in an education system where AI is omnipresent.” The Prosper pillar includes four calls to action.
1. “Shift educational experiences in school.”
This section explicitly states, “Technology is not pedagogy.” Rather, it is a tool that can potentially empower educators to overcome past challenges and limitations to reshape learning for the better.
Educators should identify the core competencies that students will need to thrive in an AI world and then identify when and how AI should, and should not, be integrated into teaching and learning experiences to achieve those goals. This means shifting some educational experiences and crafting “new pedagogies that are AI-aware, AI-assisted, and, when necessary, AI-resistant.”
The authors also suggest an interdisciplinary approach where the sciences are integrated with the humanities and social sciences. They argue that this will be important “for developing critical thinking and ethical reflection” in an AI world.
Learning experiences should also be designed to connect to students’ interests while providing some degree of choice and control, which helps boost motivation and engagement and allows for the development of student agency.
2. “Co-create educational AI tools with educators, students, parents, and communities.”
This action step is targeted toward companies that create AI tools. It encourages them to form truly collaborative relationships with stakeholders, especially students and teachers, to make sure that their products transform learning based on research and best practices, rather than simply serve to repackage old practices in a flashy new interface.
The authors offer a few suggestions for how to accomplish this. One is to establish teacher-tech co-design hubs, where teachers are involved with designing new products from the earliest stages of ideation through production. Another idea is to include students and parents in AI decision-making and design, perhaps through a student AI council or a parent committee. Finally, the authors suggest involving communities so that local languages and priorities are supported through the new tools and platforms.
3. “Use AI tools that teach, not tell.”
Publicly accessible generative AI tools like ChatGPT and Google Gemini were not designed with research-based learning strategies in mind. They were developed to provide the general public with fully developed answers to prompts and questions. In other words, they were not designed to teach.
The recommendation here is for tech companies to create AI-powered tools that are intentionally designed to be child-friendly and incorporate research-based best practices to facilitate learning.
One suggestion is to design an AI interface that isn’t so quick to agree and please. It could instead be designed “to challenge, critique, and productively disagree with users.” This approach could “push students toward greater self-reflection and higher quality standards.” Other suggestions include designing AI tools to provide reasoning behind the answers they provide and to require users to pause and reflect before they get AI assistance.
In general, the goal is to keep students in the active mode of critical thinking and self-reflection, not just functioning as passive receivers of information.
4. “Conduct research on children’s learning and development in an AI world.”
AI is still so new that we don’t yet know enough about how best to deploy it for high-quality learning experiences.
In that context, the report states, “There is an urgent need for high-quality research to track children’s learning, well-being, and development in an AI world.” The authors provide a few suggestions that align closely with the structure of this report:
- Prioritize research that identifies AI risks and solutions for how to mitigate them.
- Research ways to leverage AI to benefit student learning and development.
- Discover what teachers need to be successful, including ways to “enrich student experiences and strengthen motivation, engagement, and agency.”
- Use a variety of quality research approaches, including examining real school situations, conducting controlled studies, running evidence-based pilot programs, and completing rigorous longitudinal research.
Prepare
The next four recommendations appear under the Prepare pillar, which acknowledges, “Preparation requires building the knowledge, capacity, and structures for ethical and effective AI integration, ensuring that schools develop clear AI visions with dedicated resources, organized adoption processes, and measurable evaluation criteria to track implementation success.”
With that in mind, the Prepare pillar urges governments, funders, private sector players, education systems, families, and communities to all work together to advance the following actions.
5. “Promote holistic AI literacy for students, teachers, parents, and education leaders.”
Several specific actions are outlined under this recommendation:
- Adopt holistic AI frameworks to guide implementation. There are many frameworks already available that can be adopted, including examples from the European Commission, ISTE and ASCD, and UNESCO.
- Create or adopt guidelines for AI literacy. National guidelines are ideal and can provide consistency and clarity, but where these are absent, schools or states can adopt their own.
- Support systemic AI literacy approaches. AI literacy should be integrated in all curriculum areas and not be confined to computer science classes.
- Support peer-to-peer AI literacy, where students become leaders, mentors, and facilitators for their peers.
- Include families and communities in AI literacy efforts.
6. “Prepare teachers to teach with and through AI.”
If we want our students to be AI literate, we need our teachers to be as well. Ideally, AI literacy becomes a component of pre-service teacher preparation so that teachers enter the profession with these skills. However, pre-service curricula often lag behind current learning needs, and they do nothing for the many teachers already in the profession. As a result, robust in-service professional development is also needed. As with any other quality training, these learning experiences need to be differentiated and sustained, and they should both precede and accompany AI implementation so that teachers are supported throughout the process.
This learning must also go beyond simply how to use AI. It needs to weave together AI skills with effective pedagogy, course content, and techniques for developing high-quality student learning experiences.
7. “Provide a clear vision for ethical AI use that centers human agency.”
The report recommends “developing a clear vision for how AI can be used to help ethically advance human agency.” This begins with solid policy that can guide this vision into practice.
8. “Employ innovative financing strategies to close the AI divide.”
This recommendation is focused on making sure that all students and teachers have access to AI tools and AI literacy education through proper and equitable funding.
Especially in underfinanced regions, this may require innovative funding models, including public-private partnerships. Governments can also provide funding structures that ensure access for all. This could include models like the current E-rate system, other mandatory school discounts, or access to free, open-source AI models.
Protect
The final four recommendations appear under the Protect pillar, which aims to protect students from the potential risks that have been outlined in the report. This means implementing safeguards for student privacy, safety, emotional well-being, and cognitive and social development.
9. “Break the engagement addiction and design platforms that are centered around positive mental health for children and youth.”
Through this recommendation, the authors of the report are asking AI companies to operate ethically, putting student safety and health concerns first.
One way to do this is to “require online products to meet safety standards.” This includes making a distinction between general-use AI products and those specifically designed for students. Requirements for adhering to such standards would improve confidence that these products would be developmentally appropriate, safe, fair, reliable, and transparent.
Another approach would be to “stress test AI platforms for safety,” requiring AI companies to slow down and ensure that their products are ready and appropriate for student consumption before being released.
Other suggestions include the creation of advisory boards and the intentional inclusion of student voices to serve as necessary and appropriate safeguards.
10. “Establish comprehensive regulatory frameworks for educational AI.”
This recommendation calls on government entities to take action. Through oversight and regulation, policymakers can help ensure that new AI products are safe and appropriate for children.
Policies may be developed and implemented to enforce safety and ethics requirements. Some of these regulations may be universal, and some may be AI-specific. Some may address technical standards, while others may require independent audits.
While a comprehensive list of policy suggestions is not offered, the point here is that government regulatory frameworks can guide, and even require, that the creation and release of new AI products is done in a way that’s thoughtful and keeps safety at the forefront.
11. “Procure technology that protects students’ privacy, safety, and security.”
As paying customers, school districts have inherent influence over technology companies. If schools hold these companies to higher standards before purchasing AI products, those companies will be more likely to meet the high standard needed to acquire and keep schools as customers.
One way that districts can help to ensure this is to develop and adopt child-friendly procurement policies and practices that outline specific, high expectations. Districts can also lean on product certifications from well-respected nonprofit organizations.
12. “Support families to manage children’s AI use at home.”
This final point acknowledges that AI use is not confined to schools; it is regularly used at home and in the community. With this in mind, schools should strive to support parents in guiding their children on the safe use of AI.
One suggestion is encouraging students to share with parents how they are using AI in schools. Schools can also provide families with AI safety information. Many such resources are already freely available, and the report recommends several for review.
AI is still an emerging technology, and its capabilities and implementation often change daily. Nevertheless, the suggestions in this report give educators a starting point as they look for ways to successfully integrate AI tools into their schools and communities.
AVID Connections
This resource connects with the following components of the AVID College and Career Readiness Framework:
- Systems
- Leadership
- Rigorous Academic Preparedness
- Student Agency
- Align the Work
- Advocate for Students
- Collective Educator Agency
Extend Your Learning
- A New Direction for Students in an AI World: Prosper, Prepare, Protect (The Center for Universal Education at Brookings Institution)
- Parents’ Ultimate Guide to Generative AI (Common Sense Media)
- AI Risks to Children: A Comprehensive Guide for Parents and Educators (The Safe AI For Children Alliance)
- Artificial Intelligence for Children: Toolkit (World Economic Forum)