Blueprint for an AI Bill of Rights

Explore how the five guiding principles in the Blueprint for an AI Bill of Rights can help guide the integration of artificial intelligence in our schools and classrooms.

Grades K-12

As with any new technology, artificial intelligence offers some incredible possibilities as well as some potential concerns. As educators, it is our responsibility to always work to maximize those positives while minimizing the dangers. As part of those efforts, we must carefully vet the technology that makes its way into our schools and classrooms, monitor terms of use agreements, craft policies and guidance that protect students and staff, and make sure that these practices and protections apply evenly across our school populations.

In an effort to provide some guidance in this area, the White House Office of Science and Technology Policy has created the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. It is not a formal policy but rather a thoughtful document that can help guide schools and society at large in the safe and equitable adoption and integration of artificial intelligence. The Blueprint states that this document is “intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.”

This document was created after a year of comprehensive information gathering from a wide range of sources, including experts, leaders, and the general public.

Within it, you will find “a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.”

Here is an overview of the five principles that are outlined:

Safe and Effective Systems

The Blueprint states, “You should be protected from unsafe or ineffective systems.” This principle reinforces the idea that we should put safety first, which includes carefully studying and testing new software before deployment. If the program meets our expectations, we can adopt it. If it does not, or if the system later proves unsafe or ineffective, we should be prepared to move on. We must also remember that software changes over time, so monitoring should continue even after implementation.

Algorithmic Discrimination Protections

For this second principle, the Blueprint reads, “You should not face discrimination by algorithms and systems should be used and designed in an equitable way.” In other words, we need to make sure that the automated systems we implement and use don’t contribute to any discriminatory practices, whether intentional or not. This pertains to the system itself as well as how we use that system; even a well-designed system can be misused.

Data Privacy

In the area of data privacy, the Blueprint states, “You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.” On the administrative side of technology, that means paying careful attention to the terms of use and making sure that only necessary data is collected and only necessary permissions are granted.

It also means that user behaviors should not be subject to what the document refers to as “unchecked surveillance.” This is especially true with automated systems where students are judged or evaluated based on algorithms alone. An example is software designed to detect cheating and plagiarism. While these tools can be informative and helpful in teaching students how to be better digital citizens, we need to remember that they make mistakes and can return false positives. This is particularly the case with AI detection tools, which are largely ineffective and can lead to false accusations of cheating. As educators, we need to be very careful about how these tools are used with students and give thoughtful consideration to whether they should be used at all.

Notice and Explanation

This principle reads, “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.” Communication regarding the terms of use should be clear and written in plain, understandable language, not dense legalese that only a lawyer can parse. Users should know when a system will be used and how it will be used to make decisions that may impact them.

The guidance attached to this principle explains further, stating, “Automated systems now determine opportunities, from employment to credit, and directly shape the American public’s experiences, from the courtroom to online classrooms, in ways that profoundly impact people’s lives. But this expansive impact is not always visible.” Essentially, the intent and use of any program, and of the data it collects, should be clear and transparent, with no hidden agendas.

Human Alternatives, Consideration, and Fallback

The fifth and final principle states, “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

The section goes on to detail why this is important. It states, “There are many reasons people may prefer not to use an automated system: the system can be flawed and can lead to unintended outcomes; it may reinforce bias or be inaccessible; it may simply be inconvenient or unavailable; or it may replace a paper or manual process to which people had grown accustomed. Yet members of the public are often presented with no alternative, or are forced to endure a cumbersome process to reach a human decision-maker once they decide they no longer want to deal exclusively with the automated system or be impacted by its results.”

In other words, we need to make sure that technology isn’t limiting anyone’s access or success but rather improving it. While it may take some work to set up these human alternatives, they are important in ensuring that all of our students have an equal opportunity for success.

These five principles are good reminders for us as we implement new technology in our classrooms. In addition to the five principles listed in the Blueprint for an AI Bill of Rights, the White House Office of Science and Technology Policy has also included links from each principle to a Principles to Practice section, which includes more context, details, and examples that can help bring the Blueprint to life and make it more practical and understandable. This section breaks down why each principle is important, what expectations we should be able to have about automated systems, and what the principle looks like in practice.

While not designed specifically for education, both the principles and accompanying documents can help guide us as we integrate more and more technology into our schools and classrooms. If you are involved in writing school policy or writing your own classroom guidelines, this guidance can be very helpful.

Just as the constitutional Bill of Rights helps protect our freedoms as US citizens, the Blueprint for an AI Bill of Rights can help protect our students as we wade deeper into a world permeated with artificial intelligence.

AVID Connections

This resource connects with the following components of the AVID College and Career Readiness Framework:

  • Instruction
  • Systems
  • Leadership
  • Culture
  • Student Agency
  • Break Down Barriers
  • Advocate for Students

 

Extend Your Learning