AI and the 6th C: Character

Explore ways to use artificial intelligence with character in K–12 classrooms.

In 2014, Michael Fullan and Geoff Scott released a white paper, Education PLUS, which expanded upon the famous 4 Cs framework: Communication, Collaboration, Critical Thinking, and Creativity. They added Citizenship and Character and called their new version of the model the Six Cs of Deep Learning.

In many ways, citizenship and character are intertwined. You could argue that it takes someone of high character to be a good citizen. Conversely, a good citizen typically demonstrates the traits of high character. Character has been widely studied in K–12 education, and while its definitions vary, they share many common threads. Here are just a few of those definitions.

In a research document published by the Association for the Advancement of Artificial Intelligence, Rania Hodhod, Daniel Kudenko, and Paul Cairns write that character education aims “to promote ethical, responsible, and caring young people.”

The Institute of Education Sciences (IES), an arm of the U.S. Department of Education, writes, “Character is associated with such virtues as respect, responsibility, trustworthiness, fairness, caring, and citizenship.”

A recent article in the Journal of Character & Leadership Development examined character and AI within the context of the U.S. Air Force. According to the author, Christopher S. Kuennen, “The concept of responsibility is critical to ethical AI in the DoD [Department of Defense] because ethical AI is ultimately dependent on human character.” He adds, “Genuine training for RAI [responsible artificial intelligence] must be approached as character development. . .”

The UK-based mentorship group Oppidan Education says, “Character education teaches young people emotional intelligence, personal and social awareness, effective communication and teamwork. As adults, we need to ensure that new generations are equipped with the right skill set to successfully navigate through our rapidly changing world and that they are not only academically proficient but also personally, emotionally, socially and ethically strong characters.”

Two themes run through these descriptions and definitions: ethics and responsibility. You could sum them up as “doing the right thing.”

This article will focus on the common thread of ethics and responsible use of AI in K–12 education spaces. In other words, it’ll offer suggestions for how both teachers and students can “do the right thing” when using AI in a school setting.

Considerations for Teachers

First, what should teachers consider as they plan to use AI for designing student-facing lessons? How can they approach AI both ethically and responsibly?

  • Put student safety first. This means taking time to gain at least a general understanding of the digital tool that you’d like to use with students. You don’t need to be an expert, but you should have a basic understanding of what the tool can do and how it works. In the case of generative AI tools like ChatGPT, that means understanding that responses are generated from existing data across the internet and, as a result, may contain inaccurate information and bias. By recognizing this context, you can better assess the limitations, risks, and safety concerns that may arise when using the tool. For example, if you’re using it to plan as a teacher, that means not entering any student data into the system. If you’re using it with students, you’ll want to help them understand that they should evaluate all responses for accuracy and bias before accepting the results as fact.
  • Review terms of use. Before having students use any digital tool in your classroom, it’s important to review the terms of use agreement and its requirements. For example, OpenAI, the maker of ChatGPT, specifies the following safety policies:
    • “ChatGPT is not meant for children under 13, and we require that children ages 13 to 18 obtain parental consent before using ChatGPT. While we have taken measures to limit generations of undesirable content, ChatGPT may produce output that is not appropriate for all audiences or all ages and educators should be mindful of that while using it with students or in classroom contexts.”
    • “We advise caution with exposure to kids, even those who meet our age requirements, and if you are using ChatGPT in the education context for children under 13, the actual interaction with ChatGPT must be conducted by an adult.”
  • Review school and district policies. Nearly every school has an acceptable use policy that guides what technology can be used with students as well as any requirements for that use. Be sure to review yours. Also, find out if your school has a formal process in place for reviewing new technology. If you’re not sure, check with your local technology leadership.
  • Respect differing opinions regarding the use of AI. Not everyone feels the same way about using artificial intelligence tools, and that’s to be expected. As you introduce these digital tools into your classroom, you’ll need to be respectful of those differing opinions and also prepared to modify student experiences accordingly. This may mean allowing a student to opt out of an AI experience and providing them with an alternative. While this can add an additional layer of work, it’s another part of the character equation: respecting others’ points of view.
  • Adopt new technology thoughtfully. Adopting new tech can feel complicated and confusing at times. When in doubt, err on the side of caution and seek out more information.

Considerations for Both Teachers and Students

While some considerations are unique to either teachers or students, others apply to all users. One of these broader considerations is understanding the limitations of AI, especially generative AI. By understanding those limitations, you and your students can better evaluate when it is appropriate to use AI tools and when it is not. Two key factors to be aware of are bias and accuracy in generated information.

  • Bias: The Cambridge Dictionary defines bias as “the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment.” There is near-universal agreement that generative AI can produce biased responses. This makes sense because it’s pulling from content created by biased human beings. If the source is biased, the output will be as well. As we use generative AI, it’s important to be aware of this potential bias and to review any generated output with a critical eye. We need to ask ourselves: Is this response biased, and if so, what types of bias have crept in?
  • Accuracy: It is also important to recognize that generative AI can make mistakes, often called hallucinations. When you think about how generative AI works, this is not surprising. The AI is essentially looking for probable patterns across billions of pieces of information. A generative AI chatbot produces its answers by predicting the next most likely word in a sequence of words and sentences, as the simplified sketch after this list illustrates. It’s not really thinking; rather, it’s building sentences based on probable patterns, and in doing so, it can make mistakes. Therefore, we all need to review the accuracy of the content that we receive from an AI query, conducting our own follow-up research to verify the information provided.
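
To make the idea of next-word prediction concrete, here is a minimal, toy sketch in Python. It is not how a real large language model is implemented (real models learn probability distributions over tens of thousands of tokens from enormous datasets); the word-probability table below is invented purely for illustration.

```python
# Toy illustration of next-word prediction. The probabilities here are
# made up for demonstration; a real model learns them from training data.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"down": 0.8, "quietly": 0.2},
}

def generate(start: str, max_words: int = 5) -> str:
    """Build a sentence by repeatedly choosing the most probable next word."""
    words = [start]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:  # no known continuation, so stop generating
            break
        # Greedily pick the highest-probability next word. Note that the
        # program never checks whether the sentence is true, only whether
        # it is statistically likely, which is why "hallucinations" occur.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the"))  # prints: the cat sat down
```

The key point for the classroom is in the final comment: the program, like a real chatbot, selects words because they are statistically probable, not because it has verified that they are accurate.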

Considerations for Students

Perhaps the biggest concern that teachers have about generative AI is that students will use it to cheat. In response, some have resorted to AI detection software to catch students cheating. While these tools are tempting, they’re seldom effective. They are unreliable and can return false positives that result in students being wrongly accused of cheating.

Rather than taking a “gotcha” approach, it’s usually better to focus on educating students and setting clear expectations. This is where character education comes into play. By engaging in open, honest dialogue with your students, you can collaboratively agree on how to use AI with academic integrity in your classroom. When having these conversations with students, here are a few considerations.

  • Define academic integrity with your students. Have an honest conversation about what it means to have integrity in general and then what it specifically means to have academic integrity. You might even draft a definition together.
  • Talk about cheating. Cheating isn’t a secret. Teachers and students both know that it happens. Therefore, facilitate a classroom conversation about why cheating happens and what can be done to reduce or eliminate it. Students may acknowledge that they cheat when they are unmotivated, underprepared, or stressed. Identifying the causes can help you discuss strategies for alleviating the stressors that lead to cheating.
  • Develop an AI Use Agreement with your students. It can be an incredibly powerful experience to have a classroom of students collaborate on writing guidelines that they will use to regulate their own behavior. What is acceptable? What is not? What happens if someone violates classroom expectations? Having this open discussion and coming to a collaborative agreement can lead to student buy-in, better understanding, and more ethical and acceptable use of AI in school.
  • Use scenarios. As you discuss and work through the nuances of using generative AI in the classroom, consider posing realistic scenarios to your students. Have them work through each example and provide a recommended action, as well as a rationale for that action. Not only will you be defining how students should respond in your classroom, but you’ll also be setting them up for future success. In many ways, you are helping them develop their academic character action plan.
  • Model and practice. Once you have developed common expectations and practiced them hypothetically, you can advance this practice to real academic situations. You could first guide the entire class by entering AI prompts for the class to see and respond to. You could also have groups of students work together to critique sample prompts and responses. The gradual release strategy of “I do, we do, you do” can be an effective approach. No matter how you structure it, practice will help crystallize expectations and begin to make these behaviors habitual.


Discussing character can be complicated, and so can beginning to use generative AI in academic settings. By learning about each, having open conversations, and moving forward with positive intent, you can create a space in your classroom for students to develop strong character while also learning how to work responsibly and ethically in the world of generative AI.

AVID Connections

This resource connects with the following components of the AVID College and Career Readiness Framework:

  • Instruction
  • Rigorous Academic Preparedness
  • Opportunity Knowledge
  • Student Agency
  • Break Down Barriers
