In this episode of Unpacking Education, we sit down with renowned education technology advocate and author Ken Shelton to discuss his latest book, which he coauthored with Dee Lanier, The Promises and Perils of AI in Education. Ken brings his years of experience as a middle school teacher, technology lab leader, and global speaker to a thought-provoking conversation about the transformative potential—and challenges—of artificial intelligence in education. Ken shares how we must engage critically with AI and its implementation to make sure that it’s used ethically and effectively in our schools and classrooms.
There is a whole lot of promise around AI in education. There are also a whole lot of perils and unpredictables.
Ken Shelton, from his book, The Promises and Perils of AI in Education, coauthored with Dee Lanier
Resources
The following resources are available from AVID and on AVID Open Access to explore related topics in more depth:
- AI in the K–12 Classroom (article collection)
- AI-Powered Web Search (podcast episode)
- Making Music With Generative AI (podcast episode)
- NotebookLM, Three Powerful Updates (podcast episode)
- AI in the K–12 Classroom, with Eric Curts (podcast episode)
- Brisk Teaching (podcast episode)
- AI Pedagogy Project (podcast episode)
You’re Enough
When introduced to disruptive technology like generative AI, Ken Shelton points out, “The immediate reaction is always one of two camps; it’s always a binary. It’s ‘I’m all in, and it’s amazing,’ or ‘We have to stop this.’” The better reaction, he believes, is somewhere in the middle.
Using AI too aggressively, with little thought about process and inputs, can lead to unintended consequences and potentially harmful results. Schools must, therefore, be intentional about how they integrate AI, especially when it's used to automate systems, because AI can guide systems only as well as the information and directions it is given. Tune in to hear Ken's insights and suggestions for implementing artificial intelligence in our schools in a way that maximizes the promises and mitigates the potential perils. The following are a few highlights from our conversation with Ken:
- About Our Guest: Ken Shelton is a former middle school teacher in the Los Angeles Unified School District. His last 11 years in the classroom were spent in a technology lab, leading him to become a vocal advocate for career and technical education (CTE) opportunities. He is a well-respected education technology thought leader, speaker, and author who has spoken to educators in 49 states and across 50 different countries.
- Ken’s Coauthor: Dee Lanier was Ken’s coauthor for the book, The Promises and Perils of AI in Education. Ken says, “That’s my brother from another mother.” Dee has written two previous books and is based in Charlotte, North Carolina. Ken says that he and Dee “have the same approach” and that their “concurrent paths [are] what led us to ultimately writing the book that we coauthored. . . . It truly is a collaborative effort.”
- Three Types of AI: Ken points out that artificial intelligence is not new. It’s been around since the 1950s. He goes on to explain that there are three types of AI: “There [are] reactive, predictive, and generative.” Reactive artificial intelligence includes things like voice assistants and spell-check. Predictive AI includes tools like subscription platforms that predict and suggest other titles based on your viewing or listening history. Finally, there is generative artificial intelligence, which came to the public in November 2022 with the introduction of OpenAI’s ChatGPT. This AI chatbot generates new content based on learned patterns within the existing content on which it has been trained.
- Transformation: If something is transformative, Ken says, “It requires a complete shift in many of our learning paradigms.” Generative AI has this potential, but that doesn’t mean using it to create more worksheets; rather, it needs to fundamentally change the learning experience, such as through personalized learning.
- Labeling Students: Ken says that we should not label students as underperforming because that also means that some students are “overperforming”; instead, we should say that students are or are not “appropriately resourced” to succeed.
- Supplying AI With Appropriate Data and Direction: Schools need to be careful when using AI to automate systems, making sure that AI systems have appropriate data and direction in order to improve systemic practices in schools.
- Detailed Prompts: It’s important to write clear and detailed prompts in order to get quality data in return. To this point, Ken recalls writing a detailed prompt that was about “five paragraphs long” for a school. If a prompt is vague, the AI will fill in the blanks, often with details that were not intended. Because of that, it’s important to include all relevant details and concepts in the prompt.
- Overreliance: Ken believes that one of the potential dangers of AI is overreliance. He says, “Overreliance can lead to what I would describe as intellectual laziness. . . . And this is where media literacy is essential here; it can lead to the propagation and the amplification of disinformation.”
- Media Literacy: AI requires new media literacy skills. Students and teachers need to learn how to use AI effectively and responsibly. Ken compares it to the introduction of search engines and says, “It’s no different than Google search, where you [need to] understand how search works and how search technology works.”
- Data: “A concern, also, is the lack of transparency, as well as accountability, on the AI companies and their data governance practices,” says Ken. Schools must review terms of use and understand how information is collected and used by AI tools. Schools should also ask, “What data cleansing and algorithmic bias weighting measures do you have in place?”
- Ethical Questions: Three ethical questions come from Dee’s business ethics classes in college. The first two are: “Is it against the law, or does it violate explicit school or district policy?” and “How would I feel if someone did this to me?” The third question is, “Am I sacrificing long-term benefits for short-term gains?” Ken likes to apply these questions to AI scenarios in schools.
- Student Input: Ken also likes working with student groups and asking them to process the ethical questions. For example, he might ask students how they would feel if AI spoke for them or if a classmate used AI to turn in a paper when they had written their own. Students often respond with, “Well, it’s not fair. That’s not cool. You didn’t do it yourself. That’s cheating. That’s plagiarism.”
- Not a Luxury: Technology is not a luxury. It is “foundational,” Ken says. This includes access to devices, broadband, and now artificial intelligence. It also means access to information and strategies for using that technology effectively. Additionally, students must be given rich opportunities to use the technology at a high level and to develop the skills associated with this use.
- Digital Divide: Beyond having access to technology, the new digital divide also includes how it is being used. Is it used to transform learning, or is it simply a low-level replacement for basic tasks?
- Maximizing Time: AI can save time if used appropriately, but Ken suggests that we first must ask, “What are you doing with the time you actually have?” Are we using it for administrative tasks that have little to do with learning? If so, let’s try to reclaim that time first, and when we hand those tasks to AI, let’s make sure it performs them fairly and ethically. To make that happen, we need to ensure that the system has the proper information to perform the tasks effectively.
- A Starting Point: “I think the first step is the human aspect,” says Ken. “What can it do that I can’t do myself?” Ken shares a success story of how he worked with a physics teacher to use AI to develop authentic connections for his students and course content.
- Toolkit: With AI technology, Ken encourages us to realize, “I don’t need to use it for everything, but again, when I recognize that it can do something for me that I cannot do myself, then that’s when you want to go ahead” and use AI to support your work.
- One Thing: Ken ends by saying, “The way in which we define literacy needs to adapt and evolve. . . . Literacy occurs in five dimensions—reading, writing, speaking, listening, and observing—and we want to curate and cultivate a skillset that’s across all five of those areas, and AI is included in that context.”
Use the following resources to continue learning about this topic.
If you are listening to the podcast with your instructional team or would like to explore this topic more deeply, here are guiding questions to prompt your reflection:
- What has been your experience with generative artificial intelligence?
- What are the potential promises and benefits of using AI in schools?
- What are the potential perils of this use?
- How have you seen AI used in a way that maximizes benefits?
- How have you seen AI used in a way that resulted in unintended consequences?
- How might your school move forward with the integration of generative AI in a way that maximizes promise and minimizes potentially harmful consequences?
- Ken Shelton (official website)
- The Promises and Perils of AI in Education (written by Ken Shelton and Dee Lanier)
- Meet Ken Shelton (CanvasRebel)
- AI Insights in EdTech: Ken Shelton – Author and EdTech Educator (The Windward Institute via YouTube)
#366 The Promises and Perils of AI in Education, with Ken Shelton
AVID Open Access
58 min
Keywords
AI in education, personalized learning, task optimization, critical thinking, generative AI, ethical implementation, digital divide, over-reliance, media literacy, professional growth, student agency, literacy evolution, AI literacy