A decade after social media's widespread adoption, it’s safe to say that U.S. schools utterly failed to acclimate our students to the technology and to anticipate the profound damage it could do. The result of widespread social media use among students (said the U.S. surgeon general in a recent advisory) is increased anxiety, stress, and depression. Parents are lost, too. As a school technology specialist working with a population of middle- and high-schoolers, I see firsthand parents’ desperation when I host standing-room-only sessions about social media and mental health.
With the swift emergence of A.I., educators have an opportunity to do better.
This summer, K–12 schools must get to work drafting academic policies governing the use of A.I. and facilitating professional development for teachers about the new technology. Educators need to address this trend head-on with students while simultaneously redesigning instruction and assessment in an age of A.I. We cannot start this work quickly enough.
When social media was nascent, educators failed to see it as more than a purely social and extracurricular distraction, assuming that it didn’t deserve pedagogical attention and that, if we just banned it, we could simply ignore it. Many schools brought in an outside consultant (sometimes, that was me!) to discuss the dangers of social media, and then proceeded to ignore it for the remainder of the year. Often, the extent to which social media instruction was included in curricula was a single unit in a special course like Health or Library. This neglect left students to navigate social media on their own, and ultimately forced parents to deal with the fallout, which ranges from bullying, bigotry, and body-image issues to misinformation, disinformation, and radicalization. We still don’t fully understand how bad things will get.
I can already anticipate the ways that easy-to-access generative A.I. applications might amplify the mental health problems social media created. What happens in a world where A.I. produces infinite, personalized content fed through social media to students—all of whom have underdeveloped impulse control? And if A.I. becomes a trusted source of information, will students similarly turn to the technology for social and emotional support? What happens when an A.I. becomes a stand-in for friends or mentors? So many students are already lonely and depressed. A.I., unchecked, could further isolate kids at a socially vulnerable time.
Schools are already struggling to teach civic and character lessons in an age of social media algorithms that promote anger, grievance, and risk-taking. Inappropriate memes, risky viral challenges, and controversial anonymous accounts broadcasting conflicts between students—and between students and their educational institutions—to a national audience routinely disrupt schools around the country.
A.I. will similarly hinder skill development. We try to teach critical thinking, focus, and resilience. But with A.I., students can outsource to an app the assignments meant to teach and reinforce those skills. Instead of working through a challenging essay about Lady Macbeth, for instance, students can now prompt large language models to complete the task in half the time. Instead of placing themselves in someone else’s shoes to develop debate points about a complex historical event like the Civil Rights Movement, students can ask an A.I. to step in. These learning losses and intellectual shortcuts will extend beyond the classroom and will affect how students, and eventually adults, confront adversity.
A.I.’s possible effects are scary, and it’s no doubt tempting to turn away from them. But that was largely schools’ response to social media. That didn’t work. This time, with the example of social media fresh in our minds, we know we can’t count on technology companies to self-regulate or lawmakers to act quickly. Educators need to be proactive. If we try to simply ban A.I. from schools, we will replicate our past failures. If we hide behind the technology’s terms of service, we ignore reality. And if we simply ask students to cite A.I. when they use it, we leave learning opportunities on the table.
Instead, we must acknowledge that students are going to use A.I. Many already do. I’ve seen how students can be endlessly creative. They will impress us with how they manage to leverage this new technology. (Middle-school students I know have already used ChatGPT for help with Spanish homework and for assistance responding to slights from friends online.) We should encourage creativity and innovation here. At the same time, students need our informed guidance so that when they utilize A.I., they know that these models often produce falsehoods, are trained on biased and stolen data, and have negative environmental impacts. Students should have the opportunity to confront the ethics of A.I. and its implications for topics ranging from intellectual property to the job market to deepfakes, but they should do so in our classrooms, rather than on social media.
To that end, we need administrators to collaborate this summer to write policies that account for new A.I. features that will appear in the applications we already use, such as search, email, and word processing. We need academic integrity policies that acknowledge and teach, rather than discipline, students who reach out for assistance from an A.I. And we as educators need to work to understand the technology, to both enhance our own teaching and better support students. Some of my colleagues already use A.I. in exciting ways, including as a supplement to lesson planning and as a tool to generate research questions and tailor lessons to a range of student abilities. Schools should reserve time for every teacher to experience the potential and the pitfalls for themselves; that firsthand knowledge will improve teaching and learning. An A.I. outputting a “correct” answer is not the same as the deeper, human learning we do in our classrooms every day.
We should learn from our social media mistakes and teach students how to use, understand, and be skeptical of A.I. Done right, our lessons will spark discussions that embrace complexity, enhance critical thinking, and build empathy—precisely the values that social media has eroded.