AI Guidelines for K-12 Aim to Bring Order to the ‘Wild West’
Education has had a wobbly relationship with the still-evolving presence of generative AI in schools, with some districts banning it only to reverse course. The technology can save teachers time by automating tasks, but it can also cause headaches as an accomplice for students looking to cheat.
So how much work would it take to come up with guidelines to help educators manage the challenges of using generative AI tools for their work? In Michigan, it was a team effort.
A coalition of 14 education organizations, helmed by the nonprofit Michigan Virtual Learning Research Institute, released sample guidelines earlier this month that walk teachers and administrators through potential pitfalls to consider before using an AI tool in the classroom or for other tasks. Those include checking the accuracy of AI-generated content, citing it properly, and judging which types of data are safe to enter into an AI program.
Ken Dirkin, senior director of the Michigan Virtual Learning Research Institute, said the group wanted to create a document that was digestible, but “there’s probably 40,000 important things that could have been included.”
“What we’re experiencing when we go out and work with school districts is that there’s a general lack of knowledge, and an interest and awareness” of generative AI, Dirkin says, “but also a fear of getting in trouble, or a fear that they’re doing something wrong, because there’s not any strong guidance on what they should be exploring or doing.”
Dirkin says the group wanted the document to help school districts and educators think through the use of generative AI without defaulting to either extreme of banning it or allowing unrestricted use.
“That’s really been kind of our mode of operation: How do we just enable exploration and not disable access,” he says, “or have people say, ‘It’s the latest trend, and it’ll go away.’”
The speed at which generative AI is evolving makes this a critical time for educators and districts to have guidelines about when and how they use it, says Mark Smith, executive director of the Michigan Association for Computer Users in Learning.
“AI is everywhere. It’s doing everything for everyone and anyone that’s interested,” he says. “By the time we get a handle on the one-, three-, five-year plan, it’s changing right underneath our noses. If we don’t get in front of this now with a nimble, flexible, guideline policy or strategy as a collective whole, it’s gonna continue to change.”
Protecting Student Data
School principals want to know how AI can be used in the classroom beyond having students copy and paste from it, Paul Liabenow says, and are of course concerned about students using it to cheat.
But many of the questions he gets as executive director of the Michigan Elementary and Middle School Principals Association focus on how AI programs can comply with student privacy laws like FERPA and the Individuals with Disabilities Education Act, Liabenow explains.
“There’s a myriad of questions that come weekly, and that’s growing,” Liabenow says. Principals want guidance from organizations like Michigan Virtual “not just to avoid stepping into the black hole as a leader but to effectively use it to improve student achievement.”
The AI guidance document urges educators to assume that any data they enter into a generative AI tool will be made available to the public, unless the company that owns the tool has an agreement with their school district.
Liabenow says one of his confidentiality concerns involves teachers, counselors or administrators who might want to use an AI program to manage student data about mental health or discipline, something that has the potential to end in a lawsuit.
“People are thinking they’re gonna be able to run master schedules with the AI tools, where they’re inputting individual students’ names, and that leads to some challenges both ethically and legally,” Liabenow says. “I love this guidance tool, because it reminds us of areas that we need to be sensitive to and diligent at protecting.”
Smith, of the Michigan Association for Computer Users in Learning, says the privacy pitfalls aren’t in the everyday use of generative AI but in the growing number of apps that may have weak data protection policies, buried in the user agreements that virtually no one reads when signing up for an online service. It may become even easier to run afoul of privacy laws, he adds, given proposed changes to strengthen the Children’s Online Privacy Protection Act.
“How many of us have downloaded the updated agreement for our iPhone without reading it?” Smith says. “If you magnify that to 10,000 students in a district, you can imagine how many end user agreements you’d have to read.”
Is AI Your Co-Writer?
It’s not just student use of AI that needs to be considered. Teachers use generative AI to create lesson plans, and any school district employee could use it to help write a work document.
That’s why the new guidelines include examples of how to cite the use of generative AI in educational materials, research or work documents.
“The more we disclose the use of AI and the purpose, the more we uplift everybody in the conversation,” Dirkin says. “I don’t think in two or three years people will be disclosing the use of AI — it’ll be in our workflows — but it’s important to learn from each other and tie it back to human involvement in the process. It’ll eventually go away.”
When AI Is Baked Into Everything
Generative AI is increasingly integrated into software that is already widely used. Think of spell-checking programs like Grammarly, which a Georgia student says got her accused of cheating after a paper she wrote with its help was flagged by AI-detection software.
That growing ubiquity will make AI-powered education tools easier to access and, therefore, more complicated to use with safety in mind, Dirkin says. One feature of the current generative AI landscape, he notes, is that people still have to copy and paste content into an AI program to use it, which builds in at least a moment’s pause.
“A lot of times, it’s the Wild West in terms of access to tools. Everybody has a Google account, and people can use their Google account to log into a ton of free services,” Dirkin says. “We wanted to make sure people had a tool to reflect on whether they’re using it in a legal or ethical [way], or if they’re violating some sort of policy before they do that. So just stop and think.”
Smith points to the section of the new guidelines that asks educators to think about how something generated by AI might be inaccurate or contain bias. Even as generative AI gets better, he says, “there are risks and limitations to all AI, no matter how good it is.”
“Sometimes the best data set for an educator is the teacher down the hall with 10 years more experience, and not an AI tool,” Smith says. “There is still a human element to this, and I think the guidance document mentioning those risks and limitations is kind of a friendly nudge. It’s a polite way of saying, ‘Hey, don’t forget about this.’”