In her role as communications manager for Pennsylvania’s Allentown School District, Melissa Reese spends countless hours “chasing ghosts”—fake social media accounts and posts that are “disparaging, untrue, or flat-out mean.”
Like many districts, Allentown and its 20-plus schools use a variety of social media platforms to communicate with parents, students, and community members. But the platforms’ refusal to create a dedicated verification and reporting process for K-12 schools has resulted in widespread misinformation, which in turn has led to the harassment, intimidation, and bullying of students.
“At the end of the day, it’s a disruption to education, and we spend a lot of time trying to track down and figure out who’s writing these posts,” says Reese, whose district about 60 miles north of Philadelphia has 17,000 students. “These platforms need to do what they can to help keep school districts and the kids we serve safe.”
The issues Reese faces are felt in districts across the U.S., according to results from a survey conducted by the National School Public Relations Association (NSPRA) and the Consortium for School Networking (CoSN) last year. Almost 60 percent of the respondents said they have dealt with accounts that harass, bully, and intimidate students, and more than half have found fake accounts using the district’s logo and branding.
Just as disturbing: 45 percent of the respondents said they have asked the platforms to remove the accounts or posts but have been unsuccessful.
“It’s a problem, and it’s a huge time suck for our members and for administrators who have to monitor social media and respond when posts are harmful and misrepresent information about the district,” says Mellissa Braham, NSPRA’s associate director. “The platforms have certainly struggled with these types of accounts popping up, and we wanted to bring attention to the struggles our school districts are facing, along with some possible, easy-to-implement solutions.”
Reese has worked for five years in Allentown, where the student population is 75 percent Latino. Much of her work has been around bilingual communications, and she finds herself leaning on certified interpreters, community liaisons, and translation service providers for help.
“We’re a high-poverty district, and some of the posts and comments we see are disparaging, untrue, or just flat-out mean,” Reese says. “We’ve seen people, fully grown adults, come into our schools or board meetings and tell us that our kids are worthless. And then they go on social media and make comments that are so disparaging that they shouldn’t be allowed. We don’t want our kids to grow up feeling like they’re less than. It’s not fair to them.”
Allentown has been “fortunate” to get through the pandemic without the level of controversy seen in other communities over COVID precautions and disagreements about teaching history, Reese says. But judging only by social media comments, she says, you would come away with a much different picture.
“We made a big push to vaccinate our staff and student populations and held dozens of clinics in our buildings that we opened up to families,” she says. “We had 450 people at our first clinic, and every one we held had hundreds of people, but the comments you would see were not reflective of the community interest. It was a strange dichotomy.”
Reese monitors all comments on all posts and tracks every mention of the school district. The district maintains a banned-words list on its Facebook page to block offensive comments and phrases; the list now has more than 700 entries. When she spots something false, or a fake account, she sometimes spends hours trying to get it taken down.
This past summer, NSPRA and CoSN reached out to Meta, which owns Facebook and Instagram, as well as Snapchat, TikTok, Twitter, YouTube, and LinkedIn to work on possible solutions. Staff from the two organizations met with representatives from each platform and proposed a dedicated process for verifying school districts’ accounts.
Braham notes that fraudulent accounts have been a concern since the 2016 presidential election and have become a heightened one since the pandemic. While most of the platforms have added more staff and artificial intelligence systems to sniff out these types of accounts, some have had more success than others.
“All school systems have a federal identification number, so it’s easy for the platforms to tell that they are an official entity,” Braham says. “And most of the platforms have a process for reporting issues, but each one is unique. The challenge is that generally the reporting is for all users, so a school district with an urgent need is fighting with every other consumer user on the platform for attention.”
Braham says several of the platforms — an exception is Snapchat — have agreed to look for solutions to the problem around verification, and YouTube is interested in developing a process for reporting fraudulent accounts and posts.
“We are making progress, and it has been a learning experience for us and for the platforms as well,” Braham says. “But I’m optimistic. I think we can get it done.”
Reese is cautiously optimistic as well. And she notes that the onus remains on school districts to communicate proactively while plugging the holes that social media can bore in their reputations.
“We let people know that if we have an emergency, this is how we will notify you and this is the time frame for doing so,” Reese says. “Some days it feels like we’re fighting a losing battle, but if we can get these companies to see and understand what we face, and the transparency we’re trying to show, then hopefully things will become easier.”