Highland Park shooting suspect’s YouTube videos lost in YouTube’s content moderation purgatory
Videos posted to YouTube accounts associated with the suspect in the Highland Park, Illinois shooting included the kinds of images and themes that pose a particular challenge to tech companies’ moderation efforts: violent but vague.
The suspect, Robert E. Crimo III, was also a rapper under the name “Awake,” posting videos of himself and his music. Some videos depicted extreme and ultraviolent imagery, including shooting scenes. Ten months ago, on a separate account that featured numerous Crimo videos, a video was posted showing an apparent view of the parade route in the predominantly Jewish suburb of Chicago where the attack occurred. Other videos on the channel included narration warning viewers of what someone who appeared to be Crimo described as his unstoppable fate.
These are the types of videos that can be difficult for automated moderation technology or even human moderators to catch.
“The social media company is not the one that can find that needle in the haystack,” said Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology.
Tech companies have come under scrutiny in recent years for a hands-off approach to moderation, with very few rules about what isn’t allowed to be posted. Extremism researchers and academics have urged companies to act, specifically calling on them to stop amplifying false information through recommendation systems and to limit how those systems connect extremist groups and spread extremist content.
YouTube, like many other tech companies, uses a combination of human and automated moderation. It also added new rules to ban groups such as QAnon and removed prominent white nationalist accounts.
But accounts associated with Crimo do not necessarily belong to one of these groups. And while YouTube has a policy against direct threats of violence, such videos can often fall into what Brian Fishman, the former policy director who oversaw the implementation of Facebook’s Dangerous Organizations Policy, calls “gray zone” content, in which people discuss their motivations and frustrations without breaking the rules.
“It’s harder to write general rules that can be applied at scale on large platforms to address this,” Fishman said.
A YouTube spokesperson said channels and videos that violated its Community Guidelines and Creator Responsibility Policy were removed after the Highland Park shooting.
In hindsight, the videos should have raised red flags, said Dr. Ziv Cohen, a forensic psychiatrist and clinician.
Cohen, who provides evaluations for law enforcement and in court cases, said the proposition that YouTube videos and other social media content can be used to predict potential shooters has merit.
“What helps us identify future shooters is if we know someone is on the path to violence,” Cohen said, adding that the online profile associated with Crimo was “concerning” and indicative of a potential for violence.
In one video, a person who appears to be Crimo describes the aftermath of a school shooting; the video ends with him draped in an American flag. The videos included depictions of him holding a gun, as well as narration suggesting he may have felt destined to carry out an attack.
“If someone is showing a lot of content related to school shootings or other mass shootings, I think that’s absolutely a red flag,” Cohen said.
Mass shooting suspects like Crimo and the shooter in Uvalde, Texas, who killed 21 people, appear to have left a trail of violent posts and interactions on social media platforms. Companies have come under intense scrutiny over why they didn’t notice the behavior before it turned deadly.
But the task of detecting and moderating content is not easy.
On YouTube and most other technology platforms, there’s a steady stream of content to review, including videos with direct threats or harassment. Llansó said the sheer scale of content uploaded to platforms like YouTube makes it impossible for humans to review every video before it is posted online. Instead, YouTube relies on automated content moderation tools that Llansó says are “notoriously inaccurate.”
Using such tools to search for content that doesn’t violate the rules but could predict violence or terrorism would be difficult, and it could exacerbate some of the biases already present in policing, Llansó said.
Even if tech companies or other authorities could use social media content to predict shootings, building technology for the task would be incredibly difficult and fraught with potential ethical issues, Llansó said.
“There are many different ways that machine learning tools for content moderation can absorb and build on existing societal biases,” Llansó said.
Content attributed to Crimo fell into a gray area not covered by YouTube’s guidelines, which prohibit videos recorded by a perpetrator during a “deadly or major violent event” as well as content inciting viewers to commit acts of violence, Fishman said.
Even though some content from shooting suspects occupies this middle ground, Fishman said, there are characteristics that could still mark material as potentially dangerous.
“Often these kinds of ‘artistic’ depictions glorify previous attacks,” Fishman said.
Fishman, who is a senior fellow with the international security program at New America, a Washington, D.C., think tank, said researchers are increasingly trying to differentiate between people making “direct threats” and those engaged in subcultures that are “pretty gross but don’t pose threats in the real world.”
He also said that platforms may not alert authorities when creators break rules on violent content, and that content moderation does not usually generate referrals to law enforcement.
Despite the challenges social media companies face with content moderation, Fishman said, they have a responsibility to seek solutions.
“I think that’s what they signed up for when they became big, ubiquitous platforms,” he said.