Harnessing Hateware

Faculty and student researchers focus on harassment
and abuse online

James Brown, director of the Rutgers–Camden Digital Studies Center, teaching his course Truth and Lies in the Digital World, which analyzes the history of propaganda and misinformation online.

By Sam Starnes

Miquela Sousa, more commonly known as Lil Miquela, is a social media icon. A 19-year-old model and musician who promotes a variety of products, she has more than 2.8 million followers on Instagram, where her audience comments at length on her posts, often in harassing language.

Lil Miquela: Photo Courtesy of Brud

Rutgers University–Camden undergraduates majoring in digital studies have been studying various forms of online harassment for several years, and a recent project examined the comments posted by Lil Miquela’s followers. Miquela, who is part Brazilian and part Spanish, is often subject to racial harassment, said Sora Kiwior, a senior from Freehold, New Jersey, majoring in animation and digital studies, who is working on the Rutgers–Camden study. Other comments are sexist—sometimes insulting, and other times consisting of “inappropriate flirting or sexual harassment,” Kiwior said.

The catch, Kiwior pointed out, is that Lil Miquela is not human. She’s virtual, a 3D image, an on-screen robot created by a company to promote products. Even though she’s not real—although it is not clear if all of her followers realize she is a robot—Lil Miquela is subjected to types of harassment that are prevalent on social media platforms. “I was not surprised that people would harass a self-proclaimed Instagram influencer,” Kiwior said. “Just being active on social media, you come across harassment all the time.”

The type of harassment Lil Miquela experiences is rampant on the internet, said Jim Brown, an associate professor of English and digital studies who is the director of Rutgers–Camden’s Digital Studies Center. “We know that women, people of color, and anyone who sits at the intersections of identities experience harassment in more intense ways,” said Brown, who has studied and written about the issue extensively and directed the research on Lil Miquela. “If you are a Black woman, you are particularly vulnerable to abuse and harassment.”

Brown argues that many online platforms that have tacitly permitted hate speech, such as Facebook, fall under the category of “hateware.” He coined the term, which he defines as “software that employs policies, algorithms, and designs that enable, encourage, and/or directly participate in abuse or harassment.” He said these platforms can help stop race- and gender-based harassment by taking responsibility for what happens on their platforms, rather than outsourcing that responsibility to users.

In “Hateware and the Outsourcing of Responsibility,” a chapter published in the 2019 book Digital Ethics: Rhetorics and Responsibility in Online Aggression, Brown and Rutgers–Camden alumnus Gregory Hennis CCAS’18, a former student of Brown’s, write of the existence of “a regulatory and cultural environment that insists on protecting the free speech rights of users at the expense of the safety of marginalized populations.”

Hennis, who earned degrees in computer science and digital studies and now works as a technical specialist for the Federal Aviation Administration, said he learned through his research with Brown that the existence of hate speech online is not a new problem. “Companies have known that this problem existed and they completely ignored it,” he said. “That was something that stuck with me. People knew that terrible things were happening on their platforms and they didn’t care at all.”

Brown said this practice of “outsourcing” the responsibility for reporting harassment to users is becoming less tenable for online platforms. In July, more than 1,000 advertisers joined a boycott of Facebook led by a civil rights group urging the company to strengthen its policies on hate speech and misinformation. “People are starting to recognize that a hands-off approach is a position,” Brown said. “It’s not the lack of a position. If a company decides it is not going to filter content or ban someone for racial epithets, that is taking a position.”

Brown and Hennis propose a “hateware spectrum” that would rate software. Through it, they write, “we can begin to track the key features of software that props up and supports abuse and harassment, intentionally or not.”
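
The chapter describes the spectrum conceptually rather than as a formal rubric; the sketch below is purely hypothetical, with invented feature names and weights, offered only to illustrate one way such a rating of design features might be structured.

```python
# Hypothetical illustration only: Brown and Hennis describe a "hateware
# spectrum" conceptually. This rubric, its feature names, and its weights
# are invented here to show how such a rating might be organized.

# Design traits that can enable abuse when present. True means the
# platform exhibits the risky trait; weights reflect assumed severity.
RISK_FEATURES = {
    "reporting_outsourced_to_users": 3,   # moderation burden falls on targets
    "no_proactive_content_filtering": 3,  # abuse is addressed only after harm
    "anonymous_unvetted_communities": 2,  # groups can "hide in plain sight"
    "no_consequences_for_repeat_abuse": 2,
    "harassment_easily_automated": 1,
}

def hateware_score(platform_traits: dict) -> int:
    """Sum the weights of the risky design traits a platform exhibits.

    A higher score places the platform further along this hypothetical
    hateware spectrum; 0 means none of the listed traits are present.
    """
    return sum(
        weight
        for feature, weight in RISK_FEATURES.items()
        if platform_traits.get(feature, False)
    )

# Example: a chat platform with a hands-off approach to community management.
chat_platform = {
    "reporting_outsourced_to_users": True,
    "no_proactive_content_filtering": True,
    "anonymous_unvetted_communities": True,
}
print(hateware_score(chat_platform))  # 8 out of a possible 11
```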

Their chapter analyzes Discord, an online platform initially set up to let video gamers chat while playing, which alt-right organizers of the Unite the Right rally used to plan the protests that turned violent and deadly in Charlottesville, Virginia, in 2017. The group “merely took advantage of Discord’s hands-off approach to community management in order to hide in plain sight,” Brown and Hennis write.

Brown and Hennis argue that methods to prevent such behavior need to be built into programs. “What if software designers began to think more deeply about how their platforms might be enabling bad behavior and designed these platforms with such potentialities in mind?” they ask.

Brown and Hennis argue for a sense of “design justice,” a framework proposed by Sasha Costanza-Chock, which sees design as directly tied to questions of racism and sexism. “To prevent the problems of hateware,” Brown and Hennis write, “we will need to identify and diagnose the portions of software that are easily gamed toward nefarious ends and then learn from those lessons as we attempt to build software that avoids landing on the hateware spectrum.”

Hennis said he is encouraged by the shift in attitudes that is forcing companies such as Facebook to more actively address hate speech on their platforms. “Collectively, very slowly, people are starting to realize we need more than anarchy online,” he said.

For more on a related topic, see “Striving for Algorithmic Justice” from the fall 2020 Rutgers–Camden Magazine.
