
Can 'friendsourcing' save us from online harassment?



The online world we live in is a quirky, often scary, ever-changing place.

However useful the internet might be for our development and growth, there’s no doubt that it brings a host of problems with it. From privacy and censorship to discrimination and bullying, there is a lot of work to be done before the platform is truly safe for everyone to use.

On social media in particular, Twitter has recently been criticised for its poor response to harassment reports, YouTube’s algorithm is known for sporadically pushing offensive content into its trending lists, and getting abusers banned from Instagram remains a long and impractical process.

Earlier this month, researchers at the Massachusetts Institute of Technology (MIT) published a study and a website, Squadbox, embracing principles of human sensibility in the fight against online harassment. They call the approach “friendsourcing”, and in their view it could revolutionise the way we fight online abuse.

Half of UK girls were bullied on social media in 2017, a survey conducted by Opinium for the children’s charity Plan International UK has found. Meanwhile, a 2017 Pew Research Center survey found that four in ten Americans had personally experienced online harassment at some point in their lives.

Artificial intelligence (AI) has grown rapidly in the last few years, and alongside unsettling robots, smart fridges and arguably racist chatbots, it has also been put to work fighting online harassment.

An open-source project launched last year by Jigsaw, a Google subsidiary, deployed AI as the primary tool against online harassment. The machine learning model was taught by example to recognise abusive language and harassment, and Jigsaw claimed the AI could potentially be more accurate than any keyword blacklist and work faster than any human moderator.
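To see what “taught by example” means in practice, here is a minimal, self-contained Python sketch contrasting a fixed keyword blacklist with a classifier trained on labelled messages. The tiny training set and the scikit-learn pipeline are illustrative assumptions, not Jigsaw’s actual system.

```python
# Contrast between a fixed keyword blacklist and a model taught by example.
# The training set and pipeline below are illustrative, not Jigsaw's system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

BLACKLIST = {"idiot", "stupid"}

def blacklist_flag(message: str) -> bool:
    # Brittle: only fires on exact listed words, so novel insults slip by.
    return any(word in message.lower().split() for word in BLACKLIST)

# "Taught by example": label a handful of messages as abusive (1) or
# benign (0) and let the model generalise from the wording.
examples = [
    ("you are an idiot", 1),
    ("nobody wants you here, get lost", 1),
    ("what a stupid take", 1),
    ("thanks for sharing this", 0),
    ("interesting point, though I disagree", 0),
    ("see you at the meeting tomorrow", 0),
]
texts, labels = zip(*examples)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(list(texts), list(labels))

def ml_flag(message: str, threshold: float = 0.5) -> bool:
    # Scores a probability of abuse instead of matching fixed words,
    # which is what lets it catch insults no blacklist anticipates.
    return model.predict_proba([message])[0][1] >= threshold

print(blacklist_flag("nobody wants you here, get lost"))  # False: no listed word
print(ml_flag("nobody wants you here, get lost"))         # likely True: learned by example
```

A real system would of course train on millions of labelled comments rather than six, but the shape of the approach, and its advantage over a word list, is the same.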

But there is another school of thought on fighting online harassment. Some platforms, including Twitter and HeartMob, have in the past used human-based tools to systematically report and fight online abuse, arguing that the “human touch” cannot be entirely replaced by any machine when it comes to reporting offences. The recently released Squadbox is a project built on this personal way of fighting online harassment.

But before digging deeper into how Squadbox works, what does “friendsourcing” even mean?

 

HeartMob and Twitter Block Lists

One of the first platforms based on the principle of helping others fight online abuse, HeartMob, was launched in 2016 by online harassment survivors. The founders claimed that bystander intervention is the best practice for addressing all forms of violence, and that an online community built around it could help end online harassment once and for all.

There’s a Twitter bot available on the HeartMob site to help people fight the more blatant forms of online abuse, but all activity on the platform is reviewed by trained staff on the HeartMob team. From harassment cases to supportive messages, help requests and documentation, to the data held on HeartMobber accounts, everything goes through the support team.

“The reason why these platforms are needed in the first place,” says Dr John DeGarmo of the Foster Care Institute, “is that people on social media are far less respectful than in real life.

“If I’m talking to you face-to-face [and] I can see if something I’ve said upset you, by your facial [expressions], by your body gestures, and I will most likely back off. Whereas on social media we can’t see those social cues, I can’t see if I’ve upset you or not, so I just tend to go all out.”

To fight online harassment, Twitter has developed different tools over time. Similar in spirit to HeartMob and Squadbox, in 2015 the social network released a tool allowing users to create, import and export whole lists of people they had blocked. The tool has certainly helped contain spam and online harassment over the past three years, but it has also been criticised for limiting freedom of expression by passively “shunning” people you don’t know based on someone else’s opinion.
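Reduced to its essentials, a shared block list is just a file of account IDs that anyone can merge into their own. A minimal sketch, assuming a one-column CSV format like the one the 2015 export tool produced; the function names and merge policy here are illustrative, not Twitter’s actual code.

```python
import csv

def load_block_list(path: str) -> set[str]:
    """Read a one-column CSV of account IDs into a set."""
    with open(path, newline="") as f:
        return {row[0] for row in csv.reader(f) if row}

def import_shared_list(my_blocks: set[str], shared_path: str) -> set[str]:
    # Importing is a set union: everyone on the shared list is now
    # blocked for you too, which is exactly the criticism above:
    # you "shun" strangers based on someone else's judgement.
    return my_blocks | load_block_list(shared_path)

def export_block_list(blocks: set[str], path: str) -> None:
    """Write your blocks back out so others can subscribe to them."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows([b] for b in sorted(blocks))
```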

“Generally, people in the online world behave like drivers behave when they’re in their cars,” says Adam Gray, co-founder of Digital Leadership Associates. “They can be more aggressive because they don’t get the face-to-face contact they would have to deal with as a result of their behaviour.

“However, if you’re part of a community online, say LinkedIn or Facebook, and you say something which is unjustified or horrible, many of your friends will dissent, so the cost of saying something there will be much higher than saying something in real life. So it really depends a lot on the network and what kind of community there is.”

 

Squadbox

What Squadbox picks up from HeartMob is both the use of a helping AI tool (in its case through a partnership with Jigsaw) and a team of people overseeing and filtering messages before they reach a given person. But the platform adds another layer to the concept of bystander intervention: in Squadbox, the person moderating a given mailbox is always a friend.

“The project started from the concept of how we can make online discussion better,” says Amy Zhang, co-creator of Squadbox and a PhD student at MIT.

“We started researching mailing lists and how they were moderated, and we were obviously very aware of questions related to harassment.

“At some point last year we were talking about new projects and thought ‘what if someone could moderate another person’s mailing list, could that be a solution?’”

The team interviewed a range of scientists, activists and YouTube personalities, and found that people had very different experiences. “A lot of the harassment they faced was very contextual,” says Mrs Zhang. “They had been targeted in very specific ways or by very specific people, where that context was really important, and that led us to think that something involving friends, rather than a team of strangers, could potentially work better.”

The way the platform works is quite decentralised. It’s about people “taking control of their own inbox through their friends or other people they trust”, Mrs Zhang says.

Squadbox currently works for email, but the team is thinking of ways to expand the platform. Moderators you select on the site can send emails back with a tag, move them to the trash folder, add a summary, or redact the message before forwarding it on.
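A rough sketch of that moderation flow in Python; the type names, fields and delivery helper below are hypothetical illustrations of the actions Zhang describes, not Squadbox’s actual implementation.

```python
# Hypothetical model of a friend moderating a Squadbox-style inbox:
# each incoming email is either forwarded (possibly tagged, summarised
# or redacted) or dropped so the owner never sees it.
from dataclasses import dataclass
from enum import Enum, auto

class Decision(Enum):
    FORWARD = auto()  # deliver to the owner's real inbox
    TRASH = auto()    # drop it; the owner never sees it

@dataclass
class Email:
    sender: str
    subject: str
    body: str

@dataclass
class ModeratedEmail:
    original: Email
    decision: Decision
    tag: str | None = None            # e.g. "harassment" or "safe"
    summary: str | None = None        # lets the owner skip reading the body
    redacted_body: str | None = None  # abusive passages removed

def send_to_owner(subject: str, body: str) -> None:
    """Stand-in for real mail delivery."""
    print(subject)
    print(body)

def deliver(m: ModeratedEmail) -> None:
    if m.decision is Decision.TRASH:
        return
    subject = f"[{m.tag}] {m.original.subject}" if m.tag else m.original.subject
    body = m.redacted_body if m.redacted_body is not None else m.original.body
    if m.summary:
        body = f"Moderator summary: {m.summary}\n---\n{body}"
    send_to_owner(subject, body)

# Example: a moderator forwards a tagged, summarised message.
deliver(ModeratedEmail(
    original=Email("troll@example.com", "you again", "<abusive text>"),
    decision=Decision.FORWARD,
    tag="harassment",
    summary="Same sender as last week; nothing new.",
))
```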

“We can imagine this same system to work for Twitter, and any other place where people are pushing content directly at you,” Mrs Zhang tells The National Student.

“We have created this idea, which is an open-source project, and now it’s up to other websites and people to implement it if they would like to.”

 

From theory to practice

But while it might be easier to deal in numbers and figures when discussing delicate matters such as online harassment, there are real people behind the harm these figures describe. The National Student talked to two of them to ask about their experiences and what they think of “friendsourcing”.

Jane (not her real name) lives in Minya El-Qamh, Egypt. She is a high school student, and she uses social media daily, mostly to communicate with friends from other cities and abroad. She tells The National Student how a normal conversation with a friend turned into a case of harassment that almost destroyed her life and reputation.

“A friend of mine posted something about Islam on Facebook one day. About culture and religion, and how they change in time. My opinion was that culture was something different from religion, and after replying to his comment, the conversation ended there.”

However, when Jane woke up and checked her phone the next morning, she saw that one particular user had sent her a stream of offensive private messages about her comment from the previous day. He had also commented in the same tone on all her public Facebook posts.

“At first I thought ‘don’t you have a life?’ And thought that if he was just offending me, I could just block him and get over that,” says Jane, “but then I found out he had taken screengrabs of that conversation with my friend and was sending them to all of my contacts, claiming I was anti-Muslim and saying very bad things about me. I felt scared.”

Jane says she had something she didn’t want anyone to know, and somehow that person discovered that and he started revealing her secret to all her friends and family. “I could handle anything and everything,” says Jane, “but not this one thing.”

When her friends started questioning her about the matter, she denied it, telling them the person was just an extremist who wanted to bring Sharia law to America and Europe.

“Luckily, I think they believed me and ignored that message. I think that, after multiple reporting actions from me and my friends, his account was finally suspended.”

We asked Jane if she thinks something like Squadbox would have been helpful in preventing or alleviating the consequences of her online abuse, and she had mixed feelings about it.

“It’s a good idea to have friends looking after you on social media,” she says, “but as happened to me, I don’t think people would want their friends to know certain secrets that abusers could expose by violating their privacy. I think everyone has something they wouldn’t want even their friends to know, right?”

This privacy issue has been taken into consideration by the Squadbox team; people might object to others reading their emails, even close friends. According to the Squadbox survey, however, most people said they would be comfortable trusting a friend in that regard.

 

But Madelaine Hanson, who has also suffered online abuse on Facebook from an individual going under the name of 'Ali Saed', says that “friendsourcing” ideas like Squadbox could genuinely help in fighting online harassment, and that she is already doing something similar on Facebook.

“I have a pretty big following on Facebook, and there's this guy who has repeatedly sent me abusive messages and tried to block me, along with other friends, because we have, in his view, 'mocked' Islam.

“We have a Facebook group watching out for him, he's threatened to kill people too. There's about ten of us, and we watch for when he makes a new profile and posts scary comments.”

For these reasons, Mrs Hanson thinks Squadbox is a really good idea and strongly believes it could work, since “the best moderators are human moderators”.

But what about the harm to the bystander friends who step in to shield us? “It might be particularly draining or difficult for those people,” says Mrs Zhang, “and if it’s just as traumatising for them, we didn’t really solve the problem, we just spread it further.”

However, of the five pairs of friends the MIT team based their research on, four said that harassment not aimed at them directly would not hurt as much. All of them said they would moderate for another person to prevent them being harassed, and would consider a mutually moderated arrangement with someone.

“The reason why friends could be more helpful than strangers in moderating a personal inbox,” says Mrs Zhang, “is that they know the context where the harassment is taking place. That would give them the right tools to make an informed decision.”

 

 

A solution?

So what is the most effective way to fight online harassment? Can “friendsourcing” be considered a solution, or could further training make AI the more reliable option?

“I would say both ways of fighting online harassment should exist,” Mrs Zhang says. “You need platforms to take action, but you also need users to be able to act themselves to manage their own contextual and personal situations. Platform action can take a more generic approach, but we strongly believe decentralised tools like ours should also exist.”

From a victim’s perspective, though, the picture looks different. “I believe world governments should interfere more in cases of social media harassment,” Jane says. “If I report someone for something bad, the police should intervene immediately. We might not necessarily get the results that we want, but people would be more afraid of behaving like that online.”

“It’s about people understanding the ramifications of their actions,” Mr Gray says. “If people can see the damage that they’re doing by behaving that way, that’s the way forward.”



