Technology

Hollabot: Can a robot teach people to be less abusive to women online?

The app, which seeks to educate users who are abusive on social media, is just one initiative being developed by the nonprofit Feminist Internet collective

July 01, 2019
What would an internet designed by women look like? Feminist Internet has some ideas. Photo: Prospect composite

From social media trolls spewing vile rape threats to comment-board discussions that turn toxic at the mere mention of gender equality, the abuse of women online has become depressingly commonplace. Figures show that more than one in five women in the UK have experienced abuse online, usually related to physical or sexual violence.

If the internet were built around feminist and equality-led principles, would such abusive behaviour be eradicated? Feminist Internet, a boundary-pushing nonprofit, aims to answer this thorny question.

The London-based collective was founded in 2017 and includes designers, writers, artists and directors among its ranks. One of its key missions is to democratise the internet and construct a more equal space for women and other marginalised groups through creative, critical practice.

A prime example is the Hollabot, an app that detects abusive online behaviour towards women and then compels the perpetrator to carry out a form of community service. Although it is still at the prototype stage (further development, and presumably some kind of collaboration with social media sites, would be required), the Hollabot perfectly illustrates the scale of the collective’s ambition: a perpetrator would be unable to use their social media account unless they completed a course educating them about the consequences of their actions.

One of Feminist Internet’s co-founders, Eden Clark, explains how the idea for the Hollabot was conceived: “We didn’t simply want people to be ‘cancelled’ if they were abusive online. We thought it would be a good idea for them to go through an educative process that would inform those who have been abusive on why that behaviour isn’t OK.”

Clark adds that the Hollabot project has been well-received, although it will need much more work before the concept can become a reality.

Feminist Internet’s manifesto states that education is the key to eradicating ignorance and prejudice. “A blessing (and a curse) of the internet is that it is a mass of information from across the world, that can be accessed at the click of a button,” Clark elaborates.

Clark points to Instagram as an example of a tech company grappling with the problem of abusive behaviour. “I have noticed recently that Instagram now monitors curse words, and asks ‘are you sure?’ when you’re about to post, and gives you the option for more information, which could be a good step in the right direction. I’m not sure how many trolls that’s going to stop though … they’re relentless,” she says wryly.

Instagram seems to be taking the lead among the big social media sites in combatting toxic abuse on its platform. According to CBS, it is trialling a system that publicly hides the number of likes a user receives. The company hopes this will make Instagram a less competitive space, which in turn will lead to less bullying and trolling.

Social media is not the only area where Feminist Internet is making an impact. It is also deeply invested in tackling bias in artificial intelligence. In recent months, there has been increased focus on the way that AI fuels sexism.

For example, Apple’s digital assistant Siri and Amazon’s Alexa use feminine voices and are programmed to be subservient. This can reinforce damaging stereotypes about women’s roles in society, according to a Unesco study published in May.

The study was not a revelation for Feminist Internet, which had already hosted a workshop entitled “designing a feminist Alexa” last October. More recently, the collective has developed a chatbot named F’xa, which educates people on bias in search engines, recruitment algorithms and voice technology. It also dispenses advice on how to deal with such issues. When asked “How does bias creep into AI systems?”, for instance, it replies: “Bias occurs in AI systems when they reflect human biases held by the people involved in coding, collecting, selecting, or using data to train the algorithms that power the AI.”

Clark says: “People have responded really positively [to F’xa] so far, and we’ve been pleasantly surprised about the amount of interest from the media. People have appreciated that F’xa was created by a diverse team, given the current diversity crisis in the AI sector. The language is quite straightforward too, and we worked really hard to make a complex topic understandable so that may have contributed to the positive responses.”

https://www.youtube.com/watch?v=t9HA1ZOBrZk&t=2s

As sexism in technology comes under closer scrutiny, tech firms will no longer be able to simply ignore the problem, particularly as there are still so many more ways to make the internet a safer and more welcoming place. It’s refreshing to see a collective such as Feminist Internet highlight this in such a forward-thinking way.

But if we want to see significant change, big tech needs to listen and respond. As Clark says: “Putting more women from diverse backgrounds in the design process for tech would really help. The more diverse people creating tech the better.”