

Participants in the policy design workshops produced 11 prototypes that can help create safer online experiences for women and tackle online gender-based violence (OGBV).


Fictional platforms were created for participants to build their solutions on. These apps gave participants the opportunity to design for what is necessary, rather than only for what is currently possible.


The prototypes built during the workshops were based on a set of personas representing highly visible women online.


Calm the Crowd

Calm the Crowd offers users more granular control settings by prompting them to check their settings when a spike in abuse is detected. They can then apply granular controls, such as blocking, muting or restricting accounts the platform flags as likely to be inauthentic. It also lets users create their own keyword filters for the replies and comments they see, and choose the types of accounts they want to hide, see or limit replies from.
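As a rough sketch of how such granular controls might hang together in code (all type names, fields and values below are illustrative assumptions, not taken from the prototype itself):

```typescript
// Hypothetical settings model for Calm the Crowd, plus a predicate that
// decides whether a reply should be shown under those settings.

type AccountType = "verified" | "follower" | "new" | "flagged_inauthentic";

interface CrowdControls {
  mutedKeywords: string[];           // user-defined keyword filters
  hiddenAccountTypes: AccountType[]; // account types to hide replies from
  blockFlaggedInauthentic: boolean;  // restrict accounts flagged as likely inauthentic
}

interface Reply {
  text: string;
  authorType: AccountType;
}

function shouldShowReply(reply: Reply, controls: CrowdControls): boolean {
  if (controls.blockFlaggedInauthentic && reply.authorType === "flagged_inauthentic") {
    return false;
  }
  if (controls.hiddenAccountTypes.includes(reply.authorType)) {
    return false;
  }
  const text = reply.text.toLowerCase();
  return !controls.mutedKeywords.some((kw) => text.includes(kw.toLowerCase()));
}
```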

An illustration of Yvonne's profile

This prototype was designed for the persona Yvonne.



Viral Notification

Viral Notification provides greater user controls on a video-sharing site. Users can initiate Viral Mode, which was designed in response to a scenario where someone's content had unexpectedly gone viral. In Viral Mode, users can turn off commenting or downloads via a single, clearly visible button.
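A minimal sketch of how a spike-triggered Viral Mode could work, assuming a hypothetical threshold and settings shape:

```typescript
// Illustrative only: detect an engagement spike, then lock down comments
// and downloads in one step. Names and the threshold are assumptions.

interface PostSettings {
  commentsEnabled: boolean;
  downloadsEnabled: boolean;
}

interface PostStats {
  viewsLastHour: number;
  typicalViewsPerHour: number;
}

const SPIKE_FACTOR = 20; // hypothetical: 20x normal traffic counts as "going viral"

function isGoingViral(stats: PostStats): boolean {
  return stats.viewsLastHour > stats.typicalViewsPerHour * SPIKE_FACTOR;
}

function enableViralMode(settings: PostSettings): PostSettings {
  // One clear action: disable commenting and downloads together.
  return { ...settings, commentsEnabled: false, downloadsEnabled: false };
}
```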

An illustration of Paula's profile

This prototype was designed for the persona Paula.


Com Mod

Com Mod gives users the option to delegate reviewing and reporting abuse to trusted communities or contacts at a more granular level (e.g. per post, or for a specific amount of time). This solution builds on the idea of shared responsibility, reducing the burden on a user who is under attack or receiving abuse.
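One way the time-limited, per-post delegation could be represented, with all names and fields as illustrative assumptions:

```typescript
// Hypothetical delegation record: a trusted contact may review and report
// abuse on the owner's behalf, scoped to a post and a time window.

interface ModerationDelegation {
  ownerId: string;
  delegateId: string;  // trusted contact or community account
  postId?: string;     // absent = applies to the whole account
  expiresAt: Date;     // delegation is granted for a specific period
}

function canModerate(
  delegation: ModerationDelegation,
  userId: string,
  postId: string,
  now: Date
): boolean {
  return (
    delegation.delegateId === userId &&
    now < delegation.expiresAt &&
    (delegation.postId === undefined || delegation.postId === postId)
  );
}
```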

An illustration of Amy's profile

This prototype was designed for the persona Amy.


Image Shield

Image Shield gives users more control over their content and images. A notification system flags when the system recognises the user in a video posted by an external account; they can then review the video or dismiss the notification. Users also have the option to delegate reviewing and reporting any abuse to trusted communities or contacts, and can collect and archive any flagged content with a date stamp, platform, account name and flag filter.
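A minimal sketch of the archive entry such a system might keep; the field names simply mirror the metadata listed above and are otherwise assumptions:

```typescript
// Illustrative archive record for flagged content, with a filter helper
// for browsing the archive by flag category.

type FlagCategory = "impersonation" | "manipulated_media" | "harassment" | "other";

interface ArchivedFlag {
  dateStamp: Date;
  platform: string;    // where the video was posted
  accountName: string; // the external account that posted it
  flag: FlagCategory;  // used as a filter when browsing the archive
  contentUrl: string;
}

function filterArchive(archive: ArchivedFlag[], flag: FlagCategory): ArchivedFlag[] {
  return archive.filter((entry) => entry.flag === flag);
}
```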

An illustration of Mouna's profile

This prototype was designed for the persona Mouna.


Reporting 2.0

Reporting 2.0 offers an improved reporting flow that allows users to easily access information and effectively communicate the full scope of the abuse they are experiencing. It provides easy access to key terminology and policies: for example, hovering over a category of abuse reveals a short explanation of the relevant policies or community guidelines, so the user can be sure they are reporting abuse according to the company's community guidelines. It also lets users add contextual information to a report, including geographical, cultural and linguistic nuance, and file the report in the original language of the abuse.
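A hypothetical shape for the richer report payload this flow implies (field names are assumptions, not the prototype's actual schema):

```typescript
// Illustrative report payload with room for contextual nuance and the
// original language of the abuse.

interface AbuseReport {
  category: string;     // chosen from categories explained via hover text
  contentIds: string[]; // the posts or comments being reported
  context?: {
    geographical?: string; // e.g. a region-specific meaning of a slur
    cultural?: string;
    linguistic?: string;
  };
  reportLanguage: string; // language the report is written in, e.g. "hi" or "pt-BR"
  abuseLanguage: string;  // language of the abusive content itself
}

// Flag for reviewers when the report and the abuse are in different languages.
function needsTranslationNote(report: AbuseReport): boolean {
  return report.abuseLanguage !== report.reportLanguage;
}
```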

An illustration of Karishma's profile

This prototype was designed for the persona Karishma.


Report Hub

Report Hub provides a reporting dashboard that allows users to track the status of all their reports using key milestones on a timeline, for example 'report made', 'report under review', 'review complete' and 'decision appealed'. Timestamps help the user understand how long each stage of the process has taken. The feature is accessible from the homepage at all times. Users also have the option to save draft reports, add further evidence, or even hand a report over to a trusted contact if they are feeling overwhelmed.
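One way to model the milestone timeline is as a small state machine; the milestone names follow the text above, while everything else is an illustrative assumption:

```typescript
// Illustrative state machine for the report timeline, recording a
// timestamp for each milestone.

type Milestone =
  | "report_made"
  | "report_under_review"
  | "review_complete"
  | "decision_appealed";

const NEXT: Record<Milestone, Milestone[]> = {
  report_made: ["report_under_review"],
  report_under_review: ["review_complete"],
  review_complete: ["decision_appealed"],
  decision_appealed: [],
};

interface TrackedReport {
  id: string;
  history: { milestone: Milestone; at: Date }[]; // the timestamps shown on the timeline
}

// Every report starts at "report made", so the history is never empty.
function openReport(id: string, at: Date): TrackedReport {
  return { id, history: [{ milestone: "report_made", at }] };
}

function advance(report: TrackedReport, to: Milestone, at: Date): TrackedReport {
  const current = report.history[report.history.length - 1].milestone;
  if (!NEXT[current].includes(to)) {
    throw new Error(`Cannot move from ${current} to ${to}`);
  }
  return { ...report, history: [...report.history, { milestone: to, at }] };
}
```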

An illustration of Paula's profile

This prototype was designed for the persona Paula.


Reporteroo

Reporteroo is a reporting dashboard that provides transparency and accountability during and after the reporting process. It gives users specific prompts based on the category of abuse, so they can provide the context and information the platform needs to respond effectively. It also lets users flag whether they are reporting in the same language as the abuse and, if not, specify which languages they are translating from and to. A toggle lets users choose whether the content of their reports is visible.
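A minimal sketch of the category-specific prompts and the language and visibility options, with all prompt wording and names assumed for illustration:

```typescript
// Hypothetical prompt table and report shape for Reporteroo.

const PROMPTS = {
  threat: [
    "Does the message name a place or time?",
    "Has this account contacted you before?",
  ],
  impersonation: ["Link the genuine account being impersonated."],
};

interface ReporterooReport {
  category: keyof typeof PROMPTS;
  answers: string[];            // responses to the category's prompts
  sameLanguageAsAbuse: boolean; // reporting in the language of the abuse?
  translatedFrom?: string;      // set when sameLanguageAsAbuse is false
  translatedTo?: string;
  contentVisible: boolean;      // toggle: show report content or keep it hidden
}

function promptsFor(category: keyof typeof PROMPTS): string[] {
  return PROMPTS[category];
}
```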

An illustration of Karishma's profile

This prototype was designed for the persona Karishma.


One Click

One Click allows users to set a time-limited safety mode that can be toggled on when they want to shield themselves from potential pile-ons. It can be accessed and enabled in 'one click' from Settings and from pages throughout the platform, such as the feed, post and profile pages. Safety mode features could include disabling comments or activating a 'delay period' for comments, activating a profanity or keyword filter, flagging keywords, and disabling tags.
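A rough sketch of what a one-click, time-limited safety mode preset could look like, assuming hypothetical defaults:

```typescript
// Illustrative safety-mode preset: one call applies a protective
// configuration for a limited window. All defaults are assumptions.

interface SafetyMode {
  expiresAt: Date;
  disableComments: boolean;
  commentDelayMinutes?: number; // hold comments for review before they appear
  profanityFilter: boolean;
  flaggedKeywords: string[];
  disableTags: boolean;
}

function oneClickEnable(hours: number): SafetyMode {
  return {
    expiresAt: new Date(Date.now() + hours * 60 * 60 * 1000),
    disableComments: false,
    commentDelayMinutes: 15,
    profanityFilter: true,
    flaggedKeywords: [],
    disableTags: true,
  };
}

function isActive(mode: SafetyMode, now: Date = new Date()): boolean {
  return now < mode.expiresAt;
}
```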

An illustration of Yvonne's profile

This prototype was designed for the persona Yvonne.


GateWay

GateWay allows users to alert a platform that they are being attacked. It offers options to request protected status, flag abusive content, collect and archive abusive content as evidence, and generate and share evidence reports. Users can also connect with trusted and verified civil society organisations for support in handling online abuse.
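As a loose illustration, the evidence report such a feature implies might be structured like this (all names and fields are assumptions):

```typescript
// Hypothetical evidence report assembled from archived abusive content,
// shareable with a verified civil society organisation.

interface EvidenceItem {
  url: string;
  capturedAt: Date;
  note?: string;
}

interface EvidenceReport {
  userId: string;
  protectedStatusRequested: boolean;
  items: EvidenceItem[];
  sharedWith: string[]; // e.g. the ID of a verified civil society organisation
}

function shareWithCso(report: EvidenceReport, csoId: string): EvidenceReport {
  return { ...report, sharedWith: [...report.sharedWith, csoId] };
}
```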

An illustration of Mouna's profile

This prototype was designed for the persona Mouna.


iMatter

iMatter provides a chat interface to support users through the reporting process. It is accessed from the homepage, where the chatbot states that a report has been received. Clicking it opens a chat that guides the user through the status of their report and offers resources such as community support and the opportunity to chat with a psychologist. A follow-up conversation checks how the user is doing, asks whether they need further support, and offers the option to leave feedback on their experience of the reporting process.
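The conversation could be modelled as a simple scripted flow; the step names below are assumptions based on the description above:

```typescript
// Illustrative scripted flow for the iMatter support chat, from
// acknowledging the report through to the follow-up check-in.

type ChatStep =
  | "report_received"
  | "status_update"
  | "offer_resources"
  | "follow_up"
  | "feedback";

const FLOW: Record<ChatStep, ChatStep | null> = {
  report_received: "status_update",
  status_update: "offer_resources",
  offer_resources: "follow_up", // community support or a chat with a psychologist
  follow_up: "feedback",        // "How are you doing? Do you need further support?"
  feedback: null,               // end of conversation
};

function nextStep(step: ChatStep): ChatStep | null {
  return FLOW[step];
}
```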

An illustration of Paula's profile

This prototype was designed for the persona Paula.


Wellbeing Check-Up

Wellbeing Check-Up provides a short, multiple-choice pop-up risk assessment that can suggest settings to modify based on the user's experience. It allows users to self-assess their risk, and it lets the platform prompt them about risk and gauge the risk level of their profile at a given time, using indicators set by the user.
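A minimal sketch of how a weighted check-up score could drive suggestions against a user-set threshold (questions, weights and suggestions are all assumed for illustration):

```typescript
// Illustrative scoring for the check-up: each multiple-choice answer
// carries a weight, and crossing the user's own threshold triggers
// suggested setting changes.

interface CheckUpAnswer {
  question: string;
  riskPoints: number; // weight of the option the user selected
}

function assessRisk(answers: CheckUpAnswer[], userThreshold: number): string[] {
  const score = answers.reduce((sum, a) => sum + a.riskPoints, 0);
  if (score < userThreshold) {
    return []; // below the user's own risk indicator: nothing to suggest
  }
  return [
    "Enable keyword filters",
    "Limit who can reply",
    "Review tagging settings",
  ];
}
```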

An illustration of Yvonne's profile

This prototype was designed for the persona Yvonne.