Prototypes
Participants from the policy design workshops produced 11 prototypes that can help create safer online experiences for women and tackle online gender-based violence (OGBV).
Fictional platforms were created for participants to build their solutions on. These apps gave participants the opportunity to think about and design solutions for what is necessary, rather than just what is currently possible.
The prototypes built during the workshops were based on a set of personas of highly visible women online.
Calm the Crowd
Calm the Crowd offers users more granular control by prompting them to check their settings when a spike in abuse is detected. They can then apply granular controls, such as blocking, muting or restricting accounts the platform flags as likely to be inauthentic. Users can also create their own keyword filters for the replies and comments they see, and choose the types of accounts they want to hide, see or limit replies from.
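To make the 'spike in abuse' trigger concrete, here is a minimal TypeScript sketch of how a detector might decide when to prompt a settings review. The thresholds, window shape and function name are illustrative assumptions, not part of the prototype specification.

```typescript
// Hypothetical sketch: prompt a settings review when mentions spike
// far above a user's recent baseline. Thresholds are illustrative.
interface MentionWindow {
  timestamp: number; // start of the window, in ms since epoch
  count: number;     // abusive-or-flagged mentions in the window
}

function shouldPromptSettingsReview(
  history: MentionWindow[],   // trailing windows, oldest first
  current: MentionWindow,
  spikeFactor = 5,            // "spike" = 5x the recent average
  minCount = 20               // ignore spikes on very quiet accounts
): boolean {
  if (history.length === 0) return current.count >= minCount;
  const avg = history.reduce((sum, w) => sum + w.count, 0) / history.length;
  return current.count >= minCount && current.count >= avg * spikeFactor;
}

// Example: a user who normally gets ~3 mentions per hour suddenly gets 40.
const baseline = [3, 2, 4, 3].map((count, i) => ({ timestamp: i, count }));
console.log(shouldPromptSettingsReview(baseline, { timestamp: 4, count: 40 })); // true
```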

This prototype was designed for the persona Yvonne.
Com Mod
Com Mod allows users to delegate reviewing and reporting abuse to trusted communities/contacts at a granular level (e.g. per post, or for a specific amount of time). This solution builds on the idea of shared responsibility, reducing the burden on the user who is under attack or receiving abuse.
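As an illustration of how delegation might be scoped 'per post, for a specific amount of time', here is a hedged TypeScript sketch. The field names and permission flags are assumptions for illustration.

```typescript
// Hypothetical sketch of a scoped, time-limited moderation delegation,
// matching the "per post, for a specific amount of time" idea.
type DelegationScope =
  | { kind: 'account' }                 // whole account
  | { kind: 'post'; postId: string };   // a single post

interface DelegationGrant {
  delegateId: string;     // trusted contact or community given access
  scope: DelegationScope;
  expiresAt: Date;        // delegation lapses automatically
  canReport: boolean;     // may file reports on the user's behalf
  canHideReplies: boolean;
}

function isActive(grant: DelegationGrant, now = new Date()): boolean {
  return now < grant.expiresAt;
}

// Example: delegate review of one post to a trusted contact for 48 hours.
const grant: DelegationGrant = {
  delegateId: 'trusted-contact-1',
  scope: { kind: 'post', postId: 'post-123' },
  expiresAt: new Date(Date.now() + 48 * 60 * 60 * 1000),
  canReport: true,
  canHideReplies: true,
};
console.log(isActive(grant)); // true until the 48-hour window closes
```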

This prototype was designed for the persona Amy.
Image Shield
Image Shield gives users more control over their content and images. A notification system flags when the system recognizes the user in a video posted by an external account; the user can then review the video or dismiss the notification. Users also have the option to delegate reviewing and reporting any abuse to trusted communities/contacts, and can collect and archive any flagged content with a date stamp, platform, name, and flag filter.
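One way to picture the archive is as a record per flagged item. This TypeScript sketch assumes a shape for that record based on the fields named above (date stamp, platform, name, flag filter); the status values are invented.

```typescript
// Hypothetical sketch of the archive entry Image Shield might keep for
// each piece of flagged content.
interface ArchivedFlag {
  dateStamp: Date;       // when the content was flagged
  platform: string;      // where the video or image was posted
  accountName: string;   // the external account that posted it
  flagFilter: string;    // which detection rule matched, e.g. 'face-match'
  contentUrl: string;    // link to the flagged content
  status: 'pending-review' | 'reported' | 'dismissed';
}

// Reviewing a notification either escalates the entry or dismisses it.
function review(entry: ArchivedFlag, keep: boolean): ArchivedFlag {
  return { ...entry, status: keep ? 'reported' : 'dismissed' };
}
```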

This prototype was designed for the persona Mouna.
Reporting 2.0
Reporting 2.0 offers an improved reporting flow that allows users to easily access information and effectively communicate the full scope of the abuse they are experiencing. It provides easy access to key terminology and policies: for example, a hover button over each category of abuse gives a short explanation of the relevant policies or community guidelines, so the user can be sure they are reporting abuse according to the company's community guidelines. It also offers the ability to add contextual information to a report, including geographical, cultural and linguistic nuance, and to file the report in the original language of the abuse.
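A possible shape for the richer report payload is sketched below in TypeScript. The field names are assumptions; only the ideas (policy reference, contextual nuance, original language) come from the prototype description.

```typescript
// Hypothetical sketch of the richer report payload Reporting 2.0 describes.
interface AbuseReport {
  category: string;           // e.g. 'harassment', chosen from platform policy
  policyReference: string;    // the community-guideline section shown on hover
  context?: string;           // optional geographical/cultural/linguistic notes
  abuseLanguage: string;      // ISO 639-1 code of the original abuse, e.g. 'hi'
  reportLanguage: string;     // language the report itself is written in
  contentUrls: string[];      // the posts being reported
}
```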

This prototype was designed for the persona Karishma.
Report Hub
Report Hub provides a reporting dashboard that allows users to track the status of all their reports using key milestones on a timeline, for example 'report made', 'report under review', 'review complete' and 'decision appealed'. Timestamps help the user understand how the process is progressing. The feature is accessible from the homepage at all times. Users also have the option to save draft reports, add further evidence, or even hand the report over to a trusted contact if they are feeling overwhelmed.
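The milestone timeline suggests a simple ordered state machine. This TypeScript sketch uses the milestone names from the description; the `advance` helper and its ordering rule are assumptions.

```typescript
// Hypothetical sketch of the milestone timeline Report Hub tracks.
type Milestone =
  | 'report made'
  | 'report under review'
  | 'review complete'
  | 'decision appealed';

interface TimelineEvent {
  milestone: Milestone;
  at: Date; // timestamp shown to the user
}

const ORDER: Milestone[] = [
  'report made', 'report under review', 'review complete', 'decision appealed',
];

// Append the next milestone, enforcing the order of the timeline.
function advance(timeline: TimelineEvent[], next: Milestone): TimelineEvent[] {
  const lastIdx = timeline.length
    ? ORDER.indexOf(timeline[timeline.length - 1].milestone)
    : -1;
  if (ORDER.indexOf(next) <= lastIdx) {
    throw new Error(`'${next}' cannot follow the current milestone`);
  }
  return [...timeline, { milestone: next, at: new Date() }];
}
```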

This prototype was designed for the persona Paula.
Reporteroo
Reporteroo is a reporting dashboard that provides transparency and accountability during and after the reporting process. It gives users specific prompts based on the category of abuse, so they can provide the context and information the platform needs to respond more effectively. Users can flag whether they are reporting in the same language as the abuse and, if not, specify which languages they are translating from and to. A toggle lets users choose whether the content of their reports is visible or not.
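A minimal TypeScript sketch of the category-driven prompts and language flags might look as follows. The categories, prompt wording and field names are invented for illustration.

```typescript
// Hypothetical sketch of category-specific prompts and the language flag.
const PROMPTS_BY_CATEGORY: Record<string, string[]> = {
  'threats': ['Does the content mention a location?', 'Is a time implied?'],
  'impersonation': ['Link the authentic profile being impersonated.'],
  'hate speech': ['Which group or identity is being targeted?'],
};

interface ReporterooReport {
  category: string;
  answers: string[];              // responses to the category prompts
  sameLanguageAsAbuse: boolean;   // reporting in the abuse's language?
  translatedFrom?: string;        // set only when translating
  translatedTo?: string;
  contentVisible: boolean;        // the visibility toggle
}

function promptsFor(category: string): string[] {
  return PROMPTS_BY_CATEGORY[category] ?? [];
}
```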

This prototype was designed for the persona Karishma.
One Click
One Click allows users to set a time-limited safety mode that can be toggled on when they want to shield themselves from potential pile-ons. It can be accessed and enabled in 'one click' from Settings and from pages throughout the platform, such as the feed, post or profile pages. Safety mode features could include disabling comments or activating a 'delay period' for comments, activating a profanity or keyword filter, flagging keywords, and disabling tags.
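Safety mode reads naturally as a small, expiring configuration object. This TypeScript sketch assumes the field names and a default delay period; the feature list mirrors the description above.

```typescript
// Hypothetical sketch of a time-limited safety mode configuration.
interface SafetyMode {
  enabledUntil: Date;            // mode switches off automatically
  disableComments: boolean;
  commentDelayMinutes?: number;  // 'delay period' before comments appear
  keywordFilter: string[];       // profanity / keyword filter terms
  disableTags: boolean;
}

function enableSafetyMode(hours: number): SafetyMode {
  return {
    enabledUntil: new Date(Date.now() + hours * 60 * 60 * 1000),
    disableComments: false,
    commentDelayMinutes: 15,
    keywordFilter: [],
    disableTags: true,
  };
}

// 'One click' from any page could call this with a sensible default.
const mode = enableSafetyMode(24);
console.log(mode.enabledUntil > new Date()); // true for the next 24 hours
```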

This prototype was designed for the persona Yvonne.
GateWay
GateWay allows users to alert a platform that they are being attacked. It gives users the option to request protected status, flag abusive content, collect and archive that content as evidence, and generate and share evidence reports. Users also have the option to connect with trusted and verified Civil Society Organisations for support in handling online abuse.
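As a rough illustration, the evidence-report step could render archived items into a shareable plain-text summary. The record shape and report format in this TypeScript sketch are assumptions.

```typescript
// Hypothetical sketch of how GateWay might generate a shareable evidence
// report from archived abusive content.
interface EvidenceItem {
  collectedAt: Date;
  platform: string;
  contentUrl: string;
  note?: string;
}

// Render a plain-text evidence report that can be shared with the platform
// or with a trusted Civil Society Organisation.
function generateEvidenceReport(items: EvidenceItem[]): string {
  const lines = items.map(
    (i) => `${i.collectedAt.toISOString()} | ${i.platform} | ${i.contentUrl}` +
           (i.note ? ` | ${i.note}` : '')
  );
  return ['EVIDENCE REPORT', `Items: ${items.length}`, ...lines].join('\n');
}
```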

This prototype was designed for the persona Mouna.
iMatter
iMatter provides a chat interface to support users through the reporting process. It is accessed from the homepage, where the chatbot confirms that a report has been received. Clicking the notification opens a chat that guides the user through the status of their report and offers resources such as community support and the opportunity to chat with a psychologist. A follow-up conversation checks how the user is doing, asks if they need further support, and offers the option to leave feedback about their experience of the reporting process.
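The chatbot flow can be pictured as a linear state machine. This TypeScript sketch paraphrases the conversation steps from the description; the step names and transition table are assumptions.

```typescript
// Hypothetical sketch of the iMatter conversation flow.
type ChatStep =
  | 'report-received'
  | 'status-update'
  | 'offer-resources'     // community support, chat with a psychologist
  | 'follow-up-check-in'
  | 'collect-feedback';

const NEXT_STEP: Record<ChatStep, ChatStep | null> = {
  'report-received': 'status-update',
  'status-update': 'offer-resources',
  'offer-resources': 'follow-up-check-in',
  'follow-up-check-in': 'collect-feedback',
  'collect-feedback': null, // conversation complete
};

function nextStep(current: ChatStep): ChatStep | null {
  return NEXT_STEP[current];
}
```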

This prototype was designed for the persona Paula.
Wellbeing Check-Up
Wellbeing Check-Up provides a short, multiple-choice pop-up risk assessment that can suggest settings to modify based on the user's experience. It allows users to self-assess their risk, and allows the platform to prompt users about risk and to check the current risk level of a user's profile using indicators set by the user.
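To show how answers might map to suggested settings, here is a hedged TypeScript sketch. The questions, point weights and thresholds are invented; the prototype does not specify a scoring model.

```typescript
// Hypothetical sketch of scoring the multiple-choice check-up and mapping
// the result to suggested settings changes.
interface Answer {
  question: string;
  riskPoints: number; // each choice carries a weight set by the user/platform
}

type Suggestion = 'enable keyword filter' | 'limit replies' | 'review blocklist';

function suggestSettings(answers: Answer[]): Suggestion[] {
  const score = answers.reduce((sum, a) => sum + a.riskPoints, 0);
  const suggestions: Suggestion[] = [];
  if (score >= 3) suggestions.push('enable keyword filter');
  if (score >= 5) suggestions.push('limit replies');
  if (score >= 8) suggestions.push('review blocklist');
  return suggestions;
}

// Example: two higher-risk answers trigger the first two suggestions.
console.log(suggestSettings([
  { question: 'Received threats this week?', riskPoints: 3 },
  { question: 'Upcoming public appearance?', riskPoints: 2 },
])); // ['enable keyword filter', 'limit replies']
```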

This prototype was designed for the persona Yvonne.