PART TWO: REVIEWING HARASSMENT REPORTS
The WAM! project offers broader insights into the challenges of handling harassment reports. Teams that support Twitter and other social media companies carry out this largely invisible work twenty-four hours a day at large scales. Since few details of these content moderation processes are public knowledge, it can be difficult to assess the effort such work entails and the impact it has on reviewers, especially in aggregate.
For the WAM! project, the process starts with the authorized reporter relationship and the WAM! reporting tool. From there the process shifts to assessment and conversations with both reporters and the Twitter review team. Estimates based on WAM!’s volunteer experience can be considered a baseline of the effort involved: platforms are likely to have implemented support for harassment reporting that is at least as efficient as WAM!’s review process.
2.1. THE AUTHORIZED REPORTER RELATIONSHIP
Typically when people report harassment on Twitter, they do so directly to the company, either via the reporting features on the Twitter platform or through a form on the Twitter website. In addition to these processes for individual reporting, Twitter has given a small number of organizations authorized reporter status. This enables these organizations to receive, review, and escalate reports to Twitter. Authorized reporters manage independent intake and assessment systems separate from Twitter’s systems. Twitter’s review team takes special note of reports arriving through these channels.
This project was possible because Twitter granted WAM! authorized reporter status. Any help WAM! was able to provide depended on that relationship, and the data collected and the findings of this document are grounded in it. WAM! is only one of Twitter’s authorized reporters, and Twitter is not the only platform to make use of this kind of relationship. What are the consequences of the authorized reporter model? What does it achieve and what does it complicate?
BENEFITS
For Twitter and other platforms, the authorized reporter relationship is a way to handle context and the challenges of determining harassment—more people with relevant expertise can review reports. For organizations like WAM!, it’s an opportunity to serve as an advocate with powers that extend beyond platform reporting. In many cases, WAM! reviewers provided close personal attention and emotional support. WAM! also had the ability to direct individuals to resources outside the Twitter platform, such as a lawyer who had volunteered assistance. Extending that advocacy, authorized reporters are also uniquely positioned to conduct and publish research like this report, with limited risks to a platform’s staff or trade secrets.
COSTS
The authorized reporter relationship also carries costs and risks. The relationship is unstable: The platform determines who is granted authorized reporter status, how the relationship works in practice, and whether or not it continues. Quality of support will also vary across authorized reporters. As a model, the authorized reporter relationship gives greater attention to groups that have associated advocates and experts, potentially at a cost to individuals reporting directly to Twitter. For authorized reporter organizations, the process requires substantial labor and can have mental health consequences for reviewers.
A NEED FOR GREATER CLARITY
The authorized reporter relationship is not widely understood. Media articles about the WAM! project repeatedly mischaracterized the authorized reporter relationship. (A detailed analysis of media coverage of the WAM! project is included in Appendix 2.) Email exchanges between reporters and WAM! reviewers show that the boundaries between WAM! and Twitter were not always clear to reporters: Some reporters of harassment assumed WAM! had access to the Twitter system, specifically prior reports and correspondence or deleted/protected tweets. This lack of clarity may cause people to misdirect reports of harassment and have unreasonable expectations of authorized reporter organizations, leading in turn to broken exchanges and abandoned reports.
This lack of clarity extends to those in the authorized reporter relationship. The WAM!–Twitter correspondence shows some unevenness in how Twitter manages the authorized reporter relationship, particularly with regard to reports from people who were not the receivers of harassment (bystanders and delegates). Sometimes these reports were processed via WAM!’s authorized status. Sometimes WAM! received emails from Twitter requesting the name and @handle of the reporter, or requiring that the receiver of the harassment file a report. WAM! staff describe being unable to determine when either outcome would occur.
2.2. THE WAM! REPORTING TOOL
People who reported harassment to WAM! did so through an online form created and hosted by WAM!.  The design of the WAM! reporting form, accessible only through the WAM! website, shaped the data the project received and consequently the analysis contained in this document.
The WAM! reporting form asked reporters to categorize the type of harassment they were reporting, drawing on eight preassigned categories. The form’s radio buttons allowed reporters to select only one of these eight. This demonstrably influenced the categorization of harassment. Many reporters used the text box accompanying the final ‘Other’ category to indicate that the harassment they were reporting fell into a category not included in the preceding seven, such as spamming, stalking, inciting others to online harassment, and encouraging suicide. Many also used the ‘Other’ category to indicate that the reported harassment fell into multiple categories. Reporters similarly pointed to multiple categories in the text box following the question ‘Please describe in detail the harassment you are receiving,’ as well as in later correspondence.
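The constraint described above (one radio button, one category per report) could be relaxed with a multi-select design. A minimal sketch, using hypothetical category labels rather than WAM!’s actual eight:

```python
# Sketch of a report record that permits multiple harassment categories.
# The category labels here are illustrative, not the WAM! form's actual set.
from dataclasses import dataclass, field

CATEGORIES = {
    "hate_speech", "threats", "doxxing", "impersonation",
    "revenge_porn", "false_information", "stalking", "other",
}

@dataclass
class HarassmentReport:
    reported_handle: str
    # Checkboxes (a set) rather than radio buttons (a single value)
    # let a reporter flag, e.g., both doxxing and threats at once.
    categories: set = field(default_factory=set)
    other_detail: str = ""  # free text for categories the form omits

    def is_multi_category(self) -> bool:
        return len(self.categories) > 1

report = HarassmentReport(
    reported_handle="example_user",
    categories={"doxxing", "threats"},
    other_detail="also encouraging suicide",
)
print(report.is_multi_category())  # True: a single radio button would hide this
```

A multi-select field would capture the overlapping categories that reporters instead had to squeeze into the ‘Other’ text box.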
An excerpt from the WAM! reporting tool; a complete version of the base form can be found in Appendix 3.
Notably, reporters may have also used the form in ways that departed from the intentions of the design. For example, though the WAM! reporting form did not explicitly allow anonymous reporting, individuals reporting on behalf of others could supply incorrect names and email addresses—the WAM! team’s declared process was to contact the receiver of the harassment rather than the reporter. (Similar practices were visible in the report trolling WAM! received.) Thus, although name and email were requested from all who reported on behalf of others, it is unknown how many of these reporters used a genuinely identifying name and email address.
The attention and emotional state of reporters may have also affected which of the reporting form’s directions reporters followed and how. The fourth question of the form asked reporters to ‘Enter Twitter handle being harassed (Do not include the @ symbol).’ This was paired with rollover text reading: (Do not include the @ symbol). Despite this instruction, numerous reporters entered the @ symbol. There are multiple possible explanations for this: Reporters may have understood the @ symbol as integral to Twitter handles. Reporters may have been giving only cursory attention to instructions. It is worth highlighting, however, that when reporting harassment, many reporters are likely to be in a high state of stress. Consequently, the ability to absorb directions and make decisions may be reduced. This, in turn, can have both data and design implications.
TWITTER’S REPORTING TOOLS
How does this compare with reporting directly to Twitter? Twitter updated its reporting tools soon after the WAM! project’s reporting period officially closed. The main visible changes of these updates were the introduction of bystander reporting and a streamlining of the reporting process. As this report was being finalized, Twitter began releasing images of further tool updates and announced that it will give its review team new capabilities for responding to accounts that engage in harassment or abuse. An update that allows reporters to choose to receive an email record of the harassment report to share with law enforcement has since followed. In brief, at the time of writing, Twitter offers multiple ways to report tweets and accounts from mobile and web, as well as support articles on online abuse, on supporting others during such abuse, and on trusted resources users can turn to for further help.
Reporting tools take two primary forms: an in-platform tool and a web form. The current in-platform reporting tool appears as a handful of consecutive screens, one question per screen. Except for the final screen, which offers a text box, it is composed entirely of required questions followed by predefined answers assigned to radio buttons. As with the WAM! form, these radio buttons allow only a single selection. The current web form consists of 9 questions, 8 of which are required; answer formats include both radio buttons and text boxes. This form includes a text box for further detail, including types of harassment not covered by the form’s predefined categories. It also requires different information from reporters than the in-platform tool does, including a general location and an electronic signature.
2.3. REVIEWING AND RESPONDING
Submitting the WAM! reporting form triggered the review process and established a conversation with WAM! reviewers. This in turn often led to escalation—that is, WAM!, as an authorized reporter, submitted a report to Twitter on the individual reporter’s behalf. Once reports were submitted to WAM!, they were automatically entered into the ticketing system used by WAM! reviewers to track the large number of cases reported (see Figure 1).
HOW WAM! HANDLED INCOMING REPORTS
With the submission of the reporting form, the larger reporting process clicked into motion. Reports were routed to the internal WAM! ticketing system, an automated note was sent to the receiver or reporter of harassment as appropriate,  and WAM! reviewers began the work of reviewing incoming reports and starting a conversation via email with the receiver of harassment.
When a new report arrived in the system, a single WAM! reviewer assessed whether it was genuine and included adequate information, following the links provided in the report and reviewing the Twitter timelines of the reported accounts. If the WAM! reviewer deemed the report genuine, the reporter (or receiver) would then receive a follow-up email from WAM!. If the report was deemed nongenuine, WAM! did not respond further.
In the course of deciding which reports to escalate to Twitter, the reviewer asked the receiver of harassment (sometimes but not always the reporter) for additional information or documentation as necessary. Each report was reviewed along common characteristics assessing the nature of the harassment and the risk to the harassment receiver (see Table: WAM! Criteria).
HOW MANY TICKETS DID WAM! IDENTIFY AS FAKE?
Out of all 594 incoming reports that were converted into tickets, WAM! volunteers judged 47% (277) to be fake. Most of these reports (nearly 250) arrived in a single day, when WAM! received a high volume of requests from a bot. Note that WAM! reviewers worked without the administrative privileges available to Twitter’s team of reviewers, making determinations of falsehood more challenging.
HOW LONG DID WAM! TAKE TO RESPOND TO REPORTS?
WAM! replied promptly: across the 355 tickets where WAM! replied, the average response time was 386 minutes (6.4 hours), with 75% of reports answered within 10 hours and every ticket answered in under 24 hours.
CONVERSATIONS BETWEEN WAM! REVIEWERS AND RECEIVERS OF HARASSMENT
This section, which combines quantitative analysis of conversation patterns and qualitative analysis of the kinds of exchanges that occurred, highlights three important types of exchange pattern: single contacts, broken exchanges, and extended exchanges. These terms do not imply inaction by WAM!; both ‘broken’ exchanges and extended exchanges were escalated to Twitter by reviewers.
Out of all 594 tickets, WAM! reviewers personally replied to 355 tickets after the initial automated response (60%), most often establishing a conversation with the person who was reportedly being harassed.
In the 285 cases in which WAM! did not send a personal response, most tickets did not receive any internal discussion from the WAM! team. Examples include duplicate submissions that were merged, or tickets that WAM! flagged as fake with no discussion. WAM! team members commented on average 3 times on the 17 tickets they discussed but did not reply to. Often this discussion was focused on assessment of the ticket as fake or not; tickets judged fake did not receive further communication.
In the shortest of these conversations, WAM! sent one follow-up message to the reported target of harassment. These 107 single contacts with no back-and-forth arose from the following situations:
• Reports immediately escalated by WAM! with no discussion, where Twitter took swift action in return, allowing WAM! to send a single response that Twitter had taken action
• Twitter having taken independent action to suspend the harassing account by the time WAM! looked at the incoming report
• Duplicate reports, often submitted by a second bystander
• Requests by WAM! for further information (especially in cases where the submitted evidence, such as screenshots, could not be accepted by Twitter) that were never returned
• Reports sent to WAM! where the target of harassment was handling the report through other channels, but wanted WAM! to incorporate their case into WAM!’s survey of harassment experience
• Reports focused on other social media platforms, such as Facebook
• Reports associated with cases where Twitter had previously denied WAM! requests
• Reporters asking for action that could not be escalated to Twitter
• 7 cases where WAM! said it would escalate the ticket to Twitter (including some duplicate reports from multiple sources) and get back within 24 hours, but never replied to the target with the outcome; all but two of these were eventually escalated to Twitter
• One case where WAM! received evidence in a language other than English and was unable to offer support
A broken exchange refers to an exchange that ends with an unmet expectation of response. The nonresponse could occur on either side: the reporter’s turn or WAM!’s. Exchanges broken by reporters—that is, moments when reporters didn’t respond—mainly followed requests from WAM! for more information, such as tweet URLs, previously assigned Twitter case numbers, or additional evidence.
Overall, exchanges broken by WAM! include all reports deemed fake and some emotionally triggering reports. After deciding a report was nongenuine (often without any email exchange, but sometimes after limited exchange), WAM!’s team did not engage in further interaction. A small number of reports triggered intense emotional responses in WAM!’s review team; a handful of these were accidentally left without response after reviewers disengaged to recover.
Many conversations had substantial length. These extended exchanges included either a high number of emails or a high word count in a reporter’s emails (for the most part WAM!’s team used brief email templates or wrote short personal emails). Extended exchanges were typically used by reporters for additional reports of harassment and emotional release.
Many times, reporters communicated additional instances of harassment and additional reports to Twitter within these follow-up email exchanges. Follow-up reports within the same WAM! ticket might include information about additional accounts or information on additional types of harassment. When additional accounts were mentioned in the conversation process, they were sometimes described as joint harassers engaged in campaigns of harassment. In some cases, additional accounts were reported for separate cases of harassment. Additional accounts were also reported in the context of the same user opening new accounts as reported ones were suspended. As an example of the variety of harassment types discussed within a single ticket, someone who initially reported the posting of unauthorized photos as part of a revenge porn attack might later in the email exchange provide details about having been doxxed as well.
Although the presence of many reported cases of harassment per WAM! ticket poses a challenge for the authors’ quantitative analysis (which consequently undercounts the harassment reported to WAM!), these conversations demonstrate the benefits of engaging in an extended conversation with people who are experiencing harassment, centering the process around the person rather than focusing it on the specific instance.
These conversations also show evidence of emotional release, which took the form of venting (about the harassment or about Twitter’s reporting process) and gratitude (repeated thanks for review/escalation of particular cases as well as for the WAM! project more broadly).
These extended exchanges show that for people experiencing harassment, the reporting process is much more than just harassment identification—it is part of coping emotionally with a traumatic experience. From the support perspective, the reporting process is an opportunity to establish trust and listen. Processes optimized solely for stopping harassment are unlikely to address the larger impact of the harassment on the targeted user.
ASSESSING PERSONAL SAFETY CONCERN OF REPORTED RECEIVERS OF HARASSMENT
In each case, WAM! volunteers assessed the personal safety concern of receivers of harassment. Out of 317 reports not labeled fake, the reporter claimed a safety risk in 18% of cases. In contrast, WAM! judged 25% of these reports to involve some kind of personal safety risk. In 30 cases, WAM! disagreed with the reporter’s assessment and concluded that there was no personal safety risk. In 54 cases, WAM! concluded that a safety risk did exist even though the reporter didn’t make that claim.
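The counts above can be cross-checked as a 2×2 agreement table. A minimal sketch, assuming the 18% figure rounds from 57 of the 317 non-fake reports:

```python
# Cross-checking the safety-assessment counts reported in the text.
total = 317                 # reports not labeled fake
reporter_yes_wam_no = 30    # reporter claimed a risk; WAM! disagreed
reporter_no_wam_yes = 54    # WAM! found a risk the reporter had not claimed

reporter_claimed = round(0.18 * total)                         # 57 reports
reporter_yes_wam_yes = reporter_claimed - reporter_yes_wam_no  # 27 reports
wam_judged = reporter_yes_wam_yes + reporter_no_wam_yes        # 81 reports

print(f"{100 * reporter_claimed / total:.1f}% claimed by reporters")  # 18.0%
print(f"{100 * wam_judged / total:.1f}% judged at risk by WAM!")      # 25.6%
```

The reconstruction is consistent: 27 reports where both agreed a risk existed, 30 where only the reporter claimed one, and 54 where only WAM! did, yielding the text’s 18% and roughly 25% figures.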
Qualitative analysis of reports where WAM! disagreed, concluding there was no personal safety risk, suggests that a limited number of reporters may have indicated fear for personal safety in order to emphasize seriousness and urgency rather than to flag specific safety concerns. It is also possible that the project’s focus on the Twitter platform meant that details that would explain physical safety concerns didn’t surface in these reports: When following up, the WAM! team focused on acquiring information for assessment and escalation to Twitter. They didn’t query reporters on why they felt a personal safety risk.
WHAT DID WAM! ESCALATE TO TWITTER?
WAM! reviewers escalated 43% of reports they judged genuine. Why didn’t WAM! reviewers escalate all tickets that they considered genuine? Aside from reports that reviewers did not judge to merit escalation due to the specifics of the interactions described, reports weren’t escalated for the following reasons:
• some reports, especially bystander reports, were merged into a single ticket if WAM! had already escalated the issue
• some reports lacked the kind of evidence that Twitter required for escalation
• some reports were confusing, but when WAM! asked the reporter for clarifying information, the reporter did not respond
2.4. THE WORK OF REVIEWING HARASSMENT
WHAT CAN WAM!’S EXPERIENCE TELL US ABOUT THE WORK OF REVIEWING REPORTS?
Reviewing and responding to harassment reports can be challenging labor. It is work that carries considerable weight, urgency, and stress. Particularly with cases that might involve law enforcement, such as death threats or rape threats, a reviewer’s decisions and speed of response can have profound effects on the safety of the receiver of harassment. Further, evidence of harassment is often emotionally difficult to read; reviewers may additionally encounter material designed by malicious reporters to cause harm to them.
At the same time, the labor of reviewers is largely invisible, with worker identities protected by companies and their work processes hidden to prevent harassers from exploiting loopholes. Consequently, with a few exceptions, little is known about the nature of this work. The following brief analysis of WAM!’s experience is offered to further support a more informed discussion about the work of reviewing harassment reports.
REVIEWING HARASSMENT REPORTS: BY THE NUMBERS
While most of their work occurred during US daylight hours, the four WAM! reviewers responded to incoming email from reporters at almost all hours of the day. Reviewers replied promptly to every initial report deemed genuine: across the 355 tickets where WAM! reviewers engaged in conversation, the first response after the auto-response email came within 386 minutes of the initial report on average, and 75% of reports received a reply within 10 hours. All of these tickets were responded to in fewer than 24 hours.
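Summary statistics of this kind can be derived directly from first-reply delays. A minimal sketch with fabricated example delays (the actual per-ticket timestamps are not public):

```python
# Computing a response-time summary from first-response delays.
# The delays below are hypothetical examples, not WAM!'s data; the
# report's actual figures were a 386-minute mean and 75% within 10 hours.
import statistics

# Hypothetical delays (minutes) between submission and first personal reply.
delays_minutes = [45, 120, 200, 310, 380, 420, 510, 600, 760, 1105]

mean_delay = statistics.mean(delays_minutes)
within_10_hours = sum(d <= 600 for d in delays_minutes) / len(delays_minutes)

print(f"mean: {mean_delay:.0f} min")         # mean: 445 min (for this sample)
print(f"within 10h: {within_10_hours:.0%}")  # within 10h: 80% (for this sample)
```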
Across the 21-day reporting period, WAM! reviewers:
• Assessed 640 incoming reports (30/day):
• assessed genuineness of each report (assessing up to 255 in one day)
• checked evidence in each report
• Sent a total of 1226 messages in the ticketing system (58 per day)
• Wrote 59,243 words in the ticketing system
• Examined more than 531 unique allegedly harassing accounts and 179 unique receivers of harassment (34 per day)
• Responded personally to 355 reports (17 per day)
• Evaluated or added 3,628 tags in the internal ticketing system (173 per day)
• Escalated 155 tickets to Twitter (7 per day)
• Carried out at least 186 exchanges with Twitter, as measured through Twitter’s responses (9 per day)
MENTAL HEALTH COSTS OF REVIEWING HARASSMENT REPORTS
Reviewing and responding to harassment reports took a toll on the WAM! team. Reviewers reported secondary trauma from reading abusive and threatening tweets and the backstories of reporters. Symptoms included anxiety, sleeplessness, loss of concentration, depression, the triggering of past PTSD, and irritability, among others.
Some members of the WAM! staff and board themselves received harassment on Twitter as a result of the reporting project. Because the names of the people on the reviewing team were never publicly released, this harassment was directed at staff and board members regardless of whether they were actually part of that team. It included hate speech, distribution of photoshopped images and false information, and rape and death threats.
2.5. THE PROBLEM OF EVIDENCE
Evidence presents a serious challenge both for people reporting harassment and for people reviewing reports. As data across the WAM! project demonstrates, this challenge is often exacerbated by assumptions about the forms harassment will take. When reporting tools and review processes accept only certain modes or formats of evidence, it becomes difficult to submit evidence for forms of harassment that fall outside these assumptions. Platform responses such as suspension and deletion in turn affect the evidence available for reporting harassment to law enforcement and other channels.
TWITTER & EVIDENCE
Twitter requires reporters to provide URLs for examples of harassment. This assumes that tweets are the mode through which harassment is being carried out. In the event that harassment occurs in some other way—for example, through exposure to violent or pornographic profile images or usernames via follower/favorite notifications—reporting harassment becomes complicated. Such harassment currently cannot be reported using Twitter’s in-platform tool. While it can be reported via Twitter’s web form, the reporter is still required to provide a URL.
Initially, WAM! encouraged reporters to submit screenshots of harassment. Only later did the WAM! team realize that Twitter does not currently accept screenshots as evidence. While the URL of a tweet is self-authenticating, an image presented as a screenshot could be the product of digital manipulation. For Twitter reviewers, assessing a screenshot also requires a greater expenditure of resources: the reviewer must locate the harassing element within the screenshot and then match it to a relevant object—account, tweet, image, etc.—within the Twitter system.
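The difference in verifiability can be made concrete: a tweet URL names a unique, resolvable object inside Twitter’s system, while a screenshot names nothing. A minimal sketch with a hypothetical URL parser (not Twitter’s actual tooling):

```python
# Why a URL is "self-authenticating": it encodes a handle and a status ID
# that a reviewer can resolve inside the platform. A screenshot carries
# no such machine-checkable anchor. The parser below is illustrative.
import re

TWEET_URL = re.compile(r"^https?://(?:www\.)?twitter\.com/(\w{1,15})/status/(\d+)")

def parse_tweet_url(url: str):
    """Return (handle, status_id) if the URL looks like a tweet, else None."""
    m = TWEET_URL.match(url)
    return (m.group(1), m.group(2)) if m else None

print(parse_tweet_url("https://twitter.com/example_user/status/123456789"))
# ('example_user', '123456789') -- resolvable inside Twitter's system
print(parse_tweet_url("evidence_screenshot.png"))
# None -- nothing for a reviewer to look up
```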
Screenshots, however, are currently the primary means for capturing harassment that uses the ‘tweet and delete’ tactic—a tactic that cannot easily be captured via URLs.
A harassment tactic seen on many platforms is for authors to delete harassing messages after the messages have been responded to or seen. This removes a critical piece of context that affects the way later viewers—and reviewers—of an exchange assess aspects like validity, emotional tone, and reasonableness of response.
On Twitter, when a harasser deletes a harassing tweet, it not only affects later assessment of exchanges, it eliminates the URL associated with the tweet. This means that if the tweet wasn’t reported immediately, before deletion, reporting it via URL becomes impossible. Further, Twitter’s data retention policies suggest that even if a URL is reported in time, if the associated tweet is deleted before being seen by reviewers, the URL may not continue to function internally—that is, the evidence may be inaccessible even to Twitter reviewers.
Taking a screenshot creates a digital record that has a permanency outside the Twitter system, a record outside the control of the alleged harasser. A screenshot can be shared even after a tweet has been deleted or a following or favoriting action has been undone. Further, exchanges in the WAM!–reporter correspondence suggest that many people find taking a screenshot easier and more familiar than locating the URL of a tweet and saving it. Harassment is thus likely recorded earlier and more often with screenshots.
LAW ENFORCEMENT & EVIDENCE
Account suspension can slow or limit a harasser’s ability to continue harassment on a single platform, particularly if suspension is linked to an IP address or telephone number  and hinders the creation of fresh accounts. However, when accounts are suspended, harassing or abusive tweets are no longer visible. The target of the harassment thus loses the ability to show the harassment or abuse directly to law enforcement. The same complications created by the ‘tweet and delete’ tactic are also consequences of the process of ‘review and suspend.’ Twitter itself advises users who contact law enforcement about threats to “document the violent or abusive messages with print-outs or screenshots.”  This pattern and its consequent problems, however, are not unique to the Twitter platform.
As this report was finalized for publication, Twitter announced that reporters of harassment will now have the option to receive a record to share with law enforcement. This record summarizes the harassment report, but doesn’t include evaluation or response.  Full assessment of this new option is beyond the scope of the current analysis, but it suggests that some of the difficulties previously experienced in conveying the seriousness of online harassment to law enforcement may now improve, at least with regard to the Twitter platform.
Even with this new option, when an account is suspended for harassment, no explanation is provided on the Twitter platform itself. All that users—including law enforcement—see at the URL of a suspended account is a blanket suspension notice. As a result, suspensions for harassment or abuse are indistinguishable from suspensions for spam, trademark infringement, etc. This contrasts with Twitter’s policy for explicitly marking ‘withheld content’— tweets or accounts censored in particular locations due to government requests.
For those reporting harassment to law enforcement, an indication on the Twitter platform of suspension due to harassment would be helpful. Further, such public acknowledgment could potentially reduce experiences of isolation or stigma, deter harassing behavior, and provide more robust data for analysis.