Reporting, Reviewing, and Responding to Harassment on Twitter


APPENDIX 3: WAM! REPORTING FORM

The design of the WAM! reporting form, accessible only through the WAM! website, shaped the data the project received and consequently the analysis contained in this document. Though the actual form has since been removed, screenshots of the backup, privately mirrored version of the form follow. These are supplemented with written descriptions detailing the possible variations, nonvisible text, and answer restrictions reporters would have encountered.

WAM! Women, Action, & the Media

WAM Twitter Harassment Reporting Tool (Pilot)


[Three screenshots of the privately mirrored WAM! reporting form]

People who visited the WAM! reporting form found a single page of 16–20 questions. The range reflects two choice points in the form, where selections could open additional questions. The first set of additional questions opened in conjunction with the very first question: Are you the person being targeted on Twitter? The default response reporters saw was ‘Yes.’ If reporters responded ‘No,’ they were asked to include their own name and email address, as well as to indicate whether the report was being made with the awareness of the target of the harassment. The second set of additional questions opened in response to the answer to question 13 of the base form: Are you being harassed on multiple platforms? This defaulted to ‘No.’ If reporters responded ‘Yes,’ they were asked additional questions about the other platforms on which the harassment was occurring.
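
As a concrete illustration of these two choice points, the TypeScript sketch below models the branching. The identifiers and question wordings are paraphrases chosen for illustration, not the form's actual source, which was not preserved.

    type YesNo = "Yes" | "No";

    interface BaseAnswers {
      // Q1: "Are you the person being targeted on Twitter?" (defaulted to "Yes")
      isTarget: YesNo;
      // Q13: "Are you being harassed on multiple platforms?" (defaulted to "No")
      harassedOnMultiplePlatforms: YesNo;
    }

    // Returns the extra questions a reporter would have seen, given their answers.
    function followUpQuestions(answers: BaseAnswers): string[] {
      const extra: string[] = [];
      if (answers.isTarget === "No") {
        // Bystander reports opened additional identification questions.
        extra.push(
          "Your name",
          "Your email address",
          "Is the target of the harassment aware this report is being made?"
        );
      }
      if (answers.harassedOnMultiplePlatforms === "Yes") {
        extra.push("On which other platforms is the harassment occurring?");
      }
      return extra;
    }

Called with the defaults (‘Yes’ for question 1 and ‘No’ for question 13), the function returns no extra questions, matching the base form; taking both branches yields the upper end of the 16–20 question range.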

Multiple questions included rollover explanations: additional information that became visible when the reporter’s cursor hovered over the question. These served several purposes. Rollovers were used to provide guidance and clarification. In particular, they accompanied the questions asking for names and emails, explaining how to answer if the reporter was not the target of the harassment. Similarly, a rollover explained that a reporter needed to provide only a single example tweet; more could be shared at a later point.

Rollovers were also used to direct the format of input: a rollover associated with the question about the Twitter handle of the target of harassment asked reporters not to include the @ symbol in their answer, and a rollover requesting a numerical entry accompanied the question about how many weeks harassment had been occurring. Rollovers were used twice specifically to explain why questions were being asked: for bystanders, to explain that WAM! wanted to analyze data about bystander reporting and the harassment target’s awareness of it; and with regard to the phone number request, to explain that this might be used for verification or communication. Rollovers were not used with the categories of harassment or with the question, Do you fear for your personal safety due to this harassment?
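
In web terms, a rollover of this kind is typically implemented as a hover tooltip. The sketch below shows one hypothetical rendering using the standard HTML title attribute; the function name, question wording, and hint text are illustrative assumptions, as the form's actual markup was not preserved.

    // Hypothetical rendering of a question with a rollover hint via the standard
    // HTML title attribute, which browsers display on hover. Not the form's real markup.
    function questionWithRollover(label: string, hint?: string): string {
      const titleAttr = hint ? ` title="${hint}"` : "";
      return `<label${titleAttr}>${label}</label>`;
    }

    // For example, the Twitter-handle question asked reporters to omit the @ symbol:
    const handleQuestion = questionWithRollover(
      "Twitter handle of the person being targeted",
      "Please do not include the @ symbol."
    );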

Just over half of the questions were required; these are marked with a red superscript star. If any of these were left unanswered, form submission would not complete: reporters would be presented with the form again with missing information brought to their attention. Note that in addition to items marked as required and items left unmarked, the question soliciting the reporter’s phone number is specifically labeled “Optional.” The format of acceptable answers varied with the question; formats used included radio buttons (the circular answer options) with pre-assigned answers, free text boxes, and numerical entry boxes. Radio buttons allowed only a single selection.
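
The required-versus-optional distinction and the three input formats can be sketched as a simple validation pass. The field list and function below are illustrative assumptions, not the form's actual schema:

    interface FieldSpec {
      label: string;
      required: boolean; // required fields were marked with a red star
      kind: "radio" | "text" | "number";
      options?: string[]; // radio buttons allowed exactly one selection
    }

    // A few illustrative fields; the real form had 16-20 questions.
    const fields: FieldSpec[] = [
      { label: "Are you the person being targeted on Twitter?",
        required: true, kind: "radio", options: ["Yes", "No"] },
      { label: "How many weeks has the harassment been occurring?",
        required: true, kind: "number" },
      { label: "Phone number", required: false, kind: "text" }, // labeled "Optional"
    ];

    // Returns labels of fields that would block submission, mirroring the form's
    // behavior of re-presenting itself with missing answers flagged.
    function missingOrInvalid(values: Record<string, string>): string[] {
      return fields
        .filter(f => f.required)
        .filter(f => {
          const v = (values[f.label] ?? "").trim();
          if (v === "") return true;
          if (f.kind === "number" && Number.isNaN(Number(v))) return true;
          if (f.kind === "radio" && !(f.options ?? []).includes(v)) return true;
          return false;
        })
        .map(f => f.label);
    }

Any non-empty return value here corresponds to the form refusing to complete submission and flagging the unanswered required questions.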


_______________

Notes:

1. Bhatnagar, Tina. “Update on user safety features.” Twitter, February 26, 2015.

2. Avey, Ethan. “Making it easier to report threats to law enforcement.” Twitter, March 17, 2015.

3. Dibbell, Julian. “A Rape in Cyberspace.” The Village Voice, December 21, 1993.

4. Sproull, L. & Kiesler, S. “Reducing social context cues: electronic mail in organizational communication.” Management Science, 32(11): 1492–1512. 1986.

5. Sproull, L. & Faraj, S. “Atheism, sex, and databases: The net as a social technology.” In Brian Kahin and James Keller (eds.), Public Access to the Internet (pp. 62-81). Cambridge: The MIT Press. 1995.

6. Hess, Amanda. “Robin Williams’ Death Inspires Twitter to Crack Down on Harassment (Just a Little Bit).” Slate. August 14, 2014.

7. Selby, Jenn. “Zelda Williams returns to Twitter following Robin Williams’ death with a powerful message.” The Independent. September 3, 2014.

8. McDonald, Soraya Nadia. “Gaming vlogger Anita Sarkeesian is forced from home after receiving harrowing death threats.” Washington Post. August 29, 2014.

9. Stuart, Keith. “Zoe Quinn: ‘All Gamergate has done is ruin people’s lives’.” The Guardian, December 3, 2014.

10. Totilo, Stephen. “Another Woman In Gaming Flees Home Following Death Threats.” Kotaku, October 11, 2014.

11. Duggan, Maeve. “Online Harassment.” Pew Research Center, October 22, 2014.

12. Bazelon, Emily. “Do Online Death Threats Count as Free Speech?” New York Times. November 25, 2014.

13. Hess, Amanda. “Why Women Aren’t Welcome on the Internet.” Pacific Standard. January 6, 2014.

14. Gadde, Vijaya. “Twitter executive: Here’s how we’re trying to stop abuse while preserving free speech.” The Washington Post, April 16, 2015.

15. Gillespie, Tarleton. “The Dirty Job of Keeping Facebook Clean.” Microsoft Research Social Media Collective, February 22, 2012.

16. Chen, Adrian. “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed.” WIRED, October 23, 2014.

17. Buni, Catherine, and Soraya Chemaly. “The Unsafety Net: How Social Media Turned Against Women.” The Atlantic, October 9, 2014.

18. Crawford, Kate, and Tarleton Gillespie. “What is a flag for? Social media reporting tools and the vocabulary of complaint.” New Media & Society, July 15, 2014.

19. Bhatnagar, Tina. “Update on user safety features.” Twitter, February 26, 2015.

20. Barr, Alistair, and Lisa Fleisher. “YouTube Enlists ‘Trusted Flaggers’ to Police Videos.” Wall Street Journal, March 17, 2014.

21. As far as the authors have been able to determine, there is currently no public list of past or present Twitter authorized reporters. Interactions with Twitter staff suggest multiple organizations hold this status; however, which organizations and how many are unknown.

22. Levine, Marne. “Controversial, Harmful and Hateful Speech on Facebook.” Facebook Inc., May 29, 2013.

23. Women, Action, and the Media. Statement on Facebook’s Response to their campaign. May 28, 2013.

24. Reports appear to receive faster attention, but the specifics of this prioritization are not clear.

25. Doshi, Shreyas. “Building a Safer Twitter.” Twitter, December 2, 2014.

26. Bhatnagar, Tina. “Update on user safety features.” Twitter, February 26, 2015.

27. Avey, Ethan. “Making it easier to report threats to law enforcement.” Twitter, March 17, 2015.

28. The WAM! Twitter Reporting Tool did not query reporters about traditional demographic variables like sex, race, age, etc.

29. A possible reference to the popular “Over 9000” Internet meme.

30. Our counts of previous reporting to Twitter filter out all values above 200, to account for entries that most obviously take this approach.
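
A minimal sketch of this cleaning step in TypeScript, assuming the self-reported counts are held as a simple array; the threshold of 200 is the one stated above, but the data layout and names are our own:

    // Hypothetical cleaning step: drop self-reported counts above 200, which
    // are assumed to be hyperbolic ("Over 9000"-style) entries.
    const MAX_PLAUSIBLE_COUNT = 200;

    function filterReportCounts(counts: number[]): number[] {
      return counts.filter(c => c <= MAX_PLAUSIBLE_COUNT);
    }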

31. In some cases, further accounts were discussed in subsequent communications with WAM!. In some analyses, we include those accounts, but we exclude them here since we have not coded these accounts as harassment receivers or allegedly harassing accounts.

32. For more details, see Appendix 1.5, Methods: GamerGate Analysis.

33. In this analysis, we filter out four reports indicating a harassment start date more than 18.5 years in the past. The longest duration included in this analysis is 5 years 10 months, with most falling under 5 years 6 months.

34. Other reporters included this information in further conversations with WAM!. That information is not included in this analysis, which is limited to answers on the form.

35. One response left this field blank.

36. This may be due to its relative size rather than relative harassment levels. For comparison purposes, Facebook’s monthly active user count is more than four times that of Twitter. http://www.statista.com/statistics/2720 ... -of-users/

37. Because WAM! used Twitter’s own form rather than its own system to escalate tickets, we reconstructed the association with escalated tickets for our analysis. For more information on methods and limitations, see Appendix 1.3 and Appendix 1.4.

38. This report cannot offer an objective assessment of the accuracy of judgments by WAM!, Twitter, or reporters. However, it appears that authorized reporters are in a strong position to work with harassment receivers to assess risk; this may serve to improve both personal safety and the accuracy of harassment reports.

39. Answering this question was, however, limited by a lack of access to information on accounts that were suspended or deleted by Twitter before the date that authors sampled follower information (resulting in small and biased sample sizes and a loss of statistical power). For more details, see Appendix 1.4.

40. Specifically for serious cases where WAM! was otherwise unable to help.

41. In addition to the public-facing form hosted by WAM!, the reporting form was mirrored elsewhere in the event that the public site was attacked and had to be taken down. When the reporting period closed, the public-facing form was removed. The following analysis is based on the privately mirrored version of the reporting form. The complete base form is available in the appendix.

42. Doshi, Shreyas. “Building a Safer Twitter.” Twitter, December 2, 2014.

43. Bhatnagar, Tina. “Update on user safety features.” Twitter, February 26, 2015.

44. Avey, Ethan. “Making it easier to report threats to law enforcement.” Twitter, March 17, 2015.

45. https://support.twitter.com/articles/15794

46. https://support.twitter.com/articles/20170516

47. https://support.twitter.com/groups/57-s ... -resources

48. Note that, particularly for apps used on mobile devices, if a user hasn’t updated the app to a version that includes these features, the design of the in-platform tool they see may differ; earlier versions mainly direct the user to the web form.

49. https://support.twitter.com/forms/abusiveuser

50. As initially set up, the automated note that went to receivers of harassment included the contact information reporters of harassment had provided. This problem was brought to WAM!’s attention and addressed.

51. After further investigation, we determined that these gaps occurred after one of the three WAM! reviewers at the time stepped down after emotional trauma. In the transition, several tickets were missed.

52. Follow-up analysis shows that 5 out of 7 of these dropped tickets were escalated to Twitter, but that WAM! did not complete the conversation at the time.

53. See the Introduction for discussion and references on the work of reviewing harassment reports.

54. The actual number is higher, but the authors don’t have access to this data.

55. This is visible, for example, in the richly detailed personal accounts in WAM!–reporter correspondence, but also in WAM!–Twitter correspondence; further, WAM! staff attribute Twitter’s response rate on doxxing reports to evidentiary complications related to ‘tweet and delete’ patterns.

56. On the web form, in conjunction with a text box labeled ‘Further description of problem,’ Twitter requests that reporters detail harassment that doesn’t fit the tweet category. That web form can be found here: https://support.twitter.com/forms/abusiveuser.

57. There are many reasons a harassing tweet might not be immediately reported; for example, emotional well-being may require distance before dealing with harassing or abusive messages.

58. https://support.twitter.com/articles/41 ... orcement#5

59. Russell, Jon. “Twitter is now requiring phone numbers for Tor accounts.” TechCrunch, March 2, 2015.

60. https://support.twitter.com/articles/15794#

61. Avey, Ethan. “Making it easier to report threats to law enforcement.” Twitter blog, March 17, 2015.

62. The remaining incoming reports were empty or duplicates. For example, 255 duplicate reports arrived on November 11 during a gap in WAM!’s Captcha filtering; only some of these were converted into tickets in WAM!’s internal system. Three days earlier, on November 8, WAM! was warned anonymously via its reporting form that ‘GamerGate’ was planning a campaign of fake reporting and spam, aiming to submit 10,000 spam reports over the subsequent few days; this was a significant overstatement compared to the number actually received. After the reporting period closed, we counted 143 empty reports and 37 additional reports that were never added to the WAM! ticketing system.

63. In this section, unlike the analysis of form submissions, we consider all tickets, including ones later labeled as fake by WAM!. Note that some of the 640 tickets were prompted by something other than a report; only 594 reports in total were associated with tickets.

64. Twitter’s responses included: account deleted, warning sent to account owner, account suspended, account previously suspended, request declined, information request, contacted user, issue resolved itself, authorization request, user not found, user engaging with alleged abuser.

65. See http://blog.randi.io/good-game-auto-blocker/

66. Wofford, Taylor. “One Woman’s New Tool to Stop Gamergate Harassment.” Newsweek, November 29, 2014.

67. Since there is no rigorously developed list of GamerGate targets, we limit this analysis to alleged harassing accounts.

68. The project only received one report in a language other than English, and was unable to address it due to lack of the necessary language abilities.

69. How spam and communications other than reports of harassment or abuse on Twitter—e.g., general contentions that the WAM! project itself was a form of harassment—were handled in the dataset is documented in the methods section.

70. As measured by the number of sentences per day in the Media Cloud database that specifically mention harassment or abuse on Twitter.

71. Note that the Media Cloud database is skewed toward English-language media, and that international media outlets are significantly underrepresented.

72. A close analysis of non-English-language stories is beyond the scope of the present report.

73. A pejorative acronym denoting “Social Justice Warriors.”

74. Within Media Cloud’s dataset, the first mention of “censorship” in relation to the reporting project is WAM! executive director Jaclyn Friedman’s characterization of harassment as a form of censorship in a November 7, 2014 interview published in The Atlantic. The first characterization of the project itself as “censorship” or “policing speech” came on November 10, 2014 from blogger Andrew Sullivan, who remained the primary source of censorship-related critique. Articles that mention Sullivan’s critiques, however, are overwhelmingly critical of Sullivan himself rather than of the reporting project.

75. This graph is based on analysis of individual sentences about the project, rather than on all of the sentences in each story. Because a story can include both sentences about the project that do mention WAM! by name and sentences about the project that only mention Twitter, some individual stories are represented by both the orange and the blue lines.

76. Mentions of harassment or abuse on Twitter comprise only a small portion of mentions of Twitter overall (no more than 2% of all mentions per day between January 1, 2014 and February 15, 2015). This is due to the frequency with which Twitter is mentioned in stories that are not about Twitter itself: for example, when a spokesperson confirms or denies something via Twitter, or when readers are invited to follow an article’s author on Twitter.

77. The January 8, 2015 NPR segment about online harassment was not associated with a peak in coverage.
