by Alice E. Marwick, Fordham University, and Ross W. Miller, Fordham University School of Law
Online Harassment, Defamation, and Hateful Speech: A Primer of the Legal Landscape
June 10, 2014
Alice E. Marwick
Assistant Professor, Communication and Media Studies, Fordham University
Academic Affiliate, Fordham CLIP

Ross W. Miller
Project Fellow, Fordham CLIP
CLIP Study Advisors
Joel R. Reidenberg
Microsoft Visiting Professor of Information Technology Policy, Princeton
Academic Study Director, Fordham CLIP
N. Cameron Russell
Executive Director, Fordham CLIP
Interim Director and Privacy Fellow, Fordham CLIP (through July 2013)
Any views and opinions expressed in this study are those of the authors and are not presented as those of Fordham University, Fordham University School of Law or the Fordham Center on Law and Information Policy.
(c) 2014. Alice Marwick and Fordham CLIP. This study may be reproduced, in whole or in part, for educational and non-commercial purposes.
I. Executive Summary
• Although online harassment and hateful speech are significant problems, there are few legal remedies for victims.
• Section 230 of the Communications Decency Act provides internet service providers (including social media sites, blog hosting companies, etc.) with broad immunity from liability for user-generated content.
• Given limited resources, law enforcement personnel prioritize other cases over internet-related ones.
• Similarly, state jurisdictional issues often make successful prosecution difficult, as victim and perpetrator are frequently in different states, if not different countries.
• Internet speech is protected under the First Amendment. State laws regarding online speech are therefore written to comply with First Amendment protections, reaching only unprotected categories such as fighting words, true threats, or obscene speech. This generally means that most offensive or obnoxious online comments are protected speech.
• For an online statement to be defamatory, it must be provably false rather than a matter of opinion. This means that the specifics of language used in the case are extremely important.
• While there are state laws for harassment and defamation, few cases have resulted in successful prosecution. The most successful legal tactic from a practical standpoint has been using a defamation or harassment lawsuit to reveal the identities of anonymous perpetrators through subpoenas to ISPs, then settling. During the course of our research, we were unable to find many published opinions in which perpetrators faced criminal penalties, which suggests that such cases are not prosecuted, are not appealed when they are prosecuted, or that the victim settles out of court with the perpetrator and stops pressing charges. As such, our case law research was effectively limited to civil cases.
• In offline contexts, hate speech laws seem to be applied by courts only as penalty enhancements; we could locate no online-specific hate speech laws.
• Given this landscape, the problem of online harassment and hateful speech is unlikely to be solved solely by victims using existing laws; legal remedies should be used in combination with other practical solutions.
II. Introduction

Lori Stewart is a middle-aged Midwestern woman and personal blogger. Her blog, This Just In, shares gentle stories of gardening, her military family, vacation snapshots, and other aspects of her life. In 2006, Stewart founded Toys for Troops, a non-profit organization that sends Beanie Babies and soccer balls to soldiers stationed overseas so they can distribute them to local children. Shortly afterward, she attracted the attention of a troll. “JoeBob” left comments like the following:
So you enabled comments again, eh?
Good! You liberal fuc#$!* cun@. 
Hope your inbred half-retarded son takes a bayonet to the gut by a Palestinian warrior. I love to see useful idiots serving the jew get disemboweled!
For seven years, JoeBob left profane, aggressive comments on Stewart’s weekly blog. In response, she disabled comments and instead used a private Facebook account to communicate with readers. JoeBob escalated. He created a fake email address, purporting to be Stewart, and sent emails to her friends and family full of homophobic slurs and anti-Semitic remarks. He signed Stewart up for hundreds of newsletters and commented on other blogs using her name. Frustrated, Stewart went to the police, who told her "there's really not going to be much you can do about this. You have a public blog, there is such a thing as freedom of speech."  She persuaded the officer to file a report anyway, so at least she could have a case number to work with.
Luckily for Stewart, a local police investigator became interested in her case. He subpoenaed JoeBob’s ISP, and was able to reveal the troll’s identity—including his name, workplace, and location. Stewart decided not to publicize JoeBob’s real identity, but opened her blog up to comments, hoping that the mere threat of exposure would keep the harasser at bay. This seemed to be the best she could hope for. Unfortunately, JoeBob continued to harass Stewart, and she revealed his name. Robin B. King was arrested by the county sheriff on four counts of harassment by electronic communication.  He has pled not guilty and a jury trial is set for this year. 
Practically, the path most victims have taken is to use the legal system not to win a judgment, but to subpoena IP records. Legal proceedings can allow victims to unmask and potentially publicize the names of their anonymous harassers. This is what Lori Stewart eventually did. After going to the police, she was able to discover the harasser's identity: Robin B. King, a 56-year-old Defense Department employee based in the Saint Louis suburbs. (In April, King pleaded guilty to a misdemeanor count of harassment through electronic communication, according to local news reports.)
-- People Harassed Online Have Few Legal Protections, by Noah Berlatsky
Online harassment is a significant online problem, particularly for women. In the last decade, several high-profile incidents, including the online harassment of tech blogger Kathy Sierra, the backlash against Anita Sarkeesian’s Feminist Frequency Kickstarter project examining sexism in video games, and the targeted harassment of several female Yale Law students on the AutoAdmit message boards, have raised questions about the limits of online free speech and the prevalence of explicitly sexist commentary on the internet. Several feminist legal scholars have systematically analyzed the content and prevalence of such speech, particularly its long-term and individual impacts. Online harassment is, of course, not limited to gender. Recent studies have investigated people who deliberately engage in provocative, aggressive internet behavior, primarily with regard to cyberbullying and “trolling.” However, research suggests that those most likely to be the victims of hateful online speech are women, sexual minorities, and people of color—in other words, harassment breaks down along traditional lines of power.
While there are many descriptive studies of online hate speech, harassment, defamation, and so forth, we decided to research these issues from a legal perspective. We were primarily interested in (i) what legal remedies, if any, are available for victims of such acts, and (ii) if such legal remedies and procedures exist, whether practical hurdles stand in the way of victims’ ability to stop harassing or defamatory behavior and obtain legal relief. Every US state and the District of Columbia has a law covering cyberstalking or cyberharassment, and a majority of states have laws covering both. Defamation law has been used to pursue offensive online speech in a few well-documented cases, and, in some instances, laws around hate speech may be germane. However, prosecuting such cases is very difficult. Not only are there issues with jurisdiction, but local police are often too busy, unwilling, or not technically savvy enough to pursue perpetrators, making criminal proceedings impractical. And as Stewart’s case suggests, internet speech is protected under the First Amendment, making it difficult to regulate hateful, defamatory, or harassing speech. In many cases, the best victims can hope for is that, in unmasking the perpetrator, a loss of anonymity will be enough to stop online harassment.
The goal of this research project is to better understand the legal remedies available to victims of online harassment, hate speech, and defamation, as well as the current legal protections afforded to this type of speech under the First Amendment within the US. The project examines long-standing and new treatments for online harassment, and seeks to provide a resource for victims of offensive comments online, practitioners, academics, and the public at large. For context, we begin with an introductory summary of Section 230 of the Communications Decency Act, which provides broad immunity to internet service providers and is thus crucial to the current legal landscape. Given that online speech may be protected under the First Amendment, understanding legal remedies requires examining the limits of First Amendment protections, specifically the unprotected categories of fighting words, defamation, obscenity, and true threats. Legal remedies can then be placed into three categories: (1) cyberharassment and cyberstalking; (2) defamation; and (3) hate speech and hate crime laws. In each section, we summarize state laws, examine significant case law, and discuss complications and drawbacks to each potential remedy. Again, we hope that this document can serve as a resource for researchers, legal practitioners, internet community moderators, and victims of harassment and hateful speech.
III. Summary of Findings

While online harassment and hateful speech are a significant problem online, the current legal landscape offers victims little help. Section 230 of the Communications Decency Act provides internet service providers (including social media sites, blog hosting companies, and so forth) with broad immunity from liability for user-generated content. Because hosting sites are not legally liable for user content, victims can appeal to site proprietors under Terms of Service or community standards, but there is no obligation on the part of the host to remove content, delete user accounts, or discipline harassers.
When victims contact local law enforcement for help, it seems that they are rarely taken seriously. Many law enforcement personnel face limited resources and lack technical expertise. Issues with state jurisdiction also make successful prosecution difficult, as victim and perpetrator are often in different states, if not different countries. While there are state laws for harassment and defamation, few cases have resulted in successful prosecution. The most successful legal tactic from a practical standpoint has been using a defamation or harassment lawsuit to reveal the identities of anonymous perpetrators by subpoenaing ISPs, then settling. (During the course of our research, we were unable to find many published opinions in which perpetrators faced criminal penalties, which suggests that such cases are not prosecuted, are not appealed when they are prosecuted, or that the victim settles out of court with the perpetrator and stops pressing charges. As such, our case law research was effectively limited to civil cases.) Victims may hope that the fear of unmasking will cause online harassers to stop their activities, but this is by no means guaranteed.
Complicating matters further, internet speech is protected under the First Amendment. Thus, state laws regarding online speech will be held unconstitutional if they interfere with speech protected by the First Amendment. While the First Amendment’s guarantee of freedom of speech covers most situations, several categories of speech are not protected, including fighting words, true threats, and obscene speech. In other words, unless an offensive or obnoxious online comment falls into one of these three categories, it is generally protected speech. For instance, for a statement to be defamatory, it must be a false statement of fact rather than a matter of opinion. This means that the specific language used in the case is extremely important—calling someone a “rapist” is a verifiable statement (has the person been convicted of rape?), but calling someone a “bitch” is a matter of opinion, and therefore protected speech.
We also investigated hate speech laws. However, we found that hate speech and hate crime laws are limited by the First Amendment and are generally only applied by courts as penalty enhancements for other crimes; there are no hate speech laws specific to the online context.
The only cases in which online speech seems to be aggressively prosecuted are those involving minors. If minors are public school students defaming school personnel or “cyberbullying” their classmates, they may be subject to disciplinary action from the school, and may not be protected by the First Amendment. However, this area is a moving target, and the relevant laws appear to be changing rapidly.
Given this landscape, the problem of online harassment and hateful speech is unlikely to be solved solely using existing laws. These laws may be augmented with other solutions, such as community moderation or enforcing terms of service. However, this report focuses only on existing laws and available legal remedies, and does not advocate for or review any proposals for new laws.
This document covers three areas: hate speech, defamation, and online or “cyber” harassment. To assist in understanding the US legal landscape in these areas, we provide an overview of current US laws, key cases, and their relevance to online harassment and hateful speech.
IV. The First Amendment, Unprotected Speech, and the Right to Anonymity
Since the First Amendment provides for the right to free speech, it places limits on laws that attempt to govern online speech. In order to access most information online, a person must take several affirmative steps—sitting at a computer, opening a web browser, typing terms into a search engine, and so forth. Because of these affirmative steps and the availability of parental control software, the Supreme Court has ruled that protecting minors from indecent online materials is not an adequate justification for limiting most online content, as such limits risk suppressing adult speech as well. Laws that regulate online harassment, defamation, and so on therefore face a delicate balancing act: they must be written narrowly to avoid encroaching on speech protected by the First Amendment while still restricting the undesirable conduct in practice. Consequently, several states have very narrow cyberharassment laws that exclusively target the categories of speech that the Supreme Court has held to be unprotected under the First Amendment. These categories include “certain well-defined and narrowly limited classes of speech, the prevention and punishment of which have never been thought to raise any Constitutional problem,” such as obscenity, defamation, and fighting words.
To determine whether content qualifies as obscene, and is therefore constitutionally unprotected, the Supreme Court created the Miller test.  Under the Miller test, speech is obscene if it meets three conditions: (1) "the average person, applying contemporary community standards," would find that the work, taken as a whole, appeals to the prurient interest, (2) the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by applicable state law, and (3) the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.  Several states have cyberharassment laws that criminalize certain obscene speech using this definition (discussed in Section VII). In practice, there is a high threshold for obscenity. 
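Because the Miller test is conjunctive, failing any single prong keeps speech protected. Purely as an illustration of that structure (not as a claim that obscenity can be determined mechanically; each prong is a contextual judgment for a court or jury, and the labels below are our own), the test can be sketched in Python as follows:

    # Illustrative sketch only: the Miller test is applied by judges and
    # juries, not computed. This models the three conjunctive prongs to
    # show that speech is obscene only if ALL of them are met.
    from dataclasses import dataclass

    @dataclass
    class MillerFactors:
        prurient_interest: bool    # prong 1: average person, applying
                                   # contemporary community standards
        patently_offensive: bool   # prong 2: sexual conduct specifically
                                   # defined by applicable state law
        lacks_serious_value: bool  # prong 3: no serious literary, artistic,
                                   # political, or scientific value

    def is_obscene(f: MillerFactors) -> bool:
        """Unprotected as obscene only if every prong is satisfied."""
        return (f.prurient_interest
                and f.patently_offensive
                and f.lacks_serious_value)

A work with serious artistic value, for instance, fails the third prong and remains protected no matter how offensive it is, which is one reason the threshold is so high in practice.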
Defamation is the communication of a false statement of fact that harms the reputation of a victim; it includes libel, which covers written published statements, and slander, which covers spoken statements. Since the ratification of the First Amendment, the Supreme Court has consistently ruled that laws against defamation are not unconstitutional. However, the First Amendment does limit how defamation laws may be applied. For example, if someone makes a defamatory statement about the official conduct of a public figure, the figure can only prevail in a lawsuit if the speaker knew the statement was false or acted with reckless disregard for its truth. Defamatory statements that are labeled as opinion are not protected by the First Amendment if they serve only to harm the victim’s reputation. (Defamation will be discussed in more detail in Section VI.)
A state cannot pass laws that “forbid or proscribe advocacy of the use of force” or make it illegal to advocate breaking the law, except where the speech is “directed to inciting imminent lawless action and is likely to incite or produce such action.” A related category of unprotected speech is covered by the “fighting words” doctrine. Fighting words, such as calling a police officer a “white racist motherf@#ker” and telling him that you wish his mother would die, are exempted from First Amendment protection because “their content embodies a particularly intolerable (and socially unnecessary) mode of expressing whatever idea the speaker wishes to convey,” not because of the content of the message. There is, however, inconsistency and disagreement among courts as to what qualifies as fighting words. In the online context, it is difficult for speech to meet the “imminent lawless action” requirement. As such, most online speech, even if it promotes violence against an individual, will be protected, and the victim will have no legal recourse.
Another category of unprotected speech is “true threats.” A statement is considered a true threat if the speaker “means to communicate a serious expression of an intent to commit an act of unlawful violence to a particular individual or group of individuals.”  The speaker need not actually intend to carry out the threat.  The U.S. Supreme Court has determined that true threats are not protected under the U.S. Constitution for three reasons: preventing fear, preventing the disruption that follows from that fear, and diminishing the likelihood that the threatened violence will occur. 
In practice, it can be quite difficult for internet speech to pass the “true threat” test. For example, in United States v. Alkhabaz, the defendant, who went by the name Jake Baker, had posted several stories to the Usenet group alt.sex.stories which involved the rape, torture, and murder of young women. One such story involved one of his classmates at the University of Michigan; he described in detail the acts of violence he would perform, and the enjoyment he would gain from such acts. These stories were brought to the attention of the university, which, with Baker’s consent, searched Baker’s dormitory room, personal papers, e-mail account, and computer files. Upon seizing Baker’s computer, investigators discovered a series of emails in which Baker and a Canadian man outlined a plan to kidnap the young woman and carry out the fantasies detailed in Baker’s stories. The police believed Baker and his correspondent represented a threat to their potential victims, and so referred Baker to the FBI, who arrested him pursuant to a warrant from a U.S. magistrate judge for violating 18 U.S.C. § 875(c), which prohibits interstate communications containing threats to kidnap or injure another person. Baker was indicted by a grand jury, but the district court dismissed the charges, and the government appealed to the Sixth Circuit Court of Appeals. The Sixth Circuit upheld the dismissal, finding that Baker’s stories did not constitute a true threat and were therefore protected speech. Baker apparently never intended his classmate to see the emails, and he was not emailing his correspondent to threaten his classmate or to attempt to intimidate her. As a result, Baker’s emails and stories did not, in the Sixth Circuit’s view, constitute a threat. Alkhabaz demonstrates that the bar for a “true threat” is quite high, and presumably most hostile online speech would fail to meet the standard set by the Sixth Circuit.
Despite this high standard, it is still possible for online speech to meet the true threat criteria. In State v. Locke, a man appealed his conviction for threatening the then-governor of Washington State, Christine Gregoire. Locke had sent a series of threatening messages to Gregoire via her website. His first email, which read, “I hope you have the opportunity to see one of your family members raped and murdered by a sexual predator. Thank you for putting this state in the toilet. Do us a favor and pull the lever to send us down before you leave Olympia,” was determined to be hyperbolic political speech rather than a true threat. While the second called Gregoire a “c#nt” and said “you should be burned at the stake like any heretic,” the court determined that this was also not a true threat, as “the ancient political or religious pedigree of burning at the stake” is not realistically threatening. However, Locke’s third correspondence was an event request, titled “Gregoire's public execution,” to be held at the Governor’s mansion. The court found that this message constituted a true threat, especially as it was sent only 17 days after Arizona congresswoman Gabrielle Giffords was shot in the head by a disgruntled constituent. For the threat contained in the third email, Locke’s conviction was upheld by the appeals court.
Many obnoxious and hateful online comments are posted anonymously or pseudonymously, meaning that one of the first steps in prosecuting such comments is often identifying the perpetrator. To do so, the plaintiff will generally need as much information as possible about the poster of the comments, such as the IP addresses used to post the messages and any personal information associated with the posting account. This information may not be publicly available, however, in which case the plaintiff will have to obtain it from the ISP. Most ISPs will not voluntarily disclose a user’s confidential information, whether to protect the user or to comply with data privacy laws, so the plaintiff (the party bringing the complaint) will generally need a court order forcing the ISP to disclose such information. However, there is a well-documented First Amendment right to anonymous speech that the court must consider before issuing such an order. As a result, federal and state courts use different court-created tests to determine whether to order the unmasking of an anonymous speaker, as discussed below. These tests balance the speaker’s right to anonymity against the rights of the victim.
While various tests exist, most of them include the same elements. First, the court may require that the plaintiff take reasonable steps to alert the defendant that he or she may be subject to a court order.  These might include sending a private message to the poster’s account, and publicly posting notices to the internet service where the allegedly defaming comments were made. For example, if the allegedly defamatory statements were originally posted to a message board, the plaintiff may need to post a public notice to the same message board alerting the defendant to the potential lawsuit.  Afterwards, the court may require that the defendant be given a reasonable amount of time to respond.  This step allows the defendant time to hire counsel and take the necessary steps to formally oppose the motion in court. 
Courts will then consider whether the plaintiff has provided enough evidence to support each of the individual elements of a defamation claim.  In doing so, the court will consider how strong the plaintiff’s claim is, typically by comparing it to existing procedural standards, such as whether the case would be strong enough to survive a motion for summary judgment or a motion to dismiss.  (A motion for summary judgment is a procedure where one party requests that the court decide a case without a full trial, while a motion to dismiss is a procedure where one party requests that the court dismiss a lawsuit because the claim has no legal remedy.)  A motion for summary judgment will only be granted, meaning the court will only decide the case without a trial, if the significant facts of the case are undisputed and the only issues involve interpreting or applying the law to the facts.  In other words, during an unmasking proceeding the court will apply the law to the facts that the plaintiff has presented and determine whether or not the alleged statements are or may be considered defamatory. 
For example, in Doe v. Cahill, the court examined each of the elements of a libel claim under Delaware law: 1) the defendant made a defamatory statement; 2) concerning the plaintiff; 3) the statement was published; and 4) a third party would understand the character of the communication as defamatory. The court began by considering whether the allegedly defamatory statements were assertions of fact or opinion. In reviewing these statements, the court determined that all of the statements at issue were purely statements of opinion or otherwise not defamatory, so the court found that the case would not survive a motion for summary judgment. However, if the court had determined that the statements at issue were capable of being defamatory, it would have proceeded to examine the remaining elements of a defamation claim in turn. If, after reviewing each element of the claim, the court had determined that the case was strong enough to survive a motion for summary judgment, it would have granted the unmasking order.
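To make the structure of these unmasking tests concrete, the sketch below (Python, purely illustrative and not legal advice; courts weigh evidence rather than booleans, and all names and labels here are our own) strings together the steps described above: notice to the defendant, a response window, and an element-by-element review of the Cahill libel elements:

    # Illustrative sketch of the unmasking sequence described above; each
    # step is reduced to a boolean purely to show the shape of the test.

    CAHILL_LIBEL_ELEMENTS = (
        "defendant made a defamatory statement",
        "the statement concerned the plaintiff",
        "the statement was published",
        "a third party would understand it as defamatory",
    )

    def court_orders_unmasking(notice_given: bool,
                               response_window_elapsed: bool,
                               element_support: dict[str, bool]) -> bool:
        # Step 1: plaintiff takes reasonable steps to notify the defendant,
        # e.g., by posting a notice to the same message board.
        if not notice_given:
            return False
        # Step 2: the defendant gets a reasonable amount of time to retain
        # counsel and formally oppose the motion.
        if not response_window_elapsed:
            return False
        # Step 3: the claim must be strong enough to survive summary
        # judgment, i.e., supported on every element.
        return all(element_support.get(e, False)
                   for e in CAHILL_LIBEL_ELEMENTS)

In Cahill itself, the statements were held to be pure opinion, so the first element failed and no unmasking order issued.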
Finally, some courts may also consider the First Amendment implications in unmasking the defendant. For example, the Ninth Circuit Court of Appeals suggested that “the nature of the speech should be a driving force in choosing a standard by which to balance the rights of anonymous speakers in discovery disputes.” Under this standard, the type of speech is relevant to the level of protection that such speech receives, with anonymous political, religious, or literary speech receiving more protection than certain other types of speech. Other courts, however, hold that the summary judgment test, as discussed above, is the only balancing required in determining if the speaker should be unmasked. Some courts take a middle-ground approach and try to balance the anonymous poster’s First Amendment right of free speech against the strength of the plaintiff’s defamation claim and the necessity for disclosure of the anonymous defendant’s identity, prior to ordering disclosure.
V. Section 230 of the Communications Decency Act

Section 230 of the Communications Decency Act (“CDA”) provides broad immunity to any “interactive computer service” for third-party content posted to its service, as long as the service did not make substantive or editorial contributions to that content. “Interactive computer service” is defined broadly in the statute to include websites, message boards, instant messenger services, blog hosting services, and other internet-based services, including Facebook, MySpace, YouTube, Google, Yahoo, Tumblr, Flickr, Twitter, and even revenge porn sites. Section 230 immunizes these services from lawsuits for defamation, negligence, gross negligence, unfair competition, and false advertising. However, Section 230 expressly states that it has no impact on certain other areas of law, including federal criminal law, federal intellectual property law, communications privacy law, and certain other state claims. As such, interactive computer services may still be sued for hosting copyrighted materials under the DMCA and may be prosecuted for violating federal criminal laws.
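As a rough illustration of the statute’s structure (the labels below are our own, and real Section 230 analysis is considerably more nuanced), the immunity can be thought of as a default rule with express carve-outs:

    # Illustrative sketch: Section 230 immunity as a default rule with the
    # express carve-outs summarized above. Certain state claims also
    # survive; they are omitted here for simplicity.
    CDA_230_CARVEOUTS = {
        "federal criminal law",
        "federal intellectual property law",
        "communications privacy law",
    }

    def service_is_immune(claim_area: str,
                          service_contributed_content: bool) -> bool:
        # No immunity if the service made substantive or editorial
        # contributions to the content at issue.
        if service_contributed_content:
            return False
        # No immunity if the claim falls within an express carve-out.
        return claim_area not in CDA_230_CARVEOUTS

This is why, for example, a host can still be prosecuted under federal criminal law or pursued under the DMCA for copyrighted material, even though a defamation claim over the same user post would fail.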
Given CDA 230, victims of online harassment or hateful speech most often cannot hold the ISP liable, whether the ISP is a blog hosting service, a web forum, a social media site like Facebook or Twitter, or an email provider such as Hotmail or Gmail. (These providers can, however, be subpoenaed to reveal the identity of an anonymous harasser, as shown in the cases discussed in Section VI, Defamation.) Service providers do not have a legal responsibility to moderate or take down content that is harassing or offensive, even if such content violates the site’s terms of service. For example, Facebook encountered protests from feminist groups who objected to a number of user-created Facebook pages in which rape and domestic violence were treated humorously. More than 40 women’s groups wrote an open letter to Facebook arguing that pages like “Fly Kicking Sluts in the Uterus” should be considered hate speech and threatening content, and as such violated Facebook’s own terms of service. While Facebook had been quick to remove homophobic and Islamophobic content from the site, feminist activists criticized Facebook for failing to take sexism seriously. In response, Facebook promised to remove the offending pages and take affirmative steps to screen the site for sexist and violent content in the future. In this case, Facebook was under no legal obligation to remove the offending pages; it chose to do so for public relations and business reasons, in response to a large and successful activist campaign that also targeted Facebook’s advertisers. In such cases, the site in question may decide that removing content is appropriate; however, this is by no means guaranteed.