WHO TO SUE?: A BRIEF COMMENT ON THE CYBER CIVIL RIGHTS AGENDA
VIVA R. MOFFAT†
Danielle Citron’s groundbreaking work on cyber civil rights raises a variety of interesting possibilities and difficult issues. [1] In thinking about the development of the cyber civil rights agenda, one substantial set of concerns revolves around a regulatory question: what sorts of claims ought to be brought, and against whom? The spectrum of options runs from pursuing existing legal claims against individual wrongdoers, to developing new legal theories and claims, to pursuing either existing or new claims against third parties. I suggest here—very briefly—that for a variety of reasons the cyber civil rights agenda ought to be pursued in an incremental manner and that, in particular, we ought to be quite skeptical about imposing secondary liability for cyber civil rights claims.
Citron has argued very persuasively that online harassment, particularly of women, is a serious and widespread problem. I will not describe or expand upon her claims and evidence here, but for the purposes of this brief essay, I assume that online harassment is a problem. Determining what, if anything, to do about this problem is another matter. There are a variety of existing legal options for addressing online harassment. Victims of the harassment might bring civil claims for defamation or intentional infliction of emotional distress. [2] Prosecutors might, under appropriate circumstances, indict harassers for threats or stalking or, perhaps, conspiracy. [3] These options are not entirely satisfactory: because of IP address masking, wireless networks, and other technological hurdles, individual wrongdoers can be difficult, if not impossible, for plaintiffs and prosecutors to find. Even if found, individual wrongdoers might be judgment-proof. Even if found and able to pay a judgment, individual wrongdoers may not be in a position to take down the offending material, and they are certainly not in a position to monitor or prevent similar bad behavior in the future.
Thus there are reasons to pursue secondary liability—against ISPs, website operators, or other online entities. Current law, however, affords those entities general and broad immunity for the speech of others. Section 230 of the Communications Decency Act provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” [4] This provision has been interpreted broadly such that ISPs, website operators, and others are not indirectly liable for claims such as defamation or intentional infliction of emotional distress. [5] The statute provides a few exceptions, for intellectual property claims, for example. [6] The proponents of the cyber civil rights agenda have proposed that additional exceptions be adopted. For example, Mary Anne Franks has analogized online harassment to workplace harassment and suggested that Section 230 immunity ought to be eliminated for website operators hosting harassing content. [7]
Notwithstanding the force of the arguments about the extent of the problem of online harassment and the reasons for imposing third party liability, I suggest that claims for indirect liability ought to be treated with skepticism for a variety of reasons. [8]
First, it is unclear whether the imposition of third party liability is likely to be effective at reducing or eliminating the individual bad behavior that is problematic. Secondary liability would presumably entail some proof of, for example, the third party’s ability to control the wrongful behavior or the place in which that behavior occurred, the third party’s knowledge of the bad behavior, or the third party’s inducement of the harassment (or some other indicia of responsibility of the third party). If this is so, it is easy to imagine that third parties—ISPs, website operators, and so on—who wish to avoid imposition of secondary liability or who wish to encourage or permit the “Wild West” behavior online will take measures to avoid findings of ability to control, of knowledge, or of inducement. Website operators might, for example, employ terms of use that strongly condemn online harassment and that require that users indemnify the website operators. ISPs might adopt strategies that effectively reduce or eliminate any “knowledge” the entity might have of what occurs on the site. The third parties might design their operations such that they cannot control user-created content, much as the filesharing services and peer-to-peer networks did in the wake of the RIAA’s pursuit of secondary liability claims.
Having just postulated that indirect liability may be ineffective, I recognize that my second concern may seem contradictory: indirect liability may also be overbroad. The collateral consequences of imposing secondary liability for user-generated content are enormous. As many have pointed out, third party liability may very well have substantial chilling effects on speech. Even if individual wrongdoers are willing to put their views out in the world, website operators and ISPs are likely to implement terms of use, commenting policies, and takedown procedures that are vastly overbroad. This is not to say that there are no collateral consequences, such as chilling effects on speech, from the imposition of direct liability, but only to speculate that such effects are potentially greater as a result of third party liability.
Third, to the extent that the cyber civil rights agenda entails (and perhaps emphasizes) a norm-changing enterprise, it seems at least possible that claims of indirect liability are less likely to be effective in that regard. Revealing an individual’s bad behavior and pursuing that wrongdoer through the legal system represents a straightforward example of the expressive value of the law at work: public condemnation of wrongful behavior. Claims for indirect liability are less likely to allow for such a straightforward story. Many (though not all) website operators and ISPs are engaged in very little behavior that is easily categorized as wrongful. Instead, third party liability of those entities is justified on other grounds, such as the entity’s ability to control the online behavior, the receipt of benefits from the bad behavior, or knowledge of the harassment. Attempts to hold these entities liable may not serve the expressive value of changing the norms of online behavior because in the vast majority of instances people are less likely to be convinced that the behavior by the third party was, in fact, wrongful. [9] In short, the argument that the imposition of third-party liability will change norms about individual online behavior strikes me as speculative.
Finally, a number of the reasons that victims might pursue claims against third parties simply are not sufficient to justify imposition of such liability. One might seek third party liability because individual wrongdoers cannot be found or because those individual wrongdoers are judgment-proof. Neither reason, though understandable, is sufficient. As a descriptive matter, third party liability in general is rarely or never imposed solely for one of those reasons. As a fairness matter, that is the right result: it would be inequitable to hold a third party liable solely because the wrongdoer cannot be found or cannot pay.
Each of the concerns sketched out above applies either to a lesser extent or not at all to the pursuit of direct liability claims, civil or criminal. While there are other problems with efforts to seek redress against individual wrongdoers, that is the more fruitful path for the development of the cyber civil rights agenda.
________________
Notes:
† Assistant Professor, University of Denver Sturm College of Law. J.D., University of Virginia Law School; M.A., University of Virginia; A.B., Stanford University.
1. Danielle Keats Citron, Cyber Civil Rights, 89 B.U. L. REV. 61 (2009); Danielle Keats Citron, Law’s Expressive Value in Combating Cyber Gender Harassment, 108 MICH. L. REV. 373 (2010).
2. See Citron, Cyber Civil Rights, supra note 1, at 86–89.
3. Id. See, for example, the indictment of Lori Drew for conspiracy and violation of the Computer Fraud and Abuse Act, 18 U.S.C. § 1030. (The indictment is available online at http://www.scribd.com/doc/23406509/Indictment.) Drew was eventually acquitted by the judge in the case. See Rebecca Cathcart, Judge Throws out Conviction in Cyberbullying Case, N.Y. TIMES, July 2, 2009, available at http://www.nytimes.com/2009/07/03/us/03 ... &scp=4&sq= lori%20drew&st=cse.
4. 47 U.S.C. § 230(c)(1) (2006).
5. For a summary of the development of the CDA’s immunity provisions, see H. Brian Holland, In Defense of Online Intermediary Immunity: Facilitating Communities of Modified Exceptionalism, 56 U. KAN. L. REV. 369, 374–75 (2008) (“[C]ourts have consistently extended the reach of § 230 immunity along three lines . . . .”).
6. The statute provides exceptions for intellectual property claims, federal criminal enforcement, and a few others. 47 U.S.C. § 230(e) (2006). Third party liability for intellectual property claims is also regulated and partly immunized. See 17 U.S.C. § 512 (2006).
7. Mary Anne Franks, Unwilling Avatars: Idealism and Discrimination in Cyberspace, COLUM. J. GENDER & L. (forthcoming Feb. 2010), available at http://papers.ssrn.com/sol3/papers.cfm? ... id=1374533.
8. On the other hand, I have much less concern about the vigorous pursuit of claims against individual wrongdoers.
9. In the course of representing a student sued by the RIAA for uploading digital music files in violation of the Copyright Act, I asked her if she had heard of the Napster opinion (A&M Records, Inc. v. Napster, Inc., 239 F.3d 1004 (9th Cir. 2001) (notably, for purposes of this anecdote, an indirect liability case)). She said, “Yes, but I used Gnutella.” The suit for indirect liability obviously didn’t have the expressive value for that student that the recording industry might have hoped.