On Wednesday, two proposals were unveiled that would reform the scope of Section 230 of the Communications Decency Act of 1996.  Section 230 immunizes online platforms from civil liability based on third-party content, and provides immunity for acting in “good faith” to restrict or remove content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

The first proposal, entitled the “Ending Support for Internet Censorship Act,” was introduced by Senator Josh Hawley (R-MO).  The second proposal comes from the Department of Justice (“DOJ”) in a report entitled “Section 230—Nurturing Innovation or Fostering Unaccountability?”.  Both proposals come after the Trump Administration issued an executive order on Section 230.

The Ending Support for Internet Censorship Act

Senator Hawley’s proposed legislation would amend Section 230 to require “politically unbiased content moderation.”  The amendment would deny Section 230’s liability protections to any company that does not obtain “an immunity certification” from the Federal Trade Commission (“FTC”) stating that “the company does not moderate information provided by” others in a way that is “biased against a political party, political candidate, or political viewpoint.”  These requirements apply to any “interactive computer service” with more than 30 million U.S. active monthly users, 300 million worldwide active monthly users, or more than $500 million in global annual revenue.

To obtain an “immunity certification,” a covered entity must prove “by clear and convincing evidence” that it does not currently, and has not for the past two years, engaged in politically biased moderation of content.  Politically biased moderation includes content moderation “designed to negatively affect” a political party, candidate, or viewpoint; moderation that “disproportionately restricts or promotes access to, or the availability of” a political party, candidate, or viewpoint; and any content moderation decision by an employee “motivated by an intent to negatively affect” a political party or its candidates or viewpoints.  Excepted from this definition are activities not protected under the First Amendment and activities that are “necessary for business,” which is defined to mean a “lawful act that advances the growth, development, or profitability of a company” and is not political.  An entity also is not deemed to engage in politically biased moderation based on an employee’s actions if the entity publicly discloses that the employee’s actions were politically motivated and “terminates or otherwise disciplines the employee.”

Certification approval requires “at least 1 more than a majority of the Commissioners” of the FTC, and dissenting opinions must be published.  Applications for immunity certifications must be processed within six months, and, once granted, a certification is valid for two years.  The proposal requires the FTC to establish a system for public comment and participation in the process.  Lastly, the proposal would require the FTC to issue a report every five years on whether these newly created protections are “no longer necessary and should be modified.”

The Department of Justice Proposal

The DOJ’s Report cites the “significant technological changes since 1996” and the “expansive interpretation that courts have given Section 230” in arguing that online platforms are “both immune from a wide array of illicit activity on their services and free to moderate content with little transparency or accountability.”  In light of this, the Report proposes that Congress amend Section 230 in four ways:

Incentivizing Online Platforms to Address Illicit Content

The DOJ’s first proposal is meant to deal with “illicit content.”  Section 230, of course, does not apply to any content that would violate federal criminal laws, including laws against obscenity, child pornography, and endangering children.  Still, the DOJ suggests that Section 230 should be amended to further prevent illicit online content.  It proposes denying Section 230 immunity to platforms that “purposefully or knowingly” “promote, solicit, or facilitate activity or material that the provider knew or should have known would violate federal criminal law.”  It also urges denying immunity to actors who purposefully “design or operate their systems in any manner that results in an inability to identify or access most (if not all) unlawful content,” as well as carving out from Section 230 protection claims under laws related to terrorism, child sex abuse, and cyberstalking.  The DOJ also proposes requiring platforms to provide users with mechanisms to flag unlawful content.

Clarifying Federal Government Civil Enforcement Capabilities

The DOJ’s second proposal would amend Section 230 immunity to not apply in any case brought by the federal government, whether criminal or civil.  Currently, Section 230 only expressly precludes immunity in the criminal context.

Promoting Competition

The third proposal in the DOJ Report would amend Section 230 so it cannot be used “as a tool to block antitrust claims aimed at promoting and preserving competition.”  The Report says that such immunity was not part of the original intent of the statute, and claims that this reform is needed because there is presently “a risk that defendants will continue to try to use Section 230 to creatively block antitrust actions.”

Promoting Open Discourse and Greater Transparency

The DOJ’s fourth set of proposed reforms is aimed at content moderation.  First, the DOJ proposes deleting the “otherwise objectionable” category from Section 230 on the ground that it is too vague, while adding new immunities for moderation of content that “violates federal law or promotes violence or terrorism.”  The DOJ states that “the proposals strike a more appropriate balance between promoting an open, vibrant Internet and preserving platforms’ discretion to restrict obscene and unlawful content.”  Second, the DOJ proposes defining the “good faith” requirement of Section 230 immunity to require providers to do a number of things:  publicly state their content moderation criteria in their terms of use; follow their stated terms of use; base any content restrictions “on an objectively reasonable belief” that the content falls within Section 230’s enumerated categories; and explain the reasons for the content moderation to the content provider.  Third, the DOJ supports “adding a provision to make clear that a platform’s decision to moderate content either under (c)(2) or consistent with its terms of service does not automatically render it a publisher or speaker for all other content on its service.”