Senators Warren & Graham Want To Create New Online Speech Police Commission

from the this-bill-causes-me-psychological-harm-and-emotional-distress dept

The regulation will continue until internet freedom improves, apparently. Last year we wrote about Senator Michael Bennet pushing a terrible “Digital Platform Commission” to be the new internet speech police, and now we have the bipartisan, free-speech-hating duo of Senators Elizabeth Warren and Lindsey Graham with their proposal for a Digital Consumer Protection Commission.

The full bill is 158 pages, which allows it to pack in a very long list of terrible, terrible ideas. There are a few good ideas in there (banning noncompetes and no-poach agreements), but there are an awful lot of terrible-to-unconstitutional ideas as well.

I’m not going through the whole thing, or this post would take a month, but I will highlight some of the lowlights.

A brand new federal commission!

First, it’s setting up an entirely new commission with five commissioners, handling work that… is mostly already the purview of the FTC and (to a lesser extent) the DOJ. Oddly, the bill also seeks to give more power to both the FTC and the DOJ, pretty much guaranteeing more bureaucratic clashing.

The area in which the Commission would have new authority, though, beyond what the FTC and DOJ can already do, is almost entirely around policing speech. It would get to designate some websites as “dominant platforms” if they’re big enough: basically 50 million US users (or 100,000 business users), plus a market cap or annual revenue over $550 billion. Bizarrely, the bill makes it a violation of the law to take any action “to intentionally avoid having the platform meet the qualifications for designation as a dominant platform,” which basically means if you TRY NOT TO BE DOMINANT, you could be seen as violating the law. Great drafting, guys.
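To make the circularity concrete, here’s a minimal sketch of the designation test as described above. This is my own illustration, not statutory text: the function and argument names are hypothetical, and I’m assuming the user-count threshold and the market-cap/revenue threshold combine as described.

```python
# A minimal, hypothetical sketch of the “dominant platform” test described
# above. All names are illustrative; the thresholds (50 million US users or
# 100,000 business users, plus a $550 billion market cap or revenue bar)
# come from the description above, not from quoted statutory language.

def is_dominant_platform(us_users: int,
                         business_users: int,
                         market_cap_usd: float,
                         annual_revenue_usd: float) -> bool:
    big_user_base = us_users >= 50_000_000 or business_users >= 100_000
    big_company = max(market_cap_usd, annual_revenue_usd) >= 550e9
    return big_user_base and big_company

# The kicker: under the bill, deliberately keeping any of these inputs below
# the line “to intentionally avoid” designation is itself a violation. A
# platform hovering near the threshold has no lawful way to opt out.
```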

Literally, much of this law is screaming “we don’t want dominant platforms,” but then it says it will punish you for not becoming a dominant platform. That’s because the real goal of this law is not stopping dominant platforms, but allowing the government to control the platforms that people use.

Editorial pressure & control pretending to be about “transparency” and “due process.”

As for the actual requirements under the law: it would require “dominant” platforms to publicly reveal their editorial decision-making process, which seems like a clear 1st Amendment violation. Just imagine if a Democratic administration demanded Fox News publicly declare its editorial guidelines, or a GOP administration demanded the same of MSNBC. In both cases, people would be rightly outraged at this clear intrusion on 1st Amendment editorial practices. But, for whatever reason, people think it’s fine when it’s social media companies. Here it says such companies have to:

make publicly available, through clear and conspicuous disclosure, the dominant platform’s terms of service, which shall include the criteria the operator employs in content moderation practices.

We’ve talked about this before. Transparency is good, and companies should be more transparent, but when you mandate it under law you create real problems, and likely get less transparency. First off, it turns a useful feature into a “compliance” feature, which means that every bit of transparency has to go through careful legal review, and companies will do only exactly what the law requires rather than providing more useful details.

On top of that, “content moderation practices” and their “criteria” are the kind of thing that need to change rapidly, because malicious actors adapt rapidly. But this law is written under the ignorant and foolish belief that content moderation criteria are static things that never change. And since any change will now need to go through legal and compliance review, bad actors get much more time and room to operate while websites check with their lawyers about whether they can actually change a policy to stop a bad actor.

It also requires a clear “notice” and appeals process for any content moderation decision. This is another thing that makes sense in some circumstances, but not in many others, most notably spam. Now, this new rule might not matter too much to the companies likely to be declared “dominant” given that the DSA has similar requirements, but at least the DSA exempts spam. Warren and Graham are so clueless that they’re literally requiring websites to inform spammers that they’re onto them.

By my reading, the bill would require Google to inform spam websites when it downranks them for spam. The definition of a platform explicitly includes search engines, and the bill then requires a “notice” and “appeal” process for any effort to “limit the reach” of content or to “deprioritize” it. The exceptions are cases of “imminent harm,” content related to terrorism or criminal activity, or when law enforcement says not to send the notice. So, search engine spammers, rejoice: Google would now have to give you notice when it downranks you, with clear instructions on how it makes those decisions, so you can try to get around them.

And, it gets dumber: if any user of a “dominant platform” believes that the platform is not following its own terms of service, that user can (1) demand the platform comply, (2) request a mandated appeal, in which the law requires the platform to explain within 7 days “with particularity the reasonable factual basis for the decision,” and (3) file a legal complaint with the Commission itself to try to have the Commission force the website to change its mind.
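To see the operational load, here’s a rough sketch of what that mandated flow looks like for every single moderation decision. Everything here is a hypothetical illustration of my own; the only detail taken from the bill as described above is the 7-day deadline for a particularized explanation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

# A hypothetical sketch of the per-decision appeal pipeline the bill would
# mandate. Class, field, and method names are illustrative, not the bill's.

@dataclass
class ModerationDecision:
    user_id: str
    action: str                        # e.g. "remove", "downrank", "suspend"
    decided_at: datetime = field(default_factory=datetime.now)
    appeal_opened_at: Optional[datetime] = None

    def open_appeal(self) -> datetime:
        """Step 2: the user invokes the mandated appeal; the clock starts."""
        self.appeal_opened_at = datetime.now()
        # The platform must now explain "with particularity the reasonable
        # factual basis for the decision" by this deadline.
        return self.appeal_opened_at + timedelta(days=7)

    def escalate_to_commission(self) -> str:
        """Step 3: unhappy with the answer? File with the Commission."""
        return f"commission_complaint(user={self.user_id}, action={self.action})"

# Every spammer and troll gets these levers too: demand compliance, force a
# written justification within a week, then drag the platform into a
# Commission investigation. Now multiply by the volume of actions a large
# platform takes every day.
```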

This bill was clearly written by very, very, very ignorant people who have never spent a day handling trust & safety complaints. The vast, vast majority of trust & safety issues involve spam or other malicious users trying to game your system. This process, which forces platforms through all of these steps and gives malicious actors the ability to completely tie up moderation systems with reviews and (even worse) with investigations from the Commission itself, is going to be abused so badly.

Also, this section says that the Commission “shall establish a standardized policy that operators of dominant platforms can adopt regarding content moderation and appeals of content-moderation decisions.” This is, to put it mildly, stupid beyond all belief.

Seriously. Take whichever staffer wrote this nonsense and put them on a trust & safety team for a month.

Trust & safety content moderation practices are constantly changing. They need to constantly change. If you establish official under-the-law “best practices” then every website is going to adopt exactly those practices, because that’s how you avoid legal liability. But that means that websites are much slower to react to malicious users who get around those rules, because going beyond best practices is now a legal liability.

On top of that, it means an end to experimentation. The trust & safety field is constantly evolving. Different sites take different approaches, as is often necessary given the community they cater to or the kinds of content they host. What kind of “best practices” could anyone come up with that apply equally to Twitter and Reddit and Wikipedia and GitHub? The answer is that there aren’t any. Each one is really different.

But this bill ignores that, ignorantly assuming that they’re all the same and that there’s one magic set of “best practices” that a government bureaucracy, likely staffed by people with very little trust & safety experience, gets to determine.

Data portability & interop, but without dealing with any of the tradeoffs

There’s a section on data portability and interoperability — a topic of great interest to me — but it makes the same silly mistakes nearly every other regulatory mandate for those two things does. Again, I want there to be more data portability and interoperability, but when you mandate them, you create a variety of other problems, such as questions regarding privacy. I mean, the same people (Elizabeth Warren) now demanding that we mandate interoperability also complained about Cambridge Analytica getting access to user data.

So, under this bill, you’re likely to get a lot more Cambridge Analytica situations, because denying access to such companies might violate the portability and interop requirements. The bill “solves” this by saying “but don’t undermine end-user data protection.” Basically, the bill says “you must unlock all your doors, but don’t read this to mean that you should let burglars rob your house.”

And then it leaves it up to the platforms to figure out how to stop the burglars without having locks at their disposal.

There are ways to do interoperability right, but a government mandate from on high, without taking into account a whole wide variety of tradeoffs… is not it. But it is what we get from the clueless duo of Warren and Graham.

Speech suppression masquerading as ‘privacy’ reform.

We’ve said it before, and I’m sure we’ll say it many more times in the future: we need a federal privacy bill. And, in theory, this bill includes privacy reform. But this is not the privacy reform we need. At all. As with way too many privacy reform bills, this one ends up giving more power and more control to the very companies we constantly accuse of being bad about privacy.

It includes a “duty of loyalty” such that any service has to be designed so that it does not “conflict” with “the best interests of a person” regarding their data. This is another one of those things, like so much in this bill, that sounds good in theory until you get into the actual details, and here (as in the section above) the bill handwaves away all the problematic questions and tradeoffs.

Who determines what’s in the “best interests” of the person in this situation? The person themselves? The company? The government? Each of those has very real problems, none of which the bill weighs, because that would take actual work and actual understanding, which seems like something none of the staffers who wrote this bill cared to do.

Then, there’s a separate “duty of care” that is even worse. We’ve explained for years that this European concept of a “duty of care” has always been a “friendly sounding” way to attack free speech. And, that’s quite clear in this bill. The duty of care requires that a website not use algorithms or user data in a way that “is likely to cause… psychological injuries that would be highly offensive to a reasonable person.”

What?

This bill causes me psychological injuries. And most “highly offensive” speech is also highly… protected by the 1st Amendment. But this bill says that a website has to magically stop such “psychological injuries.”

This is basically “stop all ‘bad’ content from flowing online, but we won’t define ‘bad’ content; we’ll just blame you after anything bad happens.” It’s literally the same mechanism that the Great Firewall of China used in its earliest versions, in which the state would tell ISPs “we’ll fine you if any bad content is spread via your network” but didn’t tell them what counted as bad.

The end result, of course, is vast over-suppression of speech to avoid any possibility of liability.

This is not a privacy bill. It’s a speech suppression bill.

And it gets worse. There’s a “duty of mitigation” as well, which requires any website to “mitigate the heightened risk of physical, emotional, developmental, or material harms posed by materials on, or engagement with, any platform…”

So, you have to “mitigate” the potential “emotional” harms. But, of course, there are all sorts of “emotional” harms that are perfectly legal. Under this bill, apparently, not anymore.

This is yet another attempt at Disneyfying the internet, pretending that if we just don’t let anyone talk about controversial topics then, magically, those topics go away. It’s head-in-the-sand regulation from lawmakers who don’t want to solve hard problems. They just want to hide them, and then blame social media companies should any of them reveal real social problems.

If all of this wasn’t already an attack on the 1st Amendment… the bill includes a “right to be forgotten.”

“A person shall have the right to… delete all personal data of the user that is stored by a covered entity.”

Once again, if you were talking about basic data in other contexts, this could make sense. But we already know exactly how this works in practice, because the EU has such a right to be forgotten and it’s a fucking disaster. It is regularly used to hide news stories, or by Russian oligarchs to try to silence reporters detailing the sources of their wealth.

Do Warren and Graham include exceptions for journalism or other protected speech? Of course not. That would make sense, and remember: the goal of this bill is not to make sense, but to grandstand and pretend they’re “cracking down on big tech.”

Giving China the moral high ground when it forces foreign companies to give up data to local Chinese firms.

For years, China has implemented a program by which non-Chinese companies that want to operate in China must partner with a Chinese-owned company, which effectively gives China much more access to data and the ability to control and punish those foreign companies. It’s the kind of thing the US should be condemning.

Instead, Elizabeth Warren and Lindsey Graham apparently see it as an admirable idea worth copying.

The bill requires that any “dominant platform” operating in the US must be based in the US or own a subsidiary in the US. It’s basically the “no foreign internet company may be successful” act. Except, of course, this will justify such actions not just in China but everywhere else as well. Tons of countries are going to point to the US when they require that US companies set up local subsidiaries too, putting data and employees at much greater risk.

As part of this section, it also says that if more than 10% of the owners or operators of a dominant platform are “citizens of a foreign adversary,” then the operator has to keep a bunch of information in the US. This is basically the “TikTok provision,” without recognizing that it will also be used as justification by countries like India, Turkey, Brazil, and Russia when they demand that US companies keep data in their countries, where the government will then demand access to it.

Please apply for your 1st Amendment license

I wish I were joking, but the bill requires designated “dominant” platforms to get a license to operate, meaning that the government can choose to suspend that license. You know, like Donald Trump threatened to do (even though this was not a thing) to TV stations that made fun of him.

That line alone should suggest why this is problematic. Requiring a special license (with the inherent threat of having that license removed) for businesses primarily engaged in speech is a massive 1st Amendment red flag. It has been allowed for broadcast businesses solely because of spectrum scarcity: only one TV or radio station can operate on a given frequency in a given area.

But, that makes no sense on the internet.

But, Warren and Graham love it. Especially because it lets the licensing authority created by this bill “rescind” or “revoke” the license if it feels that the platform has “engaged in… egregious… misconduct.” I’m sure that won’t be abused at all [insert rolling eye emoji].

If your social media platform license gets revoked, then you “shall not be treated as a corporation” and you “may not operate in the United States.”

This is… authoritarian dictator level bullshit.

Anyway, that’s just some of the many problems with the bill. Amazingly, a bunch of organizations are eagerly endorsing it. I’m not convinced any of them actually read it.

The Digital Consumer Protection Commission Act is endorsed by Accountable Tech, the American Economic Liberties Project, the Center for American Progress, Color of Change, Common Sense Media, the Open Markets Institute, Public Citizen, and Raven. 

I find Public Citizen’s endorsement of the bill particularly problematic, given how hard Public Citizen’s litigation group has fought to protect free speech online. The others are just disappointing, but not as surprising, as nearly all of them have gone off the deep end in believing any regulation of “big tech” must be good because “big tech is bad.”

The failure of all of these organizations to actually consider the real impact of what they’re endorsing says a lot about the leadership of all of those orgs, and none of it good.
