Google’s Plan To DRM The Web Goes Against Everything Google Once Stood For

from the do-not-drm-the-web dept

The grand old enshittification curve strikes again. Remember, as Cory Doctorow describes it, the process of enshittification entails these steps:

first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves.

Way too many companies go through this process. In the past, we’ve talked about how young tech companies innovate, whereas old ones litigate. The underlying issue is that as companies become less innovative, rather than creating new useful things, they focus on extracting more value however they can, while simultaneously trying to stymie and hold off innovative upstarts.

There are many ways in which Google has clearly reached that stage in various aspects of its business. But, there’s been talk over the past few weeks of something that is truly problematic: Google toying with what is quite clearly DRM for the web, in a system called Web Environment Integrity that it is proposing to build into Chrome.

If you listened to my podcast discussion with Doctorow just last week, which was ostensibly about Cory’s latest novel, you’ll know we spent a lot of time talking about enshittification, including the idea that inspired the novel: Microsoft’s early push to build its “trusted computing” system, with a separate, cordoned-off chip that would check what your otherwise open computer was doing, so that it could vouch that what you were doing was legit. Now, there are pros and cons to this approach, but it fundamentally changes how computers work.

From the podcast, Cory summed it up this way:

So, 20 years and eight months or so ago, a team from Microsoft came to the Electronic Frontier Foundation and presented us with something called the Next Generation Secure Computing Base, which they also called Palladium, and which today we call Trusted Computing. And the idea here, and again, I apologize in advance for how gnarly this is, the idea here is that if you have a computer, because it is like a universal Turing machine that can run any program, it is impossible for anyone else to know which programs your computer is running.

And on the one hand, that means that you could always take, say, some surveillance tools that you’re forced to run by your boss or by an abusive spouse or by your government, and put them inside a virtual machine, and they would never know it. They’d be a head in a jar. They’d never know it.

You can also emulate old pieces of software inside a new piece of software. You can open up a browser window and simulate a whole, like, Mac SE running System 7. In fact, you know, on my little laptop computer, a middle-of-the-range laptop computer here, I can open up 15, 20 browser tabs and emulate 20 Mac SEs without breaking a sweat. My fan doesn’t even turn on.

And so there’s this, like, character of universality, and it’s both a feature and a bug. And it’s a bug in some trivial ways, like it’s really hard for me to know whether you and I are playing a video game in which you are truly not running any, like, aim-hack software. It’s also a bug in that, like, I can’t run a server in the cloud and know for sure that the people who own that server aren’t siphoning off my data. And it’s also a bug in that I can’t ask you, my trusted technical expert, to remotely look at what my computer is doing and tell me whether it’s running any spyware, because the spyware can just present itself to you as a not-spyware application.

And so this is the problem that Microsoft was setting out to solve. And the way that they were gonna solve it is they were gonna put another computer in your computer, right, a separate, fairly low-powered, extremely kind of simple and easily audited computer that was gonna be sealed in epoxy, mounted to the board in a way that it couldn’t be removed without revealing that it had been removed. It was gonna have, like, acid in an epoxy pouch on the top of the chip, so if you tried to decap it and fuzz it or, you know, put an electron tunneling microscope on it, it would dissolve, right?

And it was going to be this, like, tamper-evident piece of hardware. And this hardware could observe all the things going on in the other computer, in the computer you interact with. And you could ask it, hey, take your observations about my computer, make a manifest about what my computer is doing, like the… bootloader and operating system and the applications and the extensions and what’s going on in the kernel and whatever, and make a cryptographically signed manifest of that and send that to someone else who wants to validate what kind of computer I’m running. This is called remote attestation, and it is like a genuinely new computing capability, one that we had historically lacked.

It is, I would argue, as powerful and as complicated and as difficult to get your head around, and as potentially troubling, as, like, universal computing, and networking, and strong encryption, which is to say working encryption. It is like a new power, and maybe even a superpower, for your computer: to allow multiple people who don’t trust each other to nevertheless trust one another’s statements about how their computers are configured.

That is like a really powerful thing.

If you think about work from home: you know, I was just at a friend’s house yesterday who does a lot of commercially sensitive work and who has been targeted repeatedly by state actors and by private corporate espionage, as well as phishers. He says he gets new employees and, as soon as it hits LinkedIn that they’re on the network, those employees get phished.

People try to take over their home computers, and during remote work this was a huge problem, because suddenly his corporate perimeter was drawn around devices in people’s houses. And there were, like, extremely powerful adversaries trying to break into those computers and steal the data, and it was worth a lot of money. And so he wanted to be able to ask those computers how they were configured, not because he didn’t trust the employees, but because he didn’t trust those employees’ technical capabilities, right? He wanted to give them a backstop in which their own mistakes wouldn’t be fatal for their jobs and for the enterprise. Right.

So there’s some like beneficial, consensual ways to conceive of this.

So this is the thing Microsoft came and presented to us, and we were like, wouldn’t this let you determine whether someone was using OpenOffice and stop them from opening a Word document, or, you know, iWork, right? You know, Numbers, Pages, and Keynote. They’re like, ‘oh yes, totally.’ And we were like, wouldn’t this let you distinguish between people running SMB and Samba and keep them off the network? And they were like, that too, right? And they were like, ‘but maybe we could, you know, everyone in the enterprise could run a version of this that you trusted instead of one that we trusted.’ And some of those people, like one of them is a guy called Peter Biddle, who’s actually an honorable fella who really did believe in this. But you know, Peter Biddle wasn’t the boss of Microsoft.

And you know, this gun on the mantelpiece in Act 1 had a severe chance of going off by Act 3. And so here we are at Act 3, 20 years after fighting about this. And it’s got multiple guises, right? Digital rights management, trusted computing, UEFI boot locking, the broadcast flag, which is back. I don’t know if you’re following this. There’s a new version of ATSC that’s got DRM in it. And the FCC is likely to greenlight it. And the whole broadcast thing is gonna be back again.

You know, there’ve been so many names for this over 20 years, and they all boil down to this thing: should your computer be able to be compelled to tell the truth, even when you would prefer that it lie on your behalf? Should there be a facility in your computer that you can’t control that other people can remotely trigger? Are the costs worth the benefits? Is there a way to mitigate those costs? What are we going to do about this? And as I say, this is an argument that I’ve been having with myself for 20 years that basically no one else has advanced.
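
To make the “signed manifest” idea Cory describes a bit more concrete, here is a minimal sketch of the general remote-attestation pattern. Every name in it is made up for illustration, and real attesters (TPMs, Play Integrity, App Attest) differ considerably in the details; the point is only the shape of the flow, in which a component you don’t control signs a report about your machine, and a remote party checks that signature against a key the attester’s vendor publishes.

```typescript
// Illustrative only: a toy version of the remote-attestation flow described
// above. The manifest format and all names here are hypothetical.
import { createSign, createVerify, generateKeyPairSync } from "node:crypto";

// What the sealed "computer inside your computer" claims to have observed.
interface Manifest {
  bootloader: string;
  operatingSystem: string;
  runningApplications: string[];
  timestamp: number;
}

// The attester's key pair. In real hardware the private key never leaves the
// tamper-evident chip; the corresponding public key is published by the vendor.
const { privateKey, publicKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048,
});

// Step 1: the attester signs a manifest of what it observed on the machine.
function attest(manifest: Manifest) {
  const signer = createSign("sha256");
  signer.update(JSON.stringify(manifest));
  return { manifest, signature: signer.sign(privateKey, "base64") };
}

// Step 2: a remote party (an employer, a game server, a website) verifies the
// signature against the attester's public key. If it checks out, they treat
// the manifest as the truth about your computer, whether you like it or not.
function verify(report: { manifest: Manifest; signature: string }): boolean {
  const verifier = createVerify("sha256");
  verifier.update(JSON.stringify(report.manifest));
  return verifier.verify(publicKey, report.signature, "base64");
}

const report = attest({
  bootloader: "vendor-signed",
  operatingSystem: "SomeOS 14.2",
  runningApplications: ["approved-browser", "vpn-client"],
  timestamp: Date.now(),
});
console.log("Attestation valid:", verify(report));
```

The crucial detail, as Cory notes, is that the signing key isn’t yours: the report is only worth something to the remote party precisely because you can’t forge or alter it.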

And, really, Cory’s explanation above is about the best description I can come up with for what’s happening here with the Web Environment Integrity system. Here’s how Google describes the project, which reads rather differently once you’ve read (or heard) Doctorow’s explanation:

With the web environment integrity API, websites will be able to request a token that attests key facts about the environment their client code is running in. For example, this API will show that a user is operating a web client on a secure Android device. Tampering with the attestation will be prevented by signing the tokens cryptographically.

Websites will ultimately decide if they trust the verdict returned from the attester. It is expected that the attesters will typically come from the operating system (platform) as a matter of practicality, however this explainer does not prescribe that. For example, multiple operating systems may choose to use the same attester. This explainer takes inspiration from existing native attestation signals such as App Attest and the Play Integrity API.

There is a tension between utility for anti-fraud use cases requiring deterministic verdicts and high coverage, and the risk of websites using this functionality to exclude specific attesters or non-attestable browsers. We look forward to discussion on this topic, and acknowledge the significant value-add even in the case where verdicts are not deterministically available (e.g. holdouts).
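
For a sense of what that description would translate to in practice, here is a rough sketch. The client-side call name follows what early drafts of the explainer proposed (navigator.getEnvironmentIntegrity), but the header, token contents, and server-side logic below are assumptions invented for illustration, not anything Google has shipped or specified.

```typescript
// Illustrative sketch only: WEI was a draft proposal, and the details below
// (header name, verdict fields, attester names) are made up for clarity.

// --- In the page (client side) ---
async function fetchWithIntegrityToken(url: string): Promise<Response> {
  // The site asks the browser for a signed token attesting facts about the
  // environment (e.g. "unmodified browser on a 'secure' Android device").
  // The browser obtains that token from a platform attester it trusts.
  const token: string = await (navigator as any).getEnvironmentIntegrity(
    "content-binding-for-this-request" // ties the token to this specific use
  );
  return fetch(url, { headers: { "X-Environment-Integrity": token } });
}

// --- On the website's server ---
interface AttestationVerdict {
  attester: string;            // which attester vouched (typically the OS vendor)
  environmentTrusted: boolean; // the attester's judgment of the client
  signatureValid: boolean;     // token checked against the attester's public key
}

function decideWhetherToServe(verdict: AttestationVerdict): boolean {
  // This is the part critics object to: the website, not the user, decides
  // which attesters (and therefore which browsers, operating systems, and
  // extensions) it will accept. Anything the attester won't vouch for, such as
  // an ad-blocking setup, a niche browser, or an accessibility tool driving
  // the page, can simply be refused.
  const acceptedAttesters = ["platform-attester-we-recognize"];
  return (
    verdict.signatureValid &&
    verdict.environmentTrusted &&
    acceptedAttesters.includes(verdict.attester)
  );
}
```

The “holdouts” Google mentions at the end would have the browser randomly decline to produce a token, so that sites couldn’t make attestation a hard requirement; as you’ll see below, Mozilla’s view is that this safeguard is unlikely to be effective.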

This is, understandably, causing some controversy. And it should. The most comprehensive and understandable early argument I saw for how troubling this is came from Alex Ivanovs, who pointed out that one “side effect” of this would be enabling Google to effectively block ad blocking. And, of course, plenty of people will insist that that’s not a side effect, that’s the end goal. As he notes:

A significant concern stemming from the tech community is the potential for monopolistic control. By controlling the “attesters” that verify client environments, Google, or any other big tech company, could potentially manipulate the trust scores, thereby deciding which websites are deemed trustworthy. This opens up a can of worms regarding the democratic nature of the web.

As one GitHub user commented, “This raises a red flag for the open nature of the web, potentially paving the way for a digital hierarchy dominated by a few tech giants.”

Quite a discussion broke out on GitHub over all this, with Mozilla stepping up pretty quickly (in the GitHub thread) to highlight how this move was against the open web, and why Mozilla was against it:

Mechanisms that attempt to restrict these choices are harmful to the openness of the Web ecosystem and are not good for users.

Additionally, the use cases listed depend on the ability to “detect non-human traffic” which as described would likely obstruct many existing uses of the Web such as assistive technologies, automatic testing, and archiving & search engine spiders. These depend on tools being able to receive content intended for humans, and then transform, test, index, and summarize that content for humans. The safeguards in the proposal (e.g., “holdback”, or randomly failing to produce an attestation) are unlikely to be effective, and are inadequate to address these concerns.

Detecting fraud and invalid traffic is a challenging problem that we’re interested in helping address. However this proposal does not explain how it will make practical progress on the listed use cases, and there are clear downsides to adopting it.

This week, browser maker Brave also came out against the plan, saying its browser wouldn’t ship with WEI enabled.

Brave strongly opposes Google’s “Web Environment Integrity” (WEI) proposal. As with many of Google’s recent changes and proposals regarding the Web, “Web Environment Integrity” would move power away from users, and toward large websites, including the websites Google itself runs. Though Brave uses Chromium, Brave browsers do not (and will not) include WEI. Further, some browsers have introduced other features similar to, though more limited than, WEI (e.g., certain parts of WebAuthn and Privacy Keys); Brave is considering how to best restrict these features without breaking benign uses.

Google’s WEI proposal is frustrating, but it’s not surprising. WEI is simply the latest in Google’s ongoing efforts to prevent browser users from being in control of how they read, interact with, and use the Web. Google’s WebBundles proposal makes it more difficult for users to block or filter out unwanted page content, Google’s First Party Sets feature makes it more difficult for users to make decisions around which sites can track users, and Google’s weakening of browser extensions straightforwardly makes it harder for users to be in control of their Web experience by crippling top ad-and-tracker-blocking extensions such as uBlock Origin. This is unfortunately far from a complete list of recent, similar user-harming Web proposals from Google. Again, Brave disables or modifies all of these features in Brave’s browsers.

Now, I’ve seen some conspiracy theories making the rounds about this, trying to argue that it’s not just a terrible, awful, dangerous, problematic idea, but that there are truly nefarious (think: government surveillance) reasons behind all this. And that’s… nonsense.

But it is bad. It’s very clearly opposed to the principles of an open web, the kind of thing that Google used to be at the forefront of fighting for. But, of course, as companies get older and lose that innovative edge, they look to extract more value out of users. And that leads down this path.

And, yes, there are real concerns about abuse that WEI claims to be addressing. As Cory discussed regarding Microsoft’s original plan 20 years ago, these are new capabilities that can be used to stop some very problematic things. But… the way this is being done fundamentally restructures the open internet into something that is not the same at all.

It goes against the most important values that we push for here at Techdirt, around pushing power to the edges of the network rather than centralizing it. Whether or not you believe Google’s motives in putting together WEI are benign, the use of such a tool will not remain so.

As Cory said, the gun on the mantle in Act 1 is very likely to go off by Act 3. The system can be abused, especially by big powerful companies like Google. And that means, at some point, it will be abused by big powerful companies like Google.

Supporting the open web requires saying no to WEI, and having Google say no as well. It’s not a good policy. It’s not a good idea. It’s a terrible idea that takes Google that much further down the enshittification curve. Even if you can think of good reasons to try to set up such a system, there is way too much danger that comes along with it, undermining the very principles of the open web.

It’s no surprise, of course, that Google would do this, but that doesn’t mean the internet-loving public should let them get away with it.

Companies: google

