from the from-techlash-to-ailash dept
Just recently we had Annalee Newitz and Charlie Jane Anders on the Techdirt podcast to discuss their own podcast mini-series “Silicon Valley v. Science Fiction.” Some of that discussion was about a view spreading in Silicon Valley, often oddly coming from AI’s biggest boosters, that AI is an existential threat to the world and that we need to stop it.
Charlie Jane and Annalee make some really great points about why this view should be taken with a grain of salt, suggesting the “out of control AI that destroys the world” scenario seems about as likely as other science fiction tropes around monsters coming down from the sky to destroy civilization.
The timing of that conversation was somewhat prophetic, I guess. Over the following couple of weeks there was an explosion of public pronouncements from the AI doom and gloom set, and the same ideas we had just been discussing on the podcast as percolating around Silicon Valley suddenly became a front page story.
In our discussion, I pointed out that the AI doom and gloomers are at least a change from the past, when we famously lived in the “move fast and break things” world, where the idea of thinking through the consequences of new technologies was considered quaint at best, and actively harmful at worst.
But, as the podcast guests noted, the whole discussion seems like a distraction. First, there are actual real-world problems today with black box algorithms doing things like enhancing criminal sentences based on unknown inputs, or determining whether or not you’ve got a good social credit score in some countries.
There are tremendous legitimate issues with black box algorithms that could be tackled today, but none of the doom and gloomers seem all that interested in solving any of them.
Second, the doom and gloom scenarios all seem… static? I mean, sure, they all say that no one knows exactly how things will go wrong, and that’s part of the reason they’re urging caution. But they also all seem to go back to Nick Bostrom’s paperclip maximizer thought experiment, as if that story has any relevance at all to the real world.
Third, many people are now noticing and calling out that much of the doom and gloom seems to be the same sort of “be scared… but we’re selling the solution” ghost story we’ve seen in other industries.
So, it’s good to see serious pushback on the narrative as well.
A bunch of other AI researchers and ethicists hit back with a response letter that makes some of the points I made above, though much more concretely:
While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as “Stochastic Parrots”), such as “provenance and watermarking systems to help distinguish real from synthetic” media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined “powerful digital minds” with “human-competitive intelligence.” Those hypothetical risks are the focus of a dangerous ideology called longtermism that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities, 2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.
Others are speaking up about it as well:
“It’s essentially misdirection: bringing everyone’s attention to hypothetical powers and harms of LLMs and proposing a (very vague and ineffective) way of addressing them, instead of looking at the harms here and now and addressing those—for instance, requiring more transparency when it comes to the training data and capabilities of LLMs, or legislation regarding where and when they can be used,” Sasha Luccioni, a Research Scientist and Climate Lead at Hugging Face, told Motherboard.
Arvind Narayanan, an Associate Professor of Computer Science at Princeton, echoed that the open letter was full of AI hype that “makes it harder to tackle real, occurring AI harms.”
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the open letter asks.
Narayanan said these questions are “nonsense” and “ridiculous.” The very far-out questions of whether computers will replace humans and take over human civilization are part of a longtermist mindset that distracts us from current issues. After all, AI is already being integrated into people’s jobs and reducing the need for certain occupations, without being a “nonhuman mind” that will make us “obsolete.”
“I think these are valid long-term concerns, but they’ve been repeatedly strategically deployed to divert attention from present harms—including very real information security and safety risks!” Narayanan tweeted. “Addressing security risks will require collaboration and cooperation. Unfortunately the hype in this letter—the exaggeration of capabilities and existential risk—is likely to lead to models being locked down even more, making it harder to address risks.”
In some ways, this reminds me of the privacy debate. After things like the Cambridge Analytica mess, there were all sorts of calls to “do something” about user privacy. But so many of the proposals focused on handing even more control to the big companies that were the problem in the first place, rather than moving control of the data to the end user.
That is, our response to privacy leaks and messes from the likes of Facebook… was to tell Facebook: hey, why don’t you control more of our data, and just be better about it, rather than the actual solution of giving users control over their own data.
So, similarly, here, it seems that these discussions about the “scary” risks of AI are all about regulating the space in a manner that just hands the tools over to a small group of elite “trustworthy” AI titans, who talk up the worries and fears of what might happen if the riff raff were ever able to create their own AI. It’s the Facebook situation all over again, where their own fuckups led to calls for regulation that just give them much greater power, and everyone else less power and control.
The AI landscape is a little different, but there’s a clear pattern here. The AI doom and gloom doesn’t appear to be about fixing existing problems with black box algorithms — just about setting up regulations that hand the space over to a few elite and powerful folks who promise that they, unlike the riff raff, have humanity’s best interests in mind.
Filed Under: ai, doom and gloom, existential risks, longtermism