Not all technological innovation deserves to be called progress. That’s because some advances, despite their conveniences, may not advance society as much, on balance, as advertised. One researcher who stands opposite technology’s cheerleaders is MIT economist Daron Acemoglu. (The “c” in his surname is pronounced like a soft “g.”) IEEE Spectrum spoke with Acemoglu—whose fields of research include labor economics, political economy, and development economics—about his recent work and his take on whether technologies such as artificial intelligence will have a positive or negative net effect on human society.
IEEE Spectrum: In your November 2022 working paper “Automation and the Workforce,” you and your coauthors say that the record is, at best, mixed when AI encounters the workforce. What explains the discrepancy between firms’ greater demand for skilled labor and their actual staffing levels?
Acemoglu: Firms often lay off less-skilled workers and try to increase the employment of skilled workers.
In theory, high demand and tight supply are supposed to result in higher prices—in this case, higher salary offers. It stands to reason that, based on this long-accepted principle, firms would think ‘More money, less problems.’
Acemoglu: You may be right to an extent, but… when firms complain about skill shortages, I think part of it is that they’re complaining about the general lack of skills among the applicants they see.
In your 2021 paper “Harms of AI,” you argue that if AI remains unregulated, it’s going to cause substantial harm. Could you provide some examples?
Acemoglu: Well, let me give you two examples from ChatGPT, which is all the rage nowadays. ChatGPT could be used for many different things. But the current trajectory of the large language model, epitomized by ChatGPT, is very much focused on the broad automation agenda. ChatGPT tries to impress the users… What it’s trying to do is be as good as humans in a variety of tasks: answering questions, being conversational, writing sonnets, and writing essays. In fact, in a few things, it can be better than humans, because writing coherent text is a challenging task, and predictive tools that anticipate which word should come next, trained on a large corpus of data from the Internet, do that fairly well.
The path that GPT-3 [the large language model that spawned ChatGPT] is going down is emphasizing automation. And there are already other areas where automation has had a deleterious effect—job losses, inequality, and so forth. If you think about it, you will see—or you could argue, anyway—that the same architecture could have been used for very different things. Generative AI could be used, not for replacing humans, but to be helpful for humans. If you want to write an article for IEEE Spectrum, you could either go and have ChatGPT write that article for you, or you could use it to curate a reading list for you that might capture things you didn’t know yourself that are relevant to the topic. The question would then be how reliable the different articles on that reading list are. Still, in that capacity, generative AI would be a human-complementary tool rather than a human-replacement tool. But that’s not the trajectory it’s going in right now.
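As a rough sketch of the next-word prediction Acemoglu describes, the snippet below uses the small, openly released GPT-2 model through the Hugging Face transformers library as a stand-in for the far larger models behind ChatGPT; the model choice, prompt, and decoding settings are illustrative assumptions, not details from the interview or from OpenAI’s systems.

```python
# Illustrative only: a tiny open model (GPT-2) extending a prompt one predicted
# token at a time, the same basic mechanism Acemoglu describes, at a far
# smaller scale than GPT-3/ChatGPT.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Writing coherent text is a challenging task because"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedily append the single most likely next token, 20 times.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```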
Let me give you another example more relevant to the political discourse. Because, again, the ChatGPT architecture is based on just taking information from the Internet that it can get for free. And then, having a centralized structure operated by OpenAI, it has a conundrum: If you just take the Internet and use your generative AI tools to form sentences, you could very likely end up with hate speech, including racial epithets and misogyny, because the Internet is filled with that. So, how does ChatGPT deal with that? Well, a bunch of engineers sat down and they developed another set of tools, mostly based on reinforcement learning, that allow them to say, “These words are not going to be spoken.” That’s the conundrum of the centralized model. Either it’s going to spew hateful stuff or somebody has to decide what’s sufficiently hateful. But that is not going to be conducive to any type of trust in political discourse, because it could turn out that three or four engineers—essentially a group of white coats—get to decide what people can hear on social and political issues. I believe those tools could be used in a more decentralized way, rather than under the auspices of centralized big companies such as Microsoft, Google, Amazon, and Facebook.
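The centralized gating Acemoglu objects to can be reduced to a toy sketch: an operator-maintained list of disallowed terms applied to whatever the model produces. The real systems rely on reinforcement learning from human feedback rather than a literal blocklist, so this is a simplification meant only to show where the “who decides” question enters; the terms and function names are hypothetical.

```python
# Toy illustration, not OpenAI's actual moderation pipeline: a centrally
# maintained blocklist decides which model outputs reach the user.
BLOCKED_TERMS = {"placeholder_epithet", "placeholder_slur"}  # hypothetical entries chosen by the operator

def moderate(response: str) -> str:
    """Withhold any response containing a term on the operator's list."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[response withheld by the model operator]"
    return response

# Whoever edits BLOCKED_TERMS decides what users are allowed to read.
print(moderate("An innocuous answer passes through unchanged."))
```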
Instead of continuing to move fast and break things, innovators should take a more deliberate stance, you say. Are there some definite no-nos that should guide the next steps toward intelligent machines?
Acemoglu: Yes. And again, let me give you an illustration using ChatGPT. They wanted to beat Google [to market, understanding that] some of the technologies were originally developed by Google. And so, they went ahead and released it. It’s now being used by tens of millions of people, but we have no idea what the broader implications of large language models will be if they are used this way, or how they’ll impact journalism, middle school English classes, or what political implications they will have. Google is not my favorite company, but in this instance, I think Google would be much more cautious. They were actually holding back their large language model. But OpenAI, taking a page from Facebook’s ‘move fast and break things’ code book, just dumped it all out. Is that a good thing? I don’t know. OpenAI has become a multibillion-dollar company as a result. It was always a part of Microsoft in reality, but now it’s been integrated into Microsoft Bing, while Google lost something like 100 billion dollars in value. So, you see the high-stakes, cutthroat environment we are in and the incentives that that creates. I don’t think we can trust companies to act responsibly here without regulation.
Tech companies have asserted that automation will put humans in a supervisory role instead of just killing all jobs. The robots are on the floor, and the humans are in a back room overseeing the machines’ activities. But who’s to say the back room is not across an ocean instead of on the other side of a wall—a separation that would further enable employers to slash labor costs by offshoring jobs?
Acemoglu: That’s right. I agree with all those statements. I would say, in fact, that’s the usual excuse of some companies engaged in rapid algorithmic automation. It’s a common refrain. But you’re not going to create 100 million jobs of people supervising, providing data to, and training algorithms. The point of providing data and training is that the algorithm can now do the tasks that humans used to do. That’s very different from what I’m calling human complementarity, where the algorithm becomes a tool for humans.
According to “Harms of AI,” executives trained to hack away at labor costs have used tech to help, for instance, skirt labor laws that benefit workers. Say, scheduling hourly workers’ shifts so that hardly any ever reach the weekly threshold of hours that would make them eligible for employer-sponsored health insurance coverage and/or overtime pay.
Acemoglu: Yes, I agree with that statement too. Even more important examples would be using AI for monitoring workers, and for real-time scheduling which might take the form of zero-hour contracts. In other words, I employ you, but I do not commit to providing you any work. You’re my employee. I have the right to call you. And when I call you, you’re expected to show up. So, say I’m Starbucks. I’ll call and say ‘Willie, come in at 8am.’ But I don’t have to call you, and if I don’t do it for a week, you don’t make any money that week.
Will the simultaneous spread of AI and the technologies that enable the surveillance state bring about a total absence of privacy and anonymity, as was depicted in the sci-fi film Minority Report?
Acemoglu: Well, I think it has already happened. In China, that’s exactly the situation urban dwellers find themselves in. And in the United States, it’s actually private companies. Google has much more information about you and can constantly monitor you unless you turn off various settings in your phone. It’s also constantly using the data you leave on the Internet, on other apps, or when you use Gmail. So, there is a complete loss of privacy and anonymity. Some people say ‘Oh, that’s not that bad. Those are companies. That’s not the same as the Chinese government.’ But I think it raises a lot of issues that they are using data for individualized, targeted ads. It’s also problematic that they’re selling your data to third parties.
In four years, when my children are about to graduate from college, how will AI have changed their career options?
Acemoglu: That goes right back to the earlier discussion with ChatGPT. Programs like GPT-3 and GPT-4 may scuttle a lot of careers without creating huge productivity improvements on their current path. On the other hand, as I mentioned, there are alternative paths that would actually be much better. AI advances are not preordained. It’s not like we know exactly what’s going to happen in the next four years, but it’s about trajectory. The current trajectory is one based on automation. And if that continues, lots of careers will be closed to your children. But if the trajectory goes in a different direction, and becomes human complementary, who knows? Perhaps they may have some very meaningful new occupations open to them.