As AI agents like Auto-GPT speed up the generative AI race, we all need to buckle up | The AI Beat

If you thought the pace of AI development had sped up since the release of ChatGPT last November, well, buckle up. Thanks to the rise of autonomous AI agents like Auto-GPT, BabyAGI and AgentGPT over the past few weeks, the race to get ahead in AI is just getting faster. And, many experts say, more concerning.

It all started in late March, when developer Toran Bruce Richards, under the handle @significantgravitas, launched Auto-GPT, an “experimental open-source application” connected to OpenAI’s GPT-4 by API. Written in Python, Auto-GPT has internet access, long- and short-term memory and, by stringing together GPT calls in loops, can act autonomously without requiring a human to prompt every action.
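To make that pattern concrete, here is a minimal, hypothetical sketch of such an agent loop in Python, using the openai package’s ChatCompletion interface. The objective string, loop cap and memory handling are assumptions for illustration only, not Auto-GPT’s actual implementation.

```python
# Minimal sketch of an autonomous agent loop in the style Auto-GPT popularized:
# the model proposes the next action, a thin Python harness records it, and the
# result is fed back in as context on the next pass. Illustrative only.
import openai  # assumes OPENAI_API_KEY is set in the environment (pre-1.0 openai library)

OBJECTIVE = "Research the top three open-source vector databases and summarize them."
memory = []  # short-term memory: prior thoughts/results carried into each call

for step in range(5):  # cap the loop so the agent can't run (and bill) forever
    context = "\n".join(memory[-10:])  # keep only the most recent items
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are an autonomous agent. Given the "
             "objective and prior results, state the single next action to take."},
            {"role": "user", "content": f"Objective: {OBJECTIVE}\nPrior results:\n{context}"},
        ],
    )
    action = response.choices[0].message["content"]
    memory.append(action)
    print(f"Step {step + 1}: {action}")
    # A real agent would parse `action` into a tool call (web search, file I/O,
    # code execution) and append the tool's output to memory before looping.
```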

Then, on March 29, @yoheinakajima launched BabyAGI, a “task-driven autonomous agent” that leverages GPT-4, Pinecone’s vector search, and LangChainAI’s framework to “autonomously create and perform tasks based on an objective.”
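The task-driven pattern is easiest to picture with a rough sketch. The snippet below is a simplified, hypothetical version of that loop: a task queue, an execution step and a task-creation step, with a plain Python list standing in for the Pinecone vector store. The prompts, names and iteration cap are assumptions for illustration, not BabyAGI’s own code.

```python
# Rough sketch of a task-driven agent: keep a task queue, execute the top task
# with the LLM, then ask the LLM to create follow-up tasks based on the result.
from collections import deque
import openai  # assumes OPENAI_API_KEY is set in the environment (pre-1.0 openai library)

def ask(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message["content"]

objective = "Write a short market overview of AI agent frameworks."
tasks = deque(["Make an initial list of subtopics to cover."])
results = []  # a real agent would embed these and store them in a vector DB

for _ in range(3):  # hard cap on iterations to bound API spend
    if not tasks:
        break
    task = tasks.popleft()
    result = ask(f"Objective: {objective}\nTask: {task}\nComplete the task.")
    results.append(result)
    # Ask the model for follow-up tasks, one per line, and queue them.
    new_tasks = ask(
        f"Objective: {objective}\nLast result: {result}\n"
        "List any new tasks needed to reach the objective, one per line."
    )
    tasks.extend(t.strip("-• ").strip() for t in new_tasks.splitlines() if t.strip())
```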

Fast-forward a couple of weeks, and now Auto-GPT has more GitHub stars than PyTorch (82K at the moment) and is the “fastest growing GitHub repo in history, eclipsing decade old open source projects in 2 weeks.” Fortune says BabyAGI is “taking Silicon Valley by storm” and OpenAI’s Andrej Karpathy, who was formerly director of AI at Tesla, called the tools the “next frontier of prompt engineering.”

Are AI agents a game-changer?

Jay Scambler, an Oklahoma City-based consultant and strategist building AI tools for small businesses and creatives, told me last week by Twitter message that the tools feel like a game-changer. “I don’t mean to sound dramatic, but we now have the power and responsibility of managing a coordinated team of AI agents at our fingertips without much effort,” he said. “This team doesn’t have fatigue, executes code *almost* flawlessly (depending on who you ask), and can find answers to almost any problem using tools like LangChain.” 

Others aren’t as optimistic. Nvidia AI scientist Jim Fan tweeted: “I see AutoGPT as a fun experiment, as the authors point out too. But nothing more. Prototypes are not meant to be production-ready. Don’t let media fool you — most of the ‘cool demos’ are heavily cherry-picked.”

Either way, there is a caveat: at the moment, both Auto-GPT and BabyAGI require developer skills and are not accessible to the average ChatGPT user. Even Nicola Bianzino, chief technology officer at EY, told me in an interview that Auto-GPT is “fascinating,” though he admitted he doesn’t yet understand the details of how it actually works. This is moving so quickly, he explained, that there are already a host of versions built on top of the original. “I don’t personally know the different variations that are out there in the wild,” he said.

Serious concerns about AI agents in the wild

While the AI agents are “profound,” there are also serious concerns. Daniel Jeffries, former chief information officer at Stability AI and managing director of the AI Infrastructure Alliance, told me last week that “the challenge becomes that we don’t really know what an error looks like. Currently Auto-GPT fails 15-30% of the time in reasoning, I think we get less tolerant of errors as they become more autonomous.” 

And even though the current use cases are limited, as Fortune’s article pointed out, there are other risks coming down the pike — including the AI agent’s continuous chains of prompts quickly running up substantial bills with OpenAI; the possibility of malicious use cases like cyberattacks and fraud; and the danger of autonomous bots taking action in ways the user didn’t intend, including buying items, making appointments or even selling stock.

That doesn’t seem to be slowing down the race to develop AI agent tools, however. Last week, for example, HyperWrite, a startup known for its generative AI writing extension, unveiled an experimental AI agent that can browse the web and interact with websites much like a human user.

HyperWrite CEO Matt Shumer said his team is very focused on issues of safety. “We want to figure out the right way to do it, and that’s sort of the common theme through all this, we’re taking our time to do this the right way,” he said.

I also had a chance to speak to the developers behind AgentGPT, a browser-based AI agent launched on April 8 that offers easier access for non-technical users.

The trio of developers, all with day jobs, worked on autonomous agents in their spare time, with an eye toward use cases for internal tooling. When they saw the explosive popularity of Auto-GPT and BabyAGI, they decided to push out their project and get some feedback. In just nine days, AgentGPT has gained over 14,000 stars on GitHub and over 280,000 users.

The bigger vision behind AgentGPT, the developers say, is to go beyond HyperWrite’s offering to integrate with other tools — such as Slack, email or even Facebook.

Are AI agents just hype and hustle?

Some say the new focus on AI agents is just the latest playground for “hustle bros,” with hyperbolic claims by “get-rich schemers” looking to play off the excitement around the potential of these tools.

That may be true — but to me, it seems like the pace of AI development in this space is real, which means it’s worth keeping a close eye on, especially as the risks and dangers become crystal-clear. It may be impossible for me to keep up — but with developers starting to run Auto-GPT on their phones, I think we all need to buckle up for a fast ride.
