Our Future Artificial Intelligence Overlords Need a Resistance Movement

Artificial intelligence has moved so fast that even the scientists are having a hard time keeping up. In the past year, machine learning algorithms have started generating rudimentary movies and astonishing fake photos. They even write code. In the future, we will likely look back on 2022 as the year AI shifted from processing information to creating content, much as many humans do.

But what if we also look back on it as the year AI took a step toward the destruction of the human species? As hyperbolic and ridiculous as that sounds, public figures from Bill Gates, Elon Musk and Stephen Hawking, going right back to Alan Turing, have expressed concerns about the fate of people in a world where machines surpass them in intelligence. Musk has said that AI is more dangerous than nuclear warheads.

After all, humans don’t treat less intelligent species particularly well, so who’s to say that computers, trained on data that reflects all aspects of human behavior, won’t one day “put their goals before ours,” as the legendary computer scientist Marvin Minsky once warned?

Refreshingly, there is some good news. More scientists are looking to make deep learning systems more transparent and measurable. That momentum must not stop. As these programs become increasingly influential in financial markets, social media and supply chains, technology firms will need to start prioritizing AI safety over capability.

Last year, across the world’s top AI labs, around 100 full-time researchers were focused on building safe systems, according to the 2021 State of AI report, produced annually by London-based venture capital investors Ian Hogarth and Nathan Benaich. Their report for this year found that there are still only about 300 researchers working full-time on AI safety.

“It’s a very low number,” Hogarth said during a Twitter Spaces discussion with me this week about the future threat of AI. “Not only are very few people working on aligning these systems, but it’s also kind of the Wild West.”

Hogarth was referring to how, in the past year, a wealth of AI tools and research has been produced by open-source groups that argue super-intelligent machines should not be controlled and built in secret by a few large companies, but created in the open. In August 2021, for example, the community-driven organization EleutherAI released GPT-Neo, a public version of a powerful tool that could write realistic comments and essays on almost any topic. The original tool, called GPT-3, was developed by OpenAI, a company co-founded by Musk and largely funded by Microsoft Corp., which offers only limited access to its powerful systems.

Then this year, a few months after OpenAI wowed the AI community with a revolutionary image-generating system called DALL-E 2, the open-source-focused company Stability AI released a similar tool, Stable Diffusion, to the public, free of charge.

One of the advantages of open-source software is that, by being out in the open, it is constantly probed for flaws by a greater number of people. This is why Linux has historically been one of the most secure operating systems available to the public.

But throwing powerful AI systems out into the world also increases the risk that they will be misused. If AI is as potentially harmful as a virus or nuclear contamination, then it might make sense to centralize its development. After all, viruses are studied in biosafety laboratories and uranium is enriched in carefully confined environments. Research into viruses and nuclear power is governed by regulation, however, while governments have not kept up with the rapid pace of AI, and there are still no clear guidelines for its development.

“We pretty much have the worst of both worlds,” says Hogarth. AI risks being misused when it is built out in the open, but no one is scrutinizing what happens when it is created behind closed doors.

For now at least, it’s encouraging to see the focus growing on AI alignment, a nascent field concerned with designing AI systems that are “aligned” with human goals. Leading AI companies such as Alphabet Inc.’s DeepMind and OpenAI have several teams working on AI alignment, and many researchers from those firms have gone on to launch their own startups, some of which are focused on making AI safe. These include San Francisco-based Anthropic, whose founding team left OpenAI and which raised $580 million from investors earlier this year, and London-based Conjecture, which was recently backed by the founders of GitHub Inc., Stripe Inc. and FTX Trading Ltd.

Conjecture operates under the assumption that AI will reach parity with human intelligence in the next five years, and that its current trajectory spells disaster for the human species.

But when I asked Conjecture’s CEO, Connor Leahy, why AI might want to hurt people in the first place, he answered with an analogy. “Imagine people want to flood a valley to build a hydroelectric dam, and there’s an anthill in the valley,” he said. “This won’t stop the people from going ahead with their construction, and the anthill will be flooded anyway. At no point did any of those people even think about harming the ants. They just wanted more energy, and this was the most efficient way to achieve that goal. Analogously, autonomous AIs will need more energy, faster communication and more intelligence to achieve their goals.”

Leahy says that to prevent that dark future, the world needs a “portfolio of bets,” including examining deep learning algorithms to better understand how they make decisions, and trying to endow AI with more human-like reasoning.

Even if Leahy’s fears seem overblown, it’s clear that AI is not on a path that is entirely aligned with human interests. Just look at some of the recent efforts to build chatbots. Microsoft abandoned its 2016 bot Tay, which learned from interacting with Twitter users, after it posted racist and sexually charged messages within hours of being released. In August of this year, Meta Platforms Inc. released a chatbot that claimed Donald Trump was still president, having been trained on public text from the internet.

No one knows if AI will one day wreak havoc on financial markets or disrupt the food supply chain. But it could pit people against each other through social media, something that is probably already happening. The powerful AI systems recommending posts to people on Twitter Inc. and Facebook are aimed at maximizing our engagement, which inevitably means serving up content that provokes outrage or spreads misinformation. When it comes to “AI alignment,” changing those incentives would be a good place to start.

More from Bloomberg Opinion:

• Tech’s Terrible, Terrible Week in 10 Charts: Tim Culpan

• Wile E. Coyote Moment as Tech Races Off the Cliff: John Authers

• Microsoft’s AI Tool Could Be a Good Thing: Parmy Olson

This column does not necessarily reflect the opinion of the editorial staff or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of “We Are Anonymous.”

More stories like this one are available at bloomberg.com/opinion
