When could we meet the first intelligent machines?

How close are we to living in a world where human intelligence is overtaken by machines? Throughout my career, I have regularly engaged in a thought experiment in which I try to “think like the computer” in order to imagine a solution to a programming challenge or opportunity. The gap between human reasoning and software code has always been pretty clear.

Then, a few weeks ago, Blake Lemoine, now a former Google AI engineer, said after months of conversations with the chatbot LaMDA that he believed it was sentient [subscription required]. Two days before Lemoine’s announcement, Douglas Hofstadter, an AI pioneer and Pulitzer Prize-winning cognitive scientist, published an article arguing [subscription required] that artificial neural networks (the software technology behind LaMDA) are not conscious. He reached this conclusion after a series of conversations with another powerful AI chatbot, GPT-3. Hofstadter ended the article by estimating that we are still decades away from machine consciousness.

A few weeks later, Yann LeCun, chief AI scientist at Meta and co-recipient of the 2018 Turing Award, published a paper titled “A Path Towards Autonomous Machine Intelligence.” In it, he shares an architecture that moves past questions of consciousness and sentience to offer a path toward programming an AI that can reason and plan like humans. Researchers call this artificial general intelligence, or AGI.

I think we will come to regard LeCun’s paper with the same reverence we now give to Alan Turing’s 1936 paper which described the architecture of the modern digital computer. Here’s why.

Simulating actions using a world model

LeCun’s first breakthrough is imagining a way to overcome the limitations of today’s specialized AIs with his concept of the “world model”. This is made possible in part by inventing a hierarchical architecture for predictive models that learn to represent the world at multiple levels of abstraction and across multiple time scales.

With this world model, we can predict possible future states by simulating action sequences. In the paper, he notes, “This can allow reasoning by analogy, applying the model set up for one situation to another situation.”
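To make the idea concrete, here is a minimal sketch of planning by simulating action sequences with a world model. It is illustrative only: the linear dynamics, the cost function, and the brute-force search over short action sequences are stand-ins invented for this example, not the hierarchical, learned predictors LeCun describes.

```python
# Minimal, hypothetical sketch: choose actions by imagining rollouts with a world model.
# The dynamics here are a toy linear rule, not a learned hierarchical predictor.
from itertools import product

import numpy as np


def world_model(state: np.ndarray, action: float) -> np.ndarray:
    """Predict the next state from the current state and an action (toy dynamics)."""
    return 0.9 * state + action


def cost(state: np.ndarray, goal: np.ndarray) -> float:
    """How far a predicted state ends up from the goal."""
    return float(np.linalg.norm(state - goal))


def plan(state: np.ndarray, goal: np.ndarray, horizon: int = 3):
    """Simulate every short action sequence with the world model and keep the
    one whose final predicted state lands closest to the goal."""
    candidate_actions = (-1.0, 0.0, 1.0)
    best_seq, best_cost = None, float("inf")
    for seq in product(candidate_actions, repeat=horizon):
        s = state
        for a in seq:  # imagined rollout: no real action is taken
            s = world_model(s, a)
        c = cost(s, goal)
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq, best_cost


if __name__ == "__main__":
    start, goal = np.array([0.0]), np.array([2.5])
    seq, c = plan(start, goal)
    print("best action sequence:", seq, "predicted cost:", round(c, 3))
```

The point is the loop inside `plan`: while deciding, the agent never acts in the real world; it evaluates imagined futures produced by its own predictive model.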

A configurator module to drive new learning

This brings us to the second major innovation in LeCun’s paper. As he notes, “One can imagine a ‘generic’ world model for the environment with a small part of the parameters modulated by the configurator for the task at hand.” He leaves open the question of how the configurator learns to break a complex task down into a sequence of sub-goals. But that is essentially how the human mind uses analogies.

For example, imagine you woke up this morning in a hotel room and needed to run the in-room shower for the first time. Chances are you quickly broke the task down into a series of sub-goals based on analogies learned from running other showers: first, figure out how to turn the water on using the handle; then confirm which direction to turn the handle to make the water hotter; and so on. You could ignore the vast majority of data points in the room and focus on the few that are relevant to these goals.
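Here is a hypothetical sketch of that modulation idea in code: one generic predictive model shared across tasks, with a small set of parameters chosen by a configurator for the task at hand. The task names and parameter fields are invented for illustration, and the configurator below is a hard-coded lookup precisely because, as noted above, how it should learn remains an open question.

```python
# Hypothetical sketch: a single "generic" world model whose behavior is adapted
# to each task by a small bundle of parameters picked by a configurator.
from dataclasses import dataclass

import numpy as np


@dataclass
class TaskParams:
    damping: float  # how quickly the state settles on its own
    gain: float     # how strongly an action moves the state


def generic_world_model(state: np.ndarray, action: float, params: TaskParams) -> np.ndarray:
    """One shared predictive model; only the small `params` bundle changes per task."""
    return params.damping * state + params.gain * action


def configurator(task: str) -> TaskParams:
    """Choose the few parameters that specialize the generic model for a task.
    A hard-coded table stands in for whatever learning mechanism would do this."""
    presets = {
        "drive_a_cart": TaskParams(damping=0.95, gain=0.5),
        "run_a_shower": TaskParams(damping=0.80, gain=1.5),
    }
    return presets[task]


if __name__ == "__main__":
    state = np.array([1.0])
    for task in ("drive_a_cart", "run_a_shower"):
        params = configurator(task)
        print(task, "->", generic_world_model(state, action=1.0, params=params))
```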

Once started, intelligent machines learn on their own

The third major breakthrough is the most powerful. LeCun’s architecture operates on a self-supervised learning paradigm. This means the AI is able to learn on its own by watching videos, reading text, interacting with humans, processing sensor data, or drawing on any other source of input. Today, most AIs must be trained on a diet of specially labeled data prepared by human trainers.
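As a toy illustration of what “self-supervised” means, the snippet below builds a next-word predictor from raw, unlabeled text: the training signal (which word actually comes next) is extracted from the data itself rather than supplied by human labelers. A bigram counter is of course nothing like the large neural networks used in practice; it only shows where the labels come from.

```python
# Toy self-supervised example: the "labels" (next words) come from the raw text itself.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug ."

# Build next-word counts directly from unlabeled text.
next_word_counts = defaultdict(Counter)
tokens = corpus.split()
for current_word, following_word in zip(tokens, tokens[1:]):
    next_word_counts[current_word][following_word] += 1


def predict_next(word: str) -> str:
    """Return the most frequently observed follower of `word`."""
    return next_word_counts[word].most_common(1)[0][0]


if __name__ == "__main__":
    print(predict_next("sat"))  # learned from raw text, not from human-labeled examples
    print(predict_next("the"))
```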

Google’s DeepMind has just released a public database produced by its AlphaFold AI. It contains the predicted shapes of nearly all of the 200 million proteins known to science. Previously, it took researchers three to five years to experimentally determine the shape of a single protein. DeepMind’s AI trainers and AlphaFold completed nearly 200 million in the same five-year window.

What will it mean when an AI can plan and reason on its own without human trainers? Today’s leading artificial intelligence technologies – machine learning, robotic process automation, chatbots – are already transforming organizations in industries ranging from pharmaceutical research labs to insurance companies.

When they arrive, whether in a few decades or a few years, intelligent machines will introduce both vast new opportunities and startling new risks.

Brian Mulconrey is an advisor to insurtech startup Sureify Labs, co-founder of Force Diagnostics, and a futurist. He lives in Austin, Texas.

