The advent of artificial general intelligence (AGI) – the ability of an AI to understand or learn any intellectual task that a human can – is inevitable. Despite many experts’ predictions that AGI may never arrive, or that it is hundreds of years away, I believe it will be here within a decade.
Why artificial general intelligence is coming
How can I be so sure? We already have the know-how to produce massive programs with the capacity to process and analyze large amounts of data faster and more accurately than any human ever could. And massive programs may not be necessary at all. Given the structure of the neocortex (the part of the human brain we use to think) and the amount of DNA required to define it, we may be able to create a complete AGI in a program of only about 7.5 megabytes.
We’ve also seen robots exhibit the kind of fluid movement controlled by 56 billion neurons in the cerebellum (the part of the human brain responsible for muscle coordination). Again, it doesn’t take a supercomputer, just a few microprocessors and an understanding of how coordination, balance and reactions should work.
The catch is that in order for today’s artificial intelligence to grow into something close to real human-like intelligence, it needs three essential components of consciousness: an internal mental model of the environment with the entity at its center; a perception of time that allows it to predict future outcomes based on current actions; and an imagination, so that multiple possible actions can be considered, evaluated and selected. In short, it must be able to explore, experiment with, and learn about real objects, and to interpret everything it knows in the context of everything else it knows, just as a three-year-old child does.
What AI can’t do yet
Unfortunately, today’s narrow AI applications simply don’t store information in a general way that allows it to be integrated with, and then used by, other AI applications. Unlike humans, AIs cannot aggregate information from multiple senses. So while it may be possible to stitch together language- and image-processing applications, researchers haven’t found a way to integrate them with the same seamless ease with which a child integrates vision, language and hearing.
That is not to take away from current AI. From bots that can identify, evaluate, and make recommendations for streamlining business processes, to cybersecurity systems that continuously monitor data-entry patterns to thwart cyberattacks, AI has repeatedly demonstrated its ability to process and analyze data faster than humanly possible. But while the performance is impressive, the AI that most of us experience is more a powerful method of statistical analysis than a true form of intelligence. Today’s AI is limited by its reliance on massive datasets, and there is no way to create a dataset large enough for the resulting system to handle completely unexpected situations.
To achieve AGI, researchers must shift their focus from ever-expanding datasets to a more biologically plausible structure that allows AI to exhibit the same kind of contextual common sense that humans do. So far, AI investors have been reluctant to fund such a project, which would essentially solve the same problems a three-year-old routinely tackles. That’s because the abilities of a three-year-old aren’t particularly marketable.
AGI and the market
Marketability is perhaps the secret sauce in the rise of AGI. We can expect the development of AGI to produce capabilities that are individually marketable. Someone builds something that improves the way your Alexa understands you, and everyone rushes to bring that development to market. Someone else builds better machine vision for a self-driving car, and everyone rushes to bring that development to market as well. Each of these developments is marketable on its own, but if they are built on a common underlying data structure, then the sooner we link them together, the more they can interact and build broader context, and the sooner we will approach AGI.
In fact, as we approach human-level intelligence, no one will notice the moment we arrive. At some point we will near the human-level threshold, then match it, and then cross it. Some time after that, we will have machines that are clearly superior to human intelligence, and people will agree that AGI just might exist. But it will be a gradual transition rather than a specific ‘singularity.’ Ultimately, AGI is inevitable because market forces will prevail; it’s just a matter of waiting for the insights needed to make it work.
Charles Simon is a nationally recognized entrepreneur and software developer and the CEO of FutureAI. Simon is the author of Will Computers Revolt? Preparing for the Future of Artificial Intelligence and the developer of Brain Simulator II.