Artificial intelligence (AI) is one of the most evocative and confusing terms in technology. It seems there are new announcements almost every day about the advancements of machines and their ability to ‘think’.
We have seen a machine master the complex game of Go, long considered one of the most difficult challenges for a computer. We have witnessed vehicles operating autonomously, including a caravan of trucks crossing Europe with only a single operator to monitor systems. We have seen a proliferation of robotic counterparts and automated tools for accomplishing a wide variety of tasks, and all of this has given rise to a flurry of people claiming that the AI revolution is upon us. While there is no doubt that there have been significant advancements in the field, what we have seen is only the start of the path to what could be considered full artificial intelligence.
Understanding the growth of AI capability is critical to understanding the advances we have seen. Full AI, that is to say complete, autonomous sentience, is the ability of a machine to mimic a human to the point of being indistinguishable from one. This type of true AI is still a long way from reality; it requires immense computing power, as well as the ability to teach a machine to interpret complex inputs like emotional cues and to generate both intuitive and emotional responses to situations. However, there will be many more practical applications of basic AI in the near term that hold the potential to greatly enhance our lives.
With basic AI, the processing system learns and interprets responses based on “experience.” That experience comes in the form of training on data sets that simulate the situations we want the system to learn from. This is the confluence of machine learning and AI. The capability to teach machines to interpret data is the key underpinning technology that will enable more complex forms of AI that can respond autonomously to input. It is this type of AI that is getting the most attention.
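To make "learning from experience" concrete, here is a minimal sketch of one of the simplest machine learning models, a perceptron, trained on a tiny labeled data set. The data points and the two-cluster setup are hypothetical, invented purely for illustration; real systems use far larger data sets and far more sophisticated models.

```python
# Minimal sketch: a perceptron that learns a decision boundary from labeled
# "experience". All data below is hypothetical, chosen only for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Fit weights and a bias so the model separates the two classes."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # each label y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # update only when experience shows the model erred
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Classify a new point using the learned weights."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical training "experience": two linearly separable clusters.
samples = [(1.0, 1.2), (0.8, 1.0), (3.0, 3.1), (3.2, 2.9)]
labels = [-1, -1, 1, 1]
w, b = train_perceptron(samples, labels)
```

After training, the model generalizes to points it never saw, classifying `(0.9, 1.1)` with the first cluster and `(3.1, 3.0)` with the second. The same loop of predict, compare against experience, and adjust is, at vastly greater scale, what underlies the systems described below.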
In the next 10 years, the use of this type of machine learning-based artificial intelligence will likely fall into two buckets: commercial applications, and research and discovery.
There is no doubt about the prospects for autonomous robotic systems in the commercial market. This application is already being advanced to the point of commercial viability, the first step to becoming practical and widespread. Simply put, if revenue can be made from it, it will become self-sustaining and thus continue to grow. The iRobot Roomba, for instance, has succeeded as a solidly commercial application of autonomous technology. Autonomous vehicle technology is one of the most publicized and most needed applications of artificial intelligence: an estimated 4.4 million people are injured or killed in traffic accidents per year in the United States alone. Autonomous vehicles could almost completely eliminate these accidents while greatly improving the availability and efficiency of transportation for everyone.
In addition to the automation of transportation and logistics, a wide variety of other technologies that use autonomous processing techniques are being built. Currently, the artificial assistant or “chatbot” concept is one of the most popular. By creating the illusion of a fully sentient remote participant, it makes interaction with technology more approachable. There have been obvious failures of this technology (the unfiltered Microsoft chatbot “Tay” being a prime example), but the application of properly developed and managed artificial systems for interaction is an important step along the route to full artificial intelligence. It is also a hugely important application because it will bring technology to those who previously could not engage with it fully for any number of physical or cognitive reasons. By making technology simpler and more human to interact with, we remove some of the barriers that cause difficulty for people with various impairments.
The use of artificial intelligence for development and discovery is only now beginning to gain traction, but over the next decade it will become an area of significant investment and development. Any scientific or research project involves so many repetitive tasks that using artificial intelligence engines to manage and perfect them would greatly increase the speed at which new breakthroughs are uncovered. There is also the tantalizing possibility that, as our artificial intelligence systems grow more capable, they could actually perform research themselves and discover new avenues for exploring theories. While this is still a long way off, it could greatly accelerate the discoveries needed for advancements that improve and extend our lives.
There is a downside to the rise of AI beyond the irrational “rise of the machines” fear: we as a species are very bad at giving up control. A prime example is the resistance to driverless cars as a practical transportation tool, and it illustrates why we may never allow full artificial intelligence to blossom and take control. Microsoft’s handling of its AI chatbot is another example of this reluctance to relinquish control. For true AI to develop completely, we would have to allow it to build and develop personality, emotion and, perhaps most critically, a point of view that could differ drastically from our own. Our inability to let go will only be magnified as more and more of our lives become automated and artificially intelligent systems gain more control. Technology cannot solve this problem, because it is so deeply ingrained in us as humans. So while artificial intelligence is certainly advancing, there will always be a need for human influence and participation in all areas.
The dystopian vision of robots assuming complete control of society is unlikely. The nuances of perception, intuition and plain old “gut check” reactions still elude machines, and a doctor’s relationship with a patient, for example, is a critical element of healthcare that would be almost impossible to fully replicate with technology. Learning from repetition, improving patterns and developing new processes, however, are well within the reach of current AI models, and will only strengthen in the coming years as advances in AI, specifically machine learning and neural networks, continue. Rather than being frightened by the perceived threat of AI, it would be wise to embrace the possibilities.
About the Author
Chris Harrold, Chief Evangelist, Talend, has had a long career in IT with roles in operations and development. He has focused on the analytics and emerging data-processing technology space for the last six years, with special emphasis on Hadoop, Spark, and other large-scale data processing architectures. Currently his focus is on driving awareness around data integration and data management challenges and solutions.