Artificial Intelligence still lacks a proper definition, largely because we know so little about natural intelligence itself.
Nevertheless, it is clear that computer pattern recognition and learning capabilities are steadily improving. This is notably visible in image segmentation and recognition, strategy games, language processing, and semantic search. Google responds to our queries with an incredibly personalized touch. Additionally, self-driving cars are demonstrating an extraordinary ability to mimic, and even surpass, human driving.
It is fair to ask where we stand with AI as we seem to have reached a “local optimum”.
When experts are asked how far we are from facing human-like intelligence, they often fall back on vaguely defined notions such as “thinking”, “feeling”, “creating”, or “understanding”.
The US DARPA (Defense Advanced Research Projects Agency) positions AI research in its third wave: after expert systems and machine learning, we are now entering a phase in which AI systems should be able to explain their reasoning. This will be particularly interesting as cognitive systems like IBM Watson outperform our human capabilities and far exceed our capacity to explore thousands of similar cases in contexts such as healthcare or legislation.
Recently, many visionaries, including successful entrepreneurs and scientists, have warned that “Artificial Intelligence” could be a threat to humanity.
Many questions flow from these statements.
However disembodied such an artificial intelligence may be, it would still need an infrastructure to run on and actuators to interact with the physical world.
Even with the rise of the Internet of Things (IoT), no system exists that could use the connected environment to act on the physical world at scale. We are light years away from a world interconnected and interoperable enough to allow a machine to take control over humanity or harm it substantially, and that is just fine.
Robotics offers plenty of occasions to dream about humanoid robots that could make the nightmare a reality, as in the “Terminator” saga or the “Metalhead” episode of the more recent series “Black Mirror”. Yet it can easily be argued that humanity remains by far the greatest danger to itself, and that if intelligent robots could exist, they would probably be helping us out, as in the excellent animated film “The Iron Giant”.
It is far more realistic to conceive of the machine as a supplement to our intelligence, pursuing objectives such as decision support and task automation, above all amid the growth of the cyberworld and of data, where individual human capacities apparently fall short of encompassing their virtual environment. In those ventures, the human operator “sits” before the screen, bridging human perception of the physical environment with computer inputs and trying to consolidate the concepts and information into one holistic mental model of reality. This is difficult because it requires modeling the physical environment and integrating digital inputs into that model.
An alternative view exists: preserve the operator’s authentic perception of physical reality (vision, hearing) and superimpose computed data on top of it. This is what we call Augmented Reality (AR).
Augmented Reality is an excellent environment for creating a seamless interface between human and computer. Half virtual rendering and half real perception, it allows Artificial Intelligence to capture the human operator’s perceptions on the fly and to superimpose computational results in the form of synthetic artifacts. The first examples of Augmented Reality appeared in the 1980s, not surprisingly in the military: specific information was projected onto the helmets of fighter and tank pilots, in what became known as HUDs (Head-Up Displays).
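To make the principle concrete, here is a minimal sketch of the idea described above: keep the real camera feed untouched and superimpose synthetic artifacts on each frame. It uses Python and OpenCV purely for illustration; the “detector”, the window name and the coordinates are my own assumptions, not a reference implementation of any particular AR product.

```python
# Minimal sketch of the AR principle: preserve the authentic image and
# superimpose synthetic artifacts (a box and a label) on top of it.
# The detector below is a hypothetical stand-in for whatever AI
# produces the contextual information.
import cv2

def fake_detector(frame):
    """Hypothetical placeholder: pretend the AI found something
    interesting in the center of the frame."""
    h, w = frame.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2), "object of interest"

cap = cv2.VideoCapture(0)          # real-world perception: the camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    (x, y, w, h), label = fake_detector(frame)
    # Synthetic artifacts drawn over the real image
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, label, (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("augmented view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The point is not the drawing itself but the division of labor: reality comes in unmodified, and the machine only adds a thin synthetic layer of context on top of the operator’s own perception.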
As we speak, AR still hasn’t made it to the mainstream market. Microsoft HoloLens has paved the way, but such an expensive device (over 3,000 €) simply cannot reach the general public. Microsoft is pushing it hard towards design teams in large companies, car manufacturers, and pharmaceutical companies.
Virtual Reality (VR), on the other hand, is now progressing at a high pace through popular products like the Oculus Quest 2 from Facebook, which delivers incredible sensations and shared experiences for less than 400 €. The problem with VR is the “V”: the difficulty and cost of creating a digital twin of every environment is what prevents VR from being competitive in the human-machine interface race. As a former R&D engineer in commercial flight simulators back in the 90s, I know what it takes to create and run a synthetic environment in the professional world.
Amazingly, VR seems to be “the human going into the computer”, as in the Tron movie we all remember, while AR is really “the computer coming into reality”.
[Illustration: Personal Interactor, 2021]
Furthermore, it does not come alone, but rather together with the operator, or the operators. Another movie comes to mind at this point: “Her”. “Her” depicts an audio augmented reality, meaning assistance to the human operator solely by means of voice support.
But we can imagine many different ways to support the human operator, at least one per sense: superimposed visual information, voice assistance, touch assistance, and probably, in the near future, smell and taste synthesis.
Skeptical? I strongly advise testing the latest Oculus Quest 2 from Facebook, an extraordinary device that is revolutionizing the VR world. This is only the beginning of a global shift, and a tribute to the visionary work of Ernest Cline, born in 1972, who in 2011 wrote the novel “Ready Player One”, featuring people living inside a matrix. In 2018 the novel became a blockbuster, a $500M+ movie by Steven Spielberg, worth seeing, well, a must-see.
Is that the future: people diving into parallel realities, absorbed by mega-brands, sucked by virtuality into one or more matrices? It will take a long time to figure out what the best human-machine interface is. Elon Musk and his colleagues are working hard on Neuralink to bind the operator and the computer directly and wirelessly, through thought. Even Mars seems closer…
We learn as we walk, and tomorrow’s solution will not be today’s dream. Innovating is like a street fight: it focuses your attention on one very specific subject, and the tunnel effect narrows your sight, preventing you from seeing the entire scene and the surrounding elements that may impact you and change your perception of reality. The human-machine ecosystem is a very large one. Even though we have only five physical senses, it is hard to tell whether a voice supplement is better than a video clip or a graph for providing contextual support on a specific subject. There is plenty of ergonomic work to plan for the next generation of operation centers.
A metaphor as a takeaway: the wheel was invented to supplement human legs, not to replace them. Thanks to the wheel, humans go faster and farther, with less fatigue. No one ever thought the wheel would take control over the world.
Augmented Intelligence, like Augmented Carriage (the wheel), remains an artefact.