Sometimes an AI-based system can’t decipher the physical world with a sufficient degree of accuracy, and simply adding more data isn’t an option. In many of these cases, however, the deficiency can be addressed with four techniques that help AI better understand the physical world: synergize AI with scientific laws, augment data with expert human insights, employ devices to explain how AI makes decisions, and use other models to predict behavior.
Artificial intelligence (AI) gets its “intelligence” by analyzing a given dataset and detecting patterns. It has no concept of the world beyond this dataset, which creates a variety of dangers.
One changed pixel could confuse the AI system into thinking a horse is a frog or, even scarier, cause it to err in a medical diagnosis or a machine operation. Its exclusive reliance on the dataset also introduces a serious security vulnerability: Malicious agents can spoof the AI algorithm by introducing minor, nearly undetectable changes in the data. Finally, the AI system does not know what it does not know, and it can make incorrect predictions with a high degree of confidence.
Adding more data cannot always surmount these problems because practical business and technical constraints always limit the amount of data. And processing large datasets requires ever-larger AI models that are outpacing available hardware and growing AI’s carbon footprint unsustainably.
We have identified an alternative remedy: connecting data-driven AI with other scientific or human inputs about the application’s domain. This approach is based on our two decades of experience at the University of California’s Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS) working with academics and business executives to implement AI for many applications. There are four ways it can be done.
Synergize AI With Scientific Laws
We can combine available data with relevant laws of physics, chemistry, and biology to leverage the strengths and overcome the weaknesses of each. One example is an ongoing project with Komatsu where we are exploring how to use AI to guide the autonomous, efficient operation of heavy excavation equipment. AI does well in running the machine but not so well in understanding the surrounding environment.
Therefore, to teach the AI algorithm the differences between soft soil, gravel, and hard rock in the terrain being excavated, we used physics-based models that describe the size, distribution, hardness, and water content of the particles. Equipped with this knowledge, the AI-driven machine can apply just the right amount of force to grab a bucketful of earth efficiently and safely. Similarly, we use AI to operate a robotic surgical arm, and then combine it with a physics-based model that predicts how skin and tissue will deform under pressure. In both cases, whether earth or tissue, combining data-driven and physics-based models makes the operation safer, faster, and more efficient.
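The idea of letting a physics-based model constrain a data-driven controller can be sketched in a few lines of code. Everything below is hypothetical: the soil parameters, the force relation, and the function names are our illustration, not Komatsu’s actual system.

```python
# Illustrative sketch: a physics-based soil model supplies a safety bound
# that a purely data-driven controller would otherwise have to learn
# from scratch. All parameters and relations here are hypothetical.
from dataclasses import dataclass


@dataclass
class SoilModel:
    """Simplified physics-based soil description (toy parameters)."""
    particle_size_mm: float
    hardness: float        # dimensionless hardness index, 0..1
    water_content: float   # fraction, 0..1

    def max_safe_force(self, base_force: float = 10.0) -> float:
        # Toy relation: harder, drier soil tolerates more digging force.
        return base_force * (1.0 + self.hardness) * (1.0 - 0.5 * self.water_content)


def choose_dig_force(ai_suggested_force: float, soil: SoilModel) -> float:
    """Clamp the data-driven controller's output to the physics-based bound."""
    return min(ai_suggested_force, soil.max_safe_force())


gravel = SoilModel(particle_size_mm=8.0, hardness=0.6, water_content=0.1)
force = choose_dig_force(ai_suggested_force=20.0, soil=gravel)
```

The physics model plays the role of a guardrail here: however confident the data-driven controller is, its output never exceeds what the soil physics says is safe.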
Augment Data With Expert Human Insights
When available data is limited, human intuition can be used to augment and improve the “intelligence” of AI. For example, in the field of advanced manufacturing, it is extremely expensive and challenging to develop novel “process recipes” required to build a new product. Data about novel processes is limited or simply does not exist and generating it would require lots of trial-and-error attempts that may take many months and cost millions of dollars.
A more effective way is to have humans and AI augment and support each other, according to Lam Research, a leading maker of semiconductor equipment that supplies state-of-the-art microelectronics manufacturing facilities. Starting from scratch, highly experienced engineers usually do well in arriving at an approximately correct recipe, while AI continuously collects data and learns from those efforts. Once the recipe is in the ballpark, the engineers can enlist AI to support them in fine-tuning it to a precise optimum. Such techniques may provide up to an order-of-magnitude improvement in efficiency.
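That division of labor can be sketched as a simple optimization loop: an engineer’s ballpark recipe seeds a local search that only has to explore a small neighborhood instead of the whole parameter space. The recipe parameters, the quality function, and the search strategy below are all hypothetical stand-ins; in practice the score would come from instrumented trial runs, not a formula.

```python
# Illustrative sketch: greedy coordinate search around a human-supplied
# starting recipe. Parameter names and the quality surface are invented.
def fine_tune(recipe, score, step=1.0, rounds=20):
    """Fine-tune a ballpark recipe by local search.

    recipe: dict of parameter name -> value (engineer's initial guess)
    score:  callable returning a quality metric to maximize
            (in practice, measured from trial runs)
    """
    best = dict(recipe)
    best_score = score(best)
    for _ in range(rounds):
        improved = False
        for key in best:
            for delta in (+step, -step):
                trial = dict(best)
                trial[key] += delta
                s = score(trial)
                if s > best_score:
                    best, best_score, improved = trial, s, True
        if not improved:
            step /= 2  # narrow the search as we approach the optimum
    return best


# Toy quality surface with a peak near temperature=350, pressure=12.
quality = lambda r: -((r["temperature"] - 350) ** 2 + (r["pressure"] - 12) ** 2)
ballpark = {"temperature": 348.0, "pressure": 10.0}  # engineer's estimate
tuned = fine_tune(ballpark, quality)
```

Because the human supplies a starting point already in the ballpark, the search converges in a handful of trials rather than the months of trial and error a blind search would require.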
Employ Devices To Explain How AI Makes Decisions
In the science fiction novel The Hitchhiker’s Guide to the Galaxy, the smartest computer gave 42 as the answer to “life, the universe, and everything,” prompting many a reader to chuckle. Yet it is no laughing matter for businesses, because AI often operates as a black box that makes confident recommendations without explaining why. If the way that AI makes decisions is not explainable, it is usually not actionable. A doctor shouldn’t make a medical diagnosis and a utility engineer shouldn’t shut off a critical piece of infrastructure based on an AI recommendation that they cannot explain intuitively.
For example, we are working on a smart infrastructure application where sensors monitor the integrity of thousands of wind turbines. The AI algorithm analyzing this data may raise a red flag when it detects a pattern of increased temperature or vibration intensity. But what does this mean? Is it just a hot day or a stray gust of high wind? Or does a utility crew need to be rushed out immediately (an expensive operation)?
Our solution: add fiber-optic sensors to measure the actual physical strain in the turbine material. Then, when utility engineers cross-check the AI red flag with the actual strain in the turbine blade, they can determine the true urgency of the problem and choose the safest corrective action.
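A minimal sketch of this cross-check, with hypothetical thresholds and labels (the real system’s decision logic is more involved):

```python
# Illustrative sketch: a fiber-optic strain reading turns an opaque AI
# alert into an actionable decision by corroborating or discounting it.
# The threshold and action labels are invented for illustration.
def triage(ai_flag: bool, strain_microstrain: float,
           strain_limit: float = 500.0) -> str:
    """Cross-check an AI anomaly flag against measured blade strain."""
    if ai_flag and strain_microstrain > strain_limit:
        return "dispatch crew"      # both signals agree: act now
    if ai_flag:
        return "monitor"            # AI alert but strain is normal:
                                    # likely a hot day or a wind gust
    if strain_microstrain > strain_limit:
        return "inspect sensors"    # high strain with no AI alert:
                                    # check the instrumentation
    return "normal"
```

The independent physical measurement is what makes the black-box output interpretable: the engineer acts on the agreement (or disagreement) of the two signals, not on the AI score alone.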
Use Other Models To Predict Behavior
Data-driven AI works well within the boundaries of the dataset it has processed, analyzing behavior between actual observations, or interpolation. However, to extrapolate – that is, to predict behavior in operating modes outside the available data – we have to incorporate knowledge of the domain in question. Indeed, this is often the approach taken by many applications that employ “digital twins” to mirror the operation of a complex system such as a jet engine. A digital twin is a dynamic model that mirrors the exact state of an actual system at all times and uses sensors to keep the model updated in real time.
We used this effectively in our project with Siemens Technology on digital twins for smart buildings. We employed data-driven AI to model and control the normal operation of the building, and to diagnose problems. Then, we judiciously mixed in physics-grounded equations – such as basic thermodynamic equations tracking the heat flow to the air conditioning system and living spaces – to predict the building’s behavior in a novel setting. Using this approach, we could predict the building’s behavior with different heating or cooling equipment or while operating under unusual weather conditions. This enabled us to try alternate operational modes without endangering critical infrastructure or its users. We found this approach also works well in other applications such as smart manufacturing, construction, and autonomous vehicles ranging from automobiles to spacecraft.
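The split between data-driven interpolation inside the observed range and physics-based extrapolation outside it can be sketched as follows. The observations, the heat-balance constant, and the model itself are simplified stand-ins, not the actual Siemens digital twin.

```python
# Illustrative sketch: inside the range the data covers, trust a fitted
# data-driven model; outside it, fall back on a basic heat-balance
# equation. All numbers here are invented for illustration.
import bisect

# Observed (outdoor_temp_C, cooling_load_kW) pairs from building data.
observations = [(15.0, 2.0), (20.0, 4.0), (25.0, 7.0), (30.0, 11.0)]


def interpolate_load(temp_c: float) -> float:
    """Piecewise-linear 'data-driven' estimate within the observed range."""
    temps = [t for t, _ in observations]
    i = bisect.bisect_left(temps, temp_c)
    (t0, l0) = observations[max(i - 1, 0)]
    (t1, l1) = observations[min(i, len(observations) - 1)]
    if t0 == t1:
        return l0
    return l0 + (l1 - l0) * (temp_c - t0) / (t1 - t0)


def physics_load(temp_c: float, indoor_c: float = 22.0,
                 ua_kw_per_c: float = 0.9) -> float:
    """Steady-state heat balance: load scales with the indoor/outdoor gap."""
    return max(ua_kw_per_c * (temp_c - indoor_c), 0.0)


def predict_load(temp_c: float) -> float:
    t_min, t_max = observations[0][0], observations[-1][0]
    if t_min <= temp_c <= t_max:
        return interpolate_load(temp_c)  # interpolation: data-driven
    return physics_load(temp_c)          # extrapolation: physics-based


predict_load(22.5)  # inside the data range: data-driven estimate
predict_load(38.0)  # heat wave beyond the data: physics takes over
```

The design choice mirrors the section’s point: the data-driven model is trusted only where data exists, and the physics-grounded equation answers the “what if” questions the dataset cannot.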
As humans, we understand the world around us by using our senses in tandem. Given a steaming cup, we determine instantly that it is tea from its color, smell, and taste. Connoisseurs may go a step further and identify it as a “Darjeeling first-flush” tea. AI algorithms are trained – and are limited – by a particular dataset and do not have access to all the “senses” like we do. An AI algorithm trained only on images of cups of coffee may “see” this same steaming cup of tea and conclude it is coffee. Worse, it may do so with a high degree of confidence!
Any available dataset will always be incomplete, and processing ever-larger datasets is often not practical or environmentally sustainable. Instead, adding other forms of understanding of the domain in question can help make data-driven AI safer and more efficient and enable it to address challenges that it otherwise could not.
originally posted on hbr.org by Pushkar P. Apte and Costas J. Spanos
Pushkar P. Apte is the director of strategic initiatives at the University of California’s Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS). He is also a strategic technology advisor at SEMI, a global microelectronics industry association, and leads its Smart Data-AI Initiative.
Costas J. Spanos is the director of the University of California’s Center for Information Technology Research in the Interest of Society and the Banatao Institute (CITRIS). He is also the Andrew S. Grove Distinguished Professor of Electrical Engineering and Computer Sciences at University of California, Berkeley, and the chief executive officer of the Berkeley Education Alliance for Research in Singapore (BEARS).