
This week’s blog examines our pandemic response, a science and tech competition involving autonomous cars, and the role of complex simulation software powered by AI.

Pandemic quarterbacking

The pandemic is still not in our rearview mirror, but that has not stopped researchers from engaging in a bit of Monday morning quarterbacking. What could have been done better in our pandemic response? Where did we go wrong? A group of researchers from the U.S. and Switzerland have published a perspective paper – basically an opinion piece – examining these questions. I first saw a summary of their perspectives in an MIT Technology Review issue. The complexity here is that science informs policy decisions. Epidemiological researchers are busy studying the pandemic and providing guidance to policy makers, but the pandemic is not a static thing that can be analyzed retrospectively. It is like a living, breathing, evolving organism, so its progression has been hard to study or predict.

Fundamentally, there is a lot of uncertainty involved in any analysis. First, there is uncertainty related to the fundamental data used in the analysis. How many people are infected? How do you know they are infected? How many died? How do you know they died because of COVID-19? To make matters worse, there are thousands of studies from all over the world with questionable data and questionable conclusions. Hardly the material to give confidence to any policy maker.

So, how are the authors of this opinion piece recommending we tackle future pandemics? They suggest approaching the pandemic response like an engineer. This means dealing with uncertainty and being transparent in communicating about uncertainties. It also means looking at the problem from multiple perspectives. This is what engineers do and there is a term for it: concurrent engineering. Finally, it means developing tools that can serve as a starting point and evolve to meet the needs of a changing situation. Good advice for the next pandemic and the current one.

Autonomous racing

I saw a reference to car racing in ACM News. I am not really into car racing, but this story caught my eye as it details a race featuring self-driving cars with a hefty prize: $1 million. The race took place at the Indianapolis Motor Speedway, and the winning team was from the Technical University of Munich (TUM). Their car raced around the track at an average speed of 135 mph! Quite impressive. These cars are smaller versions of Formula One racing cars, equipped with a battery of sensors. The cars didn’t race in tandem in this particular event; rather, they ran one after another and the best time won the award. An interesting side note: another team, which set the record for the fastest lap at 139 mph, lost because of a programming glitch. The glitch? They programmed the vehicle to run five laps instead of six. Looks like a million-dollar mistake!

Digital twins

I came across the term “digital twin” in twin contexts recently, both related to the idea of simulating reality with AI, but otherwise completely different. One article was about how a digital twin is going to save us from the Grinch that is planning to steal our Christmas. Well, not literally – it is an article about how to save our supply chain infrastructure, whose disruptions are causing all kinds of goods shortages. The second context has a more literal meaning: How do you create a digital twin that is quite human-like in its genetic makeup?

The MIT Tech Review article is all about how to deal with supply chain disruptions even before they happen. The solution is a complex, reinforcement learning AI software simulation. What does the AI software do? Using tons of data on suppliers, locations, items, weather, etc., the simulation software systematically runs what-if scenarios. If there is a water shortage in one area, where can the farm produce be procured instead? If a particular shipping lane is interrupted, what are the alternatives? It is like your favorite map software rerouting you when it sees a traffic jam in one location. Amazon, Google, FedEx and others are investing heavily in digital twin infrastructure to be better prepared to deal with evolving disruptions in the supply chain. What about smaller sellers? Access to such software is being slowly democratized and may become more widely available.
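To make the what-if idea concrete, here is a minimal sketch of the rerouting logic described above. All supplier names, costs and disruptions are made-up illustrative data, not from the article; a real digital twin would run thousands of such scenarios over live data.

```python
# Toy what-if scenario runner for a supply chain "digital twin".
# Suppliers, costs, and disruptions below are illustrative only.

def best_supplier(suppliers, disrupted):
    """Pick the cheapest supplier not affected by the disruption scenario."""
    available = {name: cost for name, cost in suppliers.items()
                 if name not in disrupted}
    if not available:
        return None  # no alternative exists under this scenario
    return min(available, key=available.get)

suppliers = {"farm_A": 100, "farm_B": 120, "farm_C": 150}

# Baseline scenario: no disruption, cheapest source wins.
print(best_supplier(suppliers, disrupted=set()))       # farm_A

# What-if scenario: a water shortage knocks out farm_A.
print(best_supplier(suppliers, disrupted={"farm_A"}))  # farm_B
```

The rerouting here is the same pattern map software uses for traffic jams: re-solve the optimization with the disrupted option removed.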

What about the other digital twin I mentioned above? Turns out, University of Cambridge researchers used data at the organ, tissue and cellular levels, including genetic markers, to generate a digital twin in two simulated clinical case studies. They then tried to predict the evolution of clinically relevant endpoints, such as blood pressure, using graph neural networks and generative adversarial networks. This sort of modeling and simulation is the goal of precision medicine, where one can forecast the progression of a condition. The work reported here is early research and not quite ready for primetime.
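For readers unfamiliar with graph neural networks, the core idea is that each node updates its state from its neighbors. The sketch below shows a single toy message-passing step; the graph, weights and values are invented for illustration and are not from the Cambridge paper.

```python
import numpy as np

# Toy one-step graph message passing. Imagine three connected nodes
# (e.g., simulated physiological measurements); each node's next state
# blends its own value with the average of its neighbors' values.
# All numbers here are illustrative, not real clinical data.

adjacency = np.array([[0, 1, 1],
                      [1, 0, 0],
                      [1, 0, 0]], dtype=float)

state = np.array([120.0, 80.0, 100.0])  # current node values

def message_pass(adj, x):
    """Mix each node's state with the mean of its neighbors' states."""
    degree = adj.sum(axis=1)          # number of neighbors per node
    neighbor_mean = (adj @ x) / degree
    return 0.5 * x + 0.5 * neighbor_mean

next_state = message_pass(adjacency, state)
print(next_state)  # [105. 100. 110.]
```

A trained GNN learns the mixing weights rather than fixing them at 0.5, and stacks many such steps, but the information flow is the same.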

I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.