BakeryScan to cancer

A fascinating story appeared in The New Yorker last week about an AI pastry program that morphed into a cancer detection program. The story describes a Japanese entrepreneur named Kambe who started a small software company called Brain Co. almost four decades ago. The company focused on computer vision algorithms and, in the early 2000s, developed BakeryScan to speed up the checkout process at a bakery. The problem to be solved? How to correctly identify a heap of different pastries picked from a self-serve aisle and loaded onto a plate. Long before the deep learning revolution, the company perfected a solution using traditional approaches: getting the lighting in the images just right and hand-engineering the features that predicted which pastry was on the plate. In short, lots of elbow grease.
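To make that "elbow grease" a little more concrete, here is a minimal sketch of what a hand-engineered pastry classifier could look like: color histograms and contour shape statistics computed with OpenCV, fed to a conventional classifier. This is purely illustrative; BakeryScan's actual features and matching logic are proprietary, and the training data variables here are hypothetical.

```python
# Illustrative only: NOT BakeryScan's real pipeline, just the classic
# "engineer the features yourself" style of computer vision it describes.
import cv2                      # OpenCV for traditional image processing
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pastry_features(bgr_image):
    """Hand-crafted features: a coarse color histogram plus simple shape stats."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # 8 x 8 hue/saturation histogram, normalized and flattened to 64 numbers.
    hist = cv2.calcHist([hsv], [0, 1], None, [8, 8], [0, 180, 0, 256])
    hist = cv2.normalize(hist, hist).flatten()

    # Shape features from the largest contour, assuming a plain, bright tray.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(c)
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
    else:
        area, aspect = 0.0, 1.0
    return np.concatenate([hist, [area, aspect]])

# Train a conventional classifier on labeled crops of individual pastries
# (X_train: list of BGR images, y_train: pastry names -- hypothetical data):
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit([pastry_features(img) for img in X_train], y_train)
# prediction = clf.predict([pastry_features(new_crop)])
```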

Fast forward to today, and this traditional solution is still in use. Kambe, now in his seventies, has not ignored advances in deep learning. Because of hygiene concerns highlighted by the pandemic, pastries now have to be wrapped in plastic, which alters the light characteristics of the image. The solution? A deep learning algorithm visually "unwraps" the plastic, and the traditional solution then identifies the pastry. That is not the end of the story, however. A hospital in Japan approached the company to see if its AI magic could be used to detect cancer in pathology slides. And it did, drawing on its arsenal of image feature recognition techniques.
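Here is an equally hedged sketch of that two-stage idea: a small image-to-image network learns to "unwrap" the plastic from paired wrapped/unwrapped photos, and its restored output is then handed to a traditional classifier like the one sketched above. The company's real architecture has not been disclosed; this is only a plausible shape for such a pipeline, written in PyTorch.

```python
# A plausible (assumed) two-stage pipeline: deep learning cleans up the image,
# traditional feature matching identifies the pastry.
import torch
import torch.nn as nn

class Unwrapper(nn.Module):
    """Tiny convolutional encoder-decoder that maps a plastic-wrapped image
    to an estimate of the same pastry without glare and occlusion."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),   # back to RGB in [0, 1]
        )

    def forward(self, wrapped):
        return self.net(wrapped)

# Training would minimize a pixel loss against unwrapped target photos, e.g.:
# loss = nn.functional.l1_loss(model(wrapped_batch), unwrapped_batch)
# At inference, the restored image goes to the classic pastry classifier.
```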

As we hurtle toward the deep end of the deep learning revolution, it is useful to note that traditional approaches still have their advantages. Sure, they can be painstaking and slow to build, but they can explain why a particular prediction was made (and they don't need millions of training examples).

Augmented reality world

Facebook is making huge investments in augmented and virtual reality. Its latest foray is a wristband that can detect and interpret electromyograms (EMGs), the electrical signals emitted by your muscles. Want to swipe left or right on your smartphone? The signal from your brain is sent to the muscles in your hand, which then execute the action. That signal can be detected as EMG at your wrist, which the Facebook wristband then interprets to navigate an augmented reality world. In effect, you are swiping left or right in a virtual world just by thinking about it! What is the actual augmented reality application here? Not disclosed yet.
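Facebook has not published how its wristband decodes these signals, but a generic EMG gesture pipeline might look like the sketch below: windows of multichannel EMG are reduced to classic features (per-channel amplitude and zero-crossing counts) and fed to a simple classifier. The channel count, window size, and gesture labels are all assumptions made for illustration.

```python
# A generic, assumed EMG-to-gesture sketch; not Facebook's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_CHANNELS = 16          # assumed number of EMG electrodes around the wrist
WINDOW = 200             # assumed samples per window (e.g., 100 ms at 2 kHz)

def emg_features(window):
    """Classic EMG features per channel: root-mean-square amplitude and
    zero-crossing count, concatenated into one feature vector."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zero_crossings = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([rms, zero_crossings])

# windows: array of shape (n_examples, WINDOW, N_CHANNELS)
# labels:  e.g., "swipe_left", "swipe_right", "rest"  (hypothetical data)
# clf = LogisticRegression(max_iter=1000)
# clf.fit(np.stack([emg_features(w) for w in windows]), labels)
# gesture = clf.predict([emg_features(live_window)])[0]
```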

The most famous augmented reality app, which took the world by storm four years ago, is Pokémon Go. It allowed smartphone users to see mythical creatures when they viewed the world through their camera lenses. The gamification, which involved hunting for these characters in the real world using GPS location tracking, also made it the single most popular health app. Why health? Millions of people had to walk around parks and other locations to figure out where exactly these characters were! Brilliant app.

What is the latest on this app? Well, I found a link that shows Pokémon Go in an entirely different world: one inhabited by people wearing Microsoft HoloLens headsets, interacting with Pokémon Go characters and with people in the real world (check out the cool video in that link).

Augmented reality is definitely in our future. It has huge applications in almost every domain. As an example, imagine this recurring nightmare: assembling a piece of IKEA furniture, except this time augmented reality overlays the instructions on your view of the world, and you can swipe left or right in your mind while your hands do the assembly!

Evidence-based Policing

I saw this interesting article in Wired. It covers the long history of Lawrence (Larry) Sherman, the current director of the Jerry Lee Centre of Experimental Criminology at the University of Cambridge in the UK.

In a nutshell, the life story of Larry Sherman can be captured in two statements: police should abide by the Hippocratic oath that physicians take, "first, do no harm," and police interventions should be evidence-based, using randomized trials to see which interventions really work. Over a 50-year career, Sherman has conducted a wide range of experiments in many locations, including the United States, the UK, and Australia.

"Checkpoint" is one such effort, spawned from the University of Cambridge. In the summer of 2016, a 20-month trial was launched involving minor offenses. A machine learning algorithm, the "Harm Assessment Risk Tool," helped identify study participants. Half were assigned randomly to traditional prosecution; the other half were given personalized treatment plans and agreed to meet with their victims and/or undertake community work.
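As a toy illustration of that trial design (not the actual Checkpoint implementation), the sketch below screens cases with a risk score and then randomizes the eligible ones 50/50 between standard prosecution and a personalized plan. The scoring rule and eligibility thresholds are invented for the example.

```python
# Toy randomized-trial assignment: an ML-style risk score screens for
# eligible, moderate-risk cases, which are then randomized 50/50.
import random

def assign(offenders, risk_score, seed=42):
    rng = random.Random(seed)      # fixed seed so the assignment is auditable
    treatment, control = [], []
    for person in offenders:
        if not (0.2 <= risk_score(person) <= 0.7):   # assumed eligibility band
            continue                                  # too low/high risk: excluded
        (treatment if rng.random() < 0.5 else control).append(person)
    return treatment, control

# treatment gets the personalized plan; control gets traditional prosecution.
```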

At the end of the study period, it was estimated that the UK police saved £2 million in reduced crime through the non-traditional prosecution route. Larry is also advocating for population-based approaches, now all the rage in personalized medicine. The basic idea is to identify at-risk individuals and intervene to prevent crime (just as you would to prevent hospitalization of at-risk persons with specific health conditions). This scientific approach to policing, pioneered by Larry Sherman, is definitely worth looking into.

I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.

V. “Juggy” Jagannathan, PhD, is Director of Research for 3M M*Modal and is an AI Evangelist with four decades of experience in AI and Computer Science research.


Listen to Juggy Jagannathan discuss AI on the ACDIS podcast.