April 3, 2023 | V. "Juggy" Jagannathan
In this week’s blog I look at the labor force impact of large language models (LLMs) and a blog post by Bill Gates heralding a new era of artificial intelligence (AI). First, check out the podcast I recorded on AI in health care with Dr. Thomas Polzin, director of natural language processing at 3M HIS.
In this conversation with Dr. Polzin, we cover what it takes to serve cutting-edge AI solutions to thousands of hospitals and clinics across the world: how to stay laser focused on the satisfaction of hundreds of thousands of clinicians and coders, how to maintain the highest standards of data privacy, and how to earn the full trust of our customers.
There is no question that LLM-based solutions like ChatGPT have changed the AI landscape. The technology is revolutionary. I expect organizations like the Brookings Institution to do a thoughtful study on the impact of this technology and what it means for the workforce. But OpenAI has jumped the gun and come up with its own analysis of what the tech means for the labor market. Here is their thought-provoking paper on arXiv: “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models.” The paper has yet to be peer reviewed, but its import is nevertheless interesting. Let’s dive in and see what they have to say.
First, an explanation of the title. The first GPT in the title stands for generative pre-trained transformer. This is the foundational model that underlies all the different LLMs coming from OpenAI, Google, Meta and others. The second GPT stands for general purpose technology – that is, a technology with universal application, like the computer, which can be used for almost anything.
In the paper, the OpenAI authors analyze the data available through O*NET, a primary source of occupational information. The dataset currently available covers 1,016 occupations broken into 19,265 tasks, which in turn break into 2,087 unique detailed work activities (DWAs). The authors then examine each DWA to determine whether it can be impacted by LLM tech, defining three buckets: no exposure (not impacted by LLMs), direct exposure (an LLM alone can decrease the time to do the task by at least 50 percent), and LLM+ exposure (an LLM in combination with other software built on top of it can decrease the time by half). Then, to assess these ratings for each task, they enlisted human annotators as well as (what else?) GPT-4! Before diving into the conclusions of this paper, I found it interesting that they had a complete section on the weaknesses of their approach – almost like the section on risks in Securities and Exchange Commission filings!
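To make the rubric concrete, here is a minimal sketch in Python of how per-task exposure labels could be rolled up into occupation-level statistics. The occupations, task labels and bucket names below are invented for illustration – they are not taken from O*NET or from the paper's actual annotations:

```python
# Illustrative sketch only: occupations and labels are made up.
# Exposure buckets, loosely following the paper's scheme:
#   E0 = no exposure
#   E1 = direct exposure (LLM alone halves task time)
#   E2 = LLM+ exposure (LLM plus additional software halves task time)

occupations = {
    "medical coder": ["E1", "E1", "E2", "E0"],
    "truck driver":  ["E0", "E0", "E0", "E0"],
    "copywriter":    ["E1", "E1", "E1", "E2"],
}

def exposure_share(task_labels, buckets=("E1", "E2")):
    """Fraction of an occupation's tasks that fall in the exposed buckets."""
    return sum(label in buckets for label in task_labels) / len(task_labels)

for name, tasks in occupations.items():
    print(f"{name}: {exposure_share(tasks):.0%} of tasks exposed")

# Roll-up: share of occupations with at least half their tasks exposed
shares = [exposure_share(t) for t in occupations.values()]
heavily_exposed = sum(s >= 0.5 for s in shares) / len(shares)
print(f"{heavily_exposed:.0%} of these occupations have >= 50% of tasks exposed")
```

The paper's headline numbers come from the same kind of aggregation, applied to the full O*NET task inventory and weighted by employment figures rather than this toy count.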
In the researchers’ own words, this is their conclusion: “Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted.” That is indeed quite an impact being forecast!
Bill Gates has declared that a new era – the era of AI – has begun with the advent of LLMs like ChatGPT. In this blog post, Gates talks about the challenge he posed to OpenAI last summer: build a model that could pass an AP Biology exam. Within a few months, the GPT model aced the test. Clearly, we are at an inflection point with this technology.
The Gates Foundation has been involved in numerous philanthropic activities – particularly around addressing inequities – so he takes an optimistic tone on how AI can help address them. He also examines what this new era means for a wide swath of human endeavors. Productivity enhancement, through what Microsoft calls Copilot, is clearly one way a broad range of people can benefit from the tech. Deploying health AI applications (such as AI-powered ultrasound) to poorer countries is another.
Education is another big area, and Khan Academy has already embarked on this endeavor. Check out the awesome demo from Sal Khan on this front. Of course, there are lots of risks we need to navigate before the full potential of these technologies is realized. Just this week NPR came out with this interesting take: “It takes a few dollars and eight minutes to create a deepfake”. Perhaps this really is the start of the “age of misinformation.”
Given this potential for misinformation and harm, the Future of Life Institute just released an open letter calling for a six-month pause on giant AI experiments, signed by the likes of Yoshua Bengio and Elon Musk! That is not likely to happen, though, unless the government steps in (which the letter also calls for).
I am always looking for feedback and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.
“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.