
This week I am going to focus on the freshly proposed amendments to the European Union (EU) Artificial Intelligence (AI) Act, adopted on June 14, 2023, by the EU Parliament. I will also cover some recent deliberations on how to handle intellectual property issues arising from the use of generative AI.

EU AI Act and United States efforts

The EU Parliament issued 771 amendments to a bill originally drafted in 2021, described across 349 pages. A tome! The amendments focus largely on generative AI capabilities and foundation models. The final version is slated to be adopted later this year.

My eyes glazed over looking at all this legalese, so I did what anyone tends to do in the age of generative AI: I asked ChatGPT and Bard to summarize the key points. ChatGPT came back and said that its knowledge base is limited to September 2021. It is sorely out of date. Of course, if I use GPT-4 through an API, and pay some toll, I will get better answers. But Bard immediately came back with a summary. Here is a summary of the summary – four broad tenets (I did confirm the gist from a few other sources):

  • Banning the use of AI for social scoring.
  • Prohibiting the use of AI for real-time remote biometric identification in publicly accessible spaces.
  • Requiring AI systems to be designed in a way that minimizes bias.
  • Giving people the right to make complaints about AI systems.

Source: Bard  

The EU’s General Data Protection Regulation (GDPR) laws surrounding privacy are the strongest in the world and forbid organizations from using personal data for anything without individual consent. These considerations manifest in the latest AI regulations as well. The Stanford Center for Research on Foundation Models has analyzed the EU AI Act from the perspective of what it means for developers of foundation models such as GPT-4 and Bard. The analysis rates how well a recent set of 10 foundation models fare with respect to the provisions of the AI Act.

They analyzed the models along four major categories: 

  • Data transparency – What data was used in training models? Was personal or copyrighted data used? 
  • Compute power transparency – What resources were used in training the model? 
  • Model details transparency – What are the capabilities and limitations, how are the risks mitigated (related to bias and hallucinations), how are these models evaluated and tested? 
  • Deployment transparency – Does the user know they are dealing with machine generated content? Which EU states have seen deployment of systems based on this tech? How is downstream compliance (meaning applications that bundle these capabilities) handled?

Needless to say, GPT-4 and Bard are pretty much opaque when it comes to the first two categories above, but they fare better on the latter two. Of all the foundation models rated by the Stanford researchers, only BLOOM scored relatively high. The analysis underscores the difficulty these foundation models will face if the proposed amendments become law in the EU. Of course, we are likely to see some intense lobbying on this front.

Meanwhile, the legislative overtures in the U.S. are proceeding at a pedestrian pace. I tried Bard again to see what is happening, and I got some interesting results. First, it gave a pretty decent summary. Then, when I asked it to provide citations, it clammed up! After that, even when I removed the request for citations from the query, it gave me the same boilerplate answer: “I'm not able to help with that, as I'm only a language model.” I reset the chat, tried the same query, and, voilà, got a complete answer with citations. Interesting behavior.

I have been tracking this space, and the National Institute of Standards and Technology (NIST) has several efforts underway, including the latest announcement from the White House directing NIST to create a new public working group on generative AI to analyze the risks of using these models. A bipartisan House effort wants to establish an AI commission that will inform on the dangers of AI and what regulations to impose. A number of Senate bills are in the running – one from Senator Schumer, another from Senators Markey and Peters. Suffice it to say, they are all in the fairly early stages of sausage making. One citation from Bard was interesting – it lists a number of state government efforts to rein in AI – from California, New York, Illinois, Vermont, Colorado, Connecticut and Washington, D.C. A kaleidoscope of efforts all over the place. Interesting to say the least.

Intellectual property (IP) protections 

I have been a longtime Time magazine subscriber, and they have been running a steady stream of articles about AI. This one in particular brings to the fore one of the core use cases for generative AI. It's titled “AI Could Help Free Human Creativity” by Sheena Iyengar, a professor at Columbia Business School. Her thesis, which is a valid one, is that you can use generative AI as a choice generation engine. So, if you say to ChatGPT, as she did, “List the ways in which one can use a toothpick,” it lists 50. And you can ask it for more. The point is, it follows the traditional idea generation process: generate and test. Or, in a team setting, brainstorm and select.

So, what about the IP generated in this fashion? There are undoubtedly new drugs that are going to come to market by leveraging these tools. Should they be awarded patents? How can you tell if AI was used in the process? Gorgeous pictures and images are already being generated. Should they be provided copyright protection, particularly when the underlying models are trained on massive copying of copyrighted material? These are all quite profound questions. See the passionate speech Vanderbilt Law professor Daniel Gervais gave on this topic. He argues we should not be granting copyright or patent protections to AI-generated solutions – an argument made clearly in a short paper he published recently, titled “Artificial Inventors.” I liked this paragraph, which gets to the root of his argument:

“Machines are not incentivized by money. Exclusive rights mean nothing to them. They can do science, technology, or both. They can replace human researchers in both realms, up to a point and I am not sure I want to find out what the ultimate point is. There is a reason we picked humans to be inventors, and only humans, and it is not (just) the patent bargain; it is the human bargain.”

I found one more recent paper, “Evolving theory of IP rights,” making a similar argument. Another short excerpt from the concluding section of the paper:  

“Indeed, technology and new knowledge can encompass a responsibility when there is potential for benefits to extend to all peoples. In this respect, global health, both human and environmental, must be prioritized by international agreements, such as those at the centre of the multilateral trading regime. Without doing so, the world will remain focused on the objectives of the few to the detriment of all.”

As literally billions of dollars are being invested in all manner of startups to exploit generative AI technology, it remains to be seen how the landscape of IP rights is going to be transformed. Let’s hope it is to the betterment of all humanity.

I am always looking for feedback, and if you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.

“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.