Cal AI Transparency Act Passes Through Another Committee — With Help from a Pack of Gum

Yesterday, in Sacramento, I had the privilege of testifying before the California Senate Governmental Organization Committee, advocating for SB 942 — the California AI Transparency Act (CAITA). The bill, authored by State Senator Josh Becker, passed the Committee by an impressive 11-1 vote (yea!). After its passage, it was great to see Senator Susan Rubio join the bill as a co-author, so the bill has good momentum and additional support as it moves next to the Senate Appropriations Committee. During my testimony, I used a prop — a pack of gum — and I wanted to use this blog post to expand on why.

During my testimony, I noted that, starting over a hundred years ago, we made the societal decision to label food, cosmetics, drugs, medical devices, mattresses, and so on from a safety and health perspective. If AI truly represents the most “profound technology that humanity will ever develop and work on” (as the CEO of one large tech firm claims), then it should be held to the same standards of transparency as a pack of gum.

So, I led off my testimony with that “profound” quote, which comes from Sundar Pichai, the CEO of Alphabet, who said it in 2021, even before the world experienced the start of the Generative AI revolution. [That was also the quote I used as the lead for Chapter 4 on AI in my book Containing Big Tech.]

Here is the quote:

"I view AI as the most profound technology that humanity will ever develop and work on ... if you think about fire or electricity or the internet, it's like that. But I think even more profound."

I then said the following:

But if AI is as profound as he claims, then "it should be held to the same standards of honesty and transparency as a pack of gum."

We have food labeling laws dating back to 1906. These laws not only require standardized disclosures but also prohibit misbranding, i.e., labeling that is false or misleading.

California even passed its own food labeling law in 1939 — the Sherman Food, Drug, and Cosmetic Act — which represented one of the earliest state-level efforts to regulate the labeling and safety of food products.

But here we are in 2024, and it is becoming increasingly difficult to tell whether content is human-generated or machine-generated by GenAI (i.e., “synthetic”).

I lifted that “pack of gum” analogy and quote from Max Kennerly in his comments to the National Telecommunications and Information Administration (NTIA) in response to their request for feedback on “AI Accountability.” So, he gets full credit for the gum analogy and for highlighting that our “labeling laws for food, cosmetics, drugs, medical devices, and even pesticides do not merely require standardized disclosures, but also prohibit ‘misbranding’ by way of labeling that is ‘false or misleading in any particular.’” The implication, of course, is that if we do it for those things, why not do something comparable in our digital age for content created by machines?

I next said the following in my testimony:

Californians deserve to know whether the videos, photos, and content they see and read online are real.

SB 942 addresses this problem. It simply says a provider must label its content and provide a way for a consumer to ask, “Hey, did you create this?”

That sentence, “Californians deserve to know …,” was a paraphrase of Senator Schatz's remarks about the AI Labeling Act (sponsored by Senators Schatz and Kennedy), which I modeled much of CAITA on in the drafting process. It also echoes Senator Kennedy’s comment about their bill: “Our bill would set an AI-based standard to protect U.S. consumers by telling them whether what they’re reading, seeing or hearing is the product of AI, and that’s clarity that people desperately need.”
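
For the technically curious, here is a minimal sketch of what that “label it, then let the consumer ask” mechanism could look like in practice. This is purely my own illustration in Python using the Pillow imaging library; the `ai_provenance` metadata key, the manifest fields, and the functions `label_image` and `did_you_create_this` are hypothetical and are not anything the bill itself prescribes.

```python
# Hypothetical sketch only: SB 942 does not prescribe this format.
# The "ai_provenance" key and the manifest fields are my own inventions.
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_image(path_in: str, path_out: str, provider: str, system: str) -> None:
    """Attach a latent provenance disclosure to a generated PNG."""
    manifest = json.dumps({
        "provider": provider,   # who "manufactured" the content
        "system": system,       # which GenAI system produced it
        "synthetic": True,      # machine-generated, not human-captured
    })
    meta = PngInfo()
    meta.add_text("ai_provenance", manifest)
    Image.open(path_in).save(path_out, pnginfo=meta)


def did_you_create_this(path: str, provider: str) -> bool:
    """Answer a consumer's 'Hey, did you create this?' query."""
    text_chunks = getattr(Image.open(path), "text", {})
    manifest = text_chunks.get("ai_provenance")
    if manifest is None:
        return False  # no disclosure found, so provenance is unknown
    return json.loads(manifest).get("provider") == provider
```

In the real world, provenance standards such as C2PA content credentials serve this role far more robustly, but the point is the same as the gum analogy: the label travels with the product, and anyone can read it.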

I won’t bore you with the rest of my testimony, but I closed with:

“In summary, this bill puts AI content on the same level as a pack of gum in terms of disclosures, which is the transparency that Californians need.”

Of course, I pulled out my pack of gum and flashed it as a prop during that closing sentence. A pack of gum has a barcode that tells you exactly who manufactured the product and what the product or part number is. It also has a nutrition label that gives you ingredient, nutrition, and allergen information.

Interestingly, during the subsequent testimony from opponents of the bill, a lobbyist for the tech industry posited that labeling content created by Generative AI could violate the First Amendment (see 1:33:36 of this video). He said:

And finally and just to very quickly to distinguish between a pack of gum and I think the technology here at issue. A pack of gum is not protected by the First Amendment, whereas some of the speech is, and I think the way we either preference or stigmatize certain speech is concerning and that’s why we want to get this right.

This struck me as maybe not the strongest argument, in that this bill deals solely with labeling synthetic content created by “machine-based systems,” i.e., AI. Is he implying that machine-based systems that generate content via machine learning from millions of data sources have free speech rights? It is not altogether clear whose “free speech” is being impacted. Furthermore, given that labeling AI content does not ban or abridge speech (if one can even consider machine-generated content to be speech), where is the problem, especially since all the major AI companies have pledged to do the labeling the bill calls for and have not raised a peep about free speech? And don’t we already require disclosures on political ads, so isn’t there a track record of requiring disclosures for certain forms of content? I started thinking of a bunch more points to debunk this argument, but in the end, none of the Senators had any follow-up questions on this particular matter, and the bill passed.

After the hearing, I saw that an expert had also examined this First Amendment issue in the context of the AI Labeling Act and concurred with some of my initial reactions, in an article entitled “Does the First Amendment Protect AI Generated Speech?”:

The First Amendment prohibits the government from “abridging the freedom of speech.” “Speech,” as the U.S. Supreme Court has interpreted the term, refers not just to the written or spoken word, but also to art, films, parades, and other forms of expression. Until now, courts have applied the free speech clause to forbid government restrictions on human expression.

There may, however, be a middle ground between prohibiting generative AI from contributing to public discourse and giving it free rein: labeling requirements.

The AI Labeling Act, which U.S. Senators Brian Schatz (D-Hawaii) and John Kennedy (R-La.) introduced last October, would require AI-created images, video, and other media to carry a disclosure of their AI source. According to Senator Schatz, even if such a labeling requirement cannot guarantee a marketplace of ideas in which the truth will prevail, it may prevent a total marketplace failure, while preserving the public’s right to information.

So, I think that CAITA, aka SB 942, which again is modeled on the AI Labeling Act, does find that nice “middle ground” — the bill does not restrict or prohibit what content can be generated; the content just needs a notice or disclosure that it was created by AI. The world has survived, and even benefited from, labels on mattresses and the barcode and nutrition label on a pack of gum, so I think we can survive a notice on an image telling us that it is machine-generated. But that’s my opinion, and I am not the author of the bill. I do know that Senator Becker and his team are certainly open to addressing concerns and enhancing the bill, and they look forward to that engagement.

At the end of the day, this lack of transparency with respect to GenAI has serious implications for our society and democracy, including the significant problems we are currently experiencing with disinformation, fraud, academic cheating, and more. For example, we can no longer be sure whether a video or an image is real or a deepfake, making it easier to harass or embarrass individuals or to sway public opinion about politicians and celebrities. Voicemails that appear to be from a spouse or relative telling a consumer to transfer money can now be easily created by a fraudster using AI to scam a victim into initiating a financial transaction. Teachers increasingly don't know whether homework assignments are written by students or generated by popular GenAI tools such as ChatGPT, and students are being wrongly accused of using GenAI because their original work was "too good." And given that news organizations are turning to AI to generate content, coupled with well-documented stories of GenAI “hallucinating” its results, the lines between fact and fiction in our democracy are further blurred. So we need to do something, and I think SB 942 is a practical solution to give us the transparency and honesty we need with GenAI.

Finally, here is a photo of Senator Becker and me getting ready to testify. Yes, I am about to get a very long overdue haircut!

