Bot Anxiety and the Ethics Leviathan
Supervisor Dean Preston, a previous beneficiary of fake news in campaigns, takes a performative stance on deepfakes amid election narratives focused on the tech industry
PROGRAM ADVISORY: “How Peskin Could Win,” a look at the possibilities he could have under ranked choice voting, is now live at The Voice of SF, as is a new installment of Susan Reynolds’ backgrounder on Peskin. Enjoy!



Supervisor Dean Preston finally presided over his hearing on using artificial intelligence in campaigning this past Monday, and the result was a lot more heat than light. The District 5 Supervisor, facing a reelection challenge, seems more interested in using the admittedly concerning implications of the technology to suit his narratives and possibly to squelch speech he doesn’t like. Unfortunately, Preston’s ham-handed, ideological, and self-serving approach is the first to color this subject at City Hall.
Preston’s track record on this issue is not great. As previously noted, he’s benefited politically from online fake news platforms, including some that made use of AI tech.
The “San Francisco Independent Journal,” created by members of the local Democratic Socialists of America chapter to boost Preston’s re-election campaign in 2022, sloppily used the same template as the Marin Independent Journal. It featured glowing coverage of Preston written under undeclared pseudonyms with fake profiles, including GAN-generated phony profile pictures.
When Preston introduced his hearing request Dec. 12, he highlighted New York Mayor Eric Adams’ use of an AI tool to make service outreach robocalls in different languages as an example of deceptive misuse, which is a bit of a reach compared with the tech used by his supporters.
Preston also doesn't seem to care much for free political speech generally. In 2020, he tried to kill a long-standing public records advertising contract with the Marina Times over then-editor Susan Reynolds’ admittedly spicy, opinionated tweets about him. His revenge play was quashed after his colleagues had second thoughts about the idea.

Preston’s hearing took up about an hour of Monday’s two-hour Rules Committee meeting and drew little notice or attention. It had initially been scheduled for late April. The supervisor opened the hearing with remarks betraying an obtuse view of what is a very real problem.
It needs no repeating here that a global information war is being waged between reactionary authoritarian states and democracies, and that the American home front is under pressure from information campaigns designed not so much to sway votes as to undermine democratic institutions generally. In response, those institutions still struggle to define a new concept of “information integrity” to complement “information security.”
Instead, Preston focuses on the shiny object of generative AI creating “new avenues for misleading content” and demands “adequate guardrails” against “fraudulent video and audio material,” disregarding necessary context and rights considerations.
The supervisor identified some significant and problematic recent uses, including a deepfake video distributed by MAGA troll Jack Posobiec that depicted Pres. Joe Biden calling for military conscription of 20-year-olds due to the Ukraine war, as well as one aimed at suppressing votes in New Hampshire. But he couldn’t offer much in the way of local examples.
Instead, he cited one that doesn’t exist: “an AI-generated voice of former President Barack Obama, claiming to endorse someone for San Francisco District Attorney.”
A cursory search of social media and cached websites yielded no such content. When we asked Preston's office about the citation, we were told the supervisor misspoke and likely conflated it with “a different deepfake of Obama’s voice being used re Garry Tan's mayoral announcement.” [Maybe Preston saw this, or this, and misremembers things?] That video was posted on Xitter [go ahead, pronounce it like it’s spelled] April 29 by a user named @CrookeJenkins, an unambiguously progressive-aligned parody account.
The hearing was nevertheless revealing on several levels. It featured presentation decks from relevant department heads that will be useful for future reference. Director of Elections John Arntz outlined planned department and city responses to hostile information operations directed at election procedures.
A presentation by Ethics Commission Director Patrick Ford outlined the agency's bailiwick and its limits in pursuing issues based on the content of political speech [this one was the longest and most comprehensive, apart from the issue at hand; we touch on a possible reason for this further down].
At one point, Ford said that enforcing against the content of political speech would be "a new frontier" for the agency. That could be interpreted as a caveat to Preston, though a weak one. Arntz also noted the narrow domain in which his agency could act; again, not against political speech per se.
These presentations were preceded by one from UC Berkeley Prof. David Evan Harris, which outlined the issues along with a brace of state bills currently in the pipeline, including one from East Bay Assemblymember Buffy Wicks that would require generative AI producers to watermark their media so it can be properly identified as artificial, along the lines of a provision in recently enacted European Union laws.
When Preston asked if he knew of municipal laws against deepfakes, Harris replied that the large amount of proposed state and federal legislation on the subject had precluded tracking local legislation but that he could look into it.
Harris ultimately recommended that San Francisco hold off on any local effort unless AB 2930, the state bill from East Bay Assemblymember Rebecca Bauer-Kahan, fails to pass. That bill would require risk assessments of the algorithms used in AI apps, referred to in the legislation by the new term of art “Automated Decision Tools.” Given industry support for the bill, passage is likely. [One wonders what legislative term of art will be developed for deepfakes.]
Preston seemed overly focused on the problem of “false political endorsements” rather than the big-picture problems for democratic institutions fighting the current information war. He remarked throughout the hearing that he was surprised there was little or no legislation against what he called “fraudulent” political speech or “lies.”
It’s almost as if he was trying to set the rhetorical stage for any unwelcome outcomes in this November’s election.