I was going to write this newsletter about Juwan Howard, the former Fab Five basketball player at the University of Michigan who just became the team's head coach. I will probably follow up with that one in the next couple of weeks.
Instead, I'm going to focus this one on the recent doctored video of House Speaker Nancy Pelosi that spread through social media, perpetuated and amplified by certain, ahem, political circles. The video sits at the intersection of politics, AI, media, and social media: basically all the topics I set out to talk about in this space.
(Editor’s note: I am the editor, so I don’t really know why I just wrote that. But I want to do an all-music edition soon. Stay tuned. Or listen and subscribe to my playlist in the meantime.)
OK, back to this Pelosi video. Here's The Atlantic, in the aftermath:
Many news outlets called it a fake; others called it doctored or distorted. Whatever you want to label it, the video was created to spread, and that’s exactly what happened. The Facebook page Politics WatchDog posted a version that has been viewed millions of times, eliciting sneering comments about Pelosi, possibly from viewers who didn’t realize that the video had been manipulated. Others appeared on Facebook, Twitter, YouTube, and elsewhere. President Donald Trump tweeted a reference to the video; his personal attorney Rudy Giuliani shared it, too, although Giuliani later deleted his post. News outlets have chased the story with fervor, even while correctly noting that such pursuit snares the media in the very trap the makers of the video hoped to set.
While not a "deepfake," it's another data point towards the things to come: partisans, politicians, shitheads, and unsavory national and international actors will increasingly use doctored or faked videos that leverage AI technology to overtly or covertly push fake content, for various nefarious reasons. That has profound implications for governments, for businesses, for people (this is a next-level revenge tactic).
At a more important/fundamental level, the very definitions of trust, truth, and reality are at stake. What happens when you can't believe what you see? Or hear? It all sounds like hyperbole. I don't think it is. (I do want to say that I don't think the Pelosi video, in and of itself, is an inflection point in disinformation. But I do think it's good that such a large swath of people is paying attention to the implications. So let's talk about it.)
To set the stage: doctored and distorted videos are nothing new. Political campaigns have always clipped and cut the content of others to make a point or slime an opponent. Check CNN's deepfake guide, which includes callouts like the incorporation of Forrest Gump into real historical footage. The techniques required to do that work were creative in nature, executed by highly skilled people at Hollywood studios with the chops, time, and resources uniquely available to them.
The difference between those creative applications and deepfakes is that deepfakes can be created with a level of automation and sophistication, using technology that keeps getting cheaper, better, and available to more people. And they alter both the sound and the visuals of a video or recording, changing the very things we typically take for granted as "real."
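It's worth pausing on just how low that bar already is, even below the deepfake threshold. The Pelosi clip was reportedly nothing more than the original footage slowed to roughly 75% speed, with the pitch adjusted so the voice still sounded natural. Here's a minimal sketch of that kind of edit in Python, driving the free ffmpeg tool (the filenames are hypothetical, ffmpeg is assumed to be installed, and this illustrates the general technique, not the actual tool chain behind the video):

```python
# A sketch of a Pelosi-style "cheapfake": slow a clip to 75% speed
# while keeping the audio pitch natural. Assumes ffmpeg is on PATH;
# filenames are hypothetical.
import subprocess

def slow_down(src: str, dst: str, speed: float = 0.75) -> None:
    """Re-time video and audio to `speed` x real time."""
    video_filter = f"setpts=PTS/{speed}"  # stretch video timestamps
    audio_filter = f"atempo={speed}"      # slow audio but preserve pitch (valid 0.5-2.0)
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            "-filter_complex",
            f"[0:v]{video_filter}[v];[0:a]{audio_filter}[a]",
            "-map", "[v]", "-map", "[a]",
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    slow_down("speech.mp4", "speech_slowed.mp4")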
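```

That's the whole trick: one short function and a free command-line tool, doing in seconds what once took a studio.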
Watching how social media platforms address the coming onslaught of deepfakes is going to be fascinating and important, if you're of the belief that deepfakes and technologically advanced "doctored" videos have the potential to be very harmful. Their policies and stances, sometimes down to the individual video, are likely to evolve over time. But these videos will be born on, spread through, and kept alive on social platforms.
The platforms often prohibit graphic violence, abuse, or threats of physical harm, but videos like the one of Pelosi highlight the large grey area these platforms have created for themselves. Does the Pelosi video hit those criteria? Facebook has not removed the video. Nor has Twitter. YouTube has. The platforms would rather live in a world where they're not responsible for the content they host.
On the other side of the platforms debate: to what degree do we, or would we, want these companies to make editorial decisions? And with what accountability? What would they deem art, or acceptable? What would be considered malicious, or propaganda? What rules can you write that would effectively distinguish between satire and political hoaxes?
My current opinion is that the platforms can't and shouldn't stay "neutral" with regard to content. As for what the solutions are? I have no clue.
Finally, there seems to be consensus in D.C. that this new level of disinformation tactics, especially more sophisticated deepfake videos, will become more prevalent leading up to the 2020 election and beyond. Yet instead of addressing the threat head-on, the government appears to be doing nothing of substance.
I haven't scratched the surface of any of the issues above: the complexities, the implications, or my own depth of understanding (I seriously feel like I comprehend 2% of what's happening, maybe less!). It's going to be an ongoing—and terrifying!—journey. I know I'm missing some perspectives and angles, too. If you have thoughts/insights/comments, I'm all ears (assuming your content to me hasn't been altered).
So with that very optimistic and inspiring newsletter, here's a lizard running across the court at a professional tennis tournament.