I REGULARLY watch the excellent conservative commentator Victor Davis Hanson (VDH) on YouTube. Last Sunday, November 23, I clicked on what I assumed was his channel and watched what appeared to be his latest video. It covered the case of Mark Kelly, the Democratic Senator from Arizona and retired military officer who has recently been accused of sedition.
Everything seemed normal. The video looked and sounded exactly like VDH – the cadence, the tone, the presentation style. But after several minutes, I began to suspect something was seriously wrong.
It became increasingly clear that what ‘VDH’ was saying didn’t match VDH’s opinions or manner of speech. The tone was unusually legalistic and dry, arguing points that aligned precisely with pro-Kelly talking points, essentially defending him against the sedition accusations. In fact, the script read exactly like something an AI would generate if prompted: ‘Provide a defence of Mark Kelly with respect to the sedition accusation against him.’
The sophistication of the video was frightening. For someone like me, who watches almost all of VDH’s content, the fact that it almost fooled me shows just how dangerous these AI-generated fakes can be. The production was highly professional, replicating both his voice and his face convincingly. Considerable effort must have been invested in recreating the look and feel of VDH’s YouTube channel. It is not hard to imagine that this was a deliberate, well-resourced attempt to discredit a prominent commentator, presumably done by (or at the request of) senior Democrat activists.
I shared the link with my colleague Martin Neil, noting my suspicions:
‘This is so odd. I watch Victor Davis Hanson all the time and I’m 99 per cent sure this is totally AI generated. It’s his voice and face but it’s not how he talks and what he is saying is 100 per cent AI generated boring text.’
However, by the time Martin clicked on it the next day, the link produced a screen saying ‘This video is no longer available because the YouTube account associated with this video has been terminated.’
It was clear that the video was designed to deceive, yet there was no publicly accessible copy to warn others about it.
I tried to find archived versions, and while some discussions confirmed this was not the first time VDH had been targeted by AI-generated content, no reliable copy of this particular clip existed. Even AI tools such as Grok could not retrieve it. Grok provided the following related Wayback Machine link:
‘ (captures the embedded YouTube short: VDH on “sedition backfire”, ~2 minutes; channel since deleted).’
This link also failed, demonstrating just how fleeting these deepfakes can be and how difficult they are to track.
This incident underscores a chilling reality: deepfake technology has advanced to the point where it can convincingly mimic respected public figures and spread disinformation in politically charged contexts. The danger is not just theoretical. When these videos are crafted with precision, they can deceive even well-informed audiences, potentially influencing public opinion, discrediting credible voices, and eroding trust in media.
We are entering a world where seeing is no longer believing. As AI-generated videos become more sophisticated, the public must develop new tools and critical thinking skills to discern fact from fabrication. Transparency, education, and proactive archiving of potential deepfakes are essential to prevent these manipulations from shaping political discourse.
The VDH deepfake was a small glimpse into what could become a massive problem for democracy. If this could almost fool someone very familiar with the source material, imagine the impact on the wider public. We must act before seeing and hearing can no longer be trusted.
This article appeared on Norman Fenton’s blog on November 28, 2025, and is republished by kind permission.