On one hand, stopping it isn't just pointless, it's impossible. China keeps releasing better, smaller, faster FREE models for anyone to use, and their output sidesteps tools like AI video watermarking, camera 3D depth verification, etc. Progress will continue due to capitalism, innovation, competition, and just the natural course of industry growth.
On the other hand, what SHOULD happen is:
1. Strict laws (e.g. against deepfakes) with teeth (i.e. actual prosecution & fast turnaround)
2. Verification systems from trusted sources (not easy when even government accounts like POTUS & the White House are using AI)
There are detection tools out there, but because the generation technology always stays a step ahead, it is going to become progressively harder to spot fakes due to both their quality & the sheer volume of content as the market gets flooded:
A team of Cornell computer science researchers has developed a way to “watermark” light in videos, which they can use to detect if a video is fake or has been manipulated, another potential tool in the fight against misinformation.
news.cornell.edu
New Sony video authenticity solution offers defense against deepfakes. It verifies footage using digital signatures and 3D depth detection.
www.androidheadlines.com
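For what it's worth, the digital-signature half of that Sony approach is just standard cryptography applied to camera output: the camera signs a hash of the footage at capture time, and anyone with the right public key can check it later. Here's a rough Python sketch of that general idea only, not Sony's actual system; the Ed25519 keys, detached signature, function name, and file handling are all my own placeholder assumptions:

```python
# Rough sketch of signature-based footage verification (the general idea,
# NOT any vendor's real implementation). Assumes the camera signed a SHA-256
# hash of the file with an Ed25519 key, and that we already have the
# manufacturer's public key plus a detached signature for the file.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_footage(video_path: str, signature: bytes, public_key_bytes: bytes) -> bool:
    # Hash the raw video bytes exactly as they sit on disk.
    digest = hashlib.sha256()
    with open(video_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)

    # Any edit to the file (re-encode, splice, AI manipulation) changes the
    # hash, so the signature check below fails.
    key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        key.verify(signature, digest.digest())
        return True
    except InvalidSignature:
        return False
```

The catch is the same as above: this only proves the file hasn't changed since it was signed, not that the scene in front of the camera was real, which is what the 3D depth detection is supposed to cover.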
It's a big mess. Personally I think that AI-generated fake news will cause MASSIVE problems in coming years. We're just seeing the first inklings of it:
Federal judges using AI filed court orders with false quotes, fake names
Adelphi student accused of using AI on an assignment sues the school
Facebook, Instagram accounts falsely linked to predatory behavior via AI auto-moderation
Dutch election overshadowed by AI fakes and genocide accusations