A recent online ad from the Daily Voice, an internet news site based in Connecticut, called for newscasters. The ad seemed normal, but for one specific line: “We will use the captured video with your likeness to generate video clips for stories continuously into the future.”
What the news site was promising to use is a variant of the AI technology known as deepfakes. Deepfakes combine deep learning with fake, or synthetic, data and media — visual or other information that is manufactured rather than produced by real-world events — to generate content.
Some consider deepfakes just another form of synthetic data that enterprises can use to their advantage to train machine learning models. Others see them as a dangerous tool that can sway political opinion and events, harming not only consumers with fake and misleading images, but also organizations by eroding trust in authentic data.
Deepfakes as a useful tool
Enterprises must separate the bad from the good with deepfakes, said Rowan Curran, analyst at Forrester Research.
“It’s important to disambiguate this idea of deepfakes as a tool that individuals are using to fake a speech by a politician from these useful enterprise [tools] for generating synthetic data sets for very useful and very scalable enterprise [products],” Curran said.
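The "useful" enterprise side Curran describes — generating synthetic data sets to train models — can be as simple as programmatically producing labeled records that mimic real ones without exposing real customers. A minimal sketch in Python (the field names and the labeling rule are hypothetical illustrations, not any specific vendor's approach):

```python
import random

def make_synthetic_records(n, seed=0):
    """Generate fake customer records usable as model training data.

    All fields and the churn rate below are made-up examples; real
    synthetic-data tools would fit these distributions to actual data.
    """
    rng = random.Random(seed)  # fixed seed keeps the data set reproducible
    records = []
    for _ in range(n):
        records.append({
            "age": rng.randint(18, 90),
            "monthly_spend": round(rng.uniform(10.0, 500.0), 2),
            # a synthetic label so the data set is usable for supervised training
            "churned": rng.random() < 0.2,
        })
    return records

data = make_synthetic_records(1000)
print(len(data), sorted(data[0].keys()))
```

No real person appears in the output, which is exactly the appeal for enterprises: models can be trained and tested at scale without the privacy and trust risks that come with genuine user data.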