Study: Only Two out of 2,000 People Could Accurately Identify Deepfake Content

February 19, 2025 - As deepfake technology continues to evolve, concerns about disinformation, fraud, and identity theft are growing. A new study shows that public awareness of AI tools remains very low, making it difficult for people to effectively recognize deepfake content.

According to a new study by iProov, most people have trouble distinguishing deepfake content from real content. The study recruited 2,000 participants from the U.K. and U.S. and showed them a range of real and AI-generated images and videos. The results showed that only 0.1% of participants - just two people - were able to accurately distinguish real content from deepfakes.

The study also found that older people are more likely to be deceived by AI-generated fake content: approximately 30% of 55-64 year olds and 39% of those aged 65 and over had never heard of deepfakes before. Although younger participants were more confident in their ability to detect deepfakes, they did not actually outperform the other age groups on the test.

The study found that deepfake videos were harder to recognize than images: participants were 36% less likely to correctly identify a fake video than a fake image.

1AI notes that social media platforms are considered the main distribution channel for deepfake content. Nearly half of the participants (49%) identified Meta-owned platforms, including Facebook and Instagram, as the most common sources of deepfake content, while 47% pointed to TikTok.

Even when people suspect that a piece of content is a deepfake, most do not take action. Only 20% of respondents said they would report suspected deepfake content if they encountered it online.

As deepfake technology becomes increasingly realistic, iProov believes that human perception alone can no longer reliably identify deepfake content, and that biometric security solutions with liveness detection are needed to address the growing threat.
