By John Chow | 2023 Nov 18
There is no denying that generative AI usage has been nothing short of explosive over the past few years. According to a Statista report, search volume for “AI” tripled from 2022 to 2023, reaching 30.4 million searches in a single month. The ability of artificial intelligence to write copy, create images, and even build videos sometimes indistinguishable from reality has caused a massive shift in content creation. However, this innovation has a dark side, as the unprecedented realism opens the door to dangerous implications, from scammers creating authentic-looking phishing emails to deceptive political campaigns spreading misinformation.
This surge in AI-generated content has led to a pressing need: the ability to differentiate between what’s been created by humans and what’s been crafted by algorithms. Recently, at the world-renowned Devoxx Technology Conference, Apryse’s Research Manager Michael Demey gave an insightful presentation on the current state of AI and how we can validate authenticity.
However, if you’d prefer to skim the talk’s top points, we’ve prepared them below.
The initial reaction to combat AI-generated deception was to use AI itself. Various projects were initiated to develop AI detectors capable of identifying synthetic content. However, this approach quickly revealed a troubling vulnerability: AI detectors could themselves be exploited to train generative algorithms more effectively. This led to a dangerous cycle, where each advancement in detection technology fueled the development of more sophisticated AI that could bypass these safeguards, creating an escalating arms race.
At this point, it was time to think outside the box. The solution lies not only in developing better detectors but also in exploring alternative methods to identify and authenticate content.
One such alternative is the Coalition for Content Provenance and Authenticity (C2PA). This open standard is maintained by a diverse group of companies spanning the software and media industries and aims to prove the origin and authenticity of digital content while also tracking any changes or edits made to resources.
The C2PA establishes a framework for content provenance and authentication, ensuring that the origin and history of digital assets can be verified. This coalition's approach is designed to create a more transparent and trustworthy digital environment.
C2PA achieves its goals by adding metadata to files, detailing their origin and any edits made. The C2PA manifest, embedded in the digital resource, serves as a record of the content’s journey. This metadata is then securely bound to the resource using digital signatures.
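To make the idea concrete, here is a deliberately simplified sketch of what such a manifest might record. This is an illustration of the concept only, not the real format: actual C2PA manifests are serialized claims embedded in a JUMBF container, and the field names below are invented for this example.

```python
import hashlib
import json

def build_manifest(asset_bytes: bytes, assertions: list) -> dict:
    # Hypothetical, simplified manifest: it binds a list of claims about
    # the asset to a cryptographic hash of the asset's bytes, so any
    # later modification of the file can be detected.
    return {
        "claim_generator": "example-app/1.0",   # invented value
        "assertions": assertions,
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
    }

photo = b"\xff\xd8\xff\xe0 example JPEG bytes"
manifest = build_manifest(photo, [{"action": "c2pa.created"}])
print(json.dumps(manifest, indent=2))
```

In the real standard, the manifest itself is then digitally signed, which is where the cryptography described below comes in.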
Digital signatures play a pivotal role in C2PA's methodology. These cryptographic techniques serve three crucial purposes: ensuring integrity, authenticity, and non-repudiation.
Hashing, a fundamental concept in cryptography, is employed to verify the integrity of a document. This ensures that the content remains unchanged and untampered, protecting against corrupted downloads or malicious alterations by third parties.
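The integrity check is easy to demonstrate with Python’s standard `hashlib`: re-hashing a received copy and comparing digests reveals even a single-byte change.

```python
import hashlib

original = b"Breaking news: photo from the front lines."
published_digest = hashlib.sha256(original).hexdigest()

# A verifier later re-hashes the copy they received and compares digests.
received = b"Breaking news: photo from the front lines."
assert hashlib.sha256(received).hexdigest() == published_digest

# Changing even one byte ('.' -> '!') yields a completely different digest.
tampered = b"Breaking news: photo from the front lines!"
assert hashlib.sha256(tampered).hexdigest() != published_digest
```

A matching digest proves the bytes are unchanged; it does not, by itself, prove who produced them, which is what the signature machinery below adds.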
Public Key Infrastructure (PKI) comes into play when proving the identity of the content creator. The concept of public-private key pairs establishes a unique digital signature for each individual, allowing for the verification of the author's authenticity.
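As a minimal sketch of the sign-with-private, verify-with-public idea, the toy code below uses textbook RSA numbers (n = 61 × 53 = 3233, e = 17, d = 2753). These values are far too small for real use and this is not the actual C2PA signing scheme; it only illustrates the asymmetry.

```python
import hashlib

# Toy RSA key pair (textbook numbers, illustrative only).
n, e, d = 3233, 17, 2753

def _digest(message: bytes) -> int:
    # Reduce the SHA-256 hash into the toy key's range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Only the holder of the PRIVATE exponent d can produce this value.
    return pow(_digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone with the PUBLIC exponent e can check it.
    return pow(signature, e, n) == _digest(message)

msg = b"photo.jpg manifest"
sig = sign(msg)
print(verify(msg, sig))  # → True
```

Verifying the same signature against a different message fails (barring a hash collision in this toy-sized key space), which is what lets a verifier tie the content to the signer’s key pair.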
Certificate authorities, trusted entities that issue digital certificates, contribute to the non-repudiation aspect of digital signatures. This ensures that the content's author cannot deny being the creator, adding an extra layer of accountability.
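The CA’s role can be sketched the same way: the CA signs the creator’s public key, producing a certificate that anyone can check against the CA’s public key. Again, the numbers are toy textbook RSA values and the certificate format is invented for illustration; real systems use X.509 certificates.

```python
import hashlib

def _digest(data: bytes, modulus: int) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % modulus

# Toy keys (textbook-sized, illustrative only).
CA_N, CA_E, CA_D = 3233, 17, 2753        # certificate authority's key pair
AUTHOR_N, AUTHOR_E = 143, 7              # content author's public key

# The CA vouches for the author by signing their identity + public key.
cert_body = f"subject=reporter@example.com;n={AUTHOR_N};e={AUTHOR_E}".encode()
ca_signature = pow(_digest(cert_body, CA_N), CA_D, CA_N)

def cert_is_trusted(body: bytes, sig: int) -> bool:
    # Anyone holding the CA's public key can validate the certificate,
    # so the author cannot later deny owning the certified key.
    return pow(sig, CA_E, CA_N) == _digest(body, CA_N)

print(cert_is_trusted(cert_body, ca_signature))  # → True
```

Because only the CA could have produced a signature that validates under its public key, a valid certificate binds the author to their key, which is the basis of non-repudiation.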
A practical example of C2PA in action is Canon's incorporation of the technology into a physical device. This device embeds C2PA manifest files into pictures taken with Canon cameras, providing an additional layer of authenticity and traceability.
Imagine a war reporter on the front lines, capturing a critical moment through the lens of a Canon camera. With C2PA, they can not only prove that the image originated from their device but also showcase the journey of the file, including any edits made along the way, such as compression adjustments. This capability enhances the credibility of the captured content, offering a powerful tool in situations where the authenticity of digital assets is paramount.
As generative AI continues to advance, so do the methods by which we can distinguish it from human-made content. C2PA, with its innovative approach to content provenance and authentication, stands as a beacon of hope in this digital landscape.
By combining metadata and digital signatures, this open standard provides a robust framework for differentiating between AI-generated and human-made content, ushering in an added layer of transparency and trust for digital content. In the face of the concerns posed by AI, the collaboration of industry leaders in initiatives like the C2PA proves that, with the right tools, we can maintain control over such powerful algorithms and ensure the authenticity of our digital experiences.