Deepfakes have become a global phenomenon, capturing widespread attention. As international companies respond to AI-generated deepfakes, a consensus is forming on how to approach the issue. Earlier this year, Google joined the steering committee of the C2PA (Coalition for Content Provenance and Authenticity) alongside other influential organizations such as OpenAI, Adobe, Microsoft, AWS, and the RIAA. Given the concerns surrounding deepfakes and mainstream AI misinformation, IT professionals are encouraged to embrace the coalition's initiatives, particularly content credentials.
Content credentials, heralded as the emerging standard, are set to shape how visual and video content is managed across industries, so IT teams should pay close attention to this development. Content credentials attach digital metadata to a piece of content so that creators and content holders receive proper credit and the wider ecosystem gains transparency. This metadata, which includes the artist's name and details, is embedded directly into the content upon export or download; because it is cryptographically signed and bound to the content, any tampering with the credentials can be detected.
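To make that tamper-evidence idea concrete, here is a minimal conceptual sketch. It is not the C2PA format: the real standard uses certificate-based signatures and a standardized metadata container, whereas this example uses an HMAC and hypothetical field names purely to show how binding a signed manifest to a hash of the content makes alterations detectable.

```python
# Conceptual sketch only: real content credentials use certificate-based
# signatures, not a shared HMAC key. The point illustrated is that provenance
# metadata is bound to a hash of the content, so edits become detectable.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-certificate"  # hypothetical key for illustration


def attach_credentials(content: bytes, creator: str, tool: str) -> dict:
    """Build a signed provenance record bound to a hash of the content."""
    manifest = {
        "creator": creator,
        "tool": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}


def verify_credentials(content: bytes, record: dict) -> bool:
    """Return True only if the manifest is unmodified and matches this content."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # the manifest itself was altered
    return record["manifest"]["content_sha256"] == hashlib.sha256(content).hexdigest()


if __name__ == "__main__":
    image_bytes = b"\x89PNG...example pixels..."
    record = attach_credentials(image_bytes, creator="Jane Artist", tool="ExampleEditor 1.0")
    print(verify_credentials(image_bytes, record))            # True: untouched content
    print(verify_credentials(image_bytes + b"edit", record))  # False: content changed
```

The detection logic in the real standard is the same in spirit: if either the content or the attached metadata changes, verification fails, which is what gives the credentials their value as a trust signal.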
Because standardized content labels are created under a single set of rules and permissions, and are backed by influential companies, there is a significant opportunity for consistent, universally accepted labeling. Content credentials offer numerous benefits, including strengthening credibility and trust with audiences by providing essential information about the author and the creative process, which in turn helps combat misinformation and disinformation.
Furthermore, attaching contact details to an artist's work strengthens their identity, enabling audiences to recognize and connect with them. It also lays the groundwork for measures that address fake and deceptive content online. Australia has experienced a significant rise in deepfake fraud, mirroring trends seen in other parts of the world.
Deepfakes pose a genuine threat to the stability and security of Australians. A long-term campaign against them should focus on raising public awareness and educating people about how deepfakes work and what options exist to avoid falling victim to them.
Making this vision a reality requires consensus within the industry, led by the key stakeholders who supply the underlying technology and carry the most weight in AI. This is where content credentials come into play. While they offer the best chance of establishing standards to combat the deepfake problem, challenges around detection, regulation, and the punishment of misuse persist. Prevention therefore cannot rest solely on the industry or the backing of major media players; implementation must extend across a vast portion of the internet, reaching not just the high-traffic sites that dominate search results but the long tail of smaller websites as well.
IT and AI professionals involved in content creation should work to understand and implement content credentials, just as web developers have embraced security, SEO, and other standards to keep their content from being penalized or delisted. Practical steps include: fully integrating content credentials into workflows to maintain authenticity and traceability; advocating for transparency both internally and externally; supporting regulation through collaboration with industry bodies and government; working with other professionals and organizations to develop consistent approaches and tools for identifying deepfake risks; preparing response strategies for when deepfake content is detected; and drawing on community resources, such as those provided by the eSafety Commissioner, to stay current on developments in cybersecurity. A simple intake check along these lines is sketched below.
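As a rough illustration of building credential checks into a workflow, the following minimal sketch assumes a hypothetical "incoming_media" intake directory. It only looks for C2PA/JUMBF byte markers as a presence heuristic; it does not validate signatures, which requires the official C2PA tooling and SDKs rather than a check like this.

```python
# Minimal sketch: screen inbound media for signs of embedded content
# credentials before publication. This is a byte-marker heuristic only;
# it does not verify signatures or parse the actual manifest.
from pathlib import Path

# Labels associated with C2PA manifest stores (embedded as JUMBF boxes).
C2PA_MARKERS = (b"c2pa", b"jumb")


def appears_to_carry_credentials(path: Path) -> bool:
    """Heuristic check: does the file contain C2PA-related byte markers?"""
    data = path.read_bytes()
    return any(marker in data for marker in C2PA_MARKERS)


def triage(folder: Path) -> None:
    """Flag media that shows no sign of embedded credentials for manual review."""
    if not folder.is_dir():
        print(f"No such directory: {folder}")
        return
    for file in sorted(folder.iterdir()):
        if file.suffix.lower() not in {".jpg", ".jpeg", ".png", ".mp4"}:
            continue
        status = (
            "possible credentials found"
            if appears_to_carry_credentials(file)
            else "no credentials detected: review before publishing"
        )
        print(f"{file.name}: {status}")


if __name__ == "__main__":
    triage(Path("incoming_media"))  # hypothetical intake directory
```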
The rise of deepfakes poses a significant challenge, and IT professionals must find effective responses. Content credentials provide a solid foundation on which the rest of the industry can build, and a promising starting point for the world at large.