New Bill Suggests Watermarking AI Content to Fight Deepfake Scams

Introduction

On July 11, 2024, a bipartisan group of U.S. senators introduced a landmark piece of legislation aimed at combating deepfake scams, addressing copyright infringement, and regulating the use of data in AI training. The bill, known as the Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act), represents a significant step toward mitigating the risks associated with the rapid advancement of AI technologies. This legislation proposes a standardized method for watermarking AI-generated content to ensure transparency and authenticity, addressing the growing concern over deepfakes and their impact on various sectors, including the crypto industry.

Overview of the COPIED Act

Purpose and Objectives

The COPIED Act aims to tackle several critical issues related to AI-generated content. The bill would require AI service providers to embed provenance metadata in all AI-generated content. This metadata, or watermark, would disclose the content's origin, and the bill would prohibit removing or tampering with it. The approach seeks to enhance transparency and curb the misuse of AI technologies, particularly in the context of deepfake scams.
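
The bill does not prescribe a particular technical scheme, but the general idea can be sketched in a few lines. The hypothetical Python example below (the `attach_provenance` helper, the record fields, and the HMAC-based signing are illustrative assumptions, loosely inspired by content-credential approaches such as C2PA, not anything the bill specifies) binds a provenance record to a piece of AI-generated text so that editing either the content or the record becomes detectable.

```python
import hashlib
import hmac
import json


def attach_provenance(content: bytes, generator: str, key: bytes) -> dict:
    """Wrap AI-generated text in a signed provenance record (illustrative only)."""
    record = {
        "origin": generator,                                     # the AI service that produced the content
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),   # binds the record to this exact content
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # A production scheme would likely use an asymmetric signature so anyone can
    # verify without holding a secret key; HMAC keeps this sketch self-contained.
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"content": content.decode("utf-8"), "provenance": record}


if __name__ == "__main__":
    labeled = attach_provenance(
        b"An AI-written market summary ...", "example-model-v1", b"demo-signing-key"
    )
    print(json.dumps(labeled, indent=2))
```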

Legislative Background

The bill was introduced by Democratic Senator Maria Cantwell, who led the bipartisan group behind the measure and has been vocal about the need for regulatory guardrails on the unchecked growth of AI technologies. The COPIED Act arrives amid growing concern about the potential misuse of AI, particularly in the creation of deepfakes and other forms of deceptive media.

In a press release, Senator Cantwell emphasized the importance of the bill in providing "much-needed transparency" in an era where AI technologies are advancing rapidly and often without adequate oversight. The bill is designed to put creators, including journalists, artists, and musicians, back in control of their content by ensuring that AI-generated materials are clearly identified and traceable.

Impact on the Crypto Industry

Deepfake Scams in Crypto

One of the primary beneficiaries of the COPIED Act would be the cryptocurrency industry, which has been hit hard by deepfake scams. These scams typically use AI-generated content to impersonate influential figures, including celebrities and industry leaders, in order to promote fraudulent investment schemes. By falsely implying official endorsement, they can mislead potential investors and cause significant financial losses.

Recent incidents, such as fake live streams of a SpaceX launch that used AI-generated voices and deepfake video to impersonate Elon Musk, highlight the growing prevalence of these scams. According to industry experts, deepfake scams could account for over 70% of all crypto-related crime within the next two years. The COPIED Act aims to address this by providing a clear way to distinguish genuine content from AI-generated content.

Broader Implications for Crypto Crime

While deepfakes are a significant concern, they represent only one aspect of the broader issue of AI-related crypto crimes. A recent report by Elliptic has shed light on the rise of AI-driven cyber threats, including state-sponsored attacks and sophisticated illicit activities. AI technologies, while offering numerous benefits, also pose risks when exploited by bad actors.

For instance, dark web forums are exploring the use of large language models (LLMs) for various crypto-related crimes, including attempts to recover wallet seed phrases and the automation of phishing and malware campaigns. The availability of jailbroken, "unethical" LLM variants such as WormGPT underscores the need for vigilance and proactive measures against the misuse of AI technologies.

Current Legislative and Industry Reactions

Senate Hearing on AI Privacy

The introduction of the COPIED Act coincides with broader discussions about AI and privacy. On the same day the bill was introduced, the Senate Commerce Committee held a hearing on the need to protect Americans' privacy in the context of accelerating AI technologies. The hearing highlighted the urgency of addressing privacy concerns and ensuring that AI developments are managed responsibly.

Industry Responses

The response from various industry stakeholders to the COPIED Act has been generally positive. Many view the bill as a necessary step toward increasing transparency and protecting intellectual property. However, some experts have raised concerns about the practical implementation of the proposed watermarking system. They argue that while the bill addresses important issues, it may face challenges in terms of technical feasibility and enforcement.

International Perspectives

The COPIED Act is part of a broader global conversation about regulating AI technologies. Several countries have introduced or are considering similar measures to address the risks posed by deepfakes and other manipulated media. The European Union's AI Act, for example, includes transparency obligations intended to make AI-generated and manipulated content identifiable. How the international community approaches these issues will likely influence the effectiveness and adoption of the COPIED Act.

Challenges and Considerations

Technical Challenges

Implementing a standardized watermarking system for AI-generated content presents several technical challenges. Ensuring that watermarks are both secure and difficult to remove is crucial for the effectiveness of the COPIED Act. Additionally, the integration of watermarking technology into existing AI systems and platforms will require significant coordination and collaboration among stakeholders.
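
Continuing the hypothetical record format from the earlier sketch, the check below shows what "secure and difficult to remove" means in practice: verification fails if the content is edited, the record is changed, or the signature is stripped. The helper name and scheme are assumptions for illustration, not part of the bill.

```python
import hashlib
import hmac
import json


def verify_provenance(labeled: dict, key: bytes) -> bool:
    """Return True only if the provenance record is present, intact, and matches the content."""
    record = dict(labeled.get("provenance", {}))
    signature = record.pop("signature", None)
    if signature is None:
        return False  # the watermark was stripped entirely
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # the record was altered or signed with a different key
    content_hash = hashlib.sha256(labeled["content"].encode("utf-8")).hexdigest()
    return content_hash == record.get("content_sha256")
```

Even with this kind of binding, a determined party can simply discard the metadata wholesale, so detecting its absence, making watermarks imperceptible and robust across formats, and enforcing the removal prohibition remain the harder problems any implementing standard would need to address.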

Legal and Ethical Considerations

The bill also raises important legal and ethical questions. For instance, how will the COPIED Act address potential conflicts with intellectual property laws and privacy concerns? Balancing the need for transparency with the rights of individuals and organizations involved in AI development will be a key consideration as the bill progresses through the legislative process.

Future Outlook

As the COPIED Act moves forward, its success will depend on several factors, including the support it receives from both lawmakers and industry stakeholders. The bill's potential impact on deepfake scams and AI-related crimes could set a precedent for future legislation in this area. Continued monitoring and evaluation will be essential to ensure that the proposed measures effectively address the evolving challenges posed by AI technologies.

Conclusion

The introduction of the COPIED Act marks a significant step toward addressing the challenges associated with AI-generated content and deepfake scams. By proposing a standardized watermarking system, the bill aims to enhance transparency and protect creators' intellectual property. As the legislative process unfolds, it will be important to address the technical, legal, and ethical considerations associated with the proposed measures. The COPIED Act represents a proactive approach to managing the risks posed by rapidly advancing AI technologies and could serve as a model for similar initiatives worldwide.

For more details on the COPIED Act and related discussions, you can explore the following sources:

1. [Cryptonews](https://www.cryptonews.com/news/new-bill-suggests-watermarking-ai-content-to-fight-deepfake-scams)

2. [Senate Committee on Commerce, Science, and Transportation](https://twitter.com/commercedems/status/1677423715923450882)

3. [Elliptic Report on AI Crypto Crimes](https://www.elliptic.co/ai-crypto-crimes-report)