DeepMark
06/03/2025

DeepMark’s Latest Research Accepted at ICLR 2025: A New Standard for Evaluating Audio Watermarking Robustness

by DeepMark · 2 min read

We are proud to announce that DeepMark’s latest research paper has been accepted for publication at the International Conference on Learning Representations (ICLR) 2025, one of the three most prestigious conferences in the fields of machine learning and artificial intelligence, alongside NeurIPS and ICML.

This acceptance marks a significant milestone for our team and affirms the impact of our ongoing research on the secure deployment of generative AI technologies.

A Framework for Robustness in Audio Watermarking

In this work, we introduce a comprehensive and extensible evaluation framework for assessing the robustness of audio watermarking algorithms. As the use of generative AI accelerates across industries, ensuring the security, traceability, and integrity of synthetic audio content is a critical challenge. Our proposed framework addresses this by enabling standardized, rigorous testing under a variety of real-world attack scenarios.

Key features of our framework include:

  • Support for Common Audio Editing Attacks: Simulating everyday manipulations such as cropping, filtering, or compression.
  • Desynchronization-Based Attacks: Evaluating resilience against time-scaling, pitch-shifting, and other synchronization-breaking techniques.
  • Deep Learning-Based Attacks: Including adversarial and reconstruction-based methods powered by generative models.
  • Process Disruption Attacks: A novel class of attacks introduced in our research, targeting generative AI systems without relying on prior knowledge of underlying architectures or signal processing techniques.

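To make the first two attack categories concrete, here is a minimal sketch of what simulated editing and desynchronization attacks can look like. This is an illustration only, assuming NumPy; the function names and parameters are hypothetical and do not reflect DeepMark's actual framework or API.

```python
import numpy as np

def crop(signal, start_frac=0.1, end_frac=0.9):
    """Editing attack: keep only a central portion of the signal."""
    n = len(signal)
    return signal[int(n * start_frac):int(n * end_frac)]

def lowpass(signal, kernel_size=15):
    """Editing attack: crude low-pass filtering via a moving average."""
    kernel = np.ones(kernel_size) / kernel_size
    return np.convolve(signal, kernel, mode="same")

def quantize(signal, bits=8):
    """Editing attack: approximate lossy compression by coarse
    amplitude quantization."""
    levels = 2 ** bits
    return np.round(signal * levels) / levels

def time_scale(signal, factor=1.1):
    """Desynchronization attack: naive time-scaling by linear
    resampling, which shifts watermark positions in time."""
    n = len(signal)
    new_n = int(n / factor)
    idx = np.linspace(0, n - 1, new_n)
    return np.interp(idx, np.arange(n), signal)

# Toy 1-second, 440 Hz test tone at 16 kHz.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440 * t)

# Chain several attacks, as a robustness benchmark might.
attacked = quantize(lowpass(crop(audio)))
scaled = time_scale(attacked, factor=1.1)
```

A robustness evaluation would then run the watermark detector on `attacked` and `scaled` and measure how often the watermark survives each attack chain.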
Designed with modularity and scalability in mind, this framework serves as a foundation for future research and industry standards, ensuring that watermarking systems can be meaningfully compared and improved upon over time.

Looking Ahead

We believe this work contributes a critical tool to the broader research community and industry stakeholders working on content authenticity, security, and traceability in the era of generative media.

We are preparing to release the full paper and open-source code in the coming weeks, enabling researchers and practitioners to utilize and build upon our findings.

At DeepMark, we are committed to advancing secure, transparent, and accountable AI systems. This recognition from ICLR reinforces our mission and motivates us to continue driving forward the state of the art in generative AI watermarking.


Your Content Deserves the Best Protection

Discover how our innovative AI watermarking tool can transform your digital protection strategy. Request a demo and let us guide you through features tailored to your needs.