Being able to identify AI-generated content is critical to promoting trust in information. While not a silver bullet for problems such as misinformation or misattribution, SynthID is a suite of promising technical solutions to this pressing AI safety issue. The toolkit, currently in beta, continues to evolve and is being integrated into a growing range of products, empowering people and organizations to work responsibly with AI-generated content. SynthID uses a variety of deep learning models and algorithms to watermark and identify AI-generated content.
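To give a feel for how statistical text watermarking of this general kind can work, here is a greatly simplified sketch: a secret key and the previous token pseudo-randomly mark half the vocabulary as "green", generation favors green tokens, and detection scores the fraction of green tokens. This is a hypothetical illustration of the technique family, not SynthID's actual algorithm; the function names, key, and half-vocabulary split are all assumptions.

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str], key: str = "demo-key") -> set[str]:
    # Seed a PRNG from the secret key plus the previous token, then mark
    # half the vocabulary as "green" (tokens favoured during sampling).
    seed = int(hashlib.sha256((key + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, k=len(vocab) // 2))

def watermark_score(tokens: list[str], vocab: list[str], key: str = "demo-key") -> float:
    # Fraction of tokens that fall in their context's green set.
    # Unwatermarked text scores near 0.5 by chance; text generated with a
    # green-token bias scores much higher, signalling the watermark.
    hits = sum(
        tokens[i] in green_set(tokens[i - 1], vocab, key)
        for i in range(1, len(tokens))
    )
    return hits / max(len(tokens) - 1, 1)
```

In a real system the bias would nudge the model's token probabilities rather than force a choice, so the watermark stays statistically detectable while leaving output quality largely intact.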