Google has released VaultGemma, a large language model designed to keep sensitive data private during training. The model uses differential privacy techniques to prevent individual data points from being exposed, which makes it safer for handling confidential information in sectors like healthcare, finance, and government. The release is part of Google’s Gemma family of models and is aimed at researchers and developers who want to experiment with privacy-preserving AI systems. By open-sourcing the model, Google …
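Differentially private training is commonly implemented with DP-SGD: each example's gradient is clipped to a fixed norm and Gaussian noise is added before the model update, which bounds how much any single record can influence the trained model. The sketch below illustrates only that core step; it is not VaultGemma's actual training code, and the `clip_norm` and `noise_multiplier` values are placeholder assumptions.

```python
# Minimal sketch of a DP-SGD update step (per-example gradient clipping plus
# Gaussian noise). Illustration only, not Google's implementation; the
# parameter values are hypothetical.
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Aggregate per-example gradients with clipping and calibrated noise."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale each gradient so its L2 norm is at most clip_norm, bounding
        # any single record's influence on the update.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Add noise proportional to the clipping bound; a larger noise_multiplier
    # gives stronger privacy at the cost of utility.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

# Example: a batch of 4 per-example gradients for a 3-parameter model.
grads = [np.random.randn(3) for _ in range(4)]
print(dp_sgd_step(grads))
```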