Google Bounty Program

AI Security Challenges

Google has revealed an expansion of its Vulnerability Rewards Program (VRP) to offer incentives to researchers who identify vulnerabilities specific to generative AI systems. This move aims to enhance the safety and security of AI technologies.

Laurie Richardson and Royal Hansen of Google highlighted, “Generative AI presents unique challenges compared to conventional digital security. Issues such as unfair biases, data misinterpretations (often referred to as ‘hallucinations’), and model tampering come to the forefront.”

The expanded program will cover areas such as prompt injections, unauthorized data retrieval from training datasets, model tampering, adversarial attacks causing misclassification, and model theft.

AI Cybersecurity Measures

Earlier in July, Google launched an AI Red Team under its Secure AI Framework (SAIF) to proactively address potential threats to AI systems.

In its dedication to AI security, Google is also enhancing the AI supply chain’s security. It is leveraging established open-source security measures like the Supply Chain Levels for Software Artifacts (SLSA) and Sigstore.

Google explained the significance of these tools, stating, “Digital signatures from sources like Sigstore enable users to ensure software integrity, while metadata provided by SLSA gives insights into software composition and construction, empowering users to verify license agreements, pinpoint vulnerabilities, and recognize sophisticated threats.”

Proactive Steps in AI Safety

In related news, OpenAI has launched an internal Preparedness team dedicated to assessing and defending against potential large-scale risks to generative AI. This includes threats across the domains of cybersecurity, as well as chemical, biological, radiological, and nuclear (CBRN) dangers.

Furthermore, OpenAI, in collaboration with Google, Anthropic, and Microsoft, has initiated a $10 million AI Safety Fund. This fund is designed to support and foster research dedicated to AI safety.