Google Launches New Free Tool To Secure Your AI—Introducing SAIF



    Building on last year’s Secure AI Framework announcement, Google has today launched a new security tool to assess the risk that AI systems pose to your organization. The SAIF Risk Assessment tool is designed not only to help organizations evaluate their security posture from an AI systems perspective, but also to apply best practices and put the principles behind SAIF into action. Here’s what you need to know about the free Google SAIF tool.

    Introducing The Google SAIF AI Security Risk Assessment Tool

    The Secure AI Framework, a conceptual framework for securing emerging AI technology in a collaborative fashion, was first announced in June 2023 by Google Cloud’s chief information security officer, Phil Venables, alongside Royal Hansen, vice president of engineering for privacy, safety and security. “In the pursuit of progress within these new frontiers of innovation,” Google said, “there needs to be clear industry security standards for building and deploying this technology in a responsible manner.” Hence, SAIF was born. In the 16 months since, SAIF has been put into very real practice through the formation of the Coalition for Secure AI, an industry forum that has worked to comprehensively advise on security measures for deploying AI systems, using the SAIF principles as its bedrock. Today, Google SAIF has taken another giant leap forward with the launch of a free-to-use risk assessment tool that generates a custom checklist organizations can use when securing their AI systems.


    Google’s SAIF Risk Assessment is available immediately. It takes the form of a questionnaire-based tool that, if completed honestly and fully, outputs thorough and practical guidance for security practitioners securing the AI systems deployed at their organization.

    How The Google SAIF AI Security Tool Works

    Starting with questions aimed at gathering as much information as possible regarding an organization’s existing AI security posture, the Google SAIF questionnaire covers a number of distinct themes: training, tuning and evaluation; access controls to models and data sets; preventing attacks and adversarial inputs; secure designs and coding frameworks for generative AI; and generative AI-powered agents.

    Based on those answers, the Google SAIF tool then highlights the specific AI security risks it has identified within the AI system and, vitally, provides recommended mitigations for each. This is not just a box-ticking exercise, however: Google SAIF also provides easy-to-understand reasoning behind the risks it has identified, whether they relate to data poisoning, prompt injection or model source tampering. Nor does the tool shy away from the technicalities, with detailed technical risks and mitigating controls also provided. All of this is done, thanks to AI itself of course, almost instantly; there’s no need to wait for a time-consuming and expensive consultancy report to be compiled by an external agency. There’s even an interactive SAIF Risk Map to help navigate and understand the issues that have been uncovered and the ways in which different security risks are introduced, exploited and mitigated throughout the AI development process.


    “The SAIF Risk Assessment Report capability specifically aligns with CoSAI’s AI Risk Governance workstream,” a Google spokesperson said, “helping to create a more secure AI ecosystem across the industry.”

    The Google SAIF tool can be used, free of charge, by visiting SAIF.Google and following the links there.
