AI Security
Driving Responsible Innovation: Reflections on a Year of AI Governance
Over the past two years, KPMG, working with the Global Resilience Federation (GRF), has led interviews and working sessions with corporate leaders who believe that balancing governance strategy with the pace of innovation will distinguish AI-using organizations. This white paper examines the past year's risks and priorities and looks at governance as a driver of AI innovation.
The growth of the private AI market, an increase in U.S. and global lawmaking and regulation, and advocacy from organizations and think tanks alike have propelled both scrutiny and excitement. At the same time, accelerated work in the field of AI, together with this evolving regulatory landscape, has created a confluence of legal and security risks for adopters. Learn about the risks of inaction, governance as a foundation for innovation, and advanced considerations for the road ahead.
Practitioners’ Guide to Managing AI Security
The race to integrate AI into internal operations and to bring AI-based products and services to market is moving faster than almost anyone could have imagined. Some security leaders have expressed concern that, amid the excitement over AI's potential, critical security and assurance considerations are being overlooked.
Recognizing the disconnect between AI innovation and AI security, Global Resilience Federation convened a working group and asked KPMG to facilitate in-depth discussions among AI and security practitioners from more than 20 leading companies, think tanks, academic institutions, and industry organizations.
The output of this working group is the Practitioners’ Guide to Managing AI Security. The guide aims to provide insights and considerations that strengthen collaboration between data scientists and AI security teams across five tactical areas identified by the working group: Securing AI, Risk & Compliance, Policy & Governance, AI Bill of Materials, and Trust & Ethics.
The Leadership Guide to Securing AI
Artificial intelligence (AI) is critical to the future success and health of companies across industries.
To empower emerging and current AI security leaders, Global Resilience Federation (GRF) convened an experienced working group and asked KPMG to facilitate in-depth meetings and interviews with AI and security practitioners from more than 20 leading companies, think tanks, academic institutions, and industry organizations.
We believe that the output, The Leadership Guide to Securing AI, will be of great value to the GRF network and to other organizations seeking to explore this groundbreaking technology.