The Cybersecurity and Infrastructure Security Agency (CISA) has introduced new guidelines aligned with the Department of Homeland Security's (DHS) focus on AI safety. The initiative coincided with the establishment of a dedicated AI safety and security board.
The recently released guidelines are intended for the owners and operators of the 16 sectors designated as critical infrastructure, including agriculture, healthcare, and information technology. They set out a baseline of practices for ensuring the security and resilience of AI systems through the responsible use of AI technologies, and offer guidance on governance, risk assessment, and ongoing management of AI-related processes.
Operators can use the AI risk management framework developed by the National Institute of Standards and Technology (NIST) to continuously assess the impact and risks of their AI deployments. This includes identifying AI dependencies, understanding how AI systems interact with their operating environment, and addressing any vulnerabilities that arise. The guidelines also stress the importance of maintaining inventories of AI usage and establishing procedures for reporting AI-related safety risks.
CISA notes that Chief Information Security Officers (CISOs) need to understand the critical path of the AI supply chain and test AI systems to identify security gaps. By focusing on these areas, infrastructure owners can address a wide range of AI-related risks, including design flaws, cyber-attacks, and physical security breaches.
The document highlights the dual role of AI in critical infrastructure: it can transform operations by improving environmental sensing, automating customer service, strengthening physical security, and sharpening forecasting accuracy. At the same time, these innovations expose infrastructure systems to new forms of attack and failure.
These guidelines are just one part of DHS's broader effort to integrate AI into its national security frameworks. According to Homeland Security Secretary Alejandro Mayorkas, the pervasiveness of AI presents both opportunities and threats to critical infrastructure, and the department is working to identify and mitigate those risks through strategic initiatives and collaboration with outside experts.
Earlier this year, DHS unveiled its AI strategy, which included an AI roadmap and the launch of the AI Corps. The AI Corps aims to bring on 50 specialists in 2024 to enhance the department's AI capabilities. The new board tied to this initiative includes notable technology industry figures such as Sam Altman of OpenAI and Sundar Pichai of Alphabet.
The inclusion of AI guidelines in CISA's mandate not only fulfills the requirements of the Biden administration's recent executive order on artificial intelligence but also sets a template for similar risk analyses in other sectors. It underscores the U.S. government's leadership in critical infrastructure security and its intersection with artificial intelligence.
Source: FedScoop