U.S. and Britain Lead Global Effort: AI Secure Design Blueprint

The United States, Britain, and more than a dozen other nations have unveiled what a senior U.S. official called the first detailed international agreement on keeping artificial intelligence (AI) safe from malicious actors. The agreement presses companies to build AI systems that are “secure by design.” While the 20-page document is non-binding, it offers general recommendations, including monitoring AI systems for misuse, protecting data from tampering, and vetting software suppliers.

The director of the U.S. Cybersecurity and Infrastructure Security Agency, Jen Easterly, emphasized the significance of the agreement, noting that it is the first time multiple countries have affirmed that the primary focus in the design phase of AI systems should be security. Easterly highlighted that the guidelines prioritize safety over rapid market deployment and competition-driven cost reduction.

The 18 signatory countries include the United States, Britain, Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore. The framework deals with keeping AI technology from being hijacked by hackers, centering on security measures such as rigorous testing before AI models are released.

The agreement does not, however, tackle thornier questions about the appropriate uses of AI or how the data feeding these models is collected. The rise of AI has prompted a range of concerns, including its potential to disrupt democratic processes, facilitate fraud, and cause significant job losses.

On AI regulation, Europe is ahead of the United States, with lawmakers there already drafting AI rules. France, Germany, and Italy recently reached an agreement on how AI should be regulated, backing “mandatory self-regulation through codes of conduct” for foundation models. The U.S. Congress, by contrast, has struggled to pass effective AI legislation, although the Biden administration issued an executive order in October aimed at reducing AI risks to consumers, workers, and minority groups while bolstering national security.