The Artificial Intelligence Safety Institute Consortium (AISIC), a public-private partnership for developing safe and reliable artificial intelligence (AI) in the U.S., has been launched.
According to a U.S. Department of Commerce announcement on the 12th (local time), AISIC will focus primarily on AI capability evaluation, risk management, and developing guidelines for watermarking synthetic media. More than 200 organizations will participate, spanning government, academia, AI developers, big tech companies such as OpenAI, Google, Microsoft, and Amazon, hardware companies like Qualcomm, and financial firms.
The Department of Commerce said, “The consortium is the largest collection of test and evaluation teams established to date, and it will focus on building the foundations of a new measurement science for AI safety.”
“President Joe Biden has directed us to mobilize all means to set AI safety standards and protect the innovation ecosystem,” Secretary of Commerce Gina Raimondo said. “AISIC was established to help achieve this goal.”
The U.S. launched AISIC against a backdrop of broad agreement that rapid AI development calls for safety reviews. With harmful deepfakes spreading in recent months, U.S. big tech companies such as OpenAI, Google, and Meta have begun labeling images generated by their own AI models.
In addition, the European Union (EU), South Korea, and other governments are actively pursuing global cooperation on AI safety reviews.
Last November, the United Kingdom hosted an “AI Safety Summit” to discuss international cooperation on AI safety reviews. South Korea plans to co-host a follow-up mini AI Safety Summit with the UK this May.