The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist

Efforts to regulate artificial intelligence (AI) must balance protecting the health, safety, and fundamental rights of individuals with reaping the benefits of innovation. These regulations will protect people from physical harms (like crashes involving AI-driven cars), less visible harms (like systematized bias), harms from misuse (like deepfakes), and others. Regulators around the world are looking to the European Union’s AI Act (AIA), the first and largest of these efforts, as an example of how this balance can be achieved. It is the bar against which all future regulation will be measured. Notably, the act itself is intended only to outline the high-level picture of this balance. Starting in early 2023, accompanying technical standards will be developed in parallel to the act, and they will ultimately be responsible for establishing many of the trade-offs; early signs suggest that developing effective standards will be incredibly difficult.

Caught between an unwillingness to compromise on the pro

[…]
