We’ve been obsessed with ethical dilemmas and AI ever since the word “robot” was coined in the 1920 Czech play R.U.R., which dealt with a robot revolution leading to the extinction of the human race. Now, nearly 100 years on, the European Commission is looking to formalize the role of AI and its reach with this new set of rules. Does it go far enough, and will the AI industry take note?
What are the EU’s AI Guidelines?
One of the biggest gauntlets thrown down by the new regulation is a set of seven requirements that AI developers will need to meet. Here’s the summary straight from the report, which is downloadable here:

1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Environmental and societal well-being
7. Accountability

These key requirements establish a focus on transparency and oversight that’s definitely a step in the right direction.
How Will the EU AI Guidelines Be Enforced?
They won’t – for now. The EU’s new set of regulations is intended as guidelines rather than actual rules. However, there’s a chance that as AI develops, governments will be forced to regulate the industry further. By setting up expectations, the EU’s new document serves an important role in helping to shift the public conversation in the right direction, and might one day help those government agencies when they’re building a set of regulations that will actually be enforceable.
Why has the EU Created AI Guidelines?
We’ve spent millennia figuring out basic ethics that (most) everyone can agree on. Don’t exploit others’ personal information, respect your fellow man, don’t harm or injure others, and so on. But increasingly advanced AI technology is capable of delivering unethical results. Without guidelines or anything to call attention to ethical issues, any new tech can potentially repeat past mistakes at a vast (and automated) scale. More practically, AI ethics have been in the news recently. Microsoft employees banded together in February to protest a still-ongoing $479 million contract with the US Army for an Integrated Visual Augmentation System. The protest mirrored an earlier Microsoft backlash against a multi-million-dollar ICE contract related to facial recognition AI.
Google’s AI Ethics Panel
Google, meanwhile, has been struggling with its own AI ethics battles. Namely, it established and then disbanded a new AI ethics panel within the space of a week. The issue that likely led to the panel’s dissolution was a social media backlash over panelist Kay Coles James, president of conservative think tank the Heritage Foundation. However, the setback won’t stop Google from pressing forward, with the tech giant saying it would “find different ways of getting outside opinions on these topics.” Ethics in AI? Everyone agrees they’re needed. But getting people to agree on how to define and approach them is far more difficult. Hopefully the EU’s new guidelines help guide the conversation rather than add to the cacophony.