Artificial intelligence: 7 steps for trustworthy AI
April 8, 2019

This being a new report from the European Commission, it should come as no surprise that its "seven essentials for achieving trustworthy AI" are themselves only one of three overarching steps.
The other two steps are a "large-scale pilot with partners" and "building international consensus for human-centric AI" — more on both later.
First, those seven essentials:
- Human agency and oversight — this is the Commission's way of saying AI should support fundamental rights and "not decrease, limit or misguide human autonomy."
- Robustness and safety — a "trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors."
- Privacy and data governance — EU citizens should have full control over their data, and data concerning them "will not be used to harm or discriminate against them." This falls in with existing EU data protection law, but will be one to watch as the field of AI expands its reach.
- Transparency — the Commission wants AI systems to be "traceable." It's an interesting ambition considering that AIs and other machine learning systems are built to handle loads of data that humans cannot handle or understand.
- Diversity, non-discrimination and fairness — this one's fairly self-explanatory, but by that very same token, it will need greater definition by the Commission to lend it any real value.
- Societal and environmental well-being — "AI systems should be used to enhance positive social change… sustainability and ecological responsibility."
- Accountability — the Commission wants AI systems to be… erm, accountable.
But accountable by whom, for what and to whom, when, where and how? And with what level of enforcement?
Enough talk
The European Commissioner for Digital Economy and Society, Mariya Gabriel, says the EU is taking "an important step towards ethical and secure AI. We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society."
The seven essentials have been released as part of the European Commission's AI Strategy from 2018. They were drawn up by a High-Level Expert Group, known as the HLEG.
The strategy aims to increase public engagement in the AI debate. But if the Commission truly wants to achieve that, it will have to improve its own communication tactics — or simply speak more plainly.
And get down to brass tacks.
In a press release, Ursula Pachl, HLEG member and deputy director general of the consumer group, BEUC, said it was "crucial to go beyond ethics now and establish mandatory rules to ensure AI decision-making is fair, accountable and transparent."
As for steps 2 and 3…
On Tuesday, April 9, the AI expert group (HLEG) will present its work so far at a "Digital Day" in Brussels.
The Commission also plans to launch an as-yet ill-defined "pilot phase" in summer 2019, during which, it is assumed, further discussions will be held with so-called stakeholders.
Then, after that pilot phase ends in early 2020, the HLEG will review its findings and the feedback received.
On the international stage, the Commission says it will "strengthen cooperation with like-minded partners such as Japan, Canada and Singapore" and take an active role in G7 and G20 discussions, involving companies and other organizations.
But there is little detail so far on how the EU hopes to position itself as AI, machine learning and AI-assisted technologies, such as health tech and voice-interaction media, are increasingly "monetized." The essentials say nothing on that point.