The European Commission has unveiled a list of ethical rules to regulate the development of artificial intelligence.
This list, drawn up by a committee of independent experts, follows the launch in April last year of a strategy to put Europe at the forefront of artificial intelligence.
According to the experts, “the human” must remain at the heart of AI-related technologies, which should “not diminish or limit” human autonomy, but instead preserve “fundamental rights”.
These technologies must also take into account “diversity” and promote “non-discrimination” as well as “social well-being and the environment”.
“Citizens should have full control over their own data,” say the experts, calling for “transparency” and AI surveillance mechanisms.
Artificial intelligence systems should promote equitable societies and human rights, as well as positive social change such as sustainability and ecological responsibility, and “human control” over them must be guaranteed.
The guidelines also call for the traceability of systems and promote their accessibility in a non-discriminatory way, which would mean pursuing a “universal” design that takes into account the needs of people with disabilities.
“The ethical dimension of artificial intelligence is not a luxury feature or an add-on: it has to be an integral part of its development,” said EU Commission Vice-President Andrus Ansip, unveiling the measures in Brussels today.
In addition to establishing ethical rules, the plan presented in April 2018 targets 20 billion euros of investment in AI research by 2020 from the EU institutions, member states and the private sector.
This money should support the development of AI in key sectors, such as transport and health, and strengthen research centers in an effort to keep pace with the US and China, both of which are investing heavily in the sector. In 2017 Beijing announced a public investment plan of 22 billion dollars (18 billion euros) in AI by 2020.