European Union Guidelines on Ethical Artificial Intelligence

Posted by Uygar Kilic on May 8, 2019 1:01:21 PM

The European Union (EU) has recently published a set of guidelines on how companies and states should develop the ethical application of artificial intelligence (AI).

The EU is attempting to promote the use of trustworthy artificial intelligence and warns that algorithms must not discriminate on the basis of age, race or gender. By promoting higher ethical standards, the EU hopes to create a competitive advantage for European technology companies. The EU also wants AI to be people-focused, advising that businesses should inform people every time they interact with an algorithm.

A number of companies, including Aqovia, have already been promoting a people-centric approach to the use of AI, with openness and transparency at its core. The enormous benefits of AI will only be realised if there is trust that the technology is being used for the greater good. In this regard, EU technology companies are leading the way in the ethical use of AI.

The EU Commission appointed fifty-two experts to a new High-Level Expert Group on artificial intelligence. These representatives, drawn from academia, civil society and industry, were tasked with establishing the ethical guidelines for AI. A pilot phase will run until early 2020, giving businesses including Aqovia the chance to provide further feedback.

The EU's guidelines on AI seek to address the potential problems that will affect society, as AI is being rapidly integrated into various sectors.

AI can offer large potential benefits to a wide range of sectors, such as automotive, farming, renewable energy, energy consumption, finance and fraud detection. AI offers a greater level of efficiency for many applications that are currently carried out slowly and inefficiently by humans.

Despite these benefits, AI has become associated with public anxiety, driven by negative press coverage of its possible effects on the future of work and by concerns about legal and ethical issues.

The EU's AI ethics guidelines rest on three pillars: human rights, democracy and the rule of law. While the guidelines are voluntary, they are seen as a first step towards ensuring that the application of AI respects these values and complies with the law and ethical principles.

The EU Commissioner for Digital Economy and Society, Mariya Gabriel, highlighted: ‘today, we are taking an important step towards ethical and secure AI in the EU. We now have a solid foundation based on EU values following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements into practice and at the same time foster an international discussion on human-centric AI.’

The guidelines are based on four ethical principles grounded in human rights:

  1. Respect for human autonomy
  2. Prevention of harm
  3. Fairness
  4. Explicability

The EU's AI strategy consists of a three-step approach: the key requirements for trustworthy AI; the launching of a pilot phase for feedback from stakeholders; and developing an international consensus for human-centric AI.

In addition to four ethical principles, seven essential requirements have been published to achieve ethical AI technology:

  1. Human agency and oversight: AI should not undermine human autonomy. People should not be manipulated by AI systems, and humans should be able to intervene in or oversee every decision that AI makes.
  2. Technical robustness and safety: AI should be secure and accurate. It should not be compromised easily by external attacks and should be reasonably reliable.
  3. Privacy and data governance: personal data collected through AI systems should be secure and private. Access to the data should be restricted so that it cannot easily be stolen.
  4. Transparency: data and algorithms used for creating an AI system should be accessible and the decision made by the AI should be understandable and traceable by human beings.
  5. Diversity, non-discrimination and fairness: AI services should be available to everyone, regardless of age, gender, race or other characteristics, and AI systems should not be biased with respect to these characteristics.
  6. Environmental and societal well-being: AI technology should be used to improve social change and sustainability.
  7. Accountability: AI systems should be capable of being audited and covered by existing protections for corporate whistleblowers. Negative impacts of the system should be identified and reported in advance.
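Requirements such as fairness and accountability can be partly automated in practice. As a hedged illustration only (this is not part of the EU guidelines, nor Aqovia's platform), a minimal demographic-parity check compares the rate of positive decisions a system produces for different groups:

```python
# Minimal sketch of a fairness audit: demographic parity.
# A large gap in positive-outcome rates between groups is one
# signal that a system may be biased along a protected characteristic.
from collections import defaultdict

def positive_rates(outcomes):
    """outcomes: list of (group, decision) pairs, decision in {0, 1}.
    Returns the positive-decision rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in outcomes:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(outcomes):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = positive_rates(outcomes).values()
    return max(rates) - min(rates)

# Hypothetical decisions: group "a" is approved 2/3 of the time,
# group "b" only 1/3 of the time.
decisions = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
print(parity_gap(decisions))  # ≈ 0.333 (2/3 − 1/3)
```

Real audits would use richer metrics (for example equalised odds) and proper tooling, but even a check this simple makes the fairness requirement testable rather than aspirational.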

At Aqovia we welcome the EU guidelines on AI and believe they will improve the public's trust in the technology. Indeed, we believe that AI is going to bring immense benefits to society and improve our overall welfare. During this period, it is very important to align with and participate (where possible) in such initiatives to ensure we continue to build better products, services and experiences for our clients.

Organizations that fail to develop and enforce a formal ethical code of conduct are at greater risk of liability from the misuse of data science and AI.

Few organizations have implemented continuous intelligence capabilities spanning multiple applications and business functions, because they lack the relevant skills and tools. Using Aqovia's A1 platform can help maximise ROI as well as mitigate a company's risk and potential liability.

You can find out more about Aqovia's A1 platform by following the link below.

Topics: European Union, Artificial Intelligence