HALO: etHical-aware AdjustabLe autOnomous systems

Nowadays, citizens continuously interact with software systems, e.g., through a mobile device, in their smart homes, or from on board a car. These systems are increasingly autonomous thanks to the widespread use of AI technologies, and their impact on the social, economic, and political spheres is becoming evident.

The EU is confronting the dangers posed by the unauthorized disclosure and improper use of personal data, but there is a less evident yet more serious risk that touches the core of citizens' fundamental rights. Worries about the growth of the data economy and the increasing presence of AI-enabled Autonomous Systems (AS) have shown that privacy concerns alone are insufficient: other ethical values and human dignity are at stake. To this end, the Ethics Guidelines for Trustworthy AI of the EU High-Level Expert Group on AI set out the requirements that an AI system should satisfy, among which: (i) respecting the rule of law; (ii) being aligned with agreed ethical principles and values, including privacy, fairness, and human dignity; (iii) keeping humans in control, hence adjusting the system's autonomy according to user preferences; and (iv) being robust and safe, that is, the system's behavior remains trustworthy even if things go wrong.

To address these principles, humans should be supported in controlling the system's autonomy, either by enabling them to take operational control of the system when they deem it necessary or by ensuring that the system's decisions comply with their ethical preferences without violating regulations and laws.

HALO approaches this challenge by empowering users with a software exoskeleton that enables them to express their moral preferences and to adjust the system's autonomy and the related interaction protocols in an ethical-aware manner. The customization of the system's autonomy is guaranteed by a software mediator that, depending on the user's ethical preferences, first determines the new level of autonomy and then (re-)distributes autonomy and control among the involved entities (e.g., system components, software agents, humans interacting with the system).
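The mediator's two-step logic can be illustrated with a minimal sketch. All names here (the autonomy scale, the preference fields, the role labels) are illustrative assumptions, not HALO's actual design: the point is only the flow from user preferences, to a chosen autonomy level, to a (re-)distribution of control among entities.

```python
from dataclasses import dataclass
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # Hypothetical scale, from full human control to full system autonomy
    HUMAN_CONTROL = 0
    HUMAN_APPROVAL = 1
    SUPERVISED = 2
    FULL_AUTONOMY = 3

@dataclass
class EthicalPreferences:
    # Example preferences a user might express via the exoskeleton
    privacy_sensitivity: float      # 0.0 (low) .. 1.0 (high)
    require_human_oversight: bool   # user wants to approve each decision

class Mediator:
    """Sketch of the mediator: first decide the autonomy level from the
    user's ethical preferences, then redistribute control roles among
    the involved entities (human, agents, components)."""

    def decide_level(self, prefs: EthicalPreferences) -> AutonomyLevel:
        # Step 1: map preferences to an autonomy level (illustrative rules)
        if prefs.require_human_oversight:
            return AutonomyLevel.HUMAN_APPROVAL
        if prefs.privacy_sensitivity > 0.7:
            return AutonomyLevel.SUPERVISED
        return AutonomyLevel.FULL_AUTONOMY

    def redistribute(self, level: AutonomyLevel,
                     entities: list[str]) -> dict[str, str]:
        # Step 2: assign control roles; at low autonomy the human keeps
        # operational control, at higher levels software entities take over
        if level <= AutonomyLevel.HUMAN_APPROVAL:
            return {e: ("controller" if e == "human" else "executor")
                    for e in entities}
        return {e: ("supervisor" if e == "human" else "controller")
                for e in entities}

prefs = EthicalPreferences(privacy_sensitivity=0.9,
                           require_human_oversight=False)
mediator = Mediator()
level = mediator.decide_level(prefs)
roles = mediator.redistribute(level, ["human", "agent"])
```

With these sample preferences, high privacy sensitivity yields a supervised level: the human is assigned a supervisory role while the software agent retains operational control, and changing the preferences would shift the distribution accordingly.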

HALO implements a paradigm shift from a static to a ground-breaking dynamic approach for ethical-aware adjustable AS by: