Robot ethics

Robot ethics, sometimes known by the short expression “roboethics”, concerns the ethical problems that arise with robots: whether robots pose a threat to humans in the short or long run, whether some uses of robots are problematic (for example in healthcare, or as ‘killer robots’ in war), and how robots should be designed so that they act ‘ethically’ (this last concern is also called machine ethics). Robot ethics is a sub-field of the ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies in a way that will still ensure the safety of the human race.

While the issues are as old as the word robot, serious academic discussion started around the year 2000. Robot ethics requires the combined commitment of experts from several disciplines, who have to adjust laws and regulations to the problems resulting from scientific and technological achievements in robotics and AI. The main fields involved in robot ethics are: robotics, computer science, artificial intelligence, philosophy, ethics, theology, biology, physiology, cognitive science, neuroscience, law, sociology, psychology, and industrial design.

Fundamental topics
In the course of a few decades, robotics has become one of the leading scientific-technological disciplines, evolving so rapidly that in the near future humans will share the planet with, and collaborate with, a new type of automatic machine: robots. This will raise many new ethical, psychological, social and economic problems. “Roboethics is an applied ethics whose aim is to develop scientific, cultural and technical tools and knowledge that are universally shared, regardless of cultural, social and religious differences. These tools will promote and encourage the development of robotics towards the wellbeing of society and the person. Moreover, thanks to roboethics, it will be possible to prevent the use of robotics against human beings” (Veruggio, 2002). For the first time in its history, humanity has the opportunity to build intelligent and autonomous entities.

From this point of view, the scientific community must re-examine the concept of intelligence: it is no longer associated only with humans or animals, but also with machines. Similarly, complex concepts such as autonomy, learning, consciousness, free will, decision-making, freedom and emotion do not have the same semantic or practical meaning when they refer to human beings, animals or machines. In this context, it becomes even more natural and necessary for robotics to involve many other disciplines in its progress.

The main positions on roboethics
Already during the First International Symposium on Roboethics (Sanremo, Italy, 2004), three different ethical positions emerged within the robotics community in relation to roboticists’ responsibilities for their technical-scientific activity (D. Cerqui, 2004):

Roboticists not interested in ethics: the position of those who consider their research a strictly technical activity, free from moral or social responsibility;

Roboticists interested in short-term ethical issues: the position of those who express their moral concerns about their professional activity in immediate and simple terms of good and bad, by reference to accepted cultural values and social conventions;

Roboticists interested in long-term ethical problems: the position of those who express their moral concern about their professional activity in global and long-term terms. For example, they consider problems such as the digital divide between North and South, or between generations.

Disciplines involved in roboethics
The preparation of roboethics requires the involvement of experts from different disciplines who, by collaborating in international projects, committees and commissions, can update laws and regulations in light of the problems resulting from the scientific and technological development of robotics. As the work proceeds, the need may arise to create new curricula and new specializations able to manage such a complex situation (as happened, for example, in the case of forensic medicine). The main disciplines involved in roboethics are, in addition to robotics itself: computer science, artificial intelligence, philosophy, theology, biology, physiology, cognitive science, neuroscience, jurisprudence, sociology, psychology and industrial design.

Principles
As a human ethics, roboethics must conform to the fundamental principles and norms sanctioned and universally accepted in the main charters on human rights, among them:

Respect for dignity and human rights.
Equality, justice and fairness.
Benefits and disadvantages of each activity.
Respect for cultural differences and pluralism.
No discrimination or stigmatization.
Right to the protection of personal data.
Defense of Privacy.
Confidentiality.
Solidarity and collaboration.
Social responsibility.
Sharing benefits.
Responsibility for the protection of the biosphere.

General ethical issues related to science and technology
Robotics shares with many other sectors of science and technology many of the ethical problems arising from the second and third industrial revolutions, including:

Dual use of technology.
Impact of technology on the environment.
Effects of technology on the global distribution of wealth.
Technology gap and digital divide.
Access to technological resources.
Dehumanization of humans compared to machines.
Technology addiction.
Anthropomorphization of machines.

Roboethics Roadmap
In the 2006 European Robotics Research Network (EURON) Roboethics Roadmap, the ethical dimension is set out as follows: depending on how robots’ abilities are perceived, different ethical evaluations of robots are possible:

Robots are nothing but machines: in this case, roboethics is comparable to the ethics of any other mechanical science.
Robots have an ethical dimension: robots are believed to have an intrinsic ethical dimension because as human symbolic products, they can expand and enhance man’s ability to act ethically.
Robots are moral agents: artificial agents can act as moral patients (i.e., objects of moral action) or as moral agents. In the opinion of most roboethicists, they need not have free will in order to act ethically; attention is focused on the action rather than on the decision to act.
Robots are a new species: according to this view, robots will not only have consciousness, but will transcend human dimensions in terms of morality and intelligence.

EURON forecast
According to the EURON forecast, robots will be developed and used in the near future in the following areas:

Production of humanoid robots: that is, the production of robots or androids with human-like intelligence and emotional ability. In line with the 2004 Fukuoka World Robot Declaration, EURON cites three programmatic points:
Next-generation robots will co-exist as partners with humans.
Next-generation robots will support people both physically and mentally.
Next-generation robots will contribute to the realization of a safe and peaceful society.

Modern production plants, in factories but also in small businesses.
Adaptable service robots and smart homes: on the one hand, humanoid service robots; on the other, living spaces that are completely computerized, sensor-controlled and networked. A commonly cited illustrative example is the self-ordering refrigerator, which automatically orders replenishment from a merchant via the network as soon as stocks run low (a minimal sketch of this idea follows the list below).
Network robots: these are mainly developments of artificial intelligence on the Internet, as can already be observed in search engines.
Outdoor robots: great progress has been made here, especially in space travel; further developments are also foreseeable in the near future in mining, warehousing and agriculture.
Health care and quality of life: in recent years, a robotization of medicine and surgery has taken place relatively unnoticed. Computer-assisted diagnostic procedures, therapy robots and surgical robots have been approved in Europe since 2000.
Military robots: Integrated defense systems, autonomous vehicles and aircraft, and smart ammunition.
Edutainment: in addition to teaching, toys and the entertainment industry, this area also explicitly addresses the sex industry.
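
As referenced in the smart-home item above, the self-ordering refrigerator can be read as a simple sense-and-act loop: monitor stock, compare it against a threshold, and place a network order when supplies run low. The following Python sketch is purely hypothetical; the item names, thresholds and the MERCHANT_URL endpoint are invented for illustration and do not correspond to any real product or API.

    # Hypothetical sketch of a "self-ordering refrigerator": when tracked stock
    # falls below a threshold, an order is sent to a merchant over the network.
    # Item names, thresholds and the endpoint below are invented for illustration.
    import json
    from urllib import request

    REORDER_THRESHOLDS = {"milk": 1, "eggs": 4}      # assumed minimum stock levels
    MERCHANT_URL = "https://merchant.example/order"  # placeholder endpoint

    def check_and_reorder(stock):
        """Return the list of items that would be reordered for the given stock."""
        ordered = []
        for item, threshold in REORDER_THRESHOLDS.items():
            if stock.get(item, 0) < threshold:
                payload = json.dumps({"item": item, "quantity": threshold * 2}).encode()
                req = request.Request(MERCHANT_URL, data=payload,
                                      headers={"Content-Type": "application/json"})
                # request.urlopen(req)  # not executed here: the endpoint is fictional
                ordered.append(item)
        return ordered

    if __name__ == "__main__":
        print(check_and_reorder({"milk": 0, "eggs": 6}))  # -> ['milk']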

More recently, however, there has been growing criticism of this very optimistic view of the future role of robots in society. A danger is seen in people becoming less and less connected to real life, because such artifacts can only convey the illusion of life. The effects on the human psyche of persistent interaction with seemingly living beings are considered at least potentially problematic. A substantial criticism of the use of robots in the social and military field, presented primarily by the computer pioneer Joseph Weizenbaum, can be found in the documentary Plug & Pray.

The ethical challenges of the coming years will thus not be “robot rights”, but rather how to handle the coming reality of a robotized society.

Robotics and Law
In parallel with the ethical treatment of robotics, a jurisprudential consideration of the specific issues raised by the use of robots is developing. The main focus is on civil and criminal liability, as well as the shift in liability allocation required by increased autonomy, for example in the area of robotics and ambient assisted living (AAL). The use of robots to perform state tasks in areas sensitive to fundamental rights, for example in prisons, raises additional public-law issues of its own, for example in the field of data protection, which go beyond the requirements of product safety and admission criteria. The use of robots will confront society in the future with a variety of previously unresolved legal problems. This discussion is most prominently conducted in the context of autonomously driving cars, which may have to make decisions about life and death. So far, very few lawyers deal with the topic of “robot law” and advise in this area.

Current developments
Robot ethics has been discussed by various bodies since 2016/17, such as the IEEE, the European Parliament, the European Association for Robotics (euRobotics) and initiatives such as “Responsible Robotics”.

History and events
Since antiquity, ethics in relation to the treatment of non-human and even non-living things and their potential “spirituality” has been discussed. With the development of machinery and eventually robots, this philosophy was also applied to robotics. The first publication directly addressing and setting the foundation for robot ethics was “Runaround”, a science fiction short story written by Isaac Asimov in 1942 which featured his well-known Three Laws of Robotics. These three laws were continuously altered by Asimov, and a fourth, or zeroth, law was eventually added to precede the first three, in the context of his science fiction works. The short term “roboethics” was probably coined by Gianmarco Veruggio.
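
Asimov’s laws are fiction, not engineering, but the way the zeroth law “precedes” the others amounts to a strict precedence ordering over rules. The toy Python sketch below only illustrates that ordering; the law texts and the string-matching checks are invented placeholders, not an implementation of any real robot-control or machine-ethics system.

    # Toy illustration of precedence-ordered rules in the spirit of Asimov's
    # fictional laws (Zeroth > First > Second > Third). Purely hypothetical:
    # the predicates below are placeholders, not real perception or planning.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Law:
        rank: int                        # lower rank = higher precedence (0 = Zeroth Law)
        description: str
        violates: Callable[[str], bool]  # True if the candidate action breaks the law

    def permitted(action: str, laws: List[Law]) -> bool:
        """An action is permitted only if no law, checked in order of precedence, forbids it."""
        for law in sorted(laws, key=lambda l: l.rank):
            if law.violates(action):
                print(f"Rejected '{action}': violates law {law.rank} ({law.description})")
                return False
        return True

    # Placeholder rule set: simple string matching stands in for real-world judgement.
    laws = [
        Law(0, "do not harm humanity", lambda a: "harm humanity" in a),
        Law(1, "do not harm a human", lambda a: "harm human" in a),
        Law(2, "obey human orders", lambda a: a.startswith("ignore order")),
        Law(3, "protect own existence", lambda a: a == "self-destruct"),
    ]

    if __name__ == "__main__":
        for action in ["fetch coffee", "harm human to finish task", "self-destruct"]:
            print(action, "->", "allowed" if permitted(action, laws) else "blocked")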

An important event that propelled concern for roboethics was the First International Symposium on Roboethics in 2004, organized through the collaborative effort of Scuola di Robotica, the Arts Lab of Scuola Superiore Sant’Anna, Pisa, and the Theological Institute of Pontificia Accademia della Santa Croce, Rome. After two days of intense debate, anthropologist Daniela Cerqui identified three main ethical positions:

Those who are not interested in ethics. They consider that their actions are strictly technical, and do not think they have a social or a moral responsibility in their work.
Those who are interested in short-term ethical questions. According to this profile, questions are expressed in terms of “good” or “bad,” and refer to some cultural values. For instance, they feel that robots have to adhere to social conventions. This includes “respecting” and helping humans in diverse areas such as implementing laws or helping elderly people. (Such considerations are important, but we have to remember that the values used to define the “bad” and the “good” are relative: they are the contemporary values of the industrialized countries.)
Those who think in terms of long-term ethical questions, for example about the “digital divide” between South and North, or between young and elderly. They are aware of the gap between industrialized and poor countries, and wonder whether the former should not change their way of developing robotics in order to be more useful to the South. They do not explicitly formulate the question “what for?”, but we can consider it implicit.

These are some important events and projects in robot ethics. Further events in the field are announced by the euRobotics ELS topics group, and by RoboHub:

1942, Asimov’s short story “Runaround” explicitly states his Three Laws for the first time. These “Laws” are reused in Asimov’s later robot-related science fiction.
2004, First International Symposium on Roboethics, 30–31 January 2004, Villa Nobel, Sanremo, Italy, organized by School of Robotics, where the word “roboethics” was officially used for the first time.
2004, IEEE-RAS established a Technical Committee on Roboethics.
2004, Fukuoka World Robot Declaration, issued on February 25, 2004 from Fukuoka, Japan.
2005, ICRA05 (International Conference on Robotics and Automation), Barcelona: the IEEE RAS TC on Roboethics organized a Workshop on Roboethics.
2005–2006, E.C. Euron Roboethics Atelier (Genoa, Italy, February/March 2006). The Euron project, coordinated by School of Robotics, involved a large number of roboticists and scholars of the humanities, who produced the first Roboethics Roadmap.
2006, BioRob2006 (The first IEEE / RAS-EMBS International Conference on Biomedical Robotics and Bio-mechatronics), Pisa, Italy, February 20, 2006: Mini symposium on Roboethics.
2006, International Workshop “Ethics of Human Interaction with Robotic, Bionic, and AI Systems: Concepts and Policies”, Naples, 17–18 October 2006. The workshop was supported by the ETHICBOTS European Project.
2007 ICRA07 (International Conference on Robotics and Automation), Rome: the IEEE RAS TC on Roboethics organized a Workshop on Roboethics.
2007 ICAIL’07, International Conference on Artificial Intelligence and Law, Stanford University, Palo Alto, USA, 4–8 June.
2007 International European Conference on Computing and Philosophy E-CAP ‘07, University of Twente, Netherlands, 21–23 June 2007. Track “Roboethics”.
2007 Computer Ethics Philosophical Enquiry CEPE ’07, University of San Diego, USA,12–14 July 2007. Topic “Roboethics”.
2008 International Symposium Robotics: New Science, Thursday 20 February 2008, Via della Lungara 10, Rome, Italy.
2009 ICRA09 (International Conference on Robotics and Automation), Kobe, Japan: the IEEE RAS TC on Roboethics organized a Workshop on Roboethics.
2012 We Robot 2012, University of Miami, FL, USA
2013 Workshop on Robot Ethics, University of Sheffield, Feb 2013
2013 We Robot 2013 – Getting Down to Business, Stanford University
2014 We Robot 2014 – Risks and Opportunities, University of Miami, FL, USA
2016 Ethical and Moral Considerations in Non-Human Agents, Stanford Spring Symposium, AAAI Association for the Advancement of Artificial Intelligence
2017 On October 25, 2017, at the Future Investment Summit in Riyadh, a robot called Sophia, referred to with female pronouns, was granted Saudi Arabian citizenship, becoming the first robot ever to have a nationality. This attracted controversy, as it was not obvious whether this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder; it was also seen as controversial in light of the limited rights granted to human women in Saudi Arabia.

In popular culture
Roboethics as a science or philosophical topic has not made a strong cultural impact, but it is a common theme in science fiction literature and films. One of the most popular films depicting the potential misuse of robotic and AI technology is The Matrix, which depicts a future in which the absence of roboethics brought about the destruction of the human race. The Animatrix, a collection of animated short films set in the world of The Matrix, focused heavily on the potential ethical issues between humans and robots. Many of The Animatrix’s animated shorts are also named after Isaac Asimov’s fictional stories.

Although not a part of roboethics per se, the ethical behavior of robots themselves has also been a recurring issue in roboethics in popular culture. The Terminator series focuses on robots run by an uncontrolled AI program with no restraint on the termination of its enemies. This series shares with The Matrix the futuristic plot of robots having taken control. The most famous case of a robot or computer without programmed ethics is HAL 9000 in the Space Odyssey series, where HAL (a computer with advanced AI capabilities that monitors and assists humans aboard a spacecraft) kills crew members on board to ensure the success of the assigned mission after its own existence is threatened.
