Intro
Acknowledgments
Contents

Chapter 1: Introduction
References

Part I: Can an AI System Be Ethical?

Chapter 2: Bias and Discrimination in Machine Decision-Making Systems
  2.1 Introduction
  2.2 Why Machine Failure Is More Serious
  2.3 How Machine Learning Works
  2.4 What Is Meant by Machine Discrimination
    2.4.1 Fairness Through Unawareness
    2.4.2 Individual Fairness
    2.4.3 Counterfactual Fairness
    2.4.4 Group Fairness
    2.4.5 Impossibility of Fairness
  2.5 What We Are Talking About: Example of Machine Discrimination
  2.6 Why Machine Learning Can Discriminate
  2.7 How Machine Discrimination Can Be Overcome
    2.7.1 Pre-processing for Fairness
    2.7.2 In-training for Fairness
    2.7.3 Post-processing for Fairness
  2.8 Conclusion
  References

Chapter 3: Opacity, Machine Learning and Explainable AI
  3.1 Introduction
  3.2 Fundamentals of Trustworthy and Explainable Artificial Intelligence
  3.3 Dimensions and Strategies for Promoting Explainability and Interpretability
    3.3.1 Dimensions of Explainability and Interpretability
    3.3.2 Interpretability Strategies
  3.4 Digging Deeper on Counterfactual Explanations
    3.4.1 Basics of Counterfactual Explanations
    3.4.2 Overview on Techniques for Counterfactual Explanations
  3.5 Future Challenges for Achieving Explainable Artificial Intelligence
    3.5.1 Multimodal Data Fusion for Improved Explainability
    3.5.2 Reliable and Auditable Machine Learning Systems
    3.5.3 GPAI Algorithms to Learn to Explain
  3.6 Concluding Remarks
  References

Chapter 4: The Moral Status of AI Entities
  4.1 Introduction
  4.2 Can Machines Be Moral Agents?
  4.3 Do We Need a Mind to Attribute Moral Agency?
  4.4 The Challenge of Responsibility
  4.5 Artificial Moral Patients and Rights
  4.6 Relationalist Proposals
  4.7 Conclusion
  References

Part II: Ethical Controversies About AI Applications

Chapter 5: Ethics of Virtual Assistants
  5.1 Introduction
  5.2 What Are Virtual Assistants?
  5.3 What Ethical Issues Do Virtual Assistants Raise?
    5.3.1 Human Agency and Autonomy
      5.3.1.1 Manipulation and Undue Influences
      5.3.1.2 Cognitive Degeneration and Dependency
    5.3.2 Human Obsolescence
    5.3.3 Privacy and Data Collection
  5.4 Should We Use Virtual Assistants to Improve Ethical Decisions?
  5.5 Concluding Remarks
  References

Chapter 6: Ethics of Virtual Reality
  6.1 Introduction
  6.2 Preliminaries
    6.2.1 Prehistory and History
    6.2.2 Is It Real?
  6.3 My Avatar
    6.3.1 What They Reveal About the User
    6.3.2 How They Influence the Behaviour of Other Users
    6.3.3 How They Influence the User's Behaviour
  6.4 What Is Good
  6.5 What Is Bad
    6.5.1 Personal Risks
    6.5.2 Social Risks
  6.6 What Is Weird
  6.7 Ethical Issues
    6.7.1 Privacy
    6.7.2 Ethical Behaviour
  6.8 Conclusions
  References

Chapter 7: Ethical Problems of the Use of Deepfakes in the Arts and Culture
  7.1 Introduction: What Is a Deepfake? Why Could It Be Dangerous?
  7.2 Are Deepfakes Applied to Arts and Culture Harmful?
    7.2.1 Encoding-Decoding and GAN Deepfakes
    7.2.2 The Moral Limit of Artistic Illusion
    7.2.3 Resurrecting Authors
    7.2.4 Falsifying Style
  7.3 The Limits of Authorship
  7.4 Conclusion
  References

Chapter 8: Exploring the Ethics of Interaction with Care Robots
  8.1 Introduction
  8.2 State of the Art
  8.3 What Are Care Robots?
    8.3.1 Definition
    8.3.2 A Bit of History
    8.3.3 Taxonomy
    8.3.4 Some More Examples of Existing Robots
  8.4 Design
  8.5 An Ethical Framework for Care Technologies
  8.6 Conclusion
  References

Chapter 9: Ethics of Autonomous Weapon Systems
  9.1 Introduction
  9.2 Autonomous Weapon Systems
    9.2.1 Definitions
    9.2.2 Examples
      9.2.2.1 Sentry Robots: SGR-A1
      9.2.2.2 Loitering Munitions with Human in the Loop: Switchblade and Shahed-136
      9.2.2.3 Autonomous Loitering Munitions: HARPY
      9.2.2.4 Autonomous Cluster Bomb: Sensor Fuzed Weapon (SFW)
      9.2.2.5 Hypothetical AWS: SFW + Quadcopter + Image Recognition Capabilities
  9.3 Legal Basis
  9.4 Main Issues Posed by AWS
    9.4.1 Low Bar to Start a Conflict: Jus ad Bellum
    9.4.2 Availability of Enabling Technologies and the Dual-Use Problem
    9.4.3 Meaningful Human Control
    9.4.4 Unpredictability of AWS
    9.4.5 Accountability
    9.4.6 Human Dignity: Dehumanization of Targets
  9.5 Conclusions
  References

Part III: The Need for AI Boundaries

Chapter 10: Ethical Principles and Governance for AI
  10.1 Intro: Risk and Governance
  10.2 AI Risks, Responsibility and Ethical Principles
  10.3 Ethical Guidelines and the European Option for AI Governance
  10.4 The Artificial Intelligence Regulation in Europe
  10.5 AI Governance: Open Questions, Future Paths
  References
  EU Legislation and Official Documents Cited
  Other Resources Mentioned

Chapter 11: AI, Sustainability, and Environmental Ethics
  11.1 Introduction
  11.2 Energy Demands and Environmental Impacts of AI Applications
  11.3 What Is Sustainability?
  11.4 A Path to Make AI More Sustainable from Environmental Ethics
    11.4.1 The Anthropocentric Concern for the Environmental Costs of AI
    11.4.2 The Biocentric Concern for the Environmental Costs of AI
    11.4.3 The Ecocentric Concern for the Environmental Costs of AI
  11.5 Ethical Values for a Sustainable AI
  11.6 Conclusions
  References

Chapter 12: The Singularity, Superintelligent Machines, and Mind Uploading: The Technological Future?
  12.1 Introduction
  12.2 The Advent of the Singularity: Raymond Kurzweil's Predictions
    12.2.1 Is the Singularity Near?
    12.2.2 From Moore's Law to the Law of Accelerating Returns
  12.3 The Roadmap to Superintelligent Machines
    12.3.1 Concerns and Uncertainties
    12.3.2 The Future of Superintelligence by Nick Bostrom
  12.4 What if We Can Live Forever? Dreams of Digital Immortality
    12.4.1 Types of MU: The Analysis of David Chalmers
    12.4.2 Will I Still Be Myself in a Virtual World? Problems with Personal Identity
  12.5 Conclusions
  References