The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. whether they are moral patients) or whether they can be sources of moral action (i.e. whether they are moral agents). On the Organic View of Ethical Status, being a moral patient is a necessary condition for an entity to qualify as a moral agent. This view claims that because artificial agents (AAs) lack sentience, they cannot be proper subjects of moral concern and hence cannot be considered moral agents. I raise conceptual and epistemic issues with regard to the sense of sentience employed on this view, and I argue that the Organic View does not succeed in showing that machines cannot be moral patients. Nevertheless, irrespective of this failure, I also argue that the entire project is misdirected, in that moral patiency need not be a necessary condition for moral agency. Moreover, I claim that whereas machines may conceivably be moral patients in the future, there is a strong case to be made that they are (or will very soon be) moral agents. Whereas it is often argued that machines cannot be agents simpliciter, let alone moral agents, I claim that this argument is predicated on a conception of agency that makes unwarranted metaphysical assumptions even in the case of human agents. Once I have established the shortcomings of this “standard account”, I move to elaborate on other, more plausible conceptions of agency, on which some machines clearly qualify as agents. Nevertheless, the argument is still often made that while some machines may be agents, they cannot be moral agents, given their ostensible lack of the requisite phenomenal states. Against this thesis, I argue that the requirement of internal states for moral agency is philosophically unsound, as it runs up against the problem of other minds. In place of such intentional accounts of moral agency, I provide a functionalist alternative, which makes conceptual room for the existence of AMAs. The implications of this thesis are that at some point in the future we may be faced with situations for which no human being is morally responsible, but a machine may be. Moreover, this responsibility holds, I claim, independently of whether the agent in question is “punishable” or not.

Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on artificial entities and concern for the effects on human society. Beyond the conventional consequentialist, deontological, and virtue ethicist frameworks, some scholars encourage “information ethics” and “social-relational” approaches, though there are opportunities for more in-depth ethical research on the nuances of moral consideration of artificial entities. There is limited relevant empirical data collection, primarily in a few psychological studies on current moral and social attitudes of humans towards robots and other artificial entities.