From the “Turing Triage Test” to the Importance of Moral Competence in (Social) Robots:
Research Focus, Goals and Questions

Keywords: Moral Machines, Turing Triage Test, Chinese Room
Name: Dr. André Schmiljun (Adam Mickiewicz University in Poznań)

Address: Teterower Ring 54, 12619 Berlin

e-Mail: schmiljun@insystems.de
In his famous “Turing test” (1950), also known as the “imitation game”, Alan Turing argued that if a human being is unable to distinguish the reactions of a machine from those of a human being, then we can ascribe to the machine intellectual skills comparable to our own. The idea, in other words, is that machines can think from the moment we are convinced that they can think. Inspired by Turing, Rob Sparrow introduced a modified version of this test in 2004, the “Turing Triage Test”. Sparrow proposes a moral dilemma, a “triage” situation, in which a choice must be made as to which of two human lives to save. He argues that we will know machines have achieved moral standing comparable to a human being when replacing one of these people with an artificial intelligence leaves the character of the dilemma intact. In effect, a machine passes the test if a reasonable person concludes that the continuing existence of the machine should be preserved over the life of a human being.
In my presentation, I would like to introduce the test and discuss the dilemma. In the second part, I concentrate on the problems of this test. I will argue that Sparrow’s “Turing Triage Test” faces the same challenging questions as Turing’s original test, one of them raised by John Searle in his prominent “Chinese Room” argument. According to Searle, robots lack intentionality; they are essentially formal, syntactic systems that merely follow rules. Their moral standing must therefore be questioned, and the coherence of Sparrow’s “Turing Triage Test” appears controversial. Finally, I will outline my research approach: my suggestion is that (social) robots interacting with human beings require moral competence if we do not want them to become a potential risk to society, causing harm, social problems, or conflicts. Neither top-down (rule-based) nor bottom-up (context-situated) strategies are sufficient on their own. I will argue for a hybrid approach built on the groundwork of Georg Lind’s dual-aspect, dual-layer theory of the moral self.


Literature

Bendel, O. 2013. “Wie viel Moral braucht eine Maschine?” [“How Much Morality Does a Machine Need?”]. Liewo (June 2013). Retrieved 12 August 2017 from http://blog.zdf.de/hyperland.

Nowak, E. 2017. “Can Human and Artificial Agents Share an Autonomy, Categorical Imperative-Based Ethics and ‘Moral’ Selfhood?” Filozofia Publiczna i Edukacja Demokratyczna 6 (2): 169-208.

Nowak, E. 2016. “What Is Moral Competence and Why Promote It?” Ethics in Progress 7 (1): 322-333.

Lind, G. 2016. How to teach morality? Berlin: Logos Verlag.

Schmiljun, A. 2017. “Robot Morality: Bertram F. Malle’s Concept of Moral Competence.” Ethics in Progress 8 (2): 6-79.

Searle, J. R. 1980. “Minds, Brains, and Programs.” The Behavioral and Brain Sciences 3: 417-457.

Sparrow, R. 2004. “The Turing Triage Test.” Ethics and Information Technology 6 (4): 203-213.

Sparrow, R. 2014. “Can Machines Be People? Reflections on the Turing Triage Test.” In Lin, P., Abney, K., and Bekey, G. A. (eds.), Robot Ethics: The Ethical and Social Implications of Robotics, 301-316. Cambridge, MA: MIT Press.

Turing, A. M. 1950. “Computing Machinery and Intelligence.” Mind 59 (236): 433-460.








