Publication Year: [2023]
Article Information

Article Title (Korean)
[Vol.18, No.6] Validation of RSDA Model in Moral Decision-Making of Artificial Moral Agent or AI Robots

Author(s)
Jong-Wook Kim, Namin Shin |

Abstract
The objective of this research is to validate the RSDA model for forecasting decision-making processes in Artificial Moral Agents (AMAs) or AI Robots. The RSDA model delineates the Rubric evaluation, Scenario development, and Data collection phases, ultimately culminating in the creation of an Algorithm capable of predicting robot or AI responses that closely align with human decision-making. This investigation demonstrates the feasibility of a hybrid AMA model that can emulate human ethical judgment by discerning ethical scores through the analysis of human decision-making patterns in real-life scenarios. Data for this study were gathered from a cohort of elementary and college students who responded to four ethical dilemma scenarios involving domestic, medical, and educational robots, as well as autonomous vehicles. The Univariate Dynamic Encoding Algorithm for Searches (uDEAS) was subsequently employed to construct a statistical model that conforms to the decision-making patterns observed in the human groups under consideration. According to the results of the RSDA model, the absolute mean ethics score for ethical principle 1 is 0.49 for elementary school students and 1.53 for university students, indicating that ethical awareness of human rights develops as students grow older. In addition, the average standard deviation of the ethics scores of the five principles is 1.02 for elementary school students and 0.67 for college students, indicating that ethical judgment narrows with age and ethical consensus is formed. The findings of this study affirm that the RSDA model, while intuitive, systematically elucidates each step and holds substantial promise for deployment in scenarios wherein intelligent agents necessitate human-like decision-making capabilities. Moreover, the RSDA model is anticipated to enhance the credibility of AMAs by augmenting the transparency and explainability of decisions made by social robots or AI.
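The group-level statistics reported in the abstract (a mean absolute ethics score for a single principle, and the average standard deviation of scores across the five principles) can be sketched as below. This is a minimal illustration only: the function names and toy data are hypothetical, and the sketch is not the uDEAS procedure or the study's actual pipeline.

```python
# Hypothetical sketch of the two summary statistics mentioned in the
# abstract. The sample data are invented for illustration.
from statistics import mean, pstdev

def mean_abs_score(scores):
    """Mean of the absolute ethics scores for one ethical principle."""
    return mean(abs(s) for s in scores)

def avg_std_across_principles(score_matrix):
    """Average standard deviation of ethics scores over all principles.

    score_matrix: one list of respondent scores per ethical principle.
    """
    return mean(pstdev(scores) for scores in score_matrix)

# Illustrative toy data: 5 principles x 4 respondents (not study data).
toy = [
    [1.0, -0.5, 0.8, -1.2],
    [0.3, 0.6, -0.4, 0.9],
    [-1.1, 0.2, 0.7, -0.3],
    [0.5, -0.8, 1.0, 0.4],
    [0.9, -0.2, -0.6, 1.1],
]
print(round(mean_abs_score(toy[0]), 2))
print(round(avg_std_across_principles(toy), 2))
```

A larger per-principle mean absolute score would correspond to stronger agreement with (or rejection of) a principle, while a smaller average standard deviation would indicate the narrowing of judgment the abstract associates with older respondents.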

Attached Paper
|
|
|
|
|