Marcus Arvan convincingly argues in this article that when programming an AI one can produce psychopathic behaviour both by teaching it no moral targets at all AND by teaching it fixed moral targets. What is needed instead is a degree of flexibility. Thus, I would argue, the moral reasoning implemented in machines needs to be adjusted through a large set of cases and through rules dealing with specific cases. Can the Mīmāṃsā case-based application of rules help here?
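To make the idea of a case-based application of rules slightly more concrete, here is a minimal sketch (in Python, with all names and rules purely hypothetical, not drawn from Arvan or from Mīmāṃsā texts) of how a general rule might be overridden by a more specific rule covering a particular case, rather than being applied rigidly across the board.

```python
from dataclasses import dataclass
from typing import Callable

# A "case" is just a set of features describing the situation (hypothetical).
Case = dict[str, bool]

@dataclass
class Rule:
    name: str
    condition: Callable[[Case], bool]  # does this rule apply to the case?
    verdict: str                       # what it prescribes
    specificity: int                   # more specific rules override general ones

def decide(case: Case, rules: list[Rule]) -> str:
    """Apply the most specific rule whose condition matches the case."""
    applicable = [r for r in rules if r.condition(case)]
    if not applicable:
        return "no prescription"
    return max(applicable, key=lambda r: r.specificity).verdict

# A general prohibition and a more specific exception (illustrative only).
rules = [
    Rule("do not harm", lambda c: c.get("harm", False), "forbidden", specificity=1),
    Rule("harm permitted in self-defence",
         lambda c: c.get("harm", False) and c.get("self_defence", False),
         "permitted", specificity=2),
]

print(decide({"harm": True}, rules))                        # -> forbidden
print(decide({"harm": True, "self_defence": True}, rules))  # -> permitted
```

The point of the sketch is only that the same general rule yields different verdicts once more specific cases are taken into account, which is the kind of flexibility a fixed moral target lacks.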
“Do you know of any recent literature on X?”
This post is part of my series of suggestions for younger colleagues and students. I am collecting here all the pieces of advice I would have loved to hear while I was in their position…