Teaching ethics to a machine? You need some casuistic reasoning

Marcus Arvan convincingly argues in this article that, when programming an AI, you can produce psychopathic behaviours both if you decide not to teach it any moral target AND if you decide to teach it fixed moral targets.
Instead, you need to teach your machine some flexibility. Thus, I would argue, moral reasoning for machines needs to be shaped by a large set of cases and by rules dealing with specific cases. Can the Mīmāṃsā case-based application of rules help here?
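
To make the idea a little more concrete, here is a toy sketch of my own, purely illustrative: it is neither Arvan's proposal nor a faithful formalisation of Mīmāṃsā hermeneutics, and all the rules and situation labels in it are invented. It only shows the general flavour of "case-based application of rules": a general prescription holds by default, but a more specific prescription overrides it when its conditions are met, and the machine suspends action when no rule applies.

# Purely illustrative sketch (Python): a general rule is overridden by a
# more specific rule matching the same situation, in the spirit of the
# Mīmāṃsā idea that specific prescriptions override general ones.

def applicable(rule, situation):
    # A rule applies if all of its conditions hold in the situation.
    return all(situation.get(k) == v for k, v in rule["conditions"].items())

def decide(situation, rules):
    # Pick the most specific applicable rule (the one with most conditions).
    candidates = [r for r in rules if applicable(r, situation)]
    if not candidates:
        return "no prescription: suspend action and ask for guidance"
    most_specific = max(candidates, key=lambda r: len(r["conditions"]))
    return most_specific["prescription"]

# Hypothetical rule base: a general duty plus an exception for a more specific case.
rules = [
    {"conditions": {"someone_in_danger": True},
     "prescription": "help the person in danger"},
    {"conditions": {"someone_in_danger": True, "helping_endangers_others": True},
     "prescription": "call for specialised help instead"},
]

print(decide({"someone_in_danger": True}, rules))
print(decide({"someone_in_danger": True, "helping_endangers_others": True}, rules))

Run on the first situation, the sketch returns the general prescription; on the second, the more specific rule wins. The interesting (and hard) part, of course, is how the rule base itself gets built and revised, which is where the large set of cases comes in.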