Marcus Arvan convincingly argues in this article that, when programming an AI, one can produce psychopathic behaviours both by deciding not to teach it any moral target AND by deciding to teach it rigid moral targets.
Instead, you need to teach your machine some flexibility. Thus, I would argue, moral reasoning for machines needs to be calibrated against a large set of cases and through rules dealing with specific cases. Can the Mīmāṃsā case-based application of rules help in such cases?
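To make the "rules dealing with specific cases" idea a bit more concrete, here is a minimal sketch (in Python, with rule names and case features invented purely for illustration, not drawn from any actual Mīmāṃsā text): a general prescription applies by default, and a more specific rule dealing with a particular case overrides it, roughly in the spirit of the principle that the specific defeats the general.

```python
# Toy "general rule + specific exception" resolver.
# Rule names and case features are hypothetical, invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    prescription: str                                # what the machine ought to do
    conditions: dict = field(default_factory=dict)   # case features required for the rule to apply

    def applies_to(self, case: dict) -> bool:
        return all(case.get(k) == v for k, v in self.conditions.items())

    def specificity(self) -> int:
        # a rule mentioning more case features counts as more specific
        return len(self.conditions)

def decide(case: dict, rules: list[Rule]) -> str:
    """Apply the most specific applicable rule: the specific overrides the general."""
    applicable = [r for r in rules if r.applies_to(case)]
    if not applicable:
        return "no prescription applies"
    best = max(applicable, key=Rule.specificity)
    return f"{best.prescription} (by rule '{best.name}')"

rules = [
    Rule("general-truth-telling", "tell the truth"),
    Rule("protect-from-harm", "withhold the information",
         conditions={"disclosure_causes_harm": True}),
]

print(decide({"disclosure_causes_harm": False}, rules))  # tell the truth (general rule)
print(decide({"disclosure_causes_harm": True}, rules))   # withhold the information (specific exception)
```

This obviously captures only one small ingredient; the hard part is assembling and fine-tuning the large set of cases.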
Comments and discussion are welcome. Please be sure you are making a point and contributing to the discussion.
What interests me is how learning projects designed in this way already assume that an anthropocentric ordering of reality ought to be privileged. With information increasingly looking like a physical quantity, we are perhaps entering an era in which self-replicating machines might do a much better job of knowledge transmission than humans. The ethical question at the centre of this debate perhaps depends on whether or not information will suffer the same fate as the universe. In other words, will information survive when all mass and energy are ‘exhausted’? If so, there appears to be a good moral argument for humans to develop such machines as expediently as possible, rather than attenuate their development for some other unspecified reason(s).
For clarification: one is a concept of information and the other is the concept of a mechanical device; the former is the program and the latter is the machine. Biological organisms are machines with programs encased in physical bodies. The information carrying the programs for development, growth, and repair is at the level of the DNA. The information carrying the program for seeking food, evading predators, and finding a mate is more flexible and is contained in the connectivity of the brain. “Simulated” programs can copy the “behavior” of an animal. For the programs to be “real”, the system must have a body to protect and nourish (it has to have “skin in the game”). Evolution has already found the solution, because here we are, “the biological machines”.
Just as we have made aircraft that fly better than birds, we have made all sorts of other machines that do better than their biological equivalents. The Predator drone can hunt the enemy much better than any human. The same goes for programming behavior. Moral reasoning can be programmed in a machine which can easily outperform a human machine. Go is a game requiring flexibility, and an AI program beat a human champion in October 2015. Rules of Mīmāṃsā can easily be put into a flexible program. The AI may even point out the faulty reasoning, if any, that may be embedded in the system.
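As a rough illustration of how a program could "point out faulty reasoning", here is a hedged sketch (Python, with invented rule names and case features, not a real Mīmāṃsā formalisation) that simply flags cases in which the encoded rules both oblige and prohibit the same action:

```python
# Toy consistency check: flag cases where the rule base both obliges and
# prohibits the same action. Rules and case features are invented for illustration.

def conflicts(rules: list[dict], case: dict) -> list[tuple[str, str]]:
    """Return pairs of rule names that oblige and prohibit the same action in this case."""
    active = [r for r in rules
              if all(case.get(k) == v for k, v in r["conditions"].items())]
    found = []
    for r1 in active:
        for r2 in active:
            if (r1["mode"] == "obligatory" and r2["mode"] == "forbidden"
                    and r1["action"] == r2["action"]):
                found.append((r1["name"], r2["name"]))
    return found

rules = [
    {"name": "keep-promises", "mode": "obligatory", "action": "disclose",
     "conditions": {"promised": True}},
    {"name": "avoid-harm", "mode": "forbidden", "action": "disclose",
     "conditions": {"disclosure_causes_harm": True}},
]

print(conflicts(rules, {"promised": True, "disclosure_causes_harm": True}))
# -> [('keep-promises', 'avoid-harm')]: the program itself flags the tension
```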
Thank you for this interesting reply. I am skeptical about AI’s ability to outperform humans in the case of ethical decisions, since these require a lot of fine-tuning. There is a lot of implicit reasoning involved, and I am not sure we have reached the level of formalising such reasoning and creating corresponding algorithms yet (at least, the colleagues I have met at conferences don’t seem to have reached this level yet).
AI is used to calculate celestial mechanics in order to target an orbit around Mars. It would have been nearly impossible to do this in the pre-calculus era, before the days of Newton and Leibniz. I completely agree with you that we (humans) have not yet adequately formalized the reasoning that drives ethical decisions. Our current state is similar to that “pre-calculus” era. Once we formalize the reasoning behind ethical decisions, AI would certainly come in handy to make the process more efficient. However, there is one other possibility: that of collaboration (between AI and humans) to allow us to formalize the ethical reasoning process in a novel way. In a more rudimentary fashion, the internet is already assisting us (humans) by linking vast numbers of brains across the globe.
My answer to the question “Can the Mīmāṃsā case-based application of rules help in such cases?” is: maybe in some specific cases, but we cannot generalize. The negativity around AI is also rather hyped nowadays.
Of late, there has been a misconstrued notion of AI. People often confuse automation with intelligence. Typically, AI deals with two broad areas – learning and reasoning. While learning happens through number-crunching exercises, which are a specialty of computers per se, reasoning is mainly borrowed from logic, which has its roots in philosophy. Presently, the logic followed in AI derives from Aristotle (https://plato.stanford.edu/entries/logic-ai/).
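For instance, the kind of rule-based reasoning inherited from Aristotelian logic can be shown in a few lines (a purely illustrative sketch; real reasoning systems are far richer): a general rule plus a particular fact yields a conclusion by forward chaining.

```python
# Classic syllogism as forward chaining (illustrative only).
facts = {("man", "Socrates")}        # particular fact: Socrates is a man
rules = [("man", "mortal")]          # general rule: whatever is a man is mortal

def forward_chain(facts, rules):
    derived = set(facts)
    for premise, conclusion in rules:
        for predicate, individual in list(derived):
            if predicate == premise:
                derived.add((conclusion, individual))
    return derived

print(forward_chain(facts, rules))
# -> {('man', 'Socrates'), ('mortal', 'Socrates')}
```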
What we could perhaps do is see how Indian philosophy could help in the reasoning process. There has been some work in these areas, but it has yet to catch on in full spirit. My personal opinion is that Indian philosophy, especially Mīmāṃsā, could be useful in certain areas such as reasoning with instructions and legal systems. But moral and ethical reasoning – I’m not sure.