<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Teaching ethics to a machine? You need some casuistic reasoning &#8211; elisa freschi</title>
	<atom:link href="https://elisafreschi.com/2018/08/07/teaching-ethics-to-a-machine-you-need-some-casuistic-reasoning/feed/" rel="self" type="application/rss+xml" />
	<link>https://elisafreschi.com</link>
	<description>These pages are a sort of virtual desktop of Elisa Freschi. You can find here my cv and some random thoughts on Sanskrit (and) Philosophy. All criticism welcome! Contributions are also welcome!</description>
	<lastBuildDate>Wed, 22 Apr 2026 19:06:15 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
		<item>
		<title>Teaching ethics to a machine? You need some casuistic reasoning</title>
		<link>https://elisafreschi.com/2018/08/07/teaching-ethics-to-a-machine-you-need-some-casuistic-reasoning/</link>
		<comments>https://elisafreschi.com/2018/08/07/teaching-ethics-to-a-machine-you-need-some-casuistic-reasoning/#comments</comments>
		<pubDate>Tue, 07 Aug 2018 14:44:00 +0000</pubDate>
		<dc:creator>elisa freschi</dc:creator>
				<category><![CDATA[deontic]]></category>
		<category><![CDATA[Mīmāṃsā]]></category>
		<category><![CDATA[Marcus Arvan]]></category>
		<guid isPermaLink="false">http://elisafreschi.com/?p=2809</guid>

				<description><![CDATA[Marcus Arvan convincingly argues in this article that when programming AI you can produce psychopathic behaviours both by teaching no moral targets at all and by teaching fixed moral targets. Instead, you need to teach your machine some flexibility. Thus, I would argue, moral reasoning used for machines needs to be [&#8230;]]]></description>
					<content:encoded><![CDATA[<p>Marcus Arvan convincingly argues in <a href="https://medium.com/@marcusarvan/were-programming-a-i-to-be-psychopaths-and-how-not-to-388908d18097" rel="noopener" target="_blank">this</a> article that when programming AI you can produce psychopathic behaviours both by teaching no moral targets at all and by teaching fixed moral targets.<br />
Instead, you need to teach your machine some flexibility. Thus, I would argue, moral reasoning used for machines needs to be adjusted through a large set of cases and through rules dealing with specific cases. Can the Mīmāṃsā case-based application of rules help here?</p>
]]></content:encoded>
]]></content:encoded>
			

		<wfw:commentRss>https://elisafreschi.com/2018/08/07/teaching-ethics-to-a-machine-you-need-some-casuistic-reasoning/feed/</wfw:commentRss>
		<slash:comments>5</slash:comments>
				<post-id xmlns="com-wordpress:feed-additions:1">2809</post-id>	</item>
	</channel>
</rss>