2007/10/01

AI vs. Ethics

《Artificial Intelligence: A Modern Approach, 2/e》p.36:

Omniscience, learning, and autonomy

We need to be careful to distinguish between rationality and omniscience. An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality. Consider the following example: I am walking along the Champs Elysées one day and I see an old friend across the street. There is no traffic nearby and I'm not otherwise engaged, so, being rational, I start to cross the street. Meanwhile, at 33,000 feet, a cargo door falls off a passing airliner, and before I make it to the other side of the street I am flattened. Was I irrational to cross the street? It is unlikely that my obituary would read "Idiot attempts to cross the street."

This example shows that rationality is not the same as perfection. Rationality maximizes expected performance, while perfection maximizes actual performance. Retreating from a requirement of perfection is not just a question of being fair to agents. The point is that if we expect an agent to do what turns out to be the best action after the fact, it will be impossible to design an agent to fulfill this specification---unless we improve the performance of crystal balls or time machines.

...

This line of argument reads exactly like consequentialism in ethics! In particular, "actual outcome" is precisely the term Professor Sun uses, so it really jumped out at me XD.
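To make the quoted distinction a bit more concrete: the rational agent picks the action with the best expected outcome given what it knows, not the action that turns out best in hindsight. Here is a minimal sketch of that idea in Python for the street-crossing example; the probabilities, utilities, and names are made up purely for illustration, not taken from the book.

def expected_utility(action, model):
    """Expected performance of an action: sum of probability * utility over outcomes."""
    return sum(p * u for _outcome, p, u in model[action])

# Toy decision problem: cross the street to greet a friend, or stay put.
# Each action maps to (outcome, probability given the action, utility) triples.
model = {
    "cross": [("greet friend", 0.999999, 10),
              ("hit by falling cargo door", 0.000001, -1_000_000)],
    "stay":  [("miss friend", 1.0, 0)],
}

# Rationality maximizes *expected* performance, using only what is known beforehand.
rational_choice = max(model, key=lambda a: expected_utility(a, model))
print(rational_choice, {a: expected_utility(a, model) for a in model})
# "Perfection" would instead need the actual outcome, which is known only after the fact.

With these invented numbers, crossing still has the higher expected utility (about 9 versus 0), even though in that one freak run staying put would have maximized actual performance, which is exactly the point of the obituary joke.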

--
I should write up a short blog post about AI within the next few days XD.


Blogger yen3 10/01/2007 5:45 pm said:

A course I've been taking recently said that functional programming is what's used to explain AI; I'm not sure whether that's right.

 
Blogger Josh Ko 10/02/2007 12:13 am said:

That's an odd claim. Is there a more detailed explanation? XD

 
Blogger yen3 10/02/2007 1:11 am said:

I haven't got the textbook yet; I'll tell you once I have it.

 
