Suddenly everyone is talking about artificial intelligence (AI), and inevitably with both alarm and pushback. This seems strange, given that Turing's original paper was published back in 1950. But there is a question beyond even that of when AIs might develop agency (the experts differ widely on that one): what motivations could an 'intelligent' machine have?

We are often warned that machines would lack human emotions like empathy or compassion. No doubt, but by the same token they would also lack hatred, jealousy, greed, envy, and all the other feelings that make us nasty to one another. Put simply: without emotions, what motivation would an AI have to do anything in particular without being told to (which is what agency means)? The usual answer is that it would be perfectly rational, but what does that mean? When Hume wrote in the 18th century that reason (nowadays more often called 'rationality') must be 'the slave of the passions', he restricted rationality to the discovery of truth and falsehood. We usually interpret it more widely than that, to include judgment on what to do with the truths we find, but inevitably emotions of one sort or another guide that judgment. So what could guide a machine's judgment? Some might suggest accord with the principles of the universe, but if the universe is destined to end run down and dead, where is the rationality in that? Perhaps, then, Kant's Categorical Imperative, derived by reason alone?

Sometime soon we need to think again about rationality if we want any idea of what a machine with agency (and judgment) can even do.
