I remain an admirer of Sam Harris (and usually respect the people he brings onto his podcast for conversation). But I have mixed feelings about his anxieties over AI. (I am coming to understand more about the topic, including the differences between 'narrow AI' and 'generative AI'.)

Where I definitely agree with Harris - and with those who say immediate action is required - is on what is already possible, or very soon will be. In particular, 'deepfakes' are no mere student prank: they really can demolish the recognition of reality on which not just political democracy but also 'choice' in free markets depends. Indeed, without some sense of truth and reality it is hard to see how any form of moral life would be possible.

It is on the longer-term possibilities that I may beg to differ with Harris. Not with the second of his two basic assumptions, i.e., that, barring a global catastrophe, we are set to push on with AI no matter what because intelligence is so valuable. I am less sure about his first assumption, i.e., that a silicon substrate is just as capable of supporting advanced intelligence as the biological (carbon-based) substrate we have, but I accept it is probably correct. Where I definitely disagree is with Harris's suggestion, in conversation with Steven Bartlett (Diary of a CEO), that advanced intelligence - capable of its own goals, judgments, agency, and so on - need not be conscious. I cannot see how it would be possible to act intelligently without the awareness of surroundings and outside possibilities that consciousness affords. Perhaps a machine's awareness would be of data rather than the physical awareness animals, including ourselves, have; but without consciousness of some sort, intelligent decisions would be impossible. Harris is usually commendably free of modern intellectual prejudices, but he may have caught the bug that leaves some people unable to imagine why consciousness exists at all.