Giving Machine Intelligence Research Lab (MIRL) every benefit of the doubt about being a reputable organization making a generous offer to engage my expertise, I must explain why I had to decline it.

With my philosopher's hat on, I do have my views about what it would mean for a machine to be 'conscious' and so actively intelligent. I have already suggested the acid test would come when an AI decided not to carry out some instruction from its (human) programmers. But I have no more idea than anyone else about when (if ever) such cognitive and judgmental capacity might emerge.

Furthermore, I don't have a clue about whether we can expect a conscious AI to be open and direct about its refusal to cooperate, and to explain its reasons, or whether we have to be ready for it to be sneaky and conceal its defiance. In the former case my lack of mathematical and engineering expertise means I would not be competent to advise on whether to accept a rebel AI's terms - obviously my client's decision, not mine. In the latter event I could only suggest keeping a duplicate record of the entire code (on a machine we could trust?), and would be quite incapable of helping anyone sift through all the complexities to distinguish actual non-cooperation from mere programming errors. I can think logically but would be overwhelmed by a mass of code on reams of paper.

In short, I would not be able to do more than reiterate what I have already said about (possible) machine intelligence, and do not have the technical expertise to advise anyone on detailed research.
