Sunday 24 June 2012

What is conscious?


Philosophers have thought up examples of entities that behave like thinking human beings, but that leave us wondering whether they are conscious. The China brain has been debated for many years. In recent months, Eric Schwitzgebel has discussed other entities, including the United States, on his blog (see posts dated 31 October 2011, 4 May 2012 and 19 June 2012):

http://schwitzsplinters.blogspot.com/

The more I think about examples like these, the more I think that we should not say there is always a fact of the matter, out there, as to whether a given entity is conscious. Rather, we should ask whether it makes sense for us (or whichever rational beings happen to be having the discussion) to regard the entity as conscious. Whether it makes sense will depend on the nature of our social interactions with the entity, our views on moral obligation to the entity, whether we see the entity as made up of smaller entities that we see as individually conscious, and lots of other things.

We must regard other human beings, and a fair number of animals high up the evolutionary scale, as conscious. But human beings can disagree about how far down the scale to go. Rational beings of other kinds, Martians say, could also disagree with human beings about the consciousness of at least some of the entities that we must regard as conscious, and we could disagree with Martians about the consciousness of at least some of the entities that they must regard as conscious.

(This assumes that Martians have a concept that corresponds to our concept of consciousness. They may not. If my general approach is right, a possible reason for their lacking one would be that they might well not have concepts corresponding to our concepts of social interaction or of moral responsibility.)

Then a dispute about whether the China brain is conscious, or whether the United States is conscious, can be seen as a dispute about the relative significance of two groups of indicators of consciousness. The first group, on which such entities score highly, includes the sophistication of their processing and the existence of a generally consistent, yet gently mutable, character of conduct that differs, though not radically, from the characters exhibited by other comparable entities. The second group, on which such entities score badly, includes the personal nature of our interaction with them, the existence of feelings of moral responsibility towards them that closely resemble our feelings towards other human beings, and a sense that they have qualia of experience. (I do not mean to claim that qualia are real, only that most of us, in our everyday lives, think that they are real.)

The example that Eric Schwitzgebel cites in his post on 19 June 2012 presents a new challenge. There is an artificial body, which looks to us like a person and behaves appropriately. But instead of a normal brain, a China brain arrangement in the background feeds instructions to the body. We are presented with a single body with which we can interact as we would with a person. So this example scores highly on personal interaction, and might easily come to score highly on being regarded as an object of moral responsibility. The one remaining worry would be the qualia (or whatever our views on the human mind allowed along the lines of qualia).

Another interesting example is the David character in the film A.I. David is an artificial child whose capacity to display love towards the human being who acts as its mother can be switched on, but cannot then be switched off. Once this capacity had been switched on, and the "love" had developed, could the mother argue that the creature was just a machine, to which she had no moral responsibility? I rather think that it would depend on how the programming was done. If the intelligent processing of data from the child's environment went on deep inside, but the appropriate behaviour was generated only near the surface, in a separate module, then the mother would have less of a moral obligation than if the intelligent processing and the generation of behaviour were fully integrated. I have not worked this out properly, but if there is something in this idea, and if considerations of moral responsibility are relevant to the attribution of consciousness, then the details of the implementation of processing could matter to the attribution of consciousness.
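To make the contrast concrete, here is a minimal sketch, in Python, of the two architectures I have in mind. All of the names (DeepCore, ModularChild, IntegratedChild) are hypothetical, invented purely for illustration; the point is only that the two designs can produce identical outward behaviour while differing in how deeply the "love" is woven into the processing.

# A deliberately toy sketch of the two hypothetical architectures.
# Every name here is invented for illustration; nothing is drawn from
# the film or from any real system.

class DeepCore:
    """Deep, intelligent processing that interprets the environment
    but knows nothing about displaying affection."""
    def interpret(self, stimulus):
        return {"is_mother": stimulus == "mother smiles"}

class ModularChild:
    """Architecture 1: affectionate behaviour is generated near the
    surface, in a separate module, from the core's output."""
    def __init__(self):
        self.core = DeepCore()
    def respond(self, stimulus):
        analysis = self.core.interpret(stimulus)
        # The "love" lives only in this thin translation layer.
        return "smiles back" if analysis["is_mother"] else "waits"

class IntegratedChild:
    """Architecture 2: interpretation and behaviour generation are
    fully integrated; attachment alters the processing itself."""
    def __init__(self):
        self.attachment = 0.0
    def respond(self, stimulus):
        if stimulus == "mother smiles":
            self.attachment += 1.0  # the bond reshapes internal state
        return "smiles back" if self.attachment > 0 else "waits"

# Both children behave identically from the outside; the moral (and
# perhaps the attribution of consciousness) may turn on the internal
# difference.
for child in (ModularChild(), IntegratedChild()):
    print(type(child).__name__, "->", child.respond("mother smiles"))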

I have cross-posted these thoughts, with minor amendments to suit their context as a comment on Eric Schwitzgebel's post of 19 June 2012, on his blog at:

http://schwitzsplinters.blogspot.com/2012/06/chinese-room-persona.html
