Philosophers and scientists have often found the phenomenon of consciousness to be mysterious, but perhaps not for reasons of difficulty of definition. Indeed, Tononi (2003: 253), for example, finds the question of definition almost ridiculously easy, as illustrated by his comment: ‘Everybody knows what consciousness is: it is what abandons you every night when you fall into dreamless sleep and returns the next morning when you wake up’. But definition may not be quite so simple if we find any reason to distinguish between different forms of consciousness, in the manner of Block’s (1995) distinction between ‘phenomenal consciousness’ and ‘access consciousness’ for example. Lycan provides an essential element in most definitions of ‘consciousness’ by saying that it consists in, or at least includes, subjective awareness of experiences which can then be held in memory, whilst drawing attention to the difficulty which conventional science has in explaining this. What provokes intellectual anxiety is the nature of phenomenal consciousness in particular; the fact that on waking up I have a personal, subjective experience of the room around me and of the world I see on looking outside the window. The so-called ‘hard’ problem of consciousness is identified by Lycan (1996: 1) thus: ‘It [the hard problem] has to do with the internal or subjective character of experience, paradigmatically sensory experience, and how such a thing can be accommodated in, or even tolerated by, a materialist theory of the mind.’ Thus the so-called ‘hard problem’ relates to understanding this subjective awareness only and not any other things consciousness might be capable of.

David Chalmers (2002: 50-2) interprets that difficulty in terms of the way that scientific inquiry is normally concerned with explaining functions. He says that the (relatively) ‘easy’ problems of consciousness, such as integrating information in the brain or producing verbal reports, still come under the heading of understanding functions (in these cases, cognitive functions and abilities), so that the standard scientific approach of specifying which mechanism performs each given function can still be employed. But apparently that is not so with the ‘hard’ problem of understanding experience itself. Another source of anxiety in professional circles probably arises from the perceived unpredictability of subjective thought and experience, which links to the traditional idea that the rational is also predictable.

That problem explains why many have felt driven to give up trying to understand subjective consciousness as a phenomenon like any other and, instead, take one or other of two contrasting positions, either of which means treating it as something quite extraordinary. One is the view attributable to many psychologists, psychoanalysts, and biologists from T. H. Huxley (1874) in the nineteenth century to date, according to which ‘consciousness’ has no causal efficacy or specific role in our functioning. That view, known as epiphenomenalism, appears supported by observation of animals’, including humans’, ability to function for many purposes, and perhaps all purposes, when deprived of conscious awareness or lacking it anyway.1 Studies by Libet et al. (1983), Keller and Heckhausen (1990), Haggard and Eimer (1999), and Trevena and Miller (2002) revealed the appearance of a ‘readiness potential’ in the brain before subjects became aware of deciding to perform an action, thereby seeming to show that conscious decision is not the cause of the action. More recently, the experimental evidence has been extended: different parts of the brain have been found to ‘light up’, again in advance of the subject’s awareness, according to which decision will be taken. (More will be said about this later.) Such evidence, combined with philosophical anxieties about admitting that the physical and neural domain might not be causally and explanatorily closed if consciousness were held to lie outside that domain of natural phenomena, can lead to the strange conclusion that our subjective awareness (consciousness of ourselves as agents) is some kind of illusion which serves no purpose in terms of our lives and existences. 
The question raised by that conclusion is not so much why such an illusion is maintained, since an already existing sense of (self-)awareness and responsibility might have become fitted to various social functions or even survive as a mere relic like the appendix, as why it could emerge in the natural world to begin with.2 The opposite view, exemplified by Immanuel Kant, has been to hold that subjective experience of the ability to choose our actions and patterns of conduct must be treated as a ‘transcendental category’. Kant recognised that otherwise we are left with saying that consciousness, and then the freedom to which our awareness of agency gives rise, arises from a purely ‘material’ basis which causes the subjective experience in the first place. If such causation is presumed to be thoroughly determinative of both the general nature and detailed behaviour of its effect, as Kant, following the scientific understanding of his time, believed it had to be, then it makes no sense to speak of anyone having an effective choice to act in any way different from that determined by the material, i.e., unconscious, basis. In that case it seems impossible to make sense of our freedom and moral responsibility (or our feeling of them), or of our ability to learn about and know the natural world itself. Accordingly, Kant suggested we simply have to make a brute acceptance of consciousness as transcendent, that is, as connected with that ‘noumenal’ world which we can understand only through pure reason (if at all) and which is a reality quite apart from the reality we observe and work with in our everyday lives.

Chalmers does not intend to take either of these radical positions by treating consciousness as strictly ‘transcendent’ in the sense of being understandable only through recognition of a reality other than the empirical universe, but he suggests that consciousness as experience be recognised as an irreducible phenomenon analogous to mass, space and time in physics, where no attempt is made to reduce them to anything else. Instead these are treated as fundamentals, and theories are then developed about them, explaining how they interrelate according to fundamental laws. Chalmers’ position resembles that of Tallis (2000), who argues that consciousness cannot serve an evolutionary function and that, instead, we should think of it as fundamentally ‘useless’ in the sense of being, like art, part of the Kingdom of Ends; that is, an experience in itself having no use or purpose in relation to anything else, and therefore irreducible. That kind of approach still has, however, a crucial problem. Consciousness (and freedom) is fundamentally different from other irreducible phenomena in the natural universe such as mass (or energy), space or time. So far as the universe we can observe or even think about is concerned, mass/energy, space and time are ubiquitous, and it is not even considered a meaningful question to ask what lies ‘outside’ them, or ‘before’ and ‘after’ them, or what they might be said to ‘consist of’. Any discussion of such questions leads directly either to the argument about the existence of a meta-cosmos, which is far from proven, or to the reintroduction of the transcendent and mysterious through positing some kind of supernatural Creator. The position with consciousness is very different since, so far as we can perceive, only living organisms are conscious to any degree whatsoever. 
All other phenomena in the universe are simply not conscious – that is to say, consciousness is not present within them and plays no part in their existence or behaviour.3 Any interpretation of consciousness as transcendent or even simply as fundamental and irreducible must also reckon with the way it is confined to certain phenomena only and, even amongst them, almost certainly present in varying degrees or levels.

The problem which arises with each of the two preceding views of treating consciousness as extraordinary and perhaps even bizarre does not appear with a sociological approach which Hunt (1995) has called ‘consciousness as society’. The very idea of a ‘collective consciousness’, drawn originally from Durkheim’s conscience collective, pointing as it does to the social aspects of what appears to be personal experience, such as religious observance, clearly can accommodate the notion of consciousness and therefore awareness, including self-awareness, as an evolved ability which connects the person to the world and makes sense on a naturalistic basis.

Unfortunately, this sort of sociological way of interpreting the phenomenon of consciousness meets difficulties of its own. This is especially the case when it is tied in with criticism of modern (principally Western, or so-called ‘liberal’) society and its apparent intellectual orthodoxies by such writers as Heidegger and Wittgenstein, and with the results of parapsychological research indicating the reality of a ‘social field’ which takes in the individual psyche. Again, these difficulties are basically philosophical in nature. First, there is a liability to confuse two distinct, though related, notions. It is true that most traditional (and some contemporary) cultures do not have an idea of individuality or self-identity. But that does not mean that they have no idea of privacy or ‘private’ life. For instance, ‘shame cultures’ typically allocate some bodily functions, and religious devotion within the domestic sphere, to a person’s private life and expect that these will not be displayed in public or outside the kin group such as family or clan. At the same time, this private dimension remains subject to communal imperatives; there is no idea of individuality or individual self-expression, and none would be tolerated. Moreover, the converse holds here: in a modern context like a liberal democracy, where many people are fiercely protective of their individuality and individual rights, they will readily join associations like religious communities, social groups, and so on, and may trust these with at least some ‘private’ or ‘personal’ details and information. Further, many people in modern societies will welcome a degree of surveillance of their personal activities by CCTV or speed cameras used by police and other public bodies for the sake of security and safety. (Whether their trust is justified that law-abiding citizens need not fear such surveillance is quite another issue.) 
Simply put, their individualities and individual awareness are in no way confined to their private experience. Second, and directly impinging on the Durkheimian conception of a collective consciousness, is the tendency to jump from recognition of the social nature of many activities, such as religious observance or even meditation, which people may appear to undertake on an individual basis, to supposing that modernity as we have experienced it since ‘modern history’ commenced in the late fifteenth century can provide a return ticket, so to speak, for those disenchanted with the modern experience and wishing to rediscover a (communal) meaning and experience for their lives. Such an assumption ignores the technological vector already present throughout human history and leading, quite naturally, towards modernity with all its mobility, dissonance, conflicts and transient experiences. Not the least of the components in that vector is the recurrent presence of warfare – itself at once an ultimate expression of social identity whilst readily changing or destroying communities at every level, and one of the principal driving forces behind technical innovation whilst being intimately connected with other such forces, including economic ambition. After more than half a millennium, there is not the slightest evidence to suggest that the journey into modernity can ever be taken backwards. On the contrary, recent events since the ‘Arab Spring’ have exposed bitter sectarian conflicts within the Islamic community, which so many, including sociologists such as Professor Gellner, had supposed to be more united than others and therefore perhaps capable of maintaining its traditions and cohesion alongside modernity. There now appears a danger of the Islamic world experiencing some equivalent of the bloody and brutal introduction to modernity experienced by the Christian world centuries earlier. 
When seen alongside the collapse of the socialist experiment towards the end of the twentieth century, the evidence seems to point to the arrival of modernity, and its associated concepts of individuality and self-expression, being irreversible.

The fact that each of the diverse responses to the ‘hard’ problem of understanding subjective experience carries deep problems of its own suggests a need for a different understanding of consciousness. The route I wish to explore for understanding how a consciousness, or conscious individual, could both have and make use of its subjective experience, instead of simply following non-conscious patterns of behaviour which seem to make subjective experience unnecessary, leads to proposing that consciousness be seen in terms of two levels. A distinction in terms of levels is more appropriate than treating consciousness as merely ambiguous, since the different aspects are all related both to awareness and to the capacity to be aware, pay attention, and so on. In fact a dual-aspect theory has been proposed already by Block (1995) in respect of consciousness itself. However, rather than distinguishing between two types of consciousness – ‘access consciousness’ and ‘phenomenal consciousness’4 – as Block has done, I attempt a distinction between the kinds of problems consciousness, or a conscious individual, can be seen as having to deal with.5 On that basis I suggest the two levels might be delineated as follows. Level I: it is frequently part of subjective experience to be aware of having no choice as to how to act – for instance, when avoiding oncoming traffic – so that the subjective awareness takes the form of a prompt to swift and necessary action. 
Flanagan’s (1992: 35f) attempt to address the problem of consciousness from a ‘materialist’ viewpoint incorporates this Level I consciousness, not only by arguing the negative point that ‘…thinking of it [subjectivity] as part of an immaterial world has proved to be an illusion [which] actually explains nothing’, but also, more positively, that evolutionary theory allows us to think of subjectivity as a special kind of sensitivity providing for quicker, more reliable, and more functional responses than a ‘less robustly phenomenological system’. Contrary to the notion that phenomenology is really a form of philosophical idealism, Flanagan holds that a system designed and built up from a physical basis can operate phenomenologically by generating the ability to experience what is happening in the world. That subjective experience or awareness which Block called phenomenal consciousness is already working to carry out a task or function on behalf of the conscious organism. However, in relation to clear-cut survival and other problems at Level I there is no need to ascribe any sort of ‘independence’ to the subject’s awareness, in the sense of her being able to decide what to do without the question being already settled by the situation alongside unconscious processes or capacities such as the sheer need to live. There is certainly no need in such cases to invoke anything that might be called ‘freedom’ or ‘free will’. Probably freedom can still be left out of account if the scope of consciousness is extended beyond that allocated to it by most cognitive psychologists to cover learning (including rote learning), on the lines of the ‘self-organizing consciousness’ (SOC) model advocated by Perruchet and Vinter (2003). 
In this case there is no immediate challenge to survival which would obviously leave no alternatives for action, but the learner is still following certain routines, such as the structure of the language she learns to speak and write in her community as well as the format of the learning process itself, whilst, on the SOC model, making use of subjective experience in order to learn. Perruchet and Vinter argue the SOC approach is more economical (simpler) than models which posit transfer of information manipulation to an unconscious level during learning and training, but they do not need to find room for choice between alternatives on the part of the learner (at least during the learning process itself). Perhaps even in cases when awareness is simply a pleasure or interest, such as may be found in activities like looking at scenery or bathing, freedom can be ignored, although if substantial resources are required for travel or a swimming pool, for instance, it already becomes hard to understand how we could come to have such demanding tastes if no other problem (requiring us to act in different ways from the ways we would have acted without the awareness) is being addressed thereby.

The part of the ‘hard’ problem most troubling of all to scientists and philosophers remains where problems I characterise as Level II arise: where our subjective experience includes the idea, feeling – call it what we may – that we have a choice between possible alternative courses of action (which is what freedom essentially comes to) and, indeed, may need to investigate the options so as to decide which one to follow. To make sense of this, rather than leaving it as either some bizarre illusion or else an intrusion from some other reality we struggle even to think about, let alone to know, means thinking of what the subjective awareness enables an organism to do (or feel) in addition to carrying out any specific functions, even those of an early warning system or learning. There is indeed no reason why (phenomenal) consciousness need be thought of as having only one set of functions or handling only one class of problems. At the same time, there need not be any problem about the ‘phenomenological fallacy’ here: the ability to describe things in our environment (to ourselves or to others) is simply an ability which conscious awareness provides, with no suggestion that the presence of the things being described depends in any way upon our consciousness of them. Any options or choices we may have would arise from the situation of things as we find them, not from a magical ability either to create them or to destroy them with our minds; that is, without working and arranging to carry out specific actions designed to change the things we find. But an essential feature of problems where alternatives have to be assessed and chosen between is that these will be, in some way, either unexpected or unfamiliar problems where no routine has been worked out, or ones where the conscious agent is not aware that one course of action is necessary and unavoidable. 
It may, of course, be in the latter sort of case that the agent is deluded or mistaken, and a certain course of action is indeed inescapable. A strict form of determinism will maintain that this is always so if the matter is taken back far enough, but on such a determinist view it is hard to see why we should be made aware of having no alternatives in some cases – indeed feel we are not troubled with the task of making a choice – whereas in others we seem to find there are choices to be made and alternatives to be assessed. But cases where alternatives appear to exist share with the unexpected or unfamiliar the characteristic of an absence, or limitation, of routine or predictable regularity. Simply to address problems of these kinds with a better prospect of finding a solution than expected from the sheer chance that performance of already existing functions, such as blood circulation, will turn out to be sufficient to deal with some novel challenge, means going beyond performance of ‘function’ as normally understood. It is only in comparatively unusual circumstances like a danger which is both immediate and unfamiliar (probably not the case when encountering a predator, for instance) that an unfamiliar challenge would not involve some kind of analysis or investigation to find whether alternatives exist which give rise to some kind of choice, and therefore freedom if the choice is effectively present. For purposes of assessing the choices available, and gathering information for that purpose, a person will need to be aware of them and to deal with them consciously.

Flanagan’s illustrations referred to earlier bring out the fact that the problem-solving for which subjectivity can be especially valuable is problem-solving of a special kind (Level II in my classification). With examples like that of reptiles’ sensitive detection of earthquakes enabling them to move above ground before disaster strikes, or our feeling pain which leads us to avoid injury, Flanagan highlights the ability to tackle, and tackle quickly, unpredictable situations and ones which may not occur commonly and/or regularly throughout an animal’s lifespan, so that unconscious ‘instinctive’ response may not be sufficiently prepared. Flanagan’s argument that subjectivity, and thus consciousness, is a phenomenological system offers a way to link the subjective and the physical, and this link can still hold even when alternative courses of action appear to be available (commonly the conscious individual herself selects the ‘best’ option according to her needs and is aware of doing so). But the more uncertain the ‘material’ (physical) environment itself, the less ‘predetermined’ such a choice is likely to be. Flanagan’s account need differ fundamentally from a phenomenological account such as that of Velmans (1996), who insists that my consciousness of pain is located just where I feel the pain to be and not in some state of my brain, only if it is held that my awareness of the pain necessarily requires that some state of my brain must exist as the cause of my awareness of the pain, determining that the awareness must then emerge. That is to say, a materialist account of subjectivity in consciousness differs fundamentally from a phenomenological account only if it is held that the physical processes which account for subjectivity and provide its origins continue to determine, and determine necessarily, the course of its subsequent existence.

A major argument here, impacting on the issue of freedom generally, has emerged through the findings of Libet et al. (1983) and others about ‘readiness potential’, or neural activity which precedes (apparently) voluntary action. The argument in this case is all about interpretation: for some thinkers the findings are evidence that even supposedly voluntary actions are caused and determined by unconscious processes in the brain, whilst Libet himself has proposed that ‘free will’ does not initiate neural activity but subsequently controls it.6 If the understanding of subjective awareness which I am proposing is sound, however, then a rather different interpretation of readiness potential from either of these may be suitable. The question which really matters for freedom is whether the neural activity preceding any action includes a specific instruction to do X rather than Y or Z, so that the ‘person’ is effectively not allowed any alternative to the action which she may have thought she had ‘decided’ upon, but then carried out. The recent experiments indicating differing neural activity according to which action, such as a wrist movement or button press, the subject will ‘decide’ upon might appear to imply such an instruction. Yet, although it is extremely plausible to suggest that the brain needs to do some preparation for the subject to be able to ‘decide’ what to do, even in an experimental situation where the subject has been forewarned that a (voluntary) decision is to be made, it seems stranger to then say that the neural activity preceding awareness of the decision includes a definite instruction when the subject also understands that nothing of especial importance to herself is at stake, and may even be left to make a ‘random’ choice. This is especially so since the subject’s most important decision, i.e., to take part in the experiment, has, of course, already been taken. 
But the entire sequence may become more understandable if, thinking of consciousness and the ability to make conscious decisions as a way to cope with uncertain situations where alternatives may need to be assessed, we suggest that unconscious processes tackle as much of a given problem as they can and then call in conscious awareness to, so to speak, take over. That might, in the experimental case, be merely to ensure that a (say) wrist action is performed accurately, so that it becomes the kind of action which, if it were part of a routine the person carries out regularly, might scarcely need awareness at all. But in other cases, like deciding on a career change or a religious or political issue, the action might require intense intellectual effort and concentration.

Where Level I problem-solving is concerned, such as an early warning system for prompt action or a way to organise a learning process, it can certainly be said that consciousness is carrying out a function, although not a function connected to a particular process like digestion, or even thinking as such, but one which can relate to any process or activity that an organism could be involved in, or to simple survival. It might even be said that consciousness carries out a ‘function’ of a peculiar kind when working on Level II problem-solving, although that has even less the character of a function in the ordinary sense of the term, as something carried out regularly by a dedicated mechanism. In that case, however, consciousness might be better thought of as a free-wheeling ability or capacity which will not always be needed or in use, but can come into its own at any time if circumstances change, especially in a sudden and unpredictable manner.

In principle, connecting each of these levels of problem-solving with consciousness could yield a set of hypotheses amenable to scientific testing. This includes Level II: specific predictions derived from the hypothesis that conscious action would be faster and more reliable in dealing with the unforeseen and unfamiliar, especially when alternative actions need to be considered, might be tested in relation to the ability to design protection against, or adjustment to, changes in environment more quickly than could be expected from simple biological adaptation. For both levels of problem-solving, what consciousness provides that unconscious processes alone would not is phenomenal awareness of the problem or task itself, so that specific attention is focused upon it. Perhaps such an interpretation ends by resembling Dennett’s (1993) conception of consciousness as a ‘virtual machine’ emerging from parallel activities within the brain but with no specific physical location therein, if it implies that consciousness emerges as an ability which (presumably) the brain can provide, i.e., an ability to be aware of whatever arises, but one which need not be identified with any physical part of the brain or its neurons. Does this help in making sense of what has often seemed mysterious, i.e., how anyone could believe that consciousness has a physical or ‘material’ origin and yet affords me a subjective experience, and perhaps even enables me to recognise problems and situations and then whether or not I have any choices about how to relate to them? Any answer to that question has to accommodate the fact that I do not feel my subjective experience need necessarily be the same as that of anyone else observing the same empirical phenomena – rightly or wrongly, I feel the subjective experience is, as Sartre pointed out, always mine. 
That could make sense even with a material (physical) origin of my ability to have subjective experiences, if each experience becomes a way to ‘take in’ the world around me, and to learn, or if need be, become aware of any unexpected and unpredictable changes in my situation and environment so that I can work out how to respond to them now. Otherwise my response is left to unconscious processes which can ultimately be expected, for example, to favour passing on any genes which I may have that are especially suited to the changed situation and leaving aside those genes which I may have that are not.


Such an interpretation seems to lead to the conclusions both that my consciousness is explicable in terms of natural phenomena, and is indeed caused by them, and yet that in some way and to some degree it operates independently of those causative factors which gave rise to it, i.e., it has causal efficacy in its own right. The only alternative would appear to be the viewpoint exemplified by Chalmers and Tallis, which leaves my subjective experiences as serving no ulterior purpose or function whatever, their sole rationale being the experience itself which they allow me. I do not claim to dismiss that view with certainty. But I contend that since consciousness is (as we believe) confined to living organisms, and ranges in scope and temporal extent amongst them all the way from perhaps totally absent – for instance, in at least single-celled organisms – to the complexity of human consciousness, it would be strange if it were to hold the irreducible and fundamental status that applies to such all-pervading aspects of the universe as space, time and mass/energy. The only way consciousness could be allocated a similar status would be through a teleological development leading to some kind of future revelation. Not only would that mean relying once again on a dubious conception of ‘progress’ in the development of life which humans currently show no obvious signs of fulfilling, but there is also no way we could know of that save through intuition or mystical experience, which we could not convey to those who have not shared that experience.

In the course of referring to consciousness as ‘independent’ I broached the idea of its having causal efficacy in its own right. This involves two separate issues, since there is no need for causes in the natural world to be conscious, and typically they are not. But for consciousness to have causal efficacy, it needs both to be aware of what it is doing and aims to do, and to be capable of carrying that into effect (at least in part). Otherwise, it can do nothing differently from what an unconscious process, where consciousness is not present, would do. But that, in turn, means that for it to have causal efficacy it must have some degree of independence from its own natural, and non-conscious, causes. On a thoroughly deterministic understanding of causation and of cause-and-effect relations, it would be nonsensical to insist both upon something having causal efficacy and, at the same time, upon its being caused by anything else, as appears with non-transcendental accounts of consciousness. But some recent developments in both scientific and philosophical thought may open a door to different ways of looking at consciousness itself. Developments in ‘fuzzy’ logic and systems include not only theoretical work since the 1960s, but also practical applications in the fields of intelligent robotics and computing as well as environmental control. Yet it is apparent from the scientific and philosophical literature that the question of whether these developments make it necessary to accept a more flexible notion of causation, or are compatible with retaining the position that causation is determinative in the sense of a cause determining its effect rather than simply bringing it into existence, remains unresolved. 
Owing to the nature of the practical applications of fuzzy logic and fuzzy systems, they are likely to be intimately involved in any process of creating an artificial consciousness or, indeed, any system capable of acting independently of its creators (or programmers) – if any such system is possible. However, as a matter of general principle, a condition of uncertainty, including in the natural world generally, would suggest that a looser conception of causation as such may be required, perhaps on the lines argued by Lewis (1986) in his analysis of causation. In that case the conception of causality might be restricted to specifying that one phenomenon or event P1 (the cause) must be followed by another P2 (the effect), without it necessarily being the case that P1 determines all features of the existence of P2, including throughout its subsequent development. In effect, this would mean a return to the view of Hume which many have thought discredited. But even if causality were to be so understood for the general case, that does not in itself demonstrate that consciousness acts independently in the world it finds, or that its non-conscious causes are not sufficient to explain why any conscious system acted the way it did in any specific case. All a looser conception of causality does is remove an apparently insuperable obstacle to making sense of consciousness as a free and independently active agent. Now, philosophical anxieties about accepting this sort of position, and about taking it that there can be causally and explanatorily open systems within which a new source of causal efficacy can enter into a chain of causes and effects, can make it tempting to avoid such an understanding of consciousness (at least as an active and independent agent in the world around it) by leaving it as an illusion or, at most, ineffectual for purposes of action. However, that in no way escapes from philosophical difficulties. 
Instead, the conception of consciousness as passive spectator or even actual illusion opens the question of how such a useless phenomenon, i.e., one with no function, could ever evolve or otherwise appear in the world. The problem with treating consciousness as an irreducible given but fundamentally useless, on the lines of Tallis’ thinking, is still that of understanding how it could acquire such an irreducible status when it is manifestly non-existent in most of the universe. Indeed, there is even a problem of that kind with the conception in evolutionary biology of genetic replicators as the irreducible given, since they do not exist in most of the natural universe, so that science is still struggling to unravel the process by which life can first come into existence – that is, how replication begins. But in the case of consciousness there is the additional problem of showing how it might be possible for non-material replicators to appear, unless consciousness can be seen as effectively aiding the organism which it accompanies, which in turn means it must be able to act at least in the Level I sense of an early warning system and learning mechanism. As we have seen, when alternative courses of action need to be assessed that inbuilt systems might not be equipped for, or when sudden changes in environment occur that an established teaching process might not anticipate, we appear to need a consciousness capable of Level II action (i.e., free choice and decision) to deal with them. In such cases especially, consciousness seems to be far from a mere spectator, being very much ‘useful’. This may not seem to compromise the principle, now generally accepted, that the physical world is causally closed, since consciousness is not itself a physical entity, even if it is generated and sustained by a physical entity (the brain).
But if consciousness is indeed useful to the organism which has it, then it can impact on its physical environment, which presumably means having causal efficacy there. So perhaps the principle of causal closure of the physical world needs to be qualified to some degree.

Giulio Tononi (2003) offers a valuable insight here by relating consciousness to a concept of integrated information. For this argument Tononi employs the definition of information given by Shannon & Weaver (1963) in mathematical terms as the reduction of uncertainty among a number of alternatives, so that

‘the occurrence of a particular conscious state reduces uncertainty because of a large number of alternative conscious states that it rules out…the information generated by the occurrence of a particular conscious state lies in the large number of different conscious states that could potentially have been experienced but were not.’ (Tononi 2003: 254).
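Shannon’s definition can be given a simple worked form: selecting one state from among N equally likely alternatives generates log2(N) bits of information, so the information grows with the size of the repertoire of states ruled out. A minimal sketch (the function name is illustrative, not Shannon’s notation):

```python
import math

# Shannon information of picking out one state from N equally likely
# alternatives: I = log2(N) bits. A toy illustration of the point that
# ruling out many alternative states is what generates information.

def information_bits(n_alternatives):
    return math.log2(n_alternatives)

print(information_bits(2))     # one coin flip: 1.0 bit
print(information_bits(1024))  # 10.0 bits
```

On this measure, the enormous repertoire of distinct conscious states Tononi describes would correspond to a large quantity of information generated whenever any one of them actually occurs.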

That notion can again be expected to help in understanding the arrival of conscious awareness on the scene after specific neural activity has begun (the ‘readiness potential’), for in cases where free choice is relevant conscious awareness might be seen as coming into play when uncertainty between alternatives is found; when there is no uncertainty the question becomes simply whether an unconscious process can act rapidly enough on its own, or whether at least Level I consciousness is still needed. Tononi points out that the repertoire of conscious states, in terms of sights, sounds, moods, and so on, available to a human being must amount to an enormous number of distinct possible states. But where the occurrence of any given conscious state is concerned, the associated information is not information in the abstract, but integrated information. In rare cases, notably split-brain patients, separate centres of consciousness can be generated within the same brain. But for each consciousness (in the usual case a single person with a single brain) its conscious state is experienced as an integrated whole which cannot be subdivided into independent, separately experienced components. The individual person cannot experience the sight of the words in a particular phrase independently of the experience of how they sound in her mind, and she cannot simultaneously think two different thoughts. Nor is there any such thing as a superordinate consciousness associated with the states of different people in different places, or a subordinate, partial consciousness.

Tononi then argues for a way to measure how much information is integrated within a complex set of elements: consciousness comes about when a large amount of integrated information is generated, with certain brain regions generating consciousness because they constitute a complex with a large repertoire of possible states. Whatever the possibilities of such an approach for scientific analysis (Tononi admits that precise absolute values for the size of the brain region complexes may never be obtainable), the idea of a link between consciousness and information defined in terms of a selection between alternatives can help to make the relation between consciousness and freedom comprehensible, simply because the concept of freedom itself, as usually understood, connects with information defined in that way. Freedom depends upon an effective choice between any alternatives for action that the agent may be aware of, with that awareness depending upon available information about the alternatives; i.e., knowledge of which alternative courses of action remain given the situation which is actually known to exist, rather than any other possible situations which might have existed instead. That makes clear that, despite the common notion that well-informed people are likely to obtain freedom, information need not lead to freedom. Complete information, for instance about mortal danger, might show precisely that no feasible alternatives for action exist, so that the person has no freedom – or, more accurately, that freedom is irrelevant to the situation. Yet even in a case of terminal illness, questions of freedom (and its moral acceptability) can arise in the form of a demand that a terminally ill patient be allowed the alternative of assisted suicide or voluntary euthanasia instead of waiting, perhaps in agonising pain, for death.
What information does in this, or any other, case is to delimit which alternatives, if any, are actually available, so that the person can know whether freedom applies in any effective way and, if so, in which way and how far it applies. Accordingly, information enables a conscious agent (that is, an agent capable of perceiving and processing the information) to work out what alternatives for action there are and then to try to decide between them.

At this juncture it is worth looking briefly at an argument put forward by Harry Frankfurt (1969) against defining freedom, as distinct from consciousness, in terms of the ‘Principle of Alternative Possibilities’ (PAP), although this is discussed somewhat more fully in 11. Compatibilism, Marxism, and Democracy. Frankfurt’s thesis is that in deciding whether some action was carried out freely (without compulsion) what matters is what the agent intended to do, rather than whether she had, or did not have, any alternative course of action available and/or complete information about the relevant situation. If the account of consciousness being given here is sound, then it would appear on the basis of Frankfurt’s interpretation of freedom to be necessary to say that consciousness and freedom are not generically connected. Clearly, as has already been said, these two concepts do not have to go together, since it is quite possible for someone to be aware (conscious) of having no freedom. Under conditions of oppression, such awareness can be commonplace. However, it would be much harder to make sense of the converse, i.e., to envisage an unconscious entity (even someone asleep) having freedom or being ‘free’. To say it could be aware of being so would be a flat contradiction. Accordingly, it appears necessary to accept that the notion of freedom is connected in some way with consciousness, even if the connection is a vague one. Moreover, the sheer fact that, presumably, an unconscious entity cannot be free (or it makes no sense to describe it as free), whereas a conscious agent can be aware of being either free or unfree, strongly suggests that freedom shares with consciousness a connection with information.
Although that may not be decisive in refuting Frankfurt’s view, Frankfurt almost certainly underplays the significance of information for the condition of freedom, a significance which the idea of alternative possibilities accommodates quite naturally. Even ‘classical compatibilism’, viz., the thesis that we are free provided we are not subject to direct compulsion of any kind, may well underestimate the importance of information in connection with freedom as normally understood. At the same time, thinking of freedom in terms of effective choice between alternatives helps to make clearer the relation between consciousness and information. One way to see consciousness is as an ability to take in and handle information not already at the disposal of unconscious processes. Theorists of evolution sometimes use the analogy between the genetic code and coded instructions for a computer, the instructions being made up from bits of information. This sort of analogy presents unconscious and automatic processes in an organism as working with their own store of information inherited from past generations or perhaps learned from the environment. But it also illustrates why consciousness can so often appear as a meaningless, if not silly, add-on unless it too is connected with information. Tononi’s point about integrated experiences shows consciousness in the role of organiser of information as well as receiver. But these conscious experiences cover aspects of a changing world which the store of unconscious information does not already have available, and often will not be able to acquire quickly enough without using consciousness first.
Such a procedure suggests that consciousness recognises situations which unconscious processes cannot, or can recognise only more slowly; if these situations require action, and especially if alternatives for action present themselves through conscious experience, then logically consciousness must be allocated some degree of independence of action from whatever unconscious (including physical) process may have generated it in the first place.


If these arguments are sound, they show the concepts of information and freedom to be both related to the concept of an alternative; that is, some situation or action which could exist or be carried out in place of the situation or action which does exist or is being carried out. Through the common link to information, as well as the customary idea that a free agent must be conscious, i.e., aware, in order actually to have (or use, or cope with) freedom, the idea of consciousness itself becomes linked to that of alternatives. This in turn may link to another subordinate feature of consciousness, viz., intelligence.

When asking what the nature of intelligence is, the mathematician and philosopher Turing asked himself why a human mathematician can make a judgment as to which proofs are interesting and important before beginning the task of actually working any proofs out, whereas a machine has to make its way blindly through them all. That thought links with the idea that intelligence may be defined (Beeson 1988: 211) as ‘the use of knowledge to restrict search’. That definition links two themes common to the varied definitions offered for ‘intelligence’, namely, the ability to collect and analyse information (central in the notions of business or military intelligence), and problem-solving capability. Following from these elements, the definition incorporates the idea of winnowing out alternatives in a process with three stages. First, raw information shows which conceivable possibilities actually exist, and which do not, and indicates which other possibilities could be brought into play by given courses of action, for instance by selection of criteria. Second, the information can then be acquired as knowledge. Third, an ability to interpret the knowledge – which a machine does not have – enables an intelligent thinker to eliminate some conceivable possibilities as absurd or unsuitable and leave a residue of alternatives that actually deserve consideration. The connection between this and consciousness (and also freedom) can be seen in the thought that the awareness which consciousness affords is necessary for making judgments, without prior instruction, about the relevance of particular items or sets of information for solving particular problems.
Inclusion of the ability to select amongst information without any prior programming – such as might be provided, in the case of mathematical proofs, by a sub-routine eliminating certain proofs from consideration, those being defined according to features previously selected by the programmer – seems to mark the difference between the human thinker and any machine so far invented. That difference extends beyond mere problem solving to thinking of any purpose behind the problem solving itself. That purpose has to come from the programmer, or those enlisting the programmer’s services, not from the machine.
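Beeson’s formula, ‘the use of knowledge to restrict search’, can be illustrated with a toy contrast (the function names and data here are hypothetical illustrations, not from Beeson): a blind search must examine candidates one by one, whereas the knowledge that the candidates are sorted licenses a binary search which discards half the remaining possibilities at each step.

```python
# Toy contrast between blind search and knowledge-restricted search.
# The 'knowledge' is simply that the list is sorted, which licenses
# binary search instead of exhaustive inspection.

def blind_search(items, target):
    """Examine candidates one by one; return comparisons used."""
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            return steps
    return steps

def informed_search(items, target):
    """Binary search over a sorted list; return comparisons used."""
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            return steps
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1, 1025))          # 1024 sorted candidates
print(blind_search(data, 1000))      # 1000 comparisons
print(informed_search(data, 1000))   # 7 comparisons
```

The restriction of search here is supplied in advance by the programmer; what the human thinker adds, on the argument above, is the ability to find and apply such restrictions without being instructed to.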

Now, intelligence is faster and more effective than a machine when it is either impossible to reduce the problem in hand to a sequence of routines, however complex, or to do so would involve a procedure so time-consuming that even the machine’s advantage in repetitious tasks is more than offset. In those situations either the solution to the problem is uncertain, or at least it is too complex for certainty to be established with a practicable degree of effort. At the same time, if we suppose conscious thought becomes faster and more effective than unconscious process or instinct when confronted with a problem unlike any which unconscious response or instinct is already prepared to meet, we may say that for anything called ‘machine intelligence’ to be possible, machines must become capable of such prior information selection and of addressing ‘unfamiliar’ problems, i.e., problems not anticipated by their programmers. Most probably, testing for and identifying machine intelligence would then involve finding out whether any machine proves capable of actually ‘deciding’ not to follow every aspect of its programmer’s instructions. Attempting to do that by experiment must be extremely difficult, owing to the intricacy of working out, for any large program, whether a machine’s apparent failure to carry out instructions is actually due to its making an independent ‘choice’ or ‘decision’ or is simply the result of programming error. Further, the experimenter would have to reckon with the theoretical possibility that any machine capable of conscious and intelligent action (on any customary understanding of intelligence, consciousness must be present for intelligence to become active) might be able to conceal its departure from its programmed instructions in some way.
In human life it depends upon particular circumstances whether a person is keen to display, perhaps defiantly, the exercise of independent or free choice against expectations and instructions, or instead tries to hide her lack of cooperation. It should be noted, however, that if machine intelligence understood in this way is ever to be possible, it will entail the machine being capable of acting in ways not determined (or anticipated) by its causes, i.e., its builders and programmers.7

Now, the factor of uncertainty is crucial in creating a Level II problem for consciousness, that is, one where possible alternatives need to be assessed, in any particular case. In many instances the most significant element in an environment which produces uncertainty, and which does so all the time and not just in periods of exceptional instability like earthquakes or rapid climate change, is the presence of other consciousnesses. For many purposes, including economic theory, it has been supposed that if people are rational their behaviour will be predictable, i.e., can be anticipated with certainty. Yet even if – or rather especially if – others are rational, they also will be capable of discovering different ways of assessing situations, making claims, thinking around difficulties, and so on; tasks for which unconscious instinct, or even the less conscious habits of cultural training, would be fortunate indeed to be prepared. Ironically, it is the more habitual and instinctive behaviour which affords predictability for study and analysis, and therefore for purposes of either individual perception of what may be expected from others or of information published on the basis of more formal study and analysis, with any presumption of rationality being confined to the observer. Scientific data (in contrast to scientific study) point in the opposite direction from a traditional assumption in philosophy, economics, and psychoanalysis alike, that the unconscious and irrational is also unsystematic and unpredictable whereas the conscious and rational is systematic and predictable.
Yet for purposes of understanding interaction between one consciousness and another, or others, and sometimes even for the individual person having to deal with other people in work or personal relations, it may be helpful to reverse that traditional assumption by treating the unconscious and irrational as predictable and needing less effort on the part of her own consciousness, as compared with the conscious and rational, which is capable of being unpredictable (especially in conditions of limited knowledge and information, so that the agent cannot necessarily be sure of what another rational consciousness may have in mind). In some circumstances, such as ethnic tension and conflict, the cause of peaceable social relations will be served by people abandoning specific forms of predictable behaviour, even if another kind of predictability is sought through making a settlement of the conflicts and disruption which they cause. Thus any permanent settlement in the Middle East will require the end of some certainties for Jew and Arab alike, so that each will have in some ways to break with predictable behaviour for the sake of peace. Yet when each consciousness – or group of consciousnesses – begins to make a more unpredictable environment for others, a sort of ratchet effect can appear in which the advantages of consciousness and intelligence, that is, of being capable of independent action, are increased for each as other consciousnesses come into play.

Language has been the general way for each human consciousness to communicate with others according to certain recognised rules. According to traditional ways of thinking language provides a link between rational communication and consciousness by affording this measure of predictability in dealing with others. Yet again predictability may not secure either rationality or harmony in social interaction. One obvious objection to Habermas’ response to modern conditions is that the theory of ideal communication does not of itself reduce the importance of deception, emotive appeals, and suggestion within many forms of communication, including those employing ordinary spoken and written language as well as other specialised languages like music or visual images. The very attempt to regulate advertising standards comes from the expectation that such emotive appeals and suggestion will occur and that these can turn into deception; this is predictable in general terms even if we cannot predict which particular examples of advertising will prove misleading (or truthful). What the objection on grounds of deception to an ideal communication theory of language draws attention to is that the criterion of rationality itself, when applied to communication, or indeed to social interaction between self and others in general, represents an attempt to make social interaction predictable in certain ways which are specified by the ethical imperatives behind rationality, such as truth telling. One of the functions which may be ascribed to the rules of rationality, including consistency of thought and argument and compatibility of belief with either (or both) observed facts or sound argument, is to provide a supplement to the rules of natural language in terms of bearing witness. Naturally the very idea of bearing witness means communicating to others, and claiming that they may trust the communicator. 
Now, we cannot necessarily predict that any given witness to a case will be truthful, but moral notions of honesty, together with fairness and justice in dealings with others, are bound up with efforts (including the law of perjury) to ensure that truthful witness will be more prevalent than deception. There are many ways to try to make one’s witness convincing to others, but rationality has the advantage over other kinds of personal conviction or charismatic appeal in that it allows those others to make their own independent assessment of the credibility of a witness, just because the rules of rationality include rules of evidence which do not rely upon any prior assumption of truthfulness on the part of the witness. None of this is to dispute the obvious fact that attempts by individuals and groups or organizations to short-cut the task of communication with non-rational appeals are commonplace, but if we reflect that rationality recognises the independence of consciousnesses other than the speaker or writer we begin to see why rationality has so often been sought, not least in the form of a critical and ethical standard. In cases where it is necessary for an individual or group to relate to others on an ongoing basis without mutually destructive conflict, it is likely to be especially important to be sure that communication is honest and truthful, which leads to a particular kind of reliability and predictability which the others may trust, and rationality assists in doing that. The point of this is to make conscious behaviour predictable, but only if intentions have already been stated, so that actions in accord with those intentions may be expected to follow. It is considerations like these which highlight the fact that even well-intentioned departures from truth telling, like ‘white lies’ intended to protect others from needless distress or fear, or concealment for the sake of public security, need to be handled with care.

This leads to an honest and rational form of predictability in conscious behaviour being itself part of an obligation, and sometimes contract, which may be due to others whereby restriction on freedom of action is accepted as a necessary part of relating to others, with those others being required to accept some restriction on their freedom of action in return. When such restriction on freedom of action is not needed, there is not the same need – from the point of view of others – for conscious behaviour to be predictable, and it may not be so in habitual ways characteristic of unconscious behaviour. On the contrary, the various forms of creativity and invention which by their nature display a measure of unpredictability (even when they are making use of existing knowledge) are frequently recognised to be beneficial to others and a wider society in addition to expressing the talents of the creative individual. When put in terms of reasoning of this kind, the outlines can be discerned even within existing moral traditions of what can be called a ‘social contract’ applying to each consciousness when it has to interact with other consciousnesses. Naturally, the limitation of freedom which is then implied will sometimes take the explicit form of actual contracts.

It may be seen that a need to relate to others that are also conscious does not run contrary to the condition that a consciousness has a measure of independence from its causes, namely that it is capable of doing things which the resources supplied to an unconscious entity by its causal predecessors, such as genetic programming, would not alone enable it to do. The only ethical exception to this might be such responsibility as children may be held to have toward their parents, that being the one case where a causal relation and a relation of (conscious) self to others coincide. Yet even in that case, if responsibility is seen as nullified by neglect or abuse, for example, so that it then depends also upon the reverse responsibility of parents toward their children, the reasoning begins with a quite different principle, one which operates like a cause needing to relate to its own effects. With unconscious phenomena it is perfectly possible for a cause to have a ‘feedback’ relation of some kind with its effects, as when a spring sustains plants which then help to maintain the soil and prevent the spring drying up. But it would, once again, be quite bizarre – except as a poetic metaphor – to think of the cause in such cases as being able to make any decision about its relationships with its effects. Yet in a case such as the relation of parents with children it is perfectly possible to think in that way, even as regards their decision to have the children at all. As regards both causes and effects, the factor of consciousness appears to be what introduces the element of choice and decision. As already said, Kant showed that if empirical causation entails determination of the effect, then independence of consciousness in relation to its causes is impossible unless consciousness is somehow transcendent in relation to the natural (empirical) world. But a more flexible understanding of causation might allow that conundrum to be avoided.
At the same time, it might also be possible to say that consciousness, including the independence which we seem to experience with consciousness, can evolve by natural selection just because it has natural causes of some kind whilst being in some sense, and to some degree, independent of, i.e., not determined by, them. There is no need to presume an exception to the basic principles of conservation in physics, since conscious thoughts, and then the actions prompted by those thoughts, require expenditure of energy. There can be no guarantee that we will solve the philosophical and scientific problem of understanding how a seemingly paradoxical development of consciousness with some causal efficacy is possible, let alone how it can occur regularly in the natural world. But if a machine consciousness that we could demonstrate to be such can ever be created, then that might be the best chance of understanding how the ‘hard problem’ of consciousness can have a solution without needing to invoke transcendence or dismiss our subjective experiences as a pointless illusion.


  1. T. H. Huxley (1874) describes the case of a French soldier with a serious neurological injury which Huxley believed to be comparable to a frog which had had the anterior part of its brain removed. And yet, like the frog, the soldier could carry on engaging in normal activities like eating, getting dressed and undressed, and going to bed at his usual time (cited in Kim 2007: 408). Perhaps Huxley’s argument – concluding that we are ‘conscious automata’ – may be challenged on the ground that such routine activities do require consciousness.
  2. Daniel Dennett (1990: 65) claims that avoiding question-begging is one reason to adopt materialism as one’s philosophical position. But if materialism is left begging the question of ineffectual and pointless illusion, it fares no better than idealist or theological accounts where consciousness is concerned.
  3. Naturally, a scientific viewpoint accepting that consciousness is confined to living organisms runs contrary to any philosophy or spiritual outlook which might hold that consciousness is present in all of existence. However, Kant – like the monotheistic faiths – would not interpret transcendence in that sort of way; saying rather that a transcendent consciousness occupies its own reality, and does not reside in inanimate objects.
  4. In terms of Block’s (1995) definitions, it is phenomenal consciousness which is experience and a P-conscious state has experiential properties. But someone has a state of access consciousness if ‘a representation of its content is (1) inferentially promiscuous…that is, poised for use as a premise in reasoning, (2) poised for rational control of action, and (3) poised for rational control of speech’. Conceivably if it were possible to have access (representational) consciousness only, without phenomenal consciousness, the ‘hard problem’ might not exist, but Block sees that as a theoretical possibility only with (probably) no actual instances even amongst cognitive disabilities.
  5. In the course of arguing that the term ‘consciousness’ is ambiguous, Neil C. Manson (2011) has cited the distinctions used by several theorists between people and animals being conscious (creature consciousness), a state of consciousness, and transitive consciousness (being conscious that something is the case). There are certainly important differences between these, and indeed the forms or levels of consciousness discussed in this essay, but all these senses are related to ideas of capacity for awareness (and awareness of), paying attention, or responding to (situations and surroundings), in contrast to the different usages of the word ‘rock’ (Manson’s own example) – in a geological sense or as a kind of music – which are quite unrelated.
  6. R. M. J. Cotterill (2003: 293) suggests that the observed temporal delay before conscious awareness of a decision to carry out an action arises because selection of an appropriate schema or schemata for any to-be-perceived stimuli involves a process of self-organisation on the part of the individual.
  7. Since the applications of fuzzy logic and fuzzy systems include intelligent robotics and artificial intelligence, the dimension of ‘fuzziness’ is likely to be intimately involved in any process of creating an artificial consciousness, or indeed, any system capable of acting independently of its creators or programmers – if any such system is possible.


  1. Beeson, Michael, (1988), ‘Computerizing Mathematics: Logic and Computation’, The Universal Turing Machine, Herken, R. (ed.), Oxford University Press, 211.
  2. Block, Ned, (1995), ‘On a confusion about a function of consciousness’, Behavioral and Brain Sciences, 18, 227-287.
  3. Chalmers, David, (2002), ‘Facing up to Consciousness’, Consciousness, Rita Carter (ed.), London, Weidenfeld & Nicolson, 50-54.
  4. Cotterill, R. M. J., (2003), 'Conscious Unity, Emotion, Dreaming, and Solution of the Hard Problem', The Unity of Consciousness, Axel Cleeremans (ed.), Oxford University Press, 288-303.
  5. Dennett, Daniel C., (1993), Consciousness Explained, Penguin Books. --- (1990), 'Homuncular Functionalism', Mind and Cognition, Blackwell, 63-77.
  6. Flanagan, Owen, (1992), Consciousness Reconsidered, MIT Press, Cambridge, Mass.
  7. Frankfurt, Harry, (1969), 'Alternate Possibilities and Moral Responsibility', Journal of Philosophy, Vol. 66, No. 23, 829-839. Frankfurt has subsequently outlined a theory of 'bullshit' and falsehood, thence moving on to truth, a concept not so obvious in his opinion as some might expect.
  8. Haggard, P. & Eimer, M., (1999), 'On the relation between brain potentials and awareness of voluntary movements', Experimental Brain Research, 126, 128-33.
  9. Hunt, Harry T., (1995), On the Nature of Consciousness, Yale University Press, New Haven and London, esp Part I, Sect 1; Part VII.
  10. Keller I. & Heckhausen H., (1990), 'Readiness potentials preceding spontaneous motor acts: voluntary vs. involuntary control', Electroencephalography and Clinical Neurophysiology, 76, 351-61.
  11. Kim, J., (2007), 'The Causal Efficacy of Consciousness', The Blackwell Companion to Consciousness, Max Velmans & Susan Schneider (eds.), 406-17.
  12. Lewis, David, (1986), 'Causation', Philosophical Papers II, Oxford University Press. First published in Journal of Philosophy, 70 (1973), 556-67.
  13. Libet, B., (1985), 'Unconscious cerebral initiative and the role of conscious will in voluntary action', Behavioral and Brain Sciences, 8, 529-66.
  14. Lycan, William G., (1996), Consciousness and Experience, MIT Press, Cambridge, Mass. & London.
  15. Manson, Neil C., (2011), 'Why "Consciousness" Means what it Does', Metaphilosophy, Vol 42, No. 1/2 (Jan 2011), 98-117.
  16. Perruchet, Pierre & Vinter, Annie, (2003), 'Linking Learning and Consciousness: The Self-Organizing Consciousness (SOC) Model', The Unity of Consciousness, Axel Cleeremans (ed.), Oxford University Press, 193-213.
  17. Shannon, C. E., & Weaver, W., (1963), The Mathematical Theory of Communication, Urbana, Ill., University of Illinois Press.
  18. Tallis, Raymond, (2000), 'Reflections on the Function of Art', The Raymond Tallis Reader, Michael Grant (ed.), Palgrave.
  19. Trevena, J. A. & Miller, J., (2002), 'Cortical preparation before and after a conscious decision to move', Consciousness and Cognition, 11, 162-90.
  20. Tononi, Giulio, (2003), 'Consciousness Differentiated and Integrated', The Unity of Consciousness, Axel Cleeremans (ed.), Oxford University Press, 253-265.
  21. Turing, Alan, (1992), The Collected Works, Vol 1, Darrel Ince (ed.), North Holland, Amsterdam, ---- (2004), The Essential Turing, B. J. Copeland (ed.), Oxford University Press.
  22. Velmans, Max, (ed.), (1996), The Science of Consciousness, Routledge, London & New York.