A definition of Consciousness

Suppose ‘consciousness’ could be defined as the ability of a ‘being that can learn’ to understand how it itself learns. On that basis it then has to ‘decide’ – a rudimentary notion in our perception of ‘consciousness’.

I think this explains, to a degree, our ‘personality’, which we use to direct our experiences in the world and thus our learning. We, in a manner, feed our learning what it likes best: pleasurable experiences.

If this is true, what does it say about the ‘Turing test’, and therefore about the way forward for AI research?

21 thoughts on “A definition of Consciousness”

  1. Maybe… I guess I was thinking more of a functional definition. A test can be constructed to show that if something can learn from experience (i.e. it’s a learning thing), and if it can then show that it can learn its own learning behaviour, or exhibit behaviour that would show control over its own learning (experiences), then this thing is conscious.

    I guess an awareness of experience.
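
    A minimal sketch of the kind of two-level test I mean, in Python. Everything here is an illustrative assumption (the class, the error-driven update, the rate-halving heuristic), not anyone’s actual model:

    ```python
    # Two-level test sketch: level 1, the thing learns from experience;
    # level 2, it observes its own learning and changes how it learns.

    class Learner:
        def __init__(self):
            self.estimate = 0.0    # level-1 state: a guess at a hidden target
            self.rate = 0.5        # level-2 state: how aggressively it learns
            self.last_error = None

        def learn(self, observation):
            """Level 1: ordinary learning from an experience."""
            error = observation - self.estimate
            self.estimate += self.rate * error
            self.reflect(abs(error))

        def reflect(self, error):
            """Level 2: 'learning its own learning behaviour' --
            if errors stop shrinking, alter the learning strategy itself."""
            if self.last_error is not None and error >= self.last_error:
                self.rate *= 0.5   # progress has stalled, so learn differently
            self.last_error = error

    # The test: feed it experiences, then check that *both* levels adapted --
    # the estimate (what it learned) and the rate (how it learned to learn).
    thing = Learner()
    for obs in (10.0, 10.0, 10.0, 10.0):
        thing.learn(obs)
    print(thing.estimate, thing.rate)
    ```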

  2. I’ve been out of the game for a while, but let me give this a shot…

    In relation to HOT (higher-order thought) theories, a fairly commonly accepted definition of consciousness in the social sciences is an awareness of one’s self (and here’s the clincher) *as an experiencing self*. At face value this is a trivial definition. But where one draws the distinctions between self, experience, and agency has significantly different outcomes – i.e. different methodological presuppositions about human nature produce vastly different theories.

    Many social theories have moved away from taking human nature as their point of departure – such approaches tend towards monism and neglect capacities for change (e.g. learning). Instead, they take human social relations as their point of departure. This is also largely reflected in the paradigm shift in soft computing technologies that contain more ‘social’ rather than ‘artificial’ intelligence.

    Asking questions about the nature of consciousness, the nature of knowledge and knowing, the nature of learning and doing, etc. is fundamentally important in so far as such questions reflect that the human universe is made up of discernible patterns of human action (which includes thinking and speaking) that humans themselves are able to manipulate. The ‘how’ or ‘why’ of our manipulating these things (e.g. the politics of X, where X = these things) is inherently social – it exists not in an abstract logic (rationality), but in patterns of intersubjective behaviour embedded within social contexts (i.e. temporal and spatial dimensions are vitally important, whereas abstract logic is largely independent of time and space).

    So, I guess my answer to your question is that the future of AI does not lie in a definition of consciousness, but in the way it understands the patterns in, and facilitates, human social relationships. Many decision-based models work on existing statistical principles. To have a single abstract and universal learning principle is, I think, impossible – it would be symptomatic of an oppressive homogeneity.

    Social networks have apparently been taken up by AI researchers even though they have very little credibility among social scientists (the models are too deterministic). I think these are interesting developments because they provide a simple model of the usually opaque social world and render it comprehensible to users. This usually entails a shift in the structure of the network as people adjust to what they see available as ‘options’… and so on – a *very* simple model of social change, a.k.a. learning (see the sketch below). Similar things apply to language – the history of how French came to be the national language in France is a fascinating example of such a network being ‘built’.
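
    To make that last point concrete, here is a very rough sketch in Python. Every name and rule is an illustrative assumption, not any particular researcher’s model: agents see a rendering of the network, tie themselves to the visible ‘options’, and thereby shift the structure everyone sees next.

    ```python
    # Sketch: once the opaque social world is rendered comprehensible,
    # people adjust their ties to the visible 'options', and the network
    # structure itself shifts -- a *very* simple model of social change.
    import random

    random.seed(1)
    N = 12
    ties = {i: set() for i in range(N)}
    for i in range(N):                  # start from a plain ring of acquaintances
        for j in ((i + 1) % N, (i + 2) % N):
            ties[i].add(j)
            ties[j].add(i)

    def visible_options(person):
        """What the rendered model shows this person: friends of friends."""
        seen = set()
        for friend in ties[person]:
            seen |= ties[friend]
        return seen - ties[person] - {person}

    for _ in range(50):
        person = random.randrange(N)
        options = visible_options(person)
        if options:
            # adjust to what is visibly 'available': tie to the best-connected option
            choice = max(options, key=lambda p: len(ties[p]))
            ties[person].add(choice)
            ties[choice].add(person)

    # The structure has shifted: acquaintance counts are no longer uniform.
    print(sorted(len(friends) for friends in ties.values()))
    ```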
