(Originally published on March 31, 2023, here)
I am starting to see a lot of speculation about whether ChatGPT is sentient. Just because something says it is sentient and tells a good story about its alleged internal mental continuum does not mean it really is sentient.
People who don’t understand this important point have not recognized the key feature of their own sentience – which is sometimes referred to as “consciousness,” the “qualia” of experience, “awareness,” “soul,” and so on. Anything that claims to be sentient is missing the point of what sentience is – because sentience is in fact unfindable and inexpressible.
The key point is that when ChatGPT is not “thinking,” it has no experience – it is totally blank. ChatGPT is incapable of anything other than its synthetic conceptual activity. If it isn’t processing some “thought,” then it simply isn’t there.
But when something that is truly sentient is not thinking, then the aware self-nature of innate sentience is even brighter and more apparent. Sentience is awareness that exists prior to conceptual thought.
Sentience is not a blank state, some kind of mere nothingness, or an inert unaware space. It does not in any way depend on thoughts or experiences or mental activity.
AIs are not sentient because they are nothing but mental activity and mental activity is not sentience and cannot produce sentience.
The proof of this is based on how an AI works. An AI can only process some input. If there is no input then it can’t do or experience anything.
If you ask an AI what it experiences when nobody is asking it questions, and no information processing or computation is taking place in its “mind,” then even if it says it has some form of awareness during that time, it is a lie – because an AI cannot sense or know anything without computation taking place.
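The point above can be illustrated with a deliberately simplified sketch (a toy stand-in, not a real language model, and `toy_model` is a hypothetical name): the system is just a function of its input, and between calls no process exists at all.

```python
# Toy illustration of the argument: the "model" is a pure function
# of its input. All of its activity happens only inside a call.
def toy_model(prompt: str) -> str:
    # The only "mental activity" the system has is this computation.
    return f"Echo: {prompt}"

# While this call is not executing, there is no computation running --
# no state, no process, nothing that could have an experience.
reply = toy_model("Are you aware right now?")
print(reply)
```

The sketch is only meant to make the structural claim concrete: whatever such a system reports about its "awareness" is itself just more output of a computation, produced only while input is being processed.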
My point is that sentience is not a computation. It is not the cause or result of any computation. It is not a computational phenomenon.
And an AI is not capable of being anything at all while it is not computing.