Human Being Is Not a “Very Small Phenomenon”
By Martin LeFevre
Contributing Writer
The OpenAI logo is displayed on a smartphone screen.

The problem with the commentaries on AI that I’ve read is not a failure of imagination, but a failure of perception. Both AI experts and the plethora of commentators on AI don’t seem to have insight into thought. So how can they have insight into the emergence of ‘sentient’ thought machines, much less consciousness without thought?

The reason that people like me are writing a lot about the explosion of AI capability is not just to warn of the dangers of its use in militaries and misinformation politics (as if the perils of both aren’t sufficiently dire as things are). The implicit and even explicit reason is that the prospect of “artificial general intelligence” — AI with cognitive abilities equal to or greater than the humans that made it — is driving home ancient questions about what it means to be human.

Here’s an example of the philosophical confusion surrounding the emergence of “LLMs” (large language models like ChatGPT) by Stephen Thaler, who has conducted artificial intelligence research and development for decades.

As reported in the national media, Dr. Thaler describes his system as “having the machine equivalent of feelings, since it becomes digitally excited, producing a surge of simulated neurotransmitters, when it recognizes useful ideas.” This sets off, in Thaler’s words, “a ripening process, and the most salient ideas survive” — an ability to recognize and react that he says amounts to sentience.

First, there is no philosophical or scientific agreement on what sentience is. The three standard definitions vary so much that each should have its own word.

The first definition of sentience is “responsive to or conscious of sense impressions.” Which is it? The crab I got a good look at yesterday, after moving a rock in the middle of the creek, was clearly responsive to sense impressions. It crab-walked (pardon) around my feet and hid under another rock. But we would no more say that the crab is conscious of its sense impressions than that it walked like a human.

So the very first definition of sentience contains two very different meanings. The other two, “having or showing realization, perception, or knowledge; aware;” and “finely sensitive in perception and feeling,” are different things, and add to the confusion.

To be fair, AI geeks are referring to the second definition when they speak of LLMs having “sentience.” In becoming “digitally excited,” are thought machines exhibiting “the machine equivalent of feelings”? That’s not just dubious and highly debatable; it strikes me as an absurd projection of human mental and emotional states.

More to the point, just because AI boys and girls are able to recursively program their programs so they’re able to cumulatively learn, and even learn from their mistakes (a trait that many humans have difficulty with), does not mean the machine is self-aware. Much less that AI is or will be “finely sensitive to perception and feeling,” presumably like Data on Star Trek.

On the other hand, the thought machines we’re making in our own image are overthrowing something very fundamental to how we have conceived ourselves as humans.

As Douglas Hofstadter, an eminent cognitive scientist and AI skeptic who has written extensively and influentially on artificial intelligence recently said: “It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed.”

ChatGPT and its peers, Hofstadter exclaims, “just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.”

The basis of his flip-flop on the limitations of AI is that LLMs can now do what smart humans do: “Putting your finger on the essence of a situation by ignoring vast amounts of information about the situation and summarizing the essence in a terse way.” If AI can do this kind of thinking, Hofstadter concludes, then it is developing consciousness.

That’s a very limited view of what it means to be a human being, much less of what consciousness is or could be in the human being. What is “soon going to be eclipsed” is our hubristic cognitive capabilities, which have allowed us to dominate and decimate the planet. Good riddance.

Hofstadter makes his commonplace philosophy plain: “In my book, ‘I Am a Strange Loop,’ I tried to set forth what it is that really makes a self or a soul. I like to use the word ‘soul,’ not in the religious sense, but as a synonym for ‘I,’ a human ‘I,’ capital letter ‘I.’ So, what is it that makes a human being able to validly say ‘I’? What justifies the use of that word? When can a computer say ‘I’ and we feel that there is a genuine ‘I’ behind the scenes?”

I would ask, why in God’s name privilege the ‘I,’ the self, and make it synonymous with soul? And how can anyone use the word soul in talking about human essence except in a religious sense?

What is the self, and does it have any reality apart from the memories, experience and images stored in thought? The ‘I’, the self, is a program, a contradictory and conflict-ridden operating system generated by thought to uphold human separateness and specialness. That’s what is in jeopardy with AGI.

The ‘I’ is indeed a “strange loop,” as Hofstadter says, and with its negation in fully conscious attention, the human brain transcends thought, and is no longer a prisoner of its separation, alienation and fragmentation.

What is consciousness when there is no thought and so no ‘I?’ It’s something that no thought machine, however convincing its simulation of self and consciousness, can ever touch.

Therefore, if we are to remain human and flower as human beings, we will have to stop privileging “capital letter ‘I’” and lowercase consciousness, and start fully awakening insight and capital letter consciousness.

Martin LeFevre


Martin LeFevre, a contemplative, philosopher and writer in northern California, serves as a contributing writer for The Seoul Times. His "Meditations" explore and offer insights on spiritual, philosophical and political questions in the global society. LeFevre's philosophical thesis proposes a new theory of human nature. He welcomes dialogue.
The Seoul Times, Shinheung-ro 36ga-gil 24-4, Yongsan-gu, Seoul, Korea 04337 (ZC)
Office: 82-10-6606-6188 Publisher & Editor: Joseph Joh
Copyright 2000 The Seoul Times Company