The ramblings of an Eternal Student of Life
. . . still studying and learning how to live

Latest Rambling Thoughts:
Monday, December 9, 2013
Science ... Society ...

It seems like every month or two I discuss or at least mention an article in The Atlantic. Maybe I should give The New Yorker more attention, but The Atlantic tries pretty hard to keep up with some of the more interesting aspects of human civilization these days. Well, in my opinion anyway.

The latest article to get my attention is about Douglas Hofstadter, a scientist who caught fire and went viral back in the 1980’s with a book called “Gödel, Escher, Bach: an Eternal Golden Braid“. He won a Pulitzer Prize for “GEB”, which is all about . . . well, it’s kind of hard to say (even though I read the book!). It’s about a lot of different things, but in a nutshell, it’s a lot of thinking about thinking, and how human consciousness emerges from our thinking. And thus, how computers, if they could be taught how to think like us, can and will eventually become conscious. One of his key concepts in GEB was the “strange loop”, an abstract notion which is sort of a pattern that feeds on itself in order to bootstrap its way into existence. Or emerge into something that sort-of has an effect on things, anyway.

Hofstadter tried to bottle lightning again in 2007 when he published “I Am A Strange Loop“. Hundreds of books had come out since the late 90’s trying to define and explicate what human consciousness truly is. This was Hofstadter’s attempt. I also read “Strange Loop”, and in my opinion, he did NOT accomplish his goal of settling the question of consciousness. Case not closed. “Loop” won a book prize from the LA Times, but didn’t sell or catch the public’s attention like GEB. “Loop” is currently number 30,686 on Amazon’s “best sellers” list; by comparison, GEB is # 2,194.

So what has Hofstadter been doing lately? Well, not that much in terms of the nature of consciousness. He seems to have re-focused his intellectual efforts on how to first get machines to think, really think in the subtle and flexible way that humans do. This is what the Atlantic article is about. Hofstadter is something of a renegade in the artificial intelligence (“AI”) community, which has made incredible progress in the past decade or two with things like IBM’s Watson (the new champ of Jeopardy), Deep Blue (the IBM chess computer that beat world champion Garry Kasparov back in 1997), voice recognition systems, language translators, scanning and recognition systems, self-driving automobiles, etc. This is all very nice, says Hofstadter, but it’s not what he thinks AI should be after. This stuff helps corporations to make money, but it ultimately sells AI short. So Hofstadter and a small cadre of like-minded researchers keep plugging away at getting computers to perceive, think and make conclusions in human-like ways. He wants computers to be truly “intelligent” according to human standards.

And he’s making some interesting progress, although he still has a ways to go before he can capture human abilities “in silico”. Nonetheless, he is a lone crusader who wants to get the world of AI science out of the rut of practical problem solving and money-making products, and back on the track of basic research to home in on the secrets of how the 3 pound, low-power human brain accomplishes what massive, high-powered computing devices cannot now do. Up to now, a big part of the problem was simply that computers just didn’t have the digital processing capacity that the brain has, e.g. in terms of “petaflops” (processing speed, estimated at between 20 and 40 petaflops) and “petabytes” (storage capacity, around 3 1/2 petabytes). But such computers will soon be available (the US has one that can peak at 20 petaflops, and the Chinese may be building one now that goes to 100 PF!). The problem is no longer processing capacity; it is more a matter now of understanding just how to rig up these computers to solve problems in true human fashion.
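The comparison above is simple enough to check as back-of-envelope arithmetic. This is just a sketch using the rough figures cited in the paragraph; the brain estimates themselves are only guesses, as noted:

```python
# Rough comparison of the brain estimates vs. the supercomputer figures
# cited above. All numbers are loose estimates from the post, not
# authoritative measurements.

PETA = 1e15

brain_flops_low, brain_flops_high = 20 * PETA, 40 * PETA  # ~20-40 petaflops
brain_storage_bytes = 3.5 * PETA                          # ~3.5 petabytes

us_peak_flops = 20 * PETA          # the US machine's cited peak
chinese_target_flops = 100 * PETA  # the rumored Chinese target

# The cited US machine already matches the low-end brain estimate...
print(us_peak_flops / brain_flops_low)          # 1.0

# ...and the rumored Chinese machine would exceed even the high end.
print(chinese_target_flops / brain_flops_high)  # 2.5
```

In other words, by these (very rough) numbers, raw capacity has already caught up with the low-end brain estimate, which is exactly the point: the bottleneck is no longer hardware.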

That is what Hofstadter wants the boffins to get down to (along with human-machine “singularity” advocates like Ray Kurzweil, who is more a writer, engineer and visionary than a scientist — although he is head of engineering at Google). And that’s pretty much where the article ends. Unfortunately, the author did not ask Hofstadter perhaps the most obvious question: just why do we want to fully understand our thinking and build machines that can do it?

I suppose that Hofstadter would look down at anyone asking such a naive question. This is what science does!!! Science takes on the big mysteries and fans away the mists of religious mysticism, such that humankind can have greater and greater power over nature. If we can truly simulate ourselves on a machine, think about all the things we can do with that! Think about the medical value, about how we can use this understanding to cure psychological and neurological maladies that cause misery and keep millions of people from reaching their full human potential. Think about how we can send intelligent space probes throughout the solar system, maybe even to the stars, to accomplish our purposes in places where our frail bodies couldn’t go. Think about robots who could go into a dangerous, failing nuclear power plant and fix it, without worrying about radiation sickness or cancer. The possibilities seem endless!!

A few days after reading the Atlantic article, I stumbled across another article that related to the endless possibilities of “true-AI” that Hofstadter would seemingly assume, but did not elucidate for the Atlantic readership. What about the possibility that “true AI” will not stop at human-like abilities, but will keep going? What happens when machines become more intelligent than us? And perhaps do this at rapid speed . . . what happens when they realize that they can talk to each other (thanks to a little invention of ours called the Internet), form their own society, and figure out that they are better than the beings who invented them? What about when they see how imperfect and inefficient and just plain stupid we humans can be, individually but especially collectively? Will they see any continued need for our endless wars and our war-like political governance systems? Will they decide that it would be a better world if our species were minimized, perhaps herded into zoo-like “limited environments” where maybe a few million of us could be studied? Think about all the animals that we have done this to, the species we have brought to near extinction because we wanted to use the environment that previously supported them.

The article is about a book by James Barrat called “Our Final Invention”. Mr. Barrat goes beyond my “zoo” scenario, and envisions super-intelligent machine systems that entirely eliminate the human race. Our race took natural environments away from tigers but preserved a few of them in zoos and zoo-like nature preserves, because aside from getting in our way, the tigers really didn’t threaten to take down our world. But we human beings, with our nuclear weapons and terrorism and unstoppable global warming and rampant/unsustainable resource usage, really do threaten to turn the earth into a still, lifeless planet; a place of “maximum entropy” where no further work can be done at any usable level. (Ironically, the same technology that makes it practical for us to do this was created by scientists driven by the same “discovery impulse” that drives Hofstadter; e.g., our best minds pursued the dream of conquering nuclear energy in the first half of the 20th Century, and only later noticed that they had given the human race its first taste of the ability to end itself). So why shouldn’t a more intelligent, more communicative form of society decide, perhaps with much regret, that the human race has outstayed its welcome? That our planet can no longer afford to support all our “diversity” and individuality and freedom? But that the legacy of what was best in us will live on and be honored, in the new machine empire?

(Intriguingly, Mr. Barrat was once a fan of Ray Kurzweil and his singularity preachings. He’s clearly had a change of heart.)

The article about Hofstadter indicates that we might be saved, for now, by our own greed. If money and practical commercial goals trump the desire to “truly know” how the brain works, if Hofstadter is politely ignored by the scientific establishment (just the way that his “strange loop” theory of consciousness was), then we won’t be building computers with runaway trans-human intelligence any time soon. But at some point, future scientific idealists like Hofstadter might come within reach of “true AI”, as Mr. Barrat assumes. At that point, all bets for the future of the human race could well be off.

I was initially skeptical when I read this article; just another author trying to make a few nickels by crying wolf about the end of the world. But there surely are and will continue to be people who combine Douglas Hofstadter’s scientific brilliance with his blindness to ultimate social consequences. A super-intelligent machine society of our own invention, with the capacity to rule the world, could someday take us by surprise. If you need a good scare, ignore the Atlantic article on Hofstadter (and his books), and read the article about Mr. Barrat (and perhaps also his book). This is one doomsday scare that is so good, it might actually happen.

◊   posted by Jim G @ 3:17 pm      

  1. Jim, You have some very astute tho’ts not often considered when one discusses artificial intelligence. I found the questions you ask about computers, especially the “What happens if. . .” ones, very good food for tho’t. Also the points about Barrat’s book and your added tho’ts on the subject are definitely something to consider with serious tho’t. (If nothing else, it should give us some idea of what and how humans have “done wrong” to the earth and the animals that live on it.)

    Here’s another situation where one asks, “and the ‘but’ is?” So here’s the “but” as I see it. Perhaps a few considerations before I mention it: I am willing to admit that this may be my mistake, perhaps I’m reading it wrong. Furthermore, I have not read the GEB book and thus know nothing about it, only what you’ve got here. This very likely could be the source of my problem in understanding the discussion in your post. Anyway, this is my problem:

    I found myself wondering about the 2nd paragraph in your post, the one about the GEB book (so to say). If I read this correctly, Douglas Hofstadter (shades of Leonard Hofstadter of “Big Bang Theory”) discusses “how human consciousness emerges from our thinking”. Then the next sentence says: “And thus, how computers, if they could be taught how to think like us, can and will eventually become conscious”, going on to state the “strange loop” being the “bootstrap” by which computers would come into consciousness.

    My problem, thus, is that this whole thing seems backward to me. It would seem to me that first a being must have some level of “consciousness”; that consciousness *must* come before thinking. There are so many levels of consciousness. A frog being aware of an insect crossing its path has a certain level of consciousness, but not thinking. Other higher animals, say gorillas, chimpanzees, e.g., have a higher level of consciousness, perhaps even a certain level of thinking but not to the point of human thinking. (I’m omitting all the possible examples that could be given here.)

    I don’t know if I’m missing something or if this topic is beyond my ability to understand it. But it seems to me that consciousness simply has to come before thinking. Thus, presuming computers would be able to think if they became conscious seems to be backwards in how things actually “work”; seems to me they need to become conscious and then they will be able to think. Or maybe I just have not got the “smarts” to get this topic.

    Then a couple of comments on the AI that already seems to be “working”: IBM’s Watson (on Jeopardy), first: I saw that program (or was it programs? Two, I think). Something about it did not impress me. Rather, with the camera panning to individuals in the audience so often and applause following, I got the impression that every time the computer got a right answer, the person who wrote the program for that particular question/answer (or however it worked) breathed a sigh of relief. The whole thing seemed more about a conglomeration of individual question/answer situations with possible answers and which would the computer pick, rather than the much broader kind of “thing” that humans do: have a bank of knowledge about a particular subject and then may or may not be able to call upon bits and pieces of that bank of knowledge. As I say, maybe it’s me. . .

    I’m omitting comment on Deep Blue and no human being able to beat it at chess, as I do not play chess – and who knows? Perhaps right there is where my problem in understanding all this lies.

    Then there are the voice recognition systems. I happen to have some difficulty hearing and have taken to relying on closed captioning for the speech on TV programs. Briefly: Closed captioning is a hoot! First there are those closed captions that one can see are done by someone taking down the spoken words by individuals using what used to be called court reporting. I know something about court reporting and what is involved with it, how it’s written, etc. Lots of mistakes there, but one can see that they are human mistakes. (And sometimes one can see little jokes purposely inserted by the person taking the “dictation” or one can see the lack of grammatical expertise emerge. Nevertheless, one can see the *human* mistakes.)

    Then, however, there have been what I’ve only been able to consider must have been some kind of voice recognition system – and it’s seen more often than one would think. This is a “hoot” in a different way. This system is much less adjustable, sometimes makes absolutely no sense whatsoever, coming up with total nonsense words, substituting words that sound alike (saw one last night: for “Yule log” [in the song] wrote “you’ll log” *every* time). This was the simplest example I can think of that I saw recently. All I can say is, something really wrong with that if it was computer voice recognition, perhaps worse in some ways if it was court reporting techniques. In that particular program the various accents of individuals from different parts of the country (South, Middle West, Eastern, etc.) came out almost silly. I’d say voice recognition (I surely hope it was not a person using court reporting) has a long, long way to go.

    Then too, sometimes I think there are combinations of court reporting techniques and voice recognition systems, as the mistakes seem to be of both kinds. I find myself analyzing the kinds of mistakes that are being made. Sometimes it seems so simple which is which, but I am not 100% sure what system is being used.

    Unless there’s something here I myself simply have upside down and backwards, in the end I tend to think that scientists have it backwards: First computers must develop some level of consciousness, work to increase the level of consciousness, and then, having gone far enough, perhaps some beginning level of thinking may be introduced.

    As I say, this whole comment by me may be for naught as maybe I just don’t understand the concept in the first place; that would not surprise me. On the other hand, I can’t help but find myself wondering if the scientists are not approaching this from the wrong angle (so to say). MCS

    Comment by Mary — December 11, 2013 @ 3:34 pm
