Wednesday, December 29, 2004

Welcome back, Kotter

Books acquired during the Christmas holiday, and the questions that inspired those purchases:
  • Consilience, by E. O. Wilson
    • How should we think about relationships between different disciplines? How do we get social scientists to stop feeling threatened by natural scientists and instead see them as working in complementary ways towards the understanding of the same big issues? Can we get researchers in all fields to quit denying the relevance of work outside their discipline to the questions they ask, and to stop the stupid turf wars about who gets to ask what questions? Can we get people to stop seeing outsiders as competitors, so that hopefully they'll quit writing letters to the editor like this?
  • Reflections on a Ravaged Century, by Robert Conquest.
    • Why did people kill each other so much over the past 100 years? How can we stop this in the future? Is there any hope of putting a stop to this cycle of death, evil, and destruction? See also "A Problem from Hell," by Samantha Power, for food for thought on the question of why genocide happens repeatedly even when we swear "never again." (I only read part of this latter book because I left it on an airplane.)
  • Intellectuals, by Paul Johnson.
    • What should be the role of social and political theorists in setting up our societies? Is it ever possible to see how society should be organized when looking down from the top? (This was in the same section of Borders as the above book. I had no idea if either of these books was any good, and just purchased them because they seemed to be about things I wonder about.)
  • Biophysics of Computation, by Christof Koch
    • OK, this one I just wanted because it's supposed to be important.
  • Earth Abides, by George Stewart
    • I've lately been fascinated by apocalyptic movies and writings. I see this interest as coming from obvious questions: What would life be like if we had to rebuild society from scratch? Just how volatile is our society? Couldn't it all be gone quite quickly? In this age of Just in Time everything, where many of us (especially us twentysomething bachelor males) keep hardly any food around because we assume grocery stores and restaurants will always be there, how much perturbation can the system take before chaos ensues?
  • The Age of Spiritual Machines, by Ray Kurzweil
    • What is intelligence? As Searle asked, if a computer passes the Turing Test, is that computer intelligent? If a computer can do everything a human can do, do we say that computer is thinking? Many computers today can do "intelligent" things, and yet no one would call our computers intelligent. I've found that whenever I explain to people outside the field what goes on in AI research (or the small portion of it that I am somewhat familiar with), they invariably respond, "But that's not artificial intelligence. That's just math and statistics!" But I've wondered if the reason we don't consider the somewhat intelligent things computers can do today (many of which, I am sure, were thought to be impossible mere decades ago) to be real intelligence is that we associate intelligence exclusively with the seemingly magical process of thinking that goes on in our heads, and computers do not seem magical because we built them and thus know how they work. For who really understands how one solves a problem, or has insight, or examines data and generalizes and infers? We just become conscious of these thoughts without really knowing how they came about. I think that, with today's thinking, if there were a computer with enough human traits to pass the Turing Test, people would still not call that machine "intelligent," because humans presumably would have built it and thus all the mechanisms of its function would be understood. Because its mechanisms would seem mundane, it would not have that aura of magical brilliance and out-of-nowhere insight that we require before calling something intelligent. But what if we could explain human intelligence in terms of similarly mundane mechanisms? What if we took Amazon's book recommendation service and could show that humans recommend books to their friends based on similar principles, that we humans are essentially just using math and statistics too?
(I realize our internal book recommendation service is extremely unlikely to be as simple as Amazon's in its current incarnation; I'm just searching for a real-world example.) It seems that if we removed the mystery from our own intelligent behaviors and it turned out that the algorithms we use are essentially the same as the algorithms the computer uses, then even with the different implementations (cf. Marr's levels) we would have to call both the human and the computer intelligent. So perhaps we are uncomfortable calling current AI research progress toward intelligent machines merely because we don't yet understand that the same algorithms underlie human intelligence. Or perhaps things work differently in the case of human intelligence, and the field of AI is truly not progressing toward intelligence but instead keeps building things that seem more and more intelligent without ever actually being more intelligent. Or maybe nothing I've said here makes any sense. I'm not in the mood to advocate a position here, for I don't have strong arguments to back one up and I've already been typing for too long.
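To make the "just math and statistics" point concrete, here is a toy sketch of "people who bought X also bought Y" style recommendation. This is not Amazon's actual algorithm, and the books and customers are made up; it just shows that the whole trick can be a similarity computation over who-bought-what:

```python
# Toy item-based recommendation: rank other books by how much their
# buyer sets overlap with a given book's buyers (cosine similarity).
# Hypothetical data -- not a model of any real system.
from math import sqrt

# Each book maps to the set of (made-up) customers who bought it.
purchases = {
    "Consilience":   {"alice", "bob", "carol"},
    "Intellectuals": {"bob", "carol", "dave"},
    "Earth Abides":  {"alice", "eve"},
}

def similarity(a, b):
    """Cosine similarity between two books' buyer sets."""
    buyers_a, buyers_b = purchases[a], purchases[b]
    overlap = len(buyers_a & buyers_b)
    return overlap / sqrt(len(buyers_a) * len(buyers_b))

def recommend(book):
    """Return the other books, most similar first."""
    others = [b for b in purchases if b != book]
    return sorted(others, key=lambda b: similarity(book, b), reverse=True)

print(recommend("Consilience"))  # Intellectuals shares 2 of 3 buyers, so it ranks first
```

Whether the recommendations a friend makes over coffee reduce to anything like this overlap computation is exactly the open question above.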
Now, if only I could get myself to actually read the books I buy/am given.


At 7:08 AM, Anonymous said...

Books I got for someone else but read myself first for the Christmas Holiday:

Lemony Snicket's The Bad Beginning

It rocked (here, "rocked" means to "entertain with a dark humor that perfectly suits my cynical world view").


