
AI Scaremongering

This boingboing post, “Google: our print scan program has no hidden AI agenda”, which points to this ZDNet story, cracks me up.

Talk of a “hidden AI agenda” feels like pure scaremongering: visions of some lumbering, Lovecraftian, inhuman artificial intelligence.

When questioned on whether a renaissance of the general paranoia about omnipotent and malign computers was underway now, Levick admitted that such concerns were more abundant, but insisted that Google’s core philosophy of “Don’t be evil” guides all its actions.

“I think that goes back to the concept that these technologies can actually be empowering and good for the world if the companies implementing them are good,” he said. “Could some of these technologies be used for bad purposes? Yes. But will they by us? No.”

Hehe. As someone who works with AI every day and knows the prenatal state of natural language processing and so-called “strong AI”, I can only laugh at public fears of “omnipotent and malign computers”.

Sigh.

5 Comments

  1. ben

    did it crack you up?

    ;^D

    Posted on 19-Nov-05 at 14:37 | Permalink
  2. JanEhaa

The fact that you work with natural language processing might make you think that other things in artificial intelligence are harder than they really are. Natural language processing requires understanding human language, which is a complete mess. Making machines conscious might be a lot easier, since we can start from scratch.

    Easier than impossible might still be impossible though.

    Posted on 12-Aug-06 at 14:57 | Permalink
  3. mote

I don’t agree at all, JanEhaa. Human language is not as complete a mess as you’d think. We’ve made excellent progress on the more surface-level problems in NLP (for instance, spam classifiers, both template-based and naive-Bayes-based, do a wonderful job; a rough sketch of the naive-Bayes approach follows the comments).

    NLP has difficulty with problems that delve into deeper issues (language understanding, machine translation), which I would argue are closer to problems of cognition. How do you propose consciousness without a language of thought that’s tied to the real world? (Or am I still viewing the world through a mirror, NLP-darkened?)

    Posted on 12-Aug-06 at 19:14 | Permalink
  4. JanEhaa

    In one of the computer labs at my university, there is a picture on the wall of two boxes. Each box has a bunch of wires poking out of it, and a man stands in front of them scratching his head over how to connect the wires together. One box is labeled “syntax” and the other “semantics”. I’m sure statistical methods do a wonderful job, but it feels a bit like cheating to me, sort of like a con artist trying to pass as a wealthy sheik by copying the mannerisms, clothes, and way of speaking of the real thing. He might fool a layman, but not an expert. An example of all this might be the impressive, but not perfect, application of spam filtering. Speech recognition software has lately become good enough to be really useful, but it makes no attempt at actually understanding anything.

    The point I was trying to make is that it might be a good idea to ditch the box marked “syntax” entirely. Syntax is just the interface between human and machine; other interfaces might be conceivable.

    If you start out with information encoded in a bunch of symbols, then it’s natural to start thinking of thinking as symbol manipulation. I don’t think humans think like that, most of the time anyway. And there might be completely different types of thinking, completely different from what we think of humans as doing, but still something that might be considered conscious. Unfortunately, I’m not smart enough to come up with the example you asked for.

    We probably shouldn’t expect a robot uprising any time soon though. :-)

    Posted on 15-Aug-06 at 07:04 | Permalink
  5. mote

    Well, we need some sort of syntax; syntax defines meaning as much as semantics does. Look at the difference between “the dog bites the man” and “the man bites the dog”: same words, very different meanings (the tiny word-order demo after the comments makes the point concrete). I do agree with you, though… machine intelligence is very different from human intelligence. A machine can defragment a 100-terabyte hard drive (I could never do that; the problem is too large to keep track of in my brain), but I can read a paragraph and summarize it, something that a computer can never do.

    I love this quote by Edsger Dijkstra: “Asking if a computer can think is like asking if a submarine can swim.”

    This is not to say that Natural Language Understanding (robustly mapping language onto an internal representation and model of the world) is impossible… it’s just, yes… not going to happen soon.

    Posted on 27-Aug-06 at 10:29 | Permalink
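
For the curious, here is a rough sketch of the naive-Bayes spam filtering mote mentions in comment 3. It is a minimal illustration, not any particular filter’s implementation; the tiny training set, the function names, and the word-independence assumption are all mine.

```python
# Minimal naive-Bayes spam classifier sketch.
# Illustrative only: real filters use large corpora and better tokenization.
import math
from collections import Counter


def train(messages):
    """messages: list of (text, label) pairs with label 'spam' or 'ham'."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in messages:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts


def classify(text, word_counts, label_counts):
    """Return the label with the highest log-posterior under naive Bayes."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    total = sum(label_counts.values())
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(label_counts[label] / total)  # log prior
        n_words = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the posterior.
            count = word_counts[label][word] + 1
            score += math.log(count / (n_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)


training = [
    ("cheap viagra free offer", "spam"),
    ("win money now click here", "spam"),
    ("meeting notes attached see you tomorrow", "ham"),
    ("lunch on thursday sounds good", "ham"),
]
word_counts, label_counts = train(training)
print(classify("free money offer", word_counts, label_counts))        # -> spam
print(classify("see you at lunch tomorrow", word_counts, label_counts))  # -> ham
```

The core arithmetic is just smoothed word frequencies combined as log-probabilities, which is part of why it feels like “cheating”: no understanding, only counting.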
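And here is the word-order point from comment 5 made concrete: a bag-of-words representation (the kind many statistical classifiers, including the one above, rely on) assigns identical features to “the dog bites the man” and “the man bites the dog”, even though they mean very different things. A hypothetical illustration, not anything from the original discussion.

```python
# Bag-of-words discards word order, so these two sentences look identical
# to a word-count model even though their meanings differ.
from collections import Counter

a = Counter("the dog bites the man".split())
b = Counter("the man bites the dog".split())
print(a == b)  # True: same words, same counts, very different meanings
```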