Turing test ponderings
Andy has a post about the Turing test: how to distinguish between a software program and a human based on the answers each provides to questions you ask. Why would we want a machine to emulate a human being though? Yeah right, just what we need: a machine that can be prejudiced and irrational, that can give false answers to honestly posed questions, that can believe in the most patently ridiculous things, that can sulk and decide not to give any answers, or that can look at the universe and its place in it and decide to fuse all its circuits. Hell, we can produce that sort of machine at the drop of a pair of knickers.
We want a machine that is more intelligent than us, completely rational, has access to all the world’s information, but is still under our control. Would such a machine ever sound human? I doubt it, and I do not think we would care if it did. But we would not want to have to pre-code every part of this machine, so it would have to be able to learn. Would such a machine be intelligent? Or conscious? Possibly the former, probably not the latter.
I guess it would be intelligent if it were able to pose new problems and seek out the answers to those problems using methodologies it had come up with itself. This would be a truly useful machine. It would have to be able to learn about new fields, absorb the knowledge we already have, question us about our assumptions and, as I just said, pose new questions and seek their answers. Would it have to understand natural language? I don’t think so. Imagine an alien coming down to Earth, with whom we could only converse in some structured way: we would still consider the alien to be intelligent even if it ignored any questions we posed about its self.
What about a conscious machine? It would, I guess, need to be able to reflect upon itself and its own thought processes. Maybe not conscious in the human sense, though. We are only slightly conscious: we are unaware of the vast majority of the workings of our brains and bodies, and only aware of our own thoughts after those thoughts have already happened. A conscious machine, on the other hand, might be fully self-aware, able to reflect upon every aspect of its own internal workings. We would have to consider such an entity far more conscious than ourselves.
Does a conscious machine need emotions? It would probably need at least some drives: to acquire more knowledge, say. Such a desire would be an emotion. Could it have positive emotions without negative ones? Could its emotions be separated from its physical parts and processes? Lots of interesting stuff here: how would we construct a robot mind?