I think you'd certainly cut down on the forgetfulness, carelessness and so on, even if you didn't eliminate them entirely. They're there because we're using heuristics to get a reasonable answer quickly, because we don't have time to calculate the real one. The computer would have time to find the real one, since it would probably be running noticeably faster than us.
Eh, searching for the answers to many of the main Real Life problems would still require exponential time, at least if we assume that P != NP. You can have as much processing power as you want, but you are not going to solve the knapsack problem for a very big input through brute force.
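To make the brute-force point concrete, here is a minimal sketch in Python (the item values and weights are invented, purely for illustration): it tries every one of the 2^n subsets, which is why adding a single item doubles the work.

```python
from itertools import combinations

def knapsack_brute_force(items, capacity):
    """Exhaustive 0/1 knapsack: tries all 2**len(items) subsets.

    items: list of (value, weight) pairs; capacity: the weight limit.
    Fine for a handful of items, hopeless for a really big input.
    """
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w in subset)
            value = sum(v for v, _ in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# Invented numbers: 3 items, capacity 10 -> best is (50, 4) + (70, 6) = 120.
print(knapsack_brute_force([(60, 5), (50, 4), (70, 6)], 10))
```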
What you can do is use some heuristics that work reasonably well most of the time. For example, for the above-mentioned knapsack problem the greedy algorithm usually finds a decent solution, though not necessarily the optimal one, and it is a very "human" tactic. But of course, you can construct scenarios in which the greedy algorithm gives you a stupid answer. That's the equivalent, on a tiny scale, of a cognitive bias, I'd think.
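As an illustration (again just a sketch with made-up numbers, not anyone's actual proposal), here is the usual greedy value-per-weight heuristic, together with an input on which it gives a clearly suboptimal answer: the greedy choice looks locally smart but blocks the better combination.

```python
def knapsack_greedy(items, capacity):
    """Greedy heuristic: take items by value/weight ratio while they fit.

    Fast and often decent, but it comes with no optimality guarantee.
    """
    chosen, total_value, remaining = [], 0, capacity
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if weight <= remaining:
            chosen.append((value, weight))
            total_value += value
            remaining -= weight
    return total_value, chosen

# A made-up adversarial input: the small item has the best ratio, so greedy
# grabs it first; afterwards only one of the big items fits, even though the
# two big items together (value 120) would have been the optimal choice.
items = [(11, 1), (60, 6), (60, 6)]   # (value, weight) pairs
print(knapsack_greedy(items, 12))     # -> (71, [(11, 1), (60, 6)])
```

Greedy gets 71 here while the optimum is 120: a tiny, deterministic analogue of the way a quick-and-dirty heuristic can lead you astray.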
But in any case, I think that we are in agreement that the human mind, as it is now, is not the absolute best possible form of general intelligence. What I am not so convinced about is that building a better one from scratch will be as easy as singularity enthusiasts sometimes seem to assume.
Also, in answer to post 133:
It is possible to have human-level AGI without having superhuman AGI shortly afterwards, but that requires both not having the AGI work on designing a better AGI (even though it would make sense), and having an AGI design that doesn't feature self-modification and self-improvement (and something like that seems likely).
The problem is whether a primitive AGI would be
able to improve itself to such a degree. The only form of general intelligence that we are aware of is the human one, and humans at large do not seem capable of doing this all that easily — it might
be possible for us to improve ourselves, in principle, but you cannot just put an average guy in a room with a lot of neurology textbooks and a workbench and tell him "ok, now work on building a better brain than the one you have". He would fail. I would definitely fail. Perhaps a genius might do it, given enough time, but the first AGI will probably not be genius-level.
Yup, it's quite possible there will be a limit to how strong a particular AI design can become, but it would be an *amazing* coincidence if that level happened to be human-level intelligence. That's a coincidence that's common in works of fiction, for obvious reasons, but reality doesn't follow the same rules.
I was not suggesting that. I was objecting to the oft-mentioned suggestion of "build a primitive, self-improving AGI fragment, and then let it do all the work for you". While automated coding is likely to play a role in the future of computer science, and in the future of artificial intelligence in particular, this idea, at least as it is usually formulated, strikes me as a trick to avoid thinking about the truly difficult problems. You may as well start talking about "emergent behaviour" or "genetic programming", to mention two other concepts which might play some role in the development of an AI, but which are often invoked as a catch-all for "I don't know how this problem might be solved, but I hope that the computer will solve it somehow for me".
Before we start talking about how to build a self-improving AGI, we need to be able to create an AI with the planning and reasoning abilities of a dog. As far as I know, we do not have much of a clue about how to do that. There are problems that we need to solve in order to do that, and they are hellishly hard ones.