posted on Jun, 5 2010 @ 04:55 PM
I agree that intelligence cannot be properly described or analyzed by us, and that's why we can't truly create intelligence; we can only emulate or approximate it. As for combining all the narrow AIs together: there is no metaphysical difference between a "narrow" AI and one assembled from many narrow AIs. It's still a narrow AI, it can just solve more problems. That's what AI is for: solving problems.
AI isn't for creating life or true intelligence, and nobody who isn't some crazed idealist, or working on the bleeding edge of quantum computing or the like, is going to claim that we can create true intelligence, only that we can make something appear intelligent according to whatever standard we arbitrarily choose.
Once again, Wolfram Alpha is not more intelligent than any engineer in the world merely because it has tons of data and can quickly calculate your queries (say that ten times fast); it's just a tool and nothing more. It has no true intelligence: it parses the command, sorts it, searches its database for a relevant solution, applies the input, and spits out an output. It's not metaphysically different from many other programs. Ask Wolfram Alpha to code you a program and it will be at a loss, or even if it could, it would pick the "standard" way of doing it. No creativity or originality involved.
It's not particularly impressive that programs exist which can generate a program from UML. That does not require true intelligence.
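To make the point concrete, here is a toy illustration of that parse-search-output loop (this is my own sketch, not Wolfram Alpha's actual design): a lookup-style "answer engine" that tokenizes a query, matches it against a fixed fact table, and returns a canned result. No understanding anywhere in the pipeline.

```python
# Toy "answer engine": parse the query, search a database, spit out an output.
# The facts and matching scheme are invented for illustration only.
FACTS = {
    ("population", "france"): "about 67 million",
    ("boiling point", "water"): "100 degrees C at 1 atm",
}

def answer(query: str) -> str:
    words = query.lower().replace("?", "").split()
    # Crude "parsing": test every known (topic, subject) pair against the words.
    for (topic, subject), result in FACTS.items():
        if all(w in words for w in topic.split()) and subject in words:
            return result
    return "I don't know."

print(answer("What is the population of France?"))  # about 67 million
print(answer("Write me a program"))                 # I don't know.
```

Ask it anything outside its table, like writing a program, and it falls flat, which is exactly the limitation I'm describing.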
True, Computer Science evolves faster than virtually any other field, but like many other computer scientists, I don't stop learning as soon as I get out of college. In fact, I took my AI course very recently and have done some research, and I am still not convinced.
We are not at the limit of our progress; otherwise progress would stop. You could argue that our potential for growth is limited by virtue of our being biological organisms, but then what is the proposed alternative? Transhumanism? There are all kinds of ethical and social issues that need to be addressed before that can be a working concept of how things should be, not to mention that the technical issues haven't been ironed out yet.
ClearView is interesting, and I congratulate MIT on their research, but it still lacks the concept of self, or the knowledge of intent, required for my reflection proposal to work as intended. As of now, it just analyzes trends and statistics in the code, which is simple reflection, and that basic application is a common use of AI. MIT and others must agree that weak AI is the future :-)
People often say they will have a working build in two years because they need grant money or investors, but in reality two years will roll by and the project will be shelved, the company will have folded, or a report will come out and we'll see it's just an advanced tool designed for specific applications.
I have no doubt I'll see AI; in fact I already have. I've worked on it. But I will not see true intelligence in a machine unless, possibly, it's either a quantum computer or a biological machine.
The woman's chat program is extremely simple. Useful and clever, but not revolutionary, and it requires no true intelligence: it parses bits and analyzes trends. No big deal. The phone app is a SPECIFICALLY designed app, probably using a knowledge web (I forget the technical term) to determine relationships.
Useful, but not revolutionary.
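For anyone curious what a "knowledge web" amounts to, here's a minimal sketch of the general idea, sometimes called a semantic network (this is an assumption about how such an app might work, not its actual implementation): concepts are nodes, relationships are labeled edges, and "determining relationships" is just graph lookup.

```python
# Minimal semantic network: nodes are concepts, edges are labeled relations.
# "Reasoning" here is nothing more than traversing stored edges.
from collections import defaultdict

class SemanticNet:
    def __init__(self):
        self.edges = defaultdict(list)  # concept -> [(relation, concept)]

    def relate(self, a, relation, b):
        self.edges[a].append((relation, b))

    def related(self, a, relation):
        return [b for rel, b in self.edges[a] if rel == relation]

net = SemanticNet()
net.relate("canary", "is-a", "bird")
net.relate("bird", "can", "fly")
net.relate("bird", "is-a", "animal")

print(net.related("canary", "is-a"))  # ['bird']
print(net.related("bird", "can"))     # ['fly']
```

Handy, but it only ever returns relationships someone explicitly stored; there is no intelligence behind the lookups.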
How do you propose upgrading our economic system? What role does it play in the discussion of AI?
A computer can "learn" more quickly than a child, but what purpose does that serve? It does not learn any more than your hard drive learns when you copy data to it. The computer merely utilizes the data it is given; it has no concept of what that data is beyond the bits.
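That "nothing beyond the bits" point can be shown directly: the same stored bytes are just numbers to the machine, and any "knowledge" only appears when a program imposes an interpretation on them.

```python
# To the machine, stored "knowledge" is just a byte sequence.
data = "E = mc^2".encode("utf-8")

# Viewed as the machine sees it: a list of integers.
print(list(data))            # [69, 32, 61, 32, 109, 99, 94, 50]

# The meaning reappears only when WE choose a decoding.
print(data.decode("utf-8"))  # E = mc^2
```

The hard drive holding those bytes "knows" physics exactly as much as the computer copying them does, which is to say, not at all.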