reply to post by igigi
Give a couple of the videos a try, it's MUCH more than just Dragon Naturally Speaking and Ask Jeeves combined.
IBM explains how the questions and answers aren't a simple prompt-answer format; some questions are worded confusingly or, in the case of Jeopardy!, deliberately organized to fool you and throw you off the trail of the right answer.
That's not intelligence. That's logical deduction based on key words and heuristics. A computer should be more difficult to fool than a human,
as it is completely incapable of reading into a statement, and has no need to understand the question - only the capability to return an answer that
makes -you- think the computer understood and provided one. More on this later.
With that in mind, you can begin to see how a very complex set of algorithms would need to be developed just to understand what the computer
is being asked! Beyond that point, it's even more complex to put together a response adequate for the question posed.
Again, that's not intelligence. It's incapable of doing anything that doesn't exist within its databases and computer algorithms.
I work with simulators a lot. There are very interesting quirks to different methods of simulations - one such revolves around the look-up table data
used for flight simulators. They provide some of the most accurate and computationally efficient simulations in regular scenarios included within
their matrix. However, they behave completely erratically when any input falls outside the matrix - such as an altitude the matrix doesn't cover, an
angle of attack beyond what it can handle, etc. Further - this simulation is completely dependent upon data
collected from real flight tests and recordings. It's not extrapolated by any form of intelligence or procedural analysis of a digital model by the
computer running the simulation.
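The look-up-table behavior is easy to demonstrate in miniature. Here's a sketch in Python, with invented numbers rather than real aerodynamic data: inside the table the interpolation is smooth and cheap; outside it, the code happily extrapolates into nonsense.

```python
# Toy illustration (not real aero data): a look-up table maps
# angle of attack (degrees) to a lift coefficient, with linear
# interpolation between the measured points.
AOA_TABLE = [0.0, 5.0, 10.0, 15.0]   # angles covered by the matrix
CL_TABLE  = [0.0, 0.55, 1.05, 1.40]  # lift coefficients "from flight tests"

def lift_coefficient(aoa):
    """Interpolate inside the table; naively extrapolate outside it."""
    # Find the bracketing segment, or fall back to the nearest edge segment.
    if aoa <= AOA_TABLE[0]:
        i = 0
    elif aoa >= AOA_TABLE[-1]:
        i = len(AOA_TABLE) - 2
    else:
        i = max(k for k in range(len(AOA_TABLE) - 1) if AOA_TABLE[k] <= aoa)
    x0, x1 = AOA_TABLE[i], AOA_TABLE[i + 1]
    y0, y1 = CL_TABLE[i], CL_TABLE[i + 1]
    return y0 + (y1 - y0) * (aoa - x0) / (x1 - x0)

print(lift_coefficient(7.5))   # inside the matrix: a sensible value
print(lift_coefficient(40.0))  # outside the matrix: lift keeps climbing,
                               # though a real wing would long since have stalled
```

The table has no idea a wing stalls, because nobody flew a test at 40 degrees - it just keeps extending the last line segment forever.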
Now, some simulators use a flight-dynamics model of varying complexity and simulation methods. These methods are often far more stable (less
chance of erratic behavior) - and are often just as accurate as the various matrices used (and, often, the two are used in combination). But it's
still not 'intelligent' - simply far more computationally expensive but with greater effectiveness.
The point is - anything with any kind of intelligence or understanding of concepts has the ability to 'fill in the gaps' without having to be told
step-by-step what to do. It's the difference between an insect repeatedly flying into a glass door until it dies and a dog barking at its owner to
open the glass door. The dog wasn't programmed with the knowledge that glass exists or that the owner knows how to remove the barrier. The dog
'figured it out' - developed the connection between the fact that people can remove this 'invisible' barrier, and that communicating the desire
to go through this barrier, too, can lead to the dog being able to pass through.
You'll see in the videos that Watson gets things wrong, and may even be prone to "infinite loops" of wrong answers. But I'm sure IBM has worked the
bugs out, right?
See my reference to flight sims and what happens when your look-up tables can't handle the data presented.
The computer isn't even simulating human speech. It is merely programmed to give a response that resembles the speech you and I use. Just like a
computer isn't really flying a plane in a flight sim - or even simulating aerodynamics. It's simply making your little virtual plane pantomime the
actions of a real plane. Your home entertainment system doesn't understand it's displaying a tree any more than a video camera understands it is
recording one. You are the one who thinks you are being shown a tree, when - really - you are just being shown a series of colored dots.
It's nothing resembling intelligence, nor does it really advance a computer's ability to 'understand' our speech. It's a mathematical process of
locating key words and constructing a response we can make sense of while reading intelligence into the operation.
I don't disagree with you, but AI has to happen in stages and this is the next stage.
This is not, even in the slightest, related to intelligence in a computer.
See the difference between an insect and a dog, above. Basic intelligence and the ability to understand things is such a fundamental thing that we
often forget how important it is. Computers don't 'get' anything. Your hand-held battleship game doesn't 'get' the game. Path-finding AI
doesn't "get" the concept of a barrier unless an obstruction is flagged as such. It is merely programmed that certain things are desirable, and
other things are not - and it uses those criteria to 'make decisions.' It doesn't 'get' or attempt to 'get' anything.
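That flagging is all there is to it. In this toy grid pathfinder (a sketch, not any particular engine's code), the algorithm 'avoids' the wall only because each wall cell is explicitly marked '#'; an unflagged pane of glass would be walked straight through.

```python
from collections import deque

# A tiny grid pathfinder: '#' cells are flagged as obstructions,
# '.' cells are open. The algorithm has no concept of a barrier -
# it only avoids cells someone flagged for it.
GRID = [
    "....",
    ".##.",
    "....",
]

def shortest_path(start, goal):
    """Breadth-first search from start to goal, avoiding '#' cells."""
    rows, cols = len(GRID), len(GRID[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and GRID[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None  # no route exists through unflagged cells

print(shortest_path((1, 0), (1, 3)))  # 5: routes around the flagged wall
```

Replace the '#' marks with '.' and the "barrier" ceases to exist as far as the program is concerned - nothing was ever understood about walls in the first place.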
Before the computer or robot can "do" anything for you, it first has to be able to process your requests through speech.
Speech recognition and voice-activated commands have only been around since the 80s.
Oh, and sorry, but you need to be corrected on this....
Dragon does simple dictation and speech recognition, it is not able to carry a conversation or tell the difference between a statement and a question
like Watson does.
You're looking at this in far too limited a scope. Dragon takes a set of vocal patterns and renders them as text - something much easier to work
with in terms of computer programming. A separate program (or programming object) is used to pick out key words and identify questions versus
statements based on the string. This is used to develop criteria for the response and initiate separate objects to fulfill those criteria and
process them for final assembly into a coherent string.
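As a sketch of that pipeline - the keyword lists and rules here are invented for illustration, and the dictation step is assumed to have already produced text:

```python
# A deliberately crude sketch of the pipeline described above,
# operating on text as if a dictation engine had already produced it.
# The word lists and rules are invented for illustration.
QUESTION_WORDS = {"who", "what", "when", "where", "why", "how", "is", "are", "can"}
STOP_WORDS = {"the", "a", "an", "of", "to", "in", "you", "me"}

def classify(text):
    """Label a string as a question or a statement from surface cues alone."""
    words = text.lower().rstrip("?.!").split()
    if text.rstrip().endswith("?") or (words and words[0] in QUESTION_WORDS):
        return "question"
    return "statement"

def keywords(text):
    """Pick out 'key words' by nothing smarter than a stop-word filter."""
    return [w for w in text.lower().rstrip("?.!").split() if w not in STOP_WORDS]

def respond(text):
    """Assemble a response string from the classification and key words."""
    if classify(text) == "question":
        return "Looking up: " + " ".join(keywords(text))
    return "Noted: " + " ".join(keywords(text))

print(respond("What is the capital of France?"))  # Looking up: what is capital france
print(respond("The weather is nice today."))      # Noted: weather is nice today
```

At no point does anything 'understand' a word - it's string matching and assembly from start to finish, which is the whole point.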
The computational model for this is simple and has been around for decades. We've merely lacked the processing capacity to make it a reality. A lot
of instructions per second have to be going on under the hood for those kinds of things to be done - particularly in accurately analyzing vocal
patterns (Dragon sacrifices accuracy for practicality - this computer can aim for accuracy and eat up all kinds of clock cycles analyzing vocal
patterns ten ways to Sunday).
It also does not recognize when you are trying to be funny or witty or sarcastic, but Watson does. This is what "natural speech" means.
It's programmed with the concepts of humor and sarcasm already, because humans understand them to exist. It accounts for sarcasm, but it doesn't
understand sarcasm any more than it understands a conversation about theology.
Also, there is not a database with a bunch of questions and answers that it looks up, that would be ridiculous to call such a thing AI at all, and
that is not what Watson does.
And, yet, that is exactly what it is. Every word is programmed and/or processed for some kind of mathematical and/or logical equivalence. You don't
need a database of questions and answers - but you do need a database of words, just as you had to learn your vocabulary. It still has to have a
source for its answers, just as you do. Its source is either a pre-formed database or the internet. It can't pull the answers to factual
questions out of thin air any more than you can.
You're fascinated because it doesn't behave like a computer you recognize. I'm unimpressed because it's nothing new. We just have the ability
to do it on a practical scale, and a group decided to invest time into doing it.
It is meant to be a more "human" version of the chess playing Big Blue.
And, yet, Deep Blue was not intelligent. Sure - it beat a chess champion, but that's like saying a rock is smart because it beat you at a staring contest.
Chess has a limited number of possible states. Using the current state and the rule-set, all possible states over the next five to ten moves can be
calculated - and with 'supercomputers,' many moves ahead can be searched, thousands of potential states evaluated, and the moves chosen that
increase the probability of states favorable to the computer.
It's conceptually very simple - just very computationally intensive. And it's nearly impossible for a human being to compete - the machine is
impossible to trick, and it assigns no value to any piece; it just values the state of the board.
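The principle fits in a few lines once you swap chess for a game small enough to search completely - here a Nim-style subtraction game (take 1 to 3 stones; whoever takes the last stone wins), used purely as a stand-in, since chess itself won't fit in a sketch:

```python
from functools import lru_cache

# The brute-force idea behind chess engines, shown on a game tiny
# enough to solve exhaustively. Chess is the same principle - enumerate
# the states reachable under the rules and pick moves leading to
# favorable ones - just with astronomically more states, so engines
# cut the search off after a few moves and estimate from there.
@lru_cache(maxsize=None)
def winning(stones):
    """True if the player to move can force a win from this state."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # A state is winning if some legal move leaves the opponent
    # in a losing state.
    return any(not winning(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick a move that leaves the opponent in a losing state, if any."""
    for take in (1, 2, 3):
        if take <= stones and not winning(stones - take):
            return take
    return 1  # every move loses; play on anyway

print(winning(4))     # False: any move here hands the opponent a win
print(best_move(6))   # 2: taking two stones leaves the opponent at 4
```

No strategy, no cunning - just exhaustive enumeration of states and a preference for the favorable ones, which is all the 'genius' a chess computer has.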
The closest things to intelligent computers have not seen much publicity. I forget the name - but there's a computer capable of 'learning' and
forming basic assumptions through interacting with objects and observing their behavior. That is far closer to 'intelligence' than anything else that
has been done so far. It's rather limited stuff - such as 'understanding' the behavior of a toy car, stacking boxes, etc.
It just hasn't got the same splash - big deal, a computer can push a toy car around and figure out that it can only move 'forward' and 'backward'
(well, easily). However, it's a massive deal - for a computer to actually 'understand' something to the point it can formulate its own solutions
to problems. Simple problems - but it does things without being programmed specifically for the task.
This type of stuff gets attention because it's stuff we can read into. Deep Blue beats a chess player and we suddenly think computers can think and
formulate strategies (because chess is regarded as an intellectual game). This thing can compete at Jeopardy and people suddenly think computers
can converse with us. In each of those cases, we are reading into the actions of a computer and personifying it - something we are great at.
However, the operations behind them are incredibly simple, and not at all a reflection of intelligent behavior.