posted on Jul, 2 2015 @ 01:29 AM
a reply to: netbound
The biggest hurdle for AI is contextual awareness. Current AI implementations are so incredibly simple that they can only string together word sequences drawn from probable responses in their available dataset, which means they're essentially stupid. The illusion that a computer said something "conniving" or "clever" is nothing more than humans projecting their own context onto the output. Current AI implementations have no knowledge or understanding of word context. They are nothing but probability machines with some added randomness from their neural net.
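To make the "probability machine" point concrete, here's a toy sketch (my own code, not any real product's implementation) of the kind of thing I mean: a bigram model that picks each next word purely from frequencies it saw in its training text, plus a random choice. It never knows what any word means.

```python
import random

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = {}
    for prev, nxt in zip(words, words[1:]):
        follows.setdefault(prev, []).append(nxt)
    return follows

def generate(follows, start, n_words, seed=0):
    """Emit words by repeatedly sampling a follower of the last word.
    The sample is weighted by observed frequency -- pure probability
    plus randomness, no understanding involved."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        choices = follows.get(out[-1])
        if not choices:
            break  # dead end: the word never had a follower in training
        out.append(rng.choice(choices))
    return " ".join(out)

# Tiny illustrative corpus (made up for this example)
corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the", 8, seed=1))
```

Whatever sentence that prints, the program has zero concept of cats, dogs, or rugs; it only echoes statistics of its input. Scale that idea up with a neural net instead of a lookup table and you have, roughly, the chatbots being hyped today.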
These AI posts really annoy me, because people don't understand that these machines have no idea what they're actually saying. They are incapable of conceptualizing ideas or forming any real, lasting connection to the words they spew. They are, and will remain, garbage-in, garbage-out machines unless a programmer can figure out a way to encode context and meaning within a word's definition. But then again, that conceptualization would come from the programmer, not the machine. Do you see the problem here?
Until quantum computing becomes a mainstream reality, we're not going to see any inkling of machines able to fake sentience convincingly. Humanizing AI at this stage is entirely ignorant. I suggest it will be at least 30+ years before we can create a machine with the ability to simply fake sentience, let alone think for itself.
I do, however, believe that these machines will get better at making reasonable connections within their dataset, so as not to sound entirely inept. At that point they might actually be useful for basic guidance systems.
edit on 2-7-2015 by Aedaeum because: (no reason given)