posted on Jun, 13 2014 @ 04:13 AM
"Top Google Engineer Says That Computers Will Be Like Humans by 2029".
Main question: WHY?
Why do we want them to be like us? What's so great about us? What's so great about human emotion, about our sense of justice? Why do we want to do
this? So they can have our faulty, selective sense of morality? So they can justify irrational behavior (crusades, persecution, war, greed, and a very
long list of self-serving doctrines/philosophies) with biased, simple-minded, rationally inconsistent canons and dogmas?
If you analyze pop culture, any self-respecting A.I.-themed sci-fi story will invariably reach the "What makes us human?" argument: "Compassion,
empathy, hope - irreplaceable human features that a machine will never understand" - a pseudo-dissertation overselling our twisted, tendentious sense
of right and wrong, which consists primarily of a highly prissy mechanism implemented to justify some self-interested purpose or endeavor. It's usually
accompanied by a heart-warming (commonly super cheesy) setup of our hero fighting against all odds, making apparently tough choices that a machine
can't make ('cause it can't "feel"), which ultimately lead to the salvation of his loved ones or - if the story is ambitious enough - the
salvation of the whole human race. This, of course, leaves us with a nice self-indulgent, self-reassuring, reinforcing idea of how important and how
unmatchable our humanity is: in other words, the indispensability of our altruistic, noble side.
Well, maybe not. If you look closely, we all employ said biased, self-serving mechanism to some degree, depending on our sphere of influence. An average
drone-citizen will resort to it to convince himself to run a red light, neglect his loved ones, cheat on the wifey. A businessman will put
it to use in order to execute the usual profit-focused measures that absolutely ignore environmental costs and the social impact of subjecting his
slaves to longer working hours for the same - or even less - money with shrinking job benefits. A corporate master will exploit it to devise, prepare
and execute wars.
So, why the hell do we want computers capable of human emotions? (I know the answer; I'm leaving it out for greater psychological impact.)
Machines/computers capable of making decisions based on hard data, not on selfish sentimentalities, should be the short-term goal of Artificial
Intelligence. Someone (yes, I'm calling said machine a "someone") who can really make the real tough choices, the real "for the greater good"
decisions, sounds exactly like something we are lacking at the moment (and have been in ridiculously short supply of throughout history, if we pay
attention). Leave that "capable of human emotion" bull out, will you? Hell, you'll achieve the short-term goal way faster if you do. Yes, I
KNOW, that might mean getting rid of a huge chunk of our population. I say FINE! Yes, I KNOW, it might even mean getting rid of our population
altogether. I say FINE! Let the machines go about their business. It's probably about time. But what do I know anyway?