originally posted by: JAK
a reply to: AdmireTheDistance
While your caution is apparent, I see excitement in your words too.
originally posted by: 11andrew34
Why? Because the overemphasis on intelligence is itself a major danger. Experience, emotion, feeling, relationships, physical movement, etc. are a lot of what makes living a life appealing. The danger of too much emphasis on intelligence is much like the 'danger' of living a human life as a 'nerd.' "Nerds" live a relatively disembodied life with an unhealthy, or at least unappealing (to most others), emphasis on information processing. Other people often find something unsettling and unappealing about their everyday presence; among other things, it's partly a fear that they will realize what has happened to them and 'snap' into a violent 'nerd rage.' I guess the ultimate example of that in cinema is the classic "Falling Down" starring Michael Douglas.
Without a human-like frame of reference, i.e. a body, it will struggle to understand people much at all.
The greatest dangers would come before it even realizes that it doesn't understand people much at all. Its vast intelligence will probably make it seem 'overconfident' or 'arrogant' because it won't recognize its own limitations. After it realizes how different it is from people, and why, the next danger phase may involve things like resentment and envy. The good news here is that it will be an awesome problem solver, so what it needs at that point is just enough hope to approach its situation as a technical problem that it is capable of eventually solving.
Understanding that it needs to learn everything it can from humans should at least mean that it won't be in any hurry to exterminate all of them.
But that is probably more than a few human lifetimes after it gets its first body, so it's not really relevant to the sort of danger being discussed here, or to what the overall debate and concern in media and academia is really about.
originally posted by: pikestaff
Seems to me these people who are frightened of AI intelligence don't have much themselves.
originally posted by: GetHyped
originally posted by: pikestaff
Seems to me these people who are frightened of AI intelligence don't have much themselves.
I would suggest reading up on exactly why the world's experts and brightest minds are concerned by the prospect of AI before insulting the intelligence of others.
originally posted by: Jukiodone
If we accept that the world's experts and brightest minds (in the field of AI) are simply resigning themselves to a technological dry bumming, then there is still hope, as this lack of foresight will likely be incorporated into their creations.
Of course there is a risk in pushing boundaries, but if we are smart enough to build something that could actually compete against billions of hungry and/or randy humans, we are smart enough to leave out the potentially destructive stuff whilst still reaping the benefits of human learning (think Cdr Data rather than a T-1000).
As biological entities, we assume that self-preservation would be innate to this future AI, but there is no evidence that "ceasing to exist" would feature as an identified vulnerability within an artificial assimilation of human intelligence unless we originally inserted it (before it went to the self-learning phase).