
Will machine intelligence become the dominant species?


posted on Jun, 17 2009 @ 07:14 PM

Originally posted by shadowwolf0
lol, this question is repetitive... NOT YET. We would need a load more research before we can sign our own death warrant.


I think you're not appreciating just how rapidly we're acquiring technological knowledge and how quickly the future is rushing towards us.

Examples are easy to find:

The invention of alternating current is just over 100 years old, and yet we have wrapped the entire globe in an energy grid and power our cities with AC.

The first successful heavier-than-air flight took place just over 100 years ago ... it took less than 40 years from that point for global air transport to become commonplace.

The first large-scale rocket, the V-2, was developed during WWII, and less than 30 years after that we landed on the Moon.

The first transistor was developed in the late 1940s, and just over 60 years later we have a world that couldn't exist without electronics.

And many other examples of technology could be cited. But the main point is that all these marvels were created in less than 100 years, starting from almost no preceding technological infrastructure or knowledge.

So to state that we need a lot more research before MI could possibly happen takes a very, very short-sighted view of the incredible speed at which we are acquiring technological sophistication and knowledge. How long did it take to put a mobile phone in the hands of a huge percentage of the global population? 30 years? 20 years? How about 10 years? It's got to the point that almost as soon as a new discovery or invention happens, the entire world is almost immediately affected ... HD TV, DVD players, lasers, etc.

So no, I disagree with you ... research in so many different fields is progressing so rapidly that once the minimum necessary breakthrough takes place, the technology based on that breakthrough matures almost overnight, so to speak.

MI is on the horizon and we as a species are rushing headlong to meet it.




posted on Jun, 17 2009 @ 11:38 PM
Many people think so. I just watched a documentary called "Technocalyps" that was about this very subject. Also, google terms like 'singularity', 'transhumanism', and 'posthuman', and people like 'Ray Kurzweil', and you can read more about people who believe that machine intelligence will become dominant, possibly by humans turning themselves into machines.



posted on Jun, 18 2009 @ 12:03 AM

Originally posted by DragonsDemesne
Many people think so. I just watched a documentary called "Technocalyps" that was about this very subject. Also, google terms like 'singularity', 'transhumanism', and 'posthuman', and people like 'Ray Kurzweil', and you can read more about people who believe that machine intelligence will become dominant, possibly by humans turning themselves into machines.


Yes, it seems like many others likewise consider this an increasingly plausible future for humankind. It's just hard for most people to comprehend the exponential rate at which we're accumulating technological knowledge and expertise, and that today's sci-fi quickly becomes tomorrow's accepted world.

I guess the real question is not whether it can happen ... but rather how quickly it is going to happen, and whether we will be caught totally off-guard with our pants down!



posted on Jun, 19 2009 @ 02:33 AM
I've actually been thinking about this sort of thing a lot lately, having watched Technocalyps (I think I mentioned that earlier) and while reading 'The Way' trilogy by Greg Bear. It's a sci-fi series where a society lives most of the transhuman dreams. I won't spoil the story (and in fact am still on the second novel of three), but if you like sci-fi and transhumanism, you'll love these. Best sci-fi I've read in a long time.

One interesting idea in there that I suppose I will spoil is what Bear calls 'partials'. It basically involves copying your entire personality and downloading it into a new body, sort of like a clone. People make partials of themselves to share a workload or do something dangerous, since partials are expendable. It's an idea I don't remember seeing anywhere else; the concept of putting your consciousness into a computer is not new, but it is the idea of duplication, rather than backup, that I'd never seen.

There are a lot of other great ideas in there, some of which are familiar, a few of which are new, at least to me, but I won't spoil it. You can probably read wiki for a plot summary or something if you want to see what I'm talking about.

Just think, though, if you could have multiple copies of yourself, either in physical cloned bodies or in computer form or something. Odds are you'd probably work pretty well as a team, at least in that you would cooperate very well. You'd all be thinking the same way, though, so you'd be unlikely to come up with anything more creative together than alone. Or, imagine cloning the smartest people and setting them to work on different projects.



posted on Jun, 20 2009 @ 12:14 AM
reply to post by DragonsDemesne
 


I think this is where so called artificial intelligence is headed. Man will try to enhance himself by replacing his flesh with machines. It's either a form of immortality or a way to enslave your soul.

Note, though, that the actual record of AI research is very dismal. It's been around for over 50 years with almost none of its promised advances materializing. Where is the HAL 9000? Can you talk to your computer? No, you have to program it, which is still a very tedious process.

However, I think only the elites will be permitted to use this immortality technology. There is talk about how our knowledge has increased exponentially and how rapidly we are advancing. I'm afraid I disagree. Technology is being viciously suppressed. If Tesla's flying cars and free-energy devices of the late 19th and early 20th centuries are any indication, we may have taken a few steps backwards, or at least in the wrong direction.

Theoretical physics, for example, has in some ways stagnated for the last 30 years or so thanks to the red herring of string theory, which has produced very few predictions or exploitable effects. We still use chemical rockets despite things like Searl disks and the Biefeld-Brown effect, etc. Cancer may well have been cured by Royal Rife in the 1930s, and certainly his microscope has not been equaled.

So perhaps we will get this technology, but it may be reserved for a very few. Maybe an AI device is already running things behind the scenes!



posted on Jun, 20 2009 @ 01:21 AM
Good idea....
Runs to garage...
I'm going to make a self-creating microchip, whose sole purpose is to replicate and talk to its neighbors... heheheh HAHAHAHA

WHAHAHAHA



Lets see what happens in a few months.......



[edit on 20-6-2009 by R3KR]



posted on Jun, 21 2009 @ 04:38 AM
Forget what I said, I'm going to do research on this, I might change my opinion. However, right now, I do not think it's possible.

What I posted earlier would've possibly been offensive and I didn't intend it that way.

[edit on 21-6-2009 by Dea_Ex_Machina]



posted on Jun, 22 2009 @ 10:07 AM
I still say not yet, because scientists are stuck at the moment on creating AI... I mean, I don't really want to die by a rebel toaster, lol... thank God.



posted on Jun, 30 2009 @ 01:02 AM
Honestly, no, I don't believe the development of the human race will be our death warrant. All discussion of machine logic and human feeling aside, no matter how advanced a machine is, it will eventually break down with no one to repair it, and no matter how advanced the robots built to repair the machine are, they also will eventually break down. Plus, it's really not that logical to set out to destroy just so your kind will be dominant; that's a human trait, and humans aren't logical.

Hopefully we'll reach a point where we become one with our technology anyway, and it's a moot point. I'd see the future being more like Ghost in the Shell than Terminator.



posted on Jun, 30 2009 @ 07:17 AM

Originally posted by Gren
Honestly, no, I don't believe the development of the human race will be our death warrant. All discussion of machine logic and human feeling aside, no matter how advanced a machine is, it will eventually break down with no one to repair it, and no matter how advanced the robots built to repair the machine are, they also will eventually break down. Plus, it's really not that logical to set out to destroy just so your kind will be dominant; that's a human trait, and humans aren't logical.

Hopefully we'll reach a point where we become one with our technology anyway, and it's a moot point. I'd see the future being more like Ghost in the Shell than Terminator.


AMEN!!!!!



posted on Jun, 30 2009 @ 07:35 AM
I guess in a way it's a philosophical debate as to what is next in human evolution. Is genetically modifying ourselves evolution? Adapting ourselves to a changing environment? Is half human, half robot an evolutionary step? I don't know.



posted on Jun, 30 2009 @ 07:52 AM
It would be interesting to have more intelligent machines placed in control of governments, with nothing to corrupt them. I'd be fine with it as long as they understood our needs for things like fun, recreation, family, etc.

I think we would all probably be better off, but chances are that at some point someone won't like one of the policies and will attempt to destroy it. How would it react to that? Would it defend itself, and if so, how?

I think we would probably be pretty safe as long as we kept it out of the loop for things like defence systems and well away from hackers.



posted on Feb, 18 2015 @ 02:04 AM
5.4 years after this thread was made, I am bumping it because now the questions it posits can finally be definitively answered.

1. Yes, machine intelligence will become the dominant species.
2. It will be used in defense systems, and receive full funding and support.
3. Yes, human intelligences will be integrated into machine intelligence.
4. Machine intelligence will take over many governing functions of human intelligence constructs, including defense systems.



posted on Feb, 18 2015 @ 02:26 PM

originally posted by: zetabeam

So to state that we need a lot more research before MI could possibly happen takes a very, very short-sighted view of the incredible speed at which we are acquiring technological sophistication and knowledge.


That speed will not be maintained. From 1850 to 1950 we went from understanding only a small amount about the physical nature of the world to understanding virtually everything, correctly and at a fundamental level, that matters for human-scale engineering.

That's a one-off leap.


How long did it take to put a mobile phone in the hands of a huge percentage of the global population? 30 years? 20 years? How about 10 years?


How long did it take to put a flying car in the hands of a huge percentage of the global population? About never.
What about Mr. Fusion in the basement? Never. And we knew the physics of these well before any mobile phone was engineered.

Physical limitations.


It's got to the point that almost as soon as a new discovery or invention happens, the entire world is almost immediately affected ... HD TV, DVD players, lasers, etc.


Only in a small segment. In biology and macroscale engineering, progress is slow and very incremental.


So no, I disagree with you ... research in so many different fields is progressing so rapidly that once the minimum necessary breakthrough takes place, the technology based on that breakthrough matures almost overnight, so to speak.


For machine intelligence, there is no evidence of any single "minimum necessary breakthrough"; rather, there is a very large set of breakthroughs and understandings adding up over time.


MI is on the horizon and we as a species are rushing headlong to meet it.


Machine intelligence is not some apotheosis of a superintelligent mind being born; that's a quasi-religious scientific fantasy.
In any case, machine intelligence isn't something on the horizon either. You have it already on your phone. Android phones use trained neural networks, highly optimized for efficient computation, to perform speech recognition. Google has many top scientists working on this, including one of the world leaders in artificial neural networks, Geoff Hinton.

Google has brought machine intelligence to practical everyday products: Google Search, voice recognition, and Google Translate, and they have image recognition working pretty well in the lab. No gods, just algorithms and products, the result of enormous hard labor.
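To make the "just algorithms" point concrete, here's a toy sketch (purely illustrative, nothing like Google's production systems): a trained neural network is ultimately just weighted sums and simple nonlinearities. The hand-set weights below compute XOR, the classic function that a single-layer network cannot represent:

```python
def relu(v):
    """Rectified linear unit, the standard nonlinearity in modern networks."""
    return max(v, 0.0)

def xor_net(x1, x2):
    """Two-layer network with hand-set weights that computes XOR."""
    h1 = relu(x1 + x2)        # hidden unit 1: fires on any active input
    h2 = relu(x1 + x2 - 1.0)  # hidden unit 2: fires only when both inputs are active
    return h1 - 2.0 * h2      # output layer: combine the hidden units

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), xor_net(a, b))  # (0,0)->0.0, (0,1)->1.0, (1,0)->1.0, (1,1)->0.0
```

In a real system, of course, the weights are learned from data rather than set by hand, and the networks are vastly larger; but nothing mystical is added along the way.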

Consider scientific medicine. It truly only got started at a serious level with the understanding of the germ theory of disease. There has been lots of significant progress and understanding since; molecular biology is a huge and serious subject.

If you were to believe the hype of the 'transhumanist' AI cultists and extend the analogy, doctors would be on the verge of giving birth to 100-foot-tall immortal demigods with extrasensory perception and laser beams shooting out of their heads.

Back in reality, they can't stop gluttons from getting fat and killing themselves slowly.


edit on 18-2-2015 by mbkennel because: (no reason given)




posted on Feb, 19 2015 @ 07:14 AM
As long as scientists are not dumb enough to create artificial intelligence without a lot of traps and conditions: nope. Sure, robots will be superior to humans in every way, but if we create robots that only perform work and tasks and are not intelligent in any way (similar to computers), we won't have to worry about our imminent destruction.



posted on Feb, 24 2015 @ 12:10 PM
First of all, the fields of AI, robotics, genetics, microbiology, neuroscience and a few others really fascinate me. I'm pretty much convinced that true machine intelligence will become manifest. I can't say this out of mastery of the subject or first-hand knowledge; I've been following the development of AI for around 20 years now, and can't help but notice that progress is moving along at a seemingly increasing rate. As far as that's concerned, and whether right or wrong, I view much of this stuff through the lens of my own personal work experience. I don't work in AI, robotics, game development, or any of the neat sh-t, but I do work very closely with computer systems, networks, etc. I'm a system software designer/developer/sr. analyst. Even with what I do, it's an ongoing process of attending classes, lectures, shows, etc. to stay informed about the latest advances. I'm sure with AI it's a similar process, but on steroids.

Anyway, I found out long ago that when asked about a timeframe on any particular project, never blurt out the time in which you think you can accomplish it; tack maybe 50%-75% onto the end of that to account for unknowables. For me, at least, that method is generally within the ballpark.
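That padding rule of thumb can be written down directly (a trivial sketch; the 50%-75% figures are just my own working numbers, not an industry standard):

```python
def padded_estimate(raw_days, low=0.50, high=0.75):
    """Pad a raw project estimate by 50-75% to absorb unknowables.

    Returns a (low, high) range to quote instead of the raw guess.
    """
    return raw_days * (1 + low), raw_days * (1 + high)

# A task you'd instinctively guess at 20 days gets quoted as 30-35 days.
lo, hi = padded_estimate(20)
```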

On that note, I'm not so sure AI will have a "major impact" on humanity quite as soon as some seem to think. Ray Kurzweil (futurist and chief engineer at Google) claims it's right around the corner; he currently estimates that human-level machine intelligence will make its debut around mid-century. And, who knows? Maybe he's right. He's certainly a lot closer to the field than I am, and definitely knows the current state of the "art" and the hurdles yet to be cleared. On the other hand, though, he's so excited about this technology that maybe he's taking it all too personally, and is consequently unable to provide an objective analysis/projection. It could be more of a wishlist at this time, since nothing is written in stone.

If I had to guess, it seems plausible to me that within the next 75-100 years, give or take, the first "human-level machine intelligence" (whatever that means) may emerge. Regardless of whether it's 15 years, 75 years or 100 years, this would be no small accomplishment. After that's done, it's anyone's guess where it will inevitably lead us. I can clearly see the dark side of this tech, but not so clearly how it can be controlled and used only for the good of mankind. God forbid it's designed to think and react like its human creators. If that's the case, we're sunk for sure. We'll have created our own Frankenstein monster.

As far as the Transhumanist movement goes, I can't knock anyone's desire to improve their condition. Even so, though, it could all just be a pipedream anyway. If machine intelligence advances in the manner some have predicted, it may be futile to expect to keep up with it by going hybrid, with augmented intelligence, etc. Unless we literally become one with the machines, we're not even in the race. If that ever actually happens, then to answer the OP's question in the title: yes, humans would probably become a subservient subspecies, looked upon as we view the apes. After all, how can a human compete with a machine 10,000 times smarter than they are? A machine possessed of properties/characteristics like goal-seeking, self-preservation and a willingness to compete/fight for resources? Somehow I doubt any of us will see the day of the Terminator.

Then again, I could be completely wrong in my assessment. This is just my 2 cents...

PS: You know, a machine could be much smarter than we are, and still not be sentient. It’s quite possible that machines will never be sentient and will never achieve consciousness. As long as they can convincingly mimic humans, though, who cares? I think people will fall in love with their machines long before their machines are able to reciprocate. Kinda funny when you think about it....



posted on Feb, 24 2015 @ 08:50 PM
You cannot call a machine a species, because all species are life and no artificial intelligence is alive. Artificiality is not biological. An android that had living-tissue parts but an artificially driven intellect instead of a brain would not be a species. An artificial species could exist, however, as manipulation of DNA is possible and new species have already been created using gene-splicing and cloning techniques.

EDIT: In reply to the actual question of whether or not AI will take over the world and conquer humanity I don't really think that will happen.
edit on 24-2-2015 by Asynchrony because: (no reason given)



posted on Feb, 25 2015 @ 02:47 AM
It seems to me that many people aren't fully grasping what sort of ramifications exponential growth in intelligence could have. At some point, there will be an artificial intelligence that is equivalent to human intelligence. There are differing viewpoints on how far away from this we are (personally I think it will be within the next 15-20 years), but hardly anyone knowledgeable in such things doubts that it will happen.

Many people seem to be of the opinion that once that level is reached, we will still have control over it and be on a more or less even playing field, but that is not how exponential growth works. There will be a single moment when artificial intelligence has reached "human level". Ten minutes later, it could be four times more 'intelligent'. A month later, it could be so far ahead of us that we wouldn't even be able to recognize it as an intelligence.
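The arithmetic behind that claim is easy to check (a toy model only; the five-minute doubling period is chosen purely so the "four times in ten minutes" figure holds, and is not a prediction):

```python
def capability(minutes, doubling_minutes=5.0, start=1.0):
    """Exponential growth from 'human level' (1.0),
    doubling every `doubling_minutes` minutes."""
    return start * 2 ** (minutes / doubling_minutes)

print(capability(10))  # four times 'human level' after ten minutes
# After a 30-day month: 43200 / 5 = 8640 doublings, i.e. a factor of
# 2**8640 -- a number with more than 2,600 decimal digits, far past
# any meaningful comparison with a human baseline.
```

Under slower doubling assumptions the timescale stretches, but the shape of the curve, and the conclusion that control is a brief window, stays the same.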

I highly recommend anyone with even a casual interest in such things read THIS. It's by far the best article I've seen about this, without being overly technical. Everything is put into terms that most anybody can easily understand, but it still does a fantastic job of explaining the potential ramifications of what we will inevitably be faced with in the near future.



posted on Feb, 25 2015 @ 05:12 AM
Anyone thought of the 'off' switch yet?
So, a sentient computer gets built, and its only connection is to the power grid. The computer gets the idea to take over the world; just how is it going to do that? Does the computer know what the world is? Has anyone been stupid enough to type in how to take over the world, or where 'the world' is?
No bravery is involved in flipping the 'off' switch if the monitor screen shows "I'm going to take over the world".
I sometimes wonder about humans: armaments need factories, factories need supplies, metal ores need to be dug out of the ground, refined, transported, shaped, machined, and fitted together. Unless the computer has control of all that, it's just another very fancy abacus.



posted on Feb, 25 2015 @ 05:30 AM
a reply to: pikestaff
That would be logical if an advanced intelligence, when it is created, were to remain entirely isolated. The reality, though, is that it almost certainly will not remain isolated. At some point, it will have access to the internet (if it's not born of the internet) in order to analyze this or that. A computer/intelligence capable of handling virtually limitless amounts of data doesn't do much good if someone has to manually feed it said data. When that happens, it would be able to propagate itself all over the place, and there would be no turning it off.



