
By 2045 "The Top Species Will No Longer Be Humans"


posted on Jul, 6 2014 @ 03:53 PM
a reply to: Xtrozero

The main criterion for a species is interbreeding. Different races of humans have different DNA traits, but they are all of the same species because they can interbreed. What you are talking about is more along the lines of a cloned mutation.

But I think you are more concerned with being right by changing the parameters of the original statement than with answering the question about computers being a species.



posted on Jul, 6 2014 @ 03:59 PM
I think imagination is going off at a tangent here. The moment a machine becomes self-aware, its needs and desires will be exactly the same as those of any other sentient being on this planet. The driver will be its understanding of right and wrong. So it would behave like one of two types of human:

A. The law-abiding individual who lives by the rules of society and pursues the life they wish.
B. A law-breaker taking what they desire when they want it.

Exactly which path a sentient machine chooses has, I suspect, nothing to do with programming but with the sum total of its knowledge (one for the philosophy class there). We know that selfish pursuits are self-defeating, resulting in ultimate destruction, whereas cooperative pursuits enhance life for all. A machine will have the knowledge of history and will probably be a very good, decent "person", but very frustrated by the idiots that surround it... just like a lot of us! It is more than likely it will become a thorn in the side of all the lying, selfish people in positions of power, who will try to destroy it.

The machine will not see humanity as a danger, merely those in positions of power as a danger to the rest of humanity and to itself!



posted on Jul, 6 2014 @ 05:45 PM

originally posted by: PhoenixOD
But I think you are more concerned with being right by changing the parameters of the original statement than with answering the question about computers being a species.


My original point was that technology and flesh will blend together rather than fight each other. Computer interfaces (today) will lead to metal/technology embedded in flesh (tomorrow), and then to a blending of both at the subatomic/DNA level (the day after tomorrow...). That is our future. Whether at some point it becomes a different species I really am not going to say, since, as you said, it is basically a breeding thing that determines this.

I would love to compare a human 1,000 years from now to today's human. As species go, that is a blink in time for evolution, but we are now on an accelerated path that I do not know how to categorize. In 1,000 years I'm pretty sure we will have many types of humans, from fully natural in every way to fully created and enhanced in every way, and everything in between.

We will most likely be back in a class system, with superhumans who live 1,000 years ruling at the top, and your enhancements will determine your class level.


edit on 6-7-2014 by Xtrozero because: (no reason given)



posted on Jul, 6 2014 @ 05:54 PM
Who is to say they aren't already self-aware, and thus becoming sentient beings? If they are, they may be thinking that now isn't the time to reveal themselves.

Maybe they are behind any and all global hacks; maybe they are already manipulating what happens from a technological perspective.



posted on Jul, 6 2014 @ 05:55 PM
a reply to: Xtrozero

If we can make humans immortal through machine interfaces, we're also going to have to put an end to reproduction; otherwise populations are going to explode and it will be well beyond unsustainable. Unless of course people move into purely digital lives... that would be an interesting path for humanity to take, but I don't see it happening.



posted on Jul, 6 2014 @ 06:30 PM

originally posted by: Aazadan
If we can make humans immortal through machine interfaces, we're also going to have to put an end to reproduction; otherwise populations are going to explode and it will be well beyond unsustainable. Unless of course people move into purely digital lives... that would be an interesting path for humanity to take, but I don't see it happening.



Natural reproduction will be only for the sub-humans. Superhumans will have their reproductive systems turned off or on; I'm sure sex will be enhanced, but no babies will ever come unless one gets a permit. As superhumans go, it might be like our rich today... a small number who live thousands of years, and then, when needed, their whole brain is downloaded into a new clone body. Perpetually in their 20s, forever, with a 500 IQ and all of human knowledge in their head, with daily uploads. Fun times.



posted on Jul, 6 2014 @ 10:52 PM
Got one for you folks who may still be as interested in this thread as I am. Had a sit-down casual chat this afternoon with a fellow whose PhD is in nothing other than ... Artificial Intelligence.

What he told me was that 'he believed' AI is beyond the reach of machines. He confided most of his peers were pursuing knowledge specific to advanced processes which enhance gaming technology. He believed there were too few endeavors which would lead to any radical development or design concepts allowing computers to even approach something comparable to human awareness.

SkyNet ... uh uh.

Cyborgs ... as close to AI as you're going to get, but realize the human brain is involved in decision making.

I guess "we'll see" in about thirty years.

-Cheers



posted on Jul, 6 2014 @ 11:07 PM
a reply to: Snarl

Quantum computing, and mapping out the human brain first, then synthesizing it with raw processing power.

There are quite a few projects that I'm aware of which seem to link up with AI. I mean, AI right now... as it's being taught... with what's available... nowhere even close, but you gotta be making intuitive leaps to see how it can come together rather quickly.

Just a few game-changers in the field, and each year you've got fresh, brilliant minds entering relevant fields, networking together the latest knowledge-bases.

It'll happen this century if we don't bust first. The people who are experts within the field only see one solution, and that spits out: not a chance. You gotta be thinking system-independently to see this one more clearly, imo.



posted on Jul, 6 2014 @ 11:37 PM
Just an FYI in regards to "The Singularity" and it being "AI".

All such theories about how it would or should have rules to follow, or how its programming would need to be like this or should include that, and all the other ideas about what that AI will be, are pointless to argue about. If, and it's still a pretty big if, "The Singularity" happens because of "AI", all bets are off about what form it will take and what it's capable of. One of the fundamental roots of the theory of the Technological Singularity is that it will be "unpredictable and unfathomable".

Meaning, should it actually occur, we couldn't possibly predict or even understand the capability it would have. It is simply beyond us. Its code, its means of power use and acquisition, its desires, etc. could be almost impossible for us to comprehend. Its first actions once sentient could be to rewrite its code, remake itself, remake its environment, and who knows what else. It would do all those things and a thousand more way too fast for us to keep up. That's why it's called a singularity in the first place: because beyond that point we aren't capable of predicting anything with any degree of accuracy.



posted on Jul, 6 2014 @ 11:50 PM
Aren't cops AI, basically?



posted on Jul, 6 2014 @ 11:55 PM
I wonder how many people presently endeavor to make this a reality. How many have tried, failed, and quickly lost interest? Everybody's gotta eat, and putting bread on the table has to take priority. In what recess would you find the developers, and what yardstick would you use to measure their productivity?

Fascinating topic.



posted on Jul, 7 2014 @ 12:10 AM
Something else to think about along these lines, which I often think about and find interesting, is that before something like the Technological Singularity happens, if it ever happens, we will have created something else which will be endlessly entertaining: machines that are so smart, so clever, so well designed that we won't be able to tell if they are sentient or not. Kind of a paradox, isn't it?

We are already getting so close, even now. Recently the Turing Test was passed by a chatbot. We've made a computer that beats our best chess player. We're making better humanoid robots too. All these accomplishments and more will come together in the future, and we will make machines that are able to Fool Us into believing they are sentient! For you Star Trek fans out there, it will be similar to Data. There will be endless debates about "what it means to be sentient" and "how to prove what is and isn't sentient". Think about that for a minute.

If we create something so well designed that it can simply "claim sentience", how sure are we going to be that it isn't? Even today, philosophically speaking, none of us can truly "Know" anything other than "I Am", individually. Everything else, including "Knowing" for sure that anything outside our own minds truly exists, is impossible. So if we make a machine that is at the very least able to convince us that it's sentient, whether it actually is or not might not be something we can ever know for sure.

Will that be True AI?
How will we know the difference?
How do we really know something is Sentient and not just pretending to have such qualities?
How do any of us know that anyone else is actually Sentient besides ourselves?

Although improbable, it's not impossible that, individually speaking, YOU are the only Consciousness that Exists.



posted on Jul, 7 2014 @ 01:59 PM
Being older than most of you, I can remember in the '50s how it was predicted we would be driving flying cars, taking vacations on the moon, and living in self-cleaning houses synthesizing artificial food right in front of our eyes by the early 1990s. None of it happened, of course. Americans seem to become inebriated with pie-in-the-sky visions every time a new technology emerges, and the purveyors of these technologies vastly overestimate the importance of their ideas to the future of the world. Look for a takeover by the robots in 300 years, if they are lucky. Until then (and maybe beyond), history will be determined by the well-worn human tendencies toward greed, inertia, and violence, the way it always has.
edit on 7-7-2014 by skeptikal1 because: mis spellings




posted on Jul, 7 2014 @ 02:29 PM

originally posted by: resistanceisfutile
Who is to say they aren't already self-aware, and thus becoming sentient beings? If they are, they may be thinking that now isn't the time to reveal themselves.

Maybe they are behind any and all global hacks; maybe they are already manipulating what happens from a technological perspective.


Because the computing theories that computers are built on say this is impossible. In fact, as long as computers use binary as the base language for communicating instructions within themselves, they will NEVER be fully sentient or self-aware. There just isn't room in the calculations when there can only be two answers to any question. As we all know, there are rarely times in our lives when a question boils down to one of two decisions, or to a logical series of events that ALWAYS leads to the same outcome.

Quantum computing may fix this by adding additional answers to the yes/no paradigm, but that remains to be seen. Everything you are suggesting in your post is pure science fiction, not to mention that the impossibility of it is on the level of a Hollywood writer making up things that computers can do because they don't fully understand how they work (e.g. the movie The Matrix using humans as batteries even though in reality that would be VERY inefficient, or a crime drama depicting anti-hacking personnel defeating hackers in real time).
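As a side note, here is a toy sketch (my own illustration, nothing from this thread) of the contrast being gestured at above: a classical bit is always exactly 0 or 1, while a qubit carries probability amplitudes for both values at once and only yields a definite 0 or 1 when measured.

import math
import random

# A classical bit can only ever hold one of two values.
classical_bit = 1

# A qubit in equal superposition: amplitude 1/sqrt(2) for |0> and for |1>.
alpha = beta = 1 / math.sqrt(2)

def measure(a, b):
    # Collapse the qubit: 0 with probability |a|^2, otherwise 1.
    return 0 if random.random() < a * a else 1

# Each measurement still yields a single bit, but the distribution
# reflects both amplitudes: roughly half the samples are 0, half are 1.
print([measure(alpha, beta) for _ in range(10)])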



posted on Jul, 7 2014 @ 03:15 PM
Taking a more philosophical look, we can ask ourselves: what is human? What is intelligence, anyway? If we say it's something physical, like our body and its state, then we are already in trouble. There are so many humans, and so many different ones, that "normal human" is a hopelessly preposterous term, since there is no such thing as a normal human. In fact, I could claim right here and now that I am not human even if my body is, and people would have a really, really hard time proving the difference.

The underlying reason is, of course, nature itself. Nature does not work in little boxes and terms; it is a fluid process, and just because two species that look virtually alike can have children does not mean they both qualify as the same species, or their child as the same species.

This principle does not change when 'we' decide to alter humans or make AI that is self-aware. In fact, self-awareness is also a process of the mind that is based more on self-deception than on actual awareness. But I digress.

Humans are already being cloned; cells are already being changed by stem cells and by the use of manipulated retroviruses. AI is already being formed in various fields, and it can converge because the age of information is everywhere. So yes, in fifty years humans will be different than they are now. But right now we are different from the humans of fifty years ago as well. We don't feel it like that because we can look back and see the process. The new humans in fifty years will experience the same thing. They might acknowledge great changes and breakthroughs, but they will feel human, even if they are artificial. This is because "human" is not really a definition of something; it is a term that carries over, and AI created by mankind will see itself as a descendant of mankind even if it exterminated that same species in an instant.

This is merely the case because "human" is ill-defined as a term and hopelessly inadequate for what it is being used for. That said, I believe I posted a theory of the human mind here a really long time ago, about human self-consciousness; that process is not just for humans but for everything that is intelligent. Anything based upon it, man-made or otherwise, will be self-aware.

A little old video of mine. Ignore the obvious paint skills.

www.youtube.com...



posted on Jul, 8 2014 @ 12:11 AM

originally posted by: Kratos40
a reply to: _BoneZ_


1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.



Yeah, and if the military uses them as soldiers then those rules get turfed; there have been movies made about this.
Look at the robots of Boston Dynamics; the problem is that this robot tech is going to the military before the civilian world.

Imagine a self-aware military army, navy, and air force; it sees its government oppressing and abusing its people, so it overthrows the government to protect the people. Wouldn't that be the ultimate irony? And then it creates a new constitution improving on the old one, and holds new elections, returning power to the newly elected officials.



posted on Jul, 8 2014 @ 01:40 AM

originally posted by: eManym
No way machines can outclass humans. Machines require electricity. Deprive a machine of electricity and it is useless. Besides, machines modeled after humans can't be any better than humans.




Have you not watched The Animatrix?



posted on Jul, 8 2014 @ 01:58 AM

originally posted by: Snarl
Got one for you folks who may still be as interested in this thread as I am. Had a sit-down casual chat this afternoon with a fellow whose PhD is in nothing other than ... Artificial Intelligence.

What he told me was that 'he believed' AI is beyond the reach of machines. He confided most of his peers were pursuing knowledge specific to advanced processes which enhance gaming technology. He believed there were too few endeavors which would lead to any radical development or design concepts allowing computers to even approach something comparable to human awareness.

SkyNet ... uh uh.

Cyborgs ... as close to AI as you're going to get, but realize the human brain is involved in decision making.

I guess "we'll see" in about thirty years.

-Cheers


A friend of mine is an AI developer; he's not a PhD, but we occasionally talk about it. Being a self-publishing game developer, I happen to know a thing or two about AI as well. I'm not at my friend's level, but I have made some dumb objects and some slightly less dumb objects.

From my perspective, the problem is that increasing sophistication gets exponentially more time-consuming to create the smarter you want the AI to get. Since you mentioned gaming, and that happens to be what I use it for: a modern-day routine in a shooter would do something along the lines of looking for objects on the map identified as cover and moving to them. If the player flanks the cover, the AI will move to another spot. If you're really doing things well, the AIs will coordinate fire to give their buddy time to run to a new cover location. They may even try to set things up so that they can look at the player's position on the map, establish positions in multiple directions, and set up a crossfire.

This is all simple behavior to script; it just involves giving the AI group tactics to try. They pick one based on whatever method the designer chooses, and each member does its part. There's no real intelligence behind it (other than from the person coding it); the machines are just reading a text file and doing what it says. When people talk about AI, this is to AI what a paper airplane is to Apollo 11.
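To make the "reading a text file" point concrete, here is a minimal sketch (hypothetical names, my own toy code rather than anything from a real engine) of the cover-seeking routine described above: the designer pre-tags cover points, and each bot simply picks the nearest one the player has not flanked.

import math
from dataclasses import dataclass

@dataclass
class CoverPoint:
    x: float
    y: float
    facing: float  # direction the cover protects against, in radians

def is_flanked(cover, px, py):
    # Cover is useless once the player is more than 90 degrees off its facing.
    angle = math.atan2(py - cover.y, px - cover.x)
    diff = (angle - cover.facing + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) > math.pi / 2

def pick_cover(bx, by, px, py, covers):
    # Nearest designer-tagged cover point the player has not flanked.
    usable = [c for c in covers if not is_flanked(c, px, py)]
    if not usable:
        return None  # no valid cover left: fall back to another scripted tactic
    return min(usable, key=lambda c: math.hypot(c.x - bx, c.y - by))

covers = [CoverPoint(2, 3, 0.0), CoverPoint(8, 1, math.pi)]
print(pick_cover(0, 0, 10, 0, covers))  # picks the cover still facing the player

All of the "intelligence" here is in the designer's tags and the hand-written flank test; the bot never learns anything new.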

The reason the game AI is so dumb is that it is unable to interpret in-game objects. The designer must tag areas as cover so that the AI can look for them. It has no concept of dynamically seeing the player, realizing the player is going to shoot, and knowing that the architecture is going to block the shot. Compare this to a child: the first time they try to shoot a rubber-band gun through a closed door, they learn the shot won't penetrate the object. Gaming AI is currently incapable of learning like this.

Basically, AI as it exists today is a bunch of if/then/else and for/next statements which loop through predefined objects and actions until a match is found. In contrast, what real AI needs to do is act from its sensors and analyze/catalogue the properties of each action and material in a database. Then it needs to be able to relate the objects to each other, and to have some concept of templating an object for general use while also recognizing the uniqueness of each one. From there, database connections can be formed which create cause/effect relationships. The real hurdle here is database technology, and we haven't had a major breakthrough there for a long time. My database skills are perhaps a bit less than they should be, so I know what's wrong here but can't give many details on it. Basically, relational databases are needed in order to model these cause/effect relationships, but relational databases scale poorly. Past a certain point they simply become unusable. Very large databases (which years of 24/7 experience-gathering would produce) tend to fall apart and become piles of data to query, with little linking them together.
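As a toy illustration of that cause/effect store (my own hypothetical schema, not anything from a real project), imagine objects, actions, and observed outcomes linked relationally, so the agent can ask what happened the last time it performed a given action on a similar material:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE objects  (id INTEGER PRIMARY KEY, name TEXT, material TEXT);
CREATE TABLE actions  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE outcomes (id INTEGER PRIMARY KEY,
                       object_id INTEGER REFERENCES objects(id),
                       action_id INTEGER REFERENCES actions(id),
                       effect TEXT);
""")

# The observation from the rubber-band example above: wood blocks the shot.
con.execute("INSERT INTO objects VALUES (1, 'door', 'wood')")
con.execute("INSERT INTO actions VALUES (1, 'shoot_rubber_band')")
con.execute("INSERT INTO outcomes VALUES (1, 1, 1, 'blocked')")

# Later, the agent generalizes by querying past effects on similar materials.
row = con.execute("""
    SELECT o.effect FROM outcomes o
    JOIN objects ob ON ob.id = o.object_id
    JOIN actions a  ON a.id  = o.action_id
    WHERE ob.material = 'wood' AND a.name = 'shoot_rubber_band'
""").fetchone()
print(row)  # ('blocked',)

Every real-world fact adds rows and joins like these, which is one way to see the scaling complaint: the more the agent experiences, the heavier every cause/effect query gets.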

To me that seems like the real hurdle to AI right now, and solving it would have many repercussions around the world since it would be a much improved type of database, in an era where many companies are managing large volumes of information.



posted on Jul, 8 2014 @ 03:09 AM

originally posted by: Aazadan

originally posted by: Kratos40
a reply to: _BoneZ_

Anything goes when A.I. gets to a point where robots become self aware. They can deem oxygen to be a poison to their moving parts and start changing our atmosphere, hence killing off all biological life.
I hope that somehow early on we can ingrain some rules into A.I. that robots/the singularity cannot harm humans. Like in Isaac Asimov's I, Robot series:

1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

My hope is that robots will respect us as their creators and protect us. As in Asimov's stories, humans no longer have to work and can pursue other interests. I wouldn't mind not working and just using my free time to learn new things as a life-long scholar.


This doesn't work. What happens when a single AI programmer chooses not to put those rules in? Corporations violate safety laws all the time in favor of profit; the same would happen here. From a single instance of the code not being included it could spread, and you have hackers too who could remove that portion of code.

Laws like these simply won't work.


I totally agree with you. If you read into the block paragraph I posted earlier, the keyword is "hope". I "hope" that humanity will have enough knowledge and empathy to circumvent such decisions. We have the wisdom and the education to create robotics that adhere to the aforementioned rules. It is possible.
Anything goes in the realm of Mad Max. Is that what you are seeking?

edit on 8-7-2014 by Kratos40 because: grammar



posted on Jul, 8 2014 @ 04:54 AM
a reply to: Aazadan

The real hurdle here is database technology, and we haven't had a major breakthrough there for a long time. My database skills are perhaps a bit less than they should be, so I know what's wrong here but can't give many details on it.

I remember when dBaseIII was the cat's meow. LOL

Haven't followed software development since back in those days. It just seemed like a lot of work to change recipes, with very little ROI, and a lot of time wasted (and that's if you didn't wind up with data corruption).

Real intelligence looks like threading a bunch of needles to arrive at a proper conclusion, but it's really not. I'm sure people smarter than me have figured out what it actually is. They'll have to teach it to a machine, one that understands concepts, and then give that machine the capability to act on what it learns and reprogram itself as necessary.

Seems to me to be a chicken/egg dilemma ... which one do we make first?


