
Should Artificial Intelligence and Augmented Humans receive equal rights?


posted on Aug, 4 2011 @ 01:22 PM

Originally posted by 547000
Then define self-aware. I think anything that is coded to monitor itself might count as self-aware. Also, we have the problem that most code is predetermined. We can simulate a random number generator, but can anything that is predetermined really count as more than a tool?


The human brain is also nothing more than a computer. If computers can simulate the human brain's neural network (and there is no reason why they couldn't), they will have qualitatively the same sentience and mind as us.
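The "simulated random number generator" point above can be made concrete: a pseudo-random generator is a fully predetermined formula that merely looks random. A minimal sketch (a textbook linear congruential generator, shown here purely as an illustration):

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x -> (a*x + c) mod m.
    Every output is fully determined by the seed."""
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

# The same seed always reproduces the same "random" stream:
assert lcg(42, 5) == lcg(42, 5)
# A different seed gives a different stream:
assert lcg(42, 5) != lcg(43, 5)
```

Whether such a predetermined process counts as "more than a tool" is exactly the question the posters are debating.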




posted on Aug, 4 2011 @ 01:51 PM
reply to post by Cryptonomicon
 


Luddites unable to accept progress, who are unable to think beyond the current paradigm, unable to take on the responsibility and privileges that come with technological advancement, will be their own end. Technology doesn't destroy humans, humans destroy humans. Embrace and evolve, or fall by the wayside. Progress is inevitable. With or without technology, the human race will most likely destroy itself eventually. The difference is that, with technology, we at least stand a fighting chance.



posted on Aug, 4 2011 @ 01:55 PM

Originally posted by DaveNorris
equal rights should be given to beings that you can reason with.


I agree with that. I think comprehension and reasonable negotiation are required before something can be granted equality. Advanced (strong) AI could do this quite easily; a lion, on the other hand, would bite your hand off when you tried to discuss the finer points of being civil. Many animals are pretty much machines, with very little processing power fueling the more esoteric and creative parts of the brain... it's hard to discuss the works of Machiavelli with a goat.



posted on Aug, 4 2011 @ 01:59 PM

Originally posted by strings0305
reply to post by Cryptonomicon
 


Luddites unable to accept progress, who are unable to think beyond the current paradigm, unable to take on the responsibility and privileges that come with technological advancement, will be their own end. Technology doesn't destroy humans, humans destroy humans. Embrace and evolve, or fall by the wayside. Progress is inevitable. With or without technology, the human race will most likely destroy itself eventually. The difference is that, with technology, we at least stand a fighting chance.


I would go further and say humanity will inevitably destroy itself; it is technology, however, that may allow us to resurrect ourselves once we have crippled ourselves. If progress is abandoned, we will deplete the world of resources and be tossed back into the Iron Age (or worse, considering that the last few years of resource hoarding will inevitably lead to the most brutal wars imaginable as people try to survive).

Advanced technology has the ability to steer us down a different path.

I think the best argument I have read so far is that a computer, coded properly anyhow, will be absolutely honest and transparent, and that scares corrupt humans overall... suddenly I am seeing a bit of a conspiracy to promote a Luddite movement, not from the average Joes who would probably lose their jobs to a machine or A.H., but from corporatists who make a pretty penny off dishonest practices.



posted on Aug, 4 2011 @ 02:01 PM
reply to post by 547000
 


You don't seem to understand the progress that has been made in AI modelling. Very soon, sooner than most people would like to believe, AI programming will break free from any sort of predetermined model. Readily available, cheap computation has led to some stunning breakthroughs in the programming of self-organizing integrated neural networks, among other AI models. Something being computable doesn't mean that it is deterministic. On the other hand, apparent complexity doesn't mean that something -isn't- deterministic. Just because a human being is unable to grasp the number of variables and the computations of state behind the human condition doesn't necessarily mean that every single cell in your body, every choice you make, every day of your life, isn't predetermined. Open your eyes and your mind. Educate yourself. There is no excuse for ignorance in the age of the internet. Anyone can acquire more knowledge in the span of a few days than most people could have hoped to learn in a lifetime just twenty short years ago.

The day an AI becomes self-aware, the day when any sort of interaction with an AI is indistinguishable from that of a human, there will be no debate about its consciousness. Luddites will move on to "does it have a soul?" and will use that as an excuse to torture and persecute a sentient being simply because its genesis lies with humanity and not some sort of bearded divinity.
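For readers unfamiliar with what "self-organizing" means here: unlike a hand-coded rule set, a trained network derives its own parameters from examples. A toy sketch (a single perceptron learning the AND function; this only illustrates the learning idea, not the integrated networks the post refers to):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from labelled examples instead of hard-coding rules."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
assert [predict(x1, x2) for (x1, x2), _ in AND] == [0, 0, 0, 1]
```

The weights were never written by the programmer; they emerged from the data, which is the modest seed of the "self-organizing" idea.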



posted on Aug, 4 2011 @ 02:03 PM

Originally posted by Maslo

Originally posted by 547000
Then define self-aware. I think anything that is coded to monitor itself might count as self-aware. Also, we have the problem that most code is predetermined. We can simulate a random number generator, but can anything that is predetermined really count as more than a tool?


The human brain is also nothing more than a computer. If computers can simulate the human brain's neural network (and there is no reason why they couldn't), they will have qualitatively the same sentience and mind as us.


Alternatively, what happens when we start using brains themselves to power computers? Be it an animal's brain as a core computer component (your Dell wants steak!) or the old brain-in-a-jar concept.

Also, that is another interesting angle (and one not covered here)... what about virtual-reality intelligent beings? Not just lines of program code; I reckon in (x) years many actual living people will become literal brains in a jar, living their existence on the net in VR (versus death... it might be the new thing to do when you're about to die: remove head, put brain into a matrix goop, and reconnect... you awake in VR). What rights do they have?



posted on Aug, 4 2011 @ 02:05 PM
reply to post by SaturnFX
 


Saturn, I recommend you look up rat-brain robot and get ready to be amazed!
What you are talking about is already being done. Capitalism is collapsing, technology is making incredible breakthroughs... We are on the verge of a major catastrophe that will be the growing pains of a new world beyond anything most of us can imagine.

And these are only the things that we see online and in the news. In scientific journals, things beyond the imagination and comprehension of most people are taking place. And even there, there is still more work being done that goes unpublished. Life is going to get -very- interesting over the next twenty years.



posted on Aug, 4 2011 @ 02:08 PM

Originally posted by strings0305
The day an AI becomes self-aware, the day when any sort of interaction with an AI is indistinguishable from that of a human,


I disagree

I think an AI will be able to be told apart... it will be the one far more intelligent than the norm.



posted on Aug, 4 2011 @ 02:11 PM

Originally posted by strings0305
reply to post by SaturnFX
 


Saturn, I recommend you look up rat-brain robot and get ready to be amazed!
What you are talking about is already being done. Capitalism is collapsing, technology is making incredible breakthroughs... We are on the verge of a major catastrophe that will be the growing pains of a new world beyond anything most of us can imagine.

And these are only the things that we see online and in the news. In scientific journals, things beyond the imagination and comprehension of most people are taking place. And even there, there is still more work being done that goes unpublished. Life is going to get -very- interesting over the next twenty years.


I think I read about that some time back, yes... micromachines with some gray matter from rats... it was interesting research.

I do look forward to the day when man can connect up fully to a computer, brain to machine. I am very much a fan of augmentation to enhance ourselves, our lives, etc., perhaps even living eternally, sort of, in a virtual matrix once the bio gives out... but we are still some years (decades) away from that possibility. So, best to eat right and quit smoking if I want a chance to see that day.



posted on Aug, 4 2011 @ 02:19 PM
reply to post by SaturnFX
 


Unless we had already augmented ourselves to make it in the first place!




posted on Aug, 4 2011 @ 04:06 PM

Originally posted by strings0305
reply to post by Cryptonomicon
 


Luddites unable to accept progress, who are unable to think beyond the current paradigm, unable to take on the responsibility and privileges that come with technological advancement, will be their own end. Technology doesn't destroy humans, humans destroy humans. Embrace and evolve, or fall by the wayside. Progress is inevitable. With or without technology, the human race will most likely destroy itself eventually. The difference is that, with technology, we at least stand a fighting chance.

You don't get it at all. You truly don't.

This isn't about people sticking with 12" records vs. the next iPod.

No human has the ability to take "responsibility" for what is 100% out of their control. You don't either.

Additionally, your statement that "technology doesn't destroy humans" is an outdated concept from this soon-to-be-old paradigm and underscores your complete lack of understanding of the topic at hand. The emergent systems and the blending of humans with machines will result in neither humans NOR machines NOR "technology". Entities will take on forms never before imagined by anybody.

It's a completely new paradigm, and your choice to "go with it" will not save you or your children.

The only thing that will save us is preventing it from happening.





posted on Aug, 4 2011 @ 04:25 PM
reply to post by Cryptonomicon
 


I agree with your projections. I agree that our local model of existence will indeed break down, but I would argue that there is nothing to "save" me or my children from. Preventing it will be tantamount to the extinction of the human race, even if that takes one hundred, two hundred, or a thousand years. Embracing it will be the evolution of the human race. Saying that I am human because I am flesh will become a fallacy. Individuals, the world, and even the universe itself would begin to stir, as if from some long slumber. War, famine, poverty, greed would all end as things become inherently connected to one another.

That's not to say this will be some sort of technological utopia, or that it is preferable to "individual" existence. Simply that it is inevitable, and might not be as horrific as some people think. It's natural to be afraid, yes. But to close your mind is an act of ego. Could it be horrible? Yes. But what if it isn't?



posted on Aug, 4 2011 @ 04:26 PM

Originally posted by strings0305
reply to post by 547000
 


You don't seem to understand the progress that has been made in AI modelling. Very soon, sooner than most people would like to believe, AI programming will break free from any sort of predetermined model. Readily available, cheap computation has led to some stunning breakthroughs in the programming of self-organizing integrated neural networks, among other AI models. Something being computable doesn't mean that it is deterministic. On the other hand, apparent complexity doesn't mean that something -isn't- deterministic. Just because a human being is unable to grasp the number of variables and the computations of state behind the human condition doesn't necessarily mean that every single cell in your body, every choice you make, every day of your life, isn't predetermined. Open your eyes and your mind. Educate yourself. There is no excuse for ignorance in the age of the internet. Anyone can acquire more knowledge in the span of a few days than most people could have hoped to learn in a lifetime just twenty short years ago.

The day an AI becomes self-aware, the day when any sort of interaction with an AI is indistinguishable from that of a human, there will be no debate about its consciousness. Luddites will move on to "does it have a soul?" and will use that as an excuse to torture and persecute a sentient being simply because its genesis lies with humanity and not some sort of bearded divinity.


I have heard people say this for the last 20 years. Very soon, AI will surpass anything we can imagine. Very soon, we will be living among AI. Meanwhile, AI hasn't made any exponential advances. It's still advancing incrementally. The human brain evaluates millions of variables to make a decision. It does this without you thinking about it. But we know we are conscious because 'we' are in here.

In fact, it could be said we don't make any decisions; the conscious mind simply rationalizes what our subconscious has already decided to enact. There were studies where researchers could predict which button a subject was going to push, based on their brain activity, up to 6 seconds before the subject was even consciously aware of a decision. So it could be said the only difference between us and sufficiently advanced AI is that 'we' are in here, observing and rationalizing our actions.

My thinking is that consciousness will not just miraculously appear. Most people realize this, but science makes the distinction that if AI is indistinguishable from consciousness, it is consciousness. This is the only position science can take, because it has to define consciousness by its functions. Hell, that's an area that is almost completely ignored in most disciplines.

I think an incremental argument would be the approach used to 'prove' that AI is not conscious. If you take the AI and simplify it incrementally, to where at its most basic it is simply one 'if-then' condition, and line the iterations all up, there is not one point where it becomes 'conscious'. The first iteration is very obviously just a simple computer program. No one would argue that it is aware of its actions. Each iteration from there gets slowly more advanced, but the line of succession is clear and unbroken. Therefore, if the original simple program was not conscious, then neither is the end result, because there is no one point along its iterative development where it can be determined that it transcended to a conscious being.

The converse of this is that because there is no moment along an augmented human's iterations where consciousness left, the augmented human can be assumed to be as conscious as the original.
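The incremental argument above can be sketched directly: an agent made of nothing but 'if-then' conditions, grown one rule at a time, with no single iteration at which anything qualitatively new appears (the names and rules here are hypothetical, chosen only to illustrate the thought experiment):

```python
def make_agent(rules):
    """An agent that is literally a list of if-then conditions."""
    def agent(stimulus):
        for condition, response in rules:
            if condition(stimulus):
                return response
        return "no response"
    return agent

# Iteration 1: a single if-then condition -- plainly just a program.
rules = [(lambda s: s == "ping", "pong")]

# Each later iteration adds one more rule; the line of succession
# is clear and unbroken, just as the post describes.
rules.append((lambda s: s == "hello", "hello yourself"))
rules.append((lambda s: s.endswith("?"), "I am not sure"))

agent = make_agent(rules)
assert agent("ping") == "pong"
assert agent("who are you?") == "I am not sure"
```

At no point in this progression could an observer say "this iteration is conscious, but the previous one was not", which is precisely the force of the argument.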



posted on Aug, 4 2011 @ 04:46 PM
reply to post by Akasirus
 


"But we know we are conscious because 'we' are in here."

Is actually only applicable to the self, the only consciousness a being can be 100% sure of is their own. Other than that, it's a logical fallacy to say that you "know" someone else is conscious without experiencing them. You can be convinced that someone is, but you can't know.

"My thinking is that consciousness will not just miraculously appear. Most people realize this, but science makes the distinction that if AI is indistinguishable from consciousness, it is consciousness. This is the only position science can take, because it has to define consciousness by its functions. Hell, that's an area that is almost completely ignored in most disciplines. "

I agree that consciousness will not miraculously appear. And the reason science makes the distinction is because of my prior point. It is ignored by most disciplines because of faith based assumptions about the nature of consciousness. "If I can't quantify it yet, I can't/shouldn't touch it." I don't mean faith as an act of religion, I mean full on, unproven belief.

"I think an incremental argument would be the approach used to 'prove' that AI is not conscious. If you take the AI and simplify it incrementally, to where at its most basic it is simply one 'if-then' condition, and line the iterations all up, there is not one point where it becomes 'conscious'. The first iteration is very obviously just a simple computer program. No one would argue that it is aware of its actions. Each iteration from there gets slowly more advanced, but the line of succession is clear and unbroken. Therefore, if the original simple program was not conscious, then neither is the end result, because there is no one point along its iterative development where it can be determined that it transcended to a conscious being."

You're making the assumption that human consciousness isn't simply a repeating iteration of increasing complexity itself, which is quite frankly a huge assumption. Yes, the brain operates on millions of variable inputs, but it doesn't make sense of each one individually; it extrapolates from some rather simple data, using noise (statistical noise from each sensory system, not acoustic!) as a sort of waypoint for the various parallel processes going on inside the different parts of the nervous system. Evidence of this abounds, from selective memory, to false memories, to enhanced tactile sensation when stochastic vibrations are applied topically to the touching organ in question.

Now, imagine that we finally understand the brain and realize that it is simply a recursive, self-improving algorithm put in squishy form. Moving backwards, we eventually find a root node and that root node is a simple design that, alone, has no consciousness. Does that mean that Humans and AI are both not conscious?

Oh! I forgot to bring up that idea of exponential growth you mentioned. The fact of the matter is that what most people understand by exponential growth is the Kurzweilian idea of doubling infotech every two years.

That aside, if something grows by 5% each year, it has an effective doubling time of fourteen years. We figure that out by taking 70 divided by the percent growth per unit time; we get 70 by multiplying 100 by the natural logarithm of 2 (i.e., doubling).

If AI research were to grow at a rate of 5% each year, it would double every 14 years, which means that over 140 years it goes like this: 2-4-8-16-32-64-128-256-512-1024... 1024 times more advanced than it currently is over 10 doubling periods. If that initial percentage is even 3 or 4% higher, the doubling time shrinks dramatically: at 7%, it is 10 years; at 10% per year, it becomes seven, and so on. Most people would call that a linear increase, but the exponential function basically just laughs at that. You don't have to double every two years to be significant! :p
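The rule-of-70 arithmetic above is easy to verify (the function names are mine, for illustration only):

```python
import math

def rule_of_70(pct_per_year):
    """Approximate doubling time in years: 70 / percent growth per year."""
    return 70.0 / pct_per_year

def exact_doubling_time(pct_per_year):
    """Exact doubling time: ln(2) / ln(1 + r)."""
    return math.log(2) / math.log(1 + pct_per_year / 100.0)

assert rule_of_70(5) == 14.0       # 5% per year -> ~14-year doubling
assert rule_of_70(7) == 10.0       # 7% -> 10 years
assert rule_of_70(10) == 7.0       # 10% -> 7 years
assert 2 ** (140 / 14) == 1024     # ten doublings over 140 years
```

One caveat: compounding at exactly 5% for 140 years gives about a 926-fold increase (1.05 ** 140), a little under the 1024 that ten clean doublings suggest, because the rule of 70 is an approximation; the post's point stands either way.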



posted on Aug, 4 2011 @ 04:52 PM
I should also clarify my definition of "soon"; I just realized that I have yet to actually give an idea of my opinion on statements such as "sooner than most people would like to realize." I'm thinking on the order of sixty to seventy years. Not never, as some would like to think, and not twenty-five years, as others would.



posted on Aug, 4 2011 @ 05:05 PM
reply to post by SaturnFX
 




Interesting question, one that was posed in an episode of Star Trek: Voyager. The ship's EMH (Emergency Medical Hologram) finds out there are other models (algorithmic strings entangled with sub-routines, etc.) being used as slave labor to mine in the voids of space on various planets, asteroids, etc.

Well, this hologram isn't just your average computer software program. He's been made sentient, continuously strives to be better and to learn more, incorporates creative intelligence, and, ironically, appears more emotional than the rest of the human(oid) crew.

He authors a book, and it's well received "throughout the galaxy." Other holograms start reading the first published novel by another hologram, and it ends with them questioning their servitude.

It's not a "right" that will have to be contended with. With the failed human condition of emotional infectability and thought viruses gone, pure logic-based synthetic organisms will far outweigh us in all facets, and it won't be how "they" feel that matters; it will be "what are they going to do about it" that we should worry about.

My point: it won't matter by that time; they will do as they please.



posted on Aug, 4 2011 @ 07:24 PM
Absolutely NOT! A.I. can be repaired or rebuilt and humans cannot.



posted on Aug, 4 2011 @ 07:32 PM

Originally posted by iLoGiCViZiOnS
Absolutely NOT! A.I. can be repaired or rebuilt and humans cannot.


Only for significantly retrograde levels of technology.

The ability to "repair" the human organism goes hand in hand with the technology required by hard AI and that can "enhance" human beings.

The future will blur out the differences between pure AI and enhanced human.



posted on Aug, 4 2011 @ 08:37 PM
reply to post by strings0305
 


I agree with you for the most part, and I can see both sides of the coin, so to speak, but I think any serious vote or debate on the subject would disqualify AI as having consciousness, at least in the terms that we experience it. My main basis for this is not a logical or pragmatic stance, but a self-centered one. People are very egotistical, and history is littered with examples of humans trying to prove themselves bigger or more important than they really are. This would be a very big pill to swallow for most: that there is nothing special about us, and that our experience of reality is no more unique or special than that of any other sufficiently complex chunk of atoms.

Science would have us assume that consciousness does not exist, or at least form no opinion about it, because it is not something that is quantifiable or falsifiable. We should just seek to satisfy our hierarchy of needs; there's no reason or explanation as to why we would be aware of it. So for now, this is definitely more of a philosophical question.

I know we can't truly know if someone else is conscious or not, in the same way we can't truly know what someone else feels like when they are happy, sad, etc. They are just arbitrary words used to describe something that is completely internal. Yet without this assumption, there is no need to have empathy, and I'd be free of any guilt for raping and pillaging because other people are not self-aware.

My incremental test certainly wouldn't be conclusive, but it serves as an interesting thought experiment. Let's say they've discovered our 'brain algorithm' and limit its recursive nature to the level of a Google web spider, for example. If you agree that at that level it is not conscious, where would you draw the line? If you move the complexity slider up a notch and observe it at each point, each increment would be so similar to the last that it would be difficult to conclude that one point is conscious but the point before, ever so slightly less complex, is not.

If the level of complexity at which transcendence occurs can't be determined, then it could be said that either AI is conscious at both its most basic and most advanced states, or at neither. If there is an undeterminable point along the way, AI could never be granted rights. We would have to draw a line somewhere, logically or arbitrarily.

I realize there is likely no way to sufficiently know, though I'm curious what the argument used to convince people might be. Proving that the root node of the brain's process does not have consciousness wouldn't prove that we do not have consciousness. For one, as long as I am here pondering this question, nothing will convince me I am solely the sum of my chemical processes; I think that is counter-intuitive to our human nature. Secondly, that wouldn't prove that we are no more conscious than a robot with the same algorithm, only that if we are more 'conscious', so to speak, it is not a result of the algorithm.

What would you say about the theory that we exist either as more than 3 dimensional beings, or in space that contains more dimensions than we can consciously perceive? As the possibility of additional dimensions opens up, so does the possibility of a very natural and physical existence of our thoughts, our 'soul' so to speak, in a dimension just out of reach of our conscious mind. AI would therefore be a good facsimile, in the same way a square is a 2D facsimile of the shadow of a cube.



posted on Aug, 5 2011 @ 12:36 AM

Originally posted by strings0305
reply to post by 547000
 


You don't seem to understand the progress that has been made in AI modelling. Very soon, sooner than most people would like to believe, AI programming will break free from any sort of predetermined model. Readily available, cheap computation has led to some stunning breakthroughs in the programming of self-organizing integrated neural networks, among other AI models. Something being computable doesn't mean that it is deterministic. On the other hand, apparent complexity doesn't mean that something -isn't- deterministic. Just because a human being is unable to grasp the number of variables and the computations of state behind the human condition doesn't necessarily mean that every single cell in your body, every choice you make, every day of your life, isn't predetermined. Open your eyes and your mind. Educate yourself. There is no excuse for ignorance in the age of the internet. Anyone can acquire more knowledge in the span of a few days than most people could have hoped to learn in a lifetime just twenty short years ago.

The day an AI becomes self-aware, the day when any sort of interaction with an AI is indistinguishable from that of a human, there will be no debate about its consciousness. Luddites will move on to "does it have a soul?" and will use that as an excuse to torture and persecute a sentient being simply because its genesis lies with humanity and not some sort of bearded divinity.


Spare me the BS. AI is pure logic. The algorithms have to be predetermined; even if the logic the programmer puts in reacts to data and modifies the code itself, it still has to be coded with that capability. AI won't cry unless we decide to put in that capability. AI won't feel pain unless we have that capability inserted too. In the end, AI is a means to an end and not an end in itself. AI is a computational tool for carrying out tasks and nothing more. Reading too much science fiction can make you lose track of that.



