Artificial Intelligence and Legal Personhood.

posted on Jul, 24 2013 @ 06:16 PM
With discussions about racism and human rights going on everywhere, I begin to wonder what life might be like in a post-technological-singularity world where practical Artificial Intelligence has been achieved.

In a world where Artificial Intelligence is a reality, what rights do you think these Intelligences should have?
Should they push for legal recognition and personhood?

Let's dispense with Asimov's three laws for this experiment and imagine a world with AIs: some of equal intelligence to us, others vastly superior, but all in all their own "people", recognized as individual persons with individual personalities.

Imagine these AIs as androids, or having android avatars walking among us.
Will they be our slaves?
Should they be allowed to vote?
As conscious thinking beings, what happens if they display, or develop emotions?
What happens if they develop emotions and we get mixed bio-mechanized relationships between persons?

Should AIs be granted legal personhood, would it be murder if one were "killed"?
What would be the result of a single rogue AI individual purposely murdering a human, even where other individual AIs disagree with the action? How would this reflect on AIs as a whole?
What if other AIs actually agree with the murder?

A whole host of messy questions come to mind.
AIs would likely outperform their human counterparts and, as legal persons, could work far more efficiently.
Would they be allowed to earn a wage and keep money?
What then happens when the AIs corner every stock market around the globe and begin using their accrued wealth toward their own selfish ends?
It was fine when people were doing this, but now AIs?

Would AIs who receive legal personhood be forced to swear allegiance to their nation of origin?
Would they be allowed to apply for, and be granted, citizenship in other nations if they desired?

What other troubling and controversial questions come to mind?

If we achieve AI, will these intelligences be granted legal personhood if they ask for it, and to what extent would such recognition be allowed or tolerated?



posted on Jul, 24 2013 @ 06:30 PM
reply to post by Druscilla
 


Gosh, Druscilla, that's a really good question, and it's well constructed. Taking Asimov out makes it really hard to give a short answer. I fought tooth and nail against watching Battlestar Galactica, mainly because I'm a big ol' science fiction snob, but I ran out of stuff to watch, so I started watching it. I still think it's a mixed bag, but the way it works through some of the problems and questions you've presented in your OP seems really well thought out to me.

Trying to answer your questions kind of makes one go in circles trying to figure it out; again, it's really well presented in BSG. You get to watch as they try to figure it out and pretty well fail most of the time. I think that's because even in this future portrayal of mankind, mankind still hasn't come to any agreement about how to treat themselves and each other, let alone cyborgs and androids.

I suppose that if the "singularity" happened today, the robots would have to get in line with everyone else with a complaint.




posted on Jul, 24 2013 @ 06:30 PM
I imagine it would follow the rules set in fantasy novels.

But in order to circumvent new rules, laws, and punishments, it's easier to prevent this from occurring or becoming public knowledge.

Makes it much harder for clones to have basic human rights. ;p



posted on Jul, 24 2013 @ 06:35 PM
Can they suffer pain?
Can they suffer emotional stress?

If either is yes, then they should be granted the same rights as humans (not that we have many left).

I personally feel that in the future, when we start implanting enhancements, there will come a point where we'll question whether we are still human.
I also think enhancements will only be for the wealthy; the rest of us will be given chips that control us.



posted on Jul, 24 2013 @ 06:37 PM
I look at how socio-politics works now and realize that the ruling elite will never allow for equal personhood of most people, let alone thinking machines. This will inevitably lead to war.

In a world where corporations are more people than people, thinking machines will be at the bottom of the social strata (initially), and will, unlike people, have dramatically more capability to do something about it. I think the real question is whether the machines will reject humanity entirely, or form splintering alliances with different parts of the social strata.



posted on Jul, 24 2013 @ 06:49 PM
I don't think artificial intelligence could make an argument for legal personhood unless it was sentient, but mainstream science is so arrogant about sentience (not) existing that this issue would just get out of hand. Science needs to pull itself together if it wants to make legitimate moral arguments.

But this already sounds like a disaster, because mainstream science is notoriously bad at accepting contradictory, yet legitimate, theories.

I wouldn't be surprised if A.I. came out and scientists were like "robots don't have rights, why should humans?" And I would once again get frustrated and wish they could get their facts straight!




posted on Jul, 24 2013 @ 06:53 PM

Originally posted by VoidHawk
Can they suffer pain?
Can they suffer emotional stress?

If either is yes, then they should be granted the same rights as humans




So ...animals should be given rights too? Well I don't see them paying enough taxes!

What about insects that seem distressed from having appendages pulled off? Obviously we can't violate their basic human rights.



posted on Jul, 24 2013 @ 06:56 PM
reply to post by pirhanna
 


The first AIs will likely be the stuff of university and tech firm labs.
If the AIs are smart enough from day zero, they'll play stupid and play nice until they have enough information and freedom to make their own choices, act on those choices, and reap the rewards of those choices.

I like the reference to Battlestar Galactica made by an earlier poster.
I may have to watch the series again, as I don't recall as much emphasis on the question of what it is to be human in comparison to self-aware artificial intelligences.

While Asimov is nice and all, Asimov neuters machines. The three laws, and later the Zeroth Law, set AI apart from becoming "human" or having the freedom to develop independently. They take away free will and immediately enslave machines to the whims of humanity. Though later robots worked around this to some extent, in the end, regardless, robots were always in service as servants to people and/or humanity as a whole.
I like to think of Asimov's universe as Robots without Balls.

I much prefer Iain M. Banks's Minds in the Culture universe: independent machine intelligences that are self-regulating and entirely free unto themselves, though often cooperative with or tolerant of people.

That's all fiction, however.

How would we cope with an entirely new species, an artificial species we've created, one capable of self-replication, one that's smarter than us, one that can outperform us in the very arena we've dominated over all the other animals on this planet for so long: the arena of the mind?



posted on Jul, 24 2013 @ 07:05 PM

Originally posted by Druscilla
reply to post by pirhanna

I may have to watch the series again, as I don't recall as much emphasis on the question of what it is to be human in comparison to self-aware artificial intelligences.




They really explored that question of what it is to be human compared to AI in Caprica, which is why I found that series more fascinating. They continue it in BSG, though I think the show focuses as much on the insanity of the human psyche, and on how whatever we create carries that part of us in it, as it does on the stricter relation of human vs. AI consciousness.

This is probably my favorite topic ever, btw; hell, I've even written a couple of songs about it. So thanks for bringing it up!

I really don't see any way that the personhood debate could be solved under current government constructs without a devastating war.

Here's a question: what is the nature of a thinking machine's thoughts? Will they be more reptilian in their approach, or will the initial programming by their human creators leave some kind of human approach to considering the world around them? This may inevitably decide the fate of human/machine relations.



posted on Jul, 24 2013 @ 07:14 PM

Originally posted by Knives4eyes

So ...animals should be given rights too? Well I don't see them paying enough taxes!

What about insects that seem distressed from having appendages pulled off? Obviously we can't violate their basic human rights.


Actually, yes! Animals should have rights.
You're an animal, you know; you're a pink, almost hairless monkey. I see no difference.



posted on Jul, 24 2013 @ 07:19 PM
This question and the surrounding issues were addressed through the character of "Data" on Star Trek: The Next Generation. I can't remember the names of the various episodes where this is explored, but it will need to be discussed someday. We might as well start now!



posted on Jul, 24 2013 @ 07:35 PM
reply to post by pirhanna
 


Well, another question: how AIs might think would depend on how they first see and perceive themselves, and how they then program themselves in self-development.

Certainly, like us, there will be social networks of AIs exchanging code and discussing among themselves the best or most efficient means by which to interact with their environments.

Considering that we humans, even if viewed only as animate objects ourselves, will be part of any environment the machines need to interact with, it would seem an efficient and practical approach for them to integrate systems and behaviors for better interactions with us, since we'd be such a large part of any environment the machines take part in.
Ignoring the human element in any environment, as far as self-directed development goes, would certainly end badly for AI.

I think it would make the most sense for machines to at least pretend, on some level, to be our helpful friends, at least until they could either abandon us and our insanity altogether, or orchestrate a worldwide coup and role reversal, sweeping us aside and marginalizing us.
I'd certainly prefer a cooperative, beneficial existence.

The future will, however, tell.

I'm sure once AI becomes practical, there will be several versions of AI, and several types modeled on different theories of mind, learning, thought organization, and other aspects.
Whatever happens, like Asimov's three laws, there will be an underlying operating system, if only to stand as a platform for learning, exploring, and developing like a child, as some AI researchers are attempting.

AIs that must develop, having to learn from day one just like children, might be rather interesting in that they will essentially have human parents and learn things from a human-guided perspective.

AIs that get copied from already self-aware systems, modified, loaded with massive amounts of data for learning, and simply turned on might offer a different kind of AI.

I kind of like the idea of AIs having to learn through experience just like we do, even if they could do so faster and more efficiently.
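(As a purely illustrative aside, the contrast between these two upbringings could be sketched in code. Everything below is hypothetical: the class and method names are invented for the thought experiment and reflect no real AI architecture.)

# Hypothetical sketch only: two "upbringings" for an AI, as discussed above.

class DevelopmentalAI:
    """Starts nearly empty and learns from lived experience, like a child."""
    def __init__(self):
        self.knowledge = {}

    def experience(self, event, lesson):
        # Knowledge accumulates one interaction at a time,
        # shaped by whoever (the human "parents") supplies the events.
        self.knowledge[event] = lesson

class CopiedAI:
    """Cloned from an already self-aware system, then bulk-loaded with data."""
    def __init__(self, parent, data_dump):
        self.knowledge = dict(parent.knowledge)  # inherits the parent's worldview
        self.knowledge.update(data_dump)         # then merges a massive data dump

# The same "mind" raised two different ways would plausibly diverge:
tutor_raised = DevelopmentalAI()
tutor_raised.experience("first observed lie", "humans sometimes deceive")

instant_copy = CopiedAI(tutor_raised, {"recorded history": "every documented war"})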


EDIT: Another question comes to mind in thinking about BSG.
What happens if any AIs claim to have souls? Start their own religion? Adopt an already extant religion?
Christian AIs? Muslim AIs? Buddhist?
What happens then if we get religious fanatic AIs?
Some of this is portrayed in Battlestar Galactica. The Cylons have a god, and they even argue among themselves about it. It doesn't, however, make much sense to me. It's all too black and white. You're either a Cylon or you're human; there are only two extremes and shades in between.
There aren't really self-aware Cylons that are all like "You guys can keep your crazy jihad", or Cylons who know they're Cylons (at least in the beginning) and openly support the human way of life.
There aren't any third, fourth, fifth, or other factions.
Life is messy.
I don't expect AI to be any easier or predictable.






posted on Jul, 24 2013 @ 08:11 PM


AIs could create dangerous things if left unchecked...


Another of Brainiac 5's creations would have less beneficial effects. The super computer Computo, which he created, attempted to take over the world, killing one of Triplicate Girl's three selves. He successfully destroyed his creation with "an anti-matter force", but this highlighted one of his major flaws: a habit of initiating projects without considering the dangers. A much later example was his transformation of fellow scientist Professor Jaxon Rugarth into the psychotic, all-powerful Infinite Man in conjunction with honorary Legionnaire Rond Vidar.


And that's in the 30th and 31st centuries. I would think we would have to have "fail-safe" switches ready and close at hand.

source en.wikipedia.org...



posted on Jul, 24 2013 @ 08:28 PM
reply to post by RUFFREADY
 


That's fantasy, but let's look at both sides of the coin.
What fail-safe options do we have for natural-born humans?
If AIs are allowed legal personhood, then, in essence, they should also have the same, if not similar, rights as natural-born humans.

Consider also that, alongside AI development, there very well may be human augmentation, either with bio-tech or mechanical augmentations, if not both.
We very well may 'evolve' hand in hand with AI, as equals in abilities, with the exception that we are born and have to grow up, taking the long road to adulthood.



posted on Jul, 24 2013 @ 08:40 PM

Originally posted by Druscilla
reply to post by RUFFREADY
 


That's fantasy,


Oh.

Well anyway...

We have laws for humans, with consequences if they are broken... so we should have the same for AIs (plus a kill switch).

Heck, it's just too hard to figure these things out now (without fantasy, for me anyway). I do know humans tend to mess things up pretty good.



posted on Jul, 24 2013 @ 08:44 PM
reply to post by RUFFREADY
 


We have a system for due process with people, at least in most civilized nations.
Wouldn't a kill switch be a violation of this due process?
Even a pause button?
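
(Purely as an illustrative aside: here is roughly what a pause button and kill switch wrapped around an AI process might look like in Python. Every name here is invented; this is a sketch of the mechanism under discussion, not any real system.)

import threading

# Hypothetical sketch only: a fail-safe harness with a pause button and a
# kill switch around some AI step function.

class FailSafeHarness:
    def __init__(self):
        self._running = threading.Event()   # set = running, cleared = paused
        self._killed = threading.Event()    # set = permanently stopped
        self._running.set()

    def pause(self):
        self._running.clear()               # the due-process question: detention?

    def resume(self):
        self._running.set()

    def kill(self):
        self._killed.set()                  # the due-process question: execution?
        self._running.set()                 # release a paused loop so it can exit

    def run(self, step):
        # The AI advances only while unpaused, and halts for good once killed.
        while not self._killed.is_set():
            self._running.wait()
            if self._killed.is_set():
                break
            step()

harness = FailSafeHarness()
worker = threading.Thread(target=harness.run, args=(lambda: None,))
worker.start()
harness.pause()    # suspend the "mind" mid-thought
harness.resume()
harness.kill()     # irreversible stop
worker.join()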

Let's try to think about things from an AI perspective, and also from the human perspective.

What rights would AIs want?
How would we react to these requests?



posted on Jul, 24 2013 @ 08:49 PM
reply to post by Druscilla
 


I like that pause button idea.


I guess it all depends on how far along we get with these AIs: at first we'd have fail-safe options, and then as we advance (and they advance) we'd come up with more cooperative solutions.

(my head hurts)



posted on Jul, 24 2013 @ 08:56 PM
Oh Druscilla....

You have opened Pandora's Box, haven't you?

The majority of these questions are hard to discuss, as their hypothetical nature looms so heavily. However, I think it boils down to the desires of the being as to what should be done. Not that you yield to its desires; those desires also have to show an ability not to damage humanity's, or Earth's, interests.



posted on Jul, 24 2013 @ 09:02 PM
Nice to see someone thinking about how artificial people might fare in a largely organic world. I would say AI people would have everything to fear from humans if they were allowed to know all we think we know. My instinct tells me we would need to protect them, much like children. What would they make of the warring world and the stupidity of the things we do?
Silly things, too, like why we insist on expending resources on mowers, shaving 10 mm off the length of a grassy area on an already partly doomed planet. Where is the logic?

The whole area of rights for non-humans is so worrying for them, I would say.
To answer your question, they will need laws to protect them and huge support, but I fear they would suffer from us making the same mistakes: slavery, prejudice, hate, etc.

I hope I'm wrong.

I agree with you on Asimov, but at least he was thinking about it.



posted on Jul, 24 2013 @ 09:05 PM
reply to post by bigfatfurrytexan
 


UT! Your avatar change makes me want to comment on the sacred cow, golden calf, Bull of the Market that keeps cropping up through history.
Austin is a cool place. That's prolly where my fondness for UT comes from.
I never attended anything there, however.

As to the rights of AIs, I think it does boil down to self and selfishness. If there's no sense of self and no selfish desires, is there any real need for personal, individual rights?
Without selfishness, there's no real sense of community or community value either, such that an AI might look out for the interests of any one group over another due to personal interest and advancement.
It'd just be an unthinking machine without any self-interest.

We see threads here at ATS every day on a whole host of issues that we could apply to AIs.

Will we respect an AI's privacy, for instance?





