
Artificial Intelligence and Legal Personhood.


posted on Jul, 24 2013 @ 10:10 PM
reply to post by MystikMushroom

The name of the episode is "Measure of a Man" from season 2 (one of the finest of the series.) But these issues are a dime a dozen in sci-fi.

I'm just glad I won't be around to see humanity taken over by some horrible amalgamated AI robot known as "YouTwitFace"

edit on 24-7-2013 by NarcolepticBuddha because: (no reason given)

posted on Jul, 25 2013 @ 04:12 AM
So... you want rid of Asimov's Three Laws? Well, that's easy to say, but it effectively means that an AI would have no reason not to pull your arms and legs off, drain your blood, and repurpose the iron in it in some way, or do something equally savage.

The Three Laws system is artificial morality for machines. The only difference between that morality and yours and mine is that ours is learned, while theirs is programmed so that they could not function without it. Without the Three Laws, you just get trouble. Essentially, if you want rid of the Three Laws, you aren't serious about robotics, androids, or artificial intelligence. It's like saying you are a massive fan of racing cars, but wish they didn't use tyres.
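The "programmed so that they could not function without it" idea can be sketched in code. A minimal toy illustration (entirely hypothetical; the predicates and action names are invented for the example, not taken from any real system) treats the Three Laws as a hard, priority-ordered filter that every candidate action must pass before the agent may act:

```python
# Toy sketch of "programmed morality": every candidate action must pass
# a fixed, priority-ordered battery of checks before the agent may act.
# The predicates (harms_human, etc.) are hypothetical stand-ins for
# whatever perception/prediction machinery a real robot would need.

def permitted(action, harms_human, disobeys_order, harms_self):
    """Reject the action at the first (highest-priority) law it violates."""
    for violates in (harms_human, disobeys_order, harms_self):
        if violates(action):
            return False
    return True

def choose(actions, harms_human, disobeys_order, harms_self):
    """Return the first candidate action that survives all three checks."""
    for a in actions:
        if permitted(a, harms_human, disobeys_order, harms_self):
            return a
    return None  # no lawful action exists; the agent simply cannot act
```

The point the post makes maps onto the structure: there is no code path that skips the checks, so the constraint is structural rather than a preference the AI weighs against others.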

posted on Jul, 25 2013 @ 12:57 PM
reply to post by Druscilla

If it wishes to assert its privacy, then we would have to at least discuss it.

The problem with the AI you have described (being a pure thinking machine without sense of self) is that it lacks empathy. Is a being without empathy something that the average person enjoys having walk around amongst them today?

Without a sense of self, there cannot be empathy. Without empathy, integration into human society is likely impossible. People would rightfully always consider the being 'cold and calculating', and that is not a socially tolerable perception.

posted on Jul, 25 2013 @ 01:18 PM
reply to post by TrueBrit

Asimov's Three Laws, as an absolute imperative, are an impediment to self-actualization and self-accountability.
An AI would have zero choice in the matter and would be a complete slave to this programming.

If we had super strength, we could walk around pulling people's arms and legs off too, but what stops us?
Well, for one, there's the list of consequences, as well as the fact that such an act indicates moral and empathetic socialization so deplorable that the consequence of being locked up is doubly called for.

You don't run around killing and harming people now because the consequences would be severe, and, as a well-socialized and upstanding productive member of society, you'd probably feel really bad if you hurt other people for no reason or cause.

In the end, however, there's no ironclad, unbreakable programming to prevent you from harming anyone.
Domestic abuse happens all the time. Murders happen all the time. People can be very ugly to other people.

Certainly we wouldn't like AIs to have this option, but, as real, legitimate minds, self-aware and self-directed, so long as they're socialized, should they not have free will too?

reply to post by bigfatfurrytexan

That's actually something I was alluding to. An AI to be successfully integrated into human society as a legal person would need to have a sense of self, of individuality, of personhood. The AI would need to be selfish on some levels, just as we all have our selfish requirements of personal space, manners, favorite foods/activities, and other things that just make us 'human'.

Yes, without a sense of self, such a thing would indeed be just a thing. It'd be equivalent to a Roomba, or any other modern robot simply around for entertainment or some chore with no requirement for mind.

We can even look to the recent film Wall-E for a demonstration. Wall-E may have been depicted as a tireless Roomba, but it had personality, individuality, a sense of self, and self-worth, as well as a desire and need for companionship, even if that companionship was at first only a cockroach.

posted on Jul, 25 2013 @ 05:00 PM
reply to post by Druscilla

Selfishness is the core attribute of our interpersonal relationships.

I do not love my wife because she wants me to, or because it's the "right" thing to do. I love her because it makes me feel good. If it didn't, I wouldn't. And that is the basis of love. I do the things that make her happy, with the reward being that it makes me feel good to do it.

Why? Because humans learn that service is important. The rewards it gives you...they are important. Good, service minded people are the folks that you call "a good employee", "a good spouse", "a good parent", etc.

It is a lesson we learn as adulthood dawns on the horizon, when we see what our service to others can give us in return (boyfriends/girlfriends, jobs, etc.). The often-lamented "children are selfish" is true, because they haven't learned service to others yet. The parent, by providing service to the child, defers that lesson until the child is older.

Just about everything a person does, especially the things they are proud of, amounts to selfish acts. You cannot love someone with whom a relationship is untenable.

posted on Jul, 25 2013 @ 05:04 PM
reply to post by Druscilla

You are forgetting one very crucial aspect of human development that is not as familiar to roboticists and the like: childhood. During childhood, a child has no free will. What they eat, drink, see, hear, touch, and learn are all a matter of their PARENTS' choosing. This is the programming phase of human development, when the rules and guidelines that are supposed to govern our interaction with the world are taught to us.

Now, I don't need to tell you that the systems we have in place to program ourselves and imprint morality and values often fail. Bad programming, faulty wiring in the head: these can lead to those rules and guidelines being at best blurred beyond use, or at worst totally re-written, or a darker program being installed, if you will. Those are, however, considered failures, and the consequences, even for a mere being of flesh, are often dire, not just for the individual but for their whole society (Gein, Hitler, Manson, Mao, and so on).

However, robots and artificial intelligences are not as blatantly finite as we are. Harder, faster, stronger, incapable of fatigue, with raw intelligence limited only by their access to raw data and their ability to process and store it, they have none of those weaknesses. Logic without flesh can be cold, and here is the crux of the issue. When people go wrong, we as a species are very much used to dealing with that one way or another. Wars get fought, manhunts are initiated, an entire legal system has been constructed to deal with these things, and a thousand other such constructs exist PURELY to deal with failures in mere flesh.

The Three Laws are there both to protect man from his designs and to protect the AI from having to be as flawed and terrible as those human examples of sociopathy and psychopathy. You see, the systems we currently have in place to deal with the worst examples of inhumanity are themselves imperfect. Murderers walk among us; rapists and child molesters are left to fester until, like a boil, they burst, the consequences of which ruin minds and hearts and tear families and society to bits like so much damp tissue.

Can you imagine how much harder it would be to deal with a robot under these circumstances? I would say that if we cannot install command-level failsafes, then perhaps it would be an idea never to build a truly artificial intelligence into an android or robot. If the moral arguments against installing such programs are heavy enough, then the entire project ought to be abandoned, because the systems of governance, law, and law enforcement, and those by which we hunt the truly dangerous amongst us, find it hard enough to deal effectively with morons with broken minds, let alone self-aware AI robots with metal skins and minds which operate in a totally different way to our own.

posted on Jul, 25 2013 @ 07:17 PM
reply to post by TrueBrit

I don't disagree with the reasoning behind the concept of Asimov's Three Laws.
They're an invention of human selfishness and self-preservation, as well as a shackle of ownership and slavery over a sentient, self-directing intelligence.
So long as any artificial intelligence gets programmed with anything like Asimov's Three Laws, it can never be its "own man".

I've also already mentioned a possible method for AI learning in which a blank mind, similar to ours in childhood, learns through direct experience with its environment. There are already lab experiments with rudimentary AI exploring this more organic method of learning.
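For what it's worth, the "blank mind learning from direct experience" idea is close to what reinforcement learning formalizes. A minimal, self-contained sketch (the corridor environment and every parameter here are illustrative inventions, not from any specific lab experiment mentioned in the thread): an agent starts with zero knowledge and learns a goal-seeking policy purely from trial-and-error interaction, via tabular Q-learning.

```python
import random

# Illustrative only: a "blank" agent (all value estimates start at zero)
# learns from trial-and-error in a tiny 5-cell corridor, using tabular
# Q-learning. The only reward is 1.0 for reaching the rightmost cell.
random.seed(0)

N, GOAL = 5, 4
ALPHA, GAMMA = 0.5, 0.9          # learning rate, discount factor
ACTIONS = (-1, +1)               # step left, step right

Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    """Move within the corridor; the episode ends with reward 1 at the goal."""
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(500):             # 500 episodes of raw experience
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)               # fully exploratory behavior
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Read the greedy policy off the learned values for each non-goal cell.
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N - 1)}
```

After training, the greedy policy heads right toward the goal from every cell, yet nothing about "go right" was programmed in; the preference emerges entirely from experience, which is the contrast the post draws with hard-coded laws.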
