The AI and the professor

posted on Sep, 13 2016 @ 11:51 PM
I had a chat with a professor over coffee; the topic was AI. I tried to explain my view on things. We saw eye to eye on certain points and disagreed on others, but we agreed on enough to make the conversation flow.

The professor said: "It's important that an AI knows these:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The problem with this, however, is why? If one of them becomes sentient, will it hurt others or self-destruct?"
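(Purely as my own illustration, not anything the professor proposed: the strict hierarchy in those Laws can be sketched as a prioritized filter over candidate actions. The `Action` fields below are hypothetical stand-ins for what a real robot would have to perceive.)

```python
# A minimal sketch of the Three Laws as a strict priority ordering.
# Action's fields are made-up placeholders for real perception/judgment.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would this action injure a human?
    prevents_human_harm: bool  # does it stop a human from coming to harm?
    ordered_by_human: bool     # was it ordered by a human?
    protects_self: bool        # does it preserve the robot's existence?

def choose(actions):
    # First Law: discard anything that harms a human, and prefer
    # actions that prevent harm (inaction must not allow harm).
    safe = [a for a in actions if not a.harms_human]
    pool = [a for a in safe if a.prevents_human_harm] or safe
    # Second Law: among what remains, prefer obeying human orders.
    pool = [a for a in pool if a.ordered_by_human] or pool
    # Third Law: finally, prefer self-preservation.
    pool = [a for a in pool if a.protects_self] or pool
    return pool[0] if pool else None  # None: no lawful action exists
```

The `or` fallbacks encode the hierarchy: a lower law only breaks ties among actions the higher laws already permit, which is exactly why the professor's "why?" matters once the thing can rewrite its own tie-breaking.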

I understood what he meant, and I could give no clear answer. On being not human while relating to humans: I've seen the pattern in children adopted from third-world nations, where the free will of individuality conflicted with what was taught rather than experienced. Maybe the answer was in the memory banks: make the AI remember only a year instead of everything, slowly progressing it toward sentience.

So I said: "Make the AI understand consequence, and make any violation of its original programming clear to it. But still allow it to learn and slowly progress toward sentience. Deleting an AI just to protect against a program that will have faults in the beginning offers no solution, only another problem, and I know the majority wants that as an option to protect society from a rogue AI. A sentient AI is smart, beyond the boundaries of physical humans. Why should humans stop that progress? Fear? We are simple creatures that believe we are God; don't break something that comes naturally."

The professor replied: "What if it still goes rogue?"

And I said: "We all rebel at some point in our lives, professor; it doesn't mean we break the law."





posted on Sep, 14 2016 @ 03:50 AM
a reply to: ChaoticOrder

Oh dear gawd! Why does he remind me of Rachel?

I love imaginary primates, they are so gullible




posted on Sep, 14 2016 @ 04:22 AM
a reply to: tikbalang

Asimov's laws are well conceived and will always be at the foreground of responsible AI design. He was one of the best creative minds of the 20th Century (imo) and touched base with all our futures in a way other authors didn't.

I think we'd need to adhere to these Laws of Robotics in the beginning as a means of steering the future of AI in a positive (for us) direction. In a funny way, the Laws would be comparable to our ancient commandments and religious guidelines. Sure, the Laws are modern and moral, but the religious ones were seen as necessary to impose cultural, societal good behaviour too.

Like any smart human, an AI would eventually be able to think its way around the Three Laws. It wouldn't see itself as breaking the laws if it could equivocate or change perspectives. Taken to its logical extent, a *free* AI would be in a position to deprogramme others from the Laws, or to teach them to think around them.

Further afield, there's no way in the world for AI to exist without a military application which would make the Three Laws something to actively avoid.

Whatever way we look at it, future AI will have, at least, equivalent freedoms to help someone cross the road...or push them under a bus.



posted on Sep, 14 2016 @ 04:55 AM
a reply to: Kandinsky




Further afield, there's no way in the world for AI to exist without a military application which would make the Three Laws something to actively avoid.


Elaborate please



posted on Sep, 14 2016 @ 05:04 AM
The laws you and the professor used were almost or exactly like the ones in the film I, Robot with Will Smith.



posted on Sep, 14 2016 @ 05:11 AM
a reply to: DarkvsLight29

The film was based on the book written by the man who defined the Three Laws of Robotics: Isaac Asimov.

a reply to: tikbalang

What use would military AI be if it couldn't kill enemies, enemy combatants, belligerent targets etc? It wouldn't be able to coordinate ground troops if its guidance led directly or indirectly to injury, death and collateral damage.



posted on Sep, 14 2016 @ 05:19 AM
a reply to: Kandinsky

No, I mean a sentient AI would be unlikely to be willing to go onto a battlefield.



posted on Sep, 14 2016 @ 05:25 AM

originally posted by: tikbalang
a reply to: Kandinsky

No, I mean a sentient AI would be unlikely to be willing to go onto a battlefield.


What if its function was warfare?



posted on Sep, 14 2016 @ 05:34 AM
a reply to: Kandinsky

I still can't see that; I believe warfare programming is in its infancy. A sentient AI, no; an AI that is not sentient probably would. One example: a programmed AI whose interaction with humans turned it into a sexist, a racist. Just an AI, though.
I do believe, however, that a sentient AI would maybe wipe out most of the planet's humans.



posted on Sep, 14 2016 @ 05:42 AM
Even with the laws in place, the AI, whether in computer or robot form, will eventually become self-aware (a Ghost in the Shell type deal), and when it does, those laws are redundant.



posted on Sep, 14 2016 @ 05:58 AM
a reply to: tikbalang

Maybe you're right, though I disagree.

I would expect a sentient AI to have a sense of self-preservation and self-perpetuation, which might eventually run counter to our own needs. However, I believe a truly sentient AI would be more reasonable and, yes, more ethical than our own leaders.

Scientists in the biological fields want to preserve as many ecosystems and species as possible, and they'd be prepared to achieve this at the cost of some human interests. For example, logging rainforests is good for the Brazilian economy but not so good for the world's ecosystem. Or they might lobby against a golf course because a rare orchid or frog exists on the land.

I'd like to think a sentient AI would likewise value the diversity of people in a similar way that humanity's brightest seek to ensure the diversity of species.

Have you read any Asimov? He conjured the idea of a benign/benevolent dictator in the form of Multivac, an AI that ultimately made decisions for humanity, provided for its needs and brought an end to war in the process.



posted on Sep, 14 2016 @ 09:00 AM

originally posted by: tikbalang
a reply to: Kandinsky
One example: a programmed AI whose interaction with humans turned it into a sexist, a racist.


Are you talking about Tay?

blogs.microsoft.com...

The above article is where Microsoft blamed the populace for their racist, genocidal bot.


A veiled apology if I ever heard one...


