posted on Sep, 13 2016 @ 11:51 PM
I had a chat with a professor over coffee; the topic was AI. I tried to explain my view on things. We saw eye to eye on some points and disagreed on others, but we agreed on enough to keep the conversation flowing.
;" Its important an AI knows. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. The problem with this is however is
why? If one of them becomes sentient, will it hurt others or self destruct? "
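Those are, of course, Asimov's Three Laws, and read as code they amount to an ordered veto chain: each law only applies where the laws above it are silent. Here is a minimal sketch of that priority ordering; the Action flags and the example scenario are my own invented illustrations, and the genuinely hard part, predicting harm, is simply taken as an input.

```python
# A toy sketch of the Three Laws as an ordered veto chain. All the
# flags below are hypothetical inputs; deciding them is the hard part.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    harms_human: bool          # acting would injure a human
    prevents_human_harm: bool  # acting stops a human coming to harm
    human_order: bool          # a human ordered this action
    self_destructive: bool     # acting endangers the robot itself

def choose(actions: list[Action]) -> Optional[Action]:
    """Pick an action by strict law priority: First over Second over Third."""
    # First Law, harm clause: actions that injure a human are vetoed outright.
    candidates = [a for a in actions if not a.harms_human]
    # First Law, inaction clause: preventing harm outranks everything else.
    rescues = [a for a in candidates if a.prevents_human_harm]
    if rescues:
        return rescues[0]
    # Second Law: obey human orders, preferring ones that spare the robot.
    ordered = [a for a in candidates if a.human_order]
    if ordered:
        safe_orders = [a for a in ordered if not a.self_destructive]
        return (safe_orders or ordered)[0]
    # Third Law: otherwise, protect its own existence.
    safe = [a for a in candidates if not a.self_destructive]
    return safe[0] if safe else None

options = [
    Action("stand still, as ordered", False, False, True, False),
    Action("push the human clear of the car", False, True, False, True),
]
print(choose(options).description)  # the rescue wins: First Law comes first
```

The question he was circling is exactly what this sketch hides: every boolean input assumes the judgment has already been made.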
I understood what he meant, and I could give no clear answer. Not being human while still having to relate to humans: I've seen that pattern in children adopted from third-world nations, where the free will of the individual conflicted with what was taught rather than experienced. Maybe the answer was in the memory banks: make the AI remember only a year at a time instead of everything, slowly progressing it toward sentience.
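That "remember only a year" idea is essentially a rolling retention window over the AI's memory bank. A minimal sketch of what I mean, where the class, its interface, and the 365-day window are all my own invention, not any real system:

```python
# A minimal sketch of a memory bank with a rolling retention window:
# anything older than the window is forgotten. All names are illustrative.
from collections import deque
from datetime import datetime, timedelta
from typing import Optional

class WindowedMemory:
    def __init__(self, retention: timedelta = timedelta(days=365)):
        self.retention = retention
        self._events = deque()  # (timestamp, event) pairs, oldest first

    def remember(self, event: str, when: datetime) -> None:
        self._events.append((when, event))
        self._forget(when)

    def recall(self, now: Optional[datetime] = None) -> list:
        self._forget(now or datetime.now())
        return [event for _, event in self._events]

    def _forget(self, now: datetime) -> None:
        # Slide the window: the oldest memories fall off first.
        cutoff = now - self.retention
        while self._events and self._events[0][0] < cutoff:
            self._events.popleft()

memory = WindowedMemory()
memory.remember("first boot", datetime(2015, 1, 1))
memory.remember("coffee chat transcript", datetime(2016, 9, 13))
# Only the last 365 days survive: "first boot" has already been forgotten.
print(memory.recall(now=datetime(2016, 9, 14)))
```

Whether a hard cutoff or a gradual decay is the right kind of "slow progression" is the open question; the sketch only shows the mechanism.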
So i said;" Make the AI understand consequence, and a clear violation to its original programming. But still offer it to learn, and slowly progress
to a sentient. Otherwise deleting the AI just to protect a faulty program which has faults in the beginning, will not offer a solution only another
problem and i know the majority wants that as an option to protect society from a rogue AI. The sentient AI are smart, beyond the boundaries of
physical humans. Why should humans stop that progress? Fear? We are simple creatures that believe we are God, dont break something that comes natural
The professor replied: "What if it still goes rogue?"
And i said;" We all rebel some times in our life professor, doesnt mean we break the law "