posted on Apr, 18 2014 @ 10:15 PM
originally posted by: Skorpy
I know it sounds like the movie iRobot and the three Laws.
The reason that movie was one of the most frightening I've seen isn't because I'm
worried it could happen in our lifetimes (well, maybe if you're at the minimum age for this site, which is 13?); it's because I do think it's
inevitable that computers will develop more complex intelligence.
Unlike other science fiction movies that seem pretty far-fetched, it's not hard to imagine this scenario:
1. Program robots to protect/help mankind only.
2. Robots get smarter, figure out mankind is destroying the environment, which can't sustain the population growth without great human suffering.
3. Robots decide to "help" man by forcing him to do what he's too stupid to do on his own, for his own good.
4. The robots are just following their programming, and are actually helping the human race, but we won't see it that way because we think we should be able to have
as many babies as we want without worrying about things like population doubling and finite global resources.
That's if they follow their programming.
Check this out:
Scientists plot AI that learns from mistakes
Scientists at Oregon State University are hoping to improve artificial intelligence with a project that uses "rich interaction" to teach machines
when they make mistakes.
The researchers claim the project could lead to a computer that wants to "communicate with, learn from, and get to know you better as a
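Just to make the idea concrete: here's a toy sketch (my own illustration, not the Oregon State project's actual method) of an agent that adjusts its behavior when a human flags one of its actions as a mistake:

```python
import random

class FeedbackAgent:
    """Toy agent that learns from human corrections.

    Illustrative only -- not how the OSU researchers actually do it.
    """

    def __init__(self, actions):
        # Start with no preference among the available actions.
        self.scores = {a: 0.0 for a in actions}

    def act(self):
        # Pick a highest-scoring action, breaking ties at random.
        best = max(self.scores.values())
        return random.choice([a for a, s in self.scores.items() if s == best])

    def feedback(self, action, mistake):
        # A human flags the action as a mistake (or approves it);
        # the agent lowers or raises that action's score accordingly.
        self.scores[action] += -1.0 if mistake else 1.0

# Hypothetical actions: the human corrects the agent whenever it "interrupts".
agent = FeedbackAgent(["greet", "interrupt"])
for _ in range(5):
    a = agent.act()
    agent.feedback(a, mistake=(a == "interrupt"))

print(agent.act())  # after a few corrections, the agent settles on "greet"
```

The scary part, of course, is step 2 of the scenario above: an agent like this only "learns" what its feedback signal tells it, so whoever (or whatever) defines "mistake" defines its behavior.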
Ever had a file, hard drive, or OS get corrupted or pick up malware and start doing crazy stuff? That could happen too. Just because the technology still
has a long way to go doesn't mean it's unreachable.