...if a machine can think, decide and act on its own volition, if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?
What if a robot achieves true self-awareness? Should it have equal rights with us and the same protection under the law, or at least something similar?
These are some of the issues being discussed by the European Parliament’s Committee on Legal Affairs. ...
...Of the legal solutions proposed, perhaps most interesting was the suggestion of creating a legal status of “electronic persons” for the most sophisticated robots.
The report also asks whether sufficiently sophisticated robots should be regarded as natural persons, legal persons (like corporations), animals, or objects. Rather than lumping them into an existing category, it proposes that a new category of “electronic person” is more appropriate.
The European Parliament will vote on the resolution this month. Regardless of the result, reconsidering robots and the law is inevitable and will require complex legal, computer science, and insurance research.
...the current EU directive on liability for harm by robots only covers foreseeable damage caused by manufacturing defects. In these cases, the manufacturer is responsible. However, when robots are able to learn and adapt to their environment in unpredictable ways, it’s harder for a manufacturer to foresee problems that could cause harm.
...computers still have a long way to go before they match human intelligence, if they ever do.
But it can be agreed that robots – or more precisely the software that controls them – are becoming increasingly complex. Autonomous (or “emergent”) machines are becoming more common. There are ongoing discussions about legal liability for autonomous vehicles, or whether we might be able to sue robotic surgeons.
These are not complicated problems as long as liability rests with the manufacturers. But what if manufacturers cannot be easily identified, such as if open source software is used by autonomous vehicles? Whom do you sue when there are millions of “creators” all over the world?
Peking University’s Yueh-Hsuan Weng writes that Japan and South Korea expect us to live in a human-robot coexistence by 2030. Japan’s Ministry of Economy, Trade, and Industry has created a series of robot guidelines addressing business and safety issues for next generation robots.
If we did give robots some kind of legal status, what would it be? If they behaved like humans, we could treat them like legal subjects rather than legal objects, or at least something in between. Legal subjects have rights and duties, and this gives them legal “personhood”. They do not have to be physical persons; a corporation is not a physical person but is recognized as a legal subject. Legal objects, on the other hand, do not have rights or duties although they may have economic value.
Assigning rights and duties to an inanimate object or software program independent of their creators may seem strange. However, with corporations, we already see extensive rights and obligations given to fictitious legal entities.
Perhaps the approach to robots could be similar to that of corporations? ...
...if a machine can think, decide and act on its own volition, if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?
originally posted by: FamCore
a reply to: neo96
And the social justice warriors and snowflakes will be crying out "Robots are people too!!!"
How to Keep Your AI From Turning Into a Racist Monster
...If you’re not sure whether algorithmic bias could derail your plan, you should be.
Algorithmic bias—when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed—causes everything from warped Google searches to the barring of qualified women from medical school. It doesn’t take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices or corrects for.
...“We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in, because of the often self-reinforcing nature of machine learning.”
...Rather than clinging to the belief that technology is impartial, engineers and developers should take steps to ensure they don’t accidentally create something that is just as racist, sexist, and xenophobic as humanity has shown itself to be.
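The mechanism the article describes can be shown in miniature: a system that simply learns from historical outcomes will reproduce whatever skew those outcomes contain, with no prejudice coded anywhere. This is a minimal sketch with fabricated data; the group labels, records, and "policy" are invented for illustration only.

```python
# Toy illustration: a "model" trained only on historical outcomes
# inherits the skew in those outcomes. All data below is fabricated.

# Historical hiring records: (years_experience, group, hired).
# Group "B" candidates were historically under-hired at equal experience.
history = [
    (5, "A", True), (5, "A", True), (5, "B", False), (5, "B", True),
    (3, "A", True), (3, "A", False), (3, "B", False), (3, "B", False),
]

def hire_rate(records, group):
    """Fraction of candidates in `group` who were hired."""
    outcomes = [hired for (_, g, hired) in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive "learned" policy that approves at each group's historical
# base rate replicates the disparity, even though no rule in the code
# ever mentions prejudice.
print(hire_rate(history, "A"))  # 0.75
print(hire_rate(history, "B"))  # 0.25
```

Nothing in the code is hostile to group "B"; the disparity lives entirely in the training data, which is exactly the article's point about distorted data that no one notices or corrects for.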
originally posted by: soficrow
And always good to consider:
How to Keep Your AI From Turning Into a Racist Monster