posted on Nov, 10 2021 @ 09:53 AM
Hello again, ATS!)))
In almost any discussion of humans and robots living side by side, the "laws of robotics" proposed by science fiction writer Isaac Asimov in his
stories from the 1940s come up. Surely you have heard these laws: "A robot may not harm a human being or, through its inaction, allow
harm to come to a human being" - and, further on, the obligations to obey humans and to protect its own existence, so long as this does not contradict the preceding laws.
Various additions were invented later, including by Asimov himself, who, in his fiction, kept putting robots in difficult situations and
watching how his laws held up. In one of his later texts he even introduced a very dubious, but understandable to any politician, "Zeroth Law of Robotics," which
obliged a robot not to harm humanity - implying that a robot may still harm individual people if it is done "for the sake of all mankind."
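To make the hierarchy concrete, here is a minimal sketch of how Asimov's prioritized laws could be encoded as an ordered rule check. This is purely illustrative: the function name, flags, and the crude "Zeroth Law loophole" logic are all invented for this example, not anything from Asimov or a real system.

```python
def evaluate_action(harms_humanity=False, harms_human=False,
                    protects_humanity=False, disobeys_order=False):
    """Toy check of a proposed action against Asimov's laws, in
    priority order. Returns True if the action is permitted."""
    # Zeroth Law (highest priority): a robot may not harm humanity.
    if harms_humanity:
        return False
    # First Law: a robot may not harm an individual human -- except,
    # under the Zeroth Law loophole, "for the sake of all mankind".
    if harms_human and not protects_humanity:
        return False
    # Second Law: a robot must obey human orders, unless obeying
    # would conflict with the higher laws above.
    if disobeys_order and not protects_humanity:
        return False
    return True
```

Note how the loophole works: `evaluate_action(harms_human=True)` is forbidden, but `evaluate_action(harms_human=True, protects_humanity=True)` is allowed - exactly the politician-friendly exception described above.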
There is, however, a stronger assumption in Asimov that many discussions of the "laws of robotics" overlook: responsibilities
usually come with rights. The android robots in Asimov's stories behave as if they had already been granted, by default, the right to act independently
in various spheres, including the sphere of human life. And that, if you think about it, is a rather dangerous idea.
Meanwhile, proposals to give robots "some rights" are already being voiced at a serious level. Of course, the futurist Ian Pearson of British
Telecom, who promised twenty years ago that robots would receive civil rights in 2020, can hardly be taken seriously - that forecast
does not seem to have come true. On the other hand, in 2017 the European Parliament published the text of a draft law on the rules of roboethics -
and it really does talk about the possibility of creating "a special legal status for robots, so that the most advanced autonomous robots can have the
status of an electronic person with special rights and responsibilities."
A car with its own rights sounds absurd, doesn't it? Yet it is precisely the car example that makes it clear who wants to
introduce such rights, and why. When I asked a lawyer I know who should answer for a robot's crimes, he replied that it should most likely
be its owner or "guardian," since the closest legal analogy for a robot is an incapacitated person (a child, an elderly person, or someone mentally ill).
But a friend of mine, a programmer, objects: the owner of the car does not own the artificial intelligence program that drives the car and runs into
a pedestrian. You do not even own the operating system that lets you read this thread of mine on ATS from your computer or mobile phone; you
are merely buying a license to use the product temporarily. The software manufacturer can remotely block this software or push changes to it,
but you yourself have no right to change the code. With a smart car the story is the same: the program does not belong to you, and you do not
control it. That means the manufacturer of the program should be responsible for the robot's error, not the person who took this artificial
intelligence "for temporary use."
This conflict would be resolved far more easily if the robot car were "an electronic person with its own rights and responsibilities." Then this
very "person" would be to blame for hitting a pedestrian. That would be very convenient for both manufacturers and car owners. And what is
convenient for a large group of people can easily become law for everyone.
If, even after this explanation, the right of robots to act independently still seems unrealistic to you, I have thoroughly bad news: they have already
received such rights. Yes, yes - we already live in Asimov's world, where robots are allowed to do plenty of things without a human's consent.
For example, lately many people have been complaining about the traffic police robot that automatically sends them fines "for no reason at all" based
on camera observations. This summer in my country, Russia, in Nizhny Novgorod, a neural network issued more than 7,000 fines for
driving without headlights on during the day; experts interviewed by the Russian newspaper Kommersant say the system has a fairly high
error rate, and some drivers have already managed to challenge such fines. But notice: people have to make a lot of extra
effort to do that, while the neural network never tires - it will keep generating errors.
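The asymmetry is easy to see with back-of-the-envelope arithmetic. The sketch below uses the roughly 7,000 fines mentioned above; the error rate is an assumed placeholder, since Kommersant's experts gave no exact figure - only that it is "fairly high."

```python
# Toy illustration: how a modest error rate in automated enforcement
# scales into a large burden on humans, who must contest fines one by one.

total_fines = 7000          # fines issued by the neural network (from the post)
false_positive_rate = 0.10  # ASSUMED error rate; the real figure is unknown

wrongful_fines = round(total_fines * false_positive_rate)
print(f"At a {false_positive_rate:.0%} error rate, "
      f"{wrongful_fines} drivers would each have to appeal a wrongful fine.")
```

The machine issues all 7,000 decisions at effectively zero cost, while every one of the (here, hypothetical) 700 wrongful fines demands a separate human appeal.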
To be continued...