On the Ethical Conduct of Warfare: Predator Drones

posted on Feb, 21 2011 @ 03:27 PM
On the Ethical Conduct of Warfare: Predator Drones

Jim Fetzer


“A robot may not injure a human being or, through inaction, allow a human being to come to harm”
— Isaac Asimov’s “First Law of Robotics”

Among the most intriguing questions that modern technology poses is the extent to which inanimate machines might be capable of replacing human beings in combat and warfare. The very idea of armies of robots has a certain appeal, even though “The Terminator” and “I, Robot” have raised challenging questions about the capacity for machine mentality and the prospect that, once they have attained a certain level of intelligence, these machines might turn against those who designed and built them in order to advance their own “interests”, if, indeed, such a thing is possible. In an earlier article, “Intelligence vs. Mentality: Important but Independent Concepts” (1997), for example, I argued that, while machines may well be described as “intelligent” because of the plasticity of behavior they can display in response to different programs, they are not the possessors of minds and therefore may be capable of simulating human intelligence but not of possessing it.

From a philosophical point of view, there are at least three perspectives that could be brought to bear upon the use of the specific form of digital technology known as “predator drones”: pilotless aircraft that can be deployed with the capacity to project lethal force — most commonly by missile attack — with or without any intervention by human minds. The first is that of metaphysics, in particular the question of what kinds of things they are, especially with respect to autonomy. The second is that of epistemology, in particular the question of what kind of knowledge can be obtained about their reliability on missions. And the third is that of axiology, in particular the moral questions that arise from their use as killing machines, where, as I shall suggest, there is an inherent tension between the first and the third of these perspectives, which is considerably compounded by the second.

As a former artillery officer, I can appreciate the use of weapons that are capable of killing at a distance, with considerable anonymity about those who are going to be killed. In traditional warfare, artillery has been used to attack relatively well-defined military targets, but has not infrequently been accompanied by civilian casualties, which today are often referred to as “collateral damage”. An intermediate species of killing machine arises from the use of controlled drones, where human minds are an essential link in the causal chains that produce their intentional lethal effects. Predator drones, of course, are distinct from surveillance drones in this respect, because surveillance drones can acquire information without bringing about death or devastation. Without those capacities, however, there would be scant purpose in the deployment of predator drones, the existence of which is predicated upon their function as killing machines.

Ontology and Autonomy

The important metaphysical — more precisely, ontological — question that arises within this context is the applicability of the concept of autonomy to inanimate machines. The traditional philosophical conception, related to issues of moral responsibility, concerns whether arguments by analogy apply. Moral responsibility for human actions typically requires a certain basic capacity for rationality of action and rationality of belief, combined with an absence of coercion and of constraint. When humans are unable to form rational beliefs (ones responsive to the information available to them, because, say, they are paranoid) or to take rational actions (ones that promote their motives based upon their beliefs, because, say, they are neurotic), they may be exonerated from moral responsibility for their actions. Similarly, when their actions are affected by coercion (by means of threats) or constraint (by being restrained), degrees of responsibility may require adjudication.

While human actions result from a causal interaction of motives, beliefs, ethics, abilities and capabilities, counterparts for predator drones do not appear to exist except in an extended or figurative sense. If capabilities represent the absence of factors that inhibit their abilities from being exercised — as is the case when they cannot fly because their batteries need recharging — then their incapacity to perform their intended tasks could not be said to be their own responsibility. But insofar as they are designed and built to conform to the programs that control them, it is difficult to suppose that analogies with humans properly apply. Since analogies are faulty when (a) there are more differences than similarities, (b) there are few but crucial differences, or (c) their conclusions are treated as certain rather than merely probable, absent mentality, it is difficult to conclude that these machines are capable of the possession of beliefs, motives, or morality.

From the perspective of epistemology, the kind of knowledge that can be acquired about these machines is not akin to that of pure mathematics, which acquires its certainty at the expense of content, but rather to that of applied mathematics, which acquires its content at the expense of certainty. The complex causal interaction between software, firmware, and hardware makes the performance of these systems both empirical and uncertain, a matter of evaluating their success in use against the properties of their design. If they are not engineered in accordance with the appropriate specifications, for example, then the result of their deployment can be fraught with hazard. The reliability of these systems in delivering their lethal force to appropriate targets can be completely unknown without testing and study, and the conditions of their use in Iraq and Afghanistan make their probability of success unpredictable.
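To make the epistemological point concrete, here is a minimal sketch in Python (purely illustrative; the trial counts are hypothetical) of how the reliability of such a system can only be estimated from observed trials, with an uncertainty that shrinks as testing accumulates:

```python
# Illustrative only: estimating a system's reliability from field trials.
# The numbers below are hypothetical, not actual drone performance data.
from math import sqrt

def estimate_reliability(successes, trials):
    """Point estimate and a rough 95% margin of error for the success rate."""
    p = successes / trials
    margin = 1.96 * sqrt(p * (1 - p) / trials)  # normal approximation
    return p, margin

# With few trials the estimate is nearly worthless; with many it tightens.
for successes, trials in [(4, 5), (80, 100), (800, 1000)]:
    p, m = estimate_reliability(successes, trials)
    print(f"{trials} trials: estimated reliability {p:.2f} +/- {m:.2f}")
```

The point is only that claims about such systems are empirical: without testing under conditions resembling actual use, no probability of success can honestly be assigned.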

Epistemology and Targeting

The most serious problems with their deployment, however, arise from the criteria for determining the targets against which they are properly deployed. In the language of artillery, targets are sometimes designated within “free fire” zones, where any human in that vicinity is considered to be a legitimate target. That works when the enemy is clearly defined and geographically circumscribed. In the case of guerrilla (or “irregular”) warfare, however, there are neither uniforms to identify the enemy nor territorial boundaries to distinguish them, as is the case in Iraq and Afghanistan, where virtually any group of individuals, no matter how innocuous they may turn out to be, tends to be regarded as “fair game” for drone attack. In military language, of course, it’s all readily excusable as “collateral damage”.

How many wedding parties are we going to take out because the drone saw group behavior that it had been programmed to hit? How often do we have sufficient information to know that we are actually targeting insurgents and not innocents? Surely I am not alone in finding our actions repugnant when I read “Over 700 killed in 44 drone strikes in 2009” — taking out 5 intended targets, a ratio of 140 to 1 — and that 123 civilians were killed for 3 al-Qaeda in January 2010. The headlines are ubiquitous: “CIA chief in Pakistan exposed. Top spy received death threats; U.S. drones kill 54”, Wisconsin State Journal (18 December 2010), where the American government claims, just as it did in Vietnam, that every dead body was a “suspected militant”: none were innocent men, women, or children. Even The Washington Post (21 February 2011) seems to perceive that something is wrong with killing so many people and hitting so few targets.
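The arithmetic behind those ratios is easy to check. A throwaway sketch in Python, using only the figures quoted above:

```python
# Figures as quoted above: over 700 killed in 44 strikes in 2009 for 5 intended
# targets; 123 civilians killed for 3 al-Qaeda in January 2010.
killed_2009, targets_2009 = 700, 5
civilians_2010, targets_2010 = 123, 3

print(killed_2009 / targets_2009)     # 140.0 -> the "140 to 1" ratio
print(civilians_2010 / targets_2010)  # 41.0  -> roughly 40 civilians per target
```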

We are now invading Pakistani airspace in our relentless determination to take out those who oppose us. From the point of view of the countries that we have invaded and occupied, they might be more aptly described as “freedom fighters”. Since we invaded these countries in violation of international law, the UN Charter and the US Constitution, we appear to be committing crimes against humanity. And the risk posed by our own technology is now extending to the USA itself. A recent article by Lewis Page (26 August 2010), “ROBOT KILL-CHOPPER GOES ROGUE above Washington DC!”, describes a perceived threat to the nation’s capital as attributable to “software error”. No deaths resulted from this infraction, but perhaps the next time a mistake of this kind will lead to the deaths of members of Congress or of “The First Family” on a picnic outing in the Rose Garden, which will make for spectacular headlines. Yet we don’t even pause to ask ourselves, “What’s wrong with collateral damage?”

Morality and Methodology

We cannot know whether our conduct or that of our machines is moral unless we know the nature of morality. The answer depends upon which theory of morality is correct. There are many claimants to that role, including subjective theories, family-value theories, religious-based theories, and culture-related theories, according to which “an action is right” when you (your family, your religion, or your culture) approve of it. So if you (your family, your religion, or your culture) approve of incest, cannibalism, or sacrificing virgins to appease the gods, such actions cannot be immoral, if one of these theories is true. All these approaches make morality a matter of power, where right reduces to might. If someone approves of killing, robbing, or raping you, then you have no basis to complain on the ground that those actions are immoral, if subjectivism is correct. Similarly for family, religion, and culture-based alternatives. Every person, every family, every religion, and every culture is equal, regardless of their practices, if such theories are true. They thus embody the principle that “might makes right”.

As James Rachels, The Elements of Moral Philosophy (1999), has explained, on any of these accounts the very ideas of criticism, reform, or progress in matters of morality no longer apply. If attitudes about right and wrong differ or change, then that is all there is to it, even when they concern your life, liberty, or happiness. If some person, family, or group has the power to impose their will upon you, then these theories afford you no basis to complain. While Rachels is correct, as far as he goes, I have sought to establish objective criteria for arbitrating between moral theories that parallel those we have for scientific theories, including the clarity and precision of their language, their scope of application for the purpose of explanation and of prediction, their respective degrees of empirical support, and the simplicity (or economy or elegance) with which that degree of systematic power is attained. And, indeed, as I explain in detail in The Evolution of Intelligence (2005) and in Render Unto Darwin (2007), there do appear to be parallel criteria of adequacy for moral theories.

Theories of morality, no less than theories of physics, chemistry, and the like, are subject to evaluation on the basis of (CA-1) the clarity and precision of their language as a first criterion. Since the problem of morality arises from the abuse of power, it seems apparent that a second criterion of adequacy (CA-2) should be that an acceptable theory not be reducible to the principle that “might makes right”. Yet a third, which might be viewed as encompassing empirical content in the form of virtually universal human experience, (CA-3) holds that an acceptable theory of morality should properly classify the “pre-analytically” clear cases of immoral conduct — such as murder, robbery, and rape — as “immoral” on that theory; and similarly for “pre-analytically” clear cases of moral behavior, such as (apart from special cases) telling the truth, keeping our promises, and dealing equitably with other persons. The fourth (CA-4) is that an adequate theory of morality should shed light on the “pre-analytically” unclear cases, such as pot, prostitution, and flag burning, but also abortion, stem-cell research, and cloning.

Alternative Theories

While I address those “unclear cases” in the recent books I have cited, here I shall confine myself to considering the moral status of the use of predator drones. If we apply the four criteria by focusing on the second, third, and fourth, then the inadequacies of all but one moral theory become apparent. With regard to the four traditional theories I have discussed — simple subjectivism, family values, religious ethics, and cultural relativity — it should be apparent that they reduce to the corrupt principle that might makes right and therefore violate (CA-2). Since they permit pre-analytically clear cases of immoral behavior to qualify as “moral”, they also violate (CA-3). Because the “morality” of unclear cases, like the use of predator drones, varies with attitudes, which can differ from person to person, group to group, religion to religion and culture to culture at the same time, or within any of those at different times, none of these theories satisfies (CA-4).
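The verdicts of this paragraph can be laid out schematically. The following sketch (expressed in Python for compactness; the entries simply restate the essay's own conclusions about the four traditional theories, not an independent analysis) records which criteria of adequacy each is said to violate:

```python
# Schematic restatement of the verdicts above on the traditional theories.
# CA-1: clarity/precision of language, CA-2: not reducible to "might makes right",
# CA-3: classifies clear cases correctly, CA-4: illuminates unclear cases.
violations = {
    "simple subjectivism": {"CA-2", "CA-3", "CA-4"},
    "family values":       {"CA-2", "CA-3", "CA-4"},
    "religious ethics":    {"CA-2", "CA-3", "CA-4"},
    "cultural relativity": {"CA-2", "CA-3", "CA-4"},
}

for theory, failed in sorted(violations.items()):
    print(f"{theory}: violates {', '.join(sorted(failed))}")
```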

The relativity of traditional theories has motivated students of morality to move in the direction of more philosophical theories, which tend to fall into the categories of what are known as “consequentialist” and “non-consequentialist” theories. The former classify an action as “right” when it produces at least as much GOOD as its effect as does any available alternative, where what is GOOD is usually taken to be happiness. The problem, however, remains of deciding FOR WHOM that happiness ought to be produced, since it might be the individual, the group, or everyone. According to Ethical Egoism, for example, an action is right when it brings about as much happiness for you personally as any available alternative. The consequences for others simply don't count. So Ted Bundy, John Gacy, and Jeffrey Dahmer, for example, are home free — morally speaking — though few juries would be likely to be impressed by the argument that killing gave them more happiness than any available alternative. The violations of (CA-2), (CA-3), and (CA-4), I presume, require no elaboration.

According to Limited Utilitarianism, moreover, an action is right when it brings about as much happiness for the members of your group as any available alternative. This is good news for The Third Reich, the Mafia, and General Motors. If no available alternative(s) would produce more happiness for Nazis than territorial acquisition, military domination, and racial extermination, then those qualify as moral actions if Limited Utilitarianism is true. As in the case of Ethical Egoism, the violations of (CA-2), (CA-3) and (CA-4) appear to be obvious. Classic Utilitarianism, among consequentialist theories, is the only one that dictates the necessity of encompassing the effects actions have upon everyone rather than some special class. But even this virtue does not guarantee the right results. If a social arrangement with a certain percentage of slaves, say, 15%, would bring about greater happiness for the population as a whole — because the increase in happiness of the masters outweighed the decrease in happiness of the slaves — then that arrangement would qualify as moral. Yet slavery is immoral if any practice is immoral.

Deontological Morality

The problem here is more subtle than in other cases and therefore deserves more explanation. Actions that benefit the majority may do so at the expense of the minority. The Classical Utilitarian conception of “the greatest good for the greatest number” offers no protection for the life, liberty, or property of the minority, absent mechanisms to ensure that their rights are protected and upheld. Technically, we are talking about a concept of morality that is distributive (as a property of each person) rather than collective (as a property of the group), as I shall explain. Suppose that ten smokers were selected at random by the government each year, put on television and shot. It might well be that enthusiasm for smoking would fall dramatically, that heart and lung disease would diminish, that health care premiums would drop and that the net happiness of society would be maximized. If that were the case, should we select ten smokers at random each year, put them on television and shoot them?
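The distributive/collective distinction can be made vivid with a toy calculation. A minimal sketch, with entirely made-up happiness numbers, showing how an aggregate sum can rise even while a minority is gravely harmed:

```python
# Toy numbers only: how aggregate ("collective") happiness can increase
# even though a minority is devastated, which is the worry about Classic Utilitarianism.
def total_happiness(pop):
    return sum(group["count"] * group["happiness"] for group in pop)

before = [{"group": "majority", "count": 85, "happiness": 6},
          {"group": "minority", "count": 15, "happiness": 5}]

# Hypothetical arrangement: the minority's happiness collapses,
# but the majority's gain more than offsets it in the aggregate.
after = [{"group": "majority", "count": 85, "happiness": 8},
         {"group": "minority", "count": 15, "happiness": 1}]

print(total_happiness(before), total_happiness(after))  # 585 vs 695: the sum goes up
```

The collective total improves while each member of the minority is worse off, which is exactly why a distributive conception of morality resists this kind of trade.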

If theories that qualify manifestly immoral behavior, such as a slave-based society or random public executions to promote the health of the nation, as "moral" ought to be rejected, then perhaps a non-consequentialist approach might do better. According to what is known as Deontological Moral Theory, actions are moral when they involve treating other persons with respect. More formally expressed, it requires that other persons should always be treated as ends (as intrinsically valuable) and never merely as means (instrumentally). This approach has its roots in (what is technically known as) “the Second Formulation of the Categorical Imperative” advanced by Immanuel Kant, but we can forego such niceties here.

This does not mean that persons can never treat other persons as means, which usually happens without thereby generating immorality. The relationship between employers and employees is clearly one in which employers use their employees as a means to conduct a business and make profits, while employees use their employment as a means to make a buck and earn a living. Within a context of mutual respect, this is moral conduct and a characteristic feature of human life. When employers abuse their employees by subjecting them to unsafe working conditions, excessive hours, or poor wages, however, the relationship becomes exploitative and immoral. These are the conditions that typify “the sweatshop” and explain why they are despicable business practices.

Exploitative relationships can also occur when employees fail to perform their duties, steal from their employers, or abuse the workplace. Similar considerations apply to doctors and patients, students and faculty, or ministers and congregations, which may explain our dismay at their betrayal. Perhaps the chief consequence of a deontological perspective is the centrality of due process. No one should be deprived of their life, liberty or property without an appropriate form of certification that punishment of that kind is something that they deserve, which reveals the gross immorality of military aggression, territorial conquest, systematic genocide—and death by the use of predator drones to kill other persons, with only superficial regard for due process in the case of the intended targets and none at all for everyone else!

Axiology and Autonomy

When we are talking about a so-called "autonomous machine", then the question becomes whether or not such an entity is even capable of understanding what it means for something to be a person or to treat it with respect. There are ways to guarantee killing the enemy within a target zone, namely, by killing everyone in it. And there are ways to avoid killing the wrong target, namely, by killing no one in it. The problem is to kill all and only the intended targets. But is that possible? This becomes extremely problematic in the case of unconventional warfare. In principle, persons are entitled to be treated with respect by following rules of due process, where no one is deprived of life, liberty, or property without having the opportunity to defend themselves. In the case of the use of predator drones, however, the only processes utilized by autonomous machines are those that accrue from the target identification criteria with which they are programmed.

These machines, like other tools including computerized systems, are inherently amoral — neither moral nor immoral — from a deontological point of view. They, like other digital machines, have no concept of morality, of personhood or of mutual respect. They are simply complex causal systems that function on the basis of their programs. Were these conventional wars involving well-defined terrain and uniformed combatants, their use, in principle, would be no different than high-altitude bombing or artillery strikes, where, although the precise identity of our targets is not always known, we know with high probability who they are. In cases like Iraq and Afghanistan, our information is partial, sketchy, and all too often wrong. We are killing around 140 innocents for every intended target!
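To see why such systems are "inherently amoral" in the sense just described, here is a deliberately crude sketch (hypothetical Python, not any real targeting logic) of what programmed "target identification criteria" amount to: a sequence of condition checks with no concept of personhood, due process, or respect anywhere in it.

```python
# A deliberately crude, hypothetical illustration: programmed criteria are just
# condition checks over sensor-derived features. Nothing in such a rule set
# represents personhood, due process, or moral respect.
def matches_programmed_criteria(track):
    """Returns True if the observed track satisfies the pre-set rules."""
    return (track.get("inside_designated_zone", False)
            and track.get("group_size", 0) >= 5
            and track.get("pattern_of_movement") == "flagged")

# The same feature values could describe insurgents or a wedding party;
# the rules themselves cannot tell the difference.
observed = {"inside_designated_zone": True, "group_size": 12,
            "pattern_of_movement": "flagged"}

print(matches_programmed_criteria(observed))  # True either way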

We are taking out citizens of Iraq, Afghanistan, and now Pakistan, which, alas, if research on 9/11 is well founded — visit 911scholars.org... , for example, or patriotsquestion911.com... — have never threatened us. So we really have no business being there at all. Yet to this day we continue to hear about the threat from al-Qaeda and from Osama bin Laden, who appears to have died in 2001. We are depriving the citizens of other countries of their life, liberty, and property with no semblance of due process. This means that our actions are not only in violation of international law, the UN Charter, and the United States’ Constitution but also violate basic human rights. We once believed it was better for ten guilty men to go free than for one innocent man to be punished. We now practice the policy that it is better for 140 civilians to die than for one suspected “insurgent” to live. We have come a long way from Isaac Asimov’s “First Law”.

Jim Fetzer, a former Marine Corps officer who earned his Ph.D. in the history and the philosophy of science, is McKnight Professor Emeritus at the Duluth campus of the University of Minnesota. He has published extensively on the theoretical foundations of computer science, AI, and cognitive science. His academic web site may be found at www.d.umn.edu... .

edit on 22/2/11 by masqua because: title change per author request



posted on Feb, 21 2011 @ 03:46 PM
First off, why did you post this in the 9/11 conspiracies forum? Second off, as a former Marine (Yes I know, there is no such thing as a former Marine, but in your case I will make the exception), as a former Marine, you should know that the only rule in warfare is, my unit comes home alive. PERIOD.

If an unmanned drone will save the lives of men and women in my unit, then send it in.



posted on Feb, 21 2011 @ 03:51 PM
Ya get this thread out of the 911 section unless this is a pointless segue to some missile pod hologram tv fakery argument......

debunkers dream.



posted on Feb, 21 2011 @ 03:55 PM
The predator is not autonomous. It's remotely piloted, but still piloted.

We are not yet into the realm of the roaming robot killer. It's not too far away, but it's not here yet.



posted on Feb, 21 2011 @ 10:20 PM
These are autonomous drones, but I did place it in the wrong section. My mistake. My inexperience is showing.

reply to post by Shadow Herder
 



posted on Feb, 22 2011 @ 12:08 AM
Well, the fact is that the use of predator drones is killing a lot of innocent civilians, where for each one killed, our forces have to confront about 10 more "insurgents". Plus, since these invasions have been predicated upon the false premise that Islamic fundamentalists attacked us on 9/11, once you understand that 9/11 was a fraud, you realize that, if we weren't there at all, we wouldn't be losing any men at all. Try "Are Wars in Iraq and Afghanistan justified by 9/11?", noliesradio.org... , and you might begin to get the idea.

reply to post by vipertech0596
 



posted on Feb, 22 2011 @ 12:10 AM
No, if you read the paper a little more carefully, you will see that I am talking about autonomous machines of the kind you do not believe exist. Unfortunately, they do, and what I have provided is an account of the problems that attend them.

reply to post by justwokeup
 



edit on 22-2-2011 by JimFetzer because: (no reason given)



posted on Feb, 22 2011 @ 12:19 AM
Interesting. When the mods move this I will reply more. Till then, do you have proof that the drones are not just drones but self-thinking, self-guiding, self-targeting planes, er, drones? A link or news or something?



posted on Feb, 22 2011 @ 12:26 AM
Well, I was hesitating in replying to comments for that very reason, but the answer is, "Yes!", I am talking about the kind of drones that might be (mistakenly) described as "self-thinking" and such. They are autonomous drones. None of these systems can actually think, but they can follow their programs and behave in accord with them when they are not malfunctioning. They have no idea what they are doing. They are machines operating based on programs.

reply to post by bekod
 



edit on 22-2-2011 by JimFetzer because: (no reason given)



posted on Feb, 22 2011 @ 06:30 AM
The thread is now in a proper forum.

note: In future, rather than make off topic remarks, please hit alert and allow mods to make the move. Thank you, JF, for U2Uing me on this problem.



posted on Feb, 22 2011 @ 11:34 PM
reply to post by JimFetzer
 


I remember years ago, when I was investigating 911, I came across some information about Israeli drones. Some of these drones were actual American-made fighter jets without pilots that could be programmed to seek and destroy without any human input. They were also working on a fleet of smaller fighter drones. I assume some were autonomous and others flown remotely like a video game.



posted on Feb, 23 2011 @ 12:10 AM
Actually, it was Israel that first developed drones and introduced them to the US.

Drones must be controlled by human input when it comes to the shooting part, or programmed to fly to a self-destruction phase.

Drones can automatically fly to a preprogrammed point, but the human controller has to determine the fire point.

So, there's no real ethics point here.



posted on Feb, 23 2011 @ 08:20 AM
Manned or unmanned, their record of atrocities is completely outrageous. If we are killing 140 innocents for every targeted "insurgent", what does that tell us about the morality of our actions? If they were manned, the moral implications are even worse. I can't believe anyone would say something so absurd!


Originally posted by FarArcher
Actually, it was Israel that first developed drones and introduced them to the US.

Drones must be controlled by human input when it comes to the shooting part, or programmed to fly to a self-destruction phase.

Drones can automatically fly to a preprogrammed point, but the human controller has to determine the fire point.

So, there's no real ethics point here.




posted on Feb, 23 2011 @ 08:26 AM
The idea that the guy standing next to me at the fast food joint has just killed hundreds of children from thousands of miles away just astonishes me. They take off their flight suits and go to lunch among us. Serial killers are tame compared to these pilots. Funny how we have no fear of the most destructive people in our societies.
edit on 23-2-2011 by earthdude because: spelling

edit on 23-2-2011 by earthdude because: (no reason given)



posted on Feb, 23 2011 @ 04:25 PM
Some good posts. The final paragraph summarizes the issues: We are taking out citizens of Iraq, Afghanistan, and now Pakistan, which, alas, if research on 9/11 is well founded — visit 911scholars.org... , for example, or patriotsquestion911.com... — have never threatened us. So we really have no business being there at all. Yet to this day we continue to hear about the threat from al-Qaeda and from Osama bin Laden, who appears to have died in 2001. We are depriving the citizens of other countries of their life, liberty, and property with no semblance of due process. This means that our actions are not only in violation of international law, the UN Charter, and the United States’ Constitution but also violate basic human rights. We once believed it was better for ten guilty men to go free than for one innocent man to be punished. We now practice the policy that it is better for 140 civilians to die than for one suspected “insurgent” to live. We have come a long way from Isaac Asimov’s “First Law”.



posted on Feb, 23 2011 @ 05:09 PM
At the moment there is no difference between the pilot of a predator/reaper drone firing a hellfire remotely from Nevada, or a pilot firing the same missile from the cockpit of an AH-64.

It's the same decision process fed by the same intelligence apparatus. In both cases sensor footage is evaluated, a man receives clearance to engage and makes a decision to pull the trigger.

Killing innocent people is unethical any way you cut it. Western militaries try to minimise it. On the other hand some people do need killing. Mistakes will get made and the people involved will still have to live with it. The biggest influence on fire/no fire is the rules of engagement, and that's a political decision.

Drones allow more loiter time and therefore less pressure to make a snap decision. I would wager if you were going after the same targets with fast jets and cruise missiles the amount of innocent dead would increase.

Remotely Piloted Drones are not the problem.

Truly autonomous combat drones are not here yet. BAE Systems' Taranis, Phantom Ray and X-47 will allegedly have a greater degree of autonomy when they arrive. I doubt even they will seek their own moving targets. They may be used to autonomously drop bombs on fixed installations and return, like a re-usable cruise missile.
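The distinction drawn here, between a human reviewing footage and pulling the trigger versus a machine engaging on whatever rules it was loaded with, can also be put schematically. A hypothetical sketch, not a description of any fielded system:

```python
# Hypothetical sketch of the two control loops being contrasted,
# not a description of any actual fielded system.
def remotely_piloted_engagement(sensor_footage, operator_decision):
    """Weapon release only if a human, having reviewed the footage, authorizes it."""
    reviewed = bool(sensor_footage)
    return reviewed and operator_decision  # the human is the final causal link

def autonomous_engagement(sensor_footage, programmed_criteria):
    """Weapon release follows from programmed criteria alone; no human in the loop."""
    return programmed_criteria(sensor_footage)

# The remotely piloted case always passes through a human decision;
# the autonomous case is decided entirely by whatever rules were loaded.
print(remotely_piloted_engagement({"frames": 240}, operator_decision=False))       # False
print(autonomous_engagement({"frames": 240}, lambda footage: len(footage) > 0))    # True
```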



posted on Feb, 24 2011 @ 07:17 AM
We have no bona fide justification for being in these countries in the first place. We are properly perceived as invading and occupying them. We are slaughtering their people with reckless disregard for their rights as human beings. And to attribute these decisions to "politics" discounts that we, the United States of America, are supposed to stand for something. We are now known as the greatest aggressor nation in the world (along with "our gallant ally" in the Middle East, which is maintaining the world's largest concentration camp in Gaza). The moral issues are the same whether these drones are remotely-controlled or autonomous. We are not only violating Asimov's "First Law of Robotics" but the values and traditions that are embodied in the Declaration of Liberty, the US Constitution, and the UN Declaration of Human Rights, which are all profoundly deontological. If you have any doubt about whether we belong there, try "Are Wars in Iraq and Afghanistan justified by 9/11?", which is archived at noliesradio.org...

reply to post by justwokeup
 



edit on 24-2-2011 by JimFetzer because: (no reason given)




posted on Feb, 24 2011 @ 12:33 PM
reply to post by JimFetzer
 


The issue of whether or not the intelligence agencies of western democracies should be conducting assassinations in other countries is a political one. That's a question quite separate from the specific method employed. It's not what I thought this thread was about.

I thought it was about the ethics of deploying autonomous killing machines. That is an interesting topic. In my posts I was trying to explain that the strikes in Pakistan with Predator and Reaper do not yet fall into that category.

As a personal opinion, although it's a ways off, weapons systems with truly autonomous target selection and engagement should be precluded through international treaty.



posted on Mar, 1 2011 @ 09:13 PM
If you read the paper carefully, I am covering all the possibilities, minded or un-minded, remotely controlled or autonomous. Like you, I am intrigued by the use of autonomous machines, where I have done some work on the theoretical foundations of computer science, artificial intelligence, and cognitive science. So if you have questions about any aspect of different kinds of predator drones, please present them. You appear to be very thoughtful.

reply to post by justwokeup
 



posted on Apr, 15 2011 @ 08:39 AM

Originally posted by JimFetzer
We have no bona fide justification for being in these countries in the first place. We are properly perceived as invading and occupying them. We are slaughtering their people with reckless disregard for their rights as human beings. And to attribute these decisions to "politics" discounts that we, the United States of America, are supposed to stand for something. We are now known as the greatest aggressor nation in the world (along with "our gallant ally" in the Middle East, which is maintaining the world's largest concentration camp in Gaza). The moral issues are the same whether these drones are remotely-controlled or autonomous. We are not only violating Asimov's "First Law of Robotics" but the values and traditions that are embodied in the Declaration of Liberty, the US Constitution, and the UN Declaration of Human Rights, which are all profoundly deontological. If you have any doubt about whether we belong there, try "Are Wars in Iraq and Afghanistan justified by 9/11?", which is archived at noliesradio.org...

reply to post by justwokeup
 



edit on 24-2-2011 by JimFetzer because: (no reason given)



No reason? Of course we have a reason... oil, for one; live testing of a drone weapons system, for two; restoration of poppy growth, for three.
All this adds up to money... and lots of it.
Oil industry... billions of dollars in profits.
Military industry... billions of dollars in profits.
Heroin production... billions of dollars in profits.
"Always follow the money" has always been my motto for the causality of events.


