What if we can shoot the astronaut?

I am satisfied by this outcome for the astronaut scenario where the astronaut has the opportunity to self-destruct. But if the astronaut cannot self-destruct, yet you can destroy him, are you not then using him as a mere means to the end of your own survival?

Again, some intuition of pristinity and intervention is at play here. I think that if the people on the planet were there first, then they are the ones who have the right to protect themselves from outside interference, certainly to the extent of defending their own lives against viruses. I don't really see the problem with blowing him out of the sky under such a circumstance, regrettable as that may be.

But if the intuitions involved are of pristinity and intervention, then why not simply make the astronaut a member of that society? He can hardly be said to be interfering with his own society.

He is still interfering in the private lives of people who were minding their own business. It seems to me that it does not matter here whether he meant to do it or not, because he can still be liable, if not culpable, for the deaths that he will cause if he lands. The logical consequence of this belief is that if you get thrown down a well onto Mr Pointy, then you cannot kill him either.

This seems to make sense with respect to scenarios like September 11 as well. If terrorists use innocent people merely as means to the end of killing others, then those innocents may still be destroyed if necessary for the defence of the others who will be killed. They do not need to be culpable for the act in order to be liable for it. After all, if we decided that we could never kill one to preserve the pristinity of other lives, then we would be vulnerable to exactly the sort of terrorist attacks that destroyed the World Trade Centre. It therefore seems reasonable to take these measures to protect ourselves, as unfortunate as they may otherwise be.

Does this not, however, mean that you are using the innocents in question as mere means to the end of the survival of other people? It seems to me that you are, simply because, by definition, it is not their fault that they were brought into that situation. If that is the case, then the action still cannot be justified on a Kantian rationale.

Perhaps it is best to remember the argument, which I thought rather powerful, from my textbook on artificial intelligence. Although nobody feels comfortable with putting a value on human life, it is a fact that trade-offs are made all the time:

  1. Aircraft are given a complete overhaul at intervals determined by trips and miles flown, rather than after every trip.
  2. Car bodies are made with relatively thin sheet metal to reduce costs, despite the decrease in accident survival rates.
  3. Leaded fuel is still widely used even though it has known health hazards.

Paradoxically, a refusal to "put a monetary value on life" means that life is often undervalued. Ross Shachter relates an experience with a government agency that commissioned a study on removing asbestos from schools. The study assumed a particular dollar value for the life of a school-age child, and argued that the rational choice under that assumption was to remove the asbestos. The government agency, morally outraged, rejected the report out of hand, and then decided against asbestos removal.1
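To make the textbook's point concrete, here is a minimal sketch, in Python, of the kind of calculation such a study presumably performs. Every figure in it (the removal cost, the number of children, the risk reduction, and the dollar value placed on a life) is a hypothetical placeholder of my own, not a number from the actual study; only the structure of the trade-off is the point.

    # A hypothetical sketch of the asbestos-removal calculation.
    # None of these figures come from the actual study; they only
    # illustrate the structure of the trade-off.

    REMOVAL_COST = 20_000_000    # assumed cost of removing the asbestos, in dollars
    CHILDREN_EXPOSED = 100_000   # assumed number of exposed school-age children
    RISK_REDUCTION = 1e-4        # assumed drop in lifetime fatality risk per child
    VALUE_OF_LIFE = 5_000_000    # the assumed dollar value of one child's life

    expected_lives_saved = CHILDREN_EXPOSED * RISK_REDUCTION
    expected_benefit = expected_lives_saved * VALUE_OF_LIFE

    print(f"Expected lives saved: {expected_lives_saved:.1f}")
    print(f"Expected benefit:     ${expected_benefit:,.0f}")
    print(f"Removal cost:         ${REMOVAL_COST:,.0f}")
    print("Remove the asbestos" if expected_benefit > REMOVAL_COST
          else "Leave the asbestos in place")

With these made-up numbers the removal goes ahead (an expected ten lives saved, worth fifty million dollars against a twenty-million-dollar cost), but the decision flips as soon as the assumed value of a life drops far enough, which is exactly why refusing to state that value does not make it go away.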

I think that awareness of this thesis needs to be made more acute, and that would happen if the dollar value being presupposed on life were made explicit in each of the examples above.
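Here is what making that value explicit might look like for the car-body example, again as a Python sketch with figures I have simply invented: the saving per car and the added fatality risk per car are assumptions, and the point is only that accepting the trade-off commits the manufacturer to an implicit ceiling on the value of a life.

    # Backing out the implied value of a statistical life from a design trade-off.
    # Both inputs are invented for illustration; only the arithmetic matters.

    cost_saving_per_car = 100.0      # assumed saving from thinner sheet metal, in dollars
    extra_deaths_per_car = 2e-6      # assumed added lifetime fatality risk per car

    # Accepting the saving implies that avoiding extra_deaths_per_car deaths
    # is not worth cost_saving_per_car, i.e. a life is valued at no more than:
    implied_ceiling = cost_saving_per_car / extra_deaths_per_car
    print(f"Implied value of a statistical life: at most ${implied_ceiling:,.0f}")
    # With these inputs: at most $50,000,000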

Where would you even find out about such statistics?

Well, wait a minute. The difficulty with example 1 above is that, provided the aircraft are maintained according to the schedule as it stands, those planes should not crash. Qantas, for example, has had only one crash in the time that it has been in operation, because it adheres to a safety schedule that all but guarantees that it will not crash. In fact, after that crash, on 23 September 1999 in Bangkok, a complete overhaul was made of both Qantas' and the Civil Aviation Safety Authority's (CASA) operations to ensure this would not happen again. That certainly does not sound like any "trade-off" was being made with respect to human life!

What about the car bodies example?

Again, the EU has accepted this challenge, setting an ultimate goal of zero road accidents, and its results so far have been impressive. This kind of approach is fully Kantian.

As for leaded fuel, it is already standard for cars in Australia not to use it, and I don't even know who still does. But it seems that an ultimate goal of zero health risks from such a thing would be just as feasible as the EU's goal of zero road accidents. I don't think that we have to settle for "trade-offs" where the safety of the equipment itself is concerned.

That still leaves us with the difficulty that if we shoot the astronaut, we are using him as a mere means to an end.

We can't help the fact that someone is going to die in that situation, but we can help how many people it is.

But if you follow that line of reasoning, then we also can't help that someone is going to die in the star ark. That eliminates the possibility of pristinity versus intervention, and the argument has once again degenerated into a fairly straightforward form of utilitarianism.

Notes


1. Russell, Stuart and Norvig, Peter, Artificial Intelligence: A Modern Approach, New Jersey: Prentice Hall, 1995, p. 480.
