I went to hear Dr. Brown (or "Doc Brown," as I'm sure every fan of Back to the Future calls him) speak on his work restoring poverty-stricken neighborhoods around Mercer. The primary focus of the talk was the various pitfalls and failures encountered by previous attempts at social engineering; for example, he discussed how the civil rights movement, while obviously effective at creating more equality, also resulted in the rigid stratification of poor blacks. Essentially, while a black middle class was created, the poorest black families were left behind where they were, with little chance of escape.
The discussion was fascinating and wide-ranging, and raised some interesting points; however, I want to approach this from an ethical standpoint. To what extent should unintended consequences be considered when making an ethical decision?
(A brief digression: I recognize that expecting unintended consequences may, at first, sound like a contradiction in terms, and on the surface, it is; however, I am talking here about consequences that may not be apparent at first glance, but reveal themselves upon further consideration.)
Take a utilitarian "net good," for example. How far into the future should the agent look? Should they care about the long-term consequences of their actions at all? Would a utilitarian accept short-term loss in favor of long-term gain, or should short-term pleasure be the primary focus?
Or consider Kant’s categorical imperative. If the decision-maker does not act with the full knowledge of the effects of his actions, can he really make an ethical decision? Can he say with absolute confidence that any situation is exactly what it appears to be?
Obviously, thinking too hard about this can result in paralysis and inaction; however, the consequences of one's actions cannot simply be ignored. To do so would be gross ethical irresponsibility. Doc Brown didn't come down on one side or the other, suggesting instead that these consequences were "endemic" to social engineering.