The Importance of Discernment for Practical Ethical Systems
“The beginning of wisdom is this,” says Proverbs 4:7 (NIV): “Get wisdom. Though it cost all you have, get understanding.” To modern eyes, this appears to be circular, or even sinister, since Proverbs 4 is written by someone who claims to teach wisdom and therefore has a vested interest in people wanting to be wise. In the world of the Old Testament, though, wisdom was not always thought of as a product that you acquired. Rather, it was a state of being. You didn’t want to know what to do in complex situations; you wanted to train your mind so that, when you were put in complex situations, your instincts would turn out to be right. The beginning of wisdom, that is, the first and most important thing that wise people know, is this: above all else, put effort into training yourself to be wise.
Today, most mainstream Christian traditions have a similar concept in the doctrine of discernment. When put in a situation where we must choose between two sins, Christians are permitted to make the choice based on their understanding of what God would prefer rather than simply freezing. The “true” sin, in this case, lies with the people who put the Christian in the unwinnable situation beforehand. An uncle of mine once wound up stuck in an airport with his family in a very dangerous part of the world. He bribed a guard to let them onto their plane. This would have been a sin in isolation, but the alternative would have been to fail to provide for the welfare of his family by forcing them to stay in the airport. Upon reflection, neither he nor I think he did anything wrong.
My main claim is twofold: first, that discernment is a practical necessity for any ethical system, and second, that ethical systems should therefore be evaluated for the discerning minds that they produce as well as for the direct outcomes that they imply. Discernment is certainly necessary in a “doctrinal” ethical system, one which gives you a finite-sized book of rules and tells you to follow them where possible and extrapolate where not. Such systems are extremely prevalent in day-to-day life: any company worth working for, open-source project worth contributing to, or social club worth joining will have some form of code of conduct, either expressed or implied. It’s no coincidence that theologians have thought deeply about this for quite some time.
Other ethical systems try to remove the need for discernment. There seem to be basically two approaches. The first is to pile on the rules until there’s one specific answer for everything. In practice, nobody ever does this, although “Why doesn’t God tell us exactly what to do?” is a very common question that evangelists get asked. The answer is that human behavior is infinitely varied, even to the point of infinite recursion. Consider, for example, the fact that an ethical system with a rule for every situation allows you to perfectly predict the behavior of its followers. You can then set up events wherein these followers must either give you their money or break their code. This doesn’t seem fair, so we would expect the code to have a footnote to the effect that the rules may be broken if someone is clearly just trying to extort you. That, however, is just a new set of rules, and it can be exploited by a (presumably more complicated but still possible) situation. A realistically functional, fully doctrinal system without discernment would contain infinite information, and would therefore, by the covariant entropy bound, only be expressible in infinite time or space.
So that’s a non-starter. The more common approach is to use “meta-rules,” which usually take the form of functions that ethical people are supposed to maximize or minimize. Utilitarianism, for instance, takes as its function the total happiness of everyone, or the average happiness, or the total human flourishing, or one of many other variants on the same idea. When I want to decide if I should do X or Y, I simply think “How much net happiness would be generated by X? How much by Y? Which is the greater?” Kant instead asks me to maximize the extent to which I can will that the maxims I am following be universal laws, but he’s still giving me a function.
This seems easier to handle, but in practice it’s no better than the first option. If I seriously attempted to measure my actions by the happiness that they caused, I would not be able to publish this post without considering the potential reactions to it from every single person in the world, both living and not yet born. On top of that, I would have to think about how other people would react to those reactions, and so on. Functions complicated enough to support grand ethical statements necessarily take grandiose inputs. Clearly this is no good at all.
Of course, the utilitarians don’t seriously take this view. Instead, they use heuristics. Most actions I take will affect most people in the world only negligibly, so, if I can help a few people to feel real, intense joy, that will almost certainly balance out any small unconscious sadness that I cause in the process. In doing this, the utilitarians prove my point for me. There is no good way to select your particular heuristic, beyond your gut feeling about what might work and your experience of what’s worked in the past. The utilitarians are placed in an impossible situation, and what do they do? They use discernment.
The utilitarian use of heuristics raises another problem with systems based on meta-rules: they pass a huge burden of calculation onto the actor, and thereby require a seriously privileged social situation. It is all very well for Sartre to accuse his waiter of acting in bad faith, but the waiter, who has to focus on work so that he can get paid and eat his own dinner, doesn’t really have the time to consider this in full. And yet Sartre clearly expects the waiter to correct his behavior. Nobody is seriously suggesting that only the idle rich should be expected to be ethical, so we might as well concentrate on systems that are friendly to members of other social classes.
Sooner or later, then, whatever ethical system you use, you will come to a point where simply following the dictates as written is not practical. Perhaps you must make a decision between two actions, both of which are loathsome to your deity of choice. Perhaps you must universalize an overly-complicated maxim, or minimize a difficult misery-estimation function. In these situations, you will have to use your own discernment. Depending on your point of view, this might be tragic or glorious, and there is certainly a place for armchair-quarterbacking your actions later – I don’t want to pretend that “discernment” is a magic word that makes all your ethical issues go away – but it is nonetheless a fact of life.
If this is the case, then ethical systems will necessarily produce not only followers, but also discerners, and we should evaluate them on both. The standard way to compare ethical systems, at least in popular philosophy, is to set up situations where one of them gives a different result from the other. (I remember a teacher of mine asking “Why is it that all first-year ethics problems involve axe murderers?”) To take an easy example, let’s knock down naive utilitarianism. The only rule here is to maximize total happiness across all people.
Let’s say that there exists some person who derives hundreds of times as much pleasure as anyone else from any given thing. In that case, by naive utilitarianism, it is necessary to give everything we have to this person. In short, someone born with a particular kind of brain defect which overstimulates their pleasure centers would become the king of everything and subject the rest of us to ceaseless toil and misery.
This is a standard argument (which Robert Nozick handles with much more nuance in his writing on the utility monster), and I personally find it quite effective, but we must admit that it lacks some realism. There is probably no such person around, or at least none that we know of. A potentially better objection arises from thinking about what kind of discernment naive utilitarianism fosters. The future king of everything might not exist, but the naive utilitarian must always be looking for them, and must be prepared not only to give over everything he has, but to steal all of your stuff and give it over as well. Sure, it might be emotionally difficult, but this system of ethics presents those emotions as a barrier to be overcome, rather than the last gasp of a dying conscience.
The naive utilitarian’s discernment makes him utterly untrustworthy. You don’t know which heuristics he chooses to use, and so you don’t know when you’ll find yourself on the wrong side of an approximation. Every time you watch him pick up a kitchen knife, you have to wonder if he’s found five people who need your organs for transplants. You really don’t want to be around this guy for long. It’s an issue with naive utilitarianism which, in my view, is at least as bad as the utility monster, and which certainly doesn’t require any hypothetical individuals to pop into being.
(Let me be clear: I’m not saying that this applies to all utilitarianism. It’s a well-explored ideology with many people who are working to solve just this kind of problem. I’ve intentionally straw-manned it here to demonstrate how one might use the discerning mindsets generated by an ethical system as a way to attack or defend it. That’s all.)
The beginning of wisdom is this: get wisdom. Though it cost all you have, learn how to act when you’re up against the wall, because sooner or later you will be. It’s a commonly-heard sentiment that we only really know someone once we’ve seen how they act in extremis, and, while we usually think this way when discussing hypocrisy or selfishness, the thought is true of discernment as well. Seeing my uncle bribe that guard had the potential to be a marriage-defining moment for my aunt. This is always going to be the case. We might as well incorporate it into our formal ethics.