Posted by: dacalu | 6 March 2012

Units of Value

Tonight’s blog is a bit of a ramble dealing with where we start in looking at agents and value.  I think there’s a rational reason to start with agents – after all, agents and preferences are what I want to model.  Nonetheless, there seem to be a number of options for how to go about that.  I spell some of my thoughts out here, but would love to hear from you about other possibilities, or a critique of these.

I would like to say a word or two about the units in the utility function I’m proposing.  There are a number of ways we could set this up.  Jeremy Bentham proposed an ethics that maximized happiness.  For him, there was some unit of pleasure such that we could measure the amount of happiness in the world.  He even suggested that some were capable of more happiness than others…  I find this somewhat unsatisfactory.  I couldn’t tell you exactly why, but it strikes me that one truly euphoric individual could warrant any number of small misfortunes in others.  (See Brave New World for a fuller development of this idea.)  One can somewhat weaken this strict utilitarianism by advocating “the greatest good for the greatest number,” though that raises the question of how we balance breadth of coverage with depth of happiness.

It also raises the question, “what is happiness?”  Aristotle had a concept of fulfillment he called eudaimonia.  It is sometimes translated as happiness, though I think we might need a more nuanced word.  Alternatively, one might say that happiness is nothing more than a pleasant feeling caused by endorphins.  In that case, a perfect world would be one run by machines in which we were all drugged senseless…  If happiness means simply “what it is we seek,” it hasn’t given us any predictive power as an ethical model.  If it means simply pleasure, I think most of us would agree that is insufficient.
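To make the euphoric-individual worry concrete, here is a toy sketch in Python.  The function name, the numbers, and the units are entirely my own invention – nothing here is Bentham’s actual calculus, just the arithmetic of a strict sum.

```python
# Toy illustration of strict Benthamite utilitarianism: score a world
# by summing each person's happiness in some common unit.  The numbers
# below are arbitrary, chosen only to exhibit the objection.

def total_happiness(happiness_levels):
    """Benthamite score: simply add up everyone's happiness."""
    return sum(happiness_levels)

# One truly euphoric individual (+100) plus sixty small misfortunes...
world_a = [100.0] + [-1.0] * 60   # net score: 40.0
# ...outscores a world where all 61 people are mildly content.
world_b = [0.5] * 61              # net score: 30.5

assert total_happiness(world_a) > total_happiness(world_b)
```

The sum is indifferent to how the happiness is distributed, which is exactly the problem.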

“To make a goal of comfort or happiness has never appealed to me; a system of ethics built on this basis would be sufficient only for a herd of cattle.” – Albert Einstein

Or, in a more religious vein,

“The Christian religion…does not begin in comfort; it begins in…dismay.  In religion, comfort is the one thing you cannot get by looking for it.  If you look for truth, you may find comfort in the end; if you look for comfort you will not get either comfort or truth—only soft soap and wishful thinking to begin with and, in the end, despair.” – C.S. Lewis, Mere Christianity

No.  I think we will have to look a little harder to find a model of preference that can make a utility function worthwhile.

Some theologians have presented the notion that happiness is obedience to God’s will.  This train of thought is particularly popular in Islam.  I think this presents two major obstacles: the first epistemological, the second psychological.  First, it would be easy to end up with the same problem we had with happiness.  What exactly is God’s will?  We might be tempted – indeed I believe we are regularly tempted – to assign the label “God’s will” to whatever our biology or culture predisposes us to prefer.

“No man ever believes that the Bible means what it says: He is always convinced that it says what he means.” – Bernard Shaw

I am not such a skeptic as all that.  I do think that we can, to a limited degree, overcome the obstacle of determining God’s will – through the study of scripture and tradition, by the light of reason, in community, and in prayer and openness.  Nonetheless, this is jumping the gun a bit.  It requires the introduction of an epistemology – a way of knowing – when we don’t even have the basic pieces in place yet.

I think the naive notion that God simply plants the right ideas in our head via scripture borders on evil.  If this were so, there should be complete agreement on the correct interpretation among all readers, or at least among the majority of Christian readers.  It is not so.  I refuse to believe God is so inefficient as to provide a sure and certain means of communication which He only chooses to use with a few individuals.  What if the inerrantists turn out to be the ones predestined for Hell, and God chooses not to communicate with them?  Even if I somehow convinced myself that my personal reading was unquestionable inspiration (completely cutting off the necessity for communal understanding), I’ve poisoned the well for sharing it with anyone else.  Their understanding, too, must be inerrantly revealed by God, which removes the need to talk about it.  No, I think determining God’s will takes work and communication.  And that means we’re going to have to set up a language with which to do that work.  Hence the present endeavor.

So, maybe the epistemological hurdle can be overcome.  What about the psychological hurdle?  Here, I am less confident.  It seems to me that the language of obedience fails to recognize individual preferences except in light of their adherence to or deviation from the norm – that is, God’s will or, for that matter, any universal value system.  We’ve somewhat short-circuited the question of personal values by stating a priori that they are either proper or improper.  That would require judgement without analysis.  Shouldn’t we attempt to model the systems correctly before judging whether personal utility functions line up with a universal utility function?  I think so.

I am, therefore, unwilling either to maximize happiness or to use “correctness” as a foundation for morality.  Rather, I would propose we work with what we already have – agents.  We have admitted that agency seems to be a binary category, but that some agents may have a broader range of choice than others.  It would be sensible, then, to argue for a value system that maximizes both freedom and degrees of freedom.  I think that Nietzsche and Kierkegaard both go down this path.

Alas, existentialism doesn’t really move me.  There are far too many things I could be free to do, which would give me neither pleasure nor fulfillment.  I reject strict existentialism, primarily because it fails to line up with the actual preferences I observe in myself or others.  Rational, consistent, but failing to fit the data.  I value freedom as a virtue, but not the only virtue.  Somehow I want it to be freedom to do good, freedom to be happy, or freedom to be obedient.  Pure freedom is lifeless.

I’m going to have to find some other way to maximize agents.  I could maximize their number – but that seems patently absurd from a consequentialist perspective.  I can’t think of anyone who genuinely believes that more people is always better than fewer.  It works against any notion of resource management or even community management.  It would require reproduction at every opportunity, or some level of species awareness (or worse, awareness of all sentient beings) that is clearly beyond our capability.  So, I’ll accept more agents as a very minor good, but only in context.

Pragmatically, I’m going to reach for a preference system that maximizes some good for available agents.  That is, I’m going to take into account those agents within my sphere of influence.  (Careful, that’s much larger than you imagine.)  I want to maximize their happiness, freedom, and fitness with divine will.  The next challenge will be to balance the three.
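For concreteness, here is a minimal sketch of the bookkeeping I have in mind.  The class, the weights, and the tiny per-agent bonus are all placeholders of my own; choosing the weights is precisely the balancing problem I have not yet solved.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    happiness: float    # pleasant or fulfilled state, however measured
    freedom: float      # range of genuine choices open to the agent
    divine_fit: float   # alignment with divine will, however discerned

def value(agents, w_happiness=1.0, w_freedom=1.0, w_divine=1.0,
          w_count=0.01):
    """Score the agents within one's sphere of influence.

    The three main weights encode the unsolved balancing problem; the
    tiny w_count term treats more agents as a very minor good, only
    ever in context.
    """
    score = w_count * len(agents)
    for a in agents:
        score += (w_happiness * a.happiness
                  + w_freedom * a.freedom
                  + w_divine * a.divine_fit)
    return score

# Example with arbitrary numbers:
flock = [Agent(happiness=1.0, freedom=0.5, divine_fit=0.8),
         Agent(happiness=0.2, freedom=0.9, divine_fit=0.4)]
print(value(flock))   # 3.82 with the default weights
```

Nothing in the sketch settles the weights; it only makes explicit what has to be traded off.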


Responses

  1. I would use the term blessedness instead of happiness, although the two are kind of synonyms, because happiness is so strongly connected to emotion, while blessedness is more strongly connected to connection with God.

