David Reinstein
"Why do this, why try to generate a best take on 'my welfare function'" without having gone through the literature (much, only listening to podcasts etc)?
Uninformed theorizing may bring a fresh perspective, one that reading first would constrain
Narrating "figuring out things" may bring insights to others (help newbies learn about population ethics, give experts a sense of how an economist would take this on)
- Prob (1-p) of 'small world' S with N_S = 100 million people indexed by i
- Prob p of 'large world' L with N_L = 100 billion people indexed by i
As a 'baseline', think of p = 1/2 for intuition; assume that if you didn't act, p would take this value.
Ignore the 'non-identity' issue.
Each individual has a value function v_i(W);
call V_W the vector of these across all individuals in world W
For simplicity, maybe we assume that every individual (in a particular world W) gets the same v_i(W), but this can be adjusted in light of further arguments, discussed below
For now assume all value/happiness is positive, and/or that we have a clear way of weighing negatives against positives (as well as among positive states)
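A minimal formalization of the setup so far (my notation, just restating the assumptions above):

```latex
W \in \{S, L\}, \qquad \Pr(L) = p, \quad \Pr(S) = 1 - p, \qquad N_S = 10^{8}, \quad N_L = 10^{11} \\
V_W = \big( v_1(W), \ldots, v_{N_W}(W) \big), \qquad v_i(W) \ge 0 \ \text{for all } i
```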
You can 'donate' (or take costly actions) by choosing some vector (g_1, g_2, g_3), each component of which may potentially affect (see the toy encoding after this list):
the value each individual gets (happiness, whatever) in state S
the value each individual gets in state L
the probability p of the 'large world' L
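A toy encoding of the model above; the numbers, names, and the additive effects of g are my own illustrative choices, not anything canonical:

```python
N_S, N_L = 100_000_000, 100_000_000_000  # populations of worlds S and L
p0 = 0.5                                  # baseline Pr(large world L)
v_S0, v_L0 = 1.0, 1.0                     # per-person value, uniform within a world

def after_donation(g, v_S=v_S0, v_L=v_L0, p=p0):
    """Apply a 'donation' g = (g1, g2, g3): g1 shifts per-person value in S,
    g2 shifts per-person value in L, g3 shifts the probability p of world L."""
    g1, g2, g3 = g
    return v_S + g1, v_L + g2, min(max(p + g3, 0.0), 1.0)

# Example: a costly action that slightly raises everyone's value in S.
print(after_donation((0.01, 0.0, 0.0)))  # (1.01, 1.0, 0.5)
```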
Aside: Why not include 'extinction' or 0 population?
I think this conjures non-utilitarian values, ones that don't simply 'add up across people'
We already know there will be at least some people ... alive today
I want a 'consequentialist utilitarian welfare function':
I think I should care about (probability-weighted) outcomes, mediated through how people themselves value their own states. I think I should act so as to best maximize this. I don't want the 'thing I am maximizing' to depend on my actions, on how I think about my actions, or on 'why' I chose something.
'Average utilitarian' (or some function like that)
Total utilitarian
Some representation of 'person-affecting' views (but it's hard to achieve this with a simple SWF)
Considering allowing a function of both the vector of happinesses V_W and the population N_W in state W (formalized below) ... but that does seem a bit of a fudge
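In symbols (the third, 'hybrid' form is my reading of 'a function of both'; f is an unspecified aggregator):

```latex
U_{\text{avg}} = (1-p)\,\tfrac{1}{N_S}\textstyle\sum_i v_i(S) + p\,\tfrac{1}{N_L}\textstyle\sum_i v_i(L) \\
U_{\text{tot}} = (1-p)\,\textstyle\sum_i v_i(S) + p\,\textstyle\sum_i v_i(L) \\
U_{\text{hyb}} = (1-p)\, f(V_S, N_S) + p\, f(V_L, N_L)
```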
Here the 'average utility' model seems appropriate for a person with a person-affecting view (PAV); you don't care which world exists, so you won't invest in changing p
True, if you 'could' affect happinesses, it would seem weird and unfair that you value each person in the small world so much more ... but as you can't, this doesn't lead you to an 'unjustifiable decision'
Possible variation: you can affect both p and V_S but not V_L ... this would probably preserve the conclusion
Here the 'total utilitarian' model prescribes the 'right action'. For the same cost, you would value improving the life of someone in world S at (1-p)/p times the value of the same improvement for someone in world L (equal weight at our p = 1/2 baseline) ... only because each person in world S is (1-p)/p times as likely to exist as each person in world L
But it feels weird to a PAV, because with this SWF, if you could increase the probability of world L, you would do so. The actual value function puts (in our example) 1000 times more value on world L (checked numerically below)
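A quick numeric check of the claims above under expected total utility, using the uniform-value simplification (my arithmetic):

```python
# Everyone in a world gets the same value v; p is at its baseline.
N_S, N_L, p, v = 100_000_000, 100_000_000_000, 0.5, 1.0

# Expected-value weight on one unit of welfare for a person in each world:
weight_S, weight_L = 1 - p, p
print(weight_S / weight_L)    # (1-p)/p = 1.0 at the p = 1/2 baseline

# Relative (probability-weighted) total value of the worlds themselves:
print((p * N_L * v) / ((1 - p) * N_S * v))  # 1000.0: world L dominates
```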
A 'correction term' added to either the average or total model, implying that 'increasing p should have little or no value' (sketched in code below)
A 'value of my actions (and outcomes?)' that treats the components of my decision (g_1, g_2, g_3) differently insofar as these affect p versus the value functions
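One way to cash out the 'correction term' idea: evaluate welfare with p frozen at its no-action baseline, so actions get credit only through the value vectors and never through shifting p. A minimal sketch under that assumption, not a worked-out proposal:

```python
N_S, N_L = 100_000_000, 100_000_000_000
p_baseline = 0.5  # what p would be if you didn't act (the baseline above)

def total_u(v_S, v_L, p):
    # Expected total utility with uniform per-person values.
    return (1 - p) * N_S * v_S + p * N_L * v_L

def corrected_u(v_S, v_L, p):
    # Credit changes to the value vectors, but freeze p at its baseline,
    # so the g_3 (probability) component of an action earns nothing.
    return total_u(v_S, v_L, p_baseline)

# Raising p from 0.5 to 0.9 adds no value under the corrected function:
print(corrected_u(1.0, 1.0, 0.9) == corrected_u(1.0, 1.0, 0.5))  # True
```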
David Rhys-Bernard curated Pablo Stafforini's list (below), adding to it further.
David Reinstein: I took a quick look at the syllabi below. Some seem to be analytically rigorous, and some seem to engage empirical economics and social science (especially measuring the impact of poverty interventions).
However,
I did not see any that focus rigorously (with maths) on economic theory, decision science, or econometrics/measurement/quantitative work
I found mostly 'themes and reading lists'; no 'web book/textbook' yet