If we had a machine that could make predictions about how to increase future happiness, what would its measure of happiness be?
- Total happiness
If it measured total happiness, it could lead to a result where we are all more unhappy, but there are more of us, so total happiness has increased. This is the mere addition paradox.
- Average happiness
If it were to measure average happiness, the machine could decide to kill anyone unhappy in order to raise the average.
Neither measurement always gives satisfactory results, so is there a measurement that can overcome both of these problems?
- Total happiness with exceptions
We could allow the machine to make decisions that increase long-term total happiness, with one exception: it may only add new people if adding them also increases long-term average happiness.
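As a rough sketch, the combined rule might look like this. The function name `acceptable` and the happiness scores are purely illustrative assumptions, not a real proposal for how such a machine would work:

```python
def acceptable(current, proposed):
    """Decide whether the machine may move from `current` to `proposed`,
    where each population is a list of individual happiness scores.

    Rule (illustrative): total happiness must increase, and if the
    proposed population is larger (new people were added), average
    happiness must increase as well.
    """
    total_up = sum(proposed) > sum(current)
    if len(proposed) > len(current):
        avg_up = sum(proposed) / len(proposed) > sum(current) / len(current)
        return total_up and avg_up
    return total_up

# A larger but less happy population fails, even though total rose
# (the mere addition paradox case):
current = [8, 8, 8]            # total 24, average 8
crowded = [5, 5, 5, 5, 5, 5]   # total 30, average 5
print(acceptable(current, crowded))  # False

# Growth that raises both total and average is allowed:
better = [9, 9, 9, 9]          # total 36, average 9
print(acceptable(current, better))   # True
```

Note that the average-happiness check only fires when people are added, so the rule still permits changes that make the same population happier overall.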
This is still hardly satisfactory: it is slightly too complicated to be taken as an axiom, and you can still argue that some of its results are not moral, and it would be hard for me to disagree. What also makes me uneasy is whether 0 should mean neutral or total unhappiness; if we started counting unhappiness in negative numbers, there would be many more unsettling outcomes we would have to accept as normal and morally okay. I want to do some more research on the cognitive neuroscience and other aspects of happiness, and hopefully, once I feel I can turn this into practical advice, you will see more blog posts.