Doing less good, but for good reason

Level of technicality: moderate

Am I doing enough to make the world better? If I only donate 20% of my income, instead of 95%, am I doing something wrong? Or if I only spend 20% of my free time volunteering for valuable causes, or if I choose a career that’s a bit less impactful than another option?

I worry about this a lot.[1] Why? I think that the wellbeing of others is just as important as my own, and so I have just as strong a reason to help others as I do to help myself. And, due to the unfortunate state of our world, 95% of the people in it are poorer than me. That means that, for the most part, my time and money will be better spent on improving their lives than on improving mine. So how can I justify not contributing as much as I can towards helping them? How am I not doing something terrible by only giving away, say, 20% of my income?

In this post, I’ll give one reason why you’re not doing something terrible by holding back, even if you do think you’re required to do as much good as possible.

When I’m deciding what to do right now, that’s all that I’m deciding – I can’t decide my future actions, since I can’t guarantee that I’ll follow through. The future version of me might not stick to my plans. So he’s an independent agent (or, more accurately, lots of different agents). He might get tempted by another option, or get lazy, or change his mind completely. But the effects of my actions will still sometimes depend on what he does. So, to make sure my good deeds won’t be foiled by his laziness, I need to predict his behaviour – just as I would if I were relying on any other person to help me achieve something.

To get specific, I think my future self is easily demotivated and prone to burnout. Suppose I try to donate 95% of my income every year for the rest of my life. But, when I’m deciding that, I’m actually just deciding how much to donate in the first year. Future me is the one who’ll decide how much to donate next year, and every year after that. And if I donate 95% now, I expect future me to lose a few too many of his creature comforts and give up on donating altogether. So the effect of my action will be that only 0.95 years’ worth of income goes to a good cause.

But if I donate 20% now, I think future me can handle that. I think he’ll stick to the plan and keep giving 20% for the next 40 years. And that means 8 (0.2×40) full years of income going to a good cause. That’s way better. (Of course, my income would likely go up over those 40 years so the difference would be greater still.)
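Just to make the arithmetic explicit, here’s a rough sketch in Python. The burnout model (you donate for one year at the chosen rate, then give up entirely) and the probabilities are simplifying assumptions of mine; only the 0.95-years-versus-8-years comparison comes from the figures above.

```python
# Rough sketch of the comparison above. The burnout model and the
# probabilities are simplifying assumptions, not precise estimates.

def expected_lifetime_donations(rate, p_burnout, years=40):
    """Expected years' worth of income donated, holding income constant,
    where burning out means donating at `rate` for one year and then
    giving 0% thereafter."""
    if_burnout = rate * 1      # one year's donations, then nothing
    if_stick = rate * years    # `rate` every year for `years` years
    return p_burnout * if_burnout + (1 - p_burnout) * if_stick

print(expected_lifetime_donations(0.95, p_burnout=1.0))  # 0.95 years' income
print(expected_lifetime_donations(0.20, p_burnout=0.0))  # 8.0 years' income
```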

Likewise, I could choose now to go into the highest-impact job I can find, which also happens to be stressful and unpleasant, or I could stick with something a bit less impactful but more enjoyable. (Disclaimer: There’s a huge variety of really high-impact jobs out there, so you probably won’t face this dilemma.) It would be better for the world if I spent the next 40 years doing the stressful, more impactful job rather than the enjoyable one. But it’d also be better for the world if I did the enjoyable job than if I went off to live on a commune or something.

If I take the stressful job, I expect my future self to last a year and then go live on a commune for the remainder of my working life. Whereas, if I take the enjoyable job, he’ll stick with it. As long as I do more good in 40 years of the enjoyable job than I do in 1 year of the stressful job, it’s better if right now I choose the enjoyable one.

In general, when I’m choosing right now, it’s plausible that the action with the best expected consequence is actually the one that’s a bit easier, a bit less demanding. That’s because I’m just choosing this one action, not all of my future actions – all I can do for those future actions is predict what future me will do. And he’s more likely to play along if I don’t impose too high a cost on him.

But is there anything fishy about this? About treating your future self as a separate agent and strategising around them? Recently, I was lucky enough to read a manuscript by Stephen White in which he deals with exactly this issue.

Steve’s (more general) predicament:

Suppose that it would be best if you did action a1 at time t1 and then a2 at t2. Both actions are entirely up to you. But, if you don’t end up doing a2, then it’s better to not do a1.

Here’s an example decision matrix:
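Putting in some purely illustrative numbers (the exact values don’t matter, only their ordering):

  • a1 at t1 and then a2 at t2: value 10 (the best outcome)
  • a1 at t1 but no a2 at t2: value 1 (the worst outcome)
  • no a1 at t1, then a2 at t2: value 5
  • no a1 at t1 and no a2 at t2: value 4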

Here, a1 might be donating 95% of your income in year 1. And a2 might be donating any amount at all in years 2-40. I’m oversimplifying, yes, but this captures the important bits.

In this case, if you predict that you’re unlikely to do a2, does that mean you shouldn’t do a1? Here are some arguments for “no, you have no less reason to do a1”.

1. Deliberation crowds out prediction (Levi, 1997)

Suppose that you’re deciding whether to get up or stay in bed. You predict that you’ll most likely end up deciding to stay in bed.

This doesn’t seem like a reason, in itself, to stay in bed. But does it give you reason to get up? If it does, then it’s a bit more likely that you’ll get up, so you should revise the probability of staying in bed downwards. And then you’ve undermined the very reason you had. This seems incoherent.

More generally, if you’re deliberating among some options then you’re free to choose any of them. It’s up to you which one you’ll choose. So it must be up to you what the probability is. You can freely choose to undermine your own prediction. So you can’t coherently deliberate while also having your choice (even partly) determined by self-prediction. Prediction can’t determine what you ought to do in a case like this.

But, as Steve points out, this argument only applies to your present decision. For an option which only your future self gets to choose (like a2), it’s not up to (present) you whether you choose it. There’s no incoherence in choosing on the basis of a future self-prediction. Just like there’s no incoherence in choosing on the basis of what you predict other people will do, since you don’t have control over them. So, the argument doesn’t apply to the cases we care about.

2. Epistemic evasion (Marušić, 2015)

Okay, it’s not incoherent to decide based on future self-prediction, but it still seems a bit dodgy. Here’s another possible source of the dodginess.

When we’re deciding whether to do something, it should be on the basis of only our practical reasons for and against doing it. But if we decide against doing a1 followed by a2, on the grounds that we predict we won’t do a2, we’re either: a) acting on something other than practical reasons; or b) pretending that whether we’ll do a2 is a matter for prediction rather than a practical question. But it is a practical question. It’s up to us – it’s settled by a decision-making process that we control.

When we decide based on the prediction that we won’t do a2, we’re treating that decision as an epistemic matter rather than something we control. And so we’re denying our own agency and evading that decision. That’s the argument, at least.

But this isn’t very persuasive.

For one, you could easily think that you don’t have sufficient control or ‘agency’ over future-you’s decisions. However firmly you decide now, your future self can always defect. And in many cases, it seems silly to rely on your present decision: try deciding to take an addictive drug for just one week and then stopping; or try deciding to sing the first line of your favourite Disney song without singing the rest. You won’t be able to control future you – they won’t be able to help themselves.

For two, even if you do have agency over future decisions and you are denying it, what’s wrong with that? Especially if you don’t trust yourself. If I’m on a diet and get rid of all of the junk food in my home, I’m denying my own agency. But that doesn’t seem like a bad thing at all.

For three, when I’m making decisions based on careful predictions of my future actions, it’s precisely because I accept responsibility for those actions. I do want to ensure that the consequences of my actions, in combination, are good. Accepting that I can’t presently control those future actions isn’t giving up my agency in this (more important) sense.

3. Letting yourself off the hook (Stephen White)

Steve presents a better reason to reject self-prediction – that it lets us abandon our obligations much too easily.

Here’s an example: You’ve promised your friend to keep a secret for them. But you know that you’re bad at keeping secrets. Sooner or later, you’re going to reveal it and thereby hurt your friend (with probability 1 over some time period, you predict). And, right now, you’re talking to the popular crowd. If you revealed the secret right now, you’d gain standing with the popular crowd. That’s better than revealing the secret later on, with no upshot. So you should betray your friend immediately.

That seems messed up. But, if we predict our future actions and maximise expected value[2] based on those predictions, that’s what we should do.

The general problem here is that reasoning based on self-prediction is exploitable. We can often use it to get out of moral obligations just by establishing that we’re unlikely to fulfil them. And, further, it incentivises us to be lazy and unreliable.

And, in the case of donating to charity each year, you might think that we’re just giving up on our moral responsibility based on our weakness of will. In a slogan, “If you can’t be bothered to do the most good, then you can let yourself off the hook.”

So I shouldn’t ever self-predict?

Based on this exploitability, it seems like we shouldn’t take actions based on predictions of whether we’ll follow through with some later action.

But it also seems wrong to never take self-prediction into account when making decisions. Suppose there’s a highly addictive drug which will bring me good times with no side effects for the first week and will have horrible side effects after that. The best course of action would be to start taking the drug, enjoy the good times, and quit cold turkey after a week. I might predict that there’s almost no chance of me actually quitting but, no worries, I’m not supposed to decide based on self-prediction. And that seems silly.

Or suppose I can save 1,000,000 lives right now by turning one crank just once. Or I can turn another much heavier crank and save 1,000,010 lives, but only if I turn it for 12 hours a day for the next 10 years. If I don’t make it through all 10 years of crank-turning, then all 1,000,010 lives are lost. Sure, I might be physically able to do it, and it might be at least possible that I’ll have the motivation to stick with it for 10 years. But do I trust myself to stick with it? Not really. And I wouldn’t risk that many lives on the hope that I have that strong a will. Again, it seems silly to risk it. (Semi-relevant.)
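For what it’s worth, here’s the back-of-the-envelope version of that risk assessment. The break-even point is just arithmetic on the two payoffs in the example: the heavy crank only wins in expectation if my chance of lasting the full 10 years clears a very high bar.

```python
# How confident would I need to be for the heavy crank to have higher
# expected value? (Simple arithmetic on the payoffs in the example.)
easy_crank = 1_000_000    # lives saved for certain, with one turn
heavy_crank = 1_000_010   # lives saved only if I last the full 10 years

p_break_even = easy_crank / heavy_crank
print(p_break_even)       # ~0.99999: I'd need to be all but certain of myself
```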

After years of this, you’re going to be a bit cranky…

So how do we resolve this?

Here’s Steve’s suggestion: we allow ourselves to use self-prediction in our decision-making, but we limit it. We’re permitted to make decisions based on self-prediction, but we’re still not permitted to act on reasons which are contrary to our reasons for following through and doing the very best sequence of actions.

Take the example of deciding whether to reveal our friend’s secret to the popular crowd. The pain our friend would experience gives us reason not to reveal the secret now or ever, if we can help it. And the status boost we’d get from telling the popular crowd gives us reason to reveal it. But it’s a different sort of reason, and it goes against the original reason. Following Steve’s suggestion, acting on this reason would be wrong.

In the case of donating to charity (giving 20% this year rather than 95%), it’s different. Our reason to give 95% every year comes from the good it’ll do for those who benefit. And one of our reasons to give 20% instead of 95% comes from the same source – the good it’ll do – since we predict that this’ll result in more good in the end, given our future actions. This is the same sort of reason, so it’s okay to act on it.

But there’s another reason here as well – we’ll benefit personally. We’ll have a lot more spending money if we only give 20%, we’ll be able to afford fancier cars and houses and holidays. And this reason is contrary to our reason for donating, so we shouldn’t act on it.

But what does it mean to act on a reason? My understanding is that it means to do the act which you have (that) reason to do, while being motivated by that same reason. So, if you give only 20% out of selfishness rather than to maximise the good you do, you’re doing something wrong. To get off the hook, you need to have the right motivation. To drop down to an annual donation of 20%, you’re not allowed to be motivated by your own comfort or self-interest. If that is your motivation, you must instead give 95% and then, inevitably, drop to 0% next year.

I agree that we need to fix this exploitability problem, and I like Steve’s principle. But I’m not too comfortable with bringing motivation into the picture. It’s hard to figure out other people’s motivations when judging their actions. I can’t even identify my own motivations half the time. (Maybe you can’t either.)

And even if I could, I wouldn’t know how to change my motivation. So, if I get stuck with the wrong motivation, I might not be permitted to do something which would have far greater expected benefit for the world. And that seems a bit awful – I shouldn’t make the world far better, just because I’m afflicted with a poor motivation? Nope, I’m not on board with that.

More generally, I see the rightness of acts as depending almost entirely on the outcomes they produce (or the outcomes that we have evidence they’ll produce). And I certainly don’t find it plausible for rightness to depend so much on what the agent is thinking. If you think the same, you’ll probably want a different solution.

Non-dodgy self-prediction (consequentialist version)

Here’s another way of getting similar judgements, but with more of a consequentialist flavour:

Self-prediction is fine. Maximising expected moral value is fine. Exploitation is not fine, but that’s only because the exploiting reduces expected moral value.

Recall the hand-crank example. I could save 1,000,000 lives by turning a crank just once. Or I could turn a really heavy crank for 12 hours a day for 10 years straight, and save 1,000,010 lives (or 0 lives if I stop early). Suppose I’m physically capable of doing that, but almost certainly not mentally tough enough. Now, when I’m making the decision, the unpleasant prospect of working for 10 years straight might be weighing heavily on my mind. It might well be my main motivation for choosing the easy option. But that surely doesn’t make it the wrong choice!

Plus, that choice doesn’t seem to involve any exploitation or letting-off-the-hook, at least in the relevant sense. It’d be exploitation if I was certain (or very highly confident) that I had the mental fortitude to turn the crank for 10 years, and decided not to. Specifically, if I decided that I would give up before the 10 years had elapsed. And then, based on that decision, revised down my confidence so that the easy option had higher expected value. Alternatively, I could affect the prediction in other ways, if I had a few weeks or months to decide. In that time, I could take actions which might decrease my mental fortitude, such as developing the bad habits of binge-watching Netflix, eating lots of Pringles, etc. Decrease my mental fortitude enough and I can say that the 10 years of crank-turning is almost guaranteed to fail, so I can take the easy way out. Now that would be exploiting the whole self-prediction thing.

Or perhaps you might find yourself inspired to do some hand-cranking by just how consistently Netflix cranks out poor-quality movies…

But you know what? The wrongness of that exploitation can be explained without demonising self-prediction. If I made a decision to later give up on crank-turning, so that I could change my credences and take the easy option, that decision was wrong. If I decided to start binge-watching Netflix to decrease my mental fortitude, that decision was wrong too. Both of those decisions decreased the expected moral value that I’d produce.

And we don’t have to restrict self-prediction to say that. In fact, it’s because of our predictions of our future actions that we can say that those actions decrease expected moral value – we need to judge our own likelihood of completing 10 whole years of crank-turning with and without, for instance, many weeks of Netflix-watching beforehand. And, using self-prediction in a similar way, we can also justify taking actions beforehand which increase our resolve, and thereby increase the expected value of the options available to us – maybe psyching ourselves up for the crank-turning, or spending a few weeks meditating.
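Here’s a minimal sketch of that idea, with made-up probabilities: compare whole courses of action by expected lives saved, where the later choice (easy crank or heavy crank) gets made sensibly given whatever my resolve turns out to be.

```python
# Compare whole courses of action by expected lives saved, assuming
# that when I reach the cranks I'll pick whichever option then has
# higher expected value. The probabilities are invented for illustration.

def best_available(p_complete):
    easy = 1_000_000                 # guaranteed, one turn of the light crank
    heavy = p_complete * 1_000_010   # pays off only if I last the 10 years
    return max(easy, heavy)

p_as_i_am = 0.999995      # invented: resolute enough for the heavy crank
p_after_binge = 0.5       # invented: resolve sapped by weeks of Netflix

print(best_available(p_as_i_am))      # ~1,000,005 expected lives
print(best_available(p_after_binge))  # 1,000,000: the binge cost lives in expectation
```

The same comparison runs in the other direction: anything that raises the completion probability (psyching yourself up, meditating) raises the expected value of the best course of action open to you.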

I think this view is plausible. More plausible than one which relies on the motivations of agents to judge their actions. And if you too are inclined towards consequentialism, I think this lets you escape Steve’s problem without turning to the dark side.

Other cases

If we embrace self-prediction, and we reject exploitation based on the explanation above, what does that tell us about the other examples from above?

1. Revealing your friend’s secret

Recall the case in which you’re deciding whether to reveal your friend’s secret to the popular crowd and get in their good graces. And you think you’re guaranteed to reveal it in the long run.

I think there are a few important points to make about this case, if we did ever actually face it: 1) your credence that you’ll keep the secret in the long run shouldn’t be 0 – it might be low, but never 0, if you’re a good Bayesian; 2) the harm to your friend will probably be smaller the longer you keep the secret; and 3) you have moral reason not to harm your friend and not to reveal the secret but, on the other hand, you don’t have any moral reason to reveal it (or, if you’re a hedonist or some such, at least no very strong moral reason).

From either of (1) or (2), there’s at least some reason not to reveal the secret now. And from (3), there isn’t any reason to reveal it now.[3] So you’re still required to keep it, even if you take self-prediction into account.

What would be seriously wrong here is if you started thinking about the future and convinced yourself (falsely) that you’d inevitably betray your friend in future. Or if you cultivated treacherous tendencies. But those actions can be judged wrong based on the expected value they bring about.

2. Annual donations

What about donating to charity each year?

Here’s something that seems totally fine. You objectively assess your future self’s willingness to continue making sacrifices for altruistic purposes. You figure out the likelihood that future you will stick with your donation plans for each level of sacrifice. And then this year you give away whatever amount maximises your total expected lifetime donations. That might be 20%, it might be 5%, it might be 80%. But try to be accurate – part of you is going to want to make that percentage as low as possible. It’s okay if you’re motivated in part by your own self-interest, as long as you still pick the number that’s actually right.
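Here’s a minimal sketch of that calculation, assuming (purely for illustration) some made-up probabilities of future-you sticking with each donation rate, and assuming that giving up means donating for one year and then stopping:

```python
# Pick the donation rate that maximises expected lifetime donations.
# The sticking probabilities are invented; substitute your own honest
# estimates of what future-you will actually keep up.

sticking_probability = {   # rate -> chance of keeping it up for 40 years
    0.05: 0.99,
    0.20: 0.95,
    0.50: 0.30,
    0.80: 0.05,
    0.95: 0.02,
}

def expected_lifetime_donations(rate, p_stick, years=40):
    # If I give up, assume I still donate at `rate` for the first year.
    return p_stick * rate * years + (1 - p_stick) * rate * 1

best_rate = max(sticking_probability,
                key=lambda r: expected_lifetime_donations(r, sticking_probability[r]))
print(best_rate)   # 0.2, with these made-up numbers
```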

Here are some things which aren’t fine:

  • doing a poor job of coming up with credences and erring on the low side;
  • being overly optimistic with your credences, sacrificing too much, and burning yourself out;
  • cultivating lazy or selfish tendencies which make you less likely to keep donating in future, and reducing your donations based on that;
  • not cultivating resolute and altruistic tendencies to increase the chances that you stick with it; and
  • subtly deciding that you won’t keep donating unless you get to keep certain luxuries, and reducing your donations because of that.

I think some of this is quite important, and underappreciated within the effective altruism community. It’s actually really important to improve the resolve of your future self. You should cultivate your own altruism and sympathy for the plight of others. You should try to make yourself more motivated (e.g., by spending time with other people who are working to improve the world). And you should try to make yourself better able to sacrifice luxuries.

Self-prediction as a whole is perhaps underappreciated as well. Suppose you believe that you’re required to do the most good, using as much of your available resources as you can. (Obviously, most effective altruists don’t, and are happy to contribute just some significant amount – basically, to satisfice.) If you do want to contribute as much as possible, it might not be the case that you should sacrifice all of your comforts in life. It might even be the case that you should only sacrifice a small portion of them. Because, given your future self’s behaviour, that might do more good. And that’s the important thing.

But perhaps the most crucial lesson to be drawn from this is that effective altruists really need to look after themselves – you should look after your mental health, avoid sacrificing more than you can handle, and not beat yourself up about not being more extreme, lest your future self be unable to keep it up. So, folks, please take care of yourselves!

As for my worry right back at the start: am I doing something wrong by contributing ‘only’ 20% or so? No, not if that’s what it takes to keep me contributing. And should I feel guilty about it? Hell no.


[1] See, for instance, this previous post. There, I describe one way of squaring consequentialist intuitions with the claim that it’s permissible to do less than 100% of the good you could do. I quite like the approach described in that post, but you might not. If not, maybe this post will provide some comfort.

[2] Yes, I know, non-consequentialists may say that the act is wrong independently of the expected value of the outcome. But even non-consequentialists will have trouble saying an act is wrong if you can’t avoid doing it.

[3] If you’re a hedonist or something similar, you only have weak reason in favour of revealing it. So it’s not clear that you’d have sufficient reason to reveal it. And, even if you did, that’s not going to seem so implausible if you really think hedonic value is important.
