One of the interesting things that comes out of decision theory is trying to answer a simple question: “What constitutes a good decision?” If a destitute person with only one dollar uses it to purchase a lottery ticket and wins a million dollars, was that a good decision? The expected return on a lottery ticket is the average value returned compared to the purchase price. It varies greatly from lottery to lottery, but might be about one third, or more typically one fourth. That is, lottery purchasers can expect, on average, to get back about $0.25 for every dollar they spend.
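To make that arithmetic concrete, here is a small sketch. The prize amounts and odds below are entirely made up; they are chosen only so that the expected return works out to the one-fourth figure mentioned above.

```python
# Hypothetical prize table: prize amount -> probability per $1 ticket.
# These numbers are invented for illustration, not any real lottery's odds.
prizes = {
    1_000_000: 1 / 10_000_000,  # jackpot
    1_000: 1 / 20_000,          # mid-tier prize
    10: 1 / 100,                # small consolation prize
}

# Expected return per $1 ticket = sum of (prize * probability of that prize).
expected_return = sum(prize * p for prize, p in prizes.items())
print(f"Expected return per $1 ticket: ${expected_return:.2f}")
```

With these invented numbers the expected return comes out to $0.25 on the dollar: the player hands over a dollar and, averaged over the whole pool of players, gets a quarter back.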
For that one person, the choice was correct. That is, since he got back much more than he paid, by most standards, it was the right decision in retrospect. However, before the lottery winners were announced, most people would agree it was a bad decision for a poor person to make. One way to proceed is to simply assume the person was lucky and let it go at that.
But let us consider the situation in more detail. The gambler is destitute, and whether he has a dollar is not going to change that. Saving the dollar will not even get a hamburger (well, maybe a tiny one or something on sale), and if it were spent for food, the person would be hungry tomorrow. Does the pain of losing the last dollar equal the advantage of keeping it, and does the value of winning, compared to the pain of losing, override the basic probabilities? That is, does computing the expected return on a wager tell the whole story of whether to bet or not? Would the goodness of decision change for a billionaire making the same gamble?
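One standard way to frame these questions is to replace expected return with expected *utility*. The sketch below uses a log(1 + wealth) curve as a made-up stand-in for the diminishing marginal value of money; real preferences could look quite different, and that is precisely the point of the questions above.

```python
import math

def utility(wealth):
    # A hypothetical utility curve: log(1 + wealth).
    # Each additional dollar matters less the richer you are.
    return math.log1p(wealth)

def delta_from_betting(wealth, cost=1, prize=1_000_000, p_win=2.5e-7):
    # Change in expected utility from buying one ticket vs. keeping the dollar.
    # p_win is invented so the expected return is $0.25 per $1, as in the text.
    eu_bet = (p_win * utility(wealth - cost + prize)
              + (1 - p_win) * utility(wealth - cost))
    return eu_bet - utility(wealth)

delta_poor = delta_from_betting(wealth=1)              # last dollar on the line
delta_rich = delta_from_betting(wealth=1_000_000_000)  # billionaire
print(delta_poor, delta_rich)
```

Under this particular made-up curve, betting lowers expected utility for both players, but the swing for the destitute player is enormous while for the billionaire it barely registers. Swap in a different curve — say, one where the first dollar is nearly worthless because it changes nothing — and the verdict can flip, which is exactly why raw expected return does not settle the matter.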
Consider a whole town of destitute people. Should they all play the lottery? Any child can grow up to be president — can the whole class grow up to be presidents?
These sorts of considerations are what make decision theory a worthwhile study. Most people can compute or estimate the odds of winning a game. Most can estimate the expected return for a decision, but that is not the whole story.
Now suppose a not-quite-destitute person has five dollars and wants to bet all of it on $1 lottery tickets. Assume that identical lotteries happen every week. Would it be better to bet five at once on a single game, or bet $1 per week on new games each time?
The raw calculations for a similar situation can be seen here.
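A rough sketch of that comparison, with made-up odds: assume the five single-draw tickets carry distinct number combinations (so at most one of them can win) and that each weekly draw is independent.

```python
# Hypothetical jackpot odds per $1 ticket; real lotteries differ.
p = 1 / 10_000_000

# Five distinct combinations in a single draw: at most one can win,
# so the chances simply add.
p_all_at_once = 5 * p

# One ticket in each of five independent weekly draws: the chance of
# winning at least once is 1 minus the chance of losing all five times.
p_spread = 1 - (1 - p) ** 5

print(f"all at once: {p_all_at_once:.10e}")
print(f"spread out:  {p_spread:.10e}")
```

Betting all five at once gives a very slightly higher chance of winning at least once, because the spread strategy diverts a sliver of probability to the possibility of winning more than once. The gap is on the order of p squared, i.e., utterly negligible — which suggests the real difference between the two strategies lies elsewhere, in the enjoyment question taken up next.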
What is not considered in raw math is the reward a person gets from the simple act of gambling, and that, after all, is generally how the lottery is sold. Advertisements usually try to sell us on how much fun it is to play, not what the expected return is. If the destitute person enjoys betting and that takes away some of the misery of existence, then how does that fold into the calculation? I would think more enjoyment comes from spreading out the bets.
Note that none of this depends on the morality of the situation. In fact, I deplore state-sponsored lotteries because they are in essence a regressive tax on low-income people. The main justification for them is that people will gamble regardless, so the government might as well regulate it and make some money in the process. Is that a good decision?