Here’s a simple question about game theory.

Imagine that you think (a) that coherent credences need not satisfy countable additivity, and you think (b) that sometimes mixed strategies are optimal, and so we should consider all mixed strategies in deciding what an agent should do. I’m not particularly fond of either (a) or (b), but I know that I have readers who endorse each, and I suspect that I have readers who endorse the conjunction of the two.

Now imagine we’re trying to analyse a game where a player has a countable infinity of options in front of them. Should we take one of the player’s options to be the mixed strategy that assigns, to each option, probability 0 of taking that option? If so, then (a) how do we evaluate how good a strategy that is, and (b) are there any cases where this is the optimal strategy?

Here’s a more particular instance of this puzzle. Row and Column are playing a game of infinite matching pennies. Each player has to select a positive integer. Row wins if they select the same number, Column wins if they select a different number. I imagine that the opponent of countable additivity will want to say that the solution to the game is that both players play the strategy of selecting a random number, with each number having probability zero of being selected. But I don’t know how the opponent of countable additivity figures out what the expected return of this strategy (or any other infinitary strategy) is for each player, so I don’t know why they would think it is an equilibrium. So what should someone who believes (a) and (b) say about this game?
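To fix ideas, here is a minimal sketch of the game and of expected return for ordinary countably additive mixed strategies with finite support. The payoff values (+1 for a win, −1 for a loss, from Row’s perspective) and all names are my own assumptions, since the post doesn’t specify numerical payoffs:

```python
# Infinite matching pennies: Row wins on a match, Column wins on a
# mismatch. Payoffs to Row are assumed to be +1 (win) and -1 (loss).
def row_payoff(r, c):
    return 1 if r == c else -1

def expected_row_payoff(row_mix, col_mix):
    """Expected payoff to Row when both mixed strategies have finite
    support, given as dicts {number: probability}. For such strategies
    the expectation is an ordinary finite sum."""
    return sum(p * q * row_payoff(r, c)
               for r, p in row_mix.items()
               for c, q in col_mix.items())

# Example: Column randomizes uniformly over {1,...,10}; Row plays 1.
# Row matches with probability 1/10, so the expectation is
# 0.1*1 + 0.9*(-1), i.e. approximately -0.8.
col = {k: 1 / 10 for k in range(1, 11)}
row = {1: 1.0}
print(expected_row_payoff(row, col))
```

The puzzle in the post is precisely that this finite-sum calculation has no obvious analogue when a strategy assigns probability 0 to every number while the probabilities fail to sum to 1.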

I might be counted as someone who accepts both (a) and (b), but I don’t see why I should say that “the” solution is to select a random number. Maybe “a” solution, but even that is doubtful. To calculate expected return, we need a payoff matrix, but assuming all payoffs are finite, it seems pretty clear to me that the expected return of the “randomize” strategy is zero. While it does seem like any other strategy also has expected return zero, I would hesitate to say that all strategies are equilibria. For example, imagine you are trying to match. Consider the strategy that assigns prob = 1 to selecting 1 and prob = 0 to all others. On one way of thinking about it, this seems to weakly dominate the randomize strategy (since you are better off if the opponent chooses 1, and have prob = 0 of winning if she chooses anything else either way). But on another way of thinking about it, you are clearly worse off if she selects 2, since with randomize you at least COULD win.

Even though I am not sure of all the strategies that count as equilibria, I would definitely say that any strategy where your probabilities sum to 1 is an equilibrium.

By the way, it seems to me that you could also learn something interesting about games where the payoffs for matching at different numbers were not identical. For example, if you win twice as much if you both select 1 (say it doesn’t matter to row) then is it clearly better to alter the weights of your decisions? Again, different ways of thinking about it seem to lead to different answers.

Sorry, for some reason I was trying to figure out an optimal strategy for row against the randomize strategy, and of course an equilibrium is a pair of strategies. I still think the first paragraph is all right – no matter what strategy column plays, the expected payoff to row of randomize is zero. And I mean “hesitate to say that all strategies against randomize form equilibria”. But the second paragraph makes no sense. What I was thinking was that if column is playing randomize, then row can’t do any better than having a strategy where the probabilities sum to 1. But this doesn’t mean they are part of equilibria. Again, depending on how you count, it might be that column actually does “better” by not randomizing and instead making sure that certain numbers (wherever row has positive probability) are actually impossible (not just prob = 0). This makes me wonder if the whole setup is problematic, though. “Randomize” does seem like a strategy, but surely a mixed strategy must have probabilities that sum to one (otherwise column’s best strategy is just making every number impossible).

And in the third paragraph, “row” should be “column”. There I mean that column doesn’t care if she loses by matching at 1 or matching at 2. But if row does care, it might be a reason to choose 1 with higher probability than 2. Sorry for the messy posts.

Consider the following game:

Player A chooses a natural number, and Player B does nothing. The payoff for Player A is the natural number she chooses. The payoff for Player B is 1. (Obviously, this game could be more perspicuously phrased as a basic decision problem, but the notion of mixed strategy is more common in game theory than decision theory.)

Any mixed strategy where the probability of choosing an element of any finite set is 0 will have an infinite expectation, while the expectation of any pure strategy will be finite.

The way to define an expected return, I assume, has got to be some generalization of the Lebesgue integral (take the supremum of the expectations of the finitary strategies dominated by the one under consideration), but I do have some slight worry about whether the lack of countable additivity will do something strange in certain cases. No weird possibilities are coming to mind, however.

One further point – there isn’t just one distribution where each number has probability 0 of being selected. For any distribution where the probabilities of the atoms don’t sum to 1, the probabilities of the atoms only specify the probabilities of finite and co-finite sets. For infinite sets with infinite complements, there are upper and lower bounds, but there are in general going to be many such distributions.

(For instance, there will be such a distribution that gives probability 1 to the even numbers and probability 0 to the odds, and vice versa, in addition to the more natural ones that give probability 1/2 to both.)

Also, since the sample spaces here are infinite, we have to be careful what algebra the probabilities are defined over. Presumably, we’ll have the finite and co-finite sets in any relevant algebra, but the interesting question is which infinite co-infinite sets will be in the algebra.

As a familiar point (I don’t remember who first proved it), the collection of subsets of the natural numbers whose density has a well-defined limit (which we might like to have as the most natural domain for a uniform finitely-additive distribution on the naturals) isn’t even an algebra at all, since it isn’t closed under intersections. (The collection of odd numbers is in the collection, as is the set that contains the odds between 2^2n and 2^(2n+1) but the evens between 2^(2n+1) and 2^(2n+2), but their intersection has no limiting density: its partial densities keep fluctuating between 1/6 and 1/3, or, measured as a fraction of the odd numbers, between 1/3 and 2/3.)
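The non-convergence is easy to check numerically: the intersection’s partial density settles near one value along the checkpoints N = 4^n and near a different value along N = 2·4^n, so no limit exists (the fluctuation between 1/3 and 2/3 is what you get if you measure the intersection as a fraction of the odd numbers rather than of all naturals). A quick sketch, with my own encoding of the two sets:

```python
def in_A(m):
    """The odd numbers."""
    return m % 2 == 1

def in_B(m):
    """Odds in [2^{2n}, 2^{2n+1}), evens in [2^{2n+1}, 2^{2n+2})."""
    k = m.bit_length() - 1        # m lies in [2^k, 2^{k+1})
    return m % 2 == (1 if k % 2 == 0 else 0)

def counts(N):
    """Count members of A and of A ∩ B among 1..N."""
    a = ab = 0
    for m in range(1, N + 1):
        if in_A(m):
            a += 1
            if in_B(m):
                ab += 1
    return a, ab

for N in (4 ** 8, 2 * 4 ** 8):    # two interleaved subsequences
    a, ab = counts(N)
    # natural density of the intersection, and its density within the odds
    print(N, ab / N, ab / a)
```

The printed natural densities hover near 1/6 and 1/3 on the two subsequences (near 1/3 and 2/3 when computed within the odds), so the intersection is outside the collection even though both sets are in it.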

Since the payoff set in your infinite matching pennies game is infinite and co-infinite, the expectation won’t even make sense unless it’s in the product algebra of the two distributions used by the two players.

And actually, thinking about it a bit more, it looks like that particular payoff set won’t be in any product algebra, even if both players have well-defined probabilities for every subset of the naturals. That’s because I think the product algebra is going to be built up only from finite unions and intersections of “rectangles” (sets that “factorize” into the two components), while this payoff set can only be expressed in terms of an infinite union or intersection of such sets.

If one or both strategies are countably additive, then maybe we can try to use the product σ-algebra instead of the product algebra, but it’s not totally clear why that should be justified if we’re not generally assuming countable additivity.

> “Randomize” does seem like a strategy, but surely a mixed strategy must have probabilities that sum to one (otherwise column’s best strategy is just making every number impossible).

This is part of what I find confusing about the finite additivity position. It isn’t really true that a mixed strategy must have probabilities that sum to 1. If there are an uncountable number of options, e.g. say I have to pick a real number in [0, 1), then I think it is fine to use a mixed strategy that involves using a randomising device. E.g. I can create an unstable particle, measure how long it takes to decay, call that t, and let my choice be the chance that such a particle decays in less than t. That looks like a perfectly well-formed strategy, even though every possible play has probability 0. Moreover, I can see how to calculate the expected returns of that strategy against other strategies that might be played.
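The decay construction is an instance of the probability integral transform: if the decay time T is exponential with rate λ, then the chance of decay before T, namely 1 − e^(−λT), is uniformly distributed on [0, 1). A simulation sketch (the rate and variable names are my own assumptions):

```python
import math
import random

def uniform_from_decay(rate=1.0, rng=random):
    """Draw a decay time T ~ Exponential(rate), then return the chance
    that such a particle decays in less than T (the exponential CDF at
    T). This value is uniformly distributed on [0, 1)."""
    t = rng.expovariate(rate)
    return 1.0 - math.exp(-rate * t)

random.seed(0)
draws = [uniform_from_decay() for _ in range(100_000)]
# Every draw lands in [0, 1), and the empirical mean is close to 1/2,
# as expected for a uniform distribution.
print(sum(draws) / len(draws))
```

Each particular play here has probability 0, yet the distribution is countably additive, which is exactly why expected returns against it remain computable in the usual way.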

What I don’t see is how to carry over such calculations to the finitely additive case. That’s in part because I don’t see how to really randomise. But largely it is because I don’t know how to think of expectations without countable additivity.

> Any mixed strategy where the probability of choosing an element of any finite set is 0 will have an infinite expectation, while the expectation of any pure strategy will be finite.

I don’t see why the first part of this should be true. Won’t the expectation simply be undefined?

> Since the payoff set in your infinite matching pennies game is infinite and co-infinite, the expectation won’t even make sense unless it’s in the product algebra of the two distributions used by the two players.

I agree. And this is part of what was puzzling me about the case. It seems that what looked like a well-defined game could have quite poorly defined (expected) outcomes if we let the players play these non-additive strategies. And that seems like a very strange conclusion.

In the game I described, the payoffs are measurable with respect to the algebra of finite and co-finite sets.

To calculate the expectation, let’s specify the strategy. First, Player A sets up some randomization (never mind how) that chooses natural numbers in some way where for any finite set S, the probability that the method chooses a number in S is 0. Then, Player A plays the strategy corresponding to the number output by the random process, and thus gets a payoff equal to that number. A different mixed strategy might employ the same randomizing process, but then choose which strategy to play by means of some other function from natural numbers to natural numbers, distinct from the identity.

To calculate the expectation of a strategy, we consider any measurable strategy with only finitely many possible distinct plays, and calculate the expectations of the ones of this form that are dominated by the original strategy, and take the supremum. (This is just the Lebesgue definition of the integral of a measurable function. We do have to be careful when defining a “measurable” strategy with only finitely many possible plays, but it suffices to say that such a strategy is measurable iff there is a probability of each play.)

So for the specific strategy of playing the number n if n is the number that comes up in the randomization, we calculate the expectation as follows. For any k, we can consider the strategy that plays 0 if the number that comes up in randomization is less than k, and plays k if the number that comes up is at least k. This strategy has only finitely many possible plays (0 and k), and each such play has a probability (0 and 1, respectively) and thus has an expectation (namely, k), and is dominated by the original strategy. Therefore, the expectation of the original strategy (under this definition) must be at least k.

Thus, the expectation is calculated by taking the supremum of some set that contains all the natural numbers, and this supremum is in fact infinite, and not undefined, regardless of what other numbers are in the set.
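The two steps of that argument — pointwise domination and the expectation of each truncation — can be checked mechanically. A sketch, with notation of my own:

```python
def truncated(k):
    """The strategy: play 0 if the randomizer's output n is less than k,
    and play k if the output is at least k."""
    return lambda n: 0 if n < k else k

# Pointwise domination: truncated(k)(n) <= n for every n, so each
# truncation is dominated by the original (identity) strategy.
assert all(truncated(k)(n) <= n for k in range(50) for n in range(500))

def truncation_expectation(k, p_less_than_k=0.0):
    """Expectation of truncated(k). Under a finitely additive strategy
    giving probability 0 to every finite set, P(output < k) = 0, so this
    is 0*0 + k*1 = k."""
    return 0 * p_less_than_k + k * (1 - p_less_than_k)
```

Since truncation_expectation(k) = k for every k, the supremum in the Lebesgue-style definition is taken over a set containing all the naturals, which is the sense in which the expectation comes out infinite rather than undefined.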

In fact, by the way I’ve defined it, the supremum is always defined, and this holds true for the standard definition of the Lebesgue integral as well. However, it’s traditional to restrict the definition of the Lebesgue integral to general measurable functions, which are ones such that for any k, the set of x such that f(x) is less than k has a probability. But that’s the case for the strategy I discuss in the game I discuss (for any k, the set of x such that f(x) is less than k always has probability 0), so the expectation should be fine.

Unfortunately, the game of infinite matching pennies doesn’t satisfy that condition with any non-countably-additive strategy.

Actually, now that I think about this more, maybe that’s ok – it just means that agents (even ones that countenance non-countably-additive probabilities) can’t consider non-countably-additive strategies for that game.

Consider a simple game where Player A just has two possible pure strategies. No one should be disturbed by the consideration of a mixed strategy whereby Player A fixes some non-measurable subset of the interval [0,1] and then plays strategy 1 if a dart she throws hits that set, and strategy 2 if the dart misses that set. This mixed strategy makes the payoff table unmeasurable, and therefore expectations don’t exist. But that’s a problem for this supposed mixed strategy, and not a problem for the game. So we must have some sort of constraint requiring that mixed strategies always make the payoff table of the game measurable.

In the case where there are only finitely many pure strategies this is all unproblematic, and doesn’t get in the way of proving existence of Nash equilibria, because ignoring these weird unmeasurable mixtures doesn’t seem to involve ruling out anything relevant (whether we assume countable additivity or not). In particular, the only way the payoff table can be unmeasurable is if one of the players has chosen a randomization strategy that makes the event that she plays a particular pure strategy unmeasurable.

In the infinite case, things get a bit tougher. But perhaps we should just insist on the same claim – the only mixed strategies that may be considered are ones that make the payoff table measurable. There is the slightly weird fact that the event of either player playing any particular pure strategy may be measurable, even though the payoff table isn’t. (This is what happens when both players use a non-countably-additive mixed strategy for the infinite matching pennies game.) And there is the even weirder fact that whether one player’s mixed strategy makes the payoff table measurable may depend on which particular strategy the other player uses. (For instance, if Player B uses any mixed strategy that concentrates all probability on a particular finite set of pure strategies, then non-countably-additive strategies on the part of Player A will make the payoff table measurable, while if Player B uses a non-countably-additive mixed strategy, then any non-countably-additive strategy on the part of Player A will make the payoff table unmeasurable.)

I guess I don’t quite know what the upshot of this is, because presumably which mixtures A considers shouldn’t depend on which mixture B is actually using. If we insist on that, then I suppose we have to say that the game is the problem here, since its payoff table is unmeasurable for many mixed strategies that make each player’s play measurable.

But I think the same is true even if we assume countable additivity. Fix some well-ordering of the reals in minimal order-type, and then consider a game where whichever player chooses the number that is later in the ordering wins. The uniform distribution makes the play of any pure strategy by either player a measurable event (with probability 0) but makes the payoff table unmeasurable. So the problem really seems to be about infinitary games in general, or perhaps just for games where the number of pure strategies available to each player is larger than the amount of additivity required for probability functions. It’s not really about countable additivity per se.