A Puzzle for Counterfactual Decision Theory

The following feels like a purely technical problem for standard versions of causal decision theory, and it should yield to a purely technical solution, but that solution wasn’t immediately obvious to me, so I’m posting the puzzle here.

Let A > B be the counterfactual, if it were the case that A, it would be the case that B. Following Gibbard and Harper, who in turn follow Stalnaker, we’ll assume a logic for > that respects strong centring (i.e. A & B entails A > B) and conditional excluded middle (i.e. (A > B) v (A > ~B) is a theorem, from which it follows that A > (B v C) entails (A > B) v (A > C)). In that case we can say that the expected value of an action A is the sum across possible consequences C of the following function:

bq. Pr(A > C) U(A & C)

Where Pr is the probability function and U is the utility function. We’ll need to add some bells and whistles for dealing with cases where there is a continuum of possible consequences, but the basic approach seems to deal soundly with most cases. There is, however, at least one case where it has a bug.
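
For concreteness, here’s a quick sketch of that calculation in code. (The helper name, the dictionary representation, and the toy numbers are mine, purely for illustration.)

bc. # Sketch of the Gibbard-Harper expected value calculation.
# Each consequence C is mapped to the pair (Pr(A > C), U(A & C)).
def expected_value(consequences):
    """Sum Pr(A > C) * U(A & C) over the possible consequences C."""
    return sum(pr * utility for pr, utility in consequences.values())
# Toy bet: pays 10 on a win, costs 5 on a loss,
# with Pr(A > win) = Pr(A > lose) = 0.5.
act = {"win": (0.5, 10), "lose": (0.5, -5)}
print(expected_value(act))  # 2.5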

For any proposition p, let @p be the proposition that says p is actually true.

Let A be the act of my taking a particular bet such that all the following are true for some particular p:

Pr(p) = Pr(@p) = 0.1
Pr(A > p) = 0.9
U(A & (p & ~@p)) = 100
U(A & ~(p & ~@p)) = -1
U(~A) = 0

What should I do? A is a bet that pays off 100 iff p & ~@p is true, and -1 otherwise. Since I know a priori that p & ~@p isn’t true (if p is true, then so is @p), I know a priori that A will lose 1. So I should do ~A. But Pr(A > (p & ~@p)) is quite high. Since @p has the same truth value at every world, A > (p & ~@p) is equivalent to (A > p) & ~@p, and Pr((A > p) & ~@p) >= Pr(A > p) - Pr(@p) = 0.9 - 0.1 = 0.8. So even though I know a priori that A is a losing bet, it seems I would be better off were I to take the bet.
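
Here’s the back-of-the-envelope version of those sums. (A sketch with my own variable names; 0.8 is the lower bound just derived, and by conditional excluded middle the probabilities of A > (p & ~@p) and A > ~(p & ~@p) sum to 1.)

bc. # Checking the sums for the bet A.
pr_good = 0.8                 # lower bound on Pr(A > (p & ~@p))
pr_bad = 1 - pr_good          # Pr(A > ~(p & ~@p)), by CEM
ev_take = pr_good * 100 + pr_bad * (-1)
print(ev_take)                # 79.8, versus U(~A) = 0
# So the formula recommends taking a bet that is a priori certain to lose 1.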

As far as I can see, this provides no reason whatsoever to take the bet. There is one rational thing to do here, and it is to decline the bet. There should be a version of causal/counterfactual decision theory that implies that, but I can’t quite see how it should go.