It’s well known that it’s easy to ‘mix’ two unconditional probability functions and produce a third unconditional probability function. So if x ∈ [0, 1], and f_{1} and f_{2} are both unconditional probability functions, and for any proposition p in the domain of both f_{1} and f_{2}, f_{3}(p) = xf_{1}(p) + (1-x)f_{2}(p), then f_{3} will also be an unconditional probability function. (This is really immediate from the axioms for unconditional probability.) I thought the same kind of thing would work for conditional probability, but I can’t figure out how to do it.
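The unconditional case is easy to check numerically. Here's a minimal sketch, with probability functions represented as dicts over a small finite outcome space (the outcome names and weights are just illustrative):

```python
# Sketch: the pointwise mixture of two unconditional probability
# functions over a finite outcome space is itself a probability function.

def mix(f1, f2, x):
    """Pointwise mixture x*f1 + (1-x)*f2 of two probability functions."""
    return {w: x * f1[w] + (1 - x) * f2[w] for w in f1}

def is_probability_function(f, tol=1e-9):
    """Non-negativity and normalization; additivity over disjunctions is
    automatic once the atomic outcomes get non-negative weights summing to 1."""
    return all(p >= -tol for p in f.values()) and abs(sum(f.values()) - 1) < tol

f1 = {"w1": 0.2, "w2": 0.5, "w3": 0.3}
f2 = {"w1": 0.6, "w2": 0.1, "w3": 0.3}
f3 = mix(f1, f2, 0.25)

print(is_probability_function(f3))  # True
```

Since each axiom (non-negativity, normalization, additivity) is preserved by convex combination, the check succeeds for any `x` in [0, 1].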

It’s certainly not true that if f_{1} and f_{2} are both conditional probability functions, then the function f_{3} defined by f_{3}(p|q) = xf_{1}(p|q) + (1-x)f_{2}(p|q) will be a conditional probability function. Here’s a counterexample.

- f_{1}(A | BC) = 0.3
- f_{1}(B | C) = 0.4
- f_{1}(AB | C) = 0.12 (a consequence of the previous two posits)
- f_{2}(A | BC) = 0.5
- f_{2}(B | C) = 0.6
- f_{2}(AB | C) = 0.3 (again a consequence)
- x = 0.5

If we just apply the above formula, we get this:

- f_{3}(A | BC) = 0.4
- f_{3}(B | C) = 0.5
- f_{3}(AB | C) = 0.21 (inconsistent with the previous two lines, if f_{3} is a probability function)
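The inconsistency is easy to verify: the multiplication axiom requires f(AB | C) = f(A | BC) · f(B | C), and the pointwise mixture violates it.

```python
# Checking the counterexample: a conditional probability function must
# satisfy f(AB|C) = f(A|BC) * f(B|C), but the naive pointwise mixture doesn't.

x = 0.5
f1 = {"A|BC": 0.3, "B|C": 0.4, "AB|C": 0.12}  # 0.12 = 0.3 * 0.4
f2 = {"A|BC": 0.5, "B|C": 0.6, "AB|C": 0.30}  # 0.30 = 0.5 * 0.6

f3 = {k: x * f1[k] + (1 - x) * f2[k] for k in f1}

print(f3["AB|C"])              # 0.21, the mixed value
print(f3["A|BC"] * f3["B|C"])  # 0.4 * 0.5 = 0.2, what the axiom requires
```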

One natural move is to say that when f_{1}(q) = f_{2}(q) = 1, then f_{3}(p|q) = xf_{1}(p|q) + (1-x)f_{2}(p|q). That will deliver something that is a conditional probability function as far as it goes, but it won’t tell us what f_{3}(p|q) is when f_{1}(q) = f_{2}(q) = 0. And I can’t figure out a sensible way to handle that case that doesn’t run into a version of the inconsistency I just mentioned.
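One standard way to flesh out that move (a sketch of the ratio-based repair, not a proposal the post itself makes) is to mix the *unconditional* functions first and then recover conditional probabilities by the ratio formula. This is well defined whenever the mixture gives q positive probability, but it goes silent in exactly the problem case where f_{1}(q) = f_{2}(q) = 0:

```python
# Sketch of the ratio-based mixture: f3(p|q) is the ratio of mixed
# unconditional probabilities. The proposition keys are illustrative.

def mix_conditional(f1, f2, x, p_and_q, q):
    """f1, f2 map propositions to unconditional probabilities;
    p_and_q and q are keys for the conjunction pq and for q.
    Returns None in the case the mixture can't settle: f1(q) = f2(q) = 0."""
    num = x * f1[p_and_q] + (1 - x) * f2[p_and_q]
    den = x * f1[q] + (1 - x) * f2[q]
    if den == 0:
        return None  # both functions give q probability 0
    return num / den

# The counterexample's numbers, read as unconditional probabilities with C certain:
f1 = {"ABC": 0.12, "BC": 0.4, "C": 1.0}
f2 = {"ABC": 0.30, "BC": 0.6, "C": 1.0}

a_given_bc = mix_conditional(f1, f2, 0.5, "ABC", "BC")  # roughly 0.42
ab_given_c = mix_conditional(f1, f2, 0.5, "ABC", "C")   # 0.21
b_given_c = mix_conditional(f1, f2, 0.5, "BC", "C")     # 0.5
```

On these numbers the multiplication axiom is restored (0.42 × 0.5 = 0.21); the price is that f_{3}(p|q) is simply undefined when both mixed functions give q probability 0, which is the gap noted above.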

It feels like this is a simple problem that should have a simple solution, but I’m not sure just what it is. There’s a lot of information about mixing probability functions in this paper by David Jehle and Branden Fitelson, but it doesn’t, as far as I can see, touch on just this issue. Any suggestions would be appreciated!

Posted by Brian Weatherson in *Uncategorized*