It’s well known that it’s easy to ‘mix’ two unconditional probability functions and produce a third unconditional probability function. So if x ∈ [0, 1], f1 and f2 are both unconditional probability functions, and f3 is defined by f3(p) = xf1(p) + (1-x)f2(p) for every proposition p in the domain of both f1 and f2, then f3 will also be an unconditional probability function. (This is really immediate from the axioms for unconditional probability.) I thought the same kind of thing would work for conditional probability, but I can’t figure out how to do it.
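Here’s a quick numerical sketch of the unconditional case (my own toy example, not from any particular source): mix two probability functions over a four-world space and confirm the axioms still hold.

```python
# A toy check: mixing two unconditional probability functions,
# defined as weights on a four-world space, yields a third.
from itertools import product

worlds = list(product([True, False], repeat=2))  # truth values of two atoms

# Two arbitrary probability functions (made-up numbers).
f1 = dict(zip(worlds, [0.1, 0.2, 0.3, 0.4]))
f2 = dict(zip(worlds, [0.4, 0.3, 0.2, 0.1]))

x = 0.5
f3 = {w: x * f1[w] + (1 - x) * f2[w] for w in worlds}

# Non-negativity and normalization carry over directly; additivity over
# disjoint propositions is automatic once f3 is fixed world by world.
assert all(v >= 0 for v in f3.values())
assert abs(sum(f3.values()) - 1) < 1e-12
```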
It’s certainly not true that if f1 and f2 are both conditional probability functions, then the function f3 defined by f3(p|q) = xf1(p|q) + (1-x)f2(p|q) will be a conditional probability function. Here’s a counterexample.
- f1(A | BC) = 0.3
- f1(B | C) = 0.4
- f1(AB | C) = 0.12 (a consequence of the previous two posits, via the multiplication axiom f(AB | C) = f(A | BC)f(B | C))
- f2(A | BC) = 0.5
- f2(B | C) = 0.6
- f2(AB | C) = 0.3 (again a consequence)
- x = 0.5
If we just apply the above formula, we get this:
- f3(A | BC) = 0.4
- f3(B | C) = 0.5
- f3(AB | C) = 0.21 (inconsistent with the previous two lines, if f3 is a probability function, since the multiplication axiom would require f3(AB | C) = 0.4 × 0.5 = 0.2)
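The arithmetic is easy to check mechanically. Here’s a sketch using the posits above, with x = 0.5 as stated:

```python
# Check the counterexample's arithmetic; values are the posits from above.
x = 0.5
f1 = {("A", "BC"): 0.3, ("B", "C"): 0.4, ("AB", "C"): 0.12}
f2 = {("A", "BC"): 0.5, ("B", "C"): 0.6, ("AB", "C"): 0.3}

# Mix the conditional values pointwise.
f3 = {k: x * f1[k] + (1 - x) * f2[k] for k in f1}

# Each input satisfies the multiplication axiom f(AB|C) = f(A|BC) * f(B|C)...
assert abs(f1[("A", "BC")] * f1[("B", "C")] - f1[("AB", "C")]) < 1e-9
assert abs(f2[("A", "BC")] * f2[("B", "C")] - f2[("AB", "C")]) < 1e-9
# ...but the pointwise mixture does not: 0.4 * 0.5 = 0.2, yet f3(AB|C) = 0.21.
assert abs(f3[("A", "BC")] * f3[("B", "C")] - f3[("AB", "C")]) > 0.005
```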
One natural move is to say that when f1(q) = f2(q) = 1, then f3(p|q) = xf1(p|q) + (1-x)f2(p|q). That will deliver something that is a conditional probability function as far as it goes, but it won’t tell us what f3(p|q) is when f1(q) = f2(q) = 0. And I can’t figure out a sensible way to handle that case that doesn’t run into a version of the inconsistency I just mentioned.
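Here’s a small sketch (my own construction, with made-up numbers) of why that case is safe: when both functions give q probability 1, each f_i(p|q) is just f_i(pq), so mixing the conditional values amounts to mixing unconditional values, and those agree with the ratio conditionals of the mixed function.

```python
# Sketch: when f1(C) = f2(C) = 1, linearly mixing the conditionals on C
# matches the ratio conditionals of the mixed unconditional function,
# so no inconsistency arises in this case.
from itertools import product

worlds = list(product([True, False], repeat=2))  # truth values of (A, B)

# Two toy probability functions; C is the whole space, so f1(C) = f2(C) = 1.
f1 = dict(zip(worlds, [0.12, 0.28, 0.18, 0.42]))
f2 = dict(zip(worlds, [0.30, 0.30, 0.20, 0.20]))

def prob(f, event):
    return sum(p for w, p in f.items() if event(w))

AB = lambda w: w[0] and w[1]
C = lambda w: True

x = 0.5
f3 = {w: x * f1[w] + (1 - x) * f2[w] for w in worlds}

# Linear mixture of f1(AB|C) and f2(AB|C), computed as ratios...
mixed = x * prob(f1, AB) / prob(f1, C) + (1 - x) * prob(f2, AB) / prob(f2, C)
# ...coincides with the ratio conditional f3(AB|C) = f3(AB)/f3(C).
assert abs(mixed - prob(f3, AB) / prob(f3, C)) < 1e-12
```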
It feels like this is a simple problem that should have a simple solution, but I’m not sure just what it is. There’s a lot of information about mixing probability functions in this paper by David Jehle and Branden Fitelson, but it doesn’t, as far as I can see, touch on just this issue. Any suggestions would be appreciated!