Let U ~ Uniform(0,1) and let the sensor's target measurement be x. Sensor A reports a draw from A = (x + U)/2, and sensor B reports either x or an independent Uniform(0,1) noise draw, each with probability 0.5. Given draws a from A and b from B, we want the estimator that minimises mean absolute error; the Bayes-optimal rule is the posterior median of x given (a, b).
Note that if a = 0 and b = 1, we KNOW b != x, because a is too small: there is no u with (u + 1) / 2 = 0. I'll skip the full calculation here, but essentially, whenever b is a feasible value of x, the point mass ("atomic weight") the posterior puts on x = b ends up being at least 0.5, so b is the posterior median; otherwise we know b is just noise and the median is just a. So our estimator is
b if a in range [b/2, (b+1)/2]; a otherwise
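In case it helps, here is a compressed version of the calculation I skipped, under my reading (clarified further down) that the noise B may report is an independent Uniform(0,1) draw rather than the same U that feeds A. If b = x, then a is uniform on [b/2, (b+1)/2] with density 2; if b is noise, it tells us nothing about x, and the marginal density of a = (x + U)/2 is the triangle 4*min(a, 1-a). With a 50/50 prior on the two cases, the posterior weight on x = b (when b is feasible) is 2 / (2 + 4*min(a, 1-a)) = 1 / (1 + 2*min(a, 1-a)), which is >= 0.5 because min(a, 1-a) <= 0.5. The remaining mass is spread uniformly over the x-values consistent with a, and the midpoint of that interval is a.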
Running 1M trials, this rule appears to do better than OP's solution (MAE ~0.104 vs ~0.116; I did verify OP's numbers). The estimator that minimises mean squared error (the posterior mean, rather than the posterior median) is more interesting: on the range a in [b/2, (b+1)/2] it becomes a nonlinear function of a, with the weight it places on b taking the form 1 / (1 + piecewise_linear(a)).
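For anyone who wants to poke at the numbers, here is a minimal Monte Carlo sketch of my reading of the setup (two independent noise draws; the 2*min(a, 1-a) weight is my own working of the piecewise-linear term above, and the variable names are mine, not OP's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x  = rng.random(n)          # target
u1 = rng.random(n)          # noise mixed into sensor A
u2 = rng.random(n)          # independent noise that sensor B may report
a = (x + u1) / 2
b = np.where(rng.random(n) < 0.5, x, u2)

# b is a feasible value of x exactly when a lies in [b/2, (b+1)/2]
feasible = (a >= b / 2) & (a <= (b + 1) / 2)

# Posterior-median rule (minimises MAE): b if feasible, otherwise a
est_median = np.where(feasible, b, a)

# Posterior-mean rule (minimises MSE): weight on b is 1 / (1 + 2*min(a, 1-a))
w = 1.0 / (1.0 + 2.0 * np.minimum(a, 1.0 - a))
est_mean = np.where(feasible, w * b + (1.0 - w) * a, a)

print("MAE, median rule:", np.abs(est_median - x).mean())  # should land near the ~0.104 above
print("MAE, mean rule:  ", np.abs(est_mean - x).mean())
print("MSE, median rule:", ((est_median - x) ** 2).mean())
print("MSE, mean rule:  ", ((est_mean - x) ** 2).mean())
```

If my reading is right, the mean rule should come in with a slightly lower MSE and a slightly higher MAE than the median rule, as each is optimal for its own loss.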
I was not able to replicate OP's work, so I must be misunderstanding something. Based on these two lines:
> U is uniform random noise over the same domain as P
> samples of P taken uniformly from [0, 1)
I have concluded that U ~ Uniform(0,1) and X ~ Uniform(0,1), i.e., U and X are i.i.d. Once I have that, there is no way to break the symmetry between X and U, and B always has a 50% chance of being either X or U.
There are two i.i.d. Uniform noise variables: the U that goes into A and the U that B may report are independent draws.
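For what it's worth, that independence is exactly what breaks the symmetry: if B reused the same U as A, then knowing a would pin down x + u = 2a, b would be equally likely to be either of the two values summing to 2a, and the posterior really would be a 50/50 split between b and 2a - b. With a fresh noise draw in B, the triangular density of a tilts the posterior toward b whenever b is feasible, as in the weight calculation above.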