Another short one this time. The second half of this month turned out to be pretty crappy – I had a cold, then a migraine, then a nasty week-long ear infection as a complication of the cold. Earache is horrible, do not recommend. I’m mostly better now and my brain kind of works again, but I don’t have enough time left to pull much together. I do have more blatherings on negative probability ready to go so I’ll talk about that below, but it’s going to be quite a rushed job.
Thanks for your suggestions on moving the newsletter! I’m still likely to move to publishing them on the blog as a separate Newsletters page, but will probably also investigate Substack. Pushing that to next month anyway.
Non-local boxes and negative probability
I’m back down the negative probability rabbit hole again. After I wrote up the two posts on it I temporarily lost interest – I often find that happens after writing something up. But then, at the beginning of the month, I started following up one loose thread. And then, when I was sitting around being ill, I turned up an interesting paper by Abramsky and Brandenburger that connects quite well with it.
The loose thread follows from this part of the original blog post:
This ‘strange machine’ looks pretty bizarre. But it’s extremely similar to a situation that actually comes up in quantum physics. I’ll go into the details in the follow-up post (‘now with added equations!’), but this example almost replicates the quasiprobability distribution for a qubit, one of the simplest systems in quantum physics. The main difference is that Piponi’s machine is slightly ‘worse’ than quantum physics, in that the -½ value is more negative than anything you get there.
Now, ‘things that are worse than quantum mechanics’ is actually a big research topic in quantum foundations. Quantum mechanics is a theory that allows weird nonlocal correlations between separated systems, but which doesn’t allow direct signalling between the systems. It turns out that it’s not actually the most extreme theory that would satisfy both those conditions – you can come up with hypothetical systems that would have stronger correlations but still not allow signalling. People study these systems because they’re interesting in themselves, but also to get clear on exactly what’s special about quantum mechanics – if it’s only one point in a space of ‘weird theories’, we need stronger conditions to pin this down. I’m not going to be able to get into much detail here because of time constraints and not knowing the field at all well to start with, but there’s a nice review article by Popescu here.
The most well-known of these hypothetical systems is the Popescu-Rohrlich box (‘PR box’). To explain this I need to first introduce the CHSH game, which is one of those incredibly boring cooperative games that Alice and Bob are always playing. (They never have anything better to do.) This blog post explains it well – the basic setup, which I’m copying straight from the post, works as follows:
- a referee generates two independent, uniformly random bits (x and y, also called “questions”) and sends bit x to Alice and bit y to Bob
- after receiving the questions, Alice and Bob send their answers back to the referee; each answer is also one bit (a for Alice, b for Bob)
- Alice and Bob win if xy = a xor b.
Alice and Bob aren’t allowed to communicate during the game. But they can pick a strategy beforehand. For example, they could choose to both always answer 0. In that case a xor b = 0, so they win whenever xy = 0 – that is, for the questions {0,0}, {0,1} and {1,0}, but not {1,1}. So they win 75% of the time. This is actually the best you can do classically, as you can check by working through the other strategy options.
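As a sanity check (this is my own quick sketch, not from the post), you can brute-force all 16 deterministic strategies in a few lines of Python:

```python
from itertools import product

# A deterministic strategy is a pair of functions {0,1} -> {0,1}: Alice's
# answer depends only on her question x, Bob's only on his question y.
# Each function has 4 possibilities, so there are 16 strategies in total.
strategies = product(product([0, 1], repeat=2), repeat=2)

best = 0.0
for alice, bob in strategies:
    wins = sum(
        1
        for x, y in product([0, 1], repeat=2)
        if x * y == alice[x] ^ bob[y]  # win condition: xy = a xor b
    )
    best = max(best, wins / 4)  # questions are uniform, so each pair has weight 1/4

print(best)  # 0.75
```

Shared randomness doesn’t help either, since a randomized strategy is just a mixture of deterministic ones and can’t beat the best of them.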
In quantum physics, you can do better if Alice and Bob share the right kind of entangled quantum state, and each measure it in one of two different ways depending on whether they receive 0 or 1 as their ‘question’. The details are given in that blog post, and the probability of winning turns out to be about 85%. They still don’t communicate – Alice doesn’t learn anything about Bob’s question, and vice versa.
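To see where the ~85% comes from, here’s a little numerical sketch. The particular measurement angles below are one standard choice I’m assuming for illustration – the linked post may set things up differently:

```python
import numpy as np

# Maximally entangled state |Φ+> = (|00> + |11>)/√2, as a vector in C^4
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def basis(theta):
    """Orthonormal measurement basis rotated by angle theta."""
    return [np.array([np.cos(theta), np.sin(theta)]),
            np.array([-np.sin(theta), np.cos(theta)])]

# One standard choice of angles achieving the quantum optimum
alice_angles = [0, np.pi / 4]       # Alice's basis for question x = 0, 1
bob_angles = [np.pi / 8, -np.pi / 8]  # Bob's basis for question y = 0, 1

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                # Amplitude for the joint outcome (a, b)
                amp = np.kron(basis(alice_angles[x])[a],
                              basis(bob_angles[y])[b]) @ phi_plus
                if x * y == a ^ b:          # winning answers only
                    win += 0.25 * amp**2    # questions are uniform

print(round(win, 4))  # cos²(π/8) ≈ 0.8536
```

The exact value is cos²(π/8) = (2 + √2)/4 ≈ 0.8536, the so-called Tsirelson bound.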
The PR box goes one step better again. It’s not a real system that we know how to make, just a thought experiment – a magic box that just knows what the winning answers a and b should be given x and y. It’s defined by the probability distribution
P(a,b | x,y) = ½ if xy = a xor b, and 0 otherwise.
So, for example, if they get the questions {x=0, y=0}:
P(0,0 | 0,0) = ½
P(0,1 | 0,0) = 0
P(1,0 | 0,0) = 0
P(1,1 | 0,0) = ½
So the box will give either the answers {a=0, b=0} or {a=1, b=1}, and Alice and Bob win. The same analysis goes through for the other question choices, for a 100% win rate.
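Here’s a quick sketch (my own, not from any of the papers) checking both of the PR box’s defining properties at once: the perfect win rate, and the fact that it still doesn’t allow signalling.

```python
from itertools import product

def pr_box(a, b, x, y):
    """P(a, b | x, y) for the PR box."""
    return 0.5 if x * y == a ^ b else 0.0

for x, y in product([0, 1], repeat=2):
    # Each question pair gives a normalised distribution...
    total = sum(pr_box(a, b, x, y) for a, b in product([0, 1], repeat=2))
    assert total == 1.0
    # ...whose support contains only winning answers (100% win rate).
    assert all(x * y == a ^ b
               for a, b in product([0, 1], repeat=2)
               if pr_box(a, b, x, y) > 0)

# No-signalling: Alice's marginal distribution over her answer a doesn't
# depend on Bob's question y (and symmetrically for Bob).
for x, a in product([0, 1], repeat=2):
    marginals = {y: sum(pr_box(a, b, x, y) for b in (0, 1)) for y in (0, 1)}
    assert marginals[0] == marginals[1] == 0.5

print("PR box: normalised, 100% win rate, no signalling")
```

The no-signalling check is the interesting part: whatever question Alice gets, her answer looks like a fair coin flip, so she learns nothing about y.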
So the PR box is ‘worse than quantum physics’, in the sense that it is even further from classical physics. Piponi’s machine is also ‘worse than quantum physics’, in the sense that if you describe it with negative probabilities you need to use ones that are more negative than anything that comes up in quantum physics. So I was interested in whether there was a connection, and started reading up on the PR box (which I knew almost nothing about at the time). After a while of this I started thinking there probably wasn’t any deep connection, and that it was just my usual hyperactive pattern-matching. But then I found that paper by Abramsky and Brandenburger, and it talks about both Piponi’s machine and the PR box! Now I’m thinking that there may actually be something interesting there.
Abramsky and Brandenburger point out that you can also describe the PR box using negative probabilities. If you try to assign probabilities to all four answer variables at once – Alice’s answers to questions 0 and 1, and Bob’s answers to questions 0 and 1 – you can’t do it with positive probabilities. But you can do it if you allow some to be negative, as described in the paper. It’s interesting to me that the negative probabilities in the PR box are all -½, the same as in Piponi’s machine. Is that a coincidence? Possibly, possibly not.
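I couldn’t resist checking this numerically. The sketch below is my own, not the paper’s construction: it treats a ‘global assignment’ as a choice of all four answer variables at once, sets up the constraint that the signed weights must reproduce the PR box’s statistics, and solves the resulting linear system. The particular signed solution the solver picks out may differ from the table in the paper, but it does reproduce the PR box exactly, and it does have negative entries.

```python
import numpy as np
from itertools import product

# A "global assignment" fixes all four answer variables at once:
# g = (a0, a1, b0, b1), Alice's answers to questions 0/1 and Bob's.
assignments = list(product([0, 1], repeat=4))  # 16 of them

def pr_box(a, b, x, y):
    """P(a, b | x, y) for the PR box."""
    return 0.5 if x * y == a ^ b else 0.0

# Linear system: for each (x, y, a, b), the signed weights of the global
# assignments consistent with it must sum to P(a, b | x, y).
rows, rhs = [], []
for x, y, a, b in product([0, 1], repeat=4):
    rows.append([1.0 if (g[x] == a and g[2 + y] == b) else 0.0
                 for g in assignments])
    rhs.append(pr_box(a, b, x, y))

# Least-squares solve; the system is consistent, so the residual vanishes.
q, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)

print(np.allclose(np.array(rows) @ q, rhs))  # True: PR box statistics recovered
print(q.min() < 0)                           # True: the solution found has negative weights
```

No nonnegative solution exists – that would be a local hidden-variable model winning CHSH 100% of the time – so any exact solution the solver finds is forced to go negative somewhere.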
There’s more interesting stuff later on in the Abramsky and Brandenburger paper, though it gets too abstract for me and I’m going to have to figure out how to apply it to an example. They have this interesting idea of giving an operational interpretation of negative probabilities by “pushing the minus signs inwards”. So instead of having a negative probability of an event, you have a positive probability of a “negative event”, in a technical sense they describe. I’ve also been thinking for a while that this is the way to go, but I’m more interested in trying to get a clear explanation of how this would work for a single example (ideally Piponi’s one, as it’s so simple) than trying to come up with a formalism. Right now it feels like this should be quite doable, but from previous experience I know that it’s actually really hard to think clearly about any of this stuff, and I’ll just get muddled again. So I’ll have to see how that goes.
Next month
I’ll keep thinking about the negative probability thing, hopefully, and maybe bang my head against The Origin of Objects a bit more (there was a lot of good discussion in the comments section of my middle distance post). Also maybe I’ll write a better newsletter. Also maybe my ear will start behaving itself again.