February 2018

Hi all,

You’re getting this because you thought it was a good idea at some point last month to sign up to a monthly email containing… unknown stuff… by someone called ‘drossbucket’ on Twitter. Thanks very much for supporting my writing experiment, and signing up to what was a pretty dubious value proposition from your side!

This one is more of a test – I didn’t decide to try this until well into February, so I’m not really sure what I even did this month, and what was last month. (One of the reasons I’m doing this is as an incentive to keep better records.) I also didn’t leave myself much time to write it. So this one is more like ‘things I thought about some time recently and still vaguely remember’.

The main thing I tend to learn about is physics, so I should warn you that this is likely to be quite physics-heavy. I’m not sure how this is going to work yet – I want to be able to talk about what I’m actually learning, but if it’s annoyingly technical it might be difficult to give enough background. Anyway, that shouldn’t be too much of an issue this month, as I’m currently thinking about a fairly general-interest, crackpot-sounding topic.

The other main topic is likely to be the sort of stuff I write on my blog: mathematical intuition, the subjective experience of understanding new things, my attempts to read various continental philosophers, etc.

I’d be very happy to get questions or comments on any of this! I should probably point out though that I’m envisioning this as a place for writing exploratory stuff that isn’t necessarily very polished, or playing around with ideas that are still a bit vague or undeveloped. So, although I definitely want to know if something is confusing or flat out wrong, this isn’t the place where I want to be playing the ‘find as many holes in the argument as possible’ game and I don’t want tons of critical comments. Nothing is ready for that yet!

OK, with that out of the way, let’s start with…

Negative probabilities!

The idea is quite fun in itself, but first I’ll give a bit of background on why I’m thinking about this, because it’ll be relevant in later months too. My plan for 2018 is to go beyond just learning some physics in my spare time and to do ‘something novel’, interpreted broadly. ‘Novel’ in this case doesn’t have to mean original research (though that would definitely count) – I’m thinking of a wider conception of what counts as a novel contribution, in the style of Chris Olah and Shan Carter’s Research Debt essay (I wrote some comments on it here).

A good online explanation of something that doesn’t currently have a good online explanation would definitely count by my standards. Just following a lecture course or doing some exercises from a textbook wouldn’t.

I spent a lot of time in January distractedly cycling through my various odd obsessions looking for a decent, tractable topic, and I eventually found a good one. I want to do something with this paper, ‘A toy model for quantum mechanics’ by S. J. van Enk.

This paper is a response to the Spekkens toy model, which I’ll talk about another time. This model is fascinating: it doesn’t require any particular background to understand (the basic system is literally just a set of four boxes that may or may not be coloured in), and it recreates a surprising number of the odd features of quantum mechanics. Not all of them, though… it doesn’t violate Bell’s inequality. I’m not going to go into this here, but violating Bell’s inequality is kind of the gold standard of quantum weirdness, and a toy model that is truly ‘like quantum mechanics’ would need to have an analogue of it.

The van Enk paper extends Spekkens a bit, and it does have a Bell analogue… so calling it ‘a toy model for quantum mechanics’ isn’t too much of a stretch. But the last sentence of the abstract is:

Negative probabilities are found to arise naturally within the model, and can be used to explain the Bell-CHSH inequality violations.

So you still have something pretty weird and unintuitive. Which is not surprising… there’s a central core of quantum mechanics that really is sort of irreducibly weird, and any toy model is going to have the weirdness somewhere.

I think the word ‘explain’ in the abstract is pushing it, though. It’s true that the model doesn’t just ‘put in negative probabilities by hand’… van Enk starts from measurements you can actually perform, which all have positive probabilities, and the negative probabilities only attach to things you can’t get at directly.

On the other hand, there’s no real interpretation offered for the negative probabilities. They’re just… there.

Anyway I do have some vague ideas for how to go about finding an interpretation, which is why I’m so interested in this particular paper. And even if I get nowhere with that, which is extremely plausible, there are a whole bunch of interesting adjacent topics that would be worth writing up online.

First up, though, it seems worth thinking about negative probabilities on their own, independently of this model. Luckily this is a topic that John Baez has talked about! (The first step of my learning-anything algorithm is to see if John Baez has written about it already.) There are also good comments on his Google+ page, especially the ones from Matt Leifer and Michael Nielsen.

He gives a few sources – the most lucid thing he finds is a talk by Feynman. The only other reasonably decent thing I’ve found so far is a paper on signed probability theory by Edward H. Allen. I haven’t read either of those in detail yet, so I’ll just briefly mention what they are.

Feynman’s is a talk at some event in honour of Bohm, and he throws out a lot of ideas in a short space. His main point is that negative probabilities might be OK as a calculational tool, as long as the final answer comes out positive. (I’ll expand on that below.)

Allen’s paper is also very clearly written, and points out that the underlying mathematics we’d need is already there – it’s the theory of signed measures. The difficulty is finding a useful interpretation. I haven’t read much yet, but there’s an interesting setup with an ‘actual space’ of outcomes you know about, and then a ‘latent space’ of unknown unknowns which you may find out about in the course of your experiment. The probabilities of all the stuff in the actual space sum to 1, while those in the latent space sum to -1, and then you’re in a situation where you can apply signed measure theory.
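To make the bookkeeping concrete, here’s a tiny toy version of that actual/latent split in Python. I should stress this is purely my own sketch: the event names and numbers are invented to mirror the sums described above, and aren’t anything specific from Allen’s paper.

```python
# A toy 'signed distribution': known outcomes in the actual space,
# unknown unknowns in the latent space. All names and numbers are
# invented for illustration only.
actual = {"rain": 0.3, "sun": 0.7}                 # sums to +1
latent = {"unknown_1": -0.25, "unknown_2": -0.75}  # sums to -1

signed = {**actual, **latent}

assert abs(sum(actual.values()) - 1.0) < 1e-9
assert abs(sum(latent.values()) + 1.0) < 1e-9

# Taken together this is a signed measure rather than a probability
# measure: individual values can be negative, and the total mass over
# actual + latent events isn't the usual 1.
print(sum(signed.values()))  # 0.0
```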

This has the obvious problem that we know nothing about the unknown unknowns, so have no way of assigning any particular probability to them:

[image: quote from Allen’s paper on this problem]

It looks like he does have some way round this, using the fact that ‘latent events eventually become actual if they are to be of any interest to us at all’. So there is some ‘actualising mechanism’ by which latent events become actual over time, allowing you to infer their latent probabilities after the fact.

Finally, my own very basic observation:

We already subtract probabilities, which is basically the same thing as adding negative probabilities. For example, we write stuff like

P(heads) = 1 – P(tails).

You could also write this as

P(heads) = 1 + (-P(tails)),

i.e. P(heads) = “the probability that something or other happens, plus the probability of some funny extra event which cancels out tails.” This funny event has a negative probability.
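Just to spell that out in code – a trivial sketch of my own, with a made-up coin bias:

```python
# P(heads) written as 'the certain event' plus a funny event with
# negative probability that exactly cancels out tails.
p_tails = 0.5  # made-up bias
signed = {"something_happens": 1.0, "cancel_tails": -p_tails}

p_heads = sum(signed.values())
print(p_heads)  # 0.5, the same as 1 - p_tails
```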

This could be useful as a source of intuition for interpretations. My own intuition is that negative probabilities do admit some sort of useful interpretation connected to time reversal of operations. This idea of ‘a funny event that cancels out tails’ fits in well with this.

Subtracting apples

I’ve pulled out one thing from the Feynman paper because it’s something I’d never thought much about before that’s interesting in its own right. Feynman starts with the following advanced problem:

A man starting the day with five apples who gives away ten and is given eight during the day has three left.

Seems reasonable. (5 – 10) + 8 = 3. I’m happy with that.

But notice that the intermediate step, 5 – 10, produces a negative number, which can’t happen if these are actual apples. The actual order of events could have been many things, but it definitely wasn’t ‘start with five apples, give away ten, then get eight’. Whereas he could have got the eight first and then given away ten – that is a perfectly legit day of apple trading. So the problem has more underlying structure than you might normally think about!

Feynman’s point is that nobody cares about ‘negative apples’ in the sum (5 – 10) + 8, because the final answer comes out right. It’s just a useful calculational tool. His idea is that negative probabilities may also sometimes be useful in this sense.

But I’m also wondering if there’s a vaguely interesting toy model for something hidden in there! There’s a ‘macrostate’ 5 – 10 + 8 that has various possible ‘microstates’ that can instantiate it: stuff like 1 + 1 + 1 + 1 + 1 – 1 – 1 + … where you write out every apple transaction individually. However, not all possible microstates are allowed: you can’t dip below zero apples at any point.

That’s quite a physics-like situation in itself. Possibly not in a very profound way, but definitely worth making a note of.

I started playing around with this a bit. I decided that all these 5s and 8s were too much like bigshot advanced maths, and just looked at allowable sequences of adding and taking away n lots of 1. E.g. for n=3 you have

  • 1 + 1 + 1 (this is ok)
  • 1 + 1 – 1 (also ok)
  • 1 – 1 + 1 (also ok)
  • 1 – 1 – 1 (nope – dips below zero)

(and then all the ones starting with -1, which are obviously no good)

These turn out to have some relation to the Catalan numbers, one of those weird sequences that come up all over the place. It’s always somehow reassuring when one of these turns up, like you’re actually doing proper maths after all. (Of course a problem this simple pretty much has to map to something well known, so it isn’t really a surprise.)
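Here’s a quick brute-force check of this (my own sketch – ‘allowed’ just encodes the never-dip-below-zero rule from above):

```python
from itertools import product
from math import comb

def allowed(seq):
    """True if the running apple count never dips below zero."""
    total = 0
    for step in seq:
        total += step
        if total < 0:
            return False
    return True

for n in range(1, 9):
    seqs = [s for s in product((1, -1), repeat=n) if allowed(s)]
    # Allowed sequences of length n: 1, 2, 3, 6, 10, 20, 35, 70, ...
    # These are the central binomial coefficients C(n, n // 2),
    # close cousins of the Catalan numbers.
    assert len(seqs) == comb(n, n // 2)
    # The allowed sequences of length 2k that end back on zero apples
    # are counted by the Catalan numbers themselves: 1, 2, 5, 14, ...
    closed = [s for s in seqs if sum(s) == 0]
    print(n, len(seqs), len(closed))
```

For n = 3 this recovers the three allowed sequences listed above.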

Now for something completely different.

Merleau-Ponty

Fair warning: I’m going to make some effort to make the physics stuff I talk about accessible, but for philosophy I’m probably just going to start info-dumping at whatever level I’m currently at, without necessarily situating it or explaining it very well. I’m not quite sure what the difference is in my head – I think it’s that this is all very much new to me and I’m just exploring, and so it’s enough effort just to get down some of my thoughts without also fussing about comprehensibility.

Whereas in physics I’m more familiar with the territory, so I’m more interested in building into a coherent picture and don’t mind stopping to put things in context.

The other main interesting thing I’m doing at the moment is reading Merleau-Ponty’s The Phenomenology of Perception, very slowly. It’s exactly what I needed to read, and is actually very readable – this isn’t Heidegger, he’s mainly using normal language rather than making up his own terms. He’s sort of got one foot in the continental phenomenology tradition, which is mostly new to me, and one in the psych theories of the time (especially Gestalt psychology), which have a far more accessible writing style if you’ve got a science background.

It’s still going slowly, though. This is partly because all my reading gets shoved to the end of the day at the moment, after I’ve had a full day of work and learning nonsense about negative probabilities, and so I can’t always be bothered with it, and partly because phenomenology is just hard. If you want to talk about the world as it appears to us before we shove too much conceptualisation on the top, then that’s just not how we normally talk about the world, so it takes effort to understand it.

(Interestingly it looks like Merleau-Ponty also moved to using more esoteric, less everyday language in later life. Maybe because of this? Anyway I’m very grateful that he started out in this more readable style.)

I’ll probably end up writing a blog post on this soon, and hopefully more on the book as I keep reading.

I also have a secondary source, ‘Merleau-Ponty’s Ontology’ by M.C. Dillon, which I pulled off the shelf at the local university library. I’m really enjoying it – it’s an opinionated appraisal of specific parts of the original rather than a straightforward commentary-type student text, and has a distinctive writing style of its own. Possibly an equivalent of Dreyfus for Heidegger? Obviously that carries the danger of some weird interpretation that isn’t very true to the original, but the opinionated texts are so much more fun to read that I always end up picking them.

Dillon is very interested in the non-duality angle, which he spends some time situating historically:

[image: excerpt from Dillon situating this historically]

This seems to be a key part:

[image: Dillon’s two diagrams contrasting the dualist picture with Merleau-Ponty’s]

The top diagram ‘depicts the ontological/epistemological dualism that underlies both intellectualism and empiricism’. Appearance is derivative from some sort of underlying reality. Whereas in the bottom diagram, Merleau-Ponty’s ontology starts with the phenomena, and notices an immanent/transcendent split within them.

Dillon is one of those people who’s very keen on how the top diagram represents a millennia-long terrible mistake in Western thought, how Merleau-Ponty is finally freeing us from this bad idea, etc etc. I don’t fully get this myself, though maybe I’ll come round to it. It’s true that we get the phenomena ‘all at once’, immanence and transcendence mixed in, so it makes sense to start there. After all it’s where we all do start. We can’t get at this wonderful world of inaccessible things-in-themselves, so fair enough, let’s chuck it out.

Still, the immanence/transcendence split really is a split. There really is a bunch of complex autonomous stuff in your experience that you have no control over, and there is also a bunch of ordering principles and conceptualisation that you bring to it. That question is still interesting whether you’re in the top diagram or the bottom one. I guess it’s only really a millennia-long terrible mistake etc etc if the sort of insights you build off the top diagram are useless when you move to the bottom one.

There’s also some really interesting-looking stuff about language that I’ve just started reading, so hopefully more on that later. I haven’t yet digested enough to write anything coherent, but I’m interested in this bit because of the parallels with mathematics, where you have a similar phenomenon: a chain of symbols which needs to ground out in actual meaning at some point. Relevant quote:

[image: quote from Dillon on language]

Miscellaneous things that happened in February

  • I wrote a not-very-good blog post, and also wrote up my ‘two types of mathematician’ linkdump with a few more examples for LW2.0, where it was surprisingly popular and got a lot of good comments.
  • We had an earthquake in Bristol! A gigantic 4.4 on the Richter scale – maybe something fell off a shelf somewhere. Still a big novelty to British people. Now we’re stressing out over some snow instead.
  • I’ve been getting up at 6am for about a year now and it works really well, so I thought I should at least try getting up at 5:30. It was surprisingly horrible compared to 6. I’ve no idea why, but after a couple of weeks of stumbling around like a zombie that experiment has now been terminated.
  • I went to a cider festival.

OK, that’s it, thanks again for trying this experiment. Let me know if you have any thoughts on any of this. Also, please let me know if this was not what you were expecting (I have no idea what you were expecting!) and want to unsubscribe 🙂

Cheers,

Lucy