May 2018: The Constructive Loaf

“It was Toni who first put forward the concept of the Constructive Loaf. Our time, he argued, was spent being either compulsorily crammed with knowledge, or compulsorily diverted. His theory was that by lounging about in a suitably insouciant fashion, but keeping an eye open all the time, you could really catch life on the hip – you could harvest all the aperçus of the flâneur. Also, we liked loafing around and watching other people doing things and tiring themselves out.”

Julian Barnes, Metroland

Banning my usual distractions is working really well this time. It was OK when I tried the same thing in November and December, and I turned up some interesting new stuff, but not as much as I expected. This time I got really productively bored and had a long Constructive Loaf around the internet looking at other people working hard (also I read Metroland).

I’m still really missing my usual part of the internet though, and looking forward to being able to read it again in mid-June.  I need to find an approach that’s less crude than my current one of completely banning everything from time to time, but still makes space for productive boredom.

This time I’m going to start with some bits of physics that are all connected to the phase space formulation of quantum mechanics. This will start off fairly well with some more general-interest stuff on music and time-frequency analysis, and then will move on to infodumping a load of undigested arcane lore about the Wigner function. If you don’t want any of that, skip to the bottom, where there’s a review of Matthew Crawford’s The World Beyond Your Head and some links to interesting things I found while constructively loafing.

Nothing about Popper or Deutsch in this one. Maybe soon. I know I keep picking up topics one month and then dropping them the next, so I never talk about what I claim I’m going to talk about. I’m pretty repetitive in my interests on longer timescales, though, so probably a lot of it will loop round eventually if I keep this up.

Why I’m interested in the phase space formulation

In previous newsletters I’ve talked about this paper by van Enk on a toy model analogue of quantum physics. To recap, Spekkens’ original model has four boxes arranged in a 2×2 grid, each of which has a ‘probability’ P. There is also a set of possible measurements Q_i you can take, of which you can only pick one:

  • Q0: Is it in the bottom two boxes?
  • Q1: Is it in the leftmost two boxes?
  • Q2: Is it in the top-left-to-bottom-right diagonal?

In van Enk’s version, the possible measurements are generalised so that you can instead ask ‘a bit of each question, as long as you only end up with half the possible information’, in a sense I made more formal back in March. These measurements always give a number between 0 and 1, so they are legit probabilities. But the underlying Ps are scare-quote ‘probabilities’ which may be negative.
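
To make the bookkeeping concrete, here’s a minimal sketch in code. The numbers are invented for illustration (I’m not claiming this particular grid is an allowed state of either model); the point is just that an individual box ‘probability’ can go negative while the three measurements, which only ever see sums of pairs of boxes, stay between 0 and 1:

    import numpy as np

    # Toy 2x2 box model (numbers invented for this sketch, not taken from
    # the Spekkens or van Enk papers). Individual box 'probabilities' may
    # be negative; the measurements only ever see pair sums.
    P = np.array([[0.5,    0.25],     # top-left,    top-right
                  [0.375, -0.125]])   # bottom-left, bottom-right

    Q0 = P[1, :].sum()       # is it in the bottom two boxes?        -> 0.25
    Q1 = P[:, 0].sum()       # is it in the leftmost two boxes?      -> 0.875
    Q2 = P[0, 0] + P[1, 1]   # is it on the top-left-to-bottom-right
                             # diagonal?                             -> 0.375

    print(P.sum())           # 1.0: the box values still sum to one
    print(Q0, Q1, Q2)        # each answer lands in [0, 1]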

Now, this actually looks very much like something in proper QM, which is one reason the toy model is interesting. Like most theories in physics, QM can be written in a number of ways that look very different on the surface but produce equivalent results. The most common one, which you learn in a first course, is a theory of operators on Hilbert space, which is very different to how we normally think about classical mechanics. The second most popular one is probably Feynman’s path integral formulation, where you sum over possible trajectories rather than working with operators.

I’m interested in a third one, the phase space formulation. It’s popular in quantum optics for reasons I don’t fully understand yet (something to do with Gaussian states?). This gets rid of the operators and reinterprets QM as a theory of a distribution function over phase space, which makes it similar to the theory of probabilistic classical mechanics. Unlike classical mechanics, though, this distribution function – the Wigner function – can take negative values.

As soon as I mentioned this in the first newsletter, Bob Peterson emailed to ask about the connection to the Wigner function. And I said, yeah, I was aware that they’re both things with negative probabilities, and that’s one of the reasons I’m interested in this model. But I only noticed more recently that there’s actually a strong analogy (maybe this was obvious to Bob?). One of the key properties of the Wigner function is that if you sum it along the position axis of phase space, you get the momentum probability distribution, and if you sum it along the momentum axis, you get the position one. These marginals are legit probability distributions: non-negative, and they sum to one. This is like the Q0 and Q1 measurements of the van Enk model: you can sum the boxes in rows or columns and reconstruct legit probabilities.
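
To see the marginal property in action, here’s a rough numerical sketch of my own (not from any of the papers): take the harmonic-oscillator ground state with ℏ = 1, build its Wigner function under one common convention, and check that summing over either phase space axis reproduces the corresponding probability distribution.

    import numpy as np

    # Gaussian ground state, psi(x) = pi^(-1/4) exp(-x^2/2), with hbar = 1.
    psi = lambda q: np.exp(-q**2 / 2) / np.pi**0.25

    x = np.linspace(-6, 6, 241)
    p = np.linspace(-6, 6, 241)
    y = np.linspace(-12, 12, 481)            # integration variable
    dx, dp, dy = x[1] - x[0], p[1] - p[0], y[1] - y[0]

    # One common convention:
    # W(x, p) = (1/2pi) * integral dy  conj(psi(x + y/2)) psi(x - y/2) exp(i p y)
    W = np.zeros((len(x), len(p)))
    phase = np.exp(1j * np.outer(p, y))      # shape (len(p), len(y))
    for i, xi in enumerate(x):
        corr = np.conj(psi(xi + y / 2)) * psi(xi - y / 2)
        W[i] = np.real(phase @ corr) * dy / (2 * np.pi)

    # Summing over p recovers |psi(x)|^2; summing over x recovers the momentum
    # distribution (which has the same Gaussian form for this state).
    print(np.allclose(W.sum(axis=1) * dp, np.abs(psi(x))**2, atol=1e-3))   # True
    print(np.allclose(W.sum(axis=0) * dx, np.abs(psi(p))**2, atol=1e-3))   # True

For this particular state the Wigner function happens to be positive everywhere; you need something like a superposition of two separated Gaussians before the negative regions show up.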

I looked into this more, and discovered that there’s a theory of Wigner functions on discrete phase spaces as well (this is useful for quantum spin). This is extremely similar to the van Enk model for the case of a 2×2 matrix – apparently not exactly the same though. The paper is very clear and also taught me new things about the continuous Wigner function, and I learnt how the Q2 (‘diagonal’) measurement fits in. I’ll get to that in a bit though.

The phase space formulation is something I’ve been wanting to understand better for a long time anyway, so I’m happy to spend some time looking at it. The appealing thing is the similarity to classical mechanics. It’s always been really annoying to me that we learn the two theories with a different language, so it’s a pain working out what’s genuinely conceptually new in QM and what could be done in classical physics too. Eventually I’d like to learn a ‘dictionary’ something like the one in this Wikipedia table, so that I know how to map things between Liouville mechanics and QM phase space mechanics, and can get a better idea of where the differences come in. But I haven’t got as far as dynamics yet and am just trying to get a bit of an intuitive grip on the basic ingredients, starting with the Wigner function.

(Just for completeness: something I didn’t know until last year is that you can also represent classical mechanics on Hilbert space! This is called Koopman-von Neumann theory. I haven’t dug into it much yet but I’m intrigued by it. I think you can also use it to do a path integral version of classical mechanics, which ends up being pretty boring because the particles only take their classical trajectories anyway. Still, it would complete the pattern nicely even if nobody ever uses it for anything. This paper might be relevant.)

This business of getting some intuition has been quite slow, because the Wigner function is often introduced in a really opaque, unhelpful way, even by the dubious standards of normal physics exposition. This goes right back to the original paper – Wigner just sort of dumps it on the page, with the following cryptic footnote:

[Image: the cryptic footnote from Wigner’s original paper, crediting the expression to Szilard and himself]

‘Me and my mate thought of it once, but we won’t tell you where or how.’ Thanks, Wigner!

I searched a bit, and it looks like Szilard was probably never actually involved – Wigner just wanted to boost Szilard’s career when Szilard got out of Germany in the 30s. I haven’t found the original source for this yet, though.

Time-frequency analysis and musical notation

One thing that was completely new to me is that the Wigner function is also used in classical physics, for time-frequency analysis of signals. Time and frequency are Fourier pairs in the same way that position and momentum are in quantum physics. This means you can’t pin both down precisely at once: a signal that’s precisely localised in time is completely spread out in frequency, while a single precise frequency corresponds to an infinite sinusoid spread over all of time. So trying to plot time against frequency precisely for a signal is always going to be something of a bodge.

Terence Tao apparently teaches a course on this, and he points out that people do actually try to plot frequency and time at once, and that, despite being slightly fake mathematically, this has a long history and is very useful:

[Image: a musical score – a scale starting on middle C]

I had never thought about this before! Musical notation is just time along the x axis and frequency up the y axis. We would normally say that the first middle C in the score has a frequency of 256 Hz, but strictly that only applies if you play it for all of time, to get an infinite sinusoid. In reality, all you want is for the waveform to be reasonably periodic at 256 Hz for the half a second or so that the note is supposed to last. The musical score does a good job of summarising the information you actually want, which you won’t get from just looking at a time plot or a frequency plot on its own.

There’s lots of theory in signal processing on different ways to do this. The simplest option is the short-time Fourier transform. You keep the signal for a small chunk (‘window’) of time and set the rest to zero, and calculate the Fourier transform of that to get a local frequency spectrum. So if your window covers the duration of the middle C you’ll pick up the 256 Hz. Then you move your window along a bit in time to pick up some different frequencies. This sort of spectrogram got some unexpected publicity while I was writing this!
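
Here’s a minimal sketch of one step of that process (all the numbers are invented for illustration): generate a 256 Hz tone, cut out a chunk of it with a boxcar window, and Fourier transform the chunk.

    import numpy as np

    fs = 8000                                # sample rate in Hz (made up)
    t = np.arange(0, 1.0, 1 / fs)            # one second of audio
    signal = np.sin(2 * np.pi * 256 * t)     # a 'middle C' at 256 Hz

    # Boxcar window: keep a short chunk of the signal, drop the rest.
    start, width = 2000, 512                 # window position and length (samples)
    chunk = signal[start:start + width]

    spectrum = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(width, 1 / fs)
    print(freqs[np.argmax(spectrum)])        # peak within one bin of 256 Hz

Sliding the start position along the signal and stacking the spectra side by side is all a spectrogram is.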

There’s an underlying symmetry in the Fourier transform, so that you could instead start in the frequency domain and take short windows of that, and transform to the time domain. This would be a pretty stupid choice in music, I think, because the signals we want to listen to tend to have spread-out, periodic behaviour in time rather than frequency. (On the other hand there are some important short-duration features in music, like the ‘attack’ at the start of a bowed string note, where it might be better?) But really it’s a trade off… you either end up with good time resolution or good frequency resolution.

To do better than this, you can make a cleverer choice of window. The short-time Fourier transform just uses the ‘boxcar function’ as its window (the function that’s 1 for a small chunk of the domain and 0 everywhere else). Using a Gaussian window instead turns out to give the best possible compromise between resolution in time and resolution in frequency. This is called the Gabor transform.
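
Here’s a little sketch of the difference the window makes (parameters invented again): tapering a short stretch of the 256 Hz tone with a Gaussian rather than chopping it with a boxcar removes the hard edges, which kills the spectral leakage far away from the peak.

    import numpy as np

    fs, width = 8000, 512
    n = np.arange(width)
    tone = np.sin(2 * np.pi * 256 * n / fs)       # short stretch of 'middle C'

    windows = {
        "boxcar": np.ones(width),
        "gaussian": np.exp(-0.5 * ((n - width / 2) / (width / 8)) ** 2),
    }

    for name, w in windows.items():
        spec = np.abs(np.fft.rfft(tone * w))
        spec /= spec.max()
        # Energy far from the 256 Hz peak: the hard boxcar cutoff leaks
        # noticeably even this far out; the smooth Gaussian taper barely at all.
        print(name, spec[100:].max())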

This is still a ‘one size fits all’ window function, though. It seems plausible that you can often do better by using a window that’s based on the actual function you are transforming. In the case of the Wigner function, it looks like you use something like the mirror image of the signal itself as the window function (see e.g. here – I say ‘something like’ because I think there might still be another coordinate change going on, and I need to write it out properly and see).
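
For what it’s worth, here’s the recipe as I currently understand it, as a very rough sketch (conventions and factors of two differ between references, so treat the details with suspicion): at each time step you multiply the signal by its own mirror image about that point to get an ‘instantaneous autocorrelation’, then Fourier transform over the lag variable.

    import numpy as np

    def wigner_ville(x):
        """Very rough discrete Wigner-Ville sketch (conventions and factors
        of two differ between references). At each time n, form
        x[n + tau] * conj(x[n - tau]) -- the signal windowed by its own
        mirror image about n -- then Fourier transform over the lag tau."""
        x = np.asarray(x, dtype=complex)
        N = len(x)
        W = np.zeros((N, N))
        for n in range(N):
            tau_max = min(n, N - 1 - n)            # largest lag that fits
            taus = np.arange(-tau_max, tau_max + 1)
            corr = x[n + taus] * np.conj(x[n - taus])
            kernel = np.zeros(N, dtype=complex)
            kernel[taus] = corr                    # negative lags wrap round
            W[n] = np.real(np.fft.fft(kernel))     # Hermitian in tau, so real
        return W

Fed a chirp, this gives a sharp ridge along the instantaneous frequency, which is the selling point over the spectrogram; the price is messy cross-terms as soon as the signal has more than one component.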

I think I’m now at the point where I’ve told myself one plausible story about the Wigner function, and when I next get time I’m happy to actually work some examples and see why this idea of using the function itself as a window function is a useful one.

Arcane Wigner function lore

Now for the undigested infodump bit. There are a couple of other routes to the Wigner function that I found during my trawl for intuition – I don’t understand them too well yet, but I’m going to write down what I do know in a vague sort of way.

One of my starting points was these two blog posts about the Wigner function by Jess Riedel, another physicist who got annoyed by the usual way it’s introduced without explanation. (Also, from the look of his blog links he has some kind of connection to the Less Wrong lot… the rationalists get everywhere!) The first post starts with the density matrix, which is used more frequently in quantum physics and is something like an autocorrelation function in statistics, and then applies two transformations to it: a Fourier transform and a coordinate rotation. This is strongly connected to the story I’m trying to tell above.

The second post does something fancier, explaining that it’s the ‘convolutionary mean’ of two other distributions that quantum optics people like for some reason. I don’t yet know what that reason is, so this is not very helpful to me yet! Still, it does look very similar to the way the function was introduced in Ville’s original paper (Ville was the person who brought the Wigner function into signal processing, where it is often known as the Wigner-Ville distribution). Ville explains himself quite clearly, and makes connections with lots of important stuff to do with operator ordering and the Wigner-Weyl transform… there’s a lot I’d like to learn at some point.

There’s another route I want to go down before that, though, that’s more directly related to the van Enk model. The key property of the Wigner function that every physics text talks about is the ability to reproduce the position and momentum distributions by integrating along the axes of phase space. Jess Riedel is dissatisfied with this and points out that this is also true of the ‘Jigner function’, a shitty function he just made up… it’s not enough to pick it out uniquely.

This paper by Wootters explains that actually the Wigner function has a more elegant property, which goes something like this: you can integrate along any line in phase space and get a similar marginal distribution, but for an observable that’s something like a·position + b·momentum for some constants a, b. I think this is the property you need for the Wigner function to respect Galilean invariance, and still get something that makes sense if you transform your position and velocity coordinates. Which means that there’s some group theoretic structure going on here, and probably that was what Wigner and Weyl were thinking about, given that they both had an unusual knowledge of group theory for physicists of the time.
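
Written out (this is my paraphrase of the property, so don’t trust the exact normalisation), it’s something like:

    \int\!\!\int W(x,p)\,\delta(a x + b p - c)\,\mathrm{d}x\,\mathrm{d}p \;=\; \rho_{a\hat{x}+b\hat{p}}(c) \;\geq\; 0

where ρ is the probability density for the observable a·x̂ + b·p̂ to take the value c. Choosing (a, b) = (1, 0) or (0, 1) gives back the ordinary position and momentum marginals, and any other line gives the distribution of some rotated combination of the two.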

The Wootters paper goes on to use this property to construct a finite analogue to the Wigner function. In the case of a 2×2 phase space this starts to look very familiar… here are some Wigner functions:

[Images: examples of discrete Wigner functions on a 2×2 phase space, from the Wootters paper]

These look very like the ontic states in the Spekkens model! Van Enk says they are not exactly the same, and that ‘it would nevertheless be interesting to study the precise relations’, which translates as ‘these are similar but I don’t know exactly how’. It sounds like maybe the difference comes in when you start combining systems.

Still, you can see the connection with the questions Q0, Q1, Q2 I talked about earlier. If you sum the boxes horizontally, vertically or along the diagonals you always get a legit, positive probability. The diagonal choice Q2 works because of this extra property of phase space, that you can sum along any line and not just the axes.

I’m feeling happy with my choice of paper. The basic model is simple enough to dig into in detail without taking huge amounts of time to understand, but it has all these rich connections to other bits of physics. Probably next I should consolidate a bit, and learn how some of these vague qualitative arguments work in detail.

Matthew Crawford – The World Beyond Your Head

I saw this book in my local library and picked it up. It’s been given the subtitle ‘How To Flourish in an Age of Distraction’, and it looks like the publisher has tried to sell it as a sort of book-length version of one of those hand-wringers in The Atlantic about how we all gawp at our smartphones too much.

I’m a sucker for those, so I had a flick through. The actual contents looked a bit more interesting than that. Actually, I recognise a lot of this! Merleau-Ponty on perception… Polanyi on tacit knowledge… lots of references to embodied cognition. I hadn’t seen this set of ideas before in a printed pop book, so I thought I’d better read it.

Apparently he previously wrote a book called Shop Class as Soulcraft, on the advantages of working in the skilled trades. In this one he zooms out further to take a more theoretical/philosophical look at why working with your hands on real objects is so satisfying.

There’s a lot of good stuff in the book, which I’ll get to in a minute. I still struggled to warm to it, though, despite it being full of topics I’m really interested in. This was mostly a tone thing, I think. He writes in a style I’ve seen before and don’t get on with – I’m not American and can’t place it very exactly, but I think it’s something like ‘mild social conservatism repackaged for the educated coastal elite’. According to Wikipedia he writes for something called The New Atlantis, which may be one of the places this style comes from. I don’t know. There’s also a more generic ‘get off my lawn’ thing going on, where we are treated to lots of anecdotes about how the airport is too loud and there’s too much advertising and children’s TV is terrible and he can’t change the music in the gym.

The oddest feature of the book for me was his choice of pronouns for the various example characters he makes up throughout the book to illustrate his points. This is always a pain because every option seems to annoy someone, but using ‘he’ consistently would at least have fitted the grumpy old man image quite well. Maybe his editor told him not to do that, though, or maybe he has some kind of point to make, because what he actually decided to do was use a mix of ‘he’ and ‘she’, but only ever pick the pronoun that fits traditional expectations of what gender the character would be. Because he mostly talks about traditionally masculine occupations, this means that like 80% of the characters, and almost all of the sympathetic ones, are male – all the hockey players, carpenters, short-order cooks and motorcycle mechanics he’s using to demonstrate skilled interaction with the environment. The only female characters I remember are a gambling addict, a New Age self-help bore, a disapproving old lady, and one musician who actually gets to embody the positive qualities he’s interested in. It’s just weird, and distracting.

OK, that’s all my whining about tone done. I’m now going to talk about a couple of things I actually liked.

One thing I’d vaguely thought about before but hadn’t considered in any depth is how objects vary in how much information they transmit to the user from the environment. That’s a bit abstract; I’ll give some examples. At the very low-information end, you have this toy for toddlers that he describes:

When my oldest daughter was a toddler, we had a Leap Frog Learning Table in the house. Each side of the square table presents some sort of electromechanical enticement. There are four bulbous piano keys; a violin-looking thing that is played by moving a slide rigidly located on a track; a transparent cylinder full of beads mounted on an axle such that any attempt, no matter how oblique, makes it rotate; and a booklike thing with two thick plastic pages in it.

… Turning off the Leap Frog Learning Table would produce rage and hysterics in my daughter… the device seemed to provide not just stimulation but the experience of agency (of a sort). By hitting buttons, the toddler can reliably make something happen.

The Leap Frog Learning Table is designed to take very complicated data from the environment – toddlers bashing the thing any old how, at any old speed or angle – and funnel this mess into a very small number of possible outcomes. So the toddler’s glancing swipe at the cylinder is translated to a satisfying rolling motion – they get to make something happen.

At the opposite end of the spectrum would be a real violin. I play the violin, and you could describe it quite well as a machine for amplifying tiny changes in arm and hand movements into very different sounds (mostly horrible ones, which is why beginners sound so awful). There are a large number of degrees of freedom – the movement of every finger, including those on the bow hand, contributes to the final sound – and almost all of them are continuous – there are no keys or frets to accommodate small mistakes in positioning.

Crawford argues that although tools and instruments that transmit this kind of rich information about the world can be very frustrating in the short term, they also have enormous potential for satisfaction in the long term as you come to master them. Whereas objects like the Leap Frog Learning Table have comparatively little to offer if you’re not two years old:

“Variations in how you hit the button on a Leap Frog Learning Table or a slot machine do not similarly produce variations in the effect you produce. There is a closed loop between your action and the effect that you perceive, but the bandwidth of variability has been collapsed… You are neither learning something about the world, as the blind man does with his cane, nor acquiring something that could properly be called a skill. Rather, you are acting within the perception-action circuits encoded in the narrow affordances of the game, learned in a few trials. This is a kind of autistic pseudo-action, based on exact repetition, and the feeling of efficacy that it offers evidently holds great appeal.”

Crawford proceeds to use ‘autistic’ in this derogatory sense throughout the book, in a way which would not make him any friends on Tumblr (but, well, neither would the rest of it).

In the same chapter there’s some interesting material on gambling machines, and the tricks used to make them addictive. Apparently one of the big innovations here was the idea of ‘virtual reel mapping’. Old-style mechanical fruit machines would have three actual reels with images on them that you needed to match up, and just looking at the machine would give you a rough indication of the total number of images on the reel, and therefore the rough odds of matching them up and winning. Early computerised machines followed this pattern, but soon the machine designers realised that there no longer needed to be this close coupling between the machine’s internal representation and what the gambler sees. So the newer machines would have a much larger number of virtual positions that are mostly mapped to losing combinations, with a large percentage of these combinations being ‘near misses’ to make the machine more addictive. The intuitive sense of odds you get from watching the machine becomes completely useless, because the internal logic of the machine is now something very complicated that the screen actively hides from you.
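
A toy version of the trick (all the numbers here are invented) might look something like this. The gambler sees a reel with twelve symbols and one jackpot, so the intuitive odds look like 1 in 12, but the machine actually draws uniformly from 128 virtual stops, and the mapping decides both the real odds and how often the reel appears to ‘just miss’:

    import random

    # Toy illustration of 'virtual reel mapping' (all numbers invented).
    VISIBLE_REEL = ["JACKPOT"] + [f"blank_{i}" for i in range(11)]

    virtual_map = {}
    for stop in range(128):
        if stop == 0:
            virtual_map[stop] = "JACKPOT"    # real odds: 1 in 128, not 1 in 12
        elif stop <= 40:
            virtual_map[stop] = "blank_0"    # the symbol right next to the
                                             # jackpot lands about 30% of the
                                             # time, reading as a 'near miss'
        else:
            virtual_map[stop] = VISIBLE_REEL[1 + stop % 11]

    def spin():
        return virtual_map[random.randrange(128)]

    print([spin() for _ in range(10)])

Nothing on the screen tells you the mapping exists, which is exactly the point: the display is no longer a faithful picture of the machine’s internal state.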

If I was going to identify a real deficiency in the book, deeper than my complaints at the beginning, it would be that in his enthusiasm for people doing real things with real tools he’s forgotten a lot of the advantages of the more systematised, ‘autistic’ world he dislikes. Some acknowledgement of the fact that funnelling messy reality into categories is how we get shit done at scale would be nice. Also, for someone who named his epilogue ‘Reclaiming the Real’ it would be nice if he occasionally considered that the fact that he hasn’t liked anything that has happened in the last twenty years could possibly be an artefact of the strong prescription on his grumpy old man glasses, and tried taking them off once in a while.

Sim City, coupled and decoupled

In the references to Crawford’s book I found an intriguing paper by Mizuko Ito, Mobilizing Fun in the Production and Consumption of Children’s Software.

The early parts of the paper are written in some dialect of academicese that I find really really boring, but it turns out to have some great samples of dialogue later on. Here a kid is playing Sim City, with an undergrad watching:

[Image: transcript of Jimmy (‘J’) playing Sim City while the undergrad (‘H’) watches, from the Ito paper]

As Ito puts it, “by the end Jimmy has wasted thousands of dollars on a highway to nowhere, blown up a building, and trashed his city.” So what’s the appeal of playing the game in this way?

Ito mainly seems to be interested in the social dynamics of the situation – the conflict between J finding ‘fun’, ‘spectacular’ effects in the game, and H trying to drag him back to more ‘educational’ behaviours. I can see that too, but I’m interested in a slightly different reading. To my mind, J is ‘sketching’: he’s finding out what the highway tool can do as a tool, rather than immediately subsuming it to the overall logic of the game. While focussed on this, he ignores any more abstract considerations that would pull him out of engagement with the highway tool, dismissing the budget popup as fast as he can. Now that he knows what he can make with it, he may as well just trash the current city and start a new one where he can investigate highways more systematically. His explorations are useless in the context of the current game, but will give him raw material to work with later in a different city, where he might need a cloverleaf junction or an overhead highway. (Also, this way he gets to make cool stuff and then blow it up.)

Constructive loafing references

  • I started doing a half-arsed lit review on cognitive decoupling, to try and understand where the term came from. If I finish this I’ll write it up for Less Wrong, as people there seem to like the idea. I think it originated with Keith Stanovich (one of the cognitive biases people) but I haven’t quite worked out what the first reference is. He does mention this 1987 paper, Pretense and Representation, as a precursor that introduces the idea of decoupling (in a very literal way – he imagines it as an ability to take a copy of some kind of mental representation and transform the copy in order to think hypothetically about the situation).
  • There’s a philosopher and historian of science I like, John Norton – he writes very clearly and publishes a lot online. I looked on his website to see what he’s doing currently and discovered that he’s writing a book on the problem of induction. This includes a chapter on Bayesianism and why he doesn’t think much of it. Looks promising.
  • I found this book, Mathematics and the Roots of Postmodern Thought. It skips very fast between topics but looks good for getting ideas and references from. One of the million things I’m interested in and don’t have time to read up on is the various mathematicians that were influenced by phenomenology. Poincaré and Weyl for sure, maybe also Eddington in physics? (Eddington was seriously weird and I’m not sure what his influences were.) I tried to find out about this kind of stuff back in undergrad but had absolutely no idea what I was doing. Anyway there’s some material in this book on Poincaré’s campaign against Russell’s program to reduce maths to logic.
  • One of the books I tried reading in this ‘no idea what I’m doing’ phase was Poincaré’s Science and Hypothesis. I’ve now discovered that I should have been reading Science and Method, which has all the mathematical intuition stuff I like in it. Similar to the version in The Value of Science but with some different analogies:
    [Image: excerpt from Poincaré’s Science and Method]
  • I also found this while following up references from the book. It’s about Derrida, and I can actually understand it, which is a first. Between this and the book I’m now imagining an alternate-universe postmodernism where the big fights are in mathematics, not linguistics and anthropology, and take place over algebra/geometry instead of writing/speech.
  • I’m currently reading Empson’s Seven Types of Ambiguity, which is some kind of masterpiece of skilful movement between what I’ve been calling coupled and decoupled thinking, enough that (if I decide I understand him well enough) I might write a proper blog post on it to illustrate what I meant by the idea. He uses ambiguities in poetic meaning as a tool to break apart the halo of associations around words that gives poetry its ‘thick’ texture. Doing this requires extreme sensitivity to the poem in the first place to allow the meanings in, and also the ability to pick those meanings apart.

Next month

I’m going to Vienna for a week, mostly being a tourist but with two days of physics. Carlo Rovelli will be talking to us about relational quantum mechanics. I still haven’t done any reading on that, so I guess that’s what I’m doing this week.

After that, no idea really, I haven’t come up with a specific plan. Probably I will read a lot of internet when my ban expires, and maybe write some more on decoupling. Or plug on with one of my other half-baked projects… I have way too many of these right now.

Cheers,

Lucy