A heap of broken images


[Written as part of Notebook Blog Month.]

I’ve had these notes sitting around for ages now for some kind of pretentious essay on… The Waste Land and ‘oddly satisfying’ videos? Who even knows. I’m never going to write it up properly, so I may as well have a stab at it as part of this notebook project.

So, the original inspiration is David Chapman’s page on the atomised mode. This is mostly going to be an elaboration (or rip-off) of some of the ideas there. Particularly this bit:

In our present, atomized mode of meaningness, cultures, societies, and selves cannot hold together. They shatter into tiny jagged shards. We shake the broken bits together, in senseless kaleidoscopic, hypnotic reconfigurations, with no context or coherence.

Those ‘tiny jagged shards’ reminded me of something else, the ‘heap of broken images’ in The Waste Land:

What are the roots that clutch, what branches grow
Out of this stony rubbish? Son of man,
You cannot say, or guess, for you know only
A heap of broken images, where the sun beats,
And the dead tree gives no shelter, the cricket no relief,
And the dry stone no sound of water.

This isn’t just a random similarity. The Waste Land is about the same process of fragmentation into shards of meaning, but from an earlier phase of the process. (In as much as it’s about anything, I mean. I realise it’s a poem, not a lecture, and reading it in just the reductive way I’m going to in this post would be a bad idea.) Eliot is writing in 1922, against the backdrop of systems of meaning all in flames. In the third section of the poem, The Fire Sermon, he uses the Thames and its centuries of accreted histories to explore this unravelling. He compares mythic imagery of the past:

Elizabeth and Leicester
Beating oars
The stern was formed
A gilded shell
Red and gold
The brisk swell
Rippled both shores
Southwest wind
Carried down stream
The peal of bells

to random bits of incoherent junk in modernity:

The river’s tent is broken: the last fingers of leaf
Clutch and sink into the wet bank. The wind
Crosses the brown land, unheard. The nymphs are departed.
Sweet Thames, run softly, till I end my song.
The river bears no empty bottles, sandwich papers,
Silk handkerchiefs, cardboard boxes, cigarette ends
Or other testimony of summer nights. The nymphs are departed.

Only fragments of meaning remain, and the poem is made up of a disconnected collage of these. “Sweet Thames, run softly” is from Spenser’s Prothalamion. There are possibly more echoes in there that I don’t recognise.

(Not going to track down references because notebook blog, but my memory is that the coherence of meaning in the Elizabethan age is a recurring theme in Eliot’s critical essays as well. E.g. vividness and immediacy of imagery in Shakespeare and then the Metaphysical poets. Something like a ‘phoenix age’ in John Michael Greer’s retelling of Vico, to chuck in another reference.)

Anyway, it’s now nearly a century later, and we’ve hit new levels of incoherence. David Chapman’s example is Gangnam Style, which was current when he started writing the page:

Gangnam Style has been watched 2.9 billion times on YouTube. Even counting repeat views, it’s probably well-known to most young people on the planet. Its genre is, in fact, K-pop; but it may be the only K-pop song most Westerners have ever heard.

Genre — which defined many subcultures — has disintegrated. Atomization seemed at first like subculturalism taken to an extreme, but it is a qualitatively new mode. K-pop may be a subculture in Korea, but in America it’s just YouTube. It’s normal for a Top 40 hit to mash up country-style pedal steel guitar with bubble-gum-pop vocals, hip-hop rapping, EDM bass, and black metal blast beats. “Authenticity”—the aesthetic ideal of subculturalism—is impossible because there are no standards to be authentic to.

I think atomisation goes one level deeper again, though. Gangnam Style lasts for whole minutes, after all, and is recognisably a song. It has a relatively stable theme, a recurring chorus, and similar images crop up throughout the music video. I’m interested in stuff that’s fragmented nonsense right down to the five-second level.

My favourite examples are from this wonderful article by John Mahoney in The Awl on chumboxes. Chumboxes are those ‘related article’ boxes full of terrible clickbait at the bottom of news sites, supplied by the likes of Taboola and Outbrain. The analogy is to buckets of chum: chopped up bits of fish chucked in the water to attract larger fish. Chumbox chum is designed to appeal at this same shark-brain level:

Like everything else on the internet, traffic flowing through chumboxes must be tracked in order for everyone to be paid. Each box in the grid’s performance can be tracked both individually and in context of its neighbors. This allows them to be highly optimized; some chum is clearly better than others. As a byproduct of this optimization, an aesthetic has arisen. An effective chumbox clearly plays on reflex and the subconscious. The chumbox aesthetic broadcasts our most basic, libidinal, electrical desires back at us. And gets us to click.

Mahoney clicked through chumbox after chumbox to get to the absolute worst chum possible, and analysed common themes. Here’s a couple of examples:

Top left: Sexy Thing and Localized Rule. We won’t dwell on the efficacy of a Sexy Thing in advertising. But do note this Sexy Thing, enhanced with a chummy sprinkle of sinister context (crime? Young women in handcuffs?). Here the Sexy Thing is combined with a more digital-age enhancement, the Localized Rule. Scouring a visitor’s IP for its geographic location, anxiety can be created by informing you of a brand new reason to find yourself handcuffed in the back of a squad car in your neighborhood.

Upper Middle: Oozing Food/Egg. A trend without an immediately recognizable psychological precedent? Oozing eggs are extremely common, and are possibly deployed under similar principles as Disgusting Invertebrates or Globular Masses Presented as Weird Food. Or perhaps the resemblance to an oozing pustular sore brings us back into the familiar realm of the Skin Thing?

The only aim of chum is to hit one-note reactions of disgust, body horror, horniness, nosiness, fear and greed, so that you click through. There’s no need to make sense at all. This isn’t confined to chum, of course. Other examples include:

  • those videos aimed at toddlers of opening Kinder eggs etc
  • ‘Oddly satisfying’ videos
  • ASMR
  • Some bits of TV Tropes would fit. Some of it is longer, more coherent narrative patterns, but some is just recognisable fragments of plot that appear on screen for a few seconds
  • Argument theatre. Short clips of the moment where ‘X DESTROYS liberal snowflake’, or whatever the 2020 version of this 2018 pattern is
  • Almost anything that’s bad on purpose to make you click

Surely this is the end of the line? I feel like we must be right out in the estuary of this process… can we even get much more fragmented than this? (Probably we can, and I just can’t see it.) This looks like the famous lines from the end of The Fire Sermon:

On Margate Sands.
I can connect
Nothing with nothing.
The broken fingernails of dirty hands.

Broken fingernails of dirty hands would make a pretty good chumbox image, actually.

Maybe this post makes it look like I think this fragmentation is purely a bad thing. Chumboxes are pretty gross, and then there’s all the Waste Land comparisons. Eliot’s response was essentially reactionary, looking backwards towards an Anglo-Catholic traditionalism that he hoped could hold the line against incoherence. His friend and Waste Land editor Ezra Pound later turned to outright fascism.

Actually, I think atomisation is a lot more interesting and positive than that. Again, I’m ripping off Chapman’s take here:

This may sound like a problem. Overall, my description of the atomized mode may sound like a panicked condemnation. However, there is much to like about atomization, and—I will suggest—it provides vital resources for constructing the next, fluid mode.

Now that I have some examples of my own to think about, this is making much more sense to me. Maybe they can help to illustrate what these ‘vital resources’ are?

There’s enormous power down at the chumbox level of meaning. All the fragments have been liberated from any top-down need to make conceptual sense on a timescale greater than ten seconds, and appeal directly at the visceral level. Everything is very vivid and raw.

Still, to do anything useful with these fragments we need to build with them. We need to figure out how to recombine them skilfully from the bottom up, while keeping the energy. The kaleidoscope is maybe not the ideal metaphor here; it makes sense of the heap of broken images by imposing a rigid geometric scheme. We need something more local, provisional and reconfigurable. I don’t really know what, or I’d be writing that post instead of this one! But ‘build upwards from the fragments’ at least points towards the right direction.

Marx on alienation speedrun


[Written as part of Notebook Blog Month.]

This is a bit of an experiment to see how ‘set a timer for an hour and see what you find’ works for finding out basic information about a topic. Why Marx on alienation specifically? Well, it’s been in the back of my mind as something where I’ve wanted to know more for a while now, but not to the point where I could ever be bothered to, you know, put the work in. At least this way I’ll put some half-arsed work in, and find out something.

Before I set the timer, I’ll give some quick background. I first got curious about this when I was reading The World Beyond Your Head by Matthew Crawford. This book is in large part about the skilful use of physical tools, often in the context of work, and how modern life degrades the ability to form these craftsman-like relationships with your materials. (Ugh, that’s a horribly abstract and clunky sentence. I don’t want to spend ages writing this intro, so I’m not going to spend time making it into a concrete one. But I do include a lot of examples in that linked review.)

This sounds a lot like alienation to me, but Crawford never mentions Marx. This isn’t surprising, because Marx is very much Not His Tribe and it’s a very tribal sort of book. But it seemed like an omission I should follow up.

(I suppose it’s possible he does talk about him in Shop Class As Soul Craft, his other book on the topic. I haven’t read that one.)

The other reason I’m interested in this is pretty silly. I’m fascinated by this meme:


There’s something quite deep buried in there, I think. The two computers look different – the work one is more of a utilitarian enterprise-edition black box, the home one is a friendlier silver laptop. But I suspect that even if you worked at a trendy startup with a Macbook covered in stickers, you’d still get a lot of the same effect. The two computers are going to be used in very different ways.

Bad Screen is embedded in a work culture with distanced, ‘professional’ norms. You’re expected to turn up at set times, put in a certain number of hours, and do tasks that you quite frankly don’t care about either way, and wouldn’t look at for five minutes if you weren’t being paid. If you’re feeling any strong emotions that day, you’re mostly expected to leave them at home to the extent that they’d interfere with your job.

Good Screen is much more deeply connected to your whole life – you might stare at it at weird hours of the night and morning, rather than ‘office hours’. You use it to further intrinsically motivated projects that you chose yourself, like overanalysing stupid internet memes. If you’re miserable while looking at Good Screen you can just cry about it, or get angry, or whatever. Good Screen reaches tendrils right out into your whole life, while Bad Screen is much more weakly connected to a smaller, professional self. That difference in contexts is going to leak right down to the perceptual level. You’re alienated from Bad Screen in a way that you aren’t from Good Screen.

I suspect Marx’s interest will be structural – the conditions that lead to this alienation – rather than phenomenological – what alienation feels like on the inside. So might not be as directly relevant as I’d like. But still… I’m not going to know either way if I don’t read anything.

OK, that’s enough blathering about background. Timer time. I’ll tidy up and add in links afterwards, but the majority of writing is done during the hour.

OK. Type ‘marx on alienation’ into Google, get pull quote:


Oh dear, two long German words already. Entfremdung (alienation/estrangement) looks fine I think. Gattungswesen (species-essence) is going to take more work to understand.

Fine I’ll read the Wikipedia article. It continues:

The theoretical basis of alienation within the capitalist mode of production is that the worker invariably loses the ability to determine life and destiny when deprived of the right to think (conceive) of themselves as the director of their own actions; to determine the character of said actions; to define relationships with other people; and to own those items of value from goods and services, produced by their own labour. Although the worker is an autonomous, self-realized human being, as an economic entity this worker is directed to goals and diverted to activities that are dictated by the bourgeoisie—who own the means of production—in order to extract from the worker the maximum amount of surplus value in the course of business competition among industrialists.

No massive surprises there. OK, let’s find out what the source texts are:

In the Economic and Philosophic Manuscripts of 1844 (1932), Karl Marx expressed the Entfremdung theory—of estrangement from the self. Philosophically, the theory of Entfremdung relies upon The Essence of Christianity (1841) by Ludwig Feuerbach which states that the idea of a supernatural god has alienated the natural characteristics of the human being. Moreover, Max Stirner extended Feuerbach’s analysis in The Ego and its Own (1845) that even the idea of “humanity” is an alienating concept for individuals to intellectually consider in its full philosophic implication. Marx and Friedrich Engels responded to these philosophic propositions in The German Ideology (1845).

So probably want to read about that first one. Open in new tab. Feuerbach sounds kind of interesting but different rabbit hole.

Next is the types of alienation. Here’s Marx himself (from “Comment on James Mill”):

Let us suppose that we had carried out production as human beings. Each of us would have, in two ways, affirmed himself, and the other person. (i) In my production I would have objectified my individuality, its specific character, and, therefore, enjoyed not only an individual manifestation of my life during the activity, but also, when looking at the object, I would have the individual pleasure of knowing my personality to be objective, visible to the senses, and, hence, a power beyond all doubt. (ii) In your enjoyment, or use, of my product I would have the direct enjoyment both of being conscious of having satisfied a human need by my work, that is, of having objectified man’s essential nature, and of having thus created an object corresponding to the need of another man’s essential nature … Our products would be so many mirrors in which we saw reflected our essential nature.

So you affirm yourself in two ways – you manifest your individuality by what you choose to make, and also by directly enjoying seeing your work be useful to another person. Marx identifies four ways that industrial production breaks this:

Alienation of the worker from the product. Design of the product is fixed by the capitalist class; the worker is just implementing a fixed model. More detailed stuff I’ll read if I get time.

Alienation of the worker from the act of production. The product itself is made in some repetitive way that gives little psychological satisfaction.

Alienation of the worker from their Gattungswesen (species-essence). Now we get the long German word. Open in new tab. Described below as:

Conceptually, in the term “species-essence” the word “species” describes the intrinsic human mental essence that is characterized by a “plurality of interests” and “psychological dynamism”, whereby every individual has the desire and the tendency to engage in the many activities that promote mutual human survival and psychological well-being, by means of emotional connections with other people, with society. The psychic value of a human consists in being able to conceive (think) of the ends of their actions as purposeful ideas, which are distinct from the actions required to realize a given idea.

I’d need to read more to grasp the subtleties. For now, imagine it as something like ‘human potential’ or ‘actualisation’. A mix of individual and societal: individual determination over your work, and also connection to others.

Industrial production thwarts this by turning the worker into an interchangeable, mechanised part.

Alienation of the worker from other workers. Workers are competing in the labour market and turned against each other.

Shit that’s already 25 minutes gone. I’d like to get further than one wiki article. Skim the rest for interesting bits. Ignore Hegel. OK the rest of this is mostly Hegel. Tiny criticism section which is all Althusser.

References: Marx originals, some university lecture notes. Maaybe follow the lecture notes if time.

Go back to that other wiki article on Economic and Philosophic Manuscripts of 1844. OK, maybe Feuerbach is more important than I realised and not a separate rabbit hole:

Marx expounds his theory of alienation, which he adapted (not without changes) from Feuerbach’s The Essence of Christianity (1841).

Another version of four types of alienation. Bit about Aristotle:

He refers to Aristotle’s praxis and production, by saying that the exchange of human activity involved in the exchange of human product, is the generic activity of man. Man’s conscious and authentic existence, he states, is social activity and social satisfaction.

Moreover, he sees human nature in true common life, and if that is not existent as such, men create their common human nature by creating their common life. Furthermore, he argues similarly to Aristotle that authentic common life does not originate from thought but from the material base, needs and egoism. However, in Marx’s view, life has to be humanly organized to allow a real form of common life, since under alienation human interaction is subordinated to a relationship between things. Hence consciousness alone is by far not enough.

Labour theory of value. Everything reduced to its exchange rate, causing further alienation.

Marx is of the opinion that alienation reaches its peak once labour becomes wage labour.

OK that’s the end of the two wiki articles. Back to Google. Stanford Philosophy is next. See how far I get through this.

Introductory definitional stuff. Alienation as a kind of separation that is in some way problematic.

Disclaimers. Not going to be much about historical development but mentions some names. More Hegel. Rousseau.

Two adjacent concepts ‘drawn from Hegelian and Marxist traditions’: fetishism and objectification.

‘Fetishism’ refers here to the idea of human creations which have somehow escaped (inappropriately separated out from) human control, achieved the appearance of independence, and come to enslave and oppress their creators… Consider, for instance, the frequency with which ‘market forces’ are understood and represented within modern culture as something outside of human control, as akin to natural forces which decide our fate.

Feuerbach had some similar argument about Christianity:

the Christian God demands real world sacrifices from individuals, typically in the form of a denial or repression of their essential human needs.

Similar to Marx’s alienation, but not all alienation is fetishism. E.g. alienation can come from ‘our ruthlessly instrumental treatment of nature, rather than in nature’s tyranny over us’.

Next objectification. Not the feminist version.

In the present context, objectification refers rather to the role of productive activity in mediating the evolving relationship between humankind and the natural world.

i.e. humans make stuff and collectively shape the world.

These world transforming productive activities, we might say, embody the progressive self-realisation of humankind.

Marx would say that this doesn’t always take an alienated form. E.g. meaningful work that is freely chosen, that develops your capabilities and is useful to others. So different to say:

what is sometimes called the ‘Christian’ view of work. On this account, work is seen as a necessary evil, an unpleasant activity unfortunately required for our survival. It owes its name to its embrace of the claim that it was only after the Fall that human beings were required to work by the sweat of their brows

OK next we have subjective and objective alienation. Subjective – experiencing life as lacking meaning, feeling estranged from the world. Objective – potential is frustrated by conditions of the world.

Then there is a table of

  • subjective + objective present
  • objective present, not subjective
  • subjective present, not objective

and how different thinkers fit in. E.g. ‘I take it that existentialists think of (something like) objective alienation as a permanent feature of all human societies.’ Potentially interesting but will skip for now.

Haha I’m slow. Just realised this article is alienation in general, not just Marx. That’s why all these other people keep cropping up! D’oh! Ah well at least I’m getting some background. Still 15 minutes to go. Will skim for Marxy bits.

Something about the positive side of alienation, as an expression of individuality and differentiation. Hard to be too alienated in a premodern tribe where you have no choice about what to do with your life. In this view it should be an intermediate stage:

By a dialectical progression is meant only a movement from a stage characterised by a relationship of ‘undifferentiated unity’, through a stage characterised by a relationship of ‘differentiated disunity’, to a stage characterised by a relationship of ‘differentiated unity’.

Future communist societies will get to the wonderful world of ‘differentiated unity’ and then we’ll all be happy.

The suggestion here is that internal to the second stage, the stage of alienation, there is both a problematic separation from community and a positive liberation from engulfment.

Interesting sounding bit on morality as alienation. All those shoulds alienating you from your own taste.

certain conceptions of morality might embody or encourage a problematic division of self, and a problematic separation from much that is valuable in our lives.

Utilitarianism as example. Again this isn’t really Marx but alienation in general.

Unresolved issues. How much alienation in pre-capitalist societies? Religious alienation as another flavour.

Its plausibility is scarcely incontrovertible, given the amount of sheer productive drudgery, and worse, in pre-capitalist societies.

How much can we be free of systematic alienation?

Marx’s view about communism rests crucially on the judgement that it is the social relations of capitalist society, and not its material or technical arrangements, which are the cause of systematic forms of alienation.

OK that’s the end of the article. What can I do with the remaining 6 minutes? Not much. Back to Google. Next listing is from something called marxists.org. Really short but has a big reference list. More Hegel 😦

Some university lecture notes. Nothing is jumping out at me. Try this blog post. A little bit more about the species-being concept:

Because humans are biological beings, and not merely free-floating immaterial minds, we, like all other biological beings, must interact with and transform the natural world in order to survive. But what distinguishes us from all other animals, like bees, spiders, or beavers, which all transform the world based on instinct, is that we transform the world consciously and freely. Thus, the essence of a human being – what Marx calls our species-being – is to consciously and freely transform the world in order to meet our needs. Like many other philosophers, Marx believes that excellently doing what makes us distinctively human is the true source of fulfillment.

3 minutes. Next page of results. Now I’m on Issue 79 of International Socialism. Long and not a riveting read. OK here’s an example:

Peter Linebaugh in his history of 18th century London, The London Hanged, explained that workers considered themselves masters of what they produced. It took great repression, a ‘judicial onslaught’, in the late 18th century to convince them that what they produced belonged exclusively to the capitalists who owned the factories. During the 18th century most workers were not paid exclusively in money. ‘This was true of Russian serf labour, American slave labour, Irish agricultural labour and the metropolitan labour in London trades’

Reading some concrete history would be a lot more interesting. Thinking back to Lark Rise and Enclosure.

Time’s up! That was fun, and I learned a few things. A lot of it was just ‘in the water’, so that I vaguely knew it already, but I did learn some new terminology, and I have an idea of where to find the primary sources and go deeper if I want to.

Writing and searching in the same hour was a bit difficult and tended to bias towards spending longer on each link. Maybe an hour of just going down link rabbit holes and writing up afterwards would be worth trying too? Still, I think this format is promising.

Roses and traffic lights


[Written as part of Notebook Blog Month.]

I read Iain McGilchrist’s The Master and His Emissary last year, and I’m still digesting what I think of the book as a whole (see this thread for a few thoughts), but I’ve already got a lot of use out of some of the ideas I picked up along the way. One of the most surprising things I learned is that there are two very different ways we use the word ‘symbol’, and I’d never noticed! This observation is probably not unique to McGilchrist, and may be obvious to others anyway, but it was news to me.

The first sense is roughly what we mean by symbolism in poetry. The power of a poetic symbol lies in the strength of its associations to other ideas, objects and symbols, both direct and culturally specific. The rose is a canonical (western?) example:

In one sense of the word, a symbol such as the rose is the focus or centre of an endless network of connotations which ramify through our physical and mental, personal and cultural, experience in life, literature and art: the strength of the symbol is in direct proportion to the power it has to convey an array of implicit meanings, which need to remain implicit to be powerful. In this it is like a joke that has several layers of meaning – explaining them destroys its power.

The other sense is a more technical, practical one, that applies to the sort of symbols you see on clothes labels, maps and airport signs. These symbols need to be unambiguous. In this case secondary associations are useless at best and may be actively dangerous. A red traffic light needs to mean one thing only:

The other sort of symbol could be exemplified by the red traffic light: its power lies in its use, and its use depends on a 1:1 mapping of the command ‘stop’ onto the colour red, which precludes ambiguity and has to be explicit.

In the book these two quotes are bookended by a couple of sentences linking this back to his hemisphere model. I left those out because this point stands whether he’s right or not. I’m more interested in the implications, which I haven’t thought through very much yet.  I can’t think of any situations where we mistake one kind of symbol for the other – we generally know whether we’re reading poetry or a clothes label – so maybe this is something we just know how to navigate in practice, and the mixing together of concepts doesn’t matter very much.

Still, I find this confusion deeply weird, and I’m left with a few questions:

  • Who else has written about this point? Any good references?
  • Do other languages split these two concepts up into separate words?
  • Are there good examples of intermediate cases? Emoji seem like one potential good place to look. They need a fairly fixed meaning to work, but often pull a network of secondary meanings along with them (which sometimes – as with the eggplant/aubergine – end up overshadowing the original intended meaning). They’re often conveying squishy human emotions, and haven’t been as rigorously pruned for the purposes of technical rationality as airport signs. This would be an interesting topic to explore in itself.
  • Does being unaware of this distinction ever cause trouble in practice? Maybe emoji are the place to look again.

If you have any thoughts, let me know!

Five jobs meme, post-PhD edition


[Written as part of Notebook Blog Month.]

There was a ‘first five jobs’ meme going around in March when I first started compiling this list of potential posts. This is my favourite entry:

I’m going to do this but with the first five jobs I did immediately after getting a PhD in physics, because the answer is funnier. I had a sort of unmotivated directionless phase after I finished, and did a bunch of weird temporary jobs in the absence of any better ideas. Exactly five in fact. I haven’t made anything up here, the jobs really were that odd. Here’s the list:

1. Walking round hospitals measuring things

I really wanted to do something mindless straight after I finished, and I’d done some temporary admin work before, so I signed up with a temp agency. They really excelled themselves and came up with something more mindless than I could ever have imagined.

The job was seriously weird. There were two hospitals being merged together on a new site, and the project management office needed to collect data on how much storage space the new hospital would need for medical supplies. I’m not sure what the best way of doing this would be, but maybe it would involve, I don’t know, some Fermi estimates based on their current storage requirements, plus some efficiencies for the single site. What they actually did was make a giant spreadsheet of every sort of item ordered by the hospital (bandages, prosthetics, tiny orthopaedic screws) and then employ EIGHT OF US to go round the hospitals with tape measures FOR WEEKS tracking down and measuring every individual item on the list, including the tiny orthopaedic screws. It was a kind of bizarre treasure hunt round the wards and cupboards and operating theatre storerooms, and I sometimes got to scrub up and go into the theatres themselves for the more obscure items. I genuinely enjoyed this job, because I like walking and exploring and being nosy, but also wtf??

I was so talented at this challenging job that I was kept on, along with one other guy, for an even tougher assignment – weighing things. We went round finding all the different types of surgical kits and putting them on scales for… reasons, I guess.

I have no idea if any of this data was ever useful for anything.

2. Following medical secretaries around

After this I had a few weeks off for some reason I forget, and then I phoned the temp agency again. They had more work at the same place! This time they were collecting data on the space they’d need for admin work, and my job was to follow people round and tick a box saying what they were doing every five minutes. Most people were understandably pretty unhappy about being seen like a state, and were grumpy at first, but when they realised I wasn’t too irritating they’d soon warm up, often to the point of offering me office cake.

I discovered Slate Star Codex some time in this job (this was early 2014) and obsessively read my way through the whole back catalogue on a tiny phone screen in between ticking boxes. That was my first step down the rabbit hole of becoming an extremely online person, so I guess it’s notable in retrospect.

3. Receptionist in a soup factory

I only did this one for two weeks, which is good because it was deadly boring. I was on the reception desk signing visitors in and out of a soup factory in an industrial estate on the outskirts of Bristol. Not many people visit soup factories in industrial estates unless they’re bringing a lorry of soup ingredients, so this wasn’t very taxing. To pass the rest of the time I was given a big pile of soup batch reports (temperature, density, etc) to enter into some spreadsheet.

Not much else to say about this one. The soup smelled quite nice in the morning I guess.

4. Numeracy drop-in sessions for nurses

After this I stumbled into a job that actually used some of my mathematical knowledge. This wasn’t really due to any effort on my part – my landlord, a research chemist, knew some people at the maths department of a local university and passed my name on. It’s not a big research university with an army of PhD students to do all the bits of marking and tutoring that crop up, so I took some of these on.

This is the only one where I’ve tweaked the title for comic effect, because I did a lot of more normal stuff too, marking Fourier series engineering coursework and running computer labs. But the best thing they gave me was the numeracy drop-in sessions. There’d been a high profile case somewhere where a patient had been given X milligrams of something instead of X micrograms and died, and now nursing students had to pass a test to show understanding of unit conversions along with some other basic maths. It was a nice walk along the river to the nursing hospital, which was in a converted Victorian lunatic asylum, and I’d sit in their fancy barrel-vaulted canteen and help people out if a numeracy test was coming up, and get on with whatever I wanted if it wasn’t. Pretty enjoyable job.

5. Sorting the post in a law firm

This was about as exciting as it sounds. The post would come in early in the morning, and then we’d open it, sort it, scan it, email it out, and file the originals. Sometimes people would request the original documents, so in the afternoon we’d pick those out of the files. For some added excitement we’d do a trolley run to other offices to fetch the post.

The most fun I had on this job was when I got put on ‘destruction’ for a week. This was still a lot less fun than it sounds, and involved chucking old documents into bags for shredding after rescuing any stray passports and birth certificates. Still, it wasn’t supervised much, and a leisurely week of listening to music while throwing things in bags is quite relaxing.

After a couple of months I finally decided I’d had enough, and started looking for a more normal job. Since then I’ve been working as a programmer, like everybody else who left academia after a STEM PhD and didn’t have any other ideas. So that was the end of my prestigious career in weird temp work. Though I guess there’s still time to become a duck roper.

Synthesising foggy pearls



[Written as part of Notebook Blog Month.]

Very short one to start things off. I have a new twitter bio and I wanted to explain it.


Like everything else in my head it comes from mashing two things together and deciding that they are the same thing. First, here’s an alternative definition of ‘metascience’, from… whatever this game is:

I’m not sure ‘metascience’ has much of an agreed-on definition beyond the obvious ‘ideas that extend around or beyond science’, but the conference mentioned in the tweet expands on it with the following:

During this decade, we have witnessed the emergence of a new discipline called metascience, metaresearch, or the science of science. Most exciting was the fact that this is emerging as a truly interdisciplinary enterprise with contributors from every domain of research. This symposium served as a formative meeting for metascience as a discipline. The meeting brought together leading scholars that are investigating questions related to themes such as:

  • How do scientists generate ideas?
  • How are our statistics, methods, and measurement practices affecting our capacity to identify robust findings?
  • Does the distinction between exploratory and confirmatory research matter?
  • What is replication and its impact and its value?
  • How do scientists interpret and treat evidence?
  • What are the cultures and norms of science?

I think that ‘synthesising foggy pearls’ is actually weirdly appropriate, especially for the first point about idea generation. You go into the fog — vague, contextual, disorienting swirls of untheorised confusing stuff — and try to condense out something more structured, durable and reusable. This process fascinates me more than almost anything. I want to understand how we do it, and I want to understand how to do it better.

A couple of months later I saw this tweet:

It turns out that there’s a beetle that synthesises pearls from fog already! Maybe it can teach us how to do metascience…

Notebook Blog Month

I’m going to try an experiment in June: writing lots of short notebook-style posts, roughly in the style of David MacIver’s notebook blog. I’ve been thinking of testing whether this format works for me for ages, and it seems like a good time to finally do it.

I crashed most of my writing routines in late 2019 by getting a new job with a long bus commute and dropping my monthly newsletter for a while to readjust. That was the main engine driving new post drafts, so once that crashed the blog went with it. Then 2020 came along and crashed everything else. I’ve been doing a lot of weird half-baked physics stuff but not really writing anything up properly, and I’ve sort of forgotten how to by now. This is my attempt to flywheel up some writing energy again, starting with some easier raw material than badly organised physics notes.

If I like it I may continue with something like this, if not I’ll probably go back to the newsletter format. Or combine both somehow? Don’t know yet.

Anyway… I’ve made a list. I’ve dredged through old drafts and newsletter notes and incomprehensible shower-thought emails to myself, and managed to pull together 50 topics I could potentially write about. And I’m going to make quick low-res attempts at a bunch of these. Some of these are pretty constrained in scope anyway, others would be a serious research project to do well, but either way I’ll just sit down for an hour or two and see what I can bash out in that time. I might also veer off the list if any good ideas come up during the month.

My goal is 20 posts but I’m not going to beat myself up if I don’t get there. I don’t have a good sense of how difficult this is going to be, and the idea is to have fun rather than kicking myself through a miserable obstacle course. Ten would be fine.

I wanted to call this Shitty Blog Posts Month, which is a funnier name, but I’m trying to wean myself off this sort of self-deprecation. It is quite likely that many of the posts will be shitty, given the time constraints. But I want to avoid the whole thing of saying ‘oh I’m not really trying, look I’ve labelled it ‘shitty’ and put it on a blog called Drossbucket, don’t judge me’. That was very effective several years ago for getting past my defences and starting to write at all, but I don’t need it any more.

OK, here’s the list. There’s no meaning behind the order, I just came up with a whole load of ideas and then randomised the list afterwards. In practice I’m likely to do a mix of the ones I’m most excited about and the ones I can phone in quickly, but I’d also be interested to know which ones look appealing to people – it’d be more fun to write for an audience!

Edit to add: I’m crossing these off as I do them. Also if I have more ideas I’m adding them to the end after the original fifty.

  1. Cognitive dancing! cognitive style! followup post
  2. Pullman – alethiometer/marionette theatre essay
  3. Debugging resources – just a list of blog posts etc people suggested in that twitter thread.
  4. The way mathematicians name variables is actually good and not bad. Notes from November: “I had a good thought about… something… but lost it before I opened this document 😦 Possibly it was connected to that John Cook thing David Chapman mentioned about how everything in probability is P… was connecting it to that thing people say about meaningless variable names in mathematics. I don’t think this is such a big deal because mathematical symbols end up richly saturated with meaning when you use them… they tend to represent recurring concepts not arbitrary bags-of-crap like in programming.”
  5. Bret Victor Kill Math
  6. Mane 6 as Mitford sisters
  7. Talking About Machines Kindle highlights
  8. Examples only
  9. Whiny post about why I find debugging hard
  10. Redraft Wittgenstein newsletter stuff as post
  11. “Neoliberalism”
  12. Something on ‘accountability’ narratives e.g. that Meaningness gluing goldfish crackers to ceiling example.
  13. I don’t like the ‘bullshit jobs’ classification, too binary. Attempt to bullshit out a better taxonomy.
  14. Programmer envy
  15. Old ‘fragmentation’ draft
  16. Something about that vague thing I was just thinking about on intellectually understanding vs viscerally understanding – the visceral one feels a lot ‘more real’ so it is really tempting (coming from a sort of rational worldview) to think there is a clean theory behind it. whereas actually that visceral sense is coming out of the background of previous engagement with the thing, which is complicated and specific, and isn’t ever going to resolve to a clean theory. need to sort out what I’m even trying to say.
  17. Do something with all those Tasic postmodern mathematics notes I made
  18. Bristol bridge walk tweets in blog post form
  19. Rewilding physics
  20. Too many cooks spoil the global section
  21. Marx on alienation speedrun. Feel like I should know what he has to say but I can’t be arsed. So set a timer for one hour to research and read, then write up what I find.
  22. Pebbles and sheep as an example of the middle distance thing. If they are an example of the middle distance thing. Write the post and find out.
  23. Something like Dan Luu’s ‘HN: The Good Parts’ post where I dig out comments I really like from various places. or maybe exceptional comment threads? i dunno.
  24. Rubik’s cube learning notes
  25. Write out that twitter thread on McGilchrist/Derrida/whatever with some words between it
  26. ‘Thinking on the page’ / writing, fast and slow
  27. Old draft on my dislike of ‘thin’ technical terminology
  28. ‘Trailing clouds of glory’ ramble from August 2019 notes
  29. Crackpot time 3
  30. That thing I was thinking about in the stationery aisle in the post office. Something like ‘most stationery is ambiguity reduction’. Is it true? Write the post and find out.
  31. Some shit on going all out on your natural strengths vs getting to mediocre on your weaknesses.
  32. Worse than quantum mechanics. (PR box and Piponi’s machine are both ‘worse than QM’ in some way. Is there any deep connection there?)
  33. Something to do with the banana’s indexicality post
  34. You Are Not An Artisan thoughts
  35. David MacIver’s book prompt notebook post thing
  36. Taste
  37. Pretentious essay on The Waste Land and oddly satisfying videos
  38. Close to the Machine Kindle highlights
  39. The Well Wrought Urn / Heresy of Paraphrase
  40. The middle distance post comments are good enough that I could probably make a post by summarising them.
  41. Seven Types of Ambiguity Kindle highlights
  42. diff mcgilchrist.txt chapman.txt
  43. Two types of symbols bit from McGilchrist
  44. Keep your identity embodied and *maybe* also illegible.
  45. Five jobs meme: post-PhD edition
  46. Ben Hoffman on LW has a good comment… somewhere… about how we end up doing ‘accidental deliberate practice’ on things we already like, e.g. improving writing by monologuing in your head while walking. Find it and give more examples.
  47. Practical Criticism book thoughts
  48. AU where Derrida trolled mathematicians instead and writing/speech became algebra/geometry
  49. Bristol study hall – what it is + why it’s good
  50. ‘Eating fog’ (Namib desert beetle)
  51. Having opinions in public
  52. Motive power
  53. Universities are still good
  54. Dig out that Garfinkel pulsar thread and do something with it
  55. Doing things on purpose
  56. Some rambling thoughts on visual imagery


Grabiner on eighteenth century mathematics

These are some notes I wrote a couple of years ago on Judith Grabiner’s paper ‘Is Mathematical Truth Time Dependent?’ David Chapman suggested I put them up somewhere public, so here they are with a few tweaks and comments. They’re still notes, though – don’t expect proper sentences everywhere! I’m not personally hugely interested in the framing question about mathematical truth, but I really enjoyed the main part of the paper, which compares the ‘if it works, do it’ culture of eighteenth century mathematics to the focus on rigour that came later.

I haven’t read all that much history of mathematics, so I don’t have a lot of context to put this in. If something looks off or oversimplified let me know.

I found this essay in an anthology called New Directions in the Philosophy of Mathematics, edited by Thomas Tymoczko. I picked this book up more or less by luck when I was a PhD student and a professor was having a clearout, and I didn’t have high hopes – nothing I’d previously read about philosophy of mathematics had made much sense to me. Platonism, logicism, formalism and the rest all seemed equally bad, and I wasn’t too interested in formal logic and foundations. However, this book promised something different:

The origin of this book was a seminar in the philosophy of mathematics held at Smith College during the summer of 1979. An informal group of mathematicians, philosophers and logicians met regularly to discuss common concerns about the nature of mathematics. Our meetings were alternately frustrating and stimulating. We were frustrated by the inability of traditional philosophical formulations to articulate the actual experience of mathematicians. We did not want yet another restatement of the merits and vicissitudes of the various foundational programs – platonism, logicism, formalism and intuitionism. However, we were also frustrated by the difficulty of articulating a viable alternative to foundationalism, a new approach that would speak to mathematicians and philosophers about their common concerns. Our meetings were most exciting when we managed to glimpse an alternative.

There’s plenty of other good stuff in the book, including some famous pieces like Thurston’s classic On proof and progress in mathematics, and a couple of reprinted sections of Lakatos’s Proofs and Refutations.

Anyway, here are the notes. Anything I’ve put in quotes is Grabiner. Anything in square brackets is some random tangent I’ve gone off on.

Two “generalizations about the way many eighteenth-century mathematicians worked”:

  1. “… the primary emphasis was on getting results”. Huge explosion in creativity, but “the chances are good that these results were originally obtained in ways utterly different from the ways we prove them today. It is doubtful that Euler and his contemporaries would have been able to derive their results if they had been burdened with our standards of rigor”.
  2. “… mathematicians placed great reliance on the power of symbols. Sometimes it seems to have been assumed that if one could just write down something which was symbolically coherent, the truth of the statement was guaranteed.” This extended to e.g. manipulating infinite power series just like very long polynomials.

Euler’s Taylor expansion of \cos(nz) starting from the binomial expansion as one example. He takes z as infinitely small and n as infinitely large, and is happy to assume their product is finite without worrying too much. “The modern reader may be left slightly breathless”, but he gets the right answer.
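A rough sketch of the manoeuvre, as I understand the standard account (my reconstruction, not Grabiner’s own notation): expand via de Moivre and the binomial theorem, then let the infinities cancel.

```latex
% Binomial expansion of cos(nz):
\cos(nz) = \cos^n z - \binom{n}{2}\cos^{n-2}z\,\sin^2 z
         + \binom{n}{4}\cos^{n-4}z\,\sin^4 z - \cdots
% Take z infinitely small and n infinitely large with v = nz finite,
% so cos z ~ 1, sin z ~ z, and n(n-1)\cdots(n-2k+1) ~ n^{2k}:
\cos v = 1 - \frac{v^2}{2!} + \frac{v^4}{4!} - \cdots
```

Each binomial coefficient \binom{n}{2k} contributes n^{2k}/(2k)!, which pairs off with the z^{2k} to give finite powers of v – exactly the sort of symbolically coherent, rigour-free manipulation Grabiner is describing.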

Trust in symbol manipulation was “somewhat anomalous in the history of mathematics”. Grabiner suggests it came from the recent success of algebra and the calculus. E.g. Leibniz’s notation, which “does the thinking for us” (chain rule as example). This also extended out of maths, e.g. Lavoisier’s idea of ‘chemical algebra’.

18th c was interested in foundations (e.g. Berkeley on calculus being insufficiently rigorous) but this was “not the basic concern” and generally was “relegated to Chapter I of textbooks, or found in popularizations”, not in research papers.

This changed in the 19th c beginning with Cauchy and Bolzano – beginnings of rigorous treatments of limits, continuity etc.

Why did standards change?

“The first explanation which may occur to us is like the one we use to justify rigor to our students today: the calculus was made rigorous to avoid errors, and to correct errors already made.” Doesn’t really hold up – there were surprisingly few mistakes in the 18th c stuff as they “had an almost unerring intuition”.

[I’ve been meaning to look into this for a while, as I get sick of that particular justification being trotted out, always with the same dubious examples. One of these is Weierstrass’s continuous-everywhere-differentiable-nowhere function. This is a genuine example of something the less rigorous approach failed to find, but it came much later, so isn’t what got people started on rigour.

The other example normally given is about something called “the Italian school of algebraic geometry”, which apparently went off the rails in the early 20th c and published false stuff. There’s some information on that in the answers to a MathOverflow question by Kevin Buzzard and the linked email from David Mumford – from a quick read it looks like it was one guy, Severi, who really lost it. Anyway, this is also a lot later than the 18th century.]

It is true though that by the end of the 18th c they were getting into topics – complex functions, multivariable calculus – where “there are many plausible conjectures whose truth is relatively difficult to evaluate intuitively”, so rigour was more useful.

Second possible explanation – need to unify the mass of results thrown up in the 18th c. Probably some truth to this: current methods were hitting diminishing returns, time to “sit back and reflect”.

Third explanation – prior existence of rigour in Euclid’s geometry. Berkeley’s attack on calculus was on this line.

One other interesting factor she suggests – an increasing need for mathematicians to teach (as they became employees of government-sponsored institutions rather than being attached to royal courts). École Polytechnique as model for these.

“Teaching always makes the teacher think carefully about the basis for the subject”. Moving from self-educated or apprentice-master set-ups, where you learn piecemeal from examples of successful thinking, to a more formalised ‘here are the foundations’ approach.

Her evidence – origins of foundational work often emerged from lecture courses. This was true for Lagrange, Cauchy, Weierstrass and Dedekind.

[I don’t know how strong this evidence is, but it’s a really interesting theory. I’ve had literalbanana‘s blog post on indexicality thoroughly stuck in my head for the last month, so I’m seeing that idea everywhere – this is one example. Teaching a large class forces you to take knowledge that was previously highly situated and indexical – ‘oh yes, you need to do this’ – and pull it out into a public form that makes sense to people not deeply immersed in that context. Compare Thurston’s quote in Proof and progress in mathematics: “When a significant theorem is proved, it often (but not always) happens that the solution can be communicated in a matter of minutes from one person to another within the subfield. The same proof would be communicated and generally understood in an hour talk to members of the subfield. It would be the subject of a 15- or 20-page paper, which could be read and understood in a few hours or perhaps days by members of the subfield.”]

How did standards change?

Often 18th c methods were repurposed/generalised. E.g. haphazard comparisons of particular series to the geometric series became Cauchy’s general convergence tests. Or old methods of computing the error term epsilon for the nth approximation get turned round, so that we are given epsilon and show we can always find n to beat that error term. This is essentially the definition of convergence we still use today.
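In modern notation (a sketch, not Grabiner’s wording), that reversal is just the epsilon–N definition we still teach:

```latex
\lim_{n\to\infty} a_n = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists N \;\; \forall n \ge N :\; |a_n - L| < \varepsilon
```

The 18th c computed the error \varepsilon for a given n; the 19th c made \varepsilon the given and n the thing to be found.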


Goes back to original question: is mathematical truth time-dependent? Sets up two bad options to knock down…

  • Relativism. “‘Sufficient unto the day is the rigor thereof.’ Mathematical truth is just what the editors of the Transactions say it is.” This wouldn’t explain why Cauchy and Weierstrass were ever unsatisfied in the first place.
  • MAXIMAL RIGOUR AT ALL TIMES. The 18th c was just sloppy. “According to this high standard, which textbooks sometimes urge on students, Euler would never have written a line.”

[A lot of my grumpiness about rigour is because it was exactly what I didn’t need as a first year maths student. I was just exploring the 18th century explosion myself and discovering the power of mathematics, and what I needed right then was to be able to run with that and learn more cool shit, without fussing over precise epsilon-delta definitions. Maybe it would have worked for me a couple of years later, if I’d seen enough examples to have come across a situation where rigour was useful. This seems to vary a lot though – David Chapman replied that lack of rigour was what he was annoyed by at that age, and he was driven to the library to read about Dedekind cuts.]

… then suggests “a third possibility”:

  • A Kuhnian picture where mathematics grows “not only by successive increments, but also by occasional revolutions”. “We can be consoled that most of the old bricks will find places somewhere in the new structure”.

The shitpost-to-scholarship pipeline

I’m at @ssica3003‘s Sensemaker Workshop today, and thought it would be fun to get a blog post out while I’m here, so I dug out this draft I wrote back in August for the newsletter. I wasn’t sure I liked it much at the time, but reading back it’s better than I remembered and works as a first stab at the idea, at least. Hopefully I can get it a little further down the pipeline. There are some questions at the end, so let me know if you have any thoughts.

Anyway, the rough idea is… there’s an extraordinary explosion of creative idea generation going on online. And there’s this fascinating kind of pipeline where people will start feeling out the vague beginnings of an idea through twitter threads and dumb throwaway posts and blog comments and email conversations, and then if something looks promising they’ll discuss more and pull in bits of other people’s ideas, and gradually build up to more thought out, polished work.

I’m excited about this culture for a lot of reasons. It’s a kind of online version of the casual, unobservable ‘dark matter’ part of academia, the part you can’t access by looking at published work – all the throwing around wild claims over coffee in the common room and in the pub on Friday evenings, bits of ‘yeah, that paper is unreadable, but this is what they’re really talking about’ insight from people in the know, standing round the whiteboard trying to figure something out, group meeting gossip, and the like. And it’s an incredibly vivid and alive version, at a time when large parts of normal academia have become rigid and bureaucratised and plain boring. This seems important to me: it’s the shitposting engine that produces the raw generative power that can drive more focussed work further down the line.

There’s a kind of wild energy; people aren’t afraid to go after big topics. We’ve got ourselves free of the constraints of the academic pimple factory:

An example of Little History is an essay by Matt Might (clearly a Marvel superhero in a counterfactual universe) titled The Illustrated Guide to a Phd. Go read it. It’ll only take a minute. It frames the sum of all human knowledge as a big circular bubble, and your PhD as a little pimple on the surface of it. I’ll call this the Mighty Diagram. It gets passed around in graduate student circles with depressing frequency.

Instead of a dent in the universe, you get a pimple on an uncritically proceduralist conceptualization of the frontier of knowledge as the sum of all the peer-reviewed academic literature in the world.

What makes this essay utterly horrifying is that it is actually an accurate description of what a PhD is; it calibrates academic career expectations correctly and offers an accurate sense of perspective on the peer-reviewed life. I suspect Matt Might sincerely intended the essay as a helpful guide to academic survival, but its effect is to put aspiring scholars in their place, rather than help them find a sense of place in the universe. It’s a You Are Here map for your intellectual journey at the end of a PhD, you disgusting little pimple, you. Kneel before this awe-inspiring edifice of knowledge that you’re lucky to be allowed to add a pimple to.

This rings very true with my own experience of academia, and the mindset it got me into. I personally found that after a couple of years out of there my thinking kind of cleared and became more expansive, and I was able to have good ideas again.

Unfortunately, this pipeline only goes so far. Currently, I think we’re in something like this situation:

It currently tends to dump ideas out somewhere around the ‘insight porn’ point – ideas that you read, think ‘oh that’s clever’, hit the like button, maybe comment on or talk about for a bit, then completely forget a week later. In the best case, a fragment of the idea or a bit of new jargon escapes into the local thought soup and can be combined with other ideas that are currently percolating. Sometimes this can be quite a powerful effect on its own. But there are a lot of places that academia still goes to that just can’t be reached in this way.

One of my favourite examples of this dynamic is Sarah Perry’s theory of mess. This is a genuinely great idea, and it’s not just a vague ‘insight’ – it’s an initial sketch of a satisfying explanatory theory of what mess is, complete with some very convincing examples and thought experiments (put a kaleidoscope filter on your mess, and it’s no longer mess!). But as far as I can tell, it got the same treatment as everything else that goes down the pipe – we all liked it and moved on. No real discussion (that I know of) of how to test it, or what has already been done in this line, or probing to see where it might fall down. Does it work? Who knows! On to the next idea!

Now, there’s an obvious explanation for why this happens. Most of us are not doing this as a full time job. We’re fitting this into the spare time we get, alongside paid work or other responsibilities. So we’re only really interested in doing the enjoyable parts of the idea generation process. Chucking around ideas is easy and fun, whereas checking whether they actually work is hard and boring. It’s not a big surprise that people prefer easy and fun work to hard and boring work.

There’s a lot of truth to this, but I think it’s slightly too cynical, in that it both makes the first part of the pipeline sound too easy and the second half of the pipeline too hard. Chucking around ideas is easy, but to be able to do that we need to have some good ones to chuck around, and that’s not exactly trivial. We have some advantage in being able to go after very broad, vague, ambiguous, undeveloped topics, and slowly clear fog. There’s no pressure to quickly get to a point where we can publish something. And at the same time, polishing up ideas is hardly some unrelenting tedious grind. Calculating can be fun, testing can be fun, writing up can be fun. If your eventual aim is to publish in traditional academia then there are some definite unfun parts, like altering your conversational blog post style to fit a more academic register, but this is only one part of the process.

My own experiments

For me, at least, it just feels unsatisfying to leave ideas at the insight porn stage. There’s a natural pull in the direction of getting further down the pipeline, rather than a tedious sense of duty.  I’ve been playing around with some haphazard experiments of my own, and I think I’ve got past the insight porn stage too with some of them, but nowhere near as far as I’d like. I’ll go through a couple of examples.

A few years ago, I wrote a tumblr post called stupid bat and ball, title all lower case, 700 words of low-effort writing not far above the shitpost level. I wasn’t really expecting it to go anywhere further. But it did contain a small core of insight – the bat and ball question of the Cognitive Reflection Test is different to the other two questions in some respect, and so the questions don’t really form a natural set. When I got the wordpress blog I reposted it, and eventually it attracted some really good comments that probed the mechanics of the bat and ball question much more deeply than I had. So I realised that this idea probably was worth investigating and that I should up my game a bit, and I started reading some of the literature. I discovered that the bat and ball question came first and the others were picked ‘to be like it’, with no elaboration of the process for picking them, which confirmed my suspicion that not much work went into question validation. And I found a fascinating follow-up paper showing how ridiculously sticky the wrong answer is.

The comments to this post pushed things further again, coming up with more detailed explorations of how the difficulty relates to the way the problem maps numbers to an abstract quantity (the difference in price), but fools you into mapping it to a concrete one (the price of the bat). @_awbery pointed out that this abstract/concrete confusion is completely missing from the other two questions, where all the quantities map to concrete objects. And anders devised a set of ‘similar questions’ that turn up the level of abstractness one step at a time. These comments point towards something like rat-running experiments for the Cognitive Reflection Test, getting an understanding of how the tools we’re trying to use actually work before using them to make inferences about abstractions like ‘cognitive reflection’. I do think a potentially valuable contribution could be made here. 
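For anyone who hasn’t seen it, the question (quoting the standard CRT wording from memory) is: a bat and a ball cost $1.10 in total; the bat costs $1.00 more than the ball; how much does the ball cost? A minimal sketch of the abstract/concrete point in code (variable names are mine):

```python
from fractions import Fraction

# Two constraints: a concrete total, and an abstract *difference* in prices.
total = Fraction(110, 100)   # bat + ball = $1.10
difference = Fraction(1)     # bat - ball = $1.00

# Correct move: treat "$1.00" as the difference and solve both constraints.
ball = (total - difference) / 2      # $0.05
bat = ball + difference              # $1.05

# Intuitive (wrong) move: map "$1.00" straight onto the concrete bat price.
intuitive_bat = Fraction(1)
intuitive_ball = total - intuitive_bat   # $0.10

print(ball)                              # 1/20, i.e. $0.05
print(intuitive_bat - intuitive_ball)    # 9/10 -- violates the $1.00 difference
```

The intuitive answer satisfies the concrete total but silently breaks the abstract difference constraint, which is exactly the mapping confusion the comments identified.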

But… I’m not really the person to do it. (Even if I cared more about this specific question than I do. I’d pretty much used up my remaining store of shits-to-give on writing the blog post, and didn’t even have enough left to engage with the comments as fully as I’d have liked to.) Doing psych research without fooling yourself sounds like an absolute minefield even if you know what you’re doing, and I have no expertise at all. 

So I guess in this case I quit the pipeline at the level of having a sort of slapdash lit review with some pointers to interesting ways to take it further. Not the most impressive result. But the interesting bit for me was the distance I travelled from the original tumblr post, which I’d put no effort into at all, and the way the project took on a life of its own, with other people helping to propel this considerably further than I’d ever thought to take it myself.

My other example is all my thinking about negative probability in the last year and a half. Although it sounds superficially like a kind of a crackpot topic, there are deep links to quantum mechanics on phase space, and I’ve been using my fascination with this as a serious starting point to learn all kinds of interesting things in quantum foundations/quantum information. I’ve been experimenting with the discipline of using a single paper as my focus, and this has been incredibly helpful for keeping me on track, and damping down my normal habit of wandering from subject to subject too quickly to pick up anything useful. 
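For orientation, one standard construction here is Wootters’ discrete Wigner function for a qubit (sign and axis conventions vary between authors; this is background, not my own decomposition), built from four phase-point operators:

```latex
% Phase-point operators, q, p \in \{0, 1\} (one common convention):
A_{q,p} = \tfrac{1}{2}\!\left[\mathbb{1} + (-1)^{q}\sigma_z
        + (-1)^{p}\sigma_x + (-1)^{q+p}\sigma_y\right]
% Quasi-probabilities for a state \rho:
W_{q,p} = \tfrac{1}{2}\operatorname{Tr}\!\left(\rho\, A_{q,p}\right)
```

The four W_{q,p} always sum to 1, but individual entries can go negative for some states, which is the sense in which ‘negative probability’ turns up in quantum mechanics on phase space.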

I’m more serious about this project than the bat and ball one – it actually connects to an enduring deep interest rather than something I blundered into by accident. Again, I’m not yet as far down the pipeline as I want to be, but I’ve got past the vague insight level. My last couple of posts explored an intriguing decomposition of the Wigner function for a qubit that I found myself, and that I can see some potential use for in interpreting negative probabilities. Since then I’ve had quite a few more ideas that I want to investigate, and I’ve started to link things into a more coherent picture. There’s also a lot more I could be doing in terms of making contact with people in academia and asking questions (something I’m rather bad at). I can definitely see how to push further. 

It’s still really funny to me that I’m cheerfully crashing about between cognitive psych and quantum foundations, with a few clueless forays into reading Derrida for good measure. Whereas in academia I’d have felt daring if I tried to pivot from burst to continuous sources of gravitational waves from neutron stars, or something. Obviously this is too scattered for me to get anything done, and I need to get better at idea triage. But there’s something really psychologically healthy about this mindset of just taking a direct run at whatever I feel like, instead of thinking ‘oh, that’s outside my field, I can’t think about that.’ I want to keep this even as I hopefully learn to focus my efforts more usefully.


Right, I want to push this out now so I can stop being antisocial at the workshop. I’ll end with some questions:

  • What are examples of people navigating the whole shitpost-to-scholarship pipeline successfully on the public internet? I’m particularly interested in people who are trying for academia-style focussed research on specific object-level questions, rather than big-picture synthesis or popularisation.
  • Is there any kind of institutional support out there, or is it all just individual weird nerds pursuing individual weird research programs?
  • Has anyone written about this well already? For a start, Venkatesh Rao had a couple of excellent threads here and here on a similar topic. It’s a much more pessimistic take, which actually fits my current drizzle-soaked winter-brain opinions better than these cheery ramblings from last summer – for example, in most of my experiments I haven’t managed to get much further than this sort of ‘reading published literature and blogging a few derivative observations’ stuff. I’d like to hear about anything else relevant that people have liked.


The middle distance

At the end of my last post, I talked about Brian Cantwell Smith’s idea of ‘the middle distance’ – an intermediate space between complete causal disconnectedness and rigid causal coupling. I was already vaguely aware of this idea from a helpful exchange somewhere in the bowels of a Meaningness comments section but hadn’t quite grasped its importance (the whole thread is worth reading, but I’m thinking about the bit starting here). Then I blundered into my own clumsy restatement of the idea while thinking about cognitive decoupling, and finally saw the point. So I started reading On the Origin of Objects.

It’s a difficult book, with a lot more metaphysics than I realised I was signing up for, and this ‘middle distance’ idea is only a small part of a very complex, densely interconnected argument that I don’t understand at all well and am not even going to attempt to explain. But the examples Smith uses to illustrate the idea are very accessible without the rest of the machinery of the book, and helpful on their own.

I was also surprised by how little I could find online – searching for e.g. “brian cantwell smith” “middle distance” turns up lots of direct references to On the Origin of Objects, and a couple of reviews, but not much in the way of secondary commentary explaining the term. You pretty much have to just go and read the whole book. So I thought it was worth making a post that just extracted these three examples out.

Example 1: Super-sunflowers

Smith’s first example is fanciful but intended to quickly give the flavour of the idea:

… imagine that a species of “super-sunflower” develops in California to grow in the presence of large redwoods. Suppose that ordinary sunflowers move heliotropically, as the myth would have it, but that they stop or even droop when the sun goes behind a tree. Once the sun re-emerges, they can once again be effectively driven by the direction of the incident rays, lifting up their faces, and reorienting to the new position. But this takes time. Super-sunflowers perform the following trick: even when the sun disappears, they continue to rotate at approximately the requisite ¼° per minute, so that the super-sunflowers are more nearly oriented to the light when the sun appears.

A normal sunflower is directly coupled to the movement of the sun. This is analogous to simple feedback systems like, for example, the bimetallic strip in a thermostat, which curls when the strip is heated and one side expands more than the other. In some weak sense, the curve of the bimetallic strip ‘represents’ the change in temperature. But the coupling is so direct that calling it ‘representation’ is dragging in more intentional language than we need. It’s just a load of physics.

The super-sunflower brings in a new ingredient: it carries on attempting to track the sun even when the two are out of direct causal contact. Smith argues that this disconnected tracking is the (sunflower) seed that genuine intentionality grows from. We are now on the way to something that can really be said to ‘represent’ the movement of the sun:

This behaviour, which I will call “non-effective tracking”, is no less than the forerunner of semantics: a very simple form of effect-transcending coordination in some way essential to the overall existence or well-being of the constituted system.
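Smith’s description is almost pseudocode already. Here’s a toy sketch of non-effective tracking (my own illustration, nothing from the book): a tracker that follows the sun while it’s visible, and dead-reckons at the known ¼° per minute while it’s occluded:

```python
RATE = 0.25  # degrees per minute, the sun's apparent motion

def track(sun_positions):
    """sun_positions: a reading per minute, in degrees, or None when occluded."""
    estimate = 0.0
    estimates = []
    for reading in sun_positions:
        if reading is not None:
            estimate = reading   # effective coupling: just follow the sun
        else:
            estimate += RATE     # non-effective tracking: keep rotating anyway
        estimates.append(estimate)
    return estimates

# Sun visible for two minutes, behind a redwood for three, then visible again.
print(track([0.0, 0.25, None, None, None, 1.25]))
# → [0.0, 0.25, 0.5, 0.75, 1.0, 1.25]
```

The interesting branch is the `else`: the tracker honours a coordination condition with something it currently has no causal contact with, so that reconnection is cheap when the sun re-emerges.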

Example 2: Error checking

Now for a more realistic example. Consider the following simple error-checking system:


There’s a 32-bit word that we want to send, but we want to be sure that it’s been transmitted correctly. So we also send a 6-bit ‘check code’ containing the number of ones (19 of them in this instance, or 010011 in binary). If these don’t match, we know something’s gone wrong.

Obviously, we want the 6-bit code to stay coordinated with the 32-bit word for the whole storage period, and not just randomly change to some other count of ones, or it’s useless. Less obviously (“because it is such a basic assumption underlying the whole situation that we do not tend to think about it explicitly”), we don’t want the 6-bit code to invariably be correlated to the 32-bit word, so that a change in the word always changes the code. Otherwise we couldn’t do error checking at all! If a cosmic ray flips one of the bits in the word, we want the code to remain intact, so we can use it to detect the error. So again we have this ‘middle distance’ between direct coupling and irrelevance.
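For concreteness, here’s a minimal sketch of the scheme (my own code, not from the book). The check only works because the code does not track the word: a cosmic-ray bit flip changes the count of ones, but leaves the stored code intact to disagree with it:

```python
def check_code(word: int) -> int:
    """6-bit check code: the number of ones in a 32-bit word."""
    return bin(word & 0xFFFFFFFF).count("1")

def verify(word: int, code: int) -> bool:
    return check_code(word) == code

word = 0b0110_1011_0010_1110_0101_1001_1010_0110
code = check_code(word)            # computed once, at send time

assert verify(word, code)          # intact transmission passes
corrupted = word ^ (1 << 7)        # a cosmic ray flips bit 7
assert not verify(corrupted, code) # the *unchanged* code catches the error
```

(A single flip always changes the count of ones by exactly one, so it’s always caught; this simple count can miss some multi-bit errors, which is why real systems use stronger codes. But the middle-distance point is the same either way.)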

Example 3: File caches

One final real-world example: file caches. We want the data stored in the cache to be similar to the real data, or it’s not going to be much of a cache. At the same time, though, if we make everything exactly the same as the original data store, it’s going to take exactly as long to access the cache as it does to access the original data, so that it’s no longer really a cache.
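A toy version (my sketch, not Smith’s) makes the trade-off vivid: once a value is cached, later reads never touch the backing store – that decoupling is exactly what makes the cache fast, and also exactly what lets it go stale:

```python
class Cache:
    def __init__(self, backing):
        self.backing = backing   # the slow 'represented' store
        self.local = {}          # the fast 'representing' copy

    def read(self, key):
        if key not in self.local:                 # only reconnect to the
            self.local[key] = self.backing[key]   # backing store on a miss
        return self.local[key]

    def write(self, key, value):
        self.local[key] = value        # write-through: keep the two
        self.backing[key] = value      # coordinated at reconnection points

store = {"a": 1}
c = Cache(store)
assert c.read("a") == 1   # first read goes to the backing store
store["a"] = 2            # someone changes the store behind our back...
assert c.read("a") == 1   # ...and the decoupled cache is now stale
```

Total coupling (checking the store on every read) would make the cache pointless; total decoupling (never checking) would make it wrong. It has to live in between.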

Flex and slop

In all these examples, it’s important that the ‘representing’ system tries to stay coordinated with the distant ‘represented’ system while they’re out of direct contact. The super-sunflower keeps turning, the check code maintains its count of ones, the file cache maintains the data that was previously written to it:

In all these situations, what starts out as effectively coupled is gradually pulled apart, but separated in such a way as to honor a non-effective long-distance coordination condition, leading eventually to effective reconnection or reconciliation.

For this to be possible, the world needs to be able to support the right level of separation:

The world is fundamentally characterized by an underlying flex or slop – a kind of slack or ‘play’ that allows some bits to move about or adjust without much influencing, and without being much influenced by, other bits. Thus we can play jazz in Helsinki, as loud as we please, without troubling the Trappists in Montana. Moths can fly into the night with only a minimal expenditure of energy, because they have to rearrange only a tiny fraction of the world’s mass. An idea can erupt in Los Angeles, turn into a project, capture the fancy of hundreds of people, and later subside, never to be heard of again, all without having any impact whatsoever on the goings-on in New York.

This slop makes causal disconnection possible – ‘subjects’ can rearrange the representation independently of the ‘objects’ being represented. (This is what makes computation ‘cheap’ – we can rearrange some bits without having to also rearrange some big object elsewhere that they are supposed to represent some aspect of.) To make the point, Smith compares this with two imaginary worlds where this sort of ‘middle distance’ representation couldn’t get started. The first world consists of nothing but a huge assemblage of interlocking gears that turn together exactly without slipping, all at the same time. In this world, there is no slop at all, so nothing can ever get out of causal contact with anything else. You could maybe say that one cog ‘represents’ another cog, but really everything is just like the thermostat, too directly coupled to count interestingly as a representation. The second world is just a bunch of particles drifting in the void without interaction. This has gone beyond slop into complete irrelevance. Nothing is connected enough to have any kind of structural relation to anything else.

The three examples given above – file caches, error checking and the super-sunflower – are really only one step up from the thermostat, too simple to have anything much like genuine ‘intentional content’. The tracking behaviour of the representing object is too simple – the super-sunflower just moves across the sky, and the file cache and check code just sit there unchanged. Smith acknowledges this, and says that the exchange between ‘representer’ and ‘represented’ has to have a lot more structure, with alternating patterns of being in and out of causal contact, and some other ‘stabilisation’ patterns that I don’t really understand, that somehow help to individuate the two as separate objects. At this point, the concrete examples run completely dry, and I get lost in some complicated argument about ‘patterns of cross-cutting extension’ which I haven’t managed to disentangle yet. The basic idea illustrated by the three examples was new to me, though, and worth having on its own.

Cognitive decoupling and banana phones

Last year I wrote a post which used an obscure term from cognitive psychology and an obscure passage from The Bell Jar to make a confused point about something I didn’t understand very well. I wasn’t expecting this to go very far, but it got more interest than I expected, and some very thoughtful comments. Then John Nerst wrote a much clearer summary of the central idea, attached it to a noisily controversial argument-of-the-month and sent it flying off around the internet. Suddenly ‘cognitive decoupling’ was something of a hit.

If I’d known this was going to happen I might have put a bit more effort into the original blog post. For a start, I might have done some actual reading, instead of just grabbing a term I liked the sound of from one of Sarah Constantin’s blog posts and running with it. So I wanted to understand how the term as we’ve been applying it differs from Stanovich’s original use, and what his influences were. I haven’t done a particularly thorough job on this, but I have turned up a few interesting things, including a surprisingly direct link to a 1987 paper on pretending that a banana is a phone. I also learned that the intellectual history I’d hallucinated for the term based on zero reading was completely wrong, but wrong in a way that’s been strangely productive to think about. I’ll describe both the actual history and my weird fake one below. But first I’ll briefly go back over what the hell ‘cognitive decoupling’ is supposed to mean, for people who don’t want to wade through all those links.

Roses, tripe, and the bat and ball again

Stanovich is interested in whether, to use Constantin’s phrase, ‘rational people exist’. In this case ‘rational’ behaviour is meant to mean something like systematically avoiding cognitive biases that most people fall into. One of his examples is the Wason selection task, which involves turning over cards to verify the statement ‘If the card has an even number on one face it will be red on the reverse’. More vivid real-world situations, like Stanovich’s example of ‘if you eat tripe you will get sick’, are much easier for people to reason about than the decontextualised card-picking version. (Cosmides and Tooby’s beer version is even easier than the tripe one.)

A second example he gives is the ‘rose syllogism’:

Premise 1: All living things need water
Premise 2: Roses need water
Therefore, Roses are living things

A majority of university students incorrectly judge this as valid, whereas almost nobody thinks this structurally equivalent version makes sense:

Premise 1: All insects need oxygen
Premise 2: Mice need oxygen
Therefore, Mice are insects

The rose conclusion fits well with our existing background understanding of the world, so we are inclined to accept it. The mouse conclusion is stupid, so this doesn’t happen.

A final example would be the bat and ball problem from the Cognitive Reflection Test: ‘A bat and a ball cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?’. I’ve already written about that one in excruciating detail, so I won’t repeat myself too much, but in this case the interfering context isn’t so much background knowledge as a very distracting wrong answer.
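(For completeness, the decoupled arithmetic, worked in cents to keep it exact: the two prices sum to 110 and differ by 100, so the ball gets half of what’s left over – 5 cents, not the intuitive 10.)

```python
total, difference = 110, 100        # in cents: $1.10 total, $1.00 difference
ball = (total - difference) // 2    # 5 cents, not the distractingly obvious 10
bat = ball + difference             # 105 cents

assert ball + bat == total
assert bat - ball == difference
```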

Stanovich’s contention is that people that manage to navigate these problems successfully have an unusually high capacity for something he calls ‘cognitive decoupling’: separating out the knowledge we need to reason about a specific situation from other, interfering contextual information. In a 2013 paper with Toplak he describes decoupling as follows:

When we reason hypothetically, we create temporary models of the world and test out actions (or alternative causes) in that simulated world. In order to reason hypothetically we must, however, have one critical cognitive capability—we must be able to prevent our representations of the real world from becoming confused with representations of imaginary situations. The so-called cognitive decoupling operations are the central feature of Type 2 processing that make this possible…

The important issue for our purposes is that decoupling secondary representations from the world and then maintaining the decoupling while simulation is carried out is the defining feature of Type 2 processing.

(‘Type 2’ is a more recent name for ‘System 2’, in the ‘System 1’/’System 2’ dual process typology made famous by Kahneman’s Thinking, Fast and Slow. See Kaj Sotala’s post here for a nice discussion of Stanovich and Evans’s work relating this split to the idea of cognitive decoupling, and other work that has questioned the relevance of this split.)

I don’t know how well this works as an explanation of what’s really going on in these situations. I haven’t dug into the history of the Wason or rose-syllogism tests at all, and, as with the bat and ball question, I’d really like to know what was done to validate these as good tests. What similar questions were tried? What other explanations, like prior exposure to logical reasoning, were identified, and how were these controlled for? I don’t have time for that currently. For the purposes of this post, I’m more interested in understanding what Stanovich’s influences were in coming up with this idea, rather than whether it’s a particularly good explanation.

Context, wide and narrow

Constantin’s post is more or less what she calls a ‘fact post’, summarising research in the area without too much editorial gloss. When I picked this up, I was mostly excited by the one bit of speculation at the end, and the striking ‘cognitive decoupling elite’ phrase, and didn’t make any effort to stay close to Stanovich’s meaning. Now I’ve read some more, I think that in the end we didn’t drift too far away. Here is Nerst’s summary of the idea:

High-decouplers isolate ideas from each other and the surrounding context. This is a necessary practice in science which works by isolating variables, teasing out causality and formalizing and operationalizing claims into carefully delineated hypotheses. Cognitive decoupling is what scientists do.

To a high-decoupler, all you need to do to isolate an idea from its context or implications is to say so: “by X I don’t mean Y”. When that magical ritual has been performed you have the right to have your claims evaluated in isolation. This is Rational Style debate…

While science and engineering disciplines (and analytic philosophy) are populated by people with a knack for decoupling who learn to take this norm for granted, other intellectual disciplines are not. Instead they’re largely composed of what’s opposite the scientist in the gallery of brainy archetypes: the literary or artistic intellectual.

This crowd doesn’t live in a world where decoupling is standard practice. On the contrary, coupling is what makes what they do work. Novelists, poets, artists and other storytellers like journalists, politicians and PR people rely on thick, rich and ambiguous meanings, associations, implications and allusions to evoke feelings, impressions and ideas in their audience. The words “artistic” and “literary” refers to using idea couplings well to subtly and indirectly push the audience’s meaning-buttons.

Now of course, Nerst is aiming at a much wider scope – he’s trying to apply this to controversial real-world arguments, rather than experimental studies of cognitive biases. But he’s talking about roughly the same mechanism of isolating an idea from its surrounding context.

There is a more subtle difference, though, that I find interesting. It’s not a sharp distinction so much as a difference in emphasis. In Nerst’s description, we’re looking at the coupling between one specific idea and its whole background context, which can be a complex soup of ‘thick, rich and ambiguous meanings, associations, implications and allusions’. This is a clear ‘outside’ description of the beautiful ‘inside’ one that I pulled from The Bell Jar, talking about how it actually feels (to some of us, anyway) to drag ideas out from the context that gave them meaning:

Botany was fine, because I loved cutting up leaves and putting them under the microscope and drawing diagrams of bread mould and the odd, heart-shaped leaf in the sex cycle of the fern, it seemed so real to me.

The day I went in to physics class it was death.

A short dark man with a high, lisping voice, named Mr Manzi, stood in front of the class in a tight blue suit holding a little wooden ball. He put the ball on a steep grooved slide and let it run down to the bottom. Then he started talking about let a equal acceleration and let t equal time and suddenly he was scribbling letters and numbers and equals signs all over the blackboard and my mind went dead.

… I may have made a straight A in physics, but I was panic-struck. Physics made me sick the whole time I learned it. What I couldn’t stand was this shrinking everything into letters and numbers. Instead of leaf shapes and enlarged diagrams of the hole the leaves breathe through and fascinating words like carotene and xanthophyll on the blackboard, there were these hideous, cramped, scorpion-lettered formulas in Mr Manzi’s special red chalk.

In this description, the satisfying thing about the botany classes is the rich sensory context: the sounds of the words, the vivid images of ferns and bread mould, the tactile sense of chopping leaves. This is a very broad-spectrum idea of context.

Now, Stanovich does seem to want cognitive decoupling to apply in situations where people access a wide range of background knowledge (‘roses are living things’), but when he comes to hypothesising a mechanism for how this works he goes for something with a much narrower focus. In the 2013 paper with Toplak he talks about specific, explicit ‘representations’ of knowledge interfering with other explicit representations. (I’ll go into more detail later about exactly what he means by a ‘representation’.) He cites an older paper, Pretense and Representation by Leslie, as inspiration for the ‘decoupling’ term:

In a much-cited article, Leslie (1987) modeled pretense by positing a so-called secondary representation (see Perner 1991) that was a copy of the primary representation but that was decoupled from the world so that it could be manipulated — that is, be a mechanism for simulation.

This is very clearly about being able to decouple one specific explicit belief from another similarly explicit ‘secondary representation’, rather than the whole background morass of implicit context. I wanted to understand how this was supposed to work, so I went back and read the paper. This is where the banana phones come in.

Pretending a banana is a phone

The first surprise for me was how literal this paper was. (Apparently 80s cognitive science was like that.) Leslie is interested in how pretending works – how a small child pretends that a banana is a telephone, to take his main example. And the mechanism he posits is… copy-and-paste, but for the brain:


As in, we get some kind of perceptual input which causes us to store a ‘representation’ that means ‘this is a banana’. Then we make a copy of this. Now we can operate on the copy (‘this banana is a telephone’) without also messing up the banana representation. They’ve become decoupled.
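The mechanism really is that literal. In programming terms (my gloss, not Leslie’s), it’s a deep copy: operate on the duplicate, and the primary representation is safe by construction:

```python
import copy

# Toy sketch of Leslie's decoupling: the primary representation...
primary = {"object": "banana", "kind": "fruit"}      # 'this is a banana'

# ...gets copied, and the copy is manipulated in the pretence
secondary = copy.deepcopy(primary)
secondary["kind"] = "telephone"                      # 'this banana is a phone'

assert primary["kind"] == "fruit"        # the real-world belief is untouched
assert secondary["kind"] == "telephone"  # the simulation runs on the copy
```

The whole point of the copy is that mutations can’t leak back – what Leslie calls avoiding ‘representational abuse’.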

What are these ‘representations’? Leslie has this to say:

What I mean by representation will, I hope, become clear as the discussion progresses. It has much in common with the concepts developed by the information-processing, or cognitivist, approach to cognition and perception…

This is followed by a long string of references to Chomsky, Dennett, etc. So his main influence appears to be, roughly, computational theories of mind. Looking at how he uses the term in the paper itself, it appears that we’re in the domain of Good Old-Fashioned AI: ‘representations’ can be put into a rough correspondence with English propositions about bananas, telephones, and cups of tea, and we then use them as a kind of raw material to run inference rules on and come to new conclusions:


Leslie doesn’t talk about how all these representations come to mean anything in the real world — how do we know that the string of characters ‘cups contain water’, or its postulated mental equivalent, has anything to do with actual cups and actual water? How do we even parse the complicated flux of the real world into discrete named objects, like ‘cups’, to start with? There’s no story in the paper that tries to bridge this gap — these representations are just sitting there ‘in the head’, causally disconnected from the world.

Well, OK, maybe 80s cognitive science was like that. Maybe Leslie thought that someone else already had a convincing story for how this bit works, and he could just apply the resulting formalism of propositions and inference rules. But this same language of ‘representations’ and ‘simulations’ is still being used uncritically in much more recent papers. Stanovich and Toplak, for example, reproduce Leslie’s decoupling diagram and describe it using the same terms:

For Leslie (1987), the decoupled secondary representation is necessary in order to avoid representational abuse — the possibility of confusing our simulations with our primary representations of the world as it actually is… decoupled representations of actions about to be taken become representations of potential actions, but the latter must not infect the former while the mental simulation is being carried out.

There’s another strange thing about Stanovich using this paper as a model to build on. (I completely missed this, but David Chapman pointed it out to me in an earlier conversation.) Stanovich is interested in what makes actions or behaviours rational, and he wants cognitive decoupling to be at least a partial explanation of this. Leslie is looking at toddlers pretending that bananas are telephones. If even very young children are passing this test for ‘rationality’, it’s not going to be much use for discriminating between ‘rational’ and ‘irrational’ behaviour in adults. So Stanovich would need a narrower definition of ‘decoupling’ that excludes the banana-telephone example if he wants to eventually use it as a rationality criterion.

So I wasn’t very impressed with this as a plausible mechanism for decoupling. Then again, the mechanism I’d been imagining turns out to have some obvious failings too.

Rabbits and the St. Louis Arch

When I first started thinking about cognitive decoupling, I imagined a very different history for the term. ‘Decoupling’ sounds very physicsy to me, bringing up associations of actual interaction forces and coupling constants, and I’d been reading Dreyfus’s Why Heideggerian AI Failed, which discusses dynamical-systems-inspired models of cognition:

Fortunately, there is at least one model of how the brain could provide the causal basis for the intentional arc. Walter Freeman, a founding figure in neuroscience and the first to take seriously the idea of the brain as a nonlinear dynamical system, has worked out an account of how the brain of an active animal can find and augment significance in its world. On the basis of years of work on olfaction, vision, touch, and hearing in alert and moving rabbits, Freeman proposes a model of rabbit learning based on the coupling of the brain and the environment…

The organism normally actively seeks to improve its current situation. Thus, according to Freeman’s model, when hungry, frightened, disoriented, etc., the rabbit sniffs around until it falls upon food, a hiding place, or whatever else it senses it needs. The animal’s neural connections are then strengthened to the extent that reflects the extent to which the result satisfied the animal’s current need. In Freeman’s neurodynamic model, the input to the rabbit’s olfactory bulb modifies the bulb’s neuron connections according to the Hebbian rule that neurons that fire together wire together.

In many ways this still sounds like a much more promising starting point to me than the inference-rule-following of the Leslie paper. For a start, it seems to fit much better with what’s known about the architecture of the brain (I think – I’m pretty ignorant about this). Neurons are very slow compared to computer processors, but make up for this by being very densely interconnected. So getting anything useful done would rely on a huge amount of activation happening in parallel, producing a kind of global, diffuse ‘background context’ that isn’t sharply divided into separate concepts.

Better still, the problem of how situations intrinsically mean something about the world is sidestepped, because in this case, the rabbit and environment are literally, physically coupled together. A carrot smell out in the world pulls the rabbit’s olfactory bulb into a different state, which itself pulls the rabbit into a different kind of behaviour, which in turn alters the global structure of the bulb in such a way that this behaviour is more likely to occur again in the future. This coupling is so direct that referring to it as a ‘representation’ seems like overkill:

Freeman argues that each new attractor does not represent, say, a carrot, or the smell of carrot, or even what to do with a carrot. Rather, the brain’s current state is the result of the sum of the animal’s past experiences with carrots, and this state is directly coupled with or resonates to the affordance offered by the current carrot.

However, this is also where the problems come in. Everything is so closely causally coupled that there’s no room in this model for decoupling! The idea behind ‘cognitive decoupling’ is to be able to pull away from the world long enough to consider things in the abstract, without all the associations that normally get dragged along for free. In the olfactory bulb model, the rabbit is so locked into its surroundings that this sort of distance is unattainable.

At some point I was googling a bunch of keywords like ‘dynamical systems’ and ‘decoupling’ in the hope of fishing up something interesting, and I came across a review by Rick Grush of Mind as Motion: Explorations in the Dynamics of Cognition by Port and van Gelder, which had a memorable description of the problem:

…many paradigmatically cognitive capacities seem to have nothing at all to do with being in a tightly coupled relationship with the environment. I can think about the St. Louis Arch while I’m sitting in a hot tub in southern California or while flying over the Atlantic Ocean.

Even this basic kind of decoupling from a situation – thinking about something that’s not happening to you right now – needs some capacities that are missing from the olfactory bulb model. Grush even uses the word ‘decoupling’ to describe this:

…what is needed, in slightly more refined terms, is an executive part, C (for Controller), of an agent, A, which is in an environment E, decoupling from E, and coupling instead to some other system E’ that stands in for E, in order for the agent to ‘think about’ E (see Figure 2). Cognitive agents are exactly those which can selectively couple to either the ‘real’ environment, or to an environment model, or emulator, perhaps internally supported, in order to reason about what would happen if certain actions were undertaken with the real environment.

This actually sounds like a plausible alternate history for Stanovich’s idea, with its intellectual roots in dynamical systems rather than the representational theory of mind. So maybe my hallucinations were not too silly after all.
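Grush’s controller picture is easy to caricature in code. Here’s a toy version (my sketch, with made-up names): an agent that can couple to the real environment E, or decouple and run against an internal snapshot E′ instead:

```python
class Agent:
    def __init__(self, env):
        self.env = env            # the real environment E
        self.model = dict(env)    # the emulator E', a snapshot taken now

    def perceive(self, key, simulate=False):
        # couple to E' when simulating, to E when acting in the world
        target = self.model if simulate else self.env
        return target.get(key)

env = {"st_louis_arch": "standing"}
a = Agent(env)

# Thinking about the Arch from a hot tub in California:
assert a.perceive("st_louis_arch", simulate=True) == "standing"

env["st_louis_arch"] = "obscured by fog"             # the world changes...
assert a.perceive("st_louis_arch", simulate=True) == "standing"  # ...E' doesn't
assert a.perceive("st_louis_arch") == "obscured by fog"          # recoupling to E
```

The two assertions at the end are the whole story: the emulator supports thought at a distance precisely because it can drift from the world, and recoupling is what keeps that drift in check.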

Final thoughts

I still think that the idea of cognitive decoupling is getting at something genuinely interesting – otherwise I wouldn’t have spent all this time rambling on about it! I don’t think the current representational story for how it works is much good. But the ability to isolate ‘abstract structure’ (whatever that means, exactly) from its surrounding context does seem to be a real skill that people vary in. In practice I expect that much of this context will be more of a diffuse associational soup than the sharp propositional statements of Leslie’s pretence model.

It’s interesting to me that the banana phone model and the olfactory bulb model both run into problems, but in opposite directions. Leslie’s banana phone relies on a bunch of free-floating propositions (‘this is a banana’), with no story for how they refer to actual bananas and phones out in the world. Freeman’s rabbit olfactory bulb has no problem with this – relevance is guaranteed through direct causal coupling to the outside world – but it’s so directly coupled that there’s no space for decoupling. We need something between these two extremes.

David Chapman pointed out to me that Brian Cantwell Smith already has a term for this in On the Origin of Objects – he calls it ‘the middle distance’ between direct coupling and causal irrelevance. I’ve been reading the book and have already found his examples to be hugely useful in thinking about this more clearly. These are worth a post in their own right, so I’ll describe them in a followup to this one.