April 2018

Hi all,

This is going to be a shorter one. I did manage to spend more time outside and less time sitting around obsessing over obscure Less Wrong posts, so there is not so much material.

(Edit after writing: not as short as I expected! But a lot of the length is me quoting other people.)

I haven’t been thinking about physics so much either, so I’ll skip that for this month. I’m going to a short physics workshop in Vienna in June, and the plan is to have May be all physics all the time, so that everything is at the top of my head by the time I get there. It’ll mostly be more of the van Enk paper, and also spending a little bit of time trying to digest Carlo Rovelli’s relational interpretation of quantum mechanics, as he will be speaking to us at the workshop.

Cognitive decoupling

I wrote a blog post early this month on cognitive decoupling, which is a term I got from Sarah Constantin:

Stanovich talks about “cognitive decoupling”, the ability to block out context and experiential knowledge and just follow formal rules, as a main component of both performance on intelligence tests and performance on the cognitive bias tests that correlate with intelligence.  Cognitive decoupling is the opposite of holistic thinking. It’s the ability to separate, to view things in the abstract, to play devil’s advocate.

I got some excellent comments. In fact, all of them were really good. I’m still thinking about one from nostalgebraist:

… when I think about my own thinking, I’m tempted to say that I, like you, frequently rely on “understanding the domain” in place of “decoupling.” For example, I rely a lot on concrete examples of each abstract concept to guide my thinking about a concept; when I’m reasoning about a vector space, there is a “felt sense” of vector spaces in my head which is a sort of cloud of pictures and standard examples and other less articulable things, sort of like the felt sense that enters my mind when I think about a person I know. Perhaps this is a “coupled” way of thinking, but I’m having trouble envisioning what a “decoupled” alternative would even look like. If I were to banish these associations from my mind, and focus only on the vector space axioms themselves, I would still have to choose how to apply those axioms to whatever I’m doing, and that is what I get out of the associations.

(Likewise, skilled chess players depend on intuition and felt senses about board positions, and it is hard to imagine what a more “formal” alternative would look like. It’s not as if some chess players “use the rules” more than others; they all use the rules, but there is more to playing chess than that.)

I agree with all of this! But even given this, I still want to say that there is a difference… I’m just not really sure what this difference consists of:

I think that what I’m trying to claim is that this extra stuff varies between people (and for individual people as they tackle different problems), and that some of it looks more cognitively coupled and less like “using the rules” than other stuff.

For instance, a geometer using their visual intuition for a curved surface as a guide is doing something tightly coupled. On the other hand, sometimes I’m just sort of churning through algebraic steps and I don’t have any understanding of what each step means in itself.

There’s definitely some confusion here, which nostalgebraist was right to point out. A lot of my motivation for thinking about this stuff comes from something like this: ‘There’s this one thing in maths that I don’t have much ability to do, but other people seem to do it fine, and it’s something to do with if-this-then-that chains of logical reasoning.’ And I’m arguing that this is connected to decoupling, but a different possibility is that there’s a different set of intuitions that couple to some other physical modality of reasoning (understanding of processes in time? I dunno), and that other people are using this to do logical reasoning instead:

… I have very little idea what other people are doing when they churn through equations. I don’t know if they are reasoning things explicitly using formal rules, and are just much faster than me, or they have more memory slots than me to keep the churning in. That would strike me as pretty decoupled. Another option is that they are doing the ‘if this then that’ stuff intuitively, and have some native hardware to do it on that I am annoyingly lacking in. In that case, maybe it isn’t that decoupled? Maybe there’s *another* set of intuitions and felt senses to do with formal logic, that I just completely lack access to. In that case it looks decoupled from my perspective, because I would have to just sit down with a piece of paper for hours and have a miserable time pushing decontextualised symbols around.

I’d be interested to hear what other people think. I’d particularly like to hear from anyone who is doing logical this-then-that stuff in a more coupled, ‘intuitive’ style, and if so what they’re grounding it with.

Possibly related: Sarah Constantin again, on effortful attention:

“Conscious by choice” seems to be pointing at the phenomenon of effortful attention, while “the unfocused mechanism of consciousness” is more like awareness.  There seems to be some intuition here that effortful attention is related to the productive abilities of humanity, our ability to live in greater security and with greater thought for the future than animals do.  We don’t usually “think on purpose”, but when we do, it matters a lot.

We should be thinking of “being conscious by choice” more as a sort of weird Bene Gesserit witchcraft than as either the default state or as an irrelevant aberration. It is neither the whole of cognition, nor is it unimportant — it is a special power, and we don’t know how it works.

I don’t know if there’s some link between ‘effortful’ and ‘decoupled’… other people with more of the Bene Gesserit power for effortful attention may have more ability to keep churning in the face of not being able to match up the steps to anything in particular.

I also got a message from Raymond Finzel (@rfinz) asking if I’d read Epistemological Pluralism and the Revaluation of the Concrete, by Sherry Turkle and Seymour Papert. I had, and it’s very relevant, and I even wrote a post inspired by it… and then apparently forgot about it completely when I wrote this one! It must have been in the back of my head somewhere, though. It’s well worth reading. Sample quote:

When working with Lego materials and motors, most children make a robot walk by attaching wheels to a motor that makes them turn. They are seeing the wheels and the motor through abstract concepts of the way they work: the wheels roll, the motor turns. Alex goes a different route. He looks at the objects more concretely; that is, without the filter of abstractions. He turns the Lego wheels on their sides to make flat “shoes” for his robot and harnesses one of the motor’s most concrete features: the fact that it vibrates. As anyone who has worked with machinery knows, when a machine vibrates it tends to “travel,” something normally to be avoided. When Alex ran into this phenomenon, his response was ingenious. He doesn’t use the motor to make anything “turn,” but to make his robot (greatly stabilized by its flat “wheel shoes”) vibrate and thus “travel.” When Alex programs, he likes to keep things similarly concrete…

… Alex wanted to draw a skeleton. Structured programming views a computer program as a hierarchical sequence. Thus, a structured program TO DRAW SKELETON might be made up of four subprocedures: TO HEAD, TO BODY, TO ARMS, TO LEGS, just as TO SQUARE could be built up from repetitions of a subprocedure TO SIDE. But Alex rebels against dividing his skeleton program into subprocedures; his program inserts bones one by one, marking the place for insertion with repetitions of instructions. One of the reasons often given for using subprocedures is economy in the number of instructions. Alex explains that doing it his way was “worth the extra typing” because the phrase repetition gave him a “better sense of where I am in the pattern” of the program. He had considered the structured approach but prefers his own style for aesthetic reasons: “It has rhythm,” he says. In his opinion, using subprocedures for parts of the skeleton is too arbitrary and preemptive, one might say abstract. “It makes you decide how to divide up the body, and perhaps you would change your mind about what goes together with what. Like, I would rather think about the two hands together instead of each hand with the arms.”
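The contrast Turkle and Papert describe maps onto a familiar programming choice. Here’s a rough Python sketch of the two styles (the skeleton parts and function names are my own illustration, not from the paper):

```python
# Structured style: a hierarchy of named subprocedures, like a
# TO DRAW SKELETON built from TO HEAD, TO BODY, TO ARMS, TO LEGS.
def head():
    return ["skull"]

def body():
    return ["spine", "ribs"]

def arms():
    return ["left arm", "right arm"]

def legs():
    return ["left leg", "right leg"]

def skeleton_structured():
    return head() + body() + arms() + legs()

# Alex's style: one flat sequence, inserting bones one by one.
# The repetition is deliberate; for him it gives a "better sense
# of where I am in the pattern" of the program.
def skeleton_concrete():
    bones = []
    bones.append("skull")
    bones.append("spine")
    bones.append("ribs")
    bones.append("left arm")
    bones.append("right arm")
    bones.append("left leg")
    bones.append("right leg")
    return bones
```

Both build the same skeleton; the difference is entirely in how (or whether) the programmer carves the problem into abstract parts in advance.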


Cognitive decoupling, take two

The post has made a reappearance in the last week, with John Nerst incorporating the idea into a deep dive into an argument between Sam Harris and Ezra Klein, where it works surprisingly well. I wondered why he’d decided to go so far into some random internet controversy-of-the-month, but then I read the emails and it was really a perfect example of intelligent people talking past each other, and that’s his favourite topic, so it makes sense. Luckily for me, the rest of the source material seems to be podcasts, and I don’t care that much, so I will hopefully not get sucked into reading any more of this.

Anyway, he does a good job of explaining the thing. David Chapman pulled out a lot of the key ideas in this series of tweets, which introduced me to the idea of circumscription.

Systematic rationality only works if you do limit inference, because nothing is really truly true. If you take true-enough-for-this-context as “truly true,” you infer way too much. If you take it as “false,” you can’t infer anything.

Effective use of rational inference requires “circumscription assumptions,” which say what sorts of considerations count as relevant in a particular situation, for particular purposes.

I think the connection to decoupling is that once you’re within a set of circumscription assumptions, you can use decoupled reasoning to infer consequences. But sometimes to make progress you need to pull yourself out of your current frame, and this is where recoupling to a more raw experiential view of the world can help. Like Alex in the Turkle and Papert quote, turning the wheels into shoes for his robot and using the motor’s vibration to make it travel. This makes a lot of sense, but I need to think some more about exactly how the two ideas interact.
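To make the circumscription idea concrete, here’s a toy sketch in Python (my own illustration of default reasoning with a bounded exception set, not Chapman’s framing or McCarthy’s formal version):

```python
# Default rule: within this frame, "birds fly" counts as truly true.
# The circumscription assumption is the set of abnormal cases we
# agree to keep in view; the rule itself never changes.
def can_fly(bird, abnormal):
    return bird not in abnormal

# Narrow frame: no abnormality in view, so we infer that Tweety flies.
narrow = can_fly("tweety", abnormal=set())

# Recoupled frame: we let in the fact that Tweety is a penguin, and
# the same rule now licenses the opposite conclusion.
wider = can_fly("tweety", abnormal={"tweety"})
```

The point is that neither answer comes from the rule being “really truly true” or “false”; it comes from which considerations the frame admits as relevant.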

It’s been fun getting these comments. I’m still new enough to blogging that this is pretty cool, reviving some idea and then watching it go off on its own weird journey around the internet. Even if it means that decoupling is now on the SSC subreddit and the discussions are terrible.

I have probably done some violence to Stanovich’s meaning in the process, though. Sarah Constantin seems to be pretty careful normally, but I haven’t been at all – I just liked the idea and started blathering about it. Should really do some more research into what he meant by it. I read a little bit of his stuff when I was doing some research into the Cognitive Reflection Test (I griped about it here but keep meaning to do a proper post) and I think I’m talking about a very similar thing, but would have to read more to be sure.

Popper, Deutsch, critical rationalism

I was having an email conversation with David Chapman following on from the last one of these, and we got on to the subject of Popper… is it just the falsification thing and explaining that inductivism is rubbish, or does he also have a more subtle, interesting theory of what counts as good science?   

I’d definitely got it into my head that it was the latter, but I’d originally picked this up not from Popper but from the physicist David Deutsch. He wrote a book called The Fabric of Reality, which I read when I was 18 and cramming all the popular science I could find into my head as fast as possible. This book has a ridiculously ambitious structure where it weaves together ‘four strands’ of reasoning – evolution, quantum physics (particularly the many worlds interpretation), computation and Popperian epistemology – into some synthesised ‘big picture’ that I have completely forgotten. The only bits of the book I remember well are the early chapters on Popper, which I really liked.

Deutsch talks about how science is about good explanations (rather than, say, inductivist-style predictions based on previous evidence). This makes sense to me, but what I don’t remember is how much detail he goes into on what explanations actually are. What are explanations made of, and what counts as a good one?

I’m only on the early chapters of The Fabric of Reality at the moment and that isn’t really discussed. I have a vague idea that he does have a story on what an explanation is, and it’s really weird. As in, he really does weave all those four strands together, and conjectures that ‘information’ has something to do with properties that are invariant over many universes in the multiverse, or something.

This definitely doesn’t come from Popper! I’ll have to read a bit more to be sure of what it is, though. I also have a copy of The Logic of Scientific Discovery, so I should be able to see what Popper actually wrote.

One reason I care about this is that Deutsch seems to be plugging Popper a lot currently under the label ‘Critical Rationalism’. And I’d like to know what’s Popper and what’s Deutsch’s esoteric interpretation of Popper.

The main reason I care, though, is just that I’d love to see deeper explorations of what explanation is, and how you find good ones.

Next month…

… will be a physics month. Also an internet diet month, so I won’t be on Twitter or blogs and so on. I’ll be back on them sometime in mid-June. I’m still very happy to hear from you by email, though!

Cheers, and thanks for reading,