Metarationality: a messy introduction

In the last couple of years, David Chapman’s Meaningness site has reached the point where enough of the structure is there for a bunch of STEM nerds like me to start working out what he’s actually talking about. So there’s been a lot of excited shouting about things like ‘metarationality’ and ‘ethnomethodology’ and ‘postformal reasoning’.

Not everyone is overjoyed by this. There was a Less Wrong comment by Viliam back in January which I thought made the point clearly:

How this all feels to me:

When I look at the Sequences, as the core around which the rationalist community formed, I find many interesting ideas and mental tools. (Randomly listing stuff that comes to my mind: Bayes theorem, Kolmogorov complexity, cognitive biases, planning fallacy, anchoring, politics is the mindkiller, 0 and 1 are not probabilities, cryonics, having your bottom line written first, how an algorithm feels from inside, many-worlds interpretation of quantum physics, etc.)

When I look at “Keganism”, it seems like an affective spiral based on one idea.

I am not saying that it is a wrong or worthless idea, just that comparing “having this ‘one weird trick’ and applying it to everything” with the whole body of knowledge and attitudes is a type error. If this one idea has merit, it can become another useful tool in a large toolset. But it does not surpass the whole toolset or make it obsolete, which the “post-” prefix would suggest.

Essentially, the “post-” prefix is just a status claim; it connotationally means “smarter than”.

To compare, Eliezer never said that using Bayes theorem is “post-mathematics”, or that accepting many-worlds interpretation of quantum physics is “post-physics”. Because that would just be silly. Similarly, the idea of “interpenetration of systems” doesn’t make one “post-rational”.

In other words, what have the metarationalists ever done for us? Rationality gave us a load of genuinely exciting cognitive tools. Then I went to metarationality, and all I got were these lousy Kegan stages.

This would be a very fair comment, if the only thing there was the Kegan idea. I’m one of the people who does find something weirdly compelling in that, and I was thinking about it for months. But I have to admit it’s a total PR disaster for attracting the people who don’t, on the level of equating torture with dust specks. (At least PR disasters are one thing that rationalists and metarationalists have in common!)

‘You don’t understand this because you haven’t reached a high enough stage of cognitive development’ is an obnoxious argument. People are right to want to run away from this stuff.

Also, as Viliam points out, it’s just one idea, with a rather unimpressive evidence base at that. That wouldn’t warrant a fancy new word like ‘metarationality’ on its own. [1]

Another idea I sometimes see is that metarationality is about the fact that a particular formal system of beliefs might not work well in some contexts, so it’s useful to keep a few in mind and be able to switch between them. This is point 2 of a second comment by Viliam on a different Less Wrong thread, trying to steelman his understanding of metarationality:

Despite admitting verbally that a map is not the territory, rationalists hope that if they take one map, and keep updating it long enough, this map will asymptotically approach the territory. In other words, that in every moment, using one map is the right strategy. Meta-rationalists don’t believe in the ability to update one map sufficiently (or perhaps just sufficiently quickly), and intentionally use different maps for different contexts. (Which of course does not prevent them from updating the individual maps.) As a side effect of this strategy, the meta-rationalist is always aware that the currently used map is just a map; one of many possible maps. The rationalist, having invested too much time and energy into updating one map, may find it emotionally too difficult to admit that the map does not fit the territory, when they encounter a new part of territory where the existing map fits poorly. Which means that on the emotional level, rationalists treat their one map as the territory.

Furthermore, meta-rationalists don’t really believe that if you take one map and keep updating it long enough, you will necessarily asymptotically approach the territory. First, the incoming information is already interpreted by the map in use; second, the instructions for updating are themselves contained in the map. So it is quite possible that different maps, even after updating on tons of data from the territory, would still converge towards different attractors. And even if, hypothetically, given infinite computing power, they would converge towards the same place, it is still possible that they will not come sufficiently close during one human life, or that a sufficiently advanced map would [not] fit into a human brain. Therefore, using multiple maps may be the optimal approach for a human. (Even if you choose “the current scientific knowledge” as one of your starting maps.)

I don’t personally find the map/territory distinction all that helpful and will talk about that more later. Still, I think that this is OK as far as it goes, and much closer to the central core than the Kegan stage idea. To me it’s rather a vague, general sort of insight, though, and there are plenty of other places where you could get it. I’m not surprised that people aren’t falling over themselves with excitement about it.

I think people are looking for concrete, specific interesting ideas, along the lines of Viliam’s list of concepts he learned from rationality. I very much have this orientation myself, of always needing to go from concrete to abstract, so I think I understand a bit of what’s missing for a lot of people.

(Example: My first experience of reading Meaningness a few years ago was to click around some early posts, read a lot of generalities about words like ‘nebulosity’ and ‘pattern’ and ‘eternalism’, and completely miss the point. ‘Maybe this is some kind of Alain de Botton style pop philosophy? Anyway, it’s definitely not something I care about.’ There are a lot more specifics now, so it’s much easier to follow a concrete-to-abstract path and work out what’s going on. If you also tend to learn this way, I’d advise starting with the most recent parts, or something from the metablog, and working your way back.)

I do think that these concrete, specific ideas exist. I wrote a little bit in that first Less Wrong thread about what I found interesting, but it’s pretty superficial and I’ve thought about it a lot more since. This is my first attempt at a more coherent synthesis post. I’ve written it mostly for my own benefit, to make myself think more clearly about some vague parts, and it’s very much work-in-progress thinking out loud, rather than finalised understanding. This is the kind of thing I enjoy writing, out on the ‘too early’ side of this ‘when to write’ graph. (This blog is mostly for talking about things I’m still trying to understand myself, rather than polished after-the-fact explanation.)

There’s a lot there, it’s real stuff rather than vague generalities about how systems sometimes don’t work very well, and it’s very much worth bothering with. That’s what I want to try and get across in this post!

Also, this is just my idea of what’s interesting, and I’ve stuck to one route through the ideas in this post because otherwise the length would get completely out of control. Maybe others see things completely differently. If so, I’d like to know. I’ve filled this with links to make it easy to explore elsewhere.


Contents

This post is pretty long, so maybe a summary of what’s in it would be a good idea. Roughly, I’m going to cover:

  • How we think we think, vs. how we actually think: if you look closely at even the most formal types of reasoning, what we’re actually doing doesn’t tend to look so rigorous and logical.
  • Still, on its own that wouldn’t preclude the brain doing something very rigorous and logical behind the scenes, in the same way that we don’t consciously know about the image processing our brain is doing for our visual field. Some discussion of why the prospects for that don’t look great either.
  • Dumb confused interlude on the unreasonable effectiveness of mathematics.
  • The cognitive flip: queering the inside-the-head/outside-the-head binary. Sarah Perry’s theory of mess as a nice example of this.
  • Through fog to the other side: despite all this confusion, we can navigate anyway.

How we think we think, vs. how we actually think

I’ve got like a thousand words into this without bringing up my own weirdo obsession with mathematical intuition, which is unusually restrained, so let’s fix that now. It’s something I find interesting in itself, but it’s also this surprisingly direct rabbit hole into some rather fundamental ideas about how we think.

Take mathematical proofs, for example. In first year of a maths degree, everyone makes a massive deal out of how you’re going to be learning to write proofs now, and how this is some extra special new rigorous mode of thought that will teach you to think like a real mathematician. This is sort of true, but I found the transition incredibly frustrating and confusing, because I could never work out what was going on. What level of argument actually constitutes a proof?

I’d imagined that a formal proof would be some kind of absolutely airtight thing where you started with a few reasonable axioms and some rules of inference, and derived everything from that. I was quite excited, because it did sound like that would be a very useful thing to learn!

We did learn a little bit of formal logic stuff, basic propositional and predicate calculus. But most of the proofs we saw were not starting there, along with say the axioms of the real numbers, and working up. (If they had, they’d have been hundreds of pages long and completely unilluminating.)

Instead we proved stuff at this weird intermediate level. There were theorems like ‘a sequence of numbers that always increases, but is bounded by some upper value, will converge’, or ‘a continuous function f on the interval from a to b takes on all the values between f(a) and f(b)’. We weren’t deriving these from basic axioms. But also we weren’t saying ‘yeah that’s obviously true, just look at it’, like I’d have done before going to the class. Instead there was this special domain-specific kind of reasoning with lots of epsilons and deltas, and a bunch of steps to include.
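
To give a flavour of that intermediate level, here’s roughly how the first of those theorems goes (my own sketch from memory, so don’t treat it as course-notes gospel):

    Claim: an increasing sequence (a_n) that is bounded above converges.
    Sketch: let L be the supremum of the a_n, which exists because the sequence is bounded above.
    Given any ε > 0, by the definition of the supremum there is some N with a_N > L − ε.
    Because the sequence is increasing, a_n ≥ a_N > L − ε for every n ≥ N, and a_n ≤ L always.
    So |a_n − L| < ε for all n ≥ N, which is exactly the ε–N definition of convergence to L.

Notice the level it’s pitched at: the existence of the supremum is just assumed (it’s a property of the real numbers we take as given, not something derived from axioms), but the ε–N bookkeeping is spelled out in full.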

How do you know which steps are the important ones to include? I never really worked that out at the time. In practice, I just memorised the proof and regurgitated it in the exam, because at least that way I knew I’d get the level right. Unsurprisingly, this didn’t exactly ignite a fire of deep intellectual excitement inside me. In the end I just gave up and started taking as many applied maths and physics courses as possible, where I could mostly continue doing whatever I was doing before they tried to introduce me to this stupid proofs thing.

If I went back now I’d probably understand. That weird intermediate level probably is the right one, the one that fills in the holes where your intuition can go astray but also avoids boring you with the rote tedium of obvious deductive steps. [2] Maybe seeing more pathological examples would help, cases where your intuitive ideas really do fail and this level of rigour is actually useful. [3]

An interesting question at this point is, how do you generate these intermediate-level proofs? One answer would be that you are starting from the really formal level, thinking up very careful airtight proofs in your head, and then only writing down some extracted key steps. I think it’s fairly clear you’re not doing that, at least at the level of conscious access (more on the idea that that’s what we’re ‘really’ doing later).

The reality seems to be messier. Explicitly thinking through formal rules is useful some of the time. But it’s only one method among many.

Sometimes, for example, you notice that, say, an equation admits an algebraic simplification you’ve used many times before, and mechanistic formula-churning takes over for a while. This may have required thinking through formal rules when you first learned it, but by now your fingers basically know the answer. Sometimes the resulting expression looks messy, and some rather obsessive part of the mind is not happy until like terms are collected tidily together. Sometimes you realise that part of the complexity of the problem ‘is just book-keeping’, and can be disposed of by, for example, choosing the origin of your coordinate system sensibly. The phrase ‘without loss of generality’ becomes your friend. Sometimes a context-specific trick comes to mind (‘probably it’s another one of those thingies where we sandwich it between two easier functions and show that they converge’).
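
(For instance, the ‘sandwich’ trick in that last parenthetical: the textbook example is something like x·sin(1/x) near zero. Since |sin(1/x)| ≤ 1 we have −|x| ≤ x·sin(1/x) ≤ |x| for every x ≠ 0, and both bounds go to 0 as x → 0, so the thing in the middle gets squeezed to 0 as well. That example is my own choice, picked only because it’s standard.)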

There’s no real end to the list of things to try. Generating proofs is a fully general education in learning to Think Real Good. But some fundamental human faculties come up again and again. The mathematician Bill Thurston gave a nice list of these in his wonderful essay On proof and progress in mathematics. This is the sort of essay where when you start quoting it, you end up wanting to quote the whole lot (seriously just go and read it!), but I’ve tried to resist this and cut the quote down to something sensible:

(1) Human language. We have powerful special-purpose facilities for speaking and understanding human language, which also tie in to reading and writing. Our linguistic facility is an important tool for thinking, not just for communication. A crude example is the quadratic formula which people may remember as a little chant, “ex equals minus bee plus or minus the square root of bee squared minus four ay see all over two ay.” …

(2) Vision, spatial sense, kinesthetic (motion) sense. People have very powerful facilities for taking in information visually or kinesthetically, and thinking with their spatial sense. On the other hand, they do not have a very good built-in facility for inverse vision, that is, turning an internal spatial understanding back into a two-dimensional image. Consequently, mathematicians usually have fewer and poorer figures in their papers and books than in their heads …

(3) Logic and deduction. We have some built-in ways of reasoning and putting things together associated with how we make logical deductions: cause and effect (related to implication), contradiction or negation, etc. Mathematicians apparently don’t generally rely on the formal rules of deduction as they are thinking. Rather, they hold a fair bit of logical structure of a proof in their heads, breaking proofs into intermediate results so that they don’t have to hold too much logic at once …

(4) Intuition, association, metaphor. People have amazing facilities for sensing something without knowing where it comes from (intuition); for sensing that some phenomenon or situation or object is like something else (association); and for building and testing connections and comparisons, holding two things in mind at the same time (metaphor). These facilities are quite important for mathematics. Personally, I put a lot of effort into “listening” to my intuitions and associations, and building them into metaphors and connections. This involves a kind of simultaneous quieting and focusing of my mind. Words, logic, and detailed pictures rattling around can inhibit intuitions and associations.

(5) Stimulus-response. This is often emphasized in schools; for instance, if you see 3927 × 253, you write one number above the other and draw a line underneath, etc. This is also important for research mathematics: seeing a diagram of a knot, I might write down a presentation for the fundamental group of its complement by a procedure that is similar in feel to the multiplication algorithm.

(6) Process and time. We have a facility for thinking about processes or sequences of actions that can often be used to good effect in mathematical reasoning. One way to think of a function is as an action, a process, that takes the domain to the range. This is particularly valuable when composing functions. Another use of this facility is in remembering proofs: people often remember a proof as a process consisting of several steps.

Logical deduction makes an appearance on this list, but it’s not running the show. It’s one helpful friend in a group of equals. [4]

Mathematics isn’t special here: it just happened to be my rabbit hole into thinking about how we think about things. It’s a striking example, because the gap between the formal stories we like to tell and the messy reality is such a large one. But there are many other rabbit holes. User interface design (or probably any kind of design) is another good entry point. You’re looking at what people actually do when using your software, not your clean elegant theory of what you hoped they’d do. [5]

Apparently the general field that studies what people actually do when they work with systems is called ‘ethnomethodology’. Who knew? Why does nobody tell you this??

(Side note: if you poke around Bret Victor’s website, you can find this pile of pdfs, which looks like some kind of secret metarationality curriculum. You can find the Thurston paper and some of the mathematical intuition literature there, but overall there’s a strong design/programming focus, which could be a good way in for many people.)

After virtue epistemology

On its own, this description of what we do when we think about a problem shouldn’t necessarily trouble anyone. After all, we don’t expect to have cognitive access to everything the brain does. We already expect that we’re doing a lot of image processing and pattern recognition and stuff. So maybe we’re actually all running a bunch of low-level algorithms in our head which are doing something very formal and mathematical, like Bayesian inference or something. We have no direct access to those, though, so maybe it’s perfectly reasonable that what we see at a higher level looks like a bunch of disconnected heuristics. If we just contented ourselves with a natural history of those heuristics, we might be missing out on the chance of a deeper explanatory theory.

Scott Alexander makes exactly this point in a comment on Chapman’s blog:

Imagine if someone had reminded Archimedes that human mental simulation of physics is actually really really good, and that you could eyeball where a projectile would fall much more quickly (and accurately!) than Archimedes could calculate it. Therefore, instead of trying to formalize physics, we should create a “virtue physics” where we try to train people’s minds to better use their natural physics simulating abilities.

But in fact there are useful roles both for virtue physics and mathematical physics. As mathematical physics advances, it can gradually take on more of the domains filled by virtue physics (the piloting of airplanes seems like one area where this might have actually happened, in a sense, and medicine is in the middle of the process now).

So I totally support the existence of virtue epistemology but think that figuring out how to gradually replace it with something more mathematical (without going overboard and claiming we’ve already completely learned how to do that) is a potentially useful enterprise.

Chapman’s response is that

… if what I wrote in “how to think” looked like virtue ethics, it’s probably only because it’s non-systematic. It doesn’t hold out the possibility of any tidy answer.

I would love to have a tidy system for how to think; that would be hugely valuable. But I believe strongly that there isn’t one. Pursuing the fantasy that maybe there could be one is actively harmful, because it leads away from the project of finding useful, untidy heuristics.

This is reasonable, but I still find it slightly disappointing, in that it seems to undersell the project as he describes it elsewhere. It’s true that Chapman isn’t proposing a clean formal theory that will explain all of epistemology. But my understanding is that he is trying to do something more explanatory than just cataloguing a bunch of heuristics, and that doesn’t come across here. In other parts of his site he gives some indication of the sorts of routes to better understanding of cognition he finds promising.

Hopefully he’s going to expand on the details some time soon, but it’s tempting to peek ahead and try and work out the story now. Again, I’m no expert here, at all, so for the next section assume I’m doing the typical arrogant physicist thing.

The posts I linked above gave me lots of pieces of the argument, but at first I couldn’t see how to fit them into a coherent whole. Scott Alexander’s recent predictive processing post triggered a few thoughts that filled in some gaps, so I went and pestered Chapman in his blog comments to check I had the right idea.

Scott’s post is one of many posts where he distinguishes between ‘bottom-up’ and ‘top-down’ processing.

Bottom-up processing starts with raw sensory data and repackages it into something more useful: for vision, this would involve correcting for things like the retinal blind spot and the instability of the scene as we move our gaze. To quote from the recent post:

The bottom-up stream starts out as all that incomprehensible light and darkness and noise that we need to process. It gradually moves up all the cognitive layers that we already knew existed – the edge-detectors that resolve it into edges, the object-detectors that shape the edges into solid objects, et cetera.

Top-down processing is that thing where Scott writes ‘the the’ in a sentence and I never notice it, even though he always does it. It’s the top-down expectations (‘the word “the” isn’t normally repeated’) we’re imposing on our perceptions.

This division makes a lot of sense as a general ordering scheme: we know we’re doing both these sorts of things, and that we somehow have to take both into consideration at once when interpreting the scene. The problem is working out what’s relevant. There’s a gigantic amount of possibly relevant sense data, and a gigantic amount of possibly relevant existing knowledge. We need to somehow extract the parts that are useful in our situation and make decisions on the fly.

On the bottom-up side, there are some reasonable ideas for how this could work. We can already do a good job of writing computer algorithms to process raw pixel data and extract important features. And there is often a reasonably clearcut, operational definition of what ‘relevant’ could possibly mean.

Relevant objects are likely to be near you rather than miles away; and the most salient objects are likely to be the ones that recently changed, rather than ones that have just sat there for the last week. These sort of rules reduce the pressure to have to take in everything, and push a lot of the burden onto the environment, which can cue you in.
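
As a toy illustration of how cheap those relevance cues can be, here’s a sketch – entirely my own, nothing to do with any real vision system, and every name and weight in it is invented:

    import numpy as np

    def salience(prev_frame, curr_frame, depth, change_weight=1.0, proximity_weight=1.0):
        # Crude per-pixel salience: things that just changed, or are nearby, score highly.
        change = np.abs(curr_frame.astype(float) - prev_frame.astype(float))  # what just moved?
        proximity = 1.0 / (1.0 + depth)  # nearer things (small depth values) score higher
        return change_weight * change + proximity_weight * proximity

    # Attend to the highest-scoring spot instead of representing the whole scene:
    # scores = salience(prev, curr, depth)
    # focus = np.unravel_index(np.argmax(scores), scores.shape)

Nothing clever is happening there, and that’s the point: you get a workable ‘what should I look at next?’ signal without building a model of the room.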

This removes a lot of the work. If the environment can just tell us what to do, there’s no need to go to the effort of building and updating a formal internal model that represents it all. Instead of storing billions of Facts About Sense Data you can have the environment hand them to you in pieces as required. This is the route Chapman discusses in his posts, and the route he took as an AI researcher, together with his collaborator Phil Agre (see e.g. this paper (pdf) on a program, Pengi, that they implemented to play an arcade game with minimal representation of the game surroundings).
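
Here’s a toy caricature of that flavour – emphatically not the actual Pengi architecture, just my own illustration of ‘recompute the few things you care about from the current frame, rather than keeping a standing model of everything’, with all the names and numbers invented:

    from math import dist

    # The 'frame' is whatever the environment hands you this tick.
    frame = {"me": (5, 5), "bees": [(2, 3), (9, 9)], "blocks": [(6, 5), (0, 0)]}

    def nearest(points, me):
        # 'the-nearest-one-of-these': an indexical picked out afresh each tick, not a stored fact
        return min(points, key=lambda p: dist(p, me), default=None)

    def act(frame):
        me = frame["me"]
        bee = nearest(frame["bees"], me)
        block = nearest(frame["blocks"], me)
        if bee and dist(bee, me) < 3:
            return ("flee", bee)
        if block:
            return ("push", block)
        return ("wander",)

    print(act(frame))  # ('push', (6, 5)) for the frame above

The program never stores ‘bee number two is at (9, 9)’ anywhere; next tick it just looks again.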

In the previous section I tried to weaken the importance of formal representations from the outside, by looking at how mathematical reasoning occurs in practice as a mishmash of cognitive faculties. Situated cognition aims to weaken it from the inside instead by building up models that work anyway, without the need for too much representation.

Still, we’re pushing some way beyond ‘virtue epistemology’, by giving ideas for how this would actually work. In fact, so far there might be no disagreement with Scott at all! Scott is interested in ideas like predictive processing and perceptual control theory, which also appear to look at changes in the sense data in front of you, rather than trying to represent everything as tidy propositions.

However, we also have to think about the top-down side. Scott has the following to say about it:

The top-down stream starts with everything you know about the world, all your best heuristics, all your priors, everything that’s ever happened to you before – everything from “solid objects can’t pass through one another” to “e=mc^2” to “that guy in the blue uniform is probably a policeman”.

This looks like the bit where the representation sneaks in. We escaped the billions of Facts About Sense Data, but that looks very like billions of Facts About Prior Experience to me. We’d still need to sort through them and work out what’s relevant somehow. I haven’t read the Clark book, and Scott’s review is very vague about how this works:

Each level receives the predictions from the level above it and the sense data from the level below it. Then each level uses Bayes’ Theorem to integrate these two sources of probabilistic evidence as best it can.
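
For concreteness, here’s what that ‘integration’ amounts to in a toy two-hypothesis case – my own minimal sketch, not anything from Clark or from Scott’s review, with all the numbers invented:

    # Top-down prior over two hypotheses, and a bottom-up likelihood of the
    # sense data ("blue uniform") under each.
    prior = {"policeman": 0.02, "not_policeman": 0.98}
    likelihood = {"policeman": 0.90, "not_policeman": 0.05}  # P(blue uniform | hypothesis)

    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalised.values())
    posterior = {h: p / total for h, p in unnormalised.items()}

    print(posterior)  # roughly {'policeman': 0.27, 'not_policeman': 0.73}

That part is trivial. The trouble starts when you ask how long those dictionaries would really have to be.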

My response is sort of an argument from incredulity, at this point. Imagine expanding out the list of predictions, to cover all the things you know at the same level of specificity as ‘that guy in the blue uniform is probably a policeman’. That is an insane number of things! And you’re expecting people to sort through these on the fly, and compute priors for giant lists of hypotheses, and keep them reasonably consistent, and then run Bayesian updates on them? Surely this can’t be the story!

Arguments from incredulity aren’t the best kinds of arguments, so if you do think there’s a way that this could plausibly work in real time, I’d love to know. [6]

Coping with the complexity would be much more plausible if we could run the same situational trick as with the bottom-up case, finding some way of avoiding having to represent all this knowledge by working out which parts are important in the current context. But this time it’s far harder to figure out operationally how that would work. There’s no obvious spatial metric on our thoughts such that we can determine ‘which ones are nearby’ or ‘which ones just changed’ as a quick proxy for relevance. And the sheer variety of types of thought is daunting – there are no obvious ordering principles like the three dimensions of the visual field.

Chapman’s reply when I asked him about it was:

It was when we realized we had no idea how to address this that Phil [Agre] and I gave up on AI. If you take a cognitivist approach—i.e. representing knowledge using something like language or logic—the combinatorics are utterly impossible. And we had no good alternative.

So it’s not like I can suggest anything concrete that’s better than the billions of Facts About Prior Experience. That’s definitely a major weakness of my argument here! But maybe it’s enough to see that we didn’t need this explicit formal representation so far, and that it’s going to be a combinatorial disaster zone if we bring it in now. For me, that’s enough clues that I might want to look elsewhere.


Brief half-baked confused interlude: the mathematical gold standard

Maybe you could go out one stage further, though. OK, so we’re not consciously thinking through a series of formal steps. And maybe the brain isn’t doing the formal steps either. But it is true that the results of correct mathematical thinking are constrained by logic, that to count as correct mathematics, your sloppy intuitive hand-waving eventually has to cash out in a rigorous formal structure. It’s somehow there behind the scenes, like a gold standard backing our messy exchanges.

Mathematics really is unreasonably effective. Chains of logic do work amazingly well in certain domains.

I think this could be part of the MIRI intuition. ‘Our machines aren’t going to do any of this messy post-formal heuristic crap even if our brains are. What if it goes wrong? They’re going to actually work things out by actual formal logic.’ (Or at the very least, they’re going to verify things afterwards, with actual formal logic. Thanks to Kaj Sotala for pointing this distinction out to me a while ago.)

I don’t understand how this is possibly going to work. But I can’t pretend that I really know what’s going on here either. Maths feels like witchcraft to me a lot of the time, and most purported explanations of what it is make no sense to me. Philosophy of mathematics is bad in exactly the same way that moral philosophy is bad, and all the popular options are a mess. [7]


The cognitive flip

I think I want to get back to slightly more solid ground now. It’s not going to be much more solid though, because I’m still trying to work this out. There’s a bit of Chapman’s ‘Ignorant, irrelevant, and inscrutable’ post that puzzled me at first:

Many recall the transition from rationalism to meta-rationalism as a sudden blinding moment of illumination. It’s like the famous blotchy figure above: once you have seen its meaning, you can never unsee it again. After you get meta-rationality, you see the world differently. You see meanings rationalism cannot—and you can never go back.

This is probably an illusion of memory. The transition occurs only after years of accumulating bits of insight into the relationship between pattern and nebulosity, language and reality, math and science, rationality and thought. At some point you have enough pieces of the puzzle that the overall shape falls into place—but even then there’s a lot of work left to fill in the holes.

Now first of all, I don’t recall any such ‘blinding moment of illumination’ myself. That possibly doesn’t mean much, as it’s not supposed to be compulsory or anything. (Or maybe I just haven’t had it yet, and everything will make sense tomorrow…)

What was more worrying was that I had no clear idea of what the purported switch was supposed to be. I’ve thought about this a bit more now and I think I’m identifying it correctly. I think that the flip is to do with removing the clean split between having a world outside the head and a representation inside it.

In the ‘clean split’ worldview, you have sensory input coming in from the outside world, and your brain’s job is to construct an explicit model making sense of it. Here’s a representative quote summarising this worldview, taken from the Less Wrong wiki article on ‘The map is not the territory’:

Our perception of the world is being generated by our brain and can be considered as a ‘map’ of reality written in neural patterns. Reality exists outside our mind but we can construct models of this ‘territory’ based on what we glimpse through our senses.

This worldview is sophisticated enough to recognise that the model may be wrong in some respects, or it may be missing important details – ‘the map is not the territory’. However, in this view there is some model ‘inside the head’, whose job is to represent the outside world.

In the preceding sections we’ve been hacking away at the plausibility of this model. This breakdown frees us to consider different models of cognition, models that depend on interactions between the brain and the environment. Let’s take an example. One recent idea I liked is Sarah Perry’s proposed theory of mess.

Perry tackles the question of what exactly constitutes a mess. Most messes are made by humans. It’s rare to find something in the natural world that looks like a mess. Why is that?

Maybe this sounds like an obscure question. But it’s exactly the kind of question you might sniff out if you were specifically interested in breaking down the inside-the-head/outside-the-head split. (In fact, maybe this is part of the reason why metarationality tends to look contentless from the outside. Without the switch everything just looks like an esoteric special topic the author’s interested in that day. You don’t care about mess, or user interface design, or theatre improv, or mathematical intuition, or whatever. You came here for important insights on reality and the nature of cognition.)

You’re much better off reading the full version, with lots of clever visual examples, and thinking through the answer yourself. But if you don’t want to do that, her key thesis is:

… in order for mess to appear, there must be in the component parts of the mess an implication of extreme order, the kind of highly regular order generally associated with human intention. Flat uniform surfaces and printed text imply, promise, or encode a particular kind of order. In mess, this promise is not kept. The implied order is subverted.

So mess is out in the world – you need a bunch of the correct sort of objects in your visual field to perceive a mess. But mess is not just out in the world – you also have to impose your own expectation of order on the scene, based on the ordered contexts that the objects are supposed to appear in. Natural scenes don’t look like a mess because no such implied order exists.

Mess confuses the neat categories of ‘in the world’ and ‘in my head’:

It is as if objects and artifacts send out invisible tendrils into space, saying, “the matter around me should be ordered in some particular way.” The stronger the claim, and the more the claims of component pieces conflict, the more there is mess. It is these invisible, tangled tendrils of incompatible orders that we are “seeing” when we see mess. They are cryptosalient: at once invisible and obvious.

In the language of the previous section, we’re getting bottom-up signals from our visual field, which are resolved into a bunch of objects. And then by some ‘magic’ (invisible tendrils? a cascade of Bayesian updates?) the objects are recognised from the top-down side as implying an incompatible pile of different ordering principles. We’re seeing a mess.

Here’s some of my mess:

[Photo of my messy table.]

I can sort of still fit this into the map/territory scheme. Presumably the table itself and the pile of disordered objects are in the territory. And then the map would be… what? Some mental mess-detecting faculty that says ‘my model of those objects is that they should be stacked neatly, looks like they aren’t though’?

There is still some kind of principled distinction here, some way to separate the two. The territory corresponds pretty well to the bottom-up bit, and is characterised by the elements of experience that respond in unpredictable, autonomous ways when we investigate them. There’s no way to know a priori that my mess is going to consist of exercise books, a paper tetrahedron and a kitten notepad. You have to, like, go and look at it.

The map corresponds better to the top-down bit, the ordering principles we are trying to impose. These are brought into play by the specific objects we’re looking at, but have more consistency across environments – there are many other things that we would characterise as mess.

Still, we’ve come a long way from the neat picture of the Less Wrong wiki quote. The world outside the head and the model inside it are getting pretty mixed up. For one thing, describing the remaining ‘things in the head’ as a ‘model’ doesn’t fit too well. We’re not building up a detailed internal representation of the mess. For another, we directly perceive mess as mess. In some sense we’re getting the world ‘all at once’, without the top-down and bottom-up parts helpfully separated.

At this point I feel I’m getting into pretty deep metaphysical waters, and if I go much further in this direction I’ll make a fool of myself. Probably a really serious exploration in this direction involves reading Heidegger or something, but I can’t face that right now so I think I’ll finish up here.


Through fog to the other side

A couple of months ago I had an idea for a new blog post, got excited about it and wrote down this quick outline. That weekend I started work on it, and slowly discovered that every sentence in the outline was really an IOU for a thousand words of tricky exposition. What the hell had I got myself into? This has been my attempt to do the subject justice, but I’ve left out a lot. [8]

I hope I’ve at least conveyed that there is a lot there, though. I’ve mostly tried to do that through the methods of ‘yelling enthusiastically about things I think are worth investigating’ and ‘indicating via enthusiastic yelling that there might be a pile of other interesting things nearby, just waiting for us to dig them up’. Those are actually the things I’m most keen to convey, more even than the specifics in this post, but to do that I needed there to be specifics.

I care about this because I feel like I’m surrounded by a worrying cultural pessimism. A lot of highly intelligent people seem to be stuck in the mindset of ‘all the low-hanging fruit’s been plucked, everything interesting requires huge resources to investigate, you’re stuck being a cog in an incredibly complicated system you can barely understand, it’s impossible to do anything new and ambitious by yourself.’

I’ve gone through the PhD pimple factory myself, and I understand how this sort of constricting view takes hold. I also think that it is, to use a technical phrase, total bollocks.

My own mindset, which the pimple factory didn’t manage to completely destroy, is very different, and my favourite example to help explain where I’m coming from has always been the theory of evolution by natural selection. The basic idea doesn’t require any very complicated technical setup; you can explain it in words to a bright ten-year-old. It’s also deeply explanatory: nothing in biology makes sense except in the light of it. And yet Darwin’s revolution came a couple of hundred years after the invention of calculus, which requires a lot more in the way of technical prerequisites to understand.

Think of all those great mathematicians — Gauss, Lagrange, Laplace — extending and applying the calculus in incredibly sophisticated ways, and yet completely clueless about basic questions that the bright ten-year-old could answer! That’s the situation I expect we’re still in. Many other deep ways of understanding the world are probably still hidden in fog, but we can clear more of it by learning to read new meaning into the world in the right ways. I don’t see any reason for pessimism yet.

This is where the enthusiastic yelling comes from. Chapman’s Meaningness project attacks the low-hanging-fruit depressive slump both head on by explaining what’s wrong with it, and indirectly by offering up an ambitious, large-scale alternative picture full of ideas worth exploring. We could do with a lot more of this.

It may look odd that I’ve spent most of this post trying to weaken the case for formal systems, and yet I’m finishing off by being really excitable and optimistic about the prospects for new understanding. That’s because we can navigate anyway! We might not think particularly formally when we do mathematics, for example, but nothing about that stops us from actually getting the answer right. A realistic understanding of how we reason our way through messy situations and come to correct conclusions anyway is likely to help us get better at coming up with new ideas, not worse. We can clear some more of that fog and excavate new knowledge on the other side.


Footnotes


1. I don’t really love the word ‘metarationality’, to be honest. It’s a big improvement on ‘postrationality’, though, which to me has strong connotations of giving up on careful reasoning altogether. That sounds like a terrible idea.

‘Metarationality’ sounds like a big pretentious -ism sort of word, but then a lot of the fault of that comes from the ‘rationality’ bit, which was never the greatest term to start with. I quite like Chapman’s ‘the fluid mode’, but ‘metarationality’ seems to be sticking, so I’ll go with that. (back)


2. There’s also a big social element that I didn’t get at the time. If you’re a beginner handing in homework for your first analysis course, you may need to put a lot of steps in, to convince the marker that you understand why they’re important. If you’re giving a broad overview to researchers in a seminar, you can assume they know all of that. There’s no one canonical standard of proof.

At the highest levels, in fact, the emphasis on rigour is often relaxed somewhat. Terence Tao describes this as:

The “post-rigorous” stage, in which one has grown comfortable with all the rigorous foundations of one’s chosen field, and is now ready to revisit and refine one’s pre-rigorous intuition on the subject, but this time with the intuition solidly buttressed by rigorous theory. (For instance, in this stage one would be able to quickly and accurately perform computations in vector calculus by using analogies with scalar calculus, or informal and semi-rigorous use of infinitesimals, big-O notation, and so forth, and be able to convert all such calculations into a rigorous argument whenever required.) The emphasis is now on applications, intuition, and the “big picture”. This stage usually occupies the late graduate years and beyond.

(Incidentally, this blog post is a good sanitised, non-obnoxious version of the Kegan levels idea.) (back)


3. I recently went to a fascinating introductory talk on divergent series, the subject that produces those weird Youtube videos on how 1 + 2 + 3 + … = -1/12. The whole thing was the most ridiculous tightrope walk over the chasm of total bullshit, always one careful definition away from accidentally proving that 1 = 0, and for once in my life I was appreciating the value of a bit of rigour. (back)
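
(For anyone wondering where the −1/12 could possibly come from: one respectable route goes through the Riemann zeta function rather than literal addition. The series ζ(s) = 1⁻ˢ + 2⁻ˢ + 3⁻ˢ + … only converges for Re(s) > 1, but the function it defines can be analytically continued, and the continuation takes the value −1/12 at s = −1, which is where 1 + 2 + 3 + … would formally sit. The videos quietly swap ‘the sum’ for ‘the continuation evaluated where the sum would be’ – exactly the sort of one-careful-definition-away business I mean.)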


4. The list isn’t supposed to be comprehensive, either. I would definitely add aesthetics as an important category… sometimes an equation just looks unpleasant, like a clunky sentence or some badly indented code, and I feel a compulsion to tidy it into the ‘correct’ form. (back)


5. My current job is as a relative noob programmer supporting and extending existing software systems. I’ve been spending a lot of time being dumped in the middle of some large codebase where I have no idea what’s going on, or getting plonked in front of some tool I don’t understand, and flailing around uselessly while some more experienced colleague just knows what to do. There’s now a new guy who’s an even bigger noob on the project than me, and it’s fun to get to watch this from the other side! He’ll be staring blankly at a screen packed with information, fruitlessly trying to work out which bit could possibly be relevant, while I’ll immediately just see that the number two from the end on the right hand side has got bigger, which is bad, or whatever. (back)


6. When I griped about this in the SSC comments I was advised by Eli to read about Bayesian nonparametrics. Which is what people also say to nostalgebraist, and I really should learn what this stuff is. (back)


7. Has anyone else noticed the following?

Platonism ↔ virtue ethics
formalism ↔ deontology
logicism ↔ utilitarianism

I don’t know what this means but I’m pretty sure it’s all just bad. (back)


8. The worst omission is that I’ve only glancingly mentioned the difference between epistemic uncertainty and ontological ambiguity, the subject I started to hint about in this post. This is an extremely important piece of the puzzle. I don’t think I could do a good job of talking about it, though, and David Chapman is currently writing some sort of giant introduction to this anyway, so maybe it made sense to focus elsewhere. (back)

15 thoughts on “Metarationality: a messy introduction”

  1. gavinrebeiro September 30, 2017 / 2:10 pm

    When it comes down to logic in mathematics, people tend to forget inference rules come from just being “reasonable” to us. Fully formal developments aren’t appreciated until you find the need to be able to write proofs that aren’t readily available on the internet. The logical formalism is really good for one thing, showing the skeleton of the main argument. Like how a proof can be broken down into smaller parts, which universal statements need to be proved, which implication is part of which quantified statement, etc. I tend to like to work in the following steps: a) start out with trying to get an intuitive, and wherever possible, visual idea of what’s going on in the definitions. b) writing down a logical skeleton of what needs to be proved to satisfy the definitions. c) moving between intuitive thinking and formal logic. Even formal proofs need guesswork. Usually we end up with something we want to prove and work backwards with the deduction; this really helps ferret out assumptions and relations which need to hold in order to prove what we want.

    Practical use of logical deduction is a lot like the Thurston article you quoted (specifically the ‘breaking down into smaller steps’ part).

    I was actually refreshing some analysis foundations this morning and, for the fun of it, formalised the deduction process of proving limits. The whole epsilon-delta shebang comes down to just finding an expression in terms of epsilon and making delta equal to it, where the stuff inside absolute value part of |x-c|<d is identical to the stuff inside the absolute value part of |f(x)-L|<e. The rest of the quantifiers are satisfied immediately after we take d=(expression in terms of epsilon). I'm actually going to make a post about this soon, on my own blog. I think there's some merit in looking at it this way. Limits are probably the biggest stumbling block for students who come into contact with rigorous maths for the first time. It's all the quantifier juggling going on. Unless the students know how to roughly translate an informal proof into a formal predicate deduction, they end up just memorising the process without understanding it. This is doubly true for topics like measure theory, where intuition alone is practically impossible to rely on when it comes to coming up with a proof, because of the forest of quantifiers.

    On another topic of your article, I think a big problem people have when they are trying to discuss 'thinking processes' is that they use words without making sure that the other person/people have the same definition for the word (at least roughly similar). Ends up just causing confusion. Another funny one to watch is when people start talking about 'consciousness' (a whole bunch of cognitive processes just get bunched up with this one word); even academics can be seen ending up just bickering because they are communicating (or attempting to) without making sure they have common/matching premises.


    • drossbucket September 30, 2017 / 4:39 pm

      > I was actually refreshing some analysis foundations this morning and, for the fun of it, formalised the deduction process of proving limits.

      Ah this sounds good, will look out for it! I do remember spending some time as a first year undergrad playing around with quantifiers trying to understand clearly what was going on (probably when we were doing uniform vs pointwise convergence) but it’s a while ago. And yeah, I took a measure theory course later but had no idea what was going on.

      > On another topic of your article, I think a big problem people have when they are trying to discuss ‘thinking processes’ is that they use words without making sure that the other person/people have the same definition for the word

      Agreed… and yep, ‘consciousness’ is particularly bad for that.


  2. Joseph Ratliff October 4, 2017 / 3:58 pm

    Reblogged this on Quaerere Propter Vērum and commented:
    “A messy introduction” … maybe … but a must-read on meta-rationality nonetheless. I especially liked the attention given to people who feel “all of the low-hanging fruit has been plucked” … because I was one of those people.


    • drossbucket October 17, 2017 / 7:06 pm

      Thanks, glad it was useful! Some interesting links to follow up in your presentation, and I liked the spoon joke 🙂


  3. Jeff A. October 23, 2017 / 6:10 am

    “I hope I’ve at least conveyed that there is a lot there, though.”

    A data point: You didn’t.

    (I wrote more, but redacted it for being meaner than I aspire to be in general. If you’re curious for more feedback, feel free to email me.)


    • drossbucket October 23, 2017 / 6:31 pm

      Sure, let’s have it.

      I’m too dumb to work out your email address, though, can you email me? bossdrucket at gmail dot com

      (edit: not quite too dumb. Have emailed.)


      • drossbucket October 23, 2017 / 6:34 pm

        (I’m more interested in the mean version than a redacted nice version, but send over what you want!)


  4. nostalgebraist May 18, 2019 / 10:27 pm

    I hadn’t seen this post until just now, although it was written over a year ago, and I just wanted to chime in and say that the central concept articulated in this post seems very valuable and important to me. Thank you for that.

    (An attempt to restate in my own words the concept I’m thinking of:

    Some people treat their perceptions as dubious but given, i.e. corrupted and error-prone, but only so due to mechanisms external to some set of higher cognitive processes, so that those processes can sift through and evaluate them “after” they are generated, without any influence feeding back into the generation process. This often goes together with a rationalistic skepticism towards individual impressions and ideas combined with an optimism about the human ability to come to ultimate consensus on any given matter: we have simply to identify the factors corrupting the signal and learn how to correct for them, and then we’ll all see the same thing and we’ll all agree. And then some people, often after believing the above for some period of time, go further and treat their perceptions as dubious and also not given, i.e. created in part by the “judge” process itself, or (better) not even a distinct category separable from the activities of the “judge.”)


    • Lucy Keer May 19, 2019 / 9:08 am

      Thanks, I’m glad you got something valuable out of this post! I’ve been fairly dissatisfied with it, as it’s long and unwieldy and generally not how I’d tackle it if I was going to do it again, so it’s nice to get a positive comment.

      Your summary is very much what I was going for, except that there’s one further step beyond your last sentence. On its own, treating perception as ‘dubious and also not given’ can lead to a kind of postmodern apathy/nihilism, in which there are endless possible frames with no way at all to decide between them (or maybe it all boils down to power dynamics, where privileged groups get to decide the narrative, or something).

      The final step is something like ‘well, we do actually make progress and come to useful decisions, even if there isn’t some attainable ‘ultimate consensus’ sitting out in the world, so let’s study how we manage to do that’. As I understand it so far, there are a couple of things going on.

      One is that we ground out formal systems with our pretheoretic lived experience of the world. E.g. that Thurston bit I quoted, where we use our facilities for language, spatial sense, comparisons etc to make progress in mathematics.

      The other (which I didn’t appreciate the significance of at all when I wrote that post) is that we also go the other way and engineer the world to be more amenable to rational analysis. E.g. a company will make distinct repeatable products where each instance of a product can be manufactured and priced the same, follow the same rules, etc, rather than just producing a mushy spectrum of one-off items.

      The long story of how things work seems to run through continental phenomenology and Wittgenstein through to ethnomethodology, and generally requires a whole lot of background reading. I’m still getting oriented. But I have a much better understanding than I did when I wrote the post.


  5. Kenny October 10, 2020 / 3:34 am

    This is a great post; thanks!

    David Chapman is my current favorite philosopher. (I really like his Buddhist and Tantra writing too.) I feel like his posts at Meaningness have really hit their stride in the past few months too. It seems like there’s enough infrastructure available in the form of the earlier posts for him to start explaining a lot of new details of his ideas about metarationality.

    My favorite writer about ‘top down’ and ‘bottom up’ processing, and how they can do amazing things together, is Douglas Hofstadter. Here’s one of his books about some of his AI work demonstrating that.

