Written quickly and probably not very clear – it’s a workbook post not a polished-final-thoughts post. Vaguely inspired by this exchange between Julia Galef and Michael Nielsen.
One of my favourite things is the point in learning a new topic where it starts to get internalised, and you begin to be able to see more. You can read into a situation where previously you had no idea what was going on.
Sometimes the ‘seeing’ is metaphorical, but sometimes it’s literal. I go walking quite a lot, and this year I’m seeing more than before, thanks to an improved ability to read into the landscape.
I got this from Light and Colour in the Outdoors, a classic 1930s book on atmospheric phenomena by the physicist Marcel Minnaert. It’s really good, and I’m now regretting being cheap and getting the Dover version instead of the fancy coffee-table book (note to self: never buy a black-and-white edition of a book with the word ‘colour’ in the title).
I’ve only read a few sections, but already I notice more. Last weekend I got the coach to London, and on the way out I saw a sun dog I’d probably have missed before. And then on the way back it was raining with the sun shining onto the coach windscreen in front, and I thought to myself, ‘I should probably look behind me’. I turned, and right on cue:
This is entry-level reading into the landscape, but still quite satisfying. Those with exceptional abilities seem to have superpowers. George Monbiot in Feral talks about his friend Ritchie Tassell:
… he has an engagement with the natural world so intense that at times it seems almost supernatural. Walking through a wood he will suddenly stop and whisper ‘sparrowhawk’. You look for the bird in vain. He tells you to wait. A couple of minutes later a sparrowhawk flies across the path. He had not seen the bird, nor had he heard it; but he had heard what the other birds were saying: they have different alarm calls for different kinds of threat.
This is the kind of learning that fascinates me! You can do it with maths as well as with sparrowhawks…
This has been on my mind recently as I read/reread Venkatesh Rao’s posts on ambiguity and uncertainty. I really need to do a lot more thinking on this, so this post might look stupid to me rather rapidly, but it’s already helping clarify my thoughts. Rao explains his use of the two terms here:
I like to use the term ambiguity for unclear ontology and uncertainty for unclear epistemology…
The ambiguity versus uncertainty distinction helps you define a simpler, though more restricted, test for whether something is a matter of ontology or epistemology. When you are missing information, that’s uncertainty, and an epistemological matter. When you are lacking an interpretation, that’s ambiguity, and an ontological matter.
Ambiguity is the one that maps to the reading-into-the-landscape sort of learning I’m most fascinated by, and reducing it is an act of fog-clearing:
20/ In decision-making we often use the metaphors of chess (perfect information) and poker (imperfect information) to compare decision-makers.
21/ The fog of intention breaks that metaphor because the game board /rules are inside people’s heads. Even if you see exactly what they see, you won’t see the game they see.
22/ Another way of thinking about this is that they’re making meaning out of what they see differently from you. The world is more legible to them; they can read/write more into it.
I think this is my main way of thinking about learning, and probably accounts for a fair amount of my confusion when interacting with the rationalist community. I’m obsessed with ambiguity-clearing, while the rationalists are strongly uncertainty-oriented.
For example, here’s Julia Galef on evaluating ‘crazy ideas’:
In my experience, rationalists are far more likely to look at that crazy idea and say: “Well, my inside view says that’s dumb. But my outside view says that brilliant ideas often look dumb at first, so the fact that it seems dumb isn’t great evidence about whether it will pan out. And when I think about the EV here [expected value] it seems clearly worth the cost of someone trying it, even if the probability of success is low.”
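For concreteness, the EV reasoning in that quote can be sketched as a toy calculation (every number below is invented purely for illustration, not from Galef):

```python
# Toy expected-value check for a 'crazy idea' -- all numbers invented.
p_success = 0.02   # outside view: low chance the idea pans out
payoff = 1000      # value if it works (arbitrary units)
cost = 10          # cost of letting someone try it

# EV = probability-weighted payoff minus the cost of trying
ev = p_success * payoff - cost
print(ev > 0)  # True: positive EV, so worth a shot despite long odds
```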
I’ve never thought like that in my life! I’d be hopeless at the rationalist strategy of finding a difficult, ambitious problem to work on and planning out high-risk steps for how to get there, but luckily there are other ways of navigating. I mostly follow my internal sense of what confusions I have that I might be able to attack, and try to clear a bit of ambiguity-fog at a time.
That sounds annoyingly vague and abstract. I plan to do a concrete maths-example post some time soon. In the meantime, have a picture of a sun dog:
[Inarticulate expression of fangirl enthusiasm]
Also: Thanks for pointing to the Galef/Nielsen discussion (both interesting people!). The crux seems to be Michael’s statement: “Being overconfident in beliefs that most people hold is not at all the same as being overconfident in beliefs that few people hold.” Which seems very true! Much of the rest of his post seems right-on, too.
Hm I was guessing I originally found it from your Twitter feed – obviously not! I had a bit of a tab explosion weekend so who knows…
Yes I think that’s a nice observation about overconfidence. Also I like the ‘creative cocoon’ bit.
(Note: Things below might already be known to you.)
I think that part of what you’re pointing to when you mention ambiguity is well-described by the idea of drawing new boundaries around things. It looks like the Ribbonfarm article points to that when it ties ambiguity to unclear ontology.
That is to say: recognizing sun dogs, for example, becomes possible once someone gives you an additional way of classifying / seeing the world, similar to those “can’t unsee” images, like how the KFC Colonel can be construed as a large head on a tiny body.
Which means that the extra noticing people get after an info-boost comes from being able to recognize / classify patterns that previously looked like mere randomness to them. Which is basically what ontologies are.
I think that rationalists do this sort of thing a lot, especially with respect to things inside their head. This is how we get a lot of rationality techniques; by drawing new boundaries around previously unnoticed phenomena, we gain a useful abstraction layer with which to observe / manipulate things.
But they also seem quite fond of doing it in other contexts. Signaling, meta-levels, decision theory, and acausal trade (as some quick examples off the top of my head) all seem to point at new boundaries and abstractions that they use to notice more things in human interactions / decisions.
I’m wondering if this ambiguity vs uncertainty divide still seems like a salient one between the way you think and the way a typical rationalist thinks?
Great comment, lots to respond to here. I’ll try not to write an essay, though I suppose it’s my blog and I can write an essay if I like.
Yes, ‘can’t unsee’ is exactly the right idea. To go back to the good old square-root-of-two-proof example, I now ‘can’t unsee’ that p^2 = 2q^2 is impossible, cos the prime factor counts can’t match: p^2 has an even number of prime factors, while 2q^2 has an odd number. Whereas it used to look fine – even though I knew a proof saying it was wrong!
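A minimal sketch of that prime-factor argument (my own illustration, not part of the original discussion): counted with multiplicity, p^2 always has an even number of prime factors and 2q^2 always has an odd number, so the two sides can never be equal.

```python
# Count prime factors with multiplicity, by trial division.
def prime_factor_count(n):
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            count += 1
            n //= d
        d += 1
    if n > 1:          # any leftover factor > 1 is prime
        count += 1
    return count

# p^2 always has an even count; 2*q^2 always has an odd count,
# so p^2 = 2*q^2 has no solutions in positive integers.
for p in range(1, 50):
    assert prime_factor_count(p * p) % 2 == 0
for q in range(1, 50):
    assert prime_factor_count(2 * q * q) % 2 == 1
```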
> I think that rationalists do this sort of thing a lot, especially with respect to things inside their head. This is how we get a lot of rationality techniques; by drawing new boundaries around previously unnoticed phenomena, we gain a useful abstraction layer with which to observe / manipulate things.
> But they also seem quite fond of doing it in other contexts. Signaling, meta-levels, decision theory, and acausal trade (as some quick examples off the top of my head) all seem to point at new boundaries and abstractions that they use to notice more things in human interactions / decisions.
Hm first of all I should point out that I’m aware my occasional griping about rationalists is *serious* narcissism-of-small-differences stuff. I read and enjoy SSC and a bunch of other rationalist blogs and hung around rationalist-adjacent tumblr for a good while. A lot attracts me, and a fair bit else feels off, but I’m really not coming from very far away.
I also haven’t had any contact with people IRL, and a couple of people have pointed out to me now that there’s a wide diversity of viewpoints and motivations in the community, which you don’t see so much if your introduction comes from browsing 2008-vintage Yudkowsky stuff online.
E.g. I’ve had an exchange with David Chapman starting with the comment linked below, and he says something like that in response:
My comment there and my follow-up one kind of detail my confusion. I *have* noticed what you say, that rationalists do do this kind of pointing-out previously unnoticed phenomena a lot. I think the instrumental rationality side is particularly good at this, as I say in that comment above, and I’ve learned some good techniques from them.
It’s more than I can explain coherently in a blog comment now, but I don’t understand how to mesh this with the MIRI-type side of the community that is trying to come up with formalised models of human cognition. That stuff looks good for dealing with *uncertainty* – problems where you have a well-defined hypothesis space and want to pick out the best one – but seems pretty useless for sniffing out new ways of seeing.
> I’m wondering if this ambiguity vs uncertainty divide still seems like a salient one between the way you think and the way a typical rationalist thinks?
I think so, yes. E.g. that Galef piece I quote *does* seem like a classic rationalist uncertainty-based worldview, and that sort of thinking is valuable, but it’s very much not how I think. I need to put more work into explaining what I mean though.
Well I don’t expect this comment to have cleared up much! :p Hopefully as I write more on this site I will manage to explain myself better.
Cool, thanks for responding to both of my comments! Your response helped give some more insight into things, and I think the distinction is something I’m now thinking about 😀
I’ve been working recently on a pair of blog posts that are about epistemological uncertainty vs ontological indefiniteness, and about what that says about the relationship between rationalism and meta-rationalism. To quote myself (on twitter):
> Post wants to include Hartree-Fock water bond length computation, Heidegger’s version of aletheia, “the inscrutable dignity of the warrior,” Shakespeare’s Sonnet 110, Winograd’s AI program SHRDLU, spherical elastic cows, John 18:37-8, Tractatus Logico-Philosophicus, Bayesianism, … Understanding is a continuous fabric, like the quantum field. Cutting it into blog posts does violence, like pretending electrons are things
So I am slightly stuck at the moment! Either this thing turns into a 19-volume encyclopedia, or I carve some awkwardly-shaped pieces out of it.
To quote @kathrynschulz on twitter:
An eternal truth of this career:
things do not write themselves.
And that is why I wish someone
would send me little elves.
> I’ve been working recently on a pair of blog posts that are about epistemological uncertainty vs ontological indefiniteness
Ah that sounds great, this is something I’m fascinated by but still pretty confused about. I’m excited to see any and all awkwardly-shaped pieces you manage to cut from that lump!
Btw I happened to read the SHRDLU Wikipedia article recently out of curiosity, not really knowing anything about it… wow, that demonstration excerpt! In 1968! Really impressive.