I was inspired by John Nerst’s recent post to make a list of my own fundamental background assumptions. What I ended up producing was a bit of an odd mixed bag of disparate stuff. Some are something like factual beliefs; others are more like underlying emotional attitudes and dispositions to act in various ways.
I’m not trying to ‘hit bedrock’ in any sense, I realise that’s not a sensible goal. I’m just trying to fish out a few things that are fundamental enough to cause obvious differences in background with other people. John Nerst put it well on Twitter:
> It’s not true that beliefs are derived from fundamental axioms, but nor is it true that they’re a bean bag where nothing is downstream from anything else.
I’ve mainly gone for assumptions where I tend to differ with the people I hang around with online and in person, which skews heavily towards the physics/maths/programming crowd. This means there’s a pretty strong ‘narcissism of small differences’ effect going on here, and if I actually had to spend a lot of time with normal people I’d probably run screaming back to STEM nerd land pretty fast and stop caring about these minor nitpicks.
Also I only came up with twenty, not thirty, because I am lazy.
- I’m really resistant to having to ‘actually think about things’, in the sense of applying any sort of mental effort that feels temporarily unpleasant. The more I introspect as I go about problem solving, the more I notice this. For example, I was mucking around in Inkscape recently and wanted to check that a square was 16 units long, and I caught myself producing the following image:
Apparently counting to 16 was an unacceptable level of cognitive strain, so to avoid it I made the two 4 by 4 squares (small enough to immediately see their size) and then arranged them in a pattern that made the length of the big square obvious. This was slower but didn’t feel like work at any point. No thinking required!
- This must have a whole bunch of downstream effects, but an obvious one is a weakness for ‘intuitive’, flash-of-insight-based demonstrations, mixed with a corresponding laziness about actually doing the work to get them. (Slowly improving this.)
- I picked up some Bad Ideas From Dead Germans at an impressionable age (mostly from Kant). I think this was mostly a good thing, as it saved me from some Bad Ideas From Dead Positivists that physics people often succumb to.
- I didn’t read much phenomenology as such, but there’s some mood in the spirit of this Whitehead quote that always came naturally to me:
> For natural philosophy everything perceived is in nature. We may not pick and choose. For us the red glow of the sunset should be as much part of nature as are the molecules and electric waves by which men of science would explain the phenomenon.
By this I mean some kind of vague understanding that we need to think about perceptual questions as well as ‘physics stuff’. Lots of hours as an undergrad spent on Wikipedia reading about human colour perception and lifeworlds and mantis shrimp eyes and so on.
- One weird place where this came out: in my first year of university maths I had those intro analysis classes where you prove a lot of boring facts about open sets and closed sets. I just got frustrated, because it seemed to be taught in the same ‘here are some facts about the world’ style that, say, classical mechanics was taught in, but I never managed to convince myself that the open/closed distinction related to something ‘out in the world’ rather than to some deficiency of our cognitive apparatus. ‘I’m sure this would make a good course in the psychology department, but why do I have to learn it?’
This isn’t just Bad Ideas From Dead Germans, because I had it before I read Kant.
- Same thing for the interminable arguments in physics about whether reality is ‘really’ continuous or discrete at a fundamental level. I still don’t see the value in putting that distinction out in the physical world – surely that’s some sort of weird cognitive bug, right?
- I think after hashing this out for a while, people have settled on ‘decoupling’ vs ‘contextualising’ as the labels for the two styles. Anyway, it’s probably apparent that I have more time for the contextualising side than a lot of STEM people do.
- Outside of dead Germans, my biggest unusual pervasive influence is probably the New Critics: Eliot, Empson and I.A. Richards especially, and a bit of Leavis. They occupy an area of intellectual territory that mostly seems to be empty now (that or I don’t know where to find it). They’re strong contextualisers with a focus on what they would call ‘developing a refined sensibility’, by deepening sensitivity to tiny subtle nuances in expression. But at the same time, they’re operating in a pre-pomo world with a fairly stable objective ladder of ‘good’ and ‘bad’ art. (Eliot’s version of this is one of my favourite ever wrong ideas, where poetic images map to specific internal emotional states which are consistent between people, creating some sort of objective shared world.)
This leads to a lot of snottiness and narrow focus on a defined canon of ‘great authors’ and ‘minor authors’. But also the belief in reliable intersubjective understanding gives them the confidence for detailed close reading and really carefully picking apart what works and what doesn’t, and the time they’ve spent developing their ear for fine nuance gives them the ability to actually do this.
The continuation of this is probably somewhere on the other side of the ‘fake pomo blocks path’ wall in David Chapman’s diagram, but I haven’t got there yet, and I really feel like I’m missing something important.
- I don’t understand what the appeal of competitive games is supposed to be. Like basically all of them – sports, video games, board games, whatever. Not sure exactly what effects this has on the rest of my thinking, but this seems to be a pretty fundamental normal-human thing that I’m missing, so it must have plenty.
- I always get interested in specific examples first, and then work outwards to theory.
- My most characteristic type of confusion is not understanding how the thing I’m supposed to be learning about ‘grounds out’ in any sort of experience. ‘That’s a nice chain of symbols you’ve written out there. What does it relate to in the world again?’
- I have never in my life expected moral philosophy to have some formal foundation, and after a lot of trying I still don’t understand why this is appealing to other people. Humans are an evolved mess and I don’t see why you’d expect a clean abstract framework to ever drop out of that.
- Philosophy of mathematics is another subject where I mostly just think ‘um, you what?’ when I try to read it. In fact it has exactly the same subjective flavour to me as moral philosophy. Platonism feels bad the same way virtue ethics feels bad. Formalism feels bad the same way deontology feels bad. Logicism feels bad the same way consequentialism feels bad. (Is this just me?)
- I’ve never made any sense out of the idea of an objective flow of time and have thought in terms of a ‘block universe’ picture for as long as I’ve bothered to think about it.
- If I don’t much like any of the options available for a given open philosophical or scientific question, I tend to just mentally tag it with ‘none of the above, can I have something better please’. I don’t have the consistency obsession thing where you decide to bite one unappealing bullet or another from the existing options, so that at least you have an opinion.
- This probably comes out of my deeper conviction that we’re missing a whole lot of important and fundamental ideas on the level of calculus and evolution, simply on account of nobody having thought of them yet. My default orientation seems to be ‘we don’t know anything about anything’ rather than ‘we’re mostly there but missing a few of the pieces’. This produces a kind of cheerful crackpot optimism, as there is so much to learn.
- This list is noticeably lacking in any real opinions on politics and ethics and society and other people stuff. I just don’t have many opinions and don’t like thinking about people stuff very much. That probably doesn’t say anything good about me, but there we are.
- I’m also really weak on economics and finance. I especially don’t know how to do that economist/game theoretic thing where you think in terms of what incentives people have. (Maybe this is one place where ‘I don’t understand competitive games’ comes in.)
- I’m OK with vagueness. I’m happy to make a vague sloppy statement that should at least cover the target, and maybe try and sharpen it later. I prefer this to the ‘strong opinions, weakly held’ alternative where you chuck a load of precise-but-wrong statements at the target and keep missing. A lot of people will only play this second game, and dismiss the vague-sloppy-statement one as ‘just being bad at thinking’, and I get frustrated.
- Not happy about this one, but over time this frustration led me to seriously go off styles of writing that put a strong emphasis on rigour and precision, especially the distinctive dialects you find in pure maths and analytic philosophy. I remember being fascinated when I encountered both of these for the first time at 18 or so, because I’d never seen anyone write so clearly before. Later on I got sick of the way this style tips so easily into pedantry over contextless trivialities (from my perspective, anyway). It actually has a lot of good points, though, and it would be nice to be able to appreciate it again.
Nice list. Very different from mine, yet it feels familiar and understandable (and like interesting avenues to explore in some cases). It does help me get a picture of your mind.
#1 this is actually the method I use all the time when doing pixel art because it is less error prone and more reproducible
#6 I don’t quite understand the distinction you’re making between the real world and people’s conceptualization of it. I understand their argument as disagreeing on what the optimal conceptualization is.
#10 to me the theory has always seemed to be contained in the examples, so that once I’ve seen a few examples the important parts of the theory don’t need to be told to me.
#11 one time in 9th grade I set up a trigonometry problem but I had to ask a friend to calculate it because I couldn’t both solve it and keep the relationship between the pictures and the expressions all in my head at the same time. The “live programming” people seem to be tackling this issue and I would be very interested in your thoughts on what exactly is involved in maintaining cognitive coupling and what kinds of tricks make it manageable.
#12 Wittgenstein explains it well in the course of explaining why it is ridiculous. The expectation is that human activities approach a perfect logical form because it is optimal, like flower heads approach a golden-ratio spiral. And also because things should be globally true, with differences caused by specialization to local details (because this is what it is like to apply an abstraction to a situation: the more perfect the abstraction, the more global it is).
#18: this is a surprise, because thinking in terms of what incentives things have is the major part of how I understand evolutionary theory. A prototypical "how to think" parable that my plant breeding professor shared with me was: "Everyone knows that wheat that grows too tall falls over when heavily fertilized. So when you’re developing a new variety and only want to keep the short wheat, it seems reasonable to set your mower blades at two feet and chop off everything taller than that. But in fact this selection pressure doesn’t incentivize short wheat as much as it incentivizes wheat with lots of heads. This is the opposite of what you want, because more heads per plant means smaller grain size."
#19 I’m not sure what you’d call it but I prefer case studies
#1: That’s really interesting about the pixel art!
#6: I’ll have to have another think about exactly what I’m trying to get across, and how well your alternative reading that they are ‘disagreeing on what the optimal conceptualization is’ fits for me. I normally find it viscerally annoying to even try and read most physicists on the discrete/continuous question, though, in a way that suggests *some* kind of fundamental difference, even if I’m not yet good at verbally isolating what it is.
I just looked up the FQXi essay winners for the year they had this as their question (https://fqxi.org/community/essay/winners/2011.1) as that should give some kind of sampling of physicists’ opinions (obviously biased towards the sorts of physicists who answer FQXi essay questions!). So if I can face reading through some of those abstracts I might be able to work out what I’m trying to say.
#10 Yeah, I also think of the theory as being contained in the examples. Obviously that can be dangerous and it’s easy to miss edge cases that your representative examples don’t work for (just been reading Lakatos on this actually). You can go a long way with ‘examples first’ though.
#11:
> The “live programming” people seem to be tackling this issue and I would be very interested in your thoughts on what exactly is involved in maintaining cognitive coupling and what kinds of tricks make it manageable.
I’m also really interested in what the live programming people are doing! And also in this question of how you maintain cognitive coupling. (It’s pretty hard to do – often I find there’s part of an argument I ‘actually understand’, then I’ll switch to ‘just doing algebra’ for a few lines, then I’ll work out how to interpret the result and feel like I understand again.) I’ll probably write a post or two connected to this soon, though I’m not sure I have anything particularly exciting to say about it.
#18: Hm, it’s true that thinking about incentives is also vital for understanding evolutionary theory. I guess this is something that isn’t very integrated in my thinking, so maybe I can do OK applying it in some cases and not others. The most salient examples of ‘thinking about incentives’ when I was writing this were actually practical problems at work – e.g. you have some sort of task that needs doing, so how do you set up incentives so that it just gets done? I find it hard to switch into that mode of thinking.