This is mainly a review of Brian Cantwell Smith’s latest book, The Promise of Artificial Intelligence: Reckoning and Judgment. But it’s also a second attempt to understand more of his overall project and worldview, after struggling through On the Origin of Objects a few years ago. I got a lot out of reading that, and wrote up what I did understand in my post on his idea of representations happening in the ‘middle distance’ between direct causal coupling and total irrelevance. But somehow the whole thing never cohered for me at the level I wanted to and I felt like I was missing something.
The new book is an easier read, but still not exactly straightforward. He’s telling an intricate story, and as with OOO the book is one single elegant arc of argument with little redundancy, so it’s not a forgiving format if you get lost. And I did get lost, in the sense that I’ve still got the ‘I’m missing something’ feeling. Part of the reason I’m posting this on my ‘proper’ blog and not the notebucket is that I wanted the option of getting comments (this worked very well for the middle distance post and I got some extremely good ones). So if you can help me out with any of this, please do!
So, first, let’s explain the part that I do understand. The early part of the book is about the history of AI, and of course there’s been a whole lot more of this since OOO’s publication in 1996. He divides this history into ‘first-wave’ GOFAI, with its emphasis on symbolic manipulation, and the currently successful ‘second wave’ of AI based on neural networks. There’s also a short ‘transition’ chapter on the 4E movement (’embodied, embedded, extended, enacted’) between the two waves, which he describes as important but not enough on its own, for reasons I’ll get into.
He’s mainly interested in what the first and second wave paradigms implicitly assume about the world. First-wave AI worked with logical inference on symbols that were supposed to directly map to discrete well-defined objects in the world. This assumes an ontology where that would actually work:
The ontology of the world is what I will call formal: discrete, well-defined, mesoscale objects exemplifying properties and standing in unambiguous relations.
And of course it mostly didn’t work, for most problems, because the world is mostly not like that.
Second-wave AI gets below these ready-made well-defined concepts to something more like the perceptual level. Objects aren’t baked in from the start but have to be recognised, distilled out of a gigantic soup of pixel-level data by searching for weak statistical correlations between huge numbers of variables. This has worked much better for tasks like image recognition and generation, suggesting that it captures something real about the complexity and richness of the world. Smith uses an analogy to a group of islands, where questions like ‘how many islands are there?’ depend on the level of detail you include:
Whether an outcropping warrants being called an island—whether it reaches “conceptual” height—is unlikely to have a determinate answer. In traditional philosophy such questions would be called vague, but I believe that label is almost completely inappropriate. Reality—both in the world and in these high-dimensional representations of it—is vastly richer and more detailed than can be “effably” captured in the idealized world of clear and distinct ideas.
There’s an interesting aside about how phenomenology has traditionally had a better grasp on this kind of richness than analytic philosophy, with its focus on logic and precision. That association can mislead people into thinking the richness is a subjective feature of our internal experience, whereas really it’s a feature of how the world is. Things are just too complicated to be fully captured by low-resolution logical systems:
That the world outstrips these schemes’ purview is a blunt metaphysical fact about the world — critical to any conceptions of reason and rationality worth their salt. Even if phenomenological philosophy has been more acutely aware of this richness than has the analytic tradition, the richness itself is a fundamental characteristic of the underlying unity of the metaphysics, not a uniquely phenomenological or subjective fact.
Smith would like to keep using the word ‘rationality’ for successful reasoning in general, not just the formal kind:
I want to reject the idea that intelligence and rationality are adequately modeled by something like formal logic, of the sort at which computers currently excel. That is: I reject any standard divide between “reason” as having no commitment, dedication, and robust engagement with the world, and emotion and affect as being the only locus of such “pro” action-oriented attitudes, on the other.
I haven’t decided whether I like this or not — I’ve kind of got used to David Chapman’s distinction between ‘reasonableness’ and ‘rationality’ so I’m feeling some resistance to using ‘rationality’ for the broader thing. At the least I still want a word for formal, systematic thinking.
OK, now we’re getting towards the bits I don’t understand so well. Smith doesn’t think that the resources of current ‘second-wave’ AI are going to be enough to reproduce anything like human thought. This is where the subtitle of the book, ‘Reckoning and Judgment’, comes in. First, here’s how he explains his use of ‘reckoning’:
… I use the term “reckoning” for the representation manipulation and other forms of intentionally and semantically interpretable behavior carried out by systems that are not themselves capable, in the full-blooded senses that we have been discussing, of understanding what it is that those representations are about—that are not themselves capable of holding the content of their representations to account, that do not authentically engage with the world’s being the way in which their representations represent it as being.
So, roughly, ‘reckoning’ refers to behaviour that can be understood intentionally but that isn’t itself produced by an intentional system. Current computers are capable of doing this kind of reckoning, but not the outward-facing participatory kind of thought he calls ‘judgment’:
I reserve the term “judgment,” in contrast, for the sort of understanding I have been talking about — the understanding that is capable of taking objects to be objects, that knows the difference between appearance and reality, that is existentially committed to its own existence and to the integrity of the world as world, that is beholden to objects and bound by them, that defers, and all the rest.
(‘Defers’ is another bit of his terminology — it means that the judging system knows that when the representation fails to match the world, it’s the world that should take precedence.)
This makes sense to me in broad strokes, but I still have the sense I had from OOO that I don’t really understand how much this is a high-level sketch and how much it’s supposed to use his specific ideas about representation.
This is where it might be useful to go back to his criticism of the 4E movement. This movement mostly focussed on the interaction of AI systems with their immediate environment, but this direct causal link is not enough. For example, take a computer interacting with a USB stick:
Surely, one might think, a computer can be oriented (or comport itself) toward a simple object, such as a USB stick. If I click a button that tells the computer to “copy the selected file to the USB stick in slot A”, and if in ordinary circumstances my so clicking causes the computer to do just that, can we not say that computer was oriented to the stick?
No, we cannot. Suppose that, just before the command is obeyed, a trickster plucks out the original USB stick and inserts theirs. The problem is not just that the computer would copy the file onto their stick without knowing the difference; it is that it does not have the capacity to distinguish the two cases, has no resources with which to comprehend the situation as different – cannot, that is, distinguish the description “what is in the drive” from the particular object that, at a given instant, satisfies that description.
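Smith’s point is easy to caricature in code. A copy routine only ever receives a description (a mount path), never any handle on the physical stick itself, so the trickster’s swap is invisible in principle: the function’s inputs are bit-for-bit identical in the two cases. Here’s a toy sketch of what I mean; the paths and function name are made up for illustration, not taken from Smith or any real system:

```python
import shutil
from pathlib import Path

def copy_to_stick(file: Path, drive: Path) -> None:
    """Copy `file` to whatever currently satisfies the description
    'the stick mounted at `drive`'.

    Note that the function has no grip on the physical object at all,
    only on the mount path. If a trickster swaps sticks just before
    this runs, nothing in here could even register the difference:
    both arguments are literally identical in the two cases.
    """
    shutil.copy(file, drive / file.name)

# Hypothetical usage: the path stands in for 'the stick in slot A'.
# copy_to_stick(Path("report.txt"), Path("/media/usb0"))
```

The code can distinguish descriptions from each other (two different paths), but it has no resources for distinguishing a description from the particular object that happens to satisfy it at a given instant, which is exactly the gap Smith is pointing at.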
This gets into Smith’s idea of representation as happening ‘in the middle distance’, not rigidly attached to the immediate situation like the computer is to the USB stick, and also not completely separate and irrelevant to it:
How could a computer know the difference between the stick and a description it satisfies (“the stick currently in the drive”), since at the moment of copying there need be no detectable physical difference in its proximal causal envelope between the two—and hence no way, at that moment, for the computer to detect the difference between the right stick and the wrong one? That is exactly what (normatively governed) representation systems are for: to hold systems accountable to, and via a vast network of social practices, to enable systems to behave appropriately toward, that which outstrips immediate causal coupling.
These ideas get folded into his standards for ‘genuine intelligence’, along with several related capacities like being able to distinguish an object from representations of it, and care about the difference. This ability to ‘register’ an object is the key part of what he calls ‘judgment’ (‘the understanding that is capable of taking objects to be objects’).
So maybe I do understand this book after all, now that I’ve tried to write my thoughts down? Why do I still feel confused?
I think it’s the same disorientation I had with OOO, where I’m unsure when I’m reading a sketch of a detailed, specific mechanism and when I’m reading a more vision-level ‘insert future theory here’ thing. The middle distance idea is definitely a key part of his idea of judgment, and seems pretty specific, but then there are other vaguer parts about what the ability to take objects as objects would mean. And then, at the far end from concrete mechanism, judgment is also supposed to take on its ordinary language associations:
By judgment I mean that which is missing when we say that someone lacks judgment, in the sense of not fully considering the consequences and failing to uphold the highest principles of justice and humanity and the like. Judgment is something like phronesis, that is, involving wisdom, prudence, even virtue.
So the felt-sense feeling of confusion is something like an unsteadiness, an inability to pin down exactly how I’m supposed to be relating to this idea of judgment. I’m failing to successfully register it as an object, haha. I don’t know. I wish I could explain myself better ¯\_(ツ)_/¯
This is where some comments could be useful. If there’s anything specific that you think I’m missing, please let me know!
I’m pretty quiet on here currently. That’s because I have a different experiment going on instead: spamming out lots of short posts in one sitting on a notebook blog, Notebucket. I just realised I never linked to it from here, so… now I have.
The quality level is often low and it’s really not worth wading through all of those. But I’m pleased with some of them. Here are some of the more coherent and interesting ones:
Other than that, there’s a whole load of fragmented notes about some cluster of thoughts to do with Husserl, Derrida, mathematical notation as a technology… not sure exactly where I’m going with it, but I want to start combining it into more coherent blog posts soon, and posting them here again.
(Edit: AARGH!!! The WordPress editor gets more broken every time I try it, today it’s not even letting me preview my own post. I’m considering moving to Ghost eventually, which is where I host the notebook, but I need to sort out the commenting situation first. This is getting ridiculous though.)
I’ve recently been reading Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics, by David Kaiser. Feynman diagrams combine my longstanding interest in physics with my current weird-interest-of-the-moment, text as a technology (this was also the background inspiration for my recent visual programming post). They aren’t exactly text, but they’re formalised, repeatable diagrams that follow a certain set of rules, so they’re definitely text-adjacent. I ended up getting more interested in the details of the physics than in the text-as-technology angle, so that’s going to be the main focus of this somewhat rambling review, but a few other topics will come up too.
Feynman diagrams turn out to be an interesting lens for looking at the history of physics. One obvious way to think of physics is as a set of theories, like ‘thermodynamics’, ‘electromagnetism’, ‘quantum mechanics’, and so on, each with a sort of axiomatic core that various consequences can be developed from. This fits certain parts of physics rather well – special relativity is a particularly good fit, for instance, with its neat conceptual core of a few simple postulates.
At the other end of the scale is something like fluid dynamics. In theory I suppose most people in fluid dynamics are looking at the consequences of one theory, the Navier-Stokes equations, but that’s a horribly complicated set of nonlinear equations that nobody can solve in general. So in reality fluid dynamics is splintered into a bunch of subdisciplines studying various regimes where different approximations can be made – I’m not an expert here but stuff like supersonic flow, boundary layers, high viscosity – and each one has its bag of techniques and set of canonical examples. Knowing about Navier-Stokes is pretty useless on its own; you’re also going to need the bag of techniques for your subfield to make any progress. So a history of fluid dynamics needs to largely be a history of these techniques.
Quantum field theory, where Feynman diagrams were first developed, is also heavy on bags of techniques. These are harder than postulates to transmit clearly through a textbook, you really have to see a lot of examples and work exercises and so on, so tacit knowledge transmitted by experts is especially important. Kaiser makes this point early on (my bolds):
Once we shift from a view of theoretical work as selecting between preformed theories, however, to theoretical work as the crafting and use of paper tools, tacit knowledge and craft skill need not seem so foreign. Thomas Kuhn raised a similar problem with his discussion of “exemplars”. Kuhn wrote that science students must work to master exemplars, or model problems, before they can tackle research problems on their own. The rules for solving such model problems and generalizing their application are almost never adequately conveyed via appeals to overarching general principles and rarely appear in sufficient form within published textbooks.
This focus on ‘paper tools’ is in the tradition of Bruno Latour’s work on ‘inscriptions’, and in fact the title of Kaiser’s book comes from Latour’s paper, Visualisation and Cognition: Drawing Things Together [pdf]. Latour talks about the way that complicated laboratory procedures need to be condensed down into marks on paper in order to communicate with other scientists:
Like these scholars, I was struck, in a study of a biology laboratory, by the way in which many aspects of laboratory practice could be ordered by looking not at the scientists’ brains (I was forbidden access!), at the cognitive structures (nothing special), nor at the paradigms (the same for thirty years), but at the transformation of rats and chemicals into paper… Instruments, for instance, were of various types, ages, and degrees of sophistication. Some were pieces of furniture, others filled large rooms, employed many technicians and took many weeks to run. But their end result, no matter the field, was always a small window through which one could read a very few signs from a rather poor repertoire (diagrams, blots, bands, columns). All these inscriptions, as I called them, were combinable, superimposable and could, with only a minimum of cleaning up, be integrated as figures in the text of the articles people were writing. Many of the intellectual feats I was asked to admire could be rephrased as soon as this activity of paper writing and inscription became the focus for analysis.
These inscriptions are transportable and recombinable by scientists in different locations (‘immutable mobiles’):
If you wish to go out of your way and come back heavily equipped so as to force others to go out of *their* ways, the main problem to solve is that of *mobilization*. You have to go and to come back *with* the “things” if your moves are not to be wasted. But the “things” have to be able to withstand the return trip without withering away. Further requirements: the “things” you gathered and displaced have to be presentable all at once to those you want to convince and who did not go there. In sum, you have to invent objects which have the properties of being *mobile* but also *immutable*, *presentable*, *readable* and *combinable* with one another.
Kaiser’s focus is instead on the ways that diagrams elude this easy transmissibility, and the background of tacit knowledge that they rely on: ‘drawing theories apart’ rather than ‘drawing things together’. Here’s a representative anecdote:
… in the summer of 1949, Enrico Fermi had complained that he was unable to make sense of one of Bethe’s own recent papers, and hence could not reproduce and extend Bethe’s calculations. Fermi and Bethe were both experts in the field in question, and they had worked closely together throughout the war years; they knew the territory and they knew each other quite well.
Also, of course, they were Fermi and Bethe! If they can’t do it, there isn’t much hope for the rest of us.
What Feynman diagrams are…
Before I go any further, it might be useful to give a rough indication of what Feynman diagrams are, and what it’s like to calculate with them. (Disclaimer before I attempt to do this: I only have a basic knowledge of this myself!) The idea is that they’re a notational device used to translate big ugly equations into something easier to manipulate. Unlike most popular science explanations, I’m going to risk putting some of these big ugly equations on the screen, but the details of them are not important. I just want to give an idea of how they’re translated into diagrams.
The examples I’m using come from some excellent notes on Solving Classical Field Equations, by Robert Helling. These notes make the point that Feynman diagrams can be used in many contexts, including in classical physics – they’re not a quantum-only thing. It makes more sense to think of them as applying to a particular kind of mathematical method, rather than to a type of physical theory as such. This method is a specific kind of perturbation theory, a general class of techniques where you make a rough (‘zeroth-order’) approximation to a calculation and then add on successive (‘first-order’, ‘second-order’, ‘third-order’…) correction terms. If all goes well, each correction term is sufficiently smaller than the last that the whole thing converges, and you get a better and better approximation the more terms you include.
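To make this recursive structure concrete, here’s a toy version of the method in a few lines of Python. It’s not Helling’s actual field equation, just the simplest algebraic stand-in I could think of with a cubic term, x = j + g·x³; each iteration builds the next correction out of pieces of the previous ones, which is exactly the pattern that makes the higher-order terms so hairy to write out by hand:

```python
# Toy perturbation theory for the equation x = j + g*x**3.
# Each iteration of x -> j + g*x**3 adds one more order in g,
# built recursively out of the lower-order pieces (the 'tree'
# structure that the diagrams encode). This is an illustrative
# stand-in, not Helling's actual example.

def perturbative_solution(j, g, order):
    """After n iterations, the result is correct up to order n in g."""
    x = j  # zeroth-order approximation: ignore the interaction term
    for _ in range(order):
        x = j + g * x**3
    return x

def exact_solution(j, g, tol=1e-15):
    """Solve x = j + g*x**3 by fixed-point iteration to machine
    precision (converges for small enough g)."""
    x = j
    for _ in range(10_000):
        x_new = j + g * x**3
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

if __name__ == "__main__":
    j, g = 1.0, 0.01
    exact = exact_solution(j, g)
    for n in range(4):
        approx = perturbative_solution(j, g, n)
        print(f"order {n}: error = {abs(approx - exact):.2e}")
```

Running this with a small coupling like g = 0.01 shows the error shrinking by roughly two orders of magnitude per order, which is the ‘all goes well’ convergence described above. Expanding the iterates by hand (x ≈ j + gj³ + 3g²j⁵ + …) also shows why each term sprouts three branches: every factor of g comes with a cube of lower-order pieces.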
Now let’s see how the correction terms map to diagrams. Here’s the first order correction for Helling’s example, in standard equation form:
And here’s the corresponding diagram:
I’m not going to go into the details of the exact rules for translating from equation to diagram, but hopefully you can see some correspondences – the cubed term translates into three branches, for example. The full rules are in Helling’s paper.
At this point there isn’t a big difference between the equation and the diagram in terms of total effort to write down. But in perturbation theory, the higher the order you go to, the more hairy looking the correction terms get – they’re built up in a kind of recursive way from pieces of the lower-level correction terms, and this gets fiddly quickly. For example, here’s the third order correction term:
Ugh. At this point, you can probably see why you want to avoid having to write this thing down. In diagram form this term becomes:
This is a lot less mistake-prone than writing down the big pile of integrals, and the rules tell you exactly what diagrams need to be included, what number to put in front of each one, etc. This is a big improvement. And that becomes even more important in quantum electrodynamics, where the calculations are much more complicated than these example ones.
… sort of
Well, that’s one view of what Feynman diagrams are, at least. As the subtitle indicates, this book is about the dispersion of Feynman diagrams through physics. A large part of this is about geographical dispersion, as physicists taught the new techniques to colleagues around the world, and another part is about the dispersion of the methods through different fields, but the most interesting parts for me were about the dispersion of the meaning of diagrams.
These differences in meaning were there from the start. In the section above I described Feynman diagrams as a notational device for making a certain kind of calculation easier. This mirrors the view of Freeman Dyson, who was the first person to understand Feynman’s diagrammatic method and show its equivalence to the existing mathematical version. Dyson was apparently always very careful to start with the standard mathematics, and then show how the diagrams could replicate this.
None of this fits with how Feynman himself viewed the diagrams. For Feynman, the diagrams were a continuation of an idiosyncratic path he’d been pursuing for some time already, where he tried to remove fields from his models of physics and replace them with direct particle interactions. He saw the diagrams themselves as describing actual particle interactions occurring in spacetime, and considered them to take precedence over the mathematical description:
… Feynman believed fervently that the diagrams were more primary and more important than any derivation that they might be given. In fact, Feynman continued to avoid the question of derivation in his articles, lecture courses and correspondence… Nowhere in Feynman’s 1949 article on the diagrams, for example, were the diagrams’ specific features or their strict one-to-one correlations with specific mathematical expressions derived or justified from first principles. Instead, Feynman avowed unapologetically that “Since the result was easier to understand than the derivation, it was thought best to publish the results first in this paper.”
This split persisted as methods were taught more widely and eventually condensed into textbooks. Some physicists stuck with the mathematical-formalism-first approach, while others took Feynman’s view to an extreme:
James Bjorken and Sidney Drell began their twin textbooks on relativistic quantum mechanics and quantum field theory from 1964 and 1965 with the strong statement that “one may go so far as to adopt the extreme view that the full set of all Feynman graphs is the theory.” Though they quickly backed off this stance, they firmly stated their “conviction” that the diagrams and rules for calculating directly from them “may well outlive the elaborate mathematical structure” of canonical quantum field theory, which, they further opined, might “in time come to be viewed more as a superstructure than as a foundation.”
I’d never thought about this before, but this line of argument makes a fair bit of sense to me. This was a new field and the mathematical formalism was not actually very much older than Feynman’s diagrams. So everything was still in flux, and if the diagrams looked simpler than the formalism then maybe that looked like an indication to start there instead? I’d be interested now to learn a bit more of the history.
A third motivation also appeared at this point. The immediate postwar years were a time of enormous expansion in physics funding, especially in the US, and huge numbers of new students were entering the field. These students mostly needed to calculate practical things quickly, and conceptual niceties were not important. Feynman diagrams were relatively straightforward to learn compared to the underlying formalism, so a diagram-first route that got students calculating quickly became popular.
This pragmatic motivation is one reason that Kaiser’s focus on diagrams works so well, compared to a theory-first approach. Most practitioners were not even trying to teach and apply consistent theories:
… textbooks during the 1950s and 1960s routinely threw together techniques of mixed conceptual heritage, encouraging students to apply an approximation based on nonrelativistic potential scattering here, a lowest-order Feynman diagram there.
There wasn’t any need to, when the pragmatic approach was working so well. New experimental results were coming out all the time, and theorists were running to keep up, finding ways of adapting their techniques to solve new problems. There was more than enough work to keep everyone busy without needing to worry about the conceptual foundations.
There’s something kind of melancholy about reading about this period now. This was the golden age of a particular type of physics, which worked astonishingly well right up until it didn’t. Eventually the new experimental results ran dry, theory caught up, and it was no longer obvious how to proceed further with current techniques. Other fields continued to flourish – astronomy, condensed matter – but particle physics lost its distinctive cultural position at the leading edge of knowledge, and hasn’t regained it.
Still, I enjoyed the book, and I’m hoping it might end up helping me make some more sense of the physics, as well as the history. Since reading Helling’s notes on Feynman diagrams in classical physics, I’ve been curious about how they connect to the quantum versions. There’s a big difference between the classical and quantum diagrams – the quantum ones have loops and the classical ones don’t – and I’d like to understand why this happens at a deeper level, but it’s kind of hard to compare them properly when the formalisms used are so different. Knowing more about the historical development of the theory has given me some clues for where to start from. I’m looking forward to exploring this more.
One of the more interesting recurring topics is visual programming:
Visual Programming Doesn’t Suck. Or maybe it does? These kinds of arguments usually start with a few shallow rounds of yay/boo. But then often something more interesting happens. Some of the subthreads get into more substantive points, and people with a deep knowledge of the tool in question turn up, and at this point the discussion can become genuinely useful and interesting.
This is one of the things I genuinely appreciate about Hacker News. Most fields have a problem with ‘ghost knowledge’, hard-won practical understanding that is mostly passed on verbally between practitioners and not written down anywhere public. At least in programming some chunk of it makes it into forum posts. It’s normally hidden in the depths of big threads, but that’s better than nothing.
I decided to read a bunch of these visual programming threads and extract some of this folk wisdom into a more accessible form. The background for how I got myself into this is a bit convoluted. In the last year or so I’ve got interested in the development of writing as a technology. There are two books in particular that have inspired me:
Walter Ong’s Orality and Literacy: the Technologizing of the Word. This is about the history of writing and how it differs from speech; I wrote a sort of review here. Everything that we now consider obvious, like vowels, full stops and spaces between words, had to be invented at some point, and this book gives a high level overview of how this happened and why.
Catarina Dutilh Novaes’s Formal Languages in Logic. The title makes it sound like a maths textbook, but Novaes is a philosopher and really it’s much closer to Ong’s book in spirit, looking at formal languages as a type of writing and exploring how they differ from ordinary written language.
Dutilh Novaes focuses on formal logic, but I’m curious about formal and technical languages more generally: how do we use the properties of text in other fields of mathematics, or in programming? What is text good at, and what is it bad at? Comment threads on visual programming turn out to be a surprisingly good place to explore this question. If something’s easy in text but difficult in a specific visual programming tool, you can guarantee that someone will turn up to complain about it. Some of these complaints are fairly superficial, but some get into some fairly deep properties of text: linearity, information density, an alphabet of discrete symbols. And conversely, enthusiasm for a particular visual feature can be a good indicator of what text is poor at.
So that’s how I found myself plugging through a text file with 1304 comments pasted into it and wondering what the hell I had got myself into.
What I did
Note: This post is looong (around 9000 words), but also very modular. I’ve broken it into lots of subsections that can be read relatively independently, so it should be fairly easy to skip around without reading the whole thing. Also, a lot of the length is from liberal use of quotes from comment threads. So hopefully it’s not quite as bad as it looks!
This is not supposed to be some careful scientific survey. I decided what to include and how to categorise the results based on whatever rough qualitative criteria seemed reasonable to me. The basic method, such as it was, was the following:
Type ‘visual programming’ into the HN search box and pull out the six entries on the first page that were a) about visual programming in general, not a specific tool and b) had long discussion threads (100+ comments). These six threads were:
Skim through the comments and do a rough triage, keeping anything that was on-topic and fairly substantive
Pull out interesting-looking parts of these comments into a spreadsheet, and tag with common themes that I noticed
Write this blog post
The basic structure of the rest of the post is the following:
A breakdown of what commenters normally meant by ‘visual programming’ in these threads. It’s a pretty broad term, and people come in with very different understandings of it.
Common themes. This is the main bulk of the post, where I’ve pulled out topics that came up in multiple threads.
A short discussion-type section with some initial questions that came to mind while writing this. There are many directions I could take this in, and this post is long enough without discussing these in detail, so I’ll just wave at some of them vaguely. Probably I’ll eventually write at least one follow-up post to pick up some of these strands when I’ve thought about them more.
Types of visual programming
There are also a lot of disparate visual programming paradigms that are all classed under “visual”, I guess in the same way that both Haskell and Java are “textual”. It makes for a weird debate when one party in a conversation is thinking about patch/wire dataflow languages as the primary VPLs (e.g. QuartzComposer) and the other one is thinking about procedural block languages (e.g. Scratch) as the primary VPLs.
One difficulty with interpreting these comments is that people often start arguing about ‘visual programming’ without first specifying what type of visual programming they mean. Sometimes this gets cleared up further into a comment thread, when people start naming specific tools, and sometimes it never gets cleared up at all. There were a few broad categories that came up frequently, so I’ll start by summarising them below.
There are a large number of visual programming tools that are roughly in the paradigm of ‘boxes with some arrows between them’, like the LabVIEW example above. I think the technical term for these is ‘node-based’, so that’s what I’ll call them. These ended up being the main topic of conversation in four of the six discussions, and mostly seemed to be the implied topic when someone was talking about ‘visual programming’ in general. Most of these tools are special-purpose ones that are mainly used in a specific domain. These domains came up repeatedly:
Laboratory and industrial control. LabVIEW was the main tool discussed in this category. In fact it was probably the most commonly discussed tool of all, attracting its fair share of rants but also many defenders.
Game engines. Unreal Engine’s Blueprints was probably the second most common topic. This is a visual gameplay scripting system.
Music production. Max/MSP came up a lot as a tool for connecting and modifying audio clips.
Visual effects. Houdini, Nuke and Blender all have node-based editors for creating effects.
Data migration. SSIS was the main tool here, used for migrating and transforming Microsoft SQL Server data.
Other tools that got a few mentions include Simulink (Matlab-based environment for modelling dynamical systems), Grasshopper for Rhino3D (3D modelling), TouchDesigner (interactive art installations) and Azure Logic Apps (combining cloud services).
The only one of these I’ve used personally is SSIS, and I only have a basic level of knowledge of it.
Block-based editors
This category includes environments like Scratch that convert some of the syntax of normal programming into coloured blocks that can be slotted together. These are often used as educational tools for new programmers, especially when teaching children.
This was probably the second most common thing people meant by ‘visual programming’, though there was some argument about whether they should count, as they mainly reproduce the conventions of normal text-based programming:
Scratch is a snap-together UI for traditional code. Just because the programming text is embedded inside draggable blocks doesn’t make it a visual language, its a different UI for a text editor. Sure, its visual, but it doesn’t actually change the language at all in any way. It could be just as easily represented as text, the semantics are the same. Its a more beginner-friendly mouse-centric IDE basically.
Drag-n-drop UI builders
Drag-n-drop UI builders came up a bit, though not as much as I originally expected, and generally not naming any specific tool (Delphi did get a couple of mentions). In particular there was very little discussion of the new crop of no-code/low-code tools, I think because most of these threads predate the current hype wave.
These tools are definitely visual, but not necessarily very programmatic — they are often intended for making one specific layout rather than a dynamic range of layouts. And the visual side of UI design tends to run into conflict with the ability to specify dynamic behaviour.
These tools also have less of the discretised, structured element that is usually associated with programming — for example, node-based tools still have a discrete ‘grammar’ of allowable box and arrow states that can be composed together. UI tools are relatively continuous and unstructured, where UI elements can be resized to arbitrary pixel sizes.
Spreadsheets
There’s a good argument for spreadsheets being a visual programming paradigm, and a very successful one:
I think spreadsheets also qualify as visual programming languages, because they’re two-dimensional and grid based in a way that one-dimensional textual programming languages aren’t.
The grid enables them to use relative and absolute 2D addressing, so you can copy and paste formulae between cells, so they’re reusable and relocatable. And you can enter addresses and operands by pointing and clicking and dragging, instead of (or as well as) typing text.
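To make the relative/absolute addressing point concrete, here is a toy sketch (my own, not from the threads) of how a copied formula’s cell references shift; the representation and names are hypothetical:

```python
# Hypothetical sketch of spreadsheet-style addressing: a formula's cell
# references are (row, col) pairs plus an 'absolute' flag. Copying the
# formula to another cell shifts relative references by the move delta,
# while absolute references ($A$1-style) stay fixed.

def shift_ref(ref, drow, dcol):
    """Shift a ((row, col), absolute) reference unless it's absolute."""
    (row, col), absolute = ref
    if absolute:
        return ref
    return ((row + drow, col + dcol), absolute)

# '=A1+A2' written in cell A3, with two relative references:
formula = [((0, 0), False), ((1, 0), False)]

# Copy the formula one column right, from A3 to B3:
copied = [shift_ref(ref, 0, 1) for ref in formula]
assert copied == [((0, 1), False), ((1, 1), False)]  # now reads '=B1+B2'

# An absolute reference like '$A$1' survives the copy unchanged:
assert shift_ref(((0, 0), True), 0, 1) == ((0, 0), True)
```

Real spreadsheets do essentially this bookkeeping on every copy-paste, which is what makes formulae reusable and relocatable.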
Spreadsheets are definitely not the canonical example anyone has in mind when talking about ‘visual programming’, though, and discussion of spreadsheets was confined to a few subthreads.
Visual enhancements of text-based code
As a believer myself, I think the problem is that visual programming suffers the same problem known as the curse of Artificial Intelligence:
“As soon as a problem in AI is solved, it is no longer considered AI because we know how it works.” 
Similarly, as soon as a successful visual interactive feature (be it syntax highlighting, trace inspectors for step-by-step debugging, “intellisense” code completion…) gets adopted by IDEs and become mainstream, it is no longer considered “visual” but an integral and inevitable part of classic “textual programming”.
There were several discussions of visual tooling for understanding normal text-based programs better, through debugging traces, dependency graphs, inheritance hierarchies, etc. Again, these were mostly confined to a few subthreads rather than being a central example of ‘visual programming’.
Several people also pointed out that even text-based programming in a plain text file has a number of visual elements. Code as written by humans is not a linear string of bytes, we make use of indentation and whitespace and visually distinctive characters:
Code is always written with “indentation” and other things that demonstrate that the 2d canvas distribution of the glyphs you’re expressing actually does matter for the human element. You’re almost writing ASCII art. The ( ) and [ ] are even in there to evoke other visual types. — nikki93
Brackets are a nice example — they curve towards the text they are enclosing, reinforcing the semantic meaning in a visual way.
Experimental or speculative interfaces
At the other end of the scale from brackets and indentation, we have completely new and experimental visual interfaces. Bret Victor’s Dynamicland and other experiments were often brought up here, along with speculations on the possibilities opened up by VR:
As long as we’re speculating: I kind of dream that maybe we’ll see programming environments that take advantage of VR.
Humans are really good at remembering spaces. (“Describe for me your childhood bedroom.” or “What did your third grade teacher look like?”)
There’s already the idea of “memory palaces”  suggesting you can take advantage of spatial memory for other purposes.
I wonder, what would it be like to learn or search a codebase by walking through it and looking around?
This is the most exciting category, but it’s so wide open and untested that it’s hard to say anything very specific. So, again, this was mainly discussed in tangential subthreads.
Common talking points
There were many talking points that recurred again and again over the six threads. I’ve tried to collect them here.
I’ve arranged them in rough order of depth, starting with complaints about visual programming that could probably be addressed with better tooling and then moving towards more fundamental issues that engage with the specific properties of text as a medium (there’s plenty of overlap between these categories; it’s only a rough grouping). Then there’s a grab bag of interesting remarks that didn’t really fit into any category at the end.
Missing tooling
A large number of complaints in all threads were about poor tooling. As a default format, text has an enormous ecosystem of existing tools for input, search, diffing, formatting, etc etc. Most of these could presumably be replicated for any given visual format, but there are many kinds of visual formats and generally these are missing at least some of the conveniences programmers expect. I’ve discussed some of the most common ones below.
Unreal has a VPL and it is a pain to use. A simple piece of code takes up so much desktop real estate that you either have to slowly move around to see it all or have to add more monitors to your setup to see it all. You think spaghetti code is bad imagine actually having a visual representation of it you have to work with. Organization doesn’t exist you can go left, up, right, or down.
The standard counterargument to this was that LabVIEW and most other node-based environments do come with tools for encapsulation: you can generally ‘box up’ sets of nodes into named function-like subdiagrams. The extreme types of spaghetti code are mostly produced by inexperienced users with a poor understanding of the modularisation options available to them, in the same way that a beginner Python programmer with no previous coding experience might write one giant script with no functions:
Somehow people form the opinion that once you start programming in a visual language that you’re suddenly forced, by some unknown force, to start throwing everything into a single diagram without realizing that they separate their text-based programs into 10s, 100s, and even 1000s of files.
Poorly modularized and architected code is just that, no matter the paradigm. And yes, there are a lot of bad LabVIEW programs out there written by people new to the language or undisciplined in their craft, but the same holds true for stuff like Python or anything else that has a low barrier to entry.
Viewed through this lens there’s almost an argument that visual spaghetti is a feature not a bug — at least you can directly see that you’ve created a horrible mess, without having to be much of a programming expert.
There were a few more sophisticated arguments against node-based editors that acknowledged the fact that encapsulation existed but still found the mechanics of clicking through layers of subdiagrams to be annoying or confusing.
It may be that I’m just not a visual person, but I’m currently working on a project that has a large visual component in Pentaho Data Integrator (a visual ETL tool). The top level is a pretty simple picture of six boxes in a pipeline, but as you drill down into the components the complexity just explodes, and it’s really easy to get lost. If you have a good 3-D spatial awareness it might be better, but I’ve started printing screenshots and laying them out on the floor. I’m really not a visual person though…
IDEs for text-based languages normally have features like code folding and call hierarchies for moving between levels, but these conventions are less developed in node-based tools. This may be just because these tools are more niche and have had less development time, or it may genuinely be a more difficult problem for a 2D layout — I don’t know enough about the details to tell.
In general, all the dragging quickly becomes annoying. As a trained programmer, you can type faster than you can move your mouse around. You have an algorithm clear in your head, but by the time you’ve assembled it half-way on the screen, you already want to give up and go do something else.
Text-based languages also have a highly-refined interface for writing the language — most of us have a great big rectangle sitting on our desks with a whole grid of individual keys mapping to specific characters. In comparison, a visual tool based on a different paradigm won’t have a special input device, so it will either have to rely on the mouse (lots of tedious RSI-inducing clicking around) or involve learning a new set of special-purpose keyboard shortcuts. These shortcuts can work well for experienced programmers:
If you are a very experienced programmer, you program LabVIEW (one of the major visual languages) almost exclusively with the keyboard (QuickDrop).
Let me show you an example (gif) I press “Ctrl + space” to open QuickDrop, type “irf” (a short cut I defined myself) and Enter, and this automatically drops a code snippet that creates a data structure for an image, and reads an image file.
Another tedious feature of many node-based tools is arranging all the boxes and arrows neatly on the screen. It’s irrelevant for the program output, but makes a big difference to readability. (Also it’s just downright annoying if the lines look wrong — my main memory of SSIS is endless tweaking to get the arrows lined up nicely).
Text-based languages are more forgiving, and also people tend to solve the problem with autoformatters. I don’t have a good understanding of why these aren’t common in node-based editors. (Maybe they actually are and people were complaining about the tools that are missing them? Or maybe the sort of formatting that is useful is just not automatable, e.g. grouping boxes by semantic meaning). It’s definitely a harder problem than formatting text, but there was some argument about exactly how hard it is to get at least a reasonable solution:
Automatic layout is hard? Yes, an optimal solution to graph layout is NP-complete, but so is register allocation, and my compiler still works (and that isn’t even its bottleneck). There’s plenty of cheap approximations that are 99% as good.
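To give a flavour of what a ‘cheap approximation’ might look like, here is a toy layered layout in Python (entirely my own sketch, not any tool’s actual algorithm): assign each node a column by its longest path from the inputs, then stack nodes within a column.

```python
# Toy 'cheap approximation' to node-graph layout (hypothetical, not how
# LabVIEW etc. actually do it): x = longest path from any source node
# (so wires always point rightwards in an acyclic graph), y = stacking
# order within that column.

def layered_layout(nodes, edges):
    """nodes: list of ids; edges: list of (src, dst) pairs, assumed acyclic."""
    preds = {n: [] for n in nodes}
    for src, dst in edges:
        preds[dst].append(src)

    layer = {}
    def depth(n):  # memoised longest path from the inputs
        if n not in layer:
            layer[n] = 1 + max((depth(p) for p in preds[n]), default=-1)
        return layer[n]

    counts, pos = {}, {}
    for n in nodes:
        x = depth(n)
        y = counts.get(x, 0)   # next free slot in this column
        counts[x] = y + 1
        pos[n] = (x, y)
    return pos

pos = layered_layout(["read", "filter", "plot", "save"],
                     [("read", "filter"), ("filter", "plot"), ("filter", "save")])
# 'read' lands in column 0, 'filter' in column 1, 'plot' and 'save' share column 2
```

Nothing here is optimal (crossing minimisation is where the NP-complete part lives), but even this level of tidying is the kind of automation the quote is pointing at.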
Version control, diffing and code review
Same story again — text comes with a large ecosystem of existing tools for diffing, version control and code review. It sounds like at least the more developed environments like LabVIEW have some kind of diff tool, and an experienced team can build custom tools on top of that:
We used Perforce. So a custom tool was integrated into Perforce’s visual tool such that you could right-click a changelist and submit it for code review. The changelist would be shelved, and then LabVIEW’s diff tool (lvcompare.exe) would be used to create screenshots of all the changes (actually, some custom tools may have done this in tandem with or as a replacement of the diff tool). These screenshots, with a before and after comparison, were uploaded to a code review web server (I forgot the tool used), where comments could be made on the code. You could even annotate the screenshots with little rectangles that highlighted what a comment was referring to. Once the comments were resolved, the code would be submitted and the changelist number logged with the review. This is based off of memory, so some details may be wrong.
This is important because it shows that such tools can exist. Much of the common complaint comes down to forgetting that text-based code review tools also had to be built at some point; the visual equivalents just need the same investment in building and improving them.
Debugging
Opinions were split on debugging. Visual, flow-based languages can make it easy to see exactly which route through the code is activated:
Debugging in unreal is also really cool. The “code paths” light up when activated, so it’s really easy to see exactly which branches of code are and aren’t being run – and that’s without actually using a debugger. Side note – it would be awesome if the lines of text in my IDE lit up as they were run. Also, debugging games is just incredibly fun and sometimes leads to new mechanics.
I remember this being about the only enjoyable feature of my brief time working with SSIS — boxes lit up green if everything went to plan, and red if they hit an exception. It was satisfying getting a nice run of green boxes once a bug was fixed.
On the other hand, there were problems with complexity again. Here are some complaints about LabVIEW debugging:
3) debugging is a pain. LabVIEW’s trace is lovely if you have a simple mathematical function or something, but the animation is slow and it’s not easy to check why the value at iteration 1582 is incorrect. Nor can you print anything out, so you end up putting an debugging array output on the front panel and scrolling through it.
4) debugging more than about three levels deep is painful: it’s slow and you’re constantly moving between windows as you step through, and there’s no good way to figure out why the 20th value in the leaf node’s array is wrong on the 15th iteration, and you still can’t print anything, but you can’t use an output array, either, because it’s a sub-VI and it’s going to take forever to step through 15 calls through the hierarchy.
Suitable problem domains
There was a lot of discussion on what sort of problem domains are suited to ‘visual programming’ (which often turned out to mean node-based programming specifically, but not always).
Better for data flow than control flow
A common assertion was that node-based programming is best suited to data flow situations, where a big pile of data is tipped into some kind of pipeline that transforms it into a different form. Migration between databases would be a good example of this. On the other hand, domains with lots of branching control flow were often held to be difficult to work with. Here’s a representative quote:
Control flow is hard to describe visually. Think about how often we write conditions and loops.
That said – working with data is an area that lends itself well to visual programming. Data pipelines don’t have branching control flow, so you’ll see some really successful companies in this space.
I’m not sure how true this is? There wasn’t much discussion of why this would be the case, and it seems that LabVIEW for example has decent functionality for loops and conditions:
Aren’t conditionals and loops easier in visual languages? If you need something to iterate, you just draw a for loop around it. If you need two while loops each doing something concurrently, you just draw two parallel while loops. If you need to conditionally do something, just draw a conditional structure and put code in each condition.
One type of control structure I have not seen a good implementation of is pattern matching. But that doesn’t mean it can’t exist, and it’s also something most text-based languages don’t do anyway.
Maybe the issue is that there is a conceptual tension between data flow and control flow situations themselves, rather than just the representation of them? Data flow pipelines often involve multiple pieces of data going through the pipeline at once and getting processed concurrently, rather than sequentially. At least one comment addressed this directly:
One of the unappreciated facets of visual languages is precisely the dichotomy between easy dataflow vs easy control flow. Everyone can agree that
--> [A] --> [B] -->
represents (1) a simple pipeline (function composition) and (2) a sort of local no-op, but what about more complex representations? Does parallel composition of arrows and boxes represent multiple data inputs/outputs/computations occurring concurrently, or entry/exit points and alternative choices in a sequential process? Is there a natural “split” of flowlines to represent duplication of data, or instead a natural “merge” for converging control flows after a choice? Do looping diagrams represent variable unification and inference of a fixpoint, or the simpler case of a computation recursing on itself, with control jumping back to an earlier point in the program with updated data?
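The first, dataflow reading of that diagram (a pipeline as plain function composition) is easy to make concrete; this sketch is mine, with made-up stage names:

```python
# The '--> [A] --> [B] -->' pipeline reading is just function composition:
# data flows in one side, through each stage in turn, and out the other.
from functools import reduce

def pipeline(*stages):
    """Compose stages left to right: pipeline(a, b)(x) == b(a(x))."""
    return lambda x: reduce(lambda acc, stage: stage(acc), stages, x)

double = lambda x: x * 2
increment = lambda x: x + 1

run = pipeline(double, increment)
assert run(10) == 21  # (10 * 2) + 1
```

The ambiguities in the quote start as soon as you want a box with two output wires: is that copying the data to two concurrent consumers, or a branch where only one path runs?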
Better for visual domains
Visual programming is, unsurprisingly, well-suited to tasks that have a strong visual component. We see this on the small scale with things like colour pickers, which are far more helpful for choosing a colour than typing in an RGB code and hoping for the best. So even primarily text-based tools might throw in some visual features for tasks that are just easier that way.
Some domains, like visual effects, are so reliant on being able to see what you’re doing that visual tools are a no-brainer. See the TouchDesigner tutorial mentioned in this comment for an impressive example. If you need to do a lot of visual manipulation, giving up the advantages of text is a reasonable trade:
Why is plain text so important? Well for starters it powers version control and cut and pasting to share code, which are the basis of collaboration, and collaboration is how we’re able to construct such complex systems. So why then don’t any of the other apps use plain text if it’s so useful? Well 100% of those apps have already given up the advantages of plain text for tangential reasons, e.g., turning knobs on a synth, building a model, or editing a photo are all terrible tasks for plain text.
A related point was that visual tools are generally designed for niche domains, and rarely get co-opted for more general programming. A common claim was that visual tools favour concrete situations over abstract ones:
There is a huge difference between direct manipulation of concrete concepts, and graphical manipulation of abstract code. Visual programming works much better with the former than the latter.
It does seem to be the case that visual tools generally ‘stay close to the phenomena’. There’s a tension between showing a concrete example of a particular situation, and being able to go up to a higher level of abstraction and dynamically generate many different examples. (A similar point came up in the section on drag-n-drop editors above.)
Deeper structural properties of text
“Text is the most socially useful communication technology. It works well in 1:1, 1:N, and M:N modes. It can be indexed and searched efficiently, even by hand. It can be translated. It can be produced and consumed at variable speeds. It is asynchronous. It can be compared, diffed, clustered, corrected, summarized and filtered algorithmically. It permits multiparty editing. It permits branching conversations, lurking, annotation, quoting, reviewing, summarizing, structured responses, exegesis, even fan fic. The breadth, scale and depth of ways people use text is unmatched by anything. There is no equivalent in any other communication technology for the social, communicative, cognitive and reflective complexity of a library full of books or an internet full of postings. Nothing else comes close.”
In this section I’ll look at properties that apply more specifically to text. Not everything in the quote above came up in discussion (and much of it is applicable to ordinary language more than to programming languages), but it does give an idea of the special position held by text.
Communicative range
I think the reason is that text is already a highly optimized visual way to represent information. It started with cave paintings and evolved to what it is now.
“Please go to the supermarket and get two bottles of beer. If you see Joe, tell him we are having a party in my house at 6 tomorrow.”
It took me a few seconds to write that. Imagine I had to paint it.
The communicative range of text came up a few times. I’m not convinced on this one. It’s true that ordinary language has this ability to finely articulate incredibly specific meanings, in a way that pictures can’t match. But the real reference class we want to compare to is text-based programming, not ordinary language. Programming languages have a much more restrictive set of keywords that communicate a much smaller set of ideas, mostly to do with quantity, logical implication and control flow.
In the supermarket example above, the if-then structure could be expressed in these keywords, but all the rest of the work would be done by tokens like “bottlesOfBeer”, which are meaningless to the computer and only help the human reading it.
As soon as we’ve assigned something a variable name, we’ve already altered our code into a form to assist our cognition.
It seems much more reasonable that this limited structure of keywords can be ported to a visual language, and in fact a node-based tool like LabVIEW seems to have most of them. Visual languages generally still have the ability to label individual items with text, so you can still have a “bottlesOfBeer” label if you want and get the communicative benefit of language. (It is true that a completely text-free language would be a pain to deal with, but nobody seems to be doing that anyway.)
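To make this concrete, here is the supermarket errand as a rough Python sketch (my own rendering, not from the threads); the keywords carry the structure the computer understands, while the identifiers only help the human:

```python
# Rough sketch of the supermarket errand as code. 'def', 'if'/'else' and
# '*' are the small keyword/operator vocabulary the language gives us;
# the identifiers and strings are hypothetical labels that mean nothing
# to the computer.

def run_errand(saw_joe):
    bottles_of_beer = 2
    shopping_list = ["beer"] * bottles_of_beer
    message = "Party at my house at 6 tomorrow" if saw_joe else None
    return shopping_list, message

items, msg = run_errand(saw_joe=True)
# items is ['beer', 'beer']; msg carries the invitation text
```

Renaming bottles_of_beer to x2 changes nothing for the interpreter, which is exactly the point: the communicative load sits on the human-facing labels, and those port straightforwardly to labels in a visual language.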
Information density
A more convincing related point is that text takes up very little space. We’re already accustomed to distinguishing letters, even if they’re printed in a smallish font, and they can be packed together closely. It is true that the text-based version of the supermarket program would probably take up less space than a visual version.
This complaint came up a lot in relation to mathematical tasks, which are often built up by composing a large number of simpler operations. This can become a massive pain if the individual operations take up a lot of space:
Graphs take up much more space on the screen than text. Grab a pen and draw a computational graph of a Fourier transformation! It takes up a whole screen. As a formula, it takes up a tiny fraction of it. Our state machine used to take up about 2m x 2m on the wall behind us.
Many node-based tools seem to have some kind of special node for typing in maths in a more conventional linear way, to get around this problem.
(Sidenote: this didn’t come up in any of the discussions, but I am curious as to how fundamental this limitation is. Part of it comes from the sheer familiarity of text. The first letters we learned as a child were printed a lot bigger! So presumably we could learn to distinguish closely packed shapes if we were familiar enough with the conventions. At this point, of course, with a small number of distinctive glyphs, it would share a lot of properties with text-based language. See the section on discrete symbols below.)
Linearity
Humans are centered around linear communication. Spoken language is essentially linear, with good use of a stack of concepts. This story-telling mode maps better on a linear, textual representation than on a graphical representation. When provided with a graph, it is difficult to find the start and end. Humans think in graphs, but communicate linearly.
The linearity of text is a feature that is mostly preserved in programming. We don’t literally read one giant 1D line of symbols, of course. It’s broken into lines and there are special structures for loops. But the general movement is vertically downwards. “1.5 dimensions” is a nice description:
When you write text-based code, you are also restricted to 2 dimensions, but it’s really more like 1.5 because there is a heavy directionality bias that’s like a waterfall, down and across. I cannot copy pictures or diagrams into a text document. I cannot draw arrows between comments to the relevant code; I have to embed the comment within the code because of this dimensionality/directionality constraint. I cannot “touch” a variable (wire) while the program is running to inspect its value.
It’s true that many visual environments give up this linearity and allow more general positioning in 2D space (arbitrary placing of boxes and arrows in node-based programming, for example, or the 2D grids in spreadsheets). This has benefits and costs.
On the costs side, linear structures are a good match to the sequential execution of program instructions. They’re also easy to navigate and search through, top to bottom, without getting lost in branching confusion. Developing tools like autoformatters is more straightforward (we saw this come up in the earlier section on missing tooling).
On the benefits side, 2D structures give you more of an expressive canvas for communicating the meaning of your program: grouping similar items together, for example, or using shapes to distinguish between types of object.
In LabVIEW, not only do I have a 2D surface for drawing my program, I also get another 2D surface to create user interfaces for any function if I need. In text-languages, you only have colors and syntax to distinguish datatypes. In LabVIEW, you also have shape. These are all additional dimensions of information.
And the match to sequential execution is less important if your target domain is also non-sequential in some way:
If the program is completely non-sequential, visual tools which reflects the structure of the program are going to be much better than text. For example, if you are designing a electronic circuit, you draw a circuit diagram. Describing a electronic circuit purely in text is not going to be very helpful.
Discrete symbols
Written text IS a visual medium. It works because there is a finite alphabet of characters that can be combined into millions of words. Any other “visual” language needs a similar structure of primitives to be unambiguously interpreted.
This is a particularly important point that was brought up by several commenters in different threads. Text is built up from a small number of distinguishable characters. Text-based programming languages add even more structure, restricting to a constrained set of keywords that can only be combined in predefined ways. This removes ambiguity in what the program is supposed to do. The computer is much stupider than a human and ultimately needs everything to be completely specified as a sequence of discrete primitive actions.
At the opposite end of the spectrum is, say, an oil painting, which is also a visual medium but much more of an unconstrained, freeform one, where brushstrokes can swirl in any arbitrary pattern. This freedom is useful in artistic fields, where rich ambiguous associative meaning is the whole point, but becomes a nuisance in technical contexts. So different parts of the spectrum are used for different things:
Because each method has its pros and cons. It’s a difference of generality and specificity.
Consider this list as a ranking: 0 and 1 >> alphabet >> Chinese >> picture.
All 4 methods can be useful in some cases. Chinese has tens of thousands of characters, some people consider the language close to pictures, but real pictures have more than that (infinite variants).
Chinese is harder to parse than alphabet, and picture is harder than Chinese. (Imagine a compiler that can understand arbitrary pictures!)
Visual programs are still generally closer to the text-based program end of the spectrum than the oil painting one. In a node-based programming language, for example, there might be a finite set of types of boxes, and defined rules on how to connect them up. There may be somewhat more freedom than normal text, with the ability to place boxes anywhere on a 2D canvas, but it’s still a long way from being able to slap any old brushstroke down. One commenter compared this to diagrammatic notation in category theory:
Category theorists deliberately use only a tiny, restricted set of the possibilities of drawing diagrams. If you try to get a visual artist or designer interested in the diagrams in a category theory book, they are almost certain to tell you that nothing “visual” worth mentioning is happening in those figures.
Visual culture is distinguished by its richness on expressive dimensions that text and category theory diagrams just don’t have.
Drag-n-drop editors are a bit further towards the freeform end of the spectrum, allowing UI elements to be resized continuously to arbitrary sizes. But there are still constraints — maybe your widgets have to be rectangles, for example, rather than any old hand-drawn shape. And, as discussed in earlier sections, there’s a tension between visual specificity and dynamic programming of many potential visual states at once. Drag-n-drop editors arguably lose a lot of the features of ‘true’ languages by giving up structure, and more programmatic elements are likely to still use a constrained set of primitives.
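The ‘finite set of box types plus rules for connecting them’ idea can be sketched as a tiny typed node grammar; everything below (node names, port types) is hypothetical, invented for illustration:

```python
# Hypothetical node 'grammar': each box type declares typed input and
# output ports, and a wire is only legal when the port types match.
# This is the discrete, constrained structure that keeps a node-based
# program unambiguous, unlike a freeform brushstroke.

NODE_TYPES = {
    "ReadFile": {"inputs": [],        "outputs": ["table"]},
    "Filter":   {"inputs": ["table"], "outputs": ["table"]},
    "Plot":     {"inputs": ["table"], "outputs": ["image"]},
}

def can_connect(src_type, src_port, dst_type, dst_port):
    """A wire is valid only if the source output type matches the destination input type."""
    outs = NODE_TYPES[src_type]["outputs"]
    ins = NODE_TYPES[dst_type]["inputs"]
    return (src_port < len(outs) and dst_port < len(ins)
            and outs[src_port] == ins[dst_port])

assert can_connect("ReadFile", 0, "Filter", 0)   # table -> table: allowed
assert not can_connect("Plot", 0, "Filter", 0)   # image -> table: rejected
```

Where a box sits on the canvas stays continuous and free; what it is and what it may connect to comes from this small discrete vocabulary.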
Finally, there was an insightful comment questioning how successful these constrained visual languages are compared to text:
I am not aware of a constrained pictorial formalism that is both general and expressive enough to do the job of a programming language (directed graphs may be general enough, but are not expressive enough; when extended to fix this, they lose the generality.)
… There are some hybrids that are pretty useful in their areas of applicability, such as state transition networks, dataflow models and Petri nets (note that these three examples are all annotated directed graphs.)
This could be a whole blog post topic in itself, and I may return to it in a follow-up post — Dutilh Novaes makes similar points in her discussion of tractability vs expressiveness in formal logic. Too much to go into here, but I do think this is important.
Grab bag of other interesting points
This section is exactly what it says — interesting points that didn’t fit into any of the categories above.
Allowing syntax errors
This is a surprising one I wouldn’t have thought of, but it came up several times and makes a lot of sense on reflection. A lot of visual programming tools are too good at preventing syntax errors. Temporary errors can actually be really useful for refactoring:
This is also one of the beauties of text programming. It allows temporary syntax errors while restructuring things.
I’ve used many visual tools where every block you laid out had to be properly connected, so in order to refactor it you had to make dummy blocks as input and output and all other kinds of crap. Adding or removing arguments and return values of functions/blocks is guaranteed to give you rsi from excessive mousing.
I don’t quite understand why this is so common in visual tools specifically, but it may be to do with the underlying representation? One comment pointed out that this was a more general problem with any kind of language based on an abstract syntax tree that has to be correct at every point:
For my money, the reason for this is that a human editing code needs to write something invalid – on your way from Valid Program A to Valid Program B, you will temporarily write Invalid Jumble Of Bytes X. If your editor tries to prevent you writing invalid jumbles of bytes, you will be fighting it constantly.
The only languages with widely-used AST-based editing are the Lisp family (with paredit). They get away with this because:
Lisp ‘syntax’ is so low-level that it doesn’t constrain your (invalid) intermediate states much. (ie you can still write a (let) or (cond) with the wrong number of arguments while you’re thinking).
Paredit modes always have an “escape hatch” for editing text directly (eg you can usually highlight and delete an unbalanced parenthesis). You don’t need it often (see #1) – but when you need it, you really need it.
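The ‘invalid jumble of bytes’ point can be demonstrated with Python’s own parser. This is just a toy illustration (the `area` function and the particular half-finished edit are invented): the intermediate state of a perfectly ordinary refactor is not a parseable program, which is exactly what a strictly structure-checking editor would refuse to let you type.

```python
import ast

# Valid Program A: a function taking one argument.
before = "def area(w):\n    return w * w\n"

# Invalid intermediate state X: halfway through adding a second
# argument, the buffer is not a syntactically valid program.
midway = "def area(w, :\n    return w * w\n"

# Valid Program B: the refactor is finished.
after = "def area(w, h):\n    return w * h\n"

def parses(src: str) -> bool:
    """Return True if src parses as a Python program."""
    try:
        ast.parse(src)
        return True
    except SyntaxError:
        return False

print(parses(before), parses(midway), parses(after))  # True False True
```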
Hybrid approaches
Maybe a mixture of text and diagrams is a more natural way to build a visual language?
Take what we all see at the end of whiteboard sessions. We see diagrams composed of text and icons that represent a broad swath of conceptual meaning. There is no reason why we can’t work in the same way with programming languages and computers.
Another recurring theme was a wish for hybrid tools that combined the good parts of visual and text-based tools. One example that came up in the ‘information density’ section was doing maths in a textual format in an otherwise visual tool, which seems to work quite well:
UE4 Blueprints are visual programming, and are done very well. For a lot of things they are excellent. Everything has a very fine structure to it, you can drag off pins and get context aware options, etc. You can also have sub-functions that are their own graph, so it is cleanly separated. I really like them, and use them for a lot of things.
The issue is that when you get into complex logic and number crunching, it quickly becomes unwieldy. It is much easier to represent logic or mathematics in a flat textual format, especially if you are working in something like K. A single keystroke contains much more information than having to click around on options, create blocks, and connect the blocks. Even in a well-designed interface.
Tools have specific purposes and strengths. Use the right tool for the right job. Some kind of hybrid approach works in a lot of use cases. Sometimes visual scripting is great as an embedded DSL; and sometimes you just need all of the great benefits of high-bandwidth keyboard text entry.
Even current text-based environments have some hybrid aspect, as most IDEs support syntax highlighting, autocompletion, code folding etc to get some of the advantages of visualisation.
Visualising the wrong thing
The last comment I’ll quote is sort of ranty but makes a deep point. Most current visual tools only visualise the kind of things (control flow, types) that are already displayed on the screen in a text-based language. It’s a different representation of fundamentally the same thing. But the visualisations we actually want may be very different, and more to do with what the program does than what it looks like on the screen.
‘Visual Programming’ failed (and continues to fail) simply because it is a lie; just because you surround my textual code with boxes and draw arrows showing the ‘flow of execution’ does not make it visual! This core misunderstanding is why all these ‘visual’ tools suck and don’t help anyone do anything practical (read: practical = complex systems).
When I write code, for example a layout algorithm for a set of gui elements, I visually see the data in my head (the gui elements), then I run the algorithm and see the elements ‘move’ into position dependent upon their dock/anchor/margin properties (also taking into account previously docked elements positions, parent element resize delta, etc). This is the visual I need to see on screen! I need to see my real data being manipulated by my algorithms and moving from A to B. I expect with this kind of animation I could easily see when things go wrong naturally, seeing as visual processing happens with no conscious effort.
Instead visual programming thinks I want to see the textual properties of my objects in memory in fancy coloured boxes, which is not the case at all.
I’m not going to try and comment seriously on this, as there’s almost too much to say — it points toward a large number of potential tools and visual paradigms, many of which are speculative or experimental. But it’s useful to end here, as a reminder that the scope of visual programming is not just some boxes with arrows between them.
This post is long enough already, so I’ll keep this short. I collected all these quotes as a sort of exploratory project with no very clear aim in mind, and I’m not yet sure what I’m going to do with it. I probably want to write at least one follow-up post making links back to the Dutilh Novaes and Ong books on text as a technology. Other than that, here are a few early ideas that came to mind as I wrote it:
How much is ‘visual programming’ a natural category? I quickly discovered that commenters had very different ideas of what ‘visual programming’ meant. Some of these are at least partially in tension with each other. For example, drag-n-drop UI editors often allow near-arbitrary placement of UI elements on the screen, using an intuitive visual interface, but are not necessarily very programmatic. On the other hand, node-based editors allow complicated dynamic logic, but are less ‘visual’, reproducing a lot of the conventions of standard text-based programming. Is there a finer-grained classification that would be more useful than the generic ‘visual programming’ label?
Meaning vs fluency. One of the most appealing features of visual tools is that they can make certain inherently visual actions much more intuitive (a colour picker is a very simple example of this). And proponents of visual programming are often motivated by making programming more understandable. At the same time, a language needs to be a fluent medium for writing code quickly. At the fluent stage, it’s common to ignore the semantic meaning of what you’re doing, and rely on unthinkingly executing known patterns of symbol manipulation instead. Designing for transparent meaning vs designing for fluency are not the same thing — Vim is a great example of a tool that is incomprehensible to beginners but excellent for fluent text manipulation. It could be interesting to explore the tension between them.
‘Missing tooling’ deep dives. I’m not personally all that interested in following this up, as it takes me some way from the ‘text as technology’ angle I came in from, but it seems like an obvious one to mention. The ‘missing tooling’ subsections of this post could all be dug into in far more depth. For each one, it would be valuable to compare many existing visual environments, and understand what’s already available and what the limitations are compared to normal text.
Is ‘folk wisdom from internet forums’ worth exploring as a genre of blog post? Finally, here’s a sort of meta question, about the form of the post rather than the content. There’s an extraordinary amount of hard-to-access knowledge locked up in forums like Hacker News. While writing this post I got distracted by a different rabbit hole about Delphi, which somehow led me to another one about Smalltalk, which… well, you know how it goes. I realised that there were many other posts in this genre that could be worth writing. Maybe there should be more of them?
If you have thoughts on these questions, or on anything else in the post, please leave them in the comments!
This is a genre of post I’ve been experimenting with where I pick a topic, set a one hour timer and see what I can find out in that time. Previously: Marx on alienation and the Vygotsky Circle.
I’ve been seeing the term ‘sensemaking’ crop up more and more often. I even went to a workshop with the word in the title last year! I quite like it, and god knows we could all do with making more sense right now, but I’m pretty vague on the details. Are there any nuances of meaning that I’m missing by interpreting it in its everyday sense? I have a feeling that it has a kind of ecological tinge, group sensemaking more than individual sensemaking, but I could be off the mark.
Also, what’s the origin of the term? I get the impression that it’s associated with some part of the internet that’s not too distant from my own corner, but I’m not exactly sure which one. Time to find out…
Sensemaking or sense-making is the process by which people give meaning to their collective experiences. It has been defined as "the ongoing retrospective development of plausible images that rationalize what people are doing" (Weick, Sutcliffe, & Obstfeld, 2005, p. 409). The concept was introduced to organizational studies by Karl E. Weick in the 1970s and has affected both theory and practice.
Karl Edward Weick (born October 31, 1936) is an American organizational theorist who introduced the concepts of "loose coupling", "mindfulness", and "sensemaking" into organizational studies.
And, um, what’s organizational studies?
Organizational studies is "the examination of how individuals construct organizational structures, processes, and practices and how these, in turn, shape social relations and create institutions that ultimately influence people".
OK, something sociology-related. It’s a stub so probably not a huge subfield?
Although he tried several degree programs within the psychology department, the department finally built a degree program specifically for Weick and fellow student Genie Plog called "organizational psychology".
Only quoting this bc Genie Plog is a great name.
So, enactment: ‘certain phenomena are created by being talked about’. Fine.
Loose coupling in Weick’s sense is a term intended to capture the necessary degree of flex between an organization’s internal abstraction of reality, its theory of the world, on the one hand, and the concrete material actuality within which it finally acts, on the other.
Hm that could be interesting but might take me too far off topic.
People try to make sense of organizations, and organizations themselves try to make sense of their environment. In this sense-making, Weick pays attention to questions of ambiguity and uncertainty, known as equivocality in organizational research that adopts information processing theory.
bit vague but the next bit is more concrete:
His contributions to the theory of sensemaking include research papers such as his detailed analysis of the breakdown of sensemaking in the case of the Mann Gulch disaster, in which he defines the notion of a ‘cosmology episode’ – a challenge to assumptions that causes participants to question their own capacity to act.
Mann Gulch was a big firefighting disaster:
As the team approached the fire to begin fighting it, unexpected high winds caused the fire to suddenly expand, cutting off the men’s route and forcing them back uphill. During the next few minutes, a "blow-up" of the fire covered 3,000 acres (1,200 ha) in ten minutes, claiming the lives of 13 firefighters, including 12 of the smokejumpers. Only three of the smokejumpers survived. The fire would continue for five more days before being controlled.
The United States Forest Service drew lessons from the tragedy of the Mann Gulch fire by designing new training techniques and safety measures that developed how the agency approached wildfire suppression. The agency also increased emphasis on fire research and the science of fire behavior.
This is interesting but I’m in danger of tab explosion here. Keep a tab open with the paper and move on. Can’t resist opening the cosmology episode page though:
A cosmology episode is a sudden loss of meaning, followed eventually by a transformative pivot, which creates the conditions for revised meaning.
ooh nice. Weick again:
"Representations of events normally hang together sensibly within the set of assumptions that give them life and constitute a ‘cosmos’ rather than its opposite, a ‘chaos.’ Sudden losses of meaning that can occur when an event is represented electronically in an incomplete, cryptic form are what I call a ‘cosmology episode.’ Representations in the electronic world can become chaotic for at least two reasons: The data in these representations are flawed, and the people who manage those flawed data have limited processing capacity. These two problems interact in a potentially deadly vicious circle."
This is the kind of page that looks like it was written by one enthusiast. But it is pretty interesting. Right, back to Weick.
‘Mindfulness’: this is at a collective, organisational level
The effective adoption of collective mindfulness characteristics by an organization appears to cultivate safer cultures that exhibit improved system outcomes.
I’m not going to look up ‘organizational information theory’, I have a bit of a ‘systems thinking’ allergy and I don’t wanna.
Right, back to sensemaking article. Roots in social psychology. ‘Shifting the focus from organizations as entities to organizing as an activity.’
‘Seven properties of sensemaking’. Ugh I hate these sort of numbered lists but fine.
Identity. ‘who people think they are in their context shapes what they enact and how they interpret events’
Retrospection. ‘the point of retrospection in time affects what people notice (Dunford & Jones, 2000), thus attention and interruptions to that attention are highly relevant to the process’.
Enaction. ‘As people speak, and build narrative accounts, it helps them understand what they think, organize their experiences and control and predict events’
Social activity. ‘plausible stories are preserved, retained or shared’.
Ongoing. ‘Individuals simultaneously shape and react to the environments they face… As Weick argued, "The basic idea of sensemaking is that reality is an ongoing accomplishment that emerges from efforts to create order and make retrospective sense of what occurs"’
Extract cues from the context.
Plausibility over accuracy.
The sort of gestalt I’m getting is that it focusses on social rather than individual thinking, and action-oriented contextual in-the-thick-of-it doing rather than abstract planning ahead. Some similar terminology to ethnomethodology I think? e.g. accountability.
Ah yeah: ‘Sensemaking scholars are less interested in the intricacies of planning than in the details of action’
The sensemaking approach is often used to provide insight into factors that surface as organizations address either uncertain or ambiguous situations (Weick 1988, 1993; Weick et al., 2005). Beginning in the 1980s with an influential re-analysis of the Bhopal disaster, Weick’s name has come to be associated with the study of the situated sensemaking that influences the outcomes of disasters (Weick 1993).
‘Categories and related concepts’:
The categories of sensemaking included: constituent-minded, cultural, ecological, environmental, future-oriented, intercultural, interpersonal, market, political, prosocial, prospective, and resourceful. The sensemaking-related concepts included: sensebreaking, sensedemanding, sense-exchanging, sensegiving, sensehiding, and sense specification.
Haha OK it’s this sort of ‘fluidity soup’ that I have an allergy to. Too many of these buzzwords together. ‘Systems thinking’ is just a warning sign.
‘Other applications’: military stuff. Makes sense, lots of uncertainty and ambiguity there. Patient safety (looks like another random paragraph added by an enthusiast).
There’s a big eclectic ‘see also’ list. None of those are jumping out as the obvious next follow. Back to google. What I really want to know is why people are using this word now in some internet subcultures. Might be quite youtube centred? In which case there is no hope of tracking it down in one speedrun.
Oh yeah let’s look at google images:
Looks like businessy death by powerpoint contexts, not so helpful.
31 minutes left. Shit this goes quick!!
Google is giving me lots of video links. One is Daniel Schmachtenberger, ‘The War on Sensemaking’. Maybe this is the subcultural version I’ve been seeing? His name is familiar. Ok google ‘daniel schmachtenberger sensemaking’. Rebel Wisdom. Yep I’ve vaguely heard of that.
There is a war going on in our current information ecosystem. It is a war of propaganda, emotional manipulation, blatant or unconscious lies. It is nothing new, but is reaching a new intensity as our technology evolves. The result is that it has become harder and harder to make sense of the world, with potentially fatal consequences. If we can’t make sense of the world, neither can we make good decisions or meet the many challenges we face as a species.
Yes this is the sort of context I was imagining:
In War on Sensemaking, futurist and visionary Daniel Schmachtenberger outlines in forensic detail the dynamics at play in this new information ecology — one in which we are all subsumed. He explores how companies, government, and media take advantage of our distracted and vulnerable state, and how we as individuals can develop the discernment and sensemaking skills necessary to navigate this new reality. Schmachtenberger has an admirable ability to diagnose this issue, while offering epistemological and practical ways to help repair the dark labyrinth of a broken information ecology.
It’d be nice to trace the link from Weick to this.
Some stuff about zero sum games and bullshit. Mentions Vervaeke.
Schmachtenberger also makes the point that in order to become a good sensemaker we need ‘stressors’ — demands that push our mind, body, and heart beyond comfort, and beyond the received wisdom we have inherited. It is not enough to passively consume information: we first need to engage actively with the information ecology we live in and start being aware of how we respond to it, where it is coming from, and why it is being used.
Getting the sense that ‘information ecology’ is a key phrase round here.
Oh yeah ‘Game B’! I’ve heard that phrase around. Some more names: ‘Jordan Hall, Jim Rutt, Bonnita Roy’.
‘Sovereignty’: ‘become responsible for our own shit’… ‘A real social, ‘kitchen sink level’ of reality must be cultivated to avoid the dangers of too much abstraction, individualism, and idealism.’ Seems like a good idea.
‘Rule Omega’. This one is new to me:
Rule Omega is simple, but often hard to put into practice. The idea is that every message contains some signal and some noise, and we can train ourselves to distinguish truth and nonsense — to separate the wheat from the chaff. If we disapprove of 95% of a distasteful political rant, for instance, we could train ourselves to hear the 5% that is true.
Rule Omega means learning to recognise the signal within the noise. This requires a certain attunement and generosity towards the other, especially those who think differently than we do. And Rule Omega can only be applied to those who are willing to engage in a different game, and work with each other in good faith.
Also seems like a Good Thing. Then some stuff about listening to people outside your bubble. Probably a link here to ‘memetic tribes’ type people.
This is a well written article, glad I picked something good.
‘Information war’ and shadow stuff:
Certainly there are bad actors and conspiracies to harm us, but there is also the ‘shadow within’. The shadow is the unacknowledged part we play in the destruction of the commons and in the never-ending vicious cycle of narrative war. We need to pay attention to the subtle lies we tell ourselves, as much as the ‘big’ lies that society tells us all the time. The trouble is: we can’t help being involved in destructive game theory logic, to a greater or lesser degree.
‘Anti-rivalrous systems’. Do stuff that increases value for others as well as yourself. Connection to ‘anti-rivalrous products’ in economics.
‘Information immune system’. Yeah this is nice! It sort of somehow reminds me of the old skeptics movement in its attempts to help people escape nonsense, but rooted in a warmer and more helpful set of background ideas, and with less tribal outgroup bashing. Everything here sounds good and if it helps people out of ideology prisons I’m all for it. Still kind of curious about intellectual underpinnings… like is there a straight line from Weick to this or did they just borrow a resonant phrase?
‘The dangers of concepts’. Some self-awareness that these ideas can be used to create more bullshit and misinformation themselves.
As such it can be dangerous to outsource our sensemaking to concepts — instead we need to embody them in our words and actions. Wrestling with the snake of self-deception and illusion and trying to build a better world in this way is a tough game. But it is the only game worth playing.
Games seem to be a recurring motif. Maybe Finite and Infinite Games is another influence.
OK 13 minutes left, what to do? Maybe trace out the link? google ‘schmachtenberger weick’. Not finding much. I’m now on some site called Conversational Leadership which seems to be connected to this scene somehow. Ugh not sure what to do. Back to plain old google ‘sensemaking’ search.
Let’s try this article by Laura McNamara, an organizational anthropologist. Nice job title! Yeah her background looks really interesting:
Principal Member of Technical Staff at Sandia National Laboratories. She has spent her career partnering with computer scientists, software engineers, physicists, human factors experts, I/O psychologists, and analysts of all sorts.
OK maybe she is trying to bridge the gap between old and new usages:
Sensemaking is a term that gets thrown around a lot without much consideration about where the concept came from or what it really means. If sensemaking theory is democratizing, that’s a good thing.
6 minutes left so I won’t get through all of this. Pick some interesting bits.
One of my favorite books about sensemaking is Karl Weick’s, Sensemaking in Organizations. I owe a debt of thanks to the nuclear engineer who suggested I read it. This was back in 2001, when I was at Los Alamos National Laboratory (LANL). I’d just finished my dissertation and was starting a postdoctoral position in the statistics group, and word got around that the laboratories had an anthropologist on staff. My nuclear engineer friend was working on a project examining how management changes were impacting team dynamics in one of LANL’s radiochemistry bench laboratories. He called me asking if I had time to work on the project with him, and he asked if I knew much about “sensemaking.” Apparently, his officemate had recently married a qualitative evaluation researcher, who suggested that both of these LANL engineers take the time to read Karl Weick’s book Sensemaking in Organizations.
My nuclear engineer colleague thought it was the most brilliant thing he’d ever read and was shocked, SHOCKED, that I’d never heard of sensemaking or Karl Weick. I muttered something about anthropologists not always being literate in organizational theory, got off the phone, and immediately logged onto Amazon and ordered it.
… a breathtakingly broad array of ideas – Emily Dickinson, Anthony Giddens, Pablo Neruda, Edmund Leach…
‘Recipe for sensemaking:’
Chapter Two of Sensemaking in Organizations contains what is perhaps Weick’s most cited sentence, the recipe for sensemaking: “How can I know what I think until I see what I say?”
And this from the intro paragraph, could be an interesting reference:
in his gorgeous essay Social Things (which you should read if you haven’t already), Charles Lemert reminds us that social science articulates our native social intelligence through instruments of theory, concepts, methods, language, discourse, texts. Really good sociology and anthropology sharpen that intelligence. They’re powerful because they enhance our understanding of what it means to be human, and they really should belong to everyone.
Something about wiki platforms for knowledge sharing:
For example, back in 2008, my colleague Nancy Dixon and I did a brief study—just a few weeks—examining how intelligence analysts were responding to the introduction of Intellipedia, a wiki platform intended to promote knowledge exchange and cross-domain collaboration across the United States Intelligence community.
DING! Time’s up.
That actually went really well! Favourite speedrun so far, felt like I found out a lot. Most of the references I ended up on were really well-written and clear this time, no wading through rubbish.
I’m still curious to trace the link between Weick and the recent subculture. Also I might read more of the disaster stuff, and read that last McNamara article more carefully. Lots to look into! If anyone has any other suggestions, please leave a comment 🙂
I did a ‘speedrun’ post a couple of months ago where I set a one hour timer and tried to find out as much as I could about Marx’s theory of alienation. That turned out to be pretty fun, so I’m going to try it again with another topic where I have about an hour’s worth of curiosity.
I saw a wikipedia link to something called ‘the Vygotsky Circle’ a while back. I didn’t click the link (don’t want to spoil the fun!) but from the hoverover it looks like that includes Vygotsky, Luria and… some other Russian psychologists, I guess? I’d heard of those two, but I only have the faintest idea of what they did. Here’s the entirety of my current knowledge:
Vygotsky wrote a book called Thought and Language. Something about internalisation?
Luria’s the one who went around pestering peasants with questions about whether bears in the Arctic are white. And presumably a load of other stuff… he pops up in pop books with some frequency. E.g. I think he did a study of someone with an extraordinary memory?
That’s about it, so plenty of room to learn more. And also anything sounds about ten times more interesting if it’s a Circle. Suddenly it’s an intellectual movement, not a disparate bunch of nerds. So… let’s give this a go.
The Vygotsky Circle (also known as Vygotsky–Luria Circle) was an influential informal network of psychologists, educationalists, medical specialists, physiologists, and neuroscientists, associated with Lev Vygotsky (1896–1934) and Alexander Luria (1902–1977), active from the 1920s to the early 1940s in the Soviet Union (Moscow, Leningrad and Kharkiv).
So who’s in it?
The Circle included altogether around three dozen individuals at different periods, including Leonid Sakharov, Boris Varshava, Nikolai Bernstein, Solomon Gellerstein, Mark Lebedinsky, Leonid Zankov, Aleksei N. Leontiev, Alexander Zaporozhets, Daniil Elkonin, Lydia Bozhovich, Bluma Zeigarnik, Filipp Bassin, and many others. German-American psychologist Kurt Lewin and Russian film director and art theorist Sergei Eisenstein are also mentioned as the “peripheral members” of the Circle.
OK that’s a lot of people! Hm this is a very short article. Maybe the Russian one is longer? Nope. So this is the entirety of the history of the Circle given:
The Vygotsky Circle was formed around 1924 in Moscow after Vygotsky moved there from the provincial town of Gomel in Belarus. There at the Institute of Psychology he met graduate students Zankov, Solov’ev, Sakharov, and Varshava, as well as future collaborator Aleksander Luria. The group grew incrementally and operated in Moscow, Kharkiv, and Leningrad, all in the Soviet Union. Between the beginning of World War II (1 September 1939) and the start of the Great Patriotic War (22 June 1941), several centers of post-Vygotskian research were formed by Luria, Leontiev, Zankov, and Elkonin. The Circle ended, however, when the Soviet Union was invaded by Germany at the start of the Great Patriotic War.
However, by the end of the 1930s a new center was formed around 1939 under the leadership of Luria and Leontiev. In the post-war period this developed into the so-called “School of Vygotsky-Leontiev-Luria”. Recent studies show that this “school” never existed as such.
There are two problems related to the Vygotsky Circle. The first is the historical record of Soviet psychology, with its innumerable gaps and prejudices. The second is the almost exclusive focus on the person of Lev Vygotsky himself, to the extent that the scientific contributions of other notable figures have been considerably downplayed or forgotten.
This is all a bit more nebulous than I was hoping for. Lots of references and sources at least. May end up just covering Vygotsky and Luria.
OK Vygotsky wiki article. What did he do?
He is known for his concept of the zone of proximal development (ZPD): the distance between what a student (apprentice, new employee, etc.) can do on their own, and what they can accomplish with the support of someone more knowledgeable about the activity. Vygotsky saw the ZPD as a measure of skills that are in the process of maturing, as supplement to measures of development that only look at a learner’s independent ability.
Also influential are his works on the relationship between language and thought, the development of language, and a general theory of development through actions and relationships in a socio-cultural environment.
OK here’s the internalisation thing I vaguely remembered hearing about:
… the majority of his work involved the study of infant and child behavior, as well as the development of language acquisition (such as the importance of pointing and inner speech) …
Influenced by Piaget, but differed on inner speech:
Piaget asserted that egocentric speech in children “dissolved away” as they matured, while Vygotsky maintained that egocentric speech became internalized, what we now call “inner speech”.
Not sure I’ve picked a good topic this time, pulls in way too many directions so this is going to be very shallow and skip around. And ofc there’s lots of confusing turbulent historical background, and all these pages refer to various controversies of interpretation 😦 Skip to Luria, can always come back:
Alexander Romanovich Luria (Russian: Алекса́ндр Рома́нович Лу́рия, IPA: [ˈlurʲɪjə]; 16 July 1902 – 14 August 1977) was a Russian neuropsychologist, often credited as a father of modern neuropsychological assessment. He developed an extensive and original battery of neuropsychological tests during his clinical work with brain-injured victims of World War II, which are still used in various forms. He made an in-depth analysis of the functioning of various brain regions and integrative processes of the brain in general. Luria’s magnum opus, Higher Cortical Functions in Man (1962), is a much-used psychological textbook which has been translated into many languages and which he supplemented with The Working Brain in 1973.
… became famous for his studies of low-educated populations in the south of the Soviet Union showing that they use different categorization than the educated world (determined by functionality of their tools).
OK so this was early on.
Some biographical stuff. Born in Kazan, studied there, then moved to Moscow where he met Vygotsky. And others:
During the 1920s Luria also met a large number of scholars, including Aleksei N. Leontiev, Mark Lebedinsky, Alexander Zaporozhets, Bluma Zeigarnik, many of whom would remain his lifelong colleagues.
Leontiev’s turned up a few times, open in another tab.
OK the phrase ‘cultural-historical psychology’ has come up. Open the wikipedia page:
Cultural-historical psychology is a branch of avant-garde and futuristic psychological theory and practice of the “science of Superman” associated with Lev Vygotsky and Alexander Luria and their Circle, who initiated it in the mid-1920s–1930s. The phrase “cultural-historical psychology” never occurs in the writings of Vygotsky, and was subsequently ascribed to him by his critics and followers alike, yet it is under this title that this intellectual movement is now widely known.
This all sounds like a confusing mess where I’d need to learn way more background than I’m going to pick up in an hour. Back to Luria. Here’s the peasant-bothering stuff:
The 1930s were significant to Luria because his studies of indigenous people opened the field of multiculturalism to his general interests. This interest would be revived in the later twentieth century by a variety of scholars and researchers who began studying and defending indigenous peoples throughout the world. Luria’s work continued in this field with expeditions to Central Asia. Under the supervision of Vygotsky, Luria investigated various psychological changes (including perception, problem solving, and memory) that take place as a result of cultural development of undereducated minorities. In this regard he has been credited with a major contribution to the study of orality.
That last bit has a footnote to Ong’s Orality and Literacy. Another place I’ve seen the name before.
In 1933, Luria married Lana P. Lipchina, a well-known specialist in microbiology with a doctorate in the biological sciences.
Then studied aphasia:
In his early neuropsychological work in the end of the 1930s as well as throughout his postwar academic life he focused on the study of aphasia, focusing on the relation between language, thought, and cortical functions, particularly on the development of compensatory functions for aphasia.
This must be another pop-science topic where I’ve come across him before. Hm where’s the memory bit? Oh I missed it:
Apart from his work with Vygotsky, Luria is widely known for two extraordinary psychological case studies: The Mind of a Mnemonist, about Solomon Shereshevsky, who had highly advanced memory; and The Man with a Shattered World, about a man with traumatic brain injury.
Ah this turns out to be late on in his career:
Among his late writings are also two extended case studies directed toward the popular press and a general readership, in which he presented some of the results of major advances in the field of clinical neuropsychology. These two books are among his most popular writings. According to Oliver Sacks, in these works “science became poetry”.
In The Mind of a Mnemonist (1968), Luria studied Solomon Shereshevskii, a Russian journalist with a seemingly unlimited memory, sometimes referred to in contemporary literature as “flashbulb” memory, in part due to his fivefold synesthesia.
In The Man with the Shattered World (1971) he documented the recovery under his treatment of the soldier Lev Zasetsky, who had suffered a brain wound in World War II.
OK 27 minutes left. I’ll look up some of the other characters. Leontiev first. Apparently he was ‘a Soviet developmental psychologist, philosopher and the founder of activity theory.’ What’s activity theory?
Activity theory (AT; Russian: Теория деятельности) is an umbrella term for a line of eclectic social sciences theories and research with its roots in the Soviet psychological activity theory pioneered by Sergei Rubinstein in 1930s. At a later time it was advocated for and popularized by Alexei Leont’ev. Some of the traces of the theory in its inception can also be found in a few works of Lev Vygotsky. These scholars sought to understand human activities as systemic and socially situated phenomena and to go beyond paradigms of reflexology (the teaching of Vladimir Bekhterev and his followers) and classical conditioning (the teaching of Ivan Pavlov and his school), psychoanalysis and behaviorism.
So maybe he founded it or maybe he just advocated for it. This is all a bit of a mess. But, ok, it’s an umbrella term for moving past behaviourism.
One of the strengths of AT is that it bridges the gap between the individual subject and the social reality—it studies both through the mediating activity. The unit of analysis in AT is the concept of object-oriented, collective and culturally mediated human activity, or activity system.
This all looks sort of interesting, but a bit vague, and will probably take me down some other rabbithole. Back to Leontiev.
After Vygotsky’s early death, Leont’ev became the leader of the research group nowadays known as the Kharkov School of Psychology and extended Vygotsky’s research framework in significantly new ways.
Oh shit completely missed the whole thing about Vygotsky’s early death. Back to him… died aged 37! Of tuberculosis. Mostly became famous after his death, and through the influence of his students. Ah this bit on his influence might be useful. Soviet influence first:
In the Soviet Union, the work of the group of Vygotsky’s students known as the Vygotsky Circle was responsible for Vygotsky’s scientific legacy. The members of the group subsequently laid a foundation for Vygotskian psychology’s systematic development in such diverse fields as the psychology of memory (P. Zinchenko), perception, sensation, and movement (Zaporozhets, Asnin, A. N. Leont’ev), personality (Lidiya Bozhovich, Asnin, A. N. Leont’ev), will and volition (Zaporozhets, A. N. Leont’ev, P. Zinchenko, L. Bozhovich, Asnin), psychology of play (G. D. Lukov, Daniil El’konin) and psychology of learning (P. Zinchenko, L. Bozhovich, D. El’konin), as well as the theory of step-by-step formation of mental actions (Pyotr Gal’perin), general psychological activity theory (A. N. Leont’ev) and psychology of action (Zaporozhets).
That at least says something about what all of those names did. Open Zinchenko tab as first.
Then North American influence:
In 1962 a translation of his posthumous 1934 book, Thinking and Speech, published with the title, Thought and Language, did not seem to change the situation considerably. It was only after an eclectic compilation of partly rephrased and partly translated works of Vygotsky and his collaborators, published in 1978 under Vygotsky’s name as Mind in Society, that the Vygotsky boom started in the West: originally, in North America, and later, following the North American example, spread to other regions of the world. This version of Vygotskian science is typically associated with the names of its chief proponents Michael Cole, James Wertsch, their associates and followers, and is relatively well known under the names of “cultural-historical activity theory” (aka CHAT) or “activity theory”. Scaffolding, a concept introduced by Wood, Bruner, and Ross in 1976, is somewhat related to the idea of ZPD, although Vygotsky never used the term.
Ah so Thought and Language was posthumous.
Then a big pile of controversy about how his work was interpreted. Now we’re getting headings like ‘Revisionist movement in Vygotsky Studies’, think I’ll bail out now. 16 minutes left.
OK let’s try Zinchenko page.
The main theme of Zinchenko’s research is involuntary memory, studied from the perspective of the activity approach in psychology. In a series of studies, Zinchenko demonstrated that recall of the material to be remembered strongly depends on the kind of activity directed on the material, the motivation to perform the activity, the level of interest in the material and the degree of involvement in the activity. Thus, he showed that following the task of sorting material in experimental settings, human subjects demonstrate a better involuntary recall rate than in the task of voluntary material memorization.
This influenced Leontiev and activity theory. That’s about all the detail there is. What to do next? Look up some of the other people I guess. Try a few, they’re all very short articles, give up with that.
Vygotsky’s closely reasoned, highly readable analysis of the nature of verbal thought as based on word meaning marks a significant step forward in the growing effort to understand cognitive processes. Speech is, he argues, social in origins. It is learned from others and, at first, used entirely for affective and social functions. Only with time does it come to have self-directive properties that eventually result in internalized verbal thought. To Vygotsky, “a word is a microcosm of human consciousness.”
OK, yeah that does sound interesting.
Not finding great sources. 8 minutes left. Zone of proximal development section of Vygotsky’s page:
“Zone of Proximal Development” (ZPD) is a term Vygotsky used to characterize an individual’s mental development. He originally defined the ZPD as “the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers.” He used the example of two children in school who originally could solve problems at an eight-year-old developmental level (that is, typical for children who were age 8). After each child received assistance from an adult, one was able to perform at a nine-year-old level and one was able to perform at a twelve-year-old level. He said “This difference between twelve and eight, or between nine and eight, is what we call the zone of proximal development.” He further said that the ZPD “defines those functions that have not yet matured but are in the process of maturation, functions that will mature tomorrow but are currently in an embryonic state.” The zone is bracketed by the learner’s current ability and the ability they can achieve with the aid of an instructor of some capacity.
ZPD page itself:
Vygotsky spent a lot of time studying the impact of school instruction on children and noted that children grasp language concepts quite naturally, but that math and writing did not come as naturally. Essentially, he concluded that because these concepts were taught in school settings with unnecessary assessments, they were of more difficulty to learners. Piaget believed that there was a clear distinction between development and teaching. He said that development is a spontaneous process that is initiated and completed by the children, stemming from their own efforts. Piaget was a proponent of independent thinking and critical of the standard teacher-led instruction that was common practice in schools.
… He believed that children would not advance very far if they were left to discover everything on their own. It’s crucial for a child’s development that they are able to interact with more knowledgeable others. They would not be able to expand on what they know if this wasn’t possible.
OK 3 minutes left. Let’s wildly skip between tabs learning absolutely nothing. Hm maybe this would have been interesting? ‘Vygotsky circle as a personal network of scholars: restoring connections between people and ideas’.
Ding! Didn’t get much past reading the title.
Well that didn’t work as well as the alienation one. Sprawling topic, and I wasn’t very clear on what I wanted to get out of it. History of the Circle itself or just some random facts about what individual people in it did? I mostly ended up with the second one, and not much insight into what held it together conceptually, beyond some vague idea about ‘going beyond behaviourism’/’looking at general background of human activity, not just immediate task’.
Still, I guess I know a bit more about these people than I did going in, and would be able to orient more quickly if I wanted to find out anything specific.
Everybody hates neoliberalism, it’s the law. But what is it?
This is probably the topic I’m most ignorant about and ill-prepared-for on the whole list, and I wasn’t going to do it. But it’s good prep for the bullshit jobs post, which was a popular choice, so I’m going to try. I’m going to be trying to articulate my current thoughts, rather than attempting to say anything original. And also I’m not really talking about neoliberalism as a coherent ideology or movement. (I think I’d have to do another speedrun just to have a chance of saying something sensible.) More like “neoliberalism”, scarequoted, as a sort of diffuse cloud of associations that the term brings to mind. Here’s my cloud (very UK-centric):
Big amorphous companies with bland generic names like Serco or Interserve, providing an incoherent mix of services to the public sector, with no obvious specialism beyond winning government contracts
Public private partnerships
Metrics! Lots of metrics!
Incuriosity about specifics. E.g. management by pushing to make a number go up, rather than any deep engagement with the particulars of the specific problem
Food got really good over this period. I think this actually might be relevant and not just something that happened at the same time
Low cost short-haul airlines becoming a big thing (in Europe anyway – don’t really understand how widespread this is)
Thinking you’re on a public right of way but actually it’s a private street owned by some shopping centre or w/e. With private security and lots of CCTV
Post-industrial harbourside developments with old warehouses converted into a Giraffe and a Slug and Lettuce
A caricatured version of Tony Blair’s disembodied head is floating over the top of this whole scene like a barrage balloon. I don’t think this is important but I thought you’d like to know
I’ve had this topic vaguely in mind since I read a blog post by Timothy Burke, a professor of modern history, a while back. The post itself has a standard offhand ‘boo neoliberalism’ side remark, but then when challenged in the comments he backs it up with an excellent, insightful sketch of what he means. (Maybe this post should just have been a copy of this comment, instead of my ramblings.)
I’m sensitive to the complaint that “neoliberalism” is a buzz word that can mean almost everything (usually something the speaker disapproves of).
A full fleshing out is more than I can provide, though. But here’s some sketches of what I have in mind:
1) The Reagan-Thatcher assault on “government” and aligned conceptions of “the public”–these were not merely attempts to produce new efficiencies in government, but a broad, sustained philosophical rejection of the idea that government can be a major way to align values and outcomes, to tackle social problems, to restrain or dampen the power of the market to damage existing communities. “The public” is not the same, but it was an additional target: the notion that citizens have shared or collective responsibilities, that there are resources and domains which should not be owned privately but instead open to and shared by all, etc. That’s led to a conception of citizenship or social identity that is entirely individualized, privatized, self-centered, self-affirming, and which accepts no responsibility to shared truths, facts, or mechanisms of dispute and deliberation.
2) The idea of comprehensively measuring, assessing, quantifying performance in numerous domains; insisting that values which cannot be measured or quantified are of no worth or usefulness; and constantly demanding incremental improvements from all individuals and organizations within these created metrics. This really began to take off in the 1990s and is now widespread through numerous private and public institutions.
3) The simultaneous stripping bare of ordinary people to numerous systems of surveillance, measurement, disclosure, monitoring, maintenance (by both the state and private entities) while building more and more barriers to transparency protecting the powerful and their most important private and public activities. I think especially notable since the late 1990s and the rise of digital culture. A loss of workplace and civil protections for most people (especially through de-unionization) at the same time that the powerful have become increasingly untouchable and unaccountable for a variety of reasons.
4) Nearly unrestrained global mobility for capital coupled with strong restrictions on labor (both in terms of mobility and in terms of protection). Dramatically increased income inequality. Massive “shadow economies” involving illegal or unsanctioned but nevertheless highly structured movements of money, people, and commodities. Really became visible by the early 1990s.
A lot of the features in my association cloud match pretty well: metrics, surveillance, privatisation. Didn’t really pick up much from point 4. I think 2 is the one which interests me most. My read on the metric stuff is that there’s a genuinely useful tool here that really does work within its domain of application but is disastrous when applied widely to everything. The tool goes something like:
let go of a need for top-down control
fragment the system into lots of little bits, connected over an interface of numbers (money, performance metrics, whatever)
try to improve the system by hammering on the little bits in ways such that the numbers go in the direction you want. This could be through market forces, or through metrics-driven performance improvements.
If your problem is amenable to this kind of breakdown, I think it actually works pretty well. This is why I think ‘food got good’ is actually relevant and not a coincidence. It fits this playbook quite nicely:
It’s a known problem. People have been selling food for a long time and have some well-tested ideas about how to cook, prep, order supplies, etc. There’s innovation on top of that, but it’s not some esoteric new research field.
Each individual purchase (of a meal, cake, w/e) is small and low-value. So the domain is naturally fragmented into lots of tiny bits.
This also means that lots of people can afford to be customers, increasing the number of tiny bits
Fast feedback. People know whether they like a croissant after minutes, not years.
Relevant feedback. People just tell you whether they like your croissants, which is the thing you care about. You don’t need to go search for some convoluted proxy measure of whether they like your croissants.
Lowish barriers to entry. Not especially capital-intensive to start a cafe or market stall compared with most businesses.
Lowish regulations. There’s rules for food safety, but it’s not like building planes or something.
No lock-in for customers. You can go to the donburi stall today and the pie and mash stall tomorrow.
All of this means that the interface layer of numbers can be an actual market, rather than some faked-up internal market of metrics to optimise. And it’s a pretty open market that most people can access in some form. People don’t go out and buy trains, but they do go out and buy sandwiches.
There’s another very important, less wonky factor that breaks you out of the dry break-it-into-numbers method I listed above. You ‘get to cheat’ by bringing in emotional energy that ‘comes along for free’. People actually like food! They start cafes because they want to, even when it’s a terrible business idea. They already intrinsically give a shit about the problem, and markets are a thin interface layer over the top rather than most of the thing. This isn’t going to carry over to, say, airport security or detergent manufacturing.
As you get further away from an idealised row of spherical burger vans things get more complicated and ambiguous. Low cost airlines are a good example. These actually did a good job of fragmenting the domain into lots of bits that were lumped together by the older incumbents. And it’s worked pretty well, by bringing down prices to the point where far more people can afford to travel. (Of course there’s also the climate change considerations. If you ignore those it seems like a very obvious Good Thing, once you include them it’s somewhat murkier I suppose.)
The price you pay is that the experience gets subtly degraded at many points by the optimisation, and in aggregate these tend to produce a very unsubtle crappiness. For a start there’s the simple overhead of buying the fragmented bits separately. You have to click through many screens of a clunky web application and decide individually about whether you want food, whether you want to choose your own seat, whether you want priority queuing, etc. All the things you’d just have got as default on the old, expensive package deal. You also have to say no to the annoying ads trying to upsell you on various deals on hotels, car rentals and travel insurance.
Then there are all the ways the flight itself becomes crappier. It’s at a crap airport a long way from the city you want to get to, with crappy transport links. The flight is a cheap slot at some crappy time of the early morning. The plane is old and crappily fitted out. You’re having a crappy time lugging around the absolute maximum amount of hand luggage possible to avoid the extra hold luggage fee. (You’ve got pretty good at optimising numbers yourself.)
This is often still worth it, but can easily tip into just being plain Too Crappy. I’ve definitely over-optimised flight booking for cheapness and regretted it (normally when my alarm goes off at three in the morning).
Low cost airlines seem basically like a good idea, on balance. But then there are the true disasters, the domains that have none of the natural features that the neoliberal playbook works on. A good example is early-stage, exploratory academic research. I’ve spent too long on this post already. You can fill in the depressing details yourself.
I’ve got some half-written drafts for topics on the original list which I want to finish soon, but for now I seem to be doing better by going off-list and rambling about whatever’s in my head. Today it’s visual imagery.
I’ve ended up reading a bunch of things vaguely connected with mnemonics in the last couple of weeks. I’m currently very bad at concentrating on books properly, but I’m still reading at a similar rate, so everything is in this weird quarter-read state. Anyway here’s the list of things I’ve started:
Moonwalking with Einstein by Joshua Foer. Pop book about learning to compete in memory championships. This is good and an easy read, so there is some chance I’ll actually finish it.
Orality and Literacy by Walter Ong. One of the references I followed up. About oral cultures in general but there is stuff on memorisation (e.g. repetitive passages in Homer being designed for easy memorisation when writing it down is not an option)
These two interesting posts by AllAmericanBreakfast on Less Wrong this week about experimenting with memory palaces to learn information for a chemistry exam.
Those last two posts are interesting to me because they’re written by someone in the very early stages of fiddling around with this stuff who doesn’t consider themself to naturally have a good visual imagination. I’d put myself in the same category, but probably worse. Actually I’m really confused about what ‘visual imagery’ even is. I have some sort of – stuff? – that has a sort of visual component, maybe mixed in with some spatial/proprioceptive/tactile stuff. Is that what people mean by ‘visual imagery’? I guess so? It’s very transitory and hard to pin down in my case, though, and I don’t feel like I make a lot of use out of it. The idea of using these crappy materials to make something elaborate like a memory palace sounds like a lot of work. But maybe it would work better if I spent more time on it.
The thing that jumped out of the first post for me was this bit:
I close my eyes and allow myself to picture nothing, or whatever random nonsense comes to mind. No attempt to control.
Then I invite the concept of a room into mind. I don’t picture it clearly. There’s a vague sense, though, of imagining a space of some kind. I can vaguely see fleeting shadowy walls. I don’t need to get everything crystal clear, though.
This sounded a lot more fun and approachable to me than crafting a specific memory palace to memorise specific things. I didn’t even get to the point of ‘inviting the concept of a room in’, just allowed any old stuff to come up, and that worked ok for me. I’m not sure how much of this ‘imagery’ was particularly visual, but I did find lots of detailed things floating into my head. It seems to work better if I keep a light touch and only allow some very gentle curiosity-based steering of the scene.
Here’s the one I found really surprising and cool. I was imagining an intricately carved little jade tortoise for some reason, and put some mild curiosity into what its eyes were made of. And I discovered that they were tiny yellow plastic fake gemstones that were weirdly familiar. So I asked where I recognised them from (this was quite heavy-handed questioning that dragged me out of the imagery). And it turns out that they were from a broken fish brooch I had as a kid. I prised all the fake stones off with a knife at some point to use for some project I don’t remember.
I haven’t thought about that brooch in, what, 20 years? But I remember an impressive amount of detail about it! I’ve tried to draw it above. Some details like the fins are a best guess, but the blue, green and yellow stones in diagonal stripes are definitely right. It’s interesting that this memory is still sitting there and can be brought up by the right prompt.
I think I’ll play with this exercise a bit more and see what other rubbish I can dredge up.
I was inspired by John Nerst’s recent post to make a list of my own fundamental background assumptions. What I ended up producing was a bit of an odd mixed bag of disparate stuff. Some are something like factual beliefs, some of them are more like underlying emotional attitudes and dispositions to act in various ways.
I’m not trying to ‘hit bedrock’ in any sense, I realise that’s not a sensible goal. I’m just trying to fish out a few things that are fundamental enough to cause obvious differences in background with other people. John Nerst put it well on Twitter:
It’s not true that beliefs are derived from fundamental axioms, but nor is it true that they’re a bean bag where nothing is downstream from anything else.
I’ve mainly gone for assumptions where I tend to differ with the people I hang around with online and in person, which skews heavily towards the physics/maths/programming crowd. This means there’s a pretty strong ‘narcissism of small differences’ effect going on here, and if I actually had to spend a lot of time with normal people I’d probably run screaming back to STEM nerd land pretty fast and stop caring about these minor nitpicks.
Also I only came up with twenty, not thirty, because I am lazy.
I’m really resistant to having to ‘actually think about things’, in the sense of applying any sort of mental effort that feels temporarily unpleasant. The more I introspect as I go about problem solving, the more I notice this. For example, I was mucking around in Inkscape recently and wanted to check that a square was 16 units long, and I caught myself producing the following image:
Apparently counting to 16 was an unacceptable level of cognitive strain, so to avoid it I made the two 4 by 4 squares (small enough to immediately see their size) and then arranged them in a pattern that made the length of the big square obvious. This was slower but didn’t feel like work at any point. No thinking required!
This must have a whole bunch of downstream effects, but an obvious one is a weakness for ‘intuitive’, flash-of-insight-based demonstrations, mixed with a corresponding laziness about actually doing the work to get them. (Slowly improving this.)
I picked up some Bad Ideas From Dead Germans at an impressionable age (mostly from Kant). I think this was mostly a good thing, as it saved me from some Bad Ideas From Dead Positivists that physics people often succumb to.
I didn’t read much phenomenology as such, but there’s some mood in the spirit of this Whitehead quote that always came naturally to me:
For natural philosophy everything perceived is in nature. We may not pick and choose. For us the red glow of the sunset should be as much part of nature as are the molecules and electric waves by which men of science would explain the phenomenon.
By this I mean some kind of vague understanding that we need to think about perceptual questions as well as ‘physics stuff’. Lots of hours as an undergrad on Wikipedia spent reading about human colour perception and lifeworlds and mantis shrimp eyes and so on.
One weird place where this came out: in my first year of university maths I had those intro analysis classes where you prove a lot of boring facts about open sets and closed sets. I just got frustrated, because it seemed to be taught in the same ‘here are some facts about the world’ style that, say, classical mechanics was taught in, but I never managed to convince myself that the difference related to something ‘out in the world’ rather than some deficiency of our cognitive apparatus. ‘I’m sure this would make a good course in the psychology department, but why do I have to learn it?’
This isn’t just Bad Ideas From Dead Germans, because I had it before I read Kant.
Same thing for the interminable arguments in physics about whether reality is ‘really’ continuous or discrete at a fundamental level. I still don’t see the value in putting that distinction out in the physical world – surely that’s some sort of weird cognitive bug, right?
I think after hashing this out for a while people have settled on ‘decoupling’ vs ‘contextualising’ as the two labels. Anyway it’s probably apparent that I have more time for the contextualising side than a lot of STEM people.
Outside of dead Germans, my biggest unusual pervasive influence is probably the New Critics: Eliot, Empson and I.A. Richards especially, and a bit of Leavis. They occupy an area of intellectual territory that mostly seems to be empty now (that or I don’t know where to find it). They’re strong contextualisers with a focus on what they would call ‘developing a refined sensibility’, by deepening sensitivity to tiny subtle nuances in expression. But at the same time, they’re operating in a pre-pomo world with a fairly stable objective ladder of ‘good’ and ‘bad’ art. (Eliot’s version of this is one of my favourite ever wrong ideas, where poetic images map to specific internal emotional states which are consistent between people, creating some sort of objective shared world.)
This leads to a lot of snottiness and narrow focus on a defined canon of ‘great authors’ and ‘minor authors’. But also the belief in reliable intersubjective understanding gives them the confidence for detailed close reading and really carefully picking apart what works and what doesn’t, and the time they’ve spent developing their ear for fine nuance gives them the ability to actually do this.
The continuation of this is probably somewhere on the other side of the ‘fake pomo blocks path’ wall in David Chapman’s diagram, but I haven’t got there yet, and I really feel like I’m missing something important.
I don’t understand what the appeal of competitive games is supposed to be. Like basically all of them – sports, video games, board games, whatever. Not sure exactly what effects this has on the rest of my thinking, but this seems to be a pretty fundamental normal-human thing that I’m missing, so it must have plenty.
I always get interested in specific examples first, and then work outwards to theory.
My most characteristic type of confusion is not understanding how the thing I’m supposed to be learning about ‘grounds out’ in any sort of experience. ‘That’s a nice chain of symbols you’ve written out there. What does it relate to in the world again?’
I have never in my life expected moral philosophy to have some formal foundation and after a lot of trying I still don’t understand why this is appealing to other people. Humans are an evolved mess and I don’t see why you’d expect a clean abstract framework to ever drop out from that.
Philosophy of mathematics is another subject where I mostly just think ‘um, you what?’ when I try to read it. In fact it has exactly the same subjective flavour to me as moral philosophy. Platonism feels bad the same way virtue ethics feels bad. Formalism feels bad the same way deontology feels bad. Logicism feels bad the same way consequentialism feels bad. (Is this just me?)
I’ve never made any sense out of the idea of an objective flow of time and have thought in terms of a ‘block universe’ picture for as long as I’ve bothered to think about it.
If I don’t much like any of the options available for a given open philosophical or scientific question, I tend to just mentally tag it with ‘none of the above, can I have something better please’. I don’t have the consistency obsession thing where you decide to bite one unappealing bullet or another from the existing options, so that at least you have an opinion.
This probably comes out of my deeper conviction that I’m missing a whole lot of important and fundamental ideas on the level of calculus and evolution, simply on account of nobody having thought of them yet. My default orientation seems to be ‘we don’t know anything about anything’ rather than ‘we’re mostly there but missing a few of the pieces’. This produces a kind of cheerful crackpot optimism, as there is so much to learn.
This list is noticeably lacking in any real opinions on politics and ethics and society and other people stuff. I just don’t have many opinions and don’t like thinking about people stuff very much. That probably doesn’t say anything good about me, but there we are.
I’m also really weak on economics and finance. I especially don’t know how to do that economist/game theoretic thing where you think in terms of what incentives people have. (Maybe this is one place where ‘I don’t understand competitive games’ comes in.)
I’m OK with vagueness. I’m happy to make a vague sloppy statement that should at least cover the target, and maybe try and sharpen it later. I prefer this to the ‘strong opinions, weakly held’ alternative where you chuck a load of precise-but-wrong statements at the target and keep missing. A lot of people will only play this second game, and dismiss the vague-sloppy-statement one as ‘just being bad at thinking’, and I get frustrated.
Not happy about this one, but over time this frustration led me to seriously go off styles of writing that put a strong emphasis on rigour and precision, especially the distinctive dialects you find in pure maths and analytic philosophy. I remember when I was 18 or so and encountered both of these for the first time I was fascinated, because I’d never seen anyone write so clearly before. Later on I got sick of the way that this style tips so easily into pedantry over contextless trivialities (from my perspective anyway). It actually has a lot of good points, though, and it would be nice to be able to appreciate it again.
I enjoyed alkjash’s recent Babble and Prune posts on Less Wrong, and it reminded me of a favourite quote of mine, Feynman’s description of science in The Character of Physical Law:
What we need is imagination, but imagination in a terrible strait-jacket. We have to find a new view of the world that has to agree with everything that is known, but disagree in its predictions somewhere, otherwise it is not interesting.
Imagination here corresponds quite well to Babbling, and the strait-jacket is the Pruning you do afterwards to see if it actually makes any sense.
For my tastes at least, early Less Wrong was generally too focussed on building out the strait-jacket to remember to put the imagination in it. An unfair stereotype would be something like this:
‘I’ve been working on being better calibrated, and I put error bars on all my time estimates to take the planning fallacy into account, and I’ve rearranged my desk more logically, and I’ve developed a really good system to keep track of all the tasks I do and rank them in terms of priority… hang on, why haven’t I had any good ideas??’
I’m poking fun here, but I really shouldn’t, because I have the opposite problem. I tend to go wrong in this sort of way:
‘I’ve cleared out my schedule so I can Think Important Thoughts, and I’ve got that vague idea about that toy model that it would be good to flesh out some time, and I can sort of see how Topic X and Topic Y might be connected if you kind of squint the right way, and it might be worth developing that a bit further, but like I wouldn’t want to force anything, Inspiration Is Mysterious And Shouldn’t Be Rushed… hang on, why have I been reading crap on the internet for the last five days??’
I think this trap is more common among noob writers and artists than noob scientists and programmers, but I managed to fall into it anyway despite studying maths and physics. (I’ve always relied heavily on intuition in both, and that takes you in a very different direction to someone who leans more on formal reasoning.) I’m quite a late convert to systems and planning and organisation, and now that I finally get the point I’m fascinated by them and find them extremely useful.
One particular way I tend to fail is that my over-reliance on intuition leads me to think too highly of any old random thoughts that come into my head. And I’ve now come to the (in retrospect obvious) conclusion that a lot of them are transitory and really just plain stupid, and not worth listening to.
As a simple example, I’ve trained myself to get up straight away when the alarm goes off, and every morning my brain fabricates a bullshit explanation for why today is special and actually I can stay in bed, and it’s quite compelling for half a minute or so. I’ve got things set up so I can ignore it and keep doing things, though, and pretty quickly it just goes away and I never wish that I’d listened to it.
On the other hand, I wouldn’t want to tighten things up so much that I completely stopped having the random stream of bullshit thoughts, because that’s where the good ideas bubble up from too. For now I’m going with the following rule of thumb for resolving the tension:
Thoughts can be herded and corralled by systems, and fed and dammed and diverted by them, but don’t take well to being manipulated individually by systems.
So when I get up, for example, I don’t have a system in place where I try to directly engage with the bullshit explanation du jour and come up with clever countertheories for why I actually shouldn’t go back to bed. I just follow a series of habitual getting-up steps, and then after a few minutes my thoughts are diverted to a more useful track, and then I get on with my day.
A more interesting example is the common writers’ strategy of having a set routine (there’s a whole website devoted to these). Maybe they work at the same time each day, or always work in the same place. This is a system, but it’s not a system that dictates the actual content of the writing directly. You just sit and write, and sometimes it’s good, and sometimes it’s awful, and on rare occasions it’s genuinely inspired, and if you keep plugging on, those rare occasions hopefully become more frequent. I do something similar with making time to learn physics now and it works nicely.
This post is also a small application of the rule itself! I was on an internet diet for a couple of months, and was expecting to generate a few blog post drafts in that time, and was surprised that basically nothing came out in the absence of my usual internet immersion. I thought writing had finally become a pretty freestanding habit for me, but actually it’s still more fragile and tied to a social context than I expected. So this is a deliberate attempt to get the writing flywheel spun up again with something short and straightforward.