Hacker News folk wisdom on visual programming

I’m a fairly frequent Hacker News lurker, especially when I have some other important task that I’m avoiding. I normally head to the Active page (lots of comments, good for procrastination) and pick a nice long discussion thread to browse. So over time I’ve ended up with a good sense of what topics come up a lot. “The Bay Area is too expensive.” “There are too many JavaScript frameworks.” “Bootcamps: good or bad?” I have to admit that I enjoy these. There’s a comforting familiarity in reading the same internet argument over and over again.

One of the more interesting recurring topics is visual programming:

[Screenshot of recurring Hacker News threads on visual programming]

Visual Programming Doesn’t Suck. Or maybe it does? These kinds of arguments usually start with a few shallow rounds of yay/boo. But then often something more interesting happens. Some of the subthreads get into more substantive points, and people with a deep knowledge of the tool in question turn up, and at this point the discussion can become genuinely useful and interesting.

This is one of the things I genuinely appreciate about Hacker News. Most fields have a problem with ‘ghost knowledge’, hard-won practical understanding that is mostly passed on verbally between practitioners and not written down anywhere public. At least in programming some chunk of it makes it into forum posts. It’s normally hidden in the depths of big threads, but that’s better than nothing.

I decided to read a bunch of these visual programming threads and extract some of this folk wisdom into a more accessible form. The background for how I got myself into this is a bit convoluted. In the last year or so I’ve got interested in the development of writing as a technology. There are two books in particular that have inspired me:

  • Walter Ong’s Orality and Literacy: the Technologizing of the Word. This is about the history of writing and how it differs from speech; I wrote a sort of review here. Everything that we now consider obvious, like vowels, full stops and spaces between words, had to be invented at some point, and this book gives a high level overview of how this happened and why.
  • Catarina Dutilh Novaes’s Formal Languages in Logic. The title makes it sound like a maths textbook, but Novaes is a philosopher and really it’s much closer to Ong’s book in spirit, looking at formal languages as a type of writing and exploring how they differ from ordinary written language.

Dutilh Novaes focuses on formal logic, but I’m curious about formal and technical languages more generally: how do we use the properties of text in other fields of mathematics, or in programming? What is text good at, and what is it bad at? Comment threads on visual programming turn out to be a surprisingly good place to explore this question. If something’s easy in text but difficult in a specific visual programming tool, you can guarantee that someone will turn up to complain about it. Some of these complaints are fairly superficial, but others get into some fairly deep properties of text: linearity, information density, an alphabet of discrete symbols. And conversely, enthusiasm for a particular visual feature can be a good indicator of what text is poor at.

So that’s how I found myself plugging through a text file with 1304 comments pasted into it and wondering what the hell I had got myself into.

What I did

Note: This post is looong (around 9000 words), but also very modular. I’ve broken it into lots of subsections that can be read relatively independently, so it should be fairly easy to skip around without reading the whole thing. Also, a lot of the length is from liberal use of quotes from comment threads. So hopefully it’s not quite as bad as it looks!

This is not supposed to be some careful scientific survey. I decided what to include and how to categorise the results based on whatever rough qualitative criteria seemed reasonable to me. The basic method, such as it was, was to paste the comments from six big threads into one enormous text file, read through the lot, and pull out anything that seemed interesting or kept coming up.

The basic structure of the rest of the post is the following:

  • A breakdown of what commenters normally meant by ‘visual programming’ in these threads. It’s a pretty broad term, and people come in with very different understandings of it.
  • Common themes. This is the main bulk of the post, where I’ve pulled out topics that came up in multiple threads.
  • A short discussion-type section with some initial questions that came to mind while writing this. There are many directions I could take this in, and this post is long enough without discussing these in detail, so I’ll just wave at some of them vaguely. Probably I’ll eventually write at least one follow-up post to pick up some of these strands when I’ve thought about them more.

Types of visual programming

There are also a lot of disparate visual programming paradigms that are all classed under “visual”, I guess in the same way that both Haskell and Java are “textual”. It makes for a weird debate when one party in a conversation is thinking about patch/wire dataflow languages as the primary VPLs (e.g. QuartzComposer) and the other one is thinking about procedural block languages (e.g. Scratch) as the primary VPLs.

seanmcdirmid

One difficulty with interpreting these comments is that people often start arguing about ‘visual programming’ without first specifying what type of visual programming they mean. Sometimes this gets cleared up further into a comment thread, when people start naming specific tools, and sometimes it never gets cleared up at all. There were a few broad categories that came up frequently, so I’ll start by summarising them below.

Node-based interfaces

Labview code example.png
Example LabVIEW screen (source)

There are a large number of visual programming tools that are roughly in the paradigm of ‘boxes with some arrows between them’, like the LabVIEW example above. I think the technical term for these is ‘node-based’, so that’s what I’ll call them. These ended up being the main topic of conversation in four of the six discussions, and mostly seemed to be the implied topic when someone was talking about ‘visual programming’ in general. Most of these tools are special-purpose ones that are mainly used in a specific domain. These domains came up repeatedly:

Laboratory and industrial control. LabVIEW was the main tool discussed in this category. In fact it was probably the most commonly discussed tool of all, attracting its fair share of rants but also many defenders.

Game engines. Unreal Engine’s Blueprints was probably the second most common topic. This is a visual gameplay scripting system.

Music production. Max/MSP came up a lot as a tool for connecting and modifying audio clips.

Visual effects. Houdini, Nuke and Blender all have node-based editors for creating effects.

Data migration. SSIS was the main tool here, used for migrating and transforming Microsoft SQL Server data.

Other tools that got a few mentions include Simulink (Matlab-based environment for modelling dynamical systems), Grasshopper for Rhino3D (3D modelling), TouchDesigner (interactive art installations) and Azure Logic Apps (combining cloud services).

The only one of these I’ve used personally is SSIS, and I only have a basic level of knowledge of it.

Block-based IDEs

Scratch development environment (source).

This category includes environments like Scratch that convert some of the syntax of normal programming into coloured blocks that can be slotted together. These are often used as educational tools for new programmers, especially when teaching children.

This was probably the second most common thing people meant by ‘visual programming’, though there was some argument about whether they should count, as they mainly reproduce the conventions of normal text-based programming:

Scratch is a snap-together UI for traditional code. Just because the programming text is embedded inside draggable blocks doesn’t make it a visual language, its a different UI for a text editor. Sure, its visual, but it doesn’t actually change the language at all in any way. It could be just as easily represented as text, the semantics are the same. Its a more beginner-friendly mouse-centric IDE basically.

dkersten

Drag-n-drop UI builders

Drag-n-drop UI builders came up a bit, though not as much as I originally expected, and generally not naming any specific tool (Delphi did get a couple of mentions.) In particular there was very little discussion of the new crop of no-code/low-code tools, I think because most of these threads predate the current hype wave.

These tools are definitely visual, but not necessarily very programmatic — they are often intended for making one specific layout rather than a dynamic range of layouts. And the visual side of UI design tends to run into conflict with the ability to specify dynamic behaviour:

The main challenge in this particular domain is describing what is supposed to happen to the layout when the size of the window changes, or if there are dependencies among visual elements (e.g. some element only appears when a check box is checked). When laying things out visually you can only ever design one particular instance of a layout. If all your elements are static, this works just fine. But if the layout is in any way dynamic (with window resizing being the most common case) you now have to either describe what you want to have happen when things change, or have the system guess. And there are a lot of options: scaling, cropping, letterboxing, overflowing, “smart” reflow… The possibilities are endless, so describing all of that complexity in general requires a full programming language. This is one the reasons that even CSS can be very frustrating, and people often resort to Javascript to get their UI to do the Right Thing.

lisper

These tools also have less of the discretised, structured element that is usually associated with programming — for example, node-based tools still have a discrete ‘grammar’ of allowable box and arrow states that can be composed together. UI tools are relatively continuous and unstructured, where UI elements can be resized to arbitrary pixel sizes.

Spreadsheets

There’s a good argument for spreadsheets being a visual programming paradigm, and a very successful one:

I think spreadsheets also qualify as visual programming languages, because they’re two-dimensional and grid based in a way that one-dimensional textual programming languages aren’t.

The grid enables them to use relative and absolute 2D addressing, so you can copy and paste formulae between cells, so they’re reusable and relocatable. And you can enter addresses and operands by pointing and clicking and dragging, instead of (or as well as) typing text.

DonHopkins
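
To illustrate the relative and absolute addressing DonHopkins describes, here is a toy sketch of what happens to a formula when it gets copied down one row: relative references shift, $-anchored ones stay put. The copy_down helper is invented for illustration and is not any real spreadsheet’s API.

```python
# Toy illustration of relative vs absolute cell addressing. copy_down is a
# made-up helper, not a real spreadsheet API: it shifts row numbers in relative
# references but leaves $-anchored (absolute) ones alone.
import re

def copy_down(formula: str, rows: int = 1) -> str:
    def shift(match):
        col, anchor, row = match.group(1), match.group(2), int(match.group(3))
        return f"{col}{anchor}{row if anchor else row + rows}"
    return re.sub(r"(\$?[A-Z]+)(\$?)(\d+)", shift, formula)

print(copy_down("=$A$1*B2"))  # -> "=$A$1*B3": B2 shifts, $A$1 doesn't
```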

Spreadsheets are definitely not the canonical example anyone has in mind when talking about ‘visual programming’, though, and discussion of spreadsheets was confined to a few subthreads.

Visual enhancements of text-based code

As a believer myself, I think the problem is that visual programming suffers the same problem known as the curse of Artificial Intelligence:

“As soon as a problem in AI is solved, it is no longer considered AI because we know how it works.” [1]

Similarly, as soon as a successful visual interactive feature (be it syntax highlighting, trace inspectors for step-by-step debugging, “intellisense” code completion…) gets adopted by IDEs and become mainstream, it is no longer considered “visual” but an integral and inevitable part of classic “textual programming”.

[1] http://www.artificial-intelligence.com/comic/7

TuringTest

There were several discussions of visual tooling for understanding normal text-based programs better, through debugging traces, dependency graphs, inheritance hierarchies, etc. Again, these were mostly confined to a few subthreads rather than being a central example of ‘visual programming’.

Several people also pointed out that even text-based programming in a plain text file has a number of visual elements. Code as written by humans is not a linear string of bytes; we make use of indentation and whitespace and visually distinctive characters:

Code is always written with “indentation” and other things that demonstrate that the 2d canvas distribution of the glyphs you’re expressing actually does matter for the human element. You’re almost writing ASCII art. The ( ) and [ ] are even in there to evoke other visual types.
nikki93

Brackets are a nice example — they curve towards the text they are enclosing, reinforcing the semantic meaning in a visual way.

Experimental or speculative interfaces

At the other end of the scale from brackets and indentation, we have completely new and experimental visual interfaces. Bret Victor’s Dynamicland and other experiments were often brought up here, along with speculations on the possibilities opened up by VR:

As long as we’re speculating: I kind of dream that maybe we’ll see programming environments that take advantage of VR.

Humans are really good at remembering spaces. (“Describe for me your childhood bedroom.” or “What did your third grade teacher look like?”)

There’s already the idea of “memory palaces” [1] suggesting you can take advantage of spatial memory for other purposes.

I wonder, what would it be like to learn or search a codebase by walking through it and looking around?

[1] https://en.wikipedia.org/wiki/Method_of_loci

danblick

This is the most exciting category, but it’s so wide open and untested that it’s hard to say anything very specific. So, again, this was mainly discussed in tangential subthreads.

Common themes

There were many talking points that recurred again and again over the six threads. I’ve tried to collect them here.

I’ve ordered them in rough order of depth, starting with complaints about visual programming that could probably be addressed with better tooling and then moving towards more fundamental issues that engage with the specific properties of text as a medium (there’s plenty of overlap between these categories, it’s only a rough grouping). Then there’s a grab bag of interesting remarks that didn’t really fit into any category at the end.

Missing tooling

A large number of complaints in all threads were about poor tooling. As a default format, text has an enormous ecosystem of existing tools for input, search, diffing, formatting, etc etc. Most of these could presumably be replicated for any given visual format, but there are many kinds of visual formats and generally these are missing at least some of the conveniences programmers expect. I’ve discussed some of the most common ones below.

Managing complexity

This topic came up over and over again, normally in relation to node-based tools, and often linking to either this Daily WTF screenshot of LabVIEW nightmare spaghetti or the Blueprints from Hell website. Boxes and arrows can get really messy once there are a lot of boxes and a lot of arrows.

Unreal has a VPL and it is a pain to use. A simple piece of code takes up so much desktop real estate that you either have to slowly move around to see it all or have to add more monitors to your setup to see it all. You think spaghetti code is bad imagine actually having a visual representation of it you have to work with. Organization doesn’t exist you can go left, up, right, or down.

smilesnd

The standard counterargument to this was that LabVIEW and most other node-based environments do come with tools for encapsulation: you can generally ‘box up’ sets of nodes into named function-like subdiagrams. The extreme types of spaghetti code are mostly produced by inexperienced users with a poor understanding of the modularisation options available to them, in the same way that a beginner Python programmer with no previous coding experience might write one giant script with no functions:

Somehow people form the opinion that once you start programming in a visual language that you’re suddenly forced, by some unknown force, to start throwing everything into a single diagram without realizing that they separate their text-based programs into 10s, 100s, and even 1000s of files.

Poorly modularized and architected code is just that, no matter the paradigm. And yes, there are a lot of bad LabVIEW programs out there written by people new to the language or undisciplined in their craft, but the same holds true for stuff like Python or anything else that has a low barrier to entry.

bmitc

Viewed through this lens there’s almost an argument that visual spaghetti is a feature not a bug — at least you can directly see that you’ve created a horrible mess, without having to be much of a programming expert.

There were a few more sophisticated arguments against node-based editors that acknowledged the fact that encapsulation existed but still found the mechanics of clicking through layers of subdiagrams to be annoying or confusing.

It may be that I’m just not a visual person, but I’m currently working on a project that has a large visual component in Pentaho Data Integrator (a visual ETL tool). The top level is a pretty simple picture of six boxes in a pipeline, but as you drill down into the components the complexity just explodes, and it’s really easy to get lost. If you have a good 3-D spatial awareness it might be better, but I’ve started printing screenshots and laying them out on the floor. I’m really not a visual person though…

ianmcgowan

IDEs for text-based languages normally have features like code folding and call hierarchies for moving between levels, but these conventions are less developed in node-based tools. This may be just because these tools are more niche and have had less development time, or it may genuinely be a more difficult problem for a 2D layout — I don’t know enough about the details to tell.

Input

In general, all the dragging quickly becomes annoying. As a trained programmer, you can type faster than you can move your mouse around. You have an algorithm clear in your head, but by the time you’ve assembled it half-way on the screen, you already want to give up and go do something else.

TeMPOraL

Text-based languages also have a highly-refined interface for writing the language — most of us have a great big rectangle sitting on our desks with a whole grid of individual keys mapping to specific characters. In comparison, a visual tool based on a different paradigm won’t have a special input device, so it will either have to rely on the mouse (lots of tedious RSI-inducing clicking around) or involve learning a new set of special-purpose keyboard shortcuts. These shortcuts can work well for experienced programmers:

If you are a very experienced programmer, you program LabVIEW (one of the major visual languages) almost exclusively with the keyboard (QuickDrop).

Let me show you an example (gif) I press “Ctrl + space” to open QuickDrop, type “irf” (a short cut I defined myself) and Enter, and this automatically drops a code snippet that creates a data structure for an image, and reads an image file.

link to gif

cdtwoaway

But it’s definitely a barrier to entry.

Formatting

If you have any desire for aesthetics, you’ll be spending lots of time moving wires around.

prewett

Another tedious feature of many node-based tools is arranging all the boxes and arrows neatly on the screen. It’s irrelevant for the program output, but makes a big difference to readability. (Also it’s just downright annoying if the lines look wrong — my main memory of SSIS is endless tweaking to get the arrows lined up nicely).

Text-based languages are more forgiving, and also people tend to solve the problem with autoformatters. I don’t have a good understanding of why these aren’t common in node-based editors. (Maybe they actually are and people were complaining about the tools that are missing them? Or maybe the sort of formatting that is useful is just not automatable, e.g. grouping boxes by semantic meaning). It’s definitely a harder problem than formatting text, but there was some argument about exactly how hard it is to get at least a reasonable solution:

Automatic layout is hard? Yes, an optimal solution to graph layout is NP-complete, but so is register allocation, and my compiler still works (and that isn’t even its bottleneck). There’s plenty of cheap approximations that are 99% as good.

ken
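
To give a concrete sense of what a ‘cheap approximation’ might look like, here is a minimal sketch using networkx’s force-directed layout. The library choice, the heuristic and the node names are all my own assumptions for illustration, not something the thread (or any particular node editor) endorses.

```python
# A minimal sketch of a cheap layout heuristic: force-directed placement via
# networkx, rather than an optimal (NP-complete) layout. Node names are invented.
import networkx as nx

def rough_layout(edges):
    """Return approximate 2D positions for a box-and-arrow graph."""
    graph = nx.DiGraph(edges)
    # spring_layout is a standard fast approximation, not an optimum
    return nx.spring_layout(graph, seed=42)

positions = rough_layout([
    ("read file", "parse"),
    ("parse", "transform"),
    ("transform", "write db"),
])
print(positions)  # maps each node name to an approximate (x, y) position
```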

Version control and code review

Same story again — text comes with a large ecosystem of existing tools for diffing, version control and code review. It sounds like at least the more developed environments like LabVIEW have some kind of diff tool, and an experienced team can build custom tools on top of that:

We used Perforce. So a custom tool was integrated into Perforce’s visual tool such that you could right-click a changelist and submit it for code review. The changelist would be shelved, and then LabVIEW’s diff tool (lvcompare.exe) would be used to create screenshots of all the changes (actually, some custom tools may have done this in tandem with or as a replacement of the diff tool). These screenshots, with a before and after comparison, were uploaded to a code review web server (I forgot the tool used), where comments could be made on the code. You could even annotate the screenshots with little rectangles that highlighted what a comment was referring to. Once the comments were resolved, the code would be submitted and the changelist number logged with the review. This is based off of memory, so some details may be wrong.

This is important because it shows that such things can exist. So the common complaint is more about people forgetting that text-based code review tools originally didn’t exist and were built. It’s just that the visual ones need to be built and/or improved.

bmitc

But you don’t just get nice stuff out of the box.

Debugging

Opinions were split on debugging. Visual, flow-based languages can make it easy to see exactly which route through the code is activated:

Debugging in unreal is also really cool. The “code paths” light up when activated, so it’s really easy to see exactly which branches of code are and aren’t being run – and that’s without actually using a debugger. Side note – it would be awesome if the lines of text in my IDE lit up as they were run. Also, debugging games is just incredibly fun and sometimes leads to new mechanics.

phantom_package

I remember this being about the only enjoyable feature of my brief time working with SSIS — boxes lit up green if everything went to plan, and red if they hit an exception. It was satisfying getting a nice run of green boxes once a bug was fixed.

On the other hand, there were problems with complexity again. Here are some complaints about LabVIEW debugging:

3) debugging is a pain. LabVIEW’s trace is lovely if you have a simple mathematical function or something, but the animation is slow and it’s not easy to check why the value at iteration 1582 is incorrect. Nor can you print anything out, so you end up putting an debugging array output on the front panel and scrolling through it.

4) debugging more than about three levels deep is painful: it’s slow and you’re constantly moving between windows as you step through, and there’s no good way to figure out why the 20th value in the leaf node’s array is wrong on the 15th iteration, and you still can’t print anything, but you can’t use an output array, either, because it’s a sub-VI and it’s going to take forever to step through 15 calls through the hierarchy.

prewett

Use cases

There was a lot of discussion on what sort of problem domains are suited to ‘visual programming’ (which often turned out to mean node-based programming specifically, but not always).

Better for data flow than control flow

A common assertion was that node-based programming is best suited to data flow situations, where a big pile of data is tipped into some kind of pipeline that transforms it into a different form. Migration between databases would be a good example of this. On the other hand, domains with lots of branching control flow were often held to be difficult to work with. Here’s a representative quote:

Control flow is hard to describe visually. Think about how often we write conditions and loops.

That said – working with data is an area that lends itself well to visual programming. Data pipelines don’t have branching control flow and So you’ll see some really successful companies in this space.

macklemoreshair

I’m not sure how true this is? There wasn’t much discussion of why this would be the case, and it seems that LabVIEW for example has decent functionality for loops and conditions:

Aren’t conditionals and loops easier in visual languages? If you need something to iterate, you just draw a for loop around it. If you need two while loops each doing something concurrently, you just draw two parallel while loops. If you need to conditionally do something, just draw a conditional structure and put code in each condition.

One type of control structure I have not seen a good implementation of is pattern matching. But that doesn’t mean it can’t exist, and it’s also something most text-based languages don’t do anyway.

bmitc

Looking at some examples, these don’t look too bad.

Maybe the issue is that there is a conceptual tension between data flow and control flow situations themselves, rather than just the representation of them? Data flow pipelines often involve multiple pieces of data going through the pipeline at once and getting processed concurrently, rather than sequentially. At least one comment addressed this directly:

One of the unappreciated facets of visual languages is precisely the dichotomy between easy dataflow vs easy control flow. Everyone can agree that

--> [A] --> [B] -->

------>

represents (1) a simple pipeline (function composition) and (2) a sort of local no-op, but what about more complex representations? Does parallel composition of arrows and boxes represent multiple data inputs/outputs/computations occurring concurrently, or entry/exit points and alternative choices in a sequential process? Is there a natural “split” of flowlines to represent duplication of data, or instead a natural “merge” for converging control flows after a choice? Do looping diagrams represent variable unification and inference of a fixpoint, or the simpler case of a computation recursing on itself, with control jumping back to an earlier point in the program with updated data?

zozbot34
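
Read as dataflow, the first of those diagrams is just function composition. Here is a minimal sketch of that reading in Python; the stages A and B are made-up placeholders:

```python
# Dataflow reading of the diagrams: boxes are pure functions, arrows are
# composition. "--> [A] --> [B] -->" becomes B(A(x)); the bare arrow is a no-op.
def compose(*stages):
    def pipeline(x):
        for stage in stages:
            x = stage(x)
        return x
    return pipeline

A = lambda x: x * 2   # placeholder stage
B = lambda x: x + 1   # placeholder stage

run = compose(A, B)   # --> [A] --> [B] -->
noop = compose()      # ------> (the local no-op)
assert run(3) == 7 and noop(3) == 3
```

The control-flow reading of the same picture (boxes as sequential steps, with choices and jumps) doesn’t collapse into anything this tidy, which is roughly the dichotomy being pointed at.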

Overall I’d have to learn a fair bit more to understand what the problem is.

Accessible to non-programmers

Less controversially, visual tools are definitely useful for people with little programming experience, as a way to get started without navigating piles of intimidating syntax.

So the value ends up being in giving more people who are unskilled or less skilled in programming a way to express “programmatic thinking” and algorithms.

I have taught dozens of kids scratch and that’s a great application that makes programming accessible to “more” kids.

sfifs

Inherently visual tasks

Visual programming is, unsurprisingly, well-suited to tasks that have a strong visual component. We see this on the small scale with things like colour pickers, which are far more helpful for choosing a colour than typing in an RGB code and hoping for the best. So even primarily text-based tools might throw in some visual features for tasks that are just easier that way.

Some domains, like visual effects, are so reliant on being able to see what you’re doing that visual tools are a no-brainer. See the TouchDesigner tutorial mentioned in this comment for an impressive example. If you need to do a lot of visual manipulation, giving up the advantages of text is a reasonable trade:

Why is plain text so important? Well for starters it powers version control and cut and pasting to share code, which are the basis of collaboration, and collaboration is how we’re able to construct such complex systems. So why then don’t any of the other apps use plain text if it’s so useful? Well 100% of those apps have already given up the advantages of plain text for tangential reasons, e.g., turning knobs on a synth, building a model, or editing a photo are all terrible tasks for plain text.

robenkleene

Niche domains

A related point was that visual tools are generally designed for niche domains, and rarely get co-opted for more general programming. A common claim was that visual tools favour concrete situations over abstract ones:

There is a huge difference between direct manipulation of concrete concepts, and graphical manipulation of abstract code. Visual programming works much better with the former than the latter.

seanmcdirmid

It does seem to be the case that visual tools generally ‘stay close to the phenomena’. There’s a tension between showing a concrete example of a particular situation, and being able to go up to a higher level of abstraction and dynamically generate many different examples. (A similar point came up in the section on drag-n-drop editors above.)

Deeper structural properties of text

“Text is the most socially useful communication technology. It works well in 1:1, 1:N, and M:N modes. It can be indexed and searched efficiently, even by hand. It can be translated. It can be produced and consumed at variable speeds. It is asynchronous. It can be compared, diffed, clustered, corrected, summarized and filtered algorithmically. It permits multiparty editing. It permits branching conversations, lurking, annotation, quoting, reviewing, summarizing, structured responses, exegesis, even fan fic. The breadth, scale and depth of ways people use text is unmatched by anything. There is no equivalent in any other communication technology for the social, communicative, cognitive and reflective complexity of a library full of books or an internet full of postings. Nothing else comes close.”

— Graydon Hoare, always bet on text, quoted by devcriollo

In this section I’ll look at properties that apply more specifically to text. Not everything in the quote above came up in discussion (and much of it is applicable to ordinary language more than to programming languages), but it does give an idea of the special position held by text.

Communicative ability

I think the reason is that text is already a highly optimized visual way to represent information. It started with cave paintings and evolved to what it is now.

“Please go to the supermarket and get two bottles of beer. If you see Joe, tell him we are having a party in my house at 6 tomorrow.”

It took me a few seconds to write that. Imagine I had to paint it.

Changu

The communicative range of text came up a few times. I’m not convinced on this one. It’s true that ordinary language has this ability to finely articulate incredibly specific meanings, in a way that pictures can’t match. But the real reference class we want to compare to is text-based programming, not ordinary language. Programming languages have a much more restrictive set of keywords that communicate a much smaller set of ideas, mostly to do with quantity, logical implication and control flow.

In the supermarket example above, the if-then structure could be expressed in these keywords, but all the rest of the work would be done by tokens like “bottlesOfBeer”, which are meaningless to the computer and only help the human reading it.

As soon as we’ve assigned something a variable name, we’ve already altered our code into a form to assist our cognition.

sinker

It seems much more reasonable that this limited structure of keywords can be ported to a visual language, and in fact a node-based tool like LabVIEW seems to have most of them. Visual languages generally still have the ability to label individual items with text, so you can still have a “bottlesOfBeer” label if you want and get the communicative benefit of language. (It is true that a completely text-free language would be a pain to deal with, but nobody seems to be doing that anyway.)
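
To make this concrete, here is the supermarket errand rendered as entirely hypothetical Python. The only structural keyword doing any work is the ‘if’; everything else is names that carry meaning for the human reader and none for the machine.

```python
# The errand as text-based code. "if" is the only real control-flow keyword;
# names like bottles_of_beer and invite_to_party only help the human reader.
# All of these functions are made-up stand-ins, not a real API.
def buy(item, quantity):
    return [item] * quantity

def invite_to_party(person, when):
    print(f"Told {person}: party at my house at {when}")

def run_errand(joe_seen=False):
    bottles_of_beer = buy("beer", quantity=2)
    if joe_seen:
        invite_to_party("Joe", when="6 tomorrow")
    return bottles_of_beer
```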

Information density

A more convincing related point is that text takes up very little space. We’re already accustomed to distinguishing letters, even if they’re printed in a smallish font, and they can be packed together closely. It is true that the text-based version of the supermarket program would probably take up less space than a visual version.

This complaint came up a lot in relation to mathematical tasks, which are often built up by composing a large number of simpler operations. This can become a massive pain if the individual operations take up a lot of space:

Graphs take up much more space on the screen than text. Grab a pen and draw a computational graph of a Fourier transformation! It takes up a whole screen. As a formula, it takes up a tiny fraction of it. Our state machine used to take up about 2m x 2m on the wall behind us.

Regic
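
For comparison, the formula version really is tiny: the discrete Fourier transform fits in one line of text,

$$X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0, \dots, N-1$$

while the box-and-wire version of the same computation is what Regic says fills a whole screen.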

Many node-based tools seem to have some kind of special node for typing in maths in a more conventional linear way, to get around this problem.

(Sidenote: this didn’t come up in any of the discussions, but I am curious as to how fundamental this limitation is. Part of it comes from the sheer familiarity of text. The first letters we learned as a child were printed a lot bigger! So presumably we could learn to distinguish closely packed shapes if we were familiar enough with the conventions. At this point, of course, with a small number of distinctive glyphs, it would share a lot of properties with text-based language. See the section on discrete symbols below.)

Linearity

Humans are centered around linear communication. Spoken language is essentially linear, with good use of a stack of concepts. This story-telling mode maps better on a linear, textual representation than on a graphical representation. When provided with a graph, it is difficult to find the start and end. Humans think in graphs, but communicate linearly.

edejong

The linearity of text is a feature that is mostly preserved in programming. We don’t literally read one giant 1D line of symbols, of course. It’s broken into lines and there are special structures for loops. But the general movement is vertically downwards. “1.5 dimensions” is a nice description:

When you write text-based code, you are also restricted to 2 dimensions, but it’s really more like 1.5 because there is a heavy directionality bias that’s like a waterfall, down and across. I cannot copy pictures or diagrams into a text document. I cannot draw arrows between comments to the relevant code; I have to embed the comment within the code because of this dimensionality/directionality constraint. I cannot “touch” a variable (wire) while the program is running to inspect its value.

bmitc

It’s true that many visual environments give up this linearity and allow more general positioning in 2D space (arbitrary placing of boxes and arrows in node-based programming, for example, or the 2D grids in spreadsheets). This has benefits and costs.

On the costs side, linear structures are a good match to the sequential execution of program instructions. They’re also easy to navigate and search through, top to bottom, without getting lost in branching confusion. Developing tools like autoformatters is more straightforward (we saw this come up in the earlier section on missing tooling).

On the benefits side, 2D structures give you more of an expressive canvas for communicating the meaning of your program: grouping similar items together, for example, or using shapes to distinguish between types of object.

In LabVIEW, not only do I have a 2D surface for drawing my program, I also get another 2D surface to create user interfaces for any function if I need. In text-languages, you only have colors and syntax to distinguish datatypes. In LabVIEW, you also have shape. These are all additional dimensions of information.

bmitc

They can also help in remembering where things are:

One of the interesting things I found was that the 2-dimensional layout helped a lot in remembering where stuff was: this was especially useful in larger programs.

dpwm

And the match to sequential execution is less important if your target domain is also non-sequential in some way:

If the program is completely non-sequential, visual tools which reflects the structure of the program are going to be much better than text. For example, if you are designing a electronic circuit, you draw a circuit diagram. Describing a electronic circuit purely in text is not going to be very helpful.

nacc

Small discrete set of symbols

Written text IS a visual medium. It works because there is a finite alphabet of characters that can be combined into millions of words. Any other “visual” language needs a similar structure of primitives to be unambiguously interpreted.

c2the3rd

This is a particularly important point that was brought up by several commenters in different threads. Text is built up from a small number of distinguishable characters. Text-based programming languages add even more structure, restricting to a constrained set of keywords that can only be combined in predefined ways. This removes ambiguity in what the program is supposed to do. The computer is much stupider than a human and ultimately needs everything to be completely specified as a sequence of discrete primitive actions.

At the opposite end of the spectrum is, say, an oil painting, which is also a visual medium but much more of an unconstrained, freeform one, where brushstrokes can swirl in any arbitrary pattern. This freedom is useful in artistic fields, where rich ambiguous associative meaning is the whole point, but becomes a nuisance in technical contexts. So different parts of the spectrum are used for different things:

Because each method has its pros and cons. It’s a difference of generality and specificity.

Consider this list as a ranking: 0 and 1 >> alphabet >> Chinese >> picture.

All 4 methods can be useful in some cases. Chinese has tens of thousands of characters, some people consider the language close to pictures, but real pictures have more than that (infinite variants).

Chinese is harder to parse than alphabet, and picture is harder than Chinese. (Imagine a compiler than can understand arbitrary picture!)

c_shu

Visual programs are still generally closer to the text-based program end of the spectrum than the oil painting one. In a node-based programming language, for example, there might be a finite set of types of boxes, and defined rules on how to connect them up. There may be somewhat more freedom than normal text, with the ability to place boxes anywhere on a 2D canvas, but it’s still a long way from being able to slap any old brushstroke down. One commenter compared this to diagrammatic notation in category theory:

Category theorists deliberately use only a tiny, restricted set of the possibilities of drawing diagrams. If you try to get a visual artist or designer interested in the diagrams in a category theory book, they are almost certain to tell you that nothing “visual” worth mentioning is happening in those figures.

Visual culture is distinguished by its richness on expressive dimensions that text and category theory diagrams just don’t have.

theoh
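
To make the ‘tiny, restricted set’ idea concrete, here is a toy sketch of what a node-based grammar might look like: a finite set of box types with typed ports, plus a rule about which ports may be wired together. It is invented purely for illustration and not modelled on any particular tool.

```python
# Toy node 'grammar': a finite set of box types with typed ports, and a rule
# that only type-compatible ports can be connected. Invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeType:
    name: str
    inputs: tuple   # types of the input ports
    outputs: tuple  # types of the output ports

READ_FILE = NodeType("ReadFile", inputs=(), outputs=("text",))
PARSE = NodeType("Parse", inputs=("text",), outputs=("table",))
PLOT = NodeType("Plot", inputs=("table",), outputs=("image",))

def can_connect(src: NodeType, out_port: int, dst: NodeType, in_port: int) -> bool:
    return src.outputs[out_port] == dst.inputs[in_port]

assert can_connect(READ_FILE, 0, PARSE, 0)      # text -> text: allowed
assert not can_connect(READ_FILE, 0, PLOT, 0)   # text -> table: rejected
```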

Drag-n-drop editors are a bit further towards the freeform end of the spectrum, allowing UI elements to be resized continuously to arbitrary sizes. But there are still constraints — maybe your widgets have to be rectangles, for example, rather than any old hand-drawn shape. And, as discussed in earlier sections, there’s a tension between visual specificity and dynamic programming of many potential visual states at once. Drag-n-drop editors arguably lose a lot of the features of ‘true’ languages by giving up structure, and more programmatic elements are likely to still use a constrained set of primitives.

Finally, there was an insightful comment questioning how successful these constrained visual languages are compared to text:

I am not aware of a constrained pictorial formalism that is both general and expressive enough to do the job of a programming language (directed graphs may be general enough, but are not expressive enough; when extended to fix this, they lose the generality.)

… There are some hybrids that are pretty useful in their areas of applicability, such as state transition networks, dataflow models and Petri nets (note that these three examples are all annotated directed graphs.)

mannykannot

This could be a whole blog post topic in itself, and I may return to it in a follow-up post — Dutilh Novaes makes similar points in her discussion of tractability vs expressiveness in formal logic. Too much to go into here, but I do think this is important.

Grab bag of other interesting points

This section is exactly what it says — interesting points that didn’t fit into any of the categories above.

Allowing syntax errors

This is a surprising one I wouldn’t have thought of, but it came up several times and makes a lot of sense on reflection. A lot of visual programming tools are too good at preventing syntax errors. Temporary errors can actually be really useful for refactoring:

This is also one of the beauties of text programming. It allows temporary syntax errors while restructuring things.

I’ve used many visual tools where every block you laid out had to be properly connected, so in order to refactor it you had to make dummy blocks as input and output and all other kinds of crap. Adding or removing arguments and return values of functions/blocks is guaranteed to give you rsi from excessive mousing.

Too

I don’t quite understand why this is so common in visual tools specifically, but it may be to do with the underlying representation? One comment pointed out that this was a more general problem with any kind of language based on an abstract syntax tree that has to be correct at every point:

For my money, the reason for this is that a human editing code needs to write something invalid – on your way from Valid Program A to Valid Program B, you will temporarily write Invalid Jumble Of Bytes X. If your editor tries to prevent you writing invalid jumbles of bytes, you will be fighting it constantly.

The only languages with widely-used AST-based editing is the Lisp family (with paredit). They get away with this because:

  1. Lisp ‘syntax’ is so low-level that it doesn’t constrain your (invalid) intermediate states much. (ie you can still write a (let) or (cond) with the wrong number of arguments while you’re thinking).
  2. Paredit modes always have an “escape hatch” for editing text directly (eg you can usually highlight and delete an unbalanced parenthesis). You don’t need it often (see #1) – but when you need it, you really need it.

meredydd

Maybe this is more common as a way to build a visual language?

Hybrids

Take what we all see at the end of whiteboard sessions. We see diagrams composed of text and icons that represent a broad swath of conceptual meaning. There is no reason why we can’t work in the same way with programming languages and computer.

bmitc

Another recurring theme was a wish for hybrid tools that combined the good parts of visual and text-based tools. One example that came up in the ‘information density’ section was doing maths in a textual format in an otherwise visual tool, which seems to work quite well:

UE4 Blueprints are visual programming, and are done very well. For a lot of things they work are excellent. Everything has a very fine structure to it, you can drag off pins and get context aware options, etc. You can also have sub-functions that are their own graph, so it is cleanly separated. I really like them, and use them for a lot of things.

The issue is that when you get into complex logic and number crunching, it quickly becomes unwieldy. It is much easier to represent logic or mathematics in a flat textual format, especially if you are working in something like K. A single keystroke contains much more information than having to click around on options, create blocks, and connect the blocks. Even in a well-designed interface.

Tools have specific purposes and strengths. Use the right tool for the right job. Some kind of hybrid approach works in a lot of use cases. Sometimes visual scripting is great as an embedded DSL; and sometimes you just need all of the great benefits of high-bandwidth keyboard text entry.

mgreenleaf

Even current text-based environments have some hybrid aspect, as most IDEs support syntax highlighting, autocompletion, code folding etc to get some of the advantages of visualisation.

Visualising the wrong thing

The last comment I’ll quote is sort of ranty but makes a deep point. Most current visual tools only visualise the kind of things (control flow, types) that are already displayed on the screen in a text-based language. It’s a different representation of fundamentally the same thing. But the visualisations we actually want may be very different, and more to do with what the program does than what it looks like on the screen.

‘Visual Programming’ failed (and continues to fail) simply because it is a lie; just because you surround my textual code with boxes and draw arrows showing the ‘flow of execution’ does not make it visual! This core misunderstanding is why all these ‘visual’ tools suck and don’t help anyone do anything practical (read: practical = complex systems).

When I write code, for example a layout algorithm for a set of gui elements, I visually see the data in my head (the gui elements), then I run the algorithm and see the elements ‘move’ into position dependent upon their dock/anchor/margin properties (also taking into account previously docked elements positions, parent element resize delta, etc). This is the visual I need to see on screen! I need to see my real data being manipulated by my algorithms and moving from A to B. I expect with this kind of animation I could easily see when things go wrong naturally, seeing as visual processing happens with no conscious effort.

Instead visual programming thinks I want to see the textual properties of my objects in memory in fancy coloured boxes, which is not the case at all.

hacker_9

I’m not going to try and comment seriously on this, as there’s almost too much to say — it points toward a large number of potential tools and visual paradigms, many of which are speculative or experimental. But it’s useful to end here, as a reminder that the scope of visual programming is not just some boxes with arrows between them.

Final thoughts

This post is long enough already, so I’ll keep this short. I collected all these quotes as a sort of exploratory project with no very clear aim in mind, and I’m not yet sure what I’m going to do with it. I probably want to write at least one follow-up post making links back to the Dutilh Novaes and Ong books on text as a technology. Other than that, here are a few early ideas that came to mind as I wrote it:

How much is ‘visual programming’ a natural category? I quickly discovered that commenters had very different ideas of what ‘visual programming’ meant. Some of these are at least partially in tension with each other. For example, drag-n-drop UI editors often allow near-arbitrary placement of UI elements on the screen, using an intuitive visual interface, but are not necessarily very programmatic. On the other hand, node-based editors allow complicated dynamic logic, but are less ‘visual’, reproducing a lot of the conventions of standard text-based programming. Is there a finer-grained classification that would be more useful than the generic ‘visual programming’ label?

Meaning vs fluency. One of the most appealing features of visual tools is that they can make certain inherently visual actions much more intuitive (a colour picker is a very simple example of this). And proponents of visual programming are often motivated by making programming more understandable. At the same time, a language needs to be a fluent medium for writing code quickly. At the fluent stage, it’s common to ignore the semantic meaning of what you’re doing, and rely on unthinkingly executing known patterns of symbol manipulation instead. Designing for transparent meaning vs designing for fluency are not the same thing — Vim is a great example of a tool that is incomprehensible to beginners but excellent for fluent text manipulation. It could be interesting to explore the tension between them.

‘Missing tooling’ deep dives. I’m not personally all that interested in following this up, since it takes me some way from the ‘text as technology’ angle I came in from, but it seems like an obvious one to mention. The ‘missing tooling’ subsections of this post could all be dug into in far more depth. For each one, it would be valuable to compare many existing visual environments, and understand what’s already available and what the limitations are compared to normal text.

Is ‘folk wisdom from internet forums’ worth exploring as a genre of blog post? Finally, here’s a sort of meta question, about the form of the post rather than the content. There’s an extraordinary amount of hard-to-access knowledge locked up in forums like Hacker News. While writing this post I got distracted by a different rabbit hole about Delphi, which somehow led me to another one about Smalltalk, which… well, you know how it goes. I realised that there were many other posts in this genre that could be worth writing. Maybe there should be more of them?

If you have thoughts on these questions, or on anything else in the post, please leave them in the comments!

Speedrun: “Sensemaking”

This is a genre of post I’ve been experimenting with where I pick a topic, set a one hour timer and see what I can find out in that time. Previously: Marx on alienation and the Vygotsky Circle.

I’ve been seeing the term ‘sensemaking’ crop up more and more often. I even went to a workshop with the word in the title last year! I quite like it, and god knows we could all do with making more sense right now, but I’m pretty vague on the details. Are there any nuances of meaning that I’m missing by interpreting it in its everyday sense? I have a feeling that it has a kind of ecological tinge, group sensemaking more than individual sensemaking, but I could be off the mark.

Also, what’s the origin of the term? I get the impression that it’s associated with some part of the internet that’s not too distant from my own corner, but I’m not exactly sure which one. Time to find out…


OK start with wikipedia:

https://en.wikipedia.org/wiki/Sensemaking

> Sensemaking or sense-making is the process by which people give meaning to their collective experiences. It has been defined as "the ongoing retrospective development of plausible images that rationalize what people are doing" (Weick, Sutcliffe, & Obstfeld, 2005, p. 409). The concept was introduced to organizational studies by Karl E. Weick in the 1970s and has affected both theory and practice.

Who’s Weick?

> Karl Edward Weick (born October 31, 1936) is an American organizational theorist who introduced the concepts of "loose coupling", "mindfulness", and "sensemaking" into organizational studies.

And, um, what’s organizational studies?

> Organizational studies is "the examination of how individuals construct organizational structures, processes, and practices and how these, in turn, shape social relations and create institutions that ultimately influence people".[1]

OK, something sociology-related. It’s a stub so probably not a huge subfield?

Weick ‘key contributions’ subheadings: ‘enactment’, ‘loose coupling’, ‘sensemaking’, ‘mindfulness’, ‘organizational information theory’

> Although he tried several degree programs within the psychology department, the department finally built a degree program specifically for Weick and fellow student Genie Plog called "organizational psychology".[3]

Only quoting this bc Genie Plog is a great name.

So, enactment: ‘certain phenomena are created by being talked about’. Fine.

Loose coupling:

> Loose coupling in Weick’s sense is a term intended to capture the necessary degree of flex between an organization’s internal abstraction of reality, its theory of the world, on the one hand, and the concrete material actuality within which it finally acts, on the other.

Hm that could be interesting but might take me too far off topic.

Sensemaking:

> People try to make sense of organizations, and organizations themselves try to make sense of their environment. In this sense-making, Weick pays attention to questions of ambiguity and uncertainty, known as equivocality in organizational research that adopts information processing theory.

bit vague but the next bit is more concrete:

> His contributions to the theory of sensemaking include research papers such as his detailed analysis of the breakdown of sensemaking in the case of the Mann Gulch disaster,[8] in which he defines the notion of a ‘cosmology episode’ – a challenge to assumptions that causes participants to question their own capacity to act.

Mann Gulch was a big firefighting disaster:

> As the team approached the fire to begin fighting it, unexpected high winds caused the fire to suddenly expand, cutting off the men’s route and forcing them back uphill. During the next few minutes, a "blow-up" of the fire covered 3,000 acres (1,200 ha) in ten minutes, claiming the lives of 13 firefighters, including 12 of the smokejumpers. Only three of the smokejumpers survived. The fire would continue for five more days before being controlled.

> The United States Forest Service drew lessons from the tragedy of the Mann Gulch fire by designing new training techniques and safety measures that developed how the agency approached wildfire suppression. The agency also increased emphasis on fire research and the science of fire behavior.

This is interesting but I’m in danger of tab explosion here. Keep a tab open with the paper and move on. Can’t resist opening the cosmology episode page though:

> A cosmology episode is a sudden loss of meaning, followed eventually by a transformative pivot, which creates the conditions for revised meaning.

ooh nice. Weick again:

> "Representations of events normally hang together sensibly within the set of assumptions that give them life and constitute a ‘cosmos’ rather than its opposite, a ‘chaos.’ Sudden losses of meaning that can occur when an event is represented electronically in an incomplete, cryptic form are what I call a ‘cosmology episode.’ Representations in the electronic world can become chaotic for at least two reasons: The data in these representations are flawed, and the people who manage those flawed data have limited processing capacity. These two problems interact in a potentially deadly vicious circle."

This is the kind of page that looks like it was written by one enthusiast. But it is pretty interesting. Right, back to Weick.

‘Mindfulness’: this is at a collective, organisational level

> The effective adoption of collective mindfulness characteristics by an organization appears to cultivate safer cultures that exhibit improved system outcomes.

I’m not going to look up ‘organizational information theory’, I have a bit of a ‘systems thinking’ allergy and I don’t wanna.

Right, back to sensemaking article. Roots in social psychology. ‘Shifting the focus from organizations as entities to organizing as an activity.’

‘Seven properties of sensemaking’. Ugh I hate these sort of numbered lists but fine.

  1. Identity. ‘who people think they are in their context shapes what they enact and how they interpret events’

  2. Retrospection. ‘the point of retrospection in time affects what people notice (Dunford & Jones, 2000), thus attention and interruptions to that attention are highly relevant to the process’.

  3. Enaction. ‘As people speak, and build narrative accounts, it helps them understand what they think, organize their experiences and control and predict events’

  4. Social activity. ‘plausible stories are preserved, retained or shared’.

  5. Ongoing. ‘Individuals simultaneously shape and react to the environments they face… As Weick argued, "The basic idea of sensemaking is that reality is an ongoing accomplishment that emerges from efforts to create order and make retrospective sense of what occurs"’

  6. Extract cues from the context.

  7. Plausibility over accuracy.

The sort of gestalt I’m getting is that it focusses on social rather than individual thinking, and action-oriented contextual in-the-thick-of-it doing rather than abstract planning ahead. Some similar terminology to ethnomethodology I think? e.g. accountability.

Ah yeah: ‘Sensemaking scholars are less interested in the intricacies of planning than in the details of action’

> The sensemaking approach is often used to provide insight into factors that surface as organizations address either uncertain or ambiguous situations (Weick 1988, 1993; Weick et al., 2005). Beginning in the 1980s with an influential re-analysis of the Bhopal disaster, Weick’s name has come to be associated with the study of the situated sensemaking that influences the outcomes of disasters (Weick 1993).

‘Categories and related concepts’:

> The categories of sensemaking included: constituent-minded, cultural, ecological, environmental, future-oriented, intercultural, interpersonal, market, political, prosocial, prospective, and resourceful. The sensemaking-related concepts included: sensebreaking, sensedemanding, sense-exchanging, sensegiving, sensehiding, and sense specification.

Haha OK it’s this sort of ‘fluidity soup’ that I have an allergy to. Too many of these buzzwords together. ‘Systems thinking’ is just a warning sign.

‘Other applications’: military stuff. Makes sense, lots of uncertainty and ambiguity there. Patient safety (looks like another random paragraph added by an enthusiast).

There’s a big eclectic ‘see also’ list. None of those are jumping out as the obvious next follow. Back to google. What I really want to know is why people are using this word now in some internet subcultures. Might be quite youtube centred? In which case there is no hope of tracking it down in one speedrun.

Oh yeah let’s look at google images:

Looks like businessy death by powerpoint contexts, not so helpful.

31 minutes left. Shit this goes quick!!

Google is giving me lots of video links. One is Daniel Schmachtenberger, ‘The War on Sensemaking’. Maybe this is the subcultural version I’ve been seeing? His name is familiar. Ok google ‘daniel schmachtenberger sensemaking’. Rebel Wisdom. Yep I’ve vaguely heard of that.

OK here is a Medium post about that series, by Andrew Sweeny:

> There is a war going on in our current information ecosystem. It is a war of propaganda, emotional manipulation, blatant or unconscious lies. It is nothing new, but is reaching a new intensity as our technology evolves. The result is that it has become harder and harder to make sense of the world, with potentially fatal consequences. If we can’t make sense of the world, neither can we make good decisions or meet the many challenges we face as a species.

Yes this is the sort of context I was imagining:

> In War on Sensemaking, futurist and visionary Daniel Schmachtenberger outlines in forensic detail the dynamics at play in this new information ecology — one in which we are all subsumed. He explores how companies, government, and media take advantage of our distracted and vulnerable state, and how we as individuals can develop the discernment and sensemaking skills necessary to navigate this new reality. Schmachtenberger has an admirable ability to diagnose this issue, while offering epistemological and practical ways to help repair the dark labyrinth of a broken information ecology.

It’d be nice to trace the link from Weick to this.

Some stuff about zero sum games and bullshit. Mentions Vervaeke.

> Schmachtenberger also makes the point that in order to become a good sensemaker we need ‘stressors’ — demands that push our mind, body, and heart beyond comfort, and beyond the received wisdom we have inherited. It is not enough to passively consume information: we first need to engage actively with the information ecology we live in and start being aware of how we respond to it, where it is coming from, and why it is being used.

Getting the sense that ‘information ecology’ is a key phrase round here.

Oh yeah ‘Game B’! I’ve heard that phrase around. Some more names: ‘Jordan Hall, Jim Rutt, Bonnita Roy’.

‘Sovereignty’: ‘becoming responsible for our own shit’… ‘A real social, ‘kitchen sink level’ of reality must be cultivated to avoid the dangers of too much abstraction, individualism, and idealism.’ Seems like a good idea.

‘Rule Omega’. This one is new to me:

> Rule Omega is simple, but often hard to put into practice. The idea is that every message contains some signal and some noise, and we can train ourselves to distinguish truth and nonsense — to separate the wheat from the chaff. If we disapprove of 95% of a distasteful political rant, for instance, we could train ourselves to hear the 5% that is true.

> Rule Omega means learning to recognise the signal within the noise. This requires a certain attunement and generosity towards the other, especially those who think differently than we do. And Rule Omega can only be applied to those who are willing to engage in a different game, and work with each other in good faith.

Also seems like a Good Thing. Then some stuff about listening to people outside your bubble. Probably a link here to ‘memetic tribes’ type people.

This is a well written article, glad I picked something good.

‘Information war’ and shadow stuff:

> Certainly there are bad actors and conspiracies to harm us, but there is also the ‘shadow within’. The shadow is the unacknowledged part we play in the destruction of the commons and in the never-ending vicious cycle of narrative war. We need to pay attention to the subtle lies we tell ourselves, as much as the ‘big’ lies that society tells us all the time. The trouble is: we can’t help being involved in destructive game theory logic, to a greater or lesser degree.

‘Anti-rivalrous systems’. Do stuff that increases value for others as well as yourself. Connection to ‘anti-rivalrous products’ in economics.

‘Information immune system’. Yeah this is nice! It sort of somehow reminds me of the old skeptics movement in its attempts to help people escape nonsense, but rooted in a warmer and more helpful set of background ideas, and with less tribal outgroup bashing. Everything here sounds good and if it helps people out of ideology prisons I’m all for it. Still kind of curious about intellectual underpinnings… like is there a straight line from Weick to this or did they just borrow a resonant phrase?

‘The dangers of concepts’. Some self-awareness that these ideas can be used to create more bullshit and misinformation themselves.

> As such it can be dangerous to outsource our sensemaking to concepts — instead we need to embody them in our words and actions. Wrestling with the snake of self-deception and illusion and trying to build a better world in this way is a tough game. But it is the only game worth playing.

Games seem to be a recurring motif. Maybe Finite and Infinite Games is another influence.

OK 13 minutes left, what to do? Maybe trace out the link? google ‘schmachtenberger weick’. Not finding much. I’m now on some site called Conversational Leadership which seems to be connected to this scene somehow. Ugh not sure what to do. Back to plain old google ‘sensemaking’ search.

Let’s try this article by Laura McNamara, an organizational anthropologist. Nice job title! Yeah her background looks really interesting:

> Principal Member of Technical Staff at Sandia National Laboratories. She has spent her career partnering with computer scientists, software engineers, physicists, human factors experts, I/O psychologists, and analysts of all sorts.

OK maybe she is trying to bridge the gap between old and new usages:

> Sensemaking is a term that gets thrown around a lot without much consideration about where the concept came from or what it really means. If sensemaking theory is democratizing, that’s a good thing.

6 minutes left so I won’t get through all of this. Pick some interesting bits.

> One of my favorite books about sensemaking is Karl Weick’s, Sensemaking in Organizations. I owe a debt of thanks to the nuclear engineer who suggested I read it. This was back in 2001, when I was at Los Alamos National Laboratory (LANL). I’d just finished my dissertation and was starting a postdoctoral position in the statistics group, and word got around that the laboratories had an anthropologist on staff. My nuclear engineer friend was working on a project examining how management changes were impacting team dynamics in one of LANL’s radiochemistry bench laboratories. He called me asking if I had time to work on the project with him, and he asked if I knew much about “sensemaking.” Apparently, his officemate had recently married a qualitative evaluation researcher, who suggested that both of these LANL engineers take the time to read Karl Weick’s book Sensemaking in Organizations.

> My nuclear engineer colleague thought it was the most brilliant thing he’d ever read and was shocked, SHOCKED, that I’d never heard of sensemaking or Karl Weick. I muttered something about anthropologists not always being literate in organizational theory, got off the phone, and immediately logged onto Amazon and ordered it.

Weick’s influences:

> … a breathtakingly broad array of ideas – Emily Dickinson, Anthony Giddens, Pablo Neruda, Edmund Leach…

‘Recipe for sensemaking:’

> Chapter Two of Sensemaking in Organizations contains what is perhaps Weick’s most cited sentence, the recipe for sensemaking: “How can I know what I think until I see what I say?”

And this from the intro paragraph, could be an interesting reference:

> in his gorgeous essay Social Things (which you should read if you haven’t already), Charles Lemert reminds us that social science articulates our native social intelligence through instruments of theory, concepts, methods, language, discourse, texts. Really good sociology and anthropology sharpen that intelligence. They’re powerful because they enhance our understanding of what it means to be human, and they really should belong to everyone.

Something about wiki platforms for knowledge sharing:

> For example, back in 2008, my colleague Nancy Dixon and I did a brief study—just a few weeks—examining how intelligence analysts were responding to the introduction of Intellipedia, a wiki platform intended to promote knowledge exchange and cross-domain collaboration across the United States Intelligence community.

DING! Time’s up.


That actually went really well! Favourite speedrun so far, felt like I found out a lot. Most of the references I ended up on were really well-written and clear this time, no wading through rubbish.

I’m still curious to trace the link between Weick and the recent subculture. Also I might read more of the disaster stuff, and read that last McNamara article more carefully. Lots to look into! If anyone has any other suggestions, please leave a comment 🙂

Speedrun: The Vygotsky Circle

I did a ‘speedrun’ post a couple of months ago where I set a one hour timer and tried to find out as much as I could about Marx’s theory of alienation. That turned out to be pretty fun, so I’m going to try it again with another topic where I have about an hour’s worth of curiosity.

I saw a wikipedia link to something called ‘the Vygotsky Circle’ a while back. I didn’t click the link (don’t want to spoil the fun!) but from the hoverover it looks like that includes Vygotsky, Luria and… some other Russian psychologists, I guess? I’d heard of those two, but I only have the faintest idea of what they did. Here’s the entirety of my current knowledge:

  • Vygotsky wrote a book called Thought and Language. Something about internalisation?
  • Luria’s the one who went around pestering peasants with questions about whether bears in the Arctic are white. And presumably a load of other stuff… he pops up in pop books with some frequency. E.g. I think he did a study of someone with an extraordinary memory?

That’s about it, so plenty of room to learn more. And also anything sounds about ten times more interesting if it’s a Circle. Suddenly it’s an intellectual movement, not a disparate bunch of nerds. So… let’s give this a go.


OK first go to that wiki article.

The Vygotsky Circle (also known as Vygotsky–Luria Circle[1][2]) was an influential informal network of psychologists, educationalists, medical specialists, physiologists, and neuroscientists, associated with Lev Vygotsky (1896–1934) and Alexander Luria (1902–1977), active in 1920-early 1940s in the Soviet Union (Moscow, Leningrad and Kharkiv).

So who’s in it?

The Circle included altogether around three dozen individuals at different periods, including Leonid Sakharov, Boris Varshava, Nikolai Bernstein, Solomon Gellerstein, Mark Lebedinsky, Leonid Zankov, Aleksei N. Leontiev, Alexander Zaporozhets, Daniil Elkonin, Lydia Bozhovich, Bluma Zeigarnik, Filipp Bassin, and many others. German-American psychologist Kurt Lewin and Russian film director and art theorist Sergei Eisenstein are also mentioned as the “peripheral members” of the Circle.

OK that’s a lot of people! Hm this is a very short article. Maybe the Russian one is longer? Nope. So this is the entirety of the history of the Circle given:

The Vygotsky Circle was formed around 1924 in Moscow after Vygotsky moved there from the provincial town of Gomel in Belarus. There at the Institute of Psychology he met graduate students Zankov, Solov’ev, Sakharov, and Varshava, as well as future collaborator Aleksander Luria.[5]:427–428 The group grew incrementally and operated in Moscow, Kharkiv, and Leningrad; all in the Soviet Union. From the beginning of World War II 1 Sept 1939 to the start of the Great Patriotic War, 22 June 1941, several centers of post-Vygotskian research were formed by Luria, Leontiev, Zankov, and Elkonin. The Circle ended, however, when the Soviet Union was invaded by Germany to start the Great Patriotic War.

However, by the end of the 1930s a new center was formed around 1939 under the leadership of Luria and Leontiev. In the after-war period this developed into the so-called “School of Vygotsky-Leontiev-Luria”. Recent studies show that this “school” never existed as such.

There are two problems that are related to the Vygotsky circle. First was the historical recording of the Soviet psychology with innumerable gaps in time and prejudice. Second was the almost exclusive focus on the person, Lev Vygotsky, himself to the extent that the scientific contributions of other notable characters have been considerably downplayed or forgotten.

This is all a bit more nebulous than I was hoping for. Lots of references and sources at least. May end up just covering Vygotsky and Luria.

OK Vygotsky wiki article. What did he do?

He is known for his concept of the zone of proximal development (ZPD): the distance between what a student (apprentice, new employee, etc.) can do on their own, and what they can accomplish with the support of someone more knowledgeable about the activity. Vygotsky saw the ZPD as a measure of skills that are in the process of maturing, as supplement to measures of development that only look at a learner’s independent ability.

Also influential are his works on the relationship between language and thought, the development of language, and a general theory of development through actions and relationships in a socio-cultural environment.

OK here’s the internalisation thing I vaguely remembered hearing about:

… the majority of his work involved the study of infant and child behavior, as well as the development of language acquisition (such as the importance of pointing and inner speech[5]) …

Influenced by Piaget, but differed on inner speech:

Piaget asserted that egocentric speech in children “dissolved away” as they matured, while Vygotsky maintained that egocentric speech became internalized, what we now call “inner speech”.

Not sure I’ve picked a good topic this time, pulls in way too many directions so this is going to be very shallow and skip around. And ofc there’s lots of confusing turbulent historical background, and all these pages refer to various controversies of interpretation 😦 Skip to Luria, can always come back:

Alexander Romanovich Luria (Russian: Алекса́ндр Рома́нович Лу́рия, IPA: [ˈlurʲɪjə]; 16 July 1902 – 14 August 1977) was a Russian neuropsychologist, often credited as a father of modern neuropsychological assessment. He developed an extensive and original battery of neuropsychological tests during his clinical work with brain-injured victims of World War II, which are still used in various forms. He made an in-depth analysis of the functioning of various brain regions and integrative processes of the brain in general. Luria’s magnum opus, Higher Cortical Functions in Man (1962), is a much-used psychological textbook which has been translated into many languages and which he supplemented with The Working Brain in 1973.

… became famous for his studies of low-educated populations in the south of the Soviet Union showing that they use different categorization than the educated world (determined by functionality of their tools).

OK so this was early on.

Some biographical stuff. Born in Kazan, studied there, then moved to Moscow where he met Vygotsky. And others:

During the 1920s Luria also met a large number of scholars, including Aleksei N. Leontiev, Mark Lebedinsky, Alexander Zaporozhets, Bluma Zeigarnik, many of whom would remain his lifelong colleagues.

Leontiev’s turned up a few times, open in another tab.

OK the phrase ‘cultural-historical psychology’ has come up. Open the wikipedia page:

Cultural-historical psychology is a branch of avant-garde and futuristic psychological theory and practice of the “science of Superman” associated with Lev Vygotsky and Alexander Luria and their Circle, who initiated it in the mid-1920s–1930s.[1] The phrase “cultural-historical psychology” never occurs in the writings of Vygotsky, and was subsequently ascribed to him by his critics and followers alike, yet it is under this title that this intellectual movement is now widely known.

This all sounds like a confusing mess where I’d need to learn way more background than I’m going to pick up in an hour. Back to Luria. Here’s the peasant-bothering stuff:

The 1930s were significant to Luria because his studies of indigenous people opened the field of multiculturalism to his general interests.[12] This interest would be revived in the later twentieth century by a variety of scholars and researchers who began studying and defending indigenous peoples throughout the world. Luria’s work continued in this field with expeditions to Central Asia. Under the supervision of Vygotsky, Luria investigated various psychological changes (including perception, problem solving, and memory) that take place as a result of cultural development of undereducated minorities. In this regard he has been credited with a major contribution to the study of orality.

That last bit has a footnote to Ong’s Orality and Literacy. Another place I’ve seen the name before.

In 1933, Luria married Lana P. Lipchina, a well-known specialist in microbiology with a doctorate in the biological sciences.

Then studied aphasia:

In his early neuropsychological work in the end of the 1930s as well as throughout his postwar academic life he focused on the study of aphasia, focusing on the relation between language, thought, and cortical functions, particularly on the development of compensatory functions for aphasia.

This must be another pop-science topic where I’ve come across him before. Hm where’s the memory bit? Oh I missed it:

Apart from his work with Vygotsky, Luria is widely known for two extraordinary psychological case studies: The Mind of a Mnemonist, about Solomon Shereshevsky, who had highly advanced memory; and The Man with a Shattered World, about a man with traumatic brain injury.

Ah this turns out to be late on in his career:

Among his late writings are also two extended case studies directed toward the popular press and a general readership, in which he presented some of the results of major advances in the field of clinical neuropsychology. These two books are among his most popular writings. According to Oliver Sacks, in these works “science became poetry”.[31]

In The Mind of a Mnemonist (1968), Luria studied Solomon Shereshevskii, a Russian journalist with a seemingly unlimited memory, sometimes referred to in contemporary literature as “flashbulb” memory, in part due to his fivefold synesthesia.

In The Man with the Shattered World (1971) he documented the recovery under his treatment of the soldier Lev Zasetsky, who had suffered a brain wound in World War II.

OK 27 minutes left. I’ll look up some of the other characters. Leontiev first. Apparently he was ‘a Soviet developmental psychologist, philosopher and the founder of activity theory.’ What’s activity theory?

Activity theory (AT; Russian: Теория деятельности)[1] is an umbrella term for a line of eclectic social sciences theories and research with its roots in the Soviet psychological activity theory pioneered by Sergei Rubinstein in 1930s. At a later time it was advocated for and popularized by Alexei Leont’ev. Some of the traces of the theory in its inception can also be found in a few works of Lev Vygotsky,[2]. These scholars sought to understand human activities as systemic and socially situated phenomena and to go beyond paradigms of reflexology (the teaching of Vladimir Bekhterev and his followers) and classical conditioning (the teaching of Ivan Pavlov and his school), psychoanalysis and behaviorism.

So maybe he founded it or maybe he just advocated for it. This is all a bit of a mess. But, ok, it’s an umbrella term for moving past behaviourism.

One of the strengths of AT is that it bridges the gap between the individual subject and the social reality—it studies both through the mediating activity. The unit of analysis in AT is the concept of object-oriented, collective and culturally mediated human activity, or activity system.

This all looks sort of interesting, but a bit vague, and will probably take me down some other rabbithole. Back to Leontiev.

After Vygotsky’s early death, Leont’ev became the leader of the research group nowadays known as the Kharkov School of Psychology and extended Vygotsky’s research framework in significantly new ways.

Oh shit completely missed the whole thing about Vygotsky’s early death. Back to him… died aged 37! Of tuberculosis. Mostly became famous after his death, and through the influence of his students. Ah this bit on his influence might be useful. Soviet influence first:

In the Soviet Union, the work of the group of Vygotsky’s students known as the Vygotsky Circle was responsible for Vygotsky’s scientific legacy.[42] The members of the group subsequently laid a foundation for Vygotskian psychology’s systematic development in such diverse fields as the psychology of memory (P. Zinchenko), perception, sensation, and movement (Zaporozhets, Asnin, A. N. Leont’ev), personality (Lidiya Bozhovich, Asnin, A. N. Leont’ev), will and volition (Zaporozhets, A. N. Leont’ev, P. Zinchenko, L. Bozhovich, Asnin), psychology of play (G. D. Lukov, Daniil El’konin) and psychology of learning (P. Zinchenko, L. Bozhovich, D. El’konin), as well as the theory of step-by-step formation of mental actions (Pyotr Gal’perin), general psychological activity theory (A. N. Leont’ev) and psychology of action (Zaporozhets).

That at least says something about what all of those names did. Open the Zinchenko tab first.

Then North American influence:

In 1962 a translation of his posthumous 1934 book, Thinking and Speech, published with the title, Thought and Language, did not seem to change the situation considerably.[citation needed] It was only after an eclectic compilation of partly rephrased and partly translated works of Vygotsky and his collaborators, published in 1978 under Vygotsky’s name as Mind in Society, that the Vygotsky boom started in the West: originally, in North America, and later, following the North American example, spread to other regions of the world.[citation needed] This version of Vygotskian science is typically associated with the names of its chief proponents Michael Cole, James Wertsch, their associates and followers, and is relatively well known under the names of “cultural-historical activity theory” (aka CHAT) or “activity theory”.[45][46][47] Scaffolding, a concept introduced by Wood, Bruner, and Ross in 1976, is somewhat related to the idea of ZPD, although Vygotsky never used the term.

Ah so Thought and Language was posthumous.

Then a big pile of controversy about how his work was interpreted. Now we’re getting headings like ‘Revisionist movement in Vygotsky Studies’, think I’ll bail out now. 16 minutes left.

OK let’s try Zinchenko page.

The main theme of Zinchenko’s research is involuntary memory, studied from the perspective of the activity approach in psychology. In a series of studies, Zinchenko demonstrated that recall of the material to be remembered strongly depends on the kind of activity directed on the material, the motivation to perform the activity, the level of interest in the material and the degree of involvement in the activity. Thus, he showed that following the task of sorting material in experimental settings, human subjects demonstrate a better involuntary recall rate than in the task of voluntary material memorization.

This influenced Leontiev and activity theory. That’s about all the detail there is. What to do next? Look up some of the other people I guess. Try a few, they’re all very short articles, give up with that.

Fine I’ll just google ‘vygotsky thought and language’ and see what I get. MIT Press description:

Vygotsky’s closely reasoned, highly readable analysis of the nature of verbal thought as based on word meaning marks a significant step forward in the growing effort to understand cognitive processes. Speech is, he argues, social in origins. It is learned from others and, at first, used entirely for affective and social functions. Only with time does it come to have self-directive properties that eventually result in internalized verbal thought. To Vygotsky, “a word is a microcosm of human consciousness.”

OK, yeah that does sound interesting.

Not finding great sources. 8 minutes left. Zone of proximal development section of Vygotsky’s page:

“Zone of Proximal Development” (ZPD) is a term Vygotsky used to characterize an individual’s mental development. He originally defined the ZPD as “the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers.” He used the example of two children in school who originally could solve problems at an eight-year-old developmental level (that is, typical for children who were age 8). After each child received assistance from an adult, one was able to perform at a nine-year-old level and one was able to perform at a twelve-year-old level. He said “This difference between twelve and eight, or between nine and eight, is what we call the zone of proximal development.” He further said that the ZPD “defines those functions that have not yet matured but are in the process of maturation, functions that will mature tomorrow but are currently in an embryonic state.” The zone is bracketed by the learner’s current ability and the ability they can achieve with the aid of an instructor of some capacity.

ZPD page itself:

Vygotsky spent a lot of time studying the impact of school instruction on children and noted that children grasp language concepts quite naturally, but that math and writing did not come as naturally. Essentially, he concluded that because these concepts were taught in school settings with unnecessary assessments, they were of more difficulty to learners. Piaget believed that there was a clear distinction between development and teaching. He said that development is a spontaneous process that is initiated and completed by the children, stemming from their own efforts. Piaget was a proponent of independent thinking and critical of the standard teacher-led instruction that was common practice in schools.

But also:

… He believed that children would not advance very far if they were left to discover everything on their own. It’s crucial for a child’s development that they are able to interact with more knowledgeable others. They would not be able to expand on what they know if this wasn’t possible.

OK 3 minutes left. Let’s wildly skip between tabs learning absolutely nothing. Hm maybe this would have been interesting? ‘Vygotsky circle as a personal network of scholars: restoring connections between people and ideas’.

Ding! Didn’t get much past reading the title.


Well that didn’t work as well as the alienation one. Sprawling topic, and I wasn’t very clear on what I wanted to get out of it. History of the Circle itself or just some random facts about what individual people in it did? I mostly ended up with the second one, and not much insight into what held it together conceptually, beyond some vague idea about ‘going beyond behaviourism’/’looking at general background of human activity, not just immediate task’.

Still, I guess I know a bit more about these people than I did going in, and would be able to orient more quickly if I wanted to find out anything specific.

“Neoliberalism”


[Written as part of Notebook Blog Month.]

Everybody hates neoliberalism, it’s the law. But what is it?

This is probably the topic I’m most ignorant about and ill-prepared-for on the whole list, and I wasn’t going to do it. But it’s good prep for the bullshit jobs post, which was a popular choice, so I’m going to try. I’m going to be trying to articulate my current thoughts, rather than attempting to say anything original. And also I’m not really talking about neoliberalism as a coherent ideology or movement. (I think I’d have to do another speedrun just to have a chance of saying something sensible.) More like “neoliberalism”, scarequoted, as a sort of diffuse cloud of associations that the term brings to mind. Here’s my cloud (very UK-centric):

  • Big amorphous companies with bland generic names like Serco or Interserve, providing an incoherent mix of services to the public sector, with no obvious specialism beyond winning government contracts
  • Public private partnerships
  • Metrics! Lots of metrics!
  • Incuriosity about specifics. E.g. management by pushing to make a number go up, rather than any deep engagement with the particulars of the specific problem
  • Food got really good over this period. I think this actually might be relevant and not just something that happened at the same time
  • Low cost short-haul airlines becoming a big thing (in Europe anyway – don’t really understand how widespread this is)
  • Thinking you’re on a public right of way but actually it’s a private street owned by some shopping centre or w/e. With private security and lots of CCTV
  • Post-industrial harbourside developments with old warehouses converted into a Giraffe and a Slug and Lettuce
  • A caricatured version of Tony Blair’s disembodied head is floating over the top of this whole scene like a barrage balloon. I don’t think this is important but I thought you’d like to know

I’ve had this topic vaguely in mind since I read a blog post by Timothy Burke, a professor of modern history, a while back. The post itself has a standard offhand ‘boo neoliberalism’ side remark, but then when challenged in the comments he backs it up with an excellent, insightful sketch of what he means. (Maybe this post should just have been a copy of this comment, instead of my ramblings.)

I’m sensitive to the complaint that “neoliberalism” is a buzz word that can mean almost everything (usually something the speaker disapproves of).

A full fleshing out is more than I can provide, though. But here’s some sketches of what I have in mind:

1) The Reagan-Thatcher assault on “government” and aligned conceptions of “the public”–these were not merely attempts to produce new efficiencies in government, but a broad, sustained philosophical rejection of the idea that government can be a major way to align values and outcomes, to tackle social problems, to restrain or dampen the power of the market to damage existing communities. “The public” is not the same, but it was an additional target: the notion that citizens have shared or collective responsibilities, that there are resources and domains which should not be owned privately but instead open to and shared by all, etc. That’s led to a conception of citizenship or social identity that is entirely individualized, privatized, self-centered, self-affirming, and which accepts no responsibility to shared truths, facts, or mechanisms of dispute and deliberation.

2) The idea of comprehensively measuring, assessing, quantifying performance in numerous domains; insisting that values which cannot be measured or quantified are of no worth or usefulness; and constantly demanding incremental improvements from all individuals and organizations within these created metrics. This really began to take off in the 1990s and is now widespread through numerous private and public institutions.

3) The simultaneous stripping bare of ordinary people to numerous systems of surveillance, measurement, disclosure, monitoring, maintenance (by both the state and private entities) while building more and more barriers to transparency protecting the powerful and their most important private and public activities. I think especially notable since the late 1990s and the rise of digital culture. A loss of workplace and civil protections for most people (especially through de-unionization) at the same time that the powerful have become increasingly untouchable and unaccountable for a variety of reasons.

4) Nearly unrestrained global mobility for capital coupled with strong restrictions on labor (both in terms of mobility and in terms of protection). Dramatically increased income inequality. Massive “shadow economies” involving illegal or unsanctioned but nevertheless highly structured movements of money, people, and commodities. Really became visible by the early 1990s.

A lot of the features in my association cloud match pretty well: metrics, surveillance, privatisation. Didn’t really pick up much from point 4. I think 2 is the one which interests me most. My read on the metric stuff is that there’s a genuinely useful tool here that really does work within its domain of application but is disastrous when applied widely to everything. The tool goes something like:

  • let go of a need for top-down control
  • fragment the system into lots of little bits, connected over an interface of numbers (money, performance metrics, whatever)
  • try to improve the system by hammering on the little bits in ways such that the numbers go in the direction you want. This could be through market forces, or through metrics-driven performance improvements.

If your problem is amenable to this kind of breakdown, I think it actually works pretty well. This is why I think ‘food got good’ is actually relevant and not a coincidence. It fits this playbook quite nicely:

  • It’s a known problem. People have been selling food for a long time and have some well-tested ideas about how to cook, prep, order supplies, etc. There’s innovation on top of that, but it’s not some esoteric new research field.
  • Each individual purchase (of a meal, cake, w/e) is small and low-value. So the domain is naturally fragmented into lots of tiny bits.
  • This also means that lots of people can afford to be customers, increasing the number of tiny bits
  • Fast feedback. People know whether they like a croissant after minutes, not years.
  • Relevant feedback. People just tell you whether they like your croissants, which is the thing you care about. You don’t need to go search for some convoluted proxy measure of whether they like your croissants.
  • Lowish barriers to entry. Not especially capital-intensive to start a cafe or market stall compared with most businesses.
  • Lowish regulations. There are rules for food safety, but it’s not like building planes or something.
  • No lock-in for customers. You can go to the donburi stall today and the pie and mash stall tomorrow.
  • All of this means that the interface layer of numbers can be an actual market, rather than some faked-up internal market of metrics to optimise. And it’s a pretty open market that most people can access in some form. People don’t go out and buy trains, but they do go out and buy sandwiches.

There’s another very important, less wonky factor that breaks you out of the dry break-it-into-numbers method I listed above. You ‘get to cheat’ by bringing in emotional energy that ‘comes along for free’. People actually like food! They start cafes because they want to, even when it’s a terrible business idea. They already intrinsically give a shit about the problem, and markets are a thin interface layer over the top rather than most of the thing. This isn’t going to carry over to, say, airport security or detergent manufacturing.

As you get further away from an idealised row of spherical burger vans, things get more complicated and ambiguous. Low cost airlines are a good example. These actually did a good job of fragmenting the domain into lots of bits that were lumped together by the older incumbents. And it’s worked pretty well, by bringing down prices to the point where far more people can afford to travel. (Of course there are also the climate change considerations. If you ignore those it seems like a very obvious Good Thing; once you include them it’s somewhat murkier, I suppose.)

The price you pay is that the experience gets subtly degraded at many points by the optimisation, and in aggregate these tend to produce a very unsubtle crappiness. For a start there’s the simple overhead of buying the fragmented bits separately. You have to click through many screens of a clunky web application and decide individually about whether you want food, whether you want to choose your own seat, whether you want priority queuing, etc. All the things you’d just have got as default on the old, expensive package deal. You also have to say no to the annoying ads trying to upsell you on various deals on hotels, car rentals and travel insurance.

Then there are all the ways the flight itself becomes crappier. It’s at a crap airport a long way from the city you want to get to, with crappy transport links. The flight is a cheap slot at some crappy time of the early morning. The plane is old and crappily fitted out. You’re having a crappy time lugging around the absolute maximum amount of hand luggage possible to avoid the extra hold luggage fee. (You’ve got pretty good at optimising numbers yourself.)

This is often still worth it, but can easily tip into just being plain Too Crappy. I’ve definitely over-optimised flight booking for cheapness and regretted it (normally when my alarm goes off at three in the morning).

Low cost airlines seem basically like a good idea, on balance. But then there are the true disasters, the domains that have none of the natural features that the neoliberal playbook works on. A good example is early-stage, exploratory academic research. I’ve spent too long on this post already. You can fill in the depressing details yourself.

Some rambling thoughts about visual imagery

[image: my drawing of the fish brooch, described below]

[Written as part of Notebook Blog Month.]

I’ve got some half-written drafts for topics on the original list which I want to finish soon, but for now I seem to be doing better by going off-list and rambling about whatever’s in my head. Today it’s visual imagery.

I’ve ended up reading a bunch of things vaguely connected with mnemonics in the last couple of weeks. I’m currently very bad at concentrating on books properly, but I’m still reading at a similar rate, so everything is in this weird quarter-read state. Anyway here’s the list of things I’ve started:

  • Moonwalking with Einstein by Joshua Foer. Pop book about learning to compete in memory championships. This is good and an easy read, so there is some chance I’ll actually finish it.
  • Orality and Literacy by Walter Ong. One of the references I followed up. About oral cultures in general but there is stuff on memorisation (e.g. repetitive passages in Homer being designed for easy memorisation when writing it down is not an option)
  • Brienne Yudkowsky’s posts on mnemonics
  • These two interesting posts by AllAmericanBreakfast on Less Wrong this week about experimenting with memory palaces to learn information for a chemistry exam.
     

Those last two posts are interesting to me because they’re written by someone in the very early stages of fiddling around with this stuff who doesn’t consider themself to naturally have a good visual imagination. I’d put myself in the same category, but probably worse. Actually I’m really confused about what ‘visual imagery’ even is. I have some sort of – stuff? – that has a sort of visual component, maybe mixed in with some spatial/proprioceptive/tactile stuff. Is that what people mean by ‘visual imagery’? I guess so? It’s very transitory and hard to pin down in my case, though, and I don’t feel like I make a lot of use out of it. The idea of using these crappy materials to make something elaborate like a memory palace sounds like a lot of work. But maybe it would work better if I spent more time on it.

The thing that jumped out of the first post for me was this bit:

I close my eyes and allow myself to picture nothing, or whatever random nonsense comes to mind. No attempt to control.

Then I invite the concept of a room into mind. I don’t picture it clearly. There’s a vague sense, though, of imagining a space of some kind. I can vaguely see fleeting shadowy walls. I don’t need to get everything crystal clear, though.

This sounded a lot more fun and approachable to me than crafting a specific memory palace to memorise specific things. I didn’t even get to the point of ‘inviting the concept of a room in’, just allowed any old stuff to come up, and that worked ok for me. I’m not sure how much of this ‘imagery’ was particularly visual, but I did find lots of detailed things floating into my head. It seems to work better if I keep a light touch and only allow some very gentle curiosity-based steering of the scene.

Here’s the one I found really surprising and cool. I was imagining an intricately carved little jade tortoise for some reason, and put some mild curiosity into what its eyes were made of. And I discovered that they were tiny yellow plastic fake gemstones that were weirdly familiar. So I asked where I recognised them from (this was quite heavy-handed questioning that dragged me out of the imagery). And it turns out that they were from a broken fish brooch I had as a kid. I prised all the fake stones off with a knife at some point to use for some project I don’t remember.

I haven’t thought about that brooch in, what, 20 years? But I remember an impressive amount of detail about it! I’ve tried to draw it above. Some details like the fins are a best guess, but the blue, green and yellow stones in diagonal stripes are definitely right. It’s interesting that this memory is still sitting there and can be brought up by the right prompt.

I think I’ll play with this exercise a bit more and see what other rubbish I can dredge up.

20 Fundamentals

I was inspired by John Nerst’s recent post to make a list of my own fundamental background assumptions. What I ended up producing was a bit of an odd mixed bag of disparate stuff. Some are something like factual beliefs; others are more like underlying emotional attitudes and dispositions to act in various ways.

I’m not trying to ‘hit bedrock’ in any sense, I realise that’s not a sensible goal. I’m just trying to fish out a few things that are fundamental enough to cause obvious differences in background with other people. John Nerst put it well on Twitter:

It’s not true that beliefs are derived from fundamental axioms, but nor is it true that they’re a bean bag where nothing is downstream from everything else.

I’ve mainly gone for assumptions where I tend to differ with the people I hang around with online and in person, which skews heavily towards the physics/maths/programming crowd. This means there’s a pretty strong ‘narcissism of small differences’ effect going on here, and if I actually had to spend a lot of time with normal people I’d probably run screaming back to STEM nerd land pretty fast and stop caring about these minor nitpicks.

Also I only came up with twenty, not thirty, because I am lazy.


  1. I’m really resistant to having to ‘actually think about things’, in the sense of applying any sort of mental effort that feels temporarily unpleasant. The more I introspect as I go about problem solving, the more I notice this. For example, I was mucking around in Inkscape recently and wanted to check that a square was 16 units long, and I caught myself producing the following image:

    [image: the two 4×4 squares arranged to show the 16-unit length]

    Apparently counting to 16 was an unacceptable level of cognitive strain, so to avoid it I made the two 4 by 4 squares (small enough to immediately see their size) and then arranged them in a pattern that made the length of the big square obvious. This was slower but didn’t feel like work at any point. No thinking required!

  2. This must have a whole bunch of downstream effects, but an obvious one is a weakness for ‘intuitive’, flash-of-insight-based demonstrations, mixed with a corresponding laziness about actually doing the work to get them. (Slowly improving this.)

  3. I picked up some Bad Ideas From Dead Germans at an impressionable age (mostly from Kant). I think this was mostly a good thing, as it saved me from some Bad Ideas From Dead Positivists that physics people often succumb to.

  4. I didn’t read much phenomenology as such, but there’s some mood in the spirit of this Whitehead quote that always came naturally to me:

    For natural philosophy everything perceived is in nature. We may not pick and choose. For us the red glow of the sunset should be as much part of nature as are the molecules and electric waves by which men of science would explain the phenomenon.

    By this I mean some kind of vague understanding that we need to think about perceptual questions as well as ‘physics stuff’. Lots of hours as an undergrad on Wikipedia spent reading about human colour perception and lifeworlds and mantis shrimp eyes and so on.

  5. One weird place where this came out: in my first year of university maths I had those intro analysis classes where you prove a lot of boring facts about open sets and closed sets. I just got frustrated, because it seemed to be taught in the same ‘here are some facts about the world’ style that, say, classical mechanics was taught in, but I never managed to convince myself that the difference related to something ‘out in the world’ rather than some deficiency of our cognitive apparatus. ‘I’m sure this would make a good course in the psychology department, but why do I have to learn it?’

    This isn’t just Bad Ideas From Dead Germans, because I had it before I read Kant.

  6. Same thing for the interminable arguments in physics about whether reality is ‘really’ continuous or discrete at a fundamental level. I still don’t see the value in putting that distinction out in the physical world – surely that’s some sort of weird cognitive bug, right?

  7. I think after hashing this out for a while people have settled on ‘decoupling’ vs ‘contextualising’ as the two labels. Anyway it’s probably apparent that I have more time for the contextualising side than a lot of STEM people.

  8. Outside of dead Germans, my biggest unusual pervasive influence is probably the New Critics: Eliot, Empson and I.A. Richards especially, and a bit of Leavis. They occupy an area of intellectual territory that mostly seems to be empty now (that or I don’t know where to find it). They’re strong contextualisers with a focus on what they would call ‘developing a refined sensibility’, by deepening sensitivity to tiny subtle nuances in expression. But at the same time, they’re operating in a pre-pomo world with a fairly stable objective ladder of ‘good’ and ‘bad’ art. (Eliot’s version of this is one of my favourite ever wrong ideas, where poetic images map to specific internal emotional states which are consistent between people, creating some sort of objective shared world.)

    This leads to a lot of snottiness and narrow focus on a defined canon of ‘great authors’ and ‘minor authors’. But also the belief in reliable intersubjective understanding gives them the confidence for detailed close reading and really carefully picking apart what works and what doesn’t, and the time they’ve spent developing their ear for fine nuance gives them the ability to actually do this.

    The continuation of this is probably somewhere on the other side of the ‘fake pomo blocks path’ wall in David Chapman’s diagram, but I haven’t got there yet, and I really feel like I’m missing something important.

  9. I don’t understand what the appeal of competitive games is supposed to be. Like basically all of them – sports, video games, board games, whatever. Not sure exactly what effects this has on the rest of my thinking, but this seems to be a pretty fundamental normal-human thing that I’m missing, so it must have plenty.

  10. I always get interested in specific examples first, and then work outwards to theory.

  11. My most characteristic type of confusion is not understanding how the thing I’m supposed to be learning about ‘grounds out’ in any sort of experience. ‘That’s a nice chain of symbols you’ve written out there. What does it relate to in the world again?’

  12. I have never in my life expected moral philosophy to have some formal foundation and after a lot of trying I still don’t understand why this is appealing to other people. Humans are an evolved mess and I don’t see why you’d expect a clean abstract framework to ever drop out from that.

  13. Philosophy of mathematics is another subject where I mostly just think ‘um, you what?’ when I try to read it. In fact it has exactly the same subjective flavour to me as moral philosophy. Platonism feels bad the same way virtue ethics feels bad. Formalism feels bad the same way deontology feels bad. Logicism feels bad the same way consequentialism feels bad. (Is this just me?)

  14. I’ve never made any sense out of the idea of an objective flow of time and have thought in terms of a ‘block universe’ picture for as long as I’ve bothered to think about it.

  15. If I don’t much like any of the options available for a given open philosophical or scientific question, I tend to just mentally tag it with ‘none of the above, can I have something better please’. I don’t have the consistency obsession thing where you decide to bite one unappealing bullet or another from the existing options, so that at least you have an opinion.

  16. This probably comes out of my deeper conviction that I’m missing a whole lot of important and fundamental ideas on the level of calculus and evolution, simply on account of nobody having thought of them yet. My default orientation seems to be ‘we don’t know anything about anything’ rather than ‘we’re mostly there but missing a few of the pieces’. This produces a kind of cheerful crackpot optimism, as there is so much to learn.

  17. This list is noticeably lacking in any real opinions on politics and ethics and society and other people stuff. I just don’t have many opinions and don’t like thinking about people stuff very much. That probably doesn’t say anything good about me, but there we are.

  18. I’m also really weak on economics and finance. I especially don’t know how to do that economist/game theoretic thing where you think in terms of what incentives people have. (Maybe this is one place where ‘I don’t understand competitive games’ comes in.)

  19. I’m OK with vagueness. I’m happy to make a vague sloppy statement that should at least cover the target, and maybe try and sharpen it later. I prefer this to the ‘strong opinions, weakly held’ alternative where you chuck a load of precise-but-wrong statements at the target and keep missing. A lot of people will only play this second game, and dismiss the vague-sloppy-statement one as ‘just being bad at thinking’, and I get frustrated.

  20. Not happy about this one, but over time this frustration led me to seriously go off styles of writing that put a strong emphasis on rigour and precision, especially the distinctive dialects you find in pure maths and analytic philosophy. I remember when I was 18 or so and encountered both of these for the first time I was fascinated, because I’d never seen anyone write so clearly before. Later on I got sick of the way that this style tips so easily into pedantry over contextless trivialities (from my perspective anyway). It actually has a lot of good points, though, and it would be nice to be able to appreciate it again.

Imagination in a terrible strait-jacket

I enjoyed alkjash’s recent Babble and Prune posts on Less Wrong, and it reminded me of a favourite quote of mine, Feynman’s description of science in The Character of Physical Law:

What we need is imagination, but imagination in a terrible strait-jacket. We have to find a new view of the world that has to agree with everything that is known, but disagree in its predictions somewhere, otherwise it is not interesting.

Imagination here corresponds quite well to Babbling, and the strait-jacket is the Pruning you do afterwards to see if it actually makes any sense.

For my tastes at least, early Less Wrong was generally too focussed on building out the strait-jacket to remember to put the imagination in it. An unfair stereotype would be something like this:

‘I’ve been working on being better calibrated, and I put error bars on all my time estimates to take the planning fallacy into account, and I’ve rearranged my desk more logically, and I’ve developed a really good system to keep track of all the tasks I do and rank them in terms of priority… hang on, why haven’t I had any good ideas??’

I’m poking fun here, but I really shouldn’t, because I have the opposite problem. I tend to go wrong in this sort of way:

‘I’ve cleared out my schedule so I can Think Important Thoughts, and I’ve got that vague idea about that toy model that it would be good to flesh out some time, and I can sort of see how Topic X and Topic Y might be connected if you kind of squint the right way, and it might be worth developing that a bit further, but like I wouldn’t want to force anything, Inspiration Is Mysterious And Shouldn’t Be Rushed… hang on, why have I been reading crap on the internet for the last five days??’

I think this trap is more common among noob writers and artists than noob scientists and programmers, but I managed to fall into it anyway despite studying maths and physics. (I’ve always relied heavily on intuition in both, and that takes you in a very different direction to someone who leans more on formal reasoning.) I’m quite a late convert to systems and planning and organisation, and now that I finally get the point, I’m fascinated by them and find them extremely useful.

One particular way I tend to fail is that my over-reliance on intuition leads me to think too highly of any old random thoughts that come into my head. And I’ve now come to the (in retrospect obvious) conclusion that a lot of them are transitory and really just plain stupid, and not worth listening to.

As a simple example, I’ve trained myself to get up straight away when the alarm goes off, and every morning my brain fabricates a bullshit explanation for why today is special and actually I can stay in bed, and it’s quite compelling for half a minute or so. I’ve got things set up so I can ignore it and keep doing things, though, and pretty quickly it just goes away and I never wish that I’d listened to it.

On the other hand, I wouldn’t want to tighten things up so much that I completely stopped having the random stream of bullshit thoughts, because that’s where the good ideas bubble up from too. For now I’m going with the following rule of thumb for resolving the tension:

Thoughts can be herded and corralled by systems, and fed and dammed and diverted by them, but don’t take well to being manipulated individually by systems.

So when I get up, for example, I don’t have a system in place where I try to directly engage with the bullshit explanation du jour and come up with clever countertheories for why I actually shouldn’t go back to bed. I just follow a series of habitual getting-up steps, and then after a few minutes my thoughts are diverted to a more useful track, and then I get on with my day.

A more interesting example is the common writers’ strategy of having a set routine (there’s a whole website devoted to these). Maybe they work at the same time each day, or always work in the same place. This is a system, but it’s not a system that dictates the actual content of the writing directly. You just sit and write, and sometimes it’s good, and sometimes it’s awful, and on rare occasions it’s genuinely inspired, and if you keep plugging on, those rare occasions hopefully become more frequent. I do something similar with making time to learn physics now and it works nicely.

This post is also a small application of the rule itself! I was on an internet diet for a couple of months, and was expecting to generate a few blog post drafts in that time, and was surprised that basically nothing came out in the absence of my usual internet immersion. I thought writing had finally become a pretty freestanding habit for me, but actually it’s still more fragile and tied to a social context than I expected. So this is a deliberate attempt to get the writing flywheel spun up again with something short and straightforward.

no better state than this

I’m writing a Long Post, but it’s a slog. In the meantime here are some more trivialities.

  1. I realised that the three images in my glaucoma machine post could be condensed down to the following: “Glaucoma, the ox responded / Gaily, to the hand expert with yoke and plough.”

    This is really stupid and completely impenetrable without context, and I love it.

  2. I’ve been using the fog-clearing metaphor for the process of resolving ambiguity. It’s a good one, and everyone else uses it.

    It’s probably not surprising that we reach for a visual metaphor, as sight is so important to us. It’s common to describe improved understanding in terms of seeing further. Galileo’s scientific society was named the Academy of Lynxes because the lynx was thought to have unparalleled eyesight, though unfortunately that finding seems not to have replicated. (That was the high point of naming scientific institutions, and after that we just got boring stuff like ‘The Royal Society’.)

    I’m more attached to smell as a metaphor, though. We do use this one pretty often, talking about having a ‘good nose’ for a problem or ‘sniffing out’ the answer. Or even more commonly when we talk about good or bad taste, given that taste is basically smell.

    I’m probably biased because I have atrocious eyesight, and a good sense of smell. I’d rather join an Academy of Trufflehogs. I do think smell fits really well, though, for several reasons:

    • It’s unmapped. Visual images map into a neat three-dimensional field; smell is a mess.
    • The vocabulary for smells is bad. There’s a lot more we can detect than we know how to articulate.
    • It’s deeply integrated into the old brain, strongly plugged into all sorts of odd emotions.
    • It’s real nonetheless. You can navigate through this mess anyway! Trufflehogs actually find truffles.

 
  3. An even better metaphor, though, is this beautiful one I saw last week from M. John Harrison on Twitter: ‘You became a detector, but you don’t know what it detects’.

    This mental sea change is one of my weird repetitive fascinations that I keep going on about, here and on the old tumblr. Seymour Papert’s ‘falling in love with the gears’, or the ‘positive affective tone’ that started attaching itself to boring geology captions on Wikipedia. The long process of becoming a sensitive antenna, and the longer process of finding out what it’s an antenna for. There is so absolutely NO BETTER STATE THAN THIS.

Three replies

These are responses to other people’s posts. They’re all a bit short for an individual post but a bit long/tangential/self-absorbed for a reply, so I batched them together here.

1. Easy Mode/Hard Mode inversions

I spend a lot of time being kind of confused and nitpicky about the rationalist community, but there’s one thing they do well that I really really value, which is having a clear understanding of the distinction between doing the thing and doing the things you need to do to look like you’re doing the thing.

Yudkowsky was always clear on this (I’m thinking about the bit on cutting the enemy), and people in the community get it.

I appreciate this a lot, having done a PhD. In academia a lot of people seem to have spent so long chasing after the things you need to do to look like you’re doing the thing that they’ve forgotten how to do the thing, or even sometimes that there’s a thing there to do. In parts, the cargo cults have taken over completely.

Zvi Mowshowitz gives doing the thing and doing the things you need to do to look like you’re doing the thing the less unwieldy names of Hard Mode and Easy Mode (at least, I think that’s the key component of what he’s pointing at).

It got me thinking about cases where Easy Mode and Hard Mode could invert completely. In academia, Easy Mode involves keeping up with the state of the art in a rapidly moving narrow subfield, enough to get out a decent number of papers on a popular topic in highly ranked journals during your two year postdoc. You need to make sure you’re in a good position to switch to the new trendy subfield if this one appears to run out of steam, though, because you need to make sure you get that next two year postdoc on the other side of the world, so that …

… wait a minute. Something’s gone wrong here. That sounds really hard!

Hard Mode is pretty ill-defined right now, but I’m not convinced that it necessarily has to be any harder than Easy Mode. I have a really shitty plan and it’s still not obviously worse than the Easy Mode plan.

If there was a risk of a horrible, life-ruining failure in Hard Mode, I’d understand, but there isn’t. The floor, for a STEM PhD student with basic programming skills in a developed economy, is that you get a boring but reasonably paid middle class job and think about what you’re interested in in your spare time. I’m walking along this floor right now and it’s really not bad here. It’s also exactly the same floor you end up on if you fail out of Easy Mode, except you have a few extra years to get acquainted with it.

If there is a genuine inversion here, then probably it’s unstable to perturbations. I’m happy to join in with the kicking.


2. ~The Great Conversation~

Sarah Constantin had the following to say in a recent post:

… John’s motivation for disagreeing with my post was that he didn’t think I should be devaluing the intellectual side of the “rationality community”. My post divided projects into community-building (mostly things like socializing and mutual aid) versus outward-facing (business, research, activism, etc.); John thought I was neglecting the importance of a community of people who support and take an interest in intellectual inquiry.

I agreed with him on that point — intellectual activity is important to me — but doubted that we had any intellectual community worth preserving. I was skeptical that rationalist-led intellectual projects were making much progress, so I thought the reasonable thing to do was to start fresh.

😮

‘Doubted that we had any intellectual community worth preserving’ is strong stuff! Apparently today is Say Nice Things About The Rationalists Day for me, because I really wanted to argue with it a bit.

I may be completely missing the point on what the ‘rationality community’ is supposed to be in this argument. I’m only arguing for the public-facing, internet community here, because that’s all I really know about. I have no idea about the in-person Berkeley one. Even if I have missed the point, though, I think the following makes sense anyway.

Most subcultures and communities of practice have a bunch of questions people get really exercised about and like to debate. I often internally think of this as ~The Great Conversation~, with satiric tumblr punctuation to indicate it’s not actually always all that great.

I’ve only been in this part of the internet for a few years. Before that I lurked on science blogs (which have some overlap). On science blogs ~The Great Conversation~ includes the replication crisis, alternatives to the current academic publishing system, endless identical complaints about the postdoc system (see part 1 of this post), and ranting about pseudoscience and dodgy alternative therapies.

Sometimes ~The Great Conversation~ involves the big names in the field, but most of the time it’s basically whoever turns up. People who enjoy writing, people who enjoy the sound of their own voice, people with weird new ideas they’re excited about, people on a moral quest to fix things, grumpy postdocs with an axe to grind, bored people, depressed people, lonely people, the usual people on the internet.

If you go to the department common room instead, the academics probably aren’t talking about the things on the science blogs. They’re talking about their current research, or the weird gossip from that other research group, or what the university administration has gone and done this time, or how shit the new coffee machine is. ~The Great Conversation~ is mostly happening elsewhere.

This means that the weirdos on the internet have a surprisingly large amount of control over the big structural questions in the field. This often extends to having control over what those questions are in the first place.

The rationalist community seems to be trying to have ~The Great Conversation~ for as much of human intellectual enquiry as it can manage (or at least as much as it takes seriously). People discuss the replication crisis, but they also discuss theories of cognition, and moral philosophy, and polarisation in politics, and the future of work, and whether Bayesian methods explain absolutely everything in the world or just some things.

The results are pretty mixed, but is there any reasonably sized group out there doing noticeably better, out on the public internet where anyone can join the conversation? If there is I’d love to know about it.

This is a pretty influential position, as lots of interesting people with wide-ranging interests are likely to find it and get sucked in, even if they’re mostly there to argue at the start. Scott Aaronson is one good example. He’s been talking about these funny Singularity people for years, but over time he’s got more and more involved in the community itself.

The rationalist community is some sort of a beacon for something, and to me that ought to count as ‘an intellectual community worth preserving’.


3. The new New Criticism

I saw this on nostalgebraist’s tumblr:

More importantly, the author approaches the game like an art critic in perhaps the best possible sense of that phrase (and with M:TG, there are a lot of bad senses). He treats card design as an art form unto itself (which it clearly is!), and talks about it like a poetic form, with various approaches to creativity within constraints, a historical trajectory with several periods, later work exhibiting a self-consciousness about that history (in Time Spiral, and very differently in Magic 2010), etc.

That is, he’s taking a relatively formal, “internal,” New Criticism-like approach, rather than a historicist approach (relate the work to contemporary extra-artistic phenomena) or an esoteric/Freudian/high-Theory-like approach (take a few elements of the work, link them to some complex of big ideas, uncover an iceberg of ostensibly hidden structure). I don’t think the former approach is strictly better than the latter, but it’s always refreshing because so much existing games criticism takes the latter two approaches.

I know absolutely nothing about M:TG beyond what the acronym stands for, but reading this I realised I’m also really craving sources of this sort of criticism. I recently read Steve Yegge’s giant review of the endgame of Borderlands, a first-person shooter that I would personally hate and whose name I immediately forgot. Despite this I was completely transfixed by the review, temporarily fascinated by tiny details of gun design, enjoying the detailed explorations of exactly what made the mechanics of the game work so well. This is exactly what I’m looking for! I’d rather have it for fiction or music than games, but I’ll take what I can get.

I kind of imprinted on the New Critics as my ideal of what criticism should be, and although I can see the limitations now (a snotty obsession with a narrow Western canon, tone-deafness to wider societal influences) I still really enjoy the ‘internal’ style. But it’s much easier now to find situated criticism that wants to relate a piece of art to, say, Marxism or the current political climate. And even easier to find lists of all the ways that that piece of art is problematic and you’re problematic for liking it.

Cynically I’d say that this is because the internal style is harder to do. Works of art are good or bad for vivid and specific internal reasons that require a lot of sensitivity to pinpoint, whereas they’re generally problematic for the same handful of reasons as everything else. But probably it’s mostly just that the internal style is out of fashion. I’d really enjoy a new New Criticism without the snotty high culture focus.

Two cultures: tacit and explicit

[Epistemic status: no citations and mostly pulled straight out of my arse, but I think there’s something real here]

While I was away it looks like there was some kind of Two Cultures spat on rationalist-adjacent tumblr.

I find most STEM-vs-the-humanities fight club stuff sort of depressing, because the arguments from the humanities side seem to me to be too weak. (This doesn’t necessarily apply this time – I haven’t tried to catch up on everyone’s posts.) Either people argue that the humanities teach exactly the same skills in systematic thinking that the sciences do, or else you get the really dire ‘the arts teach you to be a real human being‘ arguments.

I think there’s another distinction that often gets lost. There are two types of understanding I’d like to distinguish, that I’m going to call explicit and tacit understanding in this post. I don’t know if those are the best words, so let me know if you think I should be calling them something different. Both are rigorous and reliable paths to new knowledge, and both are important in both the arts and sciences. I would argue, however, that explicit understanding is generally more important in science, and tacit understanding is more important in the arts.

(I’m interested in this because my own weirdo learning style could be described as something like ‘doing maths and physics, but navigating by tacit understanding’. I’ve been saying for years that ‘I’m trying to do maths like an arts student’, and I’m just starting to understand what I mean by that. Also I feel like it’s been a bad, well, century for tacit understanding, and I want to defend it where I can.)

Anyway, let me explain what I mean by this. Explicit understanding is the kind you come to by following formal logical rules. Scott Alexander gives an example of ‘people who do computer analyses of Shakespeare texts to see if they contain the word “the” more often than other Shakespeare texts with enough statistical significance to conclude that maybe they were written by different people’. This is explicit understanding as applied to the humanities. It produces interesting results there, just as it does in science. Also, if this was all people did in the humanities they would be horribly impoverished, whereas science might (debatably) just about survive.
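To make the ‘count the thes’ example concrete, here’s a minimal sketch of what that kind of explicit analysis looks like. This is my own illustration, not anything from Scott’s post or from the actual authorship studies: the function names are made up, and I’m assuming a simple two-proportion z-test as a stand-in for whatever statistics the real work uses.

```python
# Toy version of the 'does this text use "the" unusually often?' analysis.
# Purely illustrative: real stylometry tracks many function words and uses
# more careful models than a single z-test.
import math
from collections import Counter

def the_counts(text: str) -> tuple[int, int]:
    """Return (occurrences of 'the', total word count) for a text."""
    words = text.lower().split()
    return Counter(words)["the"], len(words)

def z_score(text_a: str, text_b: str) -> float:
    """Two-proportion z-test on how often each text uses 'the'."""
    k1, n1 = the_counts(text_a)
    k2, n2 = the_counts(text_b)
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se  # |z| > 1.96 is roughly 'significant at the 5% level'
```

The point isn’t the particular statistics; it’s that every step is an explicit rule you could hand to someone else, which is exactly what the tacit skills below are not.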

Tacit understanding is more like the kind you ‘develop a nose for’, or learn to ‘just see’. That’s vague, so here are some examples:

  • Taking a piece of anonymised writing and trying to guess the date and author. This is a really rigorous and difficult thing my dad had to do in university (before pomo trashed the curriculum, [insert rant here]). It requires very wide-ranging historical reading, obviously, but also on-the-fly sensitivity to delicate tonal differences. You’re not combing through the passage saying ‘this specific sentence construction indicates that this passage is definitely from the late seventeenth century’. There might be some formal rules like this that you can extract, but it will take ages, and while you’re doing the thing you’re more relying on gestalt feelings of ‘this just looks like Dryden’. You don’t especially need to formalise it, because you can get it right anyway.

  • Parody. This is basically the same thing, except this time it’s you generating the writing to fit the author. Scott is excellent at this himself! Freddie DeBoer uses this technique to teach prose style, which sounds like a great way to develop a better ear for it.

  • Translation. I can’t say too much about this one, because I’ve never learned a foreign language :(. But you have the problem of matching the meaning of the source, except that every word has complex harmonic overtones of different meanings and associations, and you have to try and do justice to those as best you can. Again, it’s a very skilled task that you can absolutely do a better or worse job at, but not a task that’s achieved purely through rule following.

I wish these kinds of tacit skills were appreciated more. If the only sort of understanding you value is explicit understanding, then the arts are going to look bad by comparison. This is not the fault of the arts!