Uncle Matthew: ‘You don’t have to go to some awful middle-class establishment to know who George III was. Anyway, who was he, Fanny?’
Alas, I always failed to shine on these occasions. My wits scattered to the four winds by my terror of Uncle Matthew, I said, scarlet in my face:
‘He was king. He went mad.’
‘Most original, full of information,’ said Uncle Matthew, sarcastically. ‘Well worth losing every ounce of feminine charm to find that out, I must say. Legs like gateposts from playing hockey, and the worst seat on a horse of any woman I ever knew. Give a horse a sore back as soon as look at it. Linda, you’re uneducated, thank God, what have you got to say about George III?’
‘Well,’ said Linda, her mouth full, ‘he was the son of poor Fred and the father of Beau Brummell’s fat friend, and he was one of those vacillators you know. “I am his Highness’s dog at Kew, pray tell me, sir, whose dog are you?”’ she added, inconsequently. ‘Oh, how sweet!’
Uncle Matthew shot a look of cruel triumph at Aunt Emily.
Nancy Mitford, The Pursuit of Love
I left this really late this month. I was determined to finish a couple of tasks off by the end of February – writing an R script to do some analytical chemistry thing for a friend, and finishing the Eureka Factor book review I’ve been going on about for a while. I managed to get them both done, and now I can finally think about ALL THE OTHER THINGS again. But anyway, by the time I’d finished those I didn’t leave much time to write this. So I’ll just do a low effort post with a couple of Kindle highlights from The Eureka Factor, plus the bits I took out of the review because they were dragging me too far off on a tangent.
I’ve also got a short follow-up from David Chapman about two quotes I posted in the December newsletter, which I meant to ask if I could post last month but forgot. That will have to do for February.
Eureka Factor bits and pieces
OK, here’s the leftovers from the review. Also, in the process of writing the review I ended up making quite detailed chapter summaries and a big pile of Kindle highlights – if anyone wants them, let me know!
‘He was king. He went mad.’
I really like the Mitford quote at the top and considered including it, as it’s such a nice example of weak associations versus saying the obvious thing. Fanny lists off the most salient features of George III: he was king and went mad. (This also demonstrates another feature of insights that’s discussed in The Eureka Factor – they rarely appear when your wits are scattered to the four winds by terror. Fear tends to narrow the range of possibilities you consider.)
Linda casually throws out a couple of clever tangential historical allusions and ends with a Pope couplet, and generally sounds a whole lot more impressive. This fits with the way she’s learned history in the first place, by haphazardly collecting bits of knowledge, rather than the linear approach of a schoolbook:
The Radlett children read enormously by fits and starts in the library at Alconleigh, a good representative nineteenth-century library, which had been made by their grandfather, a most cultivated man.
But Mitford also talks about the dangers of this kind of undisciplined associational thinking:
But, while they picked up a great deal of heterogeneous information, and gilded it with their own originality, while they bridged gulfs of ignorance with their charm and high spirits, they never acquired any habit of concentration, they were incapable of solid hard work. One result, in later life, was that they could not stand boredom. Storms and difficulties left them unmoved, but day after day of ordinary existence produced an unbearable torture of ennui, because they completely lacked any form of mental discipline.
In the review I talked about how testing insight is hard: ‘a good reframing like the one in the Nine Dot Problem tends to be a bit of a one-off: once you’ve had the idea … it applies trivially to all similar puzzles, and not at all to other types of puzzle.’
The current version of the Wikipedia article is amusingly snotty about this feature of insight problems:
The pool of insight problems currently employed by psychologists is small and tepid, and due to its heterogeneity and often high difficulty level, is not conducive of validity or reliability.
One thing it doesn’t mention is Bongard problems. As far as I can see these fit the bill quite nicely. The reframing step is unique for each puzzle, but it’s slotted into a standardised format of twelve images, in a way that makes it relatively easy to generate a large number of similar puzzles.
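For what it’s worth, the format is easy to mock up in code. Here’s a minimal sketch of generating a Bongard-style problem: six panels that obey a hidden rule and six that don’t. Everything here is my own invention for illustration (real Bongard problems are images; the panels below are reduced to lists of shape sizes):

```python
import random

def make_panel(satisfies_rule, rule):
    """Generate a random panel (a list of shape sizes) that does or
    doesn't satisfy the hidden rule."""
    while True:
        panel = [random.randint(1, 9) for _ in range(random.randint(2, 5))]
        if rule(panel) == satisfies_rule:
            return panel

def make_bongard_problem(rule):
    """Six left panels obeying the rule, six right panels violating it."""
    left = [make_panel(True, rule) for _ in range(6)]
    right = [make_panel(False, rule) for _ in range(6)]
    return left, right

# Example hidden rule: "all shapes in the panel are the same size".
rule = lambda panel: len(set(panel)) == 1

left, right = make_bongard_problem(rule)
```

The solver’s job is then the inverse of `make_bongard_problem`: recover `rule` from the twelve panels, which is where the unique reframing step comes in.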
Tip of the tongue
Like insights, but not the same:
The tip-of-the-tongue (TOT) phenomenon is another example of thought from the fringe. It sometimes occurs while you’re trying to remember something, usually a word. As with intuition, you can’t remember the specific word, but you know it’s there, because it feels like it’s on the tip of your tongue. Often, you might even know what letter the word starts with and how many syllables it has. If it’s the name of a person, you may even be able to bring to mind an image of the person’s face. If it’s the name of an actor that you’re trying to recall, you might even remember what movies she appeared in or some tabloid gossip about her. But her name remains elusive until it eventually pops into awareness. When this happens, like a sneeze, it relieves the discomfort.
Subjectively, the TOT phenomenon seems a lot like the kind of intuition that sometimes precedes insight, but there are differences. TOT states tend to become more common as a person ages, likely reflecting older people’s increasing difficulty in remembering words. However, intuition doesn’t seem to deteriorate with age. Another difference is that the TOT phenomenon is a difficulty in remembering a word that’s otherwise well known to you. In contrast, intuition is the anticipation of a creative act—the emergent insight reflects a novel idea or perspective rather than a familiar one. At present, it isn’t known whether intuition has any underlying similarity to the TOT phenomenon other than the fact that both seem to be tense anticipatory states preceding the sudden emergence of information into awareness. Research on the TOT phenomenon is reviewed in A. S. Brown, “A Review of the Tip-of-the-Tongue Experience,” Psychological Bulletin 109 (1991): 204–23.
There’s a nice description of an experiment on guessing implicit rules:
In the simplest version of his experiment, each letter group was generated by a computer program according to one of two sets of rules, which we’ll call “rule set A” and “rule set B.” These rules govern which specific letters in a group can follow other specific letters. For example, in one rule set, a “T” always follows an “M” or a “P” must always follow an “S.” For each letter group, a participant would have to guess whether that group was a member of family A or family B—that is, whether it had been generated by rule set A or rule set B. After a while, Reber’s participants got better and better at this. By itself, this isn’t remarkable. People tend to improve at most things if they keep working at them. What was surprising was that they weren’t able to explain the rules that described each of these families. They knew that they had somehow learned the rules. They just couldn’t bring them to awareness. Furthermore, when they were instructed to consciously try to figure out the rules while making their judgments, they did worse than when they used a more passive and intuitive guessing strategy. A deliberate, analytical mind-set squashed their intuition.
This is in this book by Reber. (According to the footnotes. I haven’t read it.)
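The setup is easy to sketch. Here’s a toy version in Python — the transition tables below are made up for illustration, not Reber’s actual grammars, but they show the shape of the thing: each rule set says which letters may follow which, and a “letter group” is a walk through the table:

```python
import random

# Two made-up rule sets: each maps a letter to the letters allowed after it.
RULE_SET_A = {"START": "MS", "M": "T", "T": "MS", "S": "P", "P": "TS"}
RULE_SET_B = {"START": "TP", "T": "S", "S": "MT", "M": "P", "P": "M"}

def generate(rules, length=6):
    """Generate a letter group by repeatedly picking an allowed next letter."""
    out, state = [], "START"
    for _ in range(length):
        state = random.choice(rules[state])
        out.append(state)
    return "".join(out)

def obeys(rules, group):
    """Check whether every adjacent pair in the group is allowed by the rules."""
    prev = "START"
    for letter in group:
        if letter not in rules[prev]:
            return False
        prev = letter
    return True
```

Reber’s participants were effectively learning to compute `obeys` for each family without ever being able to state the transition tables.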
Two quotes again
In the December newsletter I included the following two quotes, which I found weirdly similar.
One from Empson’s Seven Types of Ambiguity:
As a rule, all that you recognise as in your mind is the one final association of meanings which seems sufficiently rewarding to be the answer—‘now I have understood that’; it is only at intervals that the strangeness of the process can be observed. I remember once clearly seeing a word so as to understand it, and, at the same time, hearing myself imagine that I had read its opposite. In the same way, there is a preliminary stage in reading poetry when the grammar is still being settled, and the words have not all been given their due weight; you have a broad impression of what it is all about, but there are various incidental impressions wandering about in your mind; these may not be part of the final meaning arrived at by the judgment, but tend to be fixed in it as part of its colour.
And one from Agre’s Computation and Human Experience:
… a change to the input of a combinational logic circuit will cause the circuit to enter a logically inconsistent state until it finally manages to drive all of its outputs to their correct values. A small window of inconsistency opens up between the moment an input changes and the moment the last output assumes its correct value. While this window is open, it is important that nobody trust the output values, which are unreliable as long as the circuit is still settling down.
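Agre’s window of inconsistency is easy to reproduce in a toy simulation. The circuit below is my own example, not Agre’s: with unit gate delays, `out = (a AND b) OR ((NOT a) AND c)` should stay at 1 whenever `b = c = 1`, whatever `a` is — but the inverter path is one gate longer, so flipping `a` briefly drives `out` to the wrong value before the circuit settles:

```python
def tick(s):
    """Advance every gate one delay step, each reading the *previous*
    wire values (this is what creates the settling window)."""
    return {
        "a": s["a"], "b": s["b"], "c": s["c"],
        "n": int(not s["a"]),    # inverter
        "p": s["a"] & s["b"],    # AND gate
        "q": s["n"] & s["c"],    # AND gate, behind the inverter
        "out": s["p"] | s["q"],  # OR gate
    }

# Settled state with a = 1, b = c = 1 (so out = 1), then flip a to 0.
state = {"a": 1, "b": 1, "c": 1, "n": 0, "p": 1, "q": 0, "out": 1}
state["a"] = 0

trace = []
for _ in range(4):
    state = tick(state)
    trace.append(state["out"])

print(trace)  # [1, 0, 1, 1] — the transient 0 is the window of inconsistency
```

Nothing is wrong with any individual gate; the 0 in the trace is purely an artifact of the outputs settling at different times, which is exactly why you can’t trust the outputs until the window closes.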
I got an interesting reply from David Chapman:
I’m tempted to endorse the possibility that this is not coincidental. One of the motivating insights for Phil’s and my work was that neurons are pathetically slow, so thinking must make massive use of parallelism. You accomplish reading fast enough that the depth of the circuit from your eyes to the sense you make of a sentence can only be a few dozen neurons. That strongly suggests that innumerable interpretations are considered in parallel. In the case of a straightforward sentence, nearly all are rejected before the process reaches consciousness. In poetry, syntactic difficulty means many linger.
This same insight motivated 1980s “connectionism,” which evolved into “deep learning.” An infuriating but entertaining aspect of deep learning is that it often runs to hundreds of layers, which is completely implausible if you take DL “neurons” as models of neurons.
It looks like the through-time for neurons is on the order of 10ms. And the time required for object recognition is on the order of 150ms. So maybe if we’re conservative it could go 20 neurons deep. Definitely not the hundreds used in some current “neural networks.”
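The arithmetic here is simple enough to spell out, taking Chapman’s order-of-magnitude figures at face value:

```python
# Back-of-envelope depth bound from the quoted figures (order-of-magnitude
# estimates, not measurements).
neuron_through_time_ms = 10   # time for one neuron to fire and propagate
recognition_time_ms = 150     # time to recognise an object

max_serial_depth = recognition_time_ms // neuron_through_time_ms
print(max_serial_depth)  # 15 — a strictly serial chain can only be this deep
```

Fifteen (or twenty, being generous with the estimates) versus the hundreds of layers in current deep networks is the implausibility Chapman is pointing at.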
So maybe I’m not pattern-matching too wildly. Who knows.
Physics, hopefully! Also maybe I’ll write a newsletter that’s better than this one.