Let’s try and understand Brian Cantwell Smith again

This is mainly a review of Brian Cantwell Smith’s latest book, The Promise of Artificial Intelligence: Reckoning and Judgment. But it’s also a second attempt to understand more of his overall project and worldview, after struggling through On the Origin of Objects a few years ago. I got a lot out of reading that, and wrote up what I did understand in my post on his idea of representations happening in the ‘middle distance’ between direct causal coupling and total irrelevance. But somehow the whole thing never cohered for me at the level I wanted it to, and I felt like I was missing something.

The new book is an easier read, but still not exactly straightforward. He’s telling an intricate story, and as with OOO the book is one single elegant arc of argument with little redundancy, so it’s not a forgiving format if you get lost. And I did get lost, in the sense that I’ve still got the ‘I’m missing something’ feeling. Part of the reason I’m posting this on my ‘proper’ blog and not the notebucket is that I wanted the option of getting comments (this worked very well for the middle distance post and I got some extremely good ones). So if you can help me out with any of this, please do!


So, first, let’s explain the part that I do understand. The early part of the book is about the history of AI, and of course there’s been a whole lot more of this since OOO‘s publication in 1996. He divides this history into ‘first-wave’ GOFAI, with its emphasis on symbolic manipulation, and the currently successful ‘second wave’ of AI based on neural networks. There’s also a short ‘transition’ chapter on the 4E movement (’embodied, embedded, extended, enacted’) between the two waves, which he describes as important but not enough on its own, for reasons I’ll get into.

He’s mainly interested in what the first and second wave paradigms implicitly assume about the world. First-wave AI worked with logical inference on symbols that were supposed to directly map to discrete well-defined objects in the world. This assumes an ontology where that would actually work:

The ontology of the world is what I will call formal: discrete, well-defined, mesoscale objects exemplifying properties and standing in unambiguous relations.

And of course it mostly didn’t work, for most problems, because the world is mostly not like that.

Second-wave AI gets below these ready-made well-defined concepts to something more like the perceptual level. Objects aren’t baked in from the start but have to be recognised, distilled out of a gigantic soup of pixel-level data by searching for weak statistical correlations between huge numbers of variables. This has worked much better for tasks like image recognition and generation, suggesting that it captures something real about the complexity and richness of the world. Smith uses an analogy to a group of islands, where questions like ‘how many islands are there?’ depend on the level of detail you include:

Whether an outcropping warrants being called an island—whether it reaches “conceptual” height—is unlikely to have a determinate answer. In traditional philosophy such questions would be called vague, but I believe that label is almost completely inappropriate. Reality—both in the world and in these high-dimensional representations of it—is vastly richer and more detailed than can be “effably” captured in the idealized world of clear and distinct ideas.
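To make the contrast concrete, here’s a minimal sketch of my own (not Smith’s, and with entirely made-up data): a first-wave system starts from a fixed inventory of discrete objects, while a second-wave system only ever produces graded scores from raw input, so ‘is this an island?’ has no determinate answer until we impose a cutoff.

```python
import numpy as np

# First-wave AI: the ontology of discrete, well-defined objects is baked in from the start.
islands = {"islay", "jura", "scarba"}
print("scarba" in islands)  # True, by stipulation

# Second-wave AI: start from raw measurements and produce graded scores.
# The "pixels" and "weights" below are random stand-ins, purely for illustration.
rng = np.random.default_rng(0)
pixels = rng.random((3, 64 * 64))   # three flattened "aerial photos"
weights = rng.normal(size=64 * 64)  # stand-in for a trained model

scores = 1 / (1 + np.exp(-(pixels @ weights) / 40))  # soft "island-ness" in (0, 1)
print(scores)        # three graded degrees of island-ness, not a yes/no answer

# Whether an outcropping "reaches conceptual height" only appears once we
# impose a threshold, and nothing in the data dictates where to put it.
print(scores > 0.5)
```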

There’s an interesting aside about how phenomenology has traditionally had a better grasp on this kind of richness than analytic philosophy, with its focus on logic and precision. That association can mislead people into thinking the richness is a subjective feature of our internal experience, whereas really it’s about how the world is. Things are just too complicated to be fully captured by low-resolution logical systems:

That the world outstrips these schemes’ purview is a blunt metaphysical fact about the world — critical to any conceptions of reason and rationality worth their salt. Even if phenomenological philosophy has been more acutely aware of this richness than has the analytic tradition, the richness itself is a fundamental characteristic of the underlying unity of the metaphysics, not a uniquely phenomenological or subjective fact.

Smith would like to keep using the word ‘rationality’ for successful reasoning in general, not just the formal kind:

I want to reject the idea that intelligence and rationality are adequately modeled by something like formal logic, of the sort at which computers currently excel. That is: I reject any standard divide between “reason” as having no commitment, dedication, and robust engagement with the world, and emotion and affect as being the only locus of such “pro” action-oriented attitudes, on the other.

I haven’t decided whether I like this or not — I’ve kind of got used to David Chapman’s distinction between ‘reasonableness’ and ‘rationality’ so I’m feeling some resistance to using ‘rationality’ for the broader thing. At the least I still want a word for formal, systematic thinking.


OK, now we’re getting towards the bits I don’t understand so well. Smith doesn’t think that the resources of current ‘second-wave’ AI are going to be enough to reproduce anything like human thought. This is where the subtitle of the book, ‘Reckoning and Judgment’, comes in. First, here’s how he explains his use of ‘reckoning’:

… I use the term “reckoning” for the representation manipulation and other forms of intentionally and semantically interpretable behavior carried out by systems that are not themselves capable, in the full-blooded senses that we have been discussing, of understanding what it is that those representations are about—that are not themselves capable of holding the content of their representations to account, that do not authentically engage with the world’s being the way in which their representations represent it as being.

So, roughly, ‘reckoning’ refers to behaviour that can be interpreted in intentional and semantic terms, but that is carried out by a system which doesn’t itself understand what its representations are about. Current computers are capable of doing this kind of reckoning, but not the outward-facing participatory kind of thought he calls ‘judgment’:

I reserve the term “judgment,” in contrast, for the sort of understanding I have been talking about — the understanding that is capable of taking objects to be objects, that knows the difference between appearance and reality, that is existentially committed to its own existence and to the integrity of the world as world, that is beholden to objects and bound by them, that defers, and all the rest.

(‘Defers’ is another bit of his terminology — it means that the judging system knows that when the representation fails to match the world, it’s the world that should take precedence.)

This makes sense to me in broad strokes, but I still have the sense I had from OOO that I don’t really understand how much this is a high-level sketch and how much it’s supposed to use his specific ideas about representation.

This is where it might be useful to go back to his criticism of the 4E movement. This movement mostly focussed on the interaction of AI systems with their immediate environment, but for Smith this kind of direct causal coupling is not enough on its own. For example, take a computer interacting with a USB stick:

Surely, one might think, a computer can be oriented (or comport itself) toward a simple object, such as a USB stick. If I click a button that tells the computer to “copy the selected file to the USB stick in slot A”, and if in ordinary circumstances my so clicking causes the computer to do just that, can we not say that computer was oriented to the stick?

No, we cannot. Suppose that, just before the command is obeyed, a trickster plucks out the original USB stick and inserts theirs. The problem is not just that the computer would copy the file onto their stick without knowing the difference; it is that it does not have the capacity to distinguish the two cases, has no resources with which to comprehend the situation as different – cannot, that is, distinguish the description “what is in the drive” from the particular object that, at a given instant, satisfies that description.
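A quick toy version of the point, in code (my own sketch, with hypothetical paths, not anything from the book): the program only ever mentions a mount point, which is a description in Smith’s sense, and nothing in its state distinguishes which particular stick happens to satisfy that description at the moment of copying.

```python
import shutil
from pathlib import Path

# Toy version of "copy the selected file to the USB stick in slot A".
# Both paths are hypothetical; the point is what the code does *not* refer to.
selected_file = Path("report.txt")
slot_a = Path("/media/slot_a")  # wherever the drive is mounted

# The program refers only to "whatever is mounted at /media/slot_a", a
# description, never to the particular stick that currently satisfies it.
shutil.copy(selected_file, slot_a / selected_file.name)

# If a trickster swaps sticks just before the line above runs, nothing in the
# program's state is any different; it has no resources to notice, or to care.
```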

This gets into Smith’s idea of representation as happening ‘in the middle distance’, not rigidly attached to the immediate situation like the computer is to the USB stick, and also not completely separate and irrelevant to it:

How could a computer know the difference between the stick and a description it satisfies (“the stick currently in the drive”), since at the moment of copying there need be no detectable physical difference in its proximal causal envelope between the two—and hence no way, at that moment, for the computer to detect the difference between the right stick and the wrong one? That is exactly what (normatively governed) representation systems are for: to hold systems accountable to, and via a vast network of social practices, to enable systems to behave appropriately toward, that which outstrips immediate causal coupling.

These ideas get folded into his standards for ‘genuine intelligence’, along with several related capacities, like being able to distinguish an object from representations of it and to care about the difference. This ability to ‘register’ an object is the key part of what he calls ‘judgment’ (‘the understanding that is capable of taking objects to be objects’).

So maybe I do understand this book after all, now that I’ve tried to write my thoughts down? Why do I still feel confused?

I think it’s the same disorientation I had with OOO, where I’m unsure when I’m reading a sketch of a detailed, specific mechanism and when I’m reading a more vision-level ‘insert future theory here’ thing. The middle distance idea is definitely a key part of his idea of judgment, and seems pretty specific, but then there are other vaguer parts about what the ability to take objects as objects would mean. And then, at the far end from concrete mechanism, judgment is also supposed to take on its ordinary language associations:

By judgment I mean that which is missing when we say that someone lacks judgment, in the sense of not fully considering the consequences and failing to uphold the highest principles of justice and humanity and the like. Judgment is something like phronesis, that is, involving wisdom, prudence, even virtue.

So the felt sense of the confusion is something like an unsteadiness, an inability to pin down exactly how I’m supposed to be relating to this idea of judgment. I’m failing to successfully register it as an object, haha. I don’t know. I wish I could explain myself better ¯\_(ツ)_/¯

This is where some comments could be useful. If there’s anything specific that you think I’m missing, please let me know!

5 thoughts on “Let’s try and understand Brian Cantwell Smith again”

  1. David Chapman June 4, 2022 / 5:18 pm

    Disclaimer: I haven’t read the book, and obviously should.

    I shared your feelings about OOO. There are important observations in there, which he elaborates in a philosophical mode rather than a STEMy mode, which would be more to my liking. The “middle distance” insight is important, but his explanation of it sometimes seems epicyclic.

    I’m guessing his key observation in this book is that we don’t have a clear understanding of what “actually caring about the world” could mean, but doing that is a prerequisite for (original) intentionality, and it is entirely lacking in AI systems to date. I think that is both right and important. I haven’t got much more to say about it than that, and maybe he doesn’t either, but since it’s right and important it deserves a book?


    • Lucy Keer June 4, 2022 / 5:36 pm

      Yes. I think it’s the transition between the STEMy and philosophical parts that I find disorientating… like I expect a clear path between them, whereas really it’s more like a few concrete insights + a bigger-picture look at what current AI is missing. And both are important, but I find the mix hard to navigate.


  2. Wyrd Smythe June 5, 2022 / 7:32 pm

    FWIW, ever since I read Penrose’s The Emperor’s New Mind I’ve questioned whether any computational system is capable of the kind of “judgement” or understanding you’re discussing here. His main argument turns on Gödel’s Incompleteness theorems, and since current computational systems are purely mathematical, Gödel would apply (at least to some degree).

    That said, if the problem is solvable, I suspect the key lies in size and complexity on par with the human brain. Our judgement and understanding comes from the vast number of associations that come with every thought and perception. People who “lack judgement” either lack those associations (perhaps due to inexperience) or have difficulty “connecting the dots” of those associations. Wisdom is our ability to invoke that vast library of past experience usefully.

    All that said, I’m not sure exactly what question you’re asking, so I doubt I’ve answered it. 🙂


  3. Edgar July 11, 2023 / 7:50 pm

    B.C.S. is awesome and so is your blog post. Concerning the history of AI and what I take to be B.C.S.’s affinity with some form of idealism, you might be interested in the following paper on the history of philosophy of computability: https://link.springer.com/article/10.1007/s11023-023-09634-0

    I’ve been following B.C.S. for years now, and became intrigued once I read his ‘limits of correctness’ paper. Wish I could meet him in person

