sea's blog → Algebra, Lisp, and miscellaneous thoughts

Introduction

Recently I was thinking about an interesting puzzle: what it means to understand something. That is to say, I wanted to gain an understanding of the concept of understanding.

Suppose that I had a general theory, or, in the language of software, a class. This class tells me everything about the behavior of the objects it models, though to fully predict what a particular object does, I need to know its actual instantiated values.

Can I be said to have understood that class of objects?

If this doesn't count as understanding, then there must be some gap in the concept, because using the class theory I can clearly describe every internal and external relationship that these objects have. There is no behavior that the objects can exhibit which I cannot explain using the class definition. Every 'why' question is answered, in theory.
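To make the software analogy concrete, here is a minimal sketch in Python (the class and its names are my own illustration, not taken from any particular codebase). The class definition answers every 'why' about how counters behave, yet predicting what a particular counter does still requires its instantiated values:

```python
class Counter:
    """A 'general theory': this definition fully specifies
    the behavior of every counter object."""

    def __init__(self, start: int, step: int):
        self.value = start
        self.step = step

    def tick(self) -> int:
        """Advance the counter and return the new value."""
        self.value += self.step
        return self.value

# The class tells us tick() always adds `step` to `value`, but to
# predict what *this* counter returns next, we need its instance state:
c = Counter(start=10, step=3)
print(c.tick())  # 13
print(c.tick())  # 16
```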

This example seems to suggest that a behavioral and structural, interrelational "understanding" (used colloquially here) is necessary for true understanding.

The Integers

And yet consider the example of the integers. I understand the integers as a ring fairly well, though not completely, since ring theory and number theory are extraordinarily deep subjects. One thing I can say for certain, however, is that the behavior of any integer is completely determined, and I have the axiomatic theory that describes it fully.

Well then, why can't I give any decent foundational description of the global behavior of the Collatz map? I have a local understanding of the Collatz function, but that does not imply a global understanding.

Likewise, a local understanding of integer behavior does not at all give any global understanding of the behavior of functions on integers, or systems built up from them.
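The contrast is easy to exhibit in code. In this Python sketch (the function names are my own), the local rule fits in one line, yet the stopping times it generates jump around erratically from one integer to the next:

```python
def collatz_step(n: int) -> int:
    """The 'local' rule: one application of the Collatz map."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def stopping_time(n: int) -> int:
    """Steps until the trajectory first hits 1. That this loop
    terminates for every n >= 1 is exactly the open conjecture."""
    steps = 0
    while n != 1:
        n = collatz_step(n)
        steps += 1
    return steps

# Neighbors behave wildly differently: 26 reaches 1 in 10 steps,
# while 27 takes 111 steps and climbs as high as 9232 along the way.
print([stopping_time(n) for n in range(26, 30)])  # [10, 111, 18, 18]
```

Complete local knowledge of `collatz_step` gives no obvious purchase on the global question of whether `stopping_time` is even total.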

In general, a local understanding does not imply a global understanding, for admittedly vague definitions of 'local' and 'global' (perhaps topos theory could extend those two terms to encompass far more general subject matter).

Double Pendulum

As another example, consider the double pendulum.

We have the complete theory of its behavior, and given the initial parameters, we can predict exactly how it will behave. Our only issue is a problem of practical measurement, but even then, we have complete understanding of how the errors accumulate, why they accumulate, and using Lyapunov exponents we can even classify certain behaviors of the chaotic system itself.

I would say that people who study such things have an extraordinarily deep understanding of the double pendulum, even though in practice, we can't really say much about its particular behavior.

This example seems to imply that practical considerations should have no effect on whether a concept is understood.

Putting it together

Putting these illuminated concepts together, it's clear that while I don't have a complete theory of understanding, I do have some criteria that at least help.

I can say that a concept can be said to be understood when:

  1. A sufficiently large class of 'Why' questions can be answered about it.
  2. Given enough information about a particular instance of the concept, we can (to some degree) predict what it will do next; even if in practice we cannot predict very far out, we should at least have some predictive ability.
  3. Criteria (1) and (2) apply not just to local behavior but to global behavior, for loosely defined senses of 'global' and 'local' that seem to be context dependent.

Unfortunately, I am aware that 'concept' is poorly defined, and so are these criteria. It's not at all obvious how we know when we have met the 'sufficiently large' criterion, nor what kinds of questions count as valid 'Why' questions and which do not. This vagueness is left to the intuition.

Criterion (3) is even more vague, and I suspect that the terminology for pinning down what 'global' and 'local' even mean in general simply doesn't exist.

A matter of degree

Vague and loose as these criteria are, they at least let us say things like "We understand X more than we understand Y" with some confidence, which is to say that an argument can be made that the criteria are better satisfied in X's case than in Y's.

Intuition as a form of low-degree understanding

Criterion (2) seems to imply that, if you have some intuitive ability to predict what something might do next, even if you do not understand why, then you have a partial understanding of it.

For instance, we may be able to catch a thrown ball before we have any formal model of gravity, forces, or inertia, so it can be said that we have an innate partial understanding of mechanics, even though a deeper understanding comes later, once we have enough theory to begin answering questions of Why.

Emergent phenomena as partially met criterion (3)

I'll just note here that emergent phenomena appear to be a case of criterion (3) being partially satisfied: the global behavior of something, for example how an organism behaves, is better understood than the particular local behaviors (what its cells are up to).

It's fascinating that criterion (3) can be satisfied partially in this way. It seems strange that you can say things about the global behavior without knowing the details, but anyone who has ever watched an animal wandering around has encountered a prime example of exactly that.