Say what you’re not saying, don’t say it, say what you didn’t say
Last time I blogged that modelling is not limited to software engineering, play and simulation, but is universal in human endeavour. I mentioned that accuracy is an important but not sufficient consideration when assessing a model. What other considerations are there?
My favourite lens for looking at a model is abstraction. In philosophical terminology, abstraction is about grouping concepts together at decreasing levels of detail. So, a duck is a duck and no other thing is a duck (no matter how it looks or walks or sounds); but applying abstraction allows us to talk about birds and say useful things, which might be rather exasperating if we had to list every bird in the world to say them. This kind of classification is a particular feature of object-oriented programming languages (which may or may not be a good thing).
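To make that last point concrete, here is a minimal sketch (the class and method names are purely illustrative) of how an object-oriented language lets us group ducks, penguins and the rest under a single abstraction and say something useful about all of them at once:

```python
class Bird:
    """An abstraction: a claim made here applies to every bird,
    without our having to list every bird in the world."""

    def __init__(self, name: str):
        self.name = name

    def lays_eggs(self) -> bool:
        # True for all birds, so we state it once at this level of detail.
        return True


class Duck(Bird):
    def quack(self) -> str:
        return "quack"


class Penguin(Bird):
    pass


# We can reason about the whole flock via the abstraction:
flock = [Duck("mallard"), Penguin("emperor")]
all_lay_eggs = all(bird.lays_eggs() for bird in flock)
print(all_lay_eggs)  # True
```

Note what the abstraction deliberately leaves out: `Bird` says nothing about quacking, so only a `Duck` can be asked to quack. That omission is the point, not a defect.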
Leaving it out
However, another way of considering abstraction is to pause before asking what a model is saying, and instead ask: what is this model not saying?
The model of biological change that we call evolution has incredible empirical support, so that its application has great explanatory and predictive power (some would even say that we don’t apply it enough). Strangely, though, it seems to cause an awful lot of consternation to those who subscribe to another model called creationism.
Why strange? At first sight, both of these models deal with how the world came to be the way it is. But evolution models a process, and has nothing whatsoever to say about how that process began, or why it began, or who began it. Conversely, creationism says nothing about how its proposed agent went about his craft (well, usually). He just did it. Apples and oranges.
Putting it back in
Any critical analysis or use of a model has to stick carefully to assessing or building upon what it actually models. This might sound simple, but humans find it remarkably tricky. We are fond of making cultural and doctrinal assumptions and applying intuitions without realising it. (In a black alley, a black cat spies a black rat. How?*) Unfortunately, this is not only inevitable, it's usually necessary.
Why so? Models almost always rely on background information. Of particular interest in computer science and artificial intelligence is the notion of semantics: the meaning of symbols. Tell a robot to fetch you a cuppa, and it may suffer the same semantic confusion as is now affecting US readers: a cuppa what?
However, problems arise when the semantics are ambiguous, and I submit that they almost always are. I find in my job that when presenting a model I have to spend a good chunk of the conversation heading off potential misunderstandings with sentences like, "Note I'm not saying there's a connection, just that Professor Guo was in the Study at the time and you don't use Lead Pipe to do Next Generation Sequencing."
Schools concentrate on implanting into children a kind of approved default semantic background, to equip them to understand what models are saying. I believe it is just as important to teach them how to question what models are not saying—and to be careful about filling the gap inappropriately with assumptions, intuitions, or beliefs.