Every well-constructed model is leverage against a problem

Why so?

A well-constructed model, one whose residuals cannot be further accounted for (see: When is your statistical model right vs. good enough), is one that gives us high explanatory power. (see: The goal of scientific model building is high explanatory power) Using such a model, we can:

  1. map its key parameters to values of interest, which can then be used in comparisons. This is the act of characterization.
  2. simulate what-if scenarios (including counterfactual scenarios). This is us thinking causally.
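Both uses can be sketched with a toy example. This is my own hypothetical illustration (the dose–response data and the `fit_linear` helper are made up, not from the note): fit a simple model once, then reuse it for characterization and for a counterfactual prediction.

```python
# A minimal sketch (hypothetical example): fit a simple linear model,
# then use it for characterization and what-if simulation.

def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Observed data: dose vs. response (made-up numbers).
dose = [1.0, 2.0, 3.0, 4.0]
response = [2.1, 4.0, 6.2, 7.9]

intercept, slope = fit_linear(dose, response)

# 1. Characterization: the slope is a key parameter we map to a value
#    of interest (response gained per unit dose), usable in comparisons.
print(f"response per unit dose: {slope:.2f}")

# 2. What-if simulation: predict under a counterfactual dose we never
#    actually administered -- no real-world resources spent.
counterfactual_dose = 10.0
prediction = intercept + slope * counterfactual_dose
print(f"predicted response at dose {counterfactual_dose}: {prediction:.2f}")
```

The point of the sketch is the division of labour: the fitted parameters serve characterization, while evaluating the model at unobserved inputs serves the what-if scenario.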

This is leverage because we can take these actions without spending real-world resources. (Apart from, of course, real-world validation.)

When is your statistical model right vs. good enough

Came from Michael Betancourt on Twitter.

How do you know that your model is right?
When the residuals contain no information.

How do you know that your model is good enough?
When the residuals contain no information that you can resolve.
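A toy illustration of "residuals containing information" (my own example, not from Betancourt): a straight-line fit to quadratic data leaves residuals that still correlate perfectly with x², so there is structure we can resolve, and the model is not yet "right".

```python
# Toy example: fit a straight line to quadratic data and show the
# residuals still carry resolvable information (they track x**2).

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [float(i) for i in range(-5, 6)]
ys = [x ** 2 for x in xs]          # the true relationship is quadratic

# Best straight-line fit: for x symmetric about 0, the OLS slope is 0
# and the intercept is mean(y), so the residuals are simply y - mean(y).
mean_y = sum(ys) / len(ys)
residuals = [y - mean_y for y in ys]

# The residuals still contain information we can resolve:
r = pearson(residuals, [x ** 2 for x in xs])
print(f"correlation of residuals with x^2: {r:.3f}")
```

If we instead fit the quadratic term, the residuals would collapse to zero here; in real data they would collapse to noise we cannot resolve further, which is the "good enough" criterion.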

Relates to this notion of The goal of scientific model building is high explanatory power, I think.

I'm going to let that one simmer for a bit before I comment further.

Mechanisms are better than population empirical evidence

As a general rule of thumb, I prefer mechanistic explanations over phenomenological studies conducted on populations. I say this mostly in the context of "folklore medicine", where herbs are claimed to have certain beneficial effects.

I prefer a mechanistic explanation for how they work, because it gives us leverage and control over how we use the medicine. It also allows us to build models, which add to our leverage (see: Every well-constructed model is leverage against a problem). On the other hand, a mechanistic explanation isn't always forthcoming, because we don't always know enough about a system. In that case, I think it is sufficient to have a large-population, case vs. control study showing that the herbal medicine, as an intervention, does have a measurable effect. (And even then, we have to be careful, because not everything that matters can be measured!)
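For the case vs. control situation, one common summary of a measurable effect is the odds ratio of a 2x2 table. A hedged sketch, with entirely made-up counts (not from any real study):

```python
# Hypothetical case-control table for an herbal intervention
# (all counts invented for illustration).
# rows: exposed / unexposed to the herb; columns: cases / controls
exposed_cases, exposed_controls = 30, 70
unexposed_cases, unexposed_controls = 10, 90

# Odds ratio: odds of exposure among cases vs. among controls.
odds_ratio = (exposed_cases * unexposed_controls) / \
             (exposed_controls * unexposed_cases)
print(f"odds ratio: {odds_ratio:.2f}")  # > 1 suggests an association
```

An odds ratio well above 1 (with an appropriately sized sample and a confidence interval excluding 1) is the kind of population-level evidence I mean, even absent a mechanism.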

Experimentation is getting a bit out of hand

Experimenters are applying overparameterized models in domains where they may not make sense, particularly from a first-principles perspective. One example: I have peer-reviewed a paper in which 1-dimensional convolutional neural networks were unleashed on tabular data. This makes no sense from a first-principles perspective, because in tabular data there are usually no semantically meaningful spatial correlations between columns. To a trained practitioner, such experiments leave the impression of an experimenter trying too hard to shoehorn a problem into a newly learned tool, without sufficiently understanding the model's assumptions before applying it.
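The mismatch can be made concrete with a small illustration of my own (toy numbers, toy filter): a 1-D convolution assumes neighbouring positions are related, so merely reordering tabular columns, which should not change what a record means, changes the convolution's output.

```python
# Toy illustration: 1-D convolution is sensitive to column order,
# but tabular columns have no meaningful order.

def conv1d(row, kernel):
    """Valid-mode 1-D convolution (cross-correlation) over one row."""
    k = len(kernel)
    return [sum(row[i + j] * kernel[j] for j in range(k))
            for i in range(len(row) - k + 1)]

row = [1.0, 5.0, 2.0, 9.0]   # one tabular record with 4 columns
kernel = [1.0, -1.0]         # a toy "learned" filter

original = conv1d(row, kernel)
shuffled = conv1d([row[2], row[0], row[3], row[1]], kernel)  # reorder columns
print(original, shuffled)    # different outputs for the "same" record
```

A model whose assumptions match the data (e.g. one invariant to column order) would treat both orderings identically; the convolution does not, which is the first-principles objection.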

In the spirit of Finding the appropriate model to apply is key, I'd rather see models custom-built for each problem. After all, Every well-constructed model is leverage against a problem.

The impossibility of low-rank representations for triangle-rich complex networks

News article on ScienceDaily. The original paper backing the news article is published in PNAS.

Quotables from the news article:

> He also noted that new embedding methods are mostly being compared to other embedding methods. Recent empirical work by other researchers, however, shows that different techniques can give better results for specific tasks.

Benchmarks are quite important! See also: The craze with embeddings.

> Given the growing influence of machine learning in our society, Seshadhri said it is important to investigate whether the underlying assumptions behind the models are valid.

Relates to the idea that Finding the appropriate model to apply is key. This is because Every well-constructed model is leverage against a problem; when the underlying assumptions behind our models are valid for our specific problem at hand, we gain leverage to solve our problems. (We should also stay keenly aware of When is your statistical model right vs. good enough.)

The goal of scientific model building is high explanatory power

Why does mechanistic thinking matter? In The end goals of research data science, we are in pursuit of the invariants, i.e. knowledge that stands the test of time. (How our business contexts exploit that knowledge for win-win benefit of society and the business is a matter to discuss another day).

When we build models, particularly of natural systems, predictive power matters only in the context of explanatory power, where we can map phenomena of interest to key parameters in a model. For example, in an Autoregressive Hidden Markov Model, the autoregressive coefficient may correspond to a meaningful property in our research context.
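A hedged sketch of the parameter-to-property mapping (my own toy example, stripped down to a plain AR(1) process rather than a full AR-HMM): simulate data with a known autoregressive coefficient, then recover it, treating the recovered coefficient as the explanatory quantity, e.g. how strongly the system's state persists over time.

```python
# Toy AR(1) example: y_t = phi * y_{t-1} + noise.
# The coefficient phi is the "key parameter" we map to a property
# of the system (persistence of its state).
import random

random.seed(42)
phi_true = 0.8
ys = [0.0]
for _ in range(5000):
    ys.append(phi_true * ys[-1] + random.gauss(0.0, 1.0))

# Recover phi by least squares of y_t on y_{t-1} (no intercept).
num = sum(y1 * y0 for y0, y1 in zip(ys, ys[1:]))
den = sum(y0 * y0 for y0 in ys[:-1])
phi_hat = num / den
print(f"estimated phi: {phi_hat:.2f}")  # close to phi_true
```

A purely predictive black box might forecast this series just as well, but it would not hand us a single interpretable number like phi to reason about, which is the explanatory-power point.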

Being able to look at a natural system and find the most appropriate model for the system is a key skill for winning the trust of the non-quantitative researchers that we serve. (ref: Finding the appropriate model to apply is key)