Representation Learning
The place for overparametrized models is tooling
I think the place where overparametrized models make the most sense is not in models with "high explanatory power" (see The goal of scientific model building is high explanatory power). Rather, their best realm of application is a bit more pedestrian. We don't usually build these large, overparametrized models to explain the world, per se, but to automate some task that would otherwise have been done by a human being. In other words, we come right back to the notion of using computing to automate routine tasks. Basically, they fall into one of two classes of tasks:
Both of these are tasks that would originally have required human intervention.
It seems to me, then, that many so-called "AI" applications are merely (yes, merely) about automating tasks that would otherwise have taken humans much longer.