You're thinking of physicalism, of which materialism is a sub-category. Physicalism is the assertion that there are no supernatural phenomena (which I agree with); materialism suggests that the essence everything is built out of is matter. I think most people would agree that materialism is outdated (given that atomism has been outdated for >100 years), but many of the implications of materialism (that the whole can be explained by the parts, i.e. reductionism, and the idea that all important properties can be quantified, to give two examples) still persist.
The (superior) alternative to materialism is emergentism. Materialism and emergentism both imply other ideas and ways of thinking about the world, which are the actual important things.
Ah OK. I think you're splitting hairs a little bit there - I would consider those two terms (materialism and physicalism) to be interchangeable; the basic idea is the same but one is a refinement of the other using modern knowledge. You could say physicalism is materialism v2 :-)
As for emergentism... that's materialism v3, so (in my head at least) these are all different ways of saying the same thing.
Yeah, that's fair - the problem in my mind isn't that people are using the wrong word (who cares!), but that even though we now know matter isn't the essence of reality, we still hold many other beliefs that depend on that idea, and we haven't moved on from them.
> many of the implications of materialism (...) reductionism
Is reductionism considered a consequence of materialism? To me, the two seem independent. Reductionism works just as well for abstract concepts as it does for physical matter.
> the idea that all important properties can be quantified
I'm not sure how this follows from materialism either. We quantify many things that have nothing to do with physical matter.
I also get the feeling that you consider these two "implications of materialism" to be outdated - I disagree with that view. For instance, I can't think of an example of a property or phenomenon that is best left unquantified - there are plenty of important things in life that we can't quantify yet, because we lack the measurement tools or conceptual framework for it, but quantifying these is obviously doable in principle, and desirable.
> Is reductionism considered a consequence of materialism?
It is; fundamentally, the theory is that the world is made up of Lego bricks, so we can examine the world by looking at the pieces. This is a useful tool, but it doesn't actually work in all cases: the areas of science where it does work are considered "hard sciences", and the areas where it doesn't are considered "soft sciences".
Reductionism isn't outdated in the sense that it is useless; it's outdated in the sense that it cannot be used to understand all phenomena, especially emergent phenomena. Systems science has been developing new tools that can be applied to understanding emergent phenomena.
I don't want to get too navel-gazy with this, but there are many things that defy quantification, or where our efforts at quantification are and can only ever be procrustean in nature. That doesn't mean that they cannot be modelled, but the modelling must be process-oriented rather than state-oriented. For example, if you are modelling the behaviour of a thermostat at the level of its components, you can model the causal relationships between heater and sensor as a self-correcting feedback loop - this is a qualitative model. Only at the level of the total behaviour can you model the ambient temperature and desired temperature quantitatively.
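Here's a minimal sketch of what I mean by a qualitative model, in Python - the component names and structure are invented for illustration, and there isn't a single physical quantity in it:

```python
# A minimal sketch of the qualitative view: the thermostat as a signed
# causal graph. Names are made up for illustration; no physics yet,
# only the direction of influence between components.

# Each edge is (cause, effect, sign): +1 means "more of A -> more of B",
# -1 means "more of A -> less of B".
edges = [
    ("room_temp", "sensor_reading", +1),
    ("sensor_reading", "heater_output", -1),  # hotter room -> heater throttles down
    ("heater_output", "room_temp", +1),
]

# The product of signs around the loop tells you its character:
# -1 is a balancing (self-correcting) loop, +1 would be a runaway loop.
loop_sign = 1
for _, _, sign in edges:
    loop_sign *= sign

print("balancing loop" if loop_sign == -1 else "reinforcing loop")
# -> balancing loop: the structure alone tells us it self-corrects,
#    before a single quantity has been plugged in.
```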
Am I making sense here? I'm still working my way through the textbooks for some of these concepts so sometimes I find it difficult to put into words.
My core objection is that, the way I see it, reductionism doesn't stop working for the soft sciences in any fundamental way. There's no fundamental irreducibility of a phenomenon (the uncertainty principle notwithstanding - soft sciences aren't anywhere near worrying about that); the limit is our computational capacity: of our brains, of our computers, of our scientific discourse. We just can't keep so many pieces in our heads simultaneously, so we don't bother, and create higher-level abstractions to make things easier on ourselves.
That's how I view emergence too: there's no new behavior suddenly appearing when your system is complex enough, behavior that couldn't be predicted from looking at the pieces - it's just too much work to deal with the pieces directly. The discontinuity we see doesn't exist in the real world - it's caused by the rungs of our ladder of abstraction.
As an example of the rungs on the ladder: we study gases on a molecular level, modelling them as bouncy balls. We also study gases at a higher level, modelling them as fluids. We go further still, viewing them as a bunch of parameters (pressure, volume, temperature). Three different perspectives, three separate sets of behaviors - yet there's no actual discontinuity in the real world, and a lot of interesting phenomena can be observed when we try to create a smooth transition between the models; that is, when we look in between the rungs.
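To make the rungs concrete, here's a toy Python comparison of two of them - the bouncy-ball rung (kinetic theory) against the parameter rung (the ideal gas law). The numbers are made up for illustration:

```python
# A toy illustration of two rungs of the ladder for the same gas.
# This is kinetic theory vs the ideal gas law, not a real simulation.
k_B = 1.380649e-23  # Boltzmann constant, J/K

N = 1e23            # number of molecules
V = 0.01            # volume, m^3
T = 300.0           # temperature, K
m = 4.65e-26        # mass of one N2 molecule, kg

# Top rung: gas as three parameters. P V = N k T.
P_macro = N * k_B * T / V

# Lower rung: gas as bouncy balls. Pressure from mean squared speed,
# P = (1/3) (N/V) m <v^2>, with <v^2> = 3 k T / m from equipartition.
v_sq_mean = 3 * k_B * T / m
P_micro = (N / V) * m * v_sq_mean / 3

print(P_macro, P_micro)  # identical: the rungs meet, no discontinuity
```

Same gas, same pressure, derived two rungs apart - the "discontinuity" is only in which model we happened to pick up.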
> For example, if you are modelling the behaviour of a thermostat at the level of its components, you can model the causal relationships between heater and sensor as a self-correcting feedback loop - this is a qualitative model. Only at the level of the total behaviour can you model the ambient temperature and desired temperature quantitatively.
The way I see it, causal models have only a coarse relationship with the real world. A self-correcting feedback loop can be analyzed in terms of its conceptual components, which are mathematical in nature - but you won't get from there to predicting the behavior of a real-world thermostat until you start plugging in physical models. How much complexity you'll have to deal with depends on the physical model you plug in. There's lots of space for reduction and quantification here, depending on the answers you seek. For example, the concept of "ambient temperature" is a very high-level abstraction in itself - if you're willing to break it apart, suddenly a lot more things across the model become more directly related to the real world, and easier to quantify.
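As a sketch of what "plugging in a physical model" buys you - here's the feedback loop from your example, with Newton's law of cooling swapped in for "ambient temperature" (all constants are invented):

```python
# A sketch of plugging one simple physical model into the abstract
# loop: Newton's law of cooling plus a bang-bang heater. "Ambient
# temperature" decomposes into an outside temperature and a heat-loss
# coefficient. All constants are fabricated for illustration.

T_room    = 15.0   # degrees C, initial room temperature
T_outside = 5.0    # degrees C - part of what "ambient" breaks apart into
T_target  = 20.0   # thermostat setpoint
k_loss    = 0.1    # heat-loss coefficient per minute (fabricated)
heater    = 3.0    # degrees C per minute when on (fabricated)
dt        = 1.0    # minutes

for minute in range(60):
    heating = heater if T_room < T_target else 0.0    # bang-bang controller
    # Newton's law of cooling: loss proportional to the indoor/outdoor gap.
    T_room += (heating - k_loss * (T_room - T_outside)) * dt

print(round(T_room, 1))  # hovers around the setpoint - now quantitative
```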
---
The point I'm trying to express here is that, in my view, there are three types of limits to reductionism and quantification:
- The Uncertainty Principle - the fundamental limit, past which some things cannot be quantified. IANAPhysicist, but my feeling is that it's not a principled limit to quantification - it only reveals that we're trying to quantify measures that are ill-defined.
- Fundamental limits to computation - I'm thinking of the Halting Problem and Gödel's Incompleteness Theorems. We can't build quantitative metrics or reductive models that would require computing the uncomputable (the classic diagonal argument is sketched below, after this list).
- Practical limits, aka "too hard to bother" - this is what I believe underlies 99% of common arguments against reductionism and examples of emergence. We look at systems as a whole because looking at the pieces is too much work. But whether it's too much for our working memory, or too much for all the computing power of our civilization - it's still not a fundamental, philosophical limit, and therefore not a philosophical argument against reductionism.
On that last point, if one can prove that a higher-granularity model would require more compute than the universe could provide over its lifetime, then I'll give it a solid shmaybe as a fundamental limit.
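For reference, the diagonal argument behind that second bullet, sketched in Python. `halts` is the hypothetical oracle the argument proves cannot exist, so this is the shape of the proof rather than working code:

```python
# The diagonal argument behind the Halting Problem, sketched in Python.
# `halts` is the hypothetical oracle that the argument proves cannot
# exist - the shape of the proof, not working code.

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) terminates."""
    ...

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # the program applied to itself.
    if halts(program, program):
        while True:
            pass  # predicted to halt, so loop forever
    return  # predicted to loop forever, so halt

# diagonal(diagonal) halts if and only if it doesn't halt - a
# contradiction, so no general `halts` can exist. Some questions about
# even a fully reductive model are uncomputable in principle.
```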
---
Addendum on systems science.
I have an interest in systems science, which I pursue to the extent my free time allows. I've studied the basics, done some toy modelling, and one thing I've learned so far is: the most insightful part of modelling a system is plugging numbers into it.
I no longer trust models that aren't executable - it's too easy to create something that looks fine in the abstract, but is completely wrong. Making it run on real - quantified - data is the fastest way to discover the depths of one's ignorance, like I did in the example linked above.
(Well, to be honest, 80% of my ignorance was revealed by defining units of measurement for each sink and flow - so if you want a quick way to debullshit a systems model, I suggest starting with that.)
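To make that concrete: this is the kind of thing I mean, using Python's pint library for dimensional analysis (the model itself is a fabricated water tank):

```python
# What defining units for everything looks like in practice, using
# Python's `pint` library. The model (a water tank) is fabricated.
import pint

ureg = pint.UnitRegistry()

stock  = 500 * ureg.liter                   # a stock has an amount
inflow = 20 * ureg.liter / ureg.minute      # a flow is amount per time
dt     = 5 * ureg.minute

stock += inflow * dt                        # liters + liters: fine
print(stock)                                # 600 liter

# The debullshitting part: if a flow was secretly defined in the wrong
# dimension, the arithmetic refuses to pretend otherwise.
bad_flow = 3 * ureg.kilogram / ureg.minute
stock += bad_flow * dt   # raises pint.DimensionalityError
```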
> That's how I view emergence too: there's no new behavior suddenly appearing when your system is complex enough, behavior that couldn't be predicted from looking at the pieces - it's just too much work to deal with the pieces directly. The discontinuity we see doesn't exist in the real world - it's caused by the rungs of our ladder of abstraction.
Reductionism relies on dissecting the whole and viewing the parts individually. The fundamental distinction between something that can be reduced (let's call it a collection) and something that cannot (a system) is that when you dissect a system, a fundamental aspect of the system is lost. That isn't to say that said aspect has materialised of its own accord, but that it originates in the relationships between the parts. There is no seat of "car-ness" in a car, and if you were to try to understand a car by looking at the individual parts, you would not be able to unless you could intuit how those parts interact.

The difference between an engineered system and a natural system is that in engineering we intentionally attempt to minimise the number of interrelations, so that the object can be understood easily from an analytical perspective. In natural systems the levels of interconnection are much greater; we cannot understand a society by examining each individual in isolation, we have to move up the ladder of abstraction to a level where coherent patterns can be identified. We have to look at the whole.
The thing that reductionism misses out on is non-linear causality. Feedback loops, inherently based in the relationships between parts and not the parts themselves, give rise to higher-order behaviour, which brings us to the concept of levels of abstraction. The classic idea of reductionism is that if we create the lowest-level model (at this point that would be quantum mechanics, but it could certainly go lower in future), then we can derive all the higher-order models from it. This may be theoretically possible (purely in the philosophical sense), but it's just not practically useful. The position of reductionism is that this is the only (or perhaps primary) valid way in which models can be constructed.
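A toy illustration of the point about feedback, in Python (parameters invented): the Lotka-Volterra predator-prey model. Neither equation oscillates on its own - the cycling lives entirely in the relationship between the two:

```python
# A toy illustration of non-linear causality: Lotka-Volterra
# predator-prey. Neither population "contains" the oscillation;
# it lives in the feedback between them. Parameters are invented.
prey, predators = 10.0, 5.0
a, b, c, d = 1.1, 0.4, 0.1, 0.4   # growth/predation/conversion/death rates
dt = 0.01

history = []
for step in range(5000):
    d_prey = (a * prey - b * prey * predators) * dt
    d_pred = (c * prey * predators - d * predators) * dt
    prey, predators = prey + d_prey, predators + d_pred
    history.append((prey, predators))

# Populations cycle, with predator peaks lagging prey peaks. Remove
# either species from the loop and the cycling vanishes - the behaviour
# is a property of the relationship, not of the parts.
print(max(p for p, _ in history), min(p for p, _ in history))
```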
> The way I see it, causal models are almost completely abstract - they have only a coarse relationship with the real world. A self-correcting feedback loop can be analyzed in terms of its conceptual components, which are mathematical in nature - but you won't get from there to predicting the behavior of a real-world thermostat until you start plugging in physical models.
This is true of all models - "all models are wrong, but some are useful", "the map is not the territory", etc. Modelling reality inherently involves reducing it to the parts we're interested in, because a model of reality in its entirety would be the same size as reality itself, and thus unrepresentable (even if we could capture the total state of reality). A basic equation of Newtonian motion excludes things like drag, the variability of gravity, turbulence and so on. The closer we need to get to matching reality, the more factors we need to include, until by sheer necessity the model transitions from an equation to a simulation. The amount of detail we include depends on what we're trying to do; models are purposive tools rather than descriptions of reality.
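For instance, here's that transition in miniature (constants made up): the no-drag range of a thrown ball has a closed form, but add quadratic drag and you're forced to simulate:

```python
# The equation-to-simulation transition for a thrown ball. The clean
# Newtonian formula ignores drag; adding drag forces us to integrate
# step by step. The drag coefficient and launch values are made up.
import math

g, v0, angle = 9.81, 30.0, math.radians(45)
k = 0.02  # lumped quadratic drag coefficient per unit mass (fabricated)

# Rung 1: the closed-form model, no drag.
range_ideal = v0**2 * math.sin(2 * angle) / g

# Rung 2: same physics plus drag - no closed form, so we simulate.
x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
dt = 0.001
while y >= 0.0:
    speed = math.hypot(vx, vy)
    vx -= k * speed * vx * dt           # quadratic air resistance
    vy -= (g + k * speed * vy) * dt
    x += vx * dt
    y += vy * dt

print(round(range_ideal, 1), round(x, 1))  # drag lands the ball short
```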
I think the name "reductionism" does not help when discussing it; to reject reductionism is not to reject modelling, because we cannot operate in the world without models.
I agree that the argument is a philosophical one. The reductionist perspective is objectivist: science measures reality, so lower-order models are higher-resolution, and we can abstract away from a very-high-resolution map of reality by focusing on the details we care about. The emergentist perspective is constructivist: our models are tools that we create in order to interact with our environment, but they are not reality itself, so you should use whichever model is most useful for interacting with the environment, based on its predictive capability.
[edit in response to your addendum]
I totally agree that executable models (in essence, simulations) are 1000x better in basically every way than static models. I'm trying to work in this space myself in order to bring these ideas into software development. But I believe that plugging in the numbers is useful precisely because it highlights the qualitative aspects of the model: how the different variables are causally related. The temporal (heh) aspect of the simulation highlights how the variables are bound, but the specific values, whether they're 1,000 or 10,000 units, are not the thing you're learning - unless those values happen to be a divergence point in the model.
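Here's the kind of divergence point I mean, sketched in Python with the logistic map - the exact values don't matter, but which side of the threshold the growth parameter sits on changes the qualitative story entirely:

```python
# An example of a "divergence point": the logistic map. The exact
# numbers aren't the lesson - but moving the growth parameter r across
# a threshold flips the qualitative behaviour of the whole model.
def settle(r, x=0.5, warmup=1000, keep=8):
    """Iterate x -> r*x*(1-x) and return the long-run values."""
    for _ in range(warmup):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print(settle(2.8))   # one fixed point: converges to a single value
print(settle(3.2))   # past r = 3: flips between two values
print(settle(3.9))   # chaotic regime: no repeating pattern
```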
Regarding quantifying everything - at the quantum mechanical level, not everything is quantifiable, and that's fundamental. See https://en.wikipedia.org/wiki/Uncertainty_principle. I assume this is what the parent was referring to.
I think there was a big intuitive glimpse of this in dialectical materialism (though not in the Stalinist "diamat" flavour). "Levels" of reality emerge, where each level has its own emergent "laws"; there are intertwined mutual influences between levels; phase transitions are emphasized; relationships are more important than objects; objects are always "contradictory" (always divisible and in flux) and are really relationships in disguise; all the "categories" we make in our minds are transitional and imperfect; and so on.