Thirty years ago the political scientist Francis Fukuyama published The End of History and the Last Man, in which he argued that, with the ascendancy of Western liberal democracy after the Cold War and the dissolution of the Soviet Union, humanity had reached “… the end of history as such: the end-point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.” Last year Fukuyama published a new book, Identity: The Demand for Dignity and the Politics of Resentment. Here is an excellent review.
Fukuyama was influenced by the German political philosophers Hegel and Marx and by the principle of dialectic – thesis, antithesis, synthesis. Both took a teleological view: that there is a purpose underlying the unfolding of events and that the course followed by world history is a necessary, deterministic one. The dialectic principle certainly provides great insight into the evolution of ideas, but its proponents tend to use it post hoc to justify their prior beliefs: for Hegel, the superiority of the Prussian State; for Marx, the inevitable Dictatorship of the Proletariat; for Fukuyama, the perfection of Western Liberal Democracy.
If we substitute speculation, observation, theory for thesis, antithesis, synthesis, the dialectic principle applies well to the scientific method, but without those deterministic and post hoc aspects. Theory then gives rise to further speculation, which inspires further experiment and observation, and so the process continues.
Unlike other great philosophical schemata, science has existed for only the last few centuries. While speculations about the nature of reality have always abounded, the essential step of testing-by-observation has been ignored or overlooked by most thinkers throughout history. Because of this, such speculations have tended to gel into rigid dogmas in which testing-by-observation becomes anathema. Science breaks down.
Unfortunately this is happening again. According to former particle physicist Sabine Hossenfelder:
No one in physics dares say so, but the race to invent new particles is pointless. In private, many physicists admit they do not believe the particles they are paid to search for exist. They do it because their colleagues are doing it.
Since the 1980s, physicists have invented an entire particle zoo, whose inhabitants carry names like preons, sfermions, dyons, magnetic monopoles, simps, wimps, wimpzillas, axions, flaxions, erebons, accelerons, cornucopions, giant magnons, maximons, macros, wisps, fips, branons, skyrmions, chameleons, cuscutons, planckons and sterile neutrinos, to mention just a few.
There are many factors that have contributed to this sad decline of particle physics. Partly the problem is social: most people who work in the field genuinely believe that inventing particles is good procedure because it’s what they have learned, and what all their colleagues are doing. But I believe the biggest contributor to this trend is a misunderstanding of Karl Popper’s philosophy of science, which, to make a long story short, demands that a good scientific idea has to be falsifiable. Particle physicists seem to have misconstrued this to mean that any falsifiable idea is also good science. In the past, predictions for new particles were correct only when adding them solved a problem with the existing theories. For example, the currently accepted theory of elementary particles – the Standard Model – doesn’t require new particles; it works just fine the way it is. The Higgs boson, on the other hand, was required to solve a problem. The antiparticles that Paul Dirac predicted were likewise necessary to solve a problem, and so were the neutrinos that were predicted by Wolfgang Pauli. The modern new particles don’t solve any problems.
These “new particles” are not speculations about the nature of reality. They are the outcome of the routine application of a schema known as Particle Physics, for the purpose of kudos and funding.
The speculation by the Swedish chemist Svante Arrhenius in 1896 that increasing atmospheric concentrations of CO2 could lead to global warming was revived in the mid-20th century under the aegis of the United Nations’ Intergovernmental Panel on Climate Change (IPCC). Unfortunately the testing-by-observation aspect of a truly scientific enterprise was intentionally omitted by the IPCC. Its Third Assessment Report specifically dismissed the need for rigorous testing when it stated: “our evaluation process is not as clear cut as a simple search for ‘falsification’” (Section 8.2.2 on page 474). Effectively what they are saying is: proper scientific testing is too hard and we are not going to bother doing it. All of the funding and effort goes into ever more complex numerical models. Here too, testing-by-observation has become anathema. A recent paper showing that aspects of the modelled carbon cycle did not fit the observations was rejected on the grounds that “exceptional claims require exceptional evidence”. A more appropriate comment might have been: “The Emperor has no clothes”.
Climate Modelling has been creaming climate funding for the last half-century. It is an ongoing boondoggle owned by Applied Mathematics. Other disciplines don’t get a look-in, particularly Statistics, which is seen as a threat. This is unfortunate, because a synthesis of the two would provide useful insights. For example, regression modelling of local climate sensitivity to increased CO2 shows that it varies widely, being weakest in the North Atlantic and strongest in northern Siberia and northern Canada. Increases in extreme weather events in some regions (such as NE NSW and SE Queensland) are indicated by significant increases in insurance claims for storm and bush-fire damage, whereas there has been no significant change in tropical cyclone frequencies. It seems likely that climate variance and extreme events are related to the spatial gradient of local climate sensitivity rather than to the sensitivity itself, but such an effect is statistical and may not show up in a deterministic climate model.
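The per-location regression just described can be sketched in a few lines. Everything below is a toy illustration: the CO2 series, the two “locations”, their true sensitivities and noise levels are assumptions invented for the example, not observations or published estimates.

```python
# Hedged sketch: per-location ordinary-least-squares regression of a
# temperature anomaly on log(CO2), the kind of "local climate sensitivity"
# estimate described above. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(1960, 2021)
co2 = 317.0 * np.exp(0.004 * (years - 1960))   # crude exponential CO2 proxy (ppm)
x = np.log(co2 / co2[0])                       # log forcing, standard in energy-balance fits

def local_sensitivity(true_beta, noise_sd):
    """Fit T = a + b*log(CO2/CO2_0) + noise by OLS; return the slope b."""
    temps = 0.1 + true_beta * x + rng.normal(0.0, noise_sd, size=x.size)
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
    return coef[1]

# Two hypothetical locations: a low-sensitivity maritime site and a
# high-sensitivity high-latitude site, as the post suggests.
b_atlantic = local_sensitivity(true_beta=1.0, noise_sd=0.15)
b_siberia = local_sensitivity(true_beta=6.0, noise_sd=0.30)
print(f"estimated sensitivity, maritime site:      {b_atlantic:.2f} K per log-CO2 unit")
print(f"estimated sensitivity, high-latitude site: {b_siberia:.2f} K per log-CO2 unit")
```

A statistical treatment like this also yields standard errors on each local slope, which is exactly the kind of information a deterministic model run does not provide.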
The insurance industry will need to step up with some research funding for statistical modelling of climate if it wants reliable answers to such questions.
There are other areas of physics which appear to be stagnating. One such is Astrophysics, with its concept of “Dark Matter”. Dark Matter plays a similar role to that of the luminiferous aether in the 19th Century; it is a blatant “fix-up”.
The existence of the aether was postulated because light forms interference fringes and must therefore be made up of waves. Waves require a medium to carry them, just as solids, liquids and gases carry sound waves. The aether was hypothesised solely as the medium which carries light. The Michelson-Morley experiment showed that this medium does not exist, and Maxwell’s field equations provided an excellent description of the behaviour of light as an electromagnetic wave.
In a similar way, Dark Matter has been postulated solely to account for the observed rotation of galaxies. The problem is that no-one knows what it is or how it comes to be distributed in just the right way to give rise to the observed galactic rotation. An explanation is more likely to be found in terms of a reinterpretation of Einstein’s Field Equations. Whatever the explanation, it seems unlikely that the answer lies in the plethora of proposed new particles discussed above.
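The mismatch that Dark Matter was invented to patch can be shown with a back-of-the-envelope calculation: circular speeds computed from the visible mass alone fall off as 1/√r, while observed rotation curves stay roughly flat. The mass figure and the flat “observed” speed below are stylised assumptions for illustration, not measurements of any particular galaxy.

```python
# Hedged sketch of the galactic-rotation problem: Keplerian speeds from the
# visible mass fall with radius, whereas observed curves stay roughly flat.
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def keplerian_speed(r_kpc, m_visible=1.0e11):
    """Circular speed assuming essentially all visible mass lies inside r."""
    return np.sqrt(G * m_visible / r_kpc)

radii = np.array([5.0, 10.0, 20.0, 40.0])   # galactocentric radii, kpc
v_pred = keplerian_speed(radii)             # falls as 1/sqrt(r)
v_obs = np.full_like(radii, 220.0)          # stylised flat curve, km/s

for r, vp, vo in zip(radii, v_pred, v_obs):
    print(f"r = {r:5.1f} kpc: predicted {vp:6.1f} km/s, 'observed' ~{vo:.0f} km/s")
```

Dark Matter resolves the discrepancy by adding an invisible halo tuned to flatten the predicted curve; the alternatives mentioned here instead modify the gravitational side of the calculation.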
Once again from Sabine Hossenfelder:
How do black holes destroy information and why is that a problem?
She concludes: “As you have probably noticed, I didn’t say anything about information. That’s because really the reference to information in ‘black hole information loss’ is entirely unnecessary and just causes confusion. The problem of black hole ‘information loss’ really has nothing to do with just exactly what you mean by information. It’s just a term that loosely speaking says you can’t tell from the final state what was the exact initial state. There have been many, many attempts to solve this problem. Literally thousands of papers have been written about this.”
Ay, there’s the rub – thousands of papers.
How many experiments?
11 Replies to “The End of Physics?”
Regarding the proliferation of new particles, the simplest argument to debunk most of them has been given by William of Occam (“Occam’s razor”): Entities are not to be multiplied beyond necessity.
Bravo. The questioning of dark matter is especially welcome. Does it explain the spiral shape of many galaxies? What is the distribution required to explain the rotation observed?
As for the black hole information removal, the multiverse explanation is worse than the problem.
The only concern I have for this end of Physics is that it did happen before, about 120 years ago. And it was conflict between theory and observation that then created Quantum Mechanics and Relativity.
Yes, I agree. There are more relevant problems concerning black holes. Why do they form accretion disks? Are galaxies the accretion disks of super-massive black holes?
The Galactic Rotation problem has been solved by G. O. Ludwig (2021). The gravitomagnetic field of General Relativity is a sufficient explanation. No need for Dark Matter. See https://link.springer.com/article/10.1140/epjc/s10052-021-08967-3
You mean that the suggestion I made in my dark matter paper (https://blackjay.net.au/challenge-to-dark-matter/) has been proven to be correct?
Did Ludwig acknowledge the prior publication in Blackjay?
It looks like Ludwig is having recognition problems of his own. His paper was published 18 months ago and has seen little recognition in popular science journals or the media.
Good post. However,… we need to move from science-hijacked-by-priests to philosophy proper and analyse the politics at work in everything. Psychology is fully part of the political science at work. Political science itself excludes nothing in human society. Furthermore, psychological drives, structures and programming at all levels influence everything at all levels, and are equally influenced at all levels by everything. We need a philosophy of everything.
Look at the development and subsequent stagnation of Tibetan society mediated through the Buddhist-Bon religion and the religio-psycho-politico cultural freeze which ended with the Chinese communist take-over and subsequent destruction of Tibetan society within that local framework for it to now be renewed in a global context.
One could posit that the West’s liberal-capitalist ideology, which has now hijacked all of western society and its civilisation from the base of international globalist high finance, is one aspect of a duality, the other aspect being the Eastern science of mind.
The end of history may mean the end of compartmentalisation in human knowledge. It is this compartmentalisation, and the strict control of the narrative of institutional and formalised human endeavours by the high priests of compartmentalisation (in turn guided by a pyramid of control at whose top sits international high finance), which makes a mockery of factual truth and is, indeed, a direct attack on many fronts on the very foundation of civil society.
I agree about compartmentalism. Even in Physics. Information is (supposedly) conserved in Quantum Mechanics but is inversely related to Entropy in Thermodynamics.
There’s a lot to absorb in this post. I’ve had to come back to it several times to come up with a general view (mostly the examples are outside my area of expertise). The comments are helpful.
Compartmentalisation is probably the expression of a boundary of understanding. Science (even before it was formalised) appears to go in surges, somewhat randomly perhaps. A breakthrough insight happens, then it takes time for it to be digested by the current form of society. As society increases in size and complexity, the entire juggernaut of humanity muddles along to the next phase in a non-linear process. The development of understanding, and the utilisation of that understanding, has been spectacular in our lifetimes. Multi-disciplinary understanding needs time to catch up. There will be hiccups and retrograde steps, which is why the history of science is important. We keenly await the next beacon of understanding.
I am more pessimistic: “Science advances one funeral at a time.”
I guess first things first, a lot of people have arguably declared the “end of physics” or “end of science” etc. at various periods of history. After all, physics was pretty much “essentially solved” before all that pesky quantum and relativity stuff came along? –Heh. Also compare the discussion of whether string theory should be considered science (the book title about this codifying the phrase “not even wrong”).
And I suppose it’s this idea (“not even wrong”) that seems to be at the heart of *this* post, so let’s address to what extent data really *are* lacking.
I’ll talk about climate change data only briefly because it’s not an area of particular scientific expertise for me, and it is the part of this discussion most likely to go overtly political. I’ll just say that it seems… incorrect to state that there hasn’t been climate data taken; if anything, we have a LOT MORE earth-observing satellite imagery and data over the last couple of decades than, well, probably any time ever. See, e.g., https://skepticalscience.com/climate-models-basic.htm and https://skepticalscience.com/climate-models-intermediate.htm for discussions of climate-related data and how they are used together with models.
Turning more to the title subject of the post, in terms of physics, it’s not surprising that we don’t have more experimental data from, say, black holes because… we can’t do experiments with black holes. Pedantry aside, even *observations* are difficult because they are necessarily careful works of inference – black holes are indeed, as their name implies, *black*. Any “observation of a black hole” has to be, properly speaking, of the environment of a black hole and its interaction with the inferred object, and due to the nature of a black hole as we understand it, observing the *interior* of a black hole is arguably *impossible* (because nothing can leave the event horizon once within). Basically, it’s maybe not surprising that theory is way out ahead of observational data since observational data is hard to come by and in many ways indirect in nature, vis-a-vis what models are discussing.
And to come back around to particle physics, I suppose the idea that a theory “doesn’t require new particles” or “works just fine the way it is” could have been used to defend Ptolemaic celestial dynamics, which, while requiring a certain level of complexity (epicycles; see e.g. https://physics.weber.edu/schroeder/ua/BeforeCopernicus.html), could be used to make *reasonably accurate predictions*. People didn’t believe in a geocentric system/Universe out of sheer bullheaded stupidity/egocentrism: the model made falsifiable predictions of reasonable accuracy. Should that have meant that we stopped looking?
That’s not to say that there aren’t potential or even real problems with particle physics at all, but science runs into problems if we wind up in a position where data are so far ahead of theory that we have no idea how to interpret them… until our theory can be built up to understand and be tested by such data; or where theory is so far ahead of data that it is essentially untestable… until our observational/experimental capabilities catch up to a point where we can observe the effects needed to actually test said theories.
Sometimes, one side has to run ahead for a while in order for the other side to have something to chase and test. But it’s not tenable to get lost in numerical models forever, nor is it tenable to gather mountains of data with no way to mine or understand it indefinitely.