Our paper predicting future global average temperatures was recently rejected by Journal of Geophysical Research – Oceans, viz.:
Thank you for your submission to Journal of Geophysical Research – Oceans. At this time, we feel that the content of your manuscript “Estimating the Effect of Carbon Emissions on Global Average Temperature” is not within the scope of the journal. I regret to inform you that it has been declined for publication in Journal of Geophysical Research – Oceans.
We are at a loss to understand why our submission has been declined by your journal. Your home page states:
JGR: Oceans publishes original research articles on the physics, chemistry, biology and geology [of] the oceans and their interaction with other components of the Earth system.
Our article is an original use of statistical methods to examine the physics of the diffusion of heat and carbon dioxide between the oceans and other components of the Earth system. In what way is it “not within the scope of the journal”? We note from the Detailed Status Information that our submission was “Waiting for Reviewer Assignment” until you intervened to prevent publication. As Editor-in-Chief that is your right, but it does suggest a political motivation.
I can assure you that my decision was based on my judgement of the scientific merit of your research and its suitability for JGR-Oceans.
Well that explains it then. The paper actually was within the scope of the journal but it lacked scientific merit.
How does she know that?
Isn’t that precisely the purpose of peer review? We submitted the names of six potential reviewers, some of whom had the statistical expertise necessary to properly evaluate the paper. Why not consult them about “scientific merit”?
We can offer two other explanations for the paper’s rejection.
Firstly, we believe that many science organisations, such as the AGU, the proprietor of JGR, are run by well-meaning activists whose passion to “save the planet” is likely to override a more objective assessment of scientific value. Phrases such as “diversity and inclusion”, “sustainability practices”, “protect our planet for future generations”, “our changing planet” and so on abound in the online CVs of AGU Board Members and Editors. Some belong to activist organisations such as Thriving Earth Exchange. Praiseworthy as this may be, it does suggest a possible failure of objectivity when dealing with ideas which challenge climate orthodoxy.
Secondly, the currently accepted paradigm in climate science concerns the continued refinement and application of General Circulation Models (GCMs) of the ocean-atmosphere system. The belief that carbon emissions remain in the atmosphere for millennia is based, not on real world observations, but on experiments with GCMs. This is the foundation of Alarmist predictions of ultimate catastrophe unless we can control emissions. Any questioning of it is bound to meet resistance. The possibility of a human influence on climate became a real issue in the 1980s when computer power began growing exponentially and the possibility of constructing a fluid dynamic model of the whole earth became a reality. Modellers were eager to find relevance and financial support for their new toys and Climate Change filled the bill.
The careers of thousands of mathematicians and programmers now rely on the funding of the GCM approach to Climate Science.
It was a wonderful idea! We set up a computer model with millions of little boxes, each one representing the physical state of a small part of the ocean-atmosphere system at an instant in time. Then we apply the laws of physics to predict the states of all the boxes a short time later. We repeat the process indefinitely and see what happens. This is how meteorological models worked, but they didn’t include the ocean – measured ocean temperatures forced the model. Oceanographic models also worked like this but were externally forced by seasonal wind stresses. Climate models are a combination of the two. They are called “Coupled Ocean Atmosphere General Circulation Models” and the only external forcings are solar radiation and greenhouse gases. Wow! Now we know everything! It’s Laplace on steroids.
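The box-stepping idea can be sketched in a few lines of code. This is only a toy, not any real GCM: a one-dimensional row of boxes obeying nothing but heat diffusion, with the grid size, time step and diffusivity chosen purely for illustration.

```python
# Toy "little boxes" model: a 1-D row of boxes holding temperatures,
# stepped forward in time with an explicit heat-diffusion law.
# Every number here is illustrative, not taken from any real model.
n = 50                            # number of boxes
dx, dt, kappa = 1.0, 0.1, 1.0     # spacing, time step, diffusivity

T = [0.0] * n                     # initial state: cold everywhere...
T[n // 2] = 100.0                 # ...except one warm box in the middle

for step in range(1000):
    new_T = T[:]                  # predict every box a short time later
    for i in range(1, n - 1):
        new_T[i] = T[i] + kappa * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    T = new_T

# The warm spot has spread out and cooled, as the physics demands.
print(max(T) < 100.0, min(T) >= 0.0)   # prints: True True
```

Repeat the loop indefinitely and see what happens: that, in miniature, is the whole scheme.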
But there is a catch. All models have a finite grid size. If we halve the grid size we must increase the number of calculations by 2⁴, i.e. by 16, since the spacing is halved in all three spatial dimensions and the time step must be halved too. As a consequence the grid size remains so large that a tropical storm can easily fit inside it. Furthermore there is a random element involved due to turbulence (e.g. inside a tropical storm) so that it is not always possible to accurately predict each subsequent state and errors build up. These problems can mostly be circumvented by adding various rule-of-thumb fudges to the code (and by funding ever faster and bigger computers).
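The arithmetic behind that factor of 16 is simple enough to write down (the helper function below is purely illustrative):

```python
# Halving the grid spacing doubles the number of points in each of the
# three spatial dimensions, and the time step must also be halved to
# keep the scheme stable: each refinement multiplies the work by 2**4.
def cost_factor(halvings):
    """Work multiplier after halving the grid `halvings` times."""
    return (2 ** 4) ** halvings

print(cost_factor(1))   # 16
print(cost_factor(2))   # 256: two halvings already cost 256x the work
```

Three or four successive refinements, and the computing bill has grown by thousands, which is why the boxes stay bigger than a tropical storm.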
But it gets worse! Turbulence arises spontaneously in the wake of mountain ranges and ocean ridges. It always starts small and gets bigger until it is damped down by viscous forces, like the wake of a ship. This happens within models as well as in the real world, but the grid scale in the models is very large compared with the scale of the viscous forces which act to damp turbulence. When this happens in a numerical model, the model crashes because the numbers representing quantities like velocity and pressure become very large, very fast and rapidly exceed the word length of the computer. Because of this failure to damp sub-grid-scale turbulence, General Circulation Models (and many other models in Computational Fluid Dynamics) are fundamentally unstable and must be tweaked and cajoled to give any sensible answers at all. This is done by smoothing out sharp edges from the topography and by setting viscosity parameters to unrealistically high values.
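That blow-up, and the viscosity fudge that suppresses it, can be reproduced in miniature. The sketch below (every parameter illustrative, nothing from a real GCM) advects a smooth bump around a periodic 1-D grid with a centred-difference scheme: with no viscosity, grid-scale errors grow without bound; with an artificially large viscosity, the solution stays tame.

```python
# Toy demonstration of grid-scale instability: 1-D advection with a
# centred-difference scheme blows up unless artificial viscosity damps
# the grid-scale noise. All parameter values are illustrative.
import math

def run(viscosity, steps=2000, n=64, dx=1.0, dt=0.2, u=1.0):
    """Advect a smooth bump; return the largest |value| ever reached."""
    q = [math.exp(-((i - n / 2) / 5.0) ** 2) for i in range(n)]
    peak = max(abs(v) for v in q)
    for _ in range(steps):
        new_q = q[:]
        for i in range(n):
            left, right = q[(i - 1) % n], q[(i + 1) % n]
            advect = -u * (right - left) / (2 * dx)          # transport
            damp = viscosity * (right - 2 * q[i] + left) / dx**2
            new_q[i] = q[i] + dt * (advect + damp)
        q = new_q
        peak = max(peak, max(abs(v) for v in q))
    return peak

print(run(0.0) > 1e6)   # True: undamped, the numbers explode
print(run(1.0) < 2.0)   # True: heavy viscosity keeps them bounded
```

The bump itself is perfectly smooth; it is the tiny grid-scale ripples, seeded at round-off level, that amplify at every step until they swamp everything – exactly the behaviour the high viscosity parameters are there to smother.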
GCMs are not merely unstable, they are unstable and over-damped.
While they may be useful in providing insights into complex physical processes, they are particularly ill-suited to forecasting. Our paper describes an alternative, statistical approach which is mathematically rigorous and does not depend on GCMs.