Open letter to Patrick Frank – a discussion of your “propagation of errors” theory

Patrick Frank’s theory of propagation of errors, if it is correct, would mean that the world is proceeding with completely useless predictions of future trends in global warming. It would, in effect, be like the last few moments of the Titanic – with the crew unable to detect the “signal” of the iceberg ahead until it was far too late to change course, such was the inertia in the ship. The inertia in the human system is comparably large in relation to the size of the temperature signal that would need to emerge from Patrick Frank’s suggested error ranges. On some estimates, that temperature signal would have to be something of the order of plus 20 degrees centigrade (or minus 20) by 2100. Refer to Patrick Brown in 2017 – about 11 minutes into this video: https://www.youtube.com/watch?v=rmTuPumcYkI

Clearly, waiting to act until a temperature rise of 20 degrees had actually happened would be nonsensical. It would be far too late to take any corrective action. By then, the world would already have tipped into a very different climate equilibrium, and human systems would be in chaos, collapse and possibly on the road to near-extinction. So, it seems very important for the survival and thriving of our species to work on developing recommendations for how we can either: a) improve the science and practice of measuring global temperatures; or b) improve the statistical analysis of those measurements, so that any genuine temperature signal can emerge from the error “noise” sooner.
Regarding a), I’m not well qualified to talk in any detail about how the science of measurement can be assessed and improved. I’ll leave that to the many climate scientists who are designing, building, testing, monitoring and comparing the thousands of measuring devices of all sorts of types, methods and designs, in the large variety of locations around the world, with all their various accuracy levels, as tested in laboratories and in the field.

Regarding b), as I’m an MA in Mathematics and an MBA, I have some appreciation of, and skill in, the main elements of statistics as applied to measurements. While I wouldn’t claim to be an expert in the statistical analysis of global temperature measurement specifically, I have some thoughts about approaches that might improve the situation, ie that might help a genuine temperature signal (if it exists) to emerge from the “noise” of the error ranges.

The first observation is that it should be possible to reduce error ranges by increasing the frequency and number of temperature measurements. This seems intuitive. However, I can conceive of situations where this would not be the case. An example would be if the Hawthorne Effect is applicable. If the system being measured (in this case, the human system and its impacts on temperature) is influenced by the taking and reporting of a measurement, then increasing the frequency of measurements might increase the Hawthorne Effect (and therefore the associated error ranges), thereby making the situation worse rather than better. https://en.wikipedia.org/wiki/Hawthorne_effect

In the context of temperature rises and climate change, the equivalent effect could perhaps be this: increasing the frequency of measurements could increase the reported error range; when people see that, they reason that there is even less reason to curb carbon emissions, so they become more carbon profligate, which increases the warming – with reduced danger of the temperature change reaching the edge of the error range, because there is more headroom for any genuine temperature signal to be obscured by the larger error ranges rather than revealed and acted upon.

Another possibility would be to decrease the frequency of measurement rather than increase it. This would reduce the Hawthorne Effect (if it exists in this context) and would also reduce the error ranges as calculated by Patrick Frank’s propagation theory. This seems, in some ways, counterintuitive. Increasing the frequency of measurement should improve the information we have, which should improve our ability to forecast the future; reducing the frequency of measurement should reduce the amount and usefulness of the information we are using. However, intuition is not always borne out by scientific exploration. Human perception is notoriously fallible. That’s why many people follow the scientific process.

So, as a thought experiment, let’s look at how we might go about reducing the frequency of measurement. Fortunately, we don’t have to actually reduce the frequency of measurements taken by scientists: they can continue to measure what they measure, as frequently as they currently measure it. This is important, because if our thought experiment fails to provide new insights, we have not stopped the process of taking measurements, the accuracy and usefulness of which will continue to be improved.
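Before turning to that thought experiment, a brief statistical aside on the first observation above. The short Python sketch below (my own illustration, using purely hypothetical numbers for the per-reading uncertainty and the reading counts) shows the textbook reason the intuition is attractive: if per-reading errors were independent and random, the uncertainty of an averaged estimate would shrink roughly as one over the square root of the number of readings.

```python
import math

# Hypothetical per-reading measurement uncertainty (1-sigma), in degrees C.
SINGLE_READING_SIGMA = 0.5

# Purely illustrative numbers of independent readings averaged into one estimate
# (e.g. more stations, or more frequent sampling at the same stations).
for n_readings in (100, 1_000, 10_000, 100_000):
    # For independent, unbiased random errors, the standard error of the mean
    # falls as 1/sqrt(n).
    standard_error = SINGLE_READING_SIGMA / math.sqrt(n_readings)
    print(f"{n_readings:>7,} readings -> standard error of the mean ~ {standard_error:.4f} C")
```

The caveat – and one reason the intuition can fail – is that this shrinking applies only to independent random errors; systematic or correlated errors do not average away, however many readings are taken.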
We will not have made matters any worse by our thought experiment. It is a desk exercise, with no detrimental effects on the existing measurement infrastructure. As a desk exercise, therefore, we can pretend we only took measurements less frequently – for example by taking, as data sets, measurements at different intervals in time, and treating them as if they were the only readings to be analysed. The state of the world will not be changed by taking this approach (unlike the situation of the Hawthorne Effect described earlier).

We could take any temporal measurement interval, and it should not affect the long-term temperature trends in the underlying earth/human system that we were measuring when those readings were taken. It would clearly make sense to look at a long enough timeline to be able to detect any human impacts on temperature increase, since that is the main thing we can influence if we need to (that’s the hand on the steering wheel of the human system). Therefore, the timeline we cover should include, say, the last 200 years, and perhaps also the 200 years before that, covered in some way as a separate exercise, for comparison and baselining (ie to set a baseline of temperature and errors against which to compare the later 200-year timeline, and then to look at whether this would provide useful information for forecasting the next 200 years from now).

An important rider on this is to avoid “cherry picking” a pattern of time intervals that would support a particular assumption about trends, or which would fall on particularly unusual years for temperatures. One way to avoid this would be to take a series of data sets that “roll forward” by, say, a year each time. An example will help illustrate what I mean (a rough sketch of the rolling scheme is also given below). Suppose we are doing the calculation for the 200 years from 1800 to 2000. We would select our interval as, say, 40 years. The selection of interval is somewhat arbitrary. It is logical to suggest that the result should, to a large degree, be independent of the interval selected (except in absurd cases, eg if we were to pick 200 years as the interval). On the other hand, we could select an interval of 1 year (as Patrick Frank has done in his published work on his theory of propagating errors) or, indeed, an interval of one month, one week, or one day, etc. As I said earlier, it would be intuitive that the shorter the interval selected, the better the information we should be able to glean and the better the forecasts of the future. However, suppose we have decided to be more scientific than intuitive for this exercise.

Suppose we select a time interval of 40 years. This occurs to me because it is a typical investment life cycle for significant infrastructure such as energy generation and transmission equipment. When I worked for BP, this was the sort of timescale over which major capital investment appraisal calculations were scoped, to include the costs of decommissioning at the end of that time. An example might be a North Sea oil platform. The investment life cycle for renewable energy technologies is probably a lot shorter than this in many instances (eg think of solar panels and wind turbines that can be manufactured, installed, and eventually decommissioned and recycled, with each of these parts of the life cycle carried out in timespans of weeks rather than years). The operational phase of renewable energies lasts anything from a couple of years – by when payback has usually been reached – to a few decades.
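Here is that rough sketch of the rolling scheme, in Python. It is purely illustrative: annual_temps is a hypothetical placeholder for whatever observational annual temperature series a climate scientist would actually use, and the zero values carry no meaning.

```python
# A rough sketch of the "rolling" desk exercise described above.
# `annual_temps` is a purely hypothetical stand-in for an observational annual
# global-mean temperature series; the zero placeholder values carry no meaning.
annual_temps = {year: 0.0 for year in range(1800, 2040)}

INTERVAL = 40                      # chosen sampling interval, in years
SPAN_START, SPAN_END = 1800, 2000  # the 200-year span being examined

rolled_data_sets = []
for offset in range(INTERVAL):     # roll the whole pattern forward one year at a time
    years = range(SPAN_START + offset, SPAN_END + offset + 1, INTERVAL)
    rolled_data_sets.append({year: annual_temps[year] for year in years})

# 40 data sets: the first samples 1800, 1840, ..., 2000; the second samples
# 1801, 1841, ..., 2001; and so on. Each set would then be analysed separately
# (trend, error range) and the 40 results compared, guarding against the
# cherry-picking of any single pattern of years.
print(len(rolled_data_sets))
print(sorted(rolled_data_sets[0]), sorted(rolled_data_sets[1]))
```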
Coming back to the 40-year interval: an advantage of taking it is that, if and when it allowed the detection of a significant temperature change signal, that signal could be used to stop all future 40-year investments of capital in new fossil fuel production assets. With a 40-year measurement interval selected, then, we would take measurement data at 6 points in time between 1800 and 2000 (1800, 1840, 1880, 1920, 1960 and 2000) and calculate/propagate the “measurement error ranges” 5 times (the first data point being subject to a baseline error rather than a “propagation error”). To avoid the risk of cherry-picking, we could then do a second calculation, rolling forward each of the data points we analyse by one year. So, we would take and analyse data from 1801, 1841, 1881, 1921, 1961 and 2001. A third calculation would take data at 40-year intervals from 1802 to 2002, and so on. This would provide 40 separate calculations, on 40 separate data sets. Each of those 40 calculations could be examined statistically, to see if they provide a good means to forecast forward, and the calculated error ranges could be compared with other statistical approaches to error ranges.

We could then do the “baseline” calculation for the previous 200 years, from 1600 to 1800, in a similar way, and compare it with our calculations for 1800 to 2000. As a small technical point, it would not be necessary or appropriate to include the propagated error calculated from the 1600–1800 analysis as the baseline error in the calculations for the 1800–2000 data set. It should be obvious that we can improve our forecasting ability by simply using an independently calculated baseline measurement error in the 1800 measurements as the starting point for our 1800–2000 calculations. For an 1800 starting point, there would be no point in applying an error range calculated by an error propagation method as the baseline error, since in all likelihood we would have a legitimate but much smaller error range (and therefore a much more useful result) from establishing the measurement accuracy error ranges for that individual year (without any error propagation occurring at that starting point). It will also be obvious that we could repeat these calculations for any past 200-year timespan for which we have temperature data, to see if the error ranges get larger or smaller over time.

One difficulty we then find is how to determine a relevant error number to use, relating to the time periods we do our calculations on. Patrick Frank takes ±4 W/m² PER YEAR from Lauer and Hamilton and then "propagates" (ie compounds) that error each and every year. Lauer and Hamilton can be found here: https://journals.ametsoc.org/view/journals/clim/26/11/jcli-d-12-00451.1.xml However, as Patrick Brown has pointed out, it is by no means clear that this number, as quoted from Lauer and Hamilton, was ever meant by those authors to be used in the way Patrick Frank does. See also: moyhu.blogspot.com/2019/09/another-round-of-pat-franks-propagation.html

I've looked at the paper by Lauer and Hamilton. Although they describe the number's derivation using expressions such as "the observed multiyear annual mean", and they describe it as a root mean square error, it's far from clear what they meant by the expression "multiyear annual mean". It was derived from monthly measurement data, so it's entirely possible that what they meant was simply that they added twelve monthly values and divided by twelve.
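Before going further, it is worth seeing just how much rides on the choice of compounding interval. The short Python sketch below is my own simplification, not Patrick Frank's actual emulator calculation: it simply treats a fixed ±4 W/m² uncertainty as one independent error contribution per step, propagates in quadrature (root-sum-square), and asks how the resulting envelope depends on how many steps are taken across the same 200-year span.

```python
import math

PER_STEP_UNCERTAINTY = 4.0   # W/m^2 - the Lauer and Hamilton figure at issue
SPAN_YEARS = 200             # e.g. 1800 to 2000

# Number of compounding steps implied by different choices of interval.
# Which of these (if any) is a legitimate reading of Lauer and Hamilton
# is exactly the disputed question.
schemes = {
    "monthly steps": SPAN_YEARS * 12,
    "annual steps (as Patrick Frank compounds)": SPAN_YEARS,
    "40-year steps (6 data points, 5 propagations)": 5,
}

for name, n_steps in schemes.items():
    # Root-sum-square propagation of identical, independent per-step errors
    # grows with the square root of the number of steps.
    propagated = PER_STEP_UNCERTAINTY * math.sqrt(n_steps)
    print(f"{name:<47} -> +/- {propagated:6.1f} W/m^2 over {SPAN_YEARS} years")
```

The only point of the sketch is that the final envelope is driven almost entirely by how many times the same ±4 W/m² is compounded (roughly ±196, ±57 and ±9 W/m² respectively in this toy arithmetic), which is why the temporal meaning of Lauer and Hamilton's number is so critical; converting a forcing uncertainty into a temperature uncertainty is a further step in Patrick Frank's paper that this sketch does not attempt.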
The fact that the number was derived from monthly data doesn't mean we should attribute the 4 W/m² to each month and then compound that error each month – although doing so would be just as valid (or invalid) as Patrick Frank's approach of compounding ("propagating") the error annually. Equally, it would be valid, under Patrick Frank's logic, to add up all the monthly data for 20 x 12 months and divide the total by 240 to get a "multiyear bidecadal mean" (which would be quite close to the 4 W/m²) and apply that number to each twenty-year period, compounding it only once per twenty years. Patrick Frank seems to suggest that Lauer and Hamilton meant that the radiative forcing for the whole of a year could be under- or over-estimated by this amount for the entirety of that year (and compounded in the following year). But it is not at all clear that this is what they meant.

So, I'd like to see a resolution of the controversy concerning this error number and its applicability in a temporal context. It is critical, because it is fundamental to Patrick Frank's "propagation" (aka "compounding") approach to his calculations. If his use of the error number in this way is invalid, then his whole argument falls, because it is the compounding ("propagating") that fundamentally drives the dramatic, all-swallowing shape of his error ranges. I'd be very interested in the results of the calculations I've suggested, if any climate scientists or statisticians with access to the underlying data were to undertake them, once the matter of the validity or otherwise of compounding ("propagating") the 4 W/m² error number has been resolved.

My final point is to ask Patrick Frank whether he has any ideas of his own for how we can improve our ability to detect a temperature change signal from the error “noise” which he suggests is swamping any such signal. One of the great things about the scientific method is that we don't need to stop when we have a calculated result. Especially in a matter that is so important not only for the thriving of humanity but potentially for its very survival, we can always do something to improve the ways we measure, calculate and understand what is going on, and so improve our forecasts of the future. In some ways, that is the most important part of the scientific process – the constant striving to make our scientific tools better, more accurate and more meaningful as sources of information for use in policy making in response to what we think is likely to happen in the future. I've not yet seen any such improvement suggestions from Patrick Frank. Perhaps there will be some after he has seen this discussion, or he can signpost them if he has already provided them somewhere.

As a footnote to this story: shortly after it was published, in exchanges with Dr Frank, I asked him whether he had contacted the authors of Lauer and Hamilton to establish whether they agreed that the numbers he took from that work could be used in the way he had used them. I think one of the authors died a few years ago, so we'll never know his view on the matter. However, regarding the other author, Dr Frank replied that he had contacted him: "I wrote to Axel Lauer. He never replied."