In the absence of robust action by social media platforms to root out, delete or counter the growing tide of anti-AGW disinformation, I sometimes challenge and rebut that disinformation myself, especially when the same person posts it persistently and frequently.
To save my time and the readers' time, I'm starting a Rogues' Gallery of prolific and persistent climate disinformers, including detailed rebuttals and analyses of their disinformation techniques.
Click on a name in the list below to see more detailed analyses of their disinformation practices.
This way, when the repeat offenders create new posts, I can simply point readers to this page rather than having to create a detailed response of my own each time.
Let's start by defining important terms. The diagram below is adapted from Treen (2020). The list of rogues starts below it.
(As an aside, there is also a handy list of climate misinformers at skepticalscience.com: https://skepticalscience.com/misinformers.php)
List of Climate Disinformer Rogues
1) Eric Keyser (Member of the CO2 Coalition, retired Geologist, former fossil fuel sector employee for 50 years) Added 04/03/2025
Eric Keyser
Eric's posts about climate change are blatantly misleading, and he frequently ignores the many responders who point out deficiencies in his logic and his scientific-sounding arguments. It's difficult to avoid the conclusion that he is deliberately acting as a super-spreader of misinformation, intended as seed material for AGW dismissives and deniers to feed on. That would make him a disinformer.
Challenging disinformation activities matters because disinformation in public channels delays action to tackle AGW (such as reaching global net zero). It makes it far more likely that the global temperature anomaly will be higher than it would otherwise be, that the damages from AGW will be greater, and that the chances of crossing tipping-point thresholds will be increased.
The weight of scientific evidence for AGW (Anthropogenic Global Warming) is so overwhelming that the IPCC says, in AR6 WG1 (2021):
"It is unequivocal that human influence has warmed the atmosphere, ocean and land. Widespread and rapid changes in the atmosphere, ocean, cryosphere and biosphere have occurred."
Despite that, or perhaps because of it, Eric Keyser (a retired geophysicist from Calgary, Alberta, Canada, who worked for 50 years, mainly in the fossil fuel industry) has been a prolific poster on LinkedIn since January 2025. Most days there is at least one post of his, sometimes more, making spurious claims that temperature data from a single monitoring site somehow disproves AGW and shows the scientific consensus about AGW to be wrong.
Rather than pen a response to each individual post he makes, it seems that it will save me (and perhaps others) a lot of time if I post a general response to his views here, which can be referenced when responding to his future posts of a similar nature.
Eric accuses the IPCC of having a "narrative". However, it seems, from the evidence of his own posts, that Eric is the one presenting a narrative. He uses pseudoscience arguments.
From oxfordreference.com:
"Pseudoscience - Theories, ideas, or explanations that are represented as scientific but that are not derived from science or the scientific method. Pseudoscience often springs from claims or folk wisdom or selective reading without independent data collection or validation."
His main approach from January to March 2025 has been to cite temperature data from individual land-based monitoring sites and engage in "JAQing" (see below), making oblique or direct claims about what the data from that one site implies about AGW, human drivers and natural variability at a global level. However, in March 2025 he started to drip-feed descriptions of his overall pseudoscientific approach to data selection and analysis. See my comments about this further below.
He also often phrases his comments in a way that conflates weather and climate, at local, regional and global levels. When he does that, his comments seem very confused, and certainly they are not supported by the data he presents.
From:
https://en.wikipedia.org/wiki/Just_Asking_Questions
""Just Asking Questions" (JAQing); ... is a pseudoskeptical tactic often used by conspiracy theorists to present false or distorted claims by framing them as questions. If criticized, the proponent of such a claim may then defend themselves by asserting they were merely asking questions which may upset the mainstream consensus.[2][3][4] The name of the tactic is therefore derived from the typical response of "I'm not saying it was necessarily a conspiracy; I'm just asking questions."
Before March 2025, he didn't indicate what his criteria were for selecting each individual monitoring site he used as the core of each new post. It's very likely that he has been cherry-picking a particular site each time in order to make a point or inference about AGW or about the "Hockey Stick" of global average temperatures.
An example of a legitimate and credible Hockey Stick is the following, from IPCC AR6 WG1 (2021).
This diagram from the IPCC shows, on the left, the sharp uptick in global average temperatures since industrialisation (the "Hockey Stick"). It also shows, on the right, the human driver 'signal' strongly emerging from the natural variability 'noise'.
It has been well demonstrated that, from the early days of the movement to address AGW and decarbonise the global economy, AGW deniers and dismissives have often attacked both the IPCC and the Hockey Stick. They seem to think that if they can discredit the Hockey Stick, or the climate scientists who present it, then the whole of AGW theory and evidence will somehow crack and fall aside, leaving their beloved fossil fuel industry to continue polluting the world. They don't seem to be concerned about the impacts their actions (and those of the fossil fuel industry they are defending) are having on the legacy we are collectively leaving to following generations.
Eric says things like "Challenging the Michael Mann Hypothesis".
That's clearly an attempt to make the argument personal. Eric seems to ignore the fact that it is not 'Mann's hypothesis', and that the work Mann did has been independently recreated and verified by many other climate scientists. Also, it sits alongside many other lines of evidence supporting the IPCC's conclusions about AGW.
Eric talks about:
"The Dalton Minimum (1790–1830) ... fewer sunspots and lower solar irradiance,... the Maunder Minimum. Understanding these natural cycles is crucial for interpreting long-term climate trends"
That's an example of misinformation. He makes a true statement, but the context in which he makes such comments is clearly meant to imply that, for example, solar variations are driving recent global warming, rather than human activities being the primary driver.
Eric claims:
"Historical data shows that climate variations before 1850 closely resemble those after 1850. This contradicts Michael Mann’s claim"
However, the data he refers to in such a statement is single monitoring site data. He does not present any statistically significant data or analysis about average global temperatures. He seems to be deliberately conflating the local with the global, which is a major flaw in his argument. This has been pointed out to him by many responders. He has not addressed the deficiency they have highlighted.
Eric claims:
"... the IPCC's Northern Hemisphere model does a poor job at representing Prague’s climate trends."
That is another example of Eric conflating the local and the global, without presenting any analysis that could legitimately make the connection between the two.
It also illustrates another deficiency in Eric's argument. Climate models (of which there are many, not just one) are developed and run by scientists. The IPCC does not operate any climate models; it assesses and reports on the models run by many independent climate scientists around the world. This is an example of Eric attempting to demonise and discredit the IPCC.
Another example. Eric says:
"Medicine Hat, Alberta... Located in the middle of the prairies and away from major urban centers, Medicine Hat is less influenced by the urban heat island effect."
"It's not us, it's the Urban Heat Island effect that is causing the measured temperature anomaly" is a frequently seen anti-AGW trope. It is debunked here, at skepticalscience.com:
Does Urban Heat Island effect exaggerate global warming trends? https://skepticalscience.com/urban-heat-island-effect.htm
from which:
"The Urban Heat Island Effect (UHI) is a phenomenon whereby the concentration of structures and waste heat from human activity (most notably air conditioners and internal combustion engines) results in a slightly warmer envelope of air over urbanised areas when compared to surrounding rural areas. It has been suggested that UHI has significantly influenced temperature records over the 20th century with rapid growth of urban environments. Scientists have been very careful to ensure that UHI is not influencing the temperature trends."
Eric says:
"I fully agree that a single station cannot be used to make global projections."
That statement is somewhat bizarre, given that Eric repeatedly attempts to claim, or imply, something about global warming from individual temperature monitoring sites, ignoring the criticisms from respondents.
He goes on:
"However, I can highlight differences between the IPCC’s global projections and data from key long-term stations. Since 2013, I have compiled a database of nearly 50,000 stations and am now focusing on analyzing those with records predating 1850. My goal is to conduct straightforward statistical analyses and share my findings—you are welcome to examine the data."
He introduces the term "key long-term stations" without explaining what he means by it. It is likely that he is using the term as a means to exclude data from many other monitoring sites. This matters from the perspective of statistical analysis: if his criteria for excluding other monitoring sites are not transparent and justifiable, he can generate any desired result from his statistical analysis (a minimal illustration of this is sketched below). This is why scientific peer review is important - to challenge and debunk flawed statistical methods and the claims that come from them.
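To illustrate that point, here is a minimal sketch in Python (my own illustration, on entirely synthetic data - not Eric's code, stations or method) of how an undisclosed "key stations" filter can make the aggregated trend look very different from the trend that every station in the pool actually shares:

```python
# Synthetic illustration only: 500 fake stations, all sharing the same true
# warming trend, differing only in local baseline and year-to-year noise.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
true_trend = 0.01  # degrees C per year, shared by every synthetic station

def station_series():
    offset = rng.normal(10.0, 5.0)                 # local baseline climate
    noise = rng.normal(0.0, 0.8, size=years.size)  # local year-to-year variability
    return offset + true_trend * (years - years[0]) + noise

def fitted_slope(series):
    return np.polyfit(years, series, 1)[0]         # OLS slope, degrees C per year

slopes = np.array([fitted_slope(station_series()) for _ in range(500)])

# Transparent approach: report the trend across all stations.
print(f"All 500 stations:            {slopes.mean() * 100:+.2f} degC per century")

# Opaque approach: quietly keep only 25 undisclosed "key" stations -
# here, the 25 with the weakest fitted trends.
key = np.sort(slopes)[:25]
print(f"25 cherry-picked 'key' ones: {key.mean() * 100:+.2f} degC per century")
```

The point is not that Eric necessarily does exactly this; it is that without transparent, stated-in-advance selection criteria, readers cannot tell an honest sample from a cherry-picked one.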
An example of what Eric claims from looking at a single site:
"My standard data reduction process includes plotting raw data, generating histograms of monthly average temperatures, and computing yearly minimum (winter), maximum (summer), average, and median temperatures. These methods help capture long-term trends and natural variability"
The main flaws in his stated standard process are that:
1) he assumes (or implies) that the data movements he is tracking are "natural variability" - he does not set out to track human drivers of change. This is perhaps inevitable in his approach, because a single site cannot say anything about AGW, which is a global phenomenon: greenhouse gases mix globally, wherever human activities emit them.
2) natural variability at individual sites cannot say anything about natural variability at a global level - he is essentially confusing weather with climate, and local with global, something he does repeatedly.
3) each time he graphs temperature data from an individual station, he seems to stop at 2013. There has been significant global warming since 2013. It seems unlikely to be a coincidence that 2013 is at the tail end of the well-known temporary "global warming hiatus" as per:
https://en.wikipedia.org/wiki/Global_warming_hiatus
See the GIF below, which shows why stopping a temperature chart in 2013 is misleading: the strongly rising trend in global warming recommenced from 2013 to 2024, after the so-called "hiatus" from 1998 to 2012.
That second point is clearly illustrated by Eric's comments in the very same post:
"At Prague-Klementinum, a clear trend emerges: winters are warming at 1.21°C per century, while summers are warming at only 0.148°C per century. This pattern suggests the climate is becoming less extreme, shifting toward a more temperate state—where, eventually, the ice caps may disappear."
Here, he is clearly making an unsubstantiated leap from data at a local temperature monitoring station to a claim about global climate trends ("climate is becoming less extreme"). Such an inference is false. He tries to bluff his way from the local to the global by appealing to "data" and "analysis" - his argument is a form of pseudoscience.
Occasionally, he picks a temperature station where the rate of average temperature rise is high - faster than in one of the global temperature datasets used by climate scientists. For example (for a single cherry-picked station): "Overall Warming Rate: +0.886°C per century (R² = 52%), compared to the HadCRUT5 model’s +0.65°C per century (R² = 71%)".
This does not get him off the hook. The core rebuttal of his line of reasoning is that it's not possible to infer anything about AGW, a global phenomenon, from any individual temperature measurement station. Perhaps Eric sometimes picks a fast-warming station hoping that a responder will say "aha - that fast-warming station shows that AGW is real and climate scientists are right!", so that he can then say "so, you agree that my approach of looking at individual stations is acceptable and says something about AGW!"
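For readers wondering what lies behind numbers like "+0.886°C per century (R² = 52%)", here is a minimal sketch (my own illustration on synthetic data, not Eric's code) of how a per-century warming rate, its R² and - crucially - the uncertainty range that Eric never quotes can be computed from a station's annual mean temperatures:

```python
# Minimal sketch: fit a linear trend to one station's annual mean temperatures
# and report the slope per century, R-squared, and a rough 95% confidence
# interval on the slope. Synthetic data for illustration only; a real analysis
# also needs homogenisation, gap handling and, for any global claim, many
# stations combined with proper area weighting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1900, 2014)            # stopping at 2013, as Eric's charts do
annual_mean = 9.0 + 0.0089 * (years - 1900) + rng.normal(0.0, 0.5, years.size)

fit = stats.linregress(years, annual_mean)
slope_per_century = fit.slope * 100
r_squared = fit.rvalue ** 2
ci95_half_width = 1.96 * fit.stderr * 100

print(f"Trend: {slope_per_century:+.2f} ± {ci95_half_width:.2f} °C per century "
      f"(R² = {r_squared:.0%})")
```

Even for a single well-behaved record the confidence interval is typically a few tenths of a degree per century, so quoting a slope and an R² with no uncertainty conveys false precision - and none of this touches the separate problem that one station says nothing about the global average.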
Another example from one of Eric's other posts:
"The WIEN_HOHE_WARTE station in Vienna, Austria, has been recording temperature data since 1774, offering crucial insights into pre-industrial climate trends and long-term climate variability. As Martin F. Hock observed, "This nicely reflects the Dalton Minimum of sunspots... " ... Additionally, historical records indicate year-to-year temperature fluctuations of up to 2°C (marked by blue dots), emphasizing the significant role of natural climate variability in temperature changes."
Note the use of the phrase "climate variability" and reference to "Dalton Minimum of sunspots".
He also uses the expression "temperature changes" in an ambiguous way in this text. He might be deliberately using such phrasing to conflate temperature fluctuations due to natural variability and the global average temperature anomaly (the change since industrialisation).
In some of his posts, Eric implies a lack of correlation between global atmospheric CO2 concentrations and global average temperatures. That claim is debunked here:
https://skepticalscience.com/co2-temperature-correlation.htm
It will be interesting to see how many of the anti-AGW tropes listed at skepticalscience.com Eric will be trotting out over the coming months:
https://skepticalscience.com/argument.php?f=percentage
People might want to tick them off against that list as and when they appear in Eric's posts.
He is also Gish-galloping - posting his disinformation so frequently (sometimes more than once per day) that it becomes difficult for readers to keep up with, fact-check and rebut all his unsubstantiated claims.
https://en.wikipedia.org/wiki/Gish_gallop
He also ignores rebuttals and criticisms pointing out the flaws in his argument.
The Gish galloper hopes to wear down the opposition, knowing that not all of their posts, arguments, claims or inferences will be rebutted. They can then claim "Aha - see - nobody has rebutted this particular point I posted!", even if that is only one point and hundreds (even thousands) of their other points have been rebutted.
In another post, Eric says:
"This is the best comment I have seen to date:
"I am fully convinced that climate change is real. I am also fully convinced that it has always existed and has always been actively changing. However, I am convinced that the idea of human-caused change as presented by Al Gore is, in fact, a hoax—designed to generate fear and, in turn, generate profits." (David Harvey - Los Alamos National Laboratory)"
I think that post says a lot about Eric's views about AGW.
In March 2025, Eric posted more information about his pseudoscientific approach to analysing temperature data.
I asked him:
"Please provide the sampling approach in more detail, the list of sites used in your analysis, and the ones excluded (with reasons), the data, and the statistical and other analyses and calculations, and your findings/ conclusions, for fact checking purposes. Better still, submit a paper, with all these elements in it, to a scientific journal, for peer review and publication. That's the way science works."
His response was:
"It's on the way!"
However, so far, he has posted incomplete details in multiple LinkedIn posts, not a single, complete paper.
I reproduce here some of the relevant exchanges between us.
Eric's first description of his analysis approach:
"Steps to Create a Global Temperature Curve Using Selected Stations
Identify and Gather Data – Select weather stations with more than 90 years of recorded temperature data.
Generate Yearly Statistics – Run a script to compute yearly minimum, maximum, average, and median temperatures.
Smooth the Average Curve – Apply a smoothing algorithm and anchor the curve to HadCRUT5 at the year 1980.
Data Verification – Review the processed data; assistance is needed for this step. Participants are encouraged to use Google Sheets for sharing and collaboration.
Fit Trend Lines – Compute a straight-line fit for each station and determine the slope and R² goodness-of-fit.
Validate Data – Plot residuals and trend curves for selected stations. Which stations do you find most interesting?
Filter Data – Cross-plot R² vs. slope and use it as a filtering criterion to reject unreliable data.
Create Preliminary Curves – Select a subset of stations and generate an average temperature curve.
Evaluate Results – Assess whether the mathematical approach is sound and whether the results make sense.
Run Full Analysis – Execute the script on all selected stations and visualize the complete dataset.
Final Integration – Combine the generated curve with the HadCRUT5 dataset and publish the final result for broader review.
Feedback Requested:
Does this workflow make sense? Are there any steps that need clarification or improvement? Data and software will be available upon request."
My responses to Eric:
Your proposed approach includes:
1) " Select weather stations with more than 90 years of recorded temperature data."
As explained at:
https://science.nasa.gov/earth/climate-change/the-raw-truth-on-global-temperature-records/
"Scientists have been building estimates of Earth’s average global temperature for more than a century, using temperature records from weather stations. But before 1880, there just wasn’t enough data to make accurate calculations, resulting in uncertainties in these older records."
Your approach of focussing on temperature stations with a longer track record will inevitably result in bigger errors/uncertainties in your results. Do you calculate uncertainties in your analysis?
2) "Apply a smoothing algorithm"
What algorithm, specifically? Why select that one? What error/uncertainty ranges will apply?
3) "Which stations do you find most interesting?"
That falls foul of the deficiency that inferences about AGW at global level cannot be made from looking at individual stations.
4) "Fit Trend Lines – Compute a straight-line fit for each station... Filter Data – Cross-plot R² vs. slope and use it as a filtering criterion to reject unreliable data."
You are suggesting using linear regression and best-fit.
Take note of Jarvis (2024) "Estimated human-induced warming from a linear temperature and atmospheric CO2 relationship":
https://www.nature.com/articles/s41561-024-01580-5
from which the attached shows the impacts of human influence on warming ("HIW").
Care is needed in using linear regression, as most scientists think the warming response to human drivers is non-linear when all significant feedbacks are included. However, you might find this interesting, from Jarvis (a schematic of this kind of regression, on synthetic data, is sketched after this list):
"Linearity between increases in atmospheric CO2 and temperature offers a framework ... producing human-induced warming estimates that are at least 30% more certain than alternative methods. Here, for 2023, we estimate humans have caused a global increase of 1.49 ± 0.11 °C relative to a pre-1700 baseline."
5) "Select a subset of stations ... Combine the generated curve with the HadCRUT5 dataset... visualize the complete dataset... publish the final result for broader review"
Why only HadCRUT5?
Why not include the many other sources of global average temperature data?
Why not include hypothesis testing for one or more hypotheses of your choice?
Are you intending to try to establish what proportion of global warming is caused by natural variability and what proportion is human-driven? That is what the IPCC has already reported on in the attached (from AR6 WG1), in which the conclusion is that almost all warming since industrialisation is from human drivers. You could compare your results with this. But are you using data that will enable such a comparison to be made, and statistically justified?
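As a footnote to point 4 above, here is a schematic (my own sketch on synthetic numbers, not the code or data from Jarvis (2024)) of the kind of linear CO2-temperature regression the paper describes, in which the warming attributable to the CO2 rise is read off the fitted slope together with an uncertainty:

```python
# Schematic of a linear CO2-temperature regression of the kind Jarvis (2024)
# describes. All numbers are synthetic and for illustration only; see the
# paper for the real method, data and uncertainty treatment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
co2_ppm = np.linspace(315.0, 420.0, 65)   # roughly the span of the Mauna Loa era
anomaly = 0.01 * (co2_ppm - 285.0) + rng.normal(0.0, 0.1, co2_ppm.size)

fit = stats.linregress(co2_ppm, anomaly)

co2_now, co2_preindustrial = 420.0, 285.0
delta_co2 = co2_now - co2_preindustrial
human_induced_warming = fit.slope * delta_co2   # warming attributed to the CO2 rise
ci95 = 1.96 * fit.stderr * delta_co2

print(f"Synthetic human-induced warming estimate: "
      f"{human_induced_warming:.2f} ± {ci95:.2f} °C")
```

The output here is meaningless (the input is invented); the point is simply that this approach yields an attribution estimate with an explicit uncertainty, which is exactly what Eric's station-by-station posts never provide.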
"Linearity between increases in atmospheric CO2 and temperature offers a framework ... producing human-induced warming estimates that are at least 30% more certain than alternative methods. Here, for 2023, we estimate humans have caused a global increase of 1.49 ± 0.11 °C relative to a pre-1700 baseline."
5) "Select a subset of stations ... Combine the generated curve with the HadCRUT5 dataset... visualize the complete dataset... publish the final result for broader review"
Why only HadCRUT5?
Why not include the many other sources of global average temp data?
Why not include hypothesis testing for one or more hypotheses of your choice?
Are you intending to try to establish what proportion of global warming is caused by natural variability and what proportion is human-driven? That is what the IPCC has already reported on in the attached (from AR6 WG1), in which the conclusion is that almost all warming since industrialisation is from human drivers. You could compare your results with this. But are you using data that will enable such a comparison to be made, and statistically justified?
With reference to his second description in a separate LinkedIn post:
https://lnkd.in/gNBbCh4B
Your data analysis approach ("Steps to Create a Global Temperature Curve Using Selected Stations") addressed:
6) "Identify and Gather Data – Select weather stations with over 100 years of recorded temperature data. The initial dataset includes 325 stations worldwide, primarily in the Northern Hemisphere."
You've already said that you trust older temperature measurements more than younger ones. That will introduce larger uncertainty ranges into your analyses and will skew the data compared with the full data set from all stations. And you admit that the initial data set is mostly located in the Northern Hemisphere, which builds geographic bias into any analysis you undertake.
7) "Focus on Pre-1980 Data – For recent data, we will use the University of Alabama (Huntsville) Northern Hemisphere (Land) dataset."
As noted above, leaning on older measurements introduces larger uncertainty ranges and skews the data compared with the full data set from all stations. Why exclude data after 1980? There has been rapid global warming since 1980 - see the "Global Warming Hiatus" chart included above.
8) "Review and Contribute – If you have a preferred station, let me know. All I need are time-series data with daily or monthly temperature values in degrees Celsius, and I can add it to the list."
Many responders have pointed out that picking a single station to focus on does not say anything about human-driven global warming ("AGW"). You have not responded to that rebuttal of your approach. More details here:
https://lnkd.in/g_wfawep
9) Adding reviewers' suggested stations to the list you analyse and chart will not produce a valid sampling method for statistical analysis. What you will get is a data set skewed towards the preferences of whichever reviewers happen to make suggestions - a clear case of selection bias, if ever there was one.
My conclusion:
Your approach is an example of "pseudoscience" and looks designed to be misleading. Not only that, but you seem to be encouraging others to engage in, and to also carry out, your brand of pseudoscience.
As a footnote on this, Eric said recently in a comment about his analysis:
" [re-] significant warming trend since the 1980s. For our purposes, uncorrected temperatures are sufficient." [by uncorrected, you mean raw temperature data, without corrections for known errors or biases]
My response was:
Really?
What purposes do you have for the analysis you are doing, if you are not going to correct for known errors and biases?
Eric posted the following a day later:
"Regression Analysis on 319 Meteorological Stations:
As an initial step, I conducted a simple linear regression analysis on all 319 station datasets, as shown in the chart below ["R squared versus temperature gradient [ie temperature change versus pre-industrial]"]. Most stations exhibit a positive slope, while 14 stations show a negative trend.
Notable Findings:
Frostburg stands out among the negative trends (previously posted).
Jakarta, despite having a slope of less than +2°C per century, had the highest R² (goodness of fit) among all stations.
Parry Sound, Ontario, Canada, recorded the steepest warming trend of all stations.
From my analysis, all stations appear to provide valid data.
Now, the question is: What happens when we aggregate them all?"
The main problems with Eric's approach:
1) No mention of the basis of selection of the 319 temperature stations. Selection bias and geographical bias are almost certain to exist in his data set, because his stated approach includes no random sampling from the tens of thousands of known temperature measurement stations; instead, he selects stations with data "that looks interesting".
2) No mention of the dates covered or the number of data points for each station. Eric has previously said he trusts older data more than recent data, so he is probably using data from before the recent significant increases in the rate of rise of the global temperature anomaly. Any results will therefore not show a complete picture of the modern era, when most human-driven warming has occurred.
3) He uses raw, unadjusted data, which introduces known inaccuracies that climate scientists have researched and corrected for.
4) His claim that "all stations appear to provide valid data" is not substantiated by anything in his post. It is also irrelevant if the selection criteria for the 319 stations introduce bias.
5) He gives no indication of uncertainty/error ranges in his analysis and results.
6) The answer to his question "What happens when we aggregate them all?" is an invalid result, whatever it shows. Perhaps the only thing it can show is what a statistically invalid approach can produce (the sketch below illustrates the geographical-bias part of the problem).
7) His post is just a more elaborate example of "JAQing" than usual.
His post is clearly an example of pseudoscience disinformation.
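To illustrate the geographical-bias point in 1) and 6) above (again my own sketch, on synthetic data, not Eric's method): when most of the stations in a "global" average sit in the Northern Hemisphere, a plain average of station trends is not an estimate of the global trend. Gridded datasets such as HadCRUT5 deal with this by averaging within grid cells and weighting by area (roughly, by the cosine of latitude), which a simple station average does not do. The 90% Northern Hemisphere fraction and the trend values below are assumptions chosen purely for illustration.

```python
# Synthetic illustration of geographic bias: in this toy world the Northern
# Hemisphere warms faster than the Southern, and 90% of the sampled stations
# (an assumed figure, exaggerating Eric's NH-heavy list) are in the north.
import numpy as np

rng = np.random.default_rng(3)
n_stations = 319
in_north = rng.random(n_stations) < 0.9
lat = np.where(in_north,
               rng.uniform(20.0, 70.0, n_stations),     # NH station latitudes
               rng.uniform(-60.0, -10.0, n_stations))   # SH station latitudes
trend = np.where(lat > 0, 1.2, 0.8) + rng.normal(0.0, 0.2, n_stations)  # °C/century, synthetic

naive = trend.mean()   # what a plain average of station trends gives

# Crude area weighting: average within 10-degree latitude bands, then weight
# each occupied band by the cosine of its central latitude.
band_edges = np.arange(-90, 91, 10)
band_means, band_weights = [], []
for lo, hi in zip(band_edges[:-1], band_edges[1:]):
    in_band = (lat >= lo) & (lat < hi)
    if in_band.any():
        band_means.append(trend[in_band].mean())
        band_weights.append(np.cos(np.deg2rad((lo + hi) / 2.0)))
area_weighted = np.average(band_means, weights=band_weights)

print(f"Plain station average:  {naive:.2f} °C per century")
print(f"Area-weighted average:  {area_weighted:.2f} °C per century")
```

Neither number means anything physically (the data are invented); the point is that, with an unrepresentative station list and no weighting, the headline figure is driven by where the stations happen to be, not by what the planet is doing.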
Summing up
I could go on, citing and debunking the many claims or obvious inferences in many more of Eric's other posts.
Instead, I'll just summarise.
Eric appears to be promoting a narrative around:
- Being dismissive of the scientific evidence supporting AGW, and of the climate scientists who produce it
- Attacking the IPCC
- Using a pseudoscience approach
- Focussing on data from individual temperature monitoring sites and making unsubstantiated claims or inferences about AGW from them
- Using "JAQing"
- Failing to respond to many respondents who point out the obvious flaws in his claims and inferences
- Using well-worn anti-AGW tropes, sprinkling them liberally through his posts
- Encouraging respondents who express anti-AGW opinions
It's difficult to avoid the conclusion that Eric is an anti-AGW disinformation propagandist.