I have pointed out similar discrepancies quite a few times but never received a satisfactory response. Nowadays I am almost afraid to even point out such discrepancies in other data. I can share another example if anyone's curious.
Are you familiar with the blog by Tony Heller (he's also on YouTube)? He regularly makes this exact same point.
I am very careful to wait until other people confirm that there is indeed a methodological problem before jumping to conclusions, but it does sound very interesting.
I read Heller's blog. I've also done a bit of double-checking of his work, because what he keeps uncovering is indeed extremely interesting and, I feel, really should have been discussed in popular climate journalism before now.
I want to check more of his claims in future. But for now, I only checked how many data points in the NOAA US temperature dataset are now the output of model simulations and not actual measurements:
I verified the "Percent of USHCN Monthly Temperature Data which is fabricated" graph by downloading the source dataset and writing some scripts to process it. I was able to replicate his numbers. It's alarming that over half of the reported temperature readings in the USA are not readings at all, but estimates produced by a computer program.
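For anyone who wants to repeat the check, here is a minimal sketch of the kind of script involved. It assumes the fixed-width layout described in the USHCN readme: an 11-character station ID, a year, then twelve 9-character monthly fields, each a 6-character value followed by three flag characters, where a DMFLAG of "E" marks an estimated (infilled) rather than measured value and -9999 marks a missing month. The column offsets and flag meanings should be re-verified against the readme shipped with the dataset before trusting any numbers.

```python
def percent_estimated(lines):
    """Return the percentage of non-missing monthly values whose
    DMFLAG is "E" (estimated rather than directly measured)."""
    total = estimated = 0
    for line in lines:
        for i in range(12):
            # Monthly fields assumed to start at column 17 (index 16),
            # 9 characters each: 6-char value + DMFLAG + QCFLAG + DSFLAG.
            field = line[16 + 9 * i : 16 + 9 * (i + 1)]
            if len(field) < 7:          # truncated line: skip the field
                continue
            value, dmflag = field[:6].strip(), field[6]
            if value == "-9999":        # missing month: not counted
                continue
            total += 1
            if dmflag == "E":           # flagged as estimated, not measured
                estimated += 1
    return 100.0 * estimated / total if total else 0.0

# Hypothetical record: 6 raw months and 6 estimated months -> 50%
line = "USH00011084 2014" + "  1234   " * 6 + "  1234E  " * 6
print(percent_estimated([line]))   # -> 50.0
```

In a real run you would feed this every line of the monthly file and track the percentage per year to reproduce the graph's time series.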
I also spent some time reading the source code of the programs that produce those estimates, but wasn't impressed. It's very old FORTRAN with no unit tests of any kind. There are code comments giving the basic gist of what it's doing, but it's clear the code has been grown and patched ad hoc since the 1980s. There's nothing resembling real software engineering.
However there are also reasons to not be alarmed.
Some years ago climate skeptic bloggers started pointing out extensive problems with the network of weather stations used to calculate US temperatures. It had degraded over time as measuring stations dropped out, moved, had things built next to them etc. This in turn led to climatologists making extensive adjustments to the raw data, which is a very sketchy thing for scientists to be doing.
Congress agreed there was a problem and released funding to build a pristine new temperature network, which started operating some years ago. The skeptic bloggers agreed that the design of this new network was excellent, and the good news is the output of it matches the adjusted output of the old network (in fact, the adjusted old network measurements are still being used as canonical, which is a bit weird). So it seems that the current set of adjustments is not a problem even though it looks alarming.
The bad news is that this is partly because the adjustments made to modern measurements are quite small compared to the extent to which historical temperatures have been adjusted. Heller has been investigating the TOB adjustment in particular, which is near non-existent now but massively alters past readings. It's based on the belief that for most of the 20th century weather station operators didn't know how to properly use a min/max thermometer and that this methodological failure was never documented in primary sources, but has to be inferred from the recorded data.
I haven't made up my mind about these adjustments yet and remain in the neutral "they're probably OK" position. TOB is one but there are many others. The issue for Heller is not only methodological but also "where there's smoke there's fire", that is, the adjustments might be correct but everything around them is extremely suspicious, starting with the fact that - as this thread shows - climatologists keep adjusting even very recent data. How hard can it possibly be to read a thermometer and write down the numbers? Apparently, very hard.
I think you and the parent commenter should create some kind of blog confirming all this. It makes a huge difference to go from one guy (Tony Heller) blogging about this to multiple independent people confirming the findings.
It also brings different points of view on the same issue (data-adjustment methodology and code quality), which is always a good way to sharpen the arguments in the debate.
I really thank you for your comment, and for the time you took to honestly assess the current situation. The one thing I still have a hard time accepting is that a whole community of scientists accepts doing science on such a fragile basis (such as the fact that more than three quarters of the world wasn't recording temperatures until 50 years ago).
It seems impossible that so many people keep doing their work on fragile data (and code, apparently), while at the same time seeing their work used by politicians all over the world to advocate massive policy changes.
I still haven't reached a personal conclusion, but I would be glad to read about the progress made by "hands deep in the dirt" people like you in their investigations.
> It seems impossible that so many people keep doing their work on fragile data (and code, apparently), while at the same time seeing their work used by politicians all over the world to advocate massive policy changes.
Mmmm. Surely it's the other way around: this is the expected, and indeed the only possible, outcome.
I was in a different HN thread this week where I pointed out that there are some conclusions that might be right but which some sections of academia institutionally cannot reach, conclusions like:
1. We don't know enough to make predictions in this field.
2. Our datasets are inadequate for use.
3. Our research is unimportant and doesn't need to be done.
Note that commercial research can easily reach any of these conclusions; that's the function of senior management who are motivated by some fundamental ground truth goal rather than research for the sake of it.
Climatology is almost entirely driven by academia and other government institutions. They cannot reach a conclusion like, "old temperature datasets are of too low quality to derive models from" because then they'd invalidate the basis of their own careers. My impression is that climatologists have few transferable skills. Perhaps it's just small sample sizes, but it seems like their maths skills aren't really "hard" enough to outcompete physicists, mathmos or CS profs for jobs in finance or other exit routes.
Given that climatology has a single main theory (global warming), and that theory is based primarily on a single dataset (temperature), problems with that dataset have to be addressed by adjusting the data. Otherwise what's left for the field to do?
But that's the thing: there is definitely a science of climate modeling, but it's probably in its infancy. Someone brave enough to correctly assess the limits of the field should be recognized within the field itself. It's similar in spirit to the "reproducibility" issues in biology research.
It's because anyone who tries to point out holes in the claims gets demonized with a label ("denier") and loses all grants and funding. Even the IPCC (and therefore the UN) no longer provides grants to anyone who expresses skepticism.
> In honor of the late AUGIE AUER, Professor of Atmospheric Science at the University of Wyoming, Chief Meteorologist for the MetService, co-founder of the NZ Climate Science Coalition and much-loved scientist of the highest integrity, members of the Coalition have established a fund now totaling $10,000 to be granted to the first applicant to present real-world evidence showing that the man-made fraction of airborne carbon dioxide causes dangerous global warming. Professor Augie Auer was appalled that climate scientists should be denied funding and be branded ‘deniers’ unless they submitted to the political global warming narrative and renounced the ancient principle that science is never settled. Those refusing the NZCSC’s request for evidence of dangerous man-made global warming include the IPCC, the Royal Society, the Royal Society of New Zealand, the NZ Ministry for the Environment and Professor James Renwick. To the first person who proves what they cannot, we offer this prize in the name of our late, incorruptible colleague.
I don't think it's maintenance starved exactly. It seems more like a priorities issue.
One issue Nic Lewis has repeatedly highlighted is how many climatology papers contain subtle statistical errors. He finds them, papers get retracted or adjusted, but his more general point doesn't land: that climatologists should be collaborating more with professional statisticians. Arguably that's true of quite a few scientific fields, but climatology is by now much higher-impact than even economics or psychology.
They could also easily collaborate with computer scientists and professional software engineering firms. Many software specialists would love to help I'm sure, even do it pro bono. But climatologists see no reason to engage such people: it seems they don't even realise their work falls short of the highest standards, or they don't care. That then opens up routes for skeptics to attack them, because surely a field whose work is used to justify such massive public spending and policy goals should have the highest standards?
A lot of stuff is just footgun shooting. Lewis investigated some code for a model that underlies a lot of climate papers published over the past 20 years. It was hard to get and then turned out to be written in a proprietary programming language of the sort where you can't even find out how much it costs: Lewis tried and the firm that makes it wouldn't even get back to him. He started on a project to rewrite the code in R, but it's apparently a huge effort. The NOAA code is at least in FORTRAN. The whole field gives off a strong vibe of "here are my results, programming is easy, trust us that we got it right". Of course professional programmers know programming isn't easy at all.
Tony Heller is a well-credentialed denier[1], has been in the game since at least 2008, and is at least good friends with the deniers who get oil money to keep going[2]. I estimate his honesty as comparable to that of Giuliani post-2017.
Simply coming up with a demonizing label for him ("denier") and applying that label as a cudgel to anyone who even tries to express skepticism is not how you should make arguments.
> good friends with the deniers
What does that have to do with anything, when he's working from raw data published by the scientific institutions themselves, and anybody has the ability to point out what he's doing wrong? If you can't point out holes in his data, then I don't think you are acting in good faith.
Also, political attacks don't belong in a civil debate about science.
This isn't a debate about science; it's a political debate about whom to trust, as you really ought to know if you've paid any attention to the climate debate.
It's ironic that you call Heller's work "political", since one of the arguments he makes is that the climate science field became completely crippled once politicians started sticking their noses into it.
It's also a huge problem that once a scientific theory (man-made climate change due to CO2) becomes "mainstream" and is labelled "official", you can't oppose it without immediately having all kinds of people with various interests supporting you, sometimes for bad reasons. It's unfortunately inevitable.
Worth reading how Dr. James Hansen and Senator Timothy Wirth (who has been president of the UN Foundation for many years now) sabotaged the air conditioning back in 1988 and opened the windows to make the hearing room appear hotter for the cameras:
I didn't bring in politics; I simply cited scientific sources and old newspapers. If anyone can point out what's wrong with them, then I'm open to a debate. The problem is that people have made this into a political debate, because there's a lot of money involved in climate-change policy making.
I’d be very interested in another example. It’s not too often we can hear evidence against AGW so it’s worth the downvotes that it will inevitably bring.
One of the major claims Dr. James Hansen made in his 1988 testimony (which led to the creation of the IPCC) was that the Arctic sea ice was melting and would disappear. He made that prediction on June 22, 1988, and when asked about it again in 2008, he said he was still sure it would happen.
The data shows the exact opposite of the claims that were made then, and that are still made by the media nowadays: the ice has gotten thicker, with more volume and area.
You can confirm this from another source by using the data provided here too:
If you hadn't used the word "magic" in your post, I'd perceive it as neutral/curious.