Crash course on climate change

Posted February 9th, 2013 in Science.

Before the Industrial Revolution in the 1800s, the concentration of carbon dioxide (CO2) in the atmosphere was about 280 parts per million (ppm). This means that for every million molecules in the atmosphere, about 280 of them were CO2.

However, climate.nasa.gov shows that we’ve burned so much coal and oil that atmospheric CO2 is now approaching 400 ppm. It hasn’t been this high for millions of years. The last time Earth’s atmosphere had this much CO2, our species (and many others) hadn’t yet evolved.

Scientists performed experiments in the 1800s showing that CO2 absorbs infrared (heat) radiation while letting visible sunlight pass through, and concluded that increasing CO2 would warm the planet. More recently, scientists have studied the ancient climate by analyzing rocks, fossils, and gas bubbles trapped in ancient ice. They repeatedly got the same answer: increasing CO2 has amplified the ice age cycle, caused temperatures to spike during the end-Permian extinction and the Paleocene-Eocene Thermal Maximum, and even thawed Snowball Earth.

If the Sun got brighter, that would also warm the planet. However, satellites haven’t seen a significant change in the Sun’s brightness since 1950. In fact, scientific studies accounting for many natural factors tend to suggest that the Earth would have cooled slightly since 1950 if we hadn’t burned so much coal and oil.


What have we observed?

  • The Earth’s surface is warming. Satellites confirm that the lower atmosphere is also warming, and thousands of Argo floats confirm that the deep ocean is also warming.
  • Glaciers all over the world are melting and sliding into the ocean. The GRACE satellites have shown that Greenland and West Antarctica are losing ice at an accelerating rate.
  • Arctic sea ice is melting so quickly that the Arctic ocean could be essentially ice-free in September by 2030. This would briefly expose most of the Arctic ocean to the atmosphere for the first time in thousands of years. (See this comment.)

Conclusion: The world is warming, and our skyrocketing CO2 emissions have very likely caused most of the global warming since 1950. That’s why 13 national science academies signed a joint statement in 2009 telling world leaders that “the need for urgent action to address climate change is now indisputable.” Sadly, we’re still pumping out CO2 at least 10 times faster than during the previous record high, which was set 250 million years ago… right before the end-Permian extinction.

What should we expect?

  • Global warming makes heat waves and droughts more severe, which combine to increase the risk of wildfires.
  • Precipitation will probably fall in more intense bursts, and wet regions will probably get wetter while dry regions get drier.
  • Hurricanes might not be more frequent, but they will tend to be stronger because warmer oceans provide more energy for a hurricane’s heat engine.
  • By 2100, the oceans will rise by about a meter (or more). This will flood some coastal cities and threaten more by adding onto the storm surges of stronger storms.
  • Warmer oceans evaporate more water vapor into the atmosphere. Global warming is making winters shorter, but the added water vapor allows for more snow. Disappearing Arctic sea ice also weakens the Polar Vortex, letting cold Arctic air spill southwards. Update: See debate.
  • Global warming is happening faster than many species can adapt by migrating or evolving. Unfortunately, the species that will thrive are mainly pests like insects and jellyfish.
  • Some species we like to eat are vulnerable, which is disturbing because in the future there will be more humans to feed. For instance, rice yields drop by about 10% for every 1°C of night-time warming.
  • Our CO2 emissions are also acidifying the oceans, which threatens species like corals that are the foundation of many oceanic ecosystems.

What should we do?

Science can’t answer questions about what we should do, so here I speak as a human being rather than as a scientist.

I think we should reduce CO2 emissions as quickly as possible. For example, we could treat the CO2 waste from coal power plants like waste from nuclear plants. Currently, government regulations force nuclear plants to pay for waste disposal, but coal plants get to treat our atmosphere as a free sewer. Many economists support charging coal plants for their CO2 waste, and returning that money directly to us, the people. This will create jobs by jumpstarting a new industrial revolution based on clean energy, and improve food and water security for future generations.

In the freely-available video series “Earth: The Operators’ Manual,” Richard Alley explains that switching to clean energy will cost about as much as building our sewer system did. I doubt that many people would give up indoor plumbing just to save that money, so I’m baffled that so many people seem willing to risk the water and food security of future generations to save a similar amount. Personally, I think we should try to buy some time for future generations to clean up our mess.


Here’s an index of comments:

  1. TinyCO2 helpfully corrects me regarding the age of permanent Arctic sea ice, and asks questions about the warming, etc.

  2. Mike helpfully corrects me by linking a paper showing an ice free Arctic ocean in the early Holocene, and asks about GRACE and ICESat, aerosol and cloud uncertainties, attributing warming to climate cycles or humans, climate sensitivity, etc.

  3. Pete Ridley supports Beck’s CO2 “record” and asks if I know something that Professor Wolff needs to be made aware of.

  4. I correct “15 million years” after reading more recent research.

  5. LOVO asks me to address some questions.

  6. Sidd asks about Greenland’s missing 2013 melt season.

Last modified December 1st, 2014

42 Responses to “Crash course on climate change”

  1. TinyCO2 posted on 2013-02-11 at 08:06

    Ah, where to start? Perhaps with a question.

    Even if we decide that it is warmer now than the Medieval Warm Period and even the Roman Warm Period and the Minoan Warm Period, it’s not in any doubt that it’s not as warm now as the Holocene Optimum that lasted thousands of years. So how did the Arctic sea ice survive that?

    I mean, we know that the ice was much reduced during the 30s compared to 1979 when satellite records began,

    http://hidethedecline.eu/media/Nautisk/fig16.gif

    And that was at cooler temperatures than now, on our way out of the Little Ice Age, one of the coldest periods of the last 10,000 years. A point where the ice should have been particularly durable.

    Even if you expect more warming between now and 2030, how much is needed to exceed the Holocene? It would need to go some, especially as the UK Met Office sees little to no warming in the next 5 years.

    Doesn’t it seem almost certain that the Arctic must have been in a similar state to current conditions countless times before?

    • I mean, we know that the ice was much reduced during the 30s compared to 1979 when satellite records began, http://hidethedecline.eu/media/Nautisk/fig16.gif

      Those graphs seem popular. Fig. 2(a) from Polyak et al. 2010 shows that the reconstructed Arctic sea ice minimum extent in the 1930s was comparable to that in 1979. More importantly, the modern trend is much steeper:

      [Figure: Fig. 2(a) from Polyak et al. 2010]

      And that was at cooler temperatures than now, on our way out of the Little Ice Age, one of the coldest periods of the last 10,000 years. A point where the ice should have been particularly durable.

      The Arctic was actually warm in the 1930s, as I’ve discussed at the bottoms of these comments.

      I based my “hundreds of thousands of years” claim on Worsley and Herman 1980. They found fossils of photosynthetic organisms in Arctic sediment that lived in the mid-Pleistocene, but not since then. This implies that the ice cover has blocked sunlight even during the summer for hundreds of thousands of years. Clark 1982 criticized this result on page 137:

      “The sediment record as well as theoretical considerations make strong argument against alternating ice-covered and ice-free conditions (Donn and Shaw, 1966; Clark et al., 1980). Although the time of development of the pack ice for the central Arctic Ocean is unknown, to date there is no evidence that precludes a Miocene origin.”

      A Miocene origin would be at least 5 million years old. But your comment prompted me to take a closer look at more recent literature:

      “Arctic sea-ice extent and volume are declining rapidly. Several studies project that the Arctic Ocean may become seasonally ice-free by the year 2040 or even earlier. Putting this into perspective requires information on the history of Arctic sea-ice conditions through the geologic past. This information can be provided by proxy records from the Arctic Ocean floor and from the surrounding coasts. Although existing records are far from complete, they indicate that sea ice became a feature of the Arctic by 47 Ma, following a pronounced decline in atmospheric pCO2 after the Paleocene–Eocene Thermal Optimum, and consistently covered at least part of the Arctic Ocean for no less than the last 13–14 million years. Ice was apparently most widespread during the last 2–3 million years, in accordance with Earth’s overall cooler climate. Nevertheless, episodes of considerably reduced sea ice or even seasonally ice-free conditions occurred during warmer periods linked to orbital variations. The last low-ice event related to orbital forcing (high insolation) was in the early Holocene, after which the northern high latitudes cooled overall, with some superimposed shorter-term (multidecadal to millennial-scale) and lower-magnitude variability. The current reduction in Arctic ice cover started in the late 19th century, consistent with the rapidly warming climate, and became very pronounced over the last three decades. This ice loss appears to be unmatched over at least the last few thousand years and unexplainable by any of the known natural variabilities.” [Polyak et al. 2010]

      After reading this paper, I changed “hundreds of thousands of years” to “thousands of years”. Thank you for correcting me on this point.

      Even if you expect more warming between now and 2030, how much is needed to exceed the Holocene? It would need to go some, especially as the UK Met Office sees little to no warming in the next 5 years.

      I’ve repeatedly failed to explain that trends over short time periods have large uncertainties. That’s why scientists can be uncertain about temperatures 5 years from now and be more confident that 20 year averages of global temperatures will keep increasing as the radiative forcing from our CO2 continues to trap heat:

      [Figure: The Escalator]
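
      To see concretely why short trends are so slippery, here is a minimal R sketch (not from the original analysis) that fits trends over 5-year and 20-year windows of synthetic data; the 0.017°C/year warming rate and 0.1°C noise level are illustrative assumptions, not measurements:

      set.seed(1)
      years <- 1970:2012
      temp  <- 0.017*(years - years[1]) + rnorm(length(years), sd = 0.1)  # steady warming + weather noise

      trend_ci <- function(y, x) round(10*confint(lm(y ~ x))["x", ], 3)   # 95% CI of the trend, deg C per decade

      trend_ci(tail(temp, 5),  tail(years, 5))    # 5-year trend: very wide confidence interval
      trend_ci(tail(temp, 20), tail(years, 20))   # 20-year trend: much narrower interval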

      But the reason I say the Arctic ocean could be essentially ice-free in September by 2030 has little to do with surface temperatures, because they don’t account for the fact that ~90% of the heat from global warming goes into the oceans. This heat can melt the Arctic sea ice from underneath without warming thermometers on the surface. As that ice melts, it absorbs heat without warming at all. That’s why the total heat content in the Earth’s climate (surface, oceans, cryosphere, atmosphere) is a better diagnostic of global warming than surface temperatures alone.

      The actual basis for my 2030 claim is that the Arctic sea ice minimum is clearly shrinking faster than the models predicted, so a simpler approach may be desirable. Extrapolating the lowess smooth of minimum sea ice volume (which is more uncertain but also more thermodynamically relevant than extent) hits zero in just 5 years. Extrapolating far into the future is a fool’s errand, but 5 years isn’t that far. And we still have 17 years until 2030.
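
      The extrapolation itself only takes a few lines of R. The volume series below is a made-up placeholder with roughly the right shape; anyone trying this for real should substitute the actual PIOMAS September minima:

      year   <- 1979:2012
      volume <- 17 - 0.20*(year - 1979) - 0.0065*(year - 1979)^2   # placeholder series, thousands of km^3

      smooth <- lowess(year, volume, f = 0.4)          # lowess smooth of the September minimum volume
      n      <- length(smooth$y)
      slope  <- (smooth$y[n] - smooth$y[n-5]) / (smooth$x[n] - smooth$x[n-5])   # recent slope of the smooth
      smooth$x[n] - smooth$y[n]/slope                  # year at which the extrapolated smooth reaches zero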

      Doesn’t it seem almost certain that the Arctic must have been in a similar state to current conditions countless times before?

      Sure! As noted above, Arctic sea ice only appeared 47 million years ago, after the CO2 from the PETM declined. It’s only consistently covered the Arctic for about the last 14 million years. Now that we’ve increased CO2 to a level not seen in about 15 million years, we’ll get to see what that world looks like.

      Update: See below.

    • Here’s a very informative 6 minute video with Dr. Jennifer Francis, Dr. Julienne Stroeve, Dr. Mike MacCracken and Admiral David Titley:

      [Embedded video]

      Here’s an excellent presentation by Dr. Francis on the possible consequences of this rapid ice loss, and a related 2 minute video.

      Here are a few (loud) 30 second graphics showing Arctic sea ice volume from PIOMAS, which has recently been confirmed by Cryosat-2.

  2. Mike posted on 2013-02-11 at 10:55

    Let me first state that the onus that you place on me to meet a DH4 level or higher response is difficult given that you have not sourced your data for your observations. As an example New Scientist is not an acceptable source to support your observation of Earth’s surface warming. I offer as a way of improving your POV that you offer climate center data set sources (see below) or peer reviewed research if you want higher level feedback on your thoughts.

    http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt

    “The GRACE satellites have shown that Greenland and West Antarctica are losing ice at an accelerating rate”

    I don’t understand why you choose only to consider WAIS and not the entire continent of Antarctica? Secondly the initial GRACE data estimates were wrong and had to be corrected and providing a source would have allowed readers to know which data you were using to support your observation. The revised GRACE data estimates of ice mass balance loss is only about a third to a half of the most recently published GRACE estimates. I exchanged comments with a GRACE scientist just last week on this issue but here is a peer reviewed study on the issue.

    http://www.nature.com/nature/journal/v491/n7425/full/nature11621.html

    Another satellite data set IceSAT states:

    “During 2003 to 2008, the mass gain of the Antarctic ice sheet from snow accumulation exceeded the mass loss from ice discharge by 49 Gt/yr (2.5% of input), as derived from ICESat laser measurements of elevation change.”

    http://ntrs.nasa.gov/search.jsp?R=20120013495

    “Arctic sea ice is melting so quickly that the Arctic ocean could be essentially ice-free in September by 2030. This would briefly expose most of the Arctic ocean to the atmosphere for the first time in hundreds of thousands of years.”

    Again I am frustrated that you provide no published peer reviewed articles on this. The following is one of over a dozen of peer reviewed studies that I have read indicating that the Arctic experienced numerous ice free summer episodes in the early Holocene 6,000 to 10,000 YBP.

    http://adsabs.harvard.edu/abs/2007AGUFMPP11A0203F

    “Conclusion: The world is warming, and our skyrocketing CO2 emissions have very likely caused most of the global warming since 1950.”

    Again you did not show any calculations in how you arrived at your conclusion. While the world is warming, peer reviewed science suggests that the jury is still out on % attribution of man made CO2 to that warming. For instance peer reviewed evidence suggests that, “Quantitatively, the recurrent multidecadal internal variability, often underestimated in attribution studies, accounts for 40% of the observed recent 50-y warming trend.”

    http://depts.washington.edu/amath/research/articles/Tung/journals/Tung_and_Zhou_2013_PNAS.pdf

    Climate scientists admit freely that they still don’t understand what role clouds play in the warming and the same is true with aerosols. Your conclusion that man made CO2 contributes > 50% to warming since 1950, but from the research I’ve seen I am not willing to bet $10,000.00 that it is.

    One issue you didn’t cover which is relevant to your POV is the growing problem of climate sensitivity estimates which are being questioned and revised downward by many high profile climate scientists like Michael Schlesinger. I have been following this closely since 2007 and have communicated with the MET Office directly and I AM prepared to bet $10,000.00 that climate sensitivity estimate will be revised lower within 2 to 10 years. Since GCM scenarios are dependent on climate sensitivity as a key parameter in future scenarios I feel that warming estimates out to the year 2100 will be revised downward and so will rates of climate change progression.

    • More to come, but first:

      Again I am frustrated that you provide no published peer reviewed articles on this. The following is one of over a dozen of peer reviewed studies that I have read indicating that the Arctic experienced numerous ice free summer episodes in the early Holocene 6,000 to 10,000 YBP.

      You’re right about this. I just changed the text to “thousands of years” after writing this comment. Thank you for that link, and thank you for correcting me on this point.

    • New Scientist is not an acceptable source to support your observation of Earth’s surface warming. I offer as a way of improving your POV that you offer climate center data set sources (see below) or peer reviewed research if you want higher level feedback on your thoughts.

      The fact that Earth is warming isn’t just my observation or POV (see below). It’s an elementary fact. This crash course is tagged introductory-science because it was written for Bloko.info, a Spanish newspaper for teenagers. Anyone who wants more rigorously sourced claims should read Abrupt climate change, which is tagged intermediate-science and offers many climate data links. (The index lists the “higher level feedback” I enjoyed after offering these datasets.)

      You linked to the raw UAH satellite dataset which measures the lower atmosphere. I’ve recently analyzed that dataset, using my own code and the excellent WoodForTrees database which allows students to easily explore many datasets.
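
      For students who want to repeat that kind of exercise, here is a minimal R sketch. It assumes the UAH file linked above is whitespace-delimited with Year, Mo and Globe columns plus a few non-numeric summary rows at the end; the layout may have changed, so treat it as a starting point rather than a finished analysis:

      url <- "http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt"
      raw <- read.table(url, header = TRUE, fill = TRUE, stringsAsFactors = FALSE)
      raw <- raw[!is.na(suppressWarnings(as.numeric(raw$Year))), ]   # drop trailing summary rows

      time <- as.numeric(raw$Year) + (as.numeric(raw$Mo) - 0.5)/12   # decimal years
      anom <- as.numeric(raw$Globe)                                  # global lower-troposphere anomaly
      round(10*coef(lm(anom ~ time))["time"], 3)                     # trend in deg C per decade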

      I think New Scientist’s warming world app is a great way for students to quickly see what’s happening to surface temperatures where they live. Don’t miss the map selector at the top of the screen, which colors the map based on 20-year trends. By default it shows 1993-2012, which shows widespread warming that’s more pronounced in the Arctic than the Antarctic. This is consistent with predictions made by climate scientists. Try comparing this map to previous time periods to see how the modern warming compares to that in previous decades.

    • I don’t understand why you choose only to consider WAIS and not the entire continent of Antarctica?

      West and East Antarctica respond differently to warming because of their different geographies. Warming the Earth heats the oceans, which evaporate more water vapor into the atmosphere. Because West Antarctica juts out into the Antarctic Circumpolar Current (ACC), those warming waters are thinning its ice sheet at an accelerating rate. Because East Antarctica doesn’t jut out as much into the ACC, those warming waters don’t mix as much with the cold water around East Antarctica. However, the increased water vapor in the atmosphere still allows for more snowfall. In East Antarctica, this increasing accumulation causes mass gain. Both phenomena are consistent with the physics of our warming planet.

      Surprisingly, West Antarctica is among the most rapidly warming regions on Earth. Its ice sheet is also mainly grounded below sea level, making it more vulnerable to the warming oceans than East Antarctica’s, which is mainly grounded above sea level.

      Secondly the initial GRACE data estimates were wrong and had to be corrected and providing a source would have allowed readers to know which data you were using to support your observation. The revised GRACE data estimates of ice mass balance loss is only about a third to a half of the most recently published GRACE estimates. I exchanged comments with a GRACE scientist just last week on this issue but here is a peer reviewed study on the issue.

      Three authors of that study summarized its results:

      “Our recently published Nature paper (King et al, 2012), used GRACE gravity data to infer Antarctic ice mass trends as in previous work, but with an updated estimate of the GIA correction. Most of the co-authors on our paper were also involved in a much larger study using all three techniques, with results reported in Science (Shepherd et al., 2012) entirely in agreement with our estimate of Antarctic mass change. …”

      Then they explain that GRACE estimates changed (mainly in West Antarctica) because scientists developed a new model of glacial isostatic adjustment (GIA). Since I’ve previously cited Shepherd et al. 2012, I’m “entirely in agreement” with King et al. 2012.


      Last month, I also mentioned that “I study GRACE data which show accelerating ice mass loss in both Greenland and West Antarctica. As do GPS data, laser altimetry, estimates of precipitation minus glacier discharge, etc.”

      Note that “accelerating ice mass loss” links to IMBIE, which is the same study as Shepherd et al. 2012.

      Also, note that this GIA controversy doesn’t change the fact that Greenland and West Antarctica are losing ice at an accelerating rate. I’ve explained that changing the GIA model can only change the trend, not the acceleration.

      Another satellite data set IceSAT states: “During 2003 to 2008, the mass gain of the Antarctic ice sheet from snow accumulation exceeded the mass loss from ice discharge by 49 Gt/yr (2.5% of input), as derived from ICESat laser measurements of elevation change.”

      Note that Zwally’s abstract repeatedly refers to accelerating ice flow in West Antarctica and says “A slow increase in snowfall with climate warming, consistent with model predictions, may be offsetting increased dynamic losses.”

      So Zwally’s presentation is consistent with mainstream climate science. His ICESat analysis just concludes that the warming-driven ice sheet thinning in the West is outweighed by the warming-driven snowfall accumulation in the East.

      However, Zwally’s conclusion disagrees with GRACE which shows that Antarctica as a whole is losing mass.

      Both WUWT and Steven Goddard implied that this presentation by Zwally represents “the” ICESat data set. In reality, ICESat data are used by many scientists, including the authors of Shepherd et al. 2012:

      [Figure: Fig. 3 from Shepherd et al. 2012, Greenland and Antarctica mass balance]

      Fig. 3 from Shepherd et al. 2012 compares mass trends in Greenland and Antarctica using GRACE gravimetry, radar altimetry, laser altimetry (ICESat), and the input-output method which subtracts glacier discharge from estimates of precipitation. Note that all four methods agree in West Antarctica. In East Antarctica, ICESat alone shows significant mass gain. In general, GRACE estimates lie between those from ICESat and the input-output method.

      Both GRACE and ICESat need to subtract a GIA model (described above) to account for the movement of rock under the ice. However, GRACE directly measures mass while ICESat uses a laser altimeter to measure the height (and thus volume) of the ice. This volume is then converted to mass using a model of ice and snow densities.
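
      To see why that density model matters, here is a toy calculation with illustrative numbers (mine, not from either paper): the same measured elevation change implies very different mass changes depending on whether it is treated as solid ice or as fresh, low-density firn.

      area_km2 <- 9.8e6                    # approximate East Antarctic ice sheet area, km^2
      dh_m     <- 0.02                     # assumed average elevation change, m/yr (illustrative)
      dV_km3   <- area_km2 * dh_m / 1000   # implied volume change, km^3/yr

      dV_km3 * 0.917                       # mass change in Gt/yr if the new volume is solid ice
      dV_km3 * 0.35                        # mass change in Gt/yr if it is low-density firn

      The inferred mass change differs by more than a factor of two, which is why the assumed firn density matters so much for ICESat but not for GRACE.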

      GRACE doesn’t require a model of ice and snow densities because it directly measures mass. Considering the contempt shown for models at WUWT, it’s interesting that they prefer Zwally’s ICESat analysis over GRACE as a way to measure ice mass.

      Finally, note that this controversy doesn’t change the fact that Greenland and West Antarctica are losing ice at an accelerating rate.

    • … while the world is warming, peer reviewed science suggests that the jury is still out on % attribution of man made CO2 to that warming. … Your conclusion that man made CO2 contributes > 50% to warming since 1950, but from the research I’ve seen I am not willing to bet $10,000.00 that it is.

      First, I’ve explained that this wording understates the human contribution, and included a graph comparing estimates of human and natural contributions from 6 peer reviewed papers:

      [Figure: Results of 6 attribution studies]

      Second, it’s not just my conclusion. The IPCC’s conclusion that most of the warming since 1950 is very likely due to human emissions of greenhouse gases has been endorsed by the National Academy of Sciences, the National Aeronautics and Space Administration, the National Center for Atmospheric Research, the National Oceanic and Atmospheric Administration, the American Geophysical Union, the American Institute of Physics, the American Physical Society, the American Meteorological Society, the American Statistical Association, the American Association for the Advancement of Science, the Federation of American Scientists, the American Quaternary Association, the American Society of Agronomy, the Crop Science Society of America, the Soil Science Society of America, the American Astronomical Society, the American Chemical Society, the Geological Society of America, the American Institute of Biological Sciences, the American Society for Microbiology, the Society of American Foresters, the Australian Institute of Physics, the Australian Meteorological and Oceanographic Society, the Australian Bureau of Meteorology and the CSIRO, the Geological Society of Australia, the Federation of Australian Scientific and Technological Societies, the Australian Coral Reef Society, the Royal Society of the UK, the Royal Meteorological Society, the British Antarctic Survey, the Geological Society of London, the Society of Biology (UK), the Canadian Foundation for Climate and Atmospheric Sciences, the Canadian Meteorological and Oceanographic Society, the Royal Society of New Zealand, NIWA, MetService, the Polish Academy of Sciences, the European Science Foundation, the European Geosciences Union, the European Physical Society, the European Federation of Geologists, the Network of African Science Academies, the International Union for Quaternary Research, the International Union of Geodesy and Geophysics, the Wildlife Society (International), and the World Meteorological Organization.

      There aren’t any national or international scientific societies disputing the conclusion that most of the warming since 1950 is very likely to be due to human emissions of greenhouse gases, though a few are non-committal. The last organization to oppose this conclusion was the American Association of Petroleum Geologists (AAPG). They changed their position statement in 2007 to a non-committal position because they recognized that AAPG doesn’t have experience or credibility in the field of climate change and wisely said “… as a group we have no particular claim to knowledge of global atmospheric geophysics through either our education or our daily professional work.”

    • (Ed. note: A slightly modified version of this comment was published at Skeptical Science, eliciting a few comments. Dr. Tung later responded.)

      Again you did not show any calculations in how you arrived at your conclusion. While the world is warming, peer reviewed science suggests that the jury is still out on % attribution of man made CO2 to that warming. For instance peer reviewed evidence suggests that, “Quantitatively, the recurrent multidecadal internal variability, often underestimated in attribution studies, accounts for 40% of the observed recent 50-y warming trend.” [Mike]

      If a student asks why Alaska warms from January to July, saying “that’s the annual cycle” isn’t a real answer. It’s just a new name for the mystery. A real answer would involve physics: the northern hemisphere receives more energy in July because it points toward the Sun more directly in July than in January.

      Similarly, many people blame the recent 50 year warming trend on various “climate cycles” but don’t explain the physics causing Earth to gain energy. The paper you linked, Tung and Zhou 2013, is just the latest attempt to blame a large part of Earth’s warming on the Atlantic Multidecadal Oscillation (AMO), which is a long-term fluctuation in N. Atlantic sea surface temperatures. Previous attempts include Bob Tisdale’s claim and Zhou and Tung 2012 which WUWT advertised.

      Fig. 4 just subtracts any apparent “oscillations” with periods between 50 and 90 years, without explaining the physics causing Earth to gain and lose energy. Tung and Zhou seem to imply that these cycles are more compelling than the mainstream explanation which involves nonrepeating events. But the universe is full of cycles and nonrepeating events, all of which obey the laws of physics. Because mainstream science addresses the energy balance, I find it more compelling than math without a basis in physics.

      Fig. 5 repeats the analysis of Foster and Rahmstorf 2011 (PDF) while also removing the AMO, and obtains an anthropogenic warming trend over the last 33 years of 0.07°C/decade, less than half of Foster and Rahmstorf’s 0.17°C/decade.


      I’ve discussed a short video based on Foster and Rahmstorf 2011, which subtracted some natural phenomena that cause Earth to gain and lose energy:

      1. Solar variations, which can add energy to the Earth’s surface. Importantly, anthropogenic global warming (AGW) doesn’t make the Sun brighter. This means it can be subtracted without ignoring AGW.
      2. Volcanic eruptions, which can block energy from reaching Earth’s surface. Global warming doesn’t change how often volcanos erupt, so it can also be subtracted without ignoring AGW.
      3. ENSO, which can be defined based on pressure differences and mostly trades energy between the deep ocean and the surface. Even though global warming might indirectly affect ENSO, it’s important to note that it hasn’t yet: the ENSO index doesn’t have a significant 50-year trend. This means it can be subtracted without ignoring AGW.

      Tung and Zhou also subtracted the AMO index, which is defined as detrended N. Atlantic sea surface temperatures. It mostly adds energy to the surface by cooling the deep ocean (see below). Note that the AMO index is defined solely on temperatures, so anthropogenic global warming can increase the AMO index. This means subtracting it potentially ignores AGW.
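
      Here is a minimal R sketch of this kind of multiple regression, using synthetic stand-in series rather than the real indices (a simplified illustration, not Foster and Rahmstorf’s or Tung and Zhou’s actual procedure):

      set.seed(1)
      yr    <- 1979:2012
      agw   <- 0.017*(yr - yr[1])                  # synthetic anthropogenic warming, deg C
      enso  <- rnorm(length(yr))                   # stand-in ENSO index: no long-term trend
      solar <- 0.05*sin(2*pi*(yr - 1980)/11)       # stand-in 11-year solar cycle
      temp  <- agw + 0.05*enso + solar + rnorm(length(yr), sd = 0.08)

      adjusted <- temp - fitted(lm(temp ~ enso + solar)) + mean(temp)   # remove the trend-free factors
      round(10*coef(lm(adjusted ~ yr))["yr"], 2)   # remaining trend, deg C/decade: close to the 0.17 built in

      Because the ENSO and solar stand-ins contain no long-term trend, subtracting them leaves the built-in warming intact; an index defined from temperatures themselves, like the AMO, is not trend-free in that sense.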

      Their analysis concludes that:

      “The underlying net anthropogenic warming rate in the industrial era is found to have been steady since 1910 at 0.07-0.08°C/decade…” [Tung and Zhou 2013]

      First, that’s absurd. Anthropogenic warming is caused primarily by the radiative forcings of greenhouse gases, which didn’t skyrocket until about 1950 when our population and energy use per person both skyrocketed. More comprehensive analyses also show total anthropogenic radiative forcings increasing dramatically after 1950. Basic physics show that the anthropogenic warming rate should be higher after 1950.

      Second, their absurd claim isn’t really a conclusion; it’s actually the assumption that (if true) would have justified removing the AMO to determine the anthropogenic warming trend. Their paper is a circular argument. Here’s why:

      “The removal of the AMO in the determination of the anthropogenic warming trend is justified if one accepts our previous argument that this multidecadal variability is mostly natural.” [Tung and Zhou 2013]

      No. Removing the AMO to determine anthropogenic warming would only be justified if detrending the AMO from 1856-2011 actually removed the trend due to anthropogenic warming. But that’s absurd: basic physics show that the anthropogenic warming rate should be higher after 1950. As a result, their approach overestimates anthropogenic warming before 1950, and underestimates it after 1950.

      Warming the globe also warms the N. Atlantic. Tung and Zhou have subtracted N. Atlantic temperatures that contain an anthropogenic trend over the last 33 years from global temperatures, and seem surprised to find a lower anthropogenic trend over the last 33 years. I’m not.

      Tung and Zhou implicitly assumed that the anthropogenic warming rate is constant before and after 1950, and (surprise!) that’s what they found. This led them to circularly blame about half of global warming on regional warming. Where’d that heat come from, if not from CO2?

      Since the words “energy” and “heat” don’t appear in Tung and Zhou 2013, let’s approximate the energy they’re ignoring. Warming a crude model of the atmosphere and upper layer of the ocean by 1°C requires about 10,000,000 megaton atomic bombs of energy. Tung and Zhou 2013 magic away 0.1°C/decade over the last 33 years, so they’ve ignored about 3,000,000 megaton atomic bombs of energy.
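
      Here is the back-of-envelope version of that arithmetic as an R sketch. The 30 m ocean layer depth is my guess at what a “crude model” of this size assumes; the exact assumptions behind the 10,000,000 figure are not spelled out above.

      megaton_J <- 4.184e15                    # joules per megaton of TNT
      atmos_J   <- 5.1e18 * 1000               # warm ~5.1e18 kg of air (cp ~1000 J/kg/K) by 1 C
      ocean_J   <- 3.6e14 * 30 * 1000 * 4000   # warm an assumed 30 m ocean layer (area ~3.6e14 m^2) by 1 C

      (atmos_J + ocean_J) / megaton_J          # roughly 1e7 megatons per degree C
      0.33 * (atmos_J + ocean_J) / megaton_J   # 0.1 C/decade over 33 years: a few million megatons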

      Internal variability moves heat around the Earth’s climate without altering its total heat content. I’ve already noted that the deep ocean can’t be the source of surface warming because the Argo probes show that the deep ocean is also warming. However, that claim was based on global averages. If AMO internal variability were causing surface warming through changes in circulation, the deep N. Atlantic should be losing enough heat to account for the surface warming. But the N. Atlantic as a whole has warmed over the past 50 years (PDF). So where’d that heat come from, if not from CO2?

      Also, it’s strange that Tung and Zhou acknowledge comments by Isaac Held, given his analysis of internal variability.

      Tung and Zhou are mathematicians, so it’s understandable that their math lacks a basis in physics. But it’s baffling that their math got past peer review at journals which should have noticed these flaws.

      • Thanks for bringing this interesting topic to my attention. I basically agree with your comments. Earth’s climate change is owing to both internal variability that does not involve the change of the total energy content of the Earth system, and variability that does. However, it seems to me that few of the “cycles” and “nonrecurrent climate events” are qualified for being strictly internal variability. Even short-term variability like ENSO has signals in the total energy content caused by the variations of total solar energy received. I believe AMO also has such signals. Any ocean-atmosphere coupled phenomena would cause some variability in the solar energy received by Earth (via albedo, greenhouse gases, etc.). To differentiate between the effects of man-made warming from pure natural processes on these cycles and events is the crux of the challenge. [Lee]

        I agree, and only focused on strictly internal variability because Tung and Zhou 2013 seemed to. We can agree that changes in albedo and insulation due to cloud cover variability at different heights change Earth’s total heat content.

        I even think differentiating such changes using the AMO index might be possible. But first it would require carefully removing the non-linear anthropogenic trend. Instead of the physically absurd linear detrending of the AMO index, perhaps removing the physics-based anthropogenic radiative forcings (or even just detrending while letting the trend change in 1950) would help overcome this challenge of differentiation.
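
        As a concrete example of “letting the trend change in 1950”, here is a hedged R sketch applied to a synthetic N. Atlantic-style series (the coefficients are illustrative; substitute real SST data):

        set.seed(1)
        t   <- 1856:2011
        sst <- 0.002*(t - t[1]) + 0.006*pmax(t - 1950, 0) +     # slow early warming, faster after 1950
               0.15*cos(2*pi*(t - 2000)/70) + rnorm(length(t), sd = 0.1)

        hinge   <- pmax(t - 1950, 0)                 # lets the fitted slope change in 1950
        amo_alt <- residuals(lm(sst ~ t + hinge))    # candidate AMO index with the piecewise trend removed

        For this synthetic series, the piecewise fit removes the post-1950 speed-up that a single linear detrend would leave behind in the index.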

        P.S. I’ve added a few “mostlies” to ENSO and AMO to avoid giving that strict impression. Thanks Lee.

      • Isaac Held’s post on Atlantic multi-decadal variability and aerosols is relevant and highly recommended.

      • The Economist also referenced Tung and Zhou 2013.

      • Dr. Tung and Bob Tisdale responded. Gabriela copies part of Dr. Tung’s response below.

      • Prof. Judith Curry also references Tung and Zhou 2013 on page 6 of her contrarian statements to the U.S. House of Representatives.

        Update: … and keeps digging.

      • Dr. Tung responds again; my response is deep in the comments. Also, Bob Tisdale tells me how to be taken seriously, like he is.

      • (Ed. note: This comment was copied from here.)

        In my original post, I claimed that regressing global temperatures against the linearly-detrended Atlantic Multidecadal Oscillation (AMO) to determine anthropogenic global warming (AGW) assumes that AGW is linear. Thus, even if AGW actually were faster after 1950, Dr. Tung’s method would conclude that AGW is linear anyway.

        Dr. Tung dismissed Dikran Marsupial’s MATLAB simulation because its conclusion is “obvious” and suggested coming up with a better example without technical problems. I disagree with this criticism and would like to again thank Dikran for his contribution, which inspired this open-source simulation written in the “R” programming language.

        Can regressing against the linearly-detrended AMO detect nonlinear AGW?

        Imagine that AGW is very nonlinear, such that the total human influence on surface temperatures (not the radiative forcing) is a 5th power polynomial from 1856-2011:

        t = 1856:2011
        human = (t-t[1])^5
        human = 0.8*human/human[length(t)]

        Its value in 2011 is 0.8°C, to match Tung and Zhou 2013’s claim that 0.8°C of AGW has occurred since 1910. The exponent “5” was chosen so its linear trend after 1979 is 0.17°C/decade, to match Foster and Rahmstorf 2011.

        Tung and Zhou 2013 describes an AMO with a 0.2°C amplitude and a 70 year period which peaks in the year 2000:

        nature = 0.2*cos(2*pi*(t-2000)/70)
         
        [Figure 1]

        Global surface temperatures are caused by both, along with weather noise described by a gaussian with standard deviation 0.2°C for simplicity:

        global = human + nature + rnorm(t,mean=0,sd=0.2)

        N. Atlantic sea surface temperatures (SST) are a subset of global surface temperatures, with added 0.1°C regional noise:

        n_atlantic = global + rnorm(t,mean=0,sd=0.1)
         
        [Figure 2]

        Compare these simulated time series to the actual time series. The AMO is linearly-detrended N. Atlantic SST:

        n_atlantic_trend = lm(n_atlantic~t)
        trend = coef(summary(n_atlantic_trend))[2,1]
        amo = n_atlantic - trend*(t-t[1])
         
        [Figure 3]

        Regress global surface temperatures against this AMO index and the exact human influence, after subtracting means so the intercept handles any non-zero bias:

        human_p = human - mean(human)
        amo_p = amo - mean(amo)
        regression = lm(global~human_p+amo_p)

        Will Dr. Tung’s method detect this nonlinear AGW? Here are the residuals:

         
        [Figure 4]

        Let’s add the residuals back as Dr. Tung does, then calculate the trends after 1979 for the true and estimated human influences:

         
        [Figure 5]

        Note that the absolute values are meaningless. A Monte Carlo simulation of 1000 runs was performed, and the estimated human trends since 1979 are shown here:

         
        [Figure 6]

        Notice that even though we know that the true AGW trend after 1979 is 0.17°C/decade, Dr. Tung’s method insists that it’s about 0.07°C/decade. That’s similar to the result in Tung and Zhou 2013, even though we know it’s an underestimate here.
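
        The looping code is not shown above, so here is one way the repeated runs might be wired together, reusing the snippets from this comment (a sketch of the general approach, not necessarily the exact script behind these figures):

        one_run <- function() {
          global     <- human + nature + rnorm(t, mean = 0, sd = 0.2)
          n_atlantic <- global + rnorm(t, mean = 0, sd = 0.1)
          trend      <- coef(lm(n_atlantic ~ t))[2]
          amo        <- n_atlantic - trend*(t - t[1])                        # linearly-detrended AMO
          reg        <- lm(global ~ I(human - mean(human)) + I(amo - mean(amo)))
          est_human  <- coef(reg)[2]*(human - mean(human)) + residuals(reg)  # add the residuals back
          recent     <- t >= 1979
          10*coef(lm(est_human[recent] ~ t[recent]))[2]                      # post-1979 trend, deg C/decade
        }
        trends <- replicate(1000, one_run())
        hist(trends)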

        The linearity of the estimated human influence was measured by fitting linear and quadratic terms over 1856-2011 to the same 1000 runs. The same procedure applied to the true human influence yields a quadratic term of 5.9×10⁻⁵ °C/year².

         
        [Figure 7]

        Even though we know that the true quadratic term is 5.9×10⁻⁵ °C/year², the average quadratic term from Dr. Tung’s method is less than half that. The true AGW term’s nonlinearity is 5th order, so this is a drastic understatement.

        Conclusion: Tung and Zhou 2013 is indeed a circular argument. By subtracting the linearly-detrended AMO from global temperatures, their conclusion of nearly-linear AGW is guaranteed, which also underestimates AGW after ~1950.

      • (Ed. note: This comment was copied from here.)

        … I need to reiterate the basic premise of parameter estimation: if A is the true value and B is an estimate of A but with uncertainty, then B should not be considered to underestimate or overestimate A if A is within the 95% confidence interval (CI) of B. … Not taking CI of the estimate into account is not the only problem in your post 117. … [KK Tung]

        My Monte Carlo histograms estimated the confidence intervals. To make these 95% confidence intervals more explicit, gaussians are now fit to the histograms. For comparison, 95% confidence intervals from the post-1979 trend regressions are also reported now.

        A more serious problem is your creation of an almost trivial example for the purpose of arguing your case. … I said this is an almost trivial example because if this small “regional” noise were zero it would have been a trivial case (see later). Even with the small regional noise, your n_atlantic is highly correlated with your global data at higher than 0.8 correlation coefficient at all time scales. [KK Tung]

        My original timeseries were simply chosen to look like these real timeseries from GISS and NOAA.

        I wrote a new R program that downloads the HadCRUT4 annual global surface temperatures, and calculates annual averages of NOAA’s long monthly AMO index and N. Atlantic SST:

        [Figure: Real Data]
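
        That program is not reproduced here, but the annual-averaging step looks roughly like the sketch below. It assumes local copies of the HadCRUT4 annual series and one of NOAA’s monthly indices (the AMO or N. Atlantic SST); the file names, missing-value code, and column layout are placeholders, since the official formats change:

        hadcrut       <- read.table("hadcrut4_annual.txt")          # placeholder: col 1 = year, col 2 = anomaly
        annual_global <- setNames(hadcrut[[2]], hadcrut[[1]])

        monthly       <- read.table("noaa_monthly_index.txt", na.strings = "-99.990")    # AMO or N. Atlantic SST
        annual_index  <- setNames(rowMeans(monthly[, 2:13], na.rm = TRUE), monthly[[1]]) # Jan-Dec means

        common <- intersect(names(annual_global), names(annual_index))   # years present in both series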

        The correlation coefficient between annual HadCRUT4 and N. Atlantic SST is 0.79. Linear regression is based on correlations, so my original synthetic timeseries were too highly correlated. Thanks for pointing this out, Dr. Tung.

        To make your synthetic data slightly more realistic while retaining most of their features that you wanted we could either increase the standard deviation of the regional noise from 0.1 to 0.3—this change is realistic because the regional variance is always larger than the global mean variance… [KK Tung]

        In my original simulation, the regional variance was already larger than the global mean variance because the regional noise was added to the global noise. In reality, the annual N. Atlantic SST variance is 0.04°C² but the annual global HadCRUT4 variance is 0.07°C². Adding 0.3°C of regional noise yields a reasonable correlation coefficient, but it doesn’t look realistic:

        [Figure: Synthetic data with 0.3°C regional noise]

        Or we could retain the same combined standard deviation as your two noise terms in n_atlantic, but from a different draw of the random variable than the random variable in global… [KK Tung]

        That looked more realistic but the average correlation coefficient over 10,000 runs was 0.64±0.08, which is too small. So I chose new simulation parameters to match the real correlation coefficient (0.79) and produce more realistic timeseries:

        human     = (t-t[1])^7                        # 7th-power human influence, zero in 1856
        human     = 0.7*human/human[length(t)]        # scaled to reach 0.7 C in 2011
        nature    = 0.15*cos(2*pi*(t-2000)/70)        # 70-year AMO-like oscillation, 0.15 C amplitude
        global    = human+nature+rnorm(t,sd=0.11)     # global temperatures with 0.11 C weather noise
        n_atlantic= human+nature+rnorm(t,sd=sqrt(2*0.11^2))  # N. Atlantic SST with independent, larger noise

        Now the AMO’s amplitude is 0.15°C and the total human contribution is 0.7°C, both of which match the lower bounds in Tung and Zhou 2013. The nonlinearity is now 7th order to keep the true post-1979 human trend at 0.17°C/decade.

        [Figure: New synthetic data]

        Averaged over 10,000 Monte Carlo runs, the synthetic correlation coefficient was 0.74±0.06, which contains the real value (0.79). The synthetic global variance is 0.06±0.01°C², which contains the real value (0.07°C²). The synthetic N. Atlantic variance is 0.07±0.01°C², which is still larger than the real value (0.04°C²). However, the discrepancy has shrunk and the LOWESS smooth removes fast fluctuations anyway.

        You can also make your example less deterministic and hence less trivial, by smoothing your AMO index as we did in our paper. [KK Tung]

        To match your paper, a 25-year LOWESS smooth was applied to the real and synthetic AMO indices:

        smoothed_amo = lowess(t,amo,f=25/length(t))$y
        amo = smoothed_amo
        [Figure: Real AMO]
        [Figure: Synthetic AMO]

        Here are the real residuals:

        [Figure: Real residuals]

        Here are the synthetic residuals:

        [Figure: Synthetic residuals]

        Here are the real results:

        [Figure: Real results]

        (For easier comparison, the true human curve was shifted so it has the same post-1979 mean as the estimated human trend.)

        Those results only use a white-noise model for comparison to the simulations. The real data are autocorrelated, and the caption of Fig. 1 in Zhou and Tung 2012 says the noise is order AR(4), which yields a trend of +0.12±0.05°C/decade.
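
        One standard way to get that kind of AR-aware trend uncertainty in R (not necessarily the procedure used here) is regression with autoregressive errors via arima(), assuming est_human holds the estimated human influence for the real data and t the matching years:

        recent <- t >= 1979
        fit <- arima(est_human[recent], order = c(4, 0, 0), xreg = cbind(time = t[recent]))
        10*fit$coef["time"]                          # post-1979 trend, deg C per decade
        10*2*sqrt(fit$var.coef["time", "time"])      # approximate 95% half-width on that trend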

        Here are the synthetic results:

        [Figure: Synthetic results]

        Averaged over 10,000 runs, the synthetic post-1979 trends and 95% confidence intervals are +0.09±0.04°C/decade. This histogram provides another similar estimate:

        [Figure: Synthetic trends histogram]

        The true quadratic term is +5.15×10⁻⁵ °C/year², but the estimated value and its 95% confidence interval is +3.24±0.72×10⁻⁵ °C/year². The estimated human influence is still significantly more linear than the true human influence, which is actually 7th order.

        … In the MLR analysis of the real observation the degree of collinearity is much smaller, hence our error bars are much smaller, and so our MLR analysis gave useful results while your artificially constructed case did not yield useful results. … [KK Tung]

        Despite smoothing the AMO index and choosing simulation parameters that yield timeseries and correlation coefficients that are more realistic, the true post-1979 trend of 0.17°C/decade is still above the 95% confidence interval. The same procedure applied to real data yields similar trends and uncertainties. Therefore, I still think Tung and Zhou 2013 is a circular argument.

        In our PNAS paper, we said that because of the importance of the results we needed to show consistency of the results obtained by different methods. The other method we used was wavelet. Applying the wavelet method to your example and to all the cases mentioned here we obtain the correct estimate of the true value for anthropogenic warming rate over 98% of the time. The wavelet method does not involve detrending and can handle both linear or nonlinear trends. [KK Tung]

        The wavelet method may have its own problems, but these problems are orthogonal to the problem of the MLR as no detrending is involved. [KK Tung]

        The wavelet method is just another way to curve-fit, which is also inadequate because attribution is really a thermodynamics problem. Again, your curve-fitting claim that ~40% of the surface warming over the last 50 years can be attributed to a single mode of internal variability contradicts Isaac Held and Huber and Knutti 2012 who used thermodynamics to conclude that all modes of internal variability couldn’t be responsible for more than about 25% of this surface warming.

      • (Ed. note: this comment was copied from here.)

        Thanks to Bob Loblaw for running this new R script on his computer.

        We argued in our PNAS paper that it is the low-frequency component of the regional variability that has an effect on the global mean. So although you tried to match the high correlation of the two quantities in the observed, this was accomplished by the wrong frequency part of the variance. [KK Tung]

        Linear regression depends on the overall correlations, so a realistic simulation will match that rather than trying to match the correlations at specific frequencies. After you criticized my original simulation’s high correlation coefficient, I chose new parameters so the synthetic correlation was slightly below the real value. The low-frequency component of my synthetic N. Atlantic SST already affects the global mean because the 70-year “nature” sinusoid is present in both timeseries.

        If you agree with the amplitudes of the noise in your previous example, then we can proceed with this example. Your only concern in this case was that the correlation coefficient between N. Atlantic and global data is 0.64, a bit smaller than the observed case of 0.79. “That looked more realistic but the average correlation coefficient over 10,000 runs was 0.64±0.08, which is too small.” I suggest that we do not worry about this small difference. Your attempt to match them using the wrong part of the frequency makes the example even less realistic. [KK Tung]

        Let’s judge realism by first considering the real timeseries:

        [Figure: Real Data]

        In contrast, here are synthetic timeseries using Dr. Tung’s preferred parameters:

        [Figure: Dr. Tung's synthetic timeseries]
        • Dr. Tung’s synthetic correlation coefficient between global and N. Atlantic SST averaged to 0.64±0.08 over 1,000,000 runs, which doesn’t contain the real value (0.79).
        • Dr. Tung’s synthetic global variance is 0.12±0.02°C², which doesn’t contain the real value (0.07°C²).
        • Dr. Tung’s synthetic N. Atlantic SST variance is 0.13±0.02°C², which doesn’t contain the real value (0.05°C²).

        Now here are synthetic timeseries using my preferred parameters:

        [Figure: Dumb Scientist's synthetic data]
        • My synthetic correlation coefficient between global and N. Atlantic SST averaged to 0.74±0.06 over 1,000,000 runs, which contains the real value (0.79).
        • My synthetic global variance is 0.06±0.01°C², which contains the real value (0.07°C²).
        • My synthetic N. Atlantic SST variance is 0.07±0.01°C², which doesn’t contain the real value (0.05°C²). However, altering this would violate Dr. Tung’s claim that “the regional variance is always larger than the global mean variance”. (This counterintuitive result is due to the real data after ~1986, when the N. Atlantic warmed slower than the globe.)

        Your claim that my new example is “even less realistic” is completely unsupported. I suggest that we do worry about this “small difference” between the correlation coefficients of your suggested timeseries vs. those of the real timeseries because MLR is based on correlations, and your suggested timeseries’ correlation coefficient is so low that the real value doesn’t even lie within its 95% confidence interval.

        We performed 10,000 Monte Carlo simulations of your example, and found that the true value of anthropogenic response, 0.17 C per decade, lies within the 95% confidence interval of the MLR estimate 94% of the time. So the MLR is successful in this example. If you do not believe our numbers you can perform the calculation yourself to verify. If you agree with our result please say so, so that we can bring that discussion to a close, before we move to a new example. Lack of closure is what confuses our readers. [KK Tung]

        For your preferred parameters, I actually find an even higher success rate. This shouldn’t be surprising, because your correlation coefficient is much lower than the real value, which causes the regression to underweight the AMO and thus increases the estimated trend. Your timeseries also have variances that are much larger than the real values, which inflates the uncertainties. Here’s a boxplot of your post-1979 trend uncertainties vs. the trends:

        [Figure: Dr. Tung's uncertainty boxplot]

        The comparable white-noise uncertainty for real data is 0.034°C/decade, which is much smaller than your synthetic uncertainties. If you increase the variances even higher above the real variances, the uncertainties will be so large that you’ll be able to claim 100% success. But that wouldn’t mean anything, and neither does your current claim.

        From your first sentence: “My Monte Carlo histograms estimated the confidence intervals”, we can infer that you must have used a wrong confidence interval (CI). We have not realized that you have been using a wrong CI until now. The real observation is one realization and it is the real observation that Tung and Zhou (2013) applied the multiple linear regression (MLR) to. There is no possibility of having 10,000 such parallel real observations for you to build a histogram and estimate your confidence interval! So the CI that we were talking about must be different, and it must be applicable to a single realization. … [KK Tung]

        My original simulation’s Monte Carlo histograms estimated the confidence intervals. For comparison, my second simulation also calculated 95% confidence intervals around each realization; these came from the least squares fit using the standard procedure you described. Here’s a boxplot of my post-1979 trend uncertainties vs. the trends:

        [Figure: Dumb Scientist's uncertainty boxplot]

        The comparable white-noise uncertainty for real data is 0.034°C/decade, which lies within my synthetic uncertainties. The true post-1979 trend lies within the 95% confidence interval only 9% of the time, but this statement doesn’t report the best-fit trend or the uncertainties so I think the boxplots and histograms are more informative.

        In post 153, you created yet a new example. This example is even more extreme in that the true anthropogenic warming is a seventh order polynomial, from the fifth order polynomial in your original example in post 117, and the second order polynomial in Dikran Marsupial’s examples. This is unrealistic since in this example most of the anthropogenic warming since 1850 occurs post 1979. Before that it is flat. This cannot be justified even if we take all of the observed increase in temperature as anthropogenically forced. It also increases faster than the known rates of increase of the greenhouse gases. You decreased the standard deviation of the global noise of your original example by half. You took my advice to have a different draw of the random number generator for n_atlantic but you reduced the variance from your original example. [KK Tung]

        As discussed above, these new parameters were chosen to address your concerns about the correlation coefficients and variances of the real vs. synthetic time series. The exponents were chosen so the true post-1979 anthropogenic trend is 0.17°C/decade in both cases.

        These are thought experiments, which eliminate real-world complications to focus on the key issue. I’m not suggesting that the real human influence is a 5th or 7th order polynomial. But if your method can detect nonlinear AGW, it should recover the true post-1979 trend in these hypothetical cases.

        It’s strange that you’re disputing the shape of my thought experiment’s total human influence. We can easily measure the variances and correlations of the real timeseries (and my preferred synthetic timeseries match better than yours), but you’ve pointed out that aerosols are uncertain so we can’t easily measure the total human radiative forcing. Also, the total human influence on temperature is roughly proportional to the time integral of these total human radiative forcings, so it should grow faster than the forcings.
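
        A two-line R illustration of that integral point (with a toy forcing series, not real forcing data):

        forcing  <- (1:160)^2        # toy forcing that accelerates over time (grows like t^2)
        response <- cumsum(forcing)  # its running integral grows faster still (like t^3)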

        Our criticism of your original example was mainly that the noise in your N. Atlantic data was the same as the noise in the global mean data. In fact, they came from the realization. [KK Tung]

        As you note, I already addressed this criticism by drawing global and regional noises from different realizations. But even in my original example, the N. Atlantic noise wasn’t the same as the global noise because the N. Atlantic data had extra regional noise added to the global noise.

        Using your exact example and your exact method, we repeated your experiment 10,000 times, and found that the true human answer lies within the 95% confidence level of the estimate 93% of the time. This is using the linearly detrended n_atlantic as the AMO index, unsmoothed as in your original example. If this AMO index is smoothed, the success rate drops to 33%. In our PNAS paper we used a smoothed AMO index and we also looked at the unsmoothed index (though not published), and in that realistic case there is only a small difference between the result obtained using the smooth index vs using the unsmoothed index. In your unrealistic case this rather severe sensitivity is a cause of alarm, and this is the time for you to try a different method, such as the wavelet method, for verification. [KK Tung]

        Again, my case is more realistic than yours in terms of timeseries appearance, correlation, variances, and error bars. Ironically, I think my case is more sensitive to smoothing than yours because I took your advice to make the regional noise proportionally larger compared to the global noise. Your global noise (0.2°C) is twice as large as your regional noise (0.1°C) but mine are both equal to 0.11°C, so smoothing my AMO index removes proportionally more uncorrelated noise than smoothing yours. I tested this by setting “custom” parameters equal to mine (with my 7th order human influence, etc.) but with your noise parameters, and observed similar sensitivity to smoothing the AMO index over 10,000 runs.

        You casually dismissed the wavelet method as “curve-fit”. Wavelet analysis is a standard method for data analysis. In fact most empirical methods in data analysis can be “criticized” as “curve-fit”. The MLR method that you spent so much of your time on is a least-squares best-fit method. So it is also “curve-fit”. [KK Tung]

        Indeed, that’s why I don’t think wavelets are different enough from linear regression to provide independent methodological support. Again, attribution is really a thermodynamics problem that needs to be calculated in terms of energy, not curve-fitting temperature timeseries. Your curve-fitting claim that ~40% of the surface warming over the last 50 years can be attributed to a single mode of internal variability contradicts Isaac Held and Huber and Knutti 2012, who used thermodynamics to conclude that all modes of internal variability together couldn’t be responsible for more than about 25% of this surface warming.

      • I set “custom” parameters equal to mine (with my 7th order human influence, etc.) but with Dr. Tung’s noise parameters, and 10,000 runs seemed to show that the sensitivity to smoothing was similar to simulations using Dr. Tung’s overall parameters. This would have suggested that my simulation’s sensitivity to smoothing the AMO index was related to the relative noise levels. However, running 100,000 simulations of the custom parameters reveals sensitivity similar to my overall parameters, so my hypothesis was wrong.

        I still don’t know why my simulation is more sensitive to smoothing, but I think the important point is still that my parameters produce more realistic timeseries, correlations, variances, and error bars. (Also, attribution is still really a thermodynamics problem.)

      • We start with the HadCRUT4 surface temperature data. We fit it with a 6th order polynomial over the entire period of 1850-2011, instead of just over the period 1979-2011 as DS did. This produces the observed 0.17 C per decade of warming after 1979, the same as in DS. But in contrast, the warming here exists over the entire period, not just after 1979. The polynomial is smoothed by a cubic spline so that the trend is monotonic before 1979. This anthropogenic component will be called human. It is denoted by the red curve in Figure a. To create the AMO, the above-obtained human is subtracted from HadCRUT4 data. The difference is smoothed with a 50-90 year wavelet band pass filter. This is the AMO (note: not the AMO Index). This is called nature (denoted by the purple curve in Figure b) and is the counterpart to DS’s 70-year sinusoid. [KK Tung]

        I’m sorry for the long delay; my day job has consumed my life. I like your new simulation, and tried to reproduce it in R, though I used a Fourier transform band pass filter instead of wavelets. Regardless, the human and natural influences look similar to those in Dr. Tung’s plots (which are sadly no longer visible in his comment).

        [Figure: New human influence]
        [Figure: New natural influence]
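
        Here’s a condensed sketch of that reproduction. The “temp” series below is a synthetic stand-in for the annual HadCRUT4 anomalies (which would be loaded separately), and the Fourier band-pass replaces the wavelet and cubic-spline steps, so this approximates the construction rather than duplicating it.

        ```r
        # Fit a 6th-order polynomial ("human") over 1850-2011, subtract it, and
        # band-pass the residual to periods of 50-90 years ("nature"/AMO).
        yr   <- 1850:2011
        set.seed(6)
        temp <- 0.8 * ((yr - 1850) / 161)^2 +              # synthetic stand-in for
                0.1 * sin(2 * pi * (yr - 1880) / 70) +     # annual HadCRUT4 anomalies
                rnorm(length(yr), sd = 0.1)

        human <- fitted(lm(temp ~ poly(yr, 6)))            # 6th-order polynomial fit

        res   <- temp - human
        N     <- length(res)
        freq  <- (0:(N - 1)) / N                           # cycles per year
        freq  <- pmin(freq, 1 - freq)                      # fold negative frequencies
        keep  <- freq >= 1/90 & freq <= 1/50               # 50-90 year band

        spec        <- fft(res)
        spec[!keep] <- 0
        nature <- Re(fft(spec, inverse = TRUE)) / N        # band-passed "nature" curve
        ```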

        My first tests had 10,000 Monte Carlo simulations each:

        1. 0.132±0.012°C/decade, 19% contain true trend.
        2. 0.156±0.012°C/decade, 100% contain true trend.
        3. 0.137±0.012°C/decade, 70% contain true trend.
        4. 0.159±0.012°C/decade, 100% contain true trend.
        5. 0.168±0.012°C/decade, 100% contain true trend.
        6. 0.125±0.013°C/decade, 2% contain true trend.
        7. 0.140±0.012°C/decade, 86% contain true trend.

        They varied so much that I ran a few tests with 10,000,000 simulations each:

        1. 0.129±0.012°C/decade, 42% contain true trend.
        2. 0.121±0.012°C/decade, 2% contain true trend.

        I also tried matching ARMA(p,q) noise parameters to those of the real residuals.
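
        For reference, this is the kind of R call I mean; the ARMA(1,1) order and the stand-in residual series are illustrative, since the real residuals come from the regression itself.

        ```r
        # Fit ARMA(p,q) parameters to (stand-in) residuals, then simulate new noise
        # with the same parameters for use in the Monte Carlo runs.
        set.seed(2)
        resid_obs <- arima.sim(list(ar = 0.6, ma = 0.3), n = 400, sd = 0.1)

        fit <- arima(resid_obs, order = c(1, 0, 1), include.mean = FALSE)
        fit$coef                                           # estimated ar1 and ma1

        noise <- arima.sim(list(ar = fit$coef["ar1"], ma = fit$coef["ma1"]),
                           n = length(resid_obs), sd = sqrt(fit$sigma2))
        ```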

        I still don’t know why the mean trends vary so much when using 10,000 Monte Carlo simulations. I agree with Dr. Tung that millions of runs shouldn’t be necessary, but for some reason “merely” 10,000 runs yield wildly varying results. More disturbingly, none of the trends in either of the 10,000,000-run cases overlap with the mean trend in some of the 10,000-run cases. Maybe I’m not using the random number generator correctly?

        Without seeing how Dr. Tung’s code differs from mine, I don’t know how he was able to specify the percentage of 10,000 simulations that contained the true trend to two significant digits (“91%”) when my estimates vary from 2% to 100%, and most of those 20,070,000 white noise simulations don’t include the true trend.

        First, attribution is not necessarily a thermodynamics problem. The method adopted by IPCC AR4, the “optimal fingerprint detection and attribution method”, “is based on a regression of the observation onto model simulated patterns and relies on the spatio-temporal response patterns from different forcings being clearly distinct… The global energy budget is not necessarily conserved and observed changes in the energy budget are not considered”. This quote came from Huber and Knutti 2012. [KK Tung]

        That’s a fair point; I should’ve qualified those statements as my opinion to avoid implying that everyone agrees. Personally, I read Huber and Knutti’s statement as a criticism of the optimal fingerprint method because it doesn’t consider or conserve the energy budget. This is an unusual situation where I’ve criticized fingerprints used by the IPCC and many researchers while agreeing with Dr. Pielke Sr. that ocean heat content is a better diagnostic than surface temperatures or stratospheric cooling fingerprints, etc. In my opinion, diagnostics more closely related to conservation of energy are more compelling. But you’re right, this is just my opinion.

        Towards the end of the paper, the authors compared the 50-year linear trends derived from unforced control runs in the CMIP3 models with the observed 50-year trends. These models do have internal ocean variability. (DS, please note, this part is not based on a thermodynamic argument, but the result was what you referred to as from a thermodynamic argument.) [KK Tung]

        Why isn’t this part based on a thermodynamic argument? Since CMIP3 models have internal ocean variability, aren’t they simulating heat transfer between the deep ocean and surface (i.e. thermodynamics)?

        The authors concluded “For global surface temperature it is extremely unlikely (<5% probability) that internal variability contributed more than 26+/-12% and 18+/-9% to the observed trends over the last 50 and 100 years, respectively". So the "upper bound" is 38% for the last 50 years and 27% for the past 100 years, respectively. [KK Tung]

        Attribution over the last 50 years is based on Fig. 3(c) from Huber and Knutti 2012:

        [Figure: Huber and Knutti 2012 Fig. 3(c)]

        Given mean radiative forcings (etc.), the upper bound on the post-1950 surface trend due to internal variability is 26%. The lower bound is -26%, which would mean that internal variability actually offset some of the surface warming. Regarding the “+/-12%”, Huber and Knutti state “The probabilistic ranges presented here account for uncertainties in the observations, radiative forcing, internal variability and model inadequacy (see Methods).”

        Since the upper bound in Huber and Knutti is itself given as a probabilistic range, I don’t see why Dr. Tung’s singular estimate of “40%” should be compared to the upper bound of the upper bound rather than the best estimate of the upper bound.

        Given the uncertainty in the model’s oceans, I do not think these upper bounds rule out our ~40% and ~0% contribution of internal variability to the 50-year and 100-year trends, respectively. [KK Tung]

        Even if we compare your estimate to the upper bound on the upper bound of Huber and Knutti 2012, the probabilistic range on that upper bound at least attempts to account for “model inadequacy.” And 38% here is the upper bound on the upper bound of all modes of natural variability summed together. Even if this upper bound on the upper bound is appropriate and needs to be expanded because it didn’t fully account for model inadequacy… doesn’t this leave very little room for the PDO (for instance) to affect surface temperatures (at the same phase)?

        On Isaac Held’s blog#16 that DS refers to as providing an upper bound of 25% for the contribution of internal variability to the surface warming for the past 50 years: We need to recognize that Held is using a very simple two-box ocean model to illustrate the process of energy balance that can be used to constrain the contribution from internal variability. The exact figure of 25% as the upper bound should not be taken too seriously, and it could easily be 40%, given the fact that there is at least a factor of two variation in climate sensitivity in the IPCC models and he picked one particular value of climate sensitivity from one of the GFDL models for illustrative purpose. There were many other simplifying assumptions so that an analytic result could be obtained. [KK Tung]

        Yes, all thermodynamic estimates involve simplifications but in my opinion simplifications are preferable to not mentioning energy or heat altogether. And 40% here is the upper bound of all modes of natural variability summed together. Even if this upper bound needs to be expanded… doesn’t this leave very little room for the PDO (for instance) to affect surface temperatures (at the same phase)?
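
        To show what a two-box energy-balance argument looks like in practice, here’s a generic sketch; the heat capacities, feedback parameter, and exchange coefficient are round values of my own choosing for illustration, not Held’s published numbers.

        ```r
        # Generic two-box (surface + deep ocean) energy balance model. Units:
        # temperatures in deg C, forcing in W/m^2, heat capacities in W yr m^-2 K^-1,
        # time step in years. Parameter values are illustrative only.
        two_box <- function(forcing, C_s = 8, C_d = 100,
                            lambda = 1.2, gamma = 0.7, dt = 1/12) {
          n  <- length(forcing)
          Ts <- Td <- numeric(n)
          for (i in 2:n) {
            Ts[i] <- Ts[i-1] + dt / C_s *
              (forcing[i-1] - lambda * Ts[i-1] - gamma * (Ts[i-1] - Td[i-1]))
            Td[i] <- Td[i-1] + dt / C_d * gamma * (Ts[i-1] - Td[i-1])
          }
          data.frame(Ts = Ts, Td = Td)
        }

        # 50 years of constant 1 W/m^2 forcing at monthly steps: the surface warms,
        # and the heat taken up by the deep ocean is what constrains how much surface
        # warming internal variability alone could produce.
        out <- two_box(rep(1, 50 * 12))
        tail(out, 1)
        ```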

      • I’ve failed to communicate once again.

    • Climate scientists admit freely that they still don’t understand what role clouds play in the warming, and the same is true of aerosols. You conclude that man-made CO2 contributes > 50% of the warming since 1950, but from the research I’ve seen I am not willing to bet $10,000.00 that it does.

      Clouds are a feedback (response) to warming, and they’re probably the most uncertain feedback. Aerosols are a forcing which cause short-term cooling, and they’re probably the most uncertain forcing. I’ve freely admitted these uncertainties:

      … each group independently tunes a handful of parameterizations controlling still-not-well-understood components of the climate, particularly the highly uncertain indirect effects aerosols have on cloud albedo (i.e. how strong a negative feedback X gigatons of aerosols constitutes). … GCMs can’t yet fully account for ENSO and other oscillations, need improved moist convection and cloud parameterizations, etc. … Decreasing the error bars further will require better understanding of cloud formation and aerosol interactions… The summary’s forcing chart clearly shows a huge, lopsided error bar on the cloud albedo effect, and lists the Level Of Scientific Understanding as “low”.

      I’ve also mentioned that Sir Bob Watson’s 2012 AGU talk addresses aerosol uncertainty. If you think aerosol uncertainty is an excuse to keep emitting massive quantities of CO2, you might want to watch Sir Watson’s talk. Or just think about speeding on an unfamiliar road when fog rolls in. Do you react to the added uncertainty of the fog by driving faster or slower?

    • One issue you didn’t cover which is relevant to your POV is the growing problem of climate sensitivity estimates being questioned and revised downward by many high-profile climate scientists like Michael Schlesinger. I have been following this closely since 2007 and have communicated with the MET Office directly, and I AM prepared to bet $10,000.00 that climate sensitivity estimates will be revised lower within 2 to 10 years. Since GCM scenarios depend on climate sensitivity as a key parameter, I feel that warming estimates out to the year 2100 will be revised downward, and so will rates of climate change progression. [Mike]

      Mike’s already betting the future of civilization on his feeling that mainstream estimates of climate sensitivity will be revised lower, so why worry about $10,000?

      Briefly, equilibrium climate sensitivity is how much the climate will eventually warm if CO2 is doubled. Here are some introductions. Scientists often discuss the “Charney sensitivity” which ignores slow feedbacks like permafrost, oceans, methane hydrates, ice sheets, vegetation, etc.
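
      As a back-of-the-envelope illustration (my numbers, not anything quoted in this thread), the widely used simplified expression from Myhre et al. 1998 shows why doubled CO2 is the standard benchmark:

      ```r
      # Radiative forcing of CO2 relative to a 280 ppm baseline, using the
      # simplified expression dF = 5.35 * ln(C/C0) W/m^2 (Myhre et al. 1998).
      co2_forcing <- function(C, C0 = 280) 5.35 * log(C / C0)

      co2_forcing(560)           # doubled CO2: about 3.7 W/m^2
      co2_forcing(400)           # roughly the 2013 level: about 1.9 W/m^2

      # A few illustrative sensitivities, spanning the range discussed below.
      for (ECS in c(1.5, 3.0, 4.5)) {
        cat(sprintf("ECS = %.1f C: sensitivity parameter = %.2f C per W/m^2\n",
                    ECS, ECS / co2_forcing(560)))
      }
      ```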

      [Figure: From RC: climate sensitivity]

      The IPCC estimates that the Charney sensitivity is very unlikely to be less than 1.5°C, and likely to be less than 4.5°C per doubled CO2, based on many studies. Knutti and Hegerl 2008 also reviewed estimates of the equilibrium climate sensitivity:

      [Figure: Various estimates of climate sensitivity]

      Paleoclimate estimates like Rohling et al. 2012 (PDF) and Royer et al. 2007 (PDF) conclude that “a climate sensitivity greater than 1.5°C has probably been a robust feature of the Earth’s climate system over the past 420 million years”.

      Perhaps Mike is referring to Ring et al. 2012 which Schlesinger co-authored. Here’s the abstract:

      “Measurements show that the Earth’s global-average near-surface temperature has increased by about 0.8°C since the 19th century. It is critically important to determine whether this global warming is due to natural causes, as contended by climate contrarians, or by human activities, as argued by the Intergovernmental Panel on Climate Change. This study updates our earlier calculations which showed that the observed global warming was predominantly human-caused. Two independent methods are used to analyze the temperature measurements: Singular Spectrum Analysis and Climate Model Simulation. The concurrence of the results of the two methods, each using 13 additional years of temperature measurements from 1998 through 2010, shows that it is humanity, not nature, that has increased the Earth’s global temperature since the 19th century. Humanity is also responsible for the most recent period of warming from 1976 to 2010. Internal climate variability is primarily responsible for the early 20th century warming from 1904 to 1944 and the subsequent cooling from 1944 to 1976. It is also found that the equilibrium climate sensitivity is on the low side of the range given in the IPCC Fourth Assessment Report.”

      Ironically, this is yet another study which concludes that human forcings are the primary cause of the observed warming since the 19th century.

      Remember that the Charney sensitivity ignores slow feedbacks like permafrost, oceans, methane hydrates, ice sheets, vegetation, etc. Let’s think about those, and see if scientists tend to err on the side of least drama.

      Zimov et al. 2006 notes that there’s more carbon in permafrost than in the atmosphere, and that thawing permafrost releases CO2 and methane:

      [Figure from Zimov et al. 2006]

      Sea level has risen faster than predicted, which is partially due to water expanding as it’s heated, even down to 2000 meters.

      The Arctic sea ice minimum extent and the Arctic snow cover extent in June are both shrinking faster than predicted.

      Methane releases from the East Siberian Arctic Shelf may be much larger and faster than anticipated (press release, summary).

      I’ve mentioned that the ice sheets of Greenland and West Antarctica are losing mass at an accelerating rate, faster than the ice sheet models predicted. West Antarctica is among the most rapidly warming regions on Earth, with an ice sheet that’s vulnerable to the warming oceans because it’s mainly grounded below sealevel. In July 2012, about 97% of the surface of Greenland melted, which is unprecedented in the satellite record.

      We once hoped that vegetation would provide a negative feedback by growing faster given more CO2. But wildfires appear more sensitive to global temperature than we thought, and they just release that stored carbon anyway.

      Note that Schlesinger said “…for argument’s sake, let’s suppose the [climate sensitivity] is larger than the values we determined… humanity must act sooner and more rapidly…”

      Given that the real Earth system sensitivity includes all these feedbacks while Schlesinger’s study doesn’t, humanity must act soon and rapidly.

      People who call themselves lukewarmers accept most mainstream climate science, but think climate sensitivity is low. Even if that were true, we still need to reduce CO2 emissions. Sadly, many self-proclaimed lukewarmers spread misinformation to downplay the climate crisis.

      On the other hand, Admiral David Titley and the Pentagon actually have to take responsibility for protecting people. Here’s what they said in the 2010 Quadrennial Defense Review: “Climate change will contribute to food and water scarcity, will increase the spread of disease, and may spur or exacerbate mass migration.”

      The Sky Dragon Slayers’ silly belief that climate sensitivity is exactly 0.0°C contradicts centuries of physics and even the existence of blankets. They’re the only ones who should oppose the reasonable risk avoidance suggested by Republicans Art Laffer and Bob Inglis:

      [Embedded media]

      And even the Sky Dragon Slayers should worry about ocean acidification because that doesn’t depend on climate sensitivity at all.

      It’s time for lukewarmers to decide whether they want to share the legacy of the Sky Dragon Slayers.

  3. I discovered your blog from comments you posted at “Watching The Deniers”. I have bookmarked it and will visit regularly, though will lurk more often than commenting. Thanks for clarifying the GRACE ICESat “controversy”. It is good to read the work of someone who understands the issues.

  4. Hi Dumbscientist,

    You confidently state that ” .. Before the Industrial Revolution in the 1800s, the concentration of carbon dioxide (CO2) in the atmosphere was about 280 parts per million (ppm) .. ” and that it hasn’t been as high as it is now for 15 million years.

    Are you sure of that? Accurate measurements of atmospheric CO2 didn’t start until the 19th century and those measurements contradict that claim (http://www.anenglishmanscastle.com/180_years_accurate_Co2_Chemical_Methods.pdf).

    Do you really believe that the air in snow flakes that are squashed over decades to eventually form ice sheets containing air bubbles retains its original composition throughout those decades, centuries and millennia that it is “trapped” in the ice?

    If you do then perhaps you can answer a question that has been puzzling me for several years. Why is it that CO2 molecules, being so much smaller than those of atmospheric gases such as N2, O2 and Ar, do not continue travelling down the pressure gradient from the firn deep in the ice sheet towards the surface long after those larger molecules are too big to escape from those air pockets.

    I put this question to ice-core “expert” Professor Eric Wolff on my thread “Another Hockey Stick Illusion” on the science forum of the University of Cambridge’s “Naked Scientists” project and his closing comment was ” .. I think that none of us has a definite molecular-level understanding of the physical process occurring at closeoff, and it would be great if someone can do the experiments in the lab to understand that better .. ” (http://www.thenakedscientists.com/forum/index.php?topic=38675.75).

    More on that topic can be found at “Molecular Fractionation in Ice” (http://globalpoliticalshenanigans.blogspot.co.uk/2010/12/smogbound-on-molecular-fractionation-in.html).

    Do you know something that Professor Wolff needs to be made aware of?

    Best regards, Pete Ridley (http://globalpoliticalshenanigans.blogspot.co.uk)

    • You confidently state that ” .. Before the Industrial Revolution in the 1800s, the concentration of carbon dioxide (CO2) in the atmosphere was about 280 parts per million (ppm) .. ” and that it hasn’t been as high as it is now for 15 million years. Are you sure of that?

      Atmospheric CO2 is now approaching 400 ppm. It hasn’t been this high for about 15 million years:

      “The highest estimates of pCO2 occur during the Mid-Miocene Climatic Optimum (MMCO; ~16 to 14 Ma), the only interval in our record with levels higher than the 2009 value of 387 ppmv. Climate proxies indicate the MMCO was associated with reduced ice volume and globally higher sea level (25 to 40 meters) (3), as well as warmer surface and deep-water temperatures (2, 20). These results are consistent with foraminiferal d11B data that indicate surface waters were more acidic ~20 Ma (12).” [Tripati et al. 2009]

      Update: See below.

      Accurate measurements of atmospheric CO2 didn’t start until the 19th century and those measurements contradict that claim.

      Wrong. You linked to E. G. Beck’s “180 years accurate CO2” which is unphysical nonsense. Graeme Bird also supports Beck’s nonsense, and a few other bizarre ideas.

      I put this question to ice-core “expert” Professor Eric Wolff on my thread “Another Hockey Stick Illusion” on the science forum of the University of Cambridge’s “Naked Scientists” project and his closing comment was ” .. I think that none of us has a definite molecular-level understanding of the physical process occurring at closeoff, and it would be great if someone can do the experiments in the lab to understand that better .. ”

      Note that Wolff’s next sentence was “But it won’t alter the empirical facts” (emphasis added):

      “My first post (353863) was intended to be a very clear and direct answer to your original question; additionally I have since contacted Professors Alley and Severinghaus: although I have not seen their correspondence with you, they both tell me that they felt they had already answered you, in similar terms, in private e-mails in the past. Given this, I am surprised that you are still claiming your question has not been answered. I will therefore make one closing attempt to be absolutely clear. Your specific question was:

      ‘why do paleo-climatologists use collision diameter in preference to kinetic diameter when considering the migration of air molecules through firn and ice?’.

      The problem that we are all having is that this is a false or loaded question: the implication elsewhere in your posts is that the idea that CO2 is unfractionated compared to N2 comes from authors making this assumption. In fact it is quite the other way round: the empirical evidence that CO2 is not fractionated on enclosure (as well as the observation that Ar is less fractionated than O2) is what led these authors to hypothesise that collision diameter was the controlling variable. So the specific answer is that they use collision diameter because this is what allows them to rationalise the data they observe.

      I think that none of us has a definite molecular-level understanding of the physical process occurring at closeoff, and it would be great if someone can do the experiments in the lab to understand that better. But it won’t alter the empirical facts.

      Incidentally it may be worth mentioning that, in their expts, Severinghaus and Huber used trace gases (such as Ar and O2) that are understood to have been invariant in concentration over recent decades, precisely so they could look at the diffusion and enclosure processes free from any assumptions about temporal change in concentration.

      You raised several other points in different posts, and I simply don’t have time to answer them all. … I will be happy to answer other well-formulated questions about ice cores, but I don’t think I have any more to add to this thread.”

      Do you know something that Professor Wolff needs to be made aware of?

      No, not really. Judging by his comments, Wolff and I are on the same page:

      “… This is a really direct and elegant measurement which shows that, at least at Law Dome, there is no fractionation of CO2 on enclosure.

      One can supplement this direct evidence:
      a) For pre-industrial ice, one gets the same concentration at several different sites in Antarctica, so the conclusion of no fractionation for Law Dome must be true of other sites also;
      b) One gets an excellent overlap between measurements in the atmosphere at South Pole, and measurements in enclosed bubbles of the same age air at Law Dome (Etheridge et al 1996, Fig 3) – the same concentrations and trends are seen.”

      Just like Wolff, I don’t think I have any more to add to this thread.

    • Newer evidence shows that CO2 levels in the mid-Pliocene (~3 million years ago) were about 400ppm, comparable to today’s.

      Tripati et al. 2009 and a few other papers claim that Pliocene CO2 didn’t exceed ~370 ppm. But the newer studies are probably using more proxy data, so I’ll change “about 15 million years” to “millions of years”. Note that this is still older than the human race. Also note that temperatures during the mid-Pliocene were ~3°C higher than today, with sea levels ~20 meters higher.

      • After reading Tripati et al. 2009 more closely, it looks like they didn’t reconstruct CO2 during the period from 5 to 3.5 million years ago.

        When I first read the paper, I thought they’d reconstructed CO2 over all of the last 20 million years. Sorry for the confusion.

  5. LOVO posted on 2013-04-05 at 21:39

    G’day, Lovo (who often comments on Cafe Whispers) here, having a bit of trouble with some merchants of doubt here ….. please Super DS can you leap a tall building and help :D

    • LOVO asked me to answer questions. I’m very busy so I probably won’t be able to follow up, but here’s my brief contribution:

      Joe claims the Northwest Passage was completely ice-free during summer in the Medieval Warm Period, but the provided link didn’t work for me. Regardless, the crash course LOVO just linked provides many references showing that the Arctic sea ice as a whole is smaller than it’s been in thousands of years.

      Joe disputes that the increase in CO2 is due to human emissions. But our emissions are ~100x larger than those from volcanoes; this can be verified by simple historical records and by isotope analyses. Also, volcanic CO2 wouldn’t use up atmospheric oxygen, but combustion of fossil fuels does use oxygen. And atmospheric oxygen has decreased as CO2 has increased over the last century.
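
      The arithmetic behind that comparison is simple enough to sketch. The round numbers below (fossil-fuel emissions, volcanic outgassing, and the carbon-per-ppm conversion) are my own ballpark figures for illustration, not values quoted in this thread.

      ```r
      # Order-of-magnitude mass balance for atmospheric CO2 (illustrative values).
      fossil_GtC_per_yr   <- 9      # fossil fuel + cement emissions, ~2010
      volcanic_GtC_per_yr <- 0.1    # volcanic outgassing
      GtC_per_ppm         <- 2.1    # carbon mass per ppm of atmospheric CO2

      fossil_GtC_per_yr / volcanic_GtC_per_yr   # ~90x, i.e. roughly 100x
      fossil_GtC_per_yr / GtC_per_ppm           # ~4 ppm/yr emitted...
      # ...versus the observed rise of ~2 ppm/yr: about half of our CO2 stays
      # airborne and the rest is taken up by the oceans and biosphere.
      ```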

      Joe also points out that satellites have only been around since 1979, complicating historical analyses of storm intensity. That’s why I found Grinsted et al. 2012 so fascinating, and discussed it in the comments of my article “Climate destabilization”. Short version: they used tide gauge records that extend back to the early 1900s to detect the storm surges of large hurricanes. They detected a statistically significant increase in large storm surges, with larger surges in hotter years.

      Mercucio disputes the fact that the world has continued to warm over the last 17 years. That’s only possible if one puts on self-imposed blinders by focusing only on surface temperatures, even though the warming oceans absorb ~90% of the extra heat trapped by our CO2. Even so, any noisy time series like surface temperatures has large statistical uncertainties when considering short timespans. So a more accurate description is that there hasn’t been a statistically significant change in the warming rate.

      I’ve repeatedly failed to explain the nuances of this common statistical misconception. Here’s my most recent attempt, where I calculated the statistical significance of the warming in the UAH satellite temperature dataset for different starting years (see the PDF graph).
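
      Here’s a schematic version of that calculation. The series is synthetic (a small trend plus AR(1) noise) standing in for the UAH monthly anomalies, and the intervals are plain OLS intervals, so a real calculation would widen them to account for autocorrelation.

      ```r
      # For each start year, fit a linear trend from that year to the end of the
      # record and report the slope with its (naive) 95% confidence interval.
      set.seed(3)
      t <- seq(1979, 2013, by = 1/12)
      y <- 0.014 * (t - 1979) + arima.sim(list(ar = 0.6), n = length(t), sd = 0.1)

      for (start in seq(1979, 2008)) {
        keep <- t >= start
        fit  <- lm(y[keep] ~ t[keep])
        ci   <- confint(fit)[2, ]                        # CI on slope, deg C/yr
        cat(sprintf("start %d: %+.3f [%+.3f, %+.3f] deg C/decade%s\n",
                    start, 10 * coef(fit)[2], 10 * ci[1], 10 * ci[2],
                    if (ci[1] > 0) "  (significant)" else ""))
      }
      ```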

  6. LOVO posted on 2013-04-06 at 00:18

    Thanks mate, ‘they’ seem to have disappeared since your little visit. I hope ‘they’ don’t come and pester you here like Peter Ridley did last time you visited down under. ….. again thanks Super DS :)

    • That didn’t last long. Mercucio stopped, but Joe’s epic Gish Gallop actually sped up. These “conversations” are so depressingly futile…

  7. the deduced anthropogenic trend can be corrected by adding the trend in the Residual to e*Anthro(t). It turns out that this trend deduced from the Adjusted temperature is not always the same as that in e*Anthro(t). This procedure of deducing the trend from the Adjusted temperature is actually not sensitive to the assumed form of Anthro(t), as long as it has a long-term trend: When the assumed Anthro(t) has too large a trend after 1978 compared to before 1978, for example, the Residual(t) will show a negative trend after 1978 and a positive trend before that time. See the example discussed in Zhou and Tung [2013]. Since we do not know a priori what the form of net anthropogenic forcing is because of its large uncertain tropospheric sulfate aerosol component, to be agnostic we used a linear function for Anthro(t) in the intermediate step as a “placeholder”. That was probably the source of the circular argument criticism from DS: “Tung and Zhou implicitly assumed that the anthropogenic warming rate is constant before and after 1950, and (surprise!) that’s what they found. This led them to circularly blame about half of global warming on regional warming.” It is important to note that the trend we were talking about is the trend of the Adjusted data, and not the presumed anthropogenic predictor.

    • That was copied from Dr. Tung’s response to my critique of his AMO paper.

      I repeatedly pointed out that neither the form of the anthropogenic regressor nor adding the residual back is the problem that concerns me. It’s the fact that warming the globe also warms the N. Atlantic, and that net anthropogenic warming was faster after 1950.
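
      To make the disagreement easier to follow, here’s a schematic R version of the regression step in question. Everything is synthetic: the true human warming is a 2nd order curve (as in Dikran Marsupial’s examples mentioned above), the anthropogenic regressor is the linear placeholder, and I’ve dropped the ENSO, volcanic, and solar regressors that the real analyses include. In the real data the AMO index itself contains some anthropogenic warming, which is the crux of the disagreement; this sketch only shows the mechanics.

      ```r
      # Regress temperature on a linear anthropogenic placeholder plus an AMO-like
      # index, remove the AMO term, and compare the post-1979 trend of the
      # "adjusted" data with the true post-1979 human trend.
      set.seed(4)
      yr         <- 1856:2011
      anthro     <- (yr - 1856) / diff(range(yr))          # linear placeholder
      amo        <- sin(2 * pi * (yr - 1856) / 70)         # idealized 70-yr oscillation
      true_human <- 0.8 * ((yr - 1856) / diff(range(yr)))^2
      temp       <- true_human + 0.2 * amo + rnorm(length(yr), sd = 0.1)

      fit      <- lm(temp ~ anthro + amo)
      adjusted <- temp - coef(fit)["amo"] * amo            # AMO contribution removed

      post <- yr >= 1979
      10 * coef(lm(adjusted[post] ~ yr[post]))[2]          # deduced trend, deg C/decade
      10 * coef(lm(true_human[post] ~ yr[post]))[2]        # true trend, deg C/decade
      ```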

  8. sidd posted on 2014-06-25 at 19:34

    forgive me for attempting to reach you here, but i have no email for you. Hopefully this will get through.

    I understand you are familiar with the GRACE results. Here is something i am wondering about. The following text is a comment i have posted on realclimate, arctic sea ice forum and now here:

    Perhaps you might enlighten me:

    At

    http://polarportal.dk/en/groenlands-indlandsis/nbsp/total-masseaendring/

    i see a graph of GRACE derived GIS mass loss through early 2014. Shocking, to me, is the almost nonexistent annual drop in summer 2013, as opposed to a pronounced fall every other summer for the period covered (2003 on). Why ?

    sidd

    • No problem. Short answer: I don’t know.

      Longer answer: This only seems unusual because Greenland mass has shown little inter-annual variability since GRACE was launched. Compared to West Antarctica, Greenland is much better represented by a trend + acceleration + annual + semi-annual fit. Except for 2013, as you point out.
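
      For anyone curious, that kind of fit is just a multiple regression; the sketch below uses a synthetic monthly mass series (made-up values in Gt) standing in for the GRACE Greenland solution.

      ```r
      # Fit trend + acceleration + annual + semi-annual terms to a synthetic
      # monthly mass series (illustrative values, Gt).
      set.seed(5)
      t    <- seq(2003, 2013, by = 1/12)
      mass <- -150 * (t - 2003) - 10 * (t - 2003)^2 +
               80 * sin(2 * pi * t) + rnorm(length(t), sd = 30)

      fit <- lm(mass ~ I(t - 2003) + I((t - 2003)^2) +
                  sin(2 * pi * t) + cos(2 * pi * t) +    # annual cycle
                  sin(4 * pi * t) + cos(4 * pi * t))     # semi-annual cycle
      coef(fit)                                          # Gt/yr, Gt/yr^2, seasonal terms
      ```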

      This reminds me of the record low Arctic sea ice minimum in 2012, followed by an unremarkable 2013 melt season. Similarly, the Greenland mass loss in 2012 was larger than any previous year. The “regression to the mean” in Greenland mass during 2013 was more severe than with Arctic sea ice but I can’t help but wonder if they’re connected.

      Inter-annual variability is harder to analyze than long-term trends, but perhaps this event will help us determine which factors influence Greenland’s mass loss. Were ocean waters at the bases of the major outlet glaciers colder than usual? Have those outlet glaciers decelerated because of internal dynamics, e.g. their grounding lines interacting with the underlying topography? Was the weather exceptionally cloudy or cold? Was less soot deposited on the snow? Alternatively, did mass loss remain constant but get offset by anomalously large amounts of snowfall accumulation in the interior? Etc.

      • I just noticed RaenorShine’s answer to your question, which is that GRACE was shut off during August and September to avoid stressing the batteries. This is indeed a worsening problem with GRACE, and apparently it caused us to miss most of the 2013 melt season.

        I didn’t notice this at first because I’m still working with a GRACE dataset that ends in early 2013, but PO.DAAC confirms that GRACE was shut off during August/September 2013.

        That’s kind of a shame. I was looking forward to seeing which influence (above) correlated most strongly with the missing melt season. Darn 12-year-old batteries.

  9. Micky posted on 2015-03-01 at 06:20

    Climate change is a sad reality of our world today, and I don’t know if we regular people can do much to change this.
    It is good that presidents like Barack Obama or Francois Hollande are fighting against climate change and that many world governments are investing in renewable energy.
    Renewables are the only source of clean energy that can save us from burning more fossil fuels.
    http://www.alternative-energies.net/a-few-solutions-to-fight-climate-change-in-2015/

  10. Dan posted on 2018-12-05 at 13:35

    Sadly, the effects of global warming and climate change can be felt today all over the planet.
    As an example, in November 2018 (in the Northern Hemisphere) we had summer temperatures during the day (between 20 and 25 C).
    At the end of November, the temperatures dropped to -13 C for a couple of days during the night and to between -6 and 09 C during the day.
    Now at the start of December we are back to warmer temperatures: between 9 and 12 C during the day and 1 C during the night, so no freezing.
    It seems that we all need to learn how to lower our carbon footprint by reading the article https://www.alternative-energies.net/how-to-stop-global-warming-climate-change/ as this is the only way to help nature.

.