GreenMageddon and The Coming Economic Apocalypse

GreenMageddon is no hyperbole. It is the virtually certain outcome of attempting to purge CO2 emissions from a modern energy system and economy that literally breathes and exhales fossilized carbon. Indeed, the very idea of converting today’s economy to an alternative energy respiratory system is so far beyond rational possibility as to defy common sense. Yet that is exactly where the COP26 powers that be and their megaphones in the global corporate media are leading us.


With the current and ubiquitous Climate Change hysteria and the recent COP26 conference, it’s not too soon to start clanging the alarm bells—not about climate catastrophe, of course, but about the stupidest act of the assembled nations since Versailles, when the vindictive WWI victors laid the groundwork for the catastrophes of depression, WWII, the Holocaust, Soviet tyranny, the Cold War and Washington’s destructive global hegemony, all of which followed hard upon one another.

Politicians and their allies in the mainstream media, think tanks, lobbies and Big Business (with its cowardly sleep-walking leaders) are fixing to do nothing less than destroy the prosperity of the world and send global life careening into a modern economic Dark Ages. And worse still, it’s being done in the service of a bogus climate crisis narrative that is thoroughly anti-science and wholly inconsistent with the actual climate and CO2 history of the planet.

Cutting to the chase, during the past 600 million years, the earth has rarely been as cool as at present, and almost never has it had as low CO2 concentrations as the 420 ppm level that today’s climate howlers decry.

In fact, according to the careful reconstructions of actual earth scientists who have studied ocean sediments, ice cores and the like, there have been only two periods, encompassing about 75 million years (13% of that immensely long 600-million-year stretch of time), when temperatures and CO2 concentrations were as low as at present. These were the Late Carboniferous/Early Permian time from 315 to 270 million years ago and the Quaternary Period of the last 2.6 million years, which has hosted modern man.

You might say, therefore, that the possibility of a warmer, CO2-richer environment is a case of planetary “been there, done that”. And it is most certainly not a reason to wantonly dismantle and destroy the intricate, low-cost energy system that is the root source of today’s unprecedented prosperity and human escape from poverty and want.

But that’s hardly the half of it. What actually lies smack in the center of our warmer past is a 220-million-year interval from 250 million years ago through the re-icing of Antarctica about 33 million years ago that was mainly ice-free.

As shown by the blue line in the chart below, during most of that period (highlighted in the brown panels), temperatures were up to 12 degrees C higher than at present, and Mother Earth paid no mind to the fact that she lacked polar ice caps or suitable habitats for yet-unevolved polar bears.

Global Temperature And Atmospheric CO2 Over Geologic Time

Image 1.gif

As it happened, during what has been designated as the Mesozoic Age, the planet was busy with another great task, namely, salting away the vast deposits of coal, oil and gas that power the modern economy and allow billions of people to have a living standard enjoyed only by kings just a few centuries ago.

There is no mystery as to how this serendipitous gift to present-day man happened. In a world largely bereft of ice and snow, the oceans were at vastly higher levels and flooded much of the landmass, which, in turn, was verdant with plant and animal life owing to warmer temperatures and abundant rainfall.

Stated differently, Mother Nature was harvesting massive amounts of solar energy in the form of carbon-based plant and animal life, which, over the eons of growth and decay, resulted in the build-up of vast sedimentary basins. As the tectonic plates shifted (i.e., the single continent of Pangaea broke up into its modern continental plates) and the climates oscillated, these sedimentary deposits were buried under shallow oceans and, with the passage of time, heat and pressure, were converted into the hydrocarbon deposits that dot the first 50,000 feet (at least) of the earth’s crust.

In the case of coal, the most favorable conditions for its formation occurred 360 million to 290 million years ago during the Carboniferous (“coal-bearing”) Period. However, lesser amounts continued to form in some parts of the Earth during subsequent times, in particular, the Permian (290 million to 250 million years ago) and throughout the Mesozoic Era (250 million to 66 million years ago).

Likewise, the formation of petroleum deposits began in warm shallow oceans, where dead organic matter fell to the ocean floors. These zooplankton (animals) and phytoplankton (plants) mixed with inorganic material that entered the oceans via rivers. It was these sediments on the ocean floors that, buried under eons of heat and pressure, formed petroleum source rocks. That is to say, the energy embodied in petroleum initially came from sunlight, which had become trapped in chemical form in dead plankton.

Moreover, the science behind this isn’t a matter of academic armchair speculation, for the simple reason that it has been powerfully validated in the commercial marketplace. That is, trillions of dollars have been deployed in the last century in the search for hydrocarbons, based on immensely complicated petroleum engineering research, theory and geologic models. Oil drillers weren’t throwing darts at a wildcatter’s wall; they were, in the process, proving that these “facts” of climate history are correct, given that they led to the discovery and extraction of several trillion BOEs (barrels of oil equivalent).

Consequently, it is solidly estimated by industry experts that today’s petroleum deposits were roughly formed as follows:

  • About 70% during the Mesozoic Era (brown panels, 252 to 66 million years ago), which was marked by a tropical climate, with large amounts of plankton in the oceans;
  • 20% during the drier, colder Cenozoic Era (the last 66 million years);
  • 10% during the earlier, warmer Paleozoic Era (541 to 252 million years ago).

Indeed, at the end of the day, petroleum engineering is rooted in climate science because it was climate itself that produced those economically valuable deposits.

And a pretty awesome science it is. After all, billions of dollars have been pushed down the wellbores in up to two miles of ocean waters and 40,000 feet below the surface in what amounts to an amazingly calibrated and targeted search for oil-bearing needles in a geologic haystack.

For instance, the Cretaceous Period from 145 million to 66 million years ago, which was especially prolific for oil formation, was a period with a relatively warm climate, resulting in high open sea levels and numerous shallow inland seas. These oceans and seas were populated with now-extinct marine reptiles, ammonites and rudists, while dinosaurs continued to dominate on land. And it is knowing this science that permits multi-billion barrel hydrocarbon needles to be found in the earth’s vasty deep.

Needless to say, the climate warmed sharply during the Cretaceous, rising by about 8 degrees C, and eventually reached a level 10 degrees C warmer than today’s on the eve of the asteroid-driven Great Extinction Event of 66 million years ago. As shown in the graph below, at that point, there were no ice caps at either pole, and Pangaea was still coming apart at the seams–so there was no circulating ocean conveyor system in the infant Atlantic.

Yet during the Cretaceous, CO2 levels actually went down while temperatures were rising sharply. That’s the very opposite of the Climate Alarmists’ core claim that it is rising CO2 concentrations which are currently forcing global temperatures higher.

Moreover, we are not talking about a marginal reduction in CO2 concentrations in the atmosphere. Levels actually dropped sharply from about 2,000 ppm to 900 ppm during that 80 million year stretch. This was all good for hydrocarbon formation and today’s endowment of nature’s stored work, but it was also something more.

To wit, it was yet another proof that planetary climate dynamics are far more complicated and ridden with crosscurrents than the simple-minded doom loops now being used to model future climate states from the current far lower temperature and CO2 levels.

As it happens, during the periods since the Great Extinction Event 66 million years ago, both vectors have steadily fallen; CO2 levels continued to drop to the 300–400 ppm of modern times, and temperatures dropped another 10 degrees Celsius.

Image 2.jpg

It is surely one of the great ironies of our times that today’s fanatical crusades against fossil fuels are being carried out with not even a nod to the geologic history that contradicts the entire “warming” and CO2 concentration hysteria, and that made present energy consumption levels and efficiencies possible.

That is to say, the big, warm and wet one (the Mesozoic) got us here. True global warming is not the current and future folly of mankind; it is the historical enabler of present-day economic blessings. Yet, here we are on the eve of COP26, manically focused on reducing emissions to the levels required to keep global temperatures from rising more than 1.5 degrees Celsius from preindustrial levels.

Then again, exactly which pre-industrial level might that be?

We will address the more recent evolution, including the Medieval Warm Period and the Little Ice Age in Part 2, but suffice it to say that the chart below reflects broadly accepted geologic science. Still, we are hard-pressed—even with the aid of a magnifying glass—to see any time in the last 66 million years in which the global temperatures weren’t a lot higher than 1.5 degrees Celsius above current levels—even during much of the far-right margin labeled the “Pleistocene Ice Age” of the past 2.6 million years.

If your brain is not addled by the climate change narrative, the very term rings a resoundingly loud bell. That’s because there have been on the order of 20 distinct “ice ages” and interglacial warming periods during the Pleistocene, the latest of which ended about 18,000 years ago and from which we have been digging out ever since.

Of course, the climb away from retreating glaciers in Michigan, New England, northern Europe, etc. to warmer, more hospitable climes has not been continuously smooth, but rather a syncopated sequence of advances and retreats. Thus, it is believed that the world got steadily warmer until about 13,000 years ago, when that progress was interrupted by the Younger Dryas, a time when the climate became much drier and colder, causing the polar ice caps to re-expand and ocean levels to drop by upwards of 100 feet as more of the earth’s fixed quantity of water was locked back into the ice packs.

After about 2,000 years of retreat, however, and with no help from the humans who had repaired to cave living during the Younger Dryas, the climate system swiftly regained its warming mojo. About 8,000 years ago, during the subsequent run-up to what the science calls the Holocene Optimum, global temperatures rose by upwards of 3 degrees Celsius on average and up to 10 degrees Celsius in the higher latitudes.

And it happened quite rapidly. One peer-reviewed study showed that in parts of Greenland, temperatures rose 10°C (18°F) in a single decade. Overall, scientists believe that half of the rebound from the “ice age” conditions of the Younger Dryas may have occurred in barely 15 years. Ice sheets melted, sea levels rose, forests expanded, trees replaced grass and grass replaced desert—all with startling alacrity.

In contrast to today’s climate models, Mother Nature clearly did not go off the rails in some kind of linear doomsday loop of ever-increasing temperatures and without any hectoring from Greta, either. Actually, Greenland got all frozen up and thawed several more times thereafter.

Needless to say, the Holocene Optimum 8,000 years ago is not the “preindustrial” baseline from which the Climate Howlers are pointing their phony hockey sticks. In fact, other studies show that, even in the Arctic, it was no picnic time for the polar bears. Among 140 sites across the western Arctic, there is clear evidence for conditions that were warmer than now at 120 sites. At 16 sites for which quantitative estimates have been obtained, local temperatures were on average 1.6 °C higher during the optimum than they are today.

Say what? Isn’t that the same +1.6 degrees C above current levels that the COP26 folks are threatening to turn off the lights of prosperity to prevent?

In any event, what did happen was far more beneficent. To wit, the warmer and wetter Holocene Optimum and its aftermath gave rise to the great river civilizations 5,000 years ago, most notably those of the Yellow River in China, the Indus River in the Indian subcontinent, the Tigris-Euphrates and the Nile.

Stated differently, that +1.6 degrees C was reflective of the climate-based catalyzing forces that actually made today’s world possible. From the abundances of the river civilizations, there followed the long march of agriculture and the economic surpluses and abundance that enabled cities, literacy, trade and specialization, advancement of tools and technology and modern industry—the latter being the ultimate human escape from a life based on the back muscles of man and his domesticated animals alone.

At length, the quest for higher and higher industrial productivity spurred the search for ever-cheaper energy, even as intellectual, scientific and technological advances which flowed from these civilizations led to the rise of a fossil fuel-powered economy based on energy companies harvesting the condensed and stored solar BTUs captured by Mother Nature during the planet’s long warmer and wetter past.

In a word, what powers prosperity is ever more efficient “work,” such as moving a ton of freight by a mile or converting a kilogram of bauxite into alumina or cooking a month’s worth of food. Alas, during the roughly 220 million mainly ice-free years bracketing the Mesozoic, the planet itself accomplished one of the greatest feats of “work” ever known: namely, the conversion of massive amounts of diffuse solar energy into the high-density BTU packages embodied in coal, oil and gas-based fuels.

As it happens, when one of the previous “preindustrial” warming eras (the Roman Warming) was coming to an end in the late 4th century AD, St. Jerome admonished the faithful “never look a gift horse in the mouth.”

Yet that’s exactly what the assembled nations will be doing at COP26.
Image 3.png


The assembled governments of the world meeting in Glasgow for COP26 are fixing to declare war on the backbone of modern economic life and the abundance and relief from human poverty and suffering with which it has gifted the world. We are referring, of course, to its agenda to essentially drive fossil fuels—which currently make up 80% of BTU consumption—from the global energy supply system over the next several decades.

All of this is being done in the name of preventing global temperatures from rising by 1.5 degrees Celsius above “pre-industrial” levels.

But when it comes to the crucial matter of exactly which pre-industrial baseline level, you can see the skunk sitting on the woodpile a mile away. That’s because, as we showed in part 1, global temperatures have been higher than the present—often by upward of 10–15 degrees Celsius—for most of the past 600 million years!

Moreover, during the more recent era since the great extinction event 66 million years ago, the decline in temperatures has been almost continuous, touching lower than current levels only during the 100,000-year glaciation cycles of the last 2.6 million years of the Pleistocene ice ages. Not surprisingly, therefore, the Climate Howlers have chosen to ignore 599,999,830 of those years in favor of the last 170 years (since 1850) alone.
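The lopsidedness of that sampling window is easy to check with a back-of-the-envelope calculation (a toy illustration, assuming a 600-million-year record and the roughly 170 years since 1850):

```python
# Back-of-the-envelope: what share of a 600-million-year geologic record
# does the instrumental era since 1850 (about 170 years) represent?
record_years = 600_000_000
modern_years = 170

share = modern_years / record_years
print(f"{share:.8%}")  # about 0.00002833% of the record
```

In other words, the post-1850 charts sample well under a millionth of the record discussed above.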

They actually put old William Jennings Bryan of the Scopes Trial to shame. At least he thought the world was 6,000 years old!

Still, the juxtaposition of the temperature record of the last 66 million years and the sawed-off charts of the climate alarmists tells you all you need to know: to wit, they have simply banished all the “inconvenient” science from the narrative.

Global Temperature Trend during the Past 65 Million Years (+/− in degrees Celsius from the present)
Image 1.png
Global Average Temperature Trend, 1850–2018 per the Global Warming Narrative

Image 2.png

Needless to say, there is a reason why they start the graphs in 1850, and it is not just because it was the tail-end of the Little Ice Age (LIA), from which low point the temperature trend might well climb upwards for a time as climatic conditions normalized.

Actually, the intellectual deception is far more egregious. To wit, the Climate Howlers want you to believe the absolutely anti-scientific notion that the global climate was in general equipoise until the coal barons and the John D. Rockefellers of the mid-19th century set off a dangerous chain of climate dysfunction as they brought the stored solar energy embedded in coal and petroleum to the surface and released its combustion by-products—especially CO2—into the ambient air.

The Risible Myth Of Climate Equipoise

The global warming narrative is the most risible manifestation yet of the modern leap into self-righteous disregard for evidence, logic, and plausibility. For when you step back from the shrill, sanctimonious narrative that passes for the global warming catechism, the ridiculousness of its central claim that industrial society is destroying the climatic equipoise of the planet is self-evident.

For crying out loud, there has never been equipoise!

What there’s been is 4.5 billion years of wildly oscillating and often violent geologic evolution and climate disequilibrium owing to manifold natural causes, including:

  • plate tectonics that has sometimes violently impacted climate systems, especially the assembly and breakup of Pangaea between 335 million and 175 million years ago, and the continuous drift of the present-day continents thereafter;
  • asteroid bombardments;
  • the 100,000-year cycles of the Earth’s orbital eccentricity (it gets colder when the orbit is at maximum eccentricity);
  • the 41,000-year cycles of the Earth’s tilt on its axis, which oscillates between 22.1 and 24.5 degrees and thereby impacts the level of solar intake;
  • the wobble or precession of the earth’s rotation which impacts climate over the course of its 26,000 year cycles;
  • the recent 100,000-year glaciation and inter-glacial warming cycles;
  • the roughly 1,500-year solar activity cycles, in which earth temperatures fall during solar minimums like the Maunder Minimum of 1645–1715, at the extreme of the LIA, when sunspot activity virtually ceased.
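For illustration only, the three orbital cycles above can be superposed as simple sinusoids. This toy sketch (with arbitrary, assumed amplitudes; it is not a climate model) shows how even purely periodic forcings with different cycle lengths combine into an irregular, never-settling signal:

```python
import math

# Toy superposition of the three orbital cycles listed above:
# eccentricity (~100,000 yr), axial tilt (~41,000 yr), precession (~26,000 yr).
# Amplitudes are arbitrary assumptions chosen for illustration only.
CYCLES = [(100_000, 1.0), (41_000, 0.6), (26_000, 0.3)]  # (period in years, amplitude)

def composite_forcing(year: float) -> float:
    """Sum of sinusoids with the periods above, in arbitrary units."""
    return sum(amp * math.sin(2 * math.pi * year / period) for period, amp in CYCLES)

# Sampling every 5,000 years over half a million years yields a signal
# that swings back and forth without repeating any fixed pattern.
samples = [composite_forcing(t) for t in range(0, 500_000, 5_000)]
print(round(min(samples), 2), round(max(samples), 2))
```

The point of the sketch is simply that the composite never sits still: equipoise is not the natural state even of a clockwork system, let alone of the real climate.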

The natural climate change now underway is, therefore, the product of powerful planetary forces that long predated the industrial age and which massively exceed the impact of industrial era emissions. As we indicated in Part 1, that the present confluence of these forces has resulted in a warming cycle is nothing new—warming has happened repeatedly even in modern times.

These modern warmings include the previously discussed Holocene Climate Optimum (5000 to 3000 BC); the Roman Warming (200 BC to AD 500); and, most recently, the Medieval Warm Period (AD 1000-1300).

Contrary to the false claims of the Climate Howlers,

  • Current mildly rising temperatures are in keeping with the historical truth that warmer is better for humanity and most other species, too;
  • Continued planetary equipoise requires no interventions whatsoever by the state to retard the use of prosperity-fostering fossil fuels or to subsidize and accelerate the adoption of high-cost renewable energy.

So the question recurs. What “pre-industrial” temperature baseline can be picked out of all these eras and all these climate change forces that would be anything but an arbitrary political, not science-based, choice?

After all, the science is agnostic. Mother Earth has weathered every kind of climate disequilibrium at both the colder and warmer ends of the spectrum and, crucially, experienced the eventual release of countervailing forces that took both temperature and CO2 levels back in the other direction.

We think the planet’s climatic resilience is especially evident in the fact that, after five major ice ages, warming forces returned with robust energy until they reversed again, thereby proving there is no doomsday loop that leads in linear fashion to inexorable catastrophe as is embedded in the climate models.

There have naturally been extended periods of global warming in between these ice ages, but the last three of the five listed below are of special significance. They all occurred during the last 600 million years, a stretch of generally much hotter temperatures and CO2 concentrations 2–6 times higher than current readings.

That is to say, the last three ice ages prove better than anything else that the planet’s subsequent warming cycles have been self-limiting and self-correcting. If that were not true, the earth would have been boiling into perdition eons ago:

  • Huronian (2.4–2.1 billion years ago),
  • Cryogenian (850–635 million years ago),
  • Andean-Saharan (460–430 million years ago),
  • Karoo (360–260 million years ago),
  • Quaternary (2.6 million years ago–present).

As we indicated in part 1, regarding the most recent Quaternary era, the last glacier retreat gathered warming steam about 14,000 years ago until it was interrupted by a sudden cooling at about 10,000–8500 BC, known as the aforementioned Younger Dryas. The warming resumed by 8500 BC.

By 5000 to 3000 BC, average global temperatures reached their maximum level during the Holocene Optimum and were 1 to 2 degrees Celsius warmer than they are today.

As we noted, during the Holocene Optimum, many of the Earth’s great ancient civilizations began and flourished because conditions were especially hospitable for agriculture and the generation of economic surpluses. The Nile River, for instance, had an estimated three times its present volume, indicating a much larger tropical region. In fact, 6,000 years ago, the Sahara was far more fertile than today and supported large herds of animals, as evidenced by the Tassili N’Ajjer frescoes of Algeria.

That is to say, warmer and wetter was far better for mankind than prior bouts of cold.

Nevertheless, from 3000 to 2000 BC, a renewed cooling trend occurred. This cooling caused large drops in sea level and the emergence of many islands (the Bahamas, for example) and coastal areas that are still above sea level today.

A short warming trend took place from 2000 to 1500 BC, bringing with it the renewal of the Egyptian dynasties, followed once again by colder conditions from 1500 to 750 BC. This caused renewed ice growth in European continental glaciers and alpine glaciers, and a sea-level drop of between 2 and 3 meters below present-day levels. Incidentally, that period is also known as the Greek Dark Ages and preceded the flowering of Greek and Roman civilizations.

The period from 750 BC to AD 800 brought a general warming trend, but it was not as strong as the Holocene Optimum. During the time of the Roman Empire, in fact, a cooling began that intensified after AD 600 and resulted in a renewed dark age that lasted until about AD 900.

During the AD 600–900 Dark Ages, global average temperatures were significantly colder than they are today. From writings of the time, we know that at its height, the cooling caused the Nile River (AD 829) and the Black Sea (AD 800–801) to freeze.

Thereafter came the crucial Medieval Warm Period from AD 1000 to 1300. As shown in the chart below, temperatures were at or above current readings during most of the period, which saw a rejuvenation of economic life, trade, and civilization in Europe.

Indeed, prior to the post-1850 warming, there had been five distinct warming periods (red areas) with temperatures above current levels since the last glaciers retreated. Never, of course, does this chart see the light of day in the mainstream climate change narrative.

Image 3.gif

Also, during this period, the Vikings established settlements in Iceland and Greenland. Long before the industrial era, Greenland was so warm, wet, and fertile that major colonization occurred after AD 980. At its peak, it included upward of 10,000 settlers, extensive farming, numerous Catholic churches, and a parliament that eventually voted for union with Norway.

So, obviously, the Vikings named their settlement Greenland not because they were color blind but because it was genuinely hospitable to human settlement.

As another measure of comparison, studies show that the snow line in the Rocky Mountains was about 370 meters above current levels (it was warmer then than today).

Thereafter, the climate trend again reversed in the colder direction. There are ample records from around the world of floods, great droughts, and extreme seasonal climate fluctuations up to the 1400s. Horrendous floods devastated China in 1332 (reported to have killed several million people).

Likewise, by the 14th century, the Viking colony was lost to sea ice expansion, and the growing season got ever shorter, thereby undermining the economic viability of these farming settlements. Food eventually got so scarce that the remaining settlers’ last winter turned out to be one of rampant cannibalism, as archeologists have documented with respect to the remains of the settlement pictured below.

As we said, warmer is better for mankind!

Image 4.jpg

Nor was the reversal from the hospitable climate of the Viking-era settlements in Greenland merely a regional anomaly, as some Climate Howlers have claimed. During the Medieval Warm Period, great civilizations flourished in many other areas, which then became uninhabitable.

For instance, a great drought occurred in the American southwest, and grand settlements like those in Chaco Canyon and Mesa Verde were abandoned. Tree-ring analysis has identified a period of virtually no rain in these areas between 1276 and 1299.

Needless to say, these extreme weather perturbations were not caused by industrial activity because there was none, and they occurred during a period when it was getting colder, not warmer!

From AD 1550 to 1850, global temperatures were at their coldest since the beginning of the Holocene 12,000 years ago. Hence the designation of this period as the Little Ice Age (LIA).

In Europe, glaciers came down the mountains, thereby covering houses and villages in the Swiss Alps while canals in Holland froze for three months straight, a rare occurrence before or after. Agricultural productivity also dropped significantly, even becoming impossible in parts of northern Europe. The cold winters of the Little Ice Age were famously recorded in Dutch and Flemish paintings, such as Hunters in the Snow by Pieter Bruegel (c. 1525–69).

From 1580 to 1600, the western part of the United States also experienced one of its longest and most severe droughts in the last 500 years. Cold weather in Iceland from 1753 to 1759 caused 25% of the population to die from crop failure and famine. Newspapers in New England called 1816 “the year without a summer.”

Self-evidently, when the LIA finally ended around 1850, global temperatures were at a modern nadir (no wonder the Climate Howlers start their charts in the middle of the 19th century).

But the significance of this fact goes well beyond cropping the temperature charts at 1850. In order to erase the above-described oscillations of the modern climate, climate change advocates have gone so far as to attempt to airbrush them out of existence altogether.

We are referring to what we call the climate “Piltdown Mann,” named for one Michael Mann, a newly minted Ph.D. (1998) who became the Intergovernmental Panel on Climate Change’s (IPCC) lead investigator and advocate for what famously became the “hockey stick” proof of global warming.

The latter, of course, was the blatant fraud embedded in the image that Al Gore made famous in his propagandistic movie “An Inconvenient Truth” in 2006. Suffice it to say, the purpose of the hockey stick was to wipe out all the evidence summarized above.

That is, in lieu of the planet’s long-term and recent severe climate oscillations, the IPCC posited an entirely opposite thesis. Namely, for the pre-industrial millennium before 1900, global temperatures were nearly as flat as a board.

Accordingly, only when the industrial age got a head of steam and reached full force after 1950 did today’s warming temperatures first appear, or so it was alleged. The suggestion, of course, was that an uncontrolled temperature breakout to the upside was well underway and that a planetary disaster was just around the corner.

The only problem is that Mann’s graph was as phony as the Piltdown Man itself—the latter famously being confected in England in 1912 and conveniently “discovered” by an amateur anthropologist who claimed it was the missing link in human evolution. At length, it was shown that the fossil was a forgery; it consisted of a modern human cranium and an orangutan jaw with filed-down teeth.

Image 5.jpg

In the case of the graph, Professor Mann and his accomplices at the IPCC doctored the evidence, used misleading data from southwestern US tree rings in lieu of abundant alternative data showing the contrary, and jiggered their computer models to generate pre-specified results.

That is, the models were produced by goal-seeking on the part of Mann and his associates to prove the man-made warming thesis. In essence, this was accomplished by simply pasting modern temperature records showing steady increases on top of a pre-industrial baseline that never happened.

The phony pre-industrial baseline is depicted by the yellow area in the graph for the period 1400–1900. The hockey stick-like eruption of the yellow space after 1900, of course, allegedly depicts the man-made temperature rise since the onset of the hydrocarbon age.

By contrast, the corrected version is in blue. In this version—which comports with the history of climate oscillations cited above—there is no hockey stick because the shaft never happened; it was invented by computer model manipulations, not extracted from the abundant scientific data on which the Mann study was allegedly based.

Image 6.gif

So the question is answered. The mid-19th century is exactly the wrong baseline from which to measure global temperature change during modern times.

The blue area of the chart, in fact, is the smoking gun that obliterates the whole predicate on which COP26 is being foisted upon the everyday people of the world.


The geological and paleontological evidence overwhelmingly says that today’s average global temperature of about 15 degrees C and CO2 concentrations of 420 ppm are nothing to fret about. Even if they rise to about 17–18 degrees C and 500–600 ppm by the end of the century, that may well, on balance, improve the lot of mankind.

After all, bursts of civilization during the last 10,000 years uniformly occurred during the red portions of the graph below: the aforementioned river civilizations, the Minoan, the Greco-Roman era, the Medieval flowering, and the industrial and technological revolutions of the present era. At the same time, the several lapses into the dark ages happened when the climate turned colder (blue).

And that’s only logical. When it’s warmer and wetter, growing seasons are longer, and crop yields are better—regardless of the agricultural technology and practices of the moment. And it’s better for human and community health, too—most of the deadly plagues of history have occurred in colder climates, such as the Black Death of 1347–1351.

Image 1.gif

Yet, the Climate Crisis Narrative shitcans this massive body of “the science” by means of two deceptive devices that invalidate the entire Anthropogenic Global Warming (AGW) story.

First, it ignores the entirety of the planet’s pre-Holocene history (everything before the last 10,000 years), even though the science shows that for more than 50% of the past 600 million years, global temperatures stood at around 25 degrees C, some 67% higher than current levels and far beyond anything projected by the most unhinged climate models today. But, crucially, the planetary climate system did not go into a doomsday loop of scorching meltdown—warming was always checked and reversed by powerful countervailing forces.

Even the history the alarmists do acknowledge has been grotesquely falsified. As we showed in Part 2, the so-called hockey stick of the past 1000 years, in which temperatures were flat until 1850 and are now rising to allegedly dangerous levels, is a complete crock. It was fraudulently manufactured by the IPCC (Intergovernmental Panel on Climate Change) to cancel the fact that temperatures in the pre-industrial world of the Medieval Warm Period (AD 1000–1200) were actually higher than at present.

Secondly, it is falsely claimed that global warming is a one-way street in which rising concentrations of greenhouse gases (GHGs) and especially CO2 are causing the Earth’s heat balance to continuously increase. The truth, however, is that higher CO2 concentrations are a consequence and by-product, not a driver and cause, of the current naturally rising temperatures.

Again, the now “canceled” history of the planet knocks the CO2-driver proposition into a cocked hat. During the Cretaceous Period between 145 and 66 million years ago, a natural experiment provided complete absolution for the vilified CO2 molecule. During that period, global temperatures rose dramatically from 17 degrees C to 25 degrees C—a level far above anything today’s Climate Howlers have ever projected.

Alas, CO2 wasn’t the culprit. According to science, ambient CO2 concentrations actually tumbled during that 80-million-year expanse, dropping from 2,000 ppm to 900 ppm on the eve of the Extinction Event 66 million years ago.

You would think that this powerful countervailing fact would give the CO2 witch-hunters pause, but that would be to ignore what the whole climate change brouhaha is actually about. That is, it’s not about science, human health and well-being or the survival of planet Earth; it’s about politics and the ceaseless search of the political class and the apparatchiks and racketeers who inhabit the beltway for still another excuse to aggrandize state power.

Indeed, the climate change narrative is the kind of ritualized policy mantra that is concocted over and over again by the political class and the permanent nomenklatura of the modern state—professors, think-tankers, lobbyists, career apparatchiks, officialdom—in order to gather and exercise state power.

To paraphrase the great Randolph Bourne, inventing purported failings of capitalism—such as a propensity to burn too much hydrocarbon—is the health of the state. Indeed, fabrication of false problems and threats that purportedly can only be solved by heavy-handed state intervention has become the modus operandi of a political class that has usurped near-complete control of modern democracy.

In doing so, however, the ruling elites have gotten so used to such unimpeded success that they have become sloppy, superficial, careless and dishonest. For instance, the minute we get a summer heatwave, these natural weather events are jammed into the global warming mantra with nary a second thought by the lip-syncing journalists of the MSM.

Yet there is absolutely no scientific basis for all this tom-tom beating. In fact, NOAA publishes a heatwave index based on extended temperature spikes, defined as those lasting more than four days and of a magnitude that would be expected to occur only once every 10 years based on the historical data.

As is evident from the chart below, the only true heatwave spikes we have had in the last 125 years were during the dust bowl heat waves of the 1930s. The frequency of mini-heatwave spikes since 1960 is actually no greater than it was from 1895 to 1935.

Image 2.png

Likewise, all it takes is a good Cat 2 hurricane and they are off to the races, gumming loudly about AGW. Of course, this ignores entirely NOAA’s own data as summarized in what is known as the ACE (accumulated cyclone energy) index.

This index was first developed by the renowned hurricane expert and Colorado State University professor William Gray. It takes a tropical cyclone’s maximum sustained wind speed every six hours, squares each reading, and accumulates the results across all storms and all regions to produce an index value for the year, as shown below for the past 170 years (the blue line is the seven-year rolling average).
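The squaring-and-summing arithmetic behind ACE can be sketched in a few lines. The six-hourly wind readings below are hypothetical, while the 35-knot threshold and the 10⁻⁴ scaling follow the standard published convention for the index:

```python
def accumulated_cyclone_energy(six_hourly_winds_kt):
    """Sum of squared six-hourly max sustained winds (knots) for
    readings at or above tropical-storm strength (35 kt), scaled
    by 1e-4 per the standard convention for the index."""
    return sum(v ** 2 for v in six_hourly_winds_kt if v >= 35) / 10_000

# Hypothetical storm track: spin-up, a 100-knot peak, then decay
storm = [30, 40, 55, 70, 90, 100, 95, 80, 60, 45, 30]

# A season's index is this sum accumulated over every storm
print(accumulated_cyclone_energy(storm))  # 4.8675
```

A full season’s index simply repeats this accumulation for every storm in every basin region, which is why long, intense storms dominate the annual total.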

Your editor has a special regard for the expertise of William Gray. Back in our private equity days, we invested in a Property-Cat company, which was in the super-hazardous business of insuring against the extreme layers of damage caused by very bad hurricanes and earthquakes. Correctly setting the premiums was no trifling business, and it was the analytics, long-term databases, and current-year forecasts of Professor Gray upon which our underwriters crucially depended.

That is to say, hundreds of billions of dollars of insurance coverage was then, and still is, being written with ACE as a crucial input. Yet, if you examine the 7-year rolling average (blue line) in the chart, it is evident that ACE was as high or higher in the 1950s and 1960s as it is today and that the same was true of the late 1930s and the 1880–1900 periods.
Image 3.png

The above is an aggregate index of all storms and is therefore as comprehensive a measure as exists. But for want of doubt, the next three panels look at hurricane data at the individual storm count level. The pink portion of the bars represents the number of big Cat 3–5 storms, while the red portion reflects the number of Cat 1–2 storms and the blue the number of tropical storms that did not reach Cat 1 intensity.

The bars accumulate the number of storms in 5-year intervals and reflect recorded activity back to 1851. The reason we present three panels—for the Eastern Caribbean, Western Caribbean, and Bahamas/Turks & Caicos, respectively—is that the trends in these three sub-regions clearly diverge. And that’s the smoking gun.

If global warming were generating more hurricanes as the MSM constantly maintains, the increase would be uniform across all of these subregions, but it’s clearly not. Since the year 2000, for example,

  • the Eastern Caribbean has had a modest increase in both tropical storms and higher-rated Cats relative to most of the past 170 years;
  • the Western Caribbean has not been unusual at all, and, in fact, has been well below the counts during the 1880–1920 period; and
  • the Bahamas/Turks & Caicos region has actually been well below its storm counts during 1930–1960 and 1880–1900.

The actual truth of the matter is that Atlantic hurricane activity is generated by atmospheric and ocean temperature conditions in the eastern Atlantic and North Africa. Those forces, in turn, are heavily influenced by the presence of an El Niño or La Niña in the Pacific Ocean. El Niño events increase the wind shear over the Atlantic, producing a less-favorable environment for hurricane formation and decreasing tropical storm activity in the Atlantic basin. Conversely, La Niña causes an increase in hurricane activity due to a decrease in wind shear.

These Pacific Ocean events, of course, have never been correlated with the low level of natural global warming now underway.

The number and strength of Atlantic hurricanes may also undergo a 50- to 70-year cycle known as the Atlantic multidecadal oscillation. Again, these cycles are unrelated to global warming trends since 1850.

Still, scientists have reconstructed Atlantic major hurricane activity back to the early eighteenth century (the 1700s) and found five periods averaging 3–5 major hurricanes per year and lasting 40–60 years each and six other periods averaging 1.5–2.5 major hurricanes per year and lasting 10–20 years each. These periods are associated with a decadal oscillation related to solar irradiance, which is responsible for enhancing/dampening the number of major hurricanes by 1–2 per year and is clearly not a product of AGW.

Moreover, as in all else, the long-term record of storm activity also rules out AGW, since there was no man-made warming for most of the past 3,000 years. Yet, according to a proxy record for that period from a coastal lake on Cape Cod, hurricane activity increased significantly during the last 500–1,000 years relative to earlier periods.

In short, there is no reason to believe that these well-understood precursor conditions and longer-term trends have been impacted by the modest increase in average global temperatures since the Little Ice Age (LIA) ended in 1850.
Image 4.png
Image 5.png
Image 6.png

As it happens, the same story is true with respect to wildfires—the third category of natural disasters that the Climate Howlers have glommed onto. But in this case, it’s bad forestry management, not man-made global warming, which has turned much of California into a dry wood fuel dump.

But don’t take our word for it. This comes from the George Soros-funded ProPublica, which is not exactly a right-wing tin foil hat outfit. It points out that environmentalists had so shackled federal and state forest management agencies that today’s tiny “controlled burns” are but an infinitesimal fraction of what Mother Nature herself accomplished before the helping hand of today’s purportedly enlightened political authorities arrived on the scene.

“Academics believe that between 4.4 million and 11.8 million acres burned each year in prehistoric California. Between 1982 and 1998, California’s agency land managers burned, on average, about 30,000 acres a year. Between 1999 and 2017, that number dropped to an annual 13,000 acres. The state passed a few new laws in 2018 designed to facilitate more intentional burning. But few are optimistic this, alone, will lead to significant change.

“We live with a deathly backlog. In February 2020, Nature Sustainability published this terrifying conclusion: California would need to burn 20 million acres—an area about the size of Maine—to restabilize in terms of fire.”

In short, if you don’t clear and burn out the deadwood, you build up nature-defying tinderboxes that then require only a lightning strike, a spark from an unrepaired power line, or human carelessness to ignite into a raging inferno. As one 40-year conservationist and expert summarized,

“…There’s only one solution, the one we know yet still avoid. We need to get good fire on the ground and whittle down some of that fuel load.”

In fact, a dramatically larger human footprint in the fire-prone shrublands and chaparral (dwarf trees) areas along the coasts increases the risk residents will start fires. California’s population nearly doubled from 1970 to 2020, from about 20 million people to 39.5 million people, and nearly all of the gain was in the coastal areas.

Under those conditions, California’s strong, naturally-occurring winds, which crest periodically, are the main culprit that fuels and spreads the human-set blazes in the shrublands. The Diablo winds in the north and Santa Ana winds in the south can actually reach hurricane force. As wind moves west over California mountains and down toward the coast, it compresses, warms and intensifies. The winds blow flames and carry embers, spreading the fires quickly before they can be contained.

Among other proofs that industrialization and fossil fuels aren’t the culprit is the fact that researchers have shown that when California was occupied by indigenous communities, wildfires burned some 4.5 million acres a year. That’s nearly six times the average of the 2010–2019 period, when wildfires burned just 775,000 acres annually in California.

Beyond the untoward clash of all of these natural forces of climate and ecology with misguided government forest and shrubland husbandry policies, there is actually an even more dispositive smoking gun, as it were.

To wit, the Climate Howlers have never explained the apparent absurdity that the planet’s purportedly rising temperatures have targeted the Blue State of California for special punishment. Yet when we look at the year-to-date data for forest fires, we find that, unlike California and Oregon, the US as a whole was experiencing one of its weakest fire years since 2010.

That’s right. As of August 24 each year, the 10-year average burn has been 5.114 million acres across the US, but in 2020, it was 28% lower, at 3.714 million acres.

National fire data year to date:

Image 7.png

Indeed, what the above chart shows is that on a national basis, there has been no worsening trend at all during the last decade—just huge oscillations year to year, driven not by some grand planetary heat vector but by changing local weather and ecological conditions.

You just can’t go from 2.7 million burned acres in 2010 to 7.2 million acres in 2012, then back to 2.7 million acres in 2014, then to 6.7 million acres in 2017, followed by just 3.7 million acres in 2020—and still argue along with the Climate Howlers that the planet is angry.

On the contrary, the only real trend evident is that on a decadal basis during recent times, average forest fire acreage in California has been slowly rising, owing to the above-described dismal failure of government forest management policies. But even the mildly rising average fire acreage trend since 1950 is a rounding error compared to the annual averages from prehistoric times, which were nearly 6 times greater than during the most recent decade.

Furthermore, the gently rising trend since 1950, as shown below, should not be confused with the Climate Howlers’ bogus claim that California’s fires have “grown more apocalyptic every year,” as The New York Times reported.

In fact, they are comparing 2020’s above-average burn to 2019, which saw an unusually small amount of acreage burned—just 280,000 acres compared to 1.3 million and 1.6 million in 2017 and 2018, respectively, and 775,000 on average over the last decade.

Image 7.jpg

Nor is this lack of correlation with global warming just a California and US phenomenon. As shown in the chart below, the global extent of drought, measured by five levels of severity, with brown being the most extreme, has shown no worsening trend at all during the past 40 years.

Global Extent of Five Levels Of Drought, 1982–2012

Image 9.jpg

This brings us to the gravamen of the case. To wit, there is no climate crisis whatsoever, but the AGW hoax has so thoroughly contaminated the mainstream narrative and the policy apparatus in Washington and capitals all around the world that contemporary society is fixing to commit economic hara-kiri.

That’s because, far from causing the planetary climate system to become unglued as the phony narrative holds, the rise of fossil fuel use after 1850 has been accompanied by a massive acceleration of global economic growth and human well-being. Indeed, one essential element behind that salutary development has been the massive increase in the use of cheap fossil fuels to power economic life.

The chart below could not be more dispositive. During the pre-industrial era between 1500 and 1870, real global GDP crawled along at just 0.41% per annum. By contrast, during the past 150 years of the fossil fuel age, global GDP growth accelerated to 2.82% per annum–or nearly 7 times faster.
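As a quick sanity check on those rates, a minimal compound-growth sketch (rates taken from the text) reproduces the “nearly 7 times” figure and shows the very different doubling times each rate implies:

```python
import math

PRE_INDUSTRIAL = 0.0041  # 0.41% per annum real GDP growth, 1500-1870
FOSSIL_AGE     = 0.0282  # 2.82% per annum over the past 150 years

def doubling_time(rate):
    """Years for output to double at a constant compound rate."""
    return math.log(2) / math.log(1 + rate)

print(round(FOSSIL_AGE / PRE_INDUSTRIAL, 1))  # 6.9, i.e. "nearly 7 times"
print(round(doubling_time(PRE_INDUSTRIAL)))   # 169 years to double
print(round(doubling_time(FOSSIL_AGE)))       # 25 years to double
```

In other words, pre-industrial output doubled roughly once every 170 years, while fossil-age output doubles about every 25, which is what makes the GDP curve look parabolic.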

This higher growth, of course, in part resulted from a larger and far healthier global population made possible by rising living standards. Yet, it wasn’t human muscle alone that caused the GDP level to go parabolic, as per the chart below.

It was also due to the fantastic mobilization of intellectual capital and technology. One of the most important vectors of the latter was the ingenuity of the fossil fuel industry in unlocking the massive trove of stored work that Mother Nature extracted, condensed, and salted away from the incoming solar energy over the long warmer and wetter eons of the past 600 million years.

Image 10.png

Needless to say, the curve of world energy consumption tightly matches the rise of global GDP shown above. Thus, in 1860, global energy consumption amounted to 30 exajoules per year and virtually 100% of that was represented by the blue layer, labeled “biofuels,” which is just a polite name for wood and the decimation of the forests which it entailed.

Since then, annual energy consumption has increased 18-fold to 550 exajoules (about 100 billion barrels of oil equivalent), but 90% of that gain was due to natural gas, coal and petroleum. The modern world and today’s prosperous global economy would simply not exist absent the massive increase in the use of these efficient fuels, meaning that per-capita income and living standards would otherwise be only a small fraction of current levels.

Image 11.png

Yes, that dramatic, prosperity-generating rise in fossil fuel consumption has given rise to a commensurate increase in CO2 emissions. But contrary to the Climate Change Narrative, CO2 is not a pollutant!

As we have seen, the correlated increase in CO2 concentrations—from about 290 ppm to 415 ppm since 1850—amounts to a rounding error in both the long-trend of history and in terms of atmospheric loadings from natural sources.

As to the former, concentrations below 500 ppm are a relatively recent development, dating only to the recent ice ages, while during prior geologic ages concentrations reached as high as 2,400 ppm.

Likewise, the oceans contain an estimated 37,400 billion tons of suspended carbon, land biomass holds 2,000–3,000 billion tons and the atmosphere contains 720 billion tons of carbon. The latter alone is more than 20X current annual fossil fuel emissions (35 billion tons) shown below.

Of course, the opposite side of the equation is that the oceans, land and atmosphere absorb CO2 continuously, so the incremental loadings from human sources are very small. That also means that even a small shift in the balance between oceans and air would cause a far larger rise or fall in CO2 concentrations than anything attributable to human activity.

But since the Climate Howlers falsely imply that the “pre-industrial” level of 290 parts per million was extant since, well, the Big Bang and that the modest rise since 1850 is a one-way ticket to boiling the planet alive, they obsess over the “sources versus sinks” balance in the carbon cycle for no valid reason whatsoever.

Actually, the continuously shifting carbon balance of the planet over any reasonable period of time is a big “so what!”

Image 12.png


The chart below dramatically underscores why the CO2 witch hunt is such a deadly threat to future prosperity and human welfare. To wit, even after decades of green energy promotion and huge subsidies from the state, renewables accounted for only 5% of primary global energy consumption in 2019 because:

  • They are still very uncompetitive (high-cost) relative to the installed base of fossil, nuclear and hydroelectric energy.
  • Even that 5% share overstates their ability to deliver work to the economy, owing to the intermittency of wind and solar power and to the convention by which government scorekeepers gross up the renewable electricity delivered to end-users by a factor of three, in order to account for the generation heat-rate losses and the transmission and distribution losses of the electric power grid.

By contrast, the 84% share attributed to oil, natural gas, and coal is actually far larger in practical terms. That’s because most of the prime hydro sources were tapped out long ago and are therefore not a meaningful source of growth. Between 2000 and 2019, for example, US hydropower output increased from 275 billion KWh to only 288 billion KWh, or by barely 0.24% per annum.

Likewise, nuclear power capacity outside of China stopped growing decades ago due to massive political and regulatory resistance. Germany, for example, is in the process of closing its last nuclear plants from a fleet that once generated 170,000 GWh annually (in 2000) and now generates only 75,000 GWh, with a zero target by the year 2030. Even in the US, nuclear power remains dead in the water, with annual output rising from 754 billion KWh in 2000 to just 809 billion KWh in 2019.

Beyond that, the Climate Howlers are not talking about a gradual substitution of solar and wind for the three fossil-based sources of primary energy as existing plants reach the end of their useful lives over the next 50 years. On the contrary, zero net CO2 emission targets will require the massive early retirement and dismantlement of perfectly good power plants and tens of millions of internal combustion (IC) engine vehicles.

Image 1.jpg

The prospect of substituting green power for existing fossil fuel capacity over the next several decades is where the rubber meets the road. However, to grasp the full extent of the impending calamity, it is necessary to recall that Keynesian GDP accounting inherently and drastically understates the true economic cost of that substitution.

In a word, unlike business accounting, Keynesian GDP accounting is based purely on spending, with no regard for balance sheet changes, such as the accounting charge for dismantling a perfectly efficient and serviceable coal-fired power plant. In that case, the utility owner would take a charge for the value of the wasted asset, but that charge—which represents a loss of corporate net worth and, hence, societal wealth—would never show up in the GDP accounts.

In fact, Keynesian GDP accounting is just the modern iteration of Frédéric Bastiat’s famous “broken window fallacy.” Gross capital spending gets added to the total GDP with no offset for depreciation and asset write-offs. That’s why, we suppose, climate change activists get all giddy about the alleged economic growth benefits and job gains from green investment: They just don’t count all the assets wasted and jobs lost by shutting down efficient coal mines or fossil-fired utility plants.

Nor are we talking about small amounts. To come even close to the utterly ridiculous COP26 target of net zero emissions by 2050, literally tens of trillions of dollars worth of fossil fuel-fired power plants, heating units, chemical processing plants and internal combustion engine vehicles would have to be decommissioned and taken out of service long before their ordinary useful economic lives had been reached.

For instance, notwithstanding Washington’s endless gumming about green energy and tens of billions of subsidies annually, including gifts of more than two billion dollars per year to the wealthy buyers of Elon Musk’s Tesla EVs, fossil fuel consumption in the electric power utility sector—the only sector where green energy has even made a dent—has hardly declined at all.

What happened, instead, is that between 2000 and 2019, coal- and oil-fired generation dropped from 2,090 billion kWh to 1,004 billion kWh, or by 52%, but that was nearly offset by a huge jump in natural gas-fired generation. Specifically, a natural gas-fired output of 601 billion kWh in 2000 rose to 1,586 billion kWh by 2019, a gain of 164%.

Accordingly, the needle on overall fossil fuel-based generation hardly moved, dropping from 2,691 billion kWh in 2000 to 2,590 billion kWh in 2019. So the question recurs: how in the world do these lame-brains expect to get to zero CO2 emissions from the utility sector when, over those nineteen years, fossil-fueled power generation declined by a trivial 0.20% per annum?
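Those percentage changes can be recomputed directly from the generation figures quoted above; the helper below is ordinary compound-rate arithmetic, not any official methodology:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two output levels."""
    return (end / start) ** (1 / years) - 1

# US utility generation, billion kWh, 2000 vs. 2019 (from the text)
coal_oil_pct = (1_004 - 2_090) / 2_090 * 100  # change in coal/oil output
gas_pct      = (1_586 - 601) / 601 * 100      # change in gas output
fossil_rate  = cagr(2_691, 2_590, 19)         # total fossil trend

print(round(coal_oil_pct))          # -52
print(round(gas_pct))               # 164
print(round(fossil_rate * 100, 2))  # -0.2
```

The takeaway is that the 52% collapse in coal- and oil-fired output was almost entirely offset by the 164% surge in gas, leaving the overall fossil trend at a rounding-error decline.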

Moreover, as we suggested above regarding the global balances, there is no reason whatsoever to expect any material displacement of fossil-based power production by nuclear or hydroelectric power. These two sectors combined produced 1,097 billion kWh in 2019, but if anything, production is likely to fall in the next several decades.

In the case of last year’s 288 billion kWh of hydroelectric generation, all the rivers in the US that can be harnessed have been—or at least all those that the environmentalists would allow to be dammed up. But the large non-fossil piece represented by nuclear power is where big-time trouble is brewing.

The fact is, the last nuclear reactor commissioned in the US was in 2016 for Tennessee Watts Bar Unit 2, which was a companion to the second most recent addition, Watts Bar Unit 1, commissioned way back in 1996.

That’s right. In the last quarter century, a grand total of two nuke plants have been commissioned, meaning that the nation’s 94 operating commercial nuclear reactors at 56 nuclear power plants in 28 states are old as the hills: 25–40 years old and heading for decommissioning in the normal course.

The implication cannot be gainsaid. Unless there is a total political reversal with respect to nuclear power, the 809 billion kWh generated in 2019, which represented nearly 20% of total utility output, will likely shrink, because normal retirements will outpace the rate at which new plants can be licensed, built and made operational, a process that typically takes well more than a decade.

Of course, there is also a smattering of biomass and geothermal production, but that hasn’t been going anywhere, either. Total generation amounted to 72 billion kWh in 2019, a small decline from 75 billion kWh in the year 2000.

Finally, there is the matter of growth. Even at the tepid level of GDP growth during the last decade, and despite continued improvements in the efficiency of electrical power use in the US economy, total power output rose from 3,951 billion kWh in 2009 to 4,127 billion kWh in 2019, representing a modest 0.44% per annum growth rate.

Then again, a continuation of that modest growth trend—which would be the minimal gain compatible with a continued slow rise in real GDP—would result in total power output requirements of 4,427 billion kWh by 2035 or 300 billion kWh more than current levels.

So here’s the skunk in the woodpile. Total solar and wind-fueled power output in 2019 was just 367 billion kWh (or 8.9% of total utility output). That is, merely supplying projected system growth will require the equivalent of 82% of current so-called green power production. And that’s to say nothing of replacing nuclear production that is likely to fall due to retirements and obsolescence or, more importantly, of displacing some of the 2,590 billion kWh of fossil production still in the nation’s electrical power grid.
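The 82% figure follows mechanically from the numbers in the text; this back-of-envelope sketch (all quantities in billions of kWh) reproduces the projection:

```python
OUTPUT_2019 = 4_127   # total US utility output, billion kWh
GREEN_2019  = 367     # wind + solar output, billion kWh
GROWTH      = 0.0044  # 0.44% per annum trend rate

# Compound the 2019 base forward 16 years to 2035
output_2035  = OUTPUT_2019 * (1 + GROWTH) ** 16
added_demand = output_2035 - OUTPUT_2019

print(round(output_2035))                      # 4427
print(round(added_demand))                     # 300
print(round(added_demand / GREEN_2019 * 100))  # 82
```

Even this modest trend extrapolation shows that incremental demand alone absorbs most of today’s entire wind-and-solar fleet, before a single fossil kilowatt-hour is displaced.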

Let us re-iterate: Unless a large share of that 2,590 billion kWh of capacity is shuttered, the idea of zero net CO2 emissions is a pipe dream.

At the same time, it would take trillions in taxpayer subsidies to lift the current 367 billion kWh of green power production toward even half of the projected power requirement by 2035, which would exceed 2,200 billion kWh. And that simply isn’t going to happen, not in a month of Sundays.

Moreover, that’s not even the half of it. Green power production, and especially wind, which in 2019 generated four times more output than solar (295 billion kWh versus 72 billion kWh), is highly intermittent, varying with seasonal patterns and daily wind strength. Nationally, wind plant performance tends to be highest during the spring and lowest during the mid-to-late summer, while performance during the winter (November through February) is around the annual median. However, this pattern can vary considerably across regions, mostly based on local atmospheric and geographic conditions.

As a result, a wind plant’s capacity factor—a measure of the plant’s actual power generation as a percentage of its maximum capacity—is very closely related to the available wind resource and average wind speed. For example, in New England, the median January capacity factor is about 32%, well above the annual median, while the July capacity factor is closer to 14%, far below the annual median.

In a word, to get the same output and reliability as gas or coal-fired base-load plants, green power plants need to be drastically oversized both in terms of maximum output capacity and backup storage units. As shown below, for most regions of the country, median monthly wind capacity factors range between just 25% and 35%.
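The oversizing point follows directly from the definition of capacity factor. In this illustrative sketch, the 85% baseload figure is an assumed typical value (not from the text), while 30% is the mid-range of the wind figures cited above:

```python
HOURS_PER_YEAR = 8_760

def nameplate_mw(annual_demand_mwh, capacity_factor):
    """Nameplate capacity (MW) needed to deliver a given annual
    energy at a given capacity factor (actual / maximum output)."""
    return annual_demand_mwh / (capacity_factor * HOURS_PER_YEAR)

DEMAND = 1_000_000  # MWh per year, an illustrative load

gas_mw  = nameplate_mw(DEMAND, 0.85)  # assumed typical baseload CF
wind_mw = nameplate_mw(DEMAND, 0.30)  # mid-range of the 25-35% cited

print(round(gas_mw))               # 134
print(round(wind_mw))              # 381
print(round(wind_mw / gas_mw, 1))  # 2.8
```

Under these assumptions, a wind fleet needs roughly 2.8 times the nameplate capacity of a baseload gas plant to deliver the same annual energy, and that is before accounting for any backup storage to cover the hours when the wind does not blow at all.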

Image 2.png

Needless to say, low capacity factors mean high all-in costs for electrical energy delivered to the grid. Analysts capture this with a concept called LCOE (levelized cost of energy): the present value of a plant’s total costs over its lifetime divided by the present value of the electricity it generates over that lifetime.

Using this comprehensive measure, in turn, permits an estimate of the “imposed cost” on the grid owing to the low load factors and unreliability of wind and solar power. Thus, compared to an LCOE of $36 per megawatt-hour for a combined cycle gas-fired plant, the imposed cost alone is $24 per megawatt-hour for wind and $21 per megawatt-hour for solar installations.
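The LCOE definition just given reduces to a ratio of two discounted sums. The sketch below uses hypothetical cost and output streams and an assumed 7% discount rate, purely to illustrate the formula:

```python
def lcoe(costs, energy_mwh, rate=0.07):
    """Levelized cost of energy: present value of lifetime costs
    divided by present value of lifetime generation."""
    pv_cost   = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    pv_energy = sum(e / (1 + rate) ** t for t, e in enumerate(energy_mwh))
    return pv_cost / pv_energy

# Hypothetical 4-year plant: heavy year-0 capex, then flat O&M,
# generating 100,000 MWh in each operating year
costs  = [9_000_000, 500_000, 500_000, 500_000]  # dollars
energy = [0, 100_000, 100_000, 100_000]          # MWh

print(round(lcoe(costs, energy), 2))  # roughly $39 per MWh
```

Because both capex and generation are discounted over the plant’s life, a low capacity factor shrinks the denominator directly, which is exactly why intermittent sources carry higher levelized costs than their fuel bills alone would suggest.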

Accordingly, the cost of funding power output growth plus the displacement of substantial amounts of fossil-fired production would be staggering. According to a recent detailed study by the Institute for Energy Research, here are the LCOE calculations for the range of fuel sources:

LCOE Per Megawatt-Hour

  • Combined cycle natural gas: $36
  • Nuclear: $33
  • Hydro: $38
  • Coal: $41
  • Onshore wind: $85
  • Solar PV: $89
  • Offshore Wind: $132

These differentials between conventional and green sources of power generation are clearly staggering and contradict the constant propaganda from the Climate Howlers, who falsely claim that solar and wind are cheaper than existing power sources.

But as we will amplify in the final installment (Part 5), the actual scenario is far more forbidding than even these all-in cost differentials would suggest. That’s because the second part of the green agenda is to convert the nation’s efficient fleet of 285 million IC vehicles to electric battery power.

What that will do, of course, is make peak power demand swings on the grid far more extreme—even violent—just as the reliability of a green-powered utility sector falls sharply.

GreenMageddon is where we are heading.


GreenMageddon is no hyperbole. It is the virtually certain outcome of attempting to purge CO2 emissions from a modern energy system and economy that literally breathes and exhales fossilized carbon.

Indeed, the very idea of converting today’s economy to an alternative energy respiratory system is so far beyond rational possibility as to defy common sense. Yet that is exactly where the COP26 powers that be and their megaphones in the MSM are leading us.

In the first place, it needs to be understood that the climate change advocates essentially lie about how much “green energy” we now use, and therefore about the scale of fossil fuel displacement that would be required to get to net zero CO2 emissions by 2050.

For instance, it is commonly claimed that 12% of US primary energy consumption (2020) is accounted for by “renewables”, implying that we are off to a decent start in eliminating the fossil fuel dependency of the system.

Actually, no—not even close. That’s because “renewables” and green energy defined as solar and wind are not remotely the same thing.

According to DOE, the US consumed 11.6 quads (quadrillion BTUs) of renewables in 2020, but 7.3 quads or 63% of that was accounted for by old-style non-fossil fuels including:

  • Hydroelectric: 2.6 quads;
  • Wood: 2.5 quads;
  • Biofuels: 2.0 quads;
  • Geothermal: 0.2 quads

Of course, there is nothing wrong with these non-fossil fuels, and in some cases they can be quite efficient. But they are not part of the “green solution” for displacing some or all of the 73 quads of fossil fuels consumed in 2020, because most of these sources are tapped out or undesirable to expand.

We have already seen, for instance, that hydroelectric—which was a favorite of the New Deal back in the 1930s—was tapped out long ago. Up to 80% of the long rivers in the US are already dammed, and environmentalists haven’t permitted a major new hydroelectric project in decades. In fact, hydroelectric output of 291 billion kWh in 2020 was well below the peak level of 356 billion kWh recorded in 1997, and was even exceeded by the 304 billion kWh generated way back in 1974.

Nor do we hear the Climate Howlers beating the tom-toms for the original source of modern BTUs— more wood combustion!

Actually, they advocate the opposite: Massive tree-planting as “offsets” to carbon emissions.

Likewise, most of the 2.0 quads attributable to biofuels is accounted for by ethanol produced from fermented corn. Yet any material increase in ethanol consumption—via higher mandated blending with gasoline—would likely wreck most of the IC engines on the highways, while turning the vast food production expanses of Iowa and Nebraska into fuel farms.

Finally, consider the implicit lesson in the small amount of consumption—0.2 quads—attributable to geothermal energy. As it happens, geothermal electricity is about as close to a perfect source of renewable energy as you can get, as one analyst recently noted, but there is a huge catch:

It’s virtually carbon-free, doesn’t emit large quantities of noxious gases or generate radioactive waste, doesn’t require the clear-cutting of virgin forests, doesn’t take up lots of room, doesn’t blight the skyline, doesn’t decapitate or incinerate birds, is replenished by the natural heat of the Earth, delivers baseload power at capacity factors usually around 90% and can even if necessary be cycled to follow load.

It’s also one of the lowest-cost generation sources presently available. No other renewable energy source can match this impressive list of virtues or even come close to it.

So why isn’t there more of it?

Because there wasn’t much of it to begin with. While renewable energy sources like wind and solar are exploitable to a greater or lesser extent almost everywhere, high-temperature geothermal resources are found only where there is a coincidence of high heat flow and favorable hydrology… and these coincidences occur in only a few places.

This gets us to the only so-called “renewables” which are actually expandable at scale—solar and wind. As to the former, it needs be noted that US consumption during 2020 amounted to only 1.2 quads, or less than half of the primary energy supplied by wood (including a small amount of industrial consumption of bio-waste, such as at pulp mills).

That’s right. After decades of big time subsidies and endless government promotion, solar is still eclipsed by the fuel first used by cavemen!

The problem with wind power, however, is no less prohibitive. Of the 3.0 quads of primary energy attributed to wind in 2020, virtually 100% was used by utilities to generate electricity for the grid. Accordingly, only about 90% of that wind energy ever makes it to a home, industrial plant or EV; the difference is accounted for by BTUs lost in downstream transmission and distribution lines (T&D losses). And when you add the fact that 64% of primary solar consumption was also used by electric utilities and also suffered T&D losses, you get a truly startling fact.

To wit, only 3.4 quads of solar and wind energy actually generated net electrical power to end users in the US economy in 2020.

In turn, that tiny figure represents only 4.9% of the 69.7 quads of net energy from all fuels (after deducting utility system waste from all fuel sources) used by the entire US economy in 2020. Yet even that tiny fraction was an artifact of the massive government subsidies which have been thrown at the two green fuels.
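That startling fraction is easy to verify from the figures just cited (a minimal sketch; the variable names are ours, the quad figures are those quoted above):

```python
# Net solar and wind delivered to end users vs. total net energy, 2020.
solar_wind_net = 3.4      # solar + wind quads actually reaching end users
total_net_energy = 69.7   # net energy from all fuels, after utility-system waste

share = solar_wind_net / total_net_energy
print(f"Solar + wind share of net energy: {share:.1%}")  # ~4.9%
```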

In the case of wind power, for example, there is a Federal production tax credit of 2.5 cents per kWh, which happens to represent 69% of the average wholesale price for wind power, plus a 30% investment tax credit on the original wind farm CapEx. And since no one charges for the wind, wind power is massively capital-intensive—CapEx represents 70% of lifetime wind power costs—meaning that another 21% of the cost of power is funded by Uncle Sucker.
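The implied subsidy arithmetic can be sketched as follows (the roughly 3.6 cent wholesale price is inferred from the figures above, not stated directly in the text):

```python
# Rough reconstruction of the wind-subsidy arithmetic.
ptc = 0.025                      # production tax credit, $/kWh
wholesale = ptc / 0.69           # implied average wholesale price, ~$0.036/kWh

itc = 0.30                       # investment tax credit on installed CapEx
capex_share = 0.70               # CapEx share of lifetime wind power costs
itc_cost_share = itc * capex_share
print(f"ITC covers {itc_cost_share:.0%} of lifetime power cost")  # 21%
```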

Still, the question recurs. How do you get to, say, a 50% replacement of fossil fuels with green energy by 2035, which would be the minimum path to net zero CO2 emissions by 2050—even assuming still more wasteful Joe Biden subsidies than we already have?

In a word, you don’t. That’s because even a surface investigation takes you smack into the unacknowledged elephant in the green energy room. To wit, the only practical way to deliver wind and solar to the end-use sectors of the economy is through massive conversion of green BTUs to electricity and their distribution through the leaky power grid.

Needless to say, that process would be fraught with obstacles and risks that the Climate Howlers never even remotely acknowledge. In fact, as we will show below, to convert even 50% of current fossil fuel consumption to wind and solar would require a near doubling of total primary energy consumption in the utility sector from the 35.7 quads reported for 2020 to nearly 66 quads by 2035.

More crucially, the 10.6% share of utility primary energy (3.8 quads) posted in 2020 for solar and wind would rise to nearly 67%, or 44.0 quads, by 2035 (see calculations below). That is to say, solar and wind production would have to rise nearly 12-fold over the next 15 years. And the cost of subsidies to make it happen (including drastically rising retail utility prices to consumers) would be truly staggering.

Now, here’s the thing. Given the inherent intermittency and unreliability of solar and wind energy, the electric grid would become dangerously more fragile and subject to brown-outs and blackouts during periods of peak demand and low solar/wind production. That’s because when you take out half or about 11 quads of fossil energy now used by the electric utility industry you are removing baseload capacity which is essentially available 100% of the time, save for scheduled maintenance and very occasional unplanned interruptions.

By contrast, when two-thirds of the grid is powered by solar and wind as we have projected for 2035 under the COP26’s net zero regime, you have fundamentally transformed the nature of the electric power system. There would essentially be no baseload power supply left, meaning that the system would have to be equipped with massive pumped-hydro, compressed air or battery storage facilities to back-fill for no wind or sun days— plus meet time of day and seasonal demand surges, which would get far more severe when nearly the entire economy gets electrified, as further explained below.

The problem, of course, is that producing electrical power so it can be stored and drawn down later is inherently inefficient and a BTU-waster. That’s especially the case with pumped storage, the only practical idea for large-scale system storage and back-up. What that solution does, of course, is burn a lot of BTUs pumping water uphill to a reservoir—so that the sluice-gates can be opened to regenerate the very same hydroelectric power when needed at a later date.

Overall, it is estimated that the range of available storage solutions would result in a 10-40% dissipation of the primary green energy supplied to the utility system. So not only would massive costs be incurred to finance power storage, but the loss of BTUs in the storage loading and extraction process would require even more primary green energy capacity to make up for the wasted BTUs!

Thus, if the energy loss owing to storage systems for 32.2 quads of incremental solar and wind conservatively averages 25%, another 8 quads of solar and wind primary capacity would be needed to supply projected 2035 power requirements. That is, by 2035 the utility system would need 44 quads of solar and wind, or 11.5-times more capacity than its actual green power output in 2020.
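The storage gross-up math can be checked directly (assuming, per the figures above, 3.8 quads of solar and wind in 2020 and a 25% average storage loss):

```python
# Storage-loss gross-up implied by the text's figures.
base_2020 = 3.8              # quads of solar + wind in 2020
incremental_sw = 32.2        # quads of new solar + wind before storage losses
storage_loss = 0.25          # assumed average round-trip loss in storage

extra_for_storage = incremental_sw * storage_loss     # ~8.0 quads
total_2035 = base_2020 + incremental_sw + extra_for_storage
multiple = total_2035 / base_2020
print(f"total: {total_2035:.1f} quads, {multiple:.1f}x 2020")  # ~44 quads, ~11.6x
```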

For want of doubt, first consider the implications of shifting 50% of fossil fuels used in the transportation sector to solar and wind fueled electrical power production. During 2020, the transportation sector used 24.23 quads of primary energy, of which fossil fuels—petroleum products and natural gas—supplied 22.85 quads or 94% of the total.

The total of primary energy consumption in the transportation sector, of course, was severely depressed in 2020 owing to the shutdown of the airlines during much of the year and the sharp curtailment of auto travel for both leisure and commuting. Accordingly, the appropriate base is 2019 when total consumption was 28.6 quads, which represented a 0.41% per annum growth rate from the level posted for the year 2000.

For purposes of analysis, therefore, we have assumed that modest growth rate continues to 2035, resulting in annual primary energy consumption of 30.53 quad in the transportation sector. If half of that (15 quads) were to be shifted from fossil fuels to solar and wind, it would require another 8.3 quads of green energy.

That’s the math when you factor in that energy efficiency from the plug to the driveshaft is about 72% for EVs, versus 39% from the tank to the driveshaft under optimal driving conditions for IC vehicles, and 35% on average. That’s a gain, but it’s partially offset by the fact that 10% of primary electrical power generated on the grid is lost in T&D.

The postulated 50% de-carbonizing of the transportation sector alone would thus require that the 3.8 quads of solar and wind used by the US utility sector in 2020 reach 12.1 quads by 2035. And that’s before factoring in the displacement effect in the other four sectors.
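The plug-to-wheels arithmetic behind that estimate can be sketched as follows; it lands at roughly 8 quads, in the neighborhood of the 8.3 quads cited above, with the small difference down to rounding and assumptions:

```python
# Tank-to-wheels vs. plug-to-wheels sketch, using the efficiencies cited above.
fossil_replaced = 15.0     # quads of transport fossil fuel shifted to the grid
ic_efficiency = 0.35       # average tank-to-driveshaft efficiency, IC vehicles
ev_efficiency = 0.72       # plug-to-driveshaft efficiency, EVs
td_retention = 0.90        # share of generated power surviving T&D losses

useful_work = fossil_replaced * ic_efficiency         # quads of work at the wheels
green_primary = useful_work / (ev_efficiency * td_retention)
print(f"Green quads needed: {green_primary:.1f}")     # ~8.1
```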

It’s also not the half of it. When you switch to EVs and distribute 3X more quads of energy through the utility system, you are also creating havoc with load management. That’s because travel surges around holidays create peak loads that drastically exceed day-in-and-day-out levels. In the case of air travel, for instance, during a typical year revenue passenger miles in July are equal to nearly 140% of the level for the seasonal low in February.

Just imagine a hot but cloudy and windless July 4th. The normal air-conditioning and commercial demand surge would be overlaid with a huge fleet of EVs on the holiday roads and hitting the charging stations with relentless effect. This year, for instance, a record 47 million travelers hit the road on the July 4th weekend.

Of course, that is not a problem for the existing motor fuel supply system. Average demand is about 9 million b/d, but motor fuel stocks range between 220 and 260 million bbls—plus another estimated rolling inventory of 50 million barrels in the tanks of the nation’s 285 million vehicles. So with upwards of 300 million bbls or 33 days of supply in the system, peak load fluctuations are readily absorbed by the system.
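The days-of-supply claim checks out on the numbers given (a sketch using the midpoint of the stated stock range):

```python
# Days-of-supply arithmetic for the existing motor fuel system.
avg_stocks = 250e6         # midpoint of the 220-260 million bbl stock range
vehicle_tanks = 50e6       # estimated rolling inventory in vehicle tanks, bbl
daily_demand = 9e6         # average motor fuel demand, bbl/day

days_of_supply = (avg_stocks + vehicle_tanks) / daily_demand
print(f"Days of supply: {days_of_supply:.0f}")   # ~33 days
```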

Needless to say, electrical power is another breed of cat. It can’t be stored as produced. As indicated above, production must always meet instantaneous demand or the grid will collapse. The only solution is to store dispatchable electric power in another form—pumped storage reservoirs or batteries, and that’s damn expensive.

Moreover, unlike the vastly de-centralized motor fuel stocks which are efficiently market-driven, creating a massive system-wide dispatchable surplus on the utility grid for peak EV demands would be a daunting task. After all, you would need about 140 million EVs on US roads versus today’s 1.4 million plug-in EVs to displace 50% of motor fuel demand.

Nor is the transportation sector unique. Currently the industrial sector accounts for 22.1 quads (2020) of primary energy demand, of which 19.7 quads are supplied by fossil fuels. Those fossil fuels supply various combustion equipment, IC engine driven power plants and machinery, as well as feed stocks for chemical processing industries.

Again, based on the tepid growth rate of primary energy demand in the industrial sector (because production was off-shored to China etc.) we project primary energy demand of 23.2 quads by 2035, and that 12.9 quads would need to come from solar and wind via the electrical grid to replace 50% of current fossil consumption.

So when added to the above estimated incremental demand from the transportation sector, you would need a total of 25.0 quads of solar and wind–or nearly 6.6X more than current levels—to supply the vastly enlarged demand on the electrical grid from conversion to green energy.

The story only gets more complicated when you add in the residential and commercial sectors. For instance, the residential sector is already heavily electrified owing to the electrical powering of lights, air conditioning and household appliances. Consequently, while the household sector has direct primary energy demand of 6.54 quads, it actually uses 11.53 quads counting the roughly 5 quads of indirect energy consumption supplied through the electrical utility grid.

Household demand is already highly skewed to peak hours, days and seasons, but when you complete the greening analysis it becomes drastically more so. That is, 5.7 quads of the 6.5 quads of primary energy (i.e. non-utility) consumption in 2020 was supplied by fossil fuels. If you convert half of that to solar and wind by 2035—by essentially forcing residential users to convert most gas-heating to electric heating—you would need nearly 4 quads of additional solar and wind to support the increased residential demand on the electrical grid.

That is to say, the single most variable energy demand sector—America’s 130 million housing units—would become virtually all electric. Fully 9.0 quads of total residential energy demand of 12.0 quads (including current electrical power use) would be supplied by the electrical grid by 2035.

Would that fact create an even more egregious disconnect between unreliable solar and wind power on the fuel side of the electrical grid and variable demand on the user side?

Most surely it would. And that’s especially true when you add in the last two elements of the supply-demand picture. To wit, the commercial sector is growing at about 0.6% per annum, so by 2035 total primary use would be 5.3 quads and the incremental wind and solar requirement to replace half of current fossil fuels, which currently account for 88% of primary energy demand in the sector, would total 2.9 quads.

Finally, the baseline demand for primary energy in the utility sector itself is about 37.0 quads (2019) and it has not been growing for years. So on a 2035 projection, current fossil and non-fossil sources of utility energy would be as follows before giving account to the displacement shifts estimated above in the four end-use sectors of the economy. And this optimistically assumes no loss of nuclear or hydro capacity in the interim:

  • Nuclear: 8.2 quads;
  • Hydroelectric: 2.6 quads;
  • Biofuels: 1.2 quads;
  • Current wind and solar: 3.8 quads;
  • Status quo fossil fuel requirement: 21.2 quads;
  • Total baseline utility primary energy: 37.0 quads.

At least in the case of utilities, replacing half of the 21.2 quads of fossil fuel with solar and wind would not be quite so demanding. That’s because on average only 37% of the fossil BTUs fired to utility boilers end up as output electricity owing to loss of energy at the steam turbines—a loss which does not occur with solar PV or wind, which have no steam turbine cycle.

Accordingly, it would require about 4.0 quads of solar and wind capacity (prior to considerations of back-up storage) to replace about 11.0 quads of fossil fuels that would otherwise be consumed by the utility sector.
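That conversion arithmetic is worth making explicit (a sketch on the figures above):

```python
# Why displacing utility fossil fuel is "cheaper" in quad terms: only ~37%
# of fossil BTUs fired into boilers emerge as electricity, while solar and
# wind quads are counted at generator output, with no steam-cycle loss.
fossil_replaced = 11.0       # quads of utility fossil fuel displaced
thermal_efficiency = 0.37    # average fossil boiler-to-busbar efficiency

solar_wind_needed = fossil_replaced * thermal_efficiency
print(f"Solar/wind needed: {solar_wind_needed:.1f} quads")  # ~4.1, vs. ~4.0 above
```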

On an all-in basis, therefore, the implicit transformation of the utility sector would be staggering, and that would only get you half-way to zero net carbon by 2050. Here is the summary of what would be required in terms of total solar and wind capacity in the utility sector by 2035:

  • Current solar & wind: 3.8 quads;
  • Transportation sector replacement: +8.5 quads;
  • Residential sector replacement: +3.9 quads;
  • Industrial sector replacement: +12.9 quads;
  • Commercial sector replacement: +2.9 quads;
  • Utility sector replacement: +4.0 quads;
  • Back-up storage: +8.0 quads;
  • Total solar & wind, 2035: 44.0 quads;
  • Multiple of 2020 level: 11.6X
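The tally above can be verified directly:

```python
# Tally of the 2035 solar-and-wind buildout itemized above (quads).
components = {
    "current solar & wind": 3.8,
    "transportation": 8.5,
    "residential": 3.9,
    "industrial": 12.9,
    "commercial": 2.9,
    "utility": 4.0,
    "back-up storage": 8.0,
}
total = sum(components.values())
multiple = total / components["current solar & wind"]
print(f"Total: {total:.1f} quads, {multiple:.1f}x the 2020 level")  # 44.0 quads, 11.6x
```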

It goes without saying that the above is an economic train-wreck waiting to happen. You simply don’t go from 3.8 quads of solar and wind after decades of tepid gains to 44.0 quads in less than 15 years. Plain and simple, such a shift would take the US hostage to a centralized, grid-based energy respiratory system that would be dangerously unstable, imbalanced and subject to catastrophic black swan-type events.

To return to our broken window fallacy, it would also involve dismantling and destroying a highly decentralized fuel supply system based on exploiting the compact high density BTU packages produced by Mother Nature during over 400 million years of the original global warming of the Mesozoic and later periods.

Yet the value of these broken energy windows would be staggering: 100 million IC engine vehicles scrapped before the end of their useful lives; 35 million oil- or gas-fired home heating units ripped out and replaced with electric heating; millions of industrial combustors and IC engine-based machinery dismantled; and, in addition, nearly 14 quads of perfectly good fossil energy capacity in the commercial and utility sectors thrown on the scrap heap.

In short, do the preening pols and apparatchiks who congregated in Glasgow have a clue about the economic and human mayhem they are fixing to unleash?

Not in the slightest!

Ironically, it was the sleazy, half-sentient Scumbucket Joe Biden himself who led the lemmings toward a proverbial Scottish cliff, which we recall is not too far from the location of one of the stupidest political gatherings ever organized by mankind. Nor did the very stupid host, British Prime Minister Boris Johnson, appreciate the irony that he cheered them on from a pulpit in nearby Glasgow.