a1 = .006 (.006), a2 = .059 (.051), g1 = .041 (.018), g2 = .061 (.055), d1 = 3 years (37 years), d2 = 73 years (84 years), Ci = -.041 degs. Residual power spectral density (peak is at 57.9 years).

Ran the sim with the reversed time series below

Using the parameters from the prior optimization correlation is negative as expected.

First step of the optimization (match the acfs) works because it is insensitive to scaling, including a scaling of -1
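That scale insensitivity is easy to verify numerically. A minimal sketch in Python (NumPy only; the AR(1) series is a synthetic stand-in for the actual residual, not the real data):

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation, normalized so acf(x)[0] == 1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c[:nlags + 1] / c[0]

# Synthetic AR(1) series standing in for the temperature residual
rng = np.random.default_rng(0)
x = np.zeros(500)
for i in range(1, 500):
    x[i] = 0.8 * x[i - 1] + rng.standard_normal()

# Scaling by any non-zero constant, including -1, leaves the ACF unchanged,
# which is why the ACF-matching step cannot pin down sign or gain.
assert np.allclose(acf(x, 20), acf(-3.7 * x, 20))
```

The mean-centering and the normalization by `c[0]` cancel any multiplicative constant, which is exactly the insensitivity described above.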

The fit, though, is not too hot

Skip the first step and give the optimizer control of all parameters, best fit (not guaranteed global minimum)

’course now the acf is horked

Leif’s magic series

The results at the top of this page were manually optimized. Running the optimizer does much better.


If we look at the auto-correlation of this data, we see evidence of a remaining periodicity.

Proceeding as before, we remove this periodicity, resulting in the doubly adjusted data shown below.

The remaining spectral peaks are statistically insignificant. We’ll test the significance of the periodicity removed thus far in part 3. The plot below shows A1 and A2 plotted with the uncorrected data with Gaussian filtering (r=6) applied for de-noising.

Comparing A1 to the uncorrected plot, it is evident that the ADO accounts for the cooling evident in the late nineteenth century, the rapid rise in temperatures in the first 40 years of the twentieth century, and the cooling era between 1940 and 1975. It also accounts for the rapidity of the rise in temperatures over the last 25 years of the twentieth century, when global warming became a concern, as well as the recent flattening of temperatures experienced during the so-called “pause”.

The scattergrams below compare the raw, adjusted (A1), and double adjusted (A2) to the CO2 data obtained from here, which concatenates the data from the Law Dome ice core reconstruction with the direct observations from the Mauna Loa observatory plotted in the lower right panel.

The three datasets show a similar pattern: little correlation until the CO2 concentration exceeds 300 ppm, then the expected roughly logarithmic relationship. Comparing the A1 and A2 plots, it appears the final adjustment removed some low concentration correlation. If we examine the PSD of the de-trended CO2 plot we can see why.

The spectral peak is at the same frequency detected in the final adjustment above. Thus the 88 year “periodicity” is actually a response of the climate to the variation in CO2. We reject A2 in favor of A1.

In order to avoid biasing the estimation of climate sensitivity, we must avoid the zero-slope segment where no correlation is exhibited (i.e. concentrations below 295 ppm) and add a positive offset to the temperature data such that all points are well above zero. We remove this offset from the fit equation after performing the OLS regression. This only affects the reference concentration corresponding to the zero anomaly temperature and does not impact the sensitivity. The OLS logarithmic fit to the data (Ta = 2.83 ln(x/246.278) - 0.809, where x is the CO2 concentration in ppm) is shown in the plot below.

The transient sensitivity to 2x CO2 is then calculated as 2.83*ln(2) = 1.96 degrees C.
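The fit-and-sensitivity arithmetic can be sketched as below; the CO2/temperature pairs are a noiseless stand-in generated from the quoted fit, not the actual A1 data:

```python
import numpy as np

# Noiseless stand-in generated from the quoted fit equation; only
# concentrations above the 295 ppm cutoff are used, per the text.
co2 = np.linspace(300, 400, 50)
temp = 2.83 * np.log(co2 / 246.278) - 0.809

# OLS fit of T = a*ln(x) + b recovers the logarithmic sensitivity coefficient
a, b = np.polyfit(np.log(co2), temp, 1)

# Transient sensitivity to a doubling of CO2
tcs = a * np.log(2)
print(round(a, 2), round(tcs, 2))  # 2.83 1.96
```

With real, noisy data the recovered coefficient would of course scatter around the quoted 2.83 rather than match it exactly.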

Much of the critique (and I use the term advisedly) relates to this paragraph from the original post:

Unfortunately, the analysis is of no value because, as is commonly known, the CPA cannot be used on auto-regressive time series. This can be easily demonstrated. Here’s a random sample of an ARIMA[3,1,1] process. (This is not to imply the climate can be modeled as an ARIMA process. CPA fails for any integrative process, a class within which the climate in all likelihood falls.)

Laughingly, the author uses this animation to complain that the output of the ARIMA process sample doesn’t look like the GISS data.

But compare that to the GISTemp chart above. They are nothing alike.

That’s a lot of ignorance to pack into one sentence. Leaving aside the fact that the GISS plot is filtered (looks like yearly mean data to me, perhaps with MA filtering) while the ARIMA plot represents raw monthly samples, the point of using a statistical model is not to generate data that matches the temperature record *but to model the statistics*. (Hint: that’s why they call it a statistical model.) Here we are using it to show why changepoint analysis is an inappropriate tool for examining climate data. If CPA fails for a simple ARIMA process (and it does), then it is certainly no match for climate dynamics, which contain, off the top of my head, at least five cumulative sub-processes (ocean, atmosphere, glaciers, carbon sink, sea ice, etc.), which are all non-linearly cross-coupled to boot!

As I mentioned in the original post, a simple ARIMA process generator cannot match the dynamics of such a complex system but it **is** sufficient to show the pitfalls of naively applying CPA to feedback processes whose output depends in some way on its accumulated past history. Such systems, when subjected to random fluctuations, either from external forcing or generated internally, are known to exhibit rapid phase transitions (as shown in the ARIMA plot, e.g. between “month” 800 and 1500). CPA mistakenly interprets these phase transition regions as forcing changepoints when none exist, and misses real forcing changepoints which can be masked by the internally generated phase transitions. It’s not magic, Sou, it’s dynamics. I refer you to any introductory text on the subject.
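For readers who want to reproduce the flavor of the demonstration, here is a minimal ARIMA(3,1,1) sample generator; the AR/MA coefficients are illustrative, not the ones used for the original plot:

```python
import numpy as np

def arima_311_sample(n, ar=(0.3, -0.2, 0.1), ma=0.4, seed=1):
    """Random sample of an ARIMA(3,1,1) process: an ARMA(3,1) series of
    increments, accumulated once (the 'I' in ARIMA). Coefficients are
    illustrative placeholders."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n)
    d = np.zeros(n)
    for t in range(3, n):
        d[t] = (ar[0] * d[t - 1] + ar[1] * d[t - 2] + ar[2] * d[t - 3]
                + e[t] + ma * e[t - 1])
    # the integration step makes the level wander persistently, producing
    # the phase-transition-like excursions that fool CPA
    return np.cumsum(d)

x = arima_311_sample(2000)
```

Even though the increments are zero-mean, the accumulated series drifts far from any fixed level, which is precisely the behavior CPA misreads as changepoints.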

I have not come across any other article anywhere that states that you cannot apply change point analysis if there is any autoregression in the time series. I did find an article describing a technique to distinguish between shifts in the mean and autocorrelation.

This one’s a hoot. The article Sou linked to is about testing for auto-regression so one can determine if CPA results can be trusted! Quoting from the article,

The pattern test can be used to detect a violation of the assumption of independent errors when control charting data and performing a change-point analysis.

It is clear Sou doesn’t understand the subject matter she’s commenting on, so here’s a helpful hint. Auto-regression makes the time samples interdependent, which violates the assumption underlying changepoint analysis (independent samples), precisely the point I made in the WUWT article.

The author of the above paper has an excellent introduction to changepoint analysis here. Here’s a quote from that text:

It is assumed that the e_{i} are independent with means of zero. Taylor (2000b) provides a procedure for detecting a departure from this assumption. Data not appropriate for a change-point analysis and control charting include autoregressive time series data such as stock prices.

And then there is this gem

I don’t know how many runs Jeff had to do to get so much of the chart above the zero line, but it strikes me that if he could have got a run anything like the actual surface temperature charts he would have used it. And if he did find one that looked anything like the surface temperature chart, would he have said how many runs he had to do to get it? Would he have computed the odds? Maybe, maybe not. The odds would have been very low. I haven’t done it myself but if anyone has, do tell.

As if the offset has anything to do with analyzing anomaly data. And again Sou exhibits the fundamental misunderstanding that somehow the point of the exercise is to generate a series that “looked anything like the surface temperature chart”.

Next exhibit:

First he argues that there has been a rise in global surface temperature, a “warming trend” that has “remained constant” at around 0.008°C/decade^{2}. He’d be wrong about that. Global surface temperatures have risen around 0.8°C since the 1920-30s. Since 1950, the global surface temperature has risen by an average of 0.122°C a decade. Since 1980 it has risen by 0.156°C a decade. It’s not had a constant rate of increase of 0.008°C per decade^{2} (nor a constant increase of 0.08°C a decade).

Again the author reiterates my point illustrated in the plot below from the original post:

The slope is in constant flux between roughly the limits described in the quote above. As such, the only meaningful metric is the trend of the trend, (the slope of the dotted line), which has stayed constant at 0.008°C/decade^{2} for the entirety of the record. Perhaps Sou wants to see some fingerprint of man’s influence in the plot above but there is none to be had.

The leap of faith is that according to Jeff, this large rise in surface temperature must be being caused by something other than human actions. Magic? If it were random, then temperatures would have gone up and down but on balance not moved far from a mean.

A glance at the paleoclimate plot below should disabuse anyone of the notion that the mean climate has meaning, or at least make us pray we never live to see it.

Does the author truly believe that our climate is not subject to large and persistent natural variation? If so, she is at odds with the climatologists she holds in such esteem. Here’s a recent example from a paper (*Atmospheric controls on northeast Pacific temperature variability and change, 1900–2012*) published in the Proceedings of the National Academy of Sciences by Johnstone and Mantua (2014) (emphasis mine)

This study uses several independent data sources to demonstrate that **century-long warming** around the northeast Pacific margins, **like multidecadal variability**, can be primarily attributed to changes in atmospheric circulation. It presents a significant reinterpretation of the region’s recent climate change origins, showing that atmospheric conditions have changed substantially over the last century, **that these changes are not likely related to historical anthropogenic and natural radiative forcing**, and that **dynamical mechanisms of interannual and multidecadal temperature variability** can also apply to observed century-long trends.

Again, Sou: no magic, just dynamics. Pick up a book on the subject.

Note: At the risk of appearing to join Sou in her haughty, gratuitous aspersion of process/production engineers, for the record I am a research engineer with 35 years experience. I hold seven patents in the fields of signal processing, non-linear dynamics and modeling.

One of the first difficulties encountered in estimation is the large but unknown variation in the empirical temperature data due to natural causes. Slope estimates are influenced greatly by low-frequency quasi-periodic signals due to external forcing and climate system dynamics. Presented here is a method for removing these cyclical components.

The dataset used in this analysis is the Hadcrut4v2 data with corrections (Cowtan and Way). The raw monthly data is shown below.

The first thing we need to do is remove the seasonal periodicity by calculating a yearly mean for each of the 163 years spanned by the data.
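A sketch of that decimation step, with synthetic monthly values standing in for the HadCRUT4 series:

```python
import numpy as np

# Synthetic stand-in for the 163 years of monthly anomalies
rng = np.random.default_rng(0)
monthly = rng.standard_normal(163 * 12)

# Averaging each calendar year over its 12 months removes the seasonal cycle
annual = monthly.reshape(163, 12).mean(axis=1)
print(annual.shape)  # (163,)
```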

Next we remove the obvious outlier near 1878 by interpolation.

We can begin to see a hint of cyclicality by filtering the data slightly using a Gaussian (r=5) filter to remove some of the noise. The unfiltered data, though, is used for the analysis that follows.
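For the filtering step, a sketch using SciPy’s 1-D Gaussian filter; note the assumption that the r=5 above corresponds to the filter’s sigma:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic annual series standing in for the data
rng = np.random.default_rng(1)
annual = np.cumsum(0.01 * rng.standard_normal(163))

# r=5 is assumed here to mean sigma=5 samples (years); an assumption,
# since the original post does not define r
smoothed = gaussian_filter1d(annual, sigma=5)
```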

In order to isolate the cyclical components it is first necessary to remove the gross trend. We judge which order of polynomial fit to use by examining the auto-correlation for various orders of fit.

As the order of the polynomial increases, the width of the main lobe narrows (implying a whiter residual) and the cyclical component increases, until order 5, at which point the polynomial begins to follow the periodicity we are trying to isolate, reducing its contribution to the residual. We select order 4 as the best choice for detrending.
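A sketch of the order-selection machinery, on synthetic trend-plus-cycle data; the actual judgment above is made by inspecting the ACF plots, not by an automatic rule:

```python
import numpy as np

def detrend_poly(y, order):
    """Residual after removing a least-squares polynomial of the given order."""
    t = np.arange(len(y), dtype=float)
    return y - np.polyval(np.polyfit(t, y, order), t)

def acf(x, nlags):
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[len(x) - 1:]
    return c[:nlags + 1] / c[0]

# Synthetic stand-in: slow gross trend + 63-year cycle + noise
rng = np.random.default_rng(4)
t = np.arange(163, dtype=float)
y = (3e-5 * t**2 + 0.1 * np.sin(2 * np.pi * t / 63)
     + 0.05 * rng.standard_normal(163))

# Least squares guarantees the residual variance is non-increasing in order;
# the judgment call in the text is about ACF shape, not raw variance
assert np.var(detrend_poly(y, 4)) <= np.var(detrend_poly(y, 1)) + 1e-12
r4 = acf(detrend_poly(y, 4), 80)  # inspect main-lobe width / cyclicality
```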

The plot below shows (left to right) the ACF with order-4 detrending, the power spectral density (Fourier transform of the ACF), the order-4 residual and the histogram of the residual.

Note the strong spectral peak at f = 0.01583 (T ~ 63 years) and its 3rd harmonic (T ~ 21.5 years). These are the cyclical components we seek to remove from the residual.

Using a non-linear fitting algorithm, we fit the residual to a sum of sinusoids model which finds the best fit to be:

-0.0405767 sin(0.167108 - 0.29583 n) - 0.0829206 sin(1.34258 - 0.0986101 n)
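A sketch of how such a sum-of-sinusoids fit can be obtained with non-linear least squares; the “data” here is regenerated from the quoted parameters plus noise, since the actual residual is not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_sines(n, a1, p1, w1, a2, p2, w2):
    """Sum-of-sinusoids model in the same form as the quoted fit."""
    return a1 * np.sin(p1 - w1 * n) + a2 * np.sin(p2 - w2 * n)

# Synthetic residual built from the quoted best-fit parameters plus noise
n = np.arange(163, dtype=float)
true = two_sines(n, -0.0405767, 0.167108, 0.29583,
                 -0.0829206, 1.34258, 0.0986101)
rng = np.random.default_rng(5)
y = true + 0.01 * rng.standard_normal(n.size)

# Non-linear least squares needs starting values near the right frequencies
# (note 2*pi/0.0986 ~ 64 samples and 2*pi/0.296 ~ 21, the two spectral peaks)
p0 = (-0.04, 0.2, 0.3, -0.08, 1.3, 0.1)
popt, _ = curve_fit(two_sines, n, y, p0=p0)
```

The starting guess matters: sum-of-sinusoids objectives are full of local minima, which is why the spectral peaks from the ACF/PSD step are used to seed the frequencies.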

Removing the cyclical component from the time series yields the trend plot used in part 2

The circle with the “X” inside represents the convolution operator. h(t) is the impulse response of the climate system, which is given by the equation. The a and b parameters were set to give the correct step when we fit the step response here:

The ω,ζ parameters were found by fitting a second-order response characteristic to the step response above. This first cut estimate is close enough to determine the response lag used in the parameter optimization procedure below. Here is the raw convolution of the SSN data (with the fake data appended to make the correlation easier to see) with the unoptimized h(t).

I’ve left off any reference to dates in the plot above because we haven’t yet determined the delay of h(t). The discretized kernel is full length (in this case, equal to the number of SSN data points including the fake ones) but it will have an unknown effective length because the samples at the beginning of the filter contribute negligibly to the output sample (think of the filter as a weighted sum with the weighting coefficients increasing as we move toward the output sample). We can, by eyeball, see the obvious correlation but the data has to be backed up by 18 years (the effective filter delay) to match.

Let’s define the singular spectrum analysis modes we will be using to complete the model.

The non-linear lowest order “mode” is traditionally referred to as the trend in the literature. The trendless oscillatory signals are reconstructed from the eigenmodes (k values) grouped by frequency. The reconstruction built from the non-periodic modes (or periodic signals whose amplitudes are trivial) will be referred to as the residual. Note that since, in SSA, trend + modes + residual = data, “residual” may sometimes refer to data − trend − modes, as the two usages are mathematically equivalent.
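That additivity (trend + modes + residual = data) is a structural property of SSA and can be seen in a minimal implementation; the window length and test series below are arbitrary, not the values used in the analysis:

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic singular spectrum analysis: embed the series in an L x K
    trajectory matrix, take the SVD, and diagonal-average each rank-1
    term back into a series. The components sum exactly to the input."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])  # X[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for k in range(len(s)):
        Xk = s[k] * np.outer(U[:, k], Vt[k])
        # average the anti-diagonals (constant i + j = time index)
        flipped = Xk[:, ::-1]
        series = [np.mean(np.diag(flipped, d)) for d in range(K - 1, -L, -1)]
        comps.append(np.array(series))
    return np.array(comps)

# trend + modes + residual = data, exactly, whatever the grouping
x = 0.002 * np.arange(200) + np.sin(0.1 * np.arange(200))
comps = ssa_decompose(x, L=20)
assert np.allclose(comps.sum(axis=0), x)
```

Because the SVD terms sum to the trajectory matrix and diagonal averaging is linear, any grouping of the components partitions the data exactly, which is the “laws of addition” point made at the end of this post.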

From this point on, the results presented will be slightly different (and slightly better) than previously shown. Due to some confusion on my part about the start and end dates of the adjusted SST data, I had used a 20 year delay instead of the correct 18 derived above. I have also changed the way I find the optimal values for the system response parameters ω and ζ. Instead of fitting the convolution results to the noise data minus the primary SSA modes as before, I have instead fit to the SSA trend plus modes 1-3. This avoids biasing from the spiky events in the climate record that are presumed to be unrelated to the h(t) response.

I’ve also changed the fit algorithm to minimize the variance rather than maximize the correlation because the correlation is insensitive to the arbitrary scaling constant (i.e. the kernel “gain” that converts sun-spot numbers to temperature). The results with the optimized system response parameters are shown below.

The plot below shows the results of the convolutional part of the model with no additional oscillatory modes added.

With only the primary 58.4 year mode 1 added, the results improve markedly.

Mode 1 is the so-called “60 year cycle” (here 58.4 years), often identified with the Atlantic Multidecadal Oscillation (AMO). Its existence is controversial but it needs to be stressed at this point that **the conclusion presented here is independent of the MDO signal**. Whether it exists as a periodic forcing or is a statistical fluke, figure 7 shows that the convolution of the mean-centered sun-spot data (which is obviously free of any anthropogenic effects) with h(t) accounts for 90% of the observed temperature anomaly and the remainder, plotted as the residual in figure 7, is trendless and free of any detectable AGW signal. Thus **the entire SST temperature record is free of any anthropogenic contribution**.

Alarmists will no doubt raise the curve-fit canard. Any such critic demonstrates a lack of understanding of system dynamics. The system impulse response, h(t), defines a linear time-invariant (LTI) filter. Its output at any point in time is a weighted *linear* combination of the input samples accumulated up to that time, with the weighting factors fixed and predetermined by the response parameters defined above. Such systems cannot create information; they can only process the information present in the input signal. The fact that any such LTI system function exists which, when convolved with the SSN input signal, produces the observed temperature trend proves that all of the information required to explain that portion of the observed record is contained in the input signal.

**Physical Implications**

The system response function h(t) contains two integrations. As such, its output is sensitive to **asymmetry** in its input measured over roughly a 36 year (twice the effective filter lag) time period. That is, when the mean-centered SSN data (presumably a proxy for TSI) double averaged over approximately 36 years is positive, the temperature trends up. If negative, down. In fact, h(t) can be approximated as an accumulation (pure integration) of an 18 year moving average, scaled and offset as shown in the figure below.

Figure 9 alone should put to rest any notion that this is some sort of curve fitting exercise.
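A sketch of the stated approximation, a pure integration of an 18-year moving average of the mean-centered input; the gain and offset are free parameters left at placeholder values:

```python
import numpy as np

def approx_h_response(ssn, window=18, gain=1.0, offset=0.0):
    """Approximate the system output as a pure integration (cumulative sum)
    of a `window`-year causal moving average of the mean-centered input.
    gain and offset are free scale parameters, placeholders here."""
    x = np.asarray(ssn, dtype=float)
    x = x - x.mean()                          # mean-center the SSN proxy
    ma = np.convolve(x, np.ones(window) / window, mode="full")[:len(x)]
    return gain * np.cumsum(ma) + offset

# A step-up in the centered input (positive double average) drives the
# output upward; a step-down would drive it down
step = np.r_[np.zeros(50), 10.0 * np.ones(50)]
out = approx_h_response(step)
```

This makes the asymmetry sensitivity concrete: the output trends up exactly when the doubly averaged, mean-centered input is positive.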

h(t) describes a system that stores energy during warm periods and gives it back over cool periods. I am not a climatologist but one can imagine an ocean transport mechanism that could exhibit this behavior.

**Direct TSI Influence**

Because climatologists have ignored the climate’s integrative properties, they have discounted the TSI effect as being too small to account for the observed temperature anomaly. Indeed, the system response function h(t) greatly attenuates the 11 year solar cycle due to the double integration noted above. However, Mode 3 of figure 5 shows there is an 11 year cycle present in the SST data. It is at the .03-.05K amplitude others have found and is nearly in-phase with the SSN data. This phase relationship shows that it is a direct heating effect, bypassing the system response shown above. The figure below adds mode 3 to the model, improving the correlation slightly.

There is also some correlation (r=.24) between the residual above and the averaged annual ENSO index.

**Extending the model to the present**

Because of the delay inherent in h(t) we can’t determine the present output. Greg Goodman had the great idea of extending the SSN data using the prediction of the solar cycle available from ??. Since most of the data in the kernel pipeline is real and since the first samples are lightly weighted, we expect this to work pretty well for 18 years or so. Figure 12 shows this to be the case.

Keep in mind the blue curve of figure 13 (the h(t) output) is independent of the red data. Note how the peaks in the ripple of the model align with the el-nino events present in the data record.

** I’m publishing this incomplete post because I’ve had trouble with wordpress crashing and losing my drafts. Watch this page for updates**

Reader Greg Goodman had a number of constructive suggestions, one of which was to remove the oscillatory modes from the data and run a fit to equation 2 on the residual. This removes any reliance on the SSA for extraction of the trend and thus precludes the possibility that the observed response is SSA related. If the fit parameters come out close to that which was achieved by fitting equation 1 to the differenced trend, it will be confirmation of the result.

Here is the hadcrut4 NH data with the 2 main oscillatory modes removed:

In order to use equation 2 on the data, we need to add a constant of integration, C and solve for the boundary condition of continuity at the step. The modified equation 2 is:

Eqn 2a – Modified 2nd order system response allowing for non-zero initial condition and with boundary condition of continuity at t=τ

Fitting Eqn 2a to the data of figure 1 yields the following parameters:

Compared to the previous fit obtained by fitting eq1 to the differentiated SSA mode.

We see the new fit yields a slightly more damped system with a slightly higher natural frequency. Given the data of figure 1 is much noisier, especially near t=tau, and the difference in detection method, I believe these results are confirmative. Of significance is the match in τ, the time at which the shift occurs.

Figure 2 shows the two responses

The fit is shown below (left) along with the residual (right)

Adding the oscillatory SSA modes to eqn (2a) is shown below against the unaltered data

The correlation coefficient R = .8476 is insignificantly better than the R = .8471 achieved with the original model using the integrated eqn 1. The correlation is dominated by the unpredictable events in the data. Smoothing the data with an HP smoothing factor of only 5 yields R = .971 for both models.

The plot below shows a three harmonic sine fit to the two SSA main modes. Note how the combined SSA modes (red) seem to change during the step response transition region (t=27-50). The three sine waves comprising the fit are shown on the right (fundamental period = 209 years). All three components are near their peaks. Time to get out your winter woollies for the next few decades.

**Update: Monday,Oct. 7**

I’ve done the above analysis on the SST data.

The eqn 2a parameters are

The fit to the differenced trend is shown below.

**Update: Monday,Oct. 7 PM**

If we take equation 2a above to represent a system whose output is the temperature anomaly, then that output is the convolution of the input forcing with eqn 2a. The plot below is the cyclical convolution of the yearly sunspot number with eqn 2a, scaled by a gain constant (.0005) and offset by .9 degrees.
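A sketch of the cyclical-convolution step via the FFT; the SSN series and kernel below are synthetic stand-ins, while the gain (.0005) and offset (.9) are the values quoted above:

```python
import numpy as np

def circular_convolve(x, h):
    """Cyclical convolution of two series via the FFT, after zero-padding
    both to a common length."""
    n = max(len(x), len(h))
    x = np.pad(np.asarray(x, float), (0, n - len(x)))
    h = np.pad(np.asarray(h, float), (0, n - len(h)))
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n)

# Sanity check: circular convolution with a unit delay rotates the series
assert np.allclose(circular_convolve([1, 2, 3, 0], [0, 1, 0, 0]), [0, 1, 2, 3])

# Stand-in yearly SSN series and a hypothetical decaying response kernel
# (shapes only, not the real data or the fitted eqn 2a response)
rng = np.random.default_rng(6)
yrs = np.arange(250)
ssn = 50 + 40 * np.sin(2 * np.pi * yrs / 11) + rng.standard_normal(250)
kernel = np.exp(-yrs / 30.0)
model = 0.0005 * circular_convolve(ssn, kernel) + 0.9
```

The wrap-around of the cyclical convolution is also what produces the end-of-record effects discussed below: near the end of the series, the kernel starts drawing on samples from the beginning of the record.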

The correlation of the convolution with the data is excellent until about 1970, when the kernel end-of-record effects become apparent.

This result is strong empirical evidence both for the skill in which equation 2a encapsulates the climatic response and for the sun spot number being a primary forcing function.

**[Note: In order to improve readability I have removed some earlier updates which were in error or incomplete ]**

**Update Tuesday, Oct 8, 8PM**

Here’s the result of matching the kernel length to the SSN data compared to Greg’s data (red) and the hadcrut4 SST data (gold).

In the plot below, the system response delay is taken out.

**Update Tuesday, Oct 8, 3:45AM**

The plots above were convolved against a kernel derived from the SST NH data. In the plot below, the system response was derived as above (parameter fit of eqn 2a) but using Greg Goodman’s adjusted SST data. The correlation with the data is excellent until about 1963. The apparent cooling (shaded area below) might be explainable by volcanic activity [although Greg’s not buying it – see his comments below].

The graph below is Figure 1 in the last IPCC report. The black line shows temperatures. The four biggest tropical eruptions over the past century had slight cooling effects.

**Update Tuesday, Oct 8, 8PM**

The plot below adds in a single mode (SSA[L=81,k=3,4]) to the convolved SSN record

I had to increase the kernel damping factor from the derived .135 to .5 to get the last 25 years to match. Here’s why the damping factor came in so low: as the plot above shows, the trend plus just a single SSA mode describes Greg’s adjusted SST data (R=.963). When we remove the mode from the data, we are left with what is essentially a sinusoid, and fitting eqn 2a to a sinusoid results in a low damping factor.

One difference is that I have been using unsmoothed monthly data decimated to yearly by sampling each June. The rationale for this strategy can be seen by looking at the SST stack plot.

Each column is a month. June has the least variance which I suppose is due to calmer seas. Since we want to avoid filtering to preserve as much transient behavior as possible and since the winter variance is uninteresting to the model, we sample in June. Greg’s stack plot is more homogeneous which indicates that perhaps the data was smoothed across months.

In either case (low damping + volcanic cooling / higher damping and no effect from volcanoes), it points out how much the Hadcrut adjustments have biased the IPCC conclusions regarding AGW. If Greg’s “unadjustment” is correct, one glance at the SSA analysis disproves any notion of AGW. And one glance at SSN convolution shows what really controls the climate. The sun, whoda thunk?

**Update Wednesday, Oct 9, 6AM**

Here are the remaining modes not included in the model.

Beyond mode 5, the reconstructions are noise-like.

The plot below shows the residual error of the model (SSN convolved with eqn 2a + mode 2).

**Update Wednesday, Oct 9, 8AM**

Here’s a backcast of the model. The assumption is that mode 1 above (the 58 year cycle) remained constant, which may not be true. I don’t know how to check this since it is an SST record, but it seems to match the BEST land temp record pretty well.

**Update: Friday, Oct. 11, 3PM**

Appended Greg’s SSN prediction to data set. Convolved with unmodified kernel.

The second row of plots shows the main result (slope of the reconstructed signal). Not only do we see the same .6 degC/century change between the 1920s peak and the 1990s peak that we saw before, we see the same .6 degC/century change between the 1850s peak and the 1920s peak! This lends further credence to those who argue that even the .6degC/century change is not attributable to AGW.

**Slope Detection Provides First-Order Insensitivity to PDO Amplitude Variations**

In the comments to my WUWT post, Willis Eschenbach pointed out that we don’t know how the unaltered climate would have varied over the observation interval. That’s true, but the question is how sensitive we are to such variations. I think the answer is: not very. Clearly the dominant variant is the quasi-periodic signal whose period is approximately 60 years. In the decomposition we see it is very sinusoidal looking. Let’s pretend that to first order we can model this as a sinusoid of approximately constant frequency but whose amplitude varies over the 163 year observation interval. Let’s also assume we have a linear AGW effect. Our model is thus

c(t) = a t + (m b(t) + 1) sin(wo t)

where a is the AGW slope, m is the AM modulation index, b(t) is the time function describing the amplitude variation and wo is the frequency of the mode. Since we are detecting the peak slope, we need to differentiate c(t) with respect to time.

c'(t) = a + m b'(t) sin(wo t) + wo (m b(t) + 1) cos(wo t)

The slope is evaluated at the points of maximum deviation of the mode, where wo t = Pi/2. At those points, the cos terms drop out because cos(Pi/2) = 0 and the sine term becomes unity. So

c'(t)max = a + m b'(t)

The “a” term is the AGW contribution to the temperature slope at the peak. The m b'(t) term is the natural contribution. Note that the values obtained at the peaks are first-order insensitive to b(t), the amplitude variation in the mode, and only sensitive to its first derivative, scaled by the mod index (% change in amplitude). This result can be extended via Fourier analysis to the general (non-sinusoidal) case: maximum insensitivity to amplitude variations is obtained by slope detection at the point of maximum deviation. This principle is why FM radios are insensitive to static noise.
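The cancellation at wo t = Pi/2 is easy to check numerically; all parameter values below are illustrative, not fitted:

```python
import numpy as np

# Numerical check that at wo*t = Pi/2 the slope of
#   c(t) = a*t + (m*b(t) + 1)*sin(wo*t)
# reduces to a + m*b'(t): the cos terms vanish and the sine term is unity.
a, m, wo = 0.008, 0.3, 2 * np.pi / 60.0       # illustrative values, not fitted

def b(t):  return 0.5 * np.sin(0.01 * t)      # arbitrary slow amplitude drift
def bp(t): return 0.005 * np.cos(0.01 * t)    # its derivative

def c(t):  return a * t + (m * b(t) + 1.0) * np.sin(wo * t)

t0 = (np.pi / 2) / wo                         # first point with wo*t = pi/2
h = 1e-6
slope = (c(t0 + h) - c(t0 - h)) / (2 * h)     # central-difference slope
assert abs(slope - (a + m * bp(t0))) < 1e-6
```

Swapping in any other slowly varying b(t) leaves the assertion intact, which is the first-order insensitivity claimed above.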

We have five samples of maximum slope, three in the positive direction and two in the negative. Note the symmetry in these excursions shown in the figure below where I’ve added two lines of identical slope.

The first three peak excursions (2 positive, 1 negative) occur prior to any significant AGW contribution, so the *a* term above can be assumed negligible over this interval and the peak values are determined by m*b'(t1), m*b'(t2), m*b'(t3).

The figure below shows the data of figure 2 detrended about the central trend, 0.0000588 t – 0.0049.

We see from figure 3 that m*b'(t1) ~ -m*b'(t2) ~ m*b'(t3). Thus an AGW component detectable prior to 1965 would show up as a positive offset added to the negative peak excursion of that year. None can be detected and in fact that excursion is slightly more negative. An AGW component detectable prior to 1993 would show up as a positive offset added to the positive excursion of that year. Again, no offset is detected and in fact, if we discount the obvious bump attributable to the large el-nino of that year, the peak excursion is probably slightly lower.

This channel has been a reliable predictor of the maximum slope excursion for 150 years, with no discernible effect from exponentially rising CO2 concentrations during the last full cycle. We conclude that the mean rate of increase in the slope of the northern hemisphere sea surface temperature has remained constant over the past 163 years and therefore the null hypothesis (no detectable AGW signal present in the SST data) cannot be rejected.

**Rejecting the IPCC Central Tenet**

This analysis, while limited to the northern hemisphere sea surface temperature record, finds no support for the IPCC AR5 summary statement that “It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.” They also state:

Greenhouse gases contributed a global mean surface warming likely to be in the range of 0.5°C to 1.3°C over the period 1951−2010.

We’ve shown above that no detectable AGW signal is present in the SST data so let’s add one in to determine if we could detect a signal producing the IPCC’s lowest estimate, .5°C over the period 1951−2010.

The first thing of note is that to achieve the IPCC low-end estimate only requires a benign .5°C/century component!

Running the detection algorithm with settings unchanged shows that even this minor contribution would be detectable were it there (note the positive offsets at both the negative excursion in 1965 and the positive excursion in 1997).

Most of the data-based conclusions in the IPCC report are supported by this analysis. To the extent we can trust the data, it shows the sea surface temperature has indeed warmed over the last century and a half and is warming at a slightly increasing rate. It is in the attribution, where the IPCC switches from data-based conclusions to model-based conclusions, that their confidence becomes unjustifiable.

“Models do not generally reproduce the observed reduction in surface warming trend over the last 10 –15 years.”

That’s because the models cannot reproduce the oscillatory natural variation responsible for both the current “pause” and the rapid warming observed in the 1980s and 1990s and wrongly attributed to GHG.


Here’s a reblog of my article of 9/25/2013 posted on WUWT. I’ll follow up here with discussions of the method and follow on analysis

**Guest essay by Jeffery S. Patterson**

My last post on this site examined the hypothesis that the climate is dominated by natural, harmonically related periodicities. As they say in the business, the critics were not kind. Some of the criticisms were due to a misunderstanding of the methodology and others stemmed from an underappreciation of the tentativeness of the conclusions, especially with respect to forecasting. With respect to the sparseness of the stochastic analysis, the critics were well founded. This lack of rigor is why it was submitted as a blog post and not a journal paper. Perhaps it served to spark the interest of someone who can do the uncertainty analysis properly, but I have a day job.

One of the commentators suggested I repeat the exercise using a technique called Singular Spectrum Analysis which I have done in a series of posts starting here. In this post, I…


The plot above shows the dominant eigenmode plotted with the detrended SST data. The plot below shows the mode plotted against a sine-wave fit.

The best-fit period is about 64 years and shows signs of slight phase modulation. At first I thought the difference in frequency between this mode and the similar mode extracted from the Hadcrut3 surface temperature data pointed away from a celestial forcing. Then I decided to look at correlations between the two.

Here’s the auto-correlation of the Hadcrut3 climate signal reconstructed from the first eigenpair.

It shows cyclostationarity with a period of ~57 years, similar to what was found by harmonic decomposition. Here’s the ACF of the same mode extracted from the SST northern hemisphere data, and here are both on the same plot. This is pretty convincing evidence that the modes in both datasets have the same period (56.8 years).
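The period-from-ACF step can be sketched in a few lines of NumPy. Everything below is a stand-in: a pure 56.8-year sinusoid on a long synthetic record plays the role of the actual SSA mode-1 reconstruction, so that the first ACF peak past the zero-lag lobe is easy to pick out.

```python
import numpy as np

# Synthetic stand-in for the SSA mode-1 series: a pure 56.8-year
# oscillation sampled annually over a long record (hypothetical data).
n = np.arange(10000)
x = np.sin(2 * np.pi * n / 56.8)
x = x - x.mean()

# Autocorrelation at non-negative lags, normalized to 1 at lag zero
acf = np.correlate(x, x, mode="full")[len(x) - 1:]
acf = acf / acf[0]

# The first ACF peak past the zero-lag lobe estimates the period
period_est = 20 + int(np.argmax(acf[20:120]))
print(period_est)   # → 57
```

On a short, noisy record the peak broadens and can shift by a year or two, which is why the article quotes 56.8 years from a finer analysis rather than an integer-lag peak.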

The cross-correlation shows the global surface temperature lags the sea surface temperature by 4.2 years.
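Recovering a lag like this from the cross-correlation peak can be sketched as follows. The two series here are hypothetical sinusoids sharing the ~57-year mode (the real inputs would be the two SSA reconstructions); a long synthetic record keeps the correlation peak sharp.

```python
import numpy as np

# Hypothetical stand-ins for the SST and Hadcrut3 mode reconstructions
n = np.arange(10000)
period, lag_true = 56.8, 4.2                 # years, values from the text
sst = np.sin(2 * np.pi * n / period)
hadcrut = np.sin(2 * np.pi * (n - lag_true) / period)   # lags sst

# Full cross-correlation; a positive peak lag means hadcrut lags sst
xcorr = np.correlate(hadcrut - hadcrut.mean(), sst - sst.mean(), mode="full")
lags = np.arange(-len(n) + 1, len(n))
lag_est = lags[np.argmax(xcorr)]
print(lag_est)   # → 4 (nearest whole year to the 4.2-year lag)
```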

The cross spectrum is pretty amazing:

And then there is this:

The top pane shows the decimated Hadcrut3 data series we’ve been using. The second pane shows mode 2 of the SSA decomposition. The third pane reconstructs the series from all 35 modes except mode 2. The last pane shows that if we add mode 2 back in, we reconstruct the original series exactly (both are overlaid on the plot).
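The pane arithmetic rests on SSA being purely additive. A minimal sketch of the decomposition (synthetic data with made-up numbers; window length 35 to match the 35 modes above) shows that the extracted modes always sum back to the original series exactly:

```python
import numpy as np

def ssa_modes(x, L):
    """Decompose series x into additive SSA modes with window length L.
    Returns an array of shape (L, len(x)) whose rows sum back to x."""
    N, K = len(x), len(x) - L + 1
    # Trajectory (Hankel) matrix: X[r, c] = x[r + c]
    X = np.column_stack([x[c:c + L] for c in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = np.zeros((L, N))
    for k in range(L):
        Xk = s[k] * np.outer(U[:, k], Vt[k])    # rank-1 component
        for i in range(N):                      # anti-diagonal averaging
            r = np.arange(max(0, i - K + 1), min(L, i + 1))
            modes[k, i] = Xk[r, i - r].mean()
    return modes

# Hypothetical stand-in for the decimated Hadcrut3 series: trend plus
# a ~57-year oscillation plus noise (the numbers are made up)
rng = np.random.default_rng(0)
t = np.arange(113)
x = 0.005 * t + 0.2 * np.sin(2 * np.pi * t / 57) + 0.05 * rng.standard_normal(113)

m = ssa_modes(x, 35)
# Summing every mode (in the article's terms, pane 3 plus mode 2)
# reconstructs the original series exactly, to floating-point precision
print(np.allclose(m.sum(axis=0), x))   # → True
```

The exactness follows because the SVD reconstructs the trajectory matrix exactly and the anti-diagonal averaging is linear, so no choice of grouping can lose or invent signal.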

Focusing on pane 3, where we’ve removed the mode 2 contribution, we see a linear trend with a slope of ~0.5 degsC per century which has remained constant across the entire 113-year span, with no sign of acceleration due to anthropogenic effects. Thus, **if there is any signature of AGW to be found, it must be found in the contribution we’ve excluded:** mode 2 of the SSA decomposition.

Note that this conclusion is completely independent of the method of decomposition. If I claim an angel from on high appeared suddenly and handed me the extracted mode plotted on pane 2, the conclusion above would be scientifically valid, as long as you accept the laws of addition. This is shown in the last pane, which proves that pane 3 (the reconstruction excluding mode 2) + pane 2 (mode 2) = Observed Temperature Record, as it must. **Pane 3 shows no AGW signature, therefore it must be in mode 2 if it is in the observed temperature record at all**.

The conclusion that any sign of AGW must appear in the mode 2 component is also free of any assumptions about the nature of mode 2 or of the underlying climate system as a whole. It depends only on the observed temperature record.

Looking at mode 2 (SSA[35,2,t], plotted as a time series in pane 2), we see a periodic, sine-ish looking signal tilted upward. I’ve shown a linear regression with a slope of about 0.25 degC per century, but the slope of this line depends on where we choose to start and stop the record. In fact it is just the average value of the instantaneous slope (the year-to-year difference) plotted below.
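The endpoint dependence is easy to see numerically: the mean of the first differences telescopes to (last − first) / span, so it carries no information beyond the two endpoints. A sketch with a hypothetical tilted oscillation standing in for mode 2:

```python
import numpy as np

# Hypothetical stand-in for mode 2: an upward-tilted ~57-year oscillation
t = np.arange(1900, 2013)
mode2 = 0.0025 * (t - 1900) + 0.15 * np.sin(2 * np.pi * (t - 1900) / 57)

inst_slope = np.diff(mode2)          # instantaneous (year-to-year) slope
mean_slope = inst_slope.mean()       # average instantaneous slope
# The telescoping sum reduces the mean slope to the endpoint slope
endpoint_slope = (mode2[-1] - mode2[0]) / (len(mode2) - 1)
print(np.isclose(mean_slope, endpoint_slope))   # → True
```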

Note that the maximum slope achieved in 1993 barely exceeds that of 1923. That alone disproves the AGW effect (remember, if there’s AGW it has to be seen in mode 2). But look closely at 1980. There is a suspicious step change in the slope that is undoubtedly something hinky with the dataset. If one were to look back at the history of Hadcrut3, I’d bet dollars to donuts there was an “adjustment” made to the data from 1980 onwards, probably because the data wasn’t matching the GCMs.

Hinkiness aside, if somehow the climate should assume the maximum temperature slope ever seen in the past 113 years (actually the past 163 years, since the slope from 1850 to 1900 was down to flat) and stay constant at this extreme, the most we’d see is 1.85 degs/century (0.5 degsC/century from the overall trend in pane 3 plus the worst-case 1.35 degs/century above).

Of course that’s not going to happen. The mode 2 slope is negative, as is the mode 1 slope (see the trend analysis in part 2), but future trends aside, the main point is: **there is absolutely no AGW signature to be found in the observed surface temperature record**. There is nothing to indicate the last half of the last century warmed any faster than the first half, and if my suspicions about the dataset are correct, the maximum rate in the second half was substantially lower. This obviates the need for forecasts altogether, since there is no evidence mankind is having any effect whatsoever on the global surface temperature.

**Note to SkepticalScience readers** The foregoing is not a model, it is a deconstruction. It makes no predictions, projections or divinations. It cannot be validly backcast or forecast (but it should be broadcast, widely). It makes no assumptions about forcings, debunked or otherwise, and is valid whether the extracted mode is internal, external or a fluke. It depends only on observed data and the Law of Addition.

**Correcting the Record**

Just for fun, I’ve taken the step out of the mode 2 first difference (it turns out it was between 1979 and 1980), reintegrated to create a “corrected” mode 2, and used the corrected mode 2 to reconstruct the dataset. The difference is a significant 0.12-degree overestimation of current warming caused by the adjustment.
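The correction procedure can be sketched as follows: estimate the step in the first-difference series, subtract it from 1980 onward, and reintegrate with a cumulative sum. The series and step size below are made up for illustration; the article reports a 0.12-degree effect on the actual data.

```python
import numpy as np

# Hypothetical first-difference series: a smooth year-to-year slope with
# a suspected step added from the 1979/1980 boundary onward
t = np.arange(1900, 2013)
d = 0.002 * np.ones(len(t) - 1)
suspect = t[:-1] >= 1979
d[suspect] += 0.001                      # the suspected adjustment step

# Estimate and remove the step from the first differences
step = d[suspect].mean() - d[~suspect].mean()
d_corrected = d.copy()
d_corrected[suspect] -= step

# Reintegrate: a cumulative sum rebuilds each series from its differences
mode2_raw = np.concatenate([[0.0], np.cumsum(d)])
mode2_corr = np.concatenate([[0.0], np.cumsum(d_corrected)])
overestimate = mode2_raw[-1] - mode2_corr[-1]
print(round(overestimate, 3))   # → 0.033 with these made-up numbers
```

With real data the step would be estimated from means taken over windows on either side of the 1979/1980 boundary, after removing the oscillatory part, rather than over the whole record.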
