Wednesday 11 July 2012

Update on "demo product" strategy

We revised the demo product strategy from that originally proposed, because instrument failures prevented us from pursuing the original plans (see here). As we get closer to this work, Alison McLaren has pointed out that the notes don't discuss the depths intended for the analysis products for the demo periods.

So, this note simply clarifies this for the record. It is straightforward, since the choice arises from the purpose for each of the revised demo periods.

The "4i" option in effect adds passive microwave sensors to the long-term ECV product, and it will be of interest to see their impact. Clearly, this has to include an analysis product of the same type as the long-term record -- i.e., SST-depth.

The "4ii" option covers Metop AVHRR 0.05deg, AATSR L2P and SEVIRI, all of which deliver true skin SSTs. The main purpose of this demo is simply to gain experience of handling these datasets (which are larger in volume than the long-term AVHRR GAC and ATSR 0.05deg data). The outputs will be compared with operational OSTIA, and therefore the analysis should be a foundation analysis, obtained by applying the same methods to skin SSTs as are currently used in OSTIA operationally.

Tuesday 3 July 2012

How to estimate SST in SST CCI?

The "Algorithm Selection" process has been discussed many times on this blog (here, here and here, for example). Now I can present the conclusions.

First, some general comments. It was great to get some external participation, and to be able to make very clean comparisons between different methods of estimating SST from space on common data, where we had controlled the procedure for comparison against validation data that had not been used in algorithm development by any party.

It turned out that there was tough competition between the algorithms in terms of the quantitative metrics. Just to recap: we generated statistics and maps of "bias" (taken as the mean of the satellite SST-depth minus drifting-buoy differences), "precision" (the standard deviation of the same) and "SST sensitivity", along with various measures of stability (with respect to trend, season and day-night).
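To make the first two metrics concrete, here is a minimal sketch of how bias and precision follow from a set of satellite-minus-buoy matchup differences. The numbers are made up for illustration; they are not SST CCI results.

```python
import numpy as np

# Hypothetical matchup data: satellite SST-depth retrievals paired with
# drifting-buoy SSTs at the same time and place (values in kelvin).
sat_sst = np.array([288.21, 290.05, 285.90, 291.33, 287.48])
buoy_sst = np.array([288.05, 290.20, 285.75, 291.10, 287.60])

diff = sat_sst - buoy_sst      # per-matchup satellite-minus-buoy difference
bias = diff.mean()             # "bias": mean of the differences
precision = diff.std(ddof=1)   # "precision": standard deviation of the same

print(f"bias = {bias:+.3f} K, precision = {precision:.3f} K")
```

In practice the same statistics are stratified (by region, season, day/night, and so on) to build the maps and stability measures mentioned above.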

Generally, all the considered algorithms were good. Sometimes one would perform better on a particular metric for a given sensor, but on a different sensor, the ranking would be reversed.

For the ATSRs, the choice came down to using either the ARC-project coefficients, or a version of optimal estimation tuned to the ARC-project coefficients. Both were therefore independent of in situ measurements. The optimal estimation had a slight edge on precision, but otherwise neither had a clear, consistent advantage over the other across the full range of application.

For the AVHRRs, the choice came down to the same optimal estimation or Boris Petrenko's incremental regression approach. Incremental regression gave better precision but poorer sensitivity, so there was a trade-off. For night-time cases, the two were very similar overall on the quantitative metrics. For daytime cases, optimal estimation was a little better on bias and sensitivity but, as mentioned, not as good on precision.
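For readers unfamiliar with the retrieval side of this, here is a toy sketch of a linearized optimal-estimation step and of how "SST sensitivity" can be read from it. All numbers (prior, Jacobian, channel errors) are invented for illustration; this is the standard textbook form, not the SST CCI configuration.

```python
import numpy as np

# State vector: [SST (K), total column water vapour (kg m^-2)]
xa = np.array([288.0, 30.0])        # prior state
Sa = np.diag([2.0**2, 10.0**2])     # prior covariance (illustrative)

# Two infrared channels (e.g. split-window brightness temperatures)
K = np.array([[0.95, -0.10],        # Jacobian d(BT)/d(state), illustrative
              [0.85, -0.25]])
Se = np.diag([0.1**2, 0.1**2])      # observation-error covariance

y_minus_f = np.array([0.40, 0.20])  # observed minus simulated BTs

# Gain matrix G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1, then the
# retrieval increment x = xa + G (y - F(xa))
G = np.linalg.solve(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa),
                    K.T @ np.linalg.inv(Se))
x = xa + G @ y_minus_f

# "SST sensitivity" is the (SST, SST) element of the averaging
# kernel A = G K: d(retrieved SST) / d(true SST). A value near 1
# means the retrieval responds fully to real SST variations.
A = G @ K
print(f"retrieved SST = {x[0]:.2f} K, sensitivity = {A[0, 0]:.2f}")
```

Regression-type retrievals have an analogous sensitivity (the derivative of retrieved SST with respect to true SST), which is what the trade-off against precision above refers to.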

So, in the end, the only very clear deciding factor was that incremental regression is an empirical approach, tuned to in situ measurements. Optimal estimation, being tuned to independent ARC SSTs, retains independence from in situ measurements -- this being an advantage for a significant minority of climate users of SST (according to our earlier survey).

Having opted for optimal estimation for the AVHRRs, it then seemed preferable to make the same choice for the ATSRs, to maximise the consistency of approach across the sensors. (The only exception will be ATSR-1, for which a version of optimal estimation is yet to be developed.)