FAQ: Which Simulations Should I Use?

Users often ask: which simulations should I use for impacts analysis?

The short answer: use as many of the model outputs as possible.

The longer answer:

Sometimes people phrase this question as "which is the best model?" and the reality is that there is no such thing. The climate model simulations are not forecasts that predict the future; they are projections that describe possible futures given particular assumptions. The differences between simulations reflect important uncertainties in our knowledge about the future, including but not limited to assumptions about human activity, our understanding of various earth system processes, and our ability to accurately simulate those processes.

Moreover, every model has its strengths and weaknesses, and while one model may be better than others at reproducing the observed climate in one particular season and location, in another time and place a different model may be better. All of the simulations in the archive have passed the low-bar test of producing outputs that look generally sensible and realistic over the broad North American domain; beyond that, simulation quality depends on the region and purpose in question.

When it comes to climate impacts analysis, the best way to make use of these simulations is to get a sense of the range of likely possibilities for the future, and to see how that envelope of uncertainty propagates through the system you're interested in. If it's fairly easy for you to run your impact model or analysis multiple times, the simplest thing to do is to use all the model outputs. If that's more costly, we recommend that you use at the very least a low-end result and a high-end result, so that you have a sense of how the range of uncertainty affects the outcome, and probably also a result somewhere in the middle.
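If running your analysis repeatedly is cheap, the loop can be as simple as the sketch below. This is purely illustrative: the file names and the run_impact_model function are hypothetical placeholders for your own data and analysis.

    # Minimal sketch: propagate the simulation spread through an impact model.
    # The file list and run_impact_model are hypothetical placeholders.
    simulation_files = [
        "sim_low_end.nc",    # e.g., a cooler/wetter simulation for your region
        "sim_middle.nc",     # something near the middle of the ensemble
        "sim_high_end.nc",   # e.g., a hotter/drier simulation
    ]

    def run_impact_model(path):
        # Stand-in for your impact model; in practice this would read the
        # simulation output (e.g., with xarray) and return an impact metric.
        return float(len(path))  # toy value so the sketch runs end to end

    results = {path: run_impact_model(path) for path in simulation_files}
    print("Impact range across simulations:",
          min(results.values()), "to", max(results.values()))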

What constitutes a low-end or high-end result varies both spatially and by variable, so unfortunately we cannot make any general recommendations there. However, the website's Results section includes analyses of the simulation outputs that you may find useful in identifying these simulations.

All that said, there are some cases where particular simulations can and should be eliminated from consideration because they do not do a good job of capturing the weather and climate processes that are important in a particular area. If a simulation doesn't model the underlying phenomena well, there's no reason to believe what it has to say about how things will change in the future. Unfortunately, this kind of credibility analysis is too difficult and time-consuming to perform everywhere as a matter of course, so you will need to search the literature for analyses of model performance in your area of interest. You can also do your own analysis: it's good practice to check model outputs against your own understanding of the weather and climate in the region to see whether they make sense before you use them. If you're not a climate expert yourself, we recommend working with a climate scientist to evaluate the credibility of the simulations.
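One simple quantitative starting point is to compare a simulation's historical output against observations for your region. The sketch below uses synthetic stand-in data; in practice the two climatologies would come from the model's historical run and an observational dataset for the same period.

    import numpy as np

    # Hypothetical sanity check: compare a model's historical monthly
    # climatology against observations before trusting its projections.
    rng = np.random.default_rng(1)
    obs = rng.normal(15.0, 8.0, size=(30, 12))    # obs: 30 years x 12 months
    model = rng.normal(17.0, 8.0, size=(30, 12))  # model historical, same period

    bias = model.mean(axis=0) - obs.mean(axis=0)  # monthly climatological bias
    rmse = np.sqrt(np.mean(bias ** 2))
    print(f"Mean bias: {bias.mean():.2f}  climatology RMSE: {rmse:.2f}")
    # Large or strongly seasonal biases are a flag to dig deeper before
    # using the simulation for your application.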

Another issue to be aware of is that climate simulations typically contain systematic biases, which often need to be corrected before the outputs can be used in impacts analysis.

One commonly used approach that addresses both of these concerns is the "delta method". In this approach, you generate a climatological average for the current period and for the future period, and then subtract the current from the future to get a climate change factor, or delta. Assuming that the bias is constant, it cancels out when you difference the two climatologies. You can then apply the delta to your existing observational data to project it into the simulated future climate. (For temperature, the delta is additive; for precipitation, it is the ratio of future to current rather than the difference, and is applied multiplicatively.)
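As a concrete illustration, here is a minimal sketch of the delta method using NumPy. The series and 30-year windows are hypothetical stand-ins; in practice the climatologies would be computed from gridded model output over matched multi-decade periods.

    import numpy as np

    # Hypothetical daily series for one location (synthetic stand-ins).
    rng = np.random.default_rng(0)
    tas_hist = rng.normal(10.0, 5.0, 10950)  # model temperature, current 30 years
    tas_fut  = rng.normal(13.0, 5.5, 10950)  # model temperature, future 30 years
    pr_hist  = rng.gamma(2.0, 1.5, 10950)    # model precipitation, current
    pr_fut   = rng.gamma(2.0, 1.8, 10950)    # model precipitation, future
    tas_obs  = rng.normal(9.5, 4.8, 10950)   # observed temperature, current
    pr_obs   = rng.gamma(2.0, 1.4, 10950)    # observed precipitation, current

    # Temperature: additive delta; a constant bias cancels in the difference.
    delta_tas = tas_fut.mean() - tas_hist.mean()
    tas_projected = tas_obs + delta_tas

    # Precipitation: multiplicative delta (ratio of future to current),
    # applied by scaling the observations.
    delta_pr = pr_fut.mean() / pr_hist.mean()
    pr_projected = pr_obs * delta_pr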

We are also generating versions of the NA-CORDEX dataset in which the bias has been corrected using a form of quantile mapping, a technique where individual values are adjusted so that the PDF (probability density function) of the model data matches the PDF of an observational dataset for the corresponding period of time. This approach depends on having a matching observational dataset, so we cannot apply it to all variables, locations, and grids in the archive. The bias-corrected datasets are labeled as such on the data portal.
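To give a feel for how quantile mapping works, here is a bare-bones empirical version. This is a simplified illustration of the general technique, not the specific procedure used to produce the NA-CORDEX bias-corrected datasets, and the data are synthetic stand-ins.

    import numpy as np

    def quantile_map(model_hist, obs, values, n_quantiles=100):
        # Build a transfer function from matching empirical quantiles of the
        # model and observations over a common period, then apply it.
        q = np.linspace(0.0, 1.0, n_quantiles)
        model_q = np.quantile(model_hist, q)
        obs_q = np.quantile(obs, q)
        # Each value is located on the model's quantile curve and replaced
        # by the observed value at the same quantile.
        return np.interp(values, model_q, obs_q)

    # Hypothetical example: a model that runs too warm with too little spread.
    rng = np.random.default_rng(0)
    obs = rng.normal(10.0, 5.0, 5000)         # observations, historical period
    model_hist = rng.normal(12.0, 4.0, 5000)  # model output, same period
    corrected = quantile_map(model_hist, obs, model_hist)

Applying the same transfer function to a future model series extends the correction forward in time, under the assumption that the model's bias structure is stable.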

There are of course many other approaches to dealing with bias, and what makes the most sense depends on the details of your problem, including your resource limitations. Unfortunately, we can't make more specific recommendations in the abstract, but we hope this overview of how to approach the problem is enough to get you started. We are happy to discuss these issues in more detail if you have further questions.