Possibly extreme, probably not: Is possibility theory the route for risk‐averse decision‐making?

Ensemble forecasting has become popular in weather prediction to reflect the uncertainty about high‐dimensional, nonlinear systems with extreme sensitivity to initial conditions. By means of small strategic perturbations of the initial conditions, sometimes combined with stochastic parameterisation schemes of the atmosphere–ocean dynamical equations, ensemble forecasting aims at sampling possible future scenarios and, ideally, at interpreting them in a Monte‐Carlo‐like approximation. Traditional probabilistic interpretations of ensemble forecasts take into account neither epistemic uncertainty nor the fact that ensemble predictions cannot always be interpreted in a density‐based manner, due to the strongly nonlinear dynamics of the atmospheric system. As a result, probabilistic predictions are not always reliable, especially in the case of extreme events. In this work, we investigate whether relying on possibility theory, an uncertainty theory derived from fuzzy set theory and connected to imprecise probabilities, can circumvent these limitations. We show how it can be used to compute confidence intervals with guaranteed reliability when a classical probabilistic postprocessing technique fails to do so in the case of extreme events. We illustrate our approach with an imperfect version of the Lorenz 96 model and demonstrate that it is promising for risk‐averse decision‐making.


| INTRODUCTION
In weather forecasting, it is acknowledged that by design (limited size of the set of ensemble predictions [EPS], targeted sampling of initial conditions [ICs]) and by context (flow-dependent regime error, strongly nonlinear system), raw ensemble forecasts generally do not provide reliable probabilistic predictions (Bröcker and Smith, 2008; Gneiting and Katzfuss, 2014). This is especially the case for extreme events (Legg and Mylne, 2004). The latter result from nonlinear interactions at small scales, which implies that they generally cannot be associated with a high density of ensemble members (Mylne et al., 2002). Ensemble forecasts are made more reliable and operational via calibration (Buizza, 2018), whose aim can be summarised as finding the transformation that, applied to the raw ensemble, leads to the probability distribution maximising a performance metric on a training set. In spite of the diversity of approaches developed in the literature (Buizza, 2018) and their technical success in improving prediction skill for common events, the actionability of probabilistic predictions often remains problematic (Smith, 2016). In particular, the probabilistic prediction of extreme events often needs a development of its own (Friederichs and Hense, 2007; Friederichs et al., 2018). Bröcker and Smith (2008) questioned whether probability distributions constitute "the best representation of the valuable information contained in an EPS." We advance arguments that possibility theory, "a weaker theory than probability […] also relevant in nonprobabilistic settings where additivity no longer makes sense" (Dubois et al., 2004), is an interesting alternative. Our investigation is particularly relevant since conceptual and practical limitations restrict the applicability of a density-based (i.e., additive) interpretation of EPSs.
We show how interpreting EPSs in a possibilistic way brings useful formal guarantees on the derived confidence intervals, even in the case of extreme events.
Section 2 summarises the basics of possibility theory. Section 3 presents our possibilistic framework and discusses the theoretical guarantees that can be associated with its outputs. Section 4 introduces the synthetic experiments on the Lorenz 96 system (Lorenz, 1996) (L96) which allow us to assess these guarantees and their operational cost for both common and extreme events. We compare them with the outputs of a classical probabilistic interpretation of EPSs and discuss our results in section 5.

| POSSIBILITY THEORY
Possibility theory is an uncertainty theory developed from fuzzy set theory by Zadeh (1978) and Dubois and Prade (2012). It is designed to handle incomplete information and represent ignorance. Considering a system whose state is described by a variable x ∈ 𝒳, the possibility distribution π : 𝒳 → [0, 1] represents the available information (or evidence) about the current state of the system. Given an event A = {x ∈ S_A}, where S_A is a subset of 𝒳, the possibility and necessity measures are defined, respectively, as

Π(A) = sup_{x ∈ S_A} π(x),   N(A) = 1 − Π(Ā),

where Ā represents the complementary event of A (see Figure 1 for a visual understanding of these quantities). Both measures satisfy the following axioms and conventions (Cayrac et al., 1994): N(A) = 1 means that A is certainly true, Π(A) = 0 that A is impossible, and Π(A) = Π(Ā) = 1 represents total ignorance: the evidence does not allow us to conclude whether A is rather true or false.
Possibility and probability distributions are interconnected through the concept of imprecise probabilities (Dempster, 2008). A probability measure P and a possibility measure Π are consistent if (Dubois et al., 2004)

P(A) ≤ Π(A) for all A ⊆ 𝒳.  (1)

The definition of necessity implies that, under these conditions,

N(A) ≤ P(A) ≤ Π(A) for all A ⊆ 𝒳.  (2)

2.1 | From data to possibility distribution

Let x ∈ 𝒳 be a stochastic variable for which we try to make a prediction. The evidence about x is a set S = {x_1, …, x_{N_s}} of N_s samples of x, which we assume has been randomly generated from an unknown probability distribution P. To turn this information into a possibility distribution describing the knowledge on the actual value of x, we use the technique developed by Masson and Denoeux (2006). Their methodology is specifically designed to derive a possibility distribution from scarce data. The idea is, after binning the x-axis into n bins, to recover the simultaneous confidence intervals at level β on the true probability P(x ∈ b_i) for each bin b_i. From these confidence intervals and considerations about Equation (1), the procedure allows us to compute a possibility distribution π(x) that dominates the true probability distribution with confidence β (i.e., Equation (1) is verified in 100β% of the cases). The simultaneous confidence intervals for multinomial proportions are computed by means of Goodman's formulation (Goodman, 1965). This procedure takes into account the uncertainty on the multinomial proportions due to the limited size of S. This is fundamental for our application, which is to seek guarantees on the possibility of observing a given event.

FIGURE 1  Possibility distribution π(s) where, for an event of interest A = {s ∈ S_A}, the possibility Π(A) and necessity N(A) = 1 − Π(Ā) measures are represented.
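As a sketch of this data-to-possibility step (not the authors' code), the Goodman simultaneous confidence intervals and a simplified, deliberately conservative variant of the Masson–Denoeux transformation could be implemented as follows. The function names are ours, the chi-square correction follows the commonly cited Goodman formulation, and the final transformation is an outer approximation: only bins whose probability is certainly larger than that of bin i reduce the possibility of bin i.

```python
import numpy as np
from scipy.stats import chi2

def goodman_intervals(counts, beta):
    """Goodman (1965) simultaneous confidence intervals for multinomial
    proportions at joint confidence level beta."""
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), len(counts)
    # chi-square quantile with 1 dof, Bonferroni-style correction over k cells
    c = chi2.ppf(1.0 - (1.0 - beta) / k, df=1)
    half = np.sqrt(c * (c + 4.0 * counts * (n - counts) / n))
    lo = (c + 2.0 * counts - half) / (2.0 * (n + c))
    hi = (c + 2.0 * counts + half) / (2.0 * (n + c))
    return np.clip(lo, 0.0, 1.0), np.clip(hi, 0.0, 1.0)

def possibility_from_counts(counts, beta):
    """Conservative possibility distribution dominating the true bin
    probabilities with confidence beta (simplified variant of the
    Masson & Denoeux, 2006, transformation)."""
    lo, hi = goodman_intervals(counts, beta)
    poss = np.empty_like(lo)
    for i in range(len(lo)):
        # only bins certainly more probable than bin i lower its possibility
        certainly_above = lo[lo > hi[i]]
        poss[i] = min(1.0, 1.0 - certainly_above.sum())
    return poss
```

The mode of the histogram always receives possibility 1, and poorly sampled bins keep a possibility bounded away from 0, which is exactly the conservativeness discussed above for small N_s.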
As shown by Equation (2), a possibility distribution can be seen as a complete and consistent framework to deal with imprecise probabilities. Although the above procedure for computing a possibility distribution mostly relies on probabilities, its result contains more information than a purely probabilistic distribution in situations of incompleteness (typically implied by a small dataset S). Indeed, the interval on the true probability allows the incompleteness of data or knowledge to be accounted for, whereas a point probability hides the fact that the said probability cannot be fully trusted (e.g., due to epistemic uncertainty). Figure 2 illustrates the results of this methodology applied to datasets sampled from a normal distribution, for various levels of β and N_s. For a given N_s, the larger β is, the more conservative the distribution: the level γ such that π(x) ≥ γ for all x is larger, which implies that Π(A) ≥ γ for any event A ⊆ 𝒳. This also reads N(A) = 1 − Π(Ā) ≤ 1 − γ, meaning that the confidence level associated with any A cannot reach high values. Increasing N_s reduces the relative effect of β and all distributions tend in shape towards the underlying probability distribution, even if the tails remain more conservative for larger β.

FIGURE 2  Possibility distributions (solid lines) derived from datasets of N_s elements sampled from a standard normal distribution. This derivation requires the computation of simultaneous confidence intervals for multinomial proportions over the x-axis binned into n = 10 bins. The effect of the confidence level β = {0.6, 0.75, 0.9, 0.95, 0.99, 1} of Goodman's formulation is shown (larger β are plotted darker). Vertical red lines represent a frequency histogram of the same datasets, and the normalised underlying Gaussian distribution is represented as a dotted line.

| PROPOSED FRAMEWORK
We are interested in the prediction of the state variable x_{t_0+t} of a dynamical system at lead time t, starting from the IC x_{t_0}. For simplicity, we omit the reference to t_0 and denote by x_t the verification. In the EPS context, given a numerical prediction model ℳ, the elements of information at hand are:
1. An ensemble of M predictions at lead time t, the ensemble members or EPS, obtained by means of ℳ applied to slightly perturbed ICs around t_0: x̂_t = {x̂_t^1, …, x̂_t^M}.
2. An archive ℐ_t containing the pairs (x̂_{t_0+t}, x_{t_0+t}) for the lead time t of interest and N_I different instances of t_0. These instances are chosen so that the initial points of two successive trajectories are statistically independent of each other.

| Deriving possibility distributions from EPSs
The objective of our possibilistic interpretation of EPSs is to derive from an EPS x̂_t and the archive ℐ_t a possibility distribution π(x_t | x̂_t, ℐ_t) that encodes the knowledge derived from x̂_t about the verification x_t. The procedure described in this section is summarised and illustrated in steps 1–5 of Figure 3.
Both the system and the model being (to a certain extent) deterministic and (close to) stationary, the past behaviour of the couple {system, model} is representative of its future behaviour. Consequently, if we are able to enumerate the possible values (already seen in ℐ_t or not) for the verification x_t associated with a small range S_{x_t} of the values taken by ensemble members, then a future observation x_t should belong to that set of possible values whenever an ensemble member x̂_t^m falls within S_{x_t}. Beyond that, we would like to know which of these values are more possible than others for x_t. In other words, we want to estimate the possibility distribution π(x_t | x̂_t^m ∈ S_{x_t}). Because there is no notion of "density" of the evidence in the possibilistic perspective (at least in our rationale for choosing this framework), the number of ensemble members falling in S_{x_t} will not affect the resulting possibility distribution for x_t.
To make use of the full set of ensemble members, we first partition the x-axis into n bins b_i, take the subset B of bins occupied by at least one ensemble member of the EPS, and compute |B| possibility distributions π(x_t | x̂_t^m ∈ b_j), where b_j ∈ B. Namely, for each bin b_j ∈ B occupied by at least one ensemble member x̂_t^m ∈ x̂_t, we retrieve from the archive ℐ_t the N_s ensemble members x̂_t^m ∈ b_j and build a histogram of the set of corresponding verifications (so-called analogs) over the same binned x-axis. We then derive π(x_t | x̂_t^m ∈ b_j) following the methodology presented in section 2.
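The analog-retrieval step above can be sketched as follows. This is an illustrative helper of our own design (not the authors' code): it assumes the archive is stored as a 2-D array of past EPSs with one matching verification per row, and returns the histogram of verifications whose archived member fell in bin b_j.

```python
import numpy as np

def analog_counts(archive_eps, archive_obs, bin_edges, j):
    """Histogram of the verifications (analogs) whose archived ensemble
    member fell in bin b_j.

    archive_eps : (N_I, M) array of archived EPS members
    archive_obs : (N_I,) array of matching verifications
    """
    lo, hi = bin_edges[j], bin_edges[j + 1]
    members = archive_eps.ravel()
    # repeat each verification once per member of its EPS
    obs = np.repeat(archive_obs, archive_eps.shape[1])
    analogs = obs[(members >= lo) & (members < hi)]
    return np.histogram(analogs, bins=bin_edges)[0]
```

The resulting counts are exactly the input that a Masson–Denoeux-style data-to-possibility transformation expects.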
We obtain |B| possibility distributions π(x_t | x̂_t^m ∈ b_j), each dominating with confidence β the true probability distribution P(x_t | x̂_t^m ∈ b_j). Each possibility distribution provides the possibilities for the verification x_t given the presence of one or more ensemble members in bin b_j and is thus a partial view of the state x_t. Since there is only one truth for x_t and several incomplete views of the verification, we can merge them through a disjunctive pooling (Dubois and Prade, 1992; Sentz et al., 2002). Fuzzy set theory offers several definitions for computing the distribution resulting from the union of two fuzzy distributions. We adopt here the standard definition for its intuitive rationale: we construct the resulting possibility distribution as

π_EPS(x_t) = max_{b_j ∈ B} π(x_t | x̂_t^m ∈ b_j).
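The standard fuzzy union reduces to a pointwise maximum over the per-bin distributions, which can be written in a couple of lines (illustrative sketch; the function name is ours):

```python
import numpy as np

def disjunctive_pool(distributions):
    """Merge several partial possibility distributions pi(x | member in b_j)
    via the standard fuzzy union: the pointwise maximum (their envelope)."""
    return np.max(np.vstack(distributions), axis=0)
```

Because the pool is an envelope, the merged distribution can never be more specific than any of its inputs, which is what preserves the per-distribution guarantees.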

| From possibility distribution to prediction
We focus on the continuous interpretation of π EPS and now turn to our approach for producing confidence intervals for the future value x t , and on the associated formal guarantees.
As can easily be derived from Equation (2), a possibility distribution π is consistent with the associated probability measure P if its α-cuts C_α^π = {x, π(x) ≥ α} satisfy

P(x ∈ C_α^π) ≥ 1 − α for all α ∈ [0, 1].

This constitutes an easily verifiable consistency criterion (Hose and Hanss, 2019).
The possibility distribution satisfying this criterion is not unique. Beyond consistency, the choice of a possibility distribution to model the knowledge at hand is driven by the principle of maximum specificity (Dubois et al., 2004). If π_1 and π_2 are two possibility distributions such that π_1(x) ≤ π_2(x) for all x ∈ 𝒳, then π_1 is said to be more specific than π_2 and is more informative (i.e., less conservative). Maximum specificity w.r.t. the probabilistic information (a priori unknown) is achieved when the possibility distribution is probabilistically calibrated:¹

P(x ∈ C_α^π) = 1 − α for all α ∈ [0, 1].  (5)

This means that each α-cut represents a frequentist confidence interval at level 1 − α for the variable of interest and π is a consonant confidence structure (Balch, 2020).
By construction, the individual possibility distributions π(x_t | x̂_t^m ∈ b_j) verify Equation (1) with a guaranteed confidence level β. π_EPS, being made of their envelope, cannot be more specific than any single one of them, and consequently the same guarantee applies. In the case of its α-cuts, this reads: P(x_t ∈ C_α^{π_EPS}) ≥ 1 − α with confidence β. Masson and Denoeux (2006) show empirically that their data-to-possibility transformation is rather conservative and provides a possibility distribution that actually dominates the true probability distribution at a rate much higher than the guaranteed β. Even for small sample sizes, the choice of β is not critical and a quasi-perfect coverage rate is obtained: β ≥ 0.8 ensures that P(P(x ∈ C_α^π) ≥ 1 − α) → 1. Under this assumption, the (1 − α)-cuts can be used as candidate confidence intervals of guaranteed level α. Ideally, we are looking for (1 − α)-cuts verifying Equation (5), which ensures optimal specificity of π_EPS and thus maximally informative confidence intervals.
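Extracting a (1 − α)-cut from a binned possibility distribution amounts to taking the hull of the bins whose possibility reaches the threshold. A minimal sketch (our own helper, assuming equal-width bins described by their edges):

```python
import numpy as np

def alpha_cut_interval(bin_edges, poss, alpha):
    """Confidence interval at level alpha taken as the (1 - alpha)-cut of a
    binned possibility distribution: the hull of bins with poss >= 1 - alpha."""
    poss = np.asarray(poss)
    keep = np.where(poss >= 1.0 - alpha)[0]
    if keep.size == 0:
        return None
    # interval spans from the left edge of the first kept bin
    # to the right edge of the last kept bin
    return bin_edges[keep[0]], bin_edges[keep[-1] + 1]
```

Note that taking the hull makes the interval contiguous even when the cut itself is a union of disjoint bins, which is one more source of (safe) conservativeness.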

| Experimental setting
We reproduce the experiment designed by Williams et al. (2014), who used an imperfect L96 model to investigate the performance of ensemble postprocessing for the prediction of extreme events. The system dynamics are governed by the standard two-scale system of coupled equations

dX_j/dt = X_{j−1}(X_{j+1} − X_{j−2}) − X_j + F − (hc/b) Σ_{k=1}^{K} Y_{k,j},
dY_{k,j}/dt = c b Y_{k+1,j}(Y_{k−1,j} − Y_{k+2,j}) − c Y_{k,j} + (hc/b) X_j,

where the X variables represent slow-moving, large-scale processes, while the Y variables represent small-scale, possibly unresolved, physical processes, with j = 1, …, J and k = 1, …, K. The parameters are set to J = 8, K = 32, h = 1, b = 10, c = 10, and F = 20. This perfect model is randomly initialised and then integrated forward in time by means of a Runge–Kutta fourth-order method with time step dt = 0.002 (model time units) until enough trajectories of duration 1.4, starting every 1.5 time units, are recorded for our analysis. A lead time t = 1 corresponds to 0.2 model time units after initialisation and can be associated with approximately 1 day in the real world (Lorenz, 1996). We are interested in predicting the variable X_1. An imperfect version of the L96 system is implemented to generate predictions for the X_j variables.
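A minimal sketch of this integration, assuming the standard two-scale L96 form with the parameter values quoted in the text (for simplicity, the fast variables are treated as periodic within each sector j, a common simplification of the full cyclic coupling; this is not necessarily the exact variant used by the authors):

```python
import numpy as np

# Two-scale Lorenz 96; parameters from the experimental setting
J, K, h, b, c, F = 8, 32, 1.0, 10.0, 10.0, 20.0

def l96_rhs(state):
    """Tendencies of the coupled slow (X) and fast (Y) variables."""
    X, Y = state[:J], state[J:].reshape(J, K)
    dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2))
          - X + F - (h * c / b) * Y.sum(axis=1))
    dY = (c * b * np.roll(Y, -1, axis=1)
          * (np.roll(Y, 1, axis=1) - np.roll(Y, -2, axis=1))
          - c * Y + (h * c / b) * X[:, None])
    return np.concatenate([dX, dY.ravel()])

def rk4_step(state, dt=0.002):
    """One fourth-order Runge-Kutta step with the time step used in the text."""
    k1 = l96_rhs(state)
    k2 = l96_rhs(state + 0.5 * dt * k1)
    k3 = l96_rhs(state + 0.5 * dt * k2)
    k4 = l96_rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

With dt = 0.002, a lead time of 1 "day" (0.2 model time units) corresponds to 100 RK4 steps.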
To reproduce the perturbation of the ICs, M perturbed members X̂_j are sampled independently around the true value of each variable X_j, following a normal distribution centred on X_j.

| Reference model: Gaussian ensemble dressing
We compare the performance of our approach (hereafter POSS) to that of a classical probabilistic framework for interpreting EPSs, namely a Gaussian ensemble dressing (hereafter GEB). Its predictive probability distribution reads (Roulston and Smith, 2003)

p(x_t | x̂_t) = (1/M) Σ_{m=1}^{M} 𝒩(x_t; a + ω x̂_t^m, σ²).

We infer the parameters θ = {a, ω, σ} through the optimisation of the ignorance score (Roulston and Smith, 2002) over the archive ℐ_t used in the possibilistic framework. To that end, we use the nonlinear programming solver provided by MATLAB and apply the guidance developed in Bröcker and Smith (2008) to obtain robust solutions.
Confidence intervals at level α on x t are obtained from p by a method that provides the desired intervals associated with the highest-density regions (Hyndman, 1996). We also report in the next section the performances of the confidence intervals similarly extracted from the unprocessed probability density (hereafter RAW) associated with the EPS (a histogram of the EPS normalised to represent a probability density).

| Evaluation criteria
We aim at answering the questions: 1. Can a possibilistic treatment of the EPS provide more guarantees than a probabilistic interpretation? 2. If yes, at what cost?
To that end, we compare the performance of the confidence intervals at level α, denoted I_α, extracted from the methodologies POSS, GEB, and RAW as described in the previous sections. We say that a confidence interval is guaranteed at level α if the coverage probability verifies P(x ∈ I_α) ≥ α. We use the term guaranteed in the sense that such an interval is associated with a lower bound on the (frequentist) probability that the verification falls within it. Such guarantees are sought, for example, in risk-averse decision-making. We say that it is reliable, or probabilistically calibrated, when P(x ∈ I_α) ≈ α. We call it increasingly conservative as P(x ∈ I_α) − α grows, which is associated with suboptimal interval precision.
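These three verdicts (guaranteed, reliable, conservative) follow directly from the empirical coverage over a test set, and can be computed as in this short sketch (our own helper; the tolerance used to call an interval "reliable" is an assumption, not a value from the text):

```python
import numpy as np

def coverage(intervals, verifications):
    """Empirical coverage probability P(x in I_alpha) over a test set."""
    hits = [lo <= x <= hi for (lo, hi), x in zip(intervals, verifications)]
    return np.mean(hits)

def assess(intervals, verifications, alpha, tol=0.02):
    """Classify intervals as guaranteed / reliable / conservative."""
    p = coverage(intervals, verifications)
    return {"coverage": p,
            "guaranteed": p >= alpha,          # P(x in I_alpha) >= alpha
            "reliable": abs(p - alpha) <= tol, # P(x in I_alpha) ~ alpha
            "conservative": p - alpha > tol}   # over-coverage, wider intervals
```

Note that an interval family can be guaranteed and conservative at once; reliability is the stricter, two-sided property.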

| Experiments
All results presented here use n = 30 bins of similar width to partition the x-axis.² The test set consists of 40,000 independent trajectories of length t = 7 days and the corresponding EPS predictions. All EPSs are first preprocessed to remove the constant bias. We consider a range of archive sizes N_I ∈ {156, 1560, 5 × 10³, 15 × 10³, 30 × 10³}. In particular, N_I = 156 corresponds to 3 years of model archive, whereas N_I = 1560 amounts to 30 years, which corresponds to the standard length of a historical re-forecast dataset (Hamill et al., 2004; Hagedorn, 2008). These first two values of N_I are operational figures, unlike the larger values, which we include to study the asymptotic properties of our framework.
We define two types of events: an extreme event, "x ≤ q_5" (EE), and a common event, "q_50 < x ≤ q_55" (NEE), where q_i represents the percentile of level i of the climatic distribution of x (i.e., the global distribution), plotted in Figure 4 along with both events. This allows us to use test sets of similar sizes³ in order to position our approach against generic probabilistic postprocessing techniques, which are known to address such extreme events poorly.
A preliminary assessment (Figure 5) of the effect of the parameter β of Goodman's model on the probabilistic reliability of the (1 − α)-cuts derived from π_EPS shows that varying β from 0.6 to 1 does not impact the guarantees at any given N_I for the events of interest. It only impacts precision, and its effect is only visible for small archives (N_I ≤ 156) or large lead times, especially in the EE case. We consequently use β = 0.9 in our experiments, which allows us to improve specificity while maintaining guarantees on confidence intervals. Figure 6 reports the coverage probability of the confidence intervals I_α extracted for α ∈ {0, 0.05, 0.1, ..., 1} for all evaluated methodologies at lead times t ∈ {1, 3, 5, 7} days. We first note that using RAW leads to confidence intervals that are not guaranteed for t > 1 day for both EE and NEE. Postprocessing (here GEB) makes them guaranteed at all lead times for the NEE and for t ≤ 3 days for the EE. The effect of the training set size on the probabilistic treatment does not appear to be significant. Conversely, the confidence intervals derived using POSS are globally guaranteed for both events and at all lead times for operational archives (N_I < 5 × 10³). Interestingly, when the archive grows significantly, confidence intervals with large α are no longer guaranteed for the larger lead times in the EE case. The effect appears all the earlier (in terms of lead time), the larger N_I is.

| Empirical assessment of formal guarantees
We observe here a limitation of possibility theory: its strength lies in incomplete information. As shown in Figure 2, the larger the datasets used to derive possibility distributions, the closer the possibility distribution is in shape to the underlying probability distribution. In particular, the level γ such that π(x) ≥ γ for all x tends towards zero. In other words, such possibility distributions tend to conceal the possibility of rare events.

FIGURE 4  Climatic distribution of the L96's variable of interest X_1 (x for simplicity), where the "extreme" event x ≤ q_5 (EE) and the "common" event q_50 < x ≤ q_55 (NEE) are reported.

FIGURE 5  Coverage probability of the α-cuts of π_EPS at lead time t ∈ {1, 3, 5, 7} days (left to right), in the case of the NEE (top) and EE (bottom). Goodman models with parameter β ∈ {0.6, 0.75, 0.9, 0.95, 0.99} (the darker the line, the larger β) are compared in the case of three archives of respective size N_I ∈ {156, 1560, 15 × 10³} (grey, blue, and red colour scales, respectively).

FIGURE 6  Coverage probability of the (1 − α)-cuts of π_EPS used as confidence intervals of level α at lead time t ∈ {1, 3, 5, 7} days (left to right), in the case of the NEE (top) and EE (bottom). The EPS archive size is N_I ∈ {156, 1560, 5 × 10³, 15 × 10³, 30 × 10³} (the larger, the darker the line). The coverage probability of the confidence intervals of level α derived from the raw EPS's probability density and from the postprocessed density (with the same training set of size N_I as used in the possibilistic framework) is reported as well. The dotted diagonal represents perfect calibration.

We illustrate this phenomenon in Figure 7, where we represent the average density of analogs used to compute the individual π(x_t | x̂_t^m ∈ b_j) (see step 3; Figure 3). In the EE case, as the lead time increases, this average density decreases by several orders of magnitude for the more extreme bins (x → inf 𝒳). This drop is all the more significant the larger N_I is. For small N_I ≤ 1560, the most extreme bins are, as expected, not represented, but the intermediary bins are, and their density remains above 1/100. For very large N_I ≥ 5 × 10³, the most extreme bins are represented, but their density drops below 1/1000. In other words, the rarest events within EE are represented only for extremely large archives, where they are part of large analog sets, which implies, given the asymptotic behaviour illustrated in Figure 2, that they will be concealed from the associated possibility distributions. More precisely, the level γ such that π_EPS(x) ≥ γ for all x ∈ 𝒳 remains strictly positive, so P(x ∈ I_α) = 1 remains valid for α ≈ 1 (that is, the large (1 − α)-cuts where 1 − α → 0). However, for intermediate α, the (1 − α)-cuts may not extend far enough towards the extreme bins, which negatively impacts the coverage rate. This trend is only observed for sufficiently large α, as possibility distributions remain globally more conservative than the EPS-based probability distributions (see next section), and consequently provide I_α that encompass more observations than frequentist calibration requires in the case of smaller α (i.e., for the upper part of the distribution). The "sufficiently large α" threshold decreases with increasing lead times and archive sizes, following the effect described in Figure 7. Figure 8 illustrates our point by breaking down the coverage probability for three subsets of the EE: large archives lead to POSS-based confidence intervals that are all the better guaranteed as the event of interest is not too extreme. Probabilistic calibration for the more extreme part of EE can be improved by increasing the parameter β; however, this has no effect in the case of large archives (see Figure 9).
The NEE case study does not suffer from this limitation, as the density of analogs falling in the NEE bins remains around 1/10 at all lead times. In comparison to GEB, POSS improves the reliability of confidence intervals for very short lead times, while the intervals remain more conservative at large lead times.
Provided that N_I is not too large (which we assume is always the case for operational archives), Figures 6 and 8 clearly show the added value of treating the EPS in a possibilistic manner, in terms of guarantees for the EE at large lead times and in terms of reliability for the NEE at very small lead times. However, we may wonder about the cost of such improvements. How do the possibility-based confidence intervals compare to their probability-based counterparts in terms of precision?
FIGURE 7  Average density of the analog datasets used to derive π_EPS, for sizes N_I ∈ {156, 1560, 5 × 10³, 15 × 10³, 30 × 10³} (the larger N_I, the darker the line) and lead time t ∈ {1, 3, 5, 7} days (left to right), in the case of the NEE (top) and the EE (bottom). Only densities above 0 are represented. Vertical dotted lines allow the events of interest to be visualised (note that the EE is only defined by its upper bound).

Figure 10 compares the average width of the confidence intervals derived from the three methodologies. For both EE and NEE, N_I significantly affects the width of the possibilistic I_α, making them narrower for larger N_I, all the more so as the lead time increases. Their probabilistic counterparts are generally much narrower, except when N_I ≈ 30 × 10³.

| Interval precision
For the NEE and level α < 0.9, POSS brings more information at very short lead times (t = 1 day) than the probabilistic approaches: intervals are smaller than or equal in size and remain guaranteed. This is especially true when the archive is of intermediate size (N_I = 1560). Increasing the lead time beyond t = 3 days favours the probabilistic approach, which is more reliable with narrower intervals.
For the EE, the added value of POSS over GEB is observed on two occasions: (1) intervals are as reliable yet narrower for very small lead times and α < 0.9, whatever the archive size; (2) for large lead times and intermediary-sized archives (N_I ∈ {1560, 5 × 10³}), possibility-based confidence intervals are guaranteed, reliable, and operational (i.e., not too wide compared to GEB's results, contrary to what N_I = 156 produces), while the probabilistic intervals are narrower yet not guaranteed at all. In the case of particularly rare events, as represented in Figure 8, an intermediary archive such as N_I = 1560 is able to produce confidence intervals close to perfect reliability even for large lead times, as long as the parameter β is increased towards 1. Such reliability is reached at the expense of the interval width, which is significantly increased (w.r.t. smaller β) for the largest α ≥ 0.85.

FIGURE 8  Coverage probability of the (1 − α)-cuts of π_EPS at lead time t = 7 days, in the case of events belonging to a partition of subsets of EE (from left to right: x ≤ q_1, q_1 < x ≤ q_3, and q_3 < x ≤ q_5). The EPS archive size varies: N_I ∈ {156, 1560, 5 × 10³, 15 × 10³, 30 × 10³} (the larger, the darker the line). The probabilistic calibration of the confidence intervals of level α derived from the raw EPS's probability density and from the postprocessed density (with the same training set of size N_I as the possibilistic framework) is also reported. See Figure 6 for the legend.

FIGURE 9  Coverage probability (left) and associated width distributions (right) of the confidence intervals of level α at lead time t = 7 days, in the EE case, for two archive sizes N_I ∈ {1560, 15 × 10³} (blue and red colour scales, respectively). POSS results (solid lines) for increasing Goodman's parameter β ∈ {0.6, 0.9, 0.95, 0.99} (the larger, the darker the line) are compared to GEB results (dotted lines). The width distribution is represented through its mean and one standard deviation above and below.

| CONCLUSION
We introduced a novel framework to interpret EPSs, in which a possibility distribution π_EPS is derived from the EPS at hand and an archive of (EPS, verification) pairs. We showed how to use the (1 − α)-cuts of a continuous interpretation of π_EPS to produce confidence intervals at level α about the future value of the variable of interest. Our possibility-based confidence intervals come with formal guarantees, and experimental results show that they outperform probability-based ones in two situations: (1) at very small lead times for both common and extreme events, where they are as reliable yet narrower; (2) more blatantly, at intermediate and large lead times for extreme events, where they remain guaranteed and can be brought close to perfect reliability even for particularly rare events, yet at the expense of precision. These results can be reached with operational archives like the 20–30-year reforecast datasets. The guarantees are retained for smaller archives, which, however, lead to more conservative intervals and thereby impede operationality.
As raised by one of the reviewers of this study, in practice the verification (as an observation) is a random variable itself (Tsyplakov, 2011; Lerch et al., 2017). The use of confidence intervals, rather than a Bayesian formalism and the derivation of credible intervals, may consequently be discussed. Since our approach takes such imprecision into account (limited volume S_{x_t} around x_t, Masson and Denoeux's transformation; cf. section 3.1), even without explicitly tackling this problem, our framework accounts for (reasonable) randomness in the so-called verification.
Possibility theory is a promising tool for the prediction of extreme events, given a limited and imperfect amount of information on the system's dynamics. Beyond the results presented in this article, further developments by the author (Le Carrer and Ferson, Beyond probabilities: A possibilistic framework to interpret ensemble predictions and fuse imperfect sources of information, unpublished manuscript) show how π_EPS can be combined with additional possibility distributions constructed from alternative sources of information, such as the IC or dynamical information (see step 6 of Figure 3). Therein, the concept of ignorance briefly introduced in section 2 is developed and presented as an interesting tool for risk communication.

FIGURE 10  Distribution (mean ± standard deviation) of the width of the possibility- and probability-based confidence intervals described in the legend of Figure 6, for lead time t ∈ {1, 3, 5, 7} days (left to right), in the case of the NEE (top) and EE (bottom). Only the cases N_I ∈ {156, 1560, 5 × 10³, 30 × 10³} are represented (the larger N_I, the darker the line).