Illustration 8

Powered by Evolved Analytics' DataModeler

Illustration: Trustable Models

Empirical models only know the information they have been provided during their development. As a result, using them is a bit like driving a car using only the rear-view mirror. Ensembles of diverse but accurate models let us take some of the trepidation out of model use by providing a warning that either the system dynamics have changed or the model is being asked to operate in uncharted territory. Trustable Models are a unique and valuable benefit of SymbolicRegression, made possible because we can develop diverse model structures that are comparable in both accuracy and complexity.

Sample a sigmoid function

For ease of understanding, we will use a sigmoid sampled in one variable as the underlying function. The function and corresponding observations are shown below.

8_trustableModels_1.gif

Graphics: A sampled sigmoid, 1/(1 + e^(-0.7 x))
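To make the setup concrete, here is a minimal Wolfram Language sketch of sampling such a sigmoid. The sampling grid and range are assumptions chosen for illustration; they are not necessarily the settings used to produce the plots in this illustration.

    (* underlying function: 1/(1 + E^(-0.7 x)) *)
    sigmoid[x_] := 1/(1 + Exp[-0.7 x]);
    xSamples = Range[-10, 10, 0.5];              (* assumed sampling grid *)
    dataMatrix = Transpose[{xSamples}];          (* single input variable *)
    dataResponse = sigmoid /@ xSamples;          (* observed response *)
    Plot[sigmoid[x], {x, -10, 10},
     Epilog -> {PointSize[Medium], Point[Transpose[{xSamples, dataResponse}]]},
     AxesLabel -> {"x", "response"}]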

Evolve models

Now we execute multiple IndependentEvolutions in our SymbolicRegression model search. Because of the founders effect, each modeling run follows a different trajectory in searching for quality model structures. To build a trustable model, we want to use a collection of diverse models which are comparable in terms of both accuracy and complexity. As we can see from the ParetoFrontLogPlot below, each of the four IndependentEvolutions had varying degrees of success in modeling the sigmoid. Given enough time, the SymbolicRegression will overcome the founders effect and converge on comparable ParetoFront behaviors; however, fifteen seconds is not a lot of search time, so in this case we still see substantial differences, even though a few runs are, obviously, quite efficient in discovering a good set of potential solutions.
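A hedged sketch of such a search is shown below, reusing the sampled data from the sketch above. SymbolicRegression, IndependentEvolutions, and ParetoFrontLogPlot are names used in this illustration; the package context, exact option syntax, and the TimeConstraint setting are assumptions made for the sketch.

    (* hedged sketch of the model search; option syntax is assumed, not documented here *)
    Needs["DataModeler`"];                        (* assumed package context *)
    models = SymbolicRegression[dataMatrix, dataResponse,
       IndependentEvolutions -> 4,                (* four separate evolutions *)
       TimeConstraint -> 15];                     (* roughly 15 seconds of search, as above *)
    ParetoFrontLogPlot[models]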

As we work along the ParetoFront, we see the effect of adding model complexity. The first model is, of course, the best first-order linear model. (Mousing over the ResponseSurfacePlot curves or the corresponding red dots in the ParetoFrontLogPlot will show the underlying models.)

8_trustableModels_3.gif

8_trustableModels_4.gif

As an aside, we have made the model search harder here by not including an exponential among the FunctionPatterns building blocks. The default set (add, subtract, multiply, divide, square, and square root) can easily be expanded to include other functions (e.g., abs, power, log, exponential, unit step, sine, cosine, etc.) if such behaviors are anticipated in the observed data.
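If exponential behavior were anticipated, the building-block set could be widened along the lines sketched below. The FunctionPatterns option name comes from the text above, but the right-hand-side form shown here is an assumption about the syntax.

    (* assumed syntax for widening the building-block set to include Exp, Log and Abs *)
    models = SymbolicRegression[dataMatrix, dataResponse,
       IndependentEvolutions -> 4,
       TimeConstraint -> 15,
       FunctionPatterns -> {Plus, Subtract, Times, Divide, Sqrt, Exp, Log, Abs}];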

Create an ensemble

For a good trustable model (aka an ensemble), we are not looking for THE model; rather, we want a diverse collection of good models instead of a tight focus on a few great ones. The default behavior of CreateModelEnsemble is to focus on the knee of the ParetoFront to generate good predictions while also including diverse models to facilitate detection of either extrapolation or behavior changes in the underlying system.

Below we have a fairly loose QualityBox of models having better than 99% R² and a ModelComplexity less than 150. As we can see from the EnsemblePredictionPlot, the ensemble does a very good job of predicting the observed response behavior. We want to include both more complex and less accurate models than we would consider if we were looking for THE model, since these models provide the detection of new operating conditions, while the models at the knee (which is where we presume THE model resides) provide the prediction accuracy we want when operating within the known parameter space.
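A minimal sketch of this step, using only function names that appear in this illustration; how the QualityBox thresholds (better than 99% R², ModelComplexity below 150) are supplied, and the exact argument structures, are assumptions here.

    (* sketch: form an ensemble from the evolved models and inspect its predictions;  *)
    (* pre-selection of models via the QualityBox thresholds is assumed to have been  *)
    (* applied to the models list already                                             *)
    ensemble = CreateModelEnsemble[models];
    EnsemblePredictionPlot[ensemble, dataMatrix, dataResponse]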

8_trustableModels_6.gif

8_trustableModels_7.gif

The response behaviors shown below illustrate the diversity of the models selected for the ensemble, as well as the over-weighting of models near the knee of the ParetoFront.

8_trustableModels_8.gif

Graphics: The response behaviors of the ensemble models

The trustability metric — detecting extrapolation

Below we show the true model and the ensemble response over both the observed data range and a range expansion of ±100%. For both the nominal range and the expanded range, we show the prediction trust metric defined by the EnsembleDivergenceFunction (mouse over the curve to display the metric being used).

The model extrapolation behaviors on the second row illustrate two very important points:

1) the model prediction degrades gracefully despite being asked to extrapolate an extremely large distance away from the parameter ranges used in the model development.

2) the trust measure flags that the model is being asked to operate in uncharted territory.

8_trustableModels_10.gif

8_trustableModels_11.gif
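To make the idea of a divergence-based trust metric concrete, here is a plain Wolfram Language sketch that uses the spread (standard deviation) of the constituent predictions at each input point. The stand-in member models below are illustrative only, not the evolved models, and DataModeler's actual EnsembleDivergenceFunction may be defined differently.

    (* illustrative stand-ins for diverse ensemble members (not the evolved models) *)
    constituents = {
       1/(1 + Exp[-0.7 #]) &,            (* the underlying sigmoid form *)
       0.5 + 0.5 #/Sqrt[2 + #^2] &,      (* algebraic sigmoid-like form *)
       0.5 + 0.35 ArcTan[0.55 #] &};     (* arctangent-based form       *)
    (* one plausible trust metric: spread of member predictions at each point *)
    divergence[x_] := StandardDeviation[Through[constituents[x]]];
    Plot[divergence[x], {x, -20, 20},    (* assumed nominal range ±10, expanded by 100% *)
     AxesLabel -> {"x", "ensemble divergence"}]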

The profound implications of trustability

The dogma of machine learning is that the available data should be partitioned into training, test, and validation subsets and that THE chosen model must perform well on all subsets. This is necessary because such techniques hypothesize a model structure in a context-free manner and must guard against over-fitting the data. In contrast, as we have seen, SymbolicRegression explores the complexity-accuracy trade-off and identifies driving variables, which facilitates choosing appropriate models. This enables us to easily and effectively extract models and insight from FAT data arrays.

Now let us further exploit the diversity of model structures developed by multiple IndependentEvolutions to form diverse ensembles of quality models (i.e., of appropriate complexity and accuracy). These ensembles will agree and predict well when operating in known operating regions (otherwise, they would not be "good models") but will diverge when either operating outside this space or when the underlying system has gone through a fundamental change (otherwise, they would not be "diverse models"). KNOWING when the output of an empirical model is suspect is a rare and valuable capability, and it can hopefully help us avoid physical, financial or intellectual disaster in real-world application.

An implication of trustability is that we are able to use ALL DATA in the model development rather than reserving significant subsets for testing. As a result, the SymbolicRegression has better information available to it than other machine learning techniques, avoiding the self-induced myopia that those techniques inflict upon themselves.

A diversion into extrapolation

Speaking of dogma, a common practice is to assume that a polynomial model is appropriate. Alas, nature is often not aware that it should restrict itself to simplistic and mathematically tractable forms. As we have seen above, SymbolicRegression lets the data determine the appropriate model form. By way of contrast, let us use CreateLinearModel to generate polynomials of various orders and fit them to the sigmoid data. The result is shown below.

8_trustableModels_12.gif

Graphics: Linear Polynomial Models From Sigmoid Data
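For readers without DataModeler at hand, a rough equivalent of this comparison can be sketched with the built-in Fit function as a stand-in for CreateLinearModel, reusing the sampled data from the earlier sketch; the range of polynomial orders fitted here is an assumption.

    (* stand-in for CreateLinearModel: least-squares polynomial fits of order 1 through 6 *)
    polyFit[n_] := Fit[Transpose[{xSamples, dataResponse}], Table[x^k, {k, 0, n}], x];
    polys = Table[polyFit[n], {n, 1, 6}];
    Show[
     Plot[Evaluate[polys], {x, -10, 10}],
     ListPlot[Transpose[{xSamples, dataResponse}], PlotStyle -> Black]]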

The actual fitted models are tabulated below. Since the targeted response behavior is anti-symmetric about its midpoint, the even-order models contribute nothing in terms of accuracy relative to the next-lower odd-order model; they only add ModelComplexity (see the short check below).

8_trustableModels_14.gif

8_trustableModels_15.gif
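Why the even orders add nothing can be seen directly: the sampled sigmoid, shifted by its midpoint of 1/2, is an odd function of x, so only odd powers of x can reduce the fitting residual. A quick Wolfram Language check of this identity (expected output: True):

    (* the sampled sigmoid, shifted by its midpoint, is an odd (tanh) function of x *)
    Simplify[1/(1 + Exp[-7 x/10]) - 1/2 == Tanh[7 x/20]/2]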

The ResponsePlot of each of the linear polynomials is shown below along with that of the ensemble. This illustrates that the ensemble does a better job of fitting the targeted behavior, despite the selected constituent models having a ModelComplexity closer to that of the 3rd-order polynomial.

8_trustableModels_16.gif

8_trustableModels_17.gif

The real benefit of a ModelEnsemble becomes apparent in the ResponsePlot set below, where we have asked each model to operate over a ±100% RangeExpansion beyond the nominal DataVariableRange. As we can see, only the ensemble displays the correct extrapolation behavior while simultaneously flagging that its predictions should be treated with suspicion.

8_trustableModels_18.gif

8_trustableModels_19.gif
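Continuing the stand-in sketch from above: evaluating those polynomial fits against the true sigmoid over a ±100% range expansion makes the extrapolation failure of the imposed polynomial forms visible.

    (* extrapolation check: polynomial fits vs. the true sigmoid over double the range *)
    Plot[Evaluate[Append[polys, sigmoid[x]]], {x, -20, 20},
     PlotRange -> {-0.5, 1.5},
     AxesLabel -> {"x", "response"}]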

In general, a trustable model will not yield perfect extrapolation behavior; however, experience shows that such models tend to degrade more gracefully than models based upon a priori structures imposed independently of the underlying behavior. Letting the data speak for itself seems to be a good thing.

8_trustableModels_20.gif

Created with Wolfram Mathematica 8.0