A tight upper bound on the gain of linear and nonlinear predictors for stationary stochastic processes

Research output: Contribution to journal › Article › peer-review

Abstract

One of the striking questions in prediction theory is this: is there any chance of predicting future values of a given signal? Usually, we design a predictor for a specific signal or problem and then measure the resulting prediction quality. Without a priori knowledge of the optimal predictor, the achieved prediction gain depends strongly on the prediction model used. To cope with this lack of knowledge, a theorem on the maximum achievable prediction gain of stationary signals is presented. This theorem provides the foundation for estimating a quality goal for the predictor design, independent of any particular predictor implementation (linear or nonlinear). As usual, the prediction gain is based on the mean square error (MSE) of the predicted signal. The achievable maximum of the prediction gain is calculated using an information-theoretic quantity known as the mutual information. To obtain this bound, we use a nonparametric approach that estimates the maximum prediction gain from the observation of a single signal. We illustrate the result on well-known example signals and show an application to load forecasting. An estimation algorithm for the prediction gain has been implemented and used in the experimental part of the paper.
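
The abstract does not reproduce the bound or the estimation algorithm, but the underlying idea can be sketched in a few lines. The snippet below is a minimal illustration, assuming the Gaussian-process form of the relation, G_max = 2^(2I), where I is the mutual information (in bits) between a sample and its past, and estimating I with a simple histogram (plug-in) estimator at a single lag; the function names, the single-lag embedding, and the histogram estimator are illustrative assumptions, not the algorithm implemented by the authors.

    import numpy as np

    def mutual_information_hist(x, y, bins=32):
        # Plug-in (histogram) estimate of I(X; Y) in bits.
        # Generic nonparametric estimator used here only for illustration;
        # not necessarily the estimator implemented in the paper.
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of X, shape (bins, 1)
        py = pxy.sum(axis=0, keepdims=True)   # marginal of Y, shape (1, bins)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    def max_gain_bound(signal, lag=1, bins=32):
        # Upper bound on the MSE prediction gain, assuming the Gaussian form
        # G_max = 2**(2*I); a single lag keeps the sketch short.
        x_past, x_now = signal[:-lag], signal[lag:]
        return 2.0 ** (2.0 * mutual_information_hist(x_past, x_now, bins=bins))

    # Sanity check on a Gaussian AR(1) process, whose true one-step
    # prediction gain is 1 / (1 - a**2).
    rng = np.random.default_rng(0)
    a, n = 0.9, 100_000
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + rng.standard_normal()
    print("estimated bound :", max_gain_bound(x))
    print("theoretical gain:", 1.0 / (1.0 - a ** 2))

For the AR(1) example the estimate should land near the theoretical gain 1/(1 - a^2) ≈ 5.26; plug-in histogram estimators tend to overestimate mutual information on finite data, so the bin count and sample size matter in practice.
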
Original language: English
Article number: 726805
Pages (from-to): 2909-2917
Number of pages: 9
Journal: IEEE Transactions on Signal Processing
Volume: 46
Issue number: 11
DOIs
Publication status: Published - 1 Nov 1998
Externally published: Yes

Keywords

  • Upper bound
  • Gain
  • Stochastic processes
  • Predictive models
  • Entropy
  • Mutual information
  • Signal processing
  • Signal design
  • Load forecasting
  • Neural networks
