extrapolation with recurrent neural network

Posted 2019-03-17 07:16

I wrote a simple recurrent neural network (7 neurons, each initially connected to all the others) and trained it with a genetic algorithm to learn "complicated", non-linear functions like 1/(1+x^2). As the training set, I used 20 values within the range [-5,5] (I tried using more than 20, but the results did not change dramatically).

The network can learn this range pretty well, and when given other points within it, it can predict the value of the function. However, it cannot extrapolate correctly and predict the values of the function outside the range [-5,5]. What are the reasons for that, and what can I do to improve its extrapolation abilities?

Thanks!

3 answers
(account suspended)
Answer 1 · 2019-03-17 07:40

Neural networks are not extrapolation methods (recurrent or not); this is completely outside their capabilities. They are used to fit a function to the provided data, and they are completely free in how they model the space outside the subspace populated by training points. So, in a loose sense, one should think of them as an interpolation method.

To make things clear: a neural network should be capable of generalizing the function inside the subspace spanned by the training samples, but not outside of it (source: www.vias.org/tmdatanaleng/cc_ann_extrapolation.html).

A neural network is trained only for consistency with the training samples, while extrapolation is something completely different. A simple example from "H. Lohninger: Teach/Me Data Analysis, Springer-Verlag, Berlin-New York-Tokyo, 1999. ISBN 3-540-14743-8" shows how NNs behave in this context (source: www.vias.org/tmdatanaleng/cc_ann_extrapolation.html).

All of these networks are consistent with the training data but can do anything outside of this subspace.

You should rather reconsider your problem's formulation: if it can be expressed as a regression or classification problem, then you can use a NN; otherwise you should consider a completely different approach.

The only things that can be done to somehow "correct" what happens outside the training set are to:

  • add artificial training points in the desired subspace (but this simply grows the training set, and again, outside of this new set the network's behaviour is "random")
  • add strong regularization, which will force the network to create a very simple model; but a model's complexity does not guarantee any extrapolation strength, as two models of exactly the same complexity can have, for example, completely different limits at ±infinity.

Combining the above two steps can help build a model which to some extent "extrapolates", but this, as stated before, is not the purpose of a neural network.
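This interpolation-only behaviour is easy to reproduce. A minimal sketch in plain NumPy, with hypothetical choices throughout: a tiny one-hidden-layer tanh network trained by gradient descent stands in for the question's GA-trained 7-neuron recurrent net, and all sizes and learning rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Target from the question: a bell-shaped function that decays to 0."""
    return 1.0 / (1.0 + x**2)

# 20 training points inside [-5, 5], as in the question.
x_train = np.linspace(-5.0, 5.0, 20).reshape(-1, 1)
y_train = f(x_train)

# Tiny one-hidden-layer tanh network (hypothetical stand-in architecture).
H = 16
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

mse0 = np.mean((forward(x_train)[0] - y_train) ** 2)  # loss before training

lr = 0.05
for _ in range(5000):
    y_hat, h = forward(x_train)
    err = y_hat - y_train
    # Backprop for the mean-squared-error loss.
    gW2 = h.T @ err / len(x_train); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)
    gW1 = x_train.T @ dh / len(x_train); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Inside [-5, 5] the fit improves with training ...
inside = np.linspace(-5.0, 5.0, 101).reshape(-1, 1)
mse_in = np.mean((forward(inside)[0] - f(inside)) ** 2)

# ... but far outside, every tanh unit saturates, so the output flattens
# to some arbitrary constant instead of following f(x) -> 0.
outside = np.linspace(20.0, 50.0, 101).reshape(-1, 1)
mse_out = np.mean((forward(outside)[0] - f(outside)) ** 2)
```

Plotting the network's output on, say, [-50, 50] makes the point visually: the curve matches the target between the training points and then does something entirely unconstrained beyond them.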

爱情/是我丢掉的垃圾
Answer 2 · 2019-03-17 07:44

The nature of your post(s) suggests that what you're referring to as "extrapolation" would be more accurately described as "sequence recognition and reproduction." Training networks to recognize a data sequence, with or without a time step (dt), is pretty much the purpose of a recurrent neural network (RNN).

The training function shown in your post has outputs bounded by 0 and 1, and it is symmetric in x (x enters only as x², so x effectively behaves like abs(x) in that function). So, first things first, be certain your input layer can easily distinguish between negative and positive inputs (if it must).

Next, the number of neurons is not nearly as important as how they're layered and interconnected. How many of the 7 were used for the sequence inputs? What type of network was used, and how was it configured? Network feedback will reveal the ratios, proportions, relationships, etc., and aid in adjusting the network weights to match the sequence. Feedback can also take the form of a feed-forward connection, depending on the type of network used to create the RNN.

Producing an 'observable' network for the decaying, bell-shaped function 1/(1+x^2) should be a decent exercise to cut your teeth on RNNs. 'Observable' here means the network can produce results for any input value(s), even though its training data is (far) smaller than the set of all possible inputs. I can only assume that this was your actual objective, as opposed to "extrapolation."
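To make the feedback idea above concrete, here is a minimal Elman-style forward pass, assuming (hypothetically) the question's setup: 1 input, 7 fully interconnected recurrent units, 1 linear readout, with random, untrained weights. It shows only the recurrence itself, not any training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 1 input, 7 fully interconnected units (as in the
# question), 1 linear readout. Weights are random and untrained.
n_in, n_hid, n_out = 1, 7, 1
W_in  = rng.normal(0.0, 0.3, (n_in, n_hid))
W_rec = rng.normal(0.0, 0.3, (n_hid, n_hid))   # every unit feeds every unit
W_out = rng.normal(0.0, 0.3, (n_hid, n_out))

def rnn_forward(xs):
    """Run the net over a sequence; h carries the feedback (state)."""
    h = np.zeros(n_hid)
    ys = []
    for x in xs:
        # The new state depends on the current input AND the previous state:
        # this recurrence is the feedback that lets the net track sequences.
        h = np.tanh(np.array([x]) @ W_in + h @ W_rec)
        ys.append((h @ W_out).item())
    return ys

ys = rnn_forward(np.linspace(-5.0, 5.0, 20))
```

Feeding the same 20 inputs in a different order produces different outputs, which is exactly the sequence-sensitivity that distinguishes an RNN from a plain feed-forward net.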

(account suspended)
Answer 3 · 2019-03-17 07:53

As far as I know, this is only possible with networks that have the echo property. See Echo State Networks on scholarpedia.org.

These networks are designed for arbitrary signal learning and are capable of remembering their behavior.

You can also take a look at this tutorial.
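A rough sketch of the idea, with hypothetical sizes and scalings throughout: a random, fixed reservoir is rescaled to spectral radius below 1 (which is what yields the echo property), and only a linear readout is trained, here by ridge regression on the question's signal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal echo state network sketch (hypothetical sizes and scalings).
N = 50                                   # reservoir size
W_in = rng.uniform(-0.5, 0.5, N)         # fixed random input weights
W = rng.normal(0.0, 1.0, (N, N))         # fixed random recurrent reservoir
# Rescale to spectral radius < 1: the state then fades to an "echo" of
# past inputs, which is the echo property the answer refers to.
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def reservoir_states(xs):
    h = np.zeros(N)
    states = []
    for x in xs:
        h = np.tanh(W_in * x + W @ h)
        states.append(h.copy())
    return np.array(states)

# Drive the reservoir with x and train ONLY the linear readout (by ridge
# regression) to reproduce the question's signal 1/(1+x^2).
xs = np.linspace(-5.0, 5.0, 200)
ys = 1.0 / (1.0 + xs**2)
S = reservoir_states(xs)
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ ys)

mse = np.mean((S @ W_out - ys) ** 2)
```

The reservoir itself is never trained; keeping its spectral radius below 1 is the standard recipe, and the cheap linear readout is what makes ESNs practical for signal learning.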
