In my CNN I am using Leaky ReLU after the BN layer. Leaky ReLU addresses the dying ReLU problem by using f(y) = ay for negative values. BN normalizes its output to zero mean and unit variance. So does BN remove the negative part, i.e. does it convert all values onto a 0-to-1 scale? My choice of Leaky ReLU depends on this, because if BN removes the negative part then using Leaky ReLU would be the same as using plain ReLU. I am using Keras; a minimal sketch of the layer ordering is below.
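For reference, a minimal sketch of the ordering I mean (the filter count, kernel size, and input shape are just placeholders, not my actual model):

```python
from tensorflow.keras import layers, models

# Sketch of the ordering in question: Conv -> BatchNormalization -> LeakyReLU.
model = models.Sequential([
    layers.Conv2D(32, 3, padding="same", input_shape=(32, 32, 3)),
    layers.BatchNormalization(),
    layers.LeakyReLU(),  # does BN already remove the negatives this is meant to handle?
])
```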
BN does not rescale values into a 0-to-1 range. It normalizes them toward zero mean and unit variance by subtracting the batch mean and dividing by the batch standard deviation (before the learned scale and shift), so we can expect roughly half of its output values to be negative.
The LeakyReLU following the BN layer will therefore still receive negative values, so it does not reduce to a plain ReLU.
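A quick sanity check (a sketch using tf.keras with random, all-positive input data):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Pass strictly positive random data through BN in training mode
# and count how many outputs come out negative.
bn = layers.BatchNormalization()
x = np.random.rand(64, 10).astype("float32") * 5.0  # all inputs > 0
y = bn(x, training=True)  # normalized with the batch statistics

print("fraction of negative BN outputs:",
      float(tf.reduce_mean(tf.cast(y < 0, tf.float32))))
# Roughly half the outputs are negative, so the negative slope of
# LeakyReLU still matters after BN.
```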