Replacing sigmoid activation with a custom activation

Posted 2019-07-24 09:49

Question:

I am experimenting with replacing the Keras sigmoid activation with a piecewise linear function defined as:

def custom_activation_4(x):
    if x < -6:
        return 0
    elif x >= -6 and x < -4:
        return 0.0078*x + 0.049
    elif x >= -4 and x < 0:
        return 0.1205*x + 0.5
    elif x >= 0 and x < 4:
        return 0.1205*x + 0.5
    elif x >= 4 and x < 6:
        return 0.0078*x + 0.951
    else:
        return 1

When I try to run this as:

classifier_4.add(Dense(output_dim = 18, init = 'uniform', activation = custom_activation_4, input_dim = 9))

TensorFlow raises an error:

Using a `tf.Tensor` as a Python `bool` is not allowed.

I researched this and learned that I am treating the variable x as a plain Python variable, whereas it is actually a tensor; that is why it cannot be used like a simple boolean. I also tried the tensorflow cond method without success. How can I treat and use x as a tensor here? Thanks a ton in advance for all the help.
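For illustration, the comparison itself already hints at the problem: comparing a tensor to a scalar yields an elementwise boolean tensor, not a single Python bool, so it cannot drive an `if` statement (a minimal sketch, assuming TensorFlow 2.x eager mode):

```python
import tensorflow as tf

x = tf.constant([-7.0, 0.0, 7.0])
mask = x < -6            # elementwise comparison: a boolean *tensor*
print(mask.numpy())      # one True/False per element, not a single bool
```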

Answer 1:

Your custom activation is written as a function of a single floating-point number, but you want to apply it to a whole tensor. The best way to do that is to use tf.where. Something like:

def custom_activation_4(x):
  orig = x
  x = tf.where(orig < -6, tf.zeros_like(x), x)
  x = tf.where(orig >= -6 and orig < -4, (0.0078*x + 0.049), x)
  x = tf.where(orig >= -4 and orig < 0, (0.1205*x + 0.5), x)
  x = tf.where(orig >= 0 and orig < 4, (0.1205*x + 0.5), x)
  x = tf.where(orig >= 4 and orig < 6, (0.0078*x + 0.951), x)
  return tf.where(orig >= 6, 1, x)
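As written, though, the Python `and` between two tensor comparisons cannot be evaluated elementwise and raises the same `tf.Tensor`-as-`bool` error (as the next answer reports). A sketch of the same function with the conditions combined via tf.logical_and instead; the two middle branches share one formula, so they are merged here:

```python
import tensorflow as tf

def custom_activation_4(x):
    # Conditions are combined with tf.logical_and so everything
    # stays an elementwise tensor operation.
    orig = x
    x = tf.where(orig < -6, tf.zeros_like(x), x)
    x = tf.where(tf.logical_and(orig >= -6, orig < -4), 0.0078 * x + 0.049, x)
    x = tf.where(tf.logical_and(orig >= -4, orig < 4), 0.1205 * x + 0.5, x)
    x = tf.where(tf.logical_and(orig >= 4, orig < 6), 0.0078 * x + 0.951, x)
    return tf.where(orig >= 6, tf.ones_like(x), x)
```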


Answer 2:

I tested the code in the answer above because I intend to write a similar activation function, but the following error occurred:

TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.

The reason is that Python logical operators such as `and` cannot be applied to a tf.Tensor. After searching the TensorFlow docs, it turns out we have to use TensorFlow's own operators, such as tf.math.logical_and. Here is my code, which is very similar to yours:

import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer

class QPWC(Layer):
    def __init__(self, sharp=100, **kwargs):
        super(QPWC, self).__init__(**kwargs)
        self.supports_masking = True
        self.sharp = K.cast_to_floatx(sharp)

    def call(self, inputs):
        orig = inputs
        inputs = tf.where(orig <= 0.0, tf.zeros_like(inputs), inputs)
        inputs = tf.where(tf.math.logical_and(tf.greater(orig, 0), tf.less(orig, 0.25)),
                          0.25 / (1 + tf.exp(-self.sharp * ((inputs - 0.125) / 0.5))), inputs)
        inputs = tf.where(tf.math.logical_and(tf.greater(orig, 0.25), tf.less(orig, 0.5)),
                          0.25 / (1 + tf.exp(-self.sharp * ((inputs - 0.5) / 0.5))) + 0.25, inputs)
        inputs = tf.where(tf.math.logical_and(tf.greater(orig, 0.5), tf.less(orig, 0.75)),
                          0.25 / (1 + tf.exp(-self.sharp * ((inputs - 0.75) / 0.5))) + 0.5, inputs)
        return tf.where(tf.greater(orig, 0.75), tf.ones_like(inputs), inputs)


    def get_config(self):
        config = {'sharp': float(self.sharp)}
        base_config = super(QPWC, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def compute_output_shape(self, input_shape):
        return input_shape
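To sanity-check the layer, it can be called directly on a tensor in eager mode. The sketch below repeats a condensed copy of the class so it runs on its own (assuming TensorFlow 2.x):

```python
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Layer

# Condensed copy of the QPWC layer above, so this snippet is standalone.
class QPWC(Layer):
    def __init__(self, sharp=100, **kwargs):
        super(QPWC, self).__init__(**kwargs)
        self.sharp = K.cast_to_floatx(sharp)

    def call(self, inputs):
        orig = inputs
        inputs = tf.where(orig <= 0.0, tf.zeros_like(inputs), inputs)
        inputs = tf.where(tf.math.logical_and(tf.greater(orig, 0.0), tf.less(orig, 0.25)),
                          0.25 / (1 + tf.exp(-self.sharp * ((inputs - 0.125) / 0.5))), inputs)
        inputs = tf.where(tf.math.logical_and(tf.greater(orig, 0.25), tf.less(orig, 0.5)),
                          0.25 / (1 + tf.exp(-self.sharp * ((inputs - 0.5) / 0.5))) + 0.25, inputs)
        inputs = tf.where(tf.math.logical_and(tf.greater(orig, 0.5), tf.less(orig, 0.75)),
                          0.25 / (1 + tf.exp(-self.sharp * ((inputs - 0.75) / 0.5))) + 0.5, inputs)
        return tf.where(tf.greater(orig, 0.75), tf.ones_like(inputs), inputs)

layer = QPWC(sharp=100)
y = layer(tf.constant([-1.0, 0.125, 0.5, 1.0]))
print(y.numpy())
```

Note that inputs falling exactly on a breakpoint (0, 0.25, 0.5, 0.75) satisfy neither the strict `greater` nor the strict `less` condition and pass through unchanged, which may or may not be intended.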