Keras LSTM multiple errors from trying to create m

Published 2019-08-27 05:26

Question:

This is a duplicate of a question I posted earlier today; in the other question I was using an old version of Keras. I've since upgraded to Keras 2.0.0 and am still getting a lot of errors that I can't figure out on my own, so I'm reposting the question mostly verbatim.

I am trying to understand how to use Keras for supply chain forecasting, and I keep getting errors that I can't find help for elsewhere. I've worked through similar tutorials (sunspot forecasting, multivariate pollution forecasting, etc.), but I still don't understand how the input_shape argument works or how to organize my data so that Keras will accept it.
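
For reference, my current understanding of the shape convention is that a Keras 2 LSTM expects 3D input of shape (samples, timesteps, features), and that input_shape only covers the last two dimensions. Here is a minimal sketch of what I think the layer expects; the unit count and the random placeholder data are purely illustrative, not my real model:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# 30 samples, each a sequence of 77 timesteps with 1 feature per step
X = np.random.rand(30, 77, 1)
y = np.random.rand(30, 1)

model = Sequential()
# Keras 2 style: first positional argument is the number of units;
# input_shape is (timesteps, features) -- the samples axis is omitted
model.add(LSTM(10, input_shape=(77, 1)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=1, batch_size=5)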

My dataset is a single time series describing the number of products we sold every month. I took that single time series, 107 months, and turned it into a 30 row, 77 column data set. I created a training set and test set from that.
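
To make that reshaping step concrete, this is roughly how I'm building the 30-row, 77-column matrix from the 107-month series and turning it into the 3D array an LSTM layer expects. The variable names and the placeholder series here are just illustrative, not my exact code:

import numpy as np

# series: the raw monthly sales history, 107 values
series = np.arange(107, dtype='float32')   # placeholder for the real data

window = 77
n_samples = len(series) - window           # 107 - 77 = 30 samples

# each row is 77 consecutive months; the target is the following month
X = np.array([series[i:i + window] for i in range(n_samples)])
y = np.array([series[i + window] for i in range(n_samples)])

# reshape to the 3D layout an LSTM expects: (samples, timesteps, features)
X = X.reshape(n_samples, window, 1)
print(X.shape, y.shape)   # (30, 77, 1) (30,)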

From the command prompt:

Successfully uninstalled Keras-1.2.0
Successfully installed keras-2.0.0

Python Version: 3.5.4

Here's the code and the respective errors I'm getting.

model = Sequential()
model.add(LSTM(input_shape=(77, 1), output_dim = 10))

Traceback

C:\Python35\lib\site-packages\keras\backend\tensorflow_backend.py in concatenate(tensors, axis)
   1219         A tensor.
-> 1220     """
   1221     zero = _to_tensor(0., x.dtype.base_dtype)

AttributeError: module 'tensorflow' has no attribute 'concat_v2'

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-42-ee393fff874d> in <module>()
      1 model = Sequential()
----> 2 model.add(LSTM(input_shape=(77, 1), output_dim = 10))
      3 #model.add(Dense(10, activation = 'relu'))
      4 #model.add(Dense(1, activation = 'softmax'))

C:\Python35\lib\site-packages\keras\models.py in add(self, layer)
    292                         '`Sequential.from_config(config)`?')
    293     return layer_module.deserialize(config, custom_objects=custom_objects)
--> 294 
    295 
    296 def model_from_yaml(yaml_string, custom_objects=None):

C:\Python35\lib\site-packages\keras\engine\topology.py in create_input_layer(self, batch_input_shape, input_dtype, name)
    396 
    397             # Check ndim.
--> 398             if spec.ndim is not None:
    399                 if K.ndim(x) != spec.ndim:
    400                     raise ValueError('Input ' + str(input_index) +

C:\Python35\lib\site-packages\keras\engine\topology.py in __call__(self, x, mask)
    541             # Handle automatic shape inference (only useful for Theano).
    542             input_shape = _collect_input_shape(inputs)
--> 543 
    544             # Actually call the layer, collecting output(s), mask(s), and shape(s).
    545             output = self.call(inputs, **kwargs)

C:\Python35\lib\site-packages\keras\layers\recurrent.py in build(self, input_shape)
    761             constants.append(dp_mask)
    762         else:
--> 763             constants.append([K.cast_to_floatx(1.) for _ in range(3)])
    764 
    765         if 0 < self.recurrent_dropout < 1:

C:\Python35\lib\site-packages\keras\backend\tensorflow_backend.py in concatenate(tensors, axis)
   1220     """
   1221     zero = _to_tensor(0., x.dtype.base_dtype)
-> 1222     inf = _to_tensor(np.inf, x.dtype.base_dtype)
   1223     x = tf.clip_by_value(x, zero, inf)
   1224     return tf.sqrt(x)

C:\Python35\lib\site-packages\tensorflow\python\ops\array_ops.py in concat(values, axis, name)
   1041       ops.convert_to_tensor(axis,
   1042                             name="concat_dim",
-> 1043                             dtype=dtypes.int32).get_shape(
   1044                             ).assert_is_compatible_with(tensor_shape.scalar())
   1045       return identity(values[0], name=scope)

C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py in convert_to_tensor(value, dtype, name, preferred_dtype)
    674       name=name,
    675       preferred_dtype=preferred_dtype,
--> 676       as_ref=False)
    677 
    678 

C:\Python35\lib\site-packages\tensorflow\python\framework\ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype)
    739 
    740         if ret is None:
--> 741           ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
    742 
    743         if ret is NotImplemented:

C:\Python35\lib\site-packages\tensorflow\python\framework\constant_op.py in _constant_tensor_conversion_function(v, dtype, name, as_ref)
    111                                          as_ref=False):
    112   _ = as_ref
--> 113   return constant(v, dtype=dtype, name=name)
    114 
    115 

C:\Python35\lib\site-packages\tensorflow\python\framework\constant_op.py in constant(value, dtype, shape, name, verify_shape)
    100   tensor_value = attr_value_pb2.AttrValue()
    101   tensor_value.tensor.CopyFrom(
--> 102       tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
    103   dtype_value = attr_value_pb2.AttrValue(type=tensor_value.tensor.dtype)
    104   const_tensor = g.create_op(

C:\Python35\lib\site-packages\tensorflow\python\framework\tensor_util.py in make_tensor_proto(values, dtype, shape, verify_shape)
    372       nparray = np.empty(shape, dtype=np_dt)
    373     else:
--> 374       _AssertCompatible(values, dtype)
    375       nparray = np.array(values, dtype=np_dt)
    376       # check to them.

C:\Python35\lib\site-packages\tensorflow\python\framework\tensor_util.py in _AssertCompatible(values, dtype)
    300     else:
    301       raise TypeError("Expected %s, got %s of type '%s' instead." %
--> 302                       (dtype.name, repr(mismatch), type(mismatch).__name__))
    303 
    304 

TypeError: Expected int32, got <tf.Variable 'lstm_7_W_i:0' shape=(1, 10) dtype=float32_ref> of type 'Variable' instead.

Answer 1:

I think the problem comes down to the TF version. Version compatibility between Keras and TF is something almost everyone has run into, because the TF API changes a lot over a short period of time.

I think that for Keras 2.2.x you need a TF version > 1.10.x.

Try updating and see if that fixes the problem!
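
For anyone hitting the same thing, a quick way to check which versions are actually installed is to print the version strings of both packages (nothing specific to this question, just the standard attributes):

import tensorflow as tf
import keras

print('TensorFlow:', tf.__version__)
print('Keras:', keras.__version__)

If they turn out to be a mismatched pair, upgrading both together, e.g. with pip install --upgrade tensorflow keras, keeps them on versions that are meant to work with each other.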