How Tensorflow handles categorical features with multiple inputs

Posted 2020-03-07 06:32

Question:

For example, I have data in the following CSV format:

col0  col1  col2   col3
1     A     E|A|C  3
0     B     D|F    2
2     C     |      2

Each comma-separated column represents one feature. Normally a feature is one-hot (e.g. col0, col1, col3), but in this case the feature in col2 has multiple inputs (separated by |).

I'm sure TensorFlow can handle a one-hot feature with a sparse tensor, but I'm not sure whether it can handle a feature with multiple inputs like col2.

How should such a feature be represented as a TensorFlow sparse tensor?
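To make this concrete, here is roughly what I expect col2 to look like after splitting on "|" (a minimal TF 1.x sketch, not code I am actually running):

import tensorflow as tf

col2_raw = tf.constant(['E|A|C', 'D|F', '|'])            # one raw string per example
col2_sparse = tf.string_split(col2_raw, delimiter='|')   # -> SparseTensor of tokens

with tf.Session() as sess:
    print(sess.run(col2_sparse))
    # values ~ [b'E', b'A', b'C', b'D', b'F']; indices give (row, position);
    # empty tokens (the '|' row) are skipped by default.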

I am using the code below (but I don't know how to feed col2):

import tensorflow as tf

col0 = tf.feature_column.numeric_column('ID')
col1 = tf.feature_column.categorical_column_with_hash_bucket('Title', hash_bucket_size=1000)
col3 = tf.feature_column.numeric_column('Score')

# DNNClassifier needs dense inputs, so the categorical column is wrapped
# in an indicator column; col2 is still missing here.
columns = [col0, tf.feature_column.indicator_column(col1), col3]

estimator = tf.estimator.DNNClassifier(
        model_dir=None,
        feature_columns=columns,
        hidden_units=[10, 10],
        n_classes=4
    )
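
For reference, I assume the input_fn would yield a dict of tensors like the one below, with col2 passed through as the raw delimited string (the 'Tags' key for col2 is just a placeholder name):

features = {
    'ID':    tf.constant([1, 0, 2]),
    'Title': tf.constant(['A', 'B', 'C']),
    'Tags':  tf.constant(['E|A|C', 'D|F', '|']),   # col2 as raw strings
    'Score': tf.constant([3, 2, 2]),
}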

Thanks for your help.

Answer 1:

OK, it looks like writing a custom feature column worked for me on the same task.

I took HashedCategoricalColumn as a base and cleaned it up to work with strings only. Type checks should still be added, though.

import collections

import tensorflow as tf
# Private TF 1.x feature-column internals used by this snippet.
from tensorflow.python.feature_column.feature_column import _CategoricalColumn
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import sparse_tensor as sparse_tensor_lib
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import parsing_ops
from tensorflow.python.ops import string_ops


class _SparseArrayCategoricalColumn(
    _CategoricalColumn,
    collections.namedtuple('_SparseArrayCategoricalColumn',
                           ['key', 'num_buckets', 'category_delimiter'])):
  """Categorical column that splits a delimited string into hashed ids."""

  @property
  def name(self):
    return self.key

  @property
  def _parse_example_spec(self):
    return {self.key: parsing_ops.VarLenFeature(dtypes.string)}

  def _transform_feature(self, inputs):
    # Split each raw string (e.g. "E|A|C") on the delimiter, then hash
    # each token into one of num_buckets ids.
    input_tensor = inputs.get(self.key)
    flat_input = array_ops.reshape(input_tensor, (-1,))
    input_tensor = tf.string_split(flat_input, self.category_delimiter)

    if not isinstance(input_tensor, sparse_tensor_lib.SparseTensor):
      raise ValueError('SparseColumn input must be a SparseTensor.')

    sparse_values = input_tensor.values
    sparse_id_values = string_ops.string_to_hash_bucket_fast(
        sparse_values, self.num_buckets, name='lookup')

    return sparse_tensor_lib.SparseTensor(
        input_tensor.indices, sparse_id_values, input_tensor.dense_shape)

  @property
  def _variable_shape(self):
    if not hasattr(self, '_shape'):
      self._shape = tensor_shape.vector(self.num_buckets)
    return self._shape

  @property
  def _num_buckets(self):
    """Returns the number of buckets in this sparse feature."""
    return self.num_buckets

  def _get_sparse_tensors(self, inputs, weight_collections=None,
                          trainable=None):
    return _CategoricalColumn.IdWeightPair(inputs.get(self), None)


def categorical_column_with_array_input(key, num_buckets,
                                        category_delimiter='|'):
  if (num_buckets is None) or (num_buckets < 1):
    raise ValueError('Invalid num_buckets {}.'.format(num_buckets))
  return _SparseArrayCategoricalColumn(key, num_buckets, category_delimiter)

Then it can be wrapped in an embedding/indicator column, which seems to be what you need. This was only the first step for me; I still need to handle a column with values like "str:float|str:float|...".
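
For example, wiring it up could look roughly like this (a sketch only; the 'Tags' key, bucket count, and embedding dimension are placeholders, not values from my setup):

col2 = categorical_column_with_array_input('Tags', num_buckets=1000,
                                           category_delimiter='|')
col2_embedded = tf.feature_column.embedding_column(col2, dimension=8)
# or, for a multi-hot representation:
# col2_indicator = tf.feature_column.indicator_column(col2)

estimator = tf.estimator.DNNClassifier(
    feature_columns=[col2_embedded],   # plus the other dense columns
    hidden_units=[10, 10],
    n_classes=4)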