Does tensorflow map_fn support taking more than one tensor?

Posted 2020-02-12 02:14

Does tf.map_fn support taking more than one tensor, the way Python's native map function does (example below)?

a = [1, 2, 3, 4]
b = [17, 12, 11, 10]
print(list(map(lambda x, y: x + y, a, b)))  # ==> [18, 14, 14, 14]

5 Answers
啃猪蹄的小仙女
#2 · 2020-02-12 02:30

Not natively, but here's a quick function that achieves it:

import tensorflow as tf

def map(fn, arrays, dtype=tf.float32):
    # assumes all arrays share the same leading (batch) dimension
    # note: this intentionally shadows Python's builtin map
    indices = tf.range(tf.shape(arrays[0])[0])
    return tf.map_fn(lambda ii: fn(*[array[ii] for array in arrays]),
                     indices, dtype=dtype)

# example: batch affine transformation
x = tf.random_normal([4, 5, 6])
M = tf.random_normal([4, 6, 10])
b = tf.random_normal([4, 10])

f = lambda x0, M0, b0: tf.matmul(x0, M0) + b0
batch_y = map(f, [x, M, b])  # shape [4, 5, 10]
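
For this particular affine example you can sanity-check the helper against TensorFlow's natively batched matmul (a minimal sketch, assuming a TF 1.x session; batch_y_direct is my name, not from the answer):

# hypothetical check: batched matmul plus a broadcast bias should match
batch_y_direct = tf.matmul(x, M) + tf.expand_dims(b, 1)  # [4, 5, 10]

with tf.Session() as sess:
    y_mapped, y_direct = sess.run([batch_y, batch_y_direct])
    print(abs(y_mapped - y_direct).max())  # ~0.0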
Viruses.
#3 · 2020-02-12 02:37

As of today, map_fn has been enhanced to take more than one tensor. The documentation says: "elems: A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be applied to fn." The example (given in numpy form) also shows that it can take two tensors. I'm copying it here.

import numpy as np
import tensorflow as tf

elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = tf.map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
# alternate == [-1, 2, -3]
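
Applied to the question's example, the same tuple form handles two tensors directly (a short sketch; the variable names are mine):

import tensorflow as tf

a = tf.constant([1, 2, 3, 4])
b = tf.constant([17, 12, 11, 10])
# elems is a tuple, so each slice passed to fn is a (scalar_a, scalar_b) pair;
# dtype is required because the output structure differs from elems
summed = tf.map_fn(lambda pair: pair[0] + pair[1], (a, b), dtype=tf.int32)
# summed == [18, 14, 14, 14]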

Source: the tf.map_fn API documentation.

Root(大扎)
#4 · 2020-02-12 02:43

You can combine the approaches described on this page to pass several tensors, plus extra arguments, to your function. For example:

import tensorflow as tf

def make_cnn(name):
    # factory: builds a fresh, named Conv1D layer
    return tf.keras.layers.Conv1D(filters=64, kernel_size=4,
                                  padding='same', name=name)

pool = tf.keras.layers.GlobalAveragePooling1D()

def stack_inputs(inp1, inp2, axis=1):
    return tf.stack([inp1, inp2], axis)

def attention_op(q, p, cnn):
    # encode q directly, and attend q over p, then concatenate both encodings
    q_encoded = pool(cnn(q))
    q_v_att = pool(tf.keras.layers.Attention()([cnn(q), cnn(p)]))
    return tf.keras.layers.Concatenate()([q_encoded, q_v_att])

cnn1 = make_cnn('cnn_layer_1')
# inp1 and inp2 are assumed to be tensors of identical shape
stacked_inputs = stack_inputs(inp1, inp2)
map_result = tf.keras.backend.map_fn(
    lambda x: attention_op(x[0], x[1], cnn1), stacked_inputs, dtype=tf.float32)
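
Note that stacking requires inp1 and inp2 to share a shape and dtype; when they don't, the index-based helper in the first answer (or the tuple elems form above) is the more general route.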
兄弟一词,经得起流年.
#5 · 2020-02-12 02:47

The source code shows that this function takes only one elems tensor:

def map_fn(fn, elems, dtype=None, parallel_iterations=10, back_prop=True,
           swap_memory=False, name=None):

I don't see any *args or **kwargs parameters.

查看更多
Juvenile、少年°
#6 · 2020-02-12 02:50

If the tensors have the same shape (true in most cases), stack them along a new axis and slice them back apart inside the map function:

import tensorflow as tf

# declare inputs
a = tf.constant([1, 2, 3, 4])
b = tf.constant([17, 12, 11, 10])

# NOTE: stack because tf.map_fn takes only one elems tensor here;
# ab has shape [4, 2], so map_fn iterates over the four (a, b) pairs
ab = tf.stack([a, b], 1)


def map_operation(value_ab):
    # called once per [a_i, b_i] pair
    value_a = value_ab[0]
    value_b = value_ab[1]
    return value_a + value_b


# equivalent of: print(map(lambda x, y: x + y, a, b))  # ==> [18, 14, 14, 14]
map_result = tf.map_fn(map_operation, ab, dtype=tf.int32)

with tf.Session() as sess:
    print(sess.run(map_result))   # [18 14 14 14]
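
Under TensorFlow 2.x the session boilerplate goes away, since map_fn runs eagerly (a minimal sketch, assuming TF 2.x):

import tensorflow as tf  # TF 2.x, eager execution by default

ab = tf.stack([tf.constant([1, 2, 3, 4]), tf.constant([17, 12, 11, 10])], 1)
print(tf.map_fn(lambda p: p[0] + p[1], ab))  # tf.Tensor([18 14 14 14], ...)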

