Given an array, I want to normalize it such that each row sums to 1.
I currently have the following code:
import numpy
w = numpy.array([[0, 1, 0, 1, 0, 0],
                 [1, 0, 0, 0, 0, 1],
                 [0, 0, 0, 0, 0, 1],
                 [1, 0, 0, 0, 1, 0],
                 [0, 0, 0, 1, 0, 1],
                 [0, 1, 1, 0, 1, 0]], dtype=float)
def rownormalize(array):
    i = 0
    for row in array:
        array[i, :] = array[i, :] / sum(row)
        i += 1
I have two questions:
1) The code works, but I'm wondering if there's a more elegant way.
2) How can I convert the data type to float if it's int? I tried

if array.dtype == int:
    array.dtype = float
But it doesn't work.
You can do 1) like that:
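One vectorized sketch: summing with `keepdims=True` keeps the row sums as an (n, 1) column, so the division broadcasts across each row with no explicit loop.

```python
import numpy as np

w = np.array([[0, 1, 0, 1, 0, 0],
              [1, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 1],
              [1, 0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [0, 1, 1, 0, 1, 0]], dtype=float)

# keepdims=True makes the row sums an (n, 1) column vector,
# so the division broadcasts over each row.
w /= w.sum(axis=1, keepdims=True)
```

Note this divides in place, so the array must already be a float dtype, which is exactly your question 2.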
and 2) like that:
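Assigning to `array.dtype` only reinterprets the existing bytes under a new type; it does not convert the values, which is why your attempt appears broken. `astype` returns a new array with the values actually converted. A minimal sketch:

```python
import numpy as np

a = np.array([[0, 1, 0],
              [1, 0, 1]])        # default integer dtype

if a.dtype.kind == 'i':          # only convert integer arrays
    a = a.astype(float)          # astype returns a converted copy

# a now holds the same values, stored as floats
```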
Divisions, even though broadcast across all elements, can be expensive. A performance-focused alternative is to pre-compute the reciprocals of the row sums and use a broadcasted multiplication instead, like so -
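A sketch of that idea: compute `1/rowsum` once per row, then multiply, since a broadcast multiply is typically cheaper than a broadcast divide. It produces the same result as the division approach.

```python
import numpy as np

w = np.array([[0, 1, 0, 1, 0, 0],
              [1, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 1],
              [1, 0, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [0, 1, 1, 0, 1, 0]], dtype=float)

# Pre-compute the reciprocals of the row sums once...
recip = 1.0 / w.sum(axis=1)

# ...then broadcast a multiplication instead of a division.
out = w * recip[:, np.newaxis]
```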