Let's say I have the following DataFrame:
[Row(user='bob', values=[0.5, 0.3, 0.2]),
Row(user='bob', values=[0.1, 0.3, 0.6]),
Row(user='bob', values=[0.8, 0.1, 0.1])]
I would like to groupBy user and do something like avg(values), where the average is taken over each index of the array values, like this:
[Row(user='bob', averages=[0.466667, 0.233333, 0.3])]
How can I do this in PySpark?
You can expand the array and compute the average for each index.
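One way to do it in PySpark (a minimal sketch; posexplode requires Spark 2.1+, and the intermediate column names pos, value, and tmp are just illustrative) is to explode each array element together with its index, average per (user, index), and then collect the averages back into an array ordered by index:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("bob", [0.5, 0.3, 0.2]),
     ("bob", [0.1, 0.3, 0.6]),
     ("bob", [0.8, 0.1, 0.1])],
    ["user", "values"],
)

result = (
    df
    # Pair every array element with its index within the array.
    .select("user", F.posexplode("values").alias("pos", "value"))
    # Average the values at each index, per user.
    .groupBy("user", "pos")
    .agg(F.avg("value").alias("avg"))
    # Collect (index, average) structs and sort them by index, so the
    # resulting array is in the original element order.
    .groupBy("user")
    .agg(F.sort_array(F.collect_list(F.struct("pos", "avg"))).alias("tmp"))
    # Project out just the averages as an array column.
    .select("user", F.col("tmp.avg").alias("averages"))
)

result.collect()
# [Row(user='bob', averages=[0.466666..., 0.233333..., 0.3])]

The sort_array over (pos, avg) structs is what keeps the averages aligned with their original positions: collect_list does not guarantee any ordering, so the index has to be carried along and sorted on explicitly.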