I am new to PySpark and I am trying to understand how we can write multiple nested for loops in PySpark; a rough high-level example is below.
Any help will be appreciated.
for (i = 0; i < 10; i++)
    for (j = 0; j < 10; j++)
        for (k = 0; k < 10; k++)
        {
            print "i"."j"."k"
        }
In a non-distributed setting, for loops are rewritten using the foreach combinator, but due to Spark's distributed nature, map and flatMap are a better choice:
def a_loop(x):
    # Expand each element into ten (element, y) pairs -- one extra loop level
    return ((x, y) for y in range(10))

def print_me(pair):
    # After two flatMap passes each element has the shape ((i, j), k)
    (x, y), z = pair
    print("{0}.{1}.{2}".format(x, y, z))

(sc
 .parallelize(range(10))   # the outer loop: i in 0..9
 .flatMap(a_loop)          # adds j: (i, j)
 .flatMap(a_loop)          # adds k: ((i, j), k)
 .foreach(print_me))
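To see why chaining two flatMap calls reproduces the nested loops, here is a plain-Python sketch that needs no Spark cluster; `flat_map` is a hypothetical helper standing in for `RDD.flatMap`:

```python
from itertools import chain

def flat_map(f, xs):
    # Mimics RDD.flatMap: apply f to each element, then flatten the results
    return list(chain.from_iterable(map(f, xs)))

def a_loop(x):
    return ((x, y) for y in range(10))

pairs = flat_map(a_loop, range(10))   # one inner loop: (i, j)
triples = flat_map(a_loop, pairs)     # two inner loops: ((i, j), k)

# Same 1000 combinations as the three nested loops
nested = [((i, j), k)
          for i in range(10)
          for j in range(10)
          for k in range(10)]
assert triples == nested
```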
Or using itertools.product:
from itertools import product
sc.parallelize(product(range(10), repeat=3)).foreach(print)
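Unlike the flatMap version, product(range(10), repeat=3) yields the triples as flat tuples (i, j, k), so no nested unpacking is needed; a quick local check without Spark:

```python
from itertools import product

triples = list(product(range(10), repeat=3))
assert len(triples) == 1000           # 10 * 10 * 10 combinations
assert triples[0] == (0, 0, 0)
assert triples[-1] == (9, 9, 9)

# The same "i.j.k" line the foreach(print) call would emit per element:
print("{0}.{1}.{2}".format(*triples[0]))   # prints 0.0.0
```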