I have a very big pyspark.sql.dataframe.DataFrame named df. I need some way of enumerating records, so that I can access a record by a certain index (or select a group of records within a range of indexes).
In pandas, I could simply do:
    indexes = [2, 3, 6, 7]
    df.iloc[indexes]
Here I want something similar, without converting the DataFrame to pandas.
The closest I can get is:
- Enumerating all the objects in the original DataFrame by:

        indexes=np.arange(df.count())
        df_indexed=df.withColumn('index', indexes)

- Searching for the values I need using the where() function.
QUESTIONS:
- Why doesn't it work, and how can I make it work? How can I add such a column to a DataFrame?
- Would it work later to do something like this?

        indexes=[2,3,6,7]
        df1.where("index in indexes").collect()

- Is there any faster and simpler way to deal with it?
To select a single row n of a PySpark DataFrame, filter on its row id. Given a PySpark DataFrame with an id column, you can select the 3rd row with a string condition or with the equivalent Column expression, and you can select multiple rows with the rows' ids (the 2nd and the 3rd rows in this case) using isin(). A sketch of all three follows.
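A minimal sketch, assuming a small example DataFrame with an id column (the column names and sample data are placeholders):

    # Minimal sketch: row selection by id, assuming a DataFrame with an "id" column.
    df = spark.createDataFrame(
        [(1, "a"), (2, "b"), (3, "c"), (4, "d")],
        ["id", "value"],
    )

    # Select the 3rd row with a SQL-style condition string ...
    df.where("id == 3").show()

    # ... or with the equivalent Column expression.
    df.where(df.id == 3).show()

    # Select multiple rows by their ids (the 2nd and the 3rd rows here).
    df.where(df.id.isin([2, 3])).show()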
monotonicallyIncreasingId()
- this will assign row numbers in increasing order, but not in sequence. Sample output with 2 columns (a sketch of the call follows the table):
    |---------------------|------------------|
    | RowNo               | Heading 2        |
    |---------------------|------------------|
    | 1                   | xy               |
    |---------------------|------------------|
    | 12                  | xz               |
    |---------------------|------------------|
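A minimal sketch of that call in the DataFrame API (in recent PySpark versions the function is spelled monotonically_increasing_id; the DataFrame name df and the column name RowNo are placeholders):

    # Minimal sketch: add a strictly increasing, but non-sequential, row id.
    from pyspark.sql.functions import monotonically_increasing_id

    df_with_id = df.withColumn("RowNo", monotonically_increasing_id())
    df_with_id.show()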
If you want to assign sequential row numbers, use the following trick (a DataFrame-API equivalent is sketched after the sample output).
Tested in spark-2.0.1 and greater versions.
    df.createOrReplaceTempView("df")
    dfRowId = spark.sql("select *, row_number() over (partition by 0) as rowNo from df")
Sample output with 2 columns:

    |---------------------|------------------|
    | RowNo               | Heading 2        |
    |---------------------|------------------|
    | 1                   | xy               |
    |---------------------|------------------|
    | 2                   | xz               |
    |---------------------|------------------|
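If you prefer the DataFrame API over SQL, a roughly equivalent sketch (the constant partitioning column mirrors the partition by 0 in the query above; ordering by monotonically_increasing_id is an added assumption to give the window an ordering):

    # Rough DataFrame-API equivalent of the SQL above (sketch).
    from pyspark.sql.functions import lit, monotonically_increasing_id, row_number
    from pyspark.sql.window import Window

    w = Window.partitionBy(lit(0)).orderBy(monotonically_increasing_id())
    dfRowId = df.withColumn("rowNo", row_number().over(w))
    dfRowId.show()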
Hope this helps.
You certainly can add an array for indexing, indeed an array of your choice. In Scala, you would first create an indexing array.

You can then append this column to your DataFrame. For that, you open up the DataFrame and get it as an array, zip it with your index_array, and then convert the new array back into an RDD. The final step is to get it back as a DataFrame.

The indexing should be clearer after that; a rough PySpark sketch of the same idea follows.
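In PySpark, the same collect, zip and rebuild idea might look roughly like this (variable names are placeholders; this only makes sense when the data is small enough to collect to the driver):

    # Rough sketch of the collect, zip-with-index, rebuild idea in PySpark.
    # Only sensible when the DataFrame is small enough to collect to the driver.
    rows = df.collect()                                  # DataFrame -> list of Rows
    index_array = list(range(len(rows)))                 # the indexing array
    indexed = [(i,) + tuple(r) for i, r in zip(index_array, rows)]
    df_indexed = spark.createDataFrame(indexed, ["index"] + df.columns)
    df_indexed.show()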
It doesn't work because:
- the second argument of withColumn should be a Column, not a collection, so np.array won't work here;
- when you pass "index in indexes" as a SQL expression to where, indexes is out of scope and is not resolved as a valid identifier.

PySpark >= 1.4.0
You can add row numbers using the respective window function and then query using the Column.isin method or a properly formatted query string.
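A minimal sketch of that approach, assuming a DataFrame df with an ordering column some_column (both names are placeholders; recent PySpark releases spell the ranking function row_number):

    # Minimal sketch: window-based row numbers plus isin / SQL-string filtering.
    from pyspark.sql.functions import col, row_number
    from pyspark.sql.window import Window

    w = Window.orderBy("some_column")        # no PARTITION BY -> see the note below
    indexed = df.withColumn("index", row_number().over(w))

    indexes = [2, 3, 6, 7]

    # Query with the Column.isin method ...
    indexed.where(col("index").isin(indexes)).show()

    # ... or with a properly formatted query string.
    indexed.where("index in ({0})".format(", ".join(str(x) for x in indexes))).show()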
It looks like window functions called without a PARTITION BY clause move all the data to a single partition, so the above may not be the best solution after all.

As for a faster and simpler way to deal with it: not really. Spark DataFrames don't support random row access.
A paired (key-value) RDD can be accessed using the lookup method, which is relatively fast if the data is partitioned using a HashPartitioner. There is also the indexed-rdd project, which supports efficient lookups.
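For illustration, lookup-based access might look roughly like this (building the keyed RDD from df, the number of partitions, and the looked-up key are all placeholders):

    # Rough sketch: random access through a key-value RDD and lookup().
    pairs = (df.rdd
             .zipWithIndex()                   # (Row, index)
             .map(lambda ri: (ri[1], ri[0]))   # re-key as (index, Row)
             .partitionBy(16)                  # hash-partitioned by key
             .cache())

    pairs.lookup(3)   # returns a list with the row(s) whose index is 3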
Edit:

Independent of the PySpark version, you can try something like this:
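A rough sketch using RDD.zipWithIndex(), adding the index as an extra field (the schema handling here is simplified and assumed, and it presumes no existing column is named index):

    # Rough sketch: attach indexes with zipWithIndex() and rebuild the DataFrame.
    from pyspark.sql import Row

    df_indexed = (df.rdd
                  .zipWithIndex()                                    # -> (Row, index)
                  .map(lambda ri: Row(index=ri[1], **ri[0].asDict()))
                  .toDF())

    df_indexed.where(df_indexed["index"].isin([2, 3, 6, 7])).show()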
If you want a number range that is guaranteed not to collide but does not require .over(partitionBy()), then you can use monotonicallyIncreasingId().

Note, though, that the values are not particularly "neat": each partition is given its own value range, so the output will not be contiguous, e.g. 0, 1, 2, 8589934592, 8589934593, 8589934594.

This was added to Spark on Apr 28, 2015, here: https://github.com/apache/spark/commit/d94cd1a733d5715792e6c4eac87f0d5c81aebbe2