I'm new to PySpark and I'm facing a strange problem. I'm trying to set some columns to non-nullable while loading a CSV dataset. I can reproduce my case with a very small dataset (test.csv):
col1,col2,col3
11,12,13
21,22,23
31,32,33
41,42,43
51,,53
There is a null value at row 5, column 2, and I don't want that row in my DF. I set all three fields as non-nullable (nullable=False), but the schema I get back reports nullable = true for all three columns anyway. I'm running the latest available version of Spark, 2.0.1.
Here's the code:
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()

struct = StructType([
    StructField("col1", StringType(), False),
    StructField("col2", StringType(), False),
    StructField("col3", StringType(), False),
])

df = spark.read.load("test.csv", schema=struct, format="csv", header="true")
df.printSchema()
returns:
root
 |-- col1: string (nullable = true)
 |-- col2: string (nullable = true)
 |-- col3: string (nullable = true)
and df.show() returns:
+----+----+----+
|col1|col2|col3|
+----+----+----+
| 11| 12| 13|
| 21| 22| 23|
| 31| 32| 33|
| 41| 42| 43|
| 51|null| 53|
+----+----+----+
while I expect this:
root
 |-- col1: string (nullable = false)
 |-- col2: string (nullable = false)
 |-- col3: string (nullable = false)
+----+----+----+
|col1|col2|col3|
+----+----+----+
| 11| 12| 13|
| 21| 22| 23|
| 31| 32| 33|
| 41| 42| 43|
+----+----+----+
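For completeness, the only workaround I've found so far is to drop the offending rows after the load. This is just a sketch of what I mean (df.na.drop() with no arguments should drop any row containing a null, and re-applying the StructType via spark.createDataFrame is something I've seen suggested but haven't verified on 2.0.1):

# Workaround sketch: filter out rows containing nulls after loading.
clean_df = df.na.drop()
clean_df.show()

# Even after the drop, printSchema() still reports nullable = true,
# so the schema presumably has to be re-applied by hand:
strict_df = spark.createDataFrame(clean_df.rdd, struct)
strict_df.printSchema()

But this feels like a detour: shouldn't passing nullable=False to spark.read.load() reject or drop that row by itself?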