The Apache Spark SQL statement CACHE TABLE has a LAZY option so that it runs lazily. But what about UNCACHE TABLE? The documentation doesn't say whether it is lazy or not. Will the table be dropped from the cache immediately, or will the drop be deferred until the next run of the garbage collector? If it is lazy, is there a way to find out whether my table is still cached?
The default UNCACHE TABLE operation is non-blocking. If you use the DataFrame/Dataset API instead, you can call

df.unpersist(true)

on a DataFrame/Dataset to make the unpersist blocking.
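To answer the second part of the question: you can check whether a table is still cached via `spark.catalog.isCached`. A minimal sketch (the table name `events` and the sample data are made up for illustration; note that `isCached` reflects the catalog's metadata, which is updated as soon as UNCACHE TABLE runs, even if the actual cleanup of cached blocks happens asynchronously):

```scala
import org.apache.spark.sql.SparkSession

object UncacheDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("uncache-demo")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical example table; replace with your own data.
    Seq((1, "a"), (2, "b")).toDF("id", "value")
      .createOrReplaceTempView("events")

    spark.sql("CACHE TABLE events")            // eager by default; add LAZY to defer
    println(spark.catalog.isCached("events"))  // prints: true

    spark.sql("UNCACHE TABLE events")          // non-blocking
    println(spark.catalog.isCached("events"))  // prints: false

    spark.stop()
  }
}
```

So even though the uncache itself is non-blocking, `isCached` gives you a reliable yes/no on whether Spark still considers the table cached.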