Spark DataFrames have a toRDD() method, but I don't understand how it's useful. Can we start a SQL streaming job by converting the source Dataset to an RDD and processing that, instead of creating and starting a DataStreamWriter?
Question:
Answer 1:
Dataset provides a uniform API for both batch and streaming processing, and not every method is applicable to streaming Datasets. If you search carefully, you'll find other methods that cannot be used with streaming Datasets (for example, describe).
Can we start a SQL streaming job by converting the source Dataset to an RDD and processing that, instead of creating and starting a DataStreamWriter?
We cannot. What starts in structured streaming, stays in structured streaming. Conversions to RDD are not allowed.
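To make this concrete, here is a minimal sketch (the socket source, port, and console sink are assumptions for illustration, not part of the answer). Attempting an RDD conversion on a streaming Dataset fails with an AnalysisException telling you to use writeStream.start() instead:

```scala
import org.apache.spark.sql.SparkSession

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("streaming-sketch")
      .master("local[*]")
      .getOrCreate()

    // A streaming Dataset: lines.isStreaming == true
    val lines = spark.readStream
      .format("socket")          // assumed source for this sketch
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Not allowed: forcing execution via an RDD conversion throws
    // AnalysisException ("Queries with streaming sources must be
    // executed with writeStream.start()").
    // val rdd = lines.rdd

    // The only way to start the job is through a DataStreamWriter:
    val query = lines.writeStream
      .format("console")
      .outputMode("append")
      .start()

    query.awaitTermination()
  }
}
```

The same restriction applies to other eager, batch-only methods such as describe or collect on a streaming Dataset.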