I've loaded a DataFrame from a SQL Server table. It looks like this:
>>> df.show()
+--------------------+----------+
|           timestamp|     Value|
+--------------------+----------+
|2015-12-02 00:10:...|     652.8|
|2015-12-02 00:20:...|     518.4|
|2015-12-02 00:30:...|     524.6|
|2015-12-02 00:40:...|     382.9|
|2015-12-02 00:50:...|     461.6|
|2015-12-02 01:00:...|     476.6|
|2015-12-02 01:10:...|     472.6|
|2015-12-02 01:20:...|     353.0|
|2015-12-02 01:30:...|     407.9|
|2015-12-02 01:40:...|     475.9|
|2015-12-02 01:50:...|     513.2|
|2015-12-02 02:00:...|     569.0|
|2015-12-02 02:10:...|     711.4|
|2015-12-02 02:20:...|     457.6|
|2015-12-02 02:30:...|     392.0|
|2015-12-02 02:40:...|     459.5|
|2015-12-02 02:50:...|     560.2|
|2015-12-02 03:00:...|     252.9|
|2015-12-02 03:10:...|     228.7|
|2015-12-02 03:20:...|     312.2|
+--------------------+----------+
Now I'd like to group (and sum) the values by hour (or by day, month, etc.), but I don't really have a clue how to do that.
This is how I load the DataFrame. I have a feeling it isn't the right way to do it, though:
query = """
SELECT column1 AS timestamp, column2 AS value
FROM table
WHERE blahblah
"""
sc = SparkContext("local", 'test')
sqlctx = SQLContext(sc)
df = sqlctx.load(source="jdbc",
url="jdbc:sqlserver://<CONNECTION_DATA>",
dbtable="(%s) AS alias" % query)
Is it ok?
Also, you can use date_format to create any time period you wish. Group by a specific day:
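A minimal sketch of that (assuming the columns are named timestamp and Value, as in the output above):

from pyspark.sql import functions as F

# truncate each timestamp to its calendar day, then aggregate
(df
    .groupBy(F.date_format("timestamp", "yyyy-MM-dd").alias("day"))
    .agg(F.sum("Value").alias("total"))
    .show())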
Group by a specific month (just change the format):
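With the same import, for example:

# "yyyy-MM" keeps only year and month, so rows are grouped per month
(df
    .groupBy(F.date_format("timestamp", "yyyy-MM").alias("month"))
    .agg(F.sum("Value").alias("total"))
    .show())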
Since 1.5.0 Spark provides a number of functions like dayofmonth, hour, month or year which can operate on dates and timestamps. So if timestamp is a TimestampType, all you need is a correct expression. For example:
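A quick sketch of what that can look like (assuming the value column is named Value, as in the output above):

from pyspark.sql import functions as F

# group rows by the hour of the timestamp and sum the values
(df
    .groupBy(F.hour("timestamp").alias("hour"))
    .agg(F.sum("Value").alias("total"))
    .show())

Swap F.hour for F.dayofmonth, F.month or F.year to change the granularity.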
Pre-1.5.0 your best option is to use HiveContext and Hive UDFs, either with selectExpr:
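For instance, roughly (a sketch, assuming df was created through a HiveContext and the columns are named as above):

# hour() here is a Hive UDF evaluated inside selectExpr
(df
    .selectExpr("hour(timestamp) AS hour", "Value")
    .groupBy("hour")
    .sum("Value")
    .show())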
or raw SQL:
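Something along these lines, reusing sqlctx from the question (pre-1.5.0 it would have to be a HiveContext for hour() to be available):

# register the DataFrame as a temporary table so it can be queried with SQL
df.registerTempTable("df")

sqlctx.sql("""
    SELECT hour(timestamp) AS hour, SUM(Value) AS total
    FROM df
    GROUP BY hour(timestamp)
""").show()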
Just remember that the aggregation is performed by Spark, not pushed down to the external source. Usually that is the desired behavior, but there are situations where you may prefer to perform the aggregation as a subquery to limit data transfer.
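One way to do that is to put the aggregation into the JDBC subquery itself. This is only a sketch, reusing the placeholder column and table names from the question and SQL Server's DATEPART:

query = """
SELECT DATEPART(hour, column1) AS hour, SUM(column2) AS total
FROM table
WHERE blahblah
GROUP BY DATEPART(hour, column1)
"""

# the database performs the aggregation; Spark only reads the result
df_hourly = sqlctx.load(source="jdbc",
                        url="jdbc:sqlserver://<CONNECTION_DATA>",
                        dbtable="(%s) AS alias" % query)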