I want to write a Spark UDAF where the column type can be any type that has a Scala Numeric defined for it. I've searched the Internet but only found examples with concrete types like DoubleType or LongType. Isn't this possible? And if not, how can such UDAFs be used with other numeric values?
For simplicity, let's assume you want to define a custom sum. You'll have to provide a TypeTag for the input type and use Scala reflection to define the schemas, as in the sketch below. With such a function defined we can then create instances handling specific types; a usage example follows the sketch.
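A minimal sketch of such a function, assuming the UserDefinedAggregateFunction API (deprecated since Spark 3.0 but still available) and using the internal ScalaReflection.schemaFor helper to derive the Catalyst DataType, might look like this (the class name NumericSum is just illustrative):

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.ScalaReflection
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types.{DataType, StructType}

import scala.reflect.runtime.universe.TypeTag

// Generic "sum" aggregate: T can be any type with a Numeric instance
// (Int, Long, Double, BigDecimal, ...).
class NumericSum[T: TypeTag](implicit num: Numeric[T])
    extends UserDefinedAggregateFunction {

  // Derive the Catalyst DataType for T via reflection.
  // Note: ScalaReflection lives in an internal (catalyst) package.
  private val dt: DataType = ScalaReflection.schemaFor[T].dataType

  override def inputSchema: StructType  = new StructType().add("value", dt)
  override def bufferSchema: StructType = new StructType().add("sum", dt)
  override def dataType: DataType       = dt
  override def deterministic: Boolean   = true

  // Start the running sum at the Numeric zero for T.
  override def initialize(buffer: MutableAggregationBuffer): Unit =
    buffer.update(0, num.zero)

  // Add each non-null input value to the buffer.
  override def update(buffer: MutableAggregationBuffer, input: Row): Unit =
    if (!input.isNullAt(0))
      buffer.update(0, num.plus(buffer.getAs[T](0), input.getAs[T](0)))

  // Combine partial sums from two buffers.
  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
    buffer1.update(0, num.plus(buffer1.getAs[T](0), buffer2.getAs[T](0)))

  override def evaluate(buffer: Row): Any = buffer.getAs[T](0)
}
```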
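For instance, instantiating the sketch above for Long and Double (the SparkSession setup and column names here are purely illustrative):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

// Assumption: a local SparkSession just for demonstration.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("generic-sum-udaf")
  .getOrCreate()

import spark.implicits._

val sumOfLong   = new NumericSum[Long]
val sumOfDouble = new NumericSum[Double]

// spark.range(10) yields a Long column "id"; 0 + 1 + ... + 9 = 45.
spark.range(10).select(sumOfLong(col("id"))).show()

// The same class works for Double without any changes.
Seq(1.5, 2.5, 3.0).toDF("x").select(sumOfDouble($"x")).show()
```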
Note: To get the same flexibility as the built-in aggregate functions you'd have to define your own AggregateFunction, like ImperativeAggregate or DeclarativeAggregate. It is possible, but it is an internal API.