Presently, I have all my data files in Azure Data Lake Store. I need to process these files, which are mostly in CSV format. The processing consists of running jobs on these files to extract various information, e.g. data for certain date ranges, events related to a particular scenario, or joins across multiple tables/files. These jobs run every day as U-SQL jobs in Data Factory (v1 or v2), and the results are then sent to Power BI for visualization.
Using ADLA for all this processing feels slow and quite expensive. I got a suggestion that I should use Azure Databricks for the above processes instead. Could somebody point me in the right direction on the difference between the two and whether it would be worthwhile to switch? Can I convert all my U-SQL jobs into the Databricks notebook format?
Disclaimer: I work for Databricks.
It is tough to give pros/cons or advice without knowing how much data you work with, what kind of data it is, or how long your processing times are. If you want to compare Azure Data Lake Analytics costs to Databricks, that can only be done accurately by speaking with a member of the sales team.
Keep in mind that ADLA is built on the YARN cluster manager (from Hadoop) and only runs U-SQL batch-processing workloads. BlueGranite has published a good description of the differences between the two services.
Databricks covers both batch and stream processing, and handles both ETL (data engineering) and data science (machine learning, deep learning) workloads. Broadly, those capabilities are why companies use Databricks.
There are more reasons than those, but those are some of the most common. You should try out a trial on the website if you think it may help your situation.
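As for converting your U-SQL jobs: the EXTRACT / SELECT / OUTPUT pattern maps fairly directly onto Spark DataFrames, so most jobs translate mechanically rather than needing a redesign. Below is a minimal PySpark sketch of the kind of daily job you describe (read CSVs from the lake, filter by a date range, join across files, write results back for Power BI); the mount point, file paths, and column names are placeholders for illustration, not anything from your actual setup:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a Databricks notebook, `spark` is already defined; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Read CSV files from a mounted ADLS path (hypothetical mount point and layout)
events = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("/mnt/datalake/events/*.csv"))

scenarios = (spark.read
             .option("header", "true")
             .option("inferSchema", "true")
             .csv("/mnt/datalake/scenarios/*.csv"))

# Keep one date window and join on a shared key (assumed column names)
result = (events
          .where(F.col("event_date").between("2018-01-01", "2018-01-31"))
          .join(scenarios, on="scenario_id", how="inner"))

# Write the output back to the lake for Power BI to pick up
(result.write
 .mode("overwrite")
 .option("header", "true")
 .csv("/mnt/datalake/output/events_jan2018"))
```

Your scheduling story doesn't have to change either: Data Factory v2 has a native Databricks notebook activity, so the existing daily trigger can call the notebook instead of the U-SQL job.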