Running data science workloads is a challenge regardless of whether you run them on your laptop, on an on-premises cluster, or in the cloud. While buying a fully managed service is an option, these tools are usually quite expensive and lack extensibility. Therefore, many companies opt for open source data science tools like scikit-learn and Apache Spark's MLlib in order to balance functionality and cost.
However, even if a project succeeds at a point in time with a given set of tools, it becomes harder and harder to maintain as data volumes grow and the demand for real-time processing pushes the technology to its limits. New projects also struggle as new challenges of scale invalidate previous assumptions. This talk will discuss some patterns that we see at Databricks that companies leverage to succeed with their data science projects.