Alon Halevy, Flip Korn, Natalya F. Noy, Christopher Olston, Neoklis Polyzotis, Sudip Roy, Steven Euijong Whang
For most large enterprises today, data constitutes a core asset, along with code and infrastructure, and the amount of data that enterprises produce internally has exploded in recent years. At the same time, engineers and data scientists often do not use centralized data-management systems and end up creating what has become known as a data lake: a collection of datasets that is frequently poorly organized, or not organized at all, and in which one must “fish” for useful datasets. In this paper, we describe our experience building and deploying GOODS, a system that manages Google’s internal data lake. GOODS crawls Google’s infrastructure and builds a catalog of the datasets that it discovers, including structured files, databases, spreadsheets, and even services that provide access to data. GOODS extracts metadata about datasets in a post-hoc way: engineers continue to generate and organize datasets exactly as they did before, and GOODS provides value without disrupting teams’ practices. The technical challenges that we had to address stem both from the scale and heterogeneity of Google’s data lake and from our decision to extract metadata post hoc. We believe that many of the lessons we learned apply to building large-scale enterprise data-management systems in general.
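To make the post-hoc approach concrete, the sketch below shows what a minimal catalog entry and a naive schema-inference pass over crawled data might look like. All class, field, and function names here are hypothetical illustrations of the idea, not GOODS's actual design: dataset owners change nothing, and the crawler infers metadata from the data itself.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    # Hypothetical catalog fields, not GOODS's actual schema.
    path: str                       # where the crawler found the dataset
    fmt: str                        # e.g. "csv", "sstable", "spreadsheet"
    owners: list = field(default_factory=list)        # inferred, not declared
    schema_guess: dict = field(default_factory=dict)  # column -> inferred type

def infer_schema(sample_rows):
    """Post-hoc schema inference: guess each column's type from sample values."""
    schema = {}
    for col, values in sample_rows.items():
        if all(v.lstrip("-").isdigit() for v in values):
            schema[col] = "int"
        else:
            schema[col] = "string"
    return schema

# A crawler would populate entries like this without disrupting dataset owners:
sample = {"user_id": ["17", "42"], "country": ["US", "DE"]}
entry = CatalogEntry(path="/logs/2016/clicks", fmt="csv",
                     schema_guess=infer_schema(sample))
```

In a real deployment the inference would of course cover many more formats and types, but the shape of the problem is the same: metadata is derived after the fact, from whatever the crawl surfaces.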