
Analytics Podcast Episodes


Lars Kamp
Kevin Hu

In this episode, we interview Kevin Hu, co-founder and CEO at Metaplane. Metaplane offers data observability for the modern data stack. Kevin calls Metaplane the "Datadog for data," in reference to observability for microservices and cloud-native stacks.

As data volume and tool usage grow, so does the potential for something to break—resulting in errors and data downtime. In the modern data stack, the chain of SQL-based transformations between the original data source and the computed result is long and complex. For this reason, it's often nearly impossible to pinpoint the source of data errors.

Metaplane's focus is data criticality, and Metaplane has built instrumentation to understand exactly where errors occur. When data is mission-critical to the business, data teams become "solution-aware."

We take a walk down memory lane in this episode. We discuss the early days of the cloud warehouse market and the paradigm shift to separate storage and compute that, overnight, turned Snowflake into a market leader.

As a result of this shift, the market for analytics expanded and spawned a new generation of data tooling across categories like data integration and ETL, customer data platforms, data catalogs, reverse ETL, and data observability by companies like RudderStack, Airbyte, Census, Hightouch, and, of course, Metaplane.

Lars Kamp

Apache Iceberg is a new table format that offers both the simplicity of SQL and separation of storage and compute. The Iceberg table format works with any compute engine, so users are not limited to working with a single engine. Popular engines (e.g., Spark, Trino, Flink, and Hive) and modern cloud warehouses (e.g., Snowflake, Redshift, and BigQuery) can work with Iceberg tables at the same time.
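As a sketch of that interoperability (catalog, schema, and table names here are hypothetical), an Iceberg table created from Spark can be queried from another engine such as Trino through its Iceberg connector:

```sql
-- Spark SQL: create an Iceberg table (names are hypothetical)
CREATE TABLE demo_catalog.db.events (
    event_id   BIGINT,
    account_id BIGINT,
    event_time TIMESTAMP
) USING iceberg
PARTITIONED BY (days(event_time));

-- Trino: query the same table via an Iceberg catalog
SELECT account_id, count(*) AS events
FROM iceberg.db.events
GROUP BY account_id;
```

Both engines read and write the same table metadata, so neither one "owns" the data.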

A table format is a layer that sits between the file format and the database. Iceberg, born out of necessity at Netflix, is an abstraction layer above file formats like Parquet, Avro, and ORC. Like many other companies at the time, Netflix shifted from MPP data warehouses to the Hadoop ecosystem in the 2010s. MPP warehouses like Teradata were hitting scale limitations and becoming too expensive at Netflix's scale.

The Hadoop ecosystem abandoned the table abstraction layer in favor of scale. In Hadoop, we deal directly with file systems like HDFS. The conventional wisdom at the time was that bringing compute to storage was easier than moving the data to compute. Hadoop scales compute and disk together, which turned out to be incredibly hard to manage in the on-premise world.

Early on, Netflix shifted to the cloud and started storing data in Amazon S3 instead, which separated storage from compute. Snowflake, the cloud warehouse, also picked up on that principle, bringing back SQL semantics and tables from "old" data warehouses.

Netflix wanted both the separation of storage and compute and SQL table semantics. They wanted to add, remove, and rename columns without rewriting data in S3. But rather than going with another proprietary vendor, Netflix wanted to stay with open source and open formats. And thus, Iceberg was developed and eventually donated to the Apache Software Foundation. Today, Iceberg is also in use at companies like Apple and LinkedIn.
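Schema evolution is a good illustration of what the table layer buys back: in Iceberg these are metadata-only operations, with no rewrite of the underlying files in S3. A sketch in Spark SQL, with a hypothetical table name:

```sql
-- Iceberg schema evolution: each statement updates table metadata
-- only; existing data files are untouched (table name is hypothetical)
ALTER TABLE demo_catalog.db.events ADD COLUMN country STRING;
ALTER TABLE demo_catalog.db.events RENAME COLUMN user_id TO account_id;
ALTER TABLE demo_catalog.db.events DROP COLUMN country;
```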

Tabular commercializes Apache Iceberg. Working with open-source Iceberg tables still requires an understanding of object stores, distributed data processing engines, and how the various components interact with each other. Tabular lowers the bar for adoption and removes the heavy lifting.

Jason Reid is a co-founder and heads Product at Tabular. In this episode, Jason walks us through the benefits of using an open table format like Iceberg and how it works with existing analytics infrastructure and tooling of the modern data stack like dbt.

Lars Kamp
Julia Schottenstein

dbt Labs' mission is to empower data practitioners to create and disseminate organizational knowledge with its open-source product dbt. dbt helps write and execute data transformation jobs by compiling code to SQL and running it against your cloud warehouse.

When raw data from production or SaaS apps arrives in a cloud warehouse for analysis, it's not in a usable state. Analytics engineers need to prepare, clean, join, and transform the data to match business needs. These needs could include visualizing data for a sales forecast, feeding data into a machine learning model, or preparing operational analytics with infrastructure data. The analytics engineering workflow covers all the steps from raw data extraction to data modeling and end uses like reporting or data science.
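A dbt model is essentially a SELECT statement: dbt compiles the templated references to fully qualified warehouse tables and wraps the query in the DDL needed to materialize it. A minimal, hypothetical model might look like this (the model and column names are illustrative):

```sql
-- models/orders_cleaned.sql (hypothetical dbt model)
-- dbt compiles {{ ref(...) }} to the upstream table's fully
-- qualified name and materializes this SELECT in the warehouse.
{{ config(materialized='table') }}

select
    order_id,
    customer_id,
    cast(ordered_at as date) as order_date,
    amount_usd
from {{ ref('raw_orders') }}
where amount_usd is not null
```

Because models reference each other through `ref()`, dbt can infer the dependency graph between them automatically.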

Today, over 16,000 companies use dbt. dbt has become a foundational technology for the analytics engineering workflow, which is very similar to the DevOps workflow. dbt applies software engineering principles to working with data. To "productionize" data, engineers develop, test, and integrate it—and then also provide observability and alerting once it's in production. All of this functionality is included in dbt Cloud, the commercial version of dbt.

Julia Schottenstein heads Product at dbt Labs. In this episode, Julia walks us through the evolution of dbt from a tool for data teams at start-ups to enterprise deployments where sometimes thousands of analytics engineers collaborate through dbt. We cover all aspects of the modern data stack—cloud warehouses, ETL, data pipelines, and orchestration—with an outlook on the wider use of data in the enterprise by both humans and applications:

  • dbt's semantic layer, which assigns a single, shared definition to each metric (e.g., revenue, customers, churn)

    The semantic layer in dbt contains the definitions for each metric, ensuring consistency and flexibility—users can slice and dice a metric along any dimension. Metrics are computed at the time of a query rather than pointing to an already materialized view.

  • Continuous integration and deployment (CI/CD) for data

    Building data pipelines is expensive, and data transformation can take a long time with large data sets and complex queries. dbt Cloud ships a purpose-built CI tool that builds the absolute minimum set of code and data to test changes.

  • How dbt works, with its directed acyclic graph (DAG)

    The DAG is a visual representation of data models and the connections between them. dbt started out with SQL for all transformations but now also supports other languages, such as Python.
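To make the last two points concrete, here is a minimal Python sketch (illustrative only, not dbt's implementation) of a model DAG. It computes everything downstream of a changed model, which is the minimal set a state-aware CI run needs to rebuild, and a topological run order for that set:

```python
from collections import defaultdict, deque

# Hypothetical miniature of a dbt-style DAG: each model lists
# the models it depends on (its parents).
deps = {
    "raw_orders": [],
    "raw_customers": [],
    "orders_cleaned": ["raw_orders"],
    "customers_cleaned": ["raw_customers"],
    "revenue_report": ["orders_cleaned", "customers_cleaned"],
}

# Invert to child edges so we can walk downstream.
children = defaultdict(list)
for model, parents in deps.items():
    for parent in parents:
        children[parent].append(model)

def downstream(changed):
    """Changed models plus everything downstream of them: the
    minimal set a state-aware CI run has to rebuild and test."""
    seen, queue = set(changed), deque(changed)
    while queue:
        for child in children[queue.popleft()]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

def run_order(selected):
    """Topological order (Kahn's algorithm) restricted to the
    selected models, i.e., the order a run would execute them."""
    indeg = {m: sum(p in selected for p in deps[m]) for m in selected}
    queue = deque(m for m, d in indeg.items() if d == 0)
    order = []
    while queue:
        model = queue.popleft()
        order.append(model)
        for child in children[model]:
            if child in selected:
                indeg[child] -= 1
                if indeg[child] == 0:
                    queue.append(child)
    return order

to_build = downstream({"orders_cleaned"})
print(sorted(to_build))     # ['orders_cleaned', 'revenue_report']
print(run_order(to_build))  # ['orders_cleaned', 'revenue_report']
```

Only two of the five models are touched here; everything upstream of the change is left alone, which is what keeps a CI run cheap on large projects.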

Lars Kamp
Michael Driscoll

Creating an analytics dashboard is a time-consuming process that involves stitching together many components: ELT pipelines, cloud warehouses, transformation and semantic layers, data catalogs, and a dashboard tool. The flexibility of the Modern Data Stack (MDS) also means a great deal of complexity and many design decisions.

Rill Data is on a mission to radically simplify how developers create operational dashboards. Rill offers blazing fast dashboards that come bundled with a real-time analytical database and a modeling layer.

Michael Driscoll is the co-founder and CEO of Rill Data. In this episode, Mike demos the latest 0.16 release of Rill Developer.

There are three pieces of infrastructure that form a Rill dashboard application:

  • Sources: Rill ships with a CLI you can use to import data from an object store like AWS S3 or Google Cloud Storage. Rill treats the object store as the source of truth and imports data for the "last-mile ETL." As data in the object store changes, Rill orchestrates incremental updates.
  • Runtime: The runtime itself consists of a database (DuckDB), a web UI for rendering the dashboards (SvelteKit), and a middleware written in Go. Rill Enterprise replaces DuckDB with Apache Druid to process large data sets.
  • Models: Configuration code that parameterizes the dashboards, using YAML and SQL.
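As a rough, hypothetical sketch of this BI-as-code idea (the field names are illustrative, not Rill's exact schema), a dashboard definition might pair a SQL model with YAML along these lines:

```yaml
# dashboards/auction_metrics.yaml -- illustrative sketch, not
# Rill's actual configuration format
title: Auction Metrics
model: auction_events        # a SQL model, e.g. models/auction_events.sql
timeseries: event_time
measures:
  - label: Requests
    expression: count(*)
  - label: Revenue
    expression: sum(revenue_usd)
dimensions:
  - property: publisher
  - property: ad_format
```

Because the whole definition is plain text, it can be diffed, reviewed, and deployed like any other code.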

Bringing these pieces together in one application is an opinionated way to turn data into dashboards that Mike says covers "80%+ of the use cases that [they've] come across when building operational dashboards." Rill customers create dashboards to build analytics for their advertising, marketplace, and infrastructure operations.

Rill's stack is a departure from point-and-click interfaces, moving towards what Mike calls "BI-as-code." Source definitions and metrics are implemented in YAML, and models are defined as SQL queries. The combination of SQL and YAML creates a BI layer that can be checked into a Git repository and managed automatically by CI workflows.

We also cover broader trends in our discussion, including the convergence of engineering and analytics cultures as engineers adopt practices from analytics to work with infrastructure data. Watch this episode to learn more about building data infrastructure for engineering teams with SQL and YAML with Rill.

Lars Kamp

Some studies estimate that nine out of ten copies of data are precomputed. Precomputation requires a lot of engineering and batch processing. Compare this to computing on the raw data directly, which reduces the amount of data you need to manage, store, and secure by up to 90%. Yet some precomputation has often still been required because of bottlenecks in I/O, storage, or compute.

FeatureBase is the first analytical database built entirely on bitmaps.

Bitmaps lay out data differently from both the row-oriented layout of transactional databases and the columnar layout of analytical databases: bitmaps store data at the level of individual values. Because of this layout, the data pertaining to each unique value within a row or column can be accessed independently, without scanning the row or column. The I/O for typical analytical workloads is only a fraction of that of traditional analytical databases.
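A minimal Python sketch (illustrative only, not FeatureBase's implementation) shows the value-wise layout: one bitmap per distinct value, with bit i set when row i holds that value. A conjunctive filter then becomes a single bitwise AND instead of a scan:

```python
from collections import defaultdict

def build_index(column):
    """One bitmap per distinct value; Python ints stand in for
    the compressed bitmaps a real engine would use."""
    bitmaps = defaultdict(int)       # value -> bitmap
    for row, value in enumerate(column):
        bitmaps[value] |= 1 << row   # set bit `row` for this value
    return bitmaps

colors = build_index(["red", "blue", "red", "green", "blue"])
regions = build_index(["us", "us", "eu", "eu", "us"])

# "color = red AND region = us" is one bitwise AND; no row
# or column is scanned to answer it.
matches = colors["red"] & regions["us"]
rows = [i for i in range(5) if matches >> i & 1]
print(rows)  # -> [0]; row 2 is also red, but its region is eu
```

Production bitmap engines add compression (e.g., roaring bitmaps) so that sparse and dense bitmaps both stay small, which is where the storage and transport savings come from.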

Bitmaps are more efficient when it comes to storing, transporting, and managing data—they are orders of magnitude faster than today's popular cloud warehouses, and also an order of magnitude more efficient at storing data. Their efficiency makes them ideal for real-time processing and artificial intelligence workloads.

In fact, that's what positions FeatureBase as the database between real-time streaming engines like Kafka on one end, and cloud warehouses as long-term storage engines on the other end. FeatureBase is the working memory in-between the two.

Higinio "H.O." Maycotte is Founder and CEO at FeatureBase. In this session, we explore the mathematical pillars of databases and bitmaps.

The data footprint and scale of some of FeatureBase's customers is nothing short of breathtaking. One of their advertising customers processes 120 billion updates a day—that's nearly 1.4 million updates per second. FeatureBase allowed them to reduce their server count from 1,000 servers to just 11, saving them millions of dollars per year.

The team at FeatureBase has invested over $30 million in R&D and nine years of their lives to advance the use of bitmaps in databases. Watch this fascinating session with H.O. to learn more about math, bitmaps, and modern real-time processing data architecture.


Some Engineering Inc.