Data Lakehouse: The Foundation for Your Workflow

Our Arkham Data Lakehouse is the high-performance engine that underpins the entire Data Platform. It combines the massive scale of a data lake with the reliability and performance of a data warehouse, creating a single, unified foundation for all your data. As a builder, you don't manage the Lakehouse directly; instead, you experience its benefits through the speed, reliability, and powerful features of the Arkham toolchain.

How the Lakehouse Powers the Arkham Tools

Our Lakehouse architecture is the "how" behind the seamless experience across the platform's UI tools. Each of its core technical features directly enables a key part of your workflow.

  • Reliable Pipelines for the Pipeline Builder: The Lakehouse brings full ACID transaction guarantees to your data transformations. When Pipeline Builder runs a multi-stage pipeline, it executes as a single, atomic transaction: the pipeline either succeeds completely or fails cleanly. This eliminates partial updates and data corruption, so your Production datasets are always consistent (see the first sketch after this list).
  • Performant Queries in the Playground: Your queries in the Playground are fast because the Lakehouse uses open columnar formats (like Apache Parquet) and decouples compute from storage. Data is stored in a query-optimized layout, and the query engine scales independently, keeping latency consistently low for your ad-hoc analysis, even on massive datasets (see the second sketch below).
  • Effortless Governance in the Data Catalog: The transactional layer's Time Travel capability is the foundation of the Data Catalog's governance features. Every change to a dataset creates a new version, and the Catalog maintains a full, auditable history. This lets you inspect the state of your data at any point in time, track lineage, and debug with confidence (see the third sketch below).
  • Flexible Ingestion via Connectors: Our Connectors can reliably ingest data of any shape because the Lakehouse handles any data format on cost-effective object storage, while the transactional layer still enforces strong schema validation on write. This combination gives you the flexibility of a data lake with the guarantees of a warehouse, right from the first step of your workflow (see the fourth sketch below).
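
First, a minimal sketch of what the all-or-nothing pipeline guarantee looks like in practice. The `arkham` module, `connect` function, and `transaction` context manager are hypothetical names used for illustration, not a documented SDK:

```python
# Hypothetical sketch of an atomic multi-stage pipeline write.
# `arkham`, `connect`, and `transaction` are illustrative, not a real SDK.
from arkham import connect

client = connect(workspace="analytics")

# Every stage below commits together, or none of them do; readers never
# observe a half-written intermediate state.
with client.transaction() as txn:
    raw = txn.read("raw/orders")
    cleaned = raw.filter("amount > 0").drop_duplicates("order_id")
    enriched = cleaned.join(txn.read("raw/customers"), on="customer_id")
    txn.write("production/orders_enriched", enriched)
# If any stage raises, the transaction rolls back and the Production
# dataset is left exactly as it was before the run.
```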
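
Second, the query-speed claim comes down to column pruning: a columnar file lets the engine read only the columns a query touches. Here is a generic PyArrow illustration (this is not the Playground's engine, and the file and column names are made up):

```python
# Generic column-pruning illustration with PyArrow; the file and column
# names are hypothetical, and this is not Arkham's query engine.
import pyarrow.parquet as pq

# Reading two columns from a wide table skips the bytes of every other
# column on disk, which is why ad-hoc scans stay fast at scale.
table = pq.read_table("events.parquet", columns=["user_id", "event_time"])
print(table.num_rows)
```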
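
Third, Time Travel amounts to version-addressed reads: each commit produces an immutable version you can pin a query to. Continuing with the hypothetical client from the first sketch (the `read` call and its `version` parameter are likewise illustrative):

```python
# Hypothetical sketch of version-addressed reads for auditing and
# debugging; `read` and its `version` parameter are illustrative.
from arkham import connect

client = connect(workspace="analytics")

latest = client.read("production/orders_enriched")              # current version
pinned = client.read("production/orders_enriched", version=41)  # historical version

# Comparing two versions shows exactly what a given pipeline run changed.
```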
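
Fourth, schema validation on write means a bad record fails loudly at ingest instead of landing as a corrupt row. A generic PyArrow illustration of the idea (not Arkham's validator; the schema and records are made up):

```python
# Generic schema-on-write illustration with PyArrow; this is not
# Arkham's validator, and the schema and records are hypothetical.
import pyarrow as pa

schema = pa.schema([("order_id", pa.int64()), ("amount", pa.float64())])

good = pa.table({"order_id": [1, 2], "amount": [9.99, 5.00]})
good.cast(schema)  # succeeds: types match the declared schema

bad = pa.table({"order_id": ["oops"], "amount": [1.0]})
try:
    bad.cast(schema)  # a string that isn't an integer cannot be cast
except pa.ArrowInvalid as err:
    print("rejected on write:", err)
```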

Arkham vs. Alternative Architectures

To appreciate the benefits of Arkham's managed Lakehouse, it's helpful to compare it to the traditional architectures that data teams often have to build and maintain themselves. Arkham's platform is designed to give you the advantages of a Lakehouse without the setup and management overhead.

| Feature | Data Lakes | Data Warehouses | Arkham Lakehouse (Best of Both) |
| --- | --- | --- | --- |
| Storage Cost | ✅ Very low (S3) | ❌ High (compute + storage) | ✅ Very low (S3) |
| Data Formats | ✅ Any format (JSON, CSV, Parquet) | ❌ Structured only | ✅ Any format + structure |
| Scalability | ✅ Petabyte scale | ❌ Limited by cost | ✅ Petabyte scale |
| ACID Transactions | ❌ No guarantees | ✅ Full ACID support | ✅ Full ACID support |
| Data Quality | ❌ No enforcement | ✅ Strong enforcement | ✅ Strong enforcement |
| Schema Evolution | ❌ Manual management | ❌ Rigid structure | ✅ Automatic evolution |
| Query Performance | ❌ Slow, inconsistent | ✅ Fast, optimized | ✅ Fast, optimized |
| ML/AI Support | ✅ Great for ML | ❌ Poor ML support | ✅ Great for ML |
| Real-time Analytics | ❌ Batch processing | ✅ Real-time queries | ✅ Real-time queries |
| Time Travel | ❌ Not available | ❌ Limited versions | ✅ Full version history |
| Setup Complexity | ✅ Simple (but lacks features) | ❌ Complex ETL | ✅ Zero (managed by Arkham) |

Related Documentation

  • Data Platform Overview: See how the Lakehouse underpins the entire integrated data workflow.
  • Connectors: Leverage the Lakehouse's flexibility to ingest any data format.
  • Pipeline Builder: Build reliable pipelines backed by the ACID transaction guarantees of the Lakehouse.
  • Data Catalog: Automatically govern and version your data with the Lakehouse's Time Travel capabilities.
  • Playground: Run high-performance queries on massive datasets thanks to the Lakehouse's optimized engine.