Allium is an enterprise-grade blockchain data warehouse. Instead of giving users a SQL editor like Dune Analytics, Allium delivers decoded blockchain tables directly into the customer's own Snowflake, BigQuery, or Databricks instance. The product targets data teams that need to join onchain data with offchain data — internal product metrics, CRM, transaction logs — and run production pipelines against it. This guide explains how Allium's warehouse model works, what tables it ships, and where enterprise teams typically use it.
Allium covers 50+ chains with full transaction, log, trace, and decoded-event coverage. Customers as of 2026 include Visa, Stripe, Uniswap Labs, Phantom, and a number of stablecoin issuers per Allium's customer page. Pricing is enterprise — quote-based, not published — and starts in the high four figures per month.
What Is Allium?
Allium is a managed blockchain data pipeline. The company runs the ingest, decode, enrichment, and delivery layers; the customer points their warehouse at Allium and starts querying. There is no Allium UI for ad-hoc analysis — by design. The product is the data, not a dashboard.
The contrast with Dune is clear. Dune hosts both the data and the analyst experience; Allium hosts only the data. Customers that pick Allium typically have an existing analytics stack (Looker, Hex, internal dashboards) and want to add blockchain data as another source.
Three product lines as of 2026: Allium Data (warehouse drops, batch), Allium Datastreams (real-time event streams), and Allium Explorer (a hosted UI for ad-hoc queries, the closest thing to Dune in their stack). Most customers buy Data + Datastreams.
How the Warehouse Works
The pipeline has four stages.
Ingest. Allium operates archive nodes for every supported chain. For EVM chains these are geth or erigon archive nodes; for Solana, the Geyser plugin streams account and transaction updates; for Bitcoin, full nodes. Non-EVM chains (Tron, Stellar, Aptos, Sui, TON) each have a chain-specific export. Ingest runs continuously; new blocks land within seconds.
Decode. Allium maintains an in-house decoding pipeline. ABIs for major contracts are pre-loaded; custom contracts can be added on request. Decoded events become typed columns — uniswap_v3_ethereum.swap_events with token0_amount, token1_amount, recipient, etc. The decoding layer is what differentiates a warehouse from raw RPC data.
Enrich. The enrichment layer adds USD prices (from a multi-source price service), token metadata, contract labels, and cross-chain stitching. Cross-chain stitching matters because a USDC transfer via CCTP shows as two separate events; Allium publishes a stitched view in cross_chain_transfers tables.
Deliver. Tables land in the customer's warehouse via Snowflake Data Sharing, BigQuery Authorized Views, or Databricks Delta Sharing. No data movement at the customer's end — the tables appear as if native. Refresh cadence: 2–10 minutes for most chains.
What Tables Allium Ships
Per Allium's data catalog, the standard table set per chain includes:
Raw tables: blocks, transactions, logs, traces — full chain history
Decoded events: per-protocol event tables (Uniswap, Aave, Compound, etc.)
Token tables: ERC-20 transfers, ERC-721 mints/transfers, ERC-1155 transfers, all USD-priced
DEX tables: normalized swap events across major DEXes
Wallet labels: ~50M curated labels (smaller than Nansen but enterprise-vetted)
Cross-chain transfers: stitched bridges (CCTP, Hop, Across, Stargate, etc.)
Stablecoin tables: supply, mints, burns, by issuer and chain
The cross-chain and stablecoin tables are differentiators. Most platforms either don't stitch cross-chain (leaving the burn/mint as two events) or stitch only a subset of bridges. Allium publishes explicit cross_chain_transfers with bridge protocol attribution.
Datastreams: Real-Time Events
Allium Datastreams is the streaming product. Instead of warehouse drops at 2–10 minute intervals, Datastreams pushes events as they're confirmed onchain, with typical latency of ~1–3 seconds after a block is confirmed.
Delivery options: AWS Kinesis, Google Pub/Sub, Kafka, or webhook. Customers use Datastreams for trading, alerting, and product features that need real-time response (e.g., a wallet app showing a deposit the moment it confirms).
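A consumer on the receiving end is ordinary stream-processing code. The sketch below parses a hypothetical webhook batch into typed records; the payload field names (`chain`, `block_number`, `tx_hash`, `token`, `amount_usd`) are assumptions for illustration, since the actual Datastreams schema is defined per integration.

```python
import json
from dataclasses import dataclass

# Hypothetical payload shape. Allium does not publish a fixed webhook schema
# here; the field names below are illustrative, not the real API.
@dataclass
class TransferEvent:
    chain: str
    block_number: int
    tx_hash: str
    token: str
    amount_usd: float

def parse_webhook(body: str) -> list[TransferEvent]:
    """Parse a batch of transfer events from a webhook POST body."""
    events = json.loads(body)
    return [
        TransferEvent(
            chain=e["chain"],
            block_number=int(e["block_number"]),
            tx_hash=e["tx_hash"],
            token=e["token"],
            amount_usd=float(e["amount_usd"]),
        )
        for e in events
    ]

sample = '[{"chain": "base", "block_number": 123, "tx_hash": "0xabc", "token": "USDC", "amount_usd": "250.0"}]'
parsed = parse_webhook(sample)
print(parsed[0].chain, parsed[0].amount_usd)
```

In production the same parsing logic sits behind a Kinesis, Pub/Sub, or Kafka consumer rather than a raw HTTP handler; the point is that the payload is plain typed events, not raw chain data.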
Streaming is more expensive than batch warehouse drops because of throughput SLA requirements. Pricing scales with event volume — high-throughput chains like Solana cost more than Bitcoin.
Use Cases
Three enterprise patterns recur.
Stablecoin reconciliation. Issuers and large stablecoin users (Stripe, Visa) reconcile onchain mints, burns, and transfers against offchain ledgers. Allium's stablecoin tables make this a SQL join. The reconciliation runs continuously — any discrepancy between onchain state and the issuer's books shows up within minutes. Eco's stablecoin execution flows fit naturally into this model — see our stablecoin analytics guide.
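As a concrete sketch, the reconciliation is a left join that flags any onchain mint with no matching (or mismatched) ledger entry. The snippet below runs the pattern against a toy in-memory SQLite schema; `onchain_mints` and `internal_ledger` are illustrative stand-ins, not Allium's actual table names.

```python
import sqlite3

# Toy schema: onchain_mints stands in for an Allium stablecoin table,
# internal_ledger for the issuer's books. Names are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE onchain_mints (tx_hash TEXT, amount REAL);
CREATE TABLE internal_ledger (tx_hash TEXT, amount REAL);
INSERT INTO onchain_mints VALUES ('0xa1', 1000.0), ('0xa2', 500.0);
INSERT INTO internal_ledger VALUES ('0xa1', 1000.0);  -- 0xa2 never booked offchain
""")
# Flag any onchain mint that is missing from, or disagrees with, the ledger.
discrepancies = con.execute("""
    SELECT m.tx_hash, m.amount, l.amount AS booked
    FROM onchain_mints m
    LEFT JOIN internal_ledger l ON l.tx_hash = m.tx_hash
    WHERE l.tx_hash IS NULL OR l.amount != m.amount
""").fetchall()
print(discrepancies)  # [('0xa2', 500.0, None)]
```

In production the same query runs in Snowflake or BigQuery against Allium's stablecoin tables joined to the issuer's own ledger export.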
Compliance and AML. Onchain wallet labels join with internal user IDs to flag risky transactions. Customers like fintech onramps and exchanges push every onchain interaction through Allium's wallet-label tables and risk-scoring views. The output feeds into existing compliance tooling (Sardine, Chainalysis, in-house).
Product analytics. Wallet apps, exchanges, and DeFi protocols measure user behavior with onchain + offchain joins. "Did the user we onboarded last month make a swap on our DEX?" needs onchain swap data joined with the user's KYC record. Allium ships the onchain side; the customer joins it in their warehouse.
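A minimal version of that activation question, run against a toy SQLite schema (the `users` and `dex_swaps` tables and their columns are illustrative, standing in for an internal KYC table and a decoded swap table):

```python
import sqlite3

# Cohort activation sketch: of users onboarded since Jan 1, what fraction
# made at least one swap after signup? Table names are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (user_id TEXT, wallet TEXT, signup_date TEXT);
CREATE TABLE dex_swaps (wallet TEXT, block_time TEXT);
INSERT INTO users VALUES ('u1', '0xaaa', '2026-01-10'), ('u2', '0xbbb', '2026-01-12');
INSERT INTO dex_swaps VALUES ('0xaaa', '2026-01-15');
""")
rate = con.execute("""
    SELECT AVG(activated) FROM (
      SELECT u.user_id,
             EXISTS (SELECT 1 FROM dex_swaps s
                     WHERE s.wallet = u.wallet
                       AND s.block_time >= u.signup_date) AS activated
      FROM users u
      WHERE u.signup_date >= '2026-01-01'
    )
""").fetchone()[0]
print(rate)  # 0.5 -- one of two onboarded users swapped after signup
```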
Comparison: Allium vs Dune
Both are SQL-native. The difference is delivery model and target customer.
| Dimension | Allium | Dune |
| --- | --- | --- |
| Delivery | Customer's warehouse (Snowflake, BigQuery, Databricks) | Hosted SQL editor |
| Target customer | Enterprise data teams | Analysts, traders, indie users |
| Pricing | Custom, ~$5K–$50K+/month | Free, $399/mo, $999/mo public tiers |
| Real-time streaming | Yes (Datastreams) | No (batch only) |
| Cross-chain stitching | Built-in | Limited |
| Custom contract decoding | On request, fast turnaround | Community Spellbook (variable) |
| Public dashboards | No | Yes, core feature |
Teams without an existing warehouse rarely buy Allium — the integration cost is high if you're starting from zero. Teams with mature data infrastructure typically prefer Allium because it slots into existing pipelines.
Setup and Integration
The integration sequence is roughly:
1. Sales call: scope the chains and tables needed, agree on pricing.
2. Customer provides warehouse access (Snowflake account ID, BigQuery project ID, or Databricks workspace).
3. Allium provisions data sharing, typically same-day for Snowflake and 1–2 days for BigQuery.
4. Tables appear in the customer warehouse, queryable as native.
5. For Datastreams: the customer provisions a Kinesis, Pub/Sub, or Kafka endpoint and Allium starts publishing.
From contract signing to first query: typically 1–2 weeks for batch, 2–4 weeks for streaming.
SQL Patterns for Allium Tables
Once Allium tables land in the customer's warehouse, the SQL is standard. Common patterns:
Stablecoin supply by chain. Sum mints minus burns from issuer contracts. Allium's stablecoin tables abstract this — a single query returns supply over time per chain per issuer, no manual joining required.
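A toy version of the mints-minus-burns aggregation, using a hypothetical `stablecoin_events` table in SQLite. Allium's real stablecoin tables pre-compute this, so the example shows the underlying logic rather than their schema.

```python
import sqlite3

# Illustrative schema: one row per mint or burn event.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stablecoin_events (chain TEXT, issuer TEXT, event_type TEXT, amount REAL);
INSERT INTO stablecoin_events VALUES
  ('ethereum', 'issuer_x', 'mint', 1000.0),
  ('ethereum', 'issuer_x', 'burn', 200.0),
  ('base',     'issuer_x', 'mint', 300.0);
""")
# Net supply per chain per issuer: mints count positive, burns negative.
supply = con.execute("""
    SELECT chain, issuer,
           SUM(CASE WHEN event_type = 'mint' THEN amount ELSE -amount END) AS net_supply
    FROM stablecoin_events
    GROUP BY chain, issuer
    ORDER BY chain
""").fetchall()
print(supply)  # [('base', 'issuer_x', 300.0), ('ethereum', 'issuer_x', 800.0)]
```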
Cross-chain transfer attribution. Allium's cross_chain_transfers table joins origin and destination events with the bridge protocol attribution included. Querying for "USDC bridged from Ethereum to Base in the last 7 days" is a one-line WHERE clause.
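Sketched against a simplified `cross_chain_transfers` shape in SQLite; the column names here are assumptions, and the real table carries more attribution fields than this.

```python
import sqlite3

# Simplified stitched-transfers table: one row per completed bridge transfer.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE cross_chain_transfers (
  token TEXT, bridge TEXT, origin_chain TEXT, destination_chain TEXT,
  amount REAL, transfer_date TEXT);
INSERT INTO cross_chain_transfers VALUES
  ('USDC', 'cctp',     'ethereum', 'base',     5000.0, '2026-01-05'),
  ('USDC', 'cctp',     'ethereum', 'base',     2500.0, '2026-01-06'),
  ('USDC', 'stargate', 'ethereum', 'arbitrum',  900.0, '2026-01-06');
""")
# "USDC bridged from Ethereum to Base in the last 7 days" is just a filter.
total = con.execute("""
    SELECT SUM(amount) FROM cross_chain_transfers
    WHERE token = 'USDC'
      AND origin_chain = 'ethereum'
      AND destination_chain = 'base'
      AND transfer_date >= date('2026-01-08', '-7 days')
""").fetchone()[0]
print(total)  # 7500.0
```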
Wallet activity reconstruction. Given a wallet, return its full history across all supported chains. Allium's per-chain transaction tables share a common schema, so a UNION across chains produces a unified history.
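The UNION pattern, shown with two toy per-chain tables sharing one schema (table and column names illustrative):

```python
import sqlite3

# Two per-chain transaction tables with an identical schema, unified below.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ethereum_transactions (wallet TEXT, tx_hash TEXT, block_time TEXT);
CREATE TABLE base_transactions     (wallet TEXT, tx_hash TEXT, block_time TEXT);
INSERT INTO ethereum_transactions VALUES ('0xw', '0xe1', '2026-01-01');
INSERT INTO base_transactions     VALUES ('0xw', '0xb1', '2026-01-02');
""")
# A chain literal tags each branch so the unified history stays attributable.
history = con.execute("""
    SELECT 'ethereum' AS chain, tx_hash, block_time
    FROM ethereum_transactions WHERE wallet = '0xw'
    UNION ALL
    SELECT 'base', tx_hash, block_time
    FROM base_transactions WHERE wallet = '0xw'
    ORDER BY block_time
""").fetchall()
print(history)
```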
DEX volume by venue. Allium's normalized DEX tables group swaps by protocol and chain. Useful for tracking which DEX is dominant on which chain over time.
Token flow graphs. Aggregate transfers between addresses to build a flow graph. Useful for AML, fraud detection, and bot identification. Allium's transfer tables include from/to addresses ready to graph.
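Aggregating transfers into a weighted edge list, the basic building block of a flow graph (illustrative table and column names):

```python
import sqlite3

# Toy transfers table; each (from, to) pair becomes one weighted graph edge.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE token_transfers (from_address TEXT, to_address TEXT, amount_usd REAL);
INSERT INTO token_transfers VALUES
  ('0xa', '0xb', 100.0), ('0xa', '0xb', 50.0), ('0xb', '0xc', 75.0);
""")
edges = con.execute("""
    SELECT from_address, to_address,
           SUM(amount_usd) AS total_usd, COUNT(*) AS n_transfers
    FROM token_transfers
    GROUP BY from_address, to_address
    ORDER BY total_usd DESC
""").fetchall()
print(edges)  # [('0xa', '0xb', 150.0, 2), ('0xb', '0xc', 75.0, 1)]
```

The edge list feeds directly into a graph library or a recursive CTE for multi-hop tracing.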
The schema is documented at docs.allium.so. Most analysts onboarding to Allium become productive within a week.
Latency and Freshness
Latency is one of the most-asked questions about any blockchain warehouse. Allium's published latency targets:
Batch warehouse. 2–10 minutes from block confirmation to table availability. Most chains land within 5 minutes; high-throughput chains (Solana) and slow-block chains (Bitcoin) can take longer.
Datastreams. 1–3 seconds from event to delivery on the customer's stream endpoint. The latency floor is set by the chain itself: an event cannot be reported before the block containing it is produced, so Ethereum's 12-second slot time bounds freshness no matter how fast the pipeline runs.
Cross-chain stitched tables. Lag the slower of the two chains. A CCTP transfer from Ethereum to Base shows up in the stitched table only after both the burn (Ethereum) and the mint (Base) have landed onchain: tens of seconds at best, and longer when the bridge waits for deeper finality before minting.
For most analytics use cases, batch is fast enough. Datastreams matters when the application is real-time — trading systems, immediate user notifications, fraud detection. The pricing premium for streaming reflects the SLA cost.
Allium vs Other Enterprise Sources
Allium is not the only commercial source of enterprise blockchain data. The alternatives customers most often weigh are Coin Metrics and the compliance-first incumbents, Chainalysis and Elliptic.
Coin Metrics. Established in 2017, originally focused on macro Bitcoin and Ethereum metrics. The product has expanded to cover ~50 chains. Coin Metrics' edge is methodology rigor — academic-grade definitions for metrics like "supply held by long-term holders." Pricing is enterprise but typically lower than Allium for batch-only customers. Coin Metrics is the typical choice for institutional macro research; Allium is the choice for application-level data engineering.
Chainalysis Investigations / Elliptic. Compliance-first products. The data underneath is similar to Allium, but the surface is investigation tools — entity graphs, risk scores, sanctions screening. Customers are mostly law enforcement, banks, and exchanges. Not a substitute for Allium for a fintech or DeFi product team.
Self-hosted indexers. Some teams run their own indexer (Goldsky, Subsquid, The Graph) and skip the warehouse layer. The trade-off: lower per-month cost, much higher engineering cost. Self-hosting becomes worth it above a certain scale or when the team has a uniquely specific data model.
Most teams that consider Allium also evaluate Coin Metrics and self-hosted alternatives. The choice usually comes down to delivery model — warehouse drops (Allium), API-only (Coin Metrics, hosted indexers), or compliance UI (Chainalysis, Elliptic).
Common Pipelines Built on Allium
Three reference architectures customers commonly build.
The treasury reconciliation pipeline. Allium tables drop into Snowflake. dbt models join onchain balances and transfers with internal treasury accounts. The output is a daily reconciliation report flagging any discrepancy between onchain state and the issuer's books. Stripe, Visa, and large stablecoin issuers reportedly run this pattern.
The risk-screening pipeline. Allium wallet labels join with internal user IDs. New deposits trigger a SQL lookup that returns risk score, label, and any flags. The decision (accept, hold for review, reject) feeds back into the customer's payments system. Used by fintech onramps and exchanges.
The product-analytics pipeline. Allium decoded events join with the customer's product database. The output: dashboards in Looker, Hex, or Mode showing onchain activity by user cohort. Wallet apps and DEXs use this pattern to measure activation, retention, and feature adoption.
The pipelines have one thing in common: they assume the customer's data infrastructure already exists. Teams without a warehouse, dbt, and BI stack rarely succeed with Allium quickly. Allium's value proposition is "stop doing blockchain ETL"; if the team isn't already running ETL, the value is harder to capture.
Eco's Role
Eco moves stablecoins onchain. Allium observes them. Stablecoin issuers and treasury teams that build on Eco often pipe Allium's stablecoin and cross-chain transfer tables into their internal warehouse for reconciliation. The two products are complementary: Eco at execution, Allium at observation. Onchain transfers settled through Eco land in Allium's tables within minutes. For background on what gets tracked, see our stablecoin onchain analytics guide.
FAQ
How much does Allium cost?
Allium pricing is custom, not published. Public references suggest entry pricing in the high four to low five figures per month for batch warehouse drops, with streaming and full-coverage plans in the mid-five figures. The product targets enterprise data budgets, not individual analysts.
What's the difference between Allium Data and Allium Datastreams?
Allium Data ships decoded blockchain tables to the customer's warehouse with 2–10 minute refresh. Allium Datastreams pushes events in real time (~1–3 second latency) via Kinesis, Pub-Sub, Kafka, or webhook. Most customers buy both; some only need batch.
Which warehouses does Allium support?
Snowflake (via Data Sharing), BigQuery (Authorized Views), and Databricks (Delta Sharing). All three use native sharing protocols — no data movement at the customer end. Tables appear as if locally provisioned.
Does Allium decode my custom contract?
Yes, on request. Customers submit ABIs and contract addresses; Allium adds them to the decoding pipeline. Turnaround is typically 1–3 days for verified contracts, longer for unverified or complex assembly.
Can I use Allium without a data warehouse?
Allium Explorer is the hosted-UI option for customers without a warehouse. It's similar to Dune in surface — SQL editor, query results — but pricing and target customer are still enterprise. Most non-enterprise users pick Dune instead.

