# Measure Check

The `measure` check is specifically designed for Cube.js semantic layer validation. It validates pre-defined measures from your Cube.js data model, making it ideal for semantic layer data quality checks.
## Configuration

| Parameter | Required | Description |
|---|---|---|
| `name` | Yes | Unique name for the check |
| `dataset` | Yes | Cube name or SQL query |
| `type` | Yes | Must be `measure` |
| `measure` | Yes | Cube.js measure name |
| `condition` | Yes | Comparison operator (e.g. `ge`, `gt`) |
| `threshold` | Yes | Value to compare against |
| `dimensions` | No | Cube.js dimensions for grouping |
| `filter` | No | WHERE clause conditions |
| `time_dimension` | No | Time dimension `name` and `granularity` (see the time-based example below) |
## Prerequisites

- A Cube.js instance must be running and accessible
- The datasource must be configured with `type: cube`
- Measures must be defined in your Cube.js schema
## Examples

### Basic Measure Check

```yaml
- name: total_revenue_measure
  dataset: Orders
  type: measure
  measure: totalRevenue
  condition: ge
  threshold: 1000000
```

This check validates that the `totalRevenue` measure from the `Orders` cube is at least 1,000,000.
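For orientation, a check like this boils down to a single-measure Cube.js query. The sketch below reproduces that query with `@cubejs-client/core`; it is an illustration rather than the tool's internal code, the member name `Orders.totalRevenue` is assumed from the `dataset` and `measure` fields above, and the token and URL are placeholders.

```javascript
import cubejs from '@cubejs-client/core';

// Placeholders: substitute your own Cube.js API token and URL.
const cubejsApi = cubejs('CUBEJS_API_TOKEN', {
  apiUrl: 'http://localhost:4000/cubejs-api/v1',
});

// Load the same measure the check evaluates.
cubejsApi
  .load({ measures: ['Orders.totalRevenue'] })
  .then((resultSet) => {
    // A single aggregated row, e.g. [{ 'Orders.totalRevenue': 1250000 }];
    // the check compares this value against the threshold (>= 1000000).
    console.log(resultSet.rawData());
  });
```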
### Measure with Dimensions

```yaml
- name: revenue_by_region
  dataset: Sales
  type: measure
  measure: revenue
  dimensions: [region, quarter]
  condition: gt
  threshold: 50000
```

This check ensures that revenue exceeds $50K for each region and quarter combination.
### Complex Measure

```yaml
- name: customer_lifetime_value
  dataset: Customers
  type: measure
  measure: lifetimeValue
  dimensions: [customerSegment]
  condition: ge
  threshold: 500
```

This check validates that customer lifetime value is at least 500 for each customer segment.
### Time-based Measure

```yaml
- name: monthly_active_users
  dataset: Users
  type: measure
  measure: monthlyActiveUsers
  condition: gt
  threshold: 10000
  time_dimension:
    name: createdAt
    granularity: month
```

This check ensures monthly active users exceed 10K.
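In Cube.js query terms, the `time_dimension` block maps onto the `timeDimensions` field. The object below is a sketch of the implied query; the member names assume a `Users` cube exposing `monthlyActiveUsers` and `createdAt`, and the tool may construct the query differently internally.

```javascript
// Sketch of the Cube.js query implied by the check above: one row per month,
// which the check would then compare against the 10000 threshold.
const query = {
  measures: ['Users.monthlyActiveUsers'],
  timeDimensions: [
    {
      dimension: 'Users.createdAt',
      granularity: 'month',
    },
  ],
};
```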
## Cube.js Integration

### Sample Cube Schema

```javascript
// schemas/Orders.js
cube(`Orders`, {
  sql: `SELECT * FROM orders`,

  measures: {
    totalRevenue: {
      type: `sum`,
      sql: `order_amount`
    },
    averageOrderValue: {
      type: `avg`,
      sql: `order_amount`
    },
    orderCount: {
      type: `count`
    }
  },

  dimensions: {
    region: {
      sql: `region`,
      type: `string`
    },
    createdAt: {
      sql: `created_at`,
      type: `time`
    }
  }
});
```
### Datasource Configuration

```yaml
datasources:
  - name: cube
    type: cube
    uri: http://localhost:4000/cubejs-api/v1
```
## Generated SQL

The measure check generates Cube.js API calls that translate to optimized SQL:

```sql
-- Generated by Cube.js for the totalRevenue measure
SELECT SUM(order_amount) AS totalRevenue
FROM orders
```

With dimensions:

```sql
-- Generated by Cube.js with dimensions
SELECT
  region,
  quarter,
  SUM(order_amount) AS revenue
FROM orders
GROUP BY region, quarter
```
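To make those API calls concrete, the grouped query can be replayed by hand against the Cube.js REST `/load` endpoint. This is a verification sketch, not the check's implementation; `Sales.revenue`, `Sales.region`, and `Sales.quarter` are assumed member names, and the token and URL are placeholders.

```javascript
// Sketch: replay the grouped measure query against the Cube.js /load endpoint.
const query = {
  measures: ['Sales.revenue'],
  dimensions: ['Sales.region', 'Sales.quarter'],
};

const url =
  'http://localhost:4000/cubejs-api/v1/load?query=' +
  encodeURIComponent(JSON.stringify(query));

fetch(url, { headers: { Authorization: 'CUBEJS_API_TOKEN' } })
  .then((res) => res.json())
  .then((body) => {
    // body.data holds one row per region/quarter pair; a dimension-based
    // check compares each row's revenue against its threshold.
    console.log(body.data);
  });
```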
## Use Cases
- Semantic Layer Validation: Ensure business metrics are correct
- KPI Monitoring: Validate key performance indicators
- Business Rules: Check calculated business measures
- Data Consistency: Ensure measures match expectations
- Cube.js Testing: Validate Cube.js schema definitions
## Advantages over Raw SQL
- Pre-calculated: Measures are optimized by Cube.js
- Business Logic: Includes complex business calculations
- Caching: Benefits from Cube.js caching layer
- Consistency: Uses same definitions as dashboards
- Security: Respects Cube.js security context
## Example Results

```
✓ total_revenue_measure: 1250000 (≥ 1000000)
✗ revenue_by_region_west_q1: 45000 (> 50000)
✓ customer_lifetime_value_premium: 750 (≥ 500)
```
## Common Measure Types

### Revenue Measures

```yaml
- name: monthly_recurring_revenue
  dataset: Subscriptions
  type: measure
  measure: mrr
  condition: gt
  threshold: 100000
```
### User Engagement Measures

```yaml
- name: daily_active_users
  dataset: UserActivity
  type: measure
  measure: dau
  condition: ge
  threshold: 5000
```
### Conversion Measures

```yaml
- name: conversion_rate
  dataset: Funnel
  type: measure
  measure: conversionRate
  condition: ge
  threshold: 0.05
```
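A ratio like `conversionRate` normally lives in the cube itself as a calculated measure of type `number`. The schema fragment below is an illustrative sketch of how a `Funnel` cube might define it; the table, column, and measure names are assumptions, not part of this tool.

```javascript
// schemas/Funnel.js (illustrative sketch; table and column names are assumed)
cube(`Funnel`, {
  sql: `SELECT * FROM funnel_events`,

  measures: {
    totalVisitors: {
      type: `countDistinct`,
      sql: `visitor_id`
    },
    convertedVisitors: {
      type: `countDistinct`,
      sql: `visitor_id`,
      filters: [{ sql: `${CUBE}.converted = true` }]
    },
    conversionRate: {
      // Calculated measure: share of visitors that converted.
      type: `number`,
      sql: `1.0 * ${convertedVisitors} / NULLIF(${totalVisitors}, 0)`
    }
  }
});
```

With a definition along these lines, the `conversion_rate` check above compares the computed ratio against `0.05`.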
## Error Handling
Common issues and solutions:
- Cube not found: Verify the cube name matches your schema (a diagnostic sketch follows this list)
- Measure not found: Check measure exists in cube definition
- API connection: Ensure Cube.js instance is accessible
- Security: Verify API token and permissions
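When a check fails because a cube or measure cannot be found, it helps to list what the Cube.js API actually exposes. The `/meta` endpoint is part of the Cube.js REST API; the snippet below is a small diagnostic sketch (token and URL are placeholders), and a successful response also confirms connectivity and token permissions.

```javascript
// Diagnostic sketch: list the cubes and measures the Cube.js API exposes,
// to confirm that the names used in checks (e.g. Orders.totalRevenue) exist.
fetch('http://localhost:4000/cubejs-api/v1/meta', {
  headers: { Authorization: 'CUBEJS_API_TOKEN' },
})
  .then((res) => res.json())
  .then((meta) => {
    for (const cube of meta.cubes) {
      console.log(
        cube.name,
        cube.measures.map((m) => m.name)
      );
    }
  });
```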
## Performance Tips

- Pre-aggregations: Use Cube.js pre-aggregations for large datasets (see the sketch below)
- Caching: Leverage Cube.js caching for repeated checks
- Partitioning: Use time-based partitioning in cube schema
- Indexes: Ensure underlying tables have proper indexes
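As a sketch of the first tip, a rollup pre-aggregation can be added to the sample `Orders` cube so that measure checks read from a pre-built table instead of scanning raw orders. The rollup members and granularity below are illustrative; choose them to match the measures and dimensions your checks actually query.

```javascript
// Illustrative rollup for the sample Orders cube: pre-aggregates totalRevenue
// and orderCount by region and month so matching queries skip the raw table.
cube(`Orders`, {
  sql: `SELECT * FROM orders`,

  preAggregations: {
    revenueByRegionMonthly: {
      measures: [CUBE.totalRevenue, CUBE.orderCount],
      dimensions: [CUBE.region],
      timeDimension: CUBE.createdAt,
      granularity: `month`
    }
  },

  measures: {
    totalRevenue: { type: `sum`, sql: `order_amount` },
    orderCount: { type: `count` }
  },

  dimensions: {
    region: { sql: `region`, type: `string` },
    createdAt: { sql: `created_at`, type: `time` }
  }
});
```

Cube.js routes queries that match the rollup's members and granularity to the pre-aggregated table automatically, which also speeds up repeated checks.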