Microsoft Fabric OneLake Connector
This connector extracts metadata from Microsoft Fabric OneLake, including workspaces, lakehouses, warehouses, schemas, and tables.
Quick Start
- Set up authentication - Configure Azure credentials (see Prerequisites)
- Grant permissions - Ensure your identity has `Workspace.Read.All` and workspace access
- Configure recipe - Use `fabric-onelake_recipe.yml` as a template
- Run ingestion - Execute `datahub ingest -c fabric-onelake_recipe.yml`
Key Features
- Workspace, Lakehouse, Warehouse, and Schema containers
- Table datasets with proper subtypes
- Automatic detection and handling of schemas-enabled and schemas-disabled lakehouses
- Pattern-based filtering for workspaces, lakehouses, warehouses, and tables
- Stateful ingestion for stale entity removal
- Multiple authentication methods (Service Principal, Managed Identity, Azure CLI, DefaultAzureCredential)
Important Capabilities
| Capability | Status | Notes |
|---|---|---|
| Asset Containers | ✅ | Enabled by default. |
| Detect Deleted Entities | ✅ | Enabled by default via stateful ingestion. |
| Platform Instance | ✅ | Enabled by default. |
| Schema Metadata | ✅ | Enabled by default. |
Extracts metadata from Microsoft Fabric OneLake.
CLI based Ingestion
Starter Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide.
# Example recipe for Microsoft Fabric OneLake source
source:
type: fabric-onelake
config:
# Authentication (using service principal)
credential:
authentication_method: service_principal
client_id: ${AZURE_CLIENT_ID}
client_secret: ${AZURE_CLIENT_SECRET}
tenant_id: ${AZURE_TENANT_ID}
# Optional: Platform instance (use as tenant identifier)
# This groups all workspaces under a tenant-level container
# platform_instance: "contoso-tenant"
# Optional: Environment
# env: PROD
# Optional: Filter workspaces by name pattern
# workspace_pattern:
# allow:
# - ".*" # Allow all workspaces by default
# deny: []
# Optional: Filter lakehouses by name pattern
# lakehouse_pattern:
# allow:
# - ".*" # Allow all lakehouses by default
# deny: []
# Optional: Filter warehouses by name pattern
# warehouse_pattern:
# allow:
# - ".*" # Allow all warehouses by default
# deny: []
# Optional: Filter tables by name pattern
# Format: 'schema.table' or just 'table' for default schema
# table_pattern:
# allow:
# - ".*" # Allow all tables by default
# deny: []
# Feature flags
extract_lakehouses: true
extract_warehouses: true
extract_schemas: true # Set to false to skip schema containers
# Optional: API timeout (seconds)
# api_timeout: 30
# Optional: Stateful ingestion for stale entity removal
# stateful_ingestion:
# enabled: true
# remove_stale_metadata: true
# Optional: Schema extraction configuration
# extract_schema:
# enabled: true # Enable schema extraction (default: true)
# method: sql_analytics_endpoint
# Optional: SQL Analytics Endpoint configuration
# sql_endpoint:
# enabled: true # Enable SQL endpoint connection (default: true)
# odbc_driver: "ODBC Driver 18 for SQL Server" # ODBC driver name (default: "ODBC Driver 18 for SQL Server")
# encrypt: "yes" # Enable encryption (default: "yes")
# trust_server_certificate: "no" # Trust server certificate (default: "no")
# query_timeout: 30 # Timeout for SQL queries in seconds (default: 30)
sink:
type: datahub-rest
config:
server: "http://localhost:8080"
Config Details
Note that a `.` is used to denote nested fields in the YAML recipe.
| Field | Description |
|---|---|
| `api_timeout` (integer) | Timeout for REST API calls in seconds. Default: `30` |
| `extract_lakehouses` (boolean) | Whether to extract lakehouses and their tables. Default: `True` |
| `extract_schemas` (boolean) | Whether to extract schema containers. If `False`, tables will be placed directly under lakehouse/warehouse containers. Default: `True` |
| `extract_warehouses` (boolean) | Whether to extract warehouses and their tables. Default: `True` |
| `platform_instance` (string or null) | The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details. Default: `None` |
| `env` (string) | The environment that all assets produced by this connector belong to. Default: `PROD` |
| `credential` (AzureCredentialConfig) | Unified Azure authentication configuration, reusable across Azure connectors. Supports multiple authentication methods and returns a TokenCredential that works with any Azure SDK client. |
| `credential.authentication_method` (enum) | One of: `default`, `service_principal`, `managed_identity`, `cli` |
| `credential.client_id` (string or null) | Azure Application (client) ID. Required for service_principal authentication. Find this in Azure Portal > App registrations > Your app > Overview. Default: `None` |
| `credential.client_secret` (string(password) or null) | Azure client secret. Required for service_principal authentication. Create in Azure Portal > App registrations > Your app > Certificates & secrets. Default: `None` |
| `credential.exclude_cli_credential` (boolean) | When using 'default' authentication, exclude the Azure CLI credential. Useful in production to avoid accidentally using developer credentials. Default: `False` |
| `credential.exclude_environment_credential` (boolean) | When using 'default' authentication, exclude environment variables. Environment variables checked: `AZURE_CLIENT_ID`, `AZURE_CLIENT_SECRET`, `AZURE_TENANT_ID`. Default: `False` |
| `credential.exclude_managed_identity_credential` (boolean) | When using 'default' authentication, exclude managed identity. Useful during local development when managed identity is not available. Default: `False` |
| `credential.managed_identity_client_id` (string or null) | Client ID for a user-assigned managed identity. Leave empty to use the system-assigned managed identity. Only used when authentication_method is 'managed_identity'. Default: `None` |
| `credential.tenant_id` (string or null) | Azure tenant (directory) ID. Required for service_principal authentication. Find this in Azure Portal > Microsoft Entra ID > Overview. Default: `None` |
| `extract_schema` (ExtractSchemaConfig) | Configuration for schema extraction. |
| `extract_schema.enabled` (boolean) | Enable schema extraction. Default: `True` |
| `extract_schema.method` (string) | Schema extraction method. Currently only 'sql_analytics_endpoint' is supported. Default: `sql_analytics_endpoint` |
| `lakehouse_pattern` (AllowDenyPattern) | Regex patterns to filter lakehouses by name. |
| `lakehouse_pattern.ignoreCase` (boolean or null) | Whether to ignore case sensitivity during pattern matching. Default: `True` |
| `sql_endpoint` (SqlEndpointConfig or null) | SQL Analytics Endpoint configuration for schema extraction. Required when `extract_schema.enabled=True` and `extract_schema.method='sql_analytics_endpoint'`. |
| `sql_endpoint.enabled` (boolean) | Enable the SQL Analytics Endpoint connection. Default: `True` |
| `sql_endpoint.encrypt` (enum) | One of: `yes`, `no`, `mandatory`, `optional`, `strict`. Default: `yes` |
| `sql_endpoint.odbc_driver` (string) | ODBC driver name for SQL Server connections. Default: `ODBC Driver 18 for SQL Server` |
| `sql_endpoint.query_timeout` (integer) | Timeout for SQL queries in seconds. Default: `30` |
| `sql_endpoint.trust_server_certificate` (enum) | One of: `yes`, `no`. Default: `no` |
| `table_pattern` (AllowDenyPattern) | Regex patterns to filter tables by name. Format: 'schema.table' or just 'table' for the default schema. |
| `table_pattern.ignoreCase` (boolean or null) | Whether to ignore case sensitivity during pattern matching. Default: `True` |
| `warehouse_pattern` (AllowDenyPattern) | Regex patterns to filter warehouses by name. |
| `warehouse_pattern.ignoreCase` (boolean or null) | Whether to ignore case sensitivity during pattern matching. Default: `True` |
| `workspace_pattern` (AllowDenyPattern) | Regex patterns to filter workspaces by name. |
| `workspace_pattern.ignoreCase` (boolean or null) | Whether to ignore case sensitivity during pattern matching. Default: `True` |
| `stateful_ingestion` (StatefulStaleMetadataRemovalConfig or null) | Configuration for stateful ingestion and stale entity removal. When enabled, tracks ingested entities and removes those that no longer exist in Fabric. Default: `None` |
| `stateful_ingestion.enabled` (boolean) | Whether or not to enable stateful ingestion. Default: `True` if a pipeline_name is set and either a datahub-rest sink or `datahub_api` is specified, otherwise `False`. |
| `stateful_ingestion.fail_safe_threshold` (number) | Prevents a large number of soft deletes, and the state from committing, when accidental changes to the source configuration cause the relative change in entities (compared to the previous state) to exceed this threshold. Default: `75.0` |
| `stateful_ingestion.remove_stale_metadata` (boolean) | Soft-deletes entities that were present in the last successful run but missing in the current run, when stateful_ingestion is enabled. Default: `True` |
The JSONSchema for this configuration is inlined below.
{
"$defs": {
"AllowDenyPattern": {
"additionalProperties": false,
"description": "A class to store allow deny regexes",
"properties": {
"allow": {
"default": [
".*"
],
"description": "List of regex patterns to include in ingestion",
"items": {
"type": "string"
},
"title": "Allow",
"type": "array"
},
"deny": {
"default": [],
"description": "List of regex patterns to exclude from ingestion.",
"items": {
"type": "string"
},
"title": "Deny",
"type": "array"
},
"ignoreCase": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "null"
}
],
"default": true,
"description": "Whether to ignore case sensitivity during pattern matching.",
"title": "Ignorecase"
}
},
"title": "AllowDenyPattern",
"type": "object"
},
"AzureAuthenticationMethod": {
"description": "Supported Azure authentication methods.\n\n- DEFAULT: Uses DefaultAzureCredential which auto-detects credentials from\n environment variables, managed identity, Azure CLI, etc.\n- SERVICE_PRINCIPAL: Uses client ID, client secret, and tenant ID\n- MANAGED_IDENTITY: Uses Azure Managed Identity (system or user-assigned)\n- CLI: Uses Azure CLI credential (requires `az login`)",
"enum": [
"default",
"service_principal",
"managed_identity",
"cli"
],
"title": "AzureAuthenticationMethod",
"type": "string"
},
"AzureCredentialConfig": {
"additionalProperties": false,
"description": "Unified Azure authentication configuration.\n\nThis class provides a reusable authentication configuration that can be\ncomposed into any Azure connector's configuration. It supports multiple\nauthentication methods and returns a TokenCredential that works with\nany Azure SDK client.\n\nExample usage in a connector config:\n class MyAzureConnectorConfig(ConfigModel):\n credential: AzureCredentialConfig = Field(\n default_factory=AzureCredentialConfig,\n description=\"Azure authentication configuration\"\n )\n subscription_id: str = Field(...)",
"properties": {
"authentication_method": {
"$ref": "#/$defs/AzureAuthenticationMethod",
"default": "default",
"description": "Authentication method to use. Options: 'default' (auto-detects from environment), 'service_principal' (client ID + secret + tenant), 'managed_identity' (Azure Managed Identity), 'cli' (Azure CLI credential). Recommended: Use 'default' which tries multiple methods automatically."
},
"client_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Azure Application (client) ID. Required for service_principal authentication. Find this in Azure Portal > App registrations > Your app > Overview.",
"title": "Client Id"
},
"client_secret": {
"anyOf": [
{
"format": "password",
"type": "string",
"writeOnly": true
},
{
"type": "null"
}
],
"default": null,
"description": "Azure client secret. Required for service_principal authentication. Create in Azure Portal > App registrations > Your app > Certificates & secrets.",
"title": "Client Secret"
},
"tenant_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Azure tenant (directory) ID. Required for service_principal authentication. Find this in Azure Portal > Microsoft Entra ID > Overview.",
"title": "Tenant Id"
},
"managed_identity_client_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Client ID for user-assigned managed identity. Leave empty to use system-assigned managed identity. Only used when authentication_method is 'managed_identity'.",
"title": "Managed Identity Client Id"
},
"exclude_cli_credential": {
"default": false,
"description": "When using 'default' authentication, exclude Azure CLI credential. Useful in production to avoid accidentally using developer credentials.",
"title": "Exclude Cli Credential",
"type": "boolean"
},
"exclude_environment_credential": {
"default": false,
"description": "When using 'default' authentication, exclude environment variables. Environment variables checked: AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID.",
"title": "Exclude Environment Credential",
"type": "boolean"
},
"exclude_managed_identity_credential": {
"default": false,
"description": "When using 'default' authentication, exclude managed identity. Useful during local development when managed identity is not available.",
"title": "Exclude Managed Identity Credential",
"type": "boolean"
}
},
"title": "AzureCredentialConfig",
"type": "object"
},
"ExtractSchemaConfig": {
"additionalProperties": false,
"description": "Configuration for schema extraction.",
"properties": {
"enabled": {
"default": true,
"description": "Enable schema extraction",
"title": "Enabled",
"type": "boolean"
},
"method": {
"const": "sql_analytics_endpoint",
"default": "sql_analytics_endpoint",
"description": "Schema extraction method. Currently only 'sql_analytics_endpoint' is supported.",
"title": "Method",
"type": "string"
}
},
"title": "ExtractSchemaConfig",
"type": "object"
},
"SqlEndpointConfig": {
"additionalProperties": false,
"description": "Configuration for SQL Analytics Endpoint schema extraction.\n\nReferences:\n- https://learn.microsoft.com/en-us/fabric/data-warehouse/warehouse-connectivity\n- https://learn.microsoft.com/en-us/fabric/data-warehouse/connect-to-fabric-data-warehouse\n- https://learn.microsoft.com/en-us/fabric/data-warehouse/what-is-the-sql-analytics-endpoint-for-a-lakehouse\n- https://learn.microsoft.com/en-us/sql/connect/odbc/dsn-connection-string-attribute?view=sql-server-ver17#encrypt",
"properties": {
"enabled": {
"default": true,
"description": "Enable SQL Analytics Endpoint connection",
"title": "Enabled",
"type": "boolean"
},
"odbc_driver": {
"default": "ODBC Driver 18 for SQL Server",
"description": "ODBC driver name for SQL Server connections.",
"title": "Odbc Driver",
"type": "string"
},
"encrypt": {
"default": "yes",
"description": "Enable encryption for SQL Server connections. Valid values: 'yes'/'mandatory' (enable encryption, default in ODBC Driver 18.0+), 'no'/'optional' (disable encryption), or 'strict' (ODBC Driver 18.0+, TDS 8.0 protocol only, always verifies server certificate). See: https://learn.microsoft.com/en-us/sql/connect/odbc/dsn-connection-string-attribute?view=sql-server-ver17#encrypt",
"enum": [
"yes",
"no",
"mandatory",
"optional",
"strict"
],
"title": "Encrypt",
"type": "string"
},
"trust_server_certificate": {
"default": "no",
"description": "Trust server certificate without validation. Set to 'yes' only if certificate validation fails. When 'encrypt=strict', this setting is ignored and certificate validation is always performed. See: https://learn.microsoft.com/en-us/sql/connect/odbc/dsn-connection-string-attribute?view=sql-server-ver17",
"enum": [
"yes",
"no"
],
"title": "Trust Server Certificate",
"type": "string"
},
"query_timeout": {
"default": 30,
"description": "Timeout for SQL queries in seconds",
"maximum": 300,
"minimum": 1,
"title": "Query Timeout",
"type": "integer"
}
},
"title": "SqlEndpointConfig",
"type": "object"
},
"StatefulStaleMetadataRemovalConfig": {
"additionalProperties": false,
"description": "Base specialized config for Stateful Ingestion with stale metadata removal capability.",
"properties": {
"enabled": {
"default": false,
"description": "Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or `datahub_api` is specified, otherwise False",
"title": "Enabled",
"type": "boolean"
},
"remove_stale_metadata": {
"default": true,
"description": "Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.",
"title": "Remove Stale Metadata",
"type": "boolean"
},
"fail_safe_threshold": {
"default": 75.0,
"description": "Prevents large amount of soft deletes & the state from committing from accidental changes to the source configuration if the relative change percent in entities compared to the previous state is above the 'fail_safe_threshold'.",
"maximum": 100.0,
"minimum": 0.0,
"title": "Fail Safe Threshold",
"type": "number"
}
},
"title": "StatefulStaleMetadataRemovalConfig",
"type": "object"
}
},
"additionalProperties": false,
"description": "Configuration for Fabric OneLake source.\n\nThis connector extracts metadata from Microsoft Fabric OneLake including:\n- Workspaces as Containers\n- Lakehouses as Containers\n- Warehouses as Containers\n- Schemas as Containers\n- Tables as Datasets with schema metadata\n\nNote on Tenant/Platform Instance:\nThe Fabric REST API does not expose tenant-level endpoints or operations.\nAll API operations are performed at the workspace level. To represent tenant-level\norganization in DataHub, users should set the `platform_instance` configuration\nfield to their tenant identifier (e.g., \"contoso-tenant\"). This will be included\nin all container and dataset URNs, effectively grouping all workspaces under the\nspecified platform instance/tenant.",
"properties": {
"env": {
"default": "PROD",
"description": "The environment that all assets produced by this connector belong to",
"title": "Env",
"type": "string"
},
"platform_instance": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.",
"title": "Platform Instance"
},
"stateful_ingestion": {
"anyOf": [
{
"$ref": "#/$defs/StatefulStaleMetadataRemovalConfig"
},
{
"type": "null"
}
],
"default": null,
"description": "Configuration for stateful ingestion and stale entity removal. When enabled, tracks ingested entities and removes those that no longer exist in Fabric."
},
"credential": {
"$ref": "#/$defs/AzureCredentialConfig",
"description": "Azure authentication configuration. Supports service principal, managed identity, Azure CLI, or auto-detection (DefaultAzureCredential). See AzureCredentialConfig for detailed options."
},
"workspace_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter workspaces by name. Example: allow=['prod-.*'], deny=['.*-test']"
},
"lakehouse_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter lakehouses by name. Applied to all workspaces matching workspace_pattern."
},
"warehouse_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter warehouses by name. Applied to all workspaces matching workspace_pattern."
},
"table_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter tables by name. Format: 'schema.table' or just 'table' for default schema."
},
"extract_lakehouses": {
"default": true,
"description": "Whether to extract lakehouses and their tables.",
"title": "Extract Lakehouses",
"type": "boolean"
},
"extract_warehouses": {
"default": true,
"description": "Whether to extract warehouses and their tables.",
"title": "Extract Warehouses",
"type": "boolean"
},
"extract_schemas": {
"default": true,
"description": "Whether to extract schema containers. If False, tables will be directly under lakehouse/warehouse containers.",
"title": "Extract Schemas",
"type": "boolean"
},
"api_timeout": {
"default": 30,
"description": "Timeout for REST API calls in seconds.",
"maximum": 300,
"minimum": 1,
"title": "Api Timeout",
"type": "integer"
},
"extract_schema": {
"$ref": "#/$defs/ExtractSchemaConfig",
"description": "Configuration for schema extraction from tables."
},
"sql_endpoint": {
"anyOf": [
{
"$ref": "#/$defs/SqlEndpointConfig"
},
{
"type": "null"
}
],
"description": "SQL Analytics Endpoint configuration for schema extraction. Required when extract_schema.enabled=True and extract_schema.method='sql_analytics_endpoint'."
}
},
"title": "FabricOneLakeSourceConfig",
"type": "object"
}
Microsoft Fabric OneLake
Concept Mapping
| Microsoft Fabric | DataHub Entity | Notes |
|---|---|---|
| Workspace | Container (subtype: Fabric Workspace) | Top-level organizational unit |
| Lakehouse | Container (subtype: Fabric Lakehouse) | Contains schemas and tables |
| Warehouse | Container (subtype: Fabric Warehouse) | Contains schemas and tables |
| Schema | Container (subtype: Fabric Schema) | Logical grouping within lakehouse/warehouse |
| Table | Dataset | Tables within schemas |
Hierarchy Structure
Platform (fabric-onelake)
└── Workspace (Container)
├── Lakehouse (Container)
│ └── Schema (Container)
│ └── Table (Dataset)
└── Warehouse (Container)
└── Schema (Container)
└── Table/View (Dataset)
Platform Instance as Tenant
The Fabric REST API does not expose tenant-level endpoints. To represent tenant-level organization in DataHub, set the platform_instance configuration field to your tenant identifier (e.g., "contoso-tenant"). This will be included in all container and dataset URNs, effectively grouping all workspaces under the specified platform instance/tenant.
Prerequisites
Authentication
The connector supports multiple Azure authentication methods:
| Method | Best For | Configuration |
|---|---|---|
| Service Principal | Production environments | authentication_method: service_principal |
| Managed Identity | Azure-hosted deployments (VMs, AKS, App Service) | authentication_method: managed_identity |
| Azure CLI | Local development | authentication_method: cli (run az login first) |
| DefaultAzureCredential | Flexible environments | authentication_method: default |
For service principal setup, see Register an application with Microsoft Entra ID.
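The `credential` block maps onto the Azure Identity library's credential types. Here is a rough sketch of that mapping, assuming the azure-identity package; the `build_credential` helper is illustrative, not the connector's actual code:

```python
from azure.identity import (
    AzureCliCredential,
    ClientSecretCredential,
    DefaultAzureCredential,
    ManagedIdentityCredential,
)

def build_credential(cfg: dict):
    """Hypothetical helper: map a `credential` config block to a TokenCredential."""
    method = cfg.get("authentication_method", "default")
    if method == "service_principal":
        return ClientSecretCredential(
            tenant_id=cfg["tenant_id"],
            client_id=cfg["client_id"],
            client_secret=cfg["client_secret"],
        )
    if method == "managed_identity":
        # client_id is only needed for user-assigned identities.
        return ManagedIdentityCredential(client_id=cfg.get("managed_identity_client_id"))
    if method == "cli":
        return AzureCliCredential()  # requires a prior `az login`
    # "default" auto-detects; the exclude_* config flags map onto kwargs of the same name.
    return DefaultAzureCredential(
        exclude_cli_credential=cfg.get("exclude_cli_credential", False),
        exclude_environment_credential=cfg.get("exclude_environment_credential", False),
        exclude_managed_identity_credential=cfg.get("exclude_managed_identity_credential", False),
    )
```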
Required Permissions
The connector requires read-only access to Fabric workspaces and their contents. The authenticated identity (service principal, managed identity, or user) must have:
Workspace-Level Permissions:
- `Workspace.Read.All` or `Workspace.ReadWrite.All` (Microsoft Entra delegated scope)
- Viewer role or higher in the Fabric workspaces you want to ingest
API Permissions: The service principal or user must have the following Microsoft Entra API permissions:
- `Workspace.Read.All` (delegated) - Required to list and read workspace metadata
- Or `Workspace.ReadWrite.All` (delegated) - Provides read and write access
Token Audiences: The connector uses two different token audiences depending on the operation:
- Fabric REST API (`https://api.fabric.microsoft.com`): Uses the Power BI API scope (`https://analysis.windows.net/powerbi/api/.default`) for listing workspaces, lakehouses, warehouses, and basic table metadata
- OneLake Delta Table APIs (`https://onelake.table.fabric.microsoft.com`): Uses the Storage audience (`https://storage.azure.com/.default`) for accessing schemas and tables in schemas-enabled lakehouses
The connector handles both token audiences automatically. For schemas-enabled lakehouses, it uses the OneLake Delta Table APIs with Storage-audience tokens; for schemas-disabled lakehouses, it uses the standard Fabric REST API.
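As a minimal illustration of the two audiences, any azure-identity TokenCredential can mint tokens for both scopes named above (DefaultAzureCredential is just one option here):

```python
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Fabric REST API (workspaces, lakehouses, warehouses, basic table metadata)
fabric_token = credential.get_token("https://analysis.windows.net/powerbi/api/.default")

# OneLake Delta Table APIs (schemas and tables in schemas-enabled lakehouses)
storage_token = credential.get_token("https://storage.azure.com/.default")

print(fabric_token.expires_on, storage_token.expires_on)
```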
OneLake Data Access Permissions: For schemas-enabled lakehouses, you may also need OneLake data access permissions:
- If OneLake security is enabled on your lakehouse, ensure your identity has Read or ReadWrite permissions on the lakehouse item
- These permissions are separate from workspace roles and are managed in the Fabric portal under the lakehouse's security settings
Note: The connector automatically detects whether a lakehouse has schemas enabled and uses the appropriate API endpoint and token audience. No additional configuration is required.
Granting Permissions
For Service Principal:
- Register an application in Microsoft Entra ID (Azure AD)
- Grant API permissions:
- Navigate to Azure Portal > App registrations > Your app > API permissions
- Add permission: Power BI Service > Delegated permissions > `Workspace.Read.All`
- Click Grant admin consent (if you have admin rights)
- Assign workspace roles:
- In Fabric portal, navigate to each workspace
- Go to Workspace settings > Access
- Add your service principal and assign Viewer role or higher
For Managed Identity:
- Enable system-assigned managed identity on your Azure resource (VM, AKS, App Service, etc.)
- Assign the managed identity to Fabric workspaces with Viewer role or higher
- The connector will automatically use the managed identity for authentication
Configuration
Basic Recipe
source:
type: fabric-onelake
config:
# Authentication (using service principal)
credential:
authentication_method: service_principal
client_id: ${AZURE_CLIENT_ID}
client_secret: ${AZURE_CLIENT_SECRET}
tenant_id: ${AZURE_TENANT_ID}
# Optional: Platform instance (use as tenant identifier)
# platform_instance: "contoso-tenant"
# Optional: Environment
# env: PROD
# Optional: Filter workspaces by name pattern
# workspace_pattern:
# allow:
# - "prod-.*"
# deny:
# - ".*-test"
# Optional: Filter lakehouses by name pattern
# lakehouse_pattern:
# allow:
# - ".*"
# deny: []
# Optional: Filter warehouses by name pattern
# warehouse_pattern:
# allow:
# - ".*"
# deny: []
# Optional: Filter tables by name pattern
# table_pattern:
# allow:
# - ".*"
# deny: []
sink:
type: datahub-rest
config:
server: "http://localhost:8080"
Advanced Configuration
source:
type: fabric-onelake
config:
credential:
authentication_method: service_principal
client_id: ${AZURE_CLIENT_ID}
client_secret: ${AZURE_CLIENT_SECRET}
tenant_id: ${AZURE_TENANT_ID}
# Platform instance (represents tenant)
platform_instance: "contoso-tenant"
# Environment
env: PROD
# Filtering
workspace_pattern:
allow:
- "prod-.*"
- "shared-.*"
deny:
- ".*-test"
- ".*-dev"
lakehouse_pattern:
allow:
- ".*"
deny:
- ".*-backup"
warehouse_pattern:
allow:
- ".*"
deny: []
table_pattern:
allow:
- ".*"
deny:
- ".*_temp"
- ".*_backup"
# Feature flags
extract_lakehouses: true
extract_warehouses: true
extract_schemas: true # Set to false to skip schema containers
# API timeout (seconds)
api_timeout: 30
# Stateful ingestion (optional)
stateful_ingestion:
enabled: true
remove_stale_metadata: true
sink:
type: datahub-rest
config:
server: "http://localhost:8080"
Using Managed Identity
source:
type: fabric-onelake
config:
credential:
authentication_method: managed_identity
# For user-assigned managed identity, specify client_id
# client_id: ${MANAGED_IDENTITY_CLIENT_ID}
platform_instance: "contoso-tenant"
env: PROD
sink:
type: datahub-rest
config:
server: "http://localhost:8080"
Using Azure CLI (Local Development)
source:
type: fabric-onelake
config:
credential:
authentication_method: cli
# Run 'az login' first
platform_instance: "contoso-tenant"
env: DEV
sink:
type: datahub-rest
config:
server: "http://localhost:8080"
Schema Extraction
Schema extraction (column metadata) is supported via the SQL Analytics Endpoint. This feature extracts column names, data types, nullability, and ordinal positions from tables in both Lakehouses and Warehouses.
Prerequisites for Schema Extraction
Schema extraction via SQL Analytics Endpoint requires ODBC drivers to be installed on the system.
1. ODBC Driver Manager
First, install the ODBC driver manager (UnixODBC) on your system:
Ubuntu/Debian:
sudo apt-get update
sudo apt-get install -y unixodbc unixodbc-dev
RHEL/CentOS/Fedora:
# RHEL/CentOS 7/8
sudo yum install -y unixODBC unixODBC-devel
# Fedora / RHEL 9+
sudo dnf install -y unixODBC unixODBC-devel
macOS:
brew install unixodbc
2. Microsoft ODBC Driver for SQL Server
Install the Microsoft ODBC Driver 18 for SQL Server (required for connecting to the Fabric SQL Analytics Endpoint):
Ubuntu 20.04/22.04:
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y msodbcsql18
RHEL/CentOS 7/8:
sudo curl -o /etc/yum.repos.d/mssql-release.repo https://packages.microsoft.com/config/rhel/$(rpm -E %{rhel})/mssql-release.repo
sudo ACCEPT_EULA=Y yum install -y msodbcsql18
RHEL 9 / Fedora:
sudo curl -o /etc/yum.repos.d/mssql-release.repo https://packages.microsoft.com/config/rhel/9/mssql-release.repo
sudo ACCEPT_EULA=Y dnf install -y msodbcsql18
macOS:
brew tap microsoft/mssql-release https://github.com/Microsoft/homebrew-mssql-release
brew update
HOMEBREW_ACCEPT_EULA=Y brew install msodbcsql18 mssql-tools18
Windows: Download and install from Microsoft ODBC Driver for SQL Server.
3. Verify Installation
After installation, verify that the ODBC driver is available:
odbcinst -q -d
You should see `ODBC Driver 18 for SQL Server` in the list.
4. Permissions
Your Azure identity must have access to query the SQL Analytics Endpoint (same permissions as accessing the endpoint via SQL tools).
5. Python Dependencies
The `fabric-onelake` extra includes the `sqlalchemy` and `pyodbc` dependencies. Install them with:
pip install 'acryl-datahub[fabric-onelake]'
Note: If you encounter `libodbc.so.2: cannot open shared object file` errors, ensure the ODBC driver manager is installed (step 1 above).
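As an optional sanity check, you can confirm that Python can see the driver using pyodbc's `drivers()` helper:

```python
import pyodbc

# List the ODBC drivers visible to this Python environment.
print(pyodbc.drivers())
assert "ODBC Driver 18 for SQL Server" in pyodbc.drivers(), "driver not installed"
```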
Configuration
Schema extraction is enabled by default. You can configure it as follows:
source:
type: fabric-onelake
config:
credential:
authentication_method: service_principal
client_id: ${AZURE_CLIENT_ID}
client_secret: ${AZURE_CLIENT_SECRET}
tenant_id: ${AZURE_TENANT_ID}
# Schema extraction configuration
extract_schema:
enabled: true # Enable schema extraction (default: true)
method: sql_analytics_endpoint # Currently only this method is supported
# SQL Analytics Endpoint configuration
sql_endpoint:
enabled: true # Enable SQL endpoint connection (default: true)
# Optional: ODBC connection options
# odbc_driver: "ODBC Driver 18 for SQL Server" # Default: "ODBC Driver 18 for SQL Server"
# encrypt: "yes" # Enable encryption (default: "yes")
# trust_server_certificate: "no" # Trust server certificate (default: "no")
query_timeout: 30 # Timeout for SQL queries in seconds (default: 30)
How It Works
- Endpoint Discovery: The SQL Analytics Endpoint URL is automatically fetched from the Fabric API for each Lakehouse/Warehouse. The endpoint format is `<unique-identifier>.datawarehouse.fabric.microsoft.com` and cannot be constructed from the workspace_id alone.
- Authentication: Uses the same Azure credentials configured for REST API access, injecting an Azure AD token into the ODBC connection
- Connection: Connects to the SQL Analytics Endpoint over ODBC using the discovered endpoint URL
- Query: Queries `INFORMATION_SCHEMA.COLUMNS` to extract column metadata (required for schema extraction)
- Type Mapping: SQL Server data types are automatically mapped to DataHub types using the standard type mapping system
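For concreteness, here is a minimal sketch of steps 2-4 using pyodbc and azure-identity directly. The endpoint and database values are placeholders, the `https://database.windows.net/.default` scope is assumed to be accepted by the endpoint, and the `SQL_COPT_SS_ACCESS_TOKEN` packing follows the pattern Microsoft documents for passing Azure AD tokens over ODBC; this is not the connector's actual code:

```python
import struct

import pyodbc
from azure.identity import DefaultAzureCredential

SQL_COPT_SS_ACCESS_TOKEN = 1256  # pyodbc pre-connect attribute for Azure AD tokens

# Placeholders: the real endpoint is discovered per item via the Fabric API.
endpoint = "<unique-identifier>.datawarehouse.fabric.microsoft.com"
database = "my_lakehouse"  # assumed lakehouse/warehouse item name

# Acquire a token and pack it in the length-prefixed UTF-16-LE format ODBC expects.
token = DefaultAzureCredential().get_token("https://database.windows.net/.default").token
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    f"SERVER={endpoint};DATABASE={database};"
    "Encrypt=yes;TrustServerCertificate=no;",
    attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
    timeout=30,
)
cursor = conn.cursor()
cursor.execute(
    "SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE, ORDINAL_POSITION "
    "FROM INFORMATION_SCHEMA.COLUMNS "
    "ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION"
)
for schema, table, column, dtype, nullable, position in cursor.fetchall():
    print(f"{schema}.{table}.{column}: {dtype} (nullable={nullable}, pos={position})")
```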
References:
- What is the SQL analytics endpoint for a lakehouse?
- Warehouse connectivity in Microsoft Fabric
- Connect to Fabric Data Warehouse
Important Notes
- Endpoint URL Discovery: The SQL Analytics Endpoint URL is automatically fetched from the Fabric API for each Lakehouse/Warehouse. The endpoint format is `<unique-identifier>.datawarehouse.fabric.microsoft.com` and cannot be constructed from the workspace_id alone. If the endpoint URL cannot be retrieved from the API, schema extraction fails for that item.
- No Fallback: Unlike legacy Power BI Premium endpoints, Fabric SQL Analytics Endpoints do not support fallback connection strings. The endpoint must be obtained from the API.
Known Limitations
- Metadata Sync Delays: The SQL Analytics Endpoint may have delays in reflecting schema changes. New columns or schema modifications may take minutes to hours to appear.
- Missing Tables: Some tables may not be visible in the SQL endpoint due to:
- Unsupported data types
- Permission issues
- Table count limits in very large databases
- Graceful Degradation: If schema extraction fails for a table, the table is still ingested without column metadata (no ingestion failure); see the sketch below
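To make that degradation behavior concrete, here is a hypothetical Python sketch; the `extract_columns` helper and the dict-shaped result are illustrative, not the connector's internals:

```python
def ingest_table(table_name, extract_columns):
    """Hypothetical: emit the table either way; only skip the schema on failure."""
    try:
        columns = extract_columns(table_name)  # may fail: sync delays, permissions, ...
    except Exception as exc:
        print(f"Schema extraction failed for {table_name}: {exc}; ingesting without columns")
        columns = None
    return {"table": table_name, "columns": columns}
```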
Disabling Schema Extraction
To disable schema extraction and ingest tables without column metadata:
source:
type: fabric-onelake
config:
extract_schema:
enabled: false
Schemas-Enabled vs Schemas-Disabled Lakehouses
The connector automatically handles both schemas-enabled and schemas-disabled lakehouses:
- Schemas-Enabled Lakehouses: The connector uses the OneLake Delta Table APIs to list schemas first, then tables within each schema. This requires Storage-audience tokens (`https://storage.azure.com/.default`).
- Schemas-Disabled Lakehouses: The connector uses the standard Fabric REST API `/tables` endpoint, which lists all tables. Tables without an explicit schema are automatically assigned to the `dbo` schema in DataHub. This uses Power BI API scope tokens.
Important: All tables in DataHub will have a schema in their URN, even for schemas-disabled lakehouses. Tables without an explicit schema are normalized to the `dbo` schema by default, ensuring a consistent URN structure across all Fabric entities.
The connector automatically detects the lakehouse type and uses the appropriate API endpoint. No configuration changes are needed.
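The normalization rule itself is simple; as a minimal sketch (a hypothetical helper, not the connector's code):

```python
from typing import Optional

def normalize_schema(schema: Optional[str]) -> str:
    """Every table gets a schema component; default to "dbo" when none is set."""
    return schema if schema else "dbo"

assert normalize_schema(None) == "dbo"       # schemas-disabled lakehouse table
assert normalize_schema("sales") == "sales"  # explicit schema is preserved
```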
Stateful Ingestion
The connector supports stateful ingestion to track ingested entities and remove stale metadata. Enable it with:
stateful_ingestion:
enabled: true
remove_stale_metadata: true
When enabled, the connector will:
- Track all ingested workspaces, lakehouses, warehouses, schemas, and tables
- Remove entities from DataHub that no longer exist in Fabric
- Maintain state across ingestion runs
References
Azure Authentication
- Register an application with Microsoft Entra ID
- Azure Identity Library
- Service Principal Authentication
- Managed Identities
Fabric Concepts
- Microsoft Fabric Overview
- OneLake Overview
- Workspaces in Fabric
- Lakehouses in Fabric
- Warehouses in Fabric
Code Coordinates
- Class Name: `datahub.ingestion.source.fabric.onelake.source.FabricOneLakeSource`
- Browse on GitHub
Questions
If you've got any questions on configuring ingestion for Fabric OneLake, feel free to ping us on our Slack.