Azure Data Factory
For context on getting started with ingestion, check out our metadata ingestion guide.
Setup
To install this plugin, run `pip install 'acryl-datahub[azure-data-factory]'`.
Quickstart Recipe
```yaml
source:
  type: azure-data-factory
  config:
    # Required
    subscription_id: ${AZURE_SUBSCRIPTION_ID}

    # Authentication (service principal)
    credential:
      authentication_method: service_principal
      client_id: ${AZURE_CLIENT_ID}
      client_secret: ${AZURE_CLIENT_SECRET}
      tenant_id: ${AZURE_TENANT_ID}

    # Optional filters
    factory_pattern:
      allow: ["prod-.*"]

    # Features
    include_lineage: true
    include_execution_history: false

    env: PROD

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"
```
Authentication Methods
| Method | Config Value | Use Case |
|---|---|---|
| Service Principal | service_principal | Production |
| Managed Identity | managed_identity | Azure-hosted |
| Azure CLI | cli | Local development |
| Auto-detect | default | Flexible |
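For instance, here is a minimal sketch of a managed-identity credential block for an Azure-hosted deployment; the environment variable name is a placeholder, and omitting `managed_identity_client_id` falls back to the system-assigned identity:

```yaml
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    credential:
      authentication_method: managed_identity
      # Omit managed_identity_client_id to use the system-assigned identity
      managed_identity_client_id: ${AZURE_MANAGED_IDENTITY_CLIENT_ID}
```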
Config Details
| Field | Required | Description |
|---|---|---|
| subscription_id | ✅ | Azure subscription ID |
| credential.authentication_method | | Auth method (default: default) |
| credential.client_id | | App (client) ID for service principal |
| credential.client_secret | | Client secret for service principal |
| credential.tenant_id | | Tenant (directory) ID |
| resource_group | | Filter to a specific resource group |
| factory_pattern | | Regex allow/deny for factories |
| pipeline_pattern | | Regex allow/deny for pipelines |
| include_lineage | | Extract lineage (default: true) |
| include_execution_history | | Extract pipeline runs (default: true) |
| execution_history_days | | Days of history, 1-90 (default: 7) |
| platform_instance_map | | Map linked services to platform instances |
| env | | Environment (default: PROD) |
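For example, `factory_pattern` and `pipeline_pattern` each take allow/deny regex lists; the patterns below are illustrative:

```yaml
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    factory_pattern:
      allow: ["prod-.*"]
      deny: [".*-test"]
    pipeline_pattern:
      allow: [".*"]
      deny: ["debug_.*"]
```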
Entity Mapping
| ADF Concept | DataHub Entity |
|---|---|
| Data Factory | Container |
| Pipeline | DataFlow |
| Activity | DataJob |
| Dataset | Dataset |
| Pipeline Run | DataProcessInstance |
Important Capabilities
| Capability | Status | Notes |
|---|---|---|
| Asset Containers | ✅ | Enabled by default. Supported for types - Data Factory. |
| Detect Deleted Entities | ✅ | Enabled by default via stateful ingestion. |
| Platform Instance | ✅ | Enabled by default. |
| Table-Level Lineage | ✅ | Extracts lineage from Copy and Data Flow activities. Supported for types - Copy Activity, Data Flow Activity. |
Extracts metadata and lineage from Azure Data Factory pipelines, activities, and datasets.
This connector is for Azure Data Factory (classic), not the Data Factory experience in Microsoft Fabric. Microsoft Fabric support is planned for a future release.
Prerequisites
Authentication
The connector supports multiple authentication methods:
| Method | Best For | Configuration |
|---|---|---|
| Service Principal | Production environments | authentication_method: service_principal |
| Managed Identity | Azure-hosted deployments (VMs, AKS, App Service) | authentication_method: managed_identity |
| Azure CLI | Local development | authentication_method: cli (run az login first) |
| DefaultAzureCredential | Flexible environments | authentication_method: default |
For service principal setup, see Register an application with Microsoft Entra ID.
Required Permissions
The connector only performs read operations. Grant one of the following:
Option 1: Built-in Reader Role (recommended)
Assign the Reader role at subscription, resource group, or Data Factory level.
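For example, to assign the built-in Reader role to a service principal at the resource-group scope (the IDs below are placeholders):

```shell
az role assignment create \
  --assignee <service-principal-id> \
  --role "Reader" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```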
Option 2: Custom Role with Minimal Permissions
Download datahub-adf-reader-role.json, replace the {subscription-id} placeholder, then:

```shell
# Create the custom role
az role definition create --role-definition datahub-adf-reader-role.json

# Assign it to the service principal
az role assignment create \
  --assignee <service-principal-id> \
  --role "DataHub ADF Reader" \
  --scope /subscriptions/{subscription-id}
```
For detailed instructions, see Azure custom roles.
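To confirm the assignment took effect, you can list the principal's role assignments (same placeholder IDs as above):

```shell
az role assignment list \
  --assignee <service-principal-id> \
  --scope /subscriptions/{subscription-id} \
  --output table
```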
Lineage Extraction
Which Activities Produce Lineage?
The connector extracts table-level lineage from these ADF activity types:
| Activity Type | Lineage Behavior |
|---|---|
| Copy Activity | Creates lineage from input dataset(s) to output dataset |
| Data Flow | Extracts sources, sinks, and transformation script |
| Lookup Activity | Creates input lineage from the lookup dataset |
| ExecutePipeline | Creates pipeline-to-pipeline lineage to the child pipeline |
Lineage is enabled by default (include_lineage: true).
How Lineage Resolution Works
For lineage to connect properly to datasets ingested from other sources (e.g., Snowflake, BigQuery), the connector needs to know which DataHub platform each ADF linked service corresponds to.
Step 1: Automatic Platform Mapping
The connector automatically maps ADF linked service types to DataHub platforms. For example, a Snowflake linked service maps to the snowflake platform.
All supported linked service mappings are listed below:
| ADF Linked Service Type | DataHub Platform |
|---|---|
| AzureBlobStorage | abs |
| AzureBlobFS | abs |
| AzureDataLakeStore | abs |
| AzureFileStorage | abs |
| AzureSqlDatabase | mssql |
| AzureSqlDW | mssql |
| AzureSynapseAnalytics | mssql |
| AzureSqlMI | mssql |
| SqlServer | mssql |
| AzureDatabricks | databricks |
| AzureDatabricksDeltaLake | databricks |
| AmazonS3 | s3 |
| AmazonS3Compatible | s3 |
| AmazonRedshift | redshift |
| GoogleCloudStorage | gcs |
| GoogleBigQuery | bigquery |
| Snowflake | snowflake |
| PostgreSql | postgres |
| AzurePostgreSql | postgres |
| MySql | mysql |
| AzureMySql | mysql |
| Oracle | oracle |
| OracleServiceCloud | oracle |
| Db2 | db2 |
| Teradata | teradata |
| Vertica | vertica |
| Hive | hive |
| Spark | spark |
| Hdfs | hdfs |
| Salesforce | salesforce |
| SalesforceServiceCloud | salesforce |
| SalesforceMarketingCloud | salesforce |
Unsupported linked service types log a warning and skip lineage for that dataset.
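For example, a table read through a Snowflake linked service would resolve to a standard DataHub dataset URN on the snowflake platform; the database, schema, and table names below are hypothetical:

```
urn:li:dataset:(urn:li:dataPlatform:snowflake,analytics_db.public.orders,PROD)
```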
Step 2: Platform Instance Mapping (for cross-recipe lineage)
If you're ingesting the same data sources with other DataHub connectors (e.g., Snowflake, BigQuery), you need to ensure the platform_instance values match. Use platform_instance_map to map your ADF linked service names to the platform instance used in your other recipes:
```yaml
# ADF Recipe
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    platform_instance_map:
      # Key: Your ADF linked service name (exact match required)
      # Value: The platform_instance from your other source recipe
      "snowflake-prod-connection": "prod_warehouse"
      "bigquery-analytics": "analytics_project"
```

```yaml
# Corresponding Snowflake Recipe (platform_instance must match)
source:
  type: snowflake
  config:
    platform_instance: "prod_warehouse" # Must match the value in platform_instance_map
    # ... other config
```
Without matching platform_instance values, lineage will create separate dataset entities instead of connecting to your existing ingested datasets.
Data Flow Transformation Scripts
For Data Flow activities, the connector extracts the transformation script and stores it in the dataTransformLogic aspect, visible in the DataHub UI under activity details.
Execution History
Pipeline runs are extracted as DataProcessInstance entities by default:
```yaml
source:
  type: azure-data-factory
  config:
    include_execution_history: true # default
    execution_history_days: 7 # 1-90 days
```
This provides run status, duration, timestamps, trigger info, parameters, and activity-level details.
Advanced: Multi-Environment Setup
When to Use platform_instance
Use the ADF connector's platform_instance config to distinguish separate ADF deployments when ingesting from multiple subscriptions or tenants:
| Scenario | Risk | Solution |
|---|---|---|
| Single subscription | None | Not needed |
| Multiple subscriptions | Low | Recommended |
| Multiple tenants | High - name collision risk | Required |
```yaml
# Multi-tenant example
source:
  type: azure-data-factory
  config:
    subscription_id: "tenant-a-sub"
    platform_instance: "tenant-a" # Prevents URN collisions
```
Azure enforces globally unique factory names, but names can be reused after a factory is deleted, so ingesting from multiple tenants into the same DataHub instance can still collide. Use platform_instance to prevent entity overwrites.
URN Format
Pipeline URNs follow this format:

```
urn:li:dataFlow:(azure-data-factory,{factory_name}.{pipeline_name},{env})
```

With platform_instance:

```
urn:li:dataFlow:(azure-data-factory,{platform_instance}.{factory_name}.{pipeline_name},{env})
```
For Azure naming rules, see Azure Data Factory naming rules.
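As a concrete illustration, a pipeline named daily_load in a factory named prod-adf (both names hypothetical) ingested with env PROD would yield:

```
urn:li:dataFlow:(azure-data-factory,prod-adf.daily_load,PROD)
```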
CLI based Ingestion
Starter Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide.
```yaml
# Example recipe for Azure Data Factory source
# See README.md for full configuration options
source:
  type: azure-data-factory
  config:
    # Required: Azure subscription containing Data Factories
    subscription_id: ${AZURE_SUBSCRIPTION_ID}

    # Optional: Filter to specific resource group
    # resource_group: my-resource-group

    # Authentication (using service principal)
    credential:
      authentication_method: service_principal
      client_id: ${AZURE_CLIENT_ID}
      client_secret: ${AZURE_CLIENT_SECRET}
      tenant_id: ${AZURE_TENANT_ID}

    # Optional: Filter factories by name pattern
    factory_pattern:
      allow:
        - ".*" # Allow all factories by default
      deny: []

    # Optional: Filter pipelines by name pattern
    pipeline_pattern:
      allow:
        - ".*" # Allow all pipelines by default
      deny: []

    # Feature flags
    include_lineage: true
    include_column_lineage: false # Advanced: requires Data Flow parsing
    include_execution_history: false # Set to true for pipeline run history
    execution_history_days: 7 # Only used when include_execution_history is true

    # Optional: Map linked services to platform instances for accurate lineage
    # platform_instance_map:
    #   "my-snowflake-connection": "prod_snowflake"

    # Optional: Platform instance for this ADF connector
    # platform_instance: "main-adf"

    # Environment
    env: PROD

    # Optional: Stateful ingestion for stale entity removal
    # stateful_ingestion:
    #   enabled: true

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"
```
Config Details
Note that a . is used to denote nested fields in the YAML recipe.
| Field | Description |
|---|---|
| subscription_id ✅ string | Azure subscription ID containing the Data Factories to ingest. Find this in Azure Portal > Subscriptions. |
| execution_history_days integer | Number of days of execution history to extract. Only used when include_execution_history is True. Higher values increase ingestion time. Default: 7 |
| include_column_lineage boolean | Extract column-level lineage from Data Flow activities. Requires parsing Data Flow definitions. Default: True |
| include_execution_history boolean | Extract pipeline and activity execution history as DataProcessInstance. Includes run status, duration, and parameters. Enables lineage extraction from parameterized activities using actual runtime values. Default: True |
| include_lineage boolean | Extract lineage from activity inputs/outputs. Maps ADF datasets to DataHub datasets based on linked service type. Default: True |
| platform_instance One of string, null | The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details. Default: None |
| platform_instance_map map(str,string) | Map linked service names to DataHub platform instances. Example: {'my-snowflake-connection': 'prod_snowflake'}. Used for accurate lineage resolution to existing datasets. |
| resource_group One of string, null | Azure resource group name to filter Data Factories. If not specified, all Data Factories in the subscription will be ingested. Default: None |
| env string | The environment that all assets produced by this connector belong to. Default: PROD |
| credential AzureCredentialConfig | Unified Azure authentication configuration. Supports multiple authentication methods and returns a TokenCredential that works with any Azure SDK client. |
| credential.authentication_method Enum | One of: "default", "service_principal", "managed_identity", "cli" |
| credential.client_id One of string, null | Azure Application (client) ID. Required for service_principal authentication. Find this in Azure Portal > App registrations > Your app > Overview. Default: None |
| credential.client_secret One of string(password), null | Azure client secret. Required for service_principal authentication. Create in Azure Portal > App registrations > Your app > Certificates & secrets. Default: None |
| credential.exclude_cli_credential boolean | When using 'default' authentication, exclude Azure CLI credential. Useful in production to avoid accidentally using developer credentials. Default: False |
| credential.exclude_environment_credential boolean | When using 'default' authentication, exclude environment variables. Environment variables checked: AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID. Default: False |
| credential.exclude_managed_identity_credential boolean | When using 'default' authentication, exclude managed identity. Useful during local development when managed identity is not available. Default: False |
| credential.managed_identity_client_id One of string, null | Client ID for user-assigned managed identity. Leave empty to use system-assigned managed identity. Only used when authentication_method is 'managed_identity'. Default: None |
| credential.tenant_id One of string, null | Azure tenant (directory) ID. Required for service_principal authentication. Find this in Azure Portal > Microsoft Entra ID > Overview. Default: None |
| factory_pattern AllowDenyPattern | Regex patterns to filter Data Factories by name. Example: allow=['prod-.*'], deny=['.*-test'] |
| factory_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
| pipeline_pattern AllowDenyPattern | Regex patterns to filter pipelines by name. Applied to all factories matching factory_pattern. |
| pipeline_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
| stateful_ingestion One of StatefulStaleMetadataRemovalConfig, null | Configuration for stateful ingestion and stale entity removal. When enabled, tracks ingested entities and removes those that no longer exist in Azure Data Factory. Default: None |
| stateful_ingestion.enabled boolean | Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False. Default: False |
| stateful_ingestion.fail_safe_threshold number | Prevents a large amount of soft deletes & the state from committing from accidental changes to the source configuration if the relative change percent in entities compared to the previous state is above the 'fail_safe_threshold'. Default: 75.0 |
| stateful_ingestion.remove_stale_metadata boolean | Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled. Default: True |
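A minimal sketch of enabling stateful ingestion follows; note that stale-entity removal also requires a top-level pipeline_name in the recipe for state tracking (the name below is hypothetical):

```yaml
pipeline_name: adf_prod_ingestion # identifies this recipe's state across runs
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true
sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"
```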
The JSONSchema for this configuration is inlined below.
```json
{
"$defs": {
"AllowDenyPattern": {
"additionalProperties": false,
"description": "A class to store allow deny regexes",
"properties": {
"allow": {
"default": [
".*"
],
"description": "List of regex patterns to include in ingestion",
"items": {
"type": "string"
},
"title": "Allow",
"type": "array"
},
"deny": {
"default": [],
"description": "List of regex patterns to exclude from ingestion.",
"items": {
"type": "string"
},
"title": "Deny",
"type": "array"
},
"ignoreCase": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "null"
}
],
"default": true,
"description": "Whether to ignore case sensitivity during pattern matching.",
"title": "Ignorecase"
}
},
"title": "AllowDenyPattern",
"type": "object"
},
"AzureAuthenticationMethod": {
"description": "Supported Azure authentication methods.\n\n- DEFAULT: Uses DefaultAzureCredential which auto-detects credentials from\n environment variables, managed identity, Azure CLI, etc.\n- SERVICE_PRINCIPAL: Uses client ID, client secret, and tenant ID\n- MANAGED_IDENTITY: Uses Azure Managed Identity (system or user-assigned)\n- CLI: Uses Azure CLI credential (requires `az login`)",
"enum": [
"default",
"service_principal",
"managed_identity",
"cli"
],
"title": "AzureAuthenticationMethod",
"type": "string"
},
"AzureCredentialConfig": {
"additionalProperties": false,
"description": "Unified Azure authentication configuration.\n\nThis class provides a reusable authentication configuration that can be\ncomposed into any Azure connector's configuration. It supports multiple\nauthentication methods and returns a TokenCredential that works with\nany Azure SDK client.\n\nExample usage in a connector config:\n class MyAzureConnectorConfig(ConfigModel):\n credential: AzureCredentialConfig = Field(\n default_factory=AzureCredentialConfig,\n description=\"Azure authentication configuration\"\n )\n subscription_id: str = Field(...)",
"properties": {
"authentication_method": {
"$ref": "#/$defs/AzureAuthenticationMethod",
"default": "default",
"description": "Authentication method to use. Options: 'default' (auto-detects from environment), 'service_principal' (client ID + secret + tenant), 'managed_identity' (Azure Managed Identity), 'cli' (Azure CLI credential). Recommended: Use 'default' which tries multiple methods automatically."
},
"client_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Azure Application (client) ID. Required for service_principal authentication. Find this in Azure Portal > App registrations > Your app > Overview.",
"title": "Client Id"
},
"client_secret": {
"anyOf": [
{
"format": "password",
"type": "string",
"writeOnly": true
},
{
"type": "null"
}
],
"default": null,
"description": "Azure client secret. Required for service_principal authentication. Create in Azure Portal > App registrations > Your app > Certificates & secrets.",
"title": "Client Secret"
},
"tenant_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Azure tenant (directory) ID. Required for service_principal authentication. Find this in Azure Portal > Microsoft Entra ID > Overview.",
"title": "Tenant Id"
},
"managed_identity_client_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Client ID for user-assigned managed identity. Leave empty to use system-assigned managed identity. Only used when authentication_method is 'managed_identity'.",
"title": "Managed Identity Client Id"
},
"exclude_cli_credential": {
"default": false,
"description": "When using 'default' authentication, exclude Azure CLI credential. Useful in production to avoid accidentally using developer credentials.",
"title": "Exclude Cli Credential",
"type": "boolean"
},
"exclude_environment_credential": {
"default": false,
"description": "When using 'default' authentication, exclude environment variables. Environment variables checked: AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID.",
"title": "Exclude Environment Credential",
"type": "boolean"
},
"exclude_managed_identity_credential": {
"default": false,
"description": "When using 'default' authentication, exclude managed identity. Useful during local development when managed identity is not available.",
"title": "Exclude Managed Identity Credential",
"type": "boolean"
}
},
"title": "AzureCredentialConfig",
"type": "object"
},
"StatefulStaleMetadataRemovalConfig": {
"additionalProperties": false,
"description": "Base specialized config for Stateful Ingestion with stale metadata removal capability.",
"properties": {
"enabled": {
"default": false,
"description": "Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or `datahub_api` is specified, otherwise False",
"title": "Enabled",
"type": "boolean"
},
"remove_stale_metadata": {
"default": true,
"description": "Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.",
"title": "Remove Stale Metadata",
"type": "boolean"
},
"fail_safe_threshold": {
"default": 75.0,
"description": "Prevents large amount of soft deletes & the state from committing from accidental changes to the source configuration if the relative change percent in entities compared to the previous state is above the 'fail_safe_threshold'.",
"maximum": 100.0,
"minimum": 0.0,
"title": "Fail Safe Threshold",
"type": "number"
}
},
"title": "StatefulStaleMetadataRemovalConfig",
"type": "object"
}
},
"additionalProperties": false,
"description": "Configuration for Azure Data Factory source.\n\nThis connector extracts metadata from Azure Data Factory including:\n- Data Factories as Containers\n- Pipelines as DataFlows\n- Activities as DataJobs\n- Dataset lineage\n- Execution history (optional)",
"properties": {
"env": {
"default": "PROD",
"description": "The environment that all assets produced by this connector belong to",
"title": "Env",
"type": "string"
},
"platform_instance": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.",
"title": "Platform Instance"
},
"stateful_ingestion": {
"anyOf": [
{
"$ref": "#/$defs/StatefulStaleMetadataRemovalConfig"
},
{
"type": "null"
}
],
"default": null,
"description": "Configuration for stateful ingestion and stale entity removal. When enabled, tracks ingested entities and removes those that no longer exist in Azure Data Factory."
},
"credential": {
"$ref": "#/$defs/AzureCredentialConfig",
"description": "Azure authentication configuration. Supports service principal, managed identity, Azure CLI, or auto-detection (DefaultAzureCredential). See AzureCredentialConfig for detailed options."
},
"subscription_id": {
"description": "Azure subscription ID containing the Data Factories to ingest. Find this in Azure Portal > Subscriptions.",
"title": "Subscription Id",
"type": "string"
},
"resource_group": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "Azure resource group name to filter Data Factories. If not specified, all Data Factories in the subscription will be ingested.",
"title": "Resource Group"
},
"factory_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter Data Factories by name. Example: allow=['prod-.*'], deny=['.*-test']"
},
"pipeline_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter pipelines by name. Applied to all factories matching factory_pattern."
},
"include_lineage": {
"default": true,
"description": "Extract lineage from activity inputs/outputs. Maps ADF datasets to DataHub datasets based on linked service type.",
"title": "Include Lineage",
"type": "boolean"
},
"include_column_lineage": {
"default": true,
"description": "Extract column-level lineage from Data Flow activities. Requires parsing Data Flow definitions.",
"title": "Include Column Lineage",
"type": "boolean"
},
"include_execution_history": {
"default": true,
"description": "Extract pipeline and activity execution history as DataProcessInstance. Includes run status, duration, and parameters. Enables lineage extraction from parameterized activities using actual runtime values.",
"title": "Include Execution History",
"type": "boolean"
},
"execution_history_days": {
"default": 7,
"description": "Number of days of execution history to extract. Only used when include_execution_history is True. Higher values increase ingestion time.",
"maximum": 90,
"minimum": 1,
"title": "Execution History Days",
"type": "integer"
},
"platform_instance_map": {
"additionalProperties": {
"type": "string"
},
"description": "Map linked service names to DataHub platform instances. Example: {'my-snowflake-connection': 'prod_snowflake'}. Used for accurate lineage resolution to existing datasets.",
"title": "Platform Instance Map",
"type": "object"
}
},
"required": [
"subscription_id"
],
"title": "AzureDataFactoryConfig",
"type": "object"
}
```
Code Coordinates
- Class Name: datahub.ingestion.source.azure_data_factory.adf_source.AzureDataFactorySource - Browse on GitHub
Questions
If you've got any questions on configuring ingestion for Azure Data Factory, feel free to ping us on our Slack.