Azure Data Factory

For context on getting started with ingestion, check out our metadata ingestion guide.

Setup

To install this plugin, run `pip install 'acryl-datahub[azure-data-factory]'`.

Quickstart Recipe

```yaml
source:
  type: azure-data-factory
  config:
    # Required
    subscription_id: ${AZURE_SUBSCRIPTION_ID}

    # Authentication (service principal)
    credential:
      authentication_method: service_principal
      client_id: ${AZURE_CLIENT_ID}
      client_secret: ${AZURE_CLIENT_SECRET}
      tenant_id: ${AZURE_TENANT_ID}

    # Optional filters
    factory_pattern:
      allow: ["prod-.*"]

    # Features
    include_lineage: true
    include_execution_history: false

    env: PROD

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"
```

Authentication Methods

| Method | Config Value | Use Case |
|---|---|---|
| Service Principal | `service_principal` | Production |
| Managed Identity | `managed_identity` | Azure-hosted |
| Azure CLI | `cli` | Local development |
| Auto-detect | `default` | Flexible |
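
For example, on an Azure-hosted deployment you could swap the quickstart's service principal block for managed identity. A minimal sketch; `managed_identity_client_id` is only needed for a user-assigned identity:

```yaml
credential:
  authentication_method: managed_identity
  # Omit for a system-assigned identity; set for a user-assigned identity
  managed_identity_client_id: "<client-id>"
```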

Config Details

| Field | Required | Description |
|---|---|---|
| subscription_id | ✅ | Azure subscription ID |
| credential.authentication_method | | Auth method (default: `default`) |
| credential.client_id | | App (client) ID for service principal |
| credential.client_secret | | Client secret for service principal |
| credential.tenant_id | | Tenant (directory) ID |
| resource_group | | Filter to a specific resource group |
| factory_pattern | | Regex allow/deny for factories |
| pipeline_pattern | | Regex allow/deny for pipelines |
| include_lineage | | Extract lineage (default: true) |
| include_execution_history | | Extract pipeline runs (default: false) |
| execution_history_days | | Days of history, 1-90 (default: 7) |
| platform_instance_map | | Map linked services to platform instances |
| env | | Environment (default: PROD) |
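
For instance, a recipe combining the optional filters might look like this (all names and patterns are illustrative):

```yaml
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    resource_group: data-platform-rg # illustrative name
    factory_pattern:
      allow: ["prod-.*"]
      deny: [".*-deprecated"]
    pipeline_pattern:
      allow: [".*"]
```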

Entity Mapping

| ADF Concept | DataHub Entity |
|---|---|
| Data Factory | Container |
| Pipeline | DataFlow |
| Activity | DataJob |
| Dataset | Dataset |
| Pipeline Run | DataProcessInstance |
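
As a rough sketch of how this mapping surfaces as URNs: pipelines become DataFlow URNs (the exact format is given in the URN Format section below), and each activity becomes a DataJob URN that embeds its parent flow's URN, per DataHub convention. Using the activity name as the job ID here is an assumption for illustration:

```
urn:li:dataFlow:(azure-data-factory,{factory_name}.{pipeline_name},{env})
urn:li:dataJob:(urn:li:dataFlow:(azure-data-factory,{factory_name}.{pipeline_name},{env}),{activity_name})
```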


Important Capabilities

| Capability | Status | Notes |
|---|---|---|
| Asset Containers | ✅ | Enabled by default. Supported for types: Data Factory. |
| Detect Deleted Entities | ✅ | Enabled by default via stateful ingestion. |
| Platform Instance | ✅ | Enabled by default. |
| Table-Level Lineage | ✅ | Extracts lineage from Copy and Data Flow activities. Supported for types: Copy Activity, Data Flow Activity. |

Extracts metadata and lineage from Azure Data Factory pipelines, activities, and datasets.

Not Microsoft Fabric

This connector is for Azure Data Factory (classic), not Data Factory in Microsoft Fabric. Microsoft Fabric support is planned for a future release.

Prerequisites

Authentication

The connector supports multiple authentication methods:

| Method | Best For | Configuration |
|---|---|---|
| Service Principal | Production environments | `authentication_method: service_principal` |
| Managed Identity | Azure-hosted deployments (VMs, AKS, App Service) | `authentication_method: managed_identity` |
| Azure CLI | Local development | `authentication_method: cli` (run `az login` first) |
| DefaultAzureCredential | Flexible environments | `authentication_method: default` |

For service principal setup, see Register an application with Microsoft Entra ID.
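
The quickstart recipe reads service principal credentials from environment variables. For a local run, you might export them like this (all values are placeholders):

```bash
export AZURE_SUBSCRIPTION_ID="<subscription-id>"
export AZURE_TENANT_ID="<tenant-id>"
export AZURE_CLIENT_ID="<app-client-id>"
export AZURE_CLIENT_SECRET="<client-secret>"
```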

Required Permissions

The connector only performs read operations. Grant one of the following:

Option 1: Built-in Reader Role (recommended)

Assign the Reader role at subscription, resource group, or Data Factory level.
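
As a sketch, scoping the Reader role to a single factory with the Azure CLI might look like this (all IDs and names are placeholders):

```bash
az role assignment create \
  --assignee <service-principal-id> \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<factory-name>"
```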

Option 2: Custom Role with Minimal Permissions

Download datahub-adf-reader-role.json, replace {subscription-id} with your subscription ID, then:

```bash
# Create custom role
az role definition create --role-definition datahub-adf-reader-role.json

# Assign to service principal
az role assignment create \
  --assignee <service-principal-id> \
  --role "DataHub ADF Reader" \
  --scope /subscriptions/{subscription-id}
```

For detailed instructions, see Azure custom roles.

Lineage Extraction

Which Activities Produce Lineage?

The connector extracts table-level lineage from these ADF activity types:

| Activity Type | Lineage Behavior |
|---|---|
| Copy Activity | Creates lineage from input dataset(s) to output dataset |
| Data Flow | Extracts sources, sinks, and the transformation script |
| Lookup Activity | Creates input lineage from the lookup dataset |
| ExecutePipeline | Creates pipeline-to-pipeline lineage to the child pipeline |

Lineage is enabled by default (include_lineage: true).
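
To adjust this at the recipe level, here is a minimal sketch using the flags from the config reference below:

```yaml
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    include_lineage: true          # table-level lineage (the default)
    include_column_lineage: false  # column-level lineage from Data Flow definitions
```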

How Lineage Resolution Works

For lineage to connect properly to datasets ingested from other sources (e.g., Snowflake, BigQuery), the connector needs to know which DataHub platform each ADF linked service corresponds to.

Step 1: Automatic Platform Mapping

The connector automatically maps ADF linked service types to DataHub platforms. For example, a Snowflake linked service maps to the snowflake platform.
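
For example, a table read through a Snowflake linked service resolves to a standard DataHub dataset URN on the snowflake platform (the table name below is hypothetical):

```
urn:li:dataset:(urn:li:dataPlatform:snowflake,analytics_db.public.orders,PROD)
```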

View all supported linked service mappings:

| ADF Linked Service Type | DataHub Platform |
|---|---|
| AzureBlobStorage | abs |
| AzureBlobFS | abs |
| AzureDataLakeStore | abs |
| AzureFileStorage | abs |
| AzureSqlDatabase | mssql |
| AzureSqlDW | mssql |
| AzureSynapseAnalytics | mssql |
| AzureSqlMI | mssql |
| SqlServer | mssql |
| AzureDatabricks | databricks |
| AzureDatabricksDeltaLake | databricks |
| AmazonS3 | s3 |
| AmazonS3Compatible | s3 |
| AmazonRedshift | redshift |
| GoogleCloudStorage | gcs |
| GoogleBigQuery | bigquery |
| Snowflake | snowflake |
| PostgreSql | postgres |
| AzurePostgreSql | postgres |
| MySql | mysql |
| AzureMySql | mysql |
| Oracle | oracle |
| OracleServiceCloud | oracle |
| Db2 | db2 |
| Teradata | teradata |
| Vertica | vertica |
| Hive | hive |
| Spark | spark |
| Hdfs | hdfs |
| Salesforce | salesforce |
| SalesforceServiceCloud | salesforce |
| SalesforceMarketingCloud | salesforce |

Unsupported linked service types log a warning, and lineage is skipped for datasets that use them.

Step 2: Platform Instance Mapping (for cross-recipe lineage)

If you're ingesting the same data sources with other DataHub connectors (e.g., Snowflake, BigQuery), you need to ensure the platform_instance values match. Use platform_instance_map to map your ADF linked service names to the platform instance used in your other recipes:

```yaml
# ADF recipe
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    platform_instance_map:
      # Key: your ADF linked service name (exact match required)
      # Value: the platform_instance from your other source recipe
      "snowflake-prod-connection": "prod_warehouse"
      "bigquery-analytics": "analytics_project"
```

```yaml
# Corresponding Snowflake recipe (platform_instance must match)
source:
  type: snowflake
  config:
    platform_instance: "prod_warehouse" # Must match the value in platform_instance_map
    # ... other config
```

Without matching platform_instance values, lineage will create separate dataset entities instead of connecting to your existing ingested datasets.

Data Flow Transformation Scripts

For Data Flow activities, the connector extracts the transformation script and stores it in the dataTransformLogic aspect, visible in the DataHub UI under activity details.

Execution History

Pipeline runs are extracted as DataProcessInstance entities by default:

```yaml
source:
  type: azure-data-factory
  config:
    include_execution_history: true # default
    execution_history_days: 7 # 1-90 days
```

This provides run status, duration, timestamps, trigger info, parameters, and activity-level details.

Advanced: Multi-Environment Setup

When to Use platform_instance

Use the ADF connector's platform_instance config to distinguish separate ADF deployments when ingesting from multiple subscriptions or tenants:

| Scenario | Risk | Solution |
|---|---|---|
| Single subscription | None | Not needed |
| Multiple subscriptions | Low | Recommended |
| Multiple tenants | High (name collision risk) | Required |

```yaml
# Multi-tenant example
source:
  type: azure-data-factory
  config:
    subscription_id: "tenant-a-sub"
    platform_instance: "tenant-a" # Prevents URN collisions
```

> **Danger:** Factory names are unique within Azure, but different tenants could have identically-named factories. Use platform_instance to prevent entity overwrites.

URN Format

Pipeline URNs follow this format:

```
urn:li:dataFlow:(azure-data-factory,{factory_name}.{pipeline_name},{env})
```

With platform_instance:

```
urn:li:dataFlow:(azure-data-factory,{platform_instance}.{factory_name}.{pipeline_name},{env})
```
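
For example, with `platform_instance: "tenant-a"`, a pipeline named daily_load in a factory named prod-factory (hypothetical names) would be identified as:

```
urn:li:dataFlow:(azure-data-factory,tenant-a.prod-factory.daily_load,PROD)
```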

For Azure naming rules, see Azure Data Factory naming rules.

CLI based Ingestion

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

```yaml
# Example recipe for Azure Data Factory source
# See README.md for full configuration options

source:
  type: azure-data-factory
  config:
    # Required: Azure subscription containing Data Factories
    subscription_id: ${AZURE_SUBSCRIPTION_ID}

    # Optional: Filter to specific resource group
    # resource_group: my-resource-group

    # Authentication (using service principal)
    credential:
      authentication_method: service_principal
      client_id: ${AZURE_CLIENT_ID}
      client_secret: ${AZURE_CLIENT_SECRET}
      tenant_id: ${AZURE_TENANT_ID}

    # Optional: Filter factories by name pattern
    factory_pattern:
      allow:
        - ".*" # Allow all factories by default
      deny: []

    # Optional: Filter pipelines by name pattern
    pipeline_pattern:
      allow:
        - ".*" # Allow all pipelines by default
      deny: []

    # Feature flags
    include_lineage: true
    include_column_lineage: false # Advanced: requires Data Flow parsing
    include_execution_history: false # Set to true for pipeline run history
    execution_history_days: 7 # Only used when include_execution_history is true

    # Optional: Map linked services to platform instances for accurate lineage
    # platform_instance_map:
    #   "my-snowflake-connection": "prod_snowflake"

    # Optional: Platform instance for this ADF connector
    # platform_instance: "main-adf"

    # Environment
    env: PROD

    # Optional: Stateful ingestion for stale entity removal
    # stateful_ingestion:
    #   enabled: true

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"
```


Config Details

Note that a . is used to denote nested fields in the YAML recipe.

| Field | Type | Description |
|---|---|---|
| subscription_id (required) | string | Azure subscription ID containing the Data Factories to ingest. Find this in Azure Portal > Subscriptions. |
| execution_history_days | integer | Number of days of execution history to extract. Only used when include_execution_history is true. Higher values increase ingestion time. Default: 7 |
| include_column_lineage | boolean | Extract column-level lineage from Data Flow activities. Requires parsing Data Flow definitions. Default: True |
| include_execution_history | boolean | Extract pipeline and activity execution history as DataProcessInstance. Includes run status, duration, and parameters. Enables lineage extraction from parameterized activities using actual runtime values. Default: True |
| include_lineage | boolean | Extract lineage from activity inputs/outputs. Maps ADF datasets to DataHub datasets based on linked service type. Default: True |
| platform_instance | string or null | The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details. Default: None |
| platform_instance_map | map(str, string) | Map of ADF linked service names to DataHub platform instances. |
| resource_group | string or null | Azure resource group name to filter Data Factories. If not specified, all Data Factories in the subscription will be ingested. Default: None |
| env | string | The environment that all assets produced by this connector belong to. Default: PROD |
| credential | AzureCredentialConfig | Unified Azure authentication configuration. Supports multiple authentication methods and returns a TokenCredential that works with any Azure SDK client. |
| credential.authentication_method | enum | One of: "default", "service_principal", "managed_identity", "cli". Default: default |
| credential.client_id | string or null | Azure Application (client) ID. Required for service_principal authentication. Find this in Azure Portal > App registrations > Your app > Overview. Default: None |
| credential.client_secret | string (password) or null | Azure client secret. Required for service_principal authentication. Create in Azure Portal > App registrations > Your app > Certificates & secrets. Default: None |
| credential.exclude_cli_credential | boolean | When using 'default' authentication, exclude the Azure CLI credential. Useful in production to avoid accidentally using developer credentials. Default: False |
| credential.exclude_environment_credential | boolean | When using 'default' authentication, exclude environment variables. Environment variables checked: AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID. Default: False |
| credential.exclude_managed_identity_credential | boolean | When using 'default' authentication, exclude managed identity. Useful during local development when managed identity is not available. Default: False |
| credential.managed_identity_client_id | string or null | Client ID for a user-assigned managed identity. Leave empty to use the system-assigned managed identity. Only used when authentication_method is 'managed_identity'. Default: None |
| credential.tenant_id | string or null | Azure tenant (directory) ID. Required for service_principal authentication. Find this in Azure Portal > Microsoft Entra ID > Overview. Default: None |
| factory_pattern | AllowDenyPattern | Regex patterns to allow or deny factories by name. |
| factory_pattern.ignoreCase | boolean or null | Whether to ignore case sensitivity during pattern matching. Default: True |
| pipeline_pattern | AllowDenyPattern | Regex patterns to allow or deny pipelines by name. |
| pipeline_pattern.ignoreCase | boolean or null | Whether to ignore case sensitivity during pattern matching. Default: True |
| stateful_ingestion | StatefulStaleMetadataRemovalConfig or null | Configuration for stateful ingestion and stale entity removal. When enabled, tracks ingested entities and removes those that no longer exist in Azure Data Factory. Default: None |
| stateful_ingestion.enabled | boolean | Whether or not to enable stateful ingestion. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False. |
| stateful_ingestion.fail_safe_threshold | number | Guards against accidental changes to the source configuration: if the relative change in entities compared to the previous state exceeds this percentage, soft deletes are skipped and the state is not committed. Default: 75.0 |
| stateful_ingestion.remove_stale_metadata | boolean | Soft-deletes entities present in the last successful run but missing in the current run, when stateful_ingestion is enabled. Default: True |
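
For example, to enable stale-entity removal you also need a top-level pipeline_name in the recipe so state can be tracked across runs; the name below is hypothetical:

```yaml
pipeline_name: adf-prod-ingestion # hypothetical; identifies this recipe's state across runs
source:
  type: azure-data-factory
  config:
    subscription_id: ${AZURE_SUBSCRIPTION_ID}
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true
sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"
```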

Code Coordinates

  • Class Name: datahub.ingestion.source.azure_data_factory.adf_source.AzureDataFactorySource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Azure Data Factory, feel free to ping us on our Slack.