Microsoft SQL Server
Important Capabilities
| Capability | Status | Notes |
|---|---|---|
| Asset Containers | ✅ | Enabled by default. Supported for types - Database, Schema. |
| Classification | ✅ | Optionally enabled via classification.enabled. |
| Column-level Lineage | ✅ | Enabled by default to get lineage for stored procedures via include_lineage and for views via include_view_column_lineage. Supported for types - Stored Procedure, View. |
| Data Profiling | ✅ | Optionally enabled via configuration. |
| Descriptions | ✅ | Enabled by default. |
| Detect Deleted Entities | ✅ | Enabled by default via stateful ingestion. |
| Domains | ✅ | Supported via the domain config field. |
| Platform Instance | ✅ | Enabled by default. |
| Schema Metadata | ✅ | Enabled by default. |
| Table-Level Lineage | ✅ | Enabled by default to get lineage for stored procedures via include_lineage and for views via include_view_lineage. Supported for types - Stored Procedure, View. |
| Test Connection | ✅ | Enabled by default. |
This plugin extracts the following:
- Metadata for databases, schemas, views and tables
- Column types associated with each table/view
- Table, row, and column statistics via optional SQL profiling
Two source types are available:
- `mssql`: Uses the python-tds library (pure Python, easier to install)
- `mssql-odbc`: Uses the pyodbc library (required for encryption, Azure managed services)
If you need encryption (e.g., for Azure SQL), use source type mssql-odbc and configure uri_args with your ODBC driver settings.
Prerequisites
If you want to ingest MSSQL Jobs and stored procedures (with code), the user credentials need the proper privileges.
The DataHub MSSQL source automatically detects your environment and uses the optimal method:
- RDS/Managed SQL Server: Uses stored procedures (recommended for managed environments)
- On-premises SQL Server: Uses direct table access (typically faster when available)
- Automatic fallback: If the primary method fails, it automatically tries the alternative approach
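This selection-and-fallback behavior can be pictured as a small try/except chain. The sketch below is illustrative only; the callables `fetch_via_procedures` and `fetch_via_tables` are hypothetical stand-ins, not DataHub's actual internals:

```python
def extract_jobs(is_aws_rds, fetch_via_procedures, fetch_via_tables):
    """Illustrative sketch: pick a primary method by environment, fall back on failure."""
    # RDS/managed environments try stored procedures first; on-prem tries direct
    # table access first, matching the behavior described above.
    primary, fallback = (
        (fetch_via_procedures, fetch_via_tables)
        if is_aws_rds
        else (fetch_via_tables, fetch_via_procedures)
    )
    try:
        return primary()
    except Exception:
        # Automatic fallback: if the primary method fails, try the alternative
        return fallback()
```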
Permissions for All Environments
```sql
-- Core permissions for stored procedures (required)
USE MSDB
GRANT SELECT ON OBJECT::msdb.dbo.sysjobsteps TO 'USERNAME'
GRANT SELECT ON OBJECT::msdb.dbo.sysjobs TO 'USERNAME'

-- Stored procedure permissions (required for RDS/managed environments)
GRANT EXECUTE ON msdb.dbo.sp_help_job TO 'USERNAME'
GRANT EXECUTE ON msdb.dbo.sp_help_jobstep TO 'USERNAME'

-- Permissions for stored procedure code and lineage
USE 'DATA_DB_NAME'
GRANT VIEW DEFINITION TO 'USERNAME'
GRANT SELECT ON OBJECT::sys.sql_expression_dependencies TO 'USERNAME'
```
RDS SQL Server Specific Notes
For Amazon RDS SQL Server environments, the stored procedure approach is preferred and typically the only method that works due to restricted table access. Ensure the following permissions are granted:
```sql
-- Essential for RDS environments
GRANT EXECUTE ON msdb.dbo.sp_help_job TO 'USERNAME'
GRANT EXECUTE ON msdb.dbo.sp_help_jobstep TO 'USERNAME'
```
Production Recommendation: Explicitly set is_aws_rds: true in your configuration to avoid automatic detection issues:
```yaml
source:
  type: mssql
  config:
    is_aws_rds: true # Recommended for RDS environments
    # ... other config
```
The connector automatically detects RDS environments by analyzing the server name (e.g., *.rds.amazonaws.com), but explicit configuration is more reliable and avoids potential false positives or negatives.
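As a rough illustration of what hostname-based detection looks like (the connector's actual heuristic may differ, which is one more reason to prefer explicit configuration), `looks_like_rds` here is a hypothetical helper:

```python
import re

# Hypothetical sketch: RDS SQL Server hostnames end in .rds.amazonaws.com
RDS_HOST_PATTERN = re.compile(r"\.rds\.amazonaws\.com$", re.IGNORECASE)

def looks_like_rds(host_port: str) -> bool:
    """Return True if the host part of host_port looks like an RDS endpoint."""
    host = host_port.rsplit(":", 1)[0]  # strip the port if present
    return bool(RDS_HOST_PATTERN.search(host))
```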
On-Premises SQL Server Notes
For on-premises SQL Server installations, direct table access is typically available and faster. The source will automatically use direct queries when possible, with stored procedures as fallback.
Production Recommendation: Explicitly set is_aws_rds: false to ensure optimal performance:
```yaml
source:
  type: mssql
  config:
    is_aws_rds: false # Recommended for on-premises
    # ... other config
```
Troubleshooting Permissions
If you encounter permission errors:
- RDS environments: Ensure stored procedure execute permissions are granted
- On-premises environments: Verify both table select and stored procedure execute permissions
- Mixed environments: Grant all permissions listed above for maximum compatibility
The DataHub source will automatically handle fallback between methods and provide detailed error messages with specific permission requirements if issues occur.
Query-Based Lineage and Usage Statistics
DataHub can extract lineage and usage statistics by analyzing SQL queries executed on your MS SQL Server. This provides insights into:
- Table-level lineage: Understand which tables are read from and written to
- Column-level lineage: Track how data flows between specific columns
- Usage patterns: Identify frequently accessed tables and user activity
- Query performance: Monitor execution counts and timing
Known Limitations
- User attribution not supported: SQL Server Query Store and DMVs do not preserve historical user session context. Query extraction focuses on query content, frequency, and performance metrics.
Prerequisites for Query-Based Lineage
SQL Server Version Requirements
Query-based lineage requires SQL Server 2016 or later.
Permission Requirements
Grant the DataHub user VIEW SERVER STATE permission:
```sql
USE master;
GRANT VIEW SERVER STATE TO [datahub_user];
GO
```
This permission is required to access:
- Query Store (preferred method, SQL Server 2016+)
- Dynamic Management Views (DMVs) as fallback
Verify permissions:
```sql
-- Check server-level permissions
SELECT
    state_desc,
    permission_name
FROM sys.server_permissions
WHERE grantee_principal_id = SUSER_ID('datahub_user');
-- Should show VIEW SERVER STATE
```
Query Store Setup (Recommended)
Query Store is the preferred method for query extraction. It provides better query history retention and performance.
Enable Query Store
```sql
-- Enable Query Store for your database
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;

-- Configure Query Store settings (recommended production values)
-- Source: SQL Server defaults since 2016/2017, based on Microsoft's extensive testing
-- Reference: https://learn.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store
ALTER DATABASE [YourDatabase] SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    DATA_FLUSH_INTERVAL_SECONDS = 900, -- 15 min: balances durability vs I/O overhead
    INTERVAL_LENGTH_MINUTES = 60,      -- 1 hour: sufficient granularity for lineage, lower storage overhead
    MAX_STORAGE_SIZE_MB = 1000,        -- 1 GB: ~30-90 days retention for typical workloads (100-1000 queries/day)
    QUERY_CAPTURE_MODE = AUTO,         -- Captures frequent queries only; filters ad-hoc noise
    SIZE_BASED_CLEANUP_MODE = AUTO     -- Auto-removes old queries at capacity; prevents read-only mode
);
```
Adjust for your environment:
- Low volume (<100 queries/day): `MAX_STORAGE_SIZE_MB = 100`
- High volume (>10,000 queries/day): `MAX_STORAGE_SIZE_MB = 2000`, `INTERVAL_LENGTH_MINUTES = 30`
- Dev/Test: `MAX_STORAGE_SIZE_MB = 100`
- OLTP with ad-hoc queries: `QUERY_CAPTURE_MODE = ALL` (monitor storage)
Verify Query Store Status
```sql
SELECT
    name,
    is_query_store_on,
    query_store_state_desc
FROM sys.databases
WHERE name = 'YourDatabase';
```
Expected output:
- `is_query_store_on`: 1 (enabled)
- `query_store_state_desc`: "READ_WRITE" (active)
Configuration
Enable query-based lineage in your DataHub recipe:
```yaml
source:
  type: mssql
  config:
    host_port: localhost:1433
    database: YourDatabase
    username: datahub_user
    password: your_password

    # Enable query-based lineage extraction
    include_query_lineage: true

    # Maximum number of queries to extract (default: 1000, max: 10000)
    max_queries_to_extract: 1000

    # Minimum query execution count to include (default: 1)
    # Higher values reduce noise from rarely-executed queries
    min_query_calls: 5

    # Exclude system and temporary queries (comprehensive list)
    query_exclude_patterns:
      - "%sys.%" # System tables
      - "%tempdb.%" # Temp database
      - "%INFORMATION_SCHEMA%" # Metadata views
      - "%msdb.%" # SQL Agent database
      - "%master.%" # Master database
      - "%model.%" # Model database
      - "%#%" # Temporary tables
      - "%sp_reset_connection%" # JDBC connection resets
      - "%SET ANSI_%" # Connection setup
      - "%SELECT @@%" # Driver metadata queries
      - "%ReportServer%" # SSRS system tables

    # Enable usage statistics (requires graph connection)
    # include_usage_statistics: true

sink:
  # Your sink config
```
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| include_query_lineage | boolean | false | Enable query-based lineage extraction |
| max_queries_to_extract | integer | 1000 | Maximum queries to analyze (range: 1-10000) |
| min_query_calls | integer | 1 | Minimum execution count to include query |
| query_exclude_patterns | list[string] | [] | SQL LIKE patterns to exclude queries (max 100 patterns) |
| include_usage_statistics | boolean | false | Extract usage statistics (requires include_query_lineage: true) |
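The `query_exclude_patterns` entries use SQL LIKE semantics, where `%` matches any run of characters and `_` matches a single character. A quick way to reason about which queries a pattern will exclude is to translate it to a regex; the sketch below is purely illustrative and is not DataHub's implementation:

```python
import re

def like_to_regex(pattern: str) -> re.Pattern:
    """Translate a SQL LIKE pattern (% = any run, _ = any single char) to a regex."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.compile("".join(parts), re.IGNORECASE)

def is_excluded(query: str, exclude_patterns: list[str]) -> bool:
    """True if the query text matches any LIKE-style exclude pattern."""
    # LIKE matches the whole string, hence fullmatch rather than search
    return any(like_to_regex(p).fullmatch(query) for p in exclude_patterns)
```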
Query Extraction Methods
DataHub automatically selects the best available method:
Query Store (Preferred) - SQL Server 2016+
- Provides comprehensive query history
- Better performance and reliability
- Requires Query Store to be enabled
DMV Fallback - All supported versions
- Uses `sys.dm_exec_cached_plans` and related DMVs
- Limited to queries currently in plan cache
- Smaller query history window
The source will automatically detect and use the appropriate method based on your SQL Server version and configuration.
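The selection logic described above amounts to a simple decision: use Query Store when the server is 2016+ and Query Store is enabled, otherwise fall back to DMVs. A hypothetical sketch (not DataHub source code):

```python
def choose_extraction_method(major_version: int, query_store_enabled: bool) -> str:
    """Illustrative sketch of the selection logic described above.

    SQL Server 2016 corresponds to internal major version 13.
    """
    if major_version >= 13 and query_store_enabled:
        return "query_store"  # preferred: full history, better performance
    return "dmv"              # fallback: limited to the current plan cache
```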
Troubleshooting
Debug Mode
Enable debug logging to see detailed information about query extraction and parsing:
```shell
datahub ingest -c recipe.yml --debug
```
Look for these key log messages:
- `INFO: Extracted X queries from query_store` - Confirms queries were fetched
- `INFO: Processed X queries for lineage extraction (X failed)` - Shows parsing results
- `INFO: Generated X lineage workunits from queries` - Confirms lineage generation
- `DEBUG: Query extraction completed in X.XX seconds` - Performance metrics
Common Issues and Solutions
Issue: "Query Store is not enabled"
Solution: Follow the Query Store setup instructions above (see "Enable Query Store" section).
Issue: "VIEW SERVER STATE permission denied"
Solution: Follow the permission grant instructions above (see "Permission Requirements" section).
Issue: "SQL Server version 2014 detected, but 2016+ is required"
Solution:
- Upgrade to SQL Server 2016 or later for Query Store support
- The DMV fallback method has limited query history and is not recommended as the primary method
Issue: Query Store Returns No Results
Possible causes:
No queries in history:
- Queries may have been cleared from Query Store
- Query Store retention settings may be too aggressive
- Solution: Execute some queries and wait for them to appear in Query Store
All queries filtered by exclude patterns:
- Review your `query_exclude_patterns` configuration
- Solution: Reduce or remove overly broad exclusion patterns
Queries below min_query_calls threshold:
- Queries executed fewer times than `min_query_calls` are excluded
- Solution: Lower the `min_query_calls` value or execute queries more frequently
Query Store query capture mode:
- If set to `NONE` or `CUSTOM` with restrictive filters
- Solution: Set to `AUTO` or `ALL`:

```sql
ALTER DATABASE [YourDatabase]
SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);
```
Query Store data not yet captured:
- Query Store captures queries asynchronously
- Solution: Wait 5-10 minutes after query execution, then verify:
```sql
SELECT COUNT(*) AS query_count
FROM sys.query_store_query;
-- Should return > 0
```
Issue: Queries Extracted but No Lineage Appears
Symptoms:
- Logs show "Extracted X queries" but "Generated 0 lineage workunits"
- Or many queries show parsing failures
Possible causes:
SQL parsing errors:
- Complex SQL syntax not supported (CTEs, window functions, vendor-specific syntax)
- Solution: Check debug logs for `SqlUnderstandingError` or `UnsupportedStatementTypeError`
Tables filtered by patterns:
- Tables in queries match your `table_pattern` deny list
- Solution: Review your `table_pattern` and `schema_pattern` configuration
Tables not yet ingested:
- Lineage references tables that haven't been discovered yet
- Solution: Ensure base table ingestion runs before query lineage, or run ingestion twice
Database name mismatch:
- Queries reference different databases than configured
- Solution: Use `database: null` to ingest all databases, or adjust the `database` config
Debug steps:
```shell
# Run with debug logging
datahub ingest -c recipe.yml --debug 2>&1 | grep -i "lineage\|parsing\|query"

# Look for these patterns:
# - "Unable to parse query" - indicates SQL parsing issues
# - "Table X not found" - indicates missing base tables
# - "Filtered table" - indicates pattern matching issues
```
Issue: Authentication or Connection Errors
Error messages:

```
Login failed for user 'datahub_user'
Cannot open database "YourDatabase" requested by the login
```
Solutions:
Login failure:

```sql
-- Verify user exists and has correct password
SELECT name, type_desc, is_disabled
FROM sys.server_principals
WHERE name = 'datahub_user';
```

Database access denied:

```sql
-- Grant database access
USE [YourDatabase];
CREATE USER [datahub_user] FOR LOGIN [datahub_user];
GRANT CONNECT TO [datahub_user];
```

Connection timeout - increase the timeout in the connection string:

```yaml
source:
  config:
    uri_args:
      connect_timeout: 30
      timeout: 30
```
Issue: Temporary Tables Not Recognized
Symptoms:
- Lineage shows temp table names like `#temp_table` as actual tables
- Temp tables pollute your lineage graph

Solution:

MSSQL temp tables (starting with #) are automatically filtered by default. If you see temp tables in lineage:

Verify temp table patterns are configured:

```yaml
source:
  config:
    temporary_tables_pattern:
      - ".*#.*" # Built-in default pattern
```

Add custom temp table patterns:

```yaml
source:
  config:
    temporary_tables_pattern:
      - ".*#.*" # Standard SQL Server temp tables
      - ".*\.temp_.*" # Custom naming pattern
      - ".*\.staging_.*" # ETL staging tables
```
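The `temporary_tables_pattern` entries are ordinary regexes matched against the fully-qualified `database.schema.table` name. A small sketch (illustrative, not connector code) of how such filtering behaves:

```python
import re

# Patterns mirroring the docs above: names containing '#', plus hypothetical
# custom conventions for temp_ and staging_ tables
TEMP_PATTERNS = [r".*#.*", r".*\.temp_.*", r".*\.staging_.*"]

def is_temporary(table_name: str, patterns=TEMP_PATTERNS) -> bool:
    """Return True if the fully-qualified name matches any temp-table regex."""
    # fullmatch, since the patterns are written to cover the entire name
    return any(re.fullmatch(p, table_name) for p in patterns)
```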
Issue: Performance Problems (Slow Extraction or Hit Query Limit)
Symptoms:
- Query extraction takes >30 seconds
- Ingestion times out or hangs during query extraction
- Ingestion report shows exactly `max_queries_to_extract` queries processed
- Warning in logs: "Reached max_queries_to_extract limit"
Understanding the Problem:
When you hit the limit, only the top N queries by execution time are extracted. This means:
- Remaining queries are not processed for lineage
- Less frequently executed queries may be missed
- Lineage may be incomplete for less active tables
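The truncation behavior above can be sketched as a filter-sort-slice, assuming each query record carries a call count and a total execution time (the field names here are hypothetical, not DataHub's schema):

```python
def select_top_queries(queries, max_queries_to_extract=1000, min_query_calls=1):
    """Illustrative sketch: keep queries meeting the call threshold,
    rank by total execution time, and truncate at the limit."""
    eligible = [q for q in queries if q["calls"] >= min_query_calls]
    ranked = sorted(eligible, key=lambda q: q["total_exec_time_ms"], reverse=True)
    # Everything past the limit is dropped, so its lineage is never extracted
    return ranked[:max_queries_to_extract]
```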
Performance Tuning Solutions:

Reduce query limit (if experiencing slowness):

```yaml
source:
  config:
    max_queries_to_extract: 500 # Reduced from default 1000
```

Increase query limit (if you need more coverage and can afford the time):

```yaml
source:
  config:
    max_queries_to_extract: 5000 # Increased from default 1000
```

Filter out noise with exclude patterns (see the comprehensive list in the Configuration section above).

Focus on frequently-executed queries:

```yaml
source:
  config:
    min_query_calls: 10 # Only extract queries executed 10+ times
```

Check Query Store performance:
```sql
-- Check Query Store size and query count
SELECT
    actual_state_desc,
    readonly_reason,
    current_storage_size_mb,
    max_storage_size_mb,
    query_count = (SELECT COUNT(*) FROM sys.query_store_query)
FROM sys.database_query_store_options;
```
Recommendations:
- Start with `max_queries_to_extract: 1000` (default)
- Monitor extraction time in debug logs
- Adjust based on your requirements and performance constraints
- Use exclude patterns to filter unnecessary queries before hitting the limit
Issue: DMV Fallback But Empty Results
Symptoms:
- Logs show "falling back to DMV-based extraction"
- But then "Extracted 0 queries from dmv"
Cause: DMVs only contain queries currently in the plan cache. The cache clears on:
- SQL Server restart
- Memory pressure
- Manual cache clearing
Solutions:
Enable Query Store (recommended) - See "Enable Query Store" section above
Execute representative queries before ingestion:
- Run your typical ETL jobs
- Execute common queries
- Wait 5-10 minutes, then run ingestion
Check plan cache size:
```sql
SELECT
    size_in_bytes/1024/1024 AS cache_size_mb,
    name,
    type
FROM sys.dm_os_memory_clerks
WHERE type = 'CACHESTORE_SQLCP';
```
Issue: Configuration Validation Errors
Error messages:

```
ValidationError: max_queries_to_extract must be positive
ValidationError: query_exclude_patterns cannot exceed 100 patterns
```

Valid ranges:
- `max_queries_to_extract`: 1 to 10000
- `min_query_calls`: 0 or any positive integer
- `query_exclude_patterns`: maximum 100 patterns, each up to 500 characters
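The documented ranges can be expressed as a small validation routine; this is an illustrative sketch of the rules above, not DataHub's actual validator:

```python
def validate_query_config(max_queries_to_extract, min_query_calls, query_exclude_patterns):
    """Check the documented ranges for query-lineage config (illustrative sketch)."""
    if not (1 <= max_queries_to_extract <= 10000):
        raise ValueError("max_queries_to_extract must be between 1 and 10000")
    if min_query_calls < 0:
        raise ValueError("min_query_calls must be >= 0")
    if len(query_exclude_patterns) > 100:
        raise ValueError("query_exclude_patterns cannot exceed 100 patterns")
    if any(len(p) > 500 for p in query_exclude_patterns):
        raise ValueError("each exclude pattern must be at most 500 characters")
```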
Issue: SQL Aggregator Initialization Failed
Error message:

```
RuntimeError: Failed to initialize SQL aggregator for query-based lineage,
but include_query_lineage: true was explicitly enabled
```

Possible causes:
- Graph connection missing when `include_usage_statistics: true`
- Invalid platform instance configuration

Solution:
- If using `include_usage_statistics`, ensure your sink is configured with a DataHub GMS connection
- Verify your source configuration is valid
How to Verify Query Lineage is Working End-to-End
Follow these steps to confirm everything is configured correctly:
Verify permissions:

```sql
-- Should return at least one row with VIEW SERVER STATE
SELECT * FROM fn_my_permissions(NULL, 'SERVER')
WHERE permission_name = 'VIEW SERVER STATE';
```

Verify Query Store has data (see "Verify Query Store Status" section above).

Test Query Store access:

```sql
-- Test Query Store access
SELECT TOP 1 query_id, query_sql_text
FROM sys.query_store_query_text;

-- Test DMV access
SELECT TOP 1 sql_handle
FROM sys.dm_exec_query_stats;
```

Run ingestion with debug logging:

```shell
datahub ingest -c recipe.yml --debug 2>&1 | tee ingestion.log
```

Check for success indicators in logs:

```shell
grep -i "extracted.*queries" ingestion.log
grep -i "processed.*queries" ingestion.log
grep -i "generated.*lineage workunits" ingestion.log
```

Verify in DataHub UI:
- Navigate to a table that appears in your queries
- Check the "Lineage" tab for upstream/downstream relationships
- Check the "Queries" tab to see associated SQL queries
Still Having Issues?
If you've tried the above steps and are still experiencing issues:
Collect diagnostic information:

```shell
# Run ingestion with debug logging
datahub ingest -c recipe.yml --debug > ingestion.log 2>&1

# Check SQL Server version
# Check Query Store status
# Check user permissions
```

Check DataHub GitHub issues:
- Search for similar problems: https://github.com/datahub-project/datahub/issues
Ask for help:
- Include your DataHub version
- Include SQL Server version
- Include sanitized config (remove passwords!)
- Include relevant log excerpts
- Describe expected vs actual behavior
Best Practices
- Start Conservative: Begin with `max_queries_to_extract: 1000` and `min_query_calls: 5`
- Monitor Query Store Size: Set appropriate retention policies
- Use Exclude Patterns: Filter out system queries and temporary tables (see comprehensive list in Configuration section)
- Regular Extraction: Run ingestion at least daily for accurate usage statistics
- Test Before Production: Validate permissions and Query Store setup in a non-production environment first
Performance Considerations
- Query Store Impact: Query Store has minimal overhead when configured appropriately
- Extraction Performance: Query Store typically performs better than DMV method
- Storage: Query Store storage usage depends on retention settings and query volume
- Parsing Time: Scales with query complexity and volume; monitor debug logs for timing
CLI based Ingestion
Starter Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide.
```yaml
source:
  type: mssql
  config:
    # Coordinates
    host_port: localhost:1433
    database: DemoDatabase

    # Credentials
    username: user
    password: pass

    # This is recommended to improve lineage quality. Ignores case-sensitivity when constructing internal identifiers.
    convert_urns_to_lowercase: True

    # Options
    # Uncomment if you need to use encryption with pytds
    # See https://python-tds.readthedocs.io/en/latest/pytds.html#pytds.connect
    # options:
    #   connect_args:
    #     cafile: server-ca.pem
    #     validate_host: true

sink:
  # sink configs
```
```yaml
# ------------------------------------------------------------------------
# Example: with query-based lineage and usage statistics
# This requires SQL Server 2016+ and Query Store enabled.
# See the prerequisites documentation for setup instructions.
# ------------------------------------------------------------------------
source:
  type: mssql
  config:
    # Coordinates
    host_port: localhost:1433
    database: DemoDatabase

    # Credentials
    username: user
    password: pass

    # Query-based lineage extraction
    include_query_lineage: true
    max_queries_to_extract: 1000
    min_query_calls: 5
    query_exclude_patterns:
      - "%sys.%"
      - "%tempdb.%"
      - "%INFORMATION_SCHEMA%"
      - "%msdb.%"
      - "%#%" # Temporary tables

sink:
  # sink configs
```
```yaml
# ------------------------------------------------------------------------
# Example: using ingestion with ODBC and encryption
# This requires you to have already installed the Microsoft ODBC Driver for SQL Server.
# See https://docs.microsoft.com/en-us/sql/connect/python/pyodbc/step-1-configure-development-environment-for-pyodbc-python-development?view=sql-server-ver15
# Note: ODBC mode is automatically determined by the source type.
# Use 'mssql-odbc' for ODBC connections, 'mssql' for non-ODBC (pytds) connections.
# ------------------------------------------------------------------------
source:
  type: mssql-odbc
  config:
    # Coordinates
    host_port: localhost:1433
    database: DemoDatabase

    # Credentials
    username: admin
    password: password

    # Options
    uri_args:
      driver: "ODBC Driver 18 for SQL Server"
      Encrypt: "yes"
      TrustServerCertificate: "Yes"
      ssl: "True"

sink:
  # sink configs
```
Config Details
Note that a . is used to denote nested fields in the YAML recipe.
| Field | Description |
|---|---|
bucket_duration Enum | One of: "DAY", "HOUR" |
convert_urns_to_lowercase boolean | Enable to convert the SQL Server assets urns to lowercase Default: False |
database One of string, null | database (catalog). If set to Null, all databases will be considered for ingestion. Default: None |
end_time string(date-time) | Latest date of lineage/usage to consider. Default: Current time in UTC |
format_sql_queries boolean | Whether to format sql queries Default: False |
host_port string | MSSQL host URL. Default: localhost:1433 |
include_containers_for_pipelines boolean | Enable the container aspects ingestion for both pipelines and tasks. Note that this feature requires the corresponding model support in the backend, which was introduced in version 0.15.0.1. Default: False |
include_descriptions boolean | Include table descriptions information. Default: True |
include_jobs boolean | Include ingest of MSSQL Jobs. Requires access to the 'msdb' and 'sys' schema. Default: True |
include_lineage boolean | Enable lineage extraction for stored procedures Default: True |
include_operational_stats boolean | Whether to display operational stats. Default: True |
include_query_lineage boolean | Enable query-based lineage extraction from Query Store or DMVs. Query Store is preferred (SQL Server 2016+) and must be enabled on the database. Falls back to DMV-based extraction (sys.dm_exec_cached_plans) for older versions. Requires VIEW SERVER STATE permission. See documentation for setup instructions. Default: False |
include_read_operational_stats boolean | Whether to report read operational stats. Experimental. Default: False |
include_stored_procedures boolean | Include ingest of stored procedures. Requires access to the 'sys' schema. Default: True |
include_stored_procedures_code boolean | Include information about object code. Default: True |
include_table_location_lineage boolean | If the source supports it, include table lineage to the underlying storage location. Default: True |
include_tables boolean | Whether tables should be ingested. Default: True |
include_top_n_queries boolean | Whether to ingest the top_n_queries. Default: True |
include_usage_statistics boolean | Generate usage statistics from query history. Requires include_query_lineage to be enabled. Collects metrics like unique user counts, query frequencies, and column access patterns. Statistics appear in DataHub UI under the Dataset Profile > Usage tab. Default: False |
include_view_column_lineage boolean | Populates column-level lineage for view->view and table->view lineage using DataHub's sql parser. Requires include_view_lineage to be enabled. Default: True |
include_view_lineage boolean | Populates view->view and table->view lineage using DataHub's sql parser. Default: True |
include_views boolean | Whether views should be ingested. Default: True |
incremental_lineage boolean | When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run. Default: False |
is_aws_rds One of boolean, null | Indicates if the SQL Server instance is running on AWS RDS. When None (default), automatic detection will be attempted using server name analysis. Default: None |
max_queries_to_extract integer | Maximum number of queries to extract for lineage analysis. Queries are prioritized by execution time and frequency. Default: 1000 |
min_query_calls integer | Minimum number of executions required for a query to be included. Set higher to focus on frequently-used queries. Default: 1 |
options object | Any options specified here will be passed to SQLAlchemy.create_engine as kwargs. To set connection arguments in the URL, specify them under connect_args. |
password One of string(password), null | password Default: None |
platform_instance One of string, null | The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details. Default: None |
quote_schemas boolean | Represents schema identifiers combined with quoting preferences. See sqlalchemy quoted_name docs. Default: False |
sqlalchemy_uri One of string, null | URI of database to connect to. See https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls. Takes precedence over other connection parameters. Default: None |
start_time string(date-time) | Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'. Default: None |
top_n_queries integer | Number of top queries to save to each table. Default: 10 |
uri_args map(str,string) | |
use_file_backed_cache boolean | Whether to use a file backed cache for the view definitions. Default: True |
username One of string, null | username Default: None |
env string | The environment that all assets produced by this connector belong to Default: PROD |
database_pattern AllowDenyPattern | A class to store allow deny regexes |
database_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
domain map(str,AllowDenyPattern) | A class to store allow deny regexes |
domain.key.allow array | List of regex patterns to include in ingestion Default: ['.*'] |
domain.key.allow.string string | |
domain.key.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
domain.key.deny array | List of regex patterns to exclude from ingestion. Default: [] |
domain.key.deny.string string | |
procedure_pattern AllowDenyPattern | A class to store allow deny regexes |
procedure_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
profile_pattern AllowDenyPattern | A class to store allow deny regexes |
profile_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
query_exclude_patterns One of array, null | SQL LIKE patterns to exclude from query extraction. Example: ['%sys.%', '%temp_%'] to exclude system and temp tables. Default: None |
query_exclude_patterns.string string | |
schema_pattern AllowDenyPattern | A class to store allow deny regexes |
schema_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
table_pattern AllowDenyPattern | A class to store allow deny regexes |
table_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
temporary_tables_pattern array | [Advanced] Regex patterns for temporary tables to filter in lineage ingestion. Specify regex to match the entire table name in database.schema.table format. Defaults are set in such a way as to ignore the temporary staging tables created by known ETL tools. Default: ['.*\.FIVETRAN_.*_STAGING\..*', '.*__DBT_TMP$', ... |
temporary_tables_pattern.string string | |
user_email_pattern AllowDenyPattern | A class to store allow deny regexes |
user_email_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
view_pattern AllowDenyPattern | A class to store allow deny regexes |
view_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
classification ClassificationConfig | |
classification.enabled boolean | Whether classification should be used to auto-detect glossary terms Default: False |
classification.info_type_to_term map(str,string) | |
classification.max_workers integer | Number of worker processes to use for classification. Set to 1 to disable. Default: 4 |
classification.sample_size integer | Number of sample values used for classification. Default: 100 |
classification.classifiers array | Classifiers to use to auto-detect glossary terms. If more than one classifier, infotype predictions from the classifier defined later in sequence take precedence. Default: [{'type': 'datahub', 'config': None}] |
classification.classifiers.DynamicTypedClassifierConfig DynamicTypedClassifierConfig | |
classification.classifiers.DynamicTypedClassifierConfig.type ❓ string | The type of the classifier to use. For DataHub, use datahub |
classification.classifiers.DynamicTypedClassifierConfig.config One of object, null | The configuration required for initializing the classifier. If not specified, uses defaults for the classifier type. Default: None |
classification.column_pattern AllowDenyPattern | A class to store allow deny regexes |
classification.column_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
classification.table_pattern AllowDenyPattern | A class to store allow deny regexes |
classification.table_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
profiling GEProfilingConfig | |
profiling.catch_exceptions boolean | Default: True |
profiling.enabled boolean | Whether profiling should be done. Default: False |
profiling.field_sample_values_limit integer | Upper limit for number of sample values to collect for all columns. Default: 20 |
profiling.include_field_distinct_count boolean | Whether to profile for the number of distinct values for each column. Default: True |
profiling.include_field_distinct_value_frequencies boolean | Whether to profile for distinct value frequencies. Default: False |
profiling.include_field_histogram boolean | Whether to profile for the histogram for numeric fields. Default: False |
profiling.include_field_max_value boolean | Whether to profile for the max value of numeric columns. Default: True |
profiling.include_field_mean_value boolean | Whether to profile for the mean value of numeric columns. Default: True |
profiling.include_field_median_value boolean | Whether to profile for the median value of numeric columns. Default: True |
profiling.include_field_min_value boolean | Whether to profile for the min value of numeric columns. Default: True |
profiling.include_field_null_count boolean | Whether to profile for the number of nulls for each column. Default: True |
profiling.include_field_quantiles boolean | Whether to profile for the quantiles of numeric columns. Default: False |
profiling.include_field_sample_values boolean | Whether to profile for the sample values for all columns. Default: True |
profiling.include_field_stddev_value boolean | Whether to profile for the standard deviation of numeric columns. Default: True |
profiling.limit One of integer, null | Max number of documents to profile. By default, profiles all documents. Default: None |
profiling.max_number_of_fields_to_profile One of integer, null | A positive integer that specifies the maximum number of columns to profile for any table. None implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up. Default: None |
profiling.max_workers integer | Number of worker threads to use for profiling. Set to 1 to disable. Default: 20 |
profiling.method Enum | One of: "ge", "sqlalchemy" Default: ge |
profiling.offset One of integer, null | Offset in documents to profile. By default, uses no offset. Default: None |
profiling.partition_datetime One of string(date-time), null | If specified, profile only the partition which matches this datetime. If not specified, profile the latest partition. Only BigQuery supports this. Default: None |
profiling.partition_profiling_enabled boolean | Whether to profile partitioned tables. Only BigQuery and AWS Athena support this. If enabled, the latest partition's data is used for profiling. Default: True |
profiling.profile_external_tables boolean | Whether to profile external tables. Only Snowflake and Redshift support this. Default: False |
profiling.profile_if_updated_since_days One of number, null | Profile a table only if it has been updated within this many days. If set to null, no last-modified-time constraint is applied. Supported only in Snowflake and BigQuery. Default: None |
profiling.profile_nested_fields boolean | Whether to profile complex types like structs, arrays and maps. Default: False |
profiling.profile_table_level_only boolean | Whether to perform profiling at table-level only, or include column-level profiling as well. Default: False |
profiling.profile_table_row_count_estimate_only boolean | Use an approximate query for row count. This will be much faster but slightly less accurate. Only supported for Postgres and MySQL. Default: False |
profiling.profile_table_row_limit One of integer, null | Profile tables only if their row count is less than the specified count. If set to null, no limit on the row count of tables to profile. Supported only in Snowflake and BigQuery. Supported for Oracle based on gathered stats. Default: 5000000 |
profiling.profile_table_size_limit One of integer, null | Profile tables only if their size is less than the specified number of GBs. If set to null, no limit on the size of tables to profile. Supported only in Snowflake, BigQuery and Databricks. Supported for Oracle based on calculated size from gathered stats. Default: 5 |
profiling.query_combiner_enabled boolean | This feature is still experimental and can be disabled if it causes issues. Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible. Default: True |
profiling.report_dropped_profiles boolean | Whether to report datasets or dataset columns which were not profiled. Set to True for debugging purposes. Default: False |
profiling.sample_size integer | Number of rows to be sampled from the table for column-level profiling. Applicable only if use_sampling is set to True. Default: 10000 |
profiling.turn_off_expensive_profiling_metrics boolean | Whether to turn off expensive profiling or not. This turns off profiling for quantiles, distinct_value_frequencies, histogram & sample_values. This also limits maximum number of fields being profiled to 10. Default: False |
profiling.use_sampling boolean | Whether to profile column level stats on sample of table. Only BigQuery and Snowflake support this. If enabled, profiling is done on rows sampled from table. Sampling is not done for smaller tables. Default: True |
profiling.operation_config OperationConfig | |
profiling.operation_config.lower_freq_profile_enabled boolean | Whether to do profiling at lower freq or not. This does not do any scheduling just adds additional checks to when not to run profiling. Default: False |
profiling.operation_config.profile_date_of_month One of integer, null | Number between 1 and 31 for date of month (both inclusive). If not specified, this field does not take effect. Default: None |
profiling.operation_config.profile_day_of_week One of integer, null | Number between 0 and 6 for day of week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, this field does not take effect. Default: None |
profiling.tags_to_ignore_sampling One of array, null | Fixed list of tags to ignore sampling. If not specified, tables will be sampled based on use_sampling. Default: None |
profiling.tags_to_ignore_sampling.string string | |
stateful_ingestion One of StatefulStaleMetadataRemovalConfig, null | Default: None |
stateful_ingestion.enabled boolean | Whether or not to enable stateful ingestion. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False. Default: False |
stateful_ingestion.fail_safe_threshold number | Prevents a large number of soft deletes and prevents the state from committing after accidental changes to the source configuration: the run fails if the relative change in entity count compared to the previous state exceeds the fail_safe_threshold percentage. Default: 75.0 |
stateful_ingestion.remove_stale_metadata boolean | Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled. Default: True |
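The options above combine into a standard ingestion recipe. A minimal sketch (host, database, and credential values are placeholders) that enables profiling, classification, and stateful ingestion might look like:

```yaml
source:
  type: mssql
  config:
    host_port: my-sql-server:1433   # placeholder
    database: DemoDatabase          # placeholder; omit to ingest all databases
    username: datahub_reader        # placeholder
    password: "${MSSQL_PASSWORD}"   # resolve from an environment variable
    profiling:
      enabled: true
      profile_table_level_only: false
      max_workers: 10
    classification:
      enabled: true
      classifiers:
        - type: datahub
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true
```

Remember that stateful ingestion also requires a `pipeline_name` at the top level of the recipe, and that profiling and classification add query load to the server, so scope them with `profile_pattern` and `classification.table_pattern` on large instances.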
The JSONSchema for this configuration is inlined below.
{
"$defs": {
"AllowDenyPattern": {
"additionalProperties": false,
"description": "A class to store allow deny regexes",
"properties": {
"allow": {
"default": [
".*"
],
"description": "List of regex patterns to include in ingestion",
"items": {
"type": "string"
},
"title": "Allow",
"type": "array"
},
"deny": {
"default": [],
"description": "List of regex patterns to exclude from ingestion.",
"items": {
"type": "string"
},
"title": "Deny",
"type": "array"
},
"ignoreCase": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "null"
}
],
"default": true,
"description": "Whether to ignore case sensitivity during pattern matching.",
"title": "Ignorecase"
}
},
"title": "AllowDenyPattern",
"type": "object"
},
"BucketDuration": {
"enum": [
"DAY",
"HOUR"
],
"title": "BucketDuration",
"type": "string"
},
"ClassificationConfig": {
"additionalProperties": false,
"properties": {
"enabled": {
"default": false,
"description": "Whether classification should be used to auto-detect glossary terms",
"title": "Enabled",
"type": "boolean"
},
"sample_size": {
"default": 100,
"description": "Number of sample values used for classification.",
"title": "Sample Size",
"type": "integer"
},
"max_workers": {
"default": 4,
"description": "Number of worker processes to use for classification. Set to 1 to disable.",
"title": "Max Workers",
"type": "integer"
},
"table_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter tables for classification. This is used in combination with other patterns in parent config. Specify regex to match the entire table name in `database.schema.table` format. e.g. to match all tables starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*'"
},
"column_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter columns for classification. This is used in combination with other patterns in parent config. Specify regex to match the column name in `database.schema.table.column` format."
},
"info_type_to_term": {
"additionalProperties": {
"type": "string"
},
"default": {},
"description": "Optional mapping to provide glossary term identifier for info type",
"title": "Info Type To Term",
"type": "object"
},
"classifiers": {
"default": [
{
"type": "datahub",
"config": null
}
],
        "description": "Classifiers to use to auto-detect glossary terms. If more than one classifier, infotype predictions from the classifier defined later in sequence take precedence.",
"items": {
"$ref": "#/$defs/DynamicTypedClassifierConfig"
},
"title": "Classifiers",
"type": "array"
}
},
"title": "ClassificationConfig",
"type": "object"
},
"DynamicTypedClassifierConfig": {
"additionalProperties": false,
"properties": {
"type": {
"description": "The type of the classifier to use. For DataHub, use `datahub`",
"title": "Type",
"type": "string"
},
"config": {
"anyOf": [
{},
{
"type": "null"
}
],
"default": null,
          "description": "The configuration required for initializing the classifier. If not specified, uses defaults for the classifier type.",
"title": "Config"
}
},
"required": [
"type"
],
"title": "DynamicTypedClassifierConfig",
"type": "object"
},
"GEProfilingConfig": {
"additionalProperties": false,
"properties": {
"method": {
"default": "ge",
"description": "Profiling method to use. Options: `ge` (Great Expectations) or `sqlalchemy` (custom SQLAlchemy-based profiler). The SQLAlchemy profiler has no GE dependency and provides the same functionality.",
"enum": [
"ge",
"sqlalchemy"
],
"title": "Method",
"type": "string"
},
"enabled": {
"default": false,
"description": "Whether profiling should be done.",
"title": "Enabled",
"type": "boolean"
},
"operation_config": {
"$ref": "#/$defs/OperationConfig",
"description": "Experimental feature. To specify operation configs."
},
"limit": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Max number of documents to profile. By default, profiles all documents.",
"title": "Limit"
},
"offset": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset in documents to profile. By default, uses no offset.",
"title": "Offset"
},
"profile_table_level_only": {
"default": false,
"description": "Whether to perform profiling at table-level only, or include column-level profiling as well.",
"title": "Profile Table Level Only",
"type": "boolean"
},
"include_field_null_count": {
"default": true,
"description": "Whether to profile for the number of nulls for each column.",
"title": "Include Field Null Count",
"type": "boolean"
},
"include_field_distinct_count": {
"default": true,
"description": "Whether to profile for the number of distinct values for each column.",
"title": "Include Field Distinct Count",
"type": "boolean"
},
"include_field_min_value": {
"default": true,
"description": "Whether to profile for the min value of numeric columns.",
"title": "Include Field Min Value",
"type": "boolean"
},
"include_field_max_value": {
"default": true,
"description": "Whether to profile for the max value of numeric columns.",
"title": "Include Field Max Value",
"type": "boolean"
},
"include_field_mean_value": {
"default": true,
"description": "Whether to profile for the mean value of numeric columns.",
"title": "Include Field Mean Value",
"type": "boolean"
},
"include_field_median_value": {
"default": true,
"description": "Whether to profile for the median value of numeric columns.",
"title": "Include Field Median Value",
"type": "boolean"
},
"include_field_stddev_value": {
"default": true,
"description": "Whether to profile for the standard deviation of numeric columns.",
"title": "Include Field Stddev Value",
"type": "boolean"
},
"include_field_quantiles": {
"default": false,
"description": "Whether to profile for the quantiles of numeric columns.",
"title": "Include Field Quantiles",
"type": "boolean"
},
"include_field_distinct_value_frequencies": {
"default": false,
"description": "Whether to profile for distinct value frequencies.",
"title": "Include Field Distinct Value Frequencies",
"type": "boolean"
},
"include_field_histogram": {
"default": false,
"description": "Whether to profile for the histogram for numeric fields.",
"title": "Include Field Histogram",
"type": "boolean"
},
"include_field_sample_values": {
"default": true,
"description": "Whether to profile for the sample values for all columns.",
"title": "Include Field Sample Values",
"type": "boolean"
},
"max_workers": {
"default": 20,
"description": "Number of worker threads to use for profiling. Set to 1 to disable.",
"title": "Max Workers",
"type": "integer"
},
"report_dropped_profiles": {
"default": false,
"description": "Whether to report datasets or dataset columns which were not profiled. Set to `True` for debugging purposes.",
"title": "Report Dropped Profiles",
"type": "boolean"
},
"turn_off_expensive_profiling_metrics": {
"default": false,
"description": "Whether to turn off expensive profiling or not. This turns off profiling for quantiles, distinct_value_frequencies, histogram & sample_values. This also limits maximum number of fields being profiled to 10.",
"title": "Turn Off Expensive Profiling Metrics",
"type": "boolean"
},
"field_sample_values_limit": {
"default": 20,
"description": "Upper limit for number of sample values to collect for all columns.",
"title": "Field Sample Values Limit",
"type": "integer"
},
"max_number_of_fields_to_profile": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "A positive integer that specifies the maximum number of columns to profile for any table. `None` implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up.",
"title": "Max Number Of Fields To Profile"
},
"profile_if_updated_since_days": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
      "description": "Profile a table only if it has been updated within this many days. If set to `null`, no last-modified-time constraint is applied. Supported only in `snowflake` and `BigQuery`.",
"schema_extra": {
"supported_sources": [
"snowflake",
"bigquery"
]
},
"title": "Profile If Updated Since Days"
},
"profile_table_size_limit": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": 5,
"description": "Profile tables only if their size is less than specified GBs. If set to `null`, no limit on the size of tables to profile. Supported only in `Snowflake`, `BigQuery` and `Databricks`. Supported for `Oracle` based on calculated size from gathered stats.",
"schema_extra": {
"supported_sources": [
"snowflake",
"bigquery",
"unity-catalog",
"oracle"
]
},
"title": "Profile Table Size Limit"
},
"profile_table_row_limit": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": 5000000,
"description": "Profile tables only if their row count is less than specified count. If set to `null`, no limit on the row count of tables to profile. Supported only in `Snowflake`, `BigQuery`. Supported for `Oracle` based on gathered stats.",
"schema_extra": {
"supported_sources": [
"snowflake",
"bigquery",
"oracle"
]
},
"title": "Profile Table Row Limit"
},
"profile_table_row_count_estimate_only": {
"default": false,
"description": "Use an approximate query for row count. This will be much faster but slightly less accurate. Only supported for Postgres and MySQL. ",
"schema_extra": {
"supported_sources": [
"postgres",
"mysql"
]
},
"title": "Profile Table Row Count Estimate Only",
"type": "boolean"
},
"query_combiner_enabled": {
"default": true,
"description": "*This feature is still experimental and can be disabled if it causes issues.* Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible.",
"title": "Query Combiner Enabled",
"type": "boolean"
},
"catch_exceptions": {
"default": true,
"description": "",
"title": "Catch Exceptions",
"type": "boolean"
},
"partition_profiling_enabled": {
"default": true,
      "description": "Whether to profile partitioned tables. Only BigQuery and AWS Athena support this. If enabled, the latest partition's data is used for profiling.",
"schema_extra": {
"supported_sources": [
"athena",
"bigquery"
]
},
"title": "Partition Profiling Enabled",
"type": "boolean"
},
"partition_datetime": {
"anyOf": [
{
"format": "date-time",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
      "description": "If specified, profile only the partition which matches this datetime. If not specified, profile the latest partition. Only BigQuery supports this.",
"schema_extra": {
"supported_sources": [
"bigquery"
]
},
"title": "Partition Datetime"
},
"use_sampling": {
"default": true,
"description": "Whether to profile column level stats on sample of table. Only BigQuery and Snowflake support this. If enabled, profiling is done on rows sampled from table. Sampling is not done for smaller tables. ",
"schema_extra": {
"supported_sources": [
"bigquery",
"snowflake"
]
},
"title": "Use Sampling",
"type": "boolean"
},
"sample_size": {
"default": 10000,
      "description": "Number of rows to be sampled from the table for column-level profiling. Applicable only if `use_sampling` is set to True.",
"schema_extra": {
"supported_sources": [
"bigquery",
"snowflake"
]
},
"title": "Sample Size",
"type": "integer"
},
"profile_external_tables": {
"default": false,
      "description": "Whether to profile external tables. Only Snowflake and Redshift support this.",
"schema_extra": {
"supported_sources": [
"redshift",
"snowflake"
]
},
"title": "Profile External Tables",
"type": "boolean"
},
"tags_to_ignore_sampling": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Fixed list of tags to ignore sampling. If not specified, tables will be sampled based on `use_sampling`.",
"title": "Tags To Ignore Sampling"
},
"profile_nested_fields": {
"default": false,
"description": "Whether to profile complex types like structs, arrays and maps. ",
"title": "Profile Nested Fields",
"type": "boolean"
}
},
"title": "GEProfilingConfig",
"type": "object"
},
"OperationConfig": {
"additionalProperties": false,
"properties": {
"lower_freq_profile_enabled": {
"default": false,
"description": "Whether to do profiling at lower freq or not. This does not do any scheduling just adds additional checks to when not to run profiling.",
"title": "Lower Freq Profile Enabled",
"type": "boolean"
},
"profile_day_of_week": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
      "description": "Number between 0 and 6 for day of week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, this field does not take effect.",
"title": "Profile Day Of Week"
},
"profile_date_of_month": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
      "description": "Number between 1 and 31 for date of month (both inclusive). If not specified, this field does not take effect.",
"title": "Profile Date Of Month"
}
},
"title": "OperationConfig",
"type": "object"
},
"StatefulStaleMetadataRemovalConfig": {
"additionalProperties": false,
"description": "Base specialized config for Stateful Ingestion with stale metadata removal capability.",
"properties": {
"enabled": {
"default": false,
"description": "Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or `datahub_api` is specified, otherwise False",
"title": "Enabled",
"type": "boolean"
},
"remove_stale_metadata": {
"default": true,
"description": "Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.",
"title": "Remove Stale Metadata",
"type": "boolean"
},
"fail_safe_threshold": {
"default": 75.0,
      "description": "Prevents a large number of soft deletes and prevents the state from committing after accidental changes to the source configuration: the run fails if the relative change in entity count compared to the previous state exceeds the 'fail_safe_threshold' percentage.",
"maximum": 100.0,
"minimum": 0.0,
"title": "Fail Safe Threshold",
"type": "number"
}
},
"title": "StatefulStaleMetadataRemovalConfig",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"bucket_duration": {
"$ref": "#/$defs/BucketDuration",
"default": "DAY",
"description": "Size of the time window to aggregate usage stats."
},
"end_time": {
"description": "Latest date of lineage/usage to consider. Default: Current time in UTC",
"format": "date-time",
"title": "End Time",
"type": "string"
},
"start_time": {
"default": null,
"description": "Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on `bucket_duration`). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'.",
"format": "date-time",
"title": "Start Time",
"type": "string"
},
"top_n_queries": {
"default": 10,
"description": "Number of top queries to save to each table.",
"exclusiveMinimum": 0,
"title": "Top N Queries",
"type": "integer"
},
"user_email_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "regex patterns for user emails to filter in usage."
},
"include_operational_stats": {
"default": true,
"description": "Whether to display operational stats.",
"title": "Include Operational Stats",
"type": "boolean"
},
"include_read_operational_stats": {
"default": false,
"description": "Whether to report read operational stats. Experimental.",
"title": "Include Read Operational Stats",
"type": "boolean"
},
"format_sql_queries": {
"default": false,
"description": "Whether to format sql queries",
"title": "Format Sql Queries",
"type": "boolean"
},
"include_top_n_queries": {
"default": true,
"description": "Whether to ingest the top_n_queries.",
"title": "Include Top N Queries",
"type": "boolean"
},
"schema_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns for schemas to filter in ingestion. Specify regex to only match the schema name. e.g. to match all tables in schema analytics, use the regex 'analytics'"
},
"table_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns for tables to filter in ingestion. Specify regex to match the entire table name in database.schema.table format. e.g. to match all tables starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*'"
},
"view_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns for views to filter in ingestion. Note: Defaults to table_pattern if not specified. Specify regex to match the entire view name in database.schema.view format. e.g. to match all views starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*'"
},
"classification": {
"$ref": "#/$defs/ClassificationConfig",
"default": {
"enabled": false,
"sample_size": 100,
"max_workers": 4,
"table_pattern": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"column_pattern": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"info_type_to_term": {},
"classifiers": [
{
"config": null,
"type": "datahub"
}
]
},
"description": "For details, refer to [Classification](../../../../metadata-ingestion/docs/dev_guides/classification.md)."
},
"incremental_lineage": {
"default": false,
"description": "When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run.",
"title": "Incremental Lineage",
"type": "boolean"
},
"convert_urns_to_lowercase": {
"default": false,
"description": "Enable to convert the SQL Server assets urns to lowercase",
"title": "Convert Urns To Lowercase",
"type": "boolean"
},
"env": {
"default": "PROD",
"description": "The environment that all assets produced by this connector belong to",
"title": "Env",
"type": "string"
},
"platform_instance": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.",
"title": "Platform Instance"
},
"stateful_ingestion": {
"anyOf": [
{
"$ref": "#/$defs/StatefulStaleMetadataRemovalConfig"
},
{
"type": "null"
}
],
"default": null
},
"options": {
"additionalProperties": true,
"description": "Any options specified here will be passed to [SQLAlchemy.create_engine](https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine) as kwargs. To set connection arguments in the URL, specify them under `connect_args`.",
"title": "Options",
"type": "object"
},
"profile_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter tables (or specific columns) for profiling during ingestion. Note that only tables allowed by the `table_pattern` will be considered."
},
"domain": {
"additionalProperties": {
"$ref": "#/$defs/AllowDenyPattern"
},
"default": {},
      "description": "Attach domains to databases, schemas or tables during ingestion using regex patterns. A domain key can be a guid like *urn:li:domain:ec428203-ce86-4db3-985d-5a8ee6df32ba* or a string like \"Marketing\". If you provide strings, DataHub will attempt to resolve each name to a guid, and will error out if this fails. There can be multiple domain keys specified.",
"title": "Domain",
"type": "object"
},
"include_views": {
"default": true,
"description": "Whether views should be ingested.",
"title": "Include Views",
"type": "boolean"
},
"include_tables": {
"default": true,
"description": "Whether tables should be ingested.",
"title": "Include Tables",
"type": "boolean"
},
"include_table_location_lineage": {
"default": true,
"description": "If the source supports it, include table lineage to the underlying storage location.",
"title": "Include Table Location Lineage",
"type": "boolean"
},
"include_view_lineage": {
"default": true,
"description": "Populates view->view and table->view lineage using DataHub's sql parser.",
"title": "Include View Lineage",
"type": "boolean"
},
"include_view_column_lineage": {
"default": true,
"description": "Populates column-level lineage for view->view and table->view lineage using DataHub's sql parser. Requires `include_view_lineage` to be enabled.",
"title": "Include View Column Lineage",
"type": "boolean"
},
"use_file_backed_cache": {
"default": true,
"description": "Whether to use a file backed cache for the view definitions.",
"title": "Use File Backed Cache",
"type": "boolean"
},
"profiling": {
"$ref": "#/$defs/GEProfilingConfig",
"default": {
"method": "ge",
"enabled": false,
"operation_config": {
"lower_freq_profile_enabled": false,
"profile_date_of_month": null,
"profile_day_of_week": null
},
"limit": null,
"offset": null,
"profile_table_level_only": false,
"include_field_null_count": true,
"include_field_distinct_count": true,
"include_field_min_value": true,
"include_field_max_value": true,
"include_field_mean_value": true,
"include_field_median_value": true,
"include_field_stddev_value": true,
"include_field_quantiles": false,
"include_field_distinct_value_frequencies": false,
"include_field_histogram": false,
"include_field_sample_values": true,
"max_workers": 20,
"report_dropped_profiles": false,
"turn_off_expensive_profiling_metrics": false,
"field_sample_values_limit": 20,
"max_number_of_fields_to_profile": null,
"profile_if_updated_since_days": null,
"profile_table_size_limit": 5,
"profile_table_row_limit": 5000000,
"profile_table_row_count_estimate_only": false,
"query_combiner_enabled": true,
"catch_exceptions": true,
"partition_profiling_enabled": true,
"partition_datetime": null,
"use_sampling": true,
"sample_size": 10000,
"profile_external_tables": false,
"tags_to_ignore_sampling": null,
"profile_nested_fields": false
}
},
"username": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "username",
"title": "Username"
},
"password": {
"anyOf": [
{
"format": "password",
"type": "string",
"writeOnly": true
},
{
"type": "null"
}
],
"default": null,
"description": "password",
"title": "Password"
},
"host_port": {
"default": "localhost:1433",
"description": "MSSQL host URL.",
"title": "Host Port",
"type": "string"
},
"database": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "database (catalog). If set to Null, all databases will be considered for ingestion.",
"title": "Database"
},
"sqlalchemy_uri": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URI of database to connect to. See https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls. Takes precedence over other connection parameters.",
"title": "Sqlalchemy Uri"
},
"include_stored_procedures": {
"default": true,
"description": "Include ingest of stored procedures. Requires access to the 'sys' schema.",
"title": "Include Stored Procedures",
"type": "boolean"
},
"include_stored_procedures_code": {
"default": true,
"description": "Include information about object code.",
"title": "Include Stored Procedures Code",
"type": "boolean"
},
"procedure_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
      "description": "Regex patterns for stored procedures to filter in ingestion. Specify regex to match the entire procedure name in database.schema.procedure_name format. e.g. to match all procedures starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*'"
},
"include_jobs": {
"default": true,
"description": "Include ingest of MSSQL Jobs. Requires access to the 'msdb' and 'sys' schema.",
"title": "Include Jobs",
"type": "boolean"
},
"include_descriptions": {
"default": true,
"description": "Include table descriptions information.",
"title": "Include Descriptions",
"type": "boolean"
},
"uri_args": {
"additionalProperties": {
"type": "string"
},
"default": {},
"description": "Arguments to URL-encode when connecting. See https://docs.microsoft.com/en-us/sql/connect/odbc/dsn-connection-string-attribute?view=sql-server-ver15.",
"title": "Uri Args",
"type": "object"
},
"database_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns for databases to filter in ingestion."
},
"include_lineage": {
"default": true,
"description": "Enable lineage extraction for stored procedures",
"title": "Include Lineage",
"type": "boolean"
},
"include_containers_for_pipelines": {
"default": false,
"description": "Enable the container aspects ingestion for both pipelines and tasks. Note that this feature requires the corresponding model support in the backend, which was introduced in version 0.15.0.1.",
"title": "Include Containers For Pipelines",
"type": "boolean"
},
"temporary_tables_pattern": {
"default": [
".*\\.FIVETRAN_.*_STAGING\\..*",
".*__DBT_TMP$",
".*\\.SEGMENT_[a-f0-9]{8}[-_][a-f0-9]{4}[-_][a-f0-9]{4}[-_][a-f0-9]{4}[-_][a-f0-9]{12}",
".*\\.STAGING_.*_[a-f0-9]{8}[-_][a-f0-9]{4}[-_][a-f0-9]{4}[-_][a-f0-9]{4}[-_][a-f0-9]{12}",
".*\\.(GE_TMP_|GE_TEMP_|GX_TEMP_)[0-9A-F]{8}"
],
      "description": "[Advanced] Regex patterns for temporary tables to filter in lineage ingestion. Specify regex to match the entire table name in database.schema.table format. Defaults are set to ignore the temporary staging tables created by known ETL tools.",
"items": {
"type": "string"
},
"title": "Temporary Tables Pattern",
"type": "array"
},
"quote_schemas": {
"default": false,
      "description": "Represents schema identifiers combined with quoting preferences. See [sqlalchemy quoted_name docs](https://docs.sqlalchemy.org/en/20/core/sqlelement.html#sqlalchemy.sql.expression.quoted_name).",
"title": "Quote Schemas",
"type": "boolean"
},
"is_aws_rds": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "null"
}
],
"default": null,
"description": "Indicates if the SQL Server instance is running on AWS RDS. When None (default), automatic detection will be attempted using server name analysis.",
"title": "Is Aws Rds"
},
"include_query_lineage": {
"default": false,
"description": "Enable query-based lineage extraction from Query Store or DMVs. Query Store is preferred (SQL Server 2016+) and must be enabled on the database. Falls back to DMV-based extraction (sys.dm_exec_cached_plans) for older versions. Requires VIEW SERVER STATE permission. See documentation for setup instructions.",
"title": "Include Query Lineage",
"type": "boolean"
},
"max_queries_to_extract": {
"default": 1000,
"description": "Maximum number of queries to extract for lineage analysis. Queries are prioritized by execution time and frequency.",
"title": "Max Queries To Extract",
"type": "integer"
},
"min_query_calls": {
"default": 1,
"description": "Minimum number of executions required for a query to be included. Set higher to focus on frequently-used queries.",
"title": "Min Query Calls",
"type": "integer"
},
"query_exclude_patterns": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "SQL LIKE patterns to exclude from query extraction. Example: ['%sys.%', '%temp_%'] to exclude system and temp tables.",
"title": "Query Exclude Patterns"
},
"include_usage_statistics": {
"default": false,
"description": "Generate usage statistics from query history. Requires include_query_lineage to be enabled. Collects metrics like unique user counts, query frequencies, and column access patterns. Statistics appear in DataHub UI under the Dataset Profile > Usage tab.",
"title": "Include Usage Statistics",
"type": "boolean"
}
},
"title": "SQLServerConfig",
"type": "object"
}
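To illustrate how several of the options above fit together, here is a sketch of a DataHub ingestion recipe. The host, credentials, database names, and sink endpoint are placeholders, and the `uri_args` values assume the Microsoft ODBC Driver 18 is installed; adjust them for your environment.

```yaml
source:
  type: mssql-odbc          # use mssql-odbc when encryption is required (e.g., Azure SQL)
  config:
    host_port: "my-sqlserver.example.com:1433"   # placeholder host
    username: "datahub_reader"                   # placeholder credentials
    password: "${MSSQL_PASSWORD}"
    # ODBC connection-string attributes, URL-encoded into the connection URI
    uri_args:
      driver: "ODBC Driver 18 for SQL Server"
      Encrypt: "yes"
      TrustServerCertificate: "no"
    database_pattern:
      allow:
        - "Sales.*"          # only ingest databases starting with "Sales"
      deny:
        - ".*_archive"
    include_jobs: true
    include_lineage: true
    include_query_lineage: true   # requires VIEW SERVER STATE permission
    max_queries_to_extract: 500
    min_query_calls: 5            # focus on frequently-executed queries
sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"   # placeholder DataHub GMS endpoint
```

Setting `min_query_calls` above 1 is a practical way to keep query-based lineage focused on recurring workloads rather than one-off ad hoc queries.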
Code Coordinates
- Class Name: datahub.ingestion.source.sql.mssql.source.SQLServerSource - Browse on GitHub
Questions
If you have any questions about configuring ingestion for Microsoft SQL Server, feel free to ping us on our Slack.