Microsoft SQL Server

Certified

Important Capabilities

Capability | Status / Notes
Asset Containers | Enabled by default. Supported for types: Database, Schema.
Classification | Optionally enabled via classification.enabled.
Column-level Lineage | Enabled by default; covers stored procedures via include_lineage and views via include_view_column_lineage. Supported for types: Stored Procedure, View.
Data Profiling | Optionally enabled via configuration.
Descriptions | Enabled by default.
Detect Deleted Entities | Enabled by default via stateful ingestion.
Domains | Supported via the domain config field.
Platform Instance | Enabled by default.
Schema Metadata | Enabled by default.
Table-Level Lineage | Enabled by default; covers stored procedures via include_lineage and views via include_view_lineage. Supported for types: Stored Procedure, View.
Test Connection | Enabled by default.

This plugin extracts the following:

  • Metadata for databases, schemas, views and tables
  • Column types associated with each table/view
  • Table, row, and column statistics via optional SQL profiling

Two source types are available:

  • mssql: Uses the python-tds library (pure Python, easier to install)
  • mssql-odbc: Uses the pyodbc library (required for encryption and Azure managed services)

If you need encryption (e.g., for Azure SQL), use source type mssql-odbc and configure uri_args with your ODBC driver settings.

Prerequisites

If you want to ingest MSSQL Jobs and stored procedures (with code), the user credentials need the proper privileges.

The DataHub MSSQL source automatically detects your environment and uses the optimal method:

  • RDS/Managed SQL Server: Uses stored procedures (recommended for managed environments)
  • On-premises SQL Server: Uses direct table access (typically faster when available)
  • Automatic fallback: If the primary method fails, it automatically tries the alternative approach

Permissions for All Environments

-- Core permissions for stored procedures (required)
USE msdb;
GRANT SELECT ON OBJECT::msdb.dbo.sysjobsteps TO [USERNAME];
GRANT SELECT ON OBJECT::msdb.dbo.sysjobs TO [USERNAME];

-- Stored procedure permissions (required for RDS/managed environments)
GRANT EXECUTE ON msdb.dbo.sp_help_job TO [USERNAME];
GRANT EXECUTE ON msdb.dbo.sp_help_jobstep TO [USERNAME];

-- Permissions for stored procedure code and lineage
USE [DATA_DB_NAME];
GRANT VIEW DEFINITION TO [USERNAME];
GRANT SELECT ON OBJECT::sys.sql_expression_dependencies TO [USERNAME];

RDS SQL Server Specific Notes

For Amazon RDS SQL Server environments, the stored procedure approach is preferred and typically the only method that works due to restricted table access. Ensure the following permissions are granted:

-- Essential for RDS environments
GRANT EXECUTE ON msdb.dbo.sp_help_job TO [USERNAME];
GRANT EXECUTE ON msdb.dbo.sp_help_jobstep TO [USERNAME];

Production Recommendation: Explicitly set is_aws_rds: true in your configuration to avoid automatic detection issues:

source:
  type: mssql
  config:
    is_aws_rds: true  # Recommended for RDS environments
    # ... other config

The connector automatically detects RDS environments by analyzing the server name (e.g., *.rds.amazonaws.com), but explicit configuration is more reliable and avoids potential false positives or negatives.
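The hostname heuristic can be sketched roughly as follows. This is an illustrative sketch only; the function name and the exact rule are assumptions, not the connector's actual code:

```python
import re

def looks_like_aws_rds(host_port: str) -> bool:
    """Heuristic: treat hosts under *.rds.amazonaws.com as RDS instances.

    Illustrative sketch only -- the real connector may consider
    additional signals beyond the hostname suffix.
    """
    host = host_port.split(":")[0].lower()
    return re.search(r"\.rds\.amazonaws\.com$", host) is not None

print(looks_like_aws_rds("mydb.abc123.us-east-1.rds.amazonaws.com:1433"))  # True
print(looks_like_aws_rds("sqlserver.internal.corp:1433"))                  # False
```

A suffix match like this can misfire on unusual DNS setups (e.g., a CNAME hiding the RDS endpoint), which is exactly why the explicit `is_aws_rds` setting is more reliable.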

On-Premises SQL Server Notes

For on-premises SQL Server installations, direct table access is typically available and faster. The source will automatically use direct queries when possible, with stored procedures as fallback.

Production Recommendation: Explicitly set is_aws_rds: false to ensure optimal performance:

source:
  type: mssql
  config:
    is_aws_rds: false  # Recommended for on-premises
    # ... other config

Troubleshooting Permissions

If you encounter permission errors:

  1. RDS environments: Ensure stored procedure execute permissions are granted
  2. On-premises environments: Verify both table select and stored procedure execute permissions
  3. Mixed environments: Grant all permissions listed above for maximum compatibility

The DataHub source will automatically handle fallback between methods and provide detailed error messages with specific permission requirements if issues occur.


Query-Based Lineage and Usage Statistics

DataHub can extract lineage and usage statistics by analyzing SQL queries executed on your MS SQL Server. This provides insights into:

  • Table-level lineage: Understand which tables are read from and written to
  • Column-level lineage: Track how data flows between specific columns
  • Usage patterns: Identify frequently accessed tables and user activity
  • Query performance: Monitor execution counts and timing

Known Limitations

  • User attribution not supported: SQL Server Query Store and DMVs do not preserve historical user session context. Query extraction focuses on query content, frequency, and performance metrics.

Prerequisites for Query-Based Lineage

SQL Server Version Requirements

Query-based lineage requires SQL Server 2016 or later.
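If you are unsure which version you are running, SERVERPROPERTY reports it; major version 13 corresponds to SQL Server 2016, 14 to 2017, and 15 to 2019 (ProductMajorVersion is available on recent versions; ProductVersion works everywhere):

```sql
-- Check the SQL Server version; major version >= 13 means 2016 or later
SELECT
    SERVERPROPERTY('ProductVersion') AS product_version,
    SERVERPROPERTY('ProductMajorVersion') AS major_version;
```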

Permission Requirements

Grant the DataHub user VIEW SERVER STATE permission:

USE master;
GRANT VIEW SERVER STATE TO [datahub_user];
GO

This permission is required to access:

  • Query Store (preferred method, SQL Server 2016+)
  • Dynamic Management Views (DMVs) as fallback

Verify permissions:

-- Check server-level permissions
SELECT
    state_desc,
    permission_name
FROM sys.server_permissions
WHERE grantee_principal_id = SUSER_ID('datahub_user');
-- Should show VIEW SERVER STATE

Query Store is the preferred method for query extraction. It provides better query history retention and performance.

Enable Query Store

-- Enable Query Store for your database
ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;

-- Configure Query Store settings (recommended production values)
-- Source: SQL Server defaults since 2016/2017, based on Microsoft's extensive testing
-- Reference: https://learn.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store
ALTER DATABASE [YourDatabase] SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    DATA_FLUSH_INTERVAL_SECONDS = 900,  -- 15 min: balances durability vs I/O overhead
    INTERVAL_LENGTH_MINUTES = 60,       -- 1 hour: sufficient granularity for lineage, lower storage overhead
    MAX_STORAGE_SIZE_MB = 1000,         -- 1 GB: ~30-90 days retention for typical workloads (100-1000 queries/day)
    QUERY_CAPTURE_MODE = AUTO,          -- Captures frequent queries only; filters ad-hoc noise
    SIZE_BASED_CLEANUP_MODE = AUTO      -- Auto-removes old queries at capacity; prevents read-only mode
);

Adjust for your environment:

  • Low volume (<100 queries/day): MAX_STORAGE_SIZE_MB = 100
  • High volume (>10,000 queries/day): MAX_STORAGE_SIZE_MB = 2000, INTERVAL_LENGTH_MINUTES = 30
  • Dev/Test: MAX_STORAGE_SIZE_MB = 100
  • OLTP with ad-hoc queries: QUERY_CAPTURE_MODE = ALL (monitor storage)

Verify Query Store Status

SELECT name, is_query_store_on
FROM sys.databases
WHERE name = 'YourDatabase';

-- From within the database, check the Query Store state
USE [YourDatabase];
SELECT actual_state_desc
FROM sys.database_query_store_options;

Expected output:

  • is_query_store_on: 1 (enabled)
  • actual_state_desc: "READ_WRITE" (active)

Configuration

Enable query-based lineage in your DataHub recipe:

source:
  type: mssql
  config:
    host_port: localhost:1433
    database: YourDatabase
    username: datahub_user
    password: your_password

    # Enable query-based lineage extraction
    include_query_lineage: true

    # Maximum number of queries to extract (default: 1000, max: 10000)
    max_queries_to_extract: 1000

    # Minimum query execution count to include (default: 1)
    # Higher values reduce noise from rarely-executed queries
    min_query_calls: 5

    # Exclude system and temporary queries (comprehensive list)
    query_exclude_patterns:
      - "%sys.%"                 # System tables
      - "%tempdb.%"              # Temp database
      - "%INFORMATION_SCHEMA%"   # Metadata views
      - "%msdb.%"                # SQL Agent database
      - "%master.%"              # Master database
      - "%model.%"               # Model database
      - "%#%"                    # Temporary tables
      - "%sp_reset_connection%"  # JDBC connection resets
      - "%SET ANSI_%"            # Connection setup
      - "%SELECT @@%"            # Driver metadata queries
      - "%ReportServer%"         # SSRS system tables

    # Enable usage statistics (requires graph connection)
    # include_usage_statistics: true

sink:
  # Your sink config

Configuration Options

Option | Type | Default | Description
include_query_lineage | boolean | false | Enable query-based lineage extraction
max_queries_to_extract | integer | 1000 | Maximum queries to analyze (range: 1-10000)
min_query_calls | integer | 1 | Minimum execution count to include query
query_exclude_patterns | list[string] | [] | SQL LIKE patterns to exclude queries (max 100 patterns)
include_usage_statistics | boolean | false | Extract usage statistics (requires include_query_lineage: true)
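The query_exclude_patterns values use SQL LIKE semantics: % matches any run of characters and _ a single character. A rough Python sketch of how such matching behaves (illustrative only, not the connector's implementation):

```python
import re

def like_to_regex(pattern: str) -> re.Pattern:
    """Translate a SQL LIKE pattern into a case-insensitive regex."""
    out = []
    for ch in pattern:
        if ch == "%":
            out.append(".*")   # % -> any run of characters
        elif ch == "_":
            out.append(".")    # _ -> any single character
        else:
            out.append(re.escape(ch))
    return re.compile("^" + "".join(out) + "$", re.IGNORECASE)

def is_excluded(query: str, patterns: list[str]) -> bool:
    """Return True if the query text matches any exclude pattern."""
    return any(like_to_regex(p).match(query) for p in patterns)

patterns = ["%sys.%", "%tempdb.%", "%#%"]
print(is_excluded("SELECT * FROM sys.objects", patterns))  # True
print(is_excluded("SELECT * FROM dbo.orders", patterns))   # False
```

Because the patterns are anchored to the whole query text, an exclusion like "%sys.%" needs the leading and trailing % to match a table reference anywhere in the statement.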

Query Extraction Methods

DataHub automatically selects the best available method:

  1. Query Store (Preferred) - SQL Server 2016+

    • Provides comprehensive query history
    • Better performance and reliability
    • Requires Query Store to be enabled
  2. DMV Fallback - All supported versions

    • Uses sys.dm_exec_cached_plans and related DMVs
    • Limited to queries currently in plan cache
    • Smaller query history window

The source will automatically detect and use the appropriate method based on your SQL Server version and configuration.
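The selection logic described above can be sketched as follows; the function name and inputs are assumptions for illustration, not the connector's actual code:

```python
def choose_extraction_method(major_version: int, query_store_enabled: bool) -> str:
    """Prefer Query Store on SQL Server 2016+ (major version >= 13);
    otherwise fall back to the DMV-based plan cache approach.

    Illustrative sketch of the documented behavior only.
    """
    if major_version >= 13 and query_store_enabled:
        return "query_store"
    return "dmv"

print(choose_extraction_method(15, True))   # query_store
print(choose_extraction_method(15, False))  # dmv (Query Store disabled)
print(choose_extraction_method(11, True))   # dmv (SQL Server 2012)
```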

Troubleshooting

Debug Mode

Enable debug logging to see detailed information about query extraction and parsing:

datahub ingest -c recipe.yml --debug

Look for these key log messages:

  • INFO: Extracted X queries from query_store - Confirms queries were fetched
  • INFO: Processed X queries for lineage extraction (X failed) - Shows parsing results
  • INFO: Generated X lineage workunits from queries - Confirms lineage generation
  • DEBUG: Query extraction completed in X.XX seconds - Performance metrics

Common Issues and Solutions

Issue: "Query Store is not enabled"

Solution: Follow the Query Store setup instructions above (see "Enable Query Store" section).

Issue: "VIEW SERVER STATE permission denied"

Solution: Follow the permission grant instructions above (see "Permission Requirements" section).

Issue: "SQL Server version 2014 detected, but 2016+ is required"

Solution:

  • Upgrade to SQL Server 2016 or later for Query Store support
  • The DMV fallback method has limited query history and is not recommended as the primary method

Issue: Query Store Returns No Results

Possible causes:

  1. No queries in history:

    • Queries may have been cleared from Query Store
    • Query Store retention settings may be too aggressive
    • Solution: Execute some queries and wait for them to appear in Query Store
  2. All queries filtered by exclude patterns:

    • Review your query_exclude_patterns configuration
    • Solution: Reduce or remove overly broad exclusion patterns
  3. Queries below min_query_calls threshold:

    • Queries executed fewer times than min_query_calls are excluded
    • Solution: Lower the min_query_calls value or execute queries more frequently
  4. Query Store query capture mode:

    • If set to NONE or CUSTOM with restrictive filters
    • Solution: Set to AUTO or ALL:
      ALTER DATABASE [YourDatabase]
      SET QUERY_STORE (QUERY_CAPTURE_MODE = AUTO);
  5. Query Store data not yet captured:

    • Query Store captures queries asynchronously
    • Solution: Wait 5-10 minutes after query execution, then verify:
      SELECT COUNT(*) AS query_count
      FROM sys.query_store_query;
      -- Should return > 0

Issue: Queries Extracted but No Lineage Appears

Symptoms:

  • Logs show "Extracted X queries" but "Generated 0 lineage workunits"
  • Or many queries show parsing failures

Possible causes:

  1. SQL parsing errors:

    • Complex SQL syntax not supported (CTEs, window functions, vendor-specific syntax)
    • Solution: Check debug logs for SqlUnderstandingError or UnsupportedStatementTypeError
  2. Tables filtered by patterns:

    • Tables in queries match your table_pattern deny list
    • Solution: Review your table_pattern and schema_pattern configuration
  3. Tables not yet ingested:

    • Lineage references tables that haven't been discovered yet
    • Solution: Ensure base table ingestion runs before query lineage, or run ingestion twice
  4. Database name mismatch:

    • Queries reference different databases than configured
    • Solution: Use database: null to ingest all databases, or adjust database config

Debug steps:

# Run with debug logging
datahub ingest -c recipe.yml --debug 2>&1 | grep -i "lineage\|parsing\|query"

# Look for these patterns:
# - "Unable to parse query" - indicates SQL parsing issues
# - "Table X not found" - indicates missing base tables
# - "Filtered table" - indicates pattern matching issues

Issue: Authentication or Connection Errors

Error message:

Login failed for user 'datahub_user'
Cannot open database "YourDatabase" requested by the login

Solutions:

  1. Login failure:

    -- Verify user exists and has correct password
    SELECT name, type_desc, is_disabled
    FROM sys.server_principals
    WHERE name = 'datahub_user';
  2. Database access denied:

    -- Grant database access
    USE [YourDatabase];
    CREATE USER [datahub_user] FOR LOGIN [datahub_user];
    GRANT CONNECT TO [datahub_user];
  3. Connection timeout:

    • Increase timeout in connection string:
      source:
        config:
          uri_args:
            connect_timeout: 30
            timeout: 30

Issue: Temporary Tables Not Recognized

Symptoms:

  • Lineage shows temp table names like #temp_table as actual tables
  • Temp tables pollute your lineage graph

Solution:

MSSQL temp tables (starting with #) are automatically filtered by default. If you see temp tables in lineage:

  1. Verify temp table patterns are configured:

    source:
      config:
        temporary_tables_pattern:
          - ".*#.*" # Built-in default pattern
  2. Add custom temp table patterns:

    source:
      config:
        temporary_tables_pattern:
          - ".*#.*"          # Standard SQL Server temp tables
          - '.*\.temp_.*'    # Custom naming pattern
          - '.*\.staging_.*' # ETL staging tables

Issue: Performance Problems (Slow Extraction or Hit Query Limit)

Symptoms:

  • Query extraction takes >30 seconds
  • Ingestion times out or hangs during query extraction
  • Ingestion report shows exactly max_queries_to_extract queries processed
  • Warning in logs: "Reached max_queries_to_extract limit"

Understanding the Problem:

When you hit the limit, only the top N queries by execution time are extracted. This means:

  • Remaining queries are not processed for lineage
  • Less frequently executed queries may be missed
  • Lineage may be incomplete for less active tables

Performance Tuning Solutions:

  1. Reduce query limit (if experiencing slowness):

    source:
      config:
        max_queries_to_extract: 500 # Reduced from default 1000
  2. Increase query limit (if you need more coverage and can afford the time):

    source:
      config:
        max_queries_to_extract: 5000 # Increased from default 1000
  3. Filter out noise with exclude patterns (see comprehensive list in Configuration section above)

  4. Focus on frequently-executed queries:

    source:
      config:
        min_query_calls: 10 # Only extract queries executed 10+ times
  5. Check Query Store performance:

    -- Check Query Store size and query count (run within the database)
    SELECT
        actual_state_desc,
        readonly_reason,
        current_storage_size_mb,
        max_storage_size_mb,
        query_count = (SELECT COUNT(*) FROM sys.query_store_query)
    FROM sys.database_query_store_options;

Recommendations:

  • Start with max_queries_to_extract: 1000 (default)
  • Monitor extraction time in debug logs
  • Adjust based on your requirements and performance constraints
  • Use exclude patterns to filter unnecessary queries before hitting the limit

Issue: DMV Fallback But Empty Results

Symptoms:

  • Logs show "falling back to DMV-based extraction"
  • But then "Extracted 0 queries from dmv"

Cause: DMVs only contain queries currently in the plan cache. The cache clears on:

  • SQL Server restart
  • Memory pressure
  • Manual cache clearing

Solutions:

  1. Enable Query Store (recommended) - See "Enable Query Store" section above

  2. Execute representative queries before ingestion:

    • Run your typical ETL jobs
    • Execute common queries
    • Wait 5-10 minutes, then run ingestion
  3. Check plan cache size:

    SELECT
        type,
        name,
        pages_kb / 1024 AS cache_size_mb
    FROM sys.dm_os_memory_clerks
    WHERE type = 'CACHESTORE_SQLCP';

Issue: Configuration Validation Errors

Error messages:

ValidationError: max_queries_to_extract must be positive
ValidationError: query_exclude_patterns cannot exceed 100 patterns

Valid ranges:

  • max_queries_to_extract: 1 to 10000
  • min_query_calls: 0 to any positive integer
  • query_exclude_patterns: Maximum 100 patterns, each up to 500 characters

Issue: SQL Aggregator Initialization Failed

Error message:

RuntimeError: Failed to initialize SQL aggregator for query-based lineage,
but include_query_lineage: true was explicitly enabled

Possible causes:

  • Graph connection missing when include_usage_statistics: true
  • Invalid platform instance configuration

Solution:

  • If using include_usage_statistics, ensure your sink is configured with a DataHub GMS connection
  • Verify your source configuration is valid

How to Verify Query Lineage is Working End-to-End

Follow these steps to confirm everything is configured correctly:

  1. Verify permissions:

    -- Should return at least one row with VIEW SERVER STATE
    SELECT * FROM fn_my_permissions(NULL, 'SERVER')
    WHERE permission_name = 'VIEW SERVER STATE';
  2. Verify Query Store has data (see "Verify Query Store Status" section above)

  3. Test Query Store access:

    -- Test Query Store access
    SELECT TOP 1 query_id, query_sql_text
    FROM sys.query_store_query_text;

    -- Test DMV access
    SELECT TOP 1 sql_handle
    FROM sys.dm_exec_query_stats;
  4. Run ingestion with debug logging:

    datahub ingest -c recipe.yml --debug 2>&1 | tee ingestion.log
  5. Check for success indicators in logs:

    grep -i "extracted.*queries" ingestion.log
    grep -i "processed.*queries" ingestion.log
    grep -i "generated.*lineage workunits" ingestion.log
  6. Verify in DataHub UI:

    • Navigate to a table that appears in your queries
    • Check the "Lineage" tab for upstream/downstream relationships
    • Check the "Queries" tab to see associated SQL queries

Still Having Issues?

If you've tried the above steps and are still experiencing issues:

  1. Collect diagnostic information:

    # Run ingestion with debug logging
    datahub ingest -c recipe.yml --debug > ingestion.log 2>&1

    # Check SQL Server version
    # Check Query Store status
    # Check user permissions
  2. Check DataHub GitHub issues for similar reports

  3. Ask for help:

    • Include your DataHub version
    • Include SQL Server version
    • Include sanitized config (remove passwords!)
    • Include relevant log excerpts
    • Describe expected vs actual behavior

Best Practices

  1. Start Conservative: Begin with max_queries_to_extract: 1000 and min_query_calls: 5
  2. Monitor Query Store Size: Set appropriate retention policies
  3. Use Exclude Patterns: Filter out system queries and temporary tables (see comprehensive list in Configuration section)
  4. Regular Extraction: Run ingestion at least daily for accurate usage statistics
  5. Test Before Production: Validate permissions and Query Store setup in a non-production environment first

Performance Considerations

  • Query Store Impact: Query Store has minimal overhead when configured appropriately
  • Extraction Performance: Query Store typically performs better than DMV method
  • Storage: Query Store storage usage depends on retention settings and query volume
  • Parsing Time: Scales with query complexity and volume; monitor debug logs for timing

CLI based Ingestion

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: mssql
  config:
    # Coordinates
    host_port: localhost:1433
    database: DemoDatabase

    # Credentials
    username: user
    password: pass

    # This is recommended to improve lineage quality. Ignores case-sensitivity when constructing internal identifiers.
    convert_urns_to_lowercase: True

    # Options
    # Uncomment if you need to use encryption with pytds
    # See https://python-tds.readthedocs.io/en/latest/pytds.html#pytds.connect
    # options:
    #   connect_args:
    #     cafile: server-ca.pem
    #     validate_host: true

sink:
  # sink configs

# ------------------------------------------------------------------------
# Example: with query-based lineage and usage statistics
# This requires SQL Server 2016+ and Query Store enabled.
# See the prerequisites documentation for setup instructions.
# ------------------------------------------------------------------------

source:
  type: mssql
  config:
    # Coordinates
    host_port: localhost:1433
    database: DemoDatabase

    # Credentials
    username: user
    password: pass

    # Query-based lineage extraction
    include_query_lineage: true
    max_queries_to_extract: 1000
    min_query_calls: 5
    query_exclude_patterns:
      - "%sys.%"
      - "%tempdb.%"
      - "%INFORMATION_SCHEMA%"
      - "%msdb.%"
      - "%#%" # Temporary tables

sink:
  # sink configs

# ------------------------------------------------------------------------
# Example: using ingestion with ODBC and encryption
# This requires you to have already installed the Microsoft ODBC Driver for SQL Server.
# See https://docs.microsoft.com/en-us/sql/connect/python/pyodbc/step-1-configure-development-environment-for-pyodbc-python-development?view=sql-server-ver15
#
# Note: ODBC mode is automatically determined by the source type.
# Use 'mssql-odbc' for ODBC connections, 'mssql' for non-ODBC (pytds) connections.
# ------------------------------------------------------------------------

source:
  type: mssql-odbc
  config:
    # Coordinates
    host_port: localhost:1433
    database: DemoDatabase

    # Credentials
    username: admin
    password: password

    # Options
    uri_args:
      driver: "ODBC Driver 18 for SQL Server"
      Encrypt: "yes"
      TrustServerCertificate: "Yes"
      ssl: "True"

sink:
  # sink configs

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

Field | Description
bucket_duration
Enum
One of: "DAY", "HOUR"
convert_urns_to_lowercase
boolean
Enable to convert the SQL Server assets urns to lowercase
Default: False
database
One of string, null
database (catalog). If set to Null, all databases will be considered for ingestion.
Default: None
end_time
string(date-time)
Latest date of lineage/usage to consider. Default: Current time in UTC
format_sql_queries
boolean
Whether to format sql queries
Default: False
host_port
string
MSSQL host URL.
Default: localhost:1433
include_containers_for_pipelines
boolean
Enable the container aspects ingestion for both pipelines and tasks. Note that this feature requires the corresponding model support in the backend, which was introduced in version 0.15.0.1.
Default: False
include_descriptions
boolean
Include table descriptions information.
Default: True
include_jobs
boolean
Include ingest of MSSQL Jobs. Requires access to the 'msdb' and 'sys' schema.
Default: True
include_lineage
boolean
Enable lineage extraction for stored procedures
Default: True
include_operational_stats
boolean
Whether to display operational stats.
Default: True
include_query_lineage
boolean
Enable query-based lineage extraction from Query Store or DMVs. Query Store is preferred (SQL Server 2016+) and must be enabled on the database. Falls back to DMV-based extraction (sys.dm_exec_cached_plans) for older versions. Requires VIEW SERVER STATE permission. See documentation for setup instructions.
Default: False
include_read_operational_stats
boolean
Whether to report read operational stats. Experimental.
Default: False
include_stored_procedures
boolean
Include ingest of stored procedures. Requires access to the 'sys' schema.
Default: True
include_stored_procedures_code
boolean
Include information about object code.
Default: True
include_table_location_lineage
boolean
If the source supports it, include table lineage to the underlying storage location.
Default: True
include_tables
boolean
Whether tables should be ingested.
Default: True
include_top_n_queries
boolean
Whether to ingest the top_n_queries.
Default: True
include_usage_statistics
boolean
Generate usage statistics from query history. Requires include_query_lineage to be enabled. Collects metrics like unique user counts, query frequencies, and column access patterns. Statistics appear in DataHub UI under the Dataset Profile > Usage tab.
Default: False
include_view_column_lineage
boolean
Populates column-level lineage for view->view and table->view lineage using DataHub's sql parser. Requires include_view_lineage to be enabled.
Default: True
include_view_lineage
boolean
Populates view->view and table->view lineage using DataHub's sql parser.
Default: True
include_views
boolean
Whether views should be ingested.
Default: True
incremental_lineage
boolean
When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run.
Default: False
is_aws_rds
One of boolean, null
Indicates if the SQL Server instance is running on AWS RDS. When None (default), automatic detection will be attempted using server name analysis.
Default: None
max_queries_to_extract
integer
Maximum number of queries to extract for lineage analysis. Queries are prioritized by execution time and frequency.
Default: 1000
min_query_calls
integer
Minimum number of executions required for a query to be included. Set higher to focus on frequently-used queries.
Default: 1
options
object
Any options specified here will be passed to SQLAlchemy.create_engine as kwargs. To set connection arguments in the URL, specify them under connect_args.
password
One of string(password), null
password
Default: None
platform_instance
One of string, null
The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.
Default: None
quote_schemas
boolean
Whether schema identifiers should be quoted. See the SQLAlchemy quoted_name docs.
Default: False
sqlalchemy_uri
One of string, null
URI of database to connect to. See https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls. Takes precedence over other connection parameters.
Default: None
start_time
string(date-time)
Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'.
Default: None
top_n_queries
integer
Number of top queries to save to each table.
Default: 10
uri_args
map(str,string)
use_file_backed_cache
boolean
Whether to use a file backed cache for the view definitions.
Default: True
username
One of string, null
username
Default: None
env
string
The environment that all assets produced by this connector belong to
Default: PROD
database_pattern
AllowDenyPattern
A class to store allow deny regexes
database_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
domain
map(str,AllowDenyPattern)
A class to store allow deny regexes
domain.key.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
domain.key.allow.string
string
domain.key.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
domain.key.deny
array
List of regex patterns to exclude from ingestion.
Default: []
domain.key.deny.string
string
procedure_pattern
AllowDenyPattern
A class to store allow deny regexes
procedure_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
profile_pattern
AllowDenyPattern
A class to store allow deny regexes
profile_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
query_exclude_patterns
One of array, null
SQL LIKE patterns to exclude from query extraction. Example: ['%sys.%', '%temp_%'] to exclude system and temp tables.
Default: None
query_exclude_patterns.string
string
schema_pattern
AllowDenyPattern
A class to store allow deny regexes
schema_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
table_pattern
AllowDenyPattern
A class to store allow deny regexes
table_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
temporary_tables_pattern
array
[Advanced] Regex patterns for temporary tables to filter in lineage ingestion. Specify a regex to match the entire table name in database.schema.table format. Defaults are set to ignore the temporary staging tables created by known ETL tools.
Default: ['.*\.FIVETRAN_.*_STAGING\..*', '.*__DBT_TMP$', ...
temporary_tables_pattern.string
string
user_email_pattern
AllowDenyPattern
A class to store allow deny regexes
user_email_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
view_pattern
AllowDenyPattern
A class to store allow deny regexes
view_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
classification
ClassificationConfig
classification.enabled
boolean
Whether classification should be used to auto-detect glossary terms
Default: False
classification.info_type_to_term
map(str,string)
classification.max_workers
integer
Number of worker processes to use for classification. Set to 1 to disable.
Default: 4
classification.sample_size
integer
Number of sample values used for classification.
Default: 100
classification.classifiers
array
Classifiers to use to auto-detect glossary terms. If more than one classifier is given, infotype predictions from the classifier defined later in the sequence take precedence.
Default: [{'type': 'datahub', 'config': None}]
classification.classifiers.DynamicTypedClassifierConfig
DynamicTypedClassifierConfig
classification.classifiers.DynamicTypedClassifierConfig.type 
string
The type of the classifier to use. For DataHub, use datahub
classification.classifiers.DynamicTypedClassifierConfig.config
One of object, null
The configuration required for initializing the classifier. If not specified, uses defaults for the classifier type.
Default: None
classification.column_pattern
AllowDenyPattern
A class to store allow deny regexes
classification.column_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
classification.table_pattern
AllowDenyPattern
A class to store allow deny regexes
classification.table_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
profiling
GEProfilingConfig
profiling.catch_exceptions
boolean
Default: True
profiling.enabled
boolean
Whether profiling should be done.
Default: False
profiling.field_sample_values_limit
integer
Upper limit for number of sample values to collect for all columns.
Default: 20
profiling.include_field_distinct_count
boolean
Whether to profile for the number of distinct values for each column.
Default: True
profiling.include_field_distinct_value_frequencies
boolean
Whether to profile for distinct value frequencies.
Default: False
profiling.include_field_histogram
boolean
Whether to profile for the histogram for numeric fields.
Default: False
profiling.include_field_max_value
boolean
Whether to profile for the max value of numeric columns.
Default: True
profiling.include_field_mean_value
boolean
Whether to profile for the mean value of numeric columns.
Default: True
profiling.include_field_median_value
boolean
Whether to profile for the median value of numeric columns.
Default: True
profiling.include_field_min_value
boolean
Whether to profile for the min value of numeric columns.
Default: True
profiling.include_field_null_count
boolean
Whether to profile for the number of nulls for each column.
Default: True
profiling.include_field_quantiles
boolean
Whether to profile for the quantiles of numeric columns.
Default: False
profiling.include_field_sample_values
boolean
Whether to profile for the sample values for all columns.
Default: True
profiling.include_field_stddev_value
boolean
Whether to profile for the standard deviation of numeric columns.
Default: True
profiling.limit
One of integer, null
Max number of documents to profile. By default, profiles all documents.
Default: None
profiling.max_number_of_fields_to_profile
One of integer, null
A positive integer that specifies the maximum number of columns to profile for any table. None implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up.
Default: None
profiling.max_workers
integer
Number of worker threads to use for profiling. Set to 1 to disable.
Default: 20
profiling.method
Enum
One of: "ge", "sqlalchemy"
Default: ge
profiling.offset
One of integer, null
Offset in documents to profile. By default, uses no offset.
Default: None
profiling.partition_datetime
One of string(date-time), null
If specified, profile only the partition which matches this datetime. If not specified, profile the latest partition. Only BigQuery supports this.
Default: None
profiling.partition_profiling_enabled
boolean
Whether to profile partitioned tables. Only BigQuery and AWS Athena support this. If enabled, the latest partition's data is used for profiling.
Default: True
profiling.profile_external_tables
boolean
Whether to profile external tables. Only Snowflake and Redshift support this.
Default: False
profiling.profile_if_updated_since_days
One of number, null
Profile a table only if it has been updated within this many days. If set to null, there is no last-modified-time constraint on which tables to profile. Supported only in Snowflake and BigQuery.
Default: None
profiling.profile_nested_fields
boolean
Whether to profile complex types like structs, arrays and maps.
Default: False
profiling.profile_table_level_only
boolean
Whether to perform profiling at table-level only, or include column-level profiling as well.
Default: False
profiling.profile_table_row_count_estimate_only
boolean
Use an approximate query for row count. This will be much faster but slightly less accurate. Only supported for Postgres and MySQL.
Default: False
profiling.profile_table_row_limit
One of integer, null
Profile tables only if their row count is less than the specified count. If set to null, no limit on the row count of tables to profile. Supported only in Snowflake and BigQuery. Supported for Oracle based on gathered stats.
Default: 5000000
profiling.profile_table_size_limit
One of integer, null
Profile tables only if their size is less than specified GBs. If set to null, no limit on the size of tables to profile. Supported only in Snowflake, BigQuery and Databricks. Supported for Oracle based on calculated size from gathered stats.
Default: 5
profiling.query_combiner_enabled
boolean
This feature is still experimental and can be disabled if it causes issues. Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible.
Default: True
profiling.report_dropped_profiles
boolean
Whether to report datasets or dataset columns which were not profiled. Set to True for debugging purposes.
Default: False
profiling.sample_size
integer
Number of rows to be sampled from the table for column-level profiling. Applicable only if use_sampling is set to True.
Default: 10000
profiling.turn_off_expensive_profiling_metrics
boolean
Whether to turn off expensive profiling or not. This turns off profiling for quantiles, distinct_value_frequencies, histogram & sample_values. This also limits maximum number of fields being profiled to 10.
Default: False
profiling.use_sampling
boolean
Whether to profile column level stats on sample of table. Only BigQuery and Snowflake support this. If enabled, profiling is done on rows sampled from table. Sampling is not done for smaller tables.
Default: True
profiling.operation_config
OperationConfig
profiling.operation_config.lower_freq_profile_enabled
boolean
Whether to profile at a lower frequency. This does not do any scheduling itself; it just adds additional checks for when not to run profiling.
Default: False
profiling.operation_config.profile_date_of_month
One of integer, null
Number between 1 and 31 (both inclusive) for the date of the month. If not specified, this field has no effect.
Default: None
profiling.operation_config.profile_day_of_week
One of integer, null
Number between 0 and 6 (both inclusive) for the day of the week, where 0 is Monday and 6 is Sunday. If not specified, this field has no effect.
Default: None
profiling.tags_to_ignore_sampling
One of array, null
Fixed list of tags; tables carrying any of these tags are excluded from sampling. If not specified, tables are sampled based on use_sampling.
Default: None
profiling.tags_to_ignore_sampling.string
string
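Putting the common profiling knobs together, a minimal recipe fragment might look like the following (connection details omitted; the specific values shown are illustrative choices, not defaults):

```yml
source:
  type: mssql
  config:
    # ... connection details ...
    profiling:
      enabled: true                           # profiling is off by default
      max_workers: 10                         # default is 20; set to 1 to disable threading
      turn_off_expensive_profiling_metrics: true  # skip quantiles, histograms, etc.
      profile_table_row_limit: 5000000        # skip very large tables
      operation_config:
        lower_freq_profile_enabled: true
        profile_day_of_week: 5                # Saturday (0 = Monday, 6 = Sunday)
```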
stateful_ingestion
One of StatefulStaleMetadataRemovalConfig, null
Default: None
stateful_ingestion.enabled
boolean
Whether or not to enable stateful ingestion. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False.
Default: False
stateful_ingestion.fail_safe_threshold
number
Prevents a large number of soft deletes, and prevents the state from committing, when accidental changes to the source configuration cause the relative change (in percent) in entities, compared to the previous state, to exceed the fail_safe_threshold.
Default: 75.0
stateful_ingestion.remove_stale_metadata
boolean
Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.
Default: True
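Since stateful ingestion requires a pipeline_name, a recipe enabling it would look roughly like this sketch (the pipeline name is a hypothetical example; connection details omitted):

```yml
pipeline_name: mssql_prod_ingestion   # required for stateful ingestion
source:
  type: mssql
  config:
    # ... connection details ...
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true     # soft-delete entities missing from the current run
      fail_safe_threshold: 75.0       # abort state commit on large entity-count swings
```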

Code Coordinates

  • Class Name: datahub.ingestion.source.sql.mssql.source.SQLServerSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Microsoft SQL Server, feel free to ping us on our Slack.