
PowerBI

Overview

Microsoft Power BI is a business intelligence and analytics platform. Learn more in the official Microsoft Power BI documentation.

The DataHub integration for Microsoft Power BI covers BI entities such as dashboards, charts, datasets, and related ownership context. Depending on module capabilities, it can also capture features such as lineage, usage, profiling, ownership, tags, and stateful deletion detection.

Concept Mapping

PowerBI -> DataHub
  • Dashboard -> Dashboard
  • Dataset's Table -> Dataset
  • Tile -> Chart
  • Report.webUrl -> Chart.externalUrl
  • Workspace -> Container
  • Report -> Dashboard
  • PaginatedReport -> Dashboard
  • Page -> Chart
  • App -> Dashboard
  • If Tile is created from report then Chart.externalUrl is set to Report.webUrl.
  • The Page is unavailable for PowerBI PaginatedReport.

Module powerbi

Certified

Important Capabilities

  • Asset Containers: Enabled by default. Supported for types Workspace and Semantic Model.
  • Column-level Lineage: Disabled by default; configured using extract_column_level_lineage.
  • Data Profiling: Optionally enabled via the profiling.enabled configuration.
  • Descriptions: Enabled by default.
  • Detect Deleted Entities: Enabled by default via stateful ingestion.
  • Extract Ownership: Enabled by default.
  • Extract Tags: Enabled by default.
  • Platform Instance: Enabled by default.
  • Schema Metadata: Enabled by default.
  • Table-Level Lineage: Enabled by default; configured using extract_lineage.
  • Test Connection: Enabled by default.

Overview

The powerbi module ingests metadata from Power BI into DataHub. It is intended for production ingestion workflows; module-specific capabilities are documented below.

This plugin extracts the following:

  • Power BI dashboards, tiles and datasets
  • Names, descriptions and URLs of dashboards and tiles
  • Owners of dashboards

Prerequisites

In order to execute this source, you will need a Microsoft Entra Application service principal, with permissions granted to it inside Power BI.

Power BI's APIs can be categorized into two sets of API methods, with different permission structures:

  • Public APIs are designed for developers to interact with specific resources within a tenant, and require the Entra application to be explicitly granted access to individual Workspaces.
  • The Admin APIs are designed for administrators to interact with the entire Power BI tenant at a high level, and return metadata on all Power BI resources.

The recommended way to execute Power BI ingestion is to do both: add your Entra application to the workspaces you want to ingest, and grant it access to both the Public and Admin APIs. That way, ingestion can extract the most metadata.

Public APIs ingestion

To grant public API access to your Entra application:

  1. Grant permissions to access Fabric public APIs: Add your Entra Application's parent Entra Group under your Power BI/Fabric tenant settings in order to grant API access.

    a. In Power BI or Fabric, go to Settings -> Admin portal

    b. In the Admin portal, navigate to Tenant settings

    c. Under Developer Settings, enable the option Service principals can call Fabric Public APIs (or Allow service principals to use Power BI APIs in older versions of Power BI), and add your application's Entra group under Specific security groups.

  2. Add your Entra application as a member of your Power BI workspaces: For workspaces which you want to ingest into DataHub, add the Entra application as a member. In most cases the Viewer role is enough, but profiling requires the Contributor role.

If you have granted your Entra application permissions to the Public APIs and added it as a member of a workspace, then the Power BI source will be able to ingest the below metadata of that particular workspace:

  • Dashboards
  • Dashboard Tiles
  • Reports
  • Report Pages

If you don't want to add the Entra application as a member of your workspaces, you can set admin_apis_only: true in your recipe to use the Power BI Admin API only. Caveats of setting admin_apis_only to true:

  • Report Pages will not get ingested as the page API is not available in the Power BI Admin API
  • Power BI Parameters will not get resolved to actual values while processing M-Query for table lineage
  • Dataset profiling is unavailable, as it requires access to the non-admin workspace API
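
If you go the Admin-APIs-only route, the relevant recipe fragment looks like the sketch below; the tenant and client values are placeholders:

source:
  type: powerbi
  config:
    tenant_id: "<your-tenant-id>"          # placeholder
    client_id: "<your-client-id>"          # placeholder
    client_secret: "<your-client-secret>"  # placeholder
    # Use the Admin API only; see the caveats above (no Report Pages,
    # no parameter resolution in M-Query lineage, no profiling).
    admin_apis_only: true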

Admin APIs ingestion

To grant admin API access to the Entra application:

  1. Grant permissions to access Admin APIs: Add your Entra Application's parent Entra Group under your Power BI/Fabric tenant settings in order to grant API access.

    a. In Power BI or Fabric, go to Settings -> Admin portal

    b. In the Admin portal, navigate to Tenant settings

    c. For each of the following options, enable the option and add your Entra application's Group under Specific security groups:

    • Service principals can access read-only admin APIs
    • Enhance admin APIs responses with detailed metadata
    • Enhance admin APIs responses with DAX and mashup expressions

If you have granted your Entra application permissions to the Admin APIs, then the Power BI source will be able to ingest the below listed metadata:

  • Lineage
  • Datasets
  • Endorsement as tag
  • Dashboards
  • Dashboard Tiles
  • Reports
  • Report Pages
  • App

Install the Plugin

pip install 'acryl-datahub[powerbi]'

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: "powerbi"
  config:
    # Your Power BI tenant identifier
    tenant_id: a949d688-67c0-4bf1-a344-e939411c6c0a

    # Microsoft Entra Application identifier
    client_id: 12345678-abcd-abcd-abcd-123456789012
    # Microsoft Entra Application client secret value
    client_secret: Abc12d~efg3hijkl_45Abcdefg67aBcdef89

    # Ingest elements of the below PowerBI workspaces into DataHub
    workspace_name_pattern:
      allow:
        - MyWorkspace
      deny:
        - PrivateWorkspace

    # Enable / disable ingestion of ownership information for dashboards
    extract_ownership: true

    # Enable / disable extracting workspace information to DataHub containers
    extract_workspaces_to_containers: true

    # Enable / disable ingestion of endorsements.
    # Please note that this may overwrite any existing tags on the ingested entities!
    extract_endorsements_to_tags: false

    # Optional -- only required to configure platform_instance for upstream tables.
    # A mapping of a PowerBI datasource's server, i.e. host[:port], to a data platform instance.
    # :port is optional and only needed if your datasource server runs on a non-standard port.
    # For Google BigQuery, the datasource's server is the BigQuery project name.
    # For Fabric OneLake (DirectLake lineage), use the PowerBI workspace ID as the key.
    # The platform_instance typically represents the Fabric tenant identifier, matching how
    # the OneLake connector uses platform_instance to group workspaces by tenant.
    server_to_platform_instance:
      ap-south-1.snowflakecomputing.com:
        platform_instance: operational_instance
        env: DEV
      oracle-server:1920:
        platform_instance: high_performance_production_unit
        env: PROD
      big-query-sales-project:
        platform_instance: sn-2
        env: QA
      # Fabric OneLake: workspace ID -> (platform_instance, env)
      # platform_instance is typically the Fabric tenant identifier
      ff23fbe3-7418-42f8-a675-9f10eb2b78cb: # PowerBI workspace ID
        platform_instance: contoso-tenant # Fabric tenant/platform instance
        env: PROD

    # Requires Admin APIs; only ingest workspaces that were modified since...
    modified_since: "2023-02-10T00:00:00.0000000Z"

    ownership:
      # Create PowerBI users as DataHub corpusers; false will still extract ownership of workspaces/dashboards
      create_corp_user: false
      # Use email to build the user urn instead of the PowerBI user identifier
      use_powerbi_email: true
      # Remove email suffixes like @acryl.io
      remove_email_suffix: true
      # Only ingest users with certain access rights
      owner_criteria: ["ReadWriteReshareExplore", "Owner", "Admin"]

    # Wrap PowerBI tables (DataHub datasets) under one PowerBI dataset (DataHub container)
    extract_datasets_to_containers: true

    # Only ingest datasets that are endorsed, e.g. "Certified"
    filter_dataset_endorsements:
      allow:
        - Certified

    # Extract PowerBI dashboards and tiles
    extract_dashboards: false
    # Extract PowerBI dataset table schemas
    extract_dataset_schema: true

    # Enable PowerBI dataset profiling
    profiling:
      enabled: false
    # Pattern to limit which resources to profile.
    # Matched resources have the following format:
    # workspace_name.dataset_name.table_name
    profile_pattern:
      deny:
        - .*

sink:
  # sink configs
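
For completeness, a common choice for the sink section is the datahub-rest sink; the server URL below is a placeholder for your DataHub GMS endpoint:

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"  # placeholder GMS endpoint
    # token: "<access-token>"        # if your DataHub instance requires authentication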

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

FieldDescription
client_id 
string
Azure app client identifier
client_secret 
string(password)
Azure app client secret
tenant_id 
string
PowerBI tenant identifier
admin_apis_only
boolean
Retrieve metadata using PowerBI Admin API only. If this is enabled, then Report Pages will not be extracted. Admin API access is required if this setting is enabled
Default: False
convert_lineage_urns_to_lowercase
boolean
Whether to convert the urns of ingested lineage dataset to lowercase
Default: True
convert_urns_to_lowercase
boolean
Whether to convert the PowerBI assets urns to lowercase
Default: False
dsn_to_database_schema
map(str,string)
dsn_to_platform_name
map(str,string)
enable_advance_lineage_sql_construct
boolean
Whether to enable advanced native SQL constructs for parsing, such as joins and sub-queries. Along with this flag, native_query_parsing should be enabled. By default convert_lineage_urns_to_lowercase is enabled; if you disabled it in a previous ingestion execution, enabling this may break lineage, as this option generates the upstream dataset URNs in lowercase.
Default: True
environment
Enum
One of: "COMMERCIAL", "GOVERNMENT"
extract_app
boolean
Whether to ingest workspace app. Requires DataHub server 0.14.2+.
Default: False
extract_column_level_lineage
boolean
Whether to extract column-level lineage. Works only if the configs native_query_parsing, enable_advance_lineage_sql_construct & extract_lineage are enabled. Works for M-Query where native SQL is used for transformation.
Default: False
extract_dashboards
boolean
Whether to ingest PBI Dashboard and Tiles as Datahub Dashboard and Chart
Default: True
extract_dataset_schema
boolean
Whether to ingest PBI Dataset Table columns and measures. Note: this setting must be true for schema extraction and column lineage to be enabled.
Default: True
extract_datasets_to_containers
boolean
PBI tables will be grouped under a DataHub container; the container reflects a PBI dataset
Default: False
extract_endorsements_to_tags
boolean
Whether to extract endorsements to tags, note that this may overwrite existing tags. Admin API access is required if this setting is enabled.
Default: False
extract_independent_datasets
boolean
Whether to extract datasets not used in any PowerBI visualization
Default: False
extract_lineage
boolean
Whether upstream lineage should be ingested. Admin API access is required if this setting is enabled
Default: True
extract_ownership
boolean
Whether ownership should be ingested. Admin API access is required if this setting is enabled. Note that enabling this may overwrite owners that you've added inside DataHub's web application.
Default: False
extract_reports
boolean
Whether reports should be ingested
Default: True
extract_workspaces_to_containers
boolean
Extract workspaces to DataHub containers
Default: True
include_workspace_name_in_dataset_urn
boolean
It is recommended to set this to true, as it helps prevent datasets from being overwritten. Read section #11560 at https://docs.datahub.com/docs/how/updating-datahub/ before enabling this option. To maintain backward compatibility, this is set to False.
Default: False
incremental_lineage
boolean
When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run.
Default: False
m_query_parse_timeout
integer
Timeout for PowerBI M-query parsing in seconds. Table-level lineage is determined by analyzing the M-query expression. Increase this value if you encounter the 'M-Query Parsing Timeout' message in the connector report.
Default: 70
modified_since
One of string, null
Get only workspaces modified after the given modified_since datetime, e.g. '2023-02-10T00:00:00.0000000Z'. Inactive workspaces are excluded, and the lookback is limited to the last 30 days.
Default: None
native_query_parsing
boolean
Whether PowerBI native query should be parsed to extract lineage
Default: True
patch_metadata
boolean
Patch dashboard metadata
Default: True
platform_instance
One of string, null
The instance of the platform that all assets produced by this recipe belong to
Default: None
scan_batch_size
integer
Batch size for sending workspace_ids to PowerBI; 100 is the maximum allowed
Default: 1
scan_timeout
integer
Timeout for PowerBI metadata scanning
Default: 60
workspace_id_as_urn_part
boolean
It is recommended to set this to True only if you have legacy workspaces based on Office 365 groups, as those workspaces can have identical names. To maintain backward compatibility, this is set to False, which uses the workspace name
Default: False
env
string
The environment that all assets produced by this connector belong to
Default: PROD
athena_table_platform_override
array
List of platform overrides for Athena federated queries. Use this to override the platform when Athena queries data from federated sources (e.g., MySQL, PostgreSQL) via ODBC. The lineage will point to the actual source platform instead of Athena. This override is applied AFTER catalog stripping, so use 2-part names (database.table), not 3-part names (catalog.database.table). Overrides with a DSN specified take precedence over those without.
Default: []
athena_table_platform_override.AthenaPlatformOverride
AthenaPlatformOverride
Configuration for overriding the platform of Athena federated tables.

Use this when Athena queries data from federated sources (e.g., MySQL, PostgreSQL)
and you want the lineage to point to the actual source platform instead of Athena.
athena_table_platform_override.AthenaPlatformOverride.database 
string
The database name in the Athena query (after catalog stripping).
athena_table_platform_override.AthenaPlatformOverride.platform 
string
The target DataHub platform name (e.g., 'mysql', 'postgres').
athena_table_platform_override.AthenaPlatformOverride.table 
string
The table name in the Athena query.
athena_table_platform_override.AthenaPlatformOverride.dsn
One of string, null
Optional DSN to scope this override to a specific data source. If specified, this override only applies when the query comes from this DSN.
Default: None
filter_dataset_endorsements
AllowDenyPattern
A class to store allow deny regexes
filter_dataset_endorsements.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
filter_dataset_endorsements.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
filter_dataset_endorsements.allow.string
string
filter_dataset_endorsements.deny
array
List of regex patterns to exclude from ingestion.
Default: []
filter_dataset_endorsements.deny.string
string
ownership
OwnershipMapping
ownership.create_corp_user
boolean
Whether to create user entities from PowerBI data. When False (RECOMMENDED): PowerBI emits ownership URNs only (soft references). User profiles must come from LDAP/SCIM/Okta. When True (OPT-IN): PowerBI creates users with displayName and email from PowerBI. WARNING: May overwrite existing user profiles from other sources. Use only if PowerBI is your authoritative user source.
Default: False
ownership.dataset_configured_by_as_owner
boolean
Take the PBI dataset configuredBy as the dataset owner, if it exists
Default: False
ownership.remove_email_suffix
boolean
Remove the PowerBI user email suffix, for example @acryl.io
Default: False
ownership.use_powerbi_email
boolean
Use the PowerBI user email to ingest as corpuser, instead of the PowerBI user identifier
Default: True
ownership.owner_criteria
One of array, null
The user must have one of the listed access rights to qualify as an owner, for example ['ReadWriteReshareExplore','Owner','Admin']
Default: None
ownership.owner_criteria.string
string
profile_pattern
AllowDenyPattern
A class to store allow deny regexes
profile_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
server_to_platform_instance
map(str,union)
server_to_platform_instance.key.platform_instance
One of string, null
DataHub platform instance name. To generate correct urn for upstream dataset, this should match with platform instance name used in ingestion recipe of other datahub sources.
Default: None
server_to_platform_instance.key.metastore 
string
Databricks Unity Catalog metastore name.
server_to_platform_instance.key.env
string
The environment that all assets produced by DataHub platform ingestion source belong to
Default: PROD
workspace_id_pattern
AllowDenyPattern
A class to store allow deny regexes
workspace_id_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
workspace_name_pattern
AllowDenyPattern
A class to store allow deny regexes
workspace_name_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
workspace_type_filter
array
Ingest the metadata of the workspace where the workspace type corresponds to the specified workspace_type_filter. Note: This field works in conjunction with 'workspace_id_pattern'. Both must be matched for a workspace to be processed.
Default: ['Workspace']
workspace_type_filter.enum
Enum
One of: "Workspace", "PersonalGroup", "Personal", "AdminWorkspace", "AdminInsights"
profiling
PowerBiProfilingConfig
profiling.enabled
boolean
Whether profiling of PowerBI datasets should be done
Default: False
stateful_ingestion
One of StatefulStaleMetadataRemovalConfig, null
PowerBI Stateful Ingestion Config.
Default: None
stateful_ingestion.enabled
boolean
Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False
Default: False
stateful_ingestion.fail_safe_threshold
number
Prevents a large number of soft deletes, and prevents the state from committing, in case of accidental changes to the source configuration: applies when the relative change (in percent) of entities compared to the previous state exceeds the fail_safe_threshold.
Default: 75.0
stateful_ingestion.remove_stale_metadata
boolean
Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.
Default: True
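
As an illustration of the stateful ingestion options above, stale-metadata removal can be enabled explicitly as in this sketch; note that stateful ingestion requires a pipeline_name on the recipe (the name here is only an example):

pipeline_name: powerbi_prod_ingestion  # example name; required for stateful ingestion
source:
  type: powerbi
  config:
    # ... connection config ...
    stateful_ingestion:
      enabled: true
      # Soft-delete entities seen in the previous run but missing in this one
      remove_stale_metadata: true
      # Skip the state commit if entity counts change by more than this percent
      fail_safe_threshold: 75.0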

Capabilities

Use the Important Capabilities table above as the source of truth for supported features and whether additional configuration is required.

User and Ownership Handling

PowerBI Source supports two modes for handling user ownership:

When ownership.create_corp_user: false (the default), the PowerBI source will:

  • Extract ownership information as URN references only
  • NOT create user entities in DataHub
  • User profiles must come from your identity provider (LDAP/SCIM/Okta)

This is the recommended approach as it prevents PowerBI from overwriting user profiles from your identity provider.

ownership:
  create_corp_user: false # Default - soft references only

Full User Creation Mode (Opt-in)

When ownership.create_corp_user: true, the PowerBI source will:

  • Create user entities with displayName and email from PowerBI
  • This may overwrite existing user profiles from LDAP/Okta/SCIM

Warning: Only use this if PowerBI is your authoritative source for user information.

ownership:
  create_corp_user: true # Opt-in - creates user entities

Filtering Owners by Access Rights

You can limit which users become owners using owner_criteria. Only users with at least one of the specified access rights will be assigned as owners:

ownership:
  owner_criteria:
    - ReadWriteReshareExplore
    - Owner
    - Admin

Valid values depend on the PowerBI access right types for your resources (e.g., dataset, report, dashboard). If owner_criteria is not set or is an empty list, all users with principalType: User qualify as owners.

Lineage

This source extracts table lineage for tables present in PowerBI datasets. Let's consider a PowerBI dataset SALES_REPORT with a PostgreSQL database configured as a data source.

If the SALES_REPORT dataset has a table SALES_ANALYSIS that is backed by the SALES_ANALYSIS_VIEW view of the PostgreSQL database, then SALES_ANALYSIS_VIEW will appear as an upstream dataset of the SALES_ANALYSIS table.

You can control table lineage ingestion using the extract_lineage configuration parameter; it is set to true by default.
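
The lineage-related flags described in this section and in the config reference can be combined in a recipe fragment like the sketch below; column-level lineage requires the other three flags to be enabled:

source:
  type: powerbi
  config:
    # ... connection config ...
    extract_lineage: true                      # table-level lineage (default: true)
    native_query_parsing: true                 # parse native SQL inside M-Query (default: true)
    enable_advance_lineage_sql_construct: true # handle joins, sub-queries (default: true)
    extract_column_level_lineage: true         # needs the three flags above
    # Keep URN casing consistent with your other ingestion sources
    convert_lineage_urns_to_lowercase: true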

PowerBI Source extracts the lineage information by parsing PowerBI M-Query expressions and from dataset data returned by the PowerBI API.

The source will attempt to extract information from ODBC connection strings in M-Query expressions to determine the database type. If the database type matches a supported platform and the source is able to extract enough information to construct a valid Dataset URN, it will extract lineage for that data source.

PowerBI Source will extract lineage for the below listed PowerBI Data Sources:

  1. Snowflake
  2. Oracle
  3. PostgreSQL
  4. Microsoft SQL Server
  5. Google BigQuery
  6. Databricks
  7. MySQL
  8. Amazon Redshift
  9. Amazon Athena

Native SQL query parsing is supported for Snowflake, Amazon Redshift, and ODBC data sources.

Athena Federated Query Platform Override

When using Amazon Athena via ODBC that queries federated data sources (e.g., Athena querying MySQL or PostgreSQL via federated connectors), the lineage URNs will default to the Athena platform. Use athena_table_platform_override to point lineage to the actual source platform instead of Athena.

Configuration:

source:
  type: powerbi
  config:
    # ... other config ...
    dsn_to_platform_name:
      MyAthenaDSN: athena
    athena_table_platform_override:
      # DSN-scoped key (takes precedence)
      "MyAthenaDSN:analytics.users": mysql
      # Global key (fallback for any DSN)
      "reporting.orders": postgres

Key format:

  • DSN-scoped: "DSN_NAME:database.table" - applies only to specific DSN
  • Global: "database.table" - applies to all DSNs

DSN-scoped keys take precedence over global keys, allowing different overrides for the same table name across different Athena data sources.

Note: This override only applies to Athena ODBC connections. For other ODBC platforms, lineage will use the platform determined from the DSN configuration.

For example, consider the M-Query expression shown below. The table OPERATIONS_ANALYTICS.TRANSFORMED_PROD.V_UNIT_TARGETS will be ingested as an upstream table.

let
    Source = Value.NativeQuery(
        Snowflake.Databases(
            "sdfsd788.ws-east-2.fakecomputing.com",
            "operations_analytics_prod",
            [Role = "OPERATIONS_ANALYTICS_MEMBER"]
        ){[Name = "OPERATIONS_ANALYTICS"]}[Data],
        "select #(lf)UPPER(REPLACE(AGENT_NAME,\'-\',\'\')) AS Agent,#(lf)TIER,#(lf)UPPER(MANAGER),#(lf)TEAM_TYPE,#(lf)DATE_TARGET,#(lf)MONTHID,#(lf)TARGET_TEAM,#(lf)SELLER_EMAIL,#(lf)concat((UPPER(REPLACE(AGENT_NAME,\'-\',\'\'))), MONTHID) as AGENT_KEY,#(lf)UNIT_TARGET AS SME_Quota,#(lf)AMV_TARGET AS Revenue_Quota,#(lf)SERVICE_QUOTA,#(lf)BL_TARGET,#(lf)SOFTWARE_QUOTA as Software_Quota#(lf)#(lf)from OPERATIONS_ANALYTICS.TRANSFORMED_PROD.V_UNIT_TARGETS#(lf)#(lf)where YEAR_TARGET >= 2020#(lf)and TEAM_TYPE = \'foo\'#(lf)and TARGET_TEAM = \'bar\'",
        null,
        [EnableFolding = true]
    ),
    #"Added Conditional Column" = Table.AddColumn(
        Source,
        "Has PS Software Quota?",
        each
            if [TIER] = "Expansion (Medium)" then
                "Yes"
            else if [TIER] = "Acquisition" then
                "Yes"
            else
                "No"
    )
in
    #"Added Conditional Column"

Use the fully qualified table name in the from clause, for example dev.public.category.

M-Query Pattern Supported For Lineage Extraction

Let's consider an M-Query that combines two PostgreSQL tables. Such an M-Query can be written using either of the patterns below.

Pattern-1

let
    Source = PostgreSQL.Database("localhost", "book_store"),
    book_date = Source{[Schema="public",Item="book"]}[Data],
    issue_history = Source{[Schema="public",Item="issue_history"]}[Data],
    combine_result = Table.Combine({book_date, issue_history})
in
    combine_result

Pattern-2

let
    Source = PostgreSQL.Database("localhost", "book_store"),
    combine_result = Table.Combine({Source{[Schema="public",Item="book"]}[Data], Source{[Schema="public",Item="issue_history"]}[Data]})
in
    combine_result

Pattern-2 is not supported for upstream table lineage extraction because it passes nested item-selectors, i.e. {Source{[Schema="public",Item="book"]}[Data], Source{[Schema="public",Item="issue_history"]}[Data]}, directly as an argument to the M-Query table function Table.Combine.

Pattern-1 is supported because it first assigns each table from the schema to a variable, and the variables are then used in the M-Query table function Table.Combine.

Extract endorsements to tags

By default, extracting endorsement information to tags is disabled. The feature may be useful if your organization uses endorsements to identify content quality.

Please note that the default implementation overwrites tags on the ingested entities. If you need to preserve existing tags, consider using a transformer with semantics: PATCH tags instead of OVERWRITE.
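
To turn the feature on, set the flag in your recipe, as in this sketch:

source:
  type: powerbi
  config:
    # ... connection config ...
    # Endorsements become DataHub tags; note that this overwrites
    # existing tags on the ingested entities.
    extract_endorsements_to_tags: true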

Profiling

Profiling is implemented by querying the DAX query endpoint, so the service principal needs permission to query the datasets being profiled. In practice this usually means the service principal needs the Contributor role on the workspace to be ingested. Profiling uses column-based queries in order to handle wide datasets without timeouts.

Take into account that the profiling implementation executes a fairly large number of DAX queries; for big datasets this puts a significant load on the PowerBI system.

The profile_pattern setting may be used to limit profiling to a certain set of resources in PowerBI. Both allow and deny rules are matched against the following pattern for every table in a PowerBI dataset: workspace_name.dataset_name.table_name. Users may limit profiling with these settings at the table, dataset, or workspace level.
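
Putting this together, the sketch below profiles only the tables of a single dataset; the workspace and dataset names are illustrative:

source:
  type: powerbi
  config:
    # ... connection config ...
    profiling:
      enabled: true
    # Matched against workspace_name.dataset_name.table_name
    profile_pattern:
      allow:
        - "Finance\\.Sales\\..*"  # every table in the Sales dataset of the Finance workspace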

Limitations

  • Some metadata and lineage fields are only available through admin APIs or specific tenant settings.
  • Lineage quality depends on available model metadata and supported query/source patterns.
  • Large tenants with many workspaces can require longer extraction windows.

Troubleshooting

  • Authentication failures: verify tenant_id, client_id, and client_secret, and confirm the app has the required Power BI API permissions.
  • Missing workspaces/assets: check service principal access to target workspaces or enable the required admin API mode/settings.
  • Lineage gaps: confirm lineage-related config is enabled and that semantic models expose supported upstream source details.

Code Coordinates

  • Class Name: datahub.ingestion.source.powerbi.powerbi.PowerBiDashboardSource
  • Browse on GitHub
Questions?

If you've got any questions on configuring ingestion for PowerBI, feel free to ping us on our Slack.

💡 Contributing to this documentation

This page is auto-generated from the underlying source code. To make changes, please edit the relevant source files in the metadata-ingestion directory.

Tip: For quick typo fixes or documentation updates, you can click the ✏️ Edit icon directly in the GitHub UI to open a Pull Request. For larger changes and PR naming conventions, please refer to our Contributing Guide.