Version: Next

Oracle

Incubating

Important Capabilities

| Capability | Status / Notes |
|------------|----------------|
| Asset Containers | Enabled by default. Supported for types: Database, Schema. |
| Classification | Optionally enabled via classification.enabled. |
| Column-level Lineage | Enabled by default; extracts lineage for stored procedures via include_lineage and for views via include_view_column_lineage. Supported for types: Stored Procedure, View. |
| Dataset Usage | Enabled by default via the SQL aggregator when processing observed queries. |
| Descriptions | Enabled by default. |
| Detect Deleted Entities | Enabled by default via stateful ingestion. |
| Domains | Enabled by default. |
| Schema Metadata | Enabled by default. |
| Table-Level Lineage | Enabled by default; extracts lineage for stored procedures via include_lineage and for views via include_view_lineage. Supported for types: Stored Procedure, View. |
| Test Connection | Enabled by default. |

This plugin extracts the following:

  • Metadata for databases, schemas, and tables
  • Column types associated with each table
  • Table, row, and column statistics via optional SQL profiling
  • Stored procedures, functions, and packages with dependency tracking
  • Materialized views with proper lineage
  • Lineage, usage statistics, and operations via SQL aggregator (similar to BigQuery/Snowflake/Teradata)

Lineage is automatically generated when parsing:

  • Stored procedure definitions (via SQL aggregator)
  • Materialized view definitions (via SQL aggregator)
  • View definitions (via SQL aggregator)

Usage statistics and operations are generated from observed queries and audit trail data processed by the SQL aggregator. This provides comprehensive lineage, usage, and operational tracking from the same SQL parsing infrastructure.

Using the Oracle source requires that you've also installed the correct drivers; see the oracledb docs. The easiest approach is to use thin mode (the default), which requires no additional Oracle client installation.

Prerequisites

Data Dictionary Mode/Views

The Oracle ingestion source supports two modes for extracting metadata (see the data_dictionary_mode option): ALL and DBA. In ALL mode, the SQLAlchemy backend queries the ALL_ data dictionary views; in DBA mode, the source queries the DBA_ data dictionary views directly. ALL_ views only expose information accessible to the user used for ingestion, while DBA_ views expose information for the entire database (that is, all schema objects in the database).
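
Before choosing a mode, you can sanity-check which views your ingestion user can read (illustrative queries, not part of the connector):

```sql
-- Readable in ALL mode: counts only the objects the ingestion user can access.
SELECT COUNT(*) FROM ALL_TABLES;

-- Readable only with DBA mode privileges: fails with ORA-00942
-- unless access to the DBA_ views has been granted.
SELECT COUNT(*) FROM DBA_TABLES;
```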

The following table contains a brief description of what each data dictionary view is used for:

| Data Dictionary View | What's it used for? |
|----------------------|---------------------|
| ALL_TABLES or DBA_TABLES | Get list of all relational tables in the database |
| ALL_VIEWS or DBA_VIEWS | Get list of all views in the database |
| ALL_TAB_COMMENTS or DBA_TAB_COMMENTS | Get comments on tables and views |
| ALL_TAB_COLS or DBA_TAB_COLS | Get descriptions of the columns of tables and views |
| ALL_COL_COMMENTS or DBA_COL_COMMENTS | Get comments on the columns of tables and views |
| ALL_TAB_IDENTITY_COLS or DBA_TAB_IDENTITY_COLS | Get table identity columns |
| ALL_CONSTRAINTS or DBA_CONSTRAINTS | Get constraint definitions on tables |
| ALL_CONS_COLUMNS or DBA_CONS_COLUMNS | Get the list of columns specified in constraints |
| ALL_USERS or DBA_USERS | Get all schema names |
| ALL_OBJECTS or DBA_OBJECTS | Get stored procedures, functions, and packages |
| ALL_SOURCE or DBA_SOURCE | Get source code for stored procedures and functions |
| ALL_ARGUMENTS or DBA_ARGUMENTS | Get arguments for stored procedures and functions |
| ALL_DEPENDENCIES or DBA_DEPENDENCIES | Get dependency information for database objects |
| ALL_MVIEWS or DBA_MVIEWS | Get materialized views and their definitions |

Data Dictionary Views: Accessible Information and Required Privileges

  • ALL_ views display all the information accessible to the user used for ingestion, including information from the user's schema as well as information from objects in other schemas, if the user has access to those objects by way of grants of privileges or roles.
  • DBA_ views display all relevant information in the entire database. They can be queried only by users with the SYSDBA system privilege or SELECT ANY DICTIONARY privilege, or SELECT_CATALOG_ROLE role, or by users with direct privileges granted to them.
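
As an alternative to granting SELECT on each DBA_ view individually, DBA-mode access can be enabled wholesale. A hedged sketch, assuming your security policy permits broad dictionary access:

```sql
-- Broad alternative to per-view grants (assumption: acceptable in your environment):
GRANT SELECT_CATALOG_ROLE TO datahub_user;
-- or the wider system privilege:
GRANT SELECT ANY DICTIONARY TO datahub_user;
```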

Required Permissions

The following permissions are required based on features used:

Basic Metadata (Tables & Views)

-- Using data_dictionary_mode: ALL (default)
GRANT SELECT ON ALL_TABLES TO datahub_user;
GRANT SELECT ON ALL_TAB_COLS TO datahub_user;
GRANT SELECT ON ALL_TAB_COMMENTS TO datahub_user;
GRANT SELECT ON ALL_COL_COMMENTS TO datahub_user;
GRANT SELECT ON ALL_VIEWS TO datahub_user;
GRANT SELECT ON ALL_CONSTRAINTS TO datahub_user;
GRANT SELECT ON ALL_CONS_COLUMNS TO datahub_user;

-- Using data_dictionary_mode: DBA (elevated permissions)
GRANT SELECT ON DBA_TABLES TO datahub_user;
GRANT SELECT ON DBA_TAB_COLS TO datahub_user;
GRANT SELECT ON DBA_TAB_COMMENTS TO datahub_user;
GRANT SELECT ON DBA_COL_COMMENTS TO datahub_user;
GRANT SELECT ON DBA_VIEWS TO datahub_user;
GRANT SELECT ON DBA_CONSTRAINTS TO datahub_user;
GRANT SELECT ON DBA_CONS_COLUMNS TO datahub_user;

Stored Procedures (enabled by default)

-- For ALL mode
GRANT SELECT ON ALL_OBJECTS TO datahub_user;
GRANT SELECT ON ALL_SOURCE TO datahub_user;
GRANT SELECT ON ALL_ARGUMENTS TO datahub_user;
GRANT SELECT ON ALL_DEPENDENCIES TO datahub_user;

-- For DBA mode
GRANT SELECT ON DBA_OBJECTS TO datahub_user;
GRANT SELECT ON DBA_SOURCE TO datahub_user;
GRANT SELECT ON DBA_ARGUMENTS TO datahub_user;
GRANT SELECT ON DBA_DEPENDENCIES TO datahub_user;

Materialized Views (enabled by default)

-- For ALL mode
GRANT SELECT ON ALL_MVIEWS TO datahub_user;

-- For DBA mode
GRANT SELECT ON DBA_MVIEWS TO datahub_user;

Database Name Resolution

GRANT SELECT ON V_$DATABASE TO datahub_user;

CLI based Ingestion
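
The source is installed as a plugin of the DataHub CLI; the extras name below follows the standard acryl-datahub packaging convention:

```shell
pip install 'acryl-datahub[oracle]'
```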

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: oracle
  config:
    # Coordinates
    host_port: localhost:1521
    database: dbname

    # Credentials
    username: user
    password: pass

    # Options
    service_name: svc # omit database if using this option

    # Data Dictionary Mode
    data_dictionary_mode: "ALL" # or "DBA" for full database access

    # Stored Procedures
    include_stored_procedures: true
    procedure_pattern:
      allow:
        - "SCHEMA.*" # Include all procedures from SCHEMA
      deny:
        - "SYS.*" # Exclude system procedures

    # Materialized Views
    include_materialized_views: true

    # Usage and Operations (requires audit data or query logs)
    include_usage_stats: true
    include_operational_stats: true

    # Oracle Client Configuration (optional)
    enable_thick_mode: false # Set to true to use the Oracle thick client
    # thick_mode_lib_dir: "/path/to/oracle/client" # Required on Mac/Windows if thick mode enabled

sink:
  # sink configs
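
With the recipe saved to a file, ingestion runs through the standard DataHub CLI (the file name here is illustrative):

```shell
datahub ingest -c oracle_recipe.yml
```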

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

FieldDescription
host_port 
string
Host and port of the database, e.g. localhost:1521
add_database_name_to_urn
One of boolean, null
Add the Oracle database name to the dataset URN; by default the URN uses schema.table
Default: False
bucket_duration
Enum
One of: "DAY", "HOUR"
convert_urns_to_lowercase
boolean
Whether to convert dataset urns to lowercase.
Default: False
data_dictionary_mode
Enum
One of: "ALL", "DBA"
database
One of string, null
Database name. If set, omit service_name.
Default: None
enable_thick_mode
One of boolean, null
Connection defaults to thin mode. Set to True to enable thick mode.
Default: False
end_time
string(date-time)
Latest date of lineage/usage to consider. Default: Current time in UTC
format_sql_queries
boolean
Whether to format sql queries
Default: False
include_materialized_views
boolean
Include materialized views in ingestion. Requires access to DBA_MVIEWS or ALL_MVIEWS. If permission errors occur, you can disable this feature or grant the required permissions.
Default: True
include_operational_stats
boolean
Generate operation statistics from audit trail data (CREATE, INSERT, UPDATE, DELETE operations).
Default: False
include_read_operational_stats
boolean
Whether to report read operational stats. Experimental.
Default: False
include_stored_procedures
boolean
Include ingest of stored procedures, functions, and packages. Requires access to DBA_PROCEDURES or ALL_PROCEDURES.
Default: True
include_table_location_lineage
boolean
If the source supports it, include table lineage to the underlying storage location.
Default: True
include_tables
boolean
Whether tables should be ingested.
Default: True
include_top_n_queries
boolean
Whether to ingest the top_n_queries.
Default: True
include_usage_stats
boolean
Generate usage statistics via SQL aggregator. Requires observed queries to be processed.
Default: False
include_view_column_lineage
boolean
Populates column-level lineage for view->view and table->view lineage using DataHub's sql parser. Requires include_view_lineage to be enabled.
Default: True
include_view_lineage
boolean
Populates view->view and table->view lineage using DataHub's sql parser.
Default: True
include_views
boolean
Whether views should be ingested.
Default: True
incremental_lineage
boolean
When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run.
Default: False
options
object
Any options specified here will be passed to SQLAlchemy.create_engine as kwargs. To set connection arguments in the URL, specify them under connect_args.
password
One of string(password), null
password
Default: None
platform_instance
One of string, null
The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.
Default: None
scheme
string
Set automatically to the default value.
Default: oracle
service_name
One of string, null
Oracle service name. If using, omit database.
Default: None
sqlalchemy_uri
One of string, null
URI of database to connect to. See https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls. Takes precedence over other connection parameters.
Default: None
start_time
string(date-time)
Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'.
Default: None
thick_mode_lib_dir
One of string, null
If using thick mode on Windows or Mac, set thick_mode_lib_dir to the oracle client libraries path. On Linux, this value is ignored, as ldconfig or LD_LIBRARY_PATH will define the location.
Default: None
top_n_queries
integer
Number of top queries to save to each table.
Default: 10
use_file_backed_cache
boolean
Whether to use a file backed cache for the view definitions.
Default: True
username
One of string, null
username
Default: None
env
string
The environment that all assets produced by this connector belong to
Default: PROD
domain
map(str,AllowDenyPattern)
A class to store allow deny regexes
domain.key.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
domain.key.allow.string
string
domain.key.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
domain.key.deny
array
List of regex patterns to exclude from ingestion.
Default: []
domain.key.deny.string
string
procedure_pattern
AllowDenyPattern
A class to store allow deny regexes
procedure_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
profile_pattern
AllowDenyPattern
A class to store allow deny regexes
profile_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
schema_pattern
AllowDenyPattern
A class to store allow deny regexes
schema_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
table_pattern
AllowDenyPattern
A class to store allow deny regexes
table_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
user_email_pattern
AllowDenyPattern
A class to store allow deny regexes
user_email_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
view_pattern
AllowDenyPattern
A class to store allow deny regexes
view_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
classification
ClassificationConfig
classification.enabled
boolean
Whether classification should be used to auto-detect glossary terms
Default: False
classification.info_type_to_term
map(str,string)
classification.max_workers
integer
Number of worker processes to use for classification. Set to 1 to disable.
Default: 4
classification.sample_size
integer
Number of sample values used for classification.
Default: 100
classification.classifiers
array
Classifiers to use to auto-detect glossary terms. If more than one classifier is configured, infotype predictions from the classifier defined later in the sequence take precedence.
Default: [{'type': 'datahub', 'config': None}]
classification.classifiers.DynamicTypedClassifierConfig
DynamicTypedClassifierConfig
classification.classifiers.DynamicTypedClassifierConfig.type 
string
The type of the classifier to use. For DataHub, use datahub
classification.classifiers.DynamicTypedClassifierConfig.config
One of object, null
The configuration required for initializing the classifier. If not specified, uses defaults for the classifier type.
Default: None
classification.column_pattern
AllowDenyPattern
A class to store allow deny regexes
classification.column_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
classification.table_pattern
AllowDenyPattern
A class to store allow deny regexes
classification.table_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
profiling
GEProfilingConfig
profiling.catch_exceptions
boolean
Default: True
profiling.enabled
boolean
Whether profiling should be done.
Default: False
profiling.field_sample_values_limit
integer
Upper limit for number of sample values to collect for all columns.
Default: 20
profiling.include_field_distinct_count
boolean
Whether to profile for the number of distinct values for each column.
Default: True
profiling.include_field_distinct_value_frequencies
boolean
Whether to profile for distinct value frequencies.
Default: False
profiling.include_field_histogram
boolean
Whether to profile for the histogram for numeric fields.
Default: False
profiling.include_field_max_value
boolean
Whether to profile for the max value of numeric columns.
Default: True
profiling.include_field_mean_value
boolean
Whether to profile for the mean value of numeric columns.
Default: True
profiling.include_field_median_value
boolean
Whether to profile for the median value of numeric columns.
Default: True
profiling.include_field_min_value
boolean
Whether to profile for the min value of numeric columns.
Default: True
profiling.include_field_null_count
boolean
Whether to profile for the number of nulls for each column.
Default: True
profiling.include_field_quantiles
boolean
Whether to profile for the quantiles of numeric columns.
Default: False
profiling.include_field_sample_values
boolean
Whether to profile for the sample values for all columns.
Default: True
profiling.include_field_stddev_value
boolean
Whether to profile for the standard deviation of numeric columns.
Default: True
profiling.limit
One of integer, null
Max number of documents to profile. By default, profiles all documents.
Default: None
profiling.max_number_of_fields_to_profile
One of integer, null
A positive integer that specifies the maximum number of columns to profile for any table. None implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up.
Default: None
profiling.max_workers
integer
Number of worker threads to use for profiling. Set to 1 to disable.
Default: 20
profiling.offset
One of integer, null
Offset in documents to profile. By default, uses no offset.
Default: None
profiling.partition_datetime
One of string(date-time), null
If specified, profile only the partition which matches this datetime. If not specified, profile the latest partition. Only BigQuery supports this.
Default: None
profiling.partition_profiling_enabled
boolean
Whether to profile partitioned tables. Only BigQuery and AWS Athena support this. If enabled, the latest partition's data is used for profiling.
Default: True
profiling.profile_external_tables
boolean
Whether to profile external tables. Only Snowflake and Redshift support this.
Default: False
profiling.profile_if_updated_since_days
One of number, null
Profile a table only if it has been updated within the given number of days. If set to null, there is no last-modified-time constraint on which tables are profiled. Supported only in Snowflake and BigQuery.
Default: None
profiling.profile_nested_fields
boolean
Whether to profile complex types like structs, arrays and maps.
Default: False
profiling.profile_table_level_only
boolean
Whether to perform profiling at table-level only, or include column-level profiling as well.
Default: False
profiling.profile_table_row_count_estimate_only
boolean
Use an approximate query for row count. This will be much faster but slightly less accurate. Only supported for Postgres and MySQL.
Default: False
profiling.profile_table_row_limit
One of integer, null
Profile tables only if their row count is less than the specified count. If set to null, there is no limit on the row count of tables to profile. Supported only in Snowflake and BigQuery; supported for Oracle based on gathered stats.
Default: 5000000
profiling.profile_table_size_limit
One of integer, null
Profile tables only if their size is less than the specified number of GBs. If set to null, there is no limit on the size of tables to profile. Supported only in Snowflake, BigQuery, and Databricks; supported for Oracle based on size calculated from gathered stats.
Default: 5
profiling.query_combiner_enabled
boolean
This feature is still experimental and can be disabled if it causes issues. Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible.
Default: True
profiling.report_dropped_profiles
boolean
Whether to report datasets or dataset columns which were not profiled. Set to True for debugging purposes.
Default: False
profiling.sample_size
integer
Number of rows to be sampled from the table for column-level profiling. Applicable only if use_sampling is set to True.
Default: 10000
profiling.turn_off_expensive_profiling_metrics
boolean
Whether to turn off expensive profiling metrics. This disables profiling for quantiles, distinct_value_frequencies, histogram, and sample_values, and also limits the maximum number of fields profiled to 10.
Default: False
profiling.use_sampling
boolean
Whether to profile column level stats on sample of table. Only BigQuery and Snowflake support this. If enabled, profiling is done on rows sampled from table. Sampling is not done for smaller tables.
Default: True
profiling.operation_config
OperationConfig
profiling.operation_config.lower_freq_profile_enabled
boolean
Whether to profile at a lower frequency. This does not do any scheduling; it only adds additional checks for when not to run profiling.
Default: False
profiling.operation_config.profile_date_of_month
One of integer, null
Number between 1 and 31 for the date of the month (both inclusive). If not specified, this field has no effect.
Default: None
profiling.operation_config.profile_day_of_week
One of integer, null
Number between 0 and 6 for the day of the week (both inclusive); 0 is Monday and 6 is Sunday. If not specified, this field has no effect.
Default: None
profiling.tags_to_ignore_sampling
One of array, null
Fixed list of tags to ignore sampling. If not specified, tables will be sampled based on use_sampling.
Default: None
profiling.tags_to_ignore_sampling.string
string
stateful_ingestion
One of StatefulStaleMetadataRemovalConfig, null
Default: None
stateful_ingestion.enabled
boolean
Whether to enable stateful ingestion. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False
Default: False
stateful_ingestion.fail_safe_threshold
number
Prevents a large number of soft deletes and blocks the state from committing after accidental changes to the source configuration, by failing when the relative change (in percent) in entities compared to the previous state exceeds the fail_safe_threshold.
Default: 75.0
stateful_ingestion.remove_stale_metadata
boolean
Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.
Default: True
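
The various *_pattern options above are allow/deny regex lists. A minimal Python sketch of the matching semantics (an illustration of the behavior, not DataHub's actual AllowDenyPattern implementation):

```python
import re

def matches(name, allow=(".*",), deny=(), ignore_case=True):
    """Deny takes precedence; otherwise the name must match an allow pattern."""
    flags = re.IGNORECASE if ignore_case else 0
    if any(re.match(p, name, flags) for p in deny):
        return False
    return any(re.match(p, name, flags) for p in allow)

# Mirrors the procedure_pattern from the starter recipe above.
print(matches("SCHEMA.MY_PROC", allow=["SCHEMA.*"], deny=["SYS.*"]))  # True
print(matches("SYS.DBMS_STATS", deny=["SYS.*"]))                      # False
```

Note that ignoreCase defaults to True for every pattern, so "SCHEMA.*" also matches lowercase object names unless you disable it.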

The Oracle source extracts metadata from Oracle databases, including:

  • Tables and Views: Standard relational tables and views with column information, constraints, and comments
  • Stored Procedures: Functions, procedures, and packages with source code, arguments, and dependency tracking
  • Materialized Views: Materialized views with proper lineage and refresh information
  • Lineage: Automatic lineage generation from stored procedure definitions and materialized view queries via SQL parsing
  • Usage Statistics: Query execution statistics and table access patterns (when audit data is available)
  • Operations: Data modification events (CREATE, INSERT, UPDATE, DELETE) from audit trail data

The connector uses the python-oracledb driver and supports both thin mode (default, no Oracle client required) and thick mode (requires Oracle client installation).
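
If thin mode is not sufficient (for example, when a feature of the full Oracle client is required), the relevant recipe options look like this (the library path shown is hypothetical):

```yaml
config:
  enable_thick_mode: true
  # Required on macOS/Windows; on Linux the client location comes from
  # ldconfig or LD_LIBRARY_PATH, and this value is ignored.
  thick_mode_lib_dir: "/opt/oracle/instantclient"
```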

As a SQL-based service, the Oracle integration is also supported by our SQL profiler for table and column statistics.

Code Coordinates

  • Class Name: datahub.ingestion.source.sql.oracle.OracleSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Oracle, feel free to ping us on our Slack.