ML Feature
The ML Feature entity represents an individual input variable used by machine learning models. Features are the building blocks of feature engineering - they transform raw data into meaningful signals that ML algorithms can learn from. In modern ML systems, features are first-class citizens that can be discovered, documented, versioned, and reused across multiple models and teams.
Identity
ML Features are identified by two pieces of information:
- The feature namespace: A logical grouping that organizes related features together. This often corresponds to a feature table name, domain area, or team. Examples include user_features, transaction_features, product_features.
- The feature name: The unique name of the feature within its namespace. Examples include age, lifetime_value, days_since_signup.
An example of an ML Feature identifier is urn:li:mlFeature:(user_features,age).
The identity is defined by the mlFeatureKey aspect, which contains:
- featureNamespace: A string representing the logical namespace or grouping for the feature
- name: The unique name of the feature within that namespace
URN Structure Examples
urn:li:mlFeature:(user_features,age)
urn:li:mlFeature:(user_features,lifetime_value)
urn:li:mlFeature:(transaction_features,amount_last_7d)
urn:li:mlFeature:(product_features,price)
urn:li:mlFeature:(product_features,category_embedding)
The namespace and name together form a globally unique identifier. Multiple features can share the same namespace (representing a logical grouping), but each feature name must be unique within its namespace.
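The mapping from (namespace, name) to a URN is a simple string template, which can be sketched in plain Python (mirroring what the SDK's builder.make_ml_feature_urn helper produces):

```python
def make_ml_feature_urn(feature_namespace: str, feature_name: str) -> str:
    """Build an ML Feature URN from its two identity components.

    Mirrors the format produced by datahub's builder.make_ml_feature_urn.
    """
    return f"urn:li:mlFeature:({feature_namespace},{feature_name})"


print(make_ml_feature_urn("user_features", "age"))
# urn:li:mlFeature:(user_features,age)
```

In practice, prefer the SDK builder (shown in the examples below) over hand-rolled string formatting so URN escaping rules stay consistent.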
Important Capabilities
Feature Properties
ML Features support comprehensive metadata through the mlFeatureProperties aspect. This aspect captures the essential characteristics that define a feature:
Description and Documentation
Features should have clear descriptions explaining what they represent, how they're calculated, and when they should be used. Good feature documentation is critical for:
- Helping data scientists discover relevant features for their models
- Preventing duplicate feature creation
- Understanding feature semantics and appropriate use cases
- Facilitating feature reuse across teams
Python SDK: Create an ML Feature with description
import os
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
gms_server = os.getenv("DATAHUB_GMS_URL", "http://localhost:8080")
token = os.getenv("DATAHUB_GMS_TOKEN")
emitter = DatahubRestEmitter(gms_server=gms_server, token=token)
feature_urn = builder.make_ml_feature_urn(
feature_table_name="user_features",
feature_name="age",
)
metadata_change_proposal = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.MLFeaturePropertiesClass(
description="Age of the user in years, calculated as the difference between current date and birth date. "
"This feature is commonly used for demographic segmentation and age-based personalization. "
"Values range from 18-100 for registered users (age verification required).",
dataType="CONTINUOUS",
),
)
emitter.emit(metadata_change_proposal)
Data Type
Features have a data type specified using MLFeatureDataType that describes the nature of the feature values. Understanding data type is essential for proper feature handling, preprocessing, and model training. DataHub supports a rich taxonomy of data types:
Categorical Types:
- NOMINAL: Discrete values with no inherent order (e.g., country, product category)
- ORDINAL: Discrete values that can be ranked (e.g., education level, rating)
- BINARY: Two-category values (e.g., is_subscriber, has_clicked)
Numeric Types:
- CONTINUOUS: Real-valued numeric data (e.g., height, price, temperature)
- COUNT: Non-negative integer counts (e.g., number of purchases, page views)
- INTERVAL: Numeric data with equal spacing (e.g., percentages, scores)
Temporal:
- TIME: Time-based cyclical features (e.g., hour_of_day, day_of_week)
Unstructured:
- TEXT: Text data requiring NLP processing
- IMAGE: Image data
- VIDEO: Video data
- AUDIO: Audio data
Structured:
- MAP: Dictionary or mapping structures
- SEQUENCE: Lists, arrays, or sequences
- SET: Unordered collections
- BYTE: Binary-encoded complex objects
Special:
- USELESS: High-cardinality unique values with no predictive relationship (e.g., random IDs)
- UNKNOWN: Type is not yet determined
Python SDK: Create features with different data types
import os
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
gms_server = os.getenv("DATAHUB_GMS_URL", "http://localhost:8080")
token = os.getenv("DATAHUB_GMS_TOKEN")
emitter = DatahubRestEmitter(gms_server=gms_server, token=token)
features = [
{
"name": "user_country",
"description": "Country of user residence",
"data_type": "NOMINAL",
},
{
"name": "subscription_tier",
"description": "User subscription level: free, basic, premium, enterprise",
"data_type": "ORDINAL",
},
{
"name": "is_email_verified",
"description": "Whether the user has verified their email address",
"data_type": "BINARY",
},
{
"name": "total_purchases",
"description": "Total number of purchases made by the user",
"data_type": "COUNT",
},
{
"name": "signup_hour",
"description": "Hour of day when user signed up (0-23)",
"data_type": "TIME",
},
{
"name": "lifetime_value",
"description": "Total revenue generated by user in USD",
"data_type": "CONTINUOUS",
},
{
"name": "user_bio",
"description": "User profile biography text",
"data_type": "TEXT",
},
]
for feature_def in features:
feature_urn = builder.make_ml_feature_urn(
feature_table_name="user_features",
feature_name=feature_def["name"],
)
metadata_change_proposal = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.MLFeaturePropertiesClass(
description=feature_def["description"],
dataType=feature_def["data_type"],
),
)
emitter.emit(metadata_change_proposal)
Source Lineage
One of the most powerful capabilities of ML Features in DataHub is their ability to declare source datasets through the sources property. This creates explicit "DerivedFrom" lineage relationships between features and the upstream datasets they're computed from.
Source lineage enables:
- End-to-end traceability: Track a model prediction back to the raw data that generated its features
- Impact analysis: Understand which features (and downstream models) are affected when a dataset changes
- Data quality: Identify the root cause when feature values appear incorrect
- Compliance: Document data provenance for regulatory requirements
- Discovery: Find all features derived from a particular dataset
Python SDK: Add source lineage to a feature
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
emitter = DatahubRestEmitter(gms_server="http://localhost:8080", extra_headers={})
feature_urn = builder.make_ml_feature_urn(
feature_table_name="user_features",
feature_name="days_since_signup",
)
users_table_urn = builder.make_dataset_urn(
name="analytics.users",
platform="snowflake",
env="PROD",
)
metadata_change_proposal = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.MLFeaturePropertiesClass(
description="Number of days since the user created their account, "
"calculated as the difference between current date and signup_date. "
"Used for cohort analysis and lifecycle stage segmentation.",
dataType="COUNT",
sources=[users_table_urn],
),
)
emitter.emit(metadata_change_proposal)
Versioning
Features support versioning through the version property. Version information helps teams:
- Track breaking changes to feature definitions or calculations
- Maintain multiple feature versions during migration periods
- Understand which feature version a model was trained with
- Coordinate feature rollouts across training and serving systems
Python SDK: Create a versioned feature
import os
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
gms_server = os.getenv("DATAHUB_GMS_URL", "http://localhost:8080")
token = os.getenv("DATAHUB_GMS_TOKEN")
emitter = DatahubRestEmitter(gms_server=gms_server, token=token)
feature_urn = builder.make_ml_feature_urn(
feature_table_name="user_features",
feature_name="total_spend",
)
dataset_urn = builder.make_dataset_urn(
name="analytics.orders",
platform="snowflake",
env="PROD",
)
metadata_change_proposal = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.MLFeaturePropertiesClass(
description="Total amount spent by user across all orders. "
"Version 2.0 now includes refunds and returns, providing net spend instead of gross. "
"Changed from gross spend calculation in v1.0.",
dataType="CONTINUOUS",
version=models.VersionTagClass(versionTag="2.0"),
sources=[dataset_urn],
),
)
emitter.emit(metadata_change_proposal)
Custom Properties
Features support arbitrary key-value custom properties through the customProperties field, allowing you to capture platform-specific or organization-specific metadata:
- Feature importance scores
- Update frequency or freshness SLAs
- Cost or compute requirements
- Feature store specific configuration
- Team or project ownership details
Tags and Glossary Terms
ML Features support tags and glossary terms for classification, discovery, and governance:
- Tags (via the globalTags aspect) provide lightweight categorization such as PII indicators, feature maturity levels, or domain areas
- Glossary Terms (via the glossaryTerms aspect) link features to standardized business definitions and concepts
Read this blog to understand when to use tags vs terms.
Python SDK: Add tags and terms to a feature
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.ingestion.graph.client import DatahubClientConfig, DataHubGraph
gms_endpoint = "http://localhost:8080"
emitter = DatahubRestEmitter(gms_server=gms_endpoint, extra_headers={})
graph = DataHubGraph(DatahubClientConfig(server=gms_endpoint))
feature_urn = builder.make_ml_feature_urn(
feature_table_name="user_features",
feature_name="email_address",
)
current_tags = graph.get_aspect(
entity_urn=feature_urn, aspect_type=models.GlobalTagsClass
)
tag_to_add = builder.make_tag_urn("PII")
term_to_add = builder.make_term_urn("CustomerData")
if current_tags:
if tag_to_add not in [tag.tag for tag in current_tags.tags]:
current_tags.tags.append(models.TagAssociationClass(tag=tag_to_add))
else:
current_tags = models.GlobalTagsClass(
tags=[models.TagAssociationClass(tag=tag_to_add)]
)
emitter.emit(
MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=current_tags,
)
)
current_terms = graph.get_aspect(
entity_urn=feature_urn, aspect_type=models.GlossaryTermsClass
)
if current_terms:
if term_to_add not in [term.urn for term in current_terms.terms]:
current_terms.terms.append(models.GlossaryTermAssociationClass(urn=term_to_add))
else:
current_terms = models.GlossaryTermsClass(
terms=[models.GlossaryTermAssociationClass(urn=term_to_add)],
auditStamp=models.AuditStampClass(time=0, actor="urn:li:corpuser:datahub"),
)
emitter.emit(
MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=current_terms,
)
)
Ownership
Ownership is associated with features using the ownership aspect. Clear feature ownership is essential for:
- Knowing who to contact with questions about feature behavior
- Understanding responsibility for feature quality and updates
- Managing access control and governance
- Coordinating feature changes across teams
Python SDK: Add ownership to a feature
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.ingestion.graph.client import DatahubClientConfig, DataHubGraph
gms_endpoint = "http://localhost:8080"
emitter = DatahubRestEmitter(gms_server=gms_endpoint, extra_headers={})
graph = DataHubGraph(DatahubClientConfig(server=gms_endpoint))
feature_urn = builder.make_ml_feature_urn(
feature_table_name="user_features",
feature_name="age",
)
owner_to_add = builder.make_user_urn("data_science_team")
current_ownership = graph.get_aspect(
entity_urn=feature_urn, aspect_type=models.OwnershipClass
)
if current_ownership:
if owner_to_add not in [owner.owner for owner in current_ownership.owners]:
current_ownership.owners.append(
models.OwnerClass(
owner=owner_to_add,
type=models.OwnershipTypeClass.DATAOWNER,
)
)
else:
current_ownership = models.OwnershipClass(
owners=[
models.OwnerClass(
owner=owner_to_add,
type=models.OwnershipTypeClass.DATAOWNER,
)
]
)
emitter.emit(
MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=current_ownership,
)
)
Domains and Organization
Features can be organized into domains (via the domains aspect) to represent organizational structure or functional areas. Domain organization helps teams:
- Manage large feature catalogs by grouping related features
- Apply consistent governance policies to related features
- Facilitate discovery within organizational boundaries
- Track feature adoption by business unit
Code Examples
Creating a Complete ML Feature
Here's a comprehensive example that creates a feature with all core metadata:
Python SDK: Create a complete ML Feature
import os
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
gms_server = os.getenv("DATAHUB_GMS_URL", "http://localhost:8080")
token = os.getenv("DATAHUB_GMS_TOKEN")
emitter = DatahubRestEmitter(gms_server=gms_server, token=token)
feature_urn = builder.make_ml_feature_urn(
feature_table_name="user_features",
feature_name="user_engagement_score",
)
source_dataset_urn = builder.make_dataset_urn(
name="analytics.user_activity",
platform="snowflake",
env="PROD",
)
owner_urn = builder.make_user_urn("ml_platform_team")
tag_urn = builder.make_tag_urn("HighValue")
term_urn = builder.make_term_urn("EngagementMetrics")
domain_urn = builder.make_domain_urn("user_analytics")
feature_properties = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.MLFeaturePropertiesClass(
description="Composite engagement score calculated from user activity metrics including "
"page views, time on site, feature usage, and interaction frequency. "
"Higher scores indicate more engaged users. Range: 0-100.",
dataType="CONTINUOUS",
version=models.VersionTagClass(versionTag="1.0"),
sources=[source_dataset_urn],
customProperties={
"update_frequency": "daily",
"calculation_method": "weighted_sum",
"min_value": "0",
"max_value": "100",
},
),
)
ownership = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.OwnershipClass(
owners=[
models.OwnerClass(
owner=owner_urn,
type=models.OwnershipTypeClass.DATAOWNER,
)
]
),
)
tags = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.GlobalTagsClass(tags=[models.TagAssociationClass(tag=tag_urn)]),
)
terms = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.GlossaryTermsClass(
terms=[models.GlossaryTermAssociationClass(urn=term_urn)],
auditStamp=models.AuditStampClass(time=0, actor="urn:li:corpuser:datahub"),
),
)
domains = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.DomainsClass(domains=[domain_urn]),
)
for mcp in [feature_properties, ownership, tags, terms, domains]:
emitter.emit(mcp)
Linking Features to Feature Tables
Features are typically organized into feature tables. While the feature entity itself doesn't directly reference its parent table (the relationship is inverse - tables reference features), you can discover the containing table through relationships:
Python SDK: Find feature table containing a feature
from datahub.ingestion.graph.client import DatahubClientConfig, DataHubGraph
from datahub.metadata.urns import MlFeatureUrn
graph = DataHubGraph(DatahubClientConfig(server="http://localhost:8080"))
feature_urn = MlFeatureUrn(
feature_namespace="user_features",
name="age",
)
relationships = graph.get_related_entities(
entity_urn=str(feature_urn),
relationship_types=["Contains"],
direction=DataHubGraph.RelationshipDirection.INCOMING,
)
if relationships:
feature_table_urns = [rel.urn for rel in relationships]
print(f"Feature {feature_urn} is contained in tables:")
for table_urn in feature_table_urns:
print(f" - {table_urn}")
else:
print(f"Feature {feature_urn} is not associated with any feature table")
Querying ML Features
You can retrieve ML Feature metadata using both the Python SDK and REST API:
Python SDK: Read an ML Feature
from datahub.sdk import DataHubClient, MLFeatureUrn
client = DataHubClient.from_env()
# Or get this from the UI (share -> copy urn) and use MLFeatureUrn.from_string(...)
mlfeature_urn = MLFeatureUrn(
"test_feature_table_all_feature_dtypes", "test_BOOL_feature"
)
mlfeature_entity = client.entities.get(mlfeature_urn)
print("MLFeature name:", mlfeature_entity.name)
print("MLFeature table:", mlfeature_entity.feature_table_urn)
print("MLFeature description:", mlfeature_entity.description)
REST API: Fetch ML Feature metadata
# Get the complete entity with all aspects
curl 'http://localhost:8080/entities/urn%3Ali%3AmlFeature%3A(user_features,age)'
# Get relationships to see source datasets and consuming models
curl 'http://localhost:8080/relationships?direction=OUTGOING&urn=urn%3Ali%3AmlFeature%3A(user_features,age)&types=DerivedFrom'
curl 'http://localhost:8080/relationships?direction=INCOMING&urn=urn%3Ali%3AmlFeature%3A(user_features,age)&types=Consumes'
Batch Feature Creation
When creating many features at once (e.g., from a feature store ingestion connector), batch operations improve performance:
Python SDK: Create multiple features efficiently
import os
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
gms_server = os.getenv("DATAHUB_GMS_URL", "http://localhost:8080")
token = os.getenv("DATAHUB_GMS_TOKEN")
emitter = DatahubRestEmitter(gms_server=gms_server, token=token)
source_dataset = builder.make_dataset_urn(
name="analytics.users",
platform="snowflake",
env="PROD",
)
features_config = [
{
"name": "age",
"description": "User age in years",
"data_type": "CONTINUOUS",
},
{
"name": "country",
"description": "User country of residence",
"data_type": "NOMINAL",
},
{
"name": "is_verified",
"description": "Whether user email is verified",
"data_type": "BINARY",
},
{
"name": "total_orders",
"description": "Total number of orders placed",
"data_type": "COUNT",
},
{
"name": "signup_hour",
"description": "Hour of day user signed up",
"data_type": "TIME",
},
]
mcps = []
for feature_config in features_config:
feature_urn = builder.make_ml_feature_urn(
feature_table_name="user_features",
feature_name=feature_config["name"],
)
mcp = MetadataChangeProposalWrapper(
entityUrn=feature_urn,
aspect=models.MLFeaturePropertiesClass(
description=feature_config["description"],
dataType=feature_config["data_type"],
sources=[source_dataset],
),
)
mcps.append(mcp)
for mcp in mcps:
emitter.emit(mcp)
print(f"Created {len(mcps)} features in feature namespace 'user_features'")
Integration Points
ML Features integrate with multiple other entities in DataHub's metadata model to form a comprehensive ML metadata ecosystem:
Relationships with Datasets
Features declare their source datasets through the sources property in mlFeatureProperties. This creates a "DerivedFrom" lineage relationship that:
- Shows which raw data tables feed into each feature
- Enables impact analysis when datasets change
- Provides end-to-end lineage from data warehouse to model predictions
- Supports data quality root cause analysis
The relationship is directional: features point to their source datasets. Multiple features can derive from the same dataset, and a single feature can derive from multiple datasets if it's computed via a join or union.
Relationships with ML Models
ML Models consume features through the mlFeatures property in MLModelProperties. This creates a "Consumes" lineage relationship showing:
- Which features are used by each model
- Which models depend on a particular feature
- The complete set of inputs for model training and inference
- Impact analysis for feature changes on downstream models
This relationship enables critical use cases like:
- Feature usage tracking: Identify unused features that can be deprecated
- Model impact analysis: Find all models affected when a feature changes
- Feature importance correlation: Link model performance to feature changes
- Compliance documentation: Show exactly what data influences model decisions
Python SDK: Link features to a model
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.ingestion.graph.client import DatahubClientConfig, DataHubGraph
from datahub.metadata.schema_classes import MLModelPropertiesClass
gms_endpoint = "http://localhost:8080"
# Create an emitter to DataHub over REST
emitter = DatahubRestEmitter(gms_server=gms_endpoint, extra_headers={})
model_urn = builder.make_ml_model_urn(
model_name="my-test-model", platform="science", env="PROD"
)
feature_urns = [
builder.make_ml_feature_urn(
feature_name="my-feature3", feature_table_name="my-feature-table"
),
]
# The lookup below concatenates the new features with the model's existing features.
# To replace all existing features with only the new ones, skip this lookup.
graph = DataHubGraph(DatahubClientConfig(server=gms_endpoint))
model_properties = graph.get_aspect(
entity_urn=model_urn, aspect_type=MLModelPropertiesClass
)
if model_properties:
current_features = model_properties.mlFeatures
print("current_features:", current_features)
if current_features:
feature_urns += current_features
model_properties = models.MLModelPropertiesClass(mlFeatures=feature_urns)
# MCP creation
metadata_change_proposal = MetadataChangeProposalWrapper(
entityUrn=model_urn,
aspect=model_properties,
)
# Emit metadata!
emitter.emit(metadata_change_proposal)
Relationships with ML Feature Tables
Feature tables contain ML Features through the "Contains" relationship. The feature table's mlFeatures property lists the URNs of features it contains. This relationship:
- Organizes features into logical groupings
- Enables navigation from table to features and back
- Represents the physical or logical organization in the feature store
- Helps discover related features that share characteristics
While features don't explicitly store their parent table, you can discover it by querying incoming "Contains" relationships.
Python SDK: Add a feature to a feature table
import datahub.emitter.mce_builder as builder
import datahub.metadata.schema_classes as models
from datahub.emitter.mcp import MetadataChangeProposalWrapper
from datahub.emitter.rest_emitter import DatahubRestEmitter
from datahub.ingestion.graph.client import DatahubClientConfig, DataHubGraph
from datahub.metadata.schema_classes import MLFeatureTablePropertiesClass
gms_endpoint = "http://localhost:8080"
# Create an emitter to DataHub over REST
emitter = DatahubRestEmitter(gms_server=gms_endpoint, extra_headers={})
feature_table_urn = builder.make_ml_feature_table_urn(
feature_table_name="my-feature-table", platform="feast"
)
feature_urns = [
builder.make_ml_feature_urn(
feature_name="my-feature2", feature_table_name="my-feature-table"
),
]
# The lookup below concatenates the new features with the feature table's existing features.
# To replace all existing features with only the new ones, skip this lookup.
graph = DataHubGraph(DatahubClientConfig(server=gms_endpoint))
feature_table_properties = graph.get_aspect(
entity_urn=feature_table_urn, aspect_type=MLFeatureTablePropertiesClass
)
if feature_table_properties:
current_features = feature_table_properties.mlFeatures
print("current_features:", current_features)
if current_features:
feature_urns += current_features
feature_table_properties = models.MLFeatureTablePropertiesClass(mlFeatures=feature_urns)
# MCP creation
metadata_change_proposal = MetadataChangeProposalWrapper(
entityUrn=feature_table_urn,
aspect=feature_table_properties,
)
# Emit metadata! This is a blocking call
emitter.emit(metadata_change_proposal)
Platform Integration
Features are often associated with a platform through their namespace or through related entities (feature tables). While features themselves don't have a direct platform reference in their key, the namespace often encodes platform-specific organization, and related feature tables declare their platform explicitly.
GraphQL Resolvers
Features are accessible through DataHub's GraphQL API via the MLFeatureType class. The GraphQL interface provides:
- Search and discovery capabilities for features
- Autocomplete for feature names during searches
- Batch loading of feature metadata
- Filtering features by properties and relationships
Notable Exceptions
Feature Namespace vs Feature Table
The featureNamespace in the feature key is a logical grouping concept and doesn't necessarily correspond 1:1 with feature tables:
- In many feature stores: The namespace matches the feature table name. A feature table named user_features contains features with namespace user_features.
- In some systems: The namespace might represent a broader domain or project, with multiple feature tables sharing a namespace.
- Best practice: Use consistent namespace naming that aligns with your feature table organization for clarity.
When ingesting features, ensure namespace values match the corresponding feature table names for proper relationship establishment.
Feature Identity and Feature Tables
A feature's identity (featureNamespace + name) is independent of any feature table. This means:
- The same feature URN could theoretically be referenced by multiple feature tables (though this is uncommon)
- Changing a feature's containing table requires updating the table's metadata, not the feature itself
- Features can exist without being part of any feature table (though this reduces discoverability)
Most feature stores enforce 1:1 relationships between features and feature tables to avoid ambiguity.
Versioning Strategies
There are multiple approaches to versioning features:
Option 1: Version in the URN (namespace or name)
urn:li:mlFeature:(user_features_v2,age)
urn:li:mlFeature:(user_features,age_v2)
- Pros: Each version is a separate entity with independent metadata
- Cons: Harder to track version history; requires manual version management
Option 2: Version in the properties
MLFeatureProperties(
description="User age in years",
version=VersionTag(versionTag="2.0")
)
- Pros: Version history attached to single entity; easier lineage tracking
- Cons: Point-in-time queries are harder; version changes mutate entity
Recommendation: Use the version property in mlFeatureProperties for most use cases. Only use versioned URNs when breaking changes require fully separate entities (e.g., changing data type from continuous to categorical).
Composite Features and Feature Engineering
Composite features (features derived from other features) can be modeled in two ways:
Approach 1: Intermediate features as entities Create explicit feature entities for each transformation step, with lineage between them:
raw_feature -> transformed_feature -> composite_feature
Approach 2: Direct source lineage Skip intermediate features and link composite features directly to source datasets, documenting the transformation in the description.
Choose Approach 1 when:
- Intermediate features are reused by multiple downstream features/models
- You need to track transformations explicitly for governance
- Feature engineering pipelines are complex and multi-stage
Choose Approach 2 when:
- Transformations are simple and one-off
- Intermediate features have no independent value
- You want to reduce metadata entity count
Feature Drift and Monitoring
While DataHub's ML Feature entity doesn't include built-in drift monitoring aspects, you can use:
- Custom Properties: Store drift metrics or monitoring status
- Tags: Apply tags like HIGH_DRIFT_DETECTED or MONITORING_ENABLED
- Documentation: Link to external monitoring dashboards via institutionalMemory
- External Systems: Integrate with specialized feature monitoring tools and reference them in feature metadata
Feature drift detection typically happens in runtime feature stores or model monitoring systems, with DataHub serving as the metadata catalog that links to those systems.
Search and Discovery
Features are searchable by:
- Name (with autocomplete)
- Namespace (partial text search)
- Description (full text search)
- Tags and glossary terms
- Owner
- Source datasets (via lineage)
The name field has the highest search boost score (8.0), making feature name the primary discovery mechanism. Ensure feature names are descriptive and follow consistent naming conventions across your organization.
Technical Reference
For technical details about fields, searchability, and relationships, view the Columns tab in DataHub.