Model Fabric

Model Fabric is an environment for domain specialists and data scientists to create AI/ML models that address business needs. The following are the features and enhancements in Genix Suite 25.2.

Compliance Check for Imported Models

Imported models are now checked for logging and platform compatibility. Status is shown as Compliant or Non-Compliant during model registration.

Python Version and Pip Package Validation

During model import, the Python version is displayed, and pip packages are validated for compatibility. Models with invalid packages cannot be registered.
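
As an illustration of the kind of check involved, the sketch below uses the open-source packaging library to parse pinned requirements and flag entries that cannot be parsed. This is a generic example, not Model Fabric's actual validation logic.

# Illustrative only: a generic requirement check, not Model Fabric's
# actual validation. Assumes the "packaging" library is installed.
from packaging.requirements import Requirement, InvalidRequirement

def check_requirements(lines):
    """Split requirement strings into valid and invalid entries."""
    valid, invalid = [], []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        try:
            Requirement(line)   # raises InvalidRequirement on bad syntax
            valid.append(line)
        except InvalidRequirement:
            invalid.append(line)
    return valid, invalid

valid, invalid = check_requirements(["scikit-learn==1.4.2", "pandas>=2.0", "not a package!!"])
print("valid:", valid)      # ['scikit-learn==1.4.2', 'pandas>=2.0']
print("invalid:", invalid)  # ['not a package!!']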

Variable Names for ICM Models

From Data Exploration through Orchestration, variable names from ICM now appear exactly as in the source system, including special characters. No explicit variable name mapping is required during Model Orchestration.

Standard Logging for Predictive Models

Models trained from Model Recommender now log inputs, outputs, and runtime metadata in a consistent format.
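
For illustration, a prediction log entry in such a consistent format could resemble the record below. The field names are assumptions made for the example, not Model Fabric's exact schema.

# Hypothetical standardized prediction log record.
# Field names and values are illustrative assumptions only.
prediction_log = {
    "model_name": "pump_efficiency_regressor",   # assumed model identifier
    "model_version": "3",
    "timestamp": "2025-06-18T10:15:00Z",         # when the prediction ran
    "inputs": {"inlet_pressure": 4.2, "flow_rate": 118.5},
    "outputs": {"efficiency": 0.87},
    "runtime": {"duration_ms": 42, "status": "success"},
}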

Logging Enhancements for DL Forecasting Models

Forecast outputs from deep learning models now include forecast time, predicted time, and predicted values.
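
As a sketch of the three logged elements, a single forecast record might look like the following; names and values are illustrative only, not the platform's exact schema.

# Illustrative forecast log entry. "forecast_time" is when the forecast
# was generated; "predicted_time" is the timestamp each value applies to.
forecast_log = {
    "forecast_time": "2025-06-18T10:00:00Z",
    "predictions": [
        {"predicted_time": "2025-06-18T11:00:00Z", "predicted_value": 71.4},
        {"predicted_time": "2025-06-18T12:00:00Z", "predicted_value": 70.9},
    ],
}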

Logging Support for ML Forecasting Models

Machine learning forecasts are now logged in the same format as DL models, with full support for numerical and categorical targets.

Clone Existing Dataset

In Data Explorer, datasets can now be copied to create a new version with the same settings, filters, and variables through the Copy Dataset option.

Centralized Orchestrator Management

Users can now manage all orchestrators in one place via Model Registry > Manage Workflows > Model Inference Jobs.

From the Model Inference Jobs page, users can view orchestrator details, check execution and schedule history, delete orchestrators, or clear execution history - all from a single interface.

Centralized Schedule Management

Users can now access all scheduled orchestrators and pipelines from the new Schedule Management view via Model Registry > Manage Workflows > View Execution Schedule.
The page displays schedule details, schedule run history, and status (Active/Paused) for each job.
Users can filter by time range, analyze execution statistics, and view summary charts for each scheduled orchestrator.

Decoupling Pipeline Designer from Model Requirement

Users are no longer required to select a registered model when configuring source or destination nodes in Pipeline Designer. This allows pipelines to be built for tasks such as data fetching, custom preprocessing, or data export without involving a model, for example KSH to CDL pipelines with custom logic.
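
As a hypothetical illustration of a model-free pipeline, the configuration sketch below describes a source, a custom transformation, and a destination with no model node. The node types and keys are assumptions for the example, not Pipeline Designer's actual schema.

# Hypothetical, simplified pipeline definition with no model node.
# Node types, keys, and names are illustrative assumptions only.
pipeline = {
    "name": "ksh_to_cdl_daily_export",
    "nodes": [
        {"type": "source", "system": "KSH", "dataset": "sensor_readings"},
        {"type": "transform", "script": "custom_preprocessing.py"},  # custom logic step
        {"type": "destination", "system": "CDL", "dataset": "processed_sensor_readings"},
    ],
}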

Decoupling Forecast Duration from System Defaults

Users can now define the forecasting duration for ML Forecasting models directly within the orchestrator configuration. This gives full control over how far ahead predictions are generated when configuring an orchestrator for ML Forecasting models.
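
For example, a user-defined forecast duration in the orchestrator configuration could be expressed along these lines; the parameter names are hypothetical and may differ from the actual configuration keys.

# Hypothetical orchestrator settings for an ML Forecasting model.
# Parameter names are illustrative assumptions only.
orchestrator_config = {
    "model": "energy_demand_forecaster",
    "forecast_duration": {"value": 24, "unit": "hours"},  # user-defined horizon
    "schedule": "0 * * * *",                              # run hourly (cron syntax)
}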

Simplified Output Mapping for Compatible Models

For compatible models (internal or external models that follow the standard output format), the output mapping step has been removed from the orchestrator.
Output fields are auto-identified, reducing the number of clicks and errors. Output mapping still applies to non-compatible models, where manual mapping is required.

Predicted Timestamp in KSH Data

A timestamp issue affecting predicted values sourced from KSH has been resolved. Predictions now reflect accurate timestamps aligned with the source data, improving the reliability of model outputs during visualization.