1.

What is Machine Learning interpretability, and what are the different types of dataset shift?

Answer»

Machine Learning interpretability refers to the ability to understand how and why a model arrives at its predicted outcome. In real-world scenarios, data quality issues, the nature of the data distribution, and the way the data has been collected or gathered over time all have a significant impact on the machine learning process in which the model is developed. A model's outcome, in terms of prediction accuracy or similar measures, largely depends on aspects such as the features used, variation within those features, the data distribution, and changes in the correlations between those features.
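One common interpretability technique is permutation feature importance: shuffle one feature's values and measure how much the model's accuracy drops, since a large drop means the model relies on that feature. The toy data, the fixed rule-based "model", and the function names below are illustrative assumptions for this sketch, not part of the original answer:

```python
import random

random.seed(0)

# Toy data (assumed for illustration): the label depends only on x1, never on x2.
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
y = [1 if x1 > 0 else 0 for x1, _ in X]

def model(x1, x2):
    # Stand-in for any trained classifier: predicts 1 when x1 > 0, ignores x2.
    return 1 if x1 > 0 else 0

def accuracy(X, y):
    return sum(model(*x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature_idx):
    """Drop in accuracy when one feature's column is shuffled:
    a large drop means the model depends on that feature."""
    shuffled_col = [x[feature_idx] for x in X]
    random.shuffle(shuffled_col)
    X_perm = [tuple(s if i == feature_idx else v for i, v in enumerate(x))
              for x, s in zip(X, shuffled_col)]
    return accuracy(X, y) - accuracy(X_perm, y)

print(f"importance of x1: {permutation_importance(X, y, 0):.2f}")  # large drop
print(f"importance of x2: {permutation_importance(X, y, 1):.2f}")  # no drop
```

Because the toy model ignores x2 entirely, shuffling x2 leaves accuracy unchanged, while shuffling x1 destroys most of the model's skill.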

There are different types of dataset shift. These are critical because they affect model performance after the model is put into production: the data distribution an existing model was trained on during the development phase may change over time due to various factors.

  1. Covariate shift – when a shift occurs in the distribution of the independent variables (input features), it is termed covariate shift.
  2. Concept drift – when a shift occurs in the relationship between the independent variables and the target variable, it is termed concept drift.
  3. Prior probability shift – when a shift occurs in the distribution of the target variable itself, it is termed prior probability shift.
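The covariate-shift case above can be sketched with a simple drift check that compares a feature's values at training time against what the model sees in production. The simulated data, the scoring function, and its name are assumptions for illustration, not a production-grade detector (real systems typically use statistical tests such as Kolmogorov–Smirnov):

```python
import random
import statistics

def mean_shift_score(reference, current):
    """Standardized difference between the means of two samples.
    A large absolute value suggests the feature's distribution has drifted."""
    pooled_sd = statistics.pstdev(reference + current)
    if pooled_sd == 0:
        return 0.0
    return abs(statistics.mean(current) - statistics.mean(reference)) / pooled_sd

random.seed(42)
# Training-time ("reference") feature values, drawn from N(0, 1).
reference = [random.gauss(0, 1) for _ in range(1000)]
# Production values after a covariate shift: same target relationship assumed,
# but the input distribution has moved to N(1.5, 1).
shifted = [random.gauss(1.5, 1) for _ in range(1000)]
# Production values with no shift, for comparison: N(0, 1).
stable = [random.gauss(0, 1) for _ in range(1000)]

print(f"score (shifted sample): {mean_shift_score(reference, shifted):.2f}")  # large
print(f"score (stable sample):  {mean_shift_score(reference, stable):.2f}")   # near 0
```

In practice such a score would be computed per feature on a schedule, with an alert threshold chosen from historical variation, so that retraining can be triggered before prediction quality degrades.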

