

Prediction Explanations on small data

Warning

The described workaround is intended for users who are very familiar with the partitioning methods used in DataRobot modeling. Be certain you understand the implications of the changes and their impact on resulting models.

Robot 1

Can I get Prediction Explanations for a small dataset?

For small datasets, specifically those whose Validation partition has fewer than 100 rows, we can't run XEMP Prediction Explanations. (I assume the same is true for SHAP, but I haven't confirmed.) Is there a common workaround for this? I was considering just doubling or tripling the dataset by creating duplicates, but I'm not sure if others have used slicker approaches.

Robot 2

It’s not true for SHAP, actually. No minimum row count there. 🤠

I feel like I’ve seen workarounds described in #cfds or #data-science or somewhere... One thing you can do is adjust the partitioning ratios to ensure at least 100 rows land in Validation. There might be other tricks too.

Robot 1

Right, that second idea makes sense, but you'd probably need more than 200 rows, since Validation can only be a fraction of the data once Training and Holdout get their shares. The user has a dataset with 86 rows.

I just don't want to have to eighty-six their use case. 🥁

Robot 2

OK, no dice there. 🎲

I’d want to be really careful with duplicates, but this MIGHT finesse the issues:

  1. Train on your actual dataset, run “Training Predictions”, and carefully note which partition every row landed in.

  2. Supplement the dataset with copied rows, and add a partition column such that all original rows go in the same partitions as before and all copied rows go in the Validation fold (see the sketch after this list). I guess you probably want to leave the Holdout the same.

  3. Start a new project, select User CV pointing at the new partition column, and train the model. Probably run Training Predictions again and make sure the original rows kept the same prediction values.

  4. You should be able to run XEMP now.
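
Something like the following (untested) could handle steps 1 and 2 with pandas, once you've exported the original rows along with their partition assignments. All file and column names here are just placeholders:

```python
import pandas as pd

# Sketch of steps 1-2. Assumes the original 86-row dataset was exported with a
# "partition" column recording where each row landed in the first project
# ("Training", "Validation", or "Holdout"). File/column names are hypothetical.
df = pd.read_csv("original_with_partitions.csv")

val = df[df["partition"] == "Validation"]

# Replicate existing Validation rows until the fold reaches 100 rows, so the
# copies don't change the Validation feature distributions.
n_needed = max(0, 100 - len(val))
copies = val.sample(n=n_needed, replace=True, random_state=0).copy()
copies["partition"] = "Validation"  # already the case, just being explicit

augmented = pd.concat([df, copies], ignore_index=True)
augmented.to_csv("augmented_dataset.csv", index=False)
# In the new project, use "partition" as the user partition column.
```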

Robot 2

I think (fingers crossed) that this would result in the same trained model, but you would have faked out XEMP. The Validation scores for the modified model would be highly suspect, though. The XEMP explanations themselves would probably be OK, as long as you ensure the copied data didn’t appreciably change the distributions of any features in the Validation set.

I think if you scrupulously kept the Holdout rows the same, and the Holdout scores match between the two models, that’s a good sign of success.
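
A quick way to eyeball that distribution check (again just a sketch, reusing the placeholder file names from above):

```python
import pandas as pd

# Compare the Validation fold's feature distributions before and after adding
# the copied rows; large shifts mean the copies have distorted the Validation
# set and the XEMP output is suspect.
orig = pd.read_csv("original_with_partitions.csv")
aug = pd.read_csv("augmented_dataset.csv")

orig_val = orig[orig["partition"] == "Validation"].drop(columns="partition")
aug_val = aug[aug["partition"] == "Validation"].drop(columns="partition")

# Side-by-side per-feature means and standard deviations.
summary = pd.concat(
    {
        "original_mean": orig_val.mean(numeric_only=True),
        "augmented_mean": aug_val.mean(numeric_only=True),
        "original_std": orig_val.std(numeric_only=True),
        "augmented_std": aug_val.std(numeric_only=True),
    },
    axis=1,
)
print(summary)
```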

Robot 1

Right, so if I ran Autopilot again, it would do unreasonably well on that Validation set, but if I just train the same blueprint from the original Autopilot run, that would be fine.

Robot 2

Yes. Autopilot would probably run a different sequence of blueprints because the Leaderboard order would be wacky and the winning blueprint would quite likely be different.

It almost goes without saying, but this is more suspect the earlier you do it in the model selection process. If you’re doing a deep dive on a model you’ve almost locked in on, that’s one thing, but if you’re still choosing among many options, it’s a different situation.

Robot 1

Brilliant, thank you!


Updated February 20, 2023