"LightAutoML testers" team models OOFs and Test predictions (AutoML Grand Prix)
Dataset Description
[AutoML Grand Prix] 1st Place Solution Team LightAutoML testers - OOF and Test predictions
🏆 Solution description
GitHub repo with the training code
TL;DR: 24 base regression and classification models (GBDT + NN), combined in a blend.
We trained every model (CatBoost and LGBM for regression; DenseLight and FT-Transformer for both regression and classification) twice over the target: once on the original target and once on a clipped target (all price values above 500k clipped within the training fold). Each of these was trained on both the original dataset and a version augmented with the kagglex data, where the extra rows were added to the training fold only (thanks to @lashfire).
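The key detail above is that both the target clipping and the kagglex augmentation touch only the training fold, so out-of-fold validation stays on the original, unclipped data. A minimal sketch of that fold construction (column names, the 500k threshold source, and the helper name are assumptions for illustration, not the team's actual code):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

CLIP_VALUE = 500_000  # prices above this are clipped, per the description


def make_fold_data(train: pd.DataFrame, extra: pd.DataFrame, n_splits: int = 10):
    """Yield (fit_df, valid_df) pairs for each CV fold.

    The extra (kagglex) rows and the target clipping are applied to the
    training part of each fold only; the validation part keeps the
    original rows with the unclipped target.
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=42)
    for fit_idx, val_idx in kf.split(train):
        # augment the training fold with the external rows
        fit_df = pd.concat([train.iloc[fit_idx], extra], ignore_index=True)
        # clip the target in the training fold only
        fit_df["price"] = fit_df["price"].clip(upper=CLIP_VALUE)
        # validation fold: original data, original target
        valid_df = train.iloc[val_idx]
        yield fit_df, valid_df
```

Keeping the augmentation and clipping out of the validation fold is what makes the resulting OOF predictions an honest estimate of test performance on the original target distribution.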
Our final ensemble with 10-Fold CV scores is:
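The blend of the 24 base models can be sketched as a weighted average of their out-of-fold (or test) predictions. The weights and model names below are placeholders for illustration; the actual ensemble weights are not part of this description:

```python
import numpy as np


def blend(preds: dict[str, np.ndarray], weights: dict[str, float]) -> np.ndarray:
    """Weighted average of per-model predictions; weights are normalized to sum to 1."""
    total = sum(weights.values())
    return sum(weights[name] * preds[name] for name in preds) / total


# hypothetical two-model example
preds = {"catboost": np.array([1.0, 2.0]), "lgbm": np.array([3.0, 4.0])}
blended = blend(preds, {"catboost": 1.0, "lgbm": 3.0})
```

The same function applies unchanged to OOF predictions (for scoring the blend with 10-fold CV) and to test predictions (for the final submission).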