How to Apply Platt Calibration to bestRun.Model in AutoML


I am using AutoML in ML.NET.

  • trainData contains the training data
  • testData contains the test data
  1. The code works well where I call "CreateBinaryClassificationExperiment" and let it run for its time budget.

  2. After the experiment completes, I retrieve:

    var bestModel = bestRun.Model;

What I am trying to understand now is how to apply a Platt calibration. I have scratched my head all day but am simply not sure what code is needed after var bestModel = bestRun.Model;, and how to return the Platt-calibrated predictions into:
(I would do the same for the Isotonic and Naive calibrations, which as I understand it follow the same steps.)

var plattCalibratedPredictions

Any help would be greatly appreciated!

Code:

void run_CreateBinaryClassificationExperiment(IDataView trainData, IDataView testData, MLContext mlContext)
{
    // Use AutoML to find the best model
    BinaryClassificationMetrics bestValidationMetrics = null;
    string bestTrainerName = null;
    var experiment = mlContext.Auto().CreateBinaryClassificationExperiment(new BinaryExperimentSettings
    {
        MaxExperimentTimeInSeconds = 7,
        CacheBeforeTrainer = CacheBeforeTrainer.On,
        CacheDirectoryName = "C:/Aintelligence/temp/cache",
        MaximumMemoryUsageInMegaByte = 8192, // Set the maximum memory usage (adjust as needed)
        OptimizingMetric = BinaryClassificationMetric.PositivePrecision
    });
    var progressHandler = new Progress<RunDetail<BinaryClassificationMetrics>>(ph =>
    {
        if (ph.ValidationMetrics != null)
        {
            Invoke((MethodInvoker)delegate
            {
                // Report the current run, then check if it beats the best so far
                listBox1.Items.Add($"Current trainer - {ph.TrainerName}, {ph.ValidationMetrics.PositivePrecision}");
                if (bestValidationMetrics == null || ph.ValidationMetrics.PositivePrecision > bestValidationMetrics.PositivePrecision)
                {
                    bestValidationMetrics = ph.ValidationMetrics; // Save the best metrics so far
                    bestTrainerName = ph.TrainerName;
                }
            });
        }
    });
    var results = experiment.Execute(trainData, testData, labelColumnName: "Label", progressHandler: progressHandler);
    var bestRun = results.BestRun;
    var metrics = bestRun.ValidationMetrics;
    Invoke((MethodInvoker)delegate { richTextBox1.Text = $"Best model {bestTrainerName}, {bestValidationMetrics?.PositivePrecision ?? 0.0}"; });


    // Create a Platt calibrator (this is where I get stuck: how do I apply it?)
    var plattCalibrator = mlContext.BinaryClassification.Calibrators.Platt();


    // Access the best model
    var bestModel = bestRun.Model;

    // What code has to go here to calibrate "bestModel" with Platt calibration on "testData",
    // so that the calibrated predictions end up in "plattCalibratedPredictions"?

    IDataView plattCalibratedPredictions = null; // TODO: how do I produce this?

    // The "Evaluate" method is suitable for evaluating predictions that have been calibrated.
    var _metrics = mlContext.BinaryClassification.Evaluate(plattCalibratedPredictions, labelColumnName: "Label");
}
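For context, here is a sketch of one way the missing calibration step could look. This is an assumption based on the ML.NET calibrators catalog (`mlContext.BinaryClassification.Calibrators.Platt(...)` returns an estimator that is fit on already-scored data), and it assumes the default ML.NET column names "Label" and "Score":

```csharp
// Sketch only: assumes Microsoft.ML and the default "Label"/"Score" column names.

// 1. Score the test data with the best AutoML model, producing a raw "Score" column.
IDataView scoredTestData = bestModel.Transform(testData);

// 2. Create a Platt calibrator estimator that maps "Score" to a calibrated probability.
var plattEstimator = mlContext.BinaryClassification.Calibrators.Platt(
    labelColumnName: "Label", scoreColumnName: "Score");

// 3. Fit the calibrator on the scored data, then transform to get calibrated predictions.
var plattTransformer = plattEstimator.Fit(scoredTestData);
IDataView plattCalibratedPredictions = plattTransformer.Transform(scoredTestData);

// 4. Evaluate the calibrated predictions.
var calibratedMetrics = mlContext.BinaryClassification.Evaluate(
    plattCalibratedPredictions, labelColumnName: "Label");

// Isotonic and Naive calibration would follow the same Fit/Transform pattern:
//   mlContext.BinaryClassification.Calibrators.Isotonic(...)
//   mlContext.BinaryClassification.Calibrators.Naive(...)
```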