Can I overwrite the hyperparameters of an Optuna trial object after it has already suggested values?


Occasionally Optuna will suggest a sample that I don't really want to evaluate - usually either because it is the same as, or too close to, a previously evaluated solution. In this case I would like to evaluate a random sample instead.

Is there a way to overwrite the suggested values from trial.suggest_float() (for example) for a single trial and pass that back to the optimiser?

Given two methods: eval_check(), which takes a list of variables and determines whether the sample should be evaluated, returning True if it should be and False otherwise; and evaluate(), which takes a list of variables and evaluates them, returning a real number. Below is a sketch of what I am trying to achieve.

import random

import numpy as np
import optuna


class Objective(object):
    def __call__(self, trial):
        # suggest values
        x1 = trial.suggest_float('x1', 0.0, 1.0)
        x2 = trial.suggest_int('x2', 10, 20)
        x3 = trial.suggest_categorical('x3', ['cat1', 'cat2', 'cat3'])

        # check if we should evaluate; if not, draw a random sample instead
        while not eval_check([x1, x2, x3]):
            x1 = np.random.uniform(0.0, 1.0)
            x2 = np.random.randint(10, 21)  # upper bound exclusive, so 21 matches suggest_int's inclusive 20
            x3 = random.choice(['cat1', 'cat2', 'cat3'])

        return evaluate([x1, x2, x3])


sampler = optuna.samplers.NSGAIISampler(population_size=100)
study = optuna.create_study(sampler=sampler)
study.optimize(Objective(), n_trials=1000)
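(For completeness, here is one hypothetical pair of stand-ins for eval_check and evaluate; the bodies below are invented purely so the sketch runs end to end.)

import numpy as np

_history = []  # samples evaluated so far (hypothetical bookkeeping)

def eval_check(sample):
    # Reject a sample whose numeric part lies within 0.05 (Euclidean) of a
    # previously evaluated sample sharing the same category.
    for prev in _history:
        if prev[2] == sample[2]:
            if np.hypot(sample[0] - prev[0], sample[1] - prev[1]) < 0.05:
                return False
    return True

def evaluate(sample):
    # Toy objective; records the sample so eval_check can see it later.
    _history.append(sample)
    bonus = {'cat1': 0.0, 'cat2': 0.5, 'cat3': 1.0}[sample[2]]
    return sample[0] ** 2 + sample[1] / 20.0 + bonus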

Now, I know this won't work as I want it to. Say the Optuna trial object suggests the sample [0.5, 15, 'cat2'], but eval_check rejects it and the loop produces the random sample [0.2, 18, 'cat1'] instead. If I then return the output of evaluate([0.2, 18, 'cat1']), Optuna will think it is the output of evaluate([0.5, 15, 'cat2']) and associate that score with that sample in its model.

Is there a way that I can overwrite the suggested hyperparameters in the trial object such that when I return the score, Optuna will associate that score with the new overwritten hyperparameters in its model?


1 Answer


Is there a way that I can overwrite the suggested hyperparameters in the trial object such that when I return the score, Optuna will associate that score with the new overwritten hyperparameters in its model?

Optuna doesn't expect users to overwrite a trial's parameters. Instead, how about adding a constraint to the search space?
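For example, recent Optuna releases let NSGAIISampler take a constraints_func argument (experimental when it was introduced): it receives a finished trial and returns a sequence of floats, where any value greater than zero marks the trial as violating the constraint. A minimal sketch reusing eval_check:

import optuna

def constraints_func(trial):
    # Encode eval_check as a single constraint value: <= 0 means feasible,
    # > 0 means violated, which NSGAIISampler penalises during selection.
    params = [trial.params['x1'], trial.params['x2'], trial.params['x3']]
    return [-1.0 if eval_check(params) else 1.0]

sampler = optuna.samplers.NSGAIISampler(
    population_size=100, constraints_func=constraints_func)
study = optuna.create_study(sampler=sampler)
study.optimize(Objective(), n_trials=1000)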

Alternatively, raising optuna.TrialPruned for an undesired combination of parameters also skips computing the objective function whenever not eval_check([x1, x2, x3]) is True.
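A minimal sketch of that, keeping the Objective from the question:

import optuna

class Objective(object):
    def __call__(self, trial):
        x1 = trial.suggest_float('x1', 0.0, 1.0)
        x2 = trial.suggest_int('x2', 10, 20)
        x3 = trial.suggest_categorical('x3', ['cat1', 'cat2', 'cat3'])

        # Abandon the trial rather than evaluate an unwanted sample; no
        # score ever gets associated with these parameters.
        if not eval_check([x1, x2, x3]):
            raise optuna.TrialPruned()

        return evaluate([x1, x2, x3])

The pruned trial is stored with state PRUNED and no objective value, so the sampler won't associate a score with those parameters.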