I am using Ray Tune with a conditional search space, where one hyperparameter depends on the value of another. I modified the Tune basic example to add a conditional hyperparameter, as described in the search space guide.
My example is:
"""This example demonstrates basic Ray Tune random search and grid search with hierarchical search space."""
import time
import numpy as np
import ray
from ray import train, tune
def evaluation_fn(step, width, height):
    time.sleep(0.1)
    return (0.1 + width * step / 100) ** (-1) + height * 0.1
def easy_objective(config):
    # Hyperparameters
    width, height = config["width"], config["height"]

    for step in range(config["steps"]):
        # Iterative training function - can be any arbitrary training procedure
        intermediate_score = evaluation_fn(step, width, height)
        # Feed the score back to Tune.
        train.report({"iterations": step, "mean_loss": intermediate_score})
if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--smoke-test", action="store_true", help="Finish quickly for testing"
    )
    args, _ = parser.parse_known_args()

    # ray.init(configure_logging=False)
    # This will do a grid search over the `activation` parameter. This means
    # that each of the two values (`relu` and `tanh`) will be sampled once
    # for each sample (`num_samples`), so we end up with 2 * 10 = 20 trials.
    # The `width` parameter is sampled randomly, while `height` is sampled
    # conditionally on `width` via `tune.sample_from`.
    # `steps` is a constant parameter.
    tuner = tune.Tuner(
        easy_objective,
        tune_config=tune.TuneConfig(
            metric="mean_loss",
            mode="min",
            num_samples=5 if args.smoke_test else 10,
        ),
        param_space={
            "steps": 5 if args.smoke_test else 100,
            "width": tune.uniform(0, 20),
            "height": tune.sample_from(
                lambda spec: spec.config.width * np.random.normal()
            ),
            "activation": tune.grid_search(["relu", "tanh"]),
        },
    )
    results = tuner.fit()

    print("Best hyperparameters found were: ", results.get_best_result().config)
Here, height depends on the value of width. The tuning script runs without problems. However, I would like to see the value of height in TensorBoard, but it does not show up. Similarly, the value of height for each trial is not printed to stdout, whereas the other parameters are shown correctly.
Is there a way to get conditional parameters to show up in TensorBoard and in the command-line output?
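The only workaround I have come up with so far is to report the resolved height value as an extra metric from inside the trainable. A minimal sketch of that idea (reusing evaluation_fn from above; I am not sure this is the intended approach):

def easy_objective(config):
    width, height = config["width"], config["height"]
    for step in range(config["steps"]):
        intermediate_score = evaluation_fn(step, width, height)
        # Report the resolved conditional value alongside the metrics so it
        # at least appears as a scalar in TensorBoard and in the results.
        train.report(
            {"iterations": step, "mean_loss": intermediate_score, "height": height}
        )

That makes height visible as a metric, but I would prefer it to appear in the trial config alongside the other parameters.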