No evaluator found. Use `DefaultTrainer.test(evaluators=)`, or implement its `build_evaluator` method


I am using Detectron2 in a notebook and I keep getting the error: "No evaluator found. Use `DefaultTrainer.test(evaluators=)`, or implement its `build_evaluator` method."

I already have the `build_evaluator` method in my trainer class:

from detectron2.data import build_detection_train_loader
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator

class AugTrainer(DefaultTrainer):
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        return COCOEvaluator(dataset_name, output_dir=output_folder)

    @classmethod
    def build_train_loader(cls, cfg):
        # custom_mapper is my augmentation mapper, defined elsewhere
        return build_detection_train_loader(cfg, mapper=custom_mapper)

The trainer is created here:

trainer = DefaultTrainer(cfg) if not is_augment else AugTrainer(cfg)
trainer.resume_or_load(resume=is_resume_training)
trainer.train()

I thought `COCOEvaluator` would also be invoked when the trainer runs. After training, I evaluate like this:

print("### EVALUATING ON VALIDATION DATA ####")
# trained model weights
cfg.MODEL.WEIGHTS = str(MODEL_PATH)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.6   # set a custom testing threshold

cfg.SOLVER.IMS_PER_BATCH = 64

evaluator = COCOEvaluator(DATA_REGISTER_VALID, cfg, False, output_dir=cfg.OUTPUT_DIR, use_fast_impl=True)

val_loader = build_detection_test_loader(cfg, DATA_REGISTER_VALID)

results = inference_on_dataset(trainer.model, val_loader, evaluator=evaluator)
    
# print the evaluation results
print("Evaluation results for dataset {}: \n".format(DATA_REGISTER_VALID))
print("Average Precision (AP) in given IoU threshold: \n")
print(results["bbox"])

I don't know what I'm doing wrong. Thanks in advance.

I've tried following the methods suggested in similar questions, with no luck. I want the evaluator to print the Average Precision (AP) and the evaluation results for the dataset.

1 Answer

It seems like you want to perform evaluation during the training process rather than after it.
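
As far as I can tell, the warning itself is logged by `DefaultTrainer.test()`: when a trainer class does not implement `build_evaluator`, the stub in `DefaultTrainer` raises `NotImplementedError`, which is caught and reported as "No evaluator found". In your snippet, the `not is_augment` branch builds a plain `DefaultTrainer(cfg)`, which has no evaluator. A minimal sketch of the fix, reusing your `is_augment` flag and the `CocoTrainer` defined in step 1 below:

# A bare DefaultTrainer has no build_evaluator implementation, so the
# periodic evaluation hook logs the "No evaluator found" warning.
# Use an evaluator-aware subclass in both branches instead:
trainer = CocoTrainer(cfg) if not is_augment else AugTrainer(cfg)

With that in mind, here is how to set up evaluation during training: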

  1. Define a `CocoTrainer` class that includes the evaluator:
import os
from detectron2.engine import DefaultTrainer
from detectron2.evaluation import COCOEvaluator

class CocoTrainer(DefaultTrainer):
    @classmethod
    def build_evaluator(cls, cfg, dataset_name, output_folder=None):
        if output_folder is None:
            output_folder = "coco_eval"
            os.makedirs(output_folder, exist_ok=True)
        return COCOEvaluator(dataset_name, cfg, False, output_folder)
  2. Set the model configuration for training:
# Import the required libraries
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()  # obtain detectron2's default config
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml"))
cfg.DATASETS.TRAIN = ("train",)  # training set (must be registered; see the note after this block)
cfg.DATASETS.TEST = ("val",)  # validation set
cfg.TEST.EVAL_PERIOD = 100  # evaluate on cfg.DATASETS.TEST every 100 iterations (and after the final one)
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml")  # initialize training from the model zoo
cfg.SOLVER.BASE_LR = 0.0005  # pick a good LR; 0.00025 or 0.0005 is a reasonable start
cfg.DATALOADER.NUM_WORKERS = 4
cfg.SOLVER.IMS_PER_BATCH = 4  # this is the real "batch size"
cfg.SOLVER.MAX_ITER = 500  # 500 or 1000 iterations are a good start; increase for better accuracy, or drop the line to keep the default (90000)
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 16  # (default: 512); smaller trains faster
cfg.MODEL.ROI_HEADS.NUM_CLASSES = len(classes)  # number of classes in your dataset
cfg.OUTPUT_DIR = "your/output/dir"
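
Note that the dataset names "train" and "val" above must already be registered with detectron2. A minimal sketch, assuming COCO-format annotations (the paths below are hypothetical placeholders):

from detectron2.data.datasets import register_coco_instances

# Hypothetical paths -- replace with your own annotation files and image roots
register_coco_instances("train", {}, "path/to/train_annotations.json", "path/to/train_images")
register_coco_instances("val", {}, "path/to/val_annotations.json", "path/to/val_images")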
  3. Save the configuration:
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)  # make sure the output directory exists
with open(os.path.join(cfg.OUTPUT_DIR, "my_dataset_cfg.yaml"), "w") as file:
    file.write(cfg.dump())  # CfgNode.dump() serializes the config to a YAML string
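
To pick up the same settings in a later session, the saved file can be merged back into a fresh default config:

cfg = get_cfg()
cfg.merge_from_file("your/output/dir/my_dataset_cfg.yaml")  # restore the saved settings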
  4. Start the training process:
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
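
If you also want a final evaluation after training finishes and to print the AP numbers (as in your original snippet), `DefaultTrainer.test` can reuse the same `build_evaluator`. A minimal sketch; with a single entry in `cfg.DATASETS.TEST`, the returned dict holds the metrics directly:

# Run inference on cfg.DATASETS.TEST using CocoTrainer.build_evaluator
results = CocoTrainer.test(cfg, trainer.model)
print(results["bbox"])  # COCO-style AP metrics for the box task ("segm" holds the mask metrics)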

I hope this helps you!