Motivation
- I have a detectron2 Mask R-CNN baseline model that is good enough to predict some object boundaries accurately.
 
- I'd like to convert these predicted boundaries to COCO polygons to annotate the next dataset (supervised labeling).
 
- To do this, I need to run inference on an image dataset that does not have annotations.
 
- The detectron2 functions register_coco_instances and load_coco_json require a COCO JSON file with image and annotation info in order to label the predicted objects properly.
Questions
- Can I register the test dataset without an annotations file?
 
- If not, what's the easiest way to generate COCO or Labelme JSON files with basic image info without annotations?
 
Code
import cv2

from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.data.datasets import load_coco_json, register_coco_instances
from detectron2.engine import DefaultPredictor

dataset_name = "test_data"
image_dir = "data/test"
coco_file = "data/test_annotations.json"

# Register dataset
# A COCO file with image info is needed, which I don't have
register_coco_instances(dataset_name, {}, coco_file, image_dir)
test_dict = load_coco_json(coco_file, image_dir, dataset_name=dataset_name)
metadata = MetadataCatalog.get(dataset_name)

# Config details omitted for brevity
cfg = get_cfg()
predictor = DefaultPredictor(cfg)

# Make predictions for all images
for sample in test_dict:
    image_filename = sample["file_name"]
    img = cv2.imread(image_filename)
    outputs = predictor(img)
    # Display or save image with predictions to file
Here's a method to generate the image details from a directory of images and write them to an existing COCO JSON file:
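A minimal sketch of such a helper (the function name add_image_info, the file extensions, and the use of Pillow to read image sizes are my own choices, not anything required by detectron2 or the COCO format):

import json
import os

from PIL import Image

def add_image_info(coco_file, image_dir, extensions=(".jpg", ".jpeg", ".png")):
    """Append basic image entries (id, file_name, width, height) to an existing COCO JSON file."""
    with open(coco_file, "r") as f:
        coco = json.load(f)

    images = coco.get("images", [])
    # Continue numbering after any existing image ids
    next_id = max((img["id"] for img in images), default=0) + 1

    for filename in sorted(os.listdir(image_dir)):
        if not filename.lower().endswith(extensions):
            continue
        # Pillow reads the header lazily, so this only fetches width/height
        with Image.open(os.path.join(image_dir, filename)) as img:
            width, height = img.size
        images.append({
            "id": next_id,
            "file_name": filename,
            "width": width,
            "height": height,
        })
        next_id += 1

    coco["images"] = images
    with open(coco_file, "w") as f:
        json.dump(coco, f)

# Example usage (run after creating the baseline file described below)
add_image_info("data/test_annotations.json", "data/test")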
You'll need to create a baseline COCO JSON file with your categories if you don't already have one. It should look something like this:
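(A sketch of one way to create it from Python; the dict below mirrors the JSON structure, and the category names and ids are placeholders for your own classes.)

import json

# Minimal COCO skeleton: empty image/annotation lists plus the category list.
# Replace the placeholder category names/ids with your model's classes.
baseline = {
    "info": {},
    "licenses": [],
    "images": [],
    "annotations": [],
    "categories": [
        {"id": 1, "name": "class_1", "supercategory": ""},
        {"id": 2, "name": "class_2", "supercategory": ""},
    ],
}

with open("data/test_annotations.json", "w") as f:
    json.dump(baseline, f, indent=2)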