I am attempting to train an object classifier with MaskRCNN, and the tutorial I am following uses the VGG annotation tool, which exports all of the labelled data into a single JSON file. I have used labelme for my data and need to prepare it for MaskRCNN.
Labelme gives a JSON file for each labelled image in this format:
{
  "version": "4.6.0",
  "flags": {},
  "shapes": [
    {
      "label": "Green",
      "points": [
        [1385.6666666666665, 2.121212121212121],
        [1349.3030303030303, 174.84848484848484],
        [1400.8181818181818, 296.06060606060606],
        [1482.6363636363635, 344.5454545454545],
        [1619.0, 338.48484848484844],
        [1715.969696969697, 244.54545454545453],
        [1728.090909090909, 120.30303030303028],
        [1712.939393939394, 71.81818181818181],
        [1679.6060606060605, 11.212121212121211]
      ],
      "group_id": null,
      "shape_type": "polygon",
      "flags": {}
    },
I have a directory of images and corresponding JSON files. How can I combine them into a single file? I can't get labelme_json_to_dataset to work, and I believe that is supposed to be the solution?
You can use the labelme2coco.py script in the labelme repository, under the examples/instance_segmentation folder, with this command:
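Something along these lines (the folder names `srcfiles` and `output_dir` are placeholders for your own paths; check the script's `--help` for the exact arguments in your labelme version):

```shell
# srcfiles: folder containing your images together with their labelme .json files
# labels.txt: one class name per line
# output_dir: where the merged COCO-format annotation file is written
python labelme2coco.py srcfiles output_dir --labels labels.txt
```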
It will convert your annotation files into one JSON file in COCO format. srcfiles is the folder containing your labelme JSON files together with the images, and labels.txt lists your label names.
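If you would rather not depend on the labelme repository, the merge can also be sketched in plain Python. This is an illustrative assumption, not the labelme2coco.py implementation: `labelme_to_coco` is a made-up helper, and it only fills the COCO fields shown (e.g. it omits the `area` field that some COCO consumers expect).

```python
import glob
import json
import os

def labelme_to_coco(labelme_records, label_names):
    """Merge labelme-style annotation dicts into one COCO-format dict.

    `labelme_records` is a list of (image_filename, parsed_labelme_json) pairs;
    `label_names` is the list of class names, in category-id order.
    """
    coco = {
        "images": [],
        "annotations": [],
        "categories": [
            {"id": i + 1, "name": name} for i, name in enumerate(label_names)
        ],
    }
    cat_ids = {c["name"]: c["id"] for c in coco["categories"]}
    ann_id = 1
    for img_id, (filename, rec) in enumerate(labelme_records, start=1):
        coco["images"].append({
            "id": img_id,
            "file_name": filename,
            "width": rec.get("imageWidth"),
            "height": rec.get("imageHeight"),
        })
        for shape in rec["shapes"]:
            # Flatten labelme's [[x, y], ...] polygon into COCO's [x1, y1, x2, y2, ...]
            seg = [coord for point in shape["points"] for coord in point]
            xs, ys = seg[0::2], seg[1::2]
            coco["annotations"].append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": cat_ids[shape["label"]],
                "segmentation": [seg],
                "bbox": [min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)],
                "iscrowd": 0,
            })
            ann_id += 1
    return coco

if __name__ == "__main__":
    # Gather every labelme JSON in a directory and write one merged COCO file.
    records = []
    for path in sorted(glob.glob("srcfiles/*.json")):
        with open(path) as f:
            rec = json.load(f)
        records.append((os.path.splitext(os.path.basename(path))[0] + ".jpg", rec))
    with open("annotations_coco.json", "w") as f:
        json.dump(labelme_to_coco(records, ["Green"]), f)
```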