Is there an easy solution for exporting/generating a custom dataset after applying some transforms, i.e., bounding box augmentation?
To elaborate, my goal is to
- import my original dataset (drone images containing small objects), which is in COCO format,
- crop the objects from the original image with 20-50% background, with the object randomly shifted from the cropped image's center,
- decompose the original bounding box annotations and distribute them to the cropped images with scaled and transformed coordinates,
- export/save the cropped images to disk as a new dataset with annotations, i.e., in YOLOv5 or COCO format.
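A minimal sketch of the cropping step above, assuming pixel-space boxes; function and parameter names (`crop_window`, `bg_frac`, `shift_frac`) are my own, not from any library:

```python
import random

def crop_window(box_xywh_px, img_w, img_h, bg_frac=(0.2, 0.5), shift_frac=0.5):
    """Compute a pixel crop window around one object.

    box_xywh_px: (x_center, y_center, width, height) of the object in pixels.
    bg_frac:     20-50% of the object's size is added as background margin.
    shift_frac:  how far (as a fraction of the free space inside the crop)
                 the object may be shifted off the crop's center, so the
                 object always stays fully inside the crop.
    """
    xc, yc, bw, bh = box_xywh_px
    margin = random.uniform(*bg_frac)
    cw = bw * (1 + 2 * margin)  # crop size = object + background on both sides
    ch = bh * (1 + 2 * margin)
    # random shift, bounded so the object never leaves the crop
    dx = random.uniform(-1, 1) * shift_frac * (cw - bw) / 2
    dy = random.uniform(-1, 1) * shift_frac * (ch - bh) / 2
    x0 = xc - cw / 2 + dx
    y0 = yc - ch / 2 + dy
    # clamp the window to the image bounds
    x0 = max(0.0, min(x0, img_w - cw))
    y0 = max(0.0, min(y0, img_h - ch))
    return int(x0), int(y0), int(round(cw)), int(round(ch))
```

The returned window can then be fed to e.g. `PIL.Image.crop((x0, y0, x0 + cw, y0 + ch))` to produce the cropped image.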
Example:
Originalimage.jpg
and the annotations are (for example, in YOLO format):
Originalimage.txt
0 0.3343 0.6527 0.0061 0.0062
0 0.2631 0.6390 0.0058 0.0042
1 0.2790 0.6580 0.0377 0.0125
1 0.1930 0.6303 0.0380 0.0172
1 0.3380 0.5542 0.0372 0.0174
0 0.3702 0.5525 0.0102 0.0086
0 0.3908 0.5963 0.0063 0.0057
0 0.3885 0.5603 0.0061 0.0048
0 0.2083 0.6379 0.0047 0.0038
1 0.3411 0.5700 0.0391 0.0125
Originalimage_crop1.jpg
transform original bbox (line 4): 1 0.1930 0.6303 0.0380 0.0172 to augmented bbox: 1 xnew ynew wnew hnew and save to Originalimage_crop1.txt
Originalimage_crop2.jpg
transform original bbox (line 10): 1 0.3411 0.5700 0.0391 0.0125 to augmented bbox: 1 xnew ynew wnew hnew and save to Originalimage_crop2.txt
and so on...
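The per-box transform in the example above can be sketched as follows; this is my own illustration of the coordinate math (names like `remap_yolo_box` are hypothetical, not from any library):

```python
def remap_yolo_box(box, img_w, img_h, crop):
    """Remap one normalized YOLO box (cls, xc, yc, w, h) from the original
    image into a crop's own normalized coordinates.

    crop: (x0, y0, cw, ch) of the crop window in pixels.
    Returns None when the box's center falls outside the crop, so such
    boxes can simply be dropped from the crop's .txt file.
    """
    cls, xc, yc, w, h = box
    # normalized -> pixel coordinates in the original image
    px, py = xc * img_w, yc * img_h
    pw, ph = w * img_w, h * img_h
    x0, y0, cw, ch = crop
    if not (x0 <= px <= x0 + cw and y0 <= py <= y0 + ch):
        return None  # object center not inside this crop
    # shift into the crop's frame, then renormalize by the crop size
    return (cls, (px - x0) / cw, (py - y0) / ch, pw / cw, ph / ch)
```

Writing the returned tuples (one per line, space-separated) next to each cropped image would give the Originalimage_cropN.txt files in the example.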
So far I have only found roboflow.ai to do this, as a data preprocessing step that tiles an image into an nxn grid; it also generates new annotations to reflect the new object locations in the cropped images. But this tiling/gridding is not object-aware, so it often cuts objects and generates a lot of empty/background images (without any objects).