Is it possible to convert the vertices returned by Google Cloud Vision OCR into text that preserves its original indentation?
For instance, using the example here, for the following image,
Google Vision returns each detected piece of text along with its four vertices (its Cartesian coordinates).
After executing the Python function "detect_text" we get:
Result :
"WAITING? PLEASE TURN OFF YOUR ENGINE" bounds: (52,137),(375,137),(375,330),(52,330)
"WAITING" bounds: (59,137),(342,151),(340,190),(57,177)
"?" bounds: (345,151),(375,152),(373,191),(343,190)
"PLEASE" bounds: (204,205),(318,210),(317,232),(203,227)
"TURN" bounds: (205,236),(288,239),(287,261),(204,258)
"OFF" bounds: (301,240),(356,242),(355,263),(300,261)
"YOUR" bounds: (205,269),(290,272),(289,293),(204,290)
"ENGINE" bounds: (205,302),(323,304),(323,326),(205,324)
I've found different ways to plot these coordinates, but how can I print the result with its original indentation?
Expected output :
WAITING ?
PLEASE
TURN OFF
YOUR
ENGINE
Of course, this will not preserve the exact text formatting, but at least we'd keep the original spacing and line breaks reasonably well!
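To make the idea concrete, here is a rough sketch of the kind of approach I have in mind (all names and parameters are my own): group the word boxes into lines by the top y-coordinate of each box, then indent each word by dividing its left x-coordinate by an assumed average character width in pixels. It assumes the first full-block annotation has already been dropped and that the boxes are roughly axis-aligned.

```python
def layout_text(words, char_width=20, line_tol=15):
    """words: list of (text, [(x, y), ... 4 vertices ...]) tuples.

    char_width: assumed average glyph width in pixels (a guess).
    line_tol: max vertical gap (pixels) between tops of words
    that are considered to sit on the same visual line.
    """
    # Represent each word by the top-left corner of its bounding box,
    # and process words top-to-bottom.
    items = sorted((min(y for _, y in box), min(x for x, _ in box), text)
                   for text, box in words)

    lines = []  # each entry: (y of first word in line, [(x, text), ...])
    for y, x, text in items:
        if lines and abs(y - lines[-1][0]) <= line_tol:
            lines[-1][1].append((x, text))  # same visual line
        else:
            lines.append((y, [(x, text)]))  # start a new line

    out = []
    for _, parts in lines:
        parts.sort()  # left to right
        line, cursor = "", 0
        for x, text in parts:
            # Map the pixel x-offset to a character column, keeping at
            # least one space between consecutive words.
            col = max(x // char_width, cursor + (1 if line else 0))
            line += " " * (col - cursor) + text
            cursor = col + len(text)
        out.append(line)
    return "\n".join(out)


# The word-level annotations from the question (full block omitted).
words = [
    ("WAITING", [(59, 137), (342, 151), (340, 190), (57, 177)]),
    ("?", [(345, 151), (375, 152), (373, 191), (343, 190)]),
    ("PLEASE", [(204, 205), (318, 210), (317, 232), (203, 227)]),
    ("TURN", [(205, 236), (288, 239), (287, 261), (204, 258)]),
    ("OFF", [(301, 240), (356, 242), (355, 263), (300, 261)]),
    ("YOUR", [(205, 269), (290, 272), (289, 293), (204, 290)]),
    ("ENGINE", [(205, 302), (323, 304), (323, 326), (205, 324)]),
]
print(layout_text(words))
```

With the sample data this yields five lines, "WAITING" and "?" on the first, "TURN OFF" sharing the third, and the remaining words indented to roughly the same column, which is close to the layout I'm after, but I'm not sure this is the right general approach.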
Thank you in advance!
