I have to implement OCR for a school project. I don't know whether I'm allowed to use SDL's built-in rotation functions, so I've written a basic rotation function myself. It seems to work, but I have two problems: it only works with some images, and even when it does, the resulting image is rather messy. Example:
Image before rotation:
Image after 60 degree rotation:
Here's my code:
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>
#include <math.h>
#include "image.h"

double deg_to_rad(int angle)
{
    return angle * 0.017453292519943;
}

SDL_Surface *rotate(SDL_Surface *image, int angle)
{
    /* Trigonometry */
    double cosa = cos(deg_to_rad(angle));
    double sina = sin(deg_to_rad(angle));
    int wo = abs((image->w) * cosa + (image->h) * sina);
    int ho = abs((image->h) * cosa + (image->w) * sina);

    /* Create a new image */
    SDL_Surface *new_img;
    Uint32 rmask, gmask, bmask, amask;
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    rmask = 0xff000000;
    gmask = 0x00ff0000;
    bmask = 0x0000ff00;
    amask = 0x000000ff;
#else
    rmask = 0x000000ff;
    gmask = 0x0000ff00;
    bmask = 0x00ff0000;
    amask = 0xff000000;
#endif
    new_img = SDL_CreateRGBSurface(0, wo, ho, 32, rmask, gmask, bmask, amask);

    int center_x = image->w / 2;
    int center_y = image->h / 2;
    int new_center_x = wo / 2;
    int new_center_y = ho / 2;

    for (int x = 0; x < wo; x++)
    {
        for (int y = 0; y < ho; y++)
        {
            int xo = abs(cosa * (x - new_center_x) + sina * (y - new_center_y) + center_x);
            int yo = abs((-1) * sina * (x - new_center_x) + cosa * (y - new_center_y) + center_y);

            lockSurface(image);
            lockSurface(new_img);

            Uint8 r, g, b;
            Uint32 pixel;
            Uint32 color;
            if (xo >= 0 && yo >= 0 && xo < image->w && yo < image->h
                && x >= 0 && x < wo && y >= 0 && y < ho)
            {
                pixel = getPixel(image, xo, yo);
                SDL_GetRGB(pixel, image->format, &r, &g, &b);
                color = SDL_MapRGB(image->format, r, g, b);
                setPixel(new_img, x, y, color);
            }

            unlockSurface(image);
            unlockSurface(new_img);
        }
    }
    return new_img;
}
What am I doing wrong?
If you imagine the image as a grid of pixels, it becomes clear why your function distorts it: rotating that grid does not map pixels onto pixels, and by copying just the single closest pre-rotation pixel into each post-rotation pixel, you're quite literally cutting corners. To avoid the visual artifacts you're seeing, you'll want an area-mapping or supersampling algorithm — that is, treat each pixel as a little square with area, and blend together the parts of the rotated "pixel squares" that overlap each destination pixel.
The pictures (and code!) in "A Fast Algorithm for General Raster Rotation" by Alan W. Paeth are worth many words.