Contrast & brightness of images using IplImage


Please have a look at the following code:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    double alpha = 1.6;
    int beta = 50;
    int i = 0;
    IplImage* input_img = cvLoadImage("c:\\Moori.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    IplImage* imageGray = cvCreateImage(cvSize(input_img->width, input_img->height), IPL_DEPTH_8U, 1);

    // Per-pixel linear transform: new_pixel = alpha * old_pixel + beta
    for (int y = 0; y < input_img->height; y++)
    {
        for (int x = 0; x < input_img->width; x++)
        {
            i = y * imageGray->width + x;
            imageGray->imageData[i] = (alpha * input_img->imageData[i]) + beta;
        }
    }

    cvNamedWindow("Image IplImage", 1);
    cvShowImage("Image IplImage", imageGray);
    waitKey();
    cvReleaseImage(&imageGray);
    cvReleaseImage(&input_img);
    cvDestroyWindow("Image IplImage");
    return 0;
}

When I run this code, it shows an image with many dark pixels. But when I run the code available at http://docs.opencv.org/doc/tutorials/core/basic_linear_transform/basic_linear_transform.html, it works fine. I want to do the same thing with IplImage. Please help.


There are 3 answers below.

Answer 1 (4 votes):

If you are using C++, I am not sure why anybody would want to use IplImage. But your problem is this line:

 imageGray->imageData[i] = (alpha * input_img->imageData[i]) + beta;

It can overflow. Also, imageData is a char*, and whether plain char is signed is implementation-defined, so you need to treat the pixel data as unsigned. You need to use saturate_cast to prevent overflow, and a static_cast to get rid of the signed char:

imageGray->imageData[i] = saturate_cast<uchar>((alpha * static_cast<uchar>(input_img->imageData[i])) + beta);
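
For context, here is the question's loop with that fix applied (a sketch reusing the variable names from the question):

// The question's inner loop with the overflow and sign fixes applied.
for (int y = 0; y < input_img->height; y++)
{
    for (int x = 0; x < input_img->width; x++)
    {
        i = y * imageGray->width + x;
        imageGray->imageData[i] =
            saturate_cast<uchar>(alpha * static_cast<uchar>(input_img->imageData[i]) + beta);
    }
}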

You can use this little program to see what is going on:

#include <opencv2/core/core.hpp>
#include <iostream>     // std::cout
#include <vector>       // std::vector

int main()
{
    double alpha = 1.6;
    int beta = 50;

    // Fill a vector with the byte values 0..255.
    std::vector<uchar> z;
    for (int i = 0; i <= 255; ++i)
        z.push_back(i);

    // View the same bytes through a (possibly signed) char*, as IplImage does.
    char* zp = reinterpret_cast<char*>(&z[0]);

    // Print the transform with the correct casts in place.
    for (int i = 0; i <= 255; ++i)
        std::cout << i << " -> "
                  << int(cv::saturate_cast<uchar>(alpha * static_cast<uchar>(zp[i]) + beta))
                  << std::endl;

    return 0;
}
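
The output makes the bug visible: with alpha = 1.6 and beta = 50, inputs of roughly 128 and above already saturate to 255, and without the static_cast those same bytes would be read as negative signed chars, which is exactly what produces the dark pixels described in the question.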
Answer 2 (7 votes):

You don't have to cast the image values to uchar; you have to reinterpret them as uchar. That means you assume the data bits actually represent an unsigned char, regardless of the pointer's declared type. It can be done as follows:

uchar* src = reinterpret_cast<uchar*>(input_img->imageData);
uchar* dst = reinterpret_cast<uchar*>(imageGray->imageData);
dst[i] = saturate_cast<uchar>(alpha * src[i] + beta);
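
Put together with the loop from the question, that looks something like this (a sketch assuming the same input_img, imageGray, alpha, and beta as in the question):

// Reinterpret both buffers as unsigned, then apply the transform.
uchar* src = reinterpret_cast<uchar*>(input_img->imageData);
uchar* dst = reinterpret_cast<uchar*>(imageGray->imageData);
for (int y = 0; y < input_img->height; y++)
{
    for (int x = 0; x < input_img->width; x++)
    {
        int i = y * imageGray->width + x;
        dst[i] = saturate_cast<uchar>(alpha * src[i] + beta);
    }
}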
Answer 3 (0 votes):

saturate_cast is part of OpenCV's C++ API; see http://docs.opencv.org/modules/core/doc/intro.html.

Finally, I have solved this problem:

IplImage* img;
cvNamedWindow("Display");
while (true)
{
    // Reload the original every iteration so the adjustments do not accumulate.
    img = cvLoadImage("Moori.jpg");

    // Brightness: add a constant to every pixel (cvAddS saturates automatically).
    CvScalar brVal = cvScalarAll(10.0);
    cvAddS(img, brVal, img, NULL);

    // Contrast: multiply by an all-ones image with a scale factor of 1.5.
    IplImage* pTempImg = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, img->nChannels);
    cvSet(pTempImg, cvScalarAll(1), NULL);
    double scale = 1.5;
    cvMul(img, pTempImg, img, scale);
    cvReleaseImage(&pTempImg);

    cvShowImage("Display", img);
    cvReleaseImage(&img);

    int c = cvWaitKey(10);
    if (c == 27) break;   // exit on Esc
}
cvDestroyWindow("Display");
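
As a side note, the legacy C API can apply the whole alpha*x + beta transform in a single call with cvConvertScale, which saturates automatically for 8-bit images. A minimal sketch, assuming the same Moori.jpg file and the alpha/beta values from the question:

#include <opencv2/core/core_c.h>
#include <opencv2/highgui/highgui_c.h>

int main()
{
    IplImage* img = cvLoadImage("Moori.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (!img) return 1;

    IplImage* out = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);

    // dst = src * scale + shift, saturated to [0, 255] for 8-bit images.
    cvConvertScale(img, out, 1.6, 50);

    cvNamedWindow("Display");
    cvShowImage("Display", out);
    cvWaitKey(0);

    cvReleaseImage(&out);
    cvReleaseImage(&img);
    cvDestroyWindow("Display");
    return 0;
}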