My problem seems very simple, but I can't really find a good solution.
I'm working on an application that detects motion in a webcam stream. The plugin is written in JavaScript and WebGL, and so far it works fairly well.
I want to extend the application with color tracking and, ultimately, object recognition.
For now, the color detection simply passes a given color and the camera texture to a shader. The shader converts both to CIELAB space and checks the Euclidean distance (on the a and b axes only, not the luminance component). If the distance is within a given threshold the fragment keeps its color; otherwise it is set to black. The result is barely "OK".
So my question is: is there a more robust way to find these colors? I chose CIELAB because it is somewhat invariant to shadows etc.
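For reference, a minimal sketch of such a fragment shader (WebGL 1 / GLSL ES 1.00). The uniform names `u_texture`, `u_targetLab`, `u_maxDist` and the varying `v_texCoord` are placeholders, and the target color is assumed to be pre-converted to CIELAB on the CPU; this is not the poster's actual code:

```glsl
precision mediump float;

uniform sampler2D u_texture;   // camera frame (hypothetical name)
uniform vec3  u_targetLab;     // target color, pre-converted to CIELAB
uniform float u_maxDist;       // allowed distance on the a/b plane
varying vec2  v_texCoord;

// approximate sRGB -> linear RGB
vec3 toLinear(vec3 c) {
    return pow(c, vec3(2.2));
}

// linear RGB -> XYZ (sRGB primaries, D65 white point)
vec3 rgb2xyz(vec3 c) {
    // values listed row by row; GLSL constructors are column-major,
    // so we multiply with the vector on the left
    mat3 m = mat3(0.4124, 0.3576, 0.1805,
                  0.2126, 0.7152, 0.0722,
                  0.0193, 0.1192, 0.9505);
    return c * m;
}

// XYZ -> CIELAB (D65 white point)
vec3 xyz2lab(vec3 xyz) {
    vec3 n = xyz / vec3(0.95047, 1.0, 1.08883);
    vec3 f = mix(7.787 * n + vec3(16.0 / 116.0),
                 pow(n, vec3(1.0 / 3.0)),
                 step(vec3(0.008856), n));
    return vec3(116.0 * f.y - 16.0,    // L
                500.0 * (f.x - f.y),   // a
                200.0 * (f.y - f.z));  // b
}

void main() {
    vec4 texel = texture2D(u_texture, v_texCoord);
    vec3 lab = xyz2lab(rgb2xyz(toLinear(texel.rgb)));
    // distance on the a/b plane only, ignoring luminance (L)
    float d = distance(lab.yz, u_targetLab.yz);
    gl_FragColor = d < u_maxDist ? texel : vec4(0.0, 0.0, 0.0, 1.0);
}
```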
EDIT: It seems the biggest problem is that I use a Gaussian filter to reduce noise in the video image, and it darkens the image. Even though LAB is, as stated, "somewhat invariant to shadows etc.", this makes the detection less effective. So I'm guessing I need a different way to reduce noise in the image. So far I have only tried a temporal median filter over the last 5 frames, but it is just not good enough. A better solution for noise reduction would be MUCH appreciated.
Best regards
You need to normalize the Gaussian filter kernel so its values sum to 1; an unnormalized kernel darkens or brightens the image. You may take a look at a Gaussian blur tutorial. You can also try symmetric nearest neighbor or Kuwahara filters, which smooth noise while preserving edges.
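As an illustration, here is a minimal horizontal pass of a separable 5-tap Gaussian blur with pre-normalized weights (they sum to 1.0, which is what keeps the brightness unchanged). The uniform names `u_texture` and `u_texelSize` are placeholders:

```glsl
precision mediump float;

uniform sampler2D u_texture;   // frame to blur (hypothetical name)
uniform vec2 u_texelSize;      // 1.0 / texture resolution
varying vec2 v_texCoord;

void main() {
    vec2 off = vec2(u_texelSize.x, 0.0); // horizontal pass; use (0, y) for the vertical pass
    // 5-tap Gaussian weights (sigma ~ 1.0); 0.06136*2 + 0.24477*2 + 0.38774 = 1.0
    vec3 sum =
          0.06136 * texture2D(u_texture, v_texCoord - 2.0 * off).rgb
        + 0.24477 * texture2D(u_texture, v_texCoord -       off).rgb
        + 0.38774 * texture2D(u_texture, v_texCoord             ).rgb
        + 0.24477 * texture2D(u_texture, v_texCoord +       off).rgb
        + 0.06136 * texture2D(u_texture, v_texCoord + 2.0 * off).rgb;
    gl_FragColor = vec4(sum, 1.0);
}
```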
Regarding the color tracking, it seems to me you could also use the HSV color space, matching mostly on hue and saturation so that brightness changes matter less.
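If you try the HSV route, a commonly used branch-free RGB-to-HSV conversion for GLSL (due to Sam Hocevar) is sketched below; matching would then compare hue with wrap-around and gate on saturation/value to reject greys and very dark pixels. The threshold values are only examples:

```glsl
// RGB -> HSV, all components in [0, 1]
vec3 rgb2hsv(vec3 c) {
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}

// hue distance must account for the wrap-around at 0/1
float hueDist(float h1, float h2) {
    float d = abs(h1 - h2);
    return min(d, 1.0 - d);
}

bool matchesTarget(vec3 rgb, vec3 targetHsv) {
    vec3 hsv = rgb2hsv(rgb);
    return hueDist(hsv.x, targetHsv.x) < 0.05   // example hue tolerance
        && hsv.y > 0.2                          // ignore near-grey pixels
        && hsv.z > 0.15;                        // ignore near-black pixels
}
```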