Take a look at the image below. If you're not color blind, you should see some A's and B's. There are 3 A's and 3 B's in the image, and they all have one thing in common: their color is the background color shifted by 10% in value, saturation, and hue, in that order. For most people the center letters are very hard to see - saturation doesn't do much, it seems!
This is a bit troublesome, though, because I'm making some character recognition software, and I'm filtering the image based on known foreground and background colors. Sometimes these are quite close, and the image is noisy. To decide whether a pixel belongs to a letter or to the background, my program compares squared Euclidean RGB distances:
(r-fr)*(r-fr) + (g-fg)*(g-fg) + (b-fb)*(b-fb) <
(r-br)*(r-br) + (g-bg)*(g-bg) + (b-bb)*(b-bb)
This works okay, but when the foreground and background are close it sometimes performs quite poorly.
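For reference, here's a minimal sketch of that check in OpenCV C++, assuming 8-bit BGR pixels and known foreground/background colors (the helper name isForeground is just illustrative, not my actual code):

#include <opencv2/core.hpp>

// Classify a pixel as foreground if its squared RGB distance to the known
// foreground color is smaller than its squared distance to the background color.
static bool isForeground(const cv::Vec3b& px, const cv::Vec3b& fg, const cv::Vec3b& bg)
{
    auto sqDist = [](const cv::Vec3b& a, const cv::Vec3b& b) {
        int d0 = a[0] - b[0], d1 = a[1] - b[1], d2 = a[2] - b[2];
        return d0 * d0 + d1 * d1 + d2 * d2;
    };
    return sqDist(px, fg) < sqDist(px, bg);
}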
Are there better metrics I could use? I've looked into color perception models, but those mostly model brightness rather than the perceptual difference I'm after. Maybe one that treats saturation differences as less significant, and certain hue differences as well? Any pointers to interesting metrics would be very useful.
As was mentioned in the comments, the answer is to use a perceptual color space, but I thought I'd throw together a visual example of how the edge detection behaves in the two color spaces. (Code is at the end.) In both cases, the Sobel edge detection is performed on the 3-channel color image, and then the result is flattened to grayscale.
RGB space:
L*a*b* space (the image is on a logarithmic scale, because the edges on the third letters are much stronger than those on the first letters, which in turn are stronger than those on the second letters):
OpenCV C++ code:
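A sketch of the procedure described above: per-channel Sobel gradients are combined into a single gray-scale magnitude image, once on the BGR input and once after conversion to L*a*b*, with the L*a*b* result log-scaled. The file names and normalization constants here are placeholders, not taken from the original experiment.

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Run Sobel in x and y on all three channels, take the per-channel gradient
// magnitude, and collapse the channels into one gray-scale edge image.
static cv::Mat edgeMagnitude(const cv::Mat& img3f)
{
    cv::Mat dx, dy;
    cv::Sobel(img3f, dx, CV_32F, 1, 0);
    cv::Sobel(img3f, dy, CV_32F, 0, 1);

    cv::Mat mag;
    cv::magnitude(dx.reshape(1), dy.reshape(1), mag); // per-element magnitude
    mag = mag.reshape(3, img3f.rows);                 // back to 3 channels

    std::vector<cv::Mat> ch;
    cv::split(mag, ch);
    return (ch[0] + ch[1] + ch[2]) / 3.0f;            // flatten to gray scale
}

int main()
{
    cv::Mat bgr = cv::imread("letters.png");          // placeholder file name
    if (bgr.empty()) return 1;

    cv::Mat bgr32;
    bgr.convertTo(bgr32, CV_32FC3, 1.0 / 255.0);

    // Edges in RGB (well, BGR) space.
    cv::Mat edgesRgb = edgeMagnitude(bgr32);

    // Edges in L*a*b* space, log-scaled because the strongest edges
    // dwarf the weakest ones.
    cv::Mat lab, edgesLab;
    cv::cvtColor(bgr32, lab, cv::COLOR_BGR2Lab);
    edgesLab = edgeMagnitude(lab);
    cv::log(edgesLab + 1.0f, edgesLab);

    // Stretch both results to 8 bits for saving.
    cv::Mat outRgb, outLab;
    cv::normalize(edgesRgb, outRgb, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::normalize(edgesLab, outLab, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("edges_rgb.png", outRgb);
    cv::imwrite("edges_lab.png", outLab);
    return 0;
}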