Locating Position of Black Pixels from ImageGrab

I am currently creating a PianoTiles AI that has to locate all the black pixels from an ImageGrab. I have got all the regions for the ImageGrab, however I need to find out whether there are black pixels in them and, if so, where they are, so my AI can click them. Below I have put a snippet of my code.

I have already had a look around the web but can't find anything. I think the code goes something like this:

from PIL import ImageGrab, ImageOps    

class Coordinates:    
    lines = [    
    (520, 300, 525, 760),    
    (630, 300, 635, 760),    
    (740, 300, 745, 760),    
    (850, 300, 855, 760)]    
    restartcheck = (660, 590, 725, 645)    
    restartbtn = (695, 615)    


blackpixelpositions = []    

def findtiles():    
    for line in Coordinates.lines:  
        i = ImageGrab.grab(line)  
        for pixel in i.getdata():  
            #if pixel is black  
            # x, y = pixel position  
            blackpixelpositions.append((x,y))

All I need is the above code to work and give me the black pixel positions.

2 Answers

Answer by Mark Setchell (accepted)

You should try to avoid looping over images and using functions such as getpixel() to access each pixel, as it is really slow, especially for large images if you are grabbing modern 4k or 5k screens.

It is generally better to convert your PIL image to a Numpy array and then use vectorised Numpy routines to process your images. So, in concrete terms, let's say you get a PIL image either by screen-grabbing or opening a file:

from PIL import Image

im = Image.open('someFile.png')

you can then make a Numpy array from the image like this:

import numpy as np

n = np.array(im)

and search for black pixels like this:

blacks = np.where((n[:, :, 0:3] == [0,0,0]).all(2))

which will give you an array of row (y) coordinates and an array of column (x) coordinates of the black pixels, e.g. you could do:

ycoords, xcoords = np.where((n[:, :, 0:3] == [0,0,0]).all(2))
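
Putting those pieces together with the strips from the question, a minimal sketch might look like the following. It assumes the Coordinates.lines boxes defined in the question and that ImageGrab.grab() returns an RGB or RGBA image; note that np.where() gives coordinates relative to the grabbed strip, so the box offsets are added back to obtain screen coordinates.

import numpy as np
from PIL import ImageGrab

class Coordinates:
    lines = [
        (520, 300, 525, 760),
        (630, 300, 635, 760),
        (740, 300, 745, 760),
        (850, 300, 855, 760)]

def findtiles():
    blackpixelpositions = []
    for left, top, right, bottom in Coordinates.lines:
        # Grab one vertical strip and convert it to a NumPy array
        strip = np.array(ImageGrab.grab((left, top, right, bottom)))
        # Boolean mask of pixels whose R, G and B channels are all 0
        mask = (strip[:, :, 0:3] == [0,0,0]).all(2)
        ys, xs = np.where(mask)
        # Convert strip-relative coordinates back to screen coordinates
        blackpixelpositions.extend(zip(xs + left, ys + top))
    return blackpixelpositions

In practice an exact match against [0,0,0] can be brittle; a threshold test such as (strip[:, :, 0:3] < 20).all(2) may be more forgiving, as the other answer below suggests.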
Answer by Artog

There is an issue with i.getdata(): it flattens the data, i.e. you lose the pixel coordinates (unless you keep track of them manually), so you will only know that a black pixel exists, but not where it is. You can use getpixel() instead:

def get_black_pixels(image):
    # Scan every pixel and collect the coordinates of "black" ones
    found = []
    width, height = image.size
    for y in range(height):
        for x in range(width):
            if all(map(lambda v: v < 20, image.getpixel((x, y)))):
                found.append((x, y))
    return found

The line:

all(map(lambda v: v < 20, image.getpixel((x, y))))

just checks that all channel values (r, g, b) are below 20; you can change 20 to some other threshold value.
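
For completeness, here is a rough sketch of how this function might plug into the findtiles() loop from the question (assuming the Coordinates class defined there and the get_black_pixels() function above). The coordinates returned by get_black_pixels() are relative to the grabbed strip, so the box offset is added back to get screen coordinates.

from PIL import ImageGrab

def findtiles():
    blackpixelpositions = []
    for left, top, right, bottom in Coordinates.lines:
        strip = ImageGrab.grab((left, top, right, bottom))
        for x, y in get_black_pixels(strip):
            # Convert strip-relative coordinates back to screen coordinates
            blackpixelpositions.append((x + left, y + top))
    return blackpixelpositions

Bear in mind that this pixel-by-pixel loop will be noticeably slower than the vectorised NumPy approach in the accepted answer.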