Pixel-to-axial hexagon coordinates for nonstandard hex sizes


The geometry I am working with is pointy-top hexagons that are fit into squares rather than being true hexagons. This leaves them "squashed": the distance from the top point to the bottom point is now sqrt(3)/2 times what it would be in a true hex of the same width.

When laying out these tiles, every odd row gets a size/2 x-offset, while every row gets a -size/4 y-offset from the row above (this assumes (0, 0) in the top-left corner).
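To make that layout concrete, here is a minimal sketch of where each tile's bounding square lands under that scheme (the function name and the (col, row) indexing are my own, for illustration):

```python
def tile_origin(col: int, row: int, size: float) -> tuple[float, float]:
    """Top-left corner of the size x size bounding square for tile (col, row)."""
    x = col * size + (size / 2 if row % 2 else 0)  # odd rows shift right by size/2
    y = row * (size * 3 / 4)                       # each row overlaps the one above by size/4
    return (x, y)
```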

My main resource for working with hexes comes from the Red Blob Games blog. The basic pixel-to-hex algorithm is as follows:

function pixel_to_pointy_hex(point):
    var q = (sqrt(3)/3 * point.x  -  1./3 * point.y) / size
    var r = (                        2./3 * point.y) / size
    return axial_round(Hex(q, r))
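For reference, here is a direct Python transliteration of that pseudocode for true (unsquashed) pointy-top hexes, including the cube-rounding step that axial_round performs; per Red Blob's convention, size here is the hex's circumradius:

```python
import math

def axial_round(q: float, r: float) -> tuple[int, int]:
    # Round fractional axial coords via cube coords (q + r + s = 0),
    # then reset whichever component drifted the most from its rounded value.
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return (int(rq), int(rr))

def pixel_to_pointy_hex(px: float, py: float, size: float) -> tuple[int, int]:
    q = (math.sqrt(3) / 3 * px - 1 / 3 * py) / size
    r = (2 / 3 * py) / size
    return axial_round(q, r)
```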

I haven't been able to figure out how to tweak this code to account for the squashed hexes. Is this even possible, or do I have to start from scratch with a different algorithm? Included is an image of how the layout looks with these tiles. In the axial system the top-left hex is q = 0, r = 0.

Answer by Christopher Theriault:

Amit from Red Blob Games was kind enough to suggest making a local pixel map and starting from there. He also suggested first applying an affine transformation to the global space so that the standard hex formulas could be used; however, I don't think my squashed hexes could have been produced exactly by such a transformation, so the local pixel map is the better option.
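For what it's worth, the affine-transform idea would look roughly like this: if the squash were exactly a uniform vertical scaling by sqrt(3)/2, un-scaling y first would let the standard pointy-top formula apply unchanged. A sketch under that assumption (which, as noted, does not quite hold for these tiles):

```python
import math

SQUASH = math.sqrt(3) / 2  # assumed uniform vertical squash factor

def squashed_pixel_to_fractional_hex(px: float, py: float, size: float):
    """Un-squash y, then apply the standard pointy-top formula.
    Returns fractional axial (q, r); a rounding step is still needed."""
    py = py / SQUASH
    q = (math.sqrt(3) / 3 * px - 1 / 3 * py) / size
    r = (2 / 3 * py) / size
    return (q, r)
```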

With the local map, you find out whether the pixel is in the central hex; if it is, you return that hex, and if it isn't, you return the correct neighbor. I started building such a map array by hand before realizing it's better to generate it in code after importing the image. Then I realized that PyGame already has a means to import an image directly into a pixel array on its own. Here's my final function, which worked like a champ:

def pixel_to_hex(x: int, y: int):
    global pixel_map    # tile-sized 2D color array (loaded via PyGame)
    global pixel_map_x  # tile width in pixels
    global pixel_map_y  # tile height in pixels
    # Position within the repeating tile, and the color stored there.
    local_x = x % pixel_map_x
    local_y = y % pixel_map_y
    local_color = pixel_map[local_x, local_y]
    # Axial coordinates of the tile's central hex: each tile row advances
    # r by 2, and q is corrected so that q + r + s = 0 still holds.
    row = y // pixel_map_y
    col = x // pixel_map_x
    base_q = col - row
    base_r = row * 2
    # The color encodes which neighbor (if any) the pixel actually falls in.
    offset = [0, 0]
    if local_color == C_RED:
        offset = hx.neighbor_offsets[2]
    elif local_color == C_GREEN:
        offset = hx.neighbor_offsets[1]
    elif local_color == C_BLACK:
        offset = hx.neighbor_offsets[4]
    elif local_color == C_WHITE:
        offset = hx.neighbor_offsets[5]
    return (base_q + offset[0], base_r + offset[1])

This works on the premise that the (0, 0) hex of the axial system is at the top left, as is the (0, 0) of the screen's Cartesian coordinates, with q + r + s = 0. If you check out my pixel map image you'll see it forms a set of hexagons when you tile it in both directions. The best part is that no floats or square roots are required, and there are no rounding errors, so it's impossible not to identify a hex for every pixel on the screen.
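The base-hex arithmetic from the function above can be seen in isolation: each tile column advances q by one, and each tile row contributes (-1, +2) to (q, r). A minimal sketch with made-up illustrative tile dimensions:

```python
TILE_W, TILE_H = 100, 75  # illustrative tile dimensions, not the real ones

def base_hex(x: int, y: int) -> tuple[int, int]:
    """Axial coords of the central hex of the tile containing pixel (x, y),
    using the same integer arithmetic as pixel_to_hex above."""
    row = y // TILE_H
    col = x // TILE_W
    return (col - row, 2 * row)
```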

(image: local_pixel_map)