Suppose I want to represent binary data as a black-and-white image with only sixteen distinct gray levels per pixel, so that every two adjacent pixels (lengthwise) represent a single byte. How can I do this? If, for example, I use the following:
import cv2
import numpy as np

path = r'mybinaryfile.bin'
bin_data = np.fromfile(path, dtype='uint8')
scalar = 20
width = int(1800/scalar)
height = int(1000/scalar)
for jj in range(50):
    wid = int(width*height)
    img = bin_data[int(jj*wid):int((jj+1)*wid)].reshape(height, width)
    final_img = cv2.resize(img, (scalar*width, scalar*height), interpolation=cv2.INTER_NEAREST)
    fn = f'tmp/output_{jj}.png'
    cv2.imwrite(fn, final_img)
I can create a sequence of PNG files that represent the binary file, with each 20 by 20 block of pixels representing a single byte. However, this produces too many distinct gray values (256), so I need to reduce that to 16. How can I "split" each pixel into two pixels with 16 distinct gray levels (4 bpp rather than 8) instead?
Using 4 bpp rather than 8 bpp should double the number of image files since I'm keeping the resolution the same but doubling the number of pixels I use to represent a byte (2 pixels per byte rather than 1).
I understand that you want to take an 8-bit number and split it into the upper four bits and the lower four bits.
This can be done with a couple of bitwise operations.
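As a rough illustration of those operations (the array contents here are just example values, not your data):

import numpy as np

bin_data = np.array([0x00, 0x3A, 0xFF], dtype=np.uint8)

high_nibbles = bin_data >> 4      # upper four bits of each byte, values 0..15
low_nibbles = bin_data & 0x0F     # lower four bits of each byte, values 0..15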
To create the grayscale image, the data needs to be converted to values in the range 0 to 255, but you want to keep only 16 discrete values. This can be done by normalising each 4-bit value into the range 0 to 1 and then multiplying by 255 to get back to uint8 values.
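A minimal sketch of that mapping, assuming the high nibble should come first (left) in each pair of pixels; the helper name and the interleaving via slice assignment are my own choices, not necessarily what the testcase below uses:

import numpy as np

def bytes_to_nibble_pixels(bin_data: np.ndarray) -> np.ndarray:
    """Turn each uint8 byte into two uint8 pixels with 16 gray levels."""
    high = bin_data >> 4                 # upper nibble, 0..15
    low = bin_data & 0x0F                # lower nibble, 0..15

    # interleave: high nibble first, then low nibble, doubling the length
    nibbles = np.empty(bin_data.size * 2, dtype=np.uint8)
    nibbles[0::2] = high
    nibbles[1::2] = low

    # normalise 0..15 to 0..1, then scale back to 0..255 (16 distinct levels)
    return (nibbles / 15.0 * 255).astype(np.uint8)

Because the result is twice as long as the input, keeping the same width and height per image gives roughly twice as many PNG files, as you noted in the question.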
My full testcase was:
Which gave a transcript of:
And generated the following image:
The original test data has 18 bytes, so there are 36 blocks of 20x20 pixels.
And if I change the dimensions to the 1800 by 1000 that you have in the question, I get: