I have a tensor of 10 samples, each of which contains 10 time-series 20x20x3 RGB images, and I would like to extract the green color channel.
The images are stored in an array of arrays called images
For example:
images[0][0][:,:,1]
returns the green channel for one image in one sample.
However, when I try to use the command:
images[0][:][:,:,1]
I receive the error:
IndexError: too many indices for array
How would I generalize my first line of code to pull all of the green-channel images from the first sample?
Shapes of the data:
images.shape
(10,)
images[0].shape
(10,)
images[0][0].shape
(20,20,3)
Here is a sample of the data. The images were extracted from a .mat file, so they are stored as an array of arrays:
images
array([[array([[[41, 0, 0],
[43, 0, 0],
[45, 0, 0],
...,
[18, 0, 0],
[ 5, 0, 0],
[ 0, 0, 0]],
[[45, 0, 0],
[50, 0, 0],
[49, 0, 0],
...,
[ 3, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]],
[[49, 0, 0],
[49, 0, 0],
[48, 0, 0],
...,
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]],
...,
[[16, 0, 0],
[ 5, 0, 0],
[ 1, 0, 0],
...,
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]],
[[ 3, 0, 0],
[ 1, 0, 0],
[ 0, 0, 0],
...,
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]],
[[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
...,
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]]], dtype=uint8),
array([[[87, 0, 0],
[92, 0, 0],
[86, 0, 0],
...,
[33, 0, 0],
[51, 0, 0],
[60, 0, 0]],
[[90, 0, 0],
[88, 0, 0],
[79, 0, 0],
...,
[11, 0, 0],
[21, 0, 0],
[41, 0, 0]],
[[89, 0, 0],
[82, 0, 0],
[62, 0, 0],
...,
[12, 0, 0],
[ 4, 0, 0],
[16, 0, 0]],
...,
[[77, 0, 0],
[77, 0, 0],
[76, 0, 0],
...,
[48, 0, 0],
[44, 0, 0],
[42, 0, 0]],
[[88, 0, 0],
[85, 0, 0],
[85, 0, 0],
...,
[54, 0, 0],
[53, 0, 0],
[51, 0, 0]],
[[89, 0, 0],
[89, 0, 0],
[88, 0, 0],
...,
[55, 0, 0],
[54, 0, 0],
[53, 0, 0]]], dtype=uint8),
Something like this?
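Assuming images is the object array of arrays shown above, a minimal sketch along these lines should do it (names like sample0 and green0 are just illustrative):

import numpy as np

# images[0] is a (10,) object array holding ten (20, 20, 3) images.
# images[0][:] only copies that object array, which is why the extra
# [:, :, 1] indices raise "too many indices for array". Stacking the
# images into one regular array first lets you slice the channel in one step.
sample0 = np.stack(images[0])    # shape (10, 20, 20, 3)
green0 = sample0[:, :, :, 1]     # shape (10, 20, 20)

# Equivalent, with an explicit loop over the images in the first sample:
green0 = np.array([img[:, :, 1] for img in images[0]])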
But if images were a regular numpy array of shape (10, 10, 20, 20, 3), rather than an array of arrays, you could simply do images[0, :, :, :, 1] for the first sample and images[:, :, :, :, 1] for all samples.
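To get that one-step indexing, one possible approach (assuming every image really is 20x20x3, so the data can be packed densely) is to stack the nested object arrays first:

import numpy as np

# Build a regular (10, 10, 20, 20, 3) array from the array of arrays.
dense = np.stack([np.stack(sample) for sample in images])

green_first_sample = dense[0, :, :, :, 1]   # shape (10, 20, 20)
green_all_samples = dense[:, :, :, :, 1]    # shape (10, 10, 20, 20)

Once the data is a dense array, you also get vectorised operations and broadcasting across all samples for free.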