Why doesn't n_components work as expected in LDA?


I tried to use LDA to get a 3-component output, but its output has only 2 components.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Build one sample per pixel: the features are the (duplicated) pixel
# coordinates, and the label is the pixel value.
x = []
y = []
for i in range(len(img)):
    for j in range(len(img[0])):
        x.append([i, i, j, j])
        y.append(img[i][j])
x = np.array(x)
y = np.array(y)

lda = LDA(n_components=3)
out = lda.fit(x, y).transform(x)
print(out.shape, y.shape, x.shape)

I used [i, i, j, j] because LDA complained that x needed more features.

The printed output is ((392960, 2), (392960,), (392960, 4)), but I expected out.shape to be (392960, 3).

Can anyone help me with this, please?


1 answer below


The number of components in LinearDiscriminantAnalysis is always strictly less than the number of classes: LDA projects the data onto an affine subspace whose dimension is at most n_classes - 1 (and never more than the number of features). Since your transform returned 2 components, your y apparently contains 3 distinct classes, so 2 is the maximum you can get regardless of what n_components you request.
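A minimal sketch illustrating the cap, using synthetic data (the array sizes, random labels, and variable names here are made up for illustration, not taken from your image):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Synthetic data: 4 features per sample, labels drawn from 3 classes.
rng = np.random.default_rng(0)
x = rng.normal(size=(300, 4))
y = rng.integers(0, 3, size=300)

# LDA can produce at most min(n_features, n_classes - 1) components.
n_classes = len(np.unique(y))
max_components = min(x.shape[1], n_classes - 1)
print(max_components)   # 2

# Requesting more than this either raises an error (recent scikit-learn)
# or gets silently capped (older versions), which is why you saw 2 columns.
lda = LDA(n_components=max_components)
out = lda.fit(x, y).transform(x)
print(out.shape)        # (300, 2)

So to obtain a 3-component projection from LDA, y would need at least 4 distinct classes; otherwise consider a method without this constraint, such as PCA.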