I can't figure out where my code went wrong: my plot doesn't show all 4 clusters. Any ideas?
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=4, random_state=0)
# fit_predict fits the model and returns the labels in one step,
# so a separate kmeans.fit(x) call is redundant
y_kmeans = kmeans.fit_predict(x)
print(kmeans.cluster_centers_)
print(kmeans.labels_)
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s=100, c='red', label='Cluster 1')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s=100, c='blue', label='Cluster 2')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], s=100, c='green', label='Cluster 3')
plt.scatter(x[y_kmeans == 3, 0], x[y_kmeans == 3, 1], s=100, c='magenta', label='Cluster 4')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s=200, c='yellow', label='Centroids')
plt.title('Clusters of Customers')
plt.xlabel('Scores')
plt.ylabel('')
plt.legend()
plt.show()

Is your data continuous or categorical? It looks categorical. Computing Euclidean distances between binary variables doesn't make much sense, and not all data is well suited to clustering.
I don't have your actual data, but I'll show you how to do clustering correctly, and incorrectly, using the canonical MTCars sample data.
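I can't reproduce the exact mtcars figures in this post, so here's the same idea as a sketch with synthetic data instead: cluster on uninformative binary columns (the "incorrect" choice) versus on the informative continuous features (the "correct" choice), and score each result against the true grouping with the adjusted Rand index:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

# Four well-separated groups in two continuous features
x, y_true = make_blobs(n_samples=300, centers=4, cluster_std=0.6, random_state=42)

# Two random binary columns that carry no group information at all
rng = np.random.default_rng(42)
binary_cols = rng.integers(0, 2, size=(300, 2)).astype(float)

# "Incorrect": cluster on the uninformative binary features
bad = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(binary_cols)
# "Correct": cluster on the continuous features
good = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(x)

print(adjusted_rand_score(y_true, bad))   # near 0: essentially random labels
print(adjusted_rand_score(y_true, good))  # near 1: recovers the true groups
```

The binary columns produce clusters that are just the four 0/1 combinations, unrelated to the real structure, while the continuous features recover the groups almost perfectly.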
As you can see, the choice of features you cluster on makes a huge difference in the outcome. The first example looks somewhat like your results, while the second looks like a much more useful and interesting clustering experiment.