Keras model predicting different outputs for the same input


I am using stellargraph to learn a GraphSAGE model. This is my code:

from stellargraph.data import UnsupervisedSampler
from stellargraph.mapper import GraphSAGELinkGenerator, GraphSAGENodeGenerator
from stellargraph.layer import GraphSAGE, link_classification
from tensorflow import keras
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import binary_crossentropy
from tensorflow.keras.metrics import binary_accuracy

num_samples = [10, 5]
unsupervised_samples = UnsupervisedSampler(G, nodes=G.nodes(), length=10, number_of_walks=5)
# Generate training data for the encoder model
generator = GraphSAGELinkGenerator(G, batch_size=512, num_samples=num_samples)
train_gen = generator.flow(unsupervised_samples)
# Encoder: produces the node embeddings
graphsage = GraphSAGE(layer_sizes=[32, 64], generator=generator, bias=True, dropout=0.0, normalize="l2")
x_inp, x_out = graphsage.in_out_tensors()
pred = link_classification(output_dim=1, output_act="sigmoid", edge_embedding_method="ip")(x_out)
model = keras.Model(x_inp, pred)
model.compile(optimizer=Adam(lr), loss=binary_crossentropy, metrics=[binary_accuracy])
history = model.fit(train_gen, epochs=15, verbose=0, workers=4, shuffle=True)

After training the link-classification model, it can be used to get embeddings for the nodes of another graph, which we will call G2. Suppose G2 has three nodes with the ids below. Following the library docs, this is the code:

node_ids = ['123', '456', '789']
embedding_model = keras.Model(x_inp[0::2], x_out[0])
node_gen = GraphSAGENodeGenerator(G2, batch_size=512, num_samples=num_samples).flow(node_ids)
embeddings = embedding_model.predict(node_gen)
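
For reference, the tensors returned by in_out_tensors() can be inspected like this (just a quick sketch that prints their shapes; if I understand the layout correctly, with num_samples = [10, 5] there should be six input tensors and two output tensors):

# Print the shape of every input/output tensor of the link model
for i, t in enumerate(x_inp):
    print("x_inp[{}]:".format(i), t.shape)
for i, t in enumerate(x_out):
    print("x_out[{}]:".format(i), t.shape)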

The odd thing is that if I run the last line multiple times, I get different predictions even though both the model and the input are the same. The values in the output arrays differ across runs, although in most cases they are fairly similar. This seems strange to me and I wonder whether I have made a mistake; any help would be appreciated. Moreover, the docs don't explain why the embedding model is defined as embedding_model = keras.Model(x_inp[0::2], x_out[0]). Why do we take only every other tensor of x_inp? And why only the first element of x_out?
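
To make the first issue concrete, this is roughly how I compare two consecutive runs (a minimal sketch; the names emb_run1 and emb_run2 are just for illustration):

import numpy as np

# Run the same prediction twice on the same generator and quantify the difference
emb_run1 = embedding_model.predict(node_gen)
emb_run2 = embedding_model.predict(node_gen)
print("identical:", np.allclose(emb_run1, emb_run2))
print("max abs difference:", np.abs(emb_run1 - emb_run2).max())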
