It seems like it is not possible to load pretrained embeddings into a layer. See here
What I did as a workaround is the following:
import numpy as np
from cntk import CloneMethod, constant

model = create_model()
E = [p for p in model.parameters if p.name == 'E'][0]  # the embedding parameter, found by name
emb = np.asarray(np.loadtxt('embeddings.txt', delimiter=' '), dtype='float32')
model = model.clone(CloneMethod.clone, {E: constant(emb)})  # substitute the learnable parameter with a constant
with embeddings.txt having the following format, where the number of rows is the number of words in my vocabulary and the number of columns is the dimension I have chosen for my embeddings:

-0.05952413007617 0.12596195936203 -0.189506858587265 ...
-0.0871662572026253 -0.0454806201159954 -0.126074999570847 ...
...
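For context, a minimal sketch of how a file in this format could be produced from a word-to-vector mapping; the `vocab` list and `word_vectors` dict below are placeholders, not my actual data:

import numpy as np

# placeholders: an ordered vocabulary and a word -> vector mapping (e.g. from GloVe)
vocab = ['the', 'cat', 'sat']
word_vectors = {w: np.random.randn(300).astype('float32') for w in vocab}

# one row per word, in vocabulary-id order, space-separated floats
matrix = np.stack([word_vectors[w] for w in vocab])
np.savetxt('embeddings.txt', matrix, delimiter=' ')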
Does the above seem like a correct workaround? I kicked off a training session and the reported number of parameters is lower than when the embedding layer was being trained, which could be a good sign.
This has been fixed. For example:
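A minimal sketch, assuming the `weights` argument of `C.layers.Embedding` (when `weights` is given, the embedding values are taken as fixed rather than learned, and the shape is inferred from the array):

import numpy as np
import cntk as C

# pretrained matrix, shape (vocab_size, embedding_dim)
emb = np.asarray(np.loadtxt('embeddings.txt', delimiter=' '), dtype='float32')

# pass the pretrained matrix directly; no clone/substitution needed
embedding = C.layers.Embedding(weights=emb)

x = C.input_variable(emb.shape[0], is_sparse=True)  # one-hot input over the vocabulary
z = embedding(x)

This avoids the clone-and-substitute workaround above.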