Why must the embedding vector be renormalized when its norm exceeds the limit?


I saw max_norm=1 in a piece of code, and the documentation says this sets the maximum norm to 1. What does this mean? Why would the norm of an embedding vector exceed this limit, so that it needs to be renormalized?

The user vectors are randomly initialized:

    self.users = nn.Embedding( n_users, dim, max_norm=1 )

The documentation says: max_norm (float, optional) – the maximum norm; if the norm of an embedding vector exceeds this limit, it is renormalized. I also have a second question: given n_users, setting dim=10 means each user is represented by a 10-dimensional vector. Why is a concept of dimension needed here at all, and does the choice of dimension make any difference?
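To see the renormalization in action, here is a minimal sketch (the sizes and the deliberate weight scaling are illustrative, not from the original post). It inflates the embedding weights so every vector's norm exceeds 1, then shows that a lookup clips the returned vectors back to norm 1:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # 5 users, 10-dimensional vectors, norms capped at 1
    emb = nn.Embedding(num_embeddings=5, embedding_dim=10, max_norm=1.0)

    # Scale the weights up so every vector's L2 norm is well above 1.
    with torch.no_grad():
        emb.weight.mul_(10.0)

    print(emb.weight.norm(dim=1))  # norms far above 1

    idx = torch.tensor([0, 1, 2])
    out = emb(idx)                 # lookup renormalizes the selected rows
    print(out.norm(dim=1))         # each norm is now at most 1.0

Note that with max_norm set, the forward pass modifies the weight tensor in place for the looked-up rows, which is why the clipping persists across calls.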

Answer (Kan Robert):
embedding_dim (int) – the size of each embedding vector
max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm.

An embedding maps an integer index to a vector; embedding_dim is the size of that vector. max_norm sets the maximum allowed norm: any embedding vector whose norm exceeds it is rescaled to have norm max_norm. As for the dimension, a larger embedding_dim gives the model more capacity to distinguish users, at the cost of more parameters to learn.
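Putting it together for the user embedding in the question, a short sketch (n_users and the IDs below are assumed values for illustration):

    import torch
    import torch.nn as nn

    n_users, dim = 100, 10
    users = nn.Embedding(n_users, dim, max_norm=1)

    user_ids = torch.tensor([3, 42, 7])
    vectors = users(user_ids)

    print(vectors.shape)        # torch.Size([3, 10]) -- one 10-dim vector per ID
    print(vectors.norm(dim=1))  # each at most 1.0 because of max_norm=1

So each user ID indexes a row of a (n_users, dim) weight matrix, and max_norm=1 simply guarantees no row handed back by the lookup ever has a norm above 1.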