This repository was archived by the owner on Sep 11, 2020. It is now read-only.
"UNK " is added to the tokenizer word lists in nlppip.py because the
from keras.preprocessing.text import Tokenizer is one-based.
The Tensorflow implementation of word embedding and embedding lookup are zero-based.
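A minimal sketch of the off-by-one issue (plain Python, no Keras dependency; the helper `one_based_word_index` is a stand-in that mimics how `Tokenizer.word_index` numbers words, and is not from `nlppip.py`):

```python
from collections import Counter

def one_based_word_index(texts):
    """Mimic keras.preprocessing.text.Tokenizer.word_index:
    words ranked by frequency, with indices starting at 1."""
    counts = Counter(w for t in texts for w in t.lower().split())
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

word_index = one_based_word_index(["the cat sat", "the dog ran"])
assert min(word_index.values()) == 1   # one-based: index 0 never occurs

# Word list aligned for a zero-based embedding lookup: slot 0 is "UNK",
# so list position i now matches the one-based tokenizer index i.
words = ["UNK"] + sorted(word_index, key=word_index.get)
assert all(words[i] == w for w, i in word_index.items())
```

With this padding, an embedding matrix of shape `(len(words), dim)` can be indexed directly with the tokenizer's indices, and row 0 is reserved for the unknown-word slot.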
Curiously, the closest embedding vector (by cosine similarity) after training for 200 epochs to the untrained embedding vector 0 is:
How could one embedding vector appear in so many [orthogonal?] topics?