Currently we send one request per unique word to the embedding server to fetch that word's embedding vector.
The server supports sending multiple words at a time and getting back the results, so we should chunk the requests into batches: fewer API calls should make fetching the embeddings quicker.
smooshr/src/utils/calc_embedings.js, lines 1 to 20 at 8b11ccb
This is the function that will need to be modified to run the queries in batches and then assign each result back to the right word once a batch has completed.
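A minimal sketch of what that could look like. It assumes, hypothetically, that the server exposes a POST endpoint accepting a JSON array of words and returning their vectors in the same order; the endpoint path, payload shape, and batch size here are all illustrative, not the actual API in server.py:

```js
// Sketch: batch embedding lookups instead of one request per word.
// The `/embeddings` route and {words: [...]} / {embeddings: [...]}
// shapes are assumptions; the real server API may differ.
const BATCH_SIZE = 50; // tuning assumption, not measured

function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

async function fetchEmbeddingsInBatches(words, serverURL) {
  const embeddings = {};
  for (const batch of chunk(words, BATCH_SIZE)) {
    const response = await fetch(`${serverURL}/embeddings`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ words: batch }),
    });
    const result = await response.json();
    // Assign each vector back to its word; the response order is
    // assumed to match the request order.
    batch.forEach((word, i) => {
      embeddings[word] = result.embeddings[i];
    });
  }
  return embeddings;
}
```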
Things to consider:
The server might fail if one or more of the words has no representation in the corpus. We would need to fix that here:
smooshr/server/server.py, lines 66 to 80 at 8b11ccb
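The proper fix is server-side, so an out-of-vocabulary word is skipped instead of failing the whole request. Independently of that, the client could also tolerate a failed batch by retrying its words individually and dropping any that still fail; this is a different mitigation from the server fix, not a replacement for it. A sketch reusing the hypothetical endpoint shape from above:

```js
// Shared helper for the hypothetical batch endpoint.
async function requestEmbeddings(words, serverURL) {
  const response = await fetch(`${serverURL}/embeddings`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ words }),
  });
  if (!response.ok) {
    throw new Error(`embedding request failed: ${response.status}`);
  }
  return (await response.json()).embeddings;
}

// If a whole batch fails (e.g. one word has no representation in the
// corpus and the server errors), retry each word on its own and
// silently skip the ones that still have no embedding.
async function fetchBatchTolerant(batch, serverURL, embeddings) {
  try {
    const vectors = await requestEmbeddings(batch, serverURL);
    batch.forEach((word, i) => { embeddings[word] = vectors[i]; });
  } catch {
    for (const word of batch) {
      try {
        const [vector] = await requestEmbeddings([word], serverURL);
        embeddings[word] = vector;
      } catch {
        // Word is not in the corpus; leave it out of the result.
      }
    }
  }
}
```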
It would also be good to surface feedback on this process in the classification interface, so a user can see how much of the embedding has been loaded.
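Progress reporting falls out of the batch loop naturally: invoke a callback after each batch with the number of words processed so far, and let the classification interface render it. A sketch building on the helpers above; `onProgress` is a hypothetical callback, e.g. wired to a React state setter:

```js
// Report progress after each batch so the classification UI can show
// how much of the embedding has loaded.
async function fetchEmbeddingsWithProgress(words, serverURL, onProgress) {
  const embeddings = {};
  const batches = chunk(words, BATCH_SIZE);
  let done = 0;
  for (const batch of batches) {
    await fetchBatchTolerant(batch, serverURL, embeddings);
    done += batch.length;
    onProgress(done, words.length); // e.g. render "1200 / 5400 words embedded"
  }
  return embeddings;
}
```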