If I use torchserve api, inference does not work. #2
EvelinaAleksiutenko started this conversation in General
Replies: 1 comment
-
Sorry, I haven't maintained this for a while, so something may have changed in the dependencies. Did you also modify the […]? Also try whether this works:

```
curl -X POST http://[API_URL]/predictions/[MODEL_NAME] \
  -H 'Content-Type: application/json' \
  -d '{"data":"my test sentence"}'
```

and let me know of any error messages you see.
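The same health check can be sketched in Python with the standard library, in case curl isn't handy. This is a minimal sketch, not part of the original reply; `localhost:8080` and `my_model` are placeholder values standing in for `[API_URL]` and `[MODEL_NAME]` above.

```python
import json
import urllib.request

def build_prediction_request(api_url, model_name, text):
    """Build (but don't send) a POST to TorchServe's predictions endpoint,
    mirroring the curl command above. api_url/model_name are placeholders."""
    return urllib.request.Request(
        url=f"http://{api_url}/predictions/{model_name}",
        data=json.dumps({"data": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_prediction_request("localhost:8080", "my_model", "my test sentence")
print(req.full_url)      # http://localhost:8080/predictions/my_model
print(req.get_method())  # POST
# To actually send it (requires a running TorchServe instance):
#   with urllib.request.urlopen(req, timeout=10) as resp:
#       print(resp.read().decode())
```

Sending the request with a timeout is useful here: if the server accepts the connection but never responds, you get a timeout error instead of hanging indefinitely, which narrows down where the inference is stuck.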
-
Hi! When I run `torchserve --start` and then launch Streamlit with `lit-ner.py`, I set `LOCAL = True` to use the TorchServe API. But it runs endlessly, with no error and no inference.