I can't get it running using Docker #72

Closed
hadamard-2 opened this issue Jan 17, 2025 · 5 comments

@hadamard-2

Command:
docker compose up --build

Error:

1732.0 ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them.
1732.0     nvidia-cudnn-cu12==9.1.0.70 from https://download.pytorch.org/whl/cu121/nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl#sha256=165764f44ef8c61fcdfdfdbe769d687e06374059fbb388b6c89ecb0e28793a6f (from torch==2.5.1):
1732.0         Expected sha256 165764f44ef8c61fcdfdfdbe769d687e06374059fbb388b6c89ecb0e28793a6f
1732.0              Got        0e7856146e8191170b6debed547160879bfda9735dae199526da7a2e30adae04
[+] Running 0/1
 - Service kokoro-tts  Building                                                                                              1851.5s
failed to solve: process "/bin/sh -c pip3 install --no-cache-dir torch==2.5.1 --extra-index-url https://download.pytorch.org/whl/cu121" did not complete successfully: exit code: 1
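
A hash mismatch like this is often a truncated or corrupted wheel download during the build rather than actual tampering. One thing worth trying (a sketch, not a confirmed fix; the kokoro-tts service name is taken from the compose output above):

docker builder prune                        # drop stale build cache
docker compose build --no-cache kokoro-tts  # rebuild from scratch so the wheel is downloaded again
docker compose up kokoro-tts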
@remsky
Owner

remsky commented Jan 18, 2025

Would need more details to give advice. What machine are you on, and which branch was this pulled from?

@hadamard-2
Author

I'm running on a Windows machine; lemme know if you need more details.
As for the branch, I believe I used v0.0.5post1-stable.

@remsky
Owner

remsky commented Jan 25, 2025

Give it a shot on the beta build mentioned in the readme if you can. It should get rid of the need for Gradio; once we finish some of the features on the new web UI, it'll be merged in.

@hadamard-2
Author

[+] Running 3/3
 ✔ Service kokoro-tts                 Built                                                                                  3201.4s
 ✔ Network kokoro-tts_default         Created                                                                                   0.1s
 ✔ Container kokoro-tts-kokoro-tts-1  Created                                                                                   0.2s
Attaching to kokoro-tts-1
kokoro-tts-1  |
kokoro-tts-1  | ==========
kokoro-tts-1  | == CUDA ==
kokoro-tts-1  | ==========
kokoro-tts-1  |
kokoro-tts-1  | CUDA Version 12.3.2
kokoro-tts-1  |
kokoro-tts-1  | Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
kokoro-tts-1  |
kokoro-tts-1  | This container image and its contents are governed by the NVIDIA Deep Learning Container License.
kokoro-tts-1  | By pulling and using the container, you accept the terms and conditions of this license:
kokoro-tts-1  | https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
kokoro-tts-1  |
kokoro-tts-1  | A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
kokoro-tts-1  |
kokoro-tts-1  | INFO:     Started server process [29]
kokoro-tts-1  | INFO:     Waiting for application startup.
kokoro-tts-1  | 10:27:48 AM | INFO     | Loading TTS model and voice packs...
kokoro-tts-1  | 10:27:49 AM | INFO     | Initialized new PyTorch backend on GPU
kokoro-tts-1  | 10:27:49 AM | ERROR    | Failed to initialize model with warmup: File not found: kokoro-v0_19-half.pth in paths: ['/app/api/src/models']
kokoro-tts-1  | 10:27:49 AM | ERROR    | Failed to initialize model: Failed to initialize model with warmup: File not found: kokoro-v0_19-half.pth in paths: ['/app/api/src/models']
kokoro-tts-1  | ERROR:    Traceback (most recent call last):
kokoro-tts-1  |   File "/app/api/src/inference/model_manager.py", line 107, in initialize_with_warmup
kokoro-tts-1  |     model_path = await paths.get_model_path(model_file)
kokoro-tts-1  |   File "/app/api/src/core/paths.py", line 104, in get_model_path
kokoro-tts-1  |     return await _find_file(model_name, search_paths)
kokoro-tts-1  |   File "/app/api/src/core/paths.py", line 44, in _find_file
kokoro-tts-1  |     raise RuntimeError(f"File not found: {filename} in paths: {search_paths}")
kokoro-tts-1  | RuntimeError: File not found: kokoro-v0_19-half.pth in paths: ['/app/api/src/models']
kokoro-tts-1  |
kokoro-tts-1  | During handling of the above exception, another exception occurred:
kokoro-tts-1  |
kokoro-tts-1  | Traceback (most recent call last):
kokoro-tts-1  |   File "/app/.venv/lib/python3.10/site-packages/starlette/routing.py", line 693, in lifespan
kokoro-tts-1  |     async with self.lifespan_context(app) as maybe_state:
kokoro-tts-1  |   File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
kokoro-tts-1  |     return await anext(self.gen)
kokoro-tts-1  |   File "/app/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 133, in merged_lifespan
kokoro-tts-1  |     async with original_context(app) as maybe_original_state:
kokoro-tts-1  |   File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
kokoro-tts-1  |     return await anext(self.gen)
kokoro-tts-1  |   File "/app/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 133, in merged_lifespan
kokoro-tts-1  |     async with original_context(app) as maybe_original_state:
kokoro-tts-1  |   File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
kokoro-tts-1  |     return await anext(self.gen)
kokoro-tts-1  |   File "/app/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 133, in merged_lifespan
kokoro-tts-1  |     async with original_context(app) as maybe_original_state:
kokoro-tts-1  |   File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
kokoro-tts-1  |     return await anext(self.gen)
kokoro-tts-1  |   File "/app/api/src/main.py", line 59, in lifespan
kokoro-tts-1  |     device, model, voicepack_count = await model_manager.initialize_with_warmup(voice_manager)
kokoro-tts-1  |   File "/app/api/src/inference/model_manager.py", line 135, in initialize_with_warmup
kokoro-tts-1  |     raise RuntimeError(f"Failed to initialize model with warmup: {e}")
kokoro-tts-1  | RuntimeError: Failed to initialize model with warmup: File not found: kokoro-v0_19-half.pth in paths: ['/app/api/src/models']
kokoro-tts-1  |
kokoro-tts-1  | ERROR:    Application startup failed. Exiting.
kokoro-tts-1 exited with code 3
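
The traceback above shows the server only searches /app/api/src/models for kokoro-v0_19-half.pth. A quick sanity check that the weights really aren't inside the image (a sketch; the service name is taken from the compose output):

docker compose run --rm kokoro-tts ls -l /app/api/src/models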

@remsky
Owner

remsky commented Jan 31, 2025

Try the latest version on master. If it doesn't auto-download the model files for some reason, I've included scripts you can run prior to building, which should manually download and place them for you:

https://github.com/remsky/Kokoro-FastAPI/tree/master/docker/scripts
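
A rough sketch of that workflow; the script name below is a placeholder, so check docker/scripts/ in the repo for the actual filename:

git clone https://github.com/remsky/Kokoro-FastAPI.git
cd Kokoro-FastAPI
ls docker/scripts/                          # find the download script here
python docker/scripts/<download-script>.py  # placeholder name; downloads and places the model files
docker compose up --build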

remsky closed this as completed Jan 31, 2025