diff --git a/README.md b/README.md
index 360e4bf..980839a 100644
--- a/README.md
+++ b/README.md
@@ -136,6 +136,11 @@
 pip3 uninstall -y torch
 pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu121 # cu121 means cuda 12.1
 ```
+Now that installation is complete, try modifying one of the examples and running the command below!
+
+```
+python -m example.retrieval_qa.retrieval_qa_huggingface_demo
+```
 
 ### Frontend Dev Setup
 ```
@@ -148,7 +153,8 @@ npm run build
 ```
 
 If you are on EC2, you can launch a GPU instance with the following config:
 - EC2 `g5.2xlarge` (if you want to run a pretrained LLM with 7B parameters)
 - Deep Learning AMI PyTorch GPU 2.0.1 (Ubuntu 20.04)
-  Alt text
+  Alt text
 - EBS: at least 100G
+  Alt text
diff --git a/docker/README.md b/docker/README.md
index 3f50217..54a9996 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -125,6 +125,8 @@ For example, here is a command to run `cambioml/pykoi` version `0.1_ec2_linux`.
 ```
 docker run -d -e RETRIEVAL_MODEL=mistralai/Mistral-7B-v0.1 -p 5000:5000 --gpus all --name pykoi_test cambioml/pykoi:0.1_ec2_linux
 ```
+
+***Note: this command may take a few minutes*** since it is loading an LLM.
+
 If you are running it in the background with the `-d` flag, you can check the logs using the following command:
 ```
 docker logs [CONTAINER_NAME]
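The first hunk adds a smoke-test command that launches a retrieval-QA demo. As a rough illustration of the retrieval step only — the `retrieve` function and sample corpus below are invented for this sketch, not pykoi's actual API — a minimal keyword-overlap retriever looks like:

```python
def retrieve(question, corpus):
    """Return the document sharing the most words with the question.

    Toy keyword-overlap scoring; real retrieval-QA systems typically
    rank documents by embedding similarity instead.
    """
    q_words = set(question.lower().split())
    return max(corpus, key=lambda doc: len(q_words & set(doc.lower().split())))

if __name__ == "__main__":
    corpus = [
        "pykoi runs a retrieval QA demo on a GPU instance.",
        "EBS volumes should be at least 100G for model weights.",
    ]
    print(retrieve("How big should the EBS volume be?", corpus))
```

The overall shape — score every document against the question, return the top hit — carries over unchanged when the keyword overlap is swapped for an embedding-based scorer.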