
FastAI Deployment

Deploy your own Model

Deploying your own model is as easy as replacing the model file (server/export.pkl) with your own. Of course, it also makes sense to adjust the HTML/CSS to your liking.
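If you train with fastai, Learner.export is the usual way to produce that file. Below is a minimal sketch, assuming a fastai vision Learner; the PETS dataset is only a placeholder for your own data, and the resulting file still has to be copied to server/export.pkl by hand:

```python
# Minimal sketch: train a fastai vision model and export it as export.pkl.
# The dataset (PETS) and architecture (resnet34) are placeholders, not part
# of this repository -- swap in your own data and model.
from fastai.vision.all import *

path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path / "images"),
    pat=r"(.+)_\d+.jpg$", item_tfms=Resize(224),
)
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

# Writes export.pkl next to the dataset (learn.path);
# copy it into server/export.pkl afterwards.
learn.export("export.pkl")
```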

Command to launch the container:

docker build -t churros_samosa_classifier . && docker run --rm -it -p 5000:5000 churros_samosa_classifier
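Once the container is up on port 5000, you can send it a test image. The sketch below is only a guess at the interface: the route name (/analyze) and the upload field name (file) are assumptions, so check the code in server/ for the actual endpoint:

```python
# Hypothetical smoke test against the locally running container.
# Route "/analyze" and form field "file" are assumptions, not confirmed
# by this README -- adjust them to match the server code.
import requests

with open("test_image.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:5000/analyze",  # assumed prediction route
        files={"file": f},                # assumed upload field name
    )

print(resp.status_code, resp.text)
```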

Then you can deploy the container to any cloud provider of your choice.

I recommend Render, which is what I used for my deployment, but other providers should work fine, too.

Steps for deploying on Render

  • Fork this repository

  • Replace the model (server/export.pkl)

  • Change the HTML and CSS to your liking

  • In the Render Dashboard create a new Web Service

  • Link your repository

  • Make sure Docker is selected as the environment

  • Done!