
Commit 46be0e6

Author: Vadim Nicolai
Committed: Feb 2, 2025

Small improvement.

1 parent f0421dd

File tree

3 files changed: +3647 −6070 lines
 

README.md

+83 lines

@@ -39,4 +39,87 @@ To learn more about the AI SDK, Next.js, and FastAPI, take a look at the following resources:
- [Vercel AI Playground](https://play.vercel.ai) - try different models and choose the best one for your use case.
- [Next.js Docs](https://nextjs.org/docs) - learn about Next.js features and API.
- [FastAPI Docs](https://fastapi.tiangolo.com) - learn about FastAPI features and API.

# email-finder
Below is a brief README-style guide to running DeepSeek-R1 locally with Ollama.

---
# DeepSeek-R1: Local Setup Guide

This guide explains how to run DeepSeek-R1 on your local machine using [Ollama](https://ollama.ai).

## 1. Install Ollama

1. **Download**: Visit the [Ollama website](https://ollama.ai) and download the installer for your operating system.
2. **Install**: Install Ollama as you would any other application.
## 2. Download and Test DeepSeek-R1

1. **Open Terminal**: Launch your terminal or command prompt.
2. **Run the Model**:

   ```bash
   ollama run deepseek-r1
   ```

   On first use, this command downloads the DeepSeek-R1 model (default size) and then starts an interactive chat session.

3. **Alternate Model Sizes** (optional):

   ```bash
   ollama run deepseek-r1:<size>b
   ```

   Replace `<size>` with `1.5`, `7`, `8`, `14`, `32`, `70`, or `671` to download and run smaller or larger variants.
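The size variants above follow a simple tag scheme: `deepseek-r1:<size>b`. As an illustration, a small hypothetical helper (the function and constant names are ours, not Ollama's) that builds and validates these tags:

```python
# Hypothetical helper: builds an Ollama model tag for a DeepSeek-R1
# size variant, validating against the sizes listed above.
VALID_SIZES = {"1.5", "7", "8", "14", "32", "70", "671"}

def deepseek_tag(size=None):
    """Return 'deepseek-r1' or 'deepseek-r1:<size>b' for a known size."""
    if size is None:
        return "deepseek-r1"  # default model size
    size = str(size)
    if size not in VALID_SIZES:
        raise ValueError(f"unknown DeepSeek-R1 size: {size}")
    return f"deepseek-r1:{size}b"
```

For example, `deepseek_tag(7)` yields `deepseek-r1:7b`, while an unknown size raises a `ValueError` instead of silently producing a tag that Ollama cannot resolve.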
## 3. Run DeepSeek-R1 as a Service

To keep DeepSeek-R1 running in the background and serve requests via an API:

```bash
ollama serve
```

This starts the Ollama server and exposes DeepSeek-R1 at `http://localhost:11434/api/chat` for integration with other applications.
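Before wiring the endpoint into an application, it can help to check that the server is actually reachable. A minimal sketch using only the Python standard library (the function name is ours; `/api/tags` is Ollama's model-listing endpoint, and the host/port assume the defaults):

```python
import urllib.error
import urllib.request

def ollama_is_up(base_url="http://localhost:11434", timeout=2.0):
    """Return True if an Ollama server answers at base_url."""
    try:
        # /api/tags lists locally available models; any HTTP answer
        # means the server process is running and accepting requests.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, start the server with `ollama serve` (or check that nothing else is occupying port 11434).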
## 4. Test via CLI and API

- **CLI**: Once DeepSeek-R1 is running, simply type:

  ```bash
  ollama run deepseek-r1
  ```

- **API**: Use `curl` to chat with DeepSeek-R1 via the local server:

  ```bash
  curl http://localhost:11434/api/chat -d '{
    "model": "deepseek-r1",
    "messages": [{ "role": "user", "content": "Hello DeepSeek, how are you?" }],
    "stream": false
  }'
  ```
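The same request can be issued from plain Python without extra dependencies. A minimal sketch assuming the default endpoint and the non-streaming response shape (`message.content`); the helper names are illustrative:

```python
import json
import urllib.request

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint

def build_chat_request(model, content, stream=False):
    """Build the JSON body used in the curl example above."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "stream": stream,
    }

def chat(model, content):
    """POST a single-turn chat request and return the reply text."""
    body = json.dumps(build_chat_request(model, content)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_CHAT_URL,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

Separating payload construction from the network call keeps the request shape easy to unit-test without a running server.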
## 5. Next Steps

- **Python Integration**: Use the `ollama` Python package to integrate DeepSeek-R1 into applications:

  ```python
  import ollama

  # Single-turn chat request against the locally running model
  response = ollama.chat(
      model="deepseek-r1",
      messages=[{"role": "user", "content": "Hi DeepSeek!"}],
  )
  print(response["message"]["content"])
  ```

- **Gradio App**: Build a simple web interface (e.g., for RAG tasks) using [Gradio](https://gradio.app).

For more details on prompt construction, chunk splitting, and building retrieval-based applications (RAG), refer to the official documentation and tutorials.
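Chunk splitting, mentioned above, can be prototyped in a few lines. A deliberately simplistic character-based splitter with a sliding overlap (real RAG pipelines usually split on tokens or sentence boundaries instead):

```python
def split_into_chunks(text, chunk_size=500, overlap=50):
    """Split text into fixed-size character chunks with a sliding overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each time
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

The overlap keeps context that straddles a chunk boundary retrievable from either neighboring chunk; tune `chunk_size` and `overlap` to your embedding model's context window.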
---

## References

- [Ollama Documentation](https://ollama.ai)
- [DeepSeek-R1 Article](#) (replace `#` with your desired URL if available)

---

package-lock.json

-6,070 lines
This file was deleted.

pnpm-lock.yaml

+3,564 lines (generated file, not rendered by default)