# botex can be configured by
#
# - using the dialog of the command line interface `botex`,
# - setting the required parameters via its function arguments, or
# - setting the required parameters as environment variables.
#
# You can use this file to set the environment variables.
# To do so, copy it to `botex.env` and edit the values.
# ___ DO NOT COMMIT 'botex.env' ___
# See https://pypi.org/project/python-dotenv/
# for more information on the file format.
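#
# As a minimal sketch, this is how the variables from this file can be
# loaded into a Python session with python-dotenv (the variable read in
# the last line is the one set further below):
#
#   import os
#   from dotenv import load_dotenv
#
#   load_dotenv("botex.env")       # reads the key/value pairs into os.environ
#   print(os.environ["BOTEX_DB"])  # -> "botex.sqlite3"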
# ------------------------------------------------------------------------------
# General configuration
# ------------------------------------------------------------------------------
# botex stores all its data in a SQLite database. You can set the path of the
# database file here. The respective file will be created if it does not exist.
BOTEX_DB="botex.sqlite3"
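# If you prefer to keep the database somewhere else, an absolute path
# works as well (the path below is a purely hypothetical example):
# BOTEX_DB="/home/user/botex_data/botex.sqlite3"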
# ------------------------------------------------------------------------------
# oTree configuration
# ------------------------------------------------------------------------------
# You can either configure botex to use an already running oTree instance
# or let it start an instance for you.
# If you want to use an already running oTree instance, you need to set the
# URL endpoint of the server. The value below is the oTree default.
OTREE_SERVER_URL='http://localhost:8000'
# If you want botex to start oTree for you, as we do in the tests,
# you need to set the path of your oTree project folder instead.
# OTREE_PROJECT_PATH="tests/otree"
# You can then also change the default port of 8000
# so that it does not interfere with any other oTree instances.
# OTREE_PORT=8000
# By default, botex makes oTree log to a file 'otree.log' in the project folder.
# You can change this here.
# OTREE_LOG_FILE="otree.log"
# By default, botex mimics the behavior of 'otree devserver' and starts
# oTree without an elevated authentication level. You can change this here
# by setting the oTree authentication level to either "DEMO" or "STUDY".
# OTREE_AUTH_LEVEL="STUDY"
# Regardless of whether you use an already running oTree instance or let
# botex start one for you, if you are running oTree with authentication level
# "DEMO" or "STUDY", you need to set an oTree REST key for botex to
# communicate with the oTree API.
# Also, you will need an oTree admin user name and password if you want to
# use botex to download oTree data via the oTree web frontend.
# If you let botex start oTree for you, it will configure oTree with the
# values that you set here.
# OTREE_REST_KEY="a fancy rest key"
# OTREE_ADMIN_PASSWORD="a fancy password"
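#
# As a quick sanity check, you can verify that the REST key is accepted
# by querying the oTree API directly. A minimal sketch using the requests
# package (the '/api/otree_version' endpoint is assumed here and may
# depend on your oTree version; the key is the placeholder from above):
#
#   import requests
#
#   r = requests.get(
#       "http://localhost:8000/api/otree_version",
#       headers={"otree-rest-key": "a fancy rest key"},
#   )
#   print(r.status_code, r.text)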
# ------------------------------------------------------------------------------
# LiteLLM configuration
# ------------------------------------------------------------------------------
# You will need to configure the following section if you want to
# use commercial LLM APIs like OpenAI or Google Gemini via the
# LiteLLM interface. This will apply to most use cases.
# For commercial LLM APIs, you need API keys.
# botex uses the environment variable system of LiteLLM to
# send the API keys to the LLM servers.
OPENAI_API_KEY="akey"
GEMINI_API_KEY="akey"
# Add and edit API keys as needed.
# See https://docs.litellm.ai/docs/set_keys for more detail.
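#
# For illustration (not botex's own code), this is roughly how LiteLLM
# picks up such a key from the environment once it has been loaded from
# this file; the model name below is just an example:
#
#   from litellm import completion
#
#   # LiteLLM reads GEMINI_API_KEY from the environment automatically
#   response = completion(
#       model="gemini/gemini-1.5-flash",
#       messages=[{"role": "user", "content": "Say hello."}],
#   )
#   print(response.choices[0].message.content)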
# ------------------------------------------------------------------------------
# llama.cpp configuration
# ------------------------------------------------------------------------------
# Everything below is only needed if you want to use llama.cpp for local LLMs.
# If you are accessing a running llama.cpp server, you only need to set its
# URL. The value below is the default if you start llama.cpp locally.
# LLAMA_SERVER_URL="http://localhost:8080"
# Alternatively, you can call botex.start_llamacpp_server() to start a
# llama.cpp instance locally. For this, you need to set at least the
# following two variables.
LLAMACPP_SERVER_PATH="path/to/llama-server"
LLAMACPP_LOCAL_LLM_PATH="path/to/model.gguf"
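#
# With the two variables above in place (and loaded into the environment,
# e.g. via python-dotenv as sketched at the top of this file), starting
# the server from Python could look like this:
#
#   import botex
#
#   botex.start_llamacpp_server()  # launches llama-server using the
#                                  # LLAMACPP_* settings from this file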
# In this case, you should also configure the following if you have a GPU
# that you want llama.cpp to use.
LLAMACPP_NUMBER_OF_LAYERS_TO_OFFLOAD_TO_GPU=0
# You can also increase the number of slots for more parallelism if
# you have plenty of GPU memory.
LLAMACPP_NUM_SLOTS=1
# Below are some additional llama.cpp settings that you can use to
# configure llama.cpp if you let botex start the server.
# The values below are the defaults that botex uses.
# LLAMACPP_CONTEXT_LENGTH=None # Uses the default of the respective LLM
# LLAMACPP_TEMPERATURE=0.7
# LLAMACPP_MAXIMUM_TOKENS_TO_PREDICT=10000