"Influences the randomness of the model’s output. A higher value leads to more random and diverse responses, while a lower value produces more predictable outputs.".to_string(),
412
+
TOOLTIP_OFFSET,
409
413
cx, actions, scope
410
414
);
411
415
412
416
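The temperature tooltip above describes rescaling the model's output distribution. A minimal sketch of how a sampler might apply it (the function name and values are illustrative, not part of this codebase):

```rust
// Hypothetical sketch: temperature divides the logits before softmax.
// Lower temperature sharpens the distribution; higher temperature flattens it.
fn softmax_with_temperature(logits: &[f64], temperature: f64) -> Vec<f64> {
    let scaled: Vec<f64> = logits.iter().map(|l| l / temperature).collect();
    // Subtract the max for numerical stability before exponentiating.
    let max = scaled.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scaled.iter().map(|l| (l - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let logits = [2.0, 1.0, 0.0];
    let cold = softmax_with_temperature(&logits, 0.2);
    let hot = softmax_with_temperature(&logits, 2.0);
    // At low temperature the top token dominates; at high temperature
    // the probabilities move closer to uniform.
    assert!(cold[0] > hot[0]);
    println!("cold = {:?}, hot = {:?}", cold, hot);
}
```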
self.handle_tooltip_actions_for_slider(
    id!(top_p),
    "Top P, also known as nucleus sampling, is another parameter that influences the randomness of LLM output. It sets the cumulative probability threshold for the candidate set of tokens the LLM samples from. Lower values produce more precise and fact-based responses, while higher values increase randomness and diversity in the generated output.".to_string(),
    TOOLTIP_OFFSET,
    cx, actions, scope
);
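The Top P tooltip describes a cumulative probability cutoff. A hedged sketch of how nucleus filtering might look (the function name is hypothetical, not part of this codebase):

```rust
// Hypothetical sketch of nucleus (top-p) filtering: keep the smallest set of
// tokens whose cumulative probability reaches `top_p`, then renormalize.
fn nucleus_filter(probs: &[f64], top_p: f64) -> Vec<(usize, f64)> {
    // Sort token indices by probability, highest first.
    let mut indexed: Vec<(usize, f64)> = probs.iter().cloned().enumerate().collect();
    indexed.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    let mut kept = Vec::new();
    let mut cumulative = 0.0;
    for (idx, p) in indexed {
        kept.push((idx, p));
        cumulative += p;
        if cumulative >= top_p {
            break;
        }
    }
    // Renormalize so the kept probabilities sum to 1.
    let total: f64 = kept.iter().map(|(_, p)| p).sum();
    kept.into_iter().map(|(idx, p)| (idx, p / total)).collect()
}

fn main() {
    let probs = [0.5, 0.3, 0.15, 0.05];
    // A lower top_p keeps only the most probable tokens.
    assert_eq!(nucleus_filter(&probs, 0.5).len(), 1);
    assert_eq!(nucleus_filter(&probs, 0.95).len(), 3);
}
```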
self.handle_tooltip_actions_for_label(
    id!(stream_label),
    "Streaming sends words one at a time as the AI language model generates them, so they can be displayed while the response is still being produced.".to_string(),
    TOOLTIP_OFFSET,
    cx, actions, scope
);
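The streaming tooltip describes incremental token delivery. A minimal sketch using a channel as a stand-in for the model's token stream (names are illustrative, not part of this codebase):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical sketch: collect tokens from the stream as they arrive.
// A real UI would redraw after each token instead of waiting for the end.
fn render_stream(rx: mpsc::Receiver<String>) -> String {
    let mut shown = String::new();
    for token in rx {
        shown.push_str(&token);
    }
    shown
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // The producer thread stands in for the model emitting tokens.
    let producer = thread::spawn(move || {
        for token in ["Hello", ", ", "world", "!"] {
            tx.send(token.to_string()).unwrap();
        }
    });
    let shown = render_stream(rx);
    producer.join().unwrap();
    assert_eq!(shown, "Hello, world!");
}
```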
self.handle_tooltip_actions_for_slider(
    id!(max_tokens),
    "The max tokens parameter sets the upper limit for the total number of tokens, encompassing both the input provided to the LLM as a prompt and the output tokens generated by the LLM in response to that prompt.".to_string(),
    TOOLTIP_OFFSET,
    cx, actions, scope
);
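Since max tokens covers prompt and output together, the remaining generation budget shrinks as the prompt grows. A small illustrative helper (hypothetical, not part of this codebase):

```rust
// Hypothetical sketch: max_tokens caps prompt tokens plus generated tokens,
// so the generation budget is whatever the prompt has not already consumed.
fn remaining_budget(max_tokens: usize, prompt_tokens: usize, generated: usize) -> usize {
    max_tokens.saturating_sub(prompt_tokens + generated)
}

fn main() {
    // With a 2048-token limit and a 2000-token prompt, only 48 tokens
    // can be generated before the model must stop.
    assert_eq!(remaining_budget(2048, 2000, 0), 48);
    assert_eq!(remaining_budget(2048, 2000, 48), 0);
}
```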
self.handle_tooltip_actions_for_label(
    id!(stop_label),
    "Stop sequences are used to make the model stop generating tokens at a desired point, such as the end of a sentence or a list. The model response will not contain the stop sequence and you can pass up to four stop sequences.".to_string(),
    TOOLTIP_OFFSET,
    cx, actions, scope
);
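The stop-sequence tooltip notes that the response excludes the stop sequence itself. A hedged sketch of that truncation (the function name is hypothetical, not part of this codebase):

```rust
// Hypothetical sketch: cut the output at the earliest stop sequence,
// excluding the stop sequence itself from the returned text.
fn apply_stop_sequences(text: &str, stops: &[&str]) -> String {
    let cut = stops
        .iter()
        .filter_map(|s| text.find(s)) // byte index of each stop, if present
        .min()                        // earliest match wins
        .unwrap_or(text.len());       // no stop found: keep everything
    text[..cut].to_string()
}

fn main() {
    let out = apply_stop_sequences("1. apples\n2. pears\nEND of list", &["END", "\n\n"]);
    assert_eq!(out, "1. apples\n2. pears\n");
}
```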
self.handle_tooltip_actions_for_slider(
    id!(frequency_penalty),
    "This parameter is used to discourage the model from repeating the same words or phrases too frequently within the generated text. It is a value that is subtracted from the log-probability of a token each time it occurs in the generated text. A higher frequency_penalty value will result in the model being more conservative in its use of repeated tokens.".to_string(),
    TOOLTIP_OFFSET_BOTTOM,
    cx, actions, scope
);
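The frequency penalty scales with how often a token has already occurred. A minimal sketch of that per-occurrence adjustment (names and values are illustrative, not part of this codebase):

```rust
use std::collections::HashMap;

// Hypothetical sketch: frequency penalty lowers a token's logit in
// proportion to how many times that token has already been generated.
fn apply_frequency_penalty(
    logits: &mut HashMap<&str, f64>,
    counts: &HashMap<&str, usize>,
    penalty: f64,
) {
    for (token, logit) in logits.iter_mut() {
        if let Some(&count) = counts.get(token) {
            *logit -= penalty * count as f64;
        }
    }
}

fn main() {
    let mut logits = HashMap::from([("the", 2.0), ("cat", 1.0)]);
    let counts = HashMap::from([("the", 3usize)]);
    apply_frequency_penalty(&mut logits, &counts, 0.5);
    // "the" appeared three times, so its logit drops by 3 * 0.5 = 1.5.
    assert_eq!(logits["the"], 0.5);
    assert_eq!(logits["cat"], 1.0);
}
```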
self.handle_tooltip_actions_for_slider(
    id!(presence_penalty),
    "This parameter is used to encourage the model to include a diverse range of tokens in the generated text. It is a value that is subtracted once from the log-probability of any token that has already appeared in the generated text. A higher presence_penalty value will result in the model being more likely to generate tokens that have not yet been included in the generated text.".to_string(),
    TOOLTIP_OFFSET_BOTTOM,
    cx, actions, scope
);