Commit

fmt, remove outdated docs
Yun-Kim committed Feb 3, 2025
1 parent 47a857b commit adb8766
Showing 3 changed files with 3 additions and 137 deletions.
135 changes: 3 additions & 132 deletions ddtrace/contrib/_openai.py
@@ -1,10 +1,8 @@
"""
The OpenAI integration instruments the OpenAI Python library to emit metrics,
traces, and logs (logs are disabled by default) for requests made to the models,
completions, chat completions, edits, images, embeddings, audio, files, fine-tunes,
and moderations endpoints.
The OpenAI integration instruments the OpenAI Python library to emit traces for requests made to the models,
completions, chat completions, images, embeddings, audio, files, and moderations endpoints.
All metrics, logs, and traces submitted from the OpenAI integration are tagged by:
All traces submitted from the OpenAI integration are tagged by:
- ``service``, ``env``, ``version``: see the `Unified Service Tagging docs <https://docs.datadoghq.com/getting_started/tagging/unified_service_tagging>`_.
- ``openai.request.endpoint``: OpenAI API endpoint used in the request.
@@ -15,84 +13,6 @@
- ``openai.user.api_key``: OpenAI API key used to make the request (obfuscated to match the OpenAI UI representation ``sk-...XXXX`` where ``XXXX`` is the last 4 digits of the key).
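For context, a minimal sketch of what an instrumented request might look like; the model name and the choice of ``patch()`` over ``ddtrace-run`` are illustrative, not part of this change::

    from ddtrace import patch

    patch(openai=True)  # instrument the OpenAI client library

    import openai

    client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
    client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Hello"}],
    )
    # The emitted span carries the tags listed above, e.g. openai.request.endpoint.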
Metrics
~~~~~~~
The following metrics are collected by default by the OpenAI integration.
.. important::
If the Agent is configured to use a non-default Statsd hostname or port, use ``DD_DOGSTATSD_URL`` to configure
``ddtrace`` to use it.
.. important::
Ratelimit and token metrics only reflect usage of the supported completions, chat completions, and embedding
endpoints. Usage of other OpenAI endpoints will not be recorded as they are not provided.
.. py:data:: openai.request.duration
The duration of the OpenAI request in seconds.
Type: ``distribution``
.. py:data:: openai.request.error
The number of errors from requests made to OpenAI.
Type: ``count``
.. py:data:: openai.ratelimit.requests
The maximum number of OpenAI requests permitted before exhausting the rate limit.
Type: ``gauge``
.. py:data:: openai.ratelimit.tokens
The maximum number of OpenAI tokens permitted before exhausting the rate limit.
Type: ``gauge``
.. py:data:: openai.ratelimit.remaining.requests
The remaining number of OpenAI requests permitted before exhausting the rate limit.
Type: ``gauge``
.. py:data:: openai.ratelimit.remaining.tokens
The remaining number of OpenAI tokens permitted before exhausting the rate limit.
Type: ``gauge``
.. py:data:: openai.tokens.prompt
The number of tokens used in the prompt of an OpenAI request.
Type: ``distribution``
.. py:data:: openai.tokens.completion
The number of tokens used in the completion of an OpenAI response.
Type: ``distribution``
.. py:data:: openai.tokens.total
The total number of tokens used in the prompt and completion of an OpenAI request/response.
Type: ``distribution``
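A hedged sketch of the DogStatsD note above; the host, port, and explicit opt-in are placeholders, and metrics are on by default::

    import os

    # Point ddtrace at a non-default Agent StatsD address before ddtrace is imported.
    os.environ["DD_DOGSTATSD_URL"] = "udp://my-agent-host:8125"
    os.environ["DD_OPENAI_METRICS_ENABLED"] = "true"  # the default; shown for clarity

    from ddtrace import patch

    patch(openai=True)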
(beta) Prompt and Completion Sampling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -101,22 +21,9 @@
- Prompt inputs and completions for the ``completions`` endpoint.
- Message inputs and completions for the ``chat.completions`` endpoint.
- Embedding inputs for the ``embeddings`` endpoint.
- Edit inputs, instructions, and completions for the ``edits`` endpoint.
- Image input filenames and completion URLs for the ``images`` endpoint.
- Audio input filenames and completions for the ``audio`` endpoint.
Prompt and message inputs and completions can also be emitted as log data.
Logs are **not** emitted by default. When logs are enabled they are sampled at ``0.1``.
Read the **Global Configuration** section for information about enabling logs and configuring sampling
rates.
.. important::
To submit logs, you must set the ``DD_API_KEY`` environment variable.
Set ``DD_SITE`` to send logs to a Datadog site such as ``datadoghq.eu``. The default is ``datadoghq.com``.
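For reference, a sketch of how the log options described above (and removed by this commit) were wired together; the API key and sample rate values are placeholders::

    import os

    os.environ["DD_API_KEY"] = "<your-datadog-api-key>"  # required for log submission
    os.environ["DD_SITE"] = "datadoghq.eu"  # optional; defaults to datadoghq.com
    os.environ["DD_OPENAI_LOGS_ENABLED"] = "true"
    # Sample half of prompts/completions as logs instead of the default 0.1.
    os.environ["DD_OPENAI_LOG_PROMPT_COMPLETION_SAMPLE_RATE"] = "0.5"

    from ddtrace import patch

    patch(openai=True)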
(beta) Streamed Responses Support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -172,32 +79,6 @@
Default: ``DD_SERVICE``
.. py:data:: ddtrace.config.openai["logs_enabled"]
Enable collection of prompts and completions as logs. You can adjust the rate of prompts and completions collected
using the sample rate configuration described below.
Alternatively, you can set this option with the ``DD_OPENAI_LOGS_ENABLED`` environment
variable.
Note that you must set the ``DD_API_KEY`` environment variable to enable sending logs.
Default: ``False``
.. py:data:: ddtrace.config.openai["metrics_enabled"]
Enable collection of OpenAI metrics.
If the Datadog Agent is configured to use a non-default Statsd hostname
or port, use ``DD_DOGSTATSD_URL`` to configure ``ddtrace`` to use it.
Alternatively, you can set this option with the ``DD_OPENAI_METRICS_ENABLED`` environment
variable.
Default: ``True``
.. py:data:: (beta) ddtrace.config.openai["span_char_limit"]
Configure the maximum number of characters for the following data within span tags:
@@ -225,16 +106,6 @@
Default: ``1.0``
.. py:data:: (beta) ddtrace.config.openai["log_prompt_completion_sample_rate"]
Configure the sample rate for the collection of prompts and completions as logs.
Alternatively, you can set this option with the ``DD_OPENAI_LOG_PROMPT_COMPLETION_SAMPLE_RATE`` environment
variable.
Default: ``0.1``
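The same options can also be set programmatically; a minimal sketch, assuming the ``ddtrace.config.openai`` entries shown above, with illustrative values::

    from ddtrace import config, patch

    patch(openai=True)

    # Override the span tag limits documented above.
    config.openai["span_char_limit"] = 256
    config.openai["span_prompt_completion_sample_rate"] = 0.5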
Instance Configuration
~~~~~~~~~~~~~~~~~~~~~~
4 changes: 0 additions & 4 deletions ddtrace/contrib/internal/openai/patch.py
@@ -10,7 +10,6 @@
from ddtrace.contrib.trace_utils import with_traced_module
from ddtrace.contrib.trace_utils import wrap
from ddtrace.internal.logger import get_logger
from ddtrace.internal.utils.formats import asbool
from ddtrace.internal.utils.formats import deep_getattr
from ddtrace.internal.utils.version import parse_version
from ddtrace.llmobs._integrations import OpenAIIntegration
@@ -23,10 +22,7 @@
config._add(
"openai",
{
"logs_enabled": asbool(os.getenv("DD_OPENAI_LOGS_ENABLED", False)),
"metrics_enabled": asbool(os.getenv("DD_OPENAI_METRICS_ENABLED", True)),
"span_prompt_completion_sample_rate": float(os.getenv("DD_OPENAI_SPAN_PROMPT_COMPLETION_SAMPLE_RATE", 1.0)),
"log_prompt_completion_sample_rate": float(os.getenv("DD_OPENAI_LOG_PROMPT_COMPLETION_SAMPLE_RATE", 0.1)),
"span_char_limit": int(os.getenv("DD_OPENAI_SPAN_CHAR_LIMIT", 128)),
},
)
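The remaining entries map one-to-one onto the environment variables read above; a hypothetical override, with example values, set before ddtrace loads the integration::

    import os

    # Names mirror the os.getenv() calls above; values are examples only.
    os.environ["DD_OPENAI_SPAN_CHAR_LIMIT"] = "256"
    os.environ["DD_OPENAI_SPAN_PROMPT_COMPLETION_SAMPLE_RATE"] = "0.5"

    from ddtrace import patch

    patch(openai=True)  # reads the env vars when the integration config is created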
1 change: 0 additions & 1 deletion ddtrace/llmobs/_integrations/openai.py
@@ -5,7 +5,6 @@
from typing import Optional
from typing import Tuple

from ddtrace import config
from ddtrace.internal.constants import COMPONENT
from ddtrace.internal.utils.version import parse_version
from ddtrace.llmobs._constants import INPUT_DOCUMENTS
