
chore(llmobs): implement answer relevancy ragas metric #11915

Merged: 19 commits from evan.li/ragas-answer-rel into main on Jan 15, 2025

Conversation

@lievan (Contributor) commented Jan 13, 2025

Implements answer relevancy metric for ragas integration.

About Answer Relevancy

The answer relevancy metric assesses how pertinent the generated answer is to the given prompt. Lower scores are assigned to answers that are incomplete or contain redundant information; higher scores indicate better relevancy. The metric is computed from the question, the retrieved contexts, and the answer.

Answer relevancy is defined as the mean cosine similarity of the original question to a number of artificial questions, which were generated (reverse engineered) from the response.
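In symbols, with $E_o$ the embedding of the original question, $E_{g_i}$ the embedding of the $i$-th generated question, and $N$ the number of generated questions:

$$\text{answer relevancy} = \frac{1}{N}\sum_{i=1}^{N}\cos(E_{g_i},\, E_o)$$

A minimal sketch of that computation, assuming embeddings have already been produced for the original question and for the questions an LLM generated from the response (names here are illustrative, not the actual ragas implementation):

```python
# Illustrative sketch of the answer relevancy score; this is not the
# actual ragas implementation.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def answer_relevancy(
    original_question_emb: np.ndarray,
    generated_question_embs: list[np.ndarray],
) -> float:
    """Mean cosine similarity of the original question to questions
    reverse engineered from the response by an LLM."""
    sims = [
        cosine_similarity(original_question_emb, g)
        for g in generated_question_embs
    ]
    return float(np.mean(sims))
```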

Example trace

[Screenshot: example trace of the answer relevancy evaluation]

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

github-actions bot commented Jan 13, 2025

CODEOWNERS have been resolved as:

ddtrace/llmobs/_evaluators/ragas/answer_relevancy.py                    @DataDog/ml-observability
tests/llmobs/llmobs_cassettes/tests.llmobs.test_llmobs_ragas_evaluators.answer_relevancy_inference.yaml  @DataDog/ml-observability
ddtrace/llmobs/_evaluators/ragas/base.py                                @DataDog/ml-observability
ddtrace/llmobs/_evaluators/ragas/models.py                              @DataDog/ml-observability
ddtrace/llmobs/_evaluators/runner.py                                    @DataDog/ml-observability
tests/llmobs/_utils.py                                                  @DataDog/ml-observability
tests/llmobs/conftest.py                                                @DataDog/ml-observability
tests/llmobs/test_llmobs_ragas_evaluators.py                            @DataDog/ml-observability

@pr-commenter bot commented Jan 13, 2025

Benchmarks

Benchmark execution time: 2025-01-15 20:24:49

Comparing candidate commit 640d2ad in PR branch evan.li/ragas-answer-rel with baseline commit bbc1bd8 in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 394 metrics, 2 unstable metrics.

@lievan force-pushed the evan.li/ragas-answer-rel branch from 9c8806f to 5edd2c8 on January 14, 2025 14:29
@lievan marked this pull request as ready for review January 14, 2025 14:37
@lievan requested a review from a team as a code owner January 14, 2025 14:37
Review threads (resolved) on:
  • ddtrace/llmobs/_evaluators/ragas/answer_relevancy.py
  • tests/llmobs/conftest.py
  • tests/llmobs/test_llmobs_ragas_evaluators.py
@Yun-Kim (Contributor) left a comment
As a next step/PR I think I'd like llmobs eval tests to avoid mocking the submit_evaluation function and instead create a dummy eval writer and use eval events to assert finished evals (like here for spans). WYDT?

Co-authored-by: Yun Kim <35776586+Yun-Kim@users.noreply.github.com>
@lievan (Contributor, Author) commented Jan 15, 2025

> As a next step/PR I think I'd like llmobs eval tests to avoid mocking the submit_evaluation function and instead create a dummy eval writer and use eval events to assert finished evals (like here for spans). WYDT?

@Yun-Kim sounds good, will add this as a todo for cleaning up the tests/ragas integration!
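For reference, a hypothetical sketch of the dummy-writer pattern suggested above; `DummyEvalWriter` and the event shape are assumptions for illustration, not actual ddtrace APIs:

```python
# Hypothetical sketch of the dummy eval writer idea; DummyEvalWriter and
# the event fields are illustrative assumptions, not ddtrace APIs.
class DummyEvalWriter:
    """Collects evaluation events in memory so tests can assert on them
    instead of mocking submit_evaluation."""

    def __init__(self):
        self.events = []

    def enqueue(self, event: dict) -> None:
        self.events.append(event)


def test_evaluator_emits_answer_relevancy_event():
    writer = DummyEvalWriter()
    # A real test would wire the evaluator under test to use `writer` and
    # run it against a recorded LLM span; here the event is enqueued
    # directly to show the assertion style.
    writer.enqueue({"metric": "answer_relevancy", "score": 0.87})
    assert writer.events[0]["metric"] == "answer_relevancy"
```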

@lievan enabled auto-merge (squash) January 15, 2025 19:31
@lievan added the changelog/no-changelog label (a changelog entry is not required for this PR) Jan 15, 2025
@lievan merged commit 13b1457 into main Jan 15, 2025
274 of 275 checks passed
@lievan deleted the evan.li/ragas-answer-rel branch January 15, 2025 20:26