
Textual similarity evaluators

This article refers to the Microsoft Foundry (new) portal.
The Microsoft Foundry SDK for evaluation and Foundry portal are in public preview, but the APIs are generally available for model and dataset evaluation (agent evaluation remains in public preview). Evaluators marked (preview) in this article are currently in public preview everywhere.
It’s important to compare how closely the textual response generated by your AI system matches the response you would expect. The expected response is called the ground truth. Use an LLM-judge metric like Similarity to focus on the semantic similarity between the generated response and the ground truth, or use metrics from the field of natural language processing (NLP), such as F1 score, BLEU, GLEU, ROUGE, and METEOR, which focus on the overlap of tokens or n-grams between the two.

Similarity

Similarity measures the degree of semantic similarity between the generated text and its ground truth with respect to a query. Compared to other text-similarity metrics that require ground truth, this metric focuses on the semantics of a response instead of simple overlap in tokens or n-grams, and it also considers the broader context of the query.

F1 score

F1 score measures similarity by the tokens shared between the generated text and the ground truth, taking both precision and recall into account. It computes the ratio of shared words between the model's generation and the ground truth: the individual words in the generated response are compared against the words in the ground truth answer, and the number of shared words is the basis of the F1 score (see the sketch after the following definitions):
  • Precision is the ratio of the number of shared words to the total number of words in the generation.
  • Recall is the ratio of the number of shared words to the total number of words in the ground truth.
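As an illustration, here is a minimal sketch of this computation in Python, assuming simple whitespace tokenization and lowercasing (the built-in evaluator's exact tokenization and normalization may differ):
# Minimal token-level F1 sketch; tokenization here is a simplifying assumption.
from collections import Counter

def f1_score(response: str, ground_truth: str) -> float:
    response_tokens = response.lower().split()
    truth_tokens = ground_truth.lower().split()
    # Count tokens shared between the response and the ground truth.
    shared = sum((Counter(response_tokens) & Counter(truth_tokens)).values())
    if shared == 0:
        return 0.0
    precision = shared / len(response_tokens)   # shared / words in generation
    recall = shared / len(truth_tokens)         # shared / words in ground truth
    return 2 * precision * recall / (precision + recall)

print(f1_score("Paris is the largest city in France.",
               "The largest city in France is Paris."))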

BLEU score

The BLEU (Bilingual Evaluation Understudy) score is commonly used in natural language processing and machine translation. It measures how closely the generated text matches the reference text.
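To get a feel for the metric, here is a small sketch that uses NLTK's sentence_bleu (assumes pip install nltk); the built-in evaluator's implementation and smoothing choices may differ:
# Illustration of the BLEU concept with NLTK; not the built-in evaluator's exact code.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "The largest city in France is Paris.".split()
candidate = "Paris is the largest city in France.".split()

# sentence_bleu takes a list of reference token lists and one candidate token list.
score = sentence_bleu([reference], candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))  # 0-1 float; higher means closer n-gram overlap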

GLEU score

The GLEU (Google-BLEU) score measures similarity by shared n-grams between the generated text and the ground truth. Like the BLEU score, it accounts for both precision and recall, but it addresses BLEU's drawbacks by using a per-sentence reward objective.
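A quick illustration using NLTK's sentence_gleu (also from pip install nltk); again, this is a conceptual sketch rather than the built-in evaluator's implementation:
# GLEU rewards per-sentence n-gram matches, balancing precision and recall.
from nltk.translate.gleu_score import sentence_gleu

reference = "The largest city in France is Paris.".split()
candidate = "Paris is the largest city in France.".split()

print(round(sentence_gleu([reference], candidate), 3))  # 0-1 float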

ROUGE score

The ROUGE (Recall-Oriented Understudy for Gisting Evaluation) score is a set of metrics used to evaluate automatic summarization and machine translation. It measures the overlap between the generated text and reference summaries. ROUGE focuses on recall-oriented measures to assess how well the generated text covers the reference text, and it reports precision, recall, and F1 score.
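For intuition, here is a sketch using the third-party rouge-score package (pip install rouge-score); the built-in evaluator exposes the same idea through its rouge_type parameter, but its internals may differ:
# ROUGE-1 and ROUGE-L overlap between a prediction and a reference.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="The largest city in France is Paris.",      # ground truth
    prediction="Paris is the largest city in France.",  # generated response
)
# Each entry carries precision, recall, and F1 (fmeasure).
r1 = scores["rouge1"]
print(r1.precision, r1.recall, r1.fmeasure)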

METEOR score

The METEOR (Metric for Evaluation of Translation with Explicit Ordering) score measures similarity by shared n-grams between the generated text and the ground truth. Like the BLEU score, it accounts for precision and recall, but it addresses limitations of other metrics, including BLEU, by considering synonyms, stemming, and paraphrasing for content alignment.
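The following sketch uses NLTK's meteor_score (assumes pip install nltk plus the WordNet data used for synonym matching); treat it as an illustration of the concept, not the built-in evaluator's exact implementation:
# METEOR aligns tokens using exact matches, stems, and synonyms.
import nltk
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # WordNet data enables synonym matching

# Recent NLTK versions expect pre-tokenized references and candidate.
reference = "The largest city in France is Paris.".split()
candidate = "Paris is the largest city in France.".split()

print(round(meteor_score([reference], candidate), 3))  # 0-1 float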

Using textual similarity evaluators

Textual similarity evaluators compare generated responses against ground truth text using different algorithms:
  • Similarity - LLM-based semantic similarity evaluation
  • F1, BLEU, GLEU, ROUGE, METEOR - Algorithmic token/n-gram overlap metrics
The following table summarizes each evaluator's required inputs, parameters, output format, and default threshold:
Evaluator | What it measures | Required inputs | Required parameters | Output | Default threshold
builtin.similarity | Semantic similarity to ground truth | query, response, ground_truth | deployment_name | 1-5 integer | 3
builtin.f1_score | Token overlap using precision and recall | ground_truth, response | (none) | 0-1 float | 0.5
builtin.bleu_score | N-gram overlap (machine translation metric) | ground_truth, response | (none) | 0-1 float | 0.5
builtin.gleu_score | Per-sentence reward variant of BLEU | ground_truth, response | (none) | 0-1 float | 0.5
builtin.rouge_score | Recall-oriented n-gram overlap | ground_truth, response | rouge_type | 0-1 float | 0.5
builtin.meteor_score | Weighted alignment with synonyms | ground_truth, response | (none) | 0-1 float | 0.5

Example input

Your test dataset should contain the fields referenced in your data mappings:
{"query": "What is the largest city in France?", "response": "Paris is the largest city in France.", "ground_truth": "The largest city in France is Paris."}
{"query": "Explain machine learning.", "response": "Machine learning is a subset of AI that enables systems to learn from data.", "ground_truth": "Machine learning is an AI technique where computers learn patterns from data."}

Configuration example

Data mapping syntax:
  • {{item.field_name}} references fields from your test dataset (for example, {{item.response}}).
# model_deployment is the name of the model deployment used by the LLM-judge evaluator.
testing_criteria = [
    {
        "type": "azure_ai_evaluator",
        "name": "Similarity",
        "evaluator_name": "builtin.similarity",
        "initialization_parameters": {"deployment_name": model_deployment},
        "data_mapping": {
            "query": "{{item.query}}",
            "response": "{{item.response}}",
            "ground_truth": "{{item.ground_truth}}",
        },
    },
    {
        "type": "azure_ai_evaluator",
        "name": "BLEUScore",
        "evaluator_name": "builtin.bleu_score",
        "data_mapping": {
            "ground_truth": "{{item.ground_truth}}",
            "response": "{{item.response}}",
        },
    },
    {
        "type": "azure_ai_evaluator",
        "name": "ROUGEScore",
        "evaluator_name": "builtin.rouge_score",
        "initialization_parameters": {"rouge_type": "rouge1"},
        "data_mapping": {
            "ground_truth": "{{item.ground_truth}}",
            "response": "{{item.response}}",
        },
    },
]
See Run evaluations in the cloud for details on running evaluations and configuring data sources.

Example output

LLM-based evaluators like Similarity use a 1-5 Likert scale, while the algorithmic evaluators output 0-1 floats. All evaluators also output pass or fail based on their thresholds. Key output fields:
{
    "type": "azure_ai_evaluator",
    "name": "Similarity",
    "metric": "similarity",
    "score": 4,
    "label": "pass",
    "reason": "The response accurately conveys the same meaning as the ground truth.",
    "threshold": 3,
    "passed": true
}