evaluate_comparative

langsmith.evaluation._runner.evaluate_comparative(experiments: Tuple[str | UUID, str | UUID], /, evaluators: Sequence[Callable[[Sequence[Run], Example | None], ComparisonEvaluationResult | dict | Awaitable[ComparisonEvaluationResult | dict]]], experiment_prefix: str | None = None, description: str | None = None, max_concurrency: int = 5, client: Client | None = None, metadata: dict | None = None, load_nested: bool = False, randomize_order: bool = False) → ComparativeExperimentResults

Evaluate existing experiment runs against each other.

This lets you use pairwise preference scoring to generate more reliable feedback in your experiments.

Parameters:
  • experiments (Tuple[Union[str, uuid.UUID], Union[str, uuid.UUID]]) – The identifiers of the experiments to compare.

  • evaluators (Sequence[COMPARATIVE_EVALUATOR_T]) – A list of evaluators to run on each example.

  • experiment_prefix (Optional[str]) – A prefix to provide for your experiment name. Defaults to None.

  • description (Optional[str]) – A free-form text description for the experiment.

  • max_concurrency (int) – The maximum number of concurrent evaluations to run. Defaults to 5.

  • client (Optional[langsmith.Client]) – The LangSmith client to use. Defaults to None.

  • metadata (Optional[dict]) – Metadata to attach to the experiment. Defaults to None.

  • load_nested (bool) – Whether to load all child runs for the experiment. Default is to only load the top-level root runs.

  • randomize_order (bool) – Whether to randomize the order of the outputs for each evaluation. Default is False.

Returns:

The results of the comparative evaluation.

Return type:

ComparativeExperimentResults
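
Each comparative evaluator receives the paired runs (one per experiment) together with the shared dataset example, and returns a dict with a feedback "key" and per-run "scores" keyed by run ID, as in the full examples below. A minimal sketch of that contract, assuming "my-experiment-1" and "my-experiment-2" are hypothetical names of two existing experiments over the same dataset:

>>> from langsmith.evaluation import evaluate_comparative
>>> from langsmith.schemas import Example
>>> def prefer_shorter(runs: list, example: Example) -> dict:
...     # Toy rule for illustration only: score the shorter output 1, the other 0.
...     shorter, longer = sorted(runs, key=lambda run: len(str(run.outputs)))
...     return {
...         "key": "prefer_shorter",
...         "scores": {shorter.id: 1, longer.id: 0},
...     }
>>> results = evaluate_comparative(
...     ("my-experiment-1", "my-experiment-2"),
...     evaluators=[prefer_shorter],
... )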

Examples

Suppose you want to compare two prompts to see which one is more effective. You would first prepare your dataset:

>>> from typing import Sequence
>>> from langsmith import Client
>>> from langsmith.evaluation import evaluate
>>> from langsmith.schemas import Example, Run
>>> client = Client()
>>> dataset = client.clone_public_dataset(
...     "https://smith.langchain.com/public/419dcab2-1d66-4b94-8901-0357ead390df/d"
... )
>>> dataset_name = "Evaluate Examples"

Then you would run your different prompts:

>>> import functools
>>> import openai
>>> from langsmith.evaluation import evaluate
>>> from langsmith.wrappers import wrap_openai
>>> oai_client = openai.Client()
>>> wrapped_client = wrap_openai(oai_client)
>>> prompt_1 = "You are a helpful assistant."
>>> prompt_2 = "You are an exceedingly helpful assistant."
>>> def predict(inputs: dict, prompt: str) -> dict:
...     completion = wrapped_client.chat.completions.create(
...         model="gpt-3.5-turbo",
...         messages=[
...             {"role": "system", "content": prompt},
...             {
...                 "role": "user",
...                 "content": f"Context: {inputs['context']}"
...                 f"\n\n{inputs['question']}",
...             },
...         ],
...     )
...     return {"output": completion.choices[0].message.content}
>>> results_1 = evaluate(
...     functools.partial(predict, prompt=prompt_1),
...     data=dataset_name,
...     description="Evaluating our basic system prompt.",
...     blocking=False,  # Run these experiments in parallel
... )  # doctest: +ELLIPSIS
View the evaluation results for experiment:...
>>> results_2 = evaluate(
...     functools.partial(predict, prompt=prompt_2),
...     data=dataset_name,
...     description="Evaluating our advanced system prompt.",
...     blocking=False,
... )  # doctest: +ELLIPSIS
View the evaluation results for experiment:...
>>> results_1.wait()
>>> results_2.wait()
>>> import time
>>> time.sleep(10)  # Wait for the traces to be fully processed

Finally, you would compare the two prompts directly:

>>> import json
>>> from langsmith.evaluation import evaluate_comparative
>>> def score_preferences(runs: list, example: Example):
...     assert len(runs) == 2  # Comparing 2 systems
...     assert isinstance(example, Example)
...     assert all(run.reference_example_id == example.id for run in runs)
...     pred_a = runs[0].outputs["output"]
...     pred_b = runs[1].outputs["output"]
...     ground_truth = example.outputs["answer"]
...     tools = [
...         {
...             "type": "function",
...             "function": {
...                 "name": "rank_preferences",
...                 "description": "Saves the preferred response ('A' or 'B')",
...                 "parameters": {
...                     "type": "object",
...                     "properties": {
...                         "reasoning": {
...                             "type": "string",
...                             "description": "The reasoning behind the choice.",
...                         },
...                         "preferred_option": {
...                             "type": "string",
...                             "enum": ["A", "B"],
...                             "description": "The preferred option, either 'A' or 'B'",
...                         },
...                     },
...                     "required": ["preferred_option"],
...                 },
...             },
...         }
...     ]
...     completion = openai.Client().chat.completions.create(
...         model="gpt-3.5-turbo",
...         messages=[
...             {"role": "system", "content": "Select the better response."},
...             {
...                 "role": "user",
...                 "content": f"Option A: {pred_a}"
...                 f"\n\nOption B: {pred_b}"
...                 f"\n\nGround Truth: {ground_truth}",
...             },
...         ],
...         tools=tools,
...         tool_choice={
...             "type": "function",
...             "function": {"name": "rank_preferences"},
...         },
...     )
...     tool_args = completion.choices[0].message.tool_calls[0].function.arguments
...     loaded_args = json.loads(tool_args)
...     preference = loaded_args["preferred_option"]
...     comment = loaded_args["reasoning"]
...     if preference == "A":
...         return {
...             "key": "ranked_preference",
...             "scores": {runs[0].id: 1, runs[1].id: 0},
...             "comment": comment,
...         }
...     else:
...         return {
...             "key": "ranked_preference",
...             "scores": {runs[0].id: 0, runs[1].id: 1},
...             "comment": comment,
...         }
>>> def score_length_difference(runs: list, example: Example):
...     # Toy evaluator: prefer whichever response is longer.
...     # Not a meaningful metric; included only for illustration.
...     assert len(runs) == 2  # Comparing 2 systems
...     assert isinstance(example, Example)
...     assert all(run.reference_example_id == example.id for run in runs)
...     pred_a = runs[0].outputs["output"]
...     pred_b = runs[1].outputs["output"]
...     if len(pred_a) > len(pred_b):
...         return {
...             "key": "length_difference",
...             "scores": {runs[0].id: 1, runs[1].id: 0},
...         }
...     else:
...         return {
...             "key": "length_difference",
...             "scores": {runs[0].id: 0, runs[1].id: 1},
...         }
>>> results = evaluate_comparative(
...     [results_1.experiment_name, results_2.experiment_name],
...     evaluators=[score_preferences, score_length_difference],
...     client=client,
... )  
View the pairwise evaluation results at:...
>>> eval_results = list(results)
>>> assert len(eval_results) >= 10  
>>> assert all(
...     "feedback.ranked_preference" in r["evaluation_results"]
...     for r in eval_results
... )  
>>> assert all(
...     "feedback.length_difference" in r["evaluation_results"]
...     for r in eval_results
... )
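
If a preference evaluator asks an LLM judge to choose between "Option A" and "Option B", position bias can skew the results; setting randomize_order=True shuffles which run appears first for each evaluation, and because the scores dict is keyed by run ID the feedback still maps back to the correct experiment. A sketch of the same comparison with that option and a few of the other documented parameters, using illustrative values for the prefix and metadata:

>>> results = evaluate_comparative(
...     [results_1.experiment_name, results_2.experiment_name],
...     evaluators=[score_preferences, score_length_difference],
...     experiment_prefix="prompt-comparison",
...     metadata={"judge_model": "gpt-3.5-turbo"},
...     max_concurrency=4,
...     randomize_order=True,
...     client=client,
... )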