LLM-as-a-Judge

{{One source|date=May 2025}}

'''LLM-as-a-Judge''' is a conceptual framework in natural language processing (NLP) that employs large language models (LLMs) as evaluators to assess the performance of other language-based systems or outputs.

Instead of relying solely on human annotators, the approach leverages the general language capabilities of advanced language models to serve as automated judges.

LLM-as-a-Judge can be more cost-effective than human evaluation and can be integrated into automated evaluation pipelines.
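The following is a minimal sketch of such a judging step, assuming the OpenAI Python client (v1 interface); the model name, prompt wording, and 1-to-10 rubric are illustrative assumptions rather than part of any fixed specification.

<syntaxhighlight lang="python">
# Minimal single-answer grading sketch; client, model name, and rubric
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

JUDGE_PROMPT = """You are an impartial judge. Rate the assistant's answer to the
user's question on a scale of 1 to 10 for helpfulness and accuracy.
Reply with only the integer rating.

[Question]
{question}

[Answer]
{answer}"""

def judge_score(question: str, answer: str, model: str = "gpt-4o") -> int:
    """Ask a stronger LLM to grade a single model output."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0,  # reduce run-to-run variation in grading
    )
    # The prompt asks for a bare integer, so the reply is parsed directly.
    return int(response.choices[0].message.content.strip())

print(judge_score("What causes tides?",
                  "Tides are caused mainly by the Moon's gravitational pull."))
</syntaxhighlight>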

Unlike traditional automatic evaluation metrics such as ROUGE and BLEU, which rely on transparent, rule-based comparisons of surface-level n-grams, LLM-as-a-Judge depends on the opaque internal reasoning of large language models. Its evaluations can incorporate deeper semantic understanding, but at the cost of interpretability.
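The difference can be made concrete with a self-contained sketch of surface-level n-gram overlap, the basis of BLEU-style scoring; the helper below is a deliberate simplification of the full metric.

<syntaxhighlight lang="python">
# Simplified n-gram precision, illustrating surface-level comparison;
# real BLEU adds brevity penalties and multi-reference handling.
from collections import Counter

def ngram_precision(candidate: str, reference: str, n: int = 1) -> float:
    """Fraction of candidate n-grams that also appear in the reference."""
    def ngrams(text: str) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
    return overlap / max(sum(cand.values()), 1)

# A paraphrase scores poorly on word overlap despite preserving the meaning,
# which is exactly the gap an LLM judge is prompted to close.
print(ngram_precision("the cat sat on the mat", "a feline rested on the rug"))  # low
print(ngram_precision("the cat sat on the mat", "the cat sat on the mat"))      # 1.0
</syntaxhighlight>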

Typically, a more powerful LLM is employed to evaluate the outputs of smaller or less capable language models, for example using GPT-4 to assess the performance of a 13-billion-parameter LLaMA model.{{Cite Q | Q123527686 }}
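A pairwise variant of the same idea, in which the stronger model picks the better of two candidate answers, might be sketched as follows; the client, model name, and prompt wording are again illustrative assumptions.

<syntaxhighlight lang="python">
# Pairwise comparison sketch; client, model name, and prompt are assumptions.
from openai import OpenAI

client = OpenAI()

PAIRWISE_PROMPT = """You are an impartial judge. Compare the two answers to the
question below and reply with exactly "A", "B", or "tie".

[Question]
{question}

[Answer A]
{answer_a}

[Answer B]
{answer_b}"""

def judge_pairwise(question: str, answer_a: str, answer_b: str,
                   model: str = "gpt-4") -> str:
    """Return the judge's verdict: "A", "B", or "tie"."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": PAIRWISE_PROMPT.format(
                       question=question, answer_a=answer_a, answer_b=answer_b)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
</syntaxhighlight>

Pairwise judging of this kind is reported to exhibit position bias, a tendency to favor whichever answer is presented first; running each comparison twice with the answer order swapped is a common mitigation.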

References

{{Scholia|topic}}

{{Reflist}}

[[Category:Large language models]]

{{LLM-stub}}

{{machine-learning-stub}}