## create_metric_decorator

`create_metric_decorator (metric_class)`
*Factory function that creates decorator factories for different metric types.*

**Args:**

- `metric_class`: the metric class to use (`DiscreteMetric`, `NumericMetric`, etc.)

**Returns:** a decorator factory function for the specified metric type
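Conceptually, the returned decorator factory turns a plain scoring function into a metric instance. The sketch below illustrates that factory-of-decorators shape only; it is not the library's actual implementation, and the constructor arguments and `_score_fn` attribute are assumptions:

```python
# Hypothetical sketch of the pattern; the real create_metric_decorator
# in ragas_experimental may be structured differently.
def create_metric_decorator(metric_class):
    def decorator_factory(llm, prompt, name, **metric_params):
        def decorator(func):
            # Assumed constructor signature for the metric class.
            metric = metric_class(llm=llm, prompt=prompt, name=name, **metric_params)
            # Attach the user-defined scoring function (hypothetical attribute).
            metric._score_fn = func
            return metric
        return decorator
    return decorator_factory
```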
### Example usage
```python
import typing as t

from openai import OpenAI
from pydantic import BaseModel

from ragas_experimental.llm import ragas_llm
from ragas_experimental.metric import DiscreteMetric, MetricResult

llm = ragas_llm(provider="openai", model="gpt-4o", client=OpenAI())

# Create a decorator factory for discrete metrics
discrete_metric = create_metric_decorator(DiscreteMetric)

@discrete_metric(
    llm=llm,
    prompt="Evaluate if given answer is helpful\n\n{response}",
    name="new_metric",
    values=["low", "med", "high"],
)
def my_metric(llm, prompt, **kwargs):
    # Schema for the structured LLM response
    class response_model(BaseModel):
        output: t.List[bool]
        reason: str

    response = llm.generate(prompt.format(**kwargs), response_model=response_model)
    total = sum(response.output)
    if total < 1:
        score = "low"
    else:
        score = "high"
    return MetricResult(result=score, reason=response.reason)
```
```python
result = my_metric.score(response="my response")
print(result)
print(result.reason)
```

```
low
The context or details of the user's response ('my response') are not provided, making it impossible to evaluate its helpfulness accurately.
```
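The same factory works for the other metric types named in the docstring. A hedged variant for a numeric metric follows; the `NumericMetric` export and its `range` parameter are assumptions, so check the library's API before relying on them:

```python
from ragas_experimental.metric import NumericMetric  # assumed export, per the docstring

numeric_metric = create_metric_decorator(NumericMetric)

@numeric_metric(
    llm=llm,
    prompt="Rate the helpfulness of the answer from 0 to 1\n\n{response}",
    name="helpfulness_score",
    range=(0, 1),  # assumed NumericMetric parameter
)
def my_numeric_metric(llm, prompt, **kwargs):
    class response_model(BaseModel):
        output: float
        reason: str

    response = llm.generate(prompt.format(**kwargs), response_model=response_model)
    # Clamp the model's score into the declared range before returning it.
    return MetricResult(result=min(max(response.output, 0.0), 1.0), reason=response.reason)
```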