Ranx#

Ranx is a Python library that implements tools for estimating the performance of ranking models. Check the documentation here.

import numpy as np
import pandas as pd

from ranx import Qrels, Run, evaluate
from sklearn.datasets import make_blobs

from surprise.prediction_algorithms.knns import KNNBasic
from surprise.dataset import Dataset
from surprise.reader import Reader

Creating task#

To learn how to use ranx, we need an experimental task - it is created in the following cell.

In this task, the goal is to develop an algorithm that can effectively match “items” with “objects”. The dataset has two identifier columns, object and item, and a rank column that indicates how suitable each item is for the corresponding object.

We have two solutions:

  • Just random results - random predict;

  • A solution produced by the surprise.prediction_algorithms.knns.KNNBasic algorithm - KNN predict.

So the metric values should be better for the KNN model.

r_width = 10
r_height = 30

# creating task
R, c = make_blobs(
    n_samples=r_height,
    n_features=r_width,
    centers=3,
    random_state=10
)
R = np.round((R-R.min())*10/(R.max()-R.min())).astype(int)
R_frame = pd.Series(
    R.ravel(),
    index = pd.MultiIndex.from_tuples(
        [
            (str(j),str(i)) 
            for j in np.arange(R.shape[1]) 
            for i in np.arange(R.shape[0])
        ],
        names = ["object", "item"]
    ),
    name = "rank"
).reset_index()
# we need to define relevant elements:
# we'll treat items that got a rating
# higher than 5 as relevant
R_frame["relevant"] = (R_frame["rank"] > 5).astype("int")


# creating predictions for comparison
# random scores serve as a baseline
np.random.seed(10)
R_frame["random predict"] = np.random.normal(size=len(R_frame))
# surprise KNNBasic is fitted on the ground-truth ranks
reader = Reader(rating_scale=(0,10))
surp_dataset = Dataset.load_from_df(
    R_frame[["object", "item", 'rank']], 
    reader
)
my_data_set = surp_dataset.build_full_trainset()
model = KNNBasic(k=25,verbose=False)
model = model.fit(my_data_set)
R_frame["KNN predict"] = R_frame.apply(
    lambda row: model.predict(
        row["object"], 
        row["item"]
    ).est,
    axis=1
)

R_frame.sample(10, random_state=10)
object item rank relevant random predict KNN predict
24 0 24 5 0 1.123691 4.980553
65 2 5 6 1 -0.529296 5.475319
113 3 23 5 0 -0.739357 5.736547
261 8 21 1 0 0.279605 1.732450
188 6 8 6 1 -0.405730 5.896200
181 6 1 2 0 -0.678947 2.994664
59 1 29 6 1 -0.362180 5.554170
87 2 27 5 0 0.393341 5.115708
293 9 23 7 1 0.188331 6.784238
277 9 7 8 1 -0.970198 6.944433

Using ranx#

To use ranx you need to define Qrels and Run:

  • Qrels - or query relevance judgments - stores the ground truth for conducting evaluations;

  • Run - stores the relevance scores estimated by the model under evaluation.

The example below shows how I typically create qrels and runs from pandas.DataFrames using the from_df method.

qrels = Qrels.from_df(
    df=R_frame,
    q_id_col="object", 
    doc_id_col="item",
    score_col="rank"
)
random_run = Run.from_df(
    df=R_frame,
    q_id_col="object",
    doc_id_col="item",
    score_col="random predict"
)
knn_run = Run.from_df(
    df=R_frame,
    q_id_col="object",
    doc_id_col="item",
    score_col="KNN predict"
)
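
ranx also accepts nested dictionaries directly - a minimal sketch, assuming the dict-based Qrels(...) and Run(...) constructors described in the ranx README (the query and item identifiers below are made up for illustration):

qrels_dict = {
    "q_1": {"item_12": 5, "item_25": 3},
    "q_2": {"item_11": 6, "item_22": 1}
}
run_dict = {
    "q_1": {"item_12": 0.9, "item_25": 0.8},
    "q_2": {"item_11": 0.7, "item_22": 0.2}
}

dict_qrels = Qrels(qrels_dict)
dict_run = Run(run_dict)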

Now, by using the evaluate function, we can finally calculate metrics.

Note You may notice that some metrics are the same for the random model and the KNN model. This is because these metrics only use binary relevance, so they need a binary score_col in Qrels to be informative. An example is shown in the following section.

metrics = [
    "hits@5", 
    "hit_rate@5",
    "precision@5",
    "recall@5",
    "f1@5",
    "mrr@5",
    "map@5",
    "dcg@5",
    "dcg_burges@5",
    "ndcg@5",
    "ndcg_burges@5"
]
pd.DataFrame({
    "random results" : evaluate(
        qrels, random_run, metrics
    ),
    "KNN results" : evaluate(
        qrels, knn_run, metrics
    )
})
random results KNN results
hits@5 5.000000 5.000000
hit_rate@5 1.000000 1.000000
precision@5 1.000000 1.000000
recall@5 0.169007 0.169007
f1@5 0.289127 0.289127
mrr@5 1.000000 1.000000
map@5 0.169007 0.169007
dcg@5 15.076328 24.645380
dcg_burges@5 308.888558 1165.549729
ndcg@5 0.604429 0.984139
ndcg_burges@5 0.267174 0.924945
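
Note that evaluate is expected to return a single float when you pass one metric as a string and a dictionary of metric-value pairs when you pass a list of metrics - that is why the results above can be fed straight into pd.DataFrame. A small sketch (the metric names are just examples):

evaluate(qrels, knn_run, "ndcg@5")             # a single float
evaluate(qrels, knn_run, ["ndcg@5", "map@5"])  # a dict like {"ndcg@5": ..., "map@5": ...}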

Metrics with binary relevance#

Some metrics only need a binary score_col in the qrels to work properly. Here is an example showing that such metrics work fine if we pass a binary variable as score_col to the qrels. We will consider an item relevant if its rank is greater than 5.

R_frame["relevant?"] = (R_frame["rank"] > 5).astype("int")
bin_qrels = Qrels.from_df(
    df=R_frame,
    q_id_col="object", 
    doc_id_col="item",
    score_col="relevant?"
)

metrics = [
    "hits@5", 
    "hit_rate@5",
    "precision@5",
    "recall@5",
    "f1@5",
    "map@5"
]
pd.concat(
    {
        "numeric score_col" : pd.DataFrame({
            "random results" : evaluate(
                qrels, random_run, metrics
            ),
            "KNN results" : evaluate(
                qrels, knn_run, metrics
            )
        }),
        "binary score_col" : pd.DataFrame({
            "random results" : evaluate(
                bin_qrels, random_run, metrics
            ),
            "KNN results" : evaluate(
                bin_qrels, knn_run, metrics
            )
        })
    },
    axis = 1
)
numeric score_col binary score_col
random results KNN results random results KNN results
hits@5 5.000000 5.000000 2.400000 5.000000
hit_rate@5 1.000000 1.000000 1.000000 1.000000
precision@5 1.000000 1.000000 0.480000 1.000000
recall@5 0.169007 0.169007 0.148942 0.307404
f1@5 0.289127 0.289127 0.226654 0.468981
map@5 0.169007 0.169007 0.110045 0.307404

As you can see, with a numeric score_col the scores of the metrics under consideration are the same for the random and KNN models. But with a binary relevance variable, the KNN model outperforms the random model.

Usage aspects#

Qrels target dtype#

Interestingly, the column passed as score_col to the Qrels object must have exactly the int64 dtype - other dtypes cause an error.

The following cell shows how it works.

frame = pd.DataFrame({
    "users_id" : ["1", "2", "3"],
    "item_id" : ["1", "2", "3"],
    "target" : np.array([1,2,3]).astype('int32')
})

display(frame.dtypes)

try:
    Qrels.from_df(
        df=frame,
        q_id_col="users_id",
        doc_id_col="item_id",
        score_col="target"
    )
except Exception as e:
    print("Got exception: ", e)
users_id    object
item_id     object
target       int32
dtype: object
Got exception:  DataFrame scores column dtype must be `int`
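
If the scores are genuinely integer-valued, a simple workaround is to cast the column to int64 before building the Qrels - a sketch based on the frame above:

frame["target"] = frame["target"].astype("int64")
Qrels.from_df(
    df=frame,
    q_id_col="users_id",
    doc_id_col="item_id",
    score_col="target"
)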