
Second semester project for the Interactive Machine Learning 2022/2023 course at MIM UW

This time, the task is to assign correct labels to samples annotated by 'simulated' experts and to estimate the experts' true positive rates, in a prediction problem concerning the identification of firefighter activities from multiple sensor readings.

Overview

The goal of this project is to assign labels to a set of instances annotated by faulty labelers, and to estimate the labelers' true positive rates for all prediction classes.

Competition rules are given in Terms and Conditions.

A detailed description of the task, data, and evaluation metric is in the Task description section. 

The deadline for submitting solutions is June 11, 2023.

Terms & Conditions
 
 
Leaderboard

Rank | Team Name       | Is Report | Preliminary Score | Final Score | Submissions
-----|-----------------|-----------|-------------------|-------------|------------
1    | lastmanstanding | True      | 0.8078            | 0.9049      | 5
2    | baseline        | True      | 0.9582            | 0.8736      | 6
3    | ggruza          | True      | 0.9048            | 0.8142      | 13
4    | iml2023project2 | True      | 0.6524            | 0.7910      | 11
5    | MJ              | True      | 0.6775            | 0.7860      | 35
6    | mgrot           | True      | 0.6920            | 0.7540      | 4
7    | jandziuba       | True      | 0.6127            | 0.7168      | 4
8    | Mateusz Błajda  | True      | 0.6563            | 0.7165      | 5
9    | EW              | True      | 0.7290            | 0.6249      | 9
10   | basiekjusz      | True      | 0.6000            | 0.5586      | 2
11   | MB              | True      | 0.6349            | 0.4972      | 5
12   | karol           | True      | 0.6226            | 0.4883      | 15
13   | kuba            | True      | 0.5874            | 0.4487      | 8

Task description

The task in this project is twofold.
The first part is to estimate the probability of each object belonging to each of the 5 classes, based on noisy annotations from many imperfect experts.
The second part is to estimate the true positive rate of every expert in each class, which is one of the indicators of expert quality.

For every sample, you are given both the representation of the sample, in a file named train_X, and the annotations assigned by experts, in the annotations file (the files are aligned by rows).
Both files are saved in numpy .npy format:
- the train_X file is of shape (n_samples, n_features)
- the annotations file is of shape (n_samples, n_classes, n_experts)

A 1 in the annotations array at position (i, j, k) indicates that the k-th expert has said that the i-th sample has the j-th label. Each expert may indicate that a sample belongs to 0, 1, or multiple classes.
Please keep in mind that not every expert annotated each sample; a lack of annotation is indicated as NaN in the appropriate slice of the array.
Moreover, as in a standard active learning scenario, not every sample was labeled by an expert; in other words, there are some samples with no annotations at all.
Those samples will not be used for the evaluation, but they are left in to model the problem more faithfully.
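
As a sketch, the snippet below loads the two files, drops the unannotated samples, and computes a naive baseline for both subtasks. The file names follow the description above; the majority-vote pseudo-labels and the 0.5 threshold are assumptions of this sketch, not part of the task.

```python
import numpy as np

X = np.load("train_X.npy")        # (n_samples, n_features) sample representations
ann = np.load("annotations.npy")  # (n_samples, n_classes, n_experts)

# Keep only samples that received at least one annotation; unannotated
# samples are not evaluated and must not appear in the submission.
labeled = ~np.isnan(ann).all(axis=(1, 2))
ann_l = ann[labeled]

# Naive class-probability estimate: the fraction of responding experts
# who voted for each class (NaN marks experts who skipped the sample).
probs = np.nanmean(ann_l, axis=2)  # (n_labeled, n_classes)

# Naive per-expert, per-class TPR estimate, using the thresholded
# consensus as a stand-in for the hidden true labels (an assumption of
# this sketch, not the official definition).
pseudo = probs >= 0.5                       # boolean pseudo-labels
n_classes, n_experts = ann.shape[1], ann.shape[2]
tpr = np.full((n_experts, n_classes), 0.5)  # fallback when undefined
for k in range(n_experts):
    votes = ann_l[:, :, k]                  # NaN where the expert skipped
    seen_pos = pseudo & ~np.isnan(votes)    # pseudo-positives the expert saw
    hits = (votes == 1) & seen_pos          # ...which the expert also flagged
    denom = seen_pos.sum(axis=0)
    ok = denom > 0
    tpr[k, ok] = hits.sum(axis=0)[ok] / denom[ok]
```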

Format of submissions: solutions should be submitted as text files with 2 sections. 

The first section has a number of rows equal to the number of annotated samples in the dataset. Each line should contain 5 floating-point numbers separated by commas, giving the probabilities that the i-th labeled sample belongs to each of the 5 classes.
(The samples should be submitted in the same order as in the annotations input file; keep in mind that estimates for unlabeled samples should not be submitted.)

The second section should contain n_experts lines describing the estimated true positive rates of the experts. Each line should contain exactly 5 comma-separated floats, denoting the estimated true positive rate of that expert in the corresponding classes, in the same order as in the annotations file.
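
A minimal writer for this format could look as follows (it reuses the probs and tpr arrays from the earlier sketch; the file name and the number of decimal places are arbitrary, and the two sections are assumed to be simply concatenated, since the description mentions no separator):

```python
import numpy as np

def write_submission(path, probs, tpr, fmt="%.6f"):
    # probs: (n_labeled, 5) class probabilities, rows in the same order
    #        as the annotated samples in the annotations file
    # tpr:   (n_experts, 5) estimated true positive rates per class
    with open(path, "w") as f:
        for row in np.vstack([probs, tpr]):
            f.write(",".join(fmt % v for v in row) + "\n")

write_submission("submission.txt", probs, tpr)
```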

Evaluation: the submitted class probabilities will be evaluated using the macro ROC AUC metric against the true labels of the samples. The estimated true positive rates will be evaluated using the Spearman rank correlation with the hidden real true positive rates of the experts. The final score is the average of these two numbers.
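
The official scoring code is not published here, but a plausible reimplementation is sketched below, assuming that the true labels come as a binary indicator matrix and that the Spearman correlation is computed over all (expert, class) pairs at once; the organizers may aggregate it differently.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

def final_score(y_true, y_prob, tpr_true, tpr_est):
    # Macro ROC AUC over the classes: y_true is an (n, 5) binary
    # indicator matrix, y_prob an (n, 5) matrix of probabilities.
    auc = roc_auc_score(y_true, y_prob, average="macro")
    # Spearman rank correlation between hidden and estimated TPRs,
    # flattened over all (expert, class) pairs.
    rho = spearmanr(np.ravel(tpr_true), np.ravel(tpr_est)).correlation
    return 0.5 * (auc + rho)
```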

During the challenge, your solutions will be evaluated on a fraction of the data set and on a fraction of the experts' TPR estimations, and your best preliminary score will be displayed on the public Leaderboard. After the competition ends, the selected solutions will be evaluated on the remaining part of the data set, and this result will be used for the evaluation of the project.

Forum

This forum is for all users to discuss matters related to the competition. Good manners apply!
There are no topics in this competition.