
AAIA'18 Data Mining Challenge: Predicting Win-rates of Hearthstone Decks

AAIA'18 Data Mining Challenge is the fifth competition organized within the framework of the International Symposium Advances in Artificial Intelligence and Applications (https://fedcsis.org/2018/aaia). This time, the task is to assess win-rates of Hearthstone decks in games played between AI bots. The competition is kindly sponsored by Silver Bullet Labs, eSensei, and the Polish Information Processing Society (PTI).

Overview

This year's competition continues the topic from the previous year - data analytics related to video games. In particular, we focus on the popular collectible card video game Hearthstone: Heroes of Warcraft.

The purpose of this challenge is to discover reliable algorithms for predicting win-rates of Hearthstone decks. The task for participants is to construct a prediction model that can learn the win chances of new decks based on a history of match-ups between AI bots playing with similar decks. In order to give participants freedom in choosing a representation of the data, apart from a preprocessed dataset in a tabular format, we provide raw JSON files that describe each game in more detail - we are interested in whether the data regarding the way in which cards are played during a game can be useful for the proposed task.

More details regarding the task and a description of the competition data can be found in Task description section.

Special session at AAIA'18: As in previous years, a special session devoted to the competition will be held at the conference. This year, it is called International Workshop on AI Methods in Data Mining Challenges (DMGATE). We will invite authors of selected reports to extend them for publication in the conference proceedings (after reviews by Organizing Committee members) and presentation at the conference. The publications will be treated as short papers and will be indexed by IEEE Digital Library and Web of Science. The invited teams will be chosen based on their final rank, innovativeness of their approach and quality of the submitted report. We also encourage regular paper submissions to this event, describing new approaches for analyzing data sets published during any of the challenges organized at Knowledge Pit.

References:

  1. Andrzej Janusz, Tomasz Tajmajer, Maciej Swiechowski: Helping AI to Play Hearthstone: AAIA'17 Data Mining Challenge. FedCSIS 2017: 121-125
  2. Maciej Swiechowski, Tomasz Tajmajer, Andrzej Janusz: Improving Hearthstone AI by Combining MCTS and Supervised Learning Algorithms. CIG 2018: 1-8
  3. Andrzej Janusz, Tomasz Tajmajer, Maciej Swiechowski, Lukasz Grad, Jacek Puczniewski, Dominik Slezak: Toward an Intelligent HS Deck Advisor: Lessons Learned from AAIA'18 Data Mining Competition. FedCSIS 2018: 189-192
  4. Andrzej Janusz, Lukasz Grad, Dominik Slezak: Utilizing Hybrid Information Sources to Learn Representations of Cards in Collectible Card Video Games. ICDM Workshops 2018: 422-429
  5. http://eu.battle.net/hearthstone/en/
  6. https://hearthsim.info/

AAIA'18 Data Mining Challenge: Predicting Win-rates of Hearthstone Decks has ended and we would like to thank all participants for their involvement and hard work! 

The official Winners:

  1. Quang Hieu Vu, ZALORA, and Dymitr Ruta, EBTIC, Khalifa University, United Arab Emirates (team hieuvq)
  2. Ling Cen, EBTIC, Khalifa University, United Arab Emirates, and Andrzej Ruta, ING Bank Slaski, Poland (team amy)
  3. Jan Jakubik, Wrocław University of Science and Technology, Poland (team jj)

Congratulations on your excellent results!

We would also like to distinguish two more teams:

  • kmichael08: Michał Kuźba and Ryszard Poklewski-Koziełł
  • adamwitkowski: Adam Witkowski, Jan Betley, and Anna Sztyber,

and invite them to contribute extended versions of their reports to our special session at the FedCSIS 2018 conference. We will be sending separate invitation letters shortly.

All participants of the challenge are also invited to submit regular conference papers describing their solutions to the DMGATE'18 event, which will be held at the FedCSIS'18 conference.

All competition data files (including the ground truth win-rates) were published in the Data files section. If you are planning to use them in your research, please indicate KnowledgePit as the data source and include references to our papers related to the challenge. 

Data description and format: The data for this competition is provided in two different formats. The main one is a collection of JSON files which describe in detail the games played between four different bots using 400 Hearthstone decks. These games can be used to estimate win-rates of the decks and to learn how particular cards are used by the bots. Based on this knowledge, the task for participants is to estimate win-rates of a different set of 200 decks played by the same bots.

The decks used in the provided collection of training games are given in a separate file named trainingDecks.json. Each row of this file corresponds to a different deck and stores a JSON with a deck identifier, a hero identifier, and a list of card names along with their cardinality in the deck (there can be one or two cards of the same type in a deck). Analogously, the descriptions of the test decks are provided in the same format in the file testDecks.json. It is allowed to use external knowledge bases about Hearthstone cards as long as they are publicly available and their source is clearly stated in the submitted competition report. One example of such a source is the HearthPwn portal.
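
To illustrate one way of working with this format, the sketch below reads a JSON-lines deck file and turns each deck into a simple bag-of-cards representation. It is only a sketch: the key names used here (cards, name, count) are assumptions made for the example and should be checked against the actual files.

    # A minimal sketch, assuming trainingDecks.json stores one JSON object per
    # line and that the key names below match the actual files (adjust if not).
    import json
    from collections import Counter

    def load_decks(path):
        """Read one JSON object (one deck) per line."""
        with open(path, encoding="utf-8") as f:
            return [json.loads(line) for line in f if line.strip()]

    def deck_to_card_counts(deck):
        """Turn a deck into a Counter mapping card name -> cardinality (1 or 2)."""
        counts = Counter()
        for card in deck["cards"]:                 # assumed key
            counts[card["name"]] += card["count"]  # assumed keys
        return counts

    if __name__ == "__main__":
        decks = load_decks("trainingDecks.json")
        print(len(decks), "training decks loaded")
        print(deck_to_card_counts(decks[0]).most_common(5))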

For the convenience of participants, we provide an additional table, training_games.csv, whose rows correspond to the JSON files describing the games between bots. Each row of this table gives a simple summary of the corresponding game. In particular, it provides the ids of the playing bots, the decks they were using, and the result of the game.
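
For instance, empirical win-rates of the training decks can be estimated directly from this table. The rough sketch below groups the games by bot and deck; the column names (bot1, deck1, bot2, deck2, winner) and the separator are assumptions made for the example and should be checked against the actual header of training_games.csv.

    # A rough sketch of estimating empirical (bot, deck) win-rates from
    # training_games.csv. Column names are assumptions; the real file may also
    # use a different separator (e.g. ";").
    import pandas as pd

    games = pd.read_csv("training_games.csv")

    # View each game twice, once from the perspective of each player.
    first = pd.DataFrame({
        "bot": games["bot1"], "deck": games["deck1"],
        "win": (games["winner"] == 1).astype(float),
    })
    second = pd.DataFrame({
        "bot": games["bot2"], "deck": games["deck2"],
        "win": (games["winner"] == 2).astype(float),
    })
    per_player = pd.concat([first, second], ignore_index=True)

    # Empirical win-rate of every bot-deck pair, expressed as a percentage.
    win_rates = 100.0 * per_player.groupby(["bot", "deck"])["win"].mean()
    print(win_rates.head())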

The format of submissions: The participants of the competition are asked to predict the win-rates of the 200 Hearthstone decks described in the file testDecks.json for the four bots which were used to generate the training data. The predictions should be sent using the submission system, which will become available on April 13, 23:59 GMT. A file with a solution should consist of 800 lines, formatted in the same way as the exemplary solution provided in the file testSubmissionTemplate.csv. Each line should contain an identifier of a bot, an id of a deck, and the predicted win-rate, separated by semicolons. The win-rates should be expressed as percentages (real numbers between 0.0 and 100.0).
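
As a concrete illustration of the expected output, the sketch below writes a syntactically valid submission with 4 x 200 = 800 lines. The bot identifiers, the deck-id key, and the predict() function are placeholders introduced for this example; the real identifiers and line order should be taken from the provided testSubmissionTemplate.csv.

    # A minimal sketch of producing a submission file: one "bot;deck;win_rate"
    # line for each of the 4 bots and 200 test decks (800 lines in total).
    # BOT_IDS, predict() and the "deck" key are placeholders, not the real names.
    import json

    BOT_IDS = ["bot1", "bot2", "bot3", "bot4"]  # take the real ids from the template file

    def predict(bot_id, deck):
        """Placeholder model returning a win-rate in percent."""
        return 50.0

    with open("testDecks.json", encoding="utf-8") as f:
        test_decks = [json.loads(line) for line in f if line.strip()]

    with open("submission.csv", "w", encoding="utf-8") as out:
        for bot_id in BOT_IDS:
            for deck in test_decks:
                win_rate = min(100.0, max(0.0, predict(bot_id, deck)))
                out.write(f"{bot_id};{deck['deck']};{win_rate:.2f}\n")  # "deck" key is assumed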

Evaluation of results: The submitted solutions will be evaluated online and the preliminary results will be published on the competition leaderboard. The preliminary score will be computed on a small subset of the test records, fixed for all participants. The final evaluation will be performed after completion of the competition using the remaining part of the test records. Those results will also be published online. It is important to note that only teams which submit a report describing their approach before the end of the contest will qualify for the final evaluation. The winning teams will be officially announced during a special session devoted to this competition, which will be organized at the FedCSIS'18 conference.

The assessment of solutions will be done using the RMSE measure. To keep the convention that a higher score means a better result, the scores on the Leaderboard will correspond to minus RMSE values. For every bot-test deck pair, the reference win-rates were computed based only on games with the bots and decks from the training data.
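
In other words, a submission whose predictions are off by about 5 percentage points on average will score roughly -5 on the Leaderboard. A small sketch of this scoring rule, assuming win-rates given in percent, is shown below.

    # A small sketch of the leaderboard score: minus the root-mean-square error
    # between predicted and reference win-rates (both in percent).
    import numpy as np

    def leaderboard_score(predicted, reference):
        predicted = np.asarray(predicted, dtype=float)
        reference = np.asarray(reference, dtype=float)
        return -float(np.sqrt(np.mean((predicted - reference) ** 2)))

    # Predictions that deviate by 5 percentage points everywhere give -5.0.
    print(leaderboard_score([55.0, 45.0, 60.0], [50.0, 50.0, 55.0]))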


Rank | Team Name | Report Submitted | Preliminary Score | Final Score | Submissions
1 | hieuvq | True | -5.4017 | -5.573399 | 2
2 | amy | True | -5.0007 | -5.648201 | 2
3 | jj | True | -4.8043 | -5.667595 | 2
4 | dymitrruta | True | -4.5338 | -5.696228 | 2
5 | amorgun | True | -5.9108 | -5.847379 | 2
6 | a.ruta | True | -5.7275 | -5.860696 | 2
7 | mrgrizz | True | -5.8823 | -5.934092 | 2
8 | kmichael08 | True | -5.7588 | -5.981266 | 2
9 | towca | True | -5.6148 | -6.043132 | 2
10 | adamwitkowski | True | -5.8483 | -6.349185 | 2
11 | karol | True | -6.5144 | -6.667479 | 2
12 | db346864 | True | -7.6573 | -7.441551 | 2
13 | tkacperek | True | -7.3138 | -7.543330 | 2
14 | tgarbus | True | -7.4190 | -7.612707 | 2
15 | kabeem | True | -7.2948 | -7.742122 | 2
16 | leafproduction | True | -7.8085 | -7.779585 | 2
17 | qwerty | True | -7.5112 | -7.840545 | 2
18 | tronowski | True | -7.7784 | -7.863463 | 2
19 | isia | True | -7.7701 | -8.112951 | 2
20 | siwy | True | -8.3782 | -8.116051 | 2
21 | whatisgoingon | True | -8.3352 | -8.387689 | 2
22 | subuk | True | -7.9331 | -8.392594 | 2
23 | bandy | True | -7.3897 | -8.548265 | 2
24 | bayes_brothers | True | -8.7250 | -8.695642 | 2
25 | mw371854 | True | -8.7185 | -8.721913 | 2
26 | borysp | True | -8.9348 | -9.222077 | 2
27 | godul | True | -8.1651 | -9.404691 | 2
28 | msonic | True | -8.8102 | -9.571263 | 2
29 | janekf | True | -10.5465 | -10.535439 | 2
30 | ip360730 | True | -11.3806 | -10.691340 | 2
31 | bearstrikesback | True | -10.5882 | -10.736522 | 2
32 | kasrad | True | -10.1628 | -10.842473 | 2
33 | optymista | True | -10.6682 | -12.168536 | 2
34 | qwymierne | True | -12.4330 | -12.518539 | 2
35 | szarki | True | -14.6353 | -13.033823 | 2
36 | adi_nar | True | -11.6931 | -13.152160 | 2
37 | raulmm7 | True | -15.7891 | -15.137684 | 2
38 | csiluszyk | True | -18.8158 | -17.246092 | 2
39 | sodar | True | -18.8158 | -17.246092 | 2
40 | gg370808 | True | -49.1816 | -49.867121 | 2
41 | kobrar | True | -49.2659 | -49.967878 | 2
42 | alphapred | False | -5.8050 | n/a | 2
43 | asztyber | False | -6.7871 | n/a | 2
44 | dishonesty | False | -7.1665 | n/a | 2
45 | francoisw89 | False | -7.1684 | n/a | 2
46 | rekcahd | False | -7.6432 | n/a | 2
47 | dniwe | False | -8.6005 | n/a | 2
48 | iwannabetheverybest | False | -6.5452 | n/a | 2
49 | baseline_solution | False | -8.8039 | n/a | 2
50 | sadfsahfdasdf | False | -8.9644 | n/a | 2
51 | jvdputten | False | -8.8982 | n/a | 2
52 | lraszkiewicz | False | -9.7730 | n/a | 2
53 | gerstorger | False | -10.7830 | n/a | 2
54 | marseel | False | -10.8871 | n/a | 2
55 | windblade | False | -11.4044 | n/a | 2
56 | dawnfather | False | -11.4558 | n/a | 2
57 | skybinder | False | -11.5093 | n/a | 2
58 | vividfight | False | -11.5116 | n/a | 2
59 | grandgob | False | -11.5303 | n/a | 2
60 | vexfight | False | -11.5658 | n/a | 2
61 | grapplebelt | False | -11.5816 | n/a | 2
62 | swiftroar | False | -11.5856 | n/a | 2
63 | shiftbuttons | False | -11.5950 | n/a | 2
64 | darkshadow | False | -11.6047 | n/a | 2
65 | vadiaceu | False | -11.6931 | n/a | 2
66 | ducktile | False | -11.2168 | n/a | 2
67 | randomseed19 | False | -11.6931 | n/a | 2
68 | ramich | False | -12.6435 | n/a | 2
69 | twsthomas | False | -7.8072 | n/a | 2
70 | doubleloop | False | -19.0317 | n/a | 2
71 | rgod | False | -19.9659 | n/a | 2
72 | hjasud | False | -8.9517 | n/a | 2
73 | mateuszjanczura | False | -49.0983 | n/a | 2
74 | pp332493 | False | -12.5925 | n/a | 2
75 | luki4824 | False | -49.2246 | n/a | 2
76 | adambak | False | -11.6931 | n/a | 2
77 | podludek | False | -11.6931 | n/a | 2
78 | pabloxrl | False | -49.2803 | n/a | 2
79 | mpk | False | -52.8773 | n/a | 2
80 | tbb | False | -1000.0000 | n/a | 2
81 | alexionby | False | -24.4276 | n/a | 2
82 | czlowiekrakieta | False | -1000.0000 | n/a | 2

Teams with "n/a" in the Final Score column did not receive a final score (no report file found or report rejected).
Schedule:

  • April 3, 2018: start of the competition, training data become available,
  • May 6, 2018 (23:59 GMT): deadline for submitting the predictions,
  • May 7, 2018 (23:59 GMT): deadline for sending the reports, end of the competition,
  • May 15, 2018: online publication of the final results and sending of invitations to submit short papers for the special session at FedCSIS'18.

Authors of the top-ranked solutions (based on the final evaluation scores) will be awarded prizes funded by our sponsors:

  • First Prize: 1000 USD + one free FedCSIS'18 conference registration,
  • Second Prize: 500 USD + one free FedCSIS'18 conference registration,
  • Third Prize: one free FedCSIS'18 conference registration.

The award ceremony will take place during the FedCSIS'18 conference (Sep 9-12, 2018, Poznań, Poland).

Organizers:

  • Andrzej Janusz, University of Warsaw & eSensei
  • Maciek Świechowski, Warsaw University of Technology & Silver Bullet Labs
  • Tomasz Tajmajer, University of Warsaw & Silver Bullet Labs
  • Łukasz Grad, University of Warsaw & eSensei
  • Jacek Puczniewski, Silver Bullet Labs & eSensei
  • Dominik Ślęzak, University of Warsaw & eSensei

In case of any questions, please post on the competition forum or send an email to janusza {at} mimuw.edu.pl.
