
ICME 2023 Grand Challenge: Predicting Frags in Tactic Games

ICME 2023 Grand Challenge: Predicting Frags in Tactic Games is the first data science competition organized in association with the IEEE International Conference on Multimedia and Expo (https://www.2023.ieeeicme.org/) conference series and hosted on the KnowledgePit platform. The task continues the topic of the IEEE BigData 2021 Cup: Predicting Victories in Video Games. This time, we ask participants to predict the chances of scoring a frag in the tactical video game Tactical Troops: Anthracite Shift (http://tacticaltroops.net) based on game screenshots and auxiliary information about the game. The competition is sponsored by the IEEE ICME 2023 conference.

Overview

Tactical Troops: Anthracite Shift is a top-down, turn-based game that mixes tactical skills with the feel of 80's sci-fi movies. Players command a troop of soldiers fighting on one of 24 multi-player maps against artificial bots or human opponents. The goal of the challenge is to learn an efficient model for predicting the chances that a player will score a frag in the next game turn (i.e., eliminate at least one troop of the opposing player) using screenshot data and basic information about the game state. A high-quality prediction model for this task will be used in a number of analytical tasks related to the game. It will also help us design better AI-controlled agents for future video games.

More details regarding the task and the description of the challenge datasets are available in the Task description section below. The competition started on February 12 and ended on March 19.

Special session at IEEE ICME 2023: A special session devoted to the challenge will be held at the IEEE ICME 2023 conference. We will invite the authors of selected competition reports to extend them for publication in the conference proceedings (after reviews by Organizing Committee members) and presentation at the conference. The publications will be indexed in the same way as regular conference papers. The invited teams will be chosen based on their final rank, the innovativeness of their approach, and the quality of the submitted report.

Rank | Team Name | Is Report | Preliminary Score | Final Score | Submissions
1 | icme2023 | True | 0.8946 | 0.873600 | 18
2 | NxGTR | True | 0.8412 | 0.820600 | 92
3 | Dymitr | True | 0.8533 | 0.812300 | 441
4 | The Lord of the Machine Learning | True | 0.8385 | 0.806200 | 108
5 | Cyan | True | 0.8249 | 0.800800 | 108
6 | baseline | True | 0.8189 | 0.799400 | 1
7 | goldboy | True | 0.8185 | 0.797700 | 5
8 | witteam | True | 0.8144 | 0.793700 | 2
9 | MB | True | 0.8058 | 0.789200 | 3
10 | sk | True | 0.8120 | 0.785000 | 57
11 | Climber | True | 0.7829 | 0.776900 | 7
12 | hieuvq | True | 0.8184 | 0.773100 | 246
13 | PSO | True | 0.7822 | 0.761500 | 9
14 | fulltank | True | 0.6292 | 0.615100 | 1
15 | kubapok | False | 0.7742 | No report file found or report rejected. | 3
16 | DeepTeam | False | 0.7191 | No report file found or report rejected. | 1
17 | TheArtificialTeam | False | 0.5408 | No report file found or report rejected. | 3

Task description

The task in this competition is to learn how to predict whether the moving player will score a frag during his/her turn of a Tactical Troops: Anthracite Shift game.

The available data has three different modalities, i.e., each data instance is characterized by a brief description of the game state at the beginning of the player's turn, meta-data of the map on which the game is played, and a game screenshot taken at the beginning of the turn. Each screenshot covers the whole game arena and allows the extraction of valuable information, such as the exact positioning of troops with respect to terrain features, their classes, and their primary weapon types. The names of the screenshot files correspond to the identifiers of the training and test instances.
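Since screenshot file names match instance identifiers, pairing an image with its tabular record is straightforward. A minimal sketch in Python is shown below; the screenshots directory name and the .png extension are assumptions, as the description does not state them.

from PIL import Image

def load_screenshot(instance_id, screenshots_dir="screenshots"):
    # Screenshot file names correspond to instance identifiers; the directory
    # name and the .png extension are assumptions about the data layout.
    return Image.open(f"{screenshots_dir}/{instance_id}.png")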

The meta-data of the game maps is composed of collections of bitmap masks and a small metadata.json file corresponding to each map in the game. The bitmap masks indicate special map features, such as static obstacles, impassable terrain, etc. The JSON file contains general information about the map size and enables translation between in-game coordinates and screenshot pixels. For example, if someone wanted to pre-train a model on the task of finding troop positions and their classes on game screenshots, the ground truth positions in pixel coordinates can be computed as:

px = (x - x_min) * game_to_pixel_scale
py = (y_max - y) * game_to_pixel_scale

where x and y are the in-game coordinates from the description of a game state, and x_min, y_max, and game_to_pixel_scale are taken from the map's metadata.json file.
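A minimal sketch of this conversion in Python, assuming metadata.json exposes keys named x_min, y_max, and game_to_pixel_scale (names mirrored from the formula above, not confirmed by the file specification):

import json

def game_to_pixel(x, y, metadata_path):
    # Load the per-map metadata; the key names below mirror the formula above
    # and are an assumption about the actual metadata.json structure.
    with open(metadata_path) as f:
        meta = json.load(f)
    px = (x - meta["x_min"]) * meta["game_to_pixel_scale"]
    py = (meta["y_max"] - y) * meta["game_to_pixel_scale"]
    return px, py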

The training data contains 50000 instances (game turns), and the test set is composed of an additional 10000 cases from a different collection of games. For the training data, information about the target variable, i.e., whether the player was able to score a frag, is available in the table describing the game states; it corresponds to the column named frag. In the test data, the values of that column are missing.
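For orientation, the tabular part of the data can be loaded with pandas, as in the sketch below. The training file name game_states_data_train.csv is an assumption mirroring the test table named later in this description; only the frag column name is confirmed above.

import pandas as pd

# Game-state descriptions for training; the file name is assumed by analogy
# with game_states_data_test.csv mentioned in the solution format below.
train = pd.read_csv("game_states_data_train.csv")

# The target: whether the moving player scored a frag in that turn.
y = train["frag"]
X = train.drop(columns=["frag"])
print(X.shape, y.mean())  # 50000 rows; y.mean() gives the base rate of frags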

Solution format: The predictions for the test instances should be submitted to the online evaluation system as a text file. The file should have exactly 10000 lines, and each line should contain exactly one number from the [0,1] interval indicating the chance that the moving player scores at least one frag during the turn. The ordering of predictions should be the same as the ordering of instances in the game_states_data_test.csv table.
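A minimal sketch of writing such a submission file is given below. The placeholder predictions array stands in for the probabilities produced by a trained model, kept in the order of game_states_data_test.csv; the output file name submission.txt is arbitrary.

import numpy as np

# Placeholder probabilities; replace with your model's outputs for the
# 10000 test instances, ordered as in game_states_data_test.csv.
rng = np.random.default_rng(0)
predictions = rng.random(10000)

predictions = np.clip(predictions, 0.0, 1.0)  # keep values inside [0, 1]
assert len(predictions) == 10000
with open("submission.txt", "w") as f:
    for p in predictions:
        f.write(f"{p:.6f}\n")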

Evaluation: The quality of submissions will be evaluated using the AUC (Area Under the ROC Curve) measure. Solutions will be evaluated online, and the preliminary results will be published on the public leaderboard. The preliminary score will be computed on a small (10%) subset of the test records, fixed for all participants. The final evaluation will be performed after the completion of the competition using the remaining part of the test cases. Those results will also be published online. It is important to note that only teams that submit a report describing their approach before the end of the challenge will qualify for the final evaluation and will be considered for an invitation to submit a paper for the special session at the IEEE ICME 2023 conference.
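Because the metric is AUC, the expected score can be estimated locally on a held-out part of the training data. The following self-contained sketch uses scikit-learn with synthetic stand-in features in place of the real game-state table.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular game-state features and the frag target.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out a validation split and score predicted probabilities with AUC,
# mirroring how the leaderboard evaluates submitted predictions.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
val_prob = clf.predict_proba(X_val)[:, 1]
print("validation AUC:", roc_auc_score(y_val, val_prob))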

Important dates:

  • February 12, 2023: start of the competition, datasets become available,
  • March 19, 2023: deadline for submitting the solutions,
  • March 19, 2023: deadline for sending the reports, end of the competition,
  • March 23, 2023: online publication of the final results, sending invitations for submitting papers to the associated workshop/special session at IEEE ICME 2023,
  • April 17, 2023: deadline for submitting invited papers,
  • April 24, 2023: notification of paper acceptance after reviews by the Competition Organizing Committee,
  • May 1, 2023: camera-ready of accepted papers due.

Authors of the top-ranked solutions (based on the final evaluation scores) will be awarded prizes funded by the IEEE ICME 2023 conference:

  • 1,000 AUD for the winning solution (+ the cost of one IEEE ICME 2023 registration if the author attends the conference)
  • 550 AUD for the 2nd place solution (+ the cost of one IEEE ICME 2023 registration if the author attends the conference)
  • 300 AUD for the 3rd place solution (+ the cost of one IEEE ICME 2023 registration if the author attends the conference)

 

Competition Organizing Committee:

  • Andrzej Janusz, QED Software
  • Rafał Tyl, Silver Bullet Labs
  • Dominik Ślęzak, eSensei Sp. z o.o.
This forum is for all users to discuss matters related to the competition. Good manners apply!
Discussion threads (title, author, replies, last post):

  • Challenge report to be referenced in the paper (author: M, 0 replies; last post by M, Wednesday, April 05, 2023, 11:39:02)
  • Humble request for the three final evaluation entries in the future competitions (author: M, 2 replies; last post by M, Wednesday, March 22, 2023, 05:51:45)
  • Time Zone of the end of the competition March 19, 2023 (author: M, 2 replies; last post by M, Friday, March 17, 2023, 21:11:46)
  • Deadline extension (author: Dymitr, 1 reply; last post by Andrzej, Friday, March 17, 2023, 09:04:26)
  • Whats the private sharing policy? (author: Carlos, 2 replies; last post by Carlos, Thursday, March 09, 2023, 14:46:30)
  • Data files (author: Karol, 1 reply; last post by Andrzej, Monday, February 27, 2023, 14:27:33)