Continuing the trend of contests created by players (rather than by CodinGame employees), this time we find csj, harshch000 & mcargille behind the creation.
I implemented a simulation of the game in C++, which let me reproduce its behaviour. With a Monte Carlo algorithm and a few high-level heuristics, I got a bot that held up fairly well.
I wrote a post-mortem on the CodinGame forum with more details on the implementation (evaluation method, searching for the opponents' actions, etc.). It is in English.
The AI Challenge consists of creating an artificial intelligence: you write a computer program (in any language) that controls an ant colony which must fight other colonies for total domination!
Below is a game between top players, to give an idea of what it looks like:
NB: click the "full screen" button for better viewing comfort, and use the +/- keys to adjust the game speed.
The rules are as follows:
A player gains 2 points for razing an opposing anthill, and loses 1 point per anthill lost.
Each time a player "collects" a piece of food, the food disappears and a new ant is spawned at one of that player's anthills.
Games are played on maps of 2 to 10 players.
Phase I: Defend my hills, attack enemy hills, collect food
This phase computes every route within a certain range that would protect my hills, attack enemy hills or collect food, adds them all to a list, and sorts the list afterwards.
Food routes are less important than defendHills & attackHills ones. I picked a coefficient of 3 to make this distinction: a 3-cell attackHill route is thus equivalent to a 9-cell food route.
Once the filter is applied, I give the "real" orders right away (I won't change them).
The defendHills routine makes my 2 closest ants go towards one enemy, no more.
A piece of food is targeted by only one of my ants (the closest one, if it is not already used for defending or attacking, of course).
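As a rough illustration of that weighting, here is a minimal Python sketch (the actual bot was written in C#; the `Route` type and the `FOOD_COEF` name are mine, not from the original code):

```python
# Hypothetical sketch of the Phase I route ranking: food routes are
# penalized by a coefficient of 3, so a 3-cell attack/defend route
# scores the same as a 9-cell food route.
from dataclasses import dataclass

FOOD_COEF = 3  # assumed name for the coefficient mentioned above

@dataclass(frozen=True)
class Route:
    length: int  # number of cells in the path
    kind: str    # "food", "attackHill" or "defendHill"

def priority(route: Route) -> int:
    """Lower is better; food routes count triple."""
    return route.length * (FOOD_COEF if route.kind == "food" else 1)

routes = [Route(9, "food"), Route(3, "attackHill"), Route(2, "food")]
routes.sort(key=priority)  # attackHill: 3, then food: 2*3=6, then food: 9*3=27
```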
Phase II: Attack close enemy ants
After that I make my free ants attack close enemies. The idea is to "push" the enemy and to diffuse my ants towards the enemy hills.
Phase III: The Sentinel
After that, if I have enough ants, I make one of them stay close to my hill; this way I can see if an enemy is coming. This ant will not stay static, because of the previous calls: if food appears nearby, it will go collect it, which generates a new ant (which will probably take the "Guardian" position, if nothing else is needed). Likewise, if an enemy ant approaches, it will go fight it.
Phase IV: Explore the map
Then, last of all, I call the ExploreMap function, where free ants go explore the map.
I keep a map recording the last turn I saw each cell, in order to send ants somewhere "new".
It probes 8 different directions (E/N/S/W and combinations like N-E).
I start one cell beyond the view range of the picked ant; if all the cells have already been seen, I go one cell further out. (Water tiles are not taken into account.)
The 8 cells are then sorted by the turn they were last seen, and the ant goes towards the "oldest" one.
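The exploration step above could look roughly like the Python sketch below (function and variable names such as `pick_exploration_target` and `last_seen` are mine, not from the original C# bot; also, real Ants maps wrap around the edges, which this sketch ignores):

```python
# Sketch: probe one cell beyond the ant's view range in the 8 compass
# directions and head for the cell whose "last seen" turn is the oldest.
# Water cells are skipped; if every probed cell was seen recently (or is
# water), step one cell further out and try again.

DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0),    # E, W, S, N
              (1, 1), (1, -1), (-1, 1), (-1, -1)]  # and diagonals

def pick_exploration_target(ant, view_range, last_seen, water, max_probe=5):
    """last_seen maps (row, col) -> last turn the cell was visible
    (missing = never seen); water is a set of impassable cells."""
    for extra in range(1, max_probe + 1):
        dist = view_range + extra
        candidates = []
        for dr, dc in DIRECTIONS:
            cell = (ant[0] + dr * dist, ant[1] + dc * dist)
            if cell in water:
                continue  # water tiles are not taken into account
            candidates.append((last_seen.get(cell, -1), cell))
        if candidates:
            candidates.sort()  # oldest "last seen" turn first (-1 = never)
            _oldest_turn, target = candidates[0]
            return target
    return None  # everything around is water
```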
Directions are computed with an A* algorithm; I use the following one, which I modified slightly.
I keep a list of timed-out routes, which saves me from computing long, complicated routes again and again.
I wanted to do the same with successfully computed routes (so I wouldn't have to recompute them when needed again; but they can change when I discover that an unseen tile is actually water). I started filtering them every turn, but it was far too time-consuming.
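For reference, a plain grid A* plus the timed-out-route blacklist might look like this (my own Python reconstruction, not the modified C# implementation the post refers to; the real Ants maps wrap around the edges, which this version ignores):

```python
import heapq

def astar(start, goal, passable, width, height):
    """Plain grid A* with a Manhattan heuristic; returns the path
    (list of cells, start included) or None if the goal is unreachable."""
    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            path = []
            while cell is not None:  # walk the parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < height and 0 <= nxt[1] < width):
                continue
            if not passable(nxt):
                continue
            if g + 1 < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = g + 1
                came_from[nxt] = cell
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None

# Blacklist of (start, goal) pairs that already timed out once, so we
# never burn time on those long, complicated routes again.
timed_out = set()

def find_route(start, goal, passable, width, height, deadline_ok):
    """deadline_ok() is assumed to report whether turn time remains."""
    if (start, goal) in timed_out:
        return None
    if not deadline_ok():
        timed_out.add((start, goal))
        return None
    return astar(start, goal, passable, width, height)
```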
Every time I give an order, I test the "safety" of the area. My implementation is not really good, but it gives a basic idea of the security (it is not optimized, though).
I sorted safety into 3 different categories (following some advice I found on the forum, Memetix® style):
SAFE: no worries, it's OK.
KILL: I can expect a 1-for-1 trade with the enemy.
DIE: it will be worse than that.
A few conditions make my ants decide whether or not they should go into a KILL area, such as: is there food between the enemy and me? Do I have a lot of ants? Etc.
If going is considered unwise, the ant will try a nearby direction, stay put, or even go back in order to avoid death.
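A very rough Python sketch of that three-way classification (my own simplification of the idea, not the actual Memetix combat resolution; all names and the `attack_range` default are assumptions):

```python
# Crude area-safety estimate: count how many enemies could contest the
# target square versus how many friendly ants back it up.
from enum import Enum

class Safety(Enum):
    SAFE = 0  # no enemy can contest the square
    KILL = 1  # roughly even: expect a 1-for-1 trade
    DIE = 2   # outnumbered: expect to lose the ant for nothing

def classify(target, my_ants, enemy_ants, attack_range=2):
    def within(a, b, r):
        # Chebyshev distance, an assumed stand-in for the game's
        # squared-Euclidean attack radius
        return max(abs(a[0] - b[0]), abs(a[1] - b[1])) <= r

    enemies = sum(1 for e in enemy_ants if within(e, target, attack_range))
    friends = sum(1 for f in my_ants if within(f, target, attack_range))
    if enemies == 0:
        return Safety.SAFE
    if friends >= enemies:
        return Safety.KILL
    return Safety.DIE
```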
1. A* Algorithm
First of all, the A* algorithm.
I picked C# because it's the language I use at work and the one I'm most used to. However, while existing A* implementations are legion in C++, there are very few in C#.
I tested 2 that were very highly rated on a source-code website, and started with what I thought was the better of the two. Timeouts occurred as soon as the path was 10-15 tiles long. I developed my bot for 2-3 weeks with this algorithm, making a lot of concessions because of the timeout problems. Then I realized this wasn't normal at all and found a new one, which was described as "not perfect" because it doesn't always give the shortest path.
Not having discovered that one earlier was the worst mistake I made; I lost about a week and a half because of it.
I started by tracking how many ants could see each tile, trying to send them where the tiles were "less" seen. It worked OK, but I had some very bad "cycles" where ants would jitter back and forth.
I switched to the last-turn-seen approach, which gave me better results.
Before my final submission (about 12 hours before :s) I still had some bad behaviours: on some maze maps, ants would stack at one point and jitter all together, as if it were wonderful to finally have some security, or friends, or I don't know what. Anyway, I patched it quickly: any ant that still has no order at the end and is surrounded by more than 7 ants within some small perimeter goes towards the closest enemy.
It's dirty, it's a last-minute modification, but I think it was worth it.
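That de-stacking patch could be sketched like this in Python (names are mine; the threshold of 7 comes from the post, while the "small perimeter" radius is my guess):

```python
# Last-minute anti-stacking patch: any ant left without an order that is
# crammed together with more than CROWD_THRESHOLD friendly ants is sent
# towards the closest enemy instead of jittering in place.
CROWD_THRESHOLD = 7  # from the post: "more than 7 ants"
CROWD_RADIUS = 2     # "some small perimeter" -- exact value is my guess

def destack_orders(idle_ants, all_my_ants, enemy_ants):
    """Return a dict mapping each crowded idle ant to its enemy target."""
    orders = {}
    for ant in idle_ants:
        neighbours = sum(
            1 for other in all_my_ants
            if other != ant
            and max(abs(other[0] - ant[0]), abs(other[1] - ant[1])) <= CROWD_RADIUS
        )
        if neighbours > CROWD_THRESHOLD and enemy_ants:
            # head for the closest enemy (Manhattan distance)
            orders[ant] = min(
                enemy_ants,
                key=lambda e: abs(e[0] - ant[0]) + abs(e[1] - ant[1]),
            )
    return orders
```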
NB: after almost every batch of modifications I made during the last 2 days, I ran a couple of test games (~10) that I was supposed to win against middle-skilled opponents. Whenever the results turned out pretty bad, I could go back and review the previous modifications.
I spent a lot of time on the exploration system. I wanted to go step by step and not rush the bot's global features too soon.
The first implementation I made just tracked enemy positions, built an array of "unsafe" positions and… that's it. It was obviously not good at all: my ants feared every single enemy. I found the Memetix post (link) and tried to implement it, but I guess I screwed up, because I had to make some modifications and the result is far from my expectations. Time went fast and I never got a "proper" fighting system, so I can't really play map control the way the pro players do: keeping walls of ants facing each other.
Anyway, it was a really interesting contest. I'm glad I found it quite early (I think I missed the first 2 weeks, but I still had a month and a half to develop something). I'm also glad that "Fredo", a colleague of mine, followed me into this event, so we could compare and improve our bots. I think the result would have been way worse (yes, that's possible, I'm sure!) if I had been alone on it.
Even if they won't see this, I'd like to thank the organizers for their amazing job. The game engine was awesome, the starter packs perfect, the tools as well. The finals (even though they are still going on) are really interesting. The community was very friendly, and the best players gave tips and shared their source code at the end, which is very nice and very interesting.