Micromanagement Scenarios Tutorial

We train the model introduced in the previous section using Evolution Strategies. To make training more tractable, we apply some reward shaping to encourage the model to learn. We show that it is able to learn interesting behaviors in a variety of situations. To try some of these yourself, check out how the model learns to spread units out in the 10mu_5co scenario, or how it learns to kite in the 1vu_1zl scenario.

Evolution Strategies

Evolution Strategies (ES) is a lightweight, scalable, gradient-free optimization technique. In TorchCraftAI's micro training, we use ES to optimize the weights of the micromanagement policy network.

Here's how ES works in the micro trainer:

  • Initialize the policy network \(\theta\).
  • Randomly perturb the network weights; we use antithetic sampling to speed up convergence.
  • Run episodes with the perturbed weights and measure the associated rewards.
  • Sort (rank) the rewards, then remap them to equally spaced values in [-0.5, 0.5].
  • Weigh the perturbations by the ranked-and-normalized reward.
  • Modify the network weights by the sum of the weighted perturbations.

Mathematically, we do the following steps in a loop:

  1. Generate some noise vectors \(\delta_i\), where \(i = 1, \ldots, n\). For antithetic sampling, we set \(\delta_{2i+1} = -\delta_{2i}\).
  2. Obtain the reward \(r_i\) for each of the \(n\) battles, using model \(\theta + \alpha \delta_i\) for battle \(i\). \(\alpha\) corresponds to the -sigma flag in the codebase.
  3. Calculate the reward transform; we use a rank transform like \(t(r_i) = \frac{\mathrm{rank}(r_i)}{n}\), recentered to \([-0.5, 0.5]\) as described above.
  4. Apply the parameter update \(\theta \leftarrow \theta + \eta \sum_{i=1}^{n} t(r_i) \delta_i\). \(\eta\) is the learning rate, the -lr flag in the codebase.

The ES trainer itself is in cpid/estrainer.cpp. The hyperparameters are command flags specified in flags.h.
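
To make the loop concrete, here is a minimal, self-contained sketch of one ES iteration on a plain parameter vector, following the [-0.5, 0.5] remapping described in the list above. This is only an illustration of the procedure, not the code in cpid/estrainer.cpp; the names (evaluate, esStep, nPairs) are hypothetical, and the toy objective stands in for playing out battles.

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Toy stand-in for "run an episode with these weights and return its reward":
// here, the negative squared distance to an arbitrary target vector of ones.
float evaluate(const std::vector<float>& theta) {
  float loss = 0.0f;
  for (float w : theta) {
    loss += (w - 1.0f) * (w - 1.0f);
  }
  return -loss;
}

// One ES update with antithetic sampling and a rank-based reward transform.
// nPairs antithetic pairs -> n = 2 * nPairs perturbations per update (nPairs >= 1).
void esStep(std::vector<float>& theta, int nPairs, float sigma, float lr) {
  std::mt19937 rng(std::random_device{}());
  std::normal_distribution<float> gauss(0.0f, 1.0f);
  const int n = 2 * nPairs;

  std::vector<std::vector<float>> deltas(n, std::vector<float>(theta.size()));
  std::vector<float> rewards(n);

  // Sample antithetic pairs: the second delta of each pair is the negation of the first.
  for (int i = 0; i < n; i += 2) {
    for (size_t j = 0; j < theta.size(); ++j) {
      deltas[i][j] = gauss(rng);
      deltas[i + 1][j] = -deltas[i][j];
    }
  }

  // Evaluate each perturbed model theta + sigma * delta_i.
  for (int i = 0; i < n; ++i) {
    std::vector<float> perturbed = theta;
    for (size_t j = 0; j < theta.size(); ++j) {
      perturbed[j] += sigma * deltas[i][j];
    }
    rewards[i] = evaluate(perturbed);
  }

  // Rank transform: remap rewards to equally spaced values in [-0.5, 0.5].
  std::vector<int> order(n);
  std::iota(order.begin(), order.end(), 0);
  std::sort(order.begin(), order.end(),
            [&](int a, int b) { return rewards[a] < rewards[b]; });
  std::vector<float> t(n);
  for (int rank = 0; rank < n; ++rank) {
    t[order[rank]] = static_cast<float>(rank) / (n - 1) - 0.5f;
  }

  // theta <- theta + lr * sum_i t(r_i) * delta_i
  for (size_t j = 0; j < theta.size(); ++j) {
    float update = 0.0f;
    for (int i = 0; i < n; ++i) {
      update += t[i] * deltas[i][j];
    }
    theta[j] += lr * update;
  }
}
```

In this sketch, sigma and lr play the roles of the -sigma and -lr flags mentioned above, and evaluating a perturbation corresponds to playing out battles and computing their reward.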

Reward Shaping

Scenarios come with customizable reward functions specified in reward.cpp. We have found this formula effective:

Reward = (1 if win, 0 if loss)/2 + (ratio of enemies killed)/4 + (ratio of friendly units surviving)/8 + (ratio of enemy HP depleted)/16

This reward gives 0.5 for a win. Another 0.25 (1/4) is given proportionally to the ratio of enemies killed, 0.125 (1/8) proportionally to the ratio of our units that survive, and finally 0.0625 (1/16) proportionally to the ratio of enemy HP depleted. In effect, between two games that are both wins, the reward breaks the tie in favor of more enemies killed, then more allies surviving, then more damage dealt.
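
As a concrete illustration of how these terms combine, here is a minimal sketch of the shaped reward. The function and argument names are hypothetical; the real version in reward.cpp computes these quantities from the game state.

```cpp
// Sketch of the shaped reward described above. All ratios are assumed to be
// pre-computed from the game state and to lie in [0, 1]; the names are
// illustrative, not the ones used in reward.cpp.
float shapedReward(
    bool won,                    // did we win the battle?
    float enemiesKilledRatio,    // enemies killed / initial enemy count
    float alliesSurvivingRatio,  // our surviving units / our initial count
    float enemyHpDepletedRatio   // enemy HP removed / initial enemy HP
) {
  return (won ? 1.0f : 0.0f) / 2.0f   // 0.5 for a win
      + enemiesKilledRatio / 4.0f     // up to 0.25 for killing enemies
      + alliesSurvivingRatio / 8.0f   // up to 0.125 for keeping our units alive
      + enemyHpDepletedRatio / 16.0f; // up to 0.0625 for damage dealt
}
```

For example, a win in which we kill every enemy but lose half of our own units scores 0.5 + 0.25 + 0.0625 + 0.0625 = 0.875.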

By experimenting with the reward shapes in reward.cpp, you may come up with something even better.

Results

We trained with the default parameters on a variety of scenarios. We took the model with the highest training reward, tested it on 100 battles, and recorded the win rates. These numbers are not tuned at all, but they can serve as a baseline for how difficult each scenario is and for what you should expect to see from training. You can also see how the choice of opponent can greatly affect the results of training in a scenario. The most striking example is that the attack-weakest-first heuristic does terribly in large battles, whereas attack-move is quite strong in most large-battle settings; this is the difference between a 100% and a 0% win rate in most large-battle scenarios.

Additionally, our initial tutorial leaves plenty of room for improvement. Many of these scenarios are not solved, even though, as mirror matches, they should be easy for an expert human. We encourage you to play with the model and setup on these tasks and see how far you can get!

The name of each scenario is the exact string to pass to -scenario. Here, again, are the unit abbreviations we use:

  • ar - Archon
  • bc - Battlecruiser
  • co - Corsair
  • de - Devourer
  • dn - Drone
  • dr - Dragoon
  • fi - Firebat
  • go - Goliath
  • hy - Hydralisk
  • it - Infested_Terran
  • mr - Marine
  • mu - Mutalisk
  • pr - Probe
  • sc - Scout
  • st - Siege_Tank_Tank_Mode
  • sv - SCV
  • ul - Ultralisk
  • vu - Vulture
  • wr - Wraith
  • zg - Zergling
  • zl - Zealot

Here are the results of training our model on a variety of scenarios:

Symmetric scenarios

These are scenarios that are guaranteed to be balanced. Several of the scenarios with two unit types rely on focus-firing down one of the types first.

[Figure: Goliaths and Battlecruisers]

Test win rates:

Scenario | weakest-closest | closest | attack-move
ar | 0.41 | 0.50 | 0.50
ar+sc | 0.84 | 0.86 | 0.76
bc | 0.36 | 0.89 | 0.75
big_ar | 1.00 | 0.00 | 0.00
big_dr | 0.91 | 0.00 | 0.00
big_fb | 0.99 | 0.00 | 0.00
big_gh | 0.97 | 0.19 | 0.00
big_go | 0.80 | 0.00 | 0.13
big_hy | 1.00 | 0.00 | 0.00
big_it | 1.00 | 1.00 | 1.00
big_mr | 0.93 | 0.09 | 0.00
big_pr | 1.00 | 0.00 | 0.00
big_sv | 1.00 | 0.00 | 0.00
big_st | 0.93 | 0.08 | 0.29
big_ul | 0.00 | 0.00 | 0.00
big_vu | 1.00 | 0.00 | 0.00
big_wr | 1.00 | 0.54 | 0.08
big_zg | 1.00 | 0.00 | 0.00
big_zl | 0.97 | 0.00 | 0.00
co | 0.28 | 0.58 | 0.52
de | 0.00 | 0.00 | 0.00
dn | 0.27 | 0.22 | 0.57
dr | 0.45 | 0.61 | 0.48
dr+sc | 0.20 | 0.57 | 0.61
fi | 0.54 | 0.24 | 0.23
go | 0.39 | 0.72 | 0.66
go+bc | 0.99 | 0.81 | 0.89
go+wr | 0.66 | 0.63 | 0.76
hy | 0.43 | 0.66 | 0.65
hy+mu | 0.19 | 1.00 | 0.93
it | 0.99 | 1.00 | 1.00
mr | 0.17 | 0.33 | 0.43
mr+wr | 0.39 | 0.61 | 0.52
mu | 0.28 | 0.89 | 0.91
pr | 0.14 | 0.33 | 0.54
sv | 0.29 | 0.10 | 0.42
st | 0.45 | 0.77 | 0.56
ul | 0.32 | 0.57 | 0.50
vu | 0.22 | 0.48 | 0.67
wr | 0.43 | 0.49 | 0.63
xzl+ydr_xzl+ydr | 0.24 | 0.08 | 0.13
zg | 0.20 | 0.13 | 0.28
zl | 0.30 | 0.29 | 0.25

Formation-based scenarios

These formation-based scenarios ideally involve the units regrouping before fighting. However, a few of them can achieve good win rates with good focus fire alone. Our model is not able to learn the regrouping tactic.

[Figure: Mutalisks]

Test win rates:

Scenario | weakest-closest | closest | attack-move
conga_ar | 0.47 | 0.69 | 0.03
conga_dr | 0.00 | 0.00 | 0.01
conga_fb | 0.24 | 0.11 | 0.47
conga_mr | 0.29 | 0.43 | 0.69
conga_mu | 0.24 | 0.99 | 0.79
conga_pr | 0.74 | 0.01 | 0.88
conga_sv | 0.52 | 0.00 | 0.29
conga_ul | 0.20 | 0.09 | 0.00
conga_zg | 0.00 | 0.03 | 0.64
conga_zl | 0.04 | 0.04 | 0.00
surround_ar | 0.67 | 0.53 | 0.32
surround_dr | 0.08 | 0.01 | 0.64
surround_fb | 0.59 | 0.22 | 0.10
surround_mu | 0.24 | 0.85 | 0.90
surround_pr | 0.06 | 0.01 | 0.52
surround_sv | 0.22 | 0.01 | 0.28
surround_ul | 0.25 | 0.04 | 0.12
surround_zg | 0.02 | 0.00 | 0.00
surround_zl | 0.12 | 0.01 | 0.05

Asymmetric scenarios

Many of these scenarios rely on kiting to defeat a slower, melee-range opponent. A few rely on spreading units apart to avoid splash damage.

[Figure: Mutalisks vs Corsairs]

[Figure: Marines vs Zerglings]

Test win rates:

Scenario | weakest-closest | closest | attack-move
10mr_13zg | 0.90 | 0.90 | 0.97
10mu_5co | 0.90 | 0.88 | 0.92
15mr_16mr | 0.06 | 0.00 | 0.00
15wr_17wr | 0.30 | 0.00 | 0.00
1dr_1zl | 1.00 | 1.00 | 1.00
1dr_3zg | 0.31 | 0.55 | 0.75
1go_2zl | 0.00 | 0.00 | 0.00
1mu_3mr | 0.00 | 0.00 | 0.00
1st_2zl | 0.96 | 1.00 | 0.95
1vu_1hy | 0.76 | 0.72 | 0.93
1vu_1zl | 1.00 | 1.00 | 0.98
1vu_3zg | 0.97 | 0.99 | 0.95
2dr+3zl_2dr+3zl | 0.46 | 0.28 | 0.50
2dr_3zl | 0.66 | 0.81 | 0.14
2hy_1sst | 1.00 | 1.00 | 1.00
2mr_1zl | 0.03 | 0.03 | 0.00
2vu_7zg | 0.98 | 1.00 | 0.92
30zg_10zl | 1.00 | 1.00 | 1.00
3dr_10zg | 0.00 | 0.00 | 0.00
3go_8zl | 0.00 | 0.00 | 0.00
3hy_2dr | 0.01 | 0.10 | 0.01
3mr_3mr | 0.36 | 0.64 | 0.64
3mu_9m3 | 0.00 | 0.00 | 0.00
3st_7zl | 0.56 | 0.95 | 0.89
3vu_11zg | 0.97 | 0.99 | 0.99
3vu_3hy | 0.56 | 0.14 | 0.00
4hy_2sst | 0.00 | 0.00 | 0.00
5mr_5mr | 0.34 | 0.47 | 0.45
5mu+20zg_5gl+5vu | 0.93 | 0.85 | 0.75
5vu_10zl | 0.04 | 0.00 | 0.01
5wr_5wr | 0.39 | 0.47 | 0.76
6mr_4zl | 0.00 | 0.00 | 0.00
7zg_2gl | 0.00 | 0.07 | 0.13
8mu_5co | 0.10 | 0.21 | 0.10
vu_zl | 0.28 | 0.35 | 0.01

Other scenarios

These scenarios do not involve combat. The rewards are specifically tailored to each scenario. In all of the hug* scenarios, the reward is negative distance, so smaller distances are better. In popoverlords, the reward is the negative of the time taken to kill all the Overlords. In ignorecivilians, the reward is 1 for a perfect run and is reduced toward 0 for each civilian we kill. We can see that the model focuses the High Templar, but is not able to learn to completely ignore all civilians. One tactic we observed is that the model would oscillate between two civilians, so it never managed to kill any.
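
As a rough sketch of what a "negative distance" reward of this kind could look like, here is a minimal example; the struct, function, and field names are hypothetical, not those used in reward.cpp.

```cpp
#include <cmath>
#include <vector>

// Hypothetical 2D position of one of our units.
struct UnitPos {
  float x;
  float y;
};

// Rough sketch of a "negative distance" reward in the spirit of the hug*
// scenarios: the closer our units end up to the goal location, the higher
// (less negative) the reward. Names are illustrative only.
float hugReward(const std::vector<UnitPos>& units, float goalX, float goalY) {
  if (units.empty()) {
    return 0.0f;
  }
  float total = 0.0f;
  for (const auto& u : units) {
    total += std::hypot(u.x - goalX, u.y - goalY);
  }
  return -total / units.size();
}
```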

[Figure: Ignore civilians]

Scenario | Test reward
hugmiddle | -21.24
hugmiddleeasy | 0.00
hugoverlords | -37.18
ignorecivilians | 0.88
popoverlords | -5986.96