How did Google's DeepMind train AlphaStar to play Starcraft 2 with reinforcement learning and did that mean running the Starcraft game in an accelerated way billions of times to train the neural network?
AlphaStar's behaviour is generated by a deep neural network that receives input data from the raw game interface (a list of units and their properties), and outputs a sequence of instructions that constitute an action within the game. More specifically, the neural network architecture applies a transformer torso to the units (similar to relational deep reinforcement learning), combined with a deep LSTM core, an auto-regressive policy head with a pointer network, and a centralised value baseline. We believe that this advanced model will help with many other challenges in machine learning research that involve long-term sequence modelling and large output spaces such as translation, language modelling and visual representations.
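To make the pointer-network part of that architecture concrete, here is a minimal NumPy sketch of the core idea: the policy scores every unit in the observation against the current decoder state and emits a distribution over units, so the action space adapts to however many units are on the map. All names, shapes, and the dot-product scoring rule are illustrative assumptions, not AlphaStar's actual implementation.

```python
import numpy as np

def pointer_select(query, unit_embeddings):
    """Pointer-network step (toy sketch): score each unit embedding
    against the decoder query and return a softmax distribution over
    units. Dimensions and scoring are illustrative, not AlphaStar's."""
    scores = unit_embeddings @ query          # one score per unit
    scores = scores - scores.max()            # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs

rng = np.random.default_rng(0)
units = rng.normal(size=(5, 8))   # 5 units, each an 8-dim embedding
query = rng.normal(size=8)        # hypothetical state from the LSTM core
probs = pointer_select(query, units)
chosen = int(np.argmax(probs))    # index of the unit the action targets
```

Because the output is a distribution over the input units themselves, the same network handles 5 units or 50 without changing its output layer, which is why pointer networks suit variable-size action spaces like unit selection.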
AlphaStar also uses a novel multi-agent learning algorithm. The neural network was initially trained by supervised learning from anonymised human games released by Blizzard. This allowed AlphaStar to learn, by imitation, the basic micro and macro-strategies used by players on the StarCraft ladder. This initial agent defeated the built-in "Elite" level AI - around gold level for a human player - in 95% of games.
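The supervised-learning phase described above amounts to behaviour cloning: the network is trained to maximise the likelihood of the action a human player actually took in each recorded state. A toy sketch of that loss, with made-up shapes and a three-action toy space standing in for StarCraft's real action space:

```python
import numpy as np

def imitation_loss(logits, human_action):
    """Behaviour-cloning loss (toy sketch): negative log-likelihood of
    the action the human player took, given the network's raw scores.
    The 3-action space and values here are purely illustrative."""
    logits = logits - logits.max()                     # stable softmax
    log_probs = logits - np.log(np.exp(logits).sum())  # log-softmax
    return -log_probs[human_action]

logits = np.array([2.0, 0.5, -1.0])   # network scores for 3 toy actions
loss = imitation_loss(logits, human_action=0)
```

Minimising this over a large corpus of human replays pushes the policy toward human play, which is how the initial agent picked up ladder-level micro and macro before any self-play began.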
To answer your question: I don't think it played billions or millions of games. You need orders of magnitude fewer games (perhaps in the tens or hundreds of thousands) to get the neural network to a level where it can beat humans. Rather than worrying about the number of games played, I would look at the number of neural network agents playing the game against each other, the conditions under which the games are played, and the overall rules of the game as played by the agents as the true measuring sticks for complexity in this situation.
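The "many agents playing each other" idea can be sketched as a league round in which every agent is matched against a randomly drawn opponent. This is a toy illustration of multi-agent self-play under my own assumptions; AlphaStar's actual league used far more sophisticated matchmaking and exploiter agents.

```python
import random

def league_round(agents, play_game):
    """One round of league-style self-play (toy sketch): each agent
    plays one randomly chosen opponent and the result is recorded.
    `play_game` is a hypothetical callback returning True on a win."""
    results = []
    for agent in agents:
        opponent = random.choice([a for a in agents if a is not agent])
        results.append((agent, opponent, play_game(agent, opponent)))
    return results

random.seed(0)
agents = ["A", "B", "C"]                       # stand-ins for trained policies
outcome = league_round(agents, lambda a, b: a < b)  # toy win rule
```

The point of the sketch is that training signal comes from the population itself: as the pool of opponents grows and diversifies, each agent must learn strategies that are robust against all of them, not just one fixed adversary.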