It's my understanding that the most state-of-the-art chess and go AIs work like this:
- You program in the rules to the game.
- The computer plays itself a bazillion times — choosing moves more or less at random at first, then increasingly guided by what it has learned so far — and records the results.
- The computer then uses various machine learning algorithms to learn what the characteristics of moves-that-lead-to-victory are.
- Then you have it play a game by picking the move that is the most "move-that-leads-to-victoryish" every time it gets to move.
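The whole recipe above can be sketched in a few dozen lines. This is a toy illustration only — I'm using a trivial Nim-style game as a stand-in for "the rules", since programming in actual chess (let alone ASL) is the hard part. All the names here are mine:

```python
import random
from collections import defaultdict

class Nim:
    """Stand-in for 'program in the rules': a pile of counters, players
    alternately take 1 or 2, whoever takes the last counter wins."""
    def __init__(self, pile=7):
        self.pile = pile
    def initial_state(self):
        return (self.pile, 0)            # (counters left, player to move)
    def legal_moves(self, state):
        return [n for n in (1, 2) if n <= state[0]]
    def apply(self, state, move):
        return (state[0] - move, 1 - state[1])
    def is_over(self, state):
        return state[0] == 0
    def winner(self, state):
        return 1 - state[1]              # the player who just moved took the last one

def random_policy(game, state):
    return random.choice(game.legal_moves(state))

def self_play(game, policy, n_games):
    """Step 2: play a pile of games against itself, recording
    (state, move, eventual-winner) for every decision made."""
    records = []
    for _ in range(n_games):
        state, history = game.initial_state(), []
        while not game.is_over(state):
            move = policy(game, state)
            history.append((state, move))
            state = game.apply(state, move)
        w = game.winner(state)
        records.extend((s, m, w) for s, m in history)
    return records

def learn_win_rates(records):
    """Step 3: for each (state, move), how often did the player making
    that move eventually win?"""
    wins, plays = defaultdict(int), defaultdict(int)
    for state, move, winner in records:
        plays[(state, move)] += 1
        if winner == state[1]:
            wins[(state, move)] += 1
    return {k: wins[k] / plays[k] for k in plays}

def greedy_policy(rates):
    """Steps 4-5: always pick the most 'move-that-leads-to-victoryish' option."""
    def policy(game, state):
        return max(game.legal_moves(state),
                   key=lambda m: rates.get((state, m), 0.0))
    return policy
```

Real engines replace the lookup table with a neural network and the random play with guided tree search, but the loop — rules, self-play, learn, exploit — is the same shape.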
There isn't anything about this approach that wouldn't work for ASL; it would just be much, much harder at every step. The rules would be far, far harder to program in, and you would need many, many more simulated games, for two reasons:
1) An ASL player makes many, many more decisions than a chess player. Over the course of a chess game you might make as many as 50 decisions. You make more decisions than that in defensive fire alone: every time your opponent spends a MF/MP, you have to decide, for every single unit in your OB, whether it fires or not.
2) Each of those decisions is much more complex in ASL than it is in chess. If you want to move a rook, there are at most 14 squares you could move it to. But if you want to move a squad, there could easily be thousands of different moves you could make. Even something comparatively simple like firing a squad is tough: a 467 has 468 hexes it could possibly target (assuming LOS and one location per hex)! Sure, most of them will be empty, but good players fire at empty hexes sometimes, so that's a possibility the computer needs to consider.
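To put rough numbers on that gap, here's a back-of-envelope sketch. The arithmetic is mine, not anything from the rulebook, and it deliberately ignores terrain costs, assault movement, smoke, CX, and so on (all of which add options, not remove them):

```python
# A rook on an otherwise empty board: at most 7 squares along its rank
# plus 7 along its file.
rook_moves = 7 + 7                       # 14

# A squad with 4 MF on open ground: each hex has 6 neighbours, and the
# squad may stop after any number of steps.  Counting every sequence of
# 1 to 4 steps as a distinct move:
squad_paths = sum(6**k for k in range(1, 5))

print(rook_moves, squad_paths)           # → 14 1554
```

Some of those 1554 paths double back or would cost more than 4 MF on real terrain, so the number is crude in both directions — the point is just the two orders of magnitude between the games, before you even add fire and other decisions on top.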
You might be able to explore most strategies in chess by simulating 100 billion games, but 100 billion games of ASL wouldn't let you explore enough of the possibilities to find all the good strategies (maybe not even enough to find a single good one!). And you would need even more simulated games to find strategies that are useful across scenarios.
Humans don't have this problem, because our brains automatically filter out all of the quadrillions of dumb options (like moving a squad away from the objective in a zig-zag, then tossing smoke in its own hex; or an AFV moving into woods for no reason) and focus on only a handful of good ones. If you can program some heuristics into your AI to ignore obviously dumb strategies, then you might be able to simulate enough games to do something reasonable. You have to be careful, though, because this limits the capabilities of the AI. As noted upthread, the chess and go AIs came up with worldbeating strategies that no human had ever thought of, because all the humans thought they seemed obviously dumb, but they weren't. So pruning out obviously dumb strategies could prevent the AI from finding strategies that are actually innovative, but just seem wrong to the programmer.
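That pruning idea, and its built-in risk, fits in a few lines. Everything here is hypothetical — the move representation and heuristic names are mine, not from any real ASL engine:

```python
def prune(moves, dumb_tests, keep=10):
    """Discard any candidate flagged as 'obviously dumb' by a heuristic,
    then cap what's left.  NOTE: anything the predicates reject is
    invisible to the AI forever -- this is exactly where an innovative
    strategy could get thrown away by the programmer's prejudices."""
    survivors = [m for m in moves if not any(t(m) for t in dumb_tests)]
    return survivors[:keep] if survivors else moves[:1]   # never return nothing

# Hypothetical heuristics matching the examples above:
def moves_away_from_objective(move):
    return move.get("delta_to_objective", 0) > 0

def smokes_own_hex(move):
    tgt = move.get("smoke_target")
    return tgt is not None and tgt == move.get("own_hex")

candidates = [
    {"id": 1, "delta_to_objective": -2},               # advances: keep
    {"id": 2, "delta_to_objective": 3},                 # zig-zags away: pruned
    {"id": 3, "smoke_target": "J5", "own_hex": "J5"},   # smokes itself: pruned
    {"id": 4, "delta_to_objective": -1},                # advances: keep
]
kept = prune(candidates, [moves_away_from_objective, smokes_own_hex])
print([m["id"] for m in kept])   # → [1, 4]
```

Only the surviving candidates would ever get simulated, which is the whole speed-up — and the whole danger.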
My guess is that this is such a hard problem that it will only be solvable by:
- some sort of massive increase in computing power (e.g., practical quantum computing or a P=NP algorithm), or
- a fully general AI.
If it were solved, though, I'm in the camp that such an AI would be unbeatable, but very exciting to play against.