How would a supercomputer handle ASL?

bendizoid

Official ***** Dickweed
Joined
Sep 11, 2006
Messages
4,630
Reaction score
3,244
Location
Viet Nam
Country
United States
The AI would play a few billion games a second against itself and would improve very rapidly. Ultimately, it would always make the move that maximizes its chance of winning. Take heart: it would still probably lose, say, 1 game in 20 to bad luck against an opponent who plays very well.
 

Tuomo

Keeper of the Funk
Joined
Feb 10, 2003
Messages
4,652
Reaction score
5,537
Location
Rock Bottom
Country
United States
Computers can master any intelligence task that humans can master, plus many that they cannot.
I think that's an oversimplification. I think it's true that they can do a surprising number of things well, but my gut says that humans will remain competitive in tasks requiring synthesis from incomplete data, creativity, and drawing parallels from fields that may not seem related.

Will that be enough to win any given scenario against an AI? Depends on the scenario and the dice.

Even if the human loses, he or she will be able to walk away from the table and still be a world-class general-purpose processor, able to do well in an astonishing variety of challenges involving all human senses. The ASL AI will just sit there and be a failure at everything else except ASL.

Me, I'm taking the human.

Tom
 

boylermaker

Senior Member
Joined
Jan 22, 2012
Messages
581
Reaction score
526
Location
Virginia
Country
United States
It's my understanding that the most state-of-the-art chess and Go AIs work like this:

  1. You program in the rules to the game.
  2. The computer plays itself a bazillion times, choosing moves ~randomly, and records the results.
  3. The computer then uses various machine learning algorithms to learn what the characteristics of moves-that-lead-to-victory are.
  4. Then you have it play a game by picking the move that is the most "move-that-leads-to-victoryish" every time it gets to move. (A rough code sketch of this loop follows the list.)
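A minimal sketch of that loop in Python, assuming a hypothetical Game object with legal_moves(), apply(), winner(), state_key(), and to_move; none of these names come from any real engine, and actual programs replace the lookup table below with a neural network:

```python
import random
from collections import defaultdict

def self_play_game(game, policy):
    """Play one game to completion, recording (state, move, mover) triples."""
    history = []
    while game.winner() is None:
        move = policy(game)                          # ~random at first
        history.append((game.state_key(), move, game.to_move))
        game = game.apply(move)
    return history, game.winner()

def train(new_game, n_games=100_000):
    """Steps 2-3: self-play with a random policy, then score (state, move) pairs."""
    stats = defaultdict(lambda: [0, 0])              # [wins for the mover, visits]
    random_policy = lambda g: random.choice(g.legal_moves())
    for _ in range(n_games):
        history, winner = self_play_game(new_game(), random_policy)
        for state, move, mover in history:
            entry = stats[(state, move)]
            entry[0] += (mover == winner)
            entry[1] += 1
    # Step 4: the learned policy picks the most "move-that-leads-to-victoryish" move.
    def learned_policy(game):
        def win_rate(m):
            wins, visits = stats[(game.state_key(), m)]
            return wins / visits if visits else 0.5  # unseen move: neutral prior
        return max(game.legal_moves(), key=win_rate)
    return learned_policy
```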
There isn't anything about this approach that wouldn't work for ASL; it would just be much, much harder at every step. The rules would be far, far harder to program in, and you would need many, many more simulated games. This is for two reasons:

1) An ASL player makes many, many more decisions than a chess player. Over the course of a chess game you might make as many as 50 decisions. You make more decisions than that in defensive fire alone: every time your opponent spends an MF/MP, you have to decide whether every single unit in your OB is going to fire or not.

2) Each of those decisions is much more complex in ASL than it is in chess. If you want to move a rook, there are at most 14 squares you could move it to. But if you want to move a squad, there could easily be thousands of different moves you could make. Even something comparatively simple like firing a squad is tough: a 467 has 468 hexes it could possibly target (assuming LOS and one location per hex)! Sure, most of them will be empty, but good players fire at empty hexes sometimes, so that's a possibility the computer needs to consider.
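To put rough numbers on that branching factor, some back-of-the-envelope arithmetic; the hex-grid formula is standard, but the movement assumptions (4 MF, open ground, ignoring routes and in-hex options) are illustrative only:

```python
def hexes_within(radius):
    """Hexes within `radius` of a starting hex on an unbounded hex grid."""
    return 1 + 3 * radius * (radius + 1)

rook_destinations = 7 + 7             # a rook moves along its rank and file: at most 14
squad_destinations = hexes_within(4)  # a 4-MF squad in open ground: 61 hexes, and each
                                      # destination can be reached by several routes,
                                      # with assault movement, double time, smoke
                                      # attempts, etc. multiplying the options further
print(rook_destinations, squad_destinations)
```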

You might be able to explore most strategies in chess by simulating 100 billion games, but 100 billion games of ASL wouldn't be enough for you to explore enough of the possibilities to find all the good strategies (maybe not enough to find a single good strategy!). And you would need even more simulated games to find strategies that are useful across scenarios.

Humans don't have this problem, because our brains automatically filter out all of the quadrillions of dumb options (like moving a squad away from the objective in a zig-zag, then tossing smoke in its own hex; or an AFV moving into woods for no reason) and focus on only a handful of good ones. If you can program some heuristics into your AI to ignore obviously dumb strategies, then you might be able to simulate enough games to do something reasonable. You have to be careful, though, because this limits the capabilities of the AI. As noted upthread, the chess and Go AIs came up with world-beating strategies that no human had ever thought of, because all the humans thought they seemed obviously dumb, but they weren't. So pruning out obviously dumb strategies could prevent the AI from finding strategies that are actually innovative, but just seem wrong to the programmer.
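A sketch of what such pruning could look like; the helper predicates are invented stand-ins for real ASL judgment, and the danger described above is exactly that whatever they reject, the search will never see:

```python
def candidate_moves(game, unit):
    """Cut the raw legal-move list down to 'not obviously dumb' options."""
    def plausible(move):
        # Each test is a hand-written heuristic (hypothetical helper methods),
        # so it encodes the programmer's idea of "dumb" -- right or wrong.
        if game.moves_away_from_objective(unit, move):
            return False
        if game.is_afv(unit) and game.enters_woods_for_no_gain(unit, move):
            return False
        return True

    moves = game.legal_moves(unit)
    pruned = [m for m in moves if plausible(m)]
    return pruned or moves            # never prune to nothing; fall back to everything
```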

My guess is that this is such a hard problem that it will only be solvable by:
  1. some sort of massive increase in computing ability (e.g., practical quantum computing or a constructive P = NP result)
  2. a fully general AI.
If it were solved, though, I'm in the camp that thinks the AI would be unbeatable, but very exciting to play against.
 

Actionjick

Forum Guru
Joined
Apr 23, 2020
Messages
7,466
Reaction score
4,992
Location
Kent, Ohio
First name
Darryl
Country
United States
It's my understanding that the most state-of-the-art chess and Go AIs work like this: [...]
Interesting post and an interesting thread.
 

MajorDomo

DM? Chuck H2O in his face
Joined
Sep 1, 2003
Messages
3,179
Reaction score
1,025
Location
Fluid
Country
United States
Actually, current chess engines can "solve" all 5x5 and smaller positions with brute-force calculation.

5x5 means that all possible chess positions involving 5 or fewer pieces on each side are in the computer's database. So the computer will never lose a winning or drawn position, and it will be able to shift a losing position to a draw or a win should its opponent make an imprecise move.

Most of a chess game is outside of this 5x5 matrix.

Outside the 5x5, the AI uses its evaluation algorithm. The algorithm weighs the pluses and minuses of each move and assigns it a score, and the highest-scoring move among all those evaluated is chosen.

This is "reasoning" very similar to human evaluation. The advantage the computer has is that it considers all moves to several levels (i.e., responses, next moves, responses...). The more limited human mind will usually consider only some of the possibilities. The better the human player, the more effectively their mind will explore the relevant possibilities and pare off less promising branches.
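In code, the "evaluate and look several levels deep" idea is plain minimax; evaluate here is a placeholder scoring function rather than any particular engine's, and real programs add alpha-beta pruning to pare off the less promising branches automatically:

```python
def minimax(position, depth, maximizing, evaluate):
    """Score a position by searching `depth` plies ahead (no pruning, for clarity)."""
    if depth == 0 or position.is_terminal():
        return evaluate(position)       # pluses and minuses summed into one number
    scores = [minimax(position.apply(m), depth - 1, not maximizing, evaluate)
              for m in position.legal_moves()]
    return max(scores) if maximizing else min(scores)

def best_move(position, depth, evaluate):
    """Choose the move whose resulting position scores best for the side to move."""
    return max(position.legal_moves(),
               key=lambda m: minimax(position.apply(m), depth - 1, False, evaluate))
```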

While the ASL rules would be challenging to encode for the computer, it certainly can be done (without necessarily putting Klaus' brain in a fish tank, like in a movie I once saw).

Then the computer could create its own evaluation algorithm by playing a couple billion scenarios while you slept, ate or whatever.

As noted earlier, luck does affect some scenarios. But just as Steve Pleva would win an ASL series against, say, one of us, the computer would do the same versus Steve.

In the Northwestern Chess 3.0 era, humans needed to provide, refine and code the AI. Today, computers can learn on their own. I did my MS paper on AI applied to computer endgames in the prehistoric days.
 

Sparafucil3

Forum Guru
Joined
Oct 7, 2004
Messages
11,335
Reaction score
5,070
Location
USA
First name
Jim
Country
United States
Computers can master any intelligence task that humans can master, plus many that they cannot. The learning computers are extending civilizations knowledge in many areas and will continue.
Computers can't efficiently solve NP-hard problems (assuming P ≠ NP); there are limits. I am not sure ASL is such a problem. Setting aside the counters, there are more than 70 ASL boards, which can be set up singly, with any other board in configurations of two or more boards, with boards rotated in multiple ways, with various terrain alterations and overlays, etc. Each of those configurations would need to be played exhaustively for the AI to understand the game. Now, presume it needs to play 10K+ games on each of those configurations to begin to learn how to do it. As I said up-thread, I don't think it is something it will be able to do any time soon, but I don't think it is outside the realm of the possible. -- jim
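A quick back-of-the-envelope illustration of that scale; every number below is a placeholder guess, not a measured figure:

```python
# Rough training budget; all three inputs are assumptions for illustration only.
configurations   = 40_000      # two-board setups alone, before overlays and SSRs
games_per_config = 10_000      # "10K+ times on each ... to begin to learn"
games_per_second = 1           # a full ASL scenario involves thousands of decisions

total_games = configurations * games_per_config
years = total_games / games_per_second / (60 * 60 * 24 * 365)
print(f"{total_games:,} games = roughly {years:,.0f} years of nonstop self-play")
```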
 

MichalS

Member
Joined
Feb 23, 2018
Messages
91
Reaction score
88
Location
Bratislava & Wien
First name
Michal
Country
Slovakia
From what I've seen of Go AIs, the human time investment required to develop such a single-purpose AI makes it quite unrealistic. At the same time, I do not believe the task could be achieved by a general AI (if indeed there is such a thing).

Nevertheless, if it were to go forward, it would most probably have a number of special-purpose modules, such as:
  • hard data module: units, mapboards, rules...
  • probability calculator module: the simplest of all
  • problem decomposition module: breaking the VC down into a set of discrete and manageable problems
  • strategy design module, probably with heuristic/stochastic elements due to the number of options involved
  • (strategy) evaluation module: now this one is tricky, since what is considered an asset or success is highly situation dependent
  • tactical problem solving module: this one is actually not so difficult, due to the limited number of variables and values involved
  • anything else?
At the same time, I do not believe that it is so easy to achieve machine learning just by letting the AI play against itself. The problem is: it is playing against itself! The effectiveness of its behaviour is supposed to be judged by the same standard that produces the behaviour. I don't think that works. Do you think a dedicated human novice player could become an elite player just by playing against him- or herself? Machine learning requires some objective baseline of success in order to achieve said learning - if you want AIs to design bridges, you need to program an objective physics simulator so that the AIs can test success. Playing against itself hardly provides such an objective baseline. (I wonder if splitting the AI into several discrete players could lead towards incremental improvement, hm...)
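On that parenthetical idea of splitting the AI into several discrete players: a toy sketch of population-based self-play, where a change is only kept if it beats the current pool; make_agent, mutate, and play are hypothetical stand-ins, not any real library's API:

```python
import random

def population_self_play(make_agent, mutate, play, pool_size=8, generations=1000):
    """Evolve a pool of distinct agents; wins against the pool are the yardstick."""
    pool = [make_agent() for _ in range(pool_size)]
    for _ in range(generations):
        child = mutate(random.choice(pool))
        # The child is judged against the *other* agents, never against itself,
        # which is one way to squeeze a semi-external standard out of pure self-play.
        wins = sum(play(child, opponent) for opponent in pool)
        if wins > pool_size // 2:
            pool[random.randrange(pool_size)] = child   # the child displaces someone
    return pool
```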
 

MichalS

Member
Joined
Feb 23, 2018
Messages
91
Reaction score
88
Location
Bratislava & Wien
First name
Michal
Country
Slovakia
... in other words, rather than talking abstractly about the cognitive load of playing ASL, one would have to look at the actual mental operations human ASL players perform while playing and try to model those. At the same time, since a computer's hardware is not built like a human brain, after successfully emulating human faculties the engineers would develop (or prompt the AI to develop) its own decision-making processes, which, I assume, would be superior to the human ones.
 

MichalS

Member
Joined
Feb 23, 2018
Messages
91
Reaction score
88
Location
Bratislava & Wien
First name
Michal
Country
Slovakia
Also, how would the AI be expected to handle tasks that are more or less trivial for a machine (LOS and VBM adjudication, probability calculation, never forgetting a rule or a unit or its capabilities, such as special ammo or smoke) but where human players frequently make mistakes? Should it be programmed to err now and then, or are you supposed to be playing against a truly inhuman opponent?
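The "trivial for a machine" part really is trivial; for example, the exact 2d6 odds behind a morale check or a to-hit number take only a few lines (nothing below is tied to a specific chart, it is just dice arithmetic):

```python
from itertools import product
from fractions import Fraction

def p_2d6_at_most(target):
    """Exact probability that a 2d6 roll is <= target."""
    rolls = [a + b for a, b in product(range(1, 7), repeat=2)]
    return Fraction(sum(r <= target for r in rolls), len(rolls))

for target in (7, 8, 9):
    p = p_2d6_at_most(target)
    print(f"2d6 <= {target}: {p} = {float(p):.1%}")
```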
 

Gordon

Forum Guru
Joined
Apr 6, 2017
Messages
2,488
Reaction score
2,940
Country
United States
We already have a way to create opponents like that, it's called "sex", it just takes a while.
 

boylermaker

Senior Member
Joined
Jan 22, 2012
Messages
581
Reaction score
526
Location
Virginia
Country
United States
Machine learning requires some objective baseline of success in order to achieve said learning - if you want AIs to design bridges, you need to program an objective physics simulator so that AIs can test success. Playing against itself hardly provides such an objective baseline. (I wonder if splitting the AI into several discrete players could lead towards incremental improvement, hm...)
Well, in a trivial sense, there is the clearest possible objective baseline: each scenario's VCs are known, so the AI would know whether its attacker behavior led to a win for the attacker or not, and likewise for the defender.

In general, these AIs don't really play games in the way that we do, where we have the past of the game in our memory and make moves in the present with future moves in mind. The AI just sees the board, determines that the best move at the moment is to move a rook to this square, and then shuts down until you show it a new board. If you move one of your pieces, now it has a new board to play with, and it figures out what to do with that situation. But if you shuffled up all the pieces into a totally new arrangement, it wouldn't faze the AI; it would just figure out its best move for the new board. It can still pull off long-term strategies, doing things like making short-term sacrifices for long-term gains, but it doesn't know that it has a plan or a strategy in the sense that we do.

So the point of playing games against itself isn't that it helps the AI learn strategies or tactics or whatnot. It is to produce a huge database of possible moves. Then in the future, when it is playing you, it can look through the database for similar situations, and see which moves led to success and which moves led to failure.
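In the simplified "database of situations" picture above, move selection might look like the sketch below; real systems compress the database into a neural network rather than storing positions, and Record and similarity are invented names:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Record:
    state: Any      # a position seen during self-play
    move: Any       # the move that was chosen there
    won: bool       # did the side that moved go on to win that game?

def choose_move(board, legal_moves, records: List[Record],
                similarity: Callable[[Any, Any], float]):
    """Stateless move choice: only the current board and the record base matter."""
    def win_rate(move):
        alike = [r for r in records
                 if r.move == move and similarity(r.state, board) > 0.9]
        if not alike:
            return 0.5                  # nothing comparable on record: neutral guess
        return sum(r.won for r in alike) / len(alike)
    return max(legal_moves, key=win_rate)
```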
 

MichalS

Member
Joined
Feb 23, 2018
Messages
91
Reaction score
88
Location
Bratislava & Wien
First name
Michal
Country
Slovakia
That is a good description, thank you. However, "leading to success or failure" is not a matter of a single move - and it depends heavily on what the opponent does in response. If you have a good opponent and an unlimited amount of time, you might learn to beat him/her over time (assuming the opponent is not improving or adapting). If you have yourself as an opponent, I am not sure! How do you evaluate whether a move was leading towards success, if the opponent plays crap and does not know how to evaluate success either?

(Also Go AIs required playing against progressively better human opponents in order to improve.)
 

Sparafucil3

Forum Guru
Joined
Oct 7, 2004
Messages
11,335
Reaction score
5,070
Location
USA
First name
Jim
Country
United States
So the point of playing games against itself isn't that it helps the AI learn strategies or tactics or whatnot. It is to produce a huge database of possible moves. Then in the future, when it is playing you, it can look through the database for similar situations, and see which moves led to success and which moves led to failure.
And this works well for chess, Go, checkers, and games like that with fixed units, a fixed playing space, and fixed outcomes. How many official ASL boards are there? 90-ish? Each of which can be played as a full board or a half board (180-ish). When played as pairs, they can be butted on four sides and the middle. Let's not forget the overlays and terrain SSRs, which might modify the boards. Assuming we stick to just two boards, how many possible board combinations does that make? I know it starts at 180 × 179 and gets bigger from there. That's just the boards.
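That arithmetic, using the counts from the post (the five "butting" options are taken literally; overlays, SSRs, and three-plus-board maps are left out):

```python
playable_boards = 180   # ~90 geomorphic boards, each usable as a full or a half board
butt_options    = 5     # "butted on 4 sides and the middle"

two_board_setups = playable_boards * (playable_boards - 1) * butt_options
print(f"{two_board_setups:,} ordered two-board setups")   # 161,100 -- and that is
                                                          # before overlays and SSRs
                                                          # enter the picture
```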

Again, I'm not saying it is impossible that AI eventually gets there. I am saying it isn't as "simple" as chess, Go, etc. The closest parallel is the StarCraft AI, but even that only had to concern itself with a finite number of boards, and the outcome is less RNG-based. I am sure that someone could do ASL, but I am also sure it would take some time to do ALL of ASL. -- jim
 

MajorDomo

DM? Chuck H2O in his face
Joined
Sep 1, 2003
Messages
3,179
Reaction score
1,025
Location
Fluid
Country
United States
Today drones can map geology (see the Dragon's Breath underground cavern mapping, for example). AI can guide low-flying missiles from x to y over the earth's terrain.

So I don't see it as a significant hurdle to say "butt board 18's upper seam to a flipped board 16's now-upper edge" and have the AI map the terrain features for scenario play.

I see the ASLRB as the most challenging aspect.

There are many imprecise, contradictory, extraneous, and unclear sentences, phrases, and paragraphs throughout the tome. The AI would need human intervention (i.e., Perry Sez) to resolve the many inconsistencies present. This human interfacing would be slow.

I would start the AI on existing vlogs, having it point out rules that were violated, question others, and perfect its "rule book." The final result would be a great electronic Watson/Alexa rule book.
 

Sparafucil3

Forum Guru
Joined
Oct 7, 2004
Messages
11,335
Reaction score
5,070
Location
USA
First name
Jim
Country
United States
Today drones can map geology (see the Dragon's Breath underground cavern mapping, for example). AI can guide low-flying missiles from x to y over the earth's terrain.
Mapping is easy with GPS. Terrain avoidance is also not too hard; we have done that with cruise missiles for decades. Not even close to the same problem space.


So I don't see it as a significant hurdle to say "butt board 18's upper seam to a flipped board 16's now-upper edge" and have the AI map the terrain features for scenario play.
For AI to work, and work well, it has to have "been there, done that" built in, either by reviewing and learning from game logs or by running simulations of games to build up the repository of information to learn from. The issue is the complexity of the search space and the complexity of the units involved. Generally speaking, this kind of AI isn't "innovating"; it's playing from a script it has seen before.

FWIW, I have built and operated a drone which mapped a field for me, using cameras, Lidar, and GPS. I wrote it all in Python and it wasn't all that hard. -- jim
 

Michael Dorosh

der Spieß des Forums
Joined
Feb 6, 2004
Messages
15,733
Reaction score
2,765
Location
Calgary, AB
First name
Michael
Country
Canada
So,

Occasionally a random thought flicks through my head, and a recent one was: I wonder how a supercomputer would play ASL? They've done chess and Go - how would it handle ASL?

It would obviously adhere to and know all the rules - so a perfect game of ASL would be played. How about gun placement? I've heard that described as more of an art than a science...

What do you think?
The Combat Mission software does exactly this. It plays a tactical-level game and applies all the rules. Twenty years ago, Steel Panthers did it. The upcoming Second Front will do it as well. Examine those games and you'll have your answers. The PC version of Lock N Load has a very good AI, for what it is worth.
 

boylermaker

Senior Member
Joined
Jan 22, 2012
Messages
581
Reaction score
526
Location
Virginia
Country
United States
How do you evaluate whether a move was leading towards success, if the opponent plays crap and does not know how to evaluate success either?

(Also Go AIs required playing against progressively better human opponents in order to improve.)
This is a good question, and I'm not sure I know the answer. AlphaZero is self-taught, though, and while I'm not sure that it's state-of-the-art any more, it was when it came out a few years ago: https://en.wikipedia.org/wiki/AlphaZero

How does it do that? Not entirely sure. One of the problems with modern machine learning techniques is that while they are very good at solving particular problems, it is difficult to peek under the hood and figure out what they are doing (here is a very cool blog post about Google engineers trying to reverse-engineer their own ML algorithm: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html).

Possibly they are able to figure out whether the opponent-self played well or badly in each practice game, and they choose tactics that were good against good opponents? After all, if you use pro tactics against a crap opponent, you might do slightly less well than you could have if you knew they were crap from the start, but you're still gonna do pretty dang good!
 