I don't think you need to code the rules to make an ASL-playing AI. If you have a large enough training data set of actual games (and you could probably get that pretty easily by hacking the VASL server to record people's games), then you could just machine-learn your way to an AI that doesn't know the rules but has good enough instincts that its proposed moves are usually legal. That cuts the rules coding you need down to "enough to grok most scenario VC", which is a much, much smaller subset.
If you're making an Alpha-Go sort of AI that can rely on a human being to do rules enforcement, that's no problem.
No way this is going to work.
First, AlphaGo (strictly speaking, its successor AlphaGo Zero) was trained not by looking at human games: it was just given the rules (in the form of "here is a function that gives you your possible moves in any situation" and "here is a function that decides who wins the game for a given sequence of moves"), and then it played a huuuuuge number of games against itself. As in, I guess, more games than have been played by all humans, ever. Way more. And what is a beautiful success for machine learning is that this actually worked - by doing so, and using a clever learning scheme to play "what works" more often than "what doesn't", it reached a level beyond that of any human player.
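To make that setup concrete, here's a toy sketch of the self-play loop in Python, using Nim (21 sticks, take 1-3, whoever takes the last stick wins) instead of Go. Everything here - the function names, the crude win-rate "learning" - is my own invention for illustration, nothing like AlphaGo Zero's actual neural-net machinery; but the interface is exactly the two functions described above: one for legal moves, one for deciding the winner.

```python
import random

random.seed(0)  # reproducibility for this toy run

def legal_moves(sticks):
    """The 'possible moves in any situation' function: allowed takes."""
    return [n for n in (1, 2, 3) if n <= sticks]

# Learning state: how often each (situation, move) was played, and won.
wins, plays = {}, {}

def ratio(sticks, move):
    """Empirical win rate of playing `move` with `sticks` remaining."""
    return wins.get((sticks, move), 0) / plays.get((sticks, move), 1)

def policy(sticks):
    """Mostly exploit the best-known move, sometimes explore."""
    choices = legal_moves(sticks)
    if random.random() < 0.2:
        return random.choice(choices)
    return max(choices, key=lambda m: ratio(sticks, m))

def play_game(sticks=21):
    """One self-play game; returns (winner, each player's (state, move) list)."""
    player, moves = 0, {0: [], 1: []}
    while True:
        move = policy(sticks)
        moves[player].append((sticks, move))
        sticks -= move
        if sticks == 0:
            return player, moves  # taking the last stick decides the winner
        player = 1 - player

# The "huge number of games versus itself", reinforcing what worked.
for _ in range(20000):
    winner, moves = play_game()
    for p in (0, 1):
        for sm in moves[p]:
            plays[sm] = plays.get(sm, 0) + 1
            if p == winner:
                wins[sm] = wins.get(sm, 0) + 1
```

Even this crude win-rate counting rediscovers correct play for the endgame positions; the point is that the loop needs nothing from humans except the two rule functions.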
Obviously, this would not have been possible with human rules enforcement: you'd need people to check millions (billions, probably) of games. For this to be even possible, the rules have to be computer-enforced. This is why Go is such a perfect game for this: the rules are extremely simple to enforce - at any given time, the list of legal moves for a player is always a sublist of a fixed list of 362 possibilities (19x19 = 361 places to play a stone, plus "pass").
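For a sense of just how cheap that enforcement is, here's a naive sketch. It only rejects occupied points - a real engine would also handle suicide and ko - but the shape of the thing is right: a fixed candidate list, filtered.

```python
# Go's whole action space: 19x19 board points plus "pass" = 362 candidates.
SIZE = 19
PASS = "pass"

ALL_CANDIDATES = [(r, c) for r in range(SIZE) for c in range(SIZE)] + [PASS]

def legal_moves(occupied):
    """occupied: set of (row, col) points that already hold a stone.
    Naive legality: any empty point, or pass. (No suicide/ko check here.)"""
    return [m for m in ALL_CANDIDATES if m == PASS or m not in occupied]
```

Compare that to ASL, where even enumerating a player's possible actions in a given phase is a hard rules problem in itself.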
As for your AI that observes human games and learns the rules... this is not going to work, either. For one thing, VASL games are full of rules mistakes: the server doesn't try to enforce the rules, and people take shortcuts. The AI also would not be able to distinguish "moves" (player actions) that are taken because the rules mandate them from those that are the player's free choice. You'd need annotated games.
Another problem is that there are way too many rules, and a vast number of them never get used - so your AI would never get to "learn" them: to stand a chance, it would need many examples of each rule coming into play. It might work for ASLSK; for full ASL, it would never learn the interactions of Deep Snow and Glider landings, or any other rare and tricky rules combination.
Could you make an AI that tries to take moves that are 90% legal? Probably. 99% legal? Probably. That would still average several illegal moves per game. And if you're not coding the rules by themselves, all you can add to the AI's "knowledge" is that "in THIS exact situation, THIS move is illegal; pick another one". Kinda like teaching Calvinball to a computer.
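In code, that "Calvinball knowledge" is nothing but a blacklist of flagged (situation, move) pairs - a sketch, with every name hypothetical:

```python
# With no coded rules, all the AI can accumulate is a blacklist of
# (situation, move) pairs that a human referee flagged as illegal.
illegal = set()

def record_illegal(situation, move):
    """A human says: in THIS exact situation, THIS move is illegal."""
    illegal.add((situation, move))

def propose(situation, candidates):
    """Pick the first candidate move not yet known to be illegal here."""
    for move in candidates:
        if (situation, move) not in illegal:
            return move
    return None  # every known idea has been flagged; the AI is stuck

# Example: a flag only helps in the exact situation it was recorded for.
record_illegal("turn3-hexA4", "advance-into-deep-snow")
```

The blacklist never generalizes: the same illegal move in a hex one row over is, to this AI, a brand-new situation.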