Because I have the ambition to get this thread to at least 12 pages, I have dug up my old post giving hard facts & counted numbers on my personal standard-out-of-the-box AH/MMP dice, and I will paste it in below with a few amendments:
So, instead of speculating about AH/MMP dice, I did the work and can provide hard facts.
Sample of 1000 DRs of my BV3 dice rolled the way I usually do (no dice tower, dice cup on desk):
DR | my dice | expected with "perfect" dice
 2 |     24  |  27.78
 3 |     46  |  55.56
 4 |     79  |  83.33
 5 |    107  | 111.11
 6 |    138  | 138.89
 7 |    159  | 166.67
 8 |    141  | 138.89
 9 |    125  | 111.11
10 |     80  |  83.33
11 |     72  |  55.56
12 |     23  |  27.78
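For anyone who wants to reproduce the "expected" column: it is simply 1000 × (the number of the 36 equally likely d6 combinations that produce that DR) / 36. A minimal sketch in Python, with the tallies copied from the table above:

```python
from fractions import Fraction

# Ways to make each DR out of the 36 equally likely (d1, d2) combinations.
ways = {dr: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == dr)
        for dr in range(2, 13)}

# My observed tallies, copied from the table above.
observed = {2: 24, 3: 46, 4: 79, 5: 107, 6: 138, 7: 159,
            8: 141, 9: 125, 10: 80, 11: 72, 12: 23}

for dr in range(2, 13):
    expected = 1000 * Fraction(ways[dr], 36)
    print(f"DR {dr:2d}: observed {observed[dr]:3d}, expected {float(expected):6.2f}")
```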
My pip total in 1000 rolls is 7099 (a DR average of 7.099); the "perfect" expected total would be 7000 (a DR average of 7.000).
From here on, please, math-gurus, correct me if I have screwed up somewhere, as I am not much into math.
The empirical variance for my dice would be:
24 * (2-7.099)² +
46 * (3-7.099)² +
79 * (4-7.099)² +
107 * (5-7.099)² +
138 * (6-7.099)² +
159 * (7-7.099)² +
141 * (8-7.099)² +
125 * (9-7.099)² +
80 * (10-7.099)² +
72 * (11-7.099)² +
23 * (12-7.099)² = 6199.82
6199.82 / 1000 (i.e. the sample size) = 6.19982 = empirical variance for my dice @ a 1000 sample.
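Sums like this are easy to fat-finger on a pocket calculator, so here is a sketch that redoes the arithmetic mechanically from the tallies above, taking the stated sample size of 1000 at face value; if it disagrees with the hand total, suspect a transcription or calculator slip somewhere along the way:

```python
# Tallies copied from the table above.
observed = {2: 24, 3: 46, 4: 79, 5: 107, 6: 138, 7: 159,
            8: 141, 9: 125, 10: 80, 11: 72, 12: 23}

n = 1000                                   # stated sample size
pips = sum(dr * k for dr, k in observed.items())
mean = pips / n                            # 7099 / 1000 = 7.099
var = sum(k * (dr - mean) ** 2 for dr, k in observed.items()) / n

print(pips, mean, round(var, 5))
```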
Now the empirical variance for "perfect" dice:
27.77 * (2-7)² +
55.55 * (3-7)² +
83.33 * (4-7)² +
111.11 * (5-7)² +
138.88 * (6-7)² +
166.66 * (7-7)² +
138.88 * (8-7)² +
111.11 * (9-7)² +
83.33 * (10-7)² +
55.55 * (11-7)² +
27.77 * (12-7)² = 5833.33
5833.33 / 1000 = 5.83333 = empirical variance for "perfect" dice @ a 1000 sample.
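The "perfect" figure also has a closed form that skips the eleven-term sum: the variance of one fair d6 is 35/12, and variances of independent dice add, giving 35/6 ≈ 5.8333 per DR. A quick check:

```python
from fractions import Fraction

faces = range(1, 7)
mean_1d6 = Fraction(sum(faces), 6)                                  # 21/6 = 7/2
var_1d6 = Fraction(sum(f * f for f in faces), 6) - mean_1d6 ** 2    # 91/6 - 49/4 = 35/12

var_2d6 = 2 * var_1d6   # independent dice: variances add
print(var_1d6, var_2d6, float(var_2d6))
```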
The standard deviation is the square root of the empirical variance.
√6.19982 = 2.48994 is the Standard deviation for my dice @ a 1000 sample.
√5.83333 = 2.41523 is the Standard deviation for "perfect" dice @ a 1000 sample.
Applied to my dice with a sample of 1000 rolls, this gives a band of 7099 +/- 2.48994%, or in other words between 6998.26 and 7147.27 pips.
"Perfect" dice @ a 1000 sample could expect to roll 7000 +/- 2.41523%, or in other words between 6915.47 and 7084.53 pips.
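One caveat for the math-gurus: a per-roll standard deviation does not translate directly into a percentage of the pip total. For independent rolls it scales up to the total by a factor of √1000 ≈ 31.6. A sketch of the resulting one-sigma band for "perfect" dice, using the per-roll figure from above:

```python
import math

rolls = 1000
sd_per_roll = 2.41523        # per-roll standard deviation of "perfect" 2d6 (from above)
expected_total = 7000

sd_total = sd_per_roll * math.sqrt(rolls)            # SD of the 1000-roll pip total
low, high = expected_total - sd_total, expected_total + sd_total
print(round(sd_total, 1), round(low, 1), round(high, 1))
```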
To put this into perspective:
Using "perfect" (not merely "precision") dice, with 1000 rolls you can expect a statistical spread of 169.06 pips around the expected total of 7000 pips.
Using "my lowly dice", over the same 1000 rolls I can expect a (surprisingly smaller) spread of only 149.01 pips around the total of 7099 pips.
This is a difference of 169.06 - 149.01 = 20.05 pips for "my lowly dice" compared to "perfect dice", out of some 7000 pips in a sample of 1000 rolls.
Thus you can put the difference between "perfect" (not merely "precision") dice and "my lowly standard MMP BV dice" at 20.05 / 7000 = 0.29% in a sample of 1000 rolls.
Note that the total number of pips of "my lowly dice" in a sample of 1000 rolls was 7099, while that of "perfect dice" would be only 7000. In other words, "my lowly dice" roll higher on average - which in ASL terms means worse than "perfect dice".
Let's take the hard numbers a little further by looking at DR averages:
The worst result within the standard-deviation band for my dice with a sample of 1000 rolls is the high end of that band, 7147.27 pips => a DR average of 7147.27 / 1000 = 7.147
The best result within the standard-deviation band for "perfect dice" with a sample of 1000 rolls is the low end of that band, 6915.47 pips => a DR average of 6915.47 / 1000 = 6.915
So, the worst that could conceivably be expected to happen to me within one standard deviation - "perfect" dice pitted against my low-production-process BV3 ones - is a DR average of 6.915 against 7.147 over 1000 DRs, to my disfavor.
I am crushed. Oh, the injustice of it. It must have turned the game against me. Seriously?
But what lies within the limits of statistics the other way around?
The best result within the standard-deviation band for my dice with a sample of 1000 rolls is the low end of that band, 6998.26 pips => a DR average of 6998.26 / 1000 = 6.998
The worst result within the standard-deviation band for "perfect dice" with a sample of 1000 rolls is the high end of that band, 7084.53 pips => a DR average of 7084.53 / 1000 = 7.085
At the same time, within statistical expectations, it could optimally have turned out for me as a DR average of 6.998 against 7.085 over 1000 DRs, to my favor.
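The DR averages in the two head-to-head cases above are just the quoted band endpoints divided by the number of rolls; for instance (endpoint figures copied from above):

```python
rolls = 1000

# Band endpoints in pips, as quoted above: my dice vs. "perfect" dice.
worst_mine, best_perfect = 7147.27, 6915.47
best_mine, worst_perfect = 6998.26, 7084.53

print(round(worst_mine / rolls, 3), round(best_perfect / rolls, 3))
print(round(best_mine / rolls, 3), round(worst_perfect / rolls, 3))
```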
At this point, I find it very hard to understand people insisting that precision dice make a significant difference.
Of course, they can make a difference, but it is highly unlikely to be a significant one.
Just as you would not base your plan of attack on your killer-stack rolling consecutive snake-eyes, it is nonsense to base the assessment of your chances to win on the difference between "precision dice" (or even "perfect dice") and standard MMP dice.
So if any of you "dice-superstitionists" plays me in a tournament, feels harassed by my dice, and calls me out to change them: if I were to give in to your capers (which I would not, unless precision dice were mandated by the tournament's rules), you would harm yourself, based on statistically hard-tested facts about my set of standard dice.
If you consider me a cheater in the first place for using standard dice, you would of course suspect me of lying about having tested my dice, or of claiming that - by a stroke of luck - my standard dice happen to be pretty precise while all others are not. I'll leave you that perfect recipe to make your world "whole" again...
Instead of "claiming" and "assuming", I challenge everyone in doubt to do something more productive: do the work and test your dice. Even those for whom standard dice are anathema own that box of "Beyond Valor". Prove me wrong, or simply provide more data by rolling your lowly standard dice - and your precision dice, too, while you are at it.
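For anyone taking up the challenge: the bookkeeping is the tedious part, and you can dry-run it (or sanity-check your tally sheet) with simulated dice before committing an evening to real rolling. A minimal sketch; the function name and seed are of course just illustrative:

```python
import random
from collections import Counter

def tally(rolls: int, rng: random.Random) -> Counter:
    """Roll two d6 `rolls` times and count how often each DR comes up."""
    return Counter(rng.randint(1, 6) + rng.randint(1, 6) for _ in range(rolls))

counts = tally(1000, random.Random(42))
pips = sum(dr * k for dr, k in counts.items())
print(sorted(counts.items()))
print("total pips:", pips, " DR average:", pips / 1000)
```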
von Marwitz