Ray Tapio is da Man!

Joined
Apr 7, 2004
Messages
2,679
Reaction score
70
Location
Atlanta, GA
Country
United States
Steve,
Completely off topic:)..... but I noticed that my previous message was my #269 and your message was also your #269..... hmm... does that help in any way in adding balance to this discussion? :laugh::laugh:

Best regards,:salute:
John
Yeah, but he has 100 more rep points for the same number of posts, so you better get crackin before that imbalance unbalances the discussion. :D
 

Bret Hildebran

Elder Member
Joined
Jan 31, 2003
Messages
4,884
Reaction score
1,279
Location
NE OH
Country
United States
If I were a scenario designer and looking for more information from scenario playtests, I would set up a scale of results and not rely on win/loss results.
And I think during the playtest cycle, designers are accumulating more data than W/L. Everyone I've ever playtested for is looking for info on how the scenario played out - where was the defense anchored? How'd the attack go? Extreme dice? Key moments? CVP? Buildings/Location controlled? Viewpoints on the scenario? In addition to who won/lost of course. During playtest you have the luxury of getting a lot more data than just W/L.

It's after release, on the global scale, that we're stuck with simple W/L on ROAR. Beats the heck outta' nothing, but we lose all the context the playtest coordinator was able to accumulate. Of course, the ROAR population of games is a few orders of magnitude bigger than the playtest pool, at least for popular scenarios...

Bottom line - I think you'd have to evaluate the data in those two populations a lot differently, given that the ROAR data is essentially binary while the playtest data provides much more context, giving you a feel for whether the German win was low-odds snakes in the final CC or a blowout in T2; ROAR records those two the same...
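To make the contrast concrete, here is a minimal Python sketch of the two kinds of records being compared. The scenario ID and field names are made up for illustration; they are not ROAR's or any playtest coordinator's actual format. ROAR effectively keeps only the first structure, while a playtest report can carry the second.

[CODE]
# Hypothetical record layouts: ROAR-style binary outcome vs. a playtest report
# that keeps the context explaining *how* the win happened.
from dataclasses import dataclass

@dataclass
class RoarRecord:
    scenario_id: str
    winner: str          # e.g. "German" or "Russian" -- essentially all that survives

@dataclass
class PlaytestRecord:
    scenario_id: str
    winner: str
    turn_ended: int      # blowout in T2 vs. went the distance
    decided_by: str      # e.g. "snakes in the final CC", "CVP cap reached"
    defense_anchor: str  # where the defense was set up
    comments: str        # playtester's read on balance and fun

# The same German win looks identical in ROAR...
roar = RoarRecord("S-001", "German")
# ...but very different in the two playtest reports below.
squeaker = PlaytestRecord("S-001", "German", 6, "snakes in the final CC",
                          "stone buildings on the hill", "razor thin, felt balanced")
blowout  = PlaytestRecord("S-001", "German", 2, "defense collapsed early",
                          "forward woods line", "defender needs help")
[/CODE]

Both playtest entries would reach ROAR as the same single German win, which is the point about losing context after release.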
 

Jazz

Inactive
Joined
Feb 3, 2003
Messages
12,199
Reaction score
2,751
Location
The Empty Quarter
Country
Lithuania
And I think during the playtest cycle, designers are accumulating more data than W/L. Everyone I've ever playtested for is looking for info on how the scenario played out - where was the defense anchored? How'd the attack go? Extreme dice? Key moments? CVP? Buildings/Location controlled? Viewpoints on the scenario? In addition to who won/lost of course. During playtest you have the luxury of getting a lot more data than just W/L.

It's after release, on the global scale, that we're stuck with simple W/L on ROAR. Beats the heck outta' nothing, but we lose all the context the playtest coordinator was able to accumulate. Of course, the ROAR population of games is a few orders of magnitude bigger than the playtest pool, at least for popular scenarios...

Bottom line - I think you'd have to evaluate the data in those two populations a lot differently, given that the ROAR data is essentially binary while the playtest data provides much more context, giving you a feel for whether the German win was low-odds snakes in the final CC or a blowout in T2; ROAR records those two the same...
Isn't there a "Fun" (excitement?) rating in a ROAR entry? As I recall it was a number between 1 - 10 that the person making the entry could indicate just how much fun was had. Adding a similar scale for percieved balance would at least hold out a glimmer of hope for continuous data and get beyond the attribute data of W/L.
 

Will Fleming

Senior Member
Joined
Apr 22, 2003
Messages
4,413
Reaction score
429
Location
Adrift on the Pequod
Country
United States
Got that on the WeASL baby! Each game can be rated on fun and also balance.

Hell, you don't even have to register a game. You can just make a comment on the scenario and rate it. The W/L rating is not changed, but the player rating for balance is affected.
 

fwheel73

Member
Joined
Dec 14, 2006
Messages
1,643
Reaction score
80
Location
Oklahoma
Country
United States
Balance & Fun Rec Can Be Found at ROAR

Isn't there a "Fun" (excitement?) rating in a ROAR entry? As I recall it was a number between 1 - 10 that the person making the entry could indicate just how much fun was had. Adding a similar scale for percieved balance would at least hold out a glimmer of hope for continuous data and get beyond the attribute data of W/L.
Jazz,
I think the current scale is actually 0 to 9:

0=Not known/no opinion :nuts:
1=Candyland instead! :clown:
2=Highly unfavorable:nada:
3=Unfavorable:OHNO:
4=Slightly unfavorable :sneak:
5=As many scenarios above as below :hmmm:
6=Slight recommend :broccoli:
7=Recommend :clap:
8=Highly recommended :thumup:
9=Must play :hail:

This scale makes for a reasonable recommendation tool. With nearly 4,000 scenarios at ROAR and only (IIRC) 600-800 deemed balanced (10+ playings in the 50-60% win/loss percentage range), it appears we have a reasonable tool to determine if a scenario is fun -- the 1 to 9 rating -- and a reasonable tool to say if it is balanced; we just need more folks to input their games. There were only 3,933 games input last year, so it seems a huge number must be missing.
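For what it's worth, a quick back-of-the-envelope check (plain Python, no libraries) shows why more entries matter: with only 10 recorded playings, the plausible range for the true win percentage is still enormous, so the 50-60% test only starts to bite once a scenario has a lot of games logged. The counts below are invented examples, not actual ROAR figures.

[CODE]
# 95% Wilson score interval for a scenario's true win percentage.
from math import sqrt

def wilson_interval(wins: int, games: int, z: float = 1.96):
    p = wins / games
    denom = 1 + z**2 / games
    centre = (p + z**2 / (2 * games)) / denom
    half = z * sqrt(p * (1 - p) / games + z**2 / (4 * games**2)) / denom
    return centre - half, centre + half

for wins, games in [(6, 10), (30, 50), (120, 200)]:
    lo, hi = wilson_interval(wins, games)
    print(f"{wins}/{games} wins: true win % plausibly {lo:.0%} - {hi:.0%}")
# 6/10    -> roughly 31% - 83%: far too wide to call the scenario balanced or not
# 30/50   -> roughly 46% - 72%
# 120/200 -> roughly 53% - 67%: now the 50-60% test means something
[/CODE]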

I don't think there is a reason to create a new mechanism to determine whether a scenario is balanced and fun.... ROAR is decent. We just need to use it!

It seems to follow from the discussion that designers' use of only capable players in the playtesting process is very important in creating scenarios that are likely to be "balanced"..... and fun.

Finally...... whether or not anything is changed, there are still a lot of scenarios to be played, and there will be many more to buy this year and next.:laugh:

Best regards,:salute:
John

(Thanks Steve)
 