(I have had the following post in a half-done state for a couple of weeks, and I apologize for the long delay in getting it finished and posted.)
After reading the feedback to the simulations that were proposed, I agree with many of the concerns raised and my opinion has shifted a bit on the topic. I still think simulations would be a good and necessary part of vetting Concepts in a subforum, but I don't think they should be as involved as they were originally mentioned.
When I suggested simulations, my intention was that they would NOT be big, laborious, manpower-intensive things. That is why I suggested a simple 5-part format (Concept, Typing, Abilities, Stats, Movepool) that significantly compresses the CAP process when doing a simulation analysis. But even with an abbreviated format, I still pictured it as an ANALYSIS, which I guess I did see as being a fairly formal and rigorous process. When Birkal suggested we do something akin to a Flash CAP, I was thinking "Yeah! That's perfect!". But after reading others' take on it, I'm no longer keen on that level of detail and formalization for simulating Concepts.
To determine the best way to accomplish simulations, let's clarify some of the key goals for simulations, in terms of what we DO WANT and DON'T WANT to happen. Then we can determine the format which best achieves the positives and best avoids the negatives, knowing that we probably won't get a "perfect" format that is all good with no bad. Tradeoffs are inevitable.
Simulation Ideals
We want simulations to expose the obvious flaws in Concepts. We do not expect simulations to uncover every subtle or detailed issue.
Maybe I am undershooting the goal here, but I don't think we need to be heroes on this. We aren't trying to GUARANTEE that every CAP will have a fantastic Concept in the future. We are just trying to give CAP a BETTER chance for a good concept. Instead of thinking of this as a way to identify "the best" concepts, think of this as a way of eliminating "bad" concepts and leaving mostly "potentially good" concepts remaining.
We have a mountain of concepts to deal with in CAP, and most of them are terrible. Sticking with the mountain analogy -- we're not trying to make gold, we're just trying to mine some ore. Mining is mostly an exercise in digging through and discarding lots of worthless dirt and rock, and keeping the stuff whose properties suggest it MIGHT have precious metal inside. *
(Mineralogy nerds, please don't call me out on the science of my analogy. Hopefully you get the point, right?)
We want to force people to "play forward" a Concept in a structured, analytical manner.
This is really the biggest thing, in my opinion. Regardless of how we do it, we want to make people think about a Concept beyond just assessing if it "looks like a good idea". Even the most rigorous thinkers tend to consider Concepts holistically when judging if they are good or bad. People make numerous assumptions and take all sorts of logical shortcuts in their thinking, and don't really realize they are doing so. I know I am guilty of this, and I am generally pretty good at thinking in a structured way.
By forcing everyone to consider Concepts in a structured, sequential manner and to commit to discrete outcomes in each step of their thinking -- that's how we get real value from a simulation. If simulations are relatively unstructured, they will be little more than "Think about this concept and post your impressions". That will encourage the same holistic thinking we always get with concept commentary, and really won't expose any flawed concepts beyond what we are capable of exposing already. A structured approach doesn't guarantee anything, but it should help us look below the surface of the concept, which would be a good thing.
We want simulations to encourage autonomous contribution by QCers.
If we require too much close interaction between members to play forward a Concept, we'll cause more problems than we solve. We don't want multiple people to have to get together offline to discuss concepts, because it will decrease transparency for the rest of the project (anti-community = bad).
If we require people to go back and forth with each other in some convoluted forum posting process, it will take way too long with participants all over the globe and long lag times between interactions (longer processes = bad).
If we encourage small groups of knowledgeable users to do their own little mini-CAPs amongst themselves to vet concepts, we'll end up with know-it-all posters in the regular CAP project who have already pre-built the CAP amongst themselves beforehand (annoying elite cliques = bad).
There are probably ways to mitigate all these problems, but we can avoid all that if we make simulations largely independent exercises by individuals. The result of a simulation should be a single forum post in the subforum (i.e. "I have done a simulation of the XYZ concept, here it is: <simulation in whatever format we deem appropriate>"). Others can comment on the simulation, point out issues, etc -- but the creation of the simulation itself should mostly be an individual exercise, in order to avoid the problems mentioned above.
Simulation Implementation
I think HeaLnDeaL is on the right track in terms of how a relatively simple concept simulation could be structured. Here is a proposed structure to consider:
A simulation would consist of a single post with five parts/steps -- Assessment, Typing, Ability, Stats, Moves.
Each step of the simulation should do the following:
1) Explain multiple viable high-level options that could reasonably be considered by the community, based on the choices of previous steps. The simulation step should illustrate that the community would have more than one interesting, competitively-viable option to discuss and decide.
2) Pick an option to add to the foundation for the next simulation step. The option chosen should be the one most likely to be chosen by the community in a real CAP. This is a subjective call, but should be based on reason as much as possible. The point is not to accurately predict the future or read peoples' minds. Each step simply needs a discrete outcome to serve as a basis for the next step, and these outcomes should not be random or chosen on a whim.
Assessment - Mention a few high-level directions the project could take to satisfy the concept.
Typing - Present a few typing options that are reasonably representative and diverse, and probably acknowledge CAP's heavy historical bias toward rare or unique typings, etc.
Ability - Like HeaLnDeaL mentioned, only the primary ability needs to be considered for simulation. Abilities that achieve similar general competitive goals for the concept should not be presented as "multiple options".
Stats - No need to quote specific stat lines here, general build descriptions and bias phrases are fine. If specific stats need to be referenced, they should be ranges.
Moves - Present the key moves and controversial moves of the main movesets that would be viable for the concept. Since Moves is the last competitive step, it is not necessary to choose an outcome. Simply show that a vibrant discussion is reasonably likely.
Discussing/Critiquing Simulations
Critiquing simulations is how multiple people can "work together on a simulation", with the goal of identifying narrow or dead-end concepts. A narrow concept is one with limited choices at some point. A dead-end concept is one that has no viable options at some point. All projects tend to become more narrow towards the end, so we are really looking for concepts that narrow or dead-end too early in the process.
Commenters should critique the content and reasoning of posted simulations. Point out options that were not considered, and why they are not legitimately competitively viable or not likely to have meaningful intelligent support. Also point out options that would have highly polarized voting support, independent of competitive logic (i.e. options that don't have a chance in hell, or options that will win by a landslide in a public poll, regardless of what the "intelligent people" think).
Commenters may present alternative outcomes from certain steps in order to illustrate dead ends that are not obvious in the simulation, or to illustrate a greater variety of choices than was presented in the simulation. Commenters should not effectively rewrite the entire simulation in the course of a critique. If a commenter thinks the simulation completely missed the mark, they should just do a simulation of their own and post it as such.
Manpower and Timing
I am definitely concerned about whether CAP will have the manpower to do something like this, particularly in a dedicated subforum outside the normal workflow of an ongoing CAP process. And considering the other issues we have right now, in terms of starting the next CAP project in the wake of ORAS, this may not be something we should try to do right now. Or maybe it is EXACTLY what we need to do in order to give the next CAP a better rudder, in the face of ORAS-related uncertainty.
But is the format mentioned above really all that much work? It's basically asking people to think about a concept and make a single post with a certain logical structure. That isn't a whole lot of work -- although, admittedly, it took me weeks to get this post completed, so I can't say a single post is "no big deal".
nyttyn presented some compelling reasons that detailed Flash-CAP-like simulations would never work. Does a simpler, more individual effort make it more feasible to accomplish?
BTW, I am assuming simulations would only be done for concepts that have made it past a high-level vetting process whereby a QC team (or other process) sifts through all the Concepts and proposes which ones should be simulated. Later we can work out the exact mechanics of the pre-vetting process to ensure it is both efficient and fair, with checks and balances and all that. Right now, my main concern is whether we can pull off a more structured way of validating concepts via simulations of some kind.