Review and clarify spec selection criteria #1481
A dump of notes to document the impact of adding a spec to browser-specs. I see @Elchi3 proposed a breakout session at TPAC 2024 that takes a similar approach; see Curating the web platform's data and documentation. When a spec gets added to browser-specs with a "good" standing and released in the web-specs npm package:
Something similar happens when a spec gets added to browser-specs with a "pending" standing and released in the web-specs npm package, except that:
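For context, the standing of a spec can be inspected directly from the published data. A minimal sketch, assuming each published record carries a `standing` field as browser-specs records do; the inline data below is an illustrative excerpt for demonstration, not real package output:

```javascript
// Illustrative excerpt of the kind of records published in the
// web-specs npm package (shortnames and standings are made up here).
const specs = [
  { shortname: "spec-alpha", standing: "good" },
  { shortname: "spec-beta", standing: "pending" },
  { shortname: "spec-gamma", standing: "good" }
];

// Group specs by standing, to separate those that come with full
// data curation ("good") from those with anomaly reporting only
// ("pending").
function byStanding(list) {
  return list.reduce((groups, spec) => {
    (groups[spec.standing] ??= []).push(spec.shortname);
    return groups;
  }, {});
}

console.log(byStanding(specs));
```

In real usage, the `specs` array would come from `require("web-specs")` instead of being inlined.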
- Adding a spec with a "good" standing means that we commit to data curation and patching. We don't want to do that lightly if the spec is too much in flux (unless it is in scope of a chartered standardization group). Adding a spec with a "good" standing may also be seen as recognition that the spec is on the standards track, even if the data is clear that the spec is incubated in a Community Group with no guarantee whatsoever.
- Adding a spec with a "pending" standing means that we commit to reporting anomalies. This should not create particular issues. One exception to the rule: the spec may export terms that are already defined in another spec. Such duplicates are not a problem for ReSpec, but may confuse Bikeshed. We could add a check in Strudy to detect these duplicates once the spec has been added to the list. Catching these duplicates when the spec gets added to browser-specs could theoretically be done too, see #1289.
- Adding a spec with a "pending" standing means Strudy will analyze the spec, but that analysis typically does not include detection of most Web IDL anomalies, which are rather done during the curation process for specs in "good" standing. It could be interesting to report these anomalies early on, to smooth the switch to a "good" standing.
- The web-features project should not need to see new specs early for now: features typically only emerge after keys have been added to BCD.
- Web Platform Tests and the MDN BCD collector probably prefer to see new specs relatively early, so as to detect support as soon as possible, where "relatively early" means something like "when the spec is about to ship in a browser".
- Other projects would probably prefer to see new specs only when they start being implemented somewhere and have a clearer standardization status.

All in all, looking at spec selection criteria, we could perhaps be flexible on criteria for adding a spec with a "pending" standing:
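The duplicate-exports check mentioned above could be sketched along these lines. This is a hypothetical example, not Strudy's actual implementation: given per-spec lists of exported term names (the inlined specs and terms are made up), flag any term exported by more than one spec:

```javascript
// Hypothetical per-spec exported terms, keyed by spec shortname.
// In practice, the data would come from the definitions that Webref
// extracts for each spec.
const exportedTerms = {
  "spec-a": ["environment settings object", "task queue"],
  "spec-b": ["task queue", "microtask"],
  "spec-c": ["microtask"]
};

// Build a map of term → specs exporting it, then keep only the
// terms exported by more than one spec (the problematic duplicates).
function findDuplicateExports(termsBySpec) {
  const exporters = new Map();
  for (const [spec, terms] of Object.entries(termsBySpec)) {
    for (const term of terms) {
      if (!exporters.has(term)) exporters.set(term, []);
      exporters.get(term).push(spec);
    }
  }
  return new Map([...exporters].filter(([, specs]) => specs.length > 1));
}

console.log(findDuplicateExports(exportedTerms));
```

A check like this could run either when the spec is proposed for addition (as discussed in #1289) or as a recurring Strudy analysis once the spec is on the list.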
In practice, the criteria listed in the README already seem good enough as-is for adding a spec with a "pending" standing. I think we need additional criteria for adding a spec with a "good" standing, to clarify the conditions under which we evaluate support:
The above does not restrict addition to "shipped in one of the main browser engines" for Web Platform Tests and the MDN BCD Collector. That restriction should perhaps be made, though, especially if the spec can instead be added with a "pending" standing. Also, without it the evaluation remains subjective: what does "ongoing" mean for an implementation? Intent to Prototype, Intent to Ship, known milestones? Granted, the term "shipped" is also overloaded: shipped by default? behind a flag? in an origin trial? MDN defines a similar set of requirements for documenting a new technology. I think the notion of standards track in that document includes what I call pre-standardization groups. Details include looking at "signs of interest" from non-supporting browsers.
Via #1477 (comment). The criteria have always been fuzzy. That makes it hard to evaluate when a new spec should join (and/or what standing to give it). Now that the list contains >650 specs, we should probably be able to reflect on experience and refine the criteria accordingly.
To start with, it would probably be useful to document the consequences of adding a spec to the list for Webref, MDN, Specref, WPT.