Last Call Review Template

I-D Title(s):
Filename(s):
Reviewer Name:
Date:

Please organize your comments in the categories below.

Review Summary:

Overall:

* Does/Do the draft(s) provide clear identification of the scope of work?
  E.g., is the class of device, system, or service being characterized
  clearly articulated?

* If a terminology memo, are the measurement areas clearly defined or
  otherwise cited?  Is the working set of supporting terminology sufficient
  and correct?  To your knowledge, are there areas of the memo that may
  conflict with other bodies of work?  Are any measurements or terminology
  superfluous?  Are any missing?

* If a methodology memo, does the methodology AND its corresponding
  terminology adequately define a benchmarking solution for its application
  area?  Do the methodologies present sufficient detail for the experimental
  control of the benchmarks?

* If neither a terminology nor a methodology memo, does the memo offer
  complementary information important to the use or application of the
  related benchmarking solution?

* Do you feel there are undocumented limitations or caveats to the
  benchmarking solution being proposed?  If so, please describe them.

* Does the memo attempt to define acceptance criteria for any of the
  benchmark areas?

Technical Content: (accuracy, completeness of coverage)

Are the definitions accurate?  Is the terminology offered relevant?  To your
knowledge, are there technical areas that are erroneous?  Are there
questionable technical areas that need to be re-examined or otherwise
scrutinized?

Does the solution adequately address IPv6?

Do you feel the memo(s) being offered are technically mature enough for
advancement to Informational RFC?

Clarity and Utility:

If you had a need, would you use the benchmarking solutions advocated by
this memo and its related memos?  If not, why not?

Conformance to BMWG principles: (see the BMWG charter,
http://www.ietf.cnri.reston.va.us/html.charters/bmwg-charter.html)

Do you have confidence that the benchmarks, as explicitly defined, will
yield consistent results if repeated multiple times on the same device
(DUT/SUT) for a given test condition?  If not, cite the benchmark(s) and
issue(s).

Do you have confidence that the benchmarks, if executed for a given test
condition using the documented methodology on multiple test infrastructures
(e.g., different test equipment), would yield correct and consistent results
on the same DUT/SUT?  (Said differently, are the benchmarks' methodologies
written in enough exacting detail that differences in benchmark
implementation do not yield differences in the measured quantities?)  If
not, cite the benchmark(s) and issue(s).

Do you feel that the benchmarks form a basis for comparison between
implementations of the quantity being characterized?  (I.e., are the
benchmarks suitable for comparing solutions from different vendors?)  If
not, cite the benchmark(s) and issue(s).

For those benchmarks cited above, do you feel that the benchmarks, as
specified, have universal applicability for the given behavior being
characterized?  (I.e., even if a benchmark does not form a basis for
cross-vendor comparison, can it be used universally in a different role?)

Editorial Comments: (include any deficiencies noted with respect to I-D
nits, spelling, and grammar)