Following the OpenBRR methodology, we saw that there are a number of evaluable categories that allow us to obtain a total score for a program by measuring certain points in each of them. We also saw the BRR template (you can download it here), in which, by selecting the weights of each of these categories according to our needs and defining the features each one covers, we can assign scores to each category based on various objective data. This lets us compare different projects and obtain a reference for which one best meets our needs.
All the categories set out in this spreadsheet can be filled in by consulting different sources of information (with more or less effort):
- Official web site: where we can find information about the available features (Functionality), the different versions of the program (Quality), links to user documentation and technical documentation (Usability and Documentation), links to the source repositories, links to the version control system, information about the community (Community), security documentation (Security), etc.
- SCM: from this system we can obtain a lot of data, such as the structure of the community, the number of commits in the repository, the number of developers who commit (Community), etc.
- Mailing lists: we can get information about people's participation in the project, the number of mails sent in the last month (Support and Community), etc.
- Bug tracker: one of the main data sources; we can extract information about the program's bugs, both open and resolved, the average number of critical bugs (Quality), etc.
- Other tools that provide information, such as Ohloh.net (Community) or FLOSSMetrics (Community), vulnerability announcements on websites such as US-CERT (Security), references to resources that provide more information on performance and functionality (Performance), references to projects that use the project under evaluation (Scalability), book searches about the program in search engines (Adoption), etc.
Thus, once the weights are introduced and all the categories are scored, we obtain an objective qualification for each of the projects, compared on equal terms and focused on the specific requirements of whoever is evaluating.
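As a rough illustration, the weighted aggregation described above can be sketched in a few lines of Python. The category names, weights and per-project scores below are invented examples for the sketch, not values taken from the actual BRR template:

```python
# Minimal sketch of BRR-style weighted scoring.
# Weights and scores here are hypothetical, not from the real template.

def brr_score(weights, scores):
    """Combine per-category scores into a single weighted rating.

    `weights` are fractions that should sum to 1; `scores` maps each
    category to its rating on a 1-5 scale.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical weights reflecting one evaluator's priorities.
weights = {
    "Functionality": 0.25,
    "Quality": 0.20,
    "Community": 0.20,
    "Support": 0.15,
    "Security": 0.20,
}

# Hypothetical 1-5 category scores for two projects being compared.
project_a = {"Functionality": 4, "Quality": 3, "Community": 5,
             "Support": 2, "Security": 4}
project_b = {"Functionality": 3, "Quality": 4, "Community": 3,
             "Support": 4, "Security": 4}

print(round(brr_score(weights, project_a), 2))  # 3.7
print(round(brr_score(weights, project_b), 2))  # 3.55
```

The point of the exercise is that the same objective data yields different rankings under different weightings, so the weights encode the evaluator's specific needs.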
From my point of view, and drawing on the experience gained in the final project of this course, this methodology is relatively fast, simple and clear for producing a good evaluation of a program, but it does not provide all the inputs needed to make a firm decision. I think it is highly recommended to extend some of the metrics used, to give the methodology greater specificity and effectiveness: for example, direct comparisons between the tools, demos of the tools, and other data obtained through projects such as the aforementioned Ohloh, or by mining the version control systems in more depth.