Abstract

Automated Model Extraction Rules take natural-language requirements as input and generate domain models. Despite the existing work on these rules, evaluations in industrial settings are lacking. To address this gap, we conduct an evaluation in an industrial context, reporting which extraction rules are triggered to create a model from requirements and how frequently they fire. We also assess the performance of the generated model, in terms of recall, precision, and F-measure, against models created by domain experts at our industrial partner. The results enable us to identify new research directions to push automated model extraction rules forward: the inclusion of new knowledge sources as input for the extraction rules, and the development of specific experiments to evaluate the understandability of the generated models.

Recommended Citation

Echeverría, J., Pérez, F., Cetina, C., & Pastor, Ó. (2017). Assessing the Performance of Automated Model Extraction Rules. In N. Paspallis, M. Raspopoulos, C. Barry, M. Lang, H. Linger, & C. Schneider (Eds.), Information Systems Development: Advances in Methods, Tools and Management (ISD2017 Proceedings). Larnaca, Cyprus: University of Central Lancashire Cyprus. ISBN: 978-9963-2288-3-6. http://aisel.aisnet.org/isd2014/proceedings2017/ISDMethodologies/3.

Paper Type

Event


Assessing the Performance of Automated Model Extraction Rules