Authors:
Bastian Tenbergen¹ and Marian Daun²
Affiliations:
¹ Department of Computer Science, State University of New York at Oswego, Oswego, U.S.A.; ² Center of Robotics, Technical University of Applied Sciences Würzburg-Schweinfurt, Schweinfurt, Germany
Keyword(s):
Model-Based Software Engineering, Graphical Representations, Model Comprehension, Model Quality, Empirical Study.
Abstract:
Model-driven development has established itself as one of the core practices in software engineering. Increasing quality demands, paired with shorter times to market and the growing mission-criticality of software systems, have led software engineering practitioners to use not only formal but also semi-formal models, particularly graphical diagrams, to express the system under development in ways that facilitate collaboration, validation & verification, as well as configuration and runtime monitoring. However, what does and does not constitute a “good” model, i.e., a model that is fit for a practical purpose? While some model quality frameworks exist, most lack the ability to concretely quantify and thereby objectively differentiate a “good” model from a “poor” one, i.e., to identify models that can be easily understood by the model reader. Without the ability to reliably produce easily comprehensible models, training new team members during on-boarding and educating software engineering students is dramatically hindered. In this paper, we report on a research trajectory towards reliably measuring the comprehensibility of graphical diagrams.