An Automated Quality Framework For EA Model Assessment
An enterprise architecture (EA) model is a structural and conceptual blueprint that helps organizations conduct enterprise analysis, design, planning, and implementation. Its main purpose is to determine how an organization can most effectively achieve its
current and future objectives. The EA data stock is the basis for communication between EA management and its stakeholders.
Enterprise architects rely heavily on the quality of the EA data repository when performing business analysis and drawing conclusions from it to address business goals. Architects therefore need to know the quality of the data they use.
EA frameworks allow for multi-layer, multi-view structural modelling of processes, functions and related information. Depending on the scale and complexity of the enterprise, an EA model can therefore grow into a large and intricate body of data. Existing tools help
the architect visualize and maintain the model and ensure that the model as a whole is syntactically valid. But when it comes to assessing model quality and soundness, architects are on their own: model evaluation usually means inspecting the model and
its parts manually to identify quality issues. This is a tedious, time-consuming and error-prone task.
The goal of this thesis is to develop an appropriate framework of metrics and to implement a tool that helps enterprise architects perform model assessment in a structured and more automated manner. The tool should provide a holistic, view-based approach to uncovering model quality issues and help discover interrelated parts of the model which exhibit quality issues but are hard to recognize by hand.
As a first step, I will survey the current state of research. In a second step, I will define which aspects of EA models are of interest for the goals of this thesis and should therefore be considered when establishing metrics. I will do so with respect to
the ArchiMate EA framework, which is open, vendor-independent and widely used. Two questions are central to this step: What outcome should the model quality assessment yield? And which information about the model must be collected, and in what way, to achieve that outcome?
The next step is the concrete development of the metrics used in the evaluation process, with the goal of establishing a quality framework. I will justify the metric modelling process with respect to classification, quality properties and aspects, scale determination, and metric calculation and measurement. This step also includes an analysis of related work, which may originate in domains other than EA modelling, such as software development or quality assurance in the IT industry.
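To make the idea of a structural model metric concrete, consider the following minimal sketch. It computes a hypothetical "orphan ratio" metric, the fraction of model elements that participate in no relationship, over a deliberately simplified element/relationship representation. The element names and the metric itself are illustrative assumptions, not part of the ArchiMate standard or the framework developed in this thesis.

```python
# Hypothetical sketch of one structural metric: the ratio of "orphan"
# elements (elements with no incoming or outgoing relationship) in a
# simplified ArchiMate-like model. The data shapes are illustrative
# assumptions, not the ArchiMate exchange format.

def orphan_ratio(elements, relationships):
    """Fraction of elements that participate in no relationship."""
    connected = set()
    for source, target in relationships:
        connected.add(source)
        connected.add(target)
    orphans = [e for e in elements if e not in connected]
    return len(orphans) / len(elements) if elements else 0.0

# Tiny example model: four elements, two relationships.
elements = ["CustomerProcess", "BillingService", "CRMApp", "LegacyDB"]
relationships = [("CustomerProcess", "BillingService"),
                 ("BillingService", "CRMApp")]

print(orphan_ratio(elements, relationships))  # -> 0.25 (LegacyDB is orphaned)
```

A real metric in the framework would additionally need a defined scale and an interpretation rule, e.g. whether a high orphan ratio indicates incomplete modelling or genuinely unused assets.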
As a follow-up step, I will show how the resulting metrics can be combined into a framework that offers a holistic, automated approach to guiding an architect through EA model assessment.
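One simple way such a combination could work, sketched here only as an assumption, is a weighted aggregation of normalized metric scores into a single quality indicator. The metric names and weights below are invented for illustration; the thesis framework would derive them from the chosen quality properties.

```python
# Minimal sketch of combining normalized metric scores (0 = worst,
# 1 = best) into one weighted quality indicator. Names and weights
# are hypothetical placeholders.

def aggregate(scores, weights):
    """Weighted mean of metric scores; weights need not sum to 1."""
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

scores = {"orphan_ratio": 0.75, "naming_consistency": 0.9, "layer_coverage": 0.6}
weights = {"orphan_ratio": 2.0, "naming_consistency": 1.0, "layer_coverage": 1.0}

print(aggregate(scores, weights))  # -> 0.75
```

A weighted mean is only one of several aggregation strategies; thresholds or worst-case (minimum) aggregation may suit some quality properties better, which is part of what the framework design has to decide.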
Finally, the implementation of this framework as software will be discussed. I will address the software's architecture and technical requirements as well as its general usage. A case study based on an exemplary real-life EA model will demonstrate the practical application of the software.
The conclusion will address the practical implications of my work and provide an outlook on future perspectives and questions that remain open.