Deconstruction

Based on the didactic concept of deconstruction, defined as the examination of an existing construction for the unforeseen, the unconscious, and the incomplete, and in particular the search for possible omissions, simplifications, additions, and points of criticism, deconstruction here means the targeted questioning of the validity of machine models. For each model reconstructed with a learning block, related models are collected from the knowledge base and checked for their validity. Depending on the user’s preference, these are either deconstructed consecutively, or the process is aborted once a complete, TSigma, or SigmaZ deconstruction has completed successfully. Pragmatic machine models are considered related if some part of their metadata is identical; this can be the timestamps, the subject, or the aim. If no relatives can be identified, the newly reconstructed model is saved to the knowledge base and the deconstruction process terminates.
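The relatedness check described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the `ModelMetadata` record and the `find_related` function are hypothetical names, and the three metadata elements (timestamps, subject, aim) are taken directly from the text.

```python
from dataclasses import dataclass


@dataclass
class ModelMetadata:
    """Hypothetical metadata record of a pragmatic machine model."""
    timestamps: set
    subject: str
    aim: str


def find_related(model: ModelMetadata, knowledge_base: list) -> list:
    """Collect models whose metadata partially matches the new model:
    shared timestamps, an identical subject, or an identical aim."""
    related = []
    for candidate in knowledge_base:
        if (model.timestamps & candidate.timestamps
                or model.subject == candidate.subject
                or model.aim == candidate.aim):
            related.append(candidate)
    return related
```

If `find_related` returns an empty list, the new model would simply be stored and deconstruction terminated, as described above.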


Complete deconstruction is used to extend and falsify existing models. To initiate the process, not only matching target and subject sets of the relative are required, but also temporal congruence. Instead of working with concrete timestamps, time intervals are considered. If a fully related model M_x is available, the original feature spaces of M and M_x are first checked for common features. If there are more than two common features, the metadata of M and M_x are combined to form M’, which is then fed into the reconstruction process. The block fed into the reconstruction consists of the intersection of the features of the learning blocks of the models M and M’.
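The common-feature condition can be sketched as a small helper, assuming feature spaces are represented as sets of feature names; the function name and the return convention (`None` signalling failure of this path) are assumptions for illustration.

```python
def common_feature_block(features_m: set, features_mx: set):
    """Return the feature intersection to be used as the learning block
    for M' if M and M_x share more than two original features,
    otherwise None (the path via common features fails)."""
    common = features_m & features_mx
    if len(common) > 2:
        return common
    return None
```

With fewer than three shared features, the procedure would fall back to the sample-based examination described next in the text.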

If not enough common features on identical timestamps could be identified, the samples with which the two models were trained are examined. If a defined threshold value is exceeded, the target values are then checked for reliability using Krippendorff’s alpha. If both conditions are fulfilled, the metadata of the models are combined into M’ and fed into the reconstruction. This time, the learning block passed to the reconstruction consists of the difference between the features of the learning blocks of M and M’. If neither enough common features nor common timestamps could be identified, the complete deconstruction has failed and M is stored in the knowledge base. If the procedure failed only because of reliability, model disposal is initiated according to the deconstruction strategy.
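The reliability check can be illustrated with a compact version of Krippendorff’s alpha for the special case of two coders (here: the target values of the two models on shared samples), nominal data, and no missing values; the general coefficient also covers other data levels and missing entries, which this sketch omits.

```python
from collections import Counter
from itertools import permutations


def krippendorff_alpha_nominal(targets_m, targets_mx):
    """Krippendorff's alpha for two value series of equal length,
    nominal level, no missing data: alpha = 1 - D_o / D_e."""
    pooled = Counter(targets_m) + Counter(targets_mx)
    n = sum(pooled.values())  # total number of pairable values
    # observed coincidences of unequal values (each unit yields two
    # ordered pairs, hence the factor 2 per disagreement)
    observed = sum(2 for a, b in zip(targets_m, targets_mx) if a != b)
    # expected coincidences of unequal values from the marginals
    expected = sum(pooled[c] * pooled[k] for c, k in permutations(pooled, 2))
    if expected == 0:
        return 1.0  # only one value occurs; treat as perfect agreement
    return 1.0 - (n - 1) * observed / expected
```

Whether the resulting alpha counts as reliable depends on the defined threshold mentioned in the text, which is not fixed here.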

If the initiated reconstruction has failed, the complete deconstruction process is not yet finished. An attempt is made to extract partial models by temporally splitting M’. In this process, called model differentiation, the learning block with which M’ was trained is clustered by timestamp. If the density distribution threshold is met and each cluster has a minimum number of samples, the reconstruction process is started with each cluster. Each successfully reconstructed model is stored in the knowledge base and M_x is discarded. If all reconstructions with submodels are unsuccessful, the submodels are discarded along with M_x. Furthermore, all models in the knowledge base that depend on M_x must be identified and removed.
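The temporal splitting step can be sketched as follows. The text leaves the clustering method and the density criterion open, so this sketch substitutes a simple gap-based split: a new cluster starts whenever the distance to the previous timestamp exceeds a threshold. Function and parameter names are assumptions.

```python
def differentiate_by_time(samples, gap_threshold, min_cluster_size):
    """Split (timestamp, payload) samples into temporal clusters and
    keep only clusters with at least min_cluster_size samples."""
    clusters, current = [], []
    for ts, payload in sorted(samples, key=lambda s: s[0]):
        if current and ts - current[-1][0] > gap_threshold:
            clusters.append(current)
            current = []
        current.append((ts, payload))
    if current:
        clusters.append(current)
    return [c for c in clusters if len(c) >= min_cluster_size]
```

Each returned cluster would then be fed into its own reconstruction attempt, as described above.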


The TSigma deconstruction checks whether a new model can be constructed on the next higher knowledge level. If there is a TSigma-related model M_x, it is first checked whether a sufficient number of the samples with which the two models were trained have sufficiently identical timestamps. If this is fulfilled, the corresponding target parameters of both models are tested for agreement using Krippendorff’s alpha. If a defined threshold value is exceeded, the target values of the models are merged into a learning block, which is fed into a new cycle of construction, reconstruction, and deconstruction at the next higher level. Finally, M is saved to the knowledge base.
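The two TSigma preconditions and the merge can be sketched as one decision function, assuming each model’s samples are given as a mapping from timestamp to target value; the alpha value is passed in precomputed, and all names and thresholds are illustrative assumptions.

```python
def tsigma_learning_block(samples_m, samples_mx, min_overlap, alpha, alpha_threshold):
    """Build the merged learning block for the next higher knowledge
    level: requires enough shared timestamps and sufficient agreement
    (Krippendorff's alpha above the defined threshold)."""
    common_ts = set(samples_m) & set(samples_mx)
    if len(common_ts) < min_overlap or alpha <= alpha_threshold:
        return None  # preconditions not met, no higher-level cycle
    # pair the two models' target values per shared timestamp
    return {ts: (samples_m[ts], samples_mx[ts]) for ts in common_ts}
```

A non-`None` result would be handed to a fresh construction–reconstruction–deconstruction cycle on the next level.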


If a SigmaZ-related model exists, a threshold value is first used to check for a valid temporal difference between M and M_x. The smaller the temporal distance between the considered models, the greater the probability that the subsequent model unification succeeds. Furthermore, for practical reasons it is required that the images of M and M_x have an intersection of two or more elements; otherwise no coherent training and test data set can be generated. If all necessary conditions are fulfilled, the model union is performed, in which the metadata are merged into a new model M’, which is then fed into the reconstruction process. The data basis for the reconstruction are the learning blocks of the merged models. If the reconstruction is unsuccessful, model disposal must be performed according to the deconstruction strategy; otherwise the related model M_x is replaced by M’.
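The SigmaZ preconditions can be condensed into a single check, assuming each model carries a representative time value and that its image is available as a set; the function and parameter names are assumptions for illustration.

```python
def sigmaz_union_possible(time_m, time_mx, max_time_delta, image_m, image_mx):
    """Check the preconditions for SigmaZ model unification: the
    temporal distance must not exceed the threshold, and the images of
    M and M_x must share at least two elements."""
    if abs(time_m - time_mx) > max_time_delta:
        return False
    return len(set(image_m) & set(image_mx)) >= 2
```

Only when this check passes would the metadata be merged into M’ and the reconstruction on the combined learning blocks be started.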
