Reconstruction

In didactics, reconstruction is understood as the application, repetition, imitation, or search for order, patterns and models. Constructivist machine learning transfers this definition to classical machine learning and equates it with supervised learning. However, in contrast to the classical approach, the constructivist approach generates several machine models and evaluates them with respect to their intersubjective validity. The goal is to identify the optimal representation of a learning block among the many competing machine models. The entire reconstruction process consists of three steps.

  • After a learning block has been successfully constructed, the number of input variables is checked first. If the parameter specified by the user, called maximum model complexity, is exceeded, an algorithmic feature selection takes place. If the input dimension or the number of samples exceeds the control parameter specified for this purpose, a filter procedure is used; otherwise an embedded procedure is applied. This distinction is necessary because embedded procedures are known to require considerably more computing time than filter methods for large amounts of data (see the first sketch after this list).

  • After the complexity reduction comes step two, in which the learning block is divided into a training block and an evaluation block. As the name suggests, the training block serves as input for the supervised machine learning procedures. The user selects the procedures to be used. After the training, the precision must be checked using the evaluation data. If a minimum value is not reached, the user tries the next supervised machine learning procedure. All models that exceed the minimum precision value are buffered and transformed into pragmatic machine models together with their corresponding metadata (see the second sketch after this list). The following table shows a possible result after a successful training.

    Attribute       Datatype   Value
    --------------  ---------  -------------
    UID             string     C.1.3
    Image           list       0.0.1, 0.0.2
    Min Timestamp   int        176543319
    Max Timestamp   int        177504311
    Subject         list       ANN, RF
    Aim             list       C.1.K02

    The model marked C.1.3 maps conceptual knowledge at the first level with the identifier three. The training was conducted with a learning block containing the features 0.0.1 and 0.0.2. The two timestamps refer to the samples of the learning block. An artificial neural network (ANN) and a random forest (RF) were used as the supervised machine learning models. The unsupervised task was conducted with k-means, which was configured to find only two clusters.

  • Once all possible pragmatic models have been formed, the degree of intersubjectivity is calculated to find the best representation of the learning block. For this purpose the chance-corrected reliability coefficient Krippendorff's alpha (α) is used. It answers the question to what extent the predictions of the supervised machine learning models are consistent overall (see the third sketch after this list). Krippendorff's α can handle both metric and nominal scales and is therefore suitable for both classification and regression tasks. Before all reconstructed models are ranked, those whose corresponding α value does not exceed a user-defined threshold value must be discarded. If machine models remain after the threshold check, they have both the necessary precision and the necessary inter-rater reliability. The models are then sorted in descending order by their α coefficient. The largest α value identifies the winner. If several models share the largest α value, the model with the smallest original image space is chosen. Finally, the winner is fed into the deconstruction.
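The switch between filter and embedded feature selection in step one can be illustrated with a minimal Python sketch using scikit-learn. The parameter names max_model_complexity and size_threshold as well as the concrete selection procedures (a univariate F-test filter and a random-forest-based embedded method) are illustrative assumptions, not prescribed by the framework.

    # Sketch of the feature-selection switch (step one). The parameter names
    # and the concrete selectors are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectFromModel, SelectKBest, f_classif

    def reduce_complexity(X, y, max_model_complexity=10, size_threshold=10_000):
        """Reduce X to at most `max_model_complexity` input variables."""
        n_samples, n_features = X.shape
        if n_features <= max_model_complexity:
            return X  # maximum model complexity not exceeded, nothing to do

        if n_features > size_threshold or n_samples > size_threshold:
            # Large data: cheap filter procedure (univariate F-test scores).
            selector = SelectKBest(score_func=f_classif, k=max_model_complexity)
        else:
            # Otherwise: embedded procedure via random-forest feature importances.
            selector = SelectFromModel(
                RandomForestClassifier(n_estimators=100, random_state=0),
                max_features=max_model_complexity,
                threshold=-np.inf,  # keep exactly the top-ranked features
            )
        return selector.fit_transform(X, y)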
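Step two can be sketched in the same spirit. The snippet below assumes a classification task, a 70/30 split into training and evaluation block, and two supervised procedures (ANN and RF) as in the table above; the metadata keys mirror that table, while the concrete values and the minimum precision of 0.8 are only examples.

    # Sketch of step two: train the user-selected supervised procedures,
    # check their precision on the evaluation block and buffer every model
    # that reaches the minimum precision as a pragmatic machine model.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import precision_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    def train_pragmatic_models(X, y, min_precision=0.8):
        # Divide the learning block into a training and an evaluation block.
        X_train, X_eval, y_train, y_eval = train_test_split(
            X, y, test_size=0.3, random_state=0)

        # Supervised procedures selected by the user (here: ANN and RF).
        procedures = {"ANN": MLPClassifier(max_iter=500, random_state=0),
                      "RF": RandomForestClassifier(random_state=0)}

        pragmatic_models = []
        for name, model in procedures.items():
            model.fit(X_train, y_train)
            precision = precision_score(y_eval, model.predict(X_eval),
                                        average="macro")
            if precision >= min_precision:
                # Pragmatic machine model with metadata (keys as in the table).
                pragmatic_models.append({
                    "UID": "C.1.3",               # example identifier
                    "Image": ["0.0.1", "0.0.2"],  # features of the learning block
                    "Subject": name,              # supervised procedure used
                    "model": model,
                    "precision": precision,
                })
        return pragmatic_models, X_eval, y_eval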

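For step three, the sketch below gives a self-contained implementation of Krippendorff's alpha for nominal data (i.e. for classification tasks) without missing values, applied to the stacked predictions of the buffered models on the evaluation block. How a per-model α value is derived before thresholding and ranking is not spelled out in this section, so that part is only indicated in comments.

    # Sketch of the intersubjectivity check (step three): Krippendorff's
    # alpha for nominal data, computed over the predictions of the buffered
    # pragmatic models on the evaluation block.
    import numpy as np

    def nominal_alpha(predictions):
        """Krippendorff's alpha for nominal labels without missing values.

        `predictions` has shape (n_models, n_samples): one row of predicted
        class labels per pragmatic machine model ("rater").
        """
        predictions = np.asarray(predictions)
        m, n_units = predictions.shape
        values = np.unique(predictions)
        index = {v: i for i, v in enumerate(values)}

        # Coincidence matrix of value pairs assigned to the same sample.
        o = np.zeros((len(values), len(values)))
        for u in range(n_units):
            labels = predictions[:, u]
            for a in range(m):
                for b in range(m):
                    if a != b:
                        o[index[labels[a]], index[labels[b]]] += 1.0 / (m - 1)

        n_c = o.sum(axis=1)
        n_total = n_c.sum()
        d_observed = o.sum() - np.trace(o)
        d_expected = (n_total ** 2 - (n_c ** 2).sum()) / (n_total - 1)
        return 1.0 - d_observed / d_expected if d_expected > 0 else 1.0

    # Overall consistency of all buffered models on the evaluation block:
    #   preds = np.array([m["model"].predict(X_eval) for m in pragmatic_models])
    #   overall_alpha = nominal_alpha(preds)
    # Models whose alpha value does not exceed the user-defined threshold are
    # discarded; the rest are sorted in descending order of alpha, ties being
    # broken by the smallest original image space.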