AI at the Crime Scene

SHUTTLE: ‘Scientific High-throughput and Unified Toolkit for Trace analysis by forensic Laboratories in Europe’


In his talk, Martin will give us a basic introduction to data-driven modelling, often referred to as "AI": What are the advantages over conventional methods, and what does AI-UI GmbH develop?
After that, the focus shifts to:
"AI at the Crime Scene".
In the forensic evaluation of crime scene data, adhesive-tape-like trace carriers are very often used to collect samples from specific surfaces. These traces then have to be examined for correlations by laboratory staff under the microscope, a manual and very laborious process.
For this reason, six European laboratories and one Israeli laboratory joined forces and, with the help of the EU, put a Horizon 2020 project out to tender with the aim of fully automating this trace classification process.
For the past year and a half, AI-UI GmbH, together with Optimal Systems and Aura Optik, has been taking on this task as a consortium and has developed a fully automatic forensics robot.
  • What is AI
  • How does it differ from classical modelling methods
  • What does AI-UI GmbH do
  • Which use cases have already been solved
  • AI at the Crime Scene
18:00 Open Doors
18:15 Welcome
18:20 Talk with Q&A
19:20 Free Beers, Fingerfood & Networking
21:00 End of the meetup
About the speaker:
Martin Schiele is a scientist at TU Ilmenau, where he has been working for more than six years in the fields of thermodynamics and gas dynamics as well as artificial intelligence. His experience with the complexity of the topic, the shortage of capable software developers and the wish to integrate AI more strongly into research led him to become co-developer and founder of AI-UI, a company that builds a GUI for AI.

AI for forensic trace detection and classification

Boosting Mask R-CNN Performance for long, thin Forensic Traces with Pre-Segmentation and IoU Region Merging


Mask R-CNN has recently achieved great success in the field of instance segmentation. However, weaknesses of the algorithm have also been pointed out repeatedly, especially in the segmentation of long, sparse objects whose orientation is not exclusively horizontal or vertical. We present here an approach that significantly improves the performance of the algorithm by first pre-segmenting the images with a PSPNet. To further improve the predictions, we have developed our own cost functions and heuristics in the form of training strategies, which can prevent so-called (early) overfitting and achieve a more targeted convergence. Furthermore, due to the high variance of the images, especially for PSPNet, we aimed to develop strategies for high robustness and generalization, which are also presented here.
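The region-merging idea can be illustrated with a short, hedged sketch: fragments of one long trace that the detector reports as separate boxes are fused when their overlap (IoU) exceeds a threshold. The box format, threshold and greedy union rule are illustrative assumptions, not the exact post-processing from the paper.

```python
# Minimal sketch of IoU-based region merging for fragmented detections.
import numpy as np

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def merge_regions(boxes, threshold=0.3):
    """Greedily merge boxes whose IoU exceeds the threshold into their union."""
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > threshold:
                    boxes[i] = [min(boxes[i][0], boxes[j][0]),
                                min(boxes[i][1], boxes[j][1]),
                                max(boxes[i][2], boxes[j][2]),
                                max(boxes[i][3], boxes[j][3])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes
```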

AI for live process monitoring in industry 4.0

In-situ monitoring of hybrid friction diffusion bonded EN AW 1050/EN CW 004A lap joints using artificial neural nets


In this work, a dissimilar copper/aluminum lap joint was generated with a force-controlled hybrid friction diffusion bonding (HFDB) setup. During the welding process, the resulting torque, the welding force and the plunge depth are recorded over time. Due to the force-controlled process, tool wear and the use of different materials, the resulting data series vary significantly, which makes quality assurance with classical methods very difficult. Therefore, a Convolutional Neural Network was developed that allows the evaluation of the recorded process data. In this study, data from sound welds as well as data from samples with weld defects were considered. In addition to the different welding qualities, deviations from the ideal conditions due to tool wear and the use of different alloys were also taken into account. The validity of the developed approach is determined by cross-validation during the training process and with different amounts of training data. With an accuracy of 88.5%, the Convolutional Neural Network has proven to be a suitable tool for monitoring these processes.
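As a rough illustration of such a network (not the exact architecture from the study), a small 1D convolutional classifier over the three recorded signals could look like this; the input length, layer sizes and binary sound/defect output are assumptions:

```python
# Hedged sketch of a 1D CNN for classifying recorded process signals
# (torque, welding force, plunge depth over time) as sound or defective.
import tensorflow as tf

n_timesteps, n_channels = 2000, 3  # assumed samples per weld, 3 recorded signals

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_timesteps, n_channels)),
    tf.keras.layers.Conv1D(16, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # sound weld vs. weld defect
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```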

AI-UI and Coral for Edge AI


As you can see, this robotic arm grasps two different kinds of components from the blue basket and puts them into the one on the left. These two kinds of components are plastic and aluminium parts. The machine must find the components and distinguish between them. The challenge is that the two materials reflect light differently and that the lighting inside a production line changes during the day, which often causes traditional image processing algorithms to fail.

In this project, we will demonstrate how an industrial AI dedicated to this problem is developed and deployed on the Coral Dev Board with AI-UI and a few mouse clicks.

Coral Dev Board

The Dev Board is a single-board computer that’s ideal when you need to perform fast machine learning (ML) inferencing in a small form factor. You can use the Dev Board to prototype your embedded system and then scale to production using the on-board Coral System-on-Module (SoM) combined with your custom PCB hardware.

Hardware setup

  • PC with Windows: most of the work is done here.
  • Coral Dev Board: the target device that the model is deployed to.
  • USB-C to USB cable: optional if the PC and the Coral Dev Board are on the same network.

Software setup

  • AI-UI software: installed on the PC
  • Windows Subsystem for Linux: installed on the PC, together with the Edge TPU Compiler
  • AI-UI on-device: installed on the Coral Dev Board


The first step is to take a look at the dataset. These images were taken by the camera on the robotic arm under different lighting conditions. This is important for training a robust AI model, because the real-world environment is complex in terms of lighting and reflections: the sunlight changes during the day, somebody walks by with a phone light on, the component surfaces vary due to production tolerances, other machines emit light, and so on. The more lighting variants we cover, the better we can exploit the generalization ability of the AI model.


We can then import the data into AI-UI for the next step: annotation.


Annotation tells the AI where an object is located and what label it carries, so that the AI can learn the patterns needed to recognize it.

On the left tool panel, we choose the circle as annotation tool, draw it on an object and label it. In this project, we want the AI to learn the annotations as rectangles (boxes), so the circles are converted internally before model training. We annotate the objects with circles so that the dataset can also be used for other scenarios requiring more precise predictions (for example more complex masks).
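A minimal sketch of such a circle-to-box conversion (the helper below is hypothetical; AI-UI's internal conversion is not shown in this article):

```python
# Hypothetical helper: enclose a circle annotation in an axis-aligned bounding box.
def circle_to_box(cx, cy, r):
    """Return (x_min, y_min, x_max, y_max) for a circle at (cx, cy) with radius r."""
    return (cx - r, cy - r, cx + r, cy + r)

print(circle_to_box(120.0, 80.0, 25.0))  # (95.0, 55.0, 145.0, 105.0)
```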

Annotation tool

A few moments later, all the images are annotated.

Annotated image
Annotated dataset


After annotation we should split the dataset into a training dataset and a validation dataset. The training dataset is used for model training; the model learns its information from it. The validation dataset contains images that the model has never seen and is used for model evaluation.

First, we create a project named "coral robotic arm" in the "AI-UI Project" asset view. In the project view, we find a data selector node on the left panel and drag & drop it into the working area in the middle. The node configuration on the right side shows the settings belonging to the chosen node, in this case datasets. Here you can find the dataset that we just annotated. Select it and click the "proceed" button.

After that, the data selector node produces an output socket representing the selected dataset. Then we create a train val split node and connect the output socket of the data selector to the input socket of the train val split node.

In the configuration of the train val split node, we use the "normal" mode rather than "shuffle": "normal" splits the data in chronological order, while "shuffle" would split the images randomly. After clicking "proceed", the node produces two outputs representing the training dataset and the validation dataset.
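As a small illustration of the two modes (not AI-UI's internal implementation), a chronological versus shuffled split could be sketched like this; the 80/20 ratio is an assumption:

```python
# Illustrative sketch: "normal" keeps the original (chronological) order,
# "shuffle" draws the validation images at random.
import random

def train_val_split(items, val_fraction=0.2, shuffle=False, seed=42):
    items = list(items)
    if shuffle:
        random.Random(seed).shuffle(items)
    split = len(items) - int(len(items) * val_fraction)
    return items[:split], items[split:]

images = [f"img_{i:03d}.png" for i in range(100)]
train_set, val_set = train_val_split(images, shuffle=False)  # chronological split
```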

Select dataset
Split dataset


Data augmentation is a technique for enriching the data by randomly applying different modifications. In AI-UI, we can apply geometric transformations, flipping, cropping, rotation and noise injection to image datasets. To apply augmentation, create an "image augment node" (by drag & drop), connect it to the training set, then choose the "Fliplr" and "Flipud" augmenters and click "proceed". Only the training data should be augmented; the validation data stays the same.
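For readers who prefer code over clicks, the same flip augmentation can be sketched with the imgaug library, whose augmenters are also named Fliplr and Flipud; whether AI-UI uses imgaug internally is an assumption:

```python
# Minimal sketch of horizontal/vertical flip augmentation with imgaug.
import numpy as np
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    iaa.Fliplr(0.5),  # flip 50% of images left-right
    iaa.Flipud(0.5),  # flip 50% of images up-down
])

images = np.random.randint(0, 255, size=(8, 256, 256, 3), dtype=np.uint8)  # dummy batch
images_aug = seq(images=images)  # only the training set would be augmented
```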

Image augment node
Node result

Neural network

Before training a model, we need to create a neural network. In this project, we use EfficientDet, introduced by Tan et al. in "EfficientDet: Scalable and Efficient Object Detection" (2020).

To use this model in AI-UI, create an architecture asset in the software and double-click it for editing. In the STAM (Systematic and Touchable Architecture Modelling) designer, find the EfficientDet node on the left panel, drag and drop it into the working area in the middle, and select the node for configuration. In the node configuration, set the height and width to the actual image size and use the "efficientdet-lite0" type, which is optimized for mobile and edge devices.

STAM designer view

Transfer learning

In this section, we will demonstrate how to train a model by using transfer learning from pre-trained weights in AI-UI.

Pre-trained weights are the weights of a network that was trained on a large and sufficiently general dataset (usually requiring vast hardware resources). In transfer learning, we train this model further for a given task, taking advantage of the knowledge gained by the previous training.

Model training

First, create an architecture node in the project and select the architecture created in the previous section, just as we did for data selection. Then create a training node that takes the augmented data as training set, the validation data as validation set and the architecture as inputs.

Model training workflow

In the training node configuration, set "Epochs" to 5, "Batch size" to 4 and "checkpoint" to "efficientnet-lite0" to enable transfer learning. Then click "proceed" to start the training process.
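Outside the GUI, a roughly equivalent training step could be sketched with the TensorFlow Lite Model Maker API; this is an assumption about comparable tooling, not AI-UI's internal implementation, and the dataset paths and label names are placeholders:

```python
# Hedged sketch: EfficientDet-Lite0 fine-tuned from pre-trained weights with
# TensorFlow Lite Model Maker. Paths and label names are illustrative.
from tflite_model_maker import object_detector

spec = object_detector.EfficientDetLite0Spec()  # pre-trained, edge-friendly model spec

train_data = object_detector.DataLoader.from_pascal_voc(
    "images/train", "annotations/train", label_map={1: "plastic", 2: "aluminium"})
val_data = object_detector.DataLoader.from_pascal_voc(
    "images/val", "annotations/val", label_map={1: "plastic", 2: "aluminium"})

model = object_detector.create(
    train_data, model_spec=spec, validation_data=val_data,
    epochs=5, batch_size=4)  # same epoch/batch settings as in the training node
```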

Epochs and batch size

Model evaluation

During the process, the training node shows different metrics in real time. When the training process is complete, the training node produces a model socket as output that represents the resulting model; after all, an AI model is nothing more than data: the parameters a neural network acquires through training. We can use the model to predict on the validation set with a predict node, and the result shows that the model's predictions are reliable.

To evaluate the model quantitatively, create a meanAP (mean average precision) node and feed it with the predicted data and the ground-truth data. Mean average precision is a common evaluation metric for object detection tasks; here, the result shows an AP above 80% and AP50 and AP75 above 95%, which means the model achieves very high accuracy.
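Continuing the hedged Model Maker sketch from the training section, the corresponding quantitative evaluation would report the same COCO-style metrics (AP, AP50, AP75):

```python
# Requires the `model` and `val_data` objects from the training sketch above.
metrics = model.evaluate(val_data)  # COCO-style evaluation on the validation set
print(metrics["AP"], metrics["AP50"], metrics["AP75"])
```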

Training history
Predict node
Mean AP node

Create AI-Service pipeline

To use the trained model, we need to create a production pipeline.

First of all, switch to the "API-Requests" tab and create a new one.

AI-service workflow

After creation, you can leave the workflow, return to the API-Requests overview and deploy your model.

AI-service asset

By clicking on the "Deploy" button, you reach an overview of your devices.

Deploy AI-service

First, make sure the Coral Dev Board is connected to the PC.

Once you have clicked on the "Deploy" button in "API-Requests", you can choose a model format and see whether the device is "online".

Deployment view

For our Coral Dev Board, we choose Edge TPU, because this is the hardware the model will run on. Protobuf or TensorFlow Lite won't work here, because the Edge TPU compiler compiles the model in a way that makes it usable on the TPU. This is also the reason why not every AI model can be deployed that easily on the Edge TPU (not all neural network layers are compilable).

Model runtime selection

If you choose "Edge TPU", another menu pops up.

Edge TPU configuration

Now you can choose the optimization methods and the type.

You also have to choose a dataset that was used for model training or testing. TFLite uses it to compile the model in the right way. Click on "Deploy" and the model is sent to the edge device.
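The reason a dataset is required here is that the Edge TPU only runs fully integer-quantized models, and post-training quantization needs a small "representative dataset" to calibrate the value ranges. A hedged sketch with the plain TensorFlow Lite converter (AI-UI may handle this differently; the export path, image size and random stand-in images are assumptions):

```python
# Hedged sketch: full-integer post-training quantization before Edge TPU compilation.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a few preprocessed training/validation images for calibration.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]  # stand-in for real images

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model")  # assumed export path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
# The resulting .tflite file is then passed through the Edge TPU compiler before deployment.
```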


In this section, we will demonstrate how to deploy this model into production with the Coral Dev Board. If you connect a Coral Camera to your Dev Board, you can access the inference video stream via the demo API. First, go to the "EDGE" tab.

Edge group list

Because we have not defined any group of devices, you won't find any entries here. If you click on the arrows, you reach a menu that shows all individual devices.

Edge device list

Check again whether it is online and double-click on it. You then reach a device manager that shows how many requests have been made via the API, and there you can choose among already deployed models on the fly.

Device view

You can monitor the health status and check the IP address. This is how you can run inference via the camera device in your browser.

Open a browser window and enter a URL following the pattern http://ip_of_edge_device:port_of_edge_device/demos/cam (with the IP address and port of your edge device).
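The same endpoint can also be opened programmatically; in this small sketch the IP and port are placeholders you would replace with the values shown in the device view:

```python
# Open the demo camera stream of the edge device in the default browser.
import webbrowser

ip_of_edge_device = "192.168.0.42"   # example only; use the IP from the device view
port_of_edge_device = 5000           # example only; use the port from the device view
webbrowser.open(f"http://{ip_of_edge_device}:{port_of_edge_device}/demos/cam")
```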

Demo view

AI for brake related, non-exhaust emissions

Artificial Neural Network regression models for the prediction of brake-related emissions


The growth of electric propulsion systems motivates the automotive industry to shift the focus from exhaust to non-exhaust emissions, with special attention to brake-related emissions. The literature lacks well-established approaches that describe the particulate emissions through reliable analytical correlations. Moreover, the mechanisms of brake particulate formation entail highly stochastic phenomena, which cannot be captured by means of traditional deterministic modelling tools. Machine learning algorithms have recently been used as an alternative method to seek a branched correlation between tribological properties (i.e. friction coefficient and wear rate), pad composition, and environmental and operating conditions. In this regard, the presented work focuses on the study and identification of sophisticated stochastic meta-models for the prediction of the number of emitted brake particles and the associated uncertainty. Specifically, artificial neural networks are developed and validated against brake emission data collected in real driving conditions at Technische Universität Ilmenau. The developed algorithms are intended for multiple uses: (i) in the course of real driving emissions (RDE) testing, to support the experimental data; (ii) while driving, to inform the driver about the brake-related emission levels; (iii) as an on-board optimisation tool that identifies the brake actuation rules to minimise the release of particulate emissions.
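As a purely illustrative sketch (not the models developed in the paper), such an ANN regression could be set up as a small fully connected network mapping operating and tribological features to a predicted particle number; the feature count, layer sizes and loss are assumptions:

```python
# Hedged sketch of an ANN regression model for brake-related particle emissions.
import tensorflow as tf

n_features = 6  # assumed inputs, e.g. speed, brake pressure, temperature, friction coefficient

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted number of emitted brake particles
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```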