Abstract
Manufacturing processes act on workpieces by exerting a sequence of varying control actions, resulting in a sequence of inner and outer workpiece states. The goal is to reach a final state with prescribed geometric and physical properties. Variations of the input and stochastic influences must therefore be compensated for during processing, while resource efficiency is maximized. For this purpose, self-optimizing Artificial Intelligence (AI) control methods were developed. The corresponding Markov Decision Problem is solved via Machine Learning methods. The cost trade-off between pre-production data sampling for learning the required models and initially low-quality production combined with learning from production experience is addressed by two corresponding approaches: 1) Deep neural autoencoders and state trackers deliver the input of an optimizing process controller, which is constructed via Approximate Dynamic Programming with integrated Neural Networks that represent the learned process dynamics. 2) An explorative AI approach based on Reinforcement Learning, which automatically learns an implicit model of the control policy from the experience gained with each processing result; this approach can also adapt to process drifts (e.g., from tool wear). Unlike classical control methods such as Model Predictive Control, the new approaches can compensate for input quality variations, stochastic state perturbations, and slowly varying conditions.
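
For concreteness, the Markov Decision Problem referenced above can be stated in its standard discounted-cost form; this is a minimal sketch with assumed notation (states $s_t$, control actions $a_t$, step cost $c$, discount factor $\gamma$), not taken from the source:

$$\pi^{*} = \arg\min_{\pi} \; \mathbb{E}\!\left[ \sum_{t=0}^{T} \gamma^{t}\, c(s_t, a_t) \;\middle|\; \pi \right], \qquad s_{t+1} \sim p(\,\cdot \mid s_t, a_t\,).$$

Both approaches approximate a solution to this problem: approach 1 learns the transition dynamics $p$ explicitly (as Neural Networks) and optimizes over them via Approximate Dynamic Programming, while approach 2 learns the policy $\pi$ implicitly from the costs observed in production.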

