
3D Printing Setpoints for Quality Assurance: A Deep Reinforcement Learning Approach
University of Cambridge

Introduction to the Problem

Quality control in extrusion-based additive manufacturing (AM) remains a critical concern, despite the technology's broad industrial adoption. Achieving consistent print quality is difficult because of the dynamic and intricate nature of the 3D printing process: factors such as material flow rate, nozzle temperature, and environmental conditions all influence the final product and can introduce defects or inconsistencies. Without robust quality assurance mechanisms in place, the reliability and precision of printed components are often compromised, resulting in material wastage, extended production times, and diminished performance in the final product. Real-time quality control during the 3D printing process is therefore paramount. Such systems would continuously monitor, adjust, and optimize critical parameters, paving the way for more efficient, reliable, and scalable manufacturing. As additive manufacturing continues to evolve, intelligent quality control methods will be essential to overcoming these challenges and maximizing the potential of this transformative technology.

Solution

We have developed a novel on-the-fly, closed-loop setpoint adjustment system for 3D printing that integrates a vision transformer (ViT)-based computer vision (CV) module with a deep Q-learning reinforcement learning (RL) controller. This combination addresses the dynamic challenges of the 3D printing process. Both components are trained separately offline and then integrated for real-time deployment on physical printers without further tuning.

The CV module extracts features related to extrusion quality, providing reliable feedback on extrusion conditions while minimizing the impact of uncertainty and variability in the printing environment. Trained with extensive domain randomization, the ViT model delivers high accuracy and strong generalization.

On the RL side, synthetic data is generated from a distribution of classification results and simulated temperature tracking, enabling flexible adjustment of the policy network's architecture and hyperparameters while narrowing the gap between simulated and real-world conditions. A four-phase reward shaping strategy, which uses progressively refined elliptical reward functions and accounts for classification inaccuracies, guides the policy network toward stable convergence despite environmental randomness. Setpoint adjustments, such as flow rate and nozzle temperature, are executed asynchronously to accommodate their differing response times. Because training is offline and deployment is zero-shot, the agent makes real-time decisions during live 3D printing operations. An example trajectory that corrects an initial over-extrusion error (300% flow rate and 230℃ nozzle temperature) is shown in Figure 1.

The system is scalable: additional process parameters can be integrated by expanding the reward function and modifying the action execution scheme. The method can also be applied to other additive manufacturing processes.
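To make the reward-shaping idea concrete, the sketch below illustrates an elliptical reward over the (flow rate, temperature) setpoint space, with a four-phase schedule that progressively shrinks the ellipse around the target. All targets, semi-axes, and phase values here are illustrative assumptions, not the values used in our system.

```python
def elliptical_reward(flow, temp, flow_target=100.0, temp_target=210.0,
                      a=50.0, b=15.0):
    """Reward peaks at the target setpoint and falls off with the
    normalized elliptical distance; a and b are the semi-axes for
    flow rate (%) and nozzle temperature (C), respectively.
    (Illustrative values only, not the paper's.)
    """
    d = ((flow - flow_target) / a) ** 2 + ((temp - temp_target) / b) ** 2
    # Inside the ellipse (d <= 1): full reward; outside: penalty grows
    # with distance, pushing the policy back toward the target region.
    return 1.0 if d <= 1.0 else -d

# Four training phases progressively tighten the ellipse (semi-axes
# for flow % and temperature C), refining control from coarse to fine.
phases = [(150.0, 45.0), (100.0, 30.0), (50.0, 15.0), (20.0, 5.0)]
```

With a schedule like this, early phases reward any movement into a broad neighborhood of the target, while later phases only reward near-exact setpoints, which is one simple way to stabilize convergence under noisy classification feedback.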


Figure 1. Setpoint adjustments with a start point simulating over-extrusion error. 
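The asynchronous execution of setpoint adjustments described above can be sketched as two concurrent loops running at different rates, since flow rate responds almost immediately while nozzle temperature follows a slow thermal lag. The printer interface, update rates, and lag constant below are hypothetical stand-ins, not our actual deployment code.

```python
import asyncio

class MockPrinter:
    """Hypothetical stand-in for a printer control interface."""
    def __init__(self):
        self.flow = 300.0   # flow rate (%), over-extrusion start
        self.temp = 230.0   # nozzle temperature (C)

    def set_flow(self, value):
        self.flow = value   # flow rate responds almost immediately

    def set_temp(self, value):
        # Temperature drifts toward the setpoint (first-order lag).
        self.temp += 0.3 * (value - self.temp)

async def flow_loop(printer, target, steps, period=0.01):
    # Fast loop: step the flow rate toward its target each tick.
    for _ in range(steps):
        printer.set_flow(printer.flow + 0.5 * (target - printer.flow))
        await asyncio.sleep(period)

async def temp_loop(printer, target, steps, period=0.03):
    # Slow loop: re-issue the temperature setpoint less frequently,
    # matching the nozzle's slower response time.
    for _ in range(steps):
        printer.set_temp(target)
        await asyncio.sleep(period)

async def main():
    p = MockPrinter()
    # Run both adjustment loops concurrently at their own rates.
    await asyncio.gather(flow_loop(p, 100.0, 30), temp_loop(p, 210.0, 10))
    return p

printer = asyncio.run(main())
```

Decoupling the two loops lets each setpoint be corrected on a timescale matched to its actuator dynamics, rather than forcing both onto a single control period.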
