Comet’s AI solution to porosity measurement

April 05, 2023 | Anton du Plessis, Muofhe Tshibalanganda

Everyone has heard of ChatGPT, OpenAI, and the many amazing AI tools now hitting the market. But have you tried Artificial Intelligence for the evaluation of CT data? Didn’t know you could use AI for image analysis? Read on.

This article demonstrates how Artificial Intelligence (AI) and Deep Learning (DL) work in practice and how well the technique really performs compared to traditional methods. We show how Dragonfly software is used to analyze porosity in an additively manufactured benchmark part, directly comparing the new deep learning AI method to traditional thresholding methods.

The problem

All manufactured parts have some porosity, and in the case of additive manufacturing these pores can be very small and hard to detect. CT is very useful for imaging these pores, but segmenting and analyzing them is always a challenge. Many algorithms can be used, and almost all methods involve some human influence in the process. Typically, a human operator needs to select a threshold below which the darker pore regions are selected and the denser material is not. This becomes challenging when pores are small relative to the pixel size or when image contrast is not ideal, so the segmentation can be incorrect and different human operators get different results. Using a pre-trained deep learning model removes this inherent human bias and at the same time allows automation, since the model can be called as part of a macro without any human input.
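To make the traditional approach concrete, here is a minimal sketch of threshold-based porosity segmentation using NumPy. The threshold value and the loading step are illustrative placeholders, not values from this study:

```python
import numpy as np

# Hypothetical grey-value threshold chosen by an operator; real values depend on the scan.
PORE_THRESHOLD = 18000

def segment_pores(volume: np.ndarray, threshold: float) -> np.ndarray:
    """Label every voxel darker than the threshold as pore (True)."""
    return volume < threshold

def porosity_percent(pore_mask: np.ndarray) -> float:
    """Porosity = pore voxels / total voxels, expressed as a percentage."""
    return 100.0 * pore_mask.sum() / pore_mask.size

# volume = ...  # reconstructed CT volume as a 3D array, however it is loaded
# pores = segment_pores(volume, PORE_THRESHOLD)
# print(f"Porosity: {porosity_percent(pores):.4f} %")
```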

The benchmark part used in this demonstration case study was manufactured by laser powder bed fusion (L-PBF) additive manufacturing in AlSi10Mg at the Fraunhofer IWS. It is a 40 x 40 x 30 mm benchmark design meant to investigate the manufacturability of small features with varying dimensions and orientations, as well as internal channels with complex design. CT scans were performed on a Comet Yxlon FF35 CT system at 200 kV and 200 µA with 35 µm resolution. The reconstructed data was loaded into Dragonfly, and basic visualization tools show the part and its internal “open” channels.

Fig. 1 + 2: CT surface view of the additively manufactured benchmark part (left) with complex geometry and internal channels (right)

Porosity identification and measurement

In CT scans, many details can be recognized inside parts – in this case, as seen in the figure below, the following can be identified easily: (A) unintentional porosity due to the AM process, (B) designed internal channels and (C) internal channels containing unmelted powder.

Fig. 3: Cross-sectional CT image showing porosity and channels without and with remaining powder

In this work, we are interested in the characterization of the porosity (A) that is formed during the manufacturing process. To directly compare the manual thresholding method to the deep learning method, we select 10 regions of interest (ROIs) for analysis. One ROI is shown in the image below.

Fig. 4 + 5: Selection of an ROI as a subset for porosity analysis
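For readers who want to reproduce this kind of workflow outside Dragonfly, extracting an ROI is simply cropping a sub-volume from the 3D array. The coordinates below are hypothetical:

```python
import numpy as np

def extract_roi(volume: np.ndarray, z0: int, y0: int, x0: int, size: int) -> np.ndarray:
    """Return a cubic sub-volume of edge length `size` voxels starting at (z0, y0, x0)."""
    return volume[z0:z0 + size, y0:y0 + size, x0:x0 + size]

# Hypothetical origins (voxel coordinates) of ROIs placed in bulk material,
# away from the designed channels.
roi_origins = [(120, 200, 200), (120, 200, 400), (350, 200, 200)]  # ... 10 in total
# rois = [extract_roi(volume, *origin, size=128) for origin in roi_origins]
```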

For each ROI, we apply manual thresholding since the automatic Otsu* threshold fails – the pores are too small and too few to allow automated thresholding. A cross-sectional example of a failed Otsu threshold and manual threshold adjustment is shown in the figure below.

*Otsu’s method, named after Nobuyuki Otsu, performs automatic image thresholding

Fig. 6 + 7: Traditional segmentation approaches: Otsu threshold in the ROI (left) not working correctly due to very few and small pores; manually adjusted threshold (right)
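The reason Otsu’s method struggles here can be seen in the grey-value histogram: with only a handful of tiny pores, the histogram is essentially unimodal, and the automatically chosen threshold lands inside the material peak. A small sketch using scikit-image, assuming an ROI is available as a NumPy array, illustrates the two approaches:

```python
import numpy as np
from skimage.filters import threshold_otsu

def otsu_pore_mask(roi: np.ndarray) -> np.ndarray:
    """Otsu picks the threshold that best separates two histogram modes; with very
    few, very small pores the histogram is nearly unimodal and the threshold often
    falls inside the material grey values, over-segmenting the part."""
    return roi < threshold_otsu(roi)

def manual_pore_mask(roi: np.ndarray, threshold: float) -> np.ndarray:
    """Operator-chosen threshold: subjective, and therefore operator-dependent."""
    return roi < threshold
```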

In the case of manual thresholding, there is inherent bias and error induced by human operators, which becomes more pronounced with smaller pores or noisier data. As a demonstration, we had different operators repeat the 10 measurements; the results of the manual quantification are shown below as porosity % values in each ROI.

Fig. 8: Differences in porosity values reported by different human operators, given the same data and the same 10 ROIs
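The variability comes from nothing more than the chosen grey value: each operator picks a slightly different threshold, and the reported porosity % shifts accordingly. A sketch of that effect, with purely hypothetical threshold values:

```python
import numpy as np

def porosity_percent(pore_mask: np.ndarray) -> float:
    """Porosity = pore voxels / total voxels, as a percentage."""
    return 100.0 * pore_mask.sum() / pore_mask.size

# Purely hypothetical operator-chosen grey-value thresholds for the same ROI.
operator_thresholds = {"operator_1": 17500, "operator_2": 18200, "operator_3": 16900}

# roi = ...  # one of the 10 ROIs
# for name, t in operator_thresholds.items():
#     print(name, f"{porosity_percent(roi < t):.4f} %")  # each operator reports a different value
```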

By applying an AI/deep learning model, this human bias is removed. Dragonfly software is well known for its AI tools, and in this case a pre-trained Additive Manufacturing porosity model is used “out of the box”. This model was trained previously on more than 50 datasets, covering most additive manufacturing porosity types as seen in typical laboratory CT image data. The model is designed so that the user can add more training data for a better match to the specific images of interest, but in this case it is used successfully without any additional training. The application of the model to the complete sample is shown below.

Fig. 9: Deep learning segmentation and analysis of porosity in full benchmark part
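Inside Dragonfly the model is applied with a few clicks (or from a macro), so no code is needed. Purely for illustration, applying a pre-trained 2D segmentation network slice by slice looks conceptually like the following PyTorch sketch; the model file, normalisation and probability cut-off are placeholders, not Dragonfly’s actual implementation:

```python
import numpy as np
import torch

# Placeholder: a pre-trained porosity segmentation network saved as TorchScript.
model = torch.jit.load("am_porosity_unet.pt").eval()

def segment_volume(volume: np.ndarray) -> np.ndarray:
    """Apply the 2D model slice by slice and return a binary pore mask."""
    mask = np.zeros(volume.shape, dtype=bool)
    with torch.no_grad():
        for z in range(volume.shape[0]):
            slice_ = volume[z].astype(np.float32)
            slice_ = (slice_ - slice_.mean()) / (slice_.std() + 1e-6)  # simple normalisation
            x = torch.from_numpy(slice_)[None, None]                   # shape (1, 1, H, W)
            probs = torch.sigmoid(model(x))[0, 0].numpy()
            mask[z] = probs > 0.5                                       # pore probability cut-off
    return mask

# pore_mask = segment_volume(volume)
# print("Pore voxels:", int(pore_mask.sum()))
```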

One question that often comes up is: how repeatable is a deep learning model? Well, in this case the same model was applied by different operators, and multiple times by the same operator. The result was identical in every case: there is no deviation at all, the number of voxels classified as porosity is the same each time, so no operator-dependent error is introduced.

Fig. 10: Deep learning porosity analysis performed by 5 users on 10 ROIs
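Because inference is deterministic, repeatability can be verified simply by comparing the masks (or just the pore voxel counts) from repeated runs. Continuing the hypothetical sketch above:

```python
import numpy as np

# `segment_volume` and `roi` as in the sketch above (placeholders).
masks = [segment_volume(roi) for _ in range(3)]           # repeated inference on the same ROI
voxel_counts = [int(m.sum()) for m in masks]

assert all(np.array_equal(masks[0], m) for m in masks), "masks differ between runs"
print("Pore voxel count in every run:", voxel_counts)     # the same number each time
```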

Conclusion

A benchmark additively manufactured part was analyzed for porosity, and a deep learning model in the Dragonfly software was used to segment and quantify the porosity. This was compared to the traditional manual thresholding approach, which involves some human error. The deep learning method removes user error and improves reliability for this challenging problem, while also allowing analyses to be automated. The model used here was a so-called U-net** with a depth of 5 and was trained previously on >50 datasets. It was used “out of the box” here (and is included in Dragonfly) but could also be trained further, for example to make it more robust on a wider variety of datasets and on noisier data.

**U-net, a popular convolutional neural network architecture in deep learning for fast and precise image segmentation
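For readers curious what a “U-net with a depth of 5” looks like in practice, below is a minimal, generic PyTorch sketch of such an encoder-decoder with five resolution levels joined by skip connections. The channel counts are assumptions for illustration; this is not the exact network shipped with Dragonfly:

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU: the basic U-net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    """Generic U-net: `depth` resolution levels with skip connections."""
    def __init__(self, in_ch: int = 1, out_ch: int = 1, depth: int = 5, base: int = 32):
        super().__init__()
        chs = [base * 2 ** i for i in range(depth)]           # e.g. 32, 64, 128, 256, 512
        self.encoders = nn.ModuleList(
            [conv_block(in_ch if i == 0 else chs[i - 1], chs[i]) for i in range(depth)]
        )
        self.pool = nn.MaxPool2d(2)
        self.upconvs = nn.ModuleList(
            [nn.ConvTranspose2d(chs[i], chs[i - 1], kernel_size=2, stride=2)
             for i in range(depth - 1, 0, -1)]
        )
        self.decoders = nn.ModuleList(
            [conv_block(chs[i - 1] * 2, chs[i - 1]) for i in range(depth - 1, 0, -1)]
        )
        self.head = nn.Conv2d(chs[0], out_ch, kernel_size=1)  # per-pixel pore logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:
                skips.append(x)       # keep features for the skip connection
                x = self.pool(x)      # halve the resolution
        for up, dec, skip in zip(self.upconvs, self.decoders, reversed(skips)):
            x = torch.cat([up(x), skip], dim=1)               # upsample and concatenate skip
            x = dec(x)
        return self.head(x)

# model = UNet(depth=5)
# logits = model(torch.randn(1, 1, 256, 256))  # input size must be divisible by 2**(depth-1)
```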
