Advancing Wound Filling Extraction on 3D Faces: An Auto-Segmentation and Wound Face Regeneration Approach


Duong Q. Nguyen, Thinh D. Le, Phuong D. Nguyen, Nga T. K. Le and H. Nguyen-Xuan⋆

1 Department of Mathematics and Statistics, Quy Nhon University, Quy Nhon City, 55100, Viet Nam

2 Applied Research Institute for Science and Technology, Quy Nhon University, Quy Nhon City, 55100, Viet Nam

3 CIRTECH Institute, HUTECH University, Ho Chi Minh City, 72308, Viet Nam

ABSTRACT Facial wound segmentation plays a crucial role in preoperative planning and optimizing patient outcomes in various medical applications. In this paper, we propose an efficient approach for automating 3D facial wound segmentation using a two-stream graph convolutional network. Our method leverages the Cir3D-FaIR dataset and addresses the challenge of data imbalance through extensive experimentation with different loss functions. To achieve accurate segmentation, we conducted thorough experiments and selected a high-performing model from the trained models. The selected model demonstrates exceptional segmentation performance for complex 3D facial wounds. Furthermore, based on the segmentation model, we propose an improved approach for extracting 3D facial wound fillers and compare it to the results of the previous study. Our method achieved a remarkable accuracy of 0.9999993 on the test set, surpassing the performance of the previous method. From this result, we use 3D printing technology to illustrate the shape of the wound filling. The outcomes of this study have significant implications for physicians involved in preoperative planning and intervention design. By automating facial wound segmentation and improving the accuracy of wound-filling extraction, our approach can assist in carefully assessing and optimizing interventions, leading to enhanced patient outcomes. Additionally, it contributes to advancing facial reconstruction techniques by utilizing machine learning and 3D bioprinting for printing skin tissue implants. Our source code is available at https://github.com/SIMOGroup/WoundFilling3D.

KEYWORDS 3D printing technology; face reconstruction; 3D segmentation; 3D printed model

1 Introduction

Nowadays, people lose parts of their bodies to traffic accidents, occupational accidents, birth defects, and disease. Among these, defects of the head and face account for a relatively high proportion [1]. Wound regeneration is an important aspect of medical care, aimed at restoring damaged tissues and promoting wound healing in patients with complex wounds [2]. However, the treatment of craniofacial and facial defects can be challenging due to the many specific requirements of the tissue and the complexity of the anatomical structure of that region [3]. Traditional methods used for wound reconstruction often involve grafting techniques using autologous grafts (from the patient's own body) or allogeneic grafts (from a donor) [4]. However, these methods have limitations such as availability, donor morbidity, and potential for rejection. In recent years, the development of additive manufacturing technology has promoted the creation of advanced techniques in several healthcare industries [5–7]. The implementation of 3D printing technology in the preoperative phase enables clinicians to establish a meticulous surgical strategy by generating an anatomical model that accurately reflects the patient's unique anatomy. This approach facilitates the development of customized drilling and cutting instructions, precisely tailored to the patient's specific anatomical features, thereby accommodating the potential incorporation of a pre-formed implant [8]. Moreover, the integration of 3D printing technology and biomaterials plays a crucial role in advancing remedies within the field of regenerative medicine, addressing the pressing demand for novel therapeutic modalities [9–12]. The significance of wound reconstruction using 3D bioprinting in the domain of regenerative medicine is underscored by several key highlights, as outlined below:

- Customization and Precision: 3D bioprinting allows for the creation of patient-specific constructs, tailored to match the individual's wound geometry and requirements. This level of customization ensures a better fit and promotes improved healing outcomes.

- Tissue Regeneration: The ability to fabricate living tissues using 3D bioprinting holds great promise for wound reconstruction. The technique enables the deposition of cells and growth factors in a controlled manner, facilitating tissue regeneration and functional restoration [13,14].

- Reduced Donor Dependency: The scarcity of donor tissues and the associated risks of graft rejection are significant challenges in traditional wound reconstruction methods. 3D bioprinting can alleviate these limitations by providing an alternative approach that relies on the patient's own cells or bioinks derived from natural or synthetic sources [15].

- Complex Wound Healing: Certain wounds, such as large burns, chronic ulcers, or extensive tissue loss, pose significant challenges to conventional wound reconstruction methods. 3D bioprinting offers the potential to address these complex wound scenarios by creating intricate tissue architectures that closely resemble native tissues.

- Accelerated Healing: By precisely designing the structural and cellular components of the printed constructs, 3D bioprinting can potentially enhance the healing process. This technology can incorporate growth factors, bioactive molecules, and other therapeutic agents, creating an environment that stimulates tissue regeneration and accelerates wound healing [16].

Consequently, 3D bioprinting technology presents a promising avenue for enhancing craniofacial reconstruction modalities in individuals afflicted by head trauma.

Wound dimensions, including length, width, and depth, are crucial parameters for assessing wound healing progress and guiding appropriate treatment interventions [17]. For effective facial reconstruction, measuring the dimensions of a wound accurately can pose significant challenges in clinical and scientific settings [18]. Firstly, wound irregularity presents a common obstacle. Wounds rarely exhibit regular shapes, often being characterized by uneven edges, irregular contours, or irregular surfaces. Such irregularity complicates defining clear boundaries and determining consistent reference points for measurement. Secondly, wound depth measurement proves challenging due to undermined tissue or tunnels. These features, commonly found in chronic or complex wounds, can extend beneath the surface, making it difficult to assess the wound's true depth accurately. Furthermore, the presence of necrotic tissue or excessive exudate can obscure the wound bed, further hindering depth measurement. Additionally, wound moisture and fluid dynamics pose significant difficulties. Wound exudate, which may vary in viscosity and volume, can accumulate and distort measurements. Excessive moisture or the presence of dressing materials can alter the wound's appearance, potentially leading to inaccurate measurements. Moreover, the lack of standardization in wound measurement techniques and tools adds to the complexity.

Currently, deep learning has emerged as a predominant technique for wound image segmentation, as well as for various other applications in medical imaging and computer vision [19–21]. Based on the characteristics of the input data [22,23], three deep learning approaches are used for segmentation and wound measurement, as shown in Fig. 1. The study of Anisuzzaman et al. [23] presented case studies of these three approaches. The methods used to segment the wound based on the characteristics of the input data are as follows:

- 2D image segmentation: Deep learning methods in 2D for wound segmentation offer several advantages. Firstly, they are a well-established and widely used technique in the field. Additionally, large annotated 2D wound segmentation datasets are available, facilitating model training and evaluation. These methods exhibit efficient computational processing compared to their 3D counterparts, enabling faster inference times and improved scalability. Furthermore, deep learning architectures, such as convolutional neural networks, can be leveraged for effective feature extraction, enhancing the accuracy of segmentation results. However, certain disadvantages are associated with deep learning methods in 2D for wound segmentation. One limitation is the lack of depth information, which can restrict segmentation accuracy, particularly for complex wounds with intricate shapes and depth variations. Additionally, capturing the wound's full spatial context and shape information can be challenging in 2D, as depth cues are not explicitly available. Furthermore, these methods are susceptible to variations in lighting conditions, image quality, and perspectives, which can introduce noise and affect the segmentation performance.

- 2D to 3D reconstruction: By incorporating depth information, the conversion to 3D enables a better capture of wounds' shape and spatial characteristics, facilitating a more comprehensive analysis. Moreover, there is a potential for improved segmentation accuracy compared to 2D methods, as the additional dimension can provide richer information for delineating complex wound boundaries. Nevertheless, certain disadvantages are associated with converting from 2D to 3D for wound segmentation. The conversion process itself may introduce artifacts and distortions in the resulting 3D representation, which can impact the accuracy of the segmentation. Additionally, this approach necessitates additional computational resources and time due to the complexity of converting 2D data into a 3D representation [24]. Furthermore, the converted 3D method may not completely overcome the limitations of the 2D method.

- 3D mesh or point cloud segmentation: Directly extracting wound segmentation from 3D data (mesh/point cloud) offers several advantages. One notable advantage is the retention of complete 3D information on the wound, enabling accurate and precise segmentation. By working directly with the 3D data, this method effectively captures the wound's intricate shape, volume, and depth details, surpassing the capabilities of both 2D approaches and converted 3D methods. Furthermore, the direct utilization of 3D data allows for a comprehensive analysis of the wound's spatial characteristics, facilitating a deeper understanding of its structure and morphology.

Figure 1: Methods of using deep learning in wound measurement by segmentation

Hence, employing a 3D (mesh or point cloud) segmentation method on specialized 3D data, such as those obtained from 3D scanners or depth sensors, can significantly improve accuracy compared to the other two methods. The use of specialized 3D imaging technologies enables the capture of shape, volume, and depth details with higher fidelity and accuracy [25]. Consequently, the segmentation results obtained from this method are expected to provide a more precise delineation of wound boundaries and a more accurate assessment of wound characteristics. Therefore, this method can enhance wound segmentation accuracy and advance wound assessment techniques.

Besides, facial wounds and defects present unique challenges in reconstructive surgery, requiring accurate localization of the wound and precise estimation of the defect area [26]. The advent of 3D imaging technologies has revolutionized the field, enabling detailed capture of facial structures. However, reconstructing a complete face from a 3D model with a wound remains a complex task that demands advanced computational methods. Accurately reconstructing facial defects is crucial for surgical planning, as it provides essential information for appropriate interventions and enhances patient outcomes [27]. Several prominent studies address this problem: Sutradhar et al. [28] utilized a unique approach based on topology optimization to create patient-specific craniofacial implants using 3D printing technology; Nuseir et al. [29] proposed direct 3D printing for the fabrication of a pliable nasal prosthesis, accompanied by an optimized digital workflow spanning from the scanning process to the achievement of an appropriate fit; other prominent studies are presented in surveys such as [30,31]. However, these methods often require substantial manual intervention and are prone to subjectivity and variability. To solve this problem, the method proposed in [32,33] leverages the power of modeling [34] to automate the process of 3D facial reconstruction with wounds, minimizing human error and improving efficiency. To extract the filling for the wound, the study [32] proposed a method that uses the reconstructed 3D face together with the patient's 3D face without the wound. This method is called outlier extraction by the authors. These advancements can be leveraged to expedite surgical procedures, enhance precision, and augment patient outcomes, thereby propelling the progression of technology-driven studies on facial tissue reconstruction, particularly in 3D bioprinting. However, this method still has some limitations, as follows:

- The method of extracting the filling for the wound after 3D facial reconstruction has not yet reached high accuracy.

- In order to extract the wound filling, the method proposed in [32] necessitated the availability of the patient's pre-injury 3D facial ground truth. This requirement represents a significant limitation of the proposed wound-filling extraction approach, as obtaining the patient's pre-injury 3D facial data is challenging in real-world clinical settings.

To overcome these limitations, the present study aims to address the following objectives:

- Train an automatic 3D facial wound segmentation model using a variety of appropriate loss functions to address the data imbalance problem.

- Propose an efficient approach to extract the 3D facial wound filling by leveraging the face regeneration model from the study [32] combined with the wound segmentation model.

- Evaluate the experimental results of our proposed method and of the method described in the study by Nguyen et al. [32]. One case study will be selected and illustrated through 3D printing.

2 Methodology

Research reported by Nguyen et al. [32] proposed a method to extract the filling for the wound for 3D face reconstruction. However, as analyzed in Section 1, the study [32] still has certain limitations. To address those limitations, we propose a unique approach to 3D face reconstruction combined with segmentation on injured 3D face data. This section introduces the structure of the 3D segmentation model and presents our proposed method.

2.1 Architecture of Two-Stream Graph Convolutional Network

Recent years have witnessed remarkable advancements in deep learning research within the domain of 3D shape analysis, as highlighted by Ioannidou et al. [35]. This progress has catalyzed the investigation of translation-invariant geometric attributes extracted from mesh data, facilitating the precise labeling of vertices or cells on 3D surfaces. Along with the development of 3D shape analysis, the field of 3D segmentation has advanced tremendously and brought about many applications across various fields, including computer vision and medical imaging [36]. Geometrically grounded approaches typically leverage pre-defined geometric attributes, such as 3D coordinates, normal vectors, and curvatures, to differentiate between distinct mesh cells. Several noteworthy models have emerged, including PointNet [37], PointNet++ [38], PointCNN [39], MeshSegNet [40], and DGCNN [41]. While these methods have demonstrated efficiency, they often employ a straightforward strategy of concatenating diverse raw attributes into an input vector for training a single segmentation network. Consequently, this strategy can generate isolated erroneous predictions. The root cause lies in the inherent dissimilarity between various raw attributes, such as cell spatial positions (coordinates) and cell morphological structures (normal vectors), which leads to confusion when they are merged as input. Therefore, the seamless fusion of their complementary insights into comprehensive high-level multi-view representations is hindered. Furthermore, the use of low-level predetermined attributes in these geometry-centric techniques is susceptible to significant variations. To address this challenge, the two-stream graph convolutional network (TSGCNet) [42] for 3D segmentation emerges as an exceptional technique, showcasing outstanding performance and potential in the field. This network harnesses the powerful geometric features available in the mesh to execute segmentation tasks. Consequently, in this study, we have selected this model as the focal point to investigate its applicability and effectiveness in the context of our research objectives. In [42], the proposed methodology employs two parallel streams, namely the C-stream and the N-stream. TSGCNet incorporates input-specific graph-learning layers to extract high-level geometric representations from the coordinates and normal vectors. Subsequently, the features obtained from these two complementary streams are fused in the feature-fusion branch to facilitate the acquisition of discriminative multi-view representations, specifically for segmentation purposes. An overview of the architecture of the two-stream graph convolutional network is shown in Fig. 2.

The C-stream is designed to capture the essential topological characteristics derived from the coordinates of all vertices of a mesh. The C-stream receives an input denoted as $\mathbf{F}_c^{0}$, which is an $M \times 12$ matrix representing the coordinates ($M$ is the number of mesh cells). Each row of this matrix represents a node, and the columns correspond to the coordinates of the cell in three-dimensional space. This stream incorporates an input-transformer module to align the input data with a canonical space. This module comprises Multilayer Perceptrons (MLPs) shared across nodes, as previously described by Charles et al. [37]. The C-stream progressively integrates a consecutive set of graph-attention layers along the forward path to systematically exploit multi-scale geometric attributes derived from the coordinates of the mesh. While the C-stream can capture general geometric information, it lacks the sensitivity to distinguish subtle boundaries between adjacent nodes with different classes (e.g., the boundary between the injured and non-injured areas). To overcome this limitation, the N-stream is designed to extract boundary representations based on the normal vectors associated with the nodes. Unlike the C-stream, the N-stream uses graph max-pooling layers. This differentiation is essential because the normal vectors encompass geometric information that differs from the coordinates of the nodes. Since the normal vector carries only orientation information, the N-stream uses max-pooling layers instead of the graph-attention layers of the C-stream.
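To make the input construction concrete, the following sketch builds the two M × 12 input matrices from a triangular mesh with trimesh. It assumes the TSGCNet convention of stacking the three vertex coordinates plus the cell centroid for the C-stream and the three vertex normals plus the cell normal for the N-stream; the exact channel layout of the released implementation may differ, and the function name is ours.

```python
import numpy as np
import trimesh

def build_cell_inputs(mesh: trimesh.Trimesh):
    """Build per-cell 12-D inputs for the C-stream and N-stream (assumed layout)."""
    faces = mesh.faces                        # (M, 3) vertex indices per cell
    tri_xyz = mesh.vertices[faces]            # (M, 3, 3) vertex coordinates
    centroids = tri_xyz.mean(axis=1)          # (M, 3) cell centroids

    tri_normals = mesh.vertex_normals[faces]  # (M, 3, 3) vertex normals
    cell_normals = mesh.face_normals          # (M, 3) cell normals

    # Flatten to M x 12 matrices: one row per mesh cell.
    F_c = np.concatenate([tri_xyz.reshape(-1, 9), centroids], axis=1)
    F_n = np.concatenate([tri_normals.reshape(-1, 9), cell_normals], axis=1)
    return F_c.astype(np.float32), F_n.astype(np.float32)
```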

Figure 2: Architectural overview of the TSGCNet model for segmentation on injured 3D face data

The TSGCNet model employs three graph-based layers in each stream to extract features. Subsequently, the multi-scale features from these layers are concatenated within each stream as follows:

$$\mathbf{F}_c = \left[\mathbf{F}_c^{(1)} \,\big\|\, \mathbf{F}_c^{(2)} \,\big\|\, \mathbf{F}_c^{(3)}\right], \qquad \mathbf{F}_n = \left[\mathbf{F}_n^{(1)} \,\big\|\, \mathbf{F}_n^{(2)} \,\big\|\, \mathbf{F}_n^{(3)}\right],$$

where $\|$ denotes channel-wise concatenation and $\mathbf{F}_c^{(l)}$, $\mathbf{F}_n^{(l)}$ are the features produced by the $l$-th layer of each stream.

In order for the model to comprehensively understand the 3D mesh structure, Zhang et al. [42] combined $\mathbf{F}_c$ and $\mathbf{F}_n$ in the feature-fusion branch, which can be expressed as:

$$\mathbf{P} = \operatorname{softmax}\!\left(\operatorname{MLP}\!\left(\left[\mathbf{F}_c \,\big\|\, \mathbf{F}_n\right]\right)\right),$$

where $\mathbf{P} \in \mathbb{R}^{M \times C}$ is the output probability matrix. Each row denotes the probabilities of a specific cell belonging to the $C$ different classes.
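The fusion step can be sketched as a small PyTorch module. This is a minimal illustration: the channel dimensions and hidden size below are placeholders, not the exact sizes of the TSGCNet feature-fusion branch.

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Fuse C-stream and N-stream features into per-cell class probabilities."""

    def __init__(self, c_dim=192, n_dim=192, hidden=256, num_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(c_dim + n_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, F_c, F_n):
        # F_c: (M, c_dim), F_n: (M, n_dim) multi-scale features per cell
        fused = torch.cat([F_c, F_n], dim=-1)
        logits = self.mlp(fused)
        return torch.softmax(logits, dim=-1)  # (M, num_classes) = P
```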

2.2 Filling Extraction

We utilize the TSGCNet model, as presented in study [42], to perform segmentation of the wound area on the patient's 3D face. This model demonstrates a remarkable capacity for accurately discriminating boundaries between regions harboring distinct classes. Our dataset comprises two distinct classes, namely facial abnormalities and normal regions. Due to the significantly smaller proportion of facial wounds compared to the normal area, an appropriate training strategy is necessary to address the data imbalance phenomenon effectively. To address this challenge, we utilize specific loss functions that effectively handle data imbalance within the semantic segmentation task. These functions include focal loss [43], dice loss [44], cross-entropy loss, and weighted cross-entropy loss [45].

1) Focal loss is defined as:

$$\mathcal{L}_{\text{focal}} = -\alpha_t \left(1 - p_t\right)^{\gamma} \log\left(p_t\right),$$

where $p_t$ represents the predicted probability of the true class; $\alpha_t$ is the balancing factor that assigns different weights to different classes; and $\gamma$ is the focusing parameter that modulates the rate at which easy and hard examples are emphasized. Focal loss effectively reduces the loss contribution from well-classified examples and focuses on samples that are difficult to classify correctly. This helps handle class imbalance and improves the model's performance on minority classes.

2) Dice loss, also known as the Sørensen–Dice coefficient loss, is defined as:

$$\mathcal{L}_{\text{dice}} = 1 - \frac{2\sum_{i=1}^{N} p_i\, y_i + \epsilon}{\sum_{i=1}^{N} p_i + \sum_{i=1}^{N} y_i + \epsilon},$$

where $p$ represents the predicted probability or output of the model; $y$ is the ground truth or target labels; $N$ is the number of elements in the predicted and ground truth vectors; and $\epsilon$ is a small constant added to the denominator to avoid division by zero.

3) Cross-entropy segmentation loss is defined as:

$$\mathcal{L}_{\text{CE}} = -\frac{1}{M}\sum_{i=1}^{M}\sum_{c=1}^{C} y_{ic}\, \log\left(p_{ic}\right),$$

where $y_{ic}$ denotes the ground truth label for the $i$-th sample and $c$-th class; $p_{ic}$ represents the predicted probability for the $i$-th sample and $c$-th class; $M$ is the total number of samples; and $C$ is the number of classes.

4) Weighted cross-entropy loss is defined as:

$$\mathcal{L}_{\text{WCE}} = -\frac{1}{M}\sum_{i=1}^{M}\sum_{c=1}^{C} w_{c}\, y_{ic}\, \log\left(p_{ic}\right),$$

where $w_c$ represents the weight assigned to each point based on its class.
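A minimal PyTorch sketch of these four loss options is given below, assuming per-cell logits of shape (M, C) and integer labels of shape (M,). The function names and the class weights for the [normal, wound] classes are illustrative placeholders, not the values tuned in our experiments.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal loss: down-weights well-classified cells via (1 - p_t)^gamma."""
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t per cell
    pt = log_pt.exp()
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()

def dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss computed from softmax probabilities and one-hot labels."""
    num_classes = logits.shape[-1]
    probs = torch.softmax(logits, dim=-1)
    onehot = F.one_hot(targets, num_classes).float()
    intersection = (probs * onehot).sum(dim=0)
    denominator = probs.sum(dim=0) + onehot.sum(dim=0)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice.mean()

# Plain and class-weighted cross-entropy come directly from PyTorch;
# the weights below are placeholders for the [normal, wound] classes.
def cross_entropy(logits, targets):
    return F.cross_entropy(logits, targets)

def weighted_cross_entropy(logits, targets, weights=(1.0, 50.0)):
    w = torch.tensor(weights, device=logits.device, dtype=logits.dtype)
    return F.cross_entropy(logits, targets, weight=w)
```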

After identifying the optimal wound segmentation model, we proceed to extract the mesh containing the area that needs filling on the 3D face. Let $v(x, y, z) \in \mathcal{M}(V, F)$ denote the vertices of the mesh containing the wound, where $V$ and $F$ are the sets of vertices and faces of the mesh, respectively. By leveraging the 3D facial regeneration model for wound treatment trained in [32] ($G$), along with a 3D facial wound segmentation model ($S$), we can extract a mesh, denoted $\mathcal{M}_{\text{extracted}}$, which contains the region to be filled for face reconstruction. Specifically, we utilize the results of 3D facial wound segmentation to extract the coordinates and face indices of the damaged area on the mesh. Subsequently, we create a mesh ($\mathcal{M}_{\text{seg}}$) that encompasses the injured area on the 3D face based on the mesh segmentation. Concurrently, we extract the surface of the damaged area on the 3D face ($\mathcal{M}_{\text{surface}}$), reconstructed from the model presented in the study [32]. Finally, we obtain the wound-filling mesh on the 3D face by combining the meshes $\mathcal{M}_{\text{seg}}$ and $\mathcal{M}_{\text{surface}}$ into a single watertight mesh, denoted $\mathcal{M}_{\text{extracted}}$. Our proposal is described in detail in Algorithm 2 and illustrated in Figs. 3 and 4.
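A simplified sketch of this extraction step using trimesh is shown below. It assumes the injured and regenerated meshes share cell ordering (as in Cir3D-FaIR) and that the segmentation model returns the indices of wound cells; the final merge-and-close step is indicative only and does not reproduce Algorithm 2 verbatim.

```python
import trimesh

def extract_filling(injured_mesh, regen_mesh, wound_cell_idx):
    """Combine the wound surface and the regenerated surface into one filling mesh."""
    # M_seg: wound surface cut out of the injured face
    m_seg = injured_mesh.submesh([wound_cell_idx], append=True)
    # M_surface: same region taken from the regenerated (wound-free) face
    m_surface = regen_mesh.submesh([wound_cell_idx], append=True)

    # Combine both surfaces and close remaining gaps to approximate M_extracted
    m_extracted = trimesh.util.concatenate([m_seg, m_surface])
    m_extracted.merge_vertices()
    trimesh.repair.fill_holes(m_extracted)
    return m_extracted
```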

Figure 3: Several filling extraction results

Figure 4: An illustration of the wound-filling extraction algorithm

3 Experimental Results

3.1 Dataset Description

We utilize a dataset of 3D faces with craniofacial injuries called Cir3D-FaIR [32]. The dataset used in this study is generated through simulations within a virtual environment, replicating realistic facial wound locations. A set of 3,678 3D mesh representations of uninjured human faces is employed to simulate facial wounds. Specifically, each face in the dataset is simulated with ten distinct wound locations. Consequently, the dataset comprises 40,458 human head meshes, encompassing uninjured faces and wounds in various positions. In practice, the acquired data undergoes mesh processing that reduces each sample to 15,000 mesh cells, eliminating redundant information while preserving the original topology. Each 3D face mesh therefore consists of 15,000 cells and is labeled according to the location of wounds on the face, specifically indicating the presence of the wounds. This simulation dataset has been evaluated by expert physicians to assess the complexity associated with the injuries. Fig. 5 showcases several illustrative examples of typical cases from the dataset. The dataset is randomly partitioned into distinct subsets, with 80% of the data assigned to training and 20% designated for validation. The objective is to perform automated segmentation of the 3D facial wound region and integrate it with the findings of Nguyen et al. [32] regarding defect face reconstruction to extract the wound-filling part specific to the analyzed face.
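The preprocessing and split described above can be sketched as follows. Quadric decimation is one plausible way to reach the 15,000-cell budget (the decimation call assumes a recent trimesh build), the dataset authors may have used a different simplifier, and the random seed is an arbitrary placeholder.

```python
import numpy as np
import trimesh

def preprocess(mesh_path, target_cells=15_000):
    """Reduce a head mesh to the fixed 15,000-cell budget used in Cir3D-FaIR."""
    mesh = trimesh.load(mesh_path, process=False)
    if len(mesh.faces) > target_cells:
        # One possible simplifier; API names vary across trimesh versions.
        mesh = mesh.simplify_quadric_decimation(face_count=target_cells)
    return mesh

# 80/20 train/validation split over the 40,458 meshes (seed is arbitrary).
rng = np.random.default_rng(seed=0)
indices = rng.permutation(40_458)
split = int(0.8 * len(indices))
train_idx, val_idx = indices[:split], indices[split:]
```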

Figure 5: Illustrations of the face dataset with wounds

3.2 Experimental Setup

The wound area segmentation model on the patient's 3D face is trained through experiments with different loss functions to select the most effective model, as outlined in Algorithm 1. The training process was conducted on a single NVIDIA Quadro RTX 6000 GPU over the course of 50 epochs. The Adam optimizer was employed with a mini-batch size of 4. The initial learning rate was set to 1e-3 and decayed by a factor of 0.5 every 20 epochs.
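This optimization schedule corresponds to the PyTorch setup sketched below. The model and data loader are stand-ins (a linear layer over random tensors); only the optimizer, learning-rate schedule, batch size, and epoch count mirror the text.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data standing in for TSGCNet and the Cir3D-FaIR loader.
model = nn.Linear(24, 2)
criterion = nn.CrossEntropyLoss()
dummy = TensorDataset(torch.randn(16, 24), torch.randint(0, 2, (16,)))
train_loader = DataLoader(dummy, batch_size=4, shuffle=True)  # mini-batch size of 4

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(50):
    for cells, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(cells), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # halve the learning rate every 20 epochs
```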

In this study, the quantitative evaluation of segmentation performance on a 3D mesh is accomplished through two metrics: (1) Overall Accuracy (OA), which is obtained by dividing the number of correctly segmented cells by the total number of cells; and (2) the Intersection-over-Union (IoU) for each class, followed by the calculation of the mean Intersection-over-Union (mIoU). The IoU is a vital metric used in 3D segmentation to assess the accuracy and quality of segmentation results. It quantifies the degree of overlap between the segmented region and the ground truth, providing insights into the model's ability to accurately delineate objects or regions of interest within a 3D space. Training 3D models is always associated with challenges related to hardware requirements, processing speed, and cost. Processing and analyzing 3D data is more computationally intensive than 2D data. The hardware requirements for 3D segmentation are typically higher, including more powerful CPUs or GPUs, more RAM, and potentially specialized hardware for accelerated processing. In particular, performing segmentation on 3D data takes more time due to the increased complexity. In essence, the right loss function can lead to faster convergence, better model performance, and improved interpretability. Therefore, experimentation and thorough evaluation are crucial to determining which loss function works best for the data. The model was trained on the dataset in four experiments, each employing a different loss function. The outcomes of these experiments are presented in Table 1. The utilized loss functions demonstrate excellent performance in the training phase, yielding highly satisfactory outcomes on this large-scale unbalanced dataset. Specifically, we observe that the model trained with cross-entropy segmentation loss exhibits rapid convergence, requiring only 16 epochs to achieve highly favorable outcomes. As outlined in Section 2.2, the model exhibiting the most favorable outcomes, as determined by the cross-entropy segmentation loss function, was selected for the segmentation task. This particular model achieved an impressive mIoU score of 0.9999986. Some illustrations of the segmentation result on a 3D face are shown in Fig. 3.
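For reference, both metrics can be computed from per-cell predictions as in the short sketch below, where pred and gt are integer label arrays over the M mesh cells; the function names are ours.

```python
import numpy as np

def overall_accuracy(pred, gt):
    """OA: correctly segmented cells divided by the total number of cells."""
    return float(np.mean(pred == gt))

def mean_iou(pred, gt, num_classes=2):
    """Per-class IoU averaged over the classes that appear in the data (mIoU)."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```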

Table 1: Results of training the model with the corresponding loss functions

Furthermore, in the context of limited 3D data for training segmentation models in dentistry, Zhang et al. [42] showcased the remarkable efficacy of the TSGCNet model. Their training approach involved the utilization of 80 dental data meshes, culminating in an impressive performance of 95.25%. To investigate the effectiveness of the TSGCNet model on a small amount of face data with injuries, we trained the TSGCNet model on 100 meshes for training and 20 meshes for testing. The TSGCNet model was trained for 50 epochs, employing the cross-entropy segmentation loss function. This approach achieved an overall accuracy of 97.69%. This result underlines the effectiveness of the two-stream graph convolutional network in accurately segmenting complex and minor wounds, demonstrating its ability to capture geometric feature information from the 3D data. However, training the model with a substantial dataset is crucial to ensure a comprehensive understanding of facial features and achieve a high level of accuracy. Consequently, we selected the model that achieved an mIoU index of 0.9999986, as depicted in Table 1, to accurately segment facial injuries.

From the above segmentation result, our primary objective is to conduct a comparative analysis between our proposed wound-filling extraction method and a method with similar objectives as discussed in the studies by Nguyen et al. [32,33]. A notable characteristic of the Cir3D-FaIR dataset is that all meshes possess a consistent vertex order. This enables us to streamline the extraction process of the wound filling. Utilizing the test dataset, we employ the model trained in the study by Nguyen et al. [32] for the reconstruction of the 3D face. Subsequently, we apply our proposed method to extract the wound filling from the reconstructed 3D face. As previously stated, we introduce a methodology for the extraction of the wound filling; the details of this methodology are explained in Algorithm 2 and Fig. 4.

For the purpose of notational convenience, we designate the filling extraction method presented in the study by Nguyen et al. [32] as the "old proposal". We conduct a performance evaluation of both our proposed method and the old proposal on a dataset consisting of 8,090 meshes, which corresponds to 20% of the total dataset. A comprehensive description of the process for comparing the two methods is provided in Algorithm 3. The results show that our proposal has an average accuracy of 0.9999993, while the old proposal achieves 0.9715684. The accuracy of the filling extraction method has thus been improved, which is very practical for the medical reconstruction problem. We then randomly selected outputs of our method from the test set, depicted in Fig. 3. We used 3D printing technology to illustrate the results on a physical model, which is significantly improved compared to the old method, as shown in Fig. 6, and a 3D-printed model of the extracted wound filling is shown in Fig. 7.
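As one illustration of how such an agreement score could be computed (not necessarily the exact metric of Algorithm 3), the sketch below scores an extracted filling against the reference by per-vertex proximity, relying on the consistent vertex ordering of the dataset; the distance tolerance is an arbitrary placeholder.

```python
import numpy as np

def filling_agreement(extracted_vertices, reference_vertices, tol=1e-3):
    """Fraction of corresponding vertices within a distance tolerance.

    Assumes both filling meshes share vertex ordering, as the Cir3D-FaIR
    meshes do; this is only one plausible reading of the reported accuracy.
    """
    dists = np.linalg.norm(extracted_vertices - reference_vertices, axis=1)
    return float(np.mean(dists < tol))
```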


Figure 6: Filling extraction results with 3D printing

Figure 7: A 3D-printed pattern to fill a wound

The results of this study emphasize the potential of utilizing appropriate 3D printing technology for facial reconstruction in patients. This can involve prosthetic soft tissue reconstruction or 3D printing of facial biological tissue [46]. 3D bioprinting for skin tissue implants requires specialized materials and methods to create customized skin constructs for a range of applications, including wound healing and reconstructive surgery. The choice of materials and fusion methods may vary based on the specific site (e.g., face or body) and the desired characteristics of the skin tissue implant. In the realm of 3D printing for biological soft tissue engineering, a diverse array of materials is strategically employed to emulate the intricate structures and properties inherent to native soft tissues. Hydrogels, such as alginate, gelatin, and fibrin, stand out as popular choices, primarily owing to their high water content and excellent biocompatibility. Alginate, derived from seaweed, exhibits favorable characteristics such as good printability and high cell viability, making it an attractive option. Gelatin, a denatured form of collagen, closely replicates the extracellular matrix, providing a biomimetic environment conducive to cellular growth. Fibrin, a key protein in blood clotting, offers a natural scaffold for cell attachment and proliferation. Additionally, synthetic polymers like polycaprolactone (PCL) and poly(lactic-co-glycolic acid) (PLGA) provide the benefit of customizable mechanical properties and degradation rates. Studies [47–49] have presented detailed surveys of practical applications of many types of materials for 3D printing of biological tissue. Our research is limited to proposing an efficient wound-filling extraction method with high accuracy. In the future, we will consider implementing the application of this research in conjunction with physician experts at hospitals in Vietnam. By harnessing 3D printing technology, as illustrated in Fig. 8, healthcare professionals can craft highly tailored and precise facial prosthetics, considering each patient's unique anatomy and needs. This high level of customization contributes to achieving a more natural appearance and better functional outcomes, addressing both aesthetic and functional aspects of facial reconstruction [14]. This approach holds significant promise for enhancing facial reconstruction procedures and improving the overall quality of life for patients who have undergone facial trauma or have congenital facial abnormalities. Moreover, high-quality 3D facial scanning applications on phones are becoming popular. We could integrate our proposal into smartphones to support sketching the reconstruction process for an injured face. This matter is further considered in our forthcoming research endeavors.

3.3 Limitations

Although our 3D facial wound reconstruction method achieves high performance, it still has certain limitations. Real-world facial data remains limited due to ethical requirements in medical research. Therefore, we amalgamate scarce MRI data from patients who consented to share their personal data with data generated from the MICA model to create a dataset. Our proposal primarily focuses on automatically extracting the region to be filled in a 3D face, addressing a domain similar to practical scenarios. We intend to address these limitations in future studies when we have access to a more realistic volume of 3D facial data from patients.

Furthermore, challenges related to unwanted artifacts, obstructions, and limited contrast in biomedical 3D scanning need to be considered. To tackle these challenges, we utilize cutting-edge 3D scanning technology equipped with enhanced hardware and software capabilities. This enables us to effectively mitigate artifacts and obstructions during data collection. We implement rigorous quality assurance protocols throughout the 3D scanning process, ensuring the highest standards of image quality. Additionally, we pay careful attention to patient positioning and provide guidance to minimize motion artifacts. Moreover, we employ advanced 3D scanning techniques, such as multi-modal imaging that combines imaging modalities like CT and MRI. This approach significantly enhances image quality and improves contrast, which is essential for accurate medical image interpretation.

4 Conclusions

This study explored the benefits of using a TSGCNet to segment 3D facial trauma defects automatically. Furthermore, we have proposed an improved method to extract the wound filling for the face. The results show the most prominent features as follows:

- An auto-segmentation model was trained to ascertain the precise location and shape of 3D facial wounds. We experimented with different loss functions to obtain the most effective model in the presence of data imbalance. The results show that the model works well for complex wounds on the Cir3D-FaIR face dataset, with an accuracy of 0.9999993.

- Concurrently, we have proposed a methodology to enhance wound-filling extraction performance by leveraging both a segmentation model and a 3D face reconstruction model. By employing this approach, we achieve higher accuracy than previous studies on the same problem. Additionally, this method obviates the necessity of possessing a pre-injury 3D model of the patient's face. Instead, it enables the precise determination of the wound's position, shape, and complexity, facilitating the rapid extraction of the filling material.

- This research aims to contribute to advancing facial reconstruction techniques using AI and 3D bioprinting technology to print skin tissue implants. Printing skin tissue for transplants has the potential to revolutionize facial reconstruction procedures by providing personalized, functional, and readily available solutions. By harnessing the power of 3D bioprinting technology, facial defects can be effectively addressed, enhancing both cosmetic and functional patient outcomes.

- From this research direction, our proposed approach offers a promising avenue for advancing surgical support systems and enhancing patient outcomes by addressing the challenges associated with facial defect reconstruction. Combining machine learning, 3D imaging, and segmentation techniques provides a comprehensive solution that empowers surgeons with precise information and facilitates personalized interventions in treating facial wounds.

Acknowledgement: We would like to thank the Vietnam Institute for Advanced Study in Mathematics (VIASM) for its hospitality during our visit in 2023, when we started to work on this paper.

Funding Statement: The authors received no specific funding for this study.

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Duong Q. Nguyen, H. Nguyen-Xuan, Nga T. K. Le; data collection: Thinh D. Le; analysis and interpretation of results: Duong Q. Nguyen, Thinh D. Le, Phuong D. Nguyen; draft manuscript preparation: Duong Q. Nguyen, Thinh D. Le, H. Nguyen-Xuan. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Our source code and data can be accessed at https://github.com/SIMOGroup/WoundFilling3D.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.