3D Facial Modeling Based on Multi-scale Feature Fusion and Lighting Robustness
Abstract
Facial 3D modeling technology is widely used and has become an important research direction in artificial intelligence and computer vision. However, existing techniques lack modeling accuracy and robustness when handling weakly textured regions and complex lighting conditions, which limits their practical deployment. Therefore, a facial 3D modeling method based on multi-scale feature fusion and lighting-robustness optimization is proposed, built on a multi-scale dense feature network and a lighting-robust feature fusion network. Experimental results indicate that the method performs well on the evaluation dataset: structural similarity reached 0.954 and the mean absolute error was the lowest at 0.63 mm. Under dynamic lighting conditions, feature consistency reached 0.941 and the point cloud error was reduced to 0.85 mm. In addition, tests in security and virtual reality scenarios showed that, with this method, accuracy increased to 92.8%, the peak signal-to-noise ratio reached 33.0 dB, and the model's running efficiency improved to 36 frames per second, verifying the practicality and reliability of the approach. This work offers new ideas for developing stable, efficient, and practical facial 3D modeling methods and is expected to promote the wider application of related technologies in complex environments.
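To make the multi-scale feature fusion idea concrete, the sketch below shows a minimal multi-scale fusion block in PyTorch. The branch layout (1x1, 3x3, and dilated 3x3 convolutions), channel sizes, and the 1x1 fusion layer are illustrative assumptions only; the abstract does not specify the actual architecture of the multi-scale dense feature network.

```python
# Minimal sketch of multi-scale feature fusion (assumed design, not the paper's exact network).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusionBlock(nn.Module):
    """Extracts features at several receptive-field scales and fuses them.

    The three parallel branches and the 1x1 fusion layer are illustrative
    choices; they stand in for whatever the paper's multi-scale dense
    feature network actually uses.
    """
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        branch_channels = out_channels // 3
        self.branch1 = nn.Conv2d(in_channels, branch_channels, kernel_size=1)
        self.branch2 = nn.Conv2d(in_channels, branch_channels, kernel_size=3, padding=1)
        self.branch3 = nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                                 padding=2, dilation=2)  # wider receptive field
        self.fuse = nn.Conv2d(branch_channels * 3, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the per-scale responses and fuse them with a 1x1 convolution.
        feats = torch.cat([self.branch1(x), self.branch2(x), self.branch3(x)], dim=1)
        return F.relu(self.fuse(feats))

if __name__ == "__main__":
    block = MultiScaleFusionBlock(in_channels=3, out_channels=48)
    face_image = torch.randn(1, 3, 128, 128)   # dummy face-image batch
    print(block(face_image).shape)             # torch.Size([1, 48, 128, 128])
```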