WiMi Hologram (WIMI) Develops A Deep CNN-Based 3D Image Reconstruction Algorithm System
WiMi Hologram Cloud Inc. (WIMI) (“WiMi” or the “Company”), a leading global Hologram Augmented Reality (“AR”) Technology provider, today announced the development of a deep convolutional neural network-based 3D image reconstruction algorithm system. The system is an innovative model that extracts the features of the input image through a convolutional neural network, maps those features to the parameters of a 3D model through fully connected layers, and finally decodes these parameters into the reconstructed 3D model.
The system contains several modules, including dataset preparation, feature extraction, parameter generation, 3D reconstruction, model evaluation, and application interface, each with unique functions and roles, forming a complete system.
Dataset preparation: The 3D image reconstruction algorithm needs a large amount of 3D model data as a training set so that the deep learning algorithm can learn the morphological and structural features of the 3D model. This module is responsible for collecting and producing the training dataset and performing data pre-processing and cleaning to ensure the quality and availability of the dataset. The dataset’s quality directly affects the algorithm’s accuracy and robustness. The dataset contains a variety of 3D models of different classes and morphologies to ensure the universality and generalization ability of the algorithm.
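The pre-processing and cleaning step described above can be sketched as follows. This is a minimal illustration, assuming point-cloud data in NumPy; the function names and the cleaning rules (dropping corrupt vertices, skipping near-empty scans, normalizing to the unit sphere) are assumptions for demonstration, not WiMi's actual pipeline.

```python
import numpy as np

def normalize_point_cloud(points):
    """Center the cloud at the origin and scale it to fit the unit sphere."""
    points = np.asarray(points, dtype=np.float64)
    centered = points - points.mean(axis=0)
    scale = np.linalg.norm(centered, axis=1).max()
    return centered / scale if scale > 0 else centered

def clean_dataset(models, min_points=32):
    """Basic cleaning pass: drop degenerate models, normalize the rest."""
    cleaned = []
    for pts in models:
        pts = np.asarray(pts, dtype=np.float64)
        pts = pts[~np.isnan(pts).any(axis=1)]  # remove corrupt vertices
        if len(pts) < min_points:              # skip near-empty scans
            continue
        cleaned.append(normalize_point_cloud(pts))
    return cleaned
```

Normalizing every model to a common scale and origin is what lets a single network learn shape features across classes of very different sizes.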
Feature extraction: This module performs feature extraction and representation of the input image using a convolutional neural network, which typically includes multiple convolutional and pooling layers, to extract high-level features from the input image.
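A CNN encoder of the kind described, stacked convolutional and pooling layers ending in a flat feature vector, might look like the following sketch in PyTorch. The layer widths and the 256-dimensional feature size are illustrative assumptions, not the Company's disclosed architecture.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Small CNN encoder: conv + pooling layers -> flat feature vector."""
    def __init__(self, feature_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse remaining spatial dims
        )
        self.proj = nn.Linear(128, feature_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)  # (batch, 128)
        return self.proj(h)              # (batch, feature_dim)
```

The global average pool at the end makes the encoder tolerant of varying input resolutions while still producing a fixed-size feature vector for the downstream regression layers.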
Parameter generation: This module uses fully connected layers or other regression algorithms to map the feature vectors from the encoder output to a set of 3D model parameters. These parameters control the morphology, size, pose, and other attributes of the 3D model.
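A fully connected regression head of this kind can be sketched as below. The split of the output into shape, pose, and scale groups, and the dimensions chosen for each, are hypothetical and serve only to illustrate how one vector of parameters can control several attributes of the 3D model.

```python
import torch
import torch.nn as nn

class ParameterHead(nn.Module):
    """Fully connected regressor: image features -> 3D model parameters."""
    def __init__(self, feature_dim=256, n_shape=40, n_pose=6, n_scale=1):
        super().__init__()
        self.n_shape, self.n_pose = n_shape, n_pose
        self.mlp = nn.Sequential(
            nn.Linear(feature_dim, 128), nn.ReLU(),
            nn.Linear(128, n_shape + n_pose + n_scale),
        )

    def forward(self, feats):
        out = self.mlp(feats)
        shape = out[:, :self.n_shape]                          # morphology
        pose = out[:, self.n_shape:self.n_shape + self.n_pose] # rotation/translation
        scale = out[:, self.n_shape + self.n_pose:]            # size
        return shape, pose, scale
```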
3D reconstruction: This module takes the generated parameters as input and produces the final 3D reconstruction model. It typically uses deconvolution and upsampling layers to decode the parameter vector into 3D space.
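One common way to realize such a deconvolution/upsampling decoder is to upsample a parameter vector into a voxel occupancy grid with 3D transposed convolutions. The sketch below assumes a 32³ voxel output and the 47-dimensional parameter vector from the illustrative head above; both are assumptions, not details disclosed by WiMi.

```python
import torch
import torch.nn as nn

class VoxelDecoder(nn.Module):
    """Decoder: parameter vector -> 32^3 voxel occupancy grid."""
    def __init__(self, param_dim=47):
        super().__init__()
        self.fc = nn.Linear(param_dim, 128 * 4 * 4 * 4)
        self.deconv = nn.Sequential(
            # each transposed conv (kernel 4, stride 2, pad 1) doubles resolution
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),               # 16 -> 32
            nn.Sigmoid(),  # per-voxel occupancy probability in [0, 1]
        )

    def forward(self, params):
        h = self.fc(params).view(-1, 128, 4, 4, 4)  # seed a coarse 3D grid
        return self.deconv(h)
```

A mesh can then be extracted from the occupancy grid (for example with marching cubes) for rendering or downstream use.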
Model evaluation: This module evaluates the differences and errors between the generated 3D models and the original models. These errors can be used to optimize the algorithm parameters and improve the training dataset to increase the accuracy and robustness…
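The source does not name specific error metrics, but two standard choices for comparing a generated 3D model against the original are voxel intersection-over-union and the symmetric Chamfer distance between point sets. A minimal NumPy sketch of both, as illustrative stand-ins:

```python
import numpy as np

def voxel_iou(pred, target, thresh=0.5):
    """Intersection-over-union between predicted and ground-truth occupancy grids."""
    p = np.asarray(pred) > thresh
    t = np.asarray(target) > thresh
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union if union else 1.0

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N,3) and b (M,3)."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Both metrics are differentiable-friendly in spirit: a smoothed Chamfer loss or a per-voxel cross-entropy against the same targets can serve directly as the training objective that drives the parameter optimization described above.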