Researchers have presented a new method that automatically creates photorealistic three-dimensional renderings from artificially generated 2D images.
Generative Adversarial Networks (GANs) have long been a popular machine learning model in which two neural networks compete against each other to improve their outputs, for example to generate artificial 2D images of people. Researchers at Stanford University, with partial involvement from Nvidia, have now used this approach to create high-resolution 3D renderings: existing 2D GAN images, such as head-on portraits, are made "3D-consistent" and can be rendered from different viewing angles.
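The adversarial idea behind GANs can be illustrated without any deep learning machinery. The toy example below (a sketch, not related to the EG3D code itself; all names and numbers are illustrative) pits a one-parameter "generator" against a logistic-regression "discriminator" on 1D data, using the standard non-saturating GAN losses with hand-derived gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(3, 1). The generator must learn to mimic them.
REAL_MEAN = 3.0

# Generator G(z) = theta + z: a single learnable shift parameter.
theta = 0.0
# Discriminator D(x) = sigmoid(w*x + b): tries to score real as 1, fake as 0.
w, b = 0.1, 0.0

lr_d, lr_g = 0.05, 0.05
batch = 64

for step in range(2000):
    # --- Discriminator update: minimize binary cross-entropy (real->1, fake->0) ---
    real = REAL_MEAN + rng.standard_normal(batch)
    fake = theta + rng.standard_normal(batch)
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    # Hand-derived gradients of the cross-entropy loss w.r.t. w and b.
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr_d * grad_w
    b -= lr_d * grad_b

    # --- Generator update: fool the discriminator (non-saturating loss) ---
    fake = theta + rng.standard_normal(batch)
    d_fake = sigmoid(w * fake + b)
    # d/d_theta of -log D(theta + z) = -(1 - D) * w
    grad_theta = np.mean(-(1 - d_fake) * w)
    theta -= lr_g * grad_theta

# After training, the generator's shift should sit close to the real mean.
print(f"learned shift: {theta:.2f} (target {REAL_MEAN})")
```

The two updates pull in opposite directions: the discriminator sharpens its decision boundary while the generator shifts its samples to blur it, and at equilibrium the generated distribution matches the real one. EG3D applies this same adversarial principle, but with a 3D-aware generator and an image discriminator at far higher dimensionality.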
The researchers call their model EG3D ("Efficient Geometry-Aware 3D Generative Adversarial Network"), and it is said to be both far less computationally intensive and more accurate than previous efforts in this direction. As explained in the scientific paper (PDF), previous 3D GANs are either too computationally expensive or produce results that are not 3D-consistent. EG3D essentially infers the underlying geometric structure and renders the image accordingly. The project is also publicly available via Github.com, and a slightly older video from Stanford University demonstrates the results:
As the website Marktechpost.com reports, the biggest downside is that the generated results are difficult to edit and refine. A machine learning model developed at the University of Wisconsin, called GiraffeHD, which can be used to identify and select different controllable variables, could help here. According to the report, while the trend toward machine-generated 3D images is clearly emerging, there is still work to be done on the algorithms and their broad applicability. Meanwhile, Nvidia itself has already presented similar methods with "GANverse3D" and "Instant NeRF".
Source: Via pcgamer.com