A new technology uses artificial intelligence to create a digital 3D scene out of a set of 2D images in seconds.
NVIDIA’s AI researchers have developed a way to turn 2D images into digital 3D scenes within seconds. Instant NeRF is a form of inverse rendering that uses artificial intelligence to approximate how light behaves in the real world, enabling researchers to reconstruct a 3D scene from a handful of 2D images taken from various angles.
Trained on just a few dozen photos, the resulting 3D scene is rendered within seconds. NeRF is essentially a neural network that takes 2D images as input and learns to represent and render a realistic 3D scene. Using a mechanism that predicts the color of light scattered from any point in 3D space, the network fills in the gaps between the input views and reconstructs the full scene.
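The core mechanism described above can be sketched as a tiny network that maps a 3D position and viewing direction to a color and a density. This is a hypothetical, minimal illustration: the weights are random stand-ins rather than a trained model, and the real Instant NeRF uses a far more sophisticated architecture.

```python
import numpy as np

# Random stand-in weights for a toy two-layer network (not a trained model).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 32))   # input: (x, y, z) position + view direction
W2 = rng.normal(size=(32, 4))   # output: (r, g, b, density)

def nerf_query(position, view_dir):
    """Predict a color and density for one point in 3D space."""
    x = np.concatenate([position, view_dir])
    h = np.maximum(W1.T @ x, 0.0)          # hidden layer with ReLU
    out = W2.T @ h
    rgb = 1.0 / (1.0 + np.exp(-out[:3]))   # sigmoid keeps colors in [0, 1]
    density = np.log1p(np.exp(out[3]))     # softplus keeps density non-negative
    return rgb, density

rgb, density = nerf_query(np.array([0.1, 0.2, 0.3]),
                          np.array([0.0, 0.0, 1.0]))
```

In a full renderer, many such point queries are accumulated along each camera ray to produce the final pixel color, which is how the network "fills in the gaps" between the input photographs.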
To cut down the time needed for AI training, NVIDIA developed a multi-resolution hash grid encoding technique that shortens rendering time, reduces training time, and enables near-immediate processing of two-dimensional images. Check it out.
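The multi-resolution hash encoding mentioned above can be sketched roughly as follows: each 3D point indexes small feature tables at several grid resolutions, and the looked-up features are concatenated before being fed to the network. This is a simplified, assumed illustration (the table contents are random stand-ins for learned parameters, and the real technique also interpolates between neighboring grid vertices).

```python
import numpy as np

rng = np.random.default_rng(0)
LEVELS = [16, 32, 64]      # grid resolutions, coarse to fine
TABLE_SIZE = 2 ** 14       # entries in each level's hash table
FEAT_DIM = 2               # features stored per table entry
tables = [rng.normal(size=(TABLE_SIZE, FEAT_DIM)) for _ in LEVELS]

# Spatial hash: XOR of coordinates scaled by large primes
PRIMES = (1, 2654435761, 805459861)

def hash_encode(point):
    """Concatenate hashed grid features across all resolution levels."""
    feats = []
    for res, table in zip(LEVELS, tables):
        ix, iy, iz = (int(c * res) for c in point)  # nearest grid vertex
        h = (ix * PRIMES[0]) ^ (iy * PRIMES[1]) ^ (iz * PRIMES[2])
        feats.append(table[h % TABLE_SIZE])
    return np.concatenate(feats)

features = hash_encode((0.25, 0.5, 0.75))  # a point in the unit cube
```

Because the expensive scene representation lives in these small lookup tables rather than in a huge network, the remaining neural network can be tiny, which is a key reason training drops from hours to seconds.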