While it is amazing, and for some controversial, to watch the footage of NASA astronauts walking on the Moon, it must be admitted that the quality of those old recordings is far from great, despite the agency's efforts to improve it. Now, thanks to artificial intelligence, an expert has managed to create a completely new visual experience from the original footage.
A specialist in photo and video restoration, known by the nickname DutchSteamMachine, has worked a kind of magic with artificial intelligence, enhancing footage from the Apollo missions to produce strikingly vivid images.
“I really wanted to provide an experience never seen before with this old footage,” he told Universe Today.
For example, footage of the Apollo 16 mission's lunar rover, originally shot at 12 frames per second (fps), has been brought up to 60 fps:
Dazzling, right? And there's more, like this enhanced view of the Apollo 15 landing site near Hadley Rille:
Or, even more historic: Neil Armstrong can be clearly seen below in this improved version of the famous ‘small step’ taken during Apollo 11, shot with a 16mm camera from inside the lunar module:
Wow, awesome!
The artificial intelligence (AI) tool DutchSteamMachine used is called DAIN (Depth-Aware video frame INterpolation). This AI is open source, free, and under constant development.
Motion interpolation, or motion-compensated frame interpolation, is a processing technique that generates intermediate frames between the existing original ones, in order to make the video more fluid and to compensate for blurring.
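To make the idea concrete, here is a toy sketch of motion-compensated interpolation using OpenCV's classic Farneback optical flow. This only illustrates the general principle, not DAIN itself (which uses a depth-aware neural network), and the file names are hypothetical:

```python
# Toy motion-compensated interpolation: estimate per-pixel motion between
# two real frames, then warp one of them half a step to synthesize a new
# in-between frame. Illustrative only; DAIN is a depth-aware neural network.
import cv2
import numpy as np

frame_a = cv2.imread("frame_0001.png")  # hypothetical consecutive frames
frame_b = cv2.imread("frame_0002.png")

gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

# Dense optical flow from frame A to frame B (dx, dy per pixel).
flow = cv2.calcOpticalFlowFarneback(
    gray_a, gray_b, None, 0.5, 3, 15, 3, 5, 1.2, 0
)

# Warp frame A halfway along the flow. This is a crude approximation:
# proper interpolators also handle occlusions, which is exactly where
# DAIN's depth awareness comes in.
h, w = gray_a.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
midpoint = cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)

cv2.imwrite("frame_0001_mid.png", midpoint)  # the synthesized frame
```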
However, the technique this expert uses is not something you can easily try at home: this level of processing requires a high-end GPU with serious cooling. DutchSteamMachine notes that, for example, a video of just five minutes can take 6 to 20 hours to complete. The results, though, speak for themselves.
The editing process
Here is a more technical and detailed explanation for the more technically inclined.
“I split the original file, in the highest possible quality, into individual PNG frames, and feed them to the AI along with the source frame rate (1, 6, 12, or 24 fps) and the desired interpolation factor (2x, 4x, 8x). The AI starts by using my GPU to identify two consecutive real frames. Using algorithms, it analyzes the motion of objects across the two frames and renders new ones. With an interpolation of about 5x, it can render five false frames from two real ones,” he explains.
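That first step, splitting a video into individual PNG frames, can be reproduced with ffmpeg; a minimal sketch, with a hypothetical source file and output folder:

```python
# Split a source video into lossless PNG frames, one file per frame.
# "apollo16_rover.mp4" and the "frames" folder are hypothetical names.
import subprocess
from pathlib import Path

Path("frames").mkdir(exist_ok=True)  # ffmpeg will not create the folder

subprocess.run(
    ["ffmpeg", "-i", "apollo16_rover.mp4", "frames/frame_%06d.png"],
    check=True,  # raise an error if ffmpeg fails
)
```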
He continues: “If the footage was shot at 12 fps and the interpolation factor is set to 5x, the resulting frame rate will be 60 fps, which means that from just 12 real frames I generated 48 fake ones. Both are then exported together as video and played back at 60 fps.”
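The arithmetic is simple: the interpolation factor multiplies the frame rate, and everything beyond the original frames is synthetic. A quick sanity check of the numbers quoted above:

```python
# Sanity-check the quoted numbers: 12 real fps at 5x interpolation
# yields 60 fps, of which 48 frames per second are synthetic.
def interpolation_math(real_fps: int, factor: int) -> tuple[int, int]:
    output_fps = real_fps * factor
    synthetic_per_second = output_fps - real_fps
    return output_fps, synthetic_per_second

print(interpolation_math(12, 5))  # -> (60, 48)
```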
“Finally, I apply a color correction, since the original files have a blue or orange tint, and I synchronize the footage with the audio where possible,” he concludes.
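His exact grading workflow is not specified, but a uniform blue or orange tint can be approximated away with a simple gray-world white balance; a sketch under that assumption:

```python
# Gray-world white balance: scale each color channel so all three share
# the same average brightness, neutralizing a uniform blue or orange cast.
import cv2
import numpy as np

frame = cv2.imread("frames/frame_000001.png").astype(np.float32)
channel_means = frame.reshape(-1, 3).mean(axis=0)  # average B, G, R
frame *= channel_means.mean() / channel_means      # equalize the channels
corrected = np.clip(frame, 0, 255).astype(np.uint8)
cv2.imwrite("frame_000001_corrected.png", corrected)
```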
Source: ScienceAlert