Understanding Video Rendering and Denoising in Blender
Videos are made up of a series of frames. In Blender, frames are typically rendered with the Cycles render engine, a path tracer: it projects rays of ‘light’ from the camera’s perspective into the scene. These rays bounce around, reflecting off surfaces like wood, glass, and metal, before returning to the camera.
The advantage of this approach is its ability to produce realistic-looking results. However, it requires significant processing power and time to render each frame.
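The noise you see in an unfinished render comes from this sampling process: each pixel's brightness is estimated by averaging random light paths, so with few samples the estimate is noisy and with many it converges. Here is a toy sketch of that idea (the `true_brightness` value and the uniform noise are stand-ins for a real scene, not how Cycles actually samples light):

```python
import random

def render_pixel(true_brightness, samples):
    """Estimate a pixel's brightness by averaging noisy path samples."""
    total = 0.0
    for _ in range(samples):
        # Each 'path' returns the true value plus random noise,
        # standing in for the randomness of real light-path sampling.
        total += true_brightness + random.uniform(-0.5, 0.5)
    return total / samples

random.seed(0)
noisy = render_pixel(0.7, samples=4)      # few samples: noisy estimate
clean = render_pixel(0.7, samples=10000)  # many samples: converges toward 0.7
```

Doubling the quality (halving the noise) takes roughly four times the samples, which is why fully converged renders are so slow.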
In the past, before AI denoising, you’d need to sit and let the computer render for a long time, because if you stopped halfway through the render you’d see a bunch of dots and noise in the image: there would be bright pixels where the light rays had been calculated and dark pixels where they had not. Nowadays, big companies like Intel have invented AI denoisers that take in these ‘half-calculated’, noise-filled images, fill in the gaps, and, voilà, magically output images without the noise. This saves artists a tremendous amount of rendering time.
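In Blender, a single-frame denoiser like Intel's Open Image Denoise can be switched on from Python. A minimal sketch, assuming Blender 2.9+ with Cycles (property names per Blender's Python API; this only runs inside Blender):

```python
import bpy  # Blender's Python API; only available inside Blender

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Render with relatively few samples, leaving noisy 'gaps'...
scene.cycles.samples = 64

# ...and let Intel's Open Image Denoise fill them in after rendering.
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPENIMAGEDENOISE'
```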
The problem with these denoisers is that they work on a frame-by-frame basis: they take in one frame, fill in the gaps, then move on to the next. But since each denoising pass is independent of the last, the way the denoiser fills in those gaps is slightly different on each frame. In a video of 600 frames, that leads to a lot of flickering. To address this, people like the authors of Pidgeon Tools have developed temporal denoising solutions, which denoise each frame while taking into account the frames that come directly before and after it. This reduces the flickering between frames.
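The temporal idea can be sketched as a weighted blend of each frame with its immediate neighbors, so the independent per-frame differences get damped out (this is a toy stand-in to show the principle, not Pidgeon Tools' actual algorithm):

```python
import numpy as np

def temporal_blend(frames, weight=0.25):
    """Blend each frame with the frames directly before and after it,
    damping the frame-to-frame differences that read as flicker."""
    out = []
    for i, frame in enumerate(frames):
        prev_f = frames[max(i - 1, 0)]
        next_f = frames[min(i + 1, len(frames) - 1)]
        out.append((1 - 2 * weight) * frame + weight * prev_f + weight * next_f)
    return out

# Three 'frames' of a flat grey image whose denoised value jitters per frame,
# the way independent denoising passes jitter between frames.
frames = [np.full((2, 2), v) for v in (0.48, 0.55, 0.47)]
blended = temporal_blend(frames)
```

After blending, the largest frame-to-frame jump is smaller than before, which is exactly the flicker reduction described above.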
But their solution had an issue: it needed to denoise the frames based on the cleanest versions possible, which meant disabling Blender’s built-in motion blur. To reintroduce some motion blur, we apply it as a post-processing effect AFTER the temporal denoiser has done its magic. To do that, we need to save data on how fast objects are moving in the frame (as vector data) and on how far objects are from the camera (as depth data). I worked on a solution that changed how that depth data was saved and applied to the post-processing motion blur effect, and you can see the results below. It specifically fixed how the depth data was saved when objects move across the environment background.
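The setup described above can be expressed in Blender's Python API: turn off built-in motion blur and enable the vector and depth render passes. A minimal sketch, assuming Blender 2.9+ (this only runs inside Blender):

```python
import bpy  # Blender's Python API; only available inside Blender

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Disable Blender's built-in motion blur so the temporal denoiser
# sees the cleanest possible frames.
scene.render.use_motion_blur = False

# Save per-pixel motion vectors (how fast things move across the frame)
# and depth (how far each pixel is from the camera), which the
# post-processing motion blur step reads back in.
view_layer.use_pass_vector = True
view_layer.use_pass_z = True
```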
Below is a scene of a cube moving quickly from the left to the right of the frame.
No Motion Blur at All
It just appears to be flickering across the screen with no sense of motion.
Super Image Denoiser’s Default Motion Blur (Before My Fix)
This is slightly better but there’s no deformation of the cube.
My Solution
Notice improvements to the deformation.
Blender’s Built-In Motion Blur
This is the standard for how motion blur should look. The reason we can’t just use Blender’s results every single time is that, in a more complex scene, we may need heavy temporal denoising to get rid of noise while rendering in a reasonable amount of time. My solution gets us part of the way there, with Temporal Denoising + Motion Blur.