I recently developed a technique that significantly improves convolution-based denoising filters by using an octagonal sampling pattern inspired by sacred geometry.
Please note that the term “sacred” is used playfully throughout this post.
The method leverages the GPU’s hardware bilinear interpolation to approximate an isotropic convolution, arranging the samples so that the interpolation errors inherent in bilinear filtering balance out automatically. This eliminates the checkered artifacts common with traditional square-kernel filters.

[image: 880×572]
I haven’t found this particular technique documented online, but for blur/denoising post-processing, it offers a noticeable quality improvement.

---

I was working on real-time volumetric clouds in three.js (see this article) where raycasting in discrete steps introduces noise. Since TAA-like denoising wasn’t an option due to challenges with cloud motion vectors, I had to rely on convolution-based techniques.
Convolution applies a sliding window: for each output pixel, the filter combines the pixels in a neighborhood around it. In 2D, standard practice uses a square window with weights (such as a Gaussian kernel) to compensate for discretization bias, but these filters still introduce aliasing because their sampling is not isotropic.
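For reference, here is roughly what that conventional square-kernel baseline looks like as a WebGL1-style fragment shader (e.g. the body of a three.js ShaderMaterial). The uniform names `uTexture` and `uTexelSize` (1.0 / resolution) are my assumptions, not from the original code:

```glsl
// Conventional square-kernel baseline: a 3x3 Gaussian blur.
uniform sampler2D uTexture;   // input image (assumed uniform name)
uniform vec2 uTexelSize;      // 1.0 / resolution (assumed uniform name)

void main() {
    vec2 uv = gl_FragCoord.xy * uTexelSize;
    vec4 sum = vec4(0.0);
    for (int i = -1; i <= 1; i++) {
        for (int j = -1; j <= 1; j++) {
            // Separable binomial weights 0.25-0.5-0.25 per axis;
            // the full 3x3 kernel sums to 1.
            float wi = 0.5 - 0.25 * abs(float(i));
            float wj = 0.5 - 0.25 * abs(float(j));
            vec2 offset = vec2(float(i), float(j)) * uTexelSize;
            sum += wi * wj * texture2D(uTexture, uv + offset);
        }
    }
    gl_FragColor = sum;
}
```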

[image: 413×342]
Despite compensation, square kernels inherently yield approximate, aliasing-prone sampling. Even neural nets that rely on such convolutions for tasks like image upscaling suffer from persistent aliasing artifacts.

[image: 508×381]
The image above, taken from a paper, illustrates artifacts from deconvolution in image upscaling, which is closely related to the issue at hand.
GPU pipelines provide nearly free bilinear interpolation. My idea was to sample from multiple directions at equal distances from the center pixel. However, bilinear interpolation itself is an approximation, and its error varies with sample position. I needed a geometry that balanced out these errors.
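To make the “nearly free” part concrete: with linear filtering enabled, a single fetch placed exactly at the corner shared by four texels returns their average, while a fetch that misses the corner blends them unevenly – that shift in blend weights is the position-dependent error just mentioned. A one-function sketch, with assumed uniform names:

```glsl
uniform sampler2D uTexture;   // input image, sampled with LinearFilter
uniform vec2 uTexelSize;      // 1.0 / resolution

// Offsetting by half a texel in x and y lands exactly on the corner
// shared by 4 texels, so the hardware weights each of them by 0.25 –
// four texels averaged with one fetch.
vec4 fourTexelAverage(vec2 uv) {
    return texture2D(uTexture, uv + 0.5 * uTexelSize);
}
```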
Exploring various geometries, I stumbled upon this subreddit, where enthusiasts create beautiful patterns. Reinvigorated by the beauty of geometry, I developed a sampling pattern whose samples are arranged in an octagon (at 22.5°, 67.5°, 112.5°, and so on). This yields 8 surrounding samples plus the center pixel – matching the tap count of a standard 3×3 convolution, especially since the GPU handles the bilinear interpolation essentially for free.
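A minimal sketch of that kernel, using the same assumed uniform names as above plus `uRadius` for the kernel radius in texels:

```glsl
// Octagonal kernel: 8 bilinear taps at 22.5° + k*45° around the center,
// plus the center texel itself.
uniform sampler2D uTexture;
uniform vec2 uTexelSize;
uniform float uRadius;        // kernel radius in texels (assumed uniform)

vec4 octagonBlur(vec2 uv, float radius) {
    vec4 sum = texture2D(uTexture, uv);           // center tap
    for (int k = 0; k < 8; k++) {
        float angle = radians(22.5 + 45.0 * float(k));
        vec2 offset = radius * vec2(cos(angle), sin(angle)) * uTexelSize;
        // Each tap is a single fetch, but bilinear filtering blends
        // up to 4 texels per tap.
        sum += texture2D(uTexture, uv + offset);
    }
    return sum / 9.0;  // uniform weights for the plain-blur case
}

void main() {
    gl_FragColor = octagonBlur(gl_FragCoord.xy * uTexelSize, uRadius);
}
```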

[image: 663×665]
With a carefully chosen kernel radius, the samples land close to their ideal positions and the interpolation errors are balanced uniformly, alleviating the need for complex weight adjustments.
For larger kernel radii, I adapted concepts from the “à trous” wavelet transform, which convolves iteratively with increasing sample distances from the center. In this isotropic approximation, each pass’s kernel radius is chosen so that the samples align as closely as possible with the 4-pixel intersections.
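A hedged sketch of how such a pass schedule might look. The ~1.58-texel base radius (which puts taps near 4-texel intersections such as (1.5, 0.5)) and the doubling-per-pass schedule are my illustrative choices, and all uniform names are assumptions:

```glsl
// À trous-style iteration: the host draws one full-screen pass per
// iteration, ping-ponging between two render targets and incrementing
// uPass (0.0, 1.0, 2.0, ...).
uniform sampler2D uTexture;   // output of the previous pass
uniform vec2 uTexelSize;      // 1.0 / resolution
uniform float uPass;          // pass index, set by the host

void main() {
    vec2 uv = gl_FragCoord.xy * uTexelSize;
    // A base radius of ~1.58 texels lands the taps near 4-texel
    // intersections such as (1.5, 0.5); doubling it per pass mirrors the
    // growing sample spacing ("trous" = holes) of the wavelet transform.
    float radius = 1.581 * pow(2.0, uPass);
    vec4 sum = texture2D(uTexture, uv);           // center tap
    for (int k = 0; k < 8; k++) {
        float angle = radians(22.5 + 45.0 * float(k));
        vec2 offset = radius * vec2(cos(angle), sin(angle)) * uTexelSize;
        sum += texture2D(uTexture, uv + offset);
    }
    gl_FragColor = sum / 9.0;
}
```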

[image: 840×835]
In denoising cloud images, sample weights are adjusted based on luminance and depth differences – with a base weight of 1/4 for the center and 3/32 for each of the 8 surrounding samples, so the base weights sum to exactly 1.
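Here is a sketch of one way that weighting could look. The base weights (1/4 and 3/32) come from the description above, but the depth texture, the sigma uniforms, and the exponential falloff are my assumptions rather than the exact scheme used:

```glsl
// Edge-aware octagonal denoise: base weights are attenuated across
// luminance and depth edges, then renormalized.
uniform sampler2D uTexture;
uniform sampler2D uDepth;     // assumed depth buffer
uniform vec2 uTexelSize;
uniform float uRadius;
uniform float uSigmaL;        // luminance sensitivity (assumed)
uniform float uSigmaZ;        // depth sensitivity (assumed)

float luma(vec3 c) { return dot(c, vec3(0.299, 0.587, 0.114)); }

void main() {
    vec2 uv = gl_FragCoord.xy * uTexelSize;
    vec4 center = texture2D(uTexture, uv);
    float centerL = luma(center.rgb);
    float centerZ = texture2D(uDepth, uv).r;

    vec4 sum = center * 0.25;                     // base weight 1/4
    float wSum = 0.25;
    for (int k = 0; k < 8; k++) {
        float angle = radians(22.5 + 45.0 * float(k));
        vec2 tapUv = uv + uRadius * vec2(cos(angle), sin(angle)) * uTexelSize;
        vec4 tap = texture2D(uTexture, tapUv);
        float dL = abs(luma(tap.rgb) - centerL);
        float dZ = abs(texture2D(uDepth, tapUv).r - centerZ);
        // Base weight 3/32 per tap, reduced across luminance/depth edges.
        float w = (3.0 / 32.0) * exp(-dL / uSigmaL - dZ / uSigmaZ);
        sum += tap * w;
        wSum += w;
    }
    gl_FragColor = sum / wSum;   // weights sum to 1 in flat regions
}
```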
Results:

[image: 1175×705]
With extra contrast:

[image: 1172×700]
The sacred sampling pattern effectively eliminates checkered aliasing without significantly changing the overall denoising quality or the amount of blur. In scenarios like denoising, where the filters are computationally inexpensive, the method can also be used to achieve a softer blur effect.
I also apply this technique to rendering Gaussian splats with alpha-hash transparency. There, a pass of the sacred denoiser lets me tone down TAA (which otherwise introduces ghosting) and achieve smoother visuals.
Looking ahead, I’m considering adapting the FXAA algorithm to this approach, though I’m cautious about any impact on its edge detection. I also wonder whether such a convolution could be integrated into convolutional neural networks; it remains to be seen whether GPU bilinear interpolation would be efficient in that context, or how its error propagation would affect multi-layer networks. Experimenting with a TensorFlow layer, for example, could be an interesting exploration.