Image HDR Reconstruction

I’ve been keeping an eye out for innovations and interesting work outside of the generative AI space recently, as I think there has been a lot of important machine learning stuff going on that hasn’t necessarily cut through the gen AI noise, and some of this work may well form part of the tools that we use in the near future. 

One example of this is the Computational Photography Lab at Simon Fraser University’s work on reconstructing high dynamic range (HDR) images from low dynamic range (LDR) ones.

LDR images have limited contrast between the darkest and brightest areas, often appearing flat with less detail in shadows and highlights. HDR images have a wider range of contrast that preserves more detail in both the shadows and highlights, making them look more vibrant and true to life. HDR imagery is also more flexible and useful for visual effects, particularly where work may need to meet HDR broadcast specifications.
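The core of the problem is easy to see numerically: an LDR encoding clips scene brightness at the display maximum, destroying highlight detail that an HDR image retains. A minimal sketch (the radiance values here are made up for illustration):

```python
import numpy as np

# Hypothetical linear scene radiance values - an HDR image can store these directly.
scene = np.array([0.02, 0.5, 1.0, 4.0, 16.0])

# An LDR encoding clips everything above the display maximum (normalised to 1.0),
# so detail in bright regions like skies and highlights is lost.
ldr = np.clip(scene, 0.0, 1.0)

# The difference is exactly the information an HDR reconstruction has to recover.
lost = scene - ldr
```

The two brightest samples both collapse to 1.0 in the LDR version, which is why naively brightening an LDR image can never bring that detail back.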

The researchers at the Computational Photography Lab took on the challenge of taking LDR images and expanding out their dynamic range, filling in the missing information required to create an HDR image. What makes their approach work is that they break the task down into two separate sub-tasks: extending the dynamic range and recovering lost colour details. Their video does a great job of explaining this process - I’ll add that the Lab is doing great work with their presentation videos, they’re a cut above the usual!
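To make the two-stage idea concrete, here is a heavily simplified sketch of that kind of pipeline. This is not the Lab's actual method - the first stage below is a plain gamma-based inverse tone curve and the second stage is a placeholder scale-up, where the real work uses learned models for both:

```python
import numpy as np

def expand_dynamic_range(ldr, gamma=2.4):
    # Stage 1 (illustrative): undo a display-style non-linearity to get an
    # approximation of linear scene radiance. The real method learns this mapping.
    return np.power(ldr, gamma)

def recover_clipped_colour(linear, clipped_mask, boost=2.0):
    # Stage 2 (illustrative): fill in the clipped pixels. Here we just scale them;
    # the actual work uses a model to reconstruct plausible highlight detail.
    filled = linear.copy()
    filled[clipped_mask] *= boost
    return filled

ldr = np.array([0.1, 0.5, 1.0, 1.0])
clipped = ldr >= 1.0                     # pixels that hit the LDR ceiling
hdr = recover_clipped_colour(expand_dynamic_range(ldr), clipped)
```

Splitting the problem this way means each stage can be trained and evaluated on its own, rather than asking one model to solve both at once.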

If you get the sense that the approach of breaking down images this way could be used to tackle other tricky image tasks, you’d be right - the Computational Photography Lab have also looked at using a similar approach for relighting flash photography. It’s also similar to how many VFX compositing tasks are approached, where renders are split into arbitrary output variables (AOVs): passes such as albedo, specular and lighting. This is part of the reason why this “decomposition” work is very compelling in the VFX space - it could help make the process of working with video and images much more flexible.
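The AOV workflow mentioned above can be shown in a few lines. The pass names and values here are toy examples, but the recombination (albedo modulating lighting, specular added on top) is the standard pattern, and it shows why decomposed passes are so flexible to work with:

```python
import numpy as np

# Toy 2x2 render passes (AOVs). In production these come from the renderer.
albedo   = np.array([[0.8, 0.2], [0.5, 0.9]])   # surface colour
lighting = np.array([[1.2, 0.3], [0.7, 2.0]])   # diffuse illumination
specular = np.array([[0.0, 0.1], [0.4, 0.0]])   # specular highlights

# Standard recombination: albedo modulates the lighting, specular adds on top.
beauty = albedo * lighting + specular

# Because the passes are separate, each can be graded independently -
# e.g. relighting by scaling only the lighting pass, leaving albedo untouched.
relit = albedo * (lighting * 1.5) + specular
```

Decomposition methods like the Lab’s aim to recover this kind of separation from a flat photograph, where no render passes ever existed.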
