High Dynamic Range (HDR) imaging provides the ability to capture, manipulate, and display real-world lighting. This is a significant upgrade from Standard Dynamic Range (SDR), which typically encodes only 256 luminance levels per channel. While capture technologies have advanced significantly over the last few years, currently available HDR capturing sensors (e.g., in smartphones) improve the dynamic range by only a few stops over conventional SDR. Such content covers a 10-12 f-stop range, which is still substantially less than the roughly 20 f-stops desired for true HDR images. Moreover, these sensors frequently produce ghosting artifacts in challenging scenes. Reliably obtaining high-quality HDR images therefore remains a challenge. Furthermore, there exists a huge amount of legacy content captured in SDR that needs to be adapted for visualization on HDR displays.
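For reference, dynamic range in f-stops is log2(Lmax / Lmin): 20 f-stops correspond to a contrast ratio of roughly 1,000,000:1, whereas 10-12 f-stops cover only about 1,000:1 to 4,000:1.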
The focus of this grand challenge is to transform lower-range content (SDR and lower f-stop content) into HDR via the process of Inverse Tone Mapping (ITM).
Modern ITM operators employ deep learning to generate HDR content. Typically, these methods recover the overall radiance and the missing information in overexposed areas, such as colors and high-frequency details. However, other important aspects are not taken into account, such as noise levels, details and colors in underexposed areas, temporal coherency, etc.
We challenge researchers to provide a novel or improved ITM operator that advances the state of the art. For this challenge, we provide a novel dataset of high-quality HDR images.
Impact
Exploring novel inverse tone mapping operators is extremely important for industry because there is a need for high-quality conversion of legacy content shot in SDR for display on HDR screens. Furthermore, current HDR capture technology, limited to a 10-12 f-stop range, is still not sufficient to cover all lighting conditions. Inverse tone mapping operators remain extremely important for extending the dynamic range of such content.
Rules
- Previously published methods can be submitted ONLY by their authors and participate in the challenge.
- NOTE: To submit a paper to ICIP22, the authors of previously published methods need to:
  - improve the current algorithm with significant changes;
  - improve the text of the paper with significant changes;
  - highlight that the method was previously published, cite it in the references, and highlight the differences.
- Sharing data (e.g., neural network weights) outside of teams is not permitted before the final leaderboard is published.
- The maximum team size is 5.
Registration
Registration is now open: REGISTER.
Datasets
The official dataset is composed of pairs of images: an input and the expected output.
The input is an SDR image stored as a PNG file; the output is an HDR image stored in the Radiance format (.hdr).
An example can be downloaded here.
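To give a concrete starting point, a pair can be loaded, and a prediction saved, with standard MATLAB functions (Image Processing Toolbox). This is a minimal sketch; the file names are placeholders following the dataset naming scheme, and the identity "prediction" stands in for your method:

    % Load one training pair (file names are placeholders).
    sdr  = im2double(imread('0237_HDR.png')); % 8-bit PNG -> double in [0, 1]
    hdr  = hdrread('0237_HDR.hdr');           % Radiance .hdr -> linear RGB
    pred = hdr;                               % placeholder: your ITM output goes here
    hdrwrite(pred, 'prediction.hdr');         % submissions must be Radiance .hdr as well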
SDR images are encoded using the sRGB non-linear function; the HDR Toolbox implementation was used in this challenge, and you can find it here. To obtain linear SDR values, the parameter inverse has to be set to 1.
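If you prefer not to depend on the HDR Toolbox, the standard sRGB decoding (IEC 61966-2-1) can be written directly; below is a minimal MATLAB sketch, with the function name ours:

    function lin = sRGBToLinear(srgb)
    % Invert the sRGB non-linearity: a linear segment near black,
    % a 2.4 power law elsewhere. srgb is a double image in [0, 1].
        lin = zeros(size(srgb));
        low = srgb <= 0.04045;
        lin(low)  = srgb(low) / 12.92;
        lin(~low) = ((srgb(~low) + 0.055) / 1.055) .^ 2.4;
    end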
- The official dataset: a link to the dataset will be sent to registered teams.
- RGB values of the images in the dataset are NOT calibrated; i.e., they are relative values, NOT absolute values. Participants are free to rescale such images to a given maximum; this can be achieved by dividing the image by its maximum luminance value and multiplying it by the desired maximum. For example, to set a maximum of 1,000 cd/m2, you can do the following (MATLAB and HDR Toolbox syntax): img = (img * 1000) / max(max(lum(img)))
- The minimum and maximum luminance values of the HDR images in the dataset are reported here.
The official dataset is distributed under the Creative Commons BY-NC-SA license. Note that:
- Images from 0237_HDR.hdr(.png) to 3041_HDR.hdr(.png) (389 HDR images and 389 SDR images) are copyright by Remi Cozot.
- Images from 5001_HDR.hdr(.png) to 5057_HDR.hdr(.png) (36 HDR images and 36 SDR images) are copyright by Francesco Banterle.
- Other HDR datasets:
Evaluation
The evaluation criteria are:
Please note that the evaluation will be done using display-referred values; our reference display has a peak of 1,000 cd/m2 and a minimum of 0.01 cd/m2.
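For illustration only (this is our assumption, not necessarily the organizers' exact procedure), a simple display-referred mapping is a linear scale of the scene-referred prediction to the display peak, followed by clamping at the black level; lum is the HDR Toolbox luminance function:

    % Illustrative sketch; the organizers' actual mapping may differ.
    img  = hdrread('prediction.hdr');    % scene-referred prediction
    L    = lum(img);                     % per-pixel luminance (HDR Toolbox)
    imgD = img * (1000 / max(L(:)));     % map peak luminance to 1,000 cd/m2
    imgD = max(imgD, 0.01);              % clamp at the 0.01 cd/m2 black level

The clamp here is applied per channel for simplicity; a luminance-based clamp would be more faithful to a physical display.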
Please note that the committee will take into account recent publications on evaluation, such as "How to cheat with metrics in single-image HDR reconstruction" by Gabriel Eilertsen, Saghi Hajisharif, Param Hanji, Apostolia Tsirikoglou, Rafal K. Mantiuk, and Jonas Unger.
Important Dates
- 04/02/22 - Registration is open.
- 11/02/22 - Release of the training dataset (with ground truth).
- 22/04/22 - Paper deadline; for those who want to SUBMIT a paper to ICIP.
- 15/05/22 - Release of the testing dataset (no ground truth).
- 15/07/22 (23:59 Anywhere on Earth) - The competition closes; results need to be sent to the official email of the competition:
- The results need to be sent to grandchallenge@isti.cnr.it as a link using one of the following services: WeTransfer, Google Drive, or Dropbox.
- The images must maintain the original image size.
- The images need to be encoded using the Radiance file format; i.e., .hdr.
- The intensity values should be scene-referred; the display-referred version will be created by the organizers during the evaluation phase.
- The .zip file needs to contain a .pdf file with a brief explanation of the method with details about the pipeline/architecture (e.g., loss, training strategies), the employed datasets, etc.
- Other files (executables, macOS log files, other log files, etc.) are not permitted in the .zip file; if they are present, the submission will be withdrawn.
- 11/07/22 - Camera-ready paper deadline; for those who submitted a paper and had it accepted.
- Early September - Announcement of the winning team and the final leaderboard.
Contact Us
E-mail us at grandchallenge@isti.cnr.it for more information or issues.