THE IS&T LONDON IMAGING MEETING: FUTURE COLOUR IMAGING

Within the proceedings posted to the digital library, you will find the technical papers that were presented at the inaugural London Imaging Meeting, held on the 29th of September (tutorial day), the 30th of September, and the 1st of October 2020. The program comprised 2 keynotes, 5 focal presentations, 15 orals, 13 posters, and 9 late-breaking posters (the latter are abstract only).

The genesis of the London Imaging Meeting (LIM) began with a conversation we had with the Society for Imaging Science and Technology (IS&T) about 18 months ago. We had observed that it was common to attend a day-long technical workshop in London and that these were not only popular but generally highly oversubscribed. Moreover, the 'day format' typically comprised talks only, with no related archival papers. And, to pack in as much content as possible, the day workshops were long, with little opportunity to speak with the speakers. We proposed that a two-day conference format could retain the punchy format of the one-day meeting but make the travel easier (including from Europe) and, crucially, also provide a forum for the publication of new archival work. The single night in London would both facilitate researchers meeting each other and be the catalyst for new collaborations.

Importantly, we pitched the LIM concept as a topics-based meeting. This year, the conference was titled "Future Colour Imaging". We reached out to five international experts in colour imaging to give focal talks to seed five sessions: Prof. Jon Hardeberg, NTNU, Norway (multispectral); Prof. Ronnier Luo, Zhejiang University (color science); Prof. Raimondo Schettini, University of Milano-Bicocca (learning color imaging); Prof. Hannah Smithson, University of Oxford (perception); and Philipp Urban, Fraunhofer Institute for Computer Graphics Research IGD (3D printing).

On each day we also had a superb keynote. Prof. Felix Heide, Princeton University, gave a talk titled "Designing Cameras to Detect the 'Invisible': Towards Domain-Specific Computational Imaging", which, among other topics, considered how to place camera pipelines into today's commonly used CNN deep learning framework. Prof. Laurence Maloney, New York University, gave the keynote "Surface Color Perception in Realistic Scenes: Previews of a Future Color Science", which used VR to control the presentation of physical stimuli to observers and so investigate how accurately we solve the color constancy problem.

There were many strong contenders for the LIM best paper prize, including S. Mohajerani et al., "Illumination-Invariant Image from 4-Channel Images: The Effect of Near-infrared Data in Shadow Removal" (Simon Fraser University); M. Kim et al., "Contrast Sensitivity Functions for HDR Displays" (University of Cambridge); and Y. Zhu, "Designing a Physically-feasible Colour Filter to Make a Camera More Colorimetric" (University of East Anglia). The best paper prize was awarded to P. Backes and J. Fröhlich, "A Practical Approach on Non-regular Sampling and Universal Demosaicing of Raw Image Sensor Data" (Stuttgart Media University).

We thank everyone who helped make LIM a success, including the IS&T office, the presenters, the reviewers, our focal and keynote speakers, and the audience, who participated in making the event engaging and vibrant. A special thanks goes to the Engineering and Physical Sciences Research Council (EPSRC), which provided funding through grant EP/S028730/1.

—Prof. Graham Finlayson, LIM Series Chair
The goal of this research is to generate high-quality chromatic contrast sensitivity (CCS) data over a large range, especially at low spatial frequencies, around five colour centres, i.e. white, red, yellow, green, and blue. An experiment was carried out using a forced-choice staircase method to investigate visible colour-difference thresholds along different colour-change directions at different spatial frequencies. Just-noticeable-difference (JND) ellipses at different spatial frequencies were used to represent the data.
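The abstract does not detail the staircase procedure, so the following is a minimal sketch in Python of one common forced-choice variant, a 2-down/1-up staircase that converges near the 70.7%-correct threshold; the `respond` callback, step size, and toy observer are illustrative assumptions, not the authors' implementation.

```python
import random

def two_down_one_up_staircase(respond, start=8.0, step=1.0, n_reversals=8):
    """Minimal 2-down/1-up forced-choice staircase.

    `respond(level)` returns True when the observer detects the colour
    difference at `level` (e.g. the distance from a colour centre along
    one colour-change direction). Converges near the 70.7%-correct point.
    """
    level, correct_in_row, last_dir = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row == 2:         # two correct in a row: step down
                correct_in_row = 0
                if last_dir == +1:          # direction changed: a reversal
                    reversals.append(level)
                last_dir = -1
                level = max(level - step, 0.0)
        else:                               # one error: step up
            correct_in_row = 0
            if last_dir == -1:
                reversals.append(level)
            last_dir = +1
            level += step
    return sum(reversals) / len(reversals)  # threshold estimate

# Toy observer whose detection probability grows with the colour difference.
print(two_down_one_up_staircase(lambda x: random.random() < min(1.0, x / 5.0)))
```

Repeating such runs along many colour-change directions around a colour centre yields the threshold points to which a JND ellipse can be fitted.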
In this paper, skin tone heterogeneity in five facial areas (forehead, right cheekbone, left cheekbone, nose tip, and chin) was investigated under six light sources with correlated color temperatures (CCTs) of 2850 K, 3500 K, 5000 K, 5500 K, 6500 K, and 9000 K. Firstly, a facial image capturing protocol was developed and applied to five female participants, and their facial skin tone was analyzed based on the captured images. Through color characterization of the camera, the extracted RGB data in each facial area were converted to XYZ values via a matrix and then transformed to the CAM02-UCS color space. The mean color difference from the mean (MCDM), computed with the CAM02-UCS color difference, was used to quantify skin tone heterogeneity in each facial area. The results indicated that heterogeneity was larger under light sources with lower CCTs; when the CCT of the light source ranged from 5000 K to 9000 K, skin tone heterogeneity in each facial area was smaller.
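As a sketch, MCDM as commonly defined is the average distance of the samples from their centroid; below it is computed with Euclidean distance on coordinates assumed to be already in CAM02-UCS J'a'b', where that distance approximates the colour difference. The sample values are hypothetical.

```python
import numpy as np

def mcdm(coords):
    """Mean colour difference from the mean (MCDM).

    `coords`: (N, 3) array of per-pixel coordinates in a uniform colour
    space such as CAM02-UCS (J', a', b'), where Euclidean distance
    approximates the perceived colour difference.
    """
    coords = np.asarray(coords, dtype=float)
    centroid = coords.mean(axis=0)                        # mean colour
    return np.linalg.norm(coords - centroid, axis=1).mean()

# Hypothetical J'a'b' samples from one facial area.
patch = np.array([[62.1, 18.4, 14.2],
                  [63.0, 17.9, 15.1],
                  [61.5, 19.2, 13.8]])
print(f"MCDM = {mcdm(patch):.3f}")
```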
This experiment aimed to study the preference for mobile phone facial images captured under different simulated ambient lightings. It was carried out by assessing preference for two facial images under 11 lighting conditions (five correlated colour temperature levels at two illuminance levels, plus a dark condition). Forty-five images were processed via the CAT02 chromatic adaptation transform to simulate pictures captured under the different light environments. The results revealed that the preferred capture region was between 6500 K and 8000 K at around -0.05 Duv. Furthermore, the preferred skin tones of all 45 rendered images were in good agreement under all the ambient viewing lightings, i.e. mean values of L*, C*ab, and hab of [76.3, 25.1, 46.4°] under D65/10° conditions.
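As a sketch of the simulation step, the snippet below applies a CAT02-style chromatic adaptation from a source white to a destination white; the matrix is the standard CAT02 matrix, while the simple linear blend for the degree of adaptation D and the sample values are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

# Standard CAT02 forward matrix (XYZ -> sharpened cone-like RGB).
M_CAT02 = np.array([[ 0.7328,  0.4296, -0.1624],
                    [-0.7036,  1.6975,  0.0061],
                    [ 0.0030,  0.0136,  0.9834]])

def cat02_adapt(XYZ, XYZ_w_src, XYZ_w_dst, D=1.0):
    """Von Kries-style CAT02 adaptation of a tristimulus sample from a
    source white to a destination white; D = 1 means complete adaptation."""
    rgb    = M_CAT02 @ np.asarray(XYZ, float)
    rgb_ws = M_CAT02 @ np.asarray(XYZ_w_src, float)
    rgb_wd = M_CAT02 @ np.asarray(XYZ_w_dst, float)
    gain   = D * (rgb_wd / rgb_ws) + (1.0 - D)  # illustrative partial-adaptation blend
    return np.linalg.inv(M_CAT02) @ (gain * rgb)

# Hypothetical example: adapt a skin-tone sample from illuminant A to D65.
XYZ_A, XYZ_D65 = [109.85, 100.0, 35.58], [95.047, 100.0, 108.883]
print(cat02_adapt([40.0, 35.0, 20.0], XYZ_A, XYZ_D65))
```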
How, and to what extent, an increase of C*ab affects various subjective evaluations by observers with congenital red-green color vision deficiency (CVD) and normal color vision (NCV) was investigated using scenery, food, and graph images. Results of the "Pale vs Deep" evaluation indicate a similar tendency for all color vision types across all test images, suggesting that CVD observers recognize saturation changes in images much as NCV observers do, presumably by means of some kind of strategy. Individual differences among CVD observers in the results for other adjective pairs, such as "Unnatural vs Natural", are generally larger than those among NCV observers. Some color combinations in the graph images are indiscriminable for either protan or deutan observers and are therefore not recommended for use.
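The abstract does not specify how the C*ab increase was realised; one plausible implementation, sketched below, scales a* and b* by a common factor, which scales the chroma C*ab = sqrt(a*^2 + b*^2) while leaving L* and the hue angle hab unchanged.

```python
import numpy as np

def scale_chroma_lab(lab, factor):
    """Scale CIELAB chroma C*ab by `factor`, holding L* and the hue
    angle h_ab fixed. One plausible realisation of a C*ab increase;
    the paper's actual image processing is not specified here."""
    L, a, b = np.moveaxis(np.asarray(lab, float), -1, 0)
    return np.stack([L, a * factor, b * factor], axis=-1)

print(scale_chroma_lab([55.0, 20.0, -10.0], 1.5))  # C*ab x1.5, same L* and hue
```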
We exploit evolutionary computation to optimize the handcrafted Structural Similarity (SSIM) metric through a data-driven approach. We estimate the best combination of the luminance, contrast, and structure components, as well as the sliding-window size used for processing, with the objective of maximizing the correlation of the similarity score with human-expressed mean opinion scores on a standard dataset. We experimentally observe that better results can be obtained by penalizing the overall similarity only for very low levels of luminance similarity. Finally, we report a comparison of SSIM with the optimized parameters against other full-reference quality assessment metrics, showing superior performance on a different dataset.
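As a sketch of the search space, the snippet below parameterises single-patch SSIM with exponents on its luminance, contrast, and structure components, i.e. the kind of quantities an evolutionary optimizer could tune; the sliding-window machinery and the paper's actual parameterisation are omitted, and the constants follow the usual SSIM defaults for 8-bit images.

```python
import numpy as np

def parametric_ssim(x, y, alpha=1.0, beta=1.0, gamma=1.0,
                    C1=6.5025, C2=58.5225):
    """Single-patch SSIM with tunable exponents on the luminance (l),
    contrast (c), and structure (s) components. C1 = (0.01*255)**2 and
    C2 = (0.03*255)**2 are the usual stabilising constants."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    C3 = C2 / 2.0
    l = (2 * mx * my + C1) / (mx**2 + my**2 + C1)        # luminance term
    c = (2 * np.sqrt(vx * vy) + C2) / (vx + vy + C2)     # contrast term
    s = (cov + C3) / (np.sqrt(vx * vy) + C3)             # structure term
    return l**alpha * c**beta * s**gamma

rng = np.random.default_rng(0)
x = rng.uniform(0, 255, (11, 11))                        # reference patch
y = x + rng.normal(0, 5, x.shape)                        # distorted patch
print(parametric_ssim(x, y, alpha=2.0, beta=1.0, gamma=1.0))
```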
In texture analysis, stationarity is a fundamental property. There are various ways to evaluate whether a texture image is stationary or not. One of the most recent and effective of these is a standard test based on the non-decimated (stationary) wavelet transform. This method makes it possible to evaluate how stationary an image is at each scale considered. We propose to use this per-scale measure as an image feature, and we discuss the implications of such an approach.
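A sketch of how a scale-indexed feature can be derived from the non-decimated (stationary) wavelet transform, here via PyWavelets' `swt2`; the per-scale detail energy below is an illustrative feature, not the paper's actual test statistic.

```python
import numpy as np
import pywt  # PyWavelets

def per_scale_detail_energy(img, wavelet="haar", levels=3):
    """Mean detail-subband energy at each scale of the stationary
    (non-decimated) wavelet transform, usable as a scale-indexed
    texture feature. Image sides must be divisible by 2**levels."""
    img = np.asarray(img, float)
    coeffs = pywt.swt2(img, wavelet, level=levels)       # undecimated SWT
    feats = []
    for _, (cH, cV, cD) in coeffs:                       # detail subbands
        feats.append(np.mean(cH**2 + cV**2 + cD**2))
    return np.array(feats)

rng = np.random.default_rng(1)
texture = rng.normal(size=(64, 64))                      # 64 = 2**6, fine for 3 levels
print(per_scale_detail_energy(texture))
```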
Experiments were carried out to investigate the simultaneous lightness contrast effect on a self-luminous display using a simultaneous colour matching method. The Albers contrast pattern named 'double-crosses' was used. The goals of this study were to model the lightness contrast effect and to modify the CAM16 colour appearance model accordingly. Five coloured targets were studied, and 41 test/background combinations were displayed on a calibrated display. Twenty observers with normal colour vision performed colour matching in the experiment; in total, 820 matches were accumulated. The results show that the present CAM16 gives unsatisfactory predictions of the effect, especially in the positive region, where the background is brighter than the target. Two models were established based on the visual data, i.e., with and without modification to the lightness difference in CAM16 space. Both models predict the effect with high accuracy and reliability.
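The abstract does not give the model form, so the following is a purely hypothetical sketch: a correction to CAM16 lightness J driven by the signed lightness difference between background and target, with a coefficient that would be fitted to the visual data. Both the linear form and the coefficient k are assumptions.

```python
def corrected_lightness(J_target, J_background, k=0.05):
    """Hypothetical linear correction for simultaneous lightness
    contrast in CAM16 J: a brighter background (positive delta)
    makes the target appear darker. The form and the value of k
    are illustrative, not the paper's fitted model."""
    delta = J_background - J_target        # positive: background brighter
    return J_target - k * delta

print(corrected_lightness(50.0, 80.0))     # target predicted to appear darker
```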
Objects in real three-dimensional environments receive illumination from all directions, characterized in computer graphics by an environmental illumination map. The spectral content of this illumination can vary widely with direction [1], which means that the computational task of recovering surface color under environmental illumination cannot be reduced to correction for a single illuminant. We report the performance of human observers in selecting a target surface color from among three distractors, one rendered under the same environmental illumination as the target and two rendered under a different environmental illumination. Surface colors were selected such that, in the vast majority of trials, observers could identify the environment that contained non-identical surface colors, and color constancy performance was analyzed as the percentage of correct choices between the remaining two surfaces. The target and distractor objects were either matte or glossy and were presented either with surrounding context or in a dark void. Mean performance ranged from 70% to 80%. There was a significant improvement in the presence of context, but no difference between matte and glossy stimuli and no interaction between gloss and context. Analysis of trial-by-trial responses showed a dependence on the statistical properties of previously viewed images. Such analyses provide a means of investigating mechanisms that depend on environmental features, and not only on the properties of the instantaneous proximal image.