
Learning the Image Processing Pipeline

Published: 01 October 2017

Abstract

Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image processing pipeline that transforms the sensor data into a form that is appropriate for the application. Designing and optimizing these pipelines is time-consuming and costly. We explain a method that combines machine learning and image systems simulation to automate the pipeline design. The approach is based on a new way of thinking of the image processing pipeline as a large collection of local linear filters. We illustrate how the method has been used to design pipelines for novel sensor architectures in consumer photography applications.
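The core idea of treating the pipeline as a collection of local linear filters can be sketched as follows: each pixel is assigned to a class (for example, by its color-filter-array position), and for each class a linear transform from a local sensor patch to the desired rendered values is learned by regularized least squares on simulated (sensor, target) pairs. The sketch below is illustrative only; function names, the classing scheme, and the ridge parameter are assumptions, not the authors' exact implementation.

```python
import numpy as np

def learn_local_linear_filters(patches, targets, classes, lam=1e-3):
    """Learn one linear transform per pixel class by ridge regression.

    patches : (N, P) array of local sensor-patch vectors (e.g., simulated)
    targets : (N, C) array of desired rendered values for the center pixel
    classes : (N,) integer class label per sample (e.g., CFA position)
    lam     : ridge regularizer (illustrative default)
    """
    filters = {}
    for c in np.unique(classes):
        X = patches[classes == c]
        Y = targets[classes == c]
        # Regularized least-squares solution: W = (X'X + lam*I)^-1 X'Y
        A = X.T @ X + lam * np.eye(X.shape[1])
        filters[c] = np.linalg.solve(A, X.T @ Y)
    return filters

def apply_filters(patches, classes, filters):
    """Render each patch with the linear filter of its class."""
    n_out = next(iter(filters.values())).shape[1]
    out = np.empty((patches.shape[0], n_out))
    for c, W in filters.items():
        out[classes == c] = patches[classes == c] @ W
    return out
```

Because rendering is just one matrix product per pixel after a class lookup, the learned pipeline is cheap to apply, which is one motivation for this formulation.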


Published in: IEEE Transactions on Image Processing, Volume 26, Issue 10, Oct. 2017, 486 pages. Publisher: IEEE Press.
