Coarse-to-Fine Network-Based Intra Prediction in Versatile Video Coding
Abstract
1. Introduction
- An intra prediction method based on coarse-to-fine networks, which combine convolutional neural networks and fully connected layers, is proposed to enhance the coding efficiency of VVC intra prediction (a high-level sketch of the two-stage flow is given after this list).
- The coarse network is designed to adjust the influence of the reference samples on the prediction according to their positions and conditions.
- The fine network generates improved prediction samples by considering continuity with the adjacent reference samples, and it enables prediction through upscaling for block sizes not supported by the coarse network.
- A dataset construction method that accounts for block partitioning, together with an objective function based on transform and quantization, is proposed to enhance the usability of the networks in a video codec.
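For orientation, the following is a minimal sketch of the two-stage inference flow in PyTorch. The module interfaces, tensor layout, and the way the reference samples are arranged around the coarse prediction are illustrative assumptions, not the exact VTM integration described in Section 5.

```python
# Minimal sketch of the coarse-to-fine inference flow (assumptions noted below).
import torch
import torch.nn as nn


def coarse_to_fine_predict(coarse_net: nn.Module,
                           fine_net: nn.Module,
                           top_refs: torch.Tensor,   # shape (W,): reference row above the block
                           left_refs: torch.Tensor,  # shape (H,): reference column left of the block
                           block_hw=(8, 8)) -> torch.Tensor:
    h, w = block_hw
    refs = torch.cat([top_refs, left_refs])           # flattened reference samples

    # Stage 1: the fully connected coarse network predicts the block from the refs.
    coarse_pred = coarse_net(refs).view(h, w)

    # Stage 2 (assumed arrangement): place the adjacent reference samples around the
    # coarse prediction (one extra row above, one extra column to the left) so the
    # convolutional fine network can keep the output continuous with its neighbourhood.
    context = torch.zeros(1, 1, h + 1, w + 1)
    context[0, 0, 0, 1:] = top_refs
    context[0, 0, 1:, 0] = left_refs
    context[0, 0, 1:, 1:] = coarse_pred
    fine_pred = fine_net(context)                      # assumes a 'same'-size output
    return fine_pred[0, 0, 1:, 1:]                     # keep only the block area
```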
2. Related Works
3. Proposed Coarse-to-Fine Network for Intra Prediction
3.1. Coarse Networks
3.2. Fine Networks
4. Training Considering Video Codec
4.1. Dataset Construction
4.2. Objective Function
5. Codec Integration
5.1. Network Integration in VVC
5.2. Mode Mapping for LFNST
6. Performance Evaluation
6.1. Evaluation of Coarse-to-Fine Network
6.2. Visualization Analysis
6.3. Complexity Analysis
6.4. Comparison with State-of-the-Art Methods
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kalampogia, A.; Koutsakis, P. H.264 and H.265 video bandwidth prediction. IEEE Trans. Multimed. 2018, 20, 171–182. [Google Scholar] [CrossRef]
- Sullivan, G.; Ohm, J.; Han, W.; Wiegand, T. Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 1649–1668. [Google Scholar] [CrossRef]
- Bross, B.; Wang, Y.; Ye, Y.; Liu, S.; Chen, J.; Sullivan, G.; Ohm, J. Overview of the Versatile Video Coding (VVC) standard and its applications. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3736–3764. [Google Scholar] [CrossRef]
- Huang, Y.; An, J.; Huang, H.; Li, X.; Hsiang, S.; Zhang, K.; Gao, H.; Ma, J.; Chubach, O. Block partitioning structure in the VVC standard. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3818–3833. [Google Scholar] [CrossRef]
- Gao, H.; Esenlik, S.; Alshina, E.; Steinbach, E. Geometric partitioning mode in Versatile Video Coding: Algorithm review and analysis. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3603–3617. [Google Scholar] [CrossRef]
- Pfaff, J.; Filippov, A.; Zhao, X.; Chen, J.; Wiegand, T.; Auwera, G. Intra prediction and mode coding in VVC. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3834–3847. [Google Scholar] [CrossRef]
- Chien, W.; Zhang, L.; Winken, M.; Li, X.; Liao, R.; Gao, H.; Hsu, C.; Liu, H.; Chen, C. Motion vector coding and block merging in the Versatile Video Coding standard. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3848–3861. [Google Scholar] [CrossRef]
- Zhao, X.; Kim, S.; Zhao, Y.; Egilmez, H.; Koo, M.; Liu, S.; Lainema, J.; Karczewicz, M. Transform coding in the VVC standard. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3878–3890. [Google Scholar] [CrossRef]
- Schwarz, H.; Coban, M.; Chuang, T.; Bossen, F.; Alshin, A.; Lainema, J.; Helmrich, C.; Wiegand, T. Quantization and entropy coding in the Versatile Video Coding (VVC) standard. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3891–3906. [Google Scholar] [CrossRef]
- Karczewicz, M.; Hu, N.; Taquet, J.; Chen, C.; Misra, K.; Andersson, K.; Yin, P.; Lu, T.; François, E.; Chen, J. VVC in-loop filters. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3907–3925. [Google Scholar] [CrossRef]
- Pfaff, J.; Stallenberger, B.; Schafer, M.; Merkle, P.; Helle, P.; Hinz, T.; Schwarz, H.; Marpe, D.; Wiegand, T. Affine linear weighted intra prediction. Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, doc. JVET-N0217. In Proceedings of the 14th Meeting, Geneva, Switzerland, March 2019. [Google Scholar]
- Koo, M.; Salehifar, M.; Lim, J.; Kim, S. Low Frequency Non-Separable Transform (LFNST). In Proceedings of the 2019 Picture Coding Symposium (PCS), Ningbo, China, 12–15 November 2019. [Google Scholar]
- Alshina, E.; Galpin, F.; Li, Y.; Santamaria, M.; Wang, H.; Wang, L.; Xie, Z. EE1: Summary of exploration experiments on neural network-based video coding. Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, doc. JVET-AB0023. In Proceedings of the 28th Meeting, Mainz, Germany, October 2022. [Google Scholar]
- Li, J.; Li, Y.; Lin, C.; Zhang, K.; Zhang, L. A neural network enhanced video coding framework beyond VVC. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 1781–1785. [Google Scholar]
- Pfaff, J.; Helle, P.; Merkle, P.; Schäfer, M.; Stallenberger, B.; Hinz, T.; Schwarz, H.; Marpe, D.; Wiegand, T. Data-driven intra-prediction modes in the development of the Versatile Video Coding standard. ICT Discov. 2020, 3. [Google Scholar]
- Chen, J.; Ye, Y.; Kim, S. Algorithm description for Versatile Video Coding and Test Model 11 (VTM 11). Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, doc. JVET-T2002. In Proceedings of the 20th Meeting, by Teleconference (Online), October 2020. [Google Scholar]
- Sun, H.; Cheng, Z.; Takeuchi, M.; Katto, J. Enhanced intra prediction for video coding by using multiple neural networks. IEEE Trans. Multimed. 2020, 22, 2764–2779. [Google Scholar] [CrossRef]
- Hu, Y.; Yang, W.; Li, M.; Liu, J. Progressive spatial recurrent neural network for intra prediction. IEEE Trans. Multimed. 2019, 21, 3024–3037. [Google Scholar] [CrossRef]
- Dumas, T.; Roumy, A.; Guillemot, C. Context-adaptive neural network-based prediction for image compression. IEEE Trans. Image Process. 2019, 29, 679–693. [Google Scholar] [CrossRef]
- Brand, F.; Seiler, J.; Kaup, A. Intra-frame coding using a conditional autoencoder. IEEE J. Sel. Top. Signal Process. 2021, 15, 354–365. [Google Scholar] [CrossRef]
- Dumas, T.; Galpin, F.; Bordes, P. Combined Neural Network-based Intra Prediction and Transform Selection. In Proceedings of the 2021 Picture Coding Symposium (PCS), Bristol, UK, 29 June–2 July 2021. [Google Scholar]
- Blanch, M.; Blasi, S.; Smeaton, A.; O’Connor, N.; Mrak, M. Attention-based neural networks for chroma intra prediction in video coding. IEEE J. Sel. Top. Signal Process. 2021, 15, 366–377. [Google Scholar] [CrossRef]
- Park, J.; Kim, B.; Lee, J.; Jeon, B. Machine Learning-Based Early Skip Decision for Intra Prediction in VVC. IEEE Access 2022, 10, 111052–111065. [Google Scholar] [CrossRef]
- Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; Huang, T. Generative image inpainting with contextual attention. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 5505–5514. [Google Scholar]
- Yu, J.; Lin, Z.; Yang, J.; Shen, X.; Lu, X.; Huang, T. Free-form image inpainting with gated convolution. In Proceedings of the International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4471–4480. [Google Scholar]
- Jin, X.; Chen, Z.; Liu, S.; Zhou, W. Augmented Coarse-to-Fine Video Frame Synthesis with Semantic Loss. In Proceedings of the PRCV 2018: Pattern Recognition and Computer Vision, Guangzhou, China, 23–26 November 2018; pp. 439–452. [Google Scholar]
- Luo, D.; Ye, M.; Li, S.; Li, X. Coarse-to-Fine Spatio-Temporal Information Fusion for Compressed Video Quality Enhancement. IEEE Signal Process. Lett. 2022, 29, 543–547. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
- Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 1989, 2, 303–314. [Google Scholar] [CrossRef]
- Arora, R.; Basu, A.; Mianjy, P.; Mukherjee, A. Understanding deep neural networks with rectified linear units. arXiv 2016, arXiv:1611.01491. [Google Scholar]
- Maas, A.; Hannun, A.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the 30th International Conference on Machine Learning (ICML), Atlanta, GA, USA, 16–21 June 2013. [Google Scholar]
- Ma, D.; Zhang, F.; Bull, D. BVI-DVC: A Training database for deep video compression. IEEE Trans. Multimed. 2021, 24, 3847–3858. [Google Scholar] [CrossRef]
- Lu, X.; Liu, S.; Li, Z. Tencent Video Dataset (TVD): A video dataset for learning-based visual data compression and analysis. arXiv 2021, arXiv:2105.05961. [Google Scholar]
- VVC Test Model. Available online: https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/-/tree/VTM-11.0 (accessed on 12 December 2020).
- Paul, M.; Antony, A.; Sreelekha, G. Performance improvement of HEVC using adaptive quantization. In Proceedings of the 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Delhi, India, 24–27 September 2014; pp. 1428–1433. [Google Scholar]
- Lam, E.; Goodman, J. A mathematical analysis of the DCT coefficient distributions for images. IEEE Trans. Image Process. 2000, 9, 1661–1666. [Google Scholar] [CrossRef]
- Kingma, D.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Abdoli, M.; Guionnet, T.; Raulet, M.; Kulupana, G.; Blasi, S. Decoder-side intra mode derivation for next generation video coding. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020. [Google Scholar]
- Li, Y.; Wang, H.; Wang, L.; Galpin, F.; Ström, J. Algorithm description for neural network-based video coding 3 (NNVC 3). Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, doc. JVET-AB2019. In Proceedings of the 28th Meeting, Mainz, Germany, October 2022. [Google Scholar]
- Bossen, F.; Boyce, J.; Suehring, K.; Li, X.; Seregin, V. VTM common test conditions and software reference configurations for SDR video. Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29, doc. JVET-T2010. In Proceedings of the 20th Meeting, by Teleconference (Online), October 2020. [Google Scholar]
- Bjøntegaard, G. Calculation of average PSNR differences between RD curves. Video Coding Experts Group (VCEG) of ITU-T SG 16 WP 3, doc. VCEG-M33. In Proceedings of the 13th Meeting, Austin, TX, USA, April 2001. [Google Scholar]
(a) Position-Wise Weight Derivation

| Layer | Layer Type | Number of Neurons | Non-Linearity |
|---|---|---|---|
| | Fully connected | | Sigmoid |
| | Fully connected | | ReLU |

(b) Prediction Part

| Layer | Layer Type | Number of Neurons | Non-Linearity |
|---|---|---|---|
| | Fully connected | | LeakyReLU |
| | Fully connected | | LeakyReLU |
| | Fully connected | | LeakyReLU |
| | Fully connected | | - |
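Read alongside the layer listing above, the coarse network could be sketched as follows in PyTorch. The neuron counts and the number of reference samples are not given above, so the widths below are placeholders, and the multiplicative gating used to combine the position-wise weights with the prediction part is an assumption for illustration.

```python
import torch
import torch.nn as nn


class CoarseNet(nn.Module):
    """Sketch of the coarse network: (a) a fully connected branch derives
    position-wise weights (Sigmoid, then ReLU layers), and (b) a fully connected
    prediction part (three LeakyReLU layers plus a linear output) produces the
    block prediction. Layer widths here are hypothetical placeholders."""

    def __init__(self, num_refs: int, block_h: int, block_w: int, hidden: int = 512):
        super().__init__()
        # (a) Position-wise weight derivation.
        self.weight_branch = nn.Sequential(
            nn.Linear(num_refs, num_refs), nn.Sigmoid(),
            nn.Linear(num_refs, num_refs), nn.ReLU(),
        )
        # (b) Prediction part.
        self.prediction_part = nn.Sequential(
            nn.Linear(num_refs, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, block_h * block_w),   # linear output, no non-linearity
        )
        self.block_h, self.block_w = block_h, block_w

    def forward(self, refs: torch.Tensor) -> torch.Tensor:
        # Assumption: the derived weights gate the reference samples before the
        # prediction part, so less reliable reference positions contribute less.
        weighted_refs = refs * self.weight_branch(refs)
        return self.prediction_part(weighted_refs).view(self.block_h, self.block_w)
```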
| Layer | Layer Type | Filter Size | Number of Channels | Non-Linearity |
|---|---|---|---|---|
| | Convolutional | 5 × 5 | 64 | Hyperbolic Tangent |
| | Convolutional | 3 × 3 | 32 | Hyperbolic Tangent |
| | Convolutional | 3 × 3 | | - |
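A corresponding sketch of the fine (convolutional) network following the listing above; the input and output channel counts (a single luma channel is assumed) and the 'same' padding are assumptions, since the table specifies only the filter sizes, the intermediate channel counts, and the non-linearities.

```python
import torch.nn as nn


class FineNet(nn.Module):
    """Sketch of the fine network: a 5x5 convolution with 64 channels and a 3x3
    convolution with 32 channels, each followed by a hyperbolic tangent, and a
    final 3x3 convolution with no non-linearity producing the refined prediction."""

    def __init__(self, in_channels: int = 1, out_channels: int = 1):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=5, padding=2), nn.Tanh(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.Tanh(),
            nn.Conv2d(32, out_channels, kernel_size=3, padding=1),  # no non-linearity
        )

    def forward(self, x):
        return self.layers(x)
```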
| Block Size | | | Transposition | Coarse Network | Fine Network |
|---|---|---|---|---|---|
| (4, 4) | 1 | 1 | no | | |
| (4, 8) | 1 | 1 | no | | |
| (8, 4) | 1 | 1 | yes | | |
| (8, 8) | 1 | 1 | no | | |
| (4, 16) | 1 | 1 | no | | |
| (16, 4) | 1 | 1 | yes | | |
| (8, 16) | 1 | 1 | no | | |
| (16, 8) | 1 | 1 | yes | | |
| (16, 16) | 1 | 1 | no | | |
| (4, 32) | 1 | 1 | no | | |
| (32, 4) | 1 | 1 | yes | | |
| (8, 32) | 1 | 2 | no | | |
| (32, 8) | 2 | 1 | yes | | |
| (16, 32) | 1 | 2 | no | | |
| (32, 16) | 2 | 1 | yes | | |
| (32, 32) | 1 | 1 | no | | |
| (64, 64) | 2 | 2 | no | | |
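At integration time, a table like the one above can be implemented as a simple lookup, as sketched below. Reading the size tuples as (width, height) and the two unlabeled numeric columns as horizontal/vertical scaling factors is an assumption; only the transposition flags are explicit above.

```python
# Sketch of a per-block-size lookup built from the table above.
# (block size) -> (factor_1, factor_2, transpose); the meaning of the two
# factors as horizontal/vertical scaling is assumed, not stated in the table.
BLOCK_CONFIG = {
    (4, 4): (1, 1, False),   (4, 8): (1, 1, False),   (8, 4): (1, 1, True),
    (8, 8): (1, 1, False),   (4, 16): (1, 1, False),  (16, 4): (1, 1, True),
    (8, 16): (1, 1, False),  (16, 8): (1, 1, True),   (16, 16): (1, 1, False),
    (4, 32): (1, 1, False),  (32, 4): (1, 1, True),   (8, 32): (1, 2, False),
    (32, 8): (2, 1, True),   (16, 32): (1, 2, False), (32, 16): (2, 1, True),
    (32, 32): (1, 1, False), (64, 64): (2, 2, False),
}


def block_config(size):
    """Return (factor_1, factor_2, transpose) for a supported block size.

    Transposed shapes reuse the network trained for the mirrored shape, e.g. an
    8x4 block would be handled by transposing it into the 4x8 orientation first.
    """
    if size not in BLOCK_CONFIG:
        raise ValueError(f"block size {size} is not handled by the networks")
    return BLOCK_CONFIG[size]
```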
| Video Sequence | | Coarse | | | Coarse-to-Fine | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | Y | U | V | Y | U | V | Y | U | V |
| Class A1 (4 K) | Tango2 | −1.51% | −1.76% | −1.47% | −1.76% | −1.69% | −1.88% | −1.83% | −1.26% | −1.87% |
| | FoodMarket4 | −1.60% | −0.82% | −1.21% | −2.12% | −1.38% | −1.47% | −2.20% | −1.36% | −1.81% |
| | Campfire | −1.07% | −0.98% | −1.35% | −1.10% | −1.03% | −1.40% | −1.13% | −1.05% | −1.45% |
| Class A2 (4 K) | CatRobot1 | −0.97% | −1.21% | −1.13% | −1.18% | −1.14% | −1.38% | −1.23% | −1.26% | −1.15% |
| | DaylightRoad2 | −0.73% | −1.14% | −1.21% | −0.83% | −1.61% | −1.14% | −0.90% | −1.42% | −1.29% |
| | ParkRunning3 | −1.05% | −0.96% | −0.95% | −1.09% | −1.01% | −1.02% | −1.04% | −1.01% | −1.02% |
| Class B (1080 p) | MarketPlace | −1.17% | −1.11% | −0.77% | −1.33% | −0.87% | −0.77% | −1.36% | −1.53% | −1.35% |
| | RitualDance | −1.59% | −0.93% | −1.13% | −1.82% | −1.21% | −1.25% | −2.00% | −1.28% | −1.46% |
| | Cactus | −0.99% | −1.10% | −0.96% | −1.13% | −1.08% | −0.99% | −1.12% | −1.06% | −0.85% |
| | BasketballDrive | −0.85% | −0.87% | −0.59% | −1.00% | −1.02% | −0.86% | −1.06% | −0.58% | −0.64% |
| | BQTerrace | −0.89% | −0.96% | −1.17% | −0.94% | −1.07% | −1.24% | −0.88% | −1.15% | −1.19% |
| Class C (WVGA) | BasketballDrill | −1.16% | −0.98% | −0.73% | −1.31% | −1.15% | −0.56% | −1.31% | −0.79% | −0.85% |
| | BQMall | −0.82% | −0.79% | −0.59% | −0.93% | −0.56% | −0.34% | −0.93% | −0.75% | −0.93% |
| | PartyScene | −0.77% | −0.51% | −0.72% | −0.79% | −0.53% | −0.67% | −0.75% | −0.45% | −0.20% |
| | RaceHorses | −0.90% | −1.39% | −1.14% | −1.06% | −0.85% | −1.06% | −1.10% | −0.96% | −1.12% |
| Class D (WQVGA) | BasketballPass | −0.71% | −0.33% | −0.95% | −0.99% | −0.35% | −0.98% | −0.88% | −0.32% | −1.03% |
| | BQSquare | −0.38% | −0.10% | 0.12% | −0.42% | −0.32% | −0.38% | −0.41% | 0.06% | 0.68% |
| | BlowingBubbles | −1.18% | −0.25% | −1.36% | −1.26% | −0.90% | −1.43% | −1.23% | −1.32% | −1.60% |
| | RaceHorses | −1.06% | −1.78% | −1.23% | −1.42% | −0.27% | −1.35% | −1.46% | −0.59% | −1.60% |
| Class E (720 p) | FourPeople | −1.35% | −1.29% | −1.26% | −1.66% | −1.37% | −1.41% | −1.73% | −1.34% | −1.62% |
| | Johnny | −1.17% | −0.26% | −1.50% | −1.43% | −1.16% | −2.20% | −1.64% | −1.42% | −1.38% |
| | KristenAndSara | −1.02% | −0.82% | −1.00% | −1.32% | −1.08% | −0.64% | −1.43% | −1.47% | −0.36% |
| Overall | Class A1 | −1.39% | −1.19% | −1.34% | −1.66% | −1.37% | −1.58% | −1.72% | −1.22% | −1.71% |
| | Class A2 | −0.92% | −1.11% | −1.09% | −1.03% | −1.25% | −1.18% | −1.06% | −1.23% | −1.16% |
| | Class B | −1.10% | −0.99% | −0.92% | −1.24% | −1.05% | −1.02% | −1.29% | −1.12% | −1.10% |
| | Class C | −0.91% | −0.92% | −0.80% | −1.02% | −0.77% | −0.66% | −1.03% | −0.74% | −0.77% |
| | Class D | −0.83% | −0.62% | −0.86% | −1.02% | −0.46% | −1.03% | −0.99% | −0.54% | −0.89% |
| | Class E | −1.18% | −0.79% | −1.25% | −1.47% | −1.20% | −1.41% | −1.60% | −1.41% | −1.12% |
| | Average | −1.04% | −0.93% | −1.01% | −1.22% | −0.98% | −1.11% | −1.26% | −1.01% | −1.10% |
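The per-class "Overall" rows are consistent with simple arithmetic means of the per-sequence BD-rates above them; for example, for Class A1 luma under the Coarse configuration, (−1.51 − 1.60 − 1.07) / 3 ≈ −1.39%. A minimal check:

```python
# Quick sanity check that the per-class rows are plain averages of the
# per-sequence BD-rates (Class A1 luma, Coarse configuration, values from the table).
class_a1_coarse_y = [-1.51, -1.60, -1.07]   # Tango2, FoodMarket4, Campfire
print(f"{sum(class_a1_coarse_y) / len(class_a1_coarse_y):.2f}%")  # -1.39%, as in the Overall row
```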
| Class | Enc. RT (Coarse) | Enc. RT (Coarse-to-Fine) | Dec. RT (Coarse) | Dec. RT (Coarse-to-Fine) |
|---|---|---|---|---|
| A1 | 508% | 676% | 939% | 9402% |
| A2 | 603% | 679% | 898% | 7188% |
| B | 653% | 726% | 933% | 7816% |
| C | 587% | 647% | 850% | 8549% |
| D | 547% | 599% | 782% | 7538% |
| E | 684% | 739% | 966% | 9090% |
| Average | 594% | 678% | 896% | 8409% |
| Video Sequence | | MIP [11] | | | CAE [21] | | | Coarse-to-Fine | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | Y | U | V | Y | U | V | Y | U | V |
| Class A1 (4 K) | Tango2 | −1.08% | −0.44% | −0.51% | −2.20% | −3.13% | −1.92% | −2.91% | −1.51% | −2.89% |
| | FoodMarket4 | −1.62% | −0.78% | −0.83% | −3.62% | −2.84% | −2.70% | −3.21% | −2.63% | −3.00% |
| | Campfire | −0.56% | −0.20% | −0.11% | −0.82% | −0.33% | −0.26% | −1.46% | −1.29% | −1.58% |
| Class A2 (4 K) | CatRobot1 | −0.57% | −0.08% | −0.20% | −1.20% | −1.65% | −1.24% | −1.64% | −1.66% | −1.15% |
| | DaylightRoad2 | −0.37% | 0.03% | 0.00% | −0.80% | −1.63% | −0.84% | −1.32% | −1.86% | −1.13% |
| | ParkRunning3 | −0.50% | −0.18% | −0.17% | −1.05% | −0.94% | −1.01% | −1.37% | −1.08% | −1.12% |
| Class B (1080 p) | MarketPlace | −0.67% | −0.30% | −0.47% | −1.22% | −0.93% | −1.37% | −1.86% | −0.96% | −1.49% |
| | RitualDance | −0.42% | 0.20% | 0.16% | −1.21% | −0.92% | −1.15% | −1.95% | −0.91% | −0.65% |
| | Cactus | −0.48% | −0.02% | 0.09% | −0.96% | −1.06% | −0.98% | −1.61% | −0.32% | −1.25% |
| | BasketballDrive | −0.39% | −0.52% | 0.15% | −0.91% | −1.16% | −1.08% | −1.35% | 0.03% | −0.03% |
| | BQTerrace | −0.29% | −0.03% | 0.26% | −0.70% | −0.50% | −0.49% | −1.00% | −1.99% | −1.44% |
| Class C (WVGA) | BasketballDrill | −0.42% | −0.03% | 0.17% | −0.78% | −0.54% | −0.84% | −1.85% | −0.86% | −0.09% |
| | BQMall | −0.50% | −0.27% | −0.50% | −0.79% | −0.96% | −1.00% | −0.76% | −0.49% | −0.88% |
| | PartyScene | −0.57% | 0.02% | −0.27% | −0.62% | −0.56% | −0.51% | −1.05% | −1.38% | −1.25% |
| | RaceHorses | −0.44% | 0.08% | −0.33% | −0.98% | −0.95% | −0.64% | −1.41% | −0.90% | −0.59% |
| Class D (WQVGA) | BasketballPass | −0.29% | −0.55% | −0.51% | −0.37% | −1.04% | −2.06% | −1.04% | 3.69% | 1.43% |
| | BQSquare | −0.74% | −0.22% | 0.22% | −0.52% | −0.75% | −0.16% | −0.52% | −0.79% | −1.03% |
| | BlowingBubbles | −0.48% | −0.28% | −0.68% | −0.71% | −1.03% | −1.14% | −0.88% | −2.27% | −0.81% |
| | RaceHorses | −0.48% | −0.12% | 0.20% | −0.98% | −1.21% | −0.85% | −1.79% | −1.92% | −2.75% |
| Class E (720 p) | FourPeople | −0.43% | −0.10% | −0.18% | −1.15% | −1.16% | −1.46% | −2.08% | −0.79% | −2.13% |
| | Johnny | −0.53% | −0.68% | −0.95% | −1.45% | −1.56% | −1.75% | −2.11% | −2.02% | −2.89% |
| | KristenAndSara | −0.46% | −0.84% | 0.59% | −1.07% | −1.65% | −1.12% | −1.46% | −0.94% | 0.23% |
| Overall | Class A1 | −1.09% | −0.47% | −0.48% | −2.22% | −2.10% | −1.63% | −2.53% | −1.81% | −2.49% |
| | Class A2 | −0.48% | −0.08% | −0.12% | −1.02% | −1.41% | −1.03% | −1.44% | −1.53% | −1.13% |
| | Class B | −0.45% | −0.13% | 0.04% | −1.00% | −0.91% | −1.01% | −1.55% | −0.83% | −0.97% |
| | Class C | −0.48% | −0.05% | −0.23% | −0.77% | −0.75% | −0.75% | −1.27% | −0.91% | −0.71% |
| | Class D | −0.50% | −0.29% | −0.19% | −0.65% | −1.01% | −1.05% | −1.06% | −0.32% | −0.79% |
| | Class E | −0.47% | −0.54% | −0.18% | −1.22% | −1.46% | −1.44% | −1.88% | −1.25% | −1.60% |
| | Average | −0.56% | −0.24% | −0.18% | −1.10% | −1.20% | −1.12% | −1.57% | −1.04% | −1.20% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Park, D.; Moon, G.; Oh, B.T.; Kim, J.-G. Coarse-to-Fine Network-Based Intra Prediction in Versatile Video Coding. Sensors 2023, 23, 9452. https://doi.org/10.3390/s23239452