- research-article, December 2024
Cafca: High-quality Novel View Synthesis of Expressive Faces from Casual Few-shot Captures
- Marcel C. Buehler,
- Gengyan Li,
- Erroll Wood,
- Leonhard Helminger,
- Xu Chen,
- Tanmay Shah,
- Daoye Wang,
- Stephan Garbin,
- Sergio Orts-Escolano,
- Otmar Hilliges,
- Dmitry Lagun,
- Jérémy Riviere,
- Paulo Gotardo,
- Thabo Beeler,
- Abhimitra Meka,
- Kripasindhu Sarkar
SA '24: SIGGRAPH Asia 2024 Conference Papers, Article No.: 29, Pages 1–12. https://doi.org/10.1145/3680528.3687580
Volumetric modeling and neural radiance field representations have revolutionized 3D face capture and photorealistic novel view synthesis. However, these methods often require hundreds of multi-view input images and are thus inapplicable to cases with ...
- research-article, November 2024
Efficient neural implicit representation for 3D human reconstruction
Pattern Recognition (PATT), Volume 156, Issue C. https://doi.org/10.1016/j.patcog.2024.110758
Abstract: High-fidelity digital human representations are increasingly in demand in the digital world, particularly for interactive telepresence, AR/VR, 3D graphics, and the rapidly evolving metaverse. Even though they work well in small spaces, ...
Highlights:
- Generates high-quality, animatable human avatars from single-camera video in minutes.
- Pre-trained model: Enhances human estimation in dynamic NeRF for better avatars.
- Efficient rendering: Replaces standard NeRF with a SoTA model ...
- Article, October 2024
NeuroNCAP: Photorealistic Closed-Loop Safety Testing for Autonomous Driving
- William Ljungbergh,
- Adam Tonderski,
- Joakim Johnander,
- Holger Caesar,
- Kalle Åström,
- Michael Felsberg,
- Christoffer Petersson
Computer Vision – ECCV 2024, Pages 161–177. https://doi.org/10.1007/978-3-031-73404-5_10
Abstract: We present a versatile NeRF-based simulator for testing autonomous driving (AD) software systems, designed with a focus on sensor-realistic closed-loop evaluation and the creation of safety-critical scenarios. The simulator learns from sequences ...
- Article, October 2024
Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing
Computer Vision – ECCV 2024, Pages 37–53. https://doi.org/10.1007/978-3-031-72698-9_3
Abstract: 3D Gaussian splatting, emerging as a groundbreaking approach, has drawn increasing attention for its capabilities of high-fidelity reconstruction and real-time rendering. However, it couples the appearance and geometry of the scene within the ...
- Article, October 2024
CaesarNeRF: Calibrated Semantic Representation for Few-Shot Generalizable Neural Rendering
Computer Vision – ECCV 2024, Pages 71–89. https://doi.org/10.1007/978-3-031-72658-3_5
Abstract: Generalizability and few-shot learning are key challenges in Neural Radiance Fields (NeRF), often due to the lack of a holistic understanding in pixel-level rendering. We introduce CaesarNeRF, an end-to-end approach that leverages scene-level CA...
- research-article, July 2024
VRMM: A Volumetric Relightable Morphable Head Model
SIGGRAPH '24: ACM SIGGRAPH 2024 Conference Papers, Article No.: 46, Pages 1–11. https://doi.org/10.1145/3641519.3657406
In this paper, we introduce the Volumetric Relightable Morphable Model (VRMM), a novel volumetric and parametric facial prior for 3D face modeling. While recent volumetric prior models offer improvements over traditional methods like 3D Morphable Models (...
- research-article, July 2024
Conditional visibility aware view synthesis via parallel light fields
Neurocomputing (NEUROC), Volume 588, Issue C. https://doi.org/10.1016/j.neucom.2024.127644
Abstract: In the area of neural rendering-based novel view synthesis, illumination is important, since shadows cast by objects under various light sources provide indications about their geometries and materials. However, due to high physical device ...
- research-article, July 2024
EMOVA: Emotion-driven neural volumetric avatar
Image and Vision Computing (IAVC), Volume 146, Issue C. https://doi.org/10.1016/j.imavis.2024.105043
Abstract: 3D facial reconstruction is essential for metaverse applications. Traditional mesh-based methods have difficulty rendering photorealistic faces and complex objects. Recent advancements in Neural Radiance Fields (NeRFs) have excelled in ...
Highlights:
- A novel network, EMOVA, which uses two emotional stimuli from images and the corresponding voice to learn subtle differences in facial expression.
- An attention-based fusion method to efficiently combine the information of images and ...
- research-article, July 2024
Real-time volume rendering with octree-based implicit surface representation
Computer Aided Geometric Design (CAGD), Volume 111, Issue C. https://doi.org/10.1016/j.cagd.2024.102322
Abstract: Recent breakthroughs in neural radiance fields have significantly advanced the field of novel view synthesis and 3D reconstruction from multi-view images. However, the prevalent neural volume rendering techniques often suffer from long rendering ...
Highlights:
- Proposing a novel octree-based method to reconstruct implicit surfaces from multi-view images.
- Enabling real-time rendering while maintaining reconstruction quality comparable to existing techniques.
- Enabling converting slow-...
- research-article, May 2024
Point cloud enhancement optimization and high-fidelity texture reconstruction methods for air material via fusion of 3D scanning and neural rendering
Expert Systems with Applications: An International Journal (EXWA), Volume 242, Issue C. https://doi.org/10.1016/j.eswa.2023.122736
Abstract: In order to realize the digital management and manufacturing of air material, the methods of 3D point cloud enhancement optimization and high-fidelity texture reconstruction in entity digitization of air material based on the fusion of 3D ...
- research-article, November 2024
ProLiF: Progressively-connected Light Field network for efficient view synthesis
Computers and Graphics (CGRS), Volume 120, Issue C. https://doi.org/10.1016/j.cag.2024.103913
Abstract: This paper presents a simple yet practical network architecture, ProLiF (Progressively-connected Light Field network), for the efficient differentiable view synthesis of complex forward-facing scenes in both the training and inference stages. The ...
Highlights:
- Introduction of ProLiF, a simple and efficient network architecture for differentiable view synthesis.
- Development of a progressive training strategy with novel regularization losses to ensure multi-view 3D consistency.
- ...
- research-article, May 2024
BTD-RF: 3D scene reconstruction using block-term tensor decomposition
Applied Intelligence (KLU-APIN), Volume 54, Issue 8, Pages 6319–6332. https://doi.org/10.1007/s10489-024-05476-0
Abstract: The Neural Radiance Field (NeRF) exhibits excellent performance for view synthesis tasks, but it requires a large amount of memory and model parameters during three-dimensional (3D) scene reconstruction. This paper proposes a block-term tensor ...
- research-article, March 2024
Unsupervised Point Cloud Representation Learning by Clustering and Neural Rendering
International Journal of Computer Vision (IJCV), Volume 132, Issue 8, Pages 3251–3269. https://doi.org/10.1007/s11263-024-02027-5
Abstract: Data augmentation has contributed to the rapid advancement of unsupervised learning on 3D point clouds. However, we argue that data augmentation is not ideal, as it requires a careful application-dependent selection of the types of augmentations ...
- research-article, December 2023
LiveNVS: Neural View Synthesis on Live RGB-D Streams
SA '23: SIGGRAPH Asia 2023 Conference Papers, Article No.: 37, Pages 1–11. https://doi.org/10.1145/3610548.3618213
Existing real-time RGB-D reconstruction approaches, like Kinect Fusion, lack real-time photo-realistic visualization. This is due to noisy, oversmoothed, or incomplete geometry and blurry textures, which are fused from imperfect depth maps and camera ...
- research-article, December 2023
Towards Practical Capture of High-Fidelity Relightable Avatars
- Haotian Yang,
- Mingwu Zheng,
- Wanquan Feng,
- Haibin Huang,
- Yu-Kun Lai,
- Pengfei Wan,
- Zhongyuan Wang,
- Chongyang Ma
SA '23: SIGGRAPH Asia 2023 Conference Papers, Article No.: 23, Pages 1–11. https://doi.org/10.1145/3610548.3618138
In this paper, we propose a novel framework, Tracking-free Relightable Avatar (TRAvatar), for capturing and reconstructing high-fidelity 3D avatars. Compared to previous methods, TRAvatar works in a more practical and efficient setting. Specifically, ...
- research-article, November 2023
Portrait Expression Editing With Mobile Photo Sequence
SA '23: SIGGRAPH Asia 2023 Technical Communications, Article No.: 17, Pages 1–4. https://doi.org/10.1145/3610543.3626160
Mobile cameras have revolutionized content creation, allowing casual users to capture professional-looking photos. However, capturing the perfect moment can still be challenging, making post-capture editing desirable. In this work, we introduce ExShot, ...
- research-article, March 2024
GsNeRF: Fast novel view synthesis of dynamic radiance fields
Computers and Graphics (CGRS), Volume 116, Issue C, Pages 491–499. https://doi.org/10.1016/j.cag.2023.10.002
Abstract: Synthesis of new views in dynamic 3D scenes is a challenging task in 3D vision. However, most current approaches rely on radiance fields built upon multi-view camera systems for dynamic scenes, which are expensive and time-consuming, and depend ...
Highlights:
- We propose a model to represent dynamic scenes, which achieves high rendering quality on both synthetic and real datasets.
- We apply a special tensor decomposition technique to accelerate the training process and reduce the space ...
- research-article, March 2024
Neural 3D face rendering conditioned on 2D appearance via GAN disentanglement method
Computers and Graphics (CGRS), Volume 116, Issue C, Pages 336–344. https://doi.org/10.1016/j.cag.2023.08.008
Abstract: Previewing the shaded output of 3D models has been a long-standing requirement in the field. Typically, this is achieved by applying common materials; however, this approach is often labor-intensive and can yield only rough results in the trial ...
Highlights:
- Rendering 2D images from 3D face meshes with a single 2D reference image.
- Geometry and appearance disentanglement through mesh explicit representation.
- Injecting 3D information into the StyleGAN2 generator for superior geometry ...
- research-article, October 2023
Unsupervised learning of style-aware facial animation from real acting performances
Graphical Models (GMOE), Volume 129, Issue C. https://doi.org/10.1016/j.gmod.2023.101199
Abstract: This paper presents a novel approach for text/speech-driven animation of a photo-realistic head model based on blend-shape geometry, dynamic textures, and neural rendering. Training a VAE for geometry and texture yields a parametric ...
- research-article, August 2023
NeLT: Object-Oriented Neural Light Transfer
ACM Transactions on Graphics (TOG), Volume 42, Issue 5, Article No.: 163, Pages 1–16. https://doi.org/10.1145/3596491
This article presents object-oriented neural light transfer (NeLT), a novel neural representation of the dynamic light transportation between an object and the environment. Our method disentangles the global illumination of a scene into individual objects’ ...