Junqiu Zhu (朱君秋)

junqiuzhu@UCSB.edu

Curriculum Vitae [Nov 2023]

Latest News

  • [March 2024] I will serve as a member of the SIGGRAPH Asia 2024 Papers Program Committee!

A Short Bio

I am a postdoctoral fellow at the University of California, Santa Barbara, advised by Prof. Ling-Qi Yan. I received my Ph.D. degree from Shandong University, China, in 2022, working under the supervision of Prof. Xiangxu Meng, co-supervised by Prof. Lu Wang and Prof. Yanning Xu. I was also remotely supervised by Prof. Ling-Qi Yan during my doctoral studies.


Research

My research is in Computer Graphics and mainly focuses on visual appearance modeling and rendering. I work to understand why materials in the real world look the way they do and how we can accurately and effectively model their interaction with light.


My Publications

A Practical and Hierarchical Yarn-based Shading Model for Cloth
Junqiu Zhu, Zahra Montazeri, Jean-Marie Aubry, Ling-Qi Yan, Andrea Weidlich
Computer Graphics Forum (Proceedings of Eurographics Symposium on Rendering 2023) [Paper]
EGSR 2023 Best Paper Award and EGSR 2023 Best Visual Effects Award
Realistic cloth rendering is a longstanding challenge in computer graphics due to the intricate geometry and hierarchical structure of cloth: fibers form plies, which in turn are combined into yarns, which are then woven or knitted into fabrics. Previous fiber-based models have achieved high-quality close-up rendering, but they suffer from high computational cost, which limits their practicality. In this paper, we propose a novel hierarchical model that analytically aggregates light simulation at the fiber level by building on dual-scattering theory. Based on this, we can perform an efficient simulation of ply and yarn shading. Compared to previous methods, our approach is faster and uses less memory while preserving similar accuracy, which we demonstrate through comparison with existing fiber-based shading models. Our yarn shading model can be applied to curves or surfaces, making it highly versatile for cloth shading. This duality, paired with its simplicity and flexibility, makes the model particularly useful for film and game production.


A Realistic Surface-based Cloth Rendering Model
Junqiu Zhu, Adrian Jarabo, Carlos Aliaga, Ling-Qi Yan, Matt (Jen-Yuan) Chiang
ACM SIGGRAPH 2023 (Conference Track) [Paper] [Video] [Supplementary] [Presentation]
We propose a surface-based cloth shading model that generates realistic cloth appearance with ply-level details. It generalizes previous surface-based models to a broader set of cloth, including knitted and thin woven cloth. Our model takes into account the most dominant visual features of cloth, including the anisotropic S-shaped reflection highlight, cross-shaped transmission highlights, delta transmission, and shadowing-masking. We model these elements via a comprehensive micro-scale BSDF and a meso-scale effective BSDF formulation. We then propose an implementation that leverages the Monte Carlo sampler of path tracing to reduce precomputation to the bare minimum, by evaluating the effective BSDF as a Monte Carlo estimate and encoding visibility using anisotropic spherical Gaussians. We demonstrate our model by replicating a set of woven and knitted fabrics, showing a good match with respect to captured photographs.


Practical Level-of-detail Aggregation of Fur Appearance
Junqiu Zhu, Sizhe Zhao, Lu Wang, Yanning Xu, Ling-Qi Yan
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2022) [Project Page]
Fur appearance rendering is crucial for the realism of computer-generated imagery, but it has been a challenge in computer graphics for many years. Much effort has been made to accurately simulate the multiple-scattered light transport among fur fibers, but the computational cost remains very high, since the number of fur fibers is usually extremely large. In this paper, we aim at reducing the number of fur fibers while preserving realistic fur appearance. We present an aggregated fur appearance model, using one thick cylinder to accurately describe the aggregated optical behavior of a bunch of fur fibers, including the multiple scattering of light among them. Then, to acquire the parameters of our aggregated model, we use a lightweight neural network to map an individual fur fiber's optical properties to those of our aggregated model. Finally, we come up with a practical heuristic that guides the simplification of fur dynamically at different bounces of light, leading to a practical level-of-detail rendering scheme. Our method achieves nearly the same results as the ground truth, but performs 3.8 to 13.5 times faster.


Recent Advances in Glinty Appearance Rendering
Junqiu Zhu, Sizhe Zhao, Yanning Xu, Xiangxu Meng, Lu Wang, Ling-Qi Yan
Computational Visual Media, 2022 [PDF]
The interaction between light and materials is key to physically-based realistic rendering. However, it is also complex to analyze, especially when the materials contain a large number of details and thus exhibit glinty visual effects. Recent methods for rendering glinty appearance are regarded as an important component of next-generation computer graphics. In this survey, we provide a comprehensive review of state-of-the-art research on glinty appearance rendering. We start by presenting a unified definition of glinty appearance based on microfacet theory. Then, we summarize prior work from two aspects: representation and practical rendering. Further, we implement typical methods on our unified platform and compare their performance in terms of visual effects, rendering speed, and memory consumption. Finally, we briefly discuss the limitations and future research directions. Our analyses, implementations, and comparisons provide insights for readers choosing the proper methods for engineering applications in this line of research.


Real-Time Microstructure Rendering with MIP-mapped Normal Map Samples
Haowen Tan*, Junqiu Zhu* (*: joint first authors), Xiangxu Meng, Yanning Xu, Lu Wang, Ling-Qi Yan
Computer Graphics Forum (2022) [Paper] [Video]
Normal map-based microstructure rendering methods can generate both glint and scratch appearance accurately, but the extra high-resolution normal map that defines every microfacet normal may incur high storage and computation costs. We present an example-based real-time rendering method for arbitrary microstructure materials, which also greatly reduces the required storage space. Our method takes a small normal map sample as input. We implicitly synthesize a high-resolution normal map from the sample and construct MIP-mapped position-normal 4D Gaussian lobes. Based on these MIP-mapped 4D lobes and a lookup-table (LUT) data structure for the synthesized high-resolution normal map, we present an efficient Gaussian query method to evaluate the P-NDFs (position-normal distribution functions) for shading. We can render complex scenes with both glint and scratch surfaces in real time (>30 fps) at full high-definition resolution, and the space required for each microstructure material is reduced to 30 MB.


Neural Complex Luminaires: Representation and Rendering
Junqiu Zhu, Yaoyi Bai, Zilin Xu, Steve Bako, Edgar Velázquez-Armendáriz, Lu Wang, Pradeep Sen, Miloš Hašan, Ling-Qi Yan
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2021) [PDF] [Video] [Code]
Physically-based rendering of complex luminaires, such as grand chandeliers in concert halls, can be extremely costly. The emitting sources are typically encased in complex refractive geometry, creating difficult light paths that require many samples to evaluate with Monte Carlo approaches. Previous work has attempted to speed up this process, but the methods are either inaccurate, require very large light-field storage, or do not fit well into modern path tracing frameworks. Inspired by the success of deep networks, which can model complex relationships robustly and be evaluated efficiently, we propose to use a machine learning framework to compress a complex luminaire's light field into an implicit neural representation. Our approach plugs easily into conventional renderers, as it works with the standard techniques of path tracing and multiple importance sampling (MIS). Our solution is to train three networks that perform the essential operations: evaluating the complex luminaire at a specific point and view direction, importance sampling a point on the luminaire given a shading location, and blending to determine the transparency of luminaire queries so they combine properly with other scene elements. We perform favorably relative to state-of-the-art approaches and render final images that are close to the high-sample-count reference with only a fraction of the computation and storage costs, with no need to store the original luminaire geometry and materials.


A Stationary SVBRDF Material Modeling Method Based on Discrete Microsurface
Junqiu Zhu, Yanning Xu, Lu Wang
Computer Graphics Forum (Proceedings of Pacific Graphics 2019) [PDF]
Microfacet theory is commonly used to build reflectance models for surfaces. While traditional microfacet-based models assume that the distribution of a surface's microstructure is continuous, recent studies indicate that some surfaces with tiny, discrete, and stochastic facets exhibit glittering visual effects, while some surfaces with structured features exhibit anisotropic specular reflection. Accordingly, this paper proposes an efficient and stationary surface material modeling method that processes both glittery and non-glittery surfaces in a consistent way. Our method comprises two steps. In the preprocessing step, we take a fixed-size sample normal map as input, organize 4D microfacet trees in position and normal space for arbitrary-sized surfaces, and cluster microfacets into 4D K-lobes via an adaptive k-means method. In the rendering step, surface normals can be efficiently evaluated using the pre-clustered microfacets. Our method can efficiently render any structured, discrete, or continuous microsurface using a precisely reconstructed surface NDF, and it is both faster and more memory-efficient than state-of-the-art glittery surface modeling methods.